This unique volume presents the scientific achievements, significant discoveries and pioneering contributions of various academicians, industrialists and research scholars. The book is an essential source of reference and provides a comprehensive overview of the authors' work in the fields of mathematics, statistics and computer science.
Sample Chapter(s)
Databased Intrinsic Weights of Indicators of Multi-Indicator Systems and Performance Measures of Multivariate Rankings of Systemic Objects (1,254 KB)
https://doi.org/10.1142/9789814704830_fmatter
The following sections are included:
https://doi.org/10.1142/9789814704830_0001
In this paper, we discuss concepts, methods, and tools of partial-order-based multivariate ranking of objects, leading to novel measures of the performance of ranking methods for given data sets/data matrices of objects and features (indicators). We also develop novel intrinsic differential weights of the relative importance of indicators, with implications for their prioritization and subsequent selection status. We provide illustrative examples using a 25×3 data matrix with 25 objects and 3 indicators, giving intrinsic relative weights of the indicators that indicate their databased relative importance. Further, we derive the rankings of the objects using different ranking methods by constructing multi-indicator object rank scores, given by the weighted composite index, the comparability-weighted net superiority index, the MCMC-based weighted indicator cumulative rank frequency distribution index, and the MCMC-based average rank index. Finally, the ranking performance measures of these ranking methods are computed for the illustrative data matrices/data sets. We conclude the paper with selected references and an extended bibliography.
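As a hedged illustration of the simplest of these rank scores, the sketch below computes a weighted composite index for a 25×3 data matrix and ranks the objects by it. The min-max normalization and the weight vector are assumptions made for the example; the chapter's intrinsic databased weights would replace the illustrative `w` here.

```python
import numpy as np

def composite_index_ranking(X, w):
    """Rank objects by a weighted composite index.

    X : (n_objects, n_indicators) data matrix, larger = better.
    w : indicator weights, assumed non-negative and summing to 1.
    """
    # Min-max normalize each indicator to [0, 1] so scales are comparable.
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    scores = Xn @ w                      # weighted composite index per object
    order = np.argsort(-scores)          # best object first
    return scores, order

rng = np.random.default_rng(0)
X = rng.random((25, 3))                  # 25 objects, 3 indicators, as in the chapter's example
w = np.array([0.5, 0.3, 0.2])            # illustrative weights, not the chapter's intrinsic weights
scores, order = composite_index_ranking(X, w)
print("top five objects:", order[:5])
```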
https://doi.org/10.1142/9789814704830_0002
SuDoKu is a popular combinatorial puzzle. In recent years, there has been growing interest in applications of SuDoKu-based experimental designs. We review the applications and bring out a few finer points of data analysis, but we skip the combinatorial aspect. SuDoKu was first interpreted as a statistical experimental design by Subramani and Ponnuswamy (2009), and a correct ANOVA-based data analysis was provided by Saba and Sinha (2014), who accounted for non-orthogonality between some components of variation. Variants of SuDoKu that achieve additional orthogonality, and hence change the statistical analyses, are found in Sarkar and Sinha (2014). We also extend these finer points of data analysis to designs based on mutually orthogonal SuDoKu Latin squares.
https://doi.org/10.1142/9789814704830_0003
Compliance with legislation and an inclination towards sustainability have been major drivers for electronics manufacturers to implement end-of-life (EOL) and end-of-use (EOU) take-back models. Designing an efficient product recovery network entails key strategic decisions such as locating facilities with suitable capacities and determining an efficient distribution network for the reverse flow. The process of evaluating and selecting a suitable location for managing recovery processes must take into account the opinions of all stakeholders, owing to its major economic, environmental, and social impact. Multi-criteria decision making (MCDM) techniques are therefore required to deal with the numerous conflicting tangible and intangible attributes. The paper aims at developing an MCDM model for an electronics manufacturing company seeking to sustainably manage its recovery processes for EOL and EOU products. It determines the optimal location of a recovery facility center (RFC) and optimal collection routes. For this purpose, the Decision Making Trial and Evaluation Laboratory (DEMATEL) method is utilized to determine the interdependencies among the various conflicting criteria considered for the evaluation of alternative locations. The Analytic Network Process (ANP) is then applied to generate the relative importance of the locations. Location, capacity, and routing decisions are further incorporated by developing a mixed integer linear programming model under a fuzzy environment, which determines the optimum capacity of the RFC to be opened with the adoption of the best technology, as well as optimal transportation routes with optimal selection of vehicles. The mathematical model performs a trade-off between the economic and environmental performance (in terms of carbon emissions) of the proposed reverse logistics (RL) network.
https://doi.org/10.1142/9789814704830_0004
Effective inventory management and distribution have become critical to industries such as manufacturing, retailing, transportation, health, and services, as they strive to improve their customer service, cash flow, and profit margins while meeting the challenges of global competition, product proliferation, shorter life cycles, and demand uncertainty. Environmental awareness throughout supply chains is also growing, due to the regulatory policies legislated by governments and increasing pressure from voluntary organizations. As a result, supply chain partners need to analyze operations such as inventory control, freight transportation, and warehousing activities to achieve an eco-efficient supply chain. At the core of inventory management is stocking control, which ensures that the right amount of stock is available to support the company's targeted fill rate in the market at minimum cost. Companies must determine and manage specific service levels so that customers across the supply chain are served in time; otherwise, stock-outs quickly translate into lost sales. Transportation activities play a vital role in the effective management of a supply chain. In addition to being a major contributor to the accomplishment of service levels, transportation has an immense impact on the overall carbon footprint of the supply chain. But finding the optimal balance among these factors is not easy, especially given the vast global market size. In this paper, an integrated inventory control and transportation model is proposed to obtain optimal stock keeping units (SKUs) and safety stock for each product and each location at minimum cost for the next planning horizon, with environmental considerations to reduce the overall carbon footprint. Further, at the end of each period, the current solution is put to the test to evaluate possible deviations from the previously fixed target, and a modified solution is obtained, if required, for continual improvement in supply chain system design. The model has been validated through a case study.
https://doi.org/10.1142/9789814704830_0005
The IT industry has quickened the pace of technology development. The initial focus, which was on the development and maintenance of hardware, has shifted towards embedded systems (a combination of hardware and software) such as routers, LCDs, automated water boilers, mobile phones, and satellites for direct telecast on televisions. With increasing sophistication in technology, reliability issues pertaining to both software and hardware have risen simultaneously. To render embedded systems fault tolerant, various redundancy techniques can be applied. Numerous models for the optimization of the reliability of embedded systems have been proposed, based on different fault tolerance techniques such as the NVP and RB schemes, but none is rooted in the "build-or-buy" decision. Software components can be procured as COTS from a vendor or built in-house. This decision is an important part of the development process, in which either choice can be valid depending on the situation. The proposed models are based on the RB/1/1 fault tolerance technique, which allows a subsystem to tolerate one hardware and one software fault. The first optimization model addresses the selection of hardware and software components for an embedded system, incorporating a build-or-buy strategy for the software components, such that overall system reliability is maximized while the overall cost is simultaneously minimized. Besides procuring a software component as COTS or developing it in-house, a pre-existing component can be made reusable after fabrication. The extended model incorporates a reuse-build-buy decision for software components.
https://doi.org/10.1142/9789814704830_0006
The present work seeks the solution of a problem based on two-temperature thermoelasticity, with one relaxation parameter as well as with two relaxation parameters in a unified way, for an annular cylinder whose inner and outer surfaces are assumed to be stress free and are subjected to ramp-type heating. The eigenvalue method together with the Laplace transform is employed to derive the solution of the problem. Numerical values of physical quantities are computed for a suitable material, results are displayed graphically to show the distributions of the fields inside the medium, and comparisons are made with previous theories.
https://doi.org/10.1142/9789814704830_0007
Television advertising, being one of the biggest sources of revenue for any television network, is widely practiced across the globe. The shows aired on a television network have time designated for commercial advertising in the breaks between them, and also have varied sets of audiences with varied TV rating points. Advertisers wanting to place their advertisements on a television network have certain non-negotiable requirements and preferences. While fulfilling the advertisers' requisitions, a TV network's problem is to maximize the revenue generated from allocating advertisements to the commercial breaks of shows appearing on its network. In this paper, we develop a multi-objective mathematical programming model to determine the optimal placement of advertisements of multiple products of different advertisers, within the breaks of shows broadcast in different day-parts on a television channel, over a planning horizon. The objectives considered are maximization of the network's revenue and minimization of the penalty cost from product conflict violations. The model also takes into consideration, as constraints, the advertisers' requirements for non-repetition of the same advertisement in a break, bounds on the frequency of airings of an advertisement, preferences for locations in different day-parts, and the minimum proportion specified for the appearance of advertisements in different day-parts. The model is solved through a goal programming approach to attain an optimal trade-off between the two conflicting objectives and obtain a compromise optimal solution. A case study of a television channel is presented to validate the model and is solved through the optimization software LINGO.
https://doi.org/10.1142/9789814704830_0008
Three-parameter Weibull distributions do not satisfy the usual regularity conditions for maximum likelihood estimation of parameters. Moreover, as the threshold parameter becomes very close to the smallest observation, the log-likelihood function becomes unbounded. Further, a shape parameter with a value less than unity makes the density J-shaped, which precludes the availability of a consistent local maximum. The presence of censoring increases the complexity further. The present paper performs maximum likelihood estimation for censored Weibull data through Differential Evolution, an evolutionary computation method that does not require differentiability of the likelihood function. It successfully obtains maximum likelihood estimates and their precision for simulated data sets.
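A minimal sketch of the approach, assuming type-I right censoring and SciPy's built-in Differential Evolution optimizer (the chapter's own DE settings and data are not reproduced here): the negative censored log-likelihood of the three-parameter Weibull is minimized over box bounds that keep the threshold below the smallest observation.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import weibull_min

# Simulated three-parameter Weibull data with type-I right censoring at c.
n, k_true, lam_true, gamma_true = 200, 1.5, 2.0, 1.0
x = weibull_min.rvs(k_true, loc=gamma_true, scale=lam_true, size=n, random_state=1)
c = np.quantile(x, 0.8)                 # fixed censoring point (assumption)
delta = (x <= c).astype(float)          # 1 = fully observed, 0 = censored at c
x = np.minimum(x, c)

def neg_loglik(theta):
    k, lam, gamma = theta
    if gamma >= x.min():                # threshold must lie below every observation
        return 1e10
    z = (x - gamma) / lam
    logf = np.log(k / lam) + (k - 1.0) * np.log(z) - z**k   # log density
    logS = -z**k                                            # log survival function
    return -np.sum(delta * logf + (1.0 - delta) * logS)

bounds = [(0.1, 10.0), (0.1, 10.0), (0.0, float(x.min()) - 1e-6)]
res = differential_evolution(neg_loglik, bounds, seed=2, tol=1e-8)
print("MLE (shape, scale, threshold):", np.round(res.x, 3))
```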
https://doi.org/10.1142/9789814704830_0009
Ranked set sampling is a cost-effective sampling technique that has been used effectively in many real-life situations. In this paper, we have made an attempt to explore its applications for improving statistical quality control techniques, which are usually based on simple random sampling. An illustration is given to show its applications in statistical quality control techniques.
https://doi.org/10.1142/9789814704830_0010
In this article, an approximate solution for Duffing equations with cubic and quintic nonlinearities is obtained using the Differential Transform Method and the Padé approximation technique. The concept of the Differential Transform Method is briefly introduced and applied to the given problem to derive the solution of the nonlinear equation. A major concern of this paper is the successful use of Adomian polynomials to assess the nonlinearities. The results are compared with the numerical solution by the fourth-order Runge–Kutta method and found to be in good agreement. Results are shown by graphs, accurately, for the whole range of the time domain.
https://doi.org/10.1142/9789814704830_0011
Cross-efficiency evaluation, regarded as a Data Envelopment Analysis (DEA) extension tool, is an effective approach for ranking Decision Making Units (DMUs) that consume multiple inputs and produce multiple outputs. In addition to the self-evaluation assessment of traditional DEA models, a peer-evaluation assessment is carried out, wherein the efficiency of the target DMU is evaluated using the weights determined by the other peer DMUs. The classical cross-efficiency DEA models, using crisp data, cannot effectively deal with real-world problems involving vagueness and/or uncertainty. This paper introduces a novel fuzzy DEA cross-efficiency model which evaluates performance by constructing two virtual DMUs, viz. the ideal DMU and the anti-ideal DMU, having fuzzy parameters. An ideal DMU is constructed to use minimum inputs to yield maximum outputs, whereas an anti-ideal DMU uses maximum inputs to produce minimum outputs. These virtual DMUs serve as reference points for the performance evaluation and ranking of the considered DMUs. A numerical illustration demonstrating the applicability of the proposed model is also presented.
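For readers unfamiliar with the underlying machinery, the sketch below solves the crisp, input-oriented CCR multiplier program with SciPy's linear programming routine. It is the classical self-evaluation baseline that cross-efficiency and the chapter's fuzzy ideal/anti-ideal construction extend; the toy data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).

    X : (n, m) inputs, Y : (n, s) outputs; all entries positive.
    Variables are stacked as [u (output weights), v (input weights)].
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u . y_o
    A_ub = np.hstack([Y, -X])                         # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v . x_o = 1 (normalization)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (s + m))    # epsilon-positive weights
    return -res.fun

X = np.array([[2., 3.], [4., 2.], [3., 5.], [6., 1.]])   # toy inputs
Y = np.array([[5., 4.], [6., 3.], [4., 6.], [7., 2.]])   # toy outputs
for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```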
https://doi.org/10.1142/9789814704830_0012
The paper discusses the changing concept of poverty in general and points out the limitations of the poverty line generally used for identifying the poor. Instead of the traditional rule based on the "income" and "expenditure" approach, a new approach is discussed, which appears more relevant in the present scenario. Using scan statistic methods, the most likely and secondary clusters of districts are identified with the SaTScan software.
https://doi.org/10.1142/9789814704830_0013
Data Envelopment Analysis (DEA) has been recognized, over recent years, as a valuable analytical research tool for the performance evaluation of several similar entities engaged in different activities. Characterized by single or multiple inputs and outputs, this technique distinguishes between efficient and inefficient units, thereby forming an efficient frontier. It measures the level of efficiency of non-frontier units and identifies benchmarks against which such inefficient units can be compared. In the classical DEA approach, an optimization model is formulated and solved to evaluate the efficiency score of each Decision Making Unit (DMU) separately. The joint optimization DEA model presented in this paper extends the performance measurement DEA technique by evaluating the performance of all DMUs simultaneously. An interactive method is designed which considers the gap between the target and achieved values of the inputs and outputs of the DMUs and provides the decision maker with an appropriate framework to choose the most preferred solution.
https://doi.org/10.1142/9789814704830_0014
An effort has been made to develop a stochastic model for a repairable system of two non-identical units working in different weather conditions. The system starts operation with the original unit (called the main unit), while the other, substandard unit (called the duplicate unit) is kept as a spare in cold standby. Repair activities are handled by a single server immediately, on a need basis. The operation and repair of the units are not allowed in abnormal weather. The system has been analyzed at different epochs by adopting the semi-Markov process and the regenerative point technique. The distributions of the failure times of the units and of changes in weather conditions follow the negative exponential, while the repair times of the units are taken as arbitrary, with different probability density functions. Expressions for some important measures of system effectiveness are derived in the steady state. The behavior of some reliability measures has been observed for arbitrary values of various parameters and costs.
https://doi.org/10.1142/9789814704830_0015
The recent surge of tuberculosis (TB) cases in India, and in its seven North-Eastern states in particular, has introduced renewed interest in the estimation of TB risk surfaces in the neighborhood and the identification of units having elevated risk. The present paper maps the TB risk surfaces on the basis of district-level incidence rates for the seven neighboring North-Eastern states of India. The risk surface is represented with a set of random effects through a Bayesian hierarchical approach. In the present case, the random effects are modeled through a conditional autoregressive (CAR) prior exhibiting a single level of spatial smoothness. An attempt has also been made to identify risk boundaries having elevated risk through a localized spatial structure, by modeling the weighted contiguity matrix for geographically adjacent areas as binary random quantities.
https://doi.org/10.1142/9789814704830_0016
Advertising a product is imperative for a firm's sustenance in a competitive market. It is performed through various media vehicles, of which television, with its large audience reach, has always attracted firms to advertise their products. In this medium, an advertiser has multiple options available for the placement of its commercial in terms of different categories of channels, time slots of different telecast programs, and the number of breaks within a program. The problem faced by an advertiser is to select the appropriate combination among these multiple options for the placement of its commercial. In this paper, we formulate a multiple objective programming problem for an advertiser carrying out an advertising campaign through different categories of television channels. The model optimally places an advertiser's product in breaks within multiple programs scheduled on different day-parts of television channels over a planning horizon, minimizing the cost incurred and maximizing the reach of the advertisement for its product. With these two objectives, the model is constrained by the diversification of the budget among the channels, bounds on the frequency of the commercial in each channel's time slots, and the preferences of the advertiser for the allocation of the advertisement. The trade-off between the two objectives is achieved by solving the model through the goal programming technique. A case study of a firm advertising a product on multiple television channels is presented to validate the model and is solved using the optimization software LINGO.
https://doi.org/10.1142/9789814704830_0017
Some pioneering works on methods of fitting functional relationships, which are often used in estimating linear regression models, are discussed. This fundamental problem in the theory of errors has drawn special attention from leading mathematicians and scientists, in particular since the eighteenth century. We discuss some current investigations, based on analysis of Lp-norm regression, which raise questions about the appropriateness and applications of least-squares-based regressions in making predictions.
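As a small hedged illustration of why the choice of norm matters (not taken from the chapter itself): fitting a straight line by minimizing the Lp residual norm for p = 1 versus p = 2 on data with a single gross outlier shows the L1 fit staying close to the bulk of the data, while the least-squares fit is pulled towards the outlier.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)
y[35] += 8.0                                  # one gross outlier

def lp_loss(beta, p):
    """Sum of absolute residuals raised to the p-th power for a line fit."""
    a, b = beta
    return np.sum(np.abs(y - (a + b * x)) ** p)

for p in (1.0, 2.0):
    fit = minimize(lp_loss, x0=[0.0, 0.0], args=(p,), method="Nelder-Mead")
    print(f"L{p:g} fit: intercept={fit.x[0]:.3f}, slope={fit.x[1]:.3f}")
```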
https://doi.org/10.1142/9789814704830_0018
In the present study, an attempt has been made to evaluate the impact of family planning on fertility in Jharkhand State through the Prevalence Model. If the prevalence levels of both programme and non-programme contraception are known, this technique permits the estimation of gross natural and potential fertility for assessing births averted. With the emergence of the National Family Health Survey (NFHS: 2005-06) to monitor family planning and health activities, this method becomes a useful tool. Of special interest is the ability of the procedure to yield estimates by age group as well as by type of contraceptive method used. In the study, standard method-specific use-effectiveness levels have been applied to weight the observed use and prevalence levels by method. Of the total births averted in Jharkhand State by programme contraception in 2005–06, 80 per cent were averted by sterilization users, while spacing-method users contributed about 20 per cent of the birth prevention. The spacing methods need to be strengthened for greater use. With regard to the births averted by non-programme contraception/natural methods, the main contribution, of about 45 per cent, was made by users of the rhythm method, followed by users of withdrawal at 43 per cent and other methods at 13 per cent. Of the total births averted in Jharkhand State in 2005–06, the contributions of programme contraception and of non-programme contraception/natural methods are about 85 per cent and 15 per cent, respectively. Programme contraception has the dominant role in controlling fertility; however, the use of non-programme contraception/natural methods should also be enhanced in places where the accessibility of programme contraception is poor.
https://doi.org/10.1142/9789814704830_0019
This paper demonstrates the application of spatial statistical tools for effective governance by way of designing an epidemiologically guided public healthcare system for eradication of poliomyelitis from India. Spatial scan statistic tools detect high priority areas or ‘hotspots’ for focused management response to interrupt the transmission of wild poliovirus in Bihar, India. This paper highlights probable lacunae in the existing acute flaccid paralysis surveillance as well as suggests a modified surveillance mechanism while detecting ‘hotspot’ districts in Bihar for effective management intervention for interrupting the transmission of wild poliovirus in the state.
https://doi.org/10.1142/9789814704830_0020
This paper deals with the stochastic modeling and analysis of the bloom caster system of the continuous casting shop of the Bhilai Steel Plant. The system consists of two tundishes, along with one mold with a rolling belt and a gas cutter each. Initially, one tundish is in preparation while the other is kept in cold standby. Once the functioning of all the required units has started, it continues until the completion of the cycle, after which the whole system is sent for scheduled maintenance. The blooms obtained are cut into the required sizes with the help of gas cutters and are sent to the bloom storage yard. The failure time distributions of all units are taken to be negative exponential, whereas the repair and maintenance time distributions are taken to be arbitrary. Using the regenerative point technique, several system characteristics, such as the mean time to system failure (MTSF), availability, and the busy period of the repairman, which are useful to system managers and engineers, are evaluated. Finally, some graphs are plotted in order to highlight the important results.
https://doi.org/10.1142/9789814704830_0021
A two-parameter generalized exponential-Lindley distribution, of which the exponential distribution and the Lindley (1958) distribution are particular cases, has been introduced. Its moments, failure rate function, and mean residual life function have been discussed. The maximum likelihood method and the method of moments have been discussed for estimating its parameters. The distribution has been fitted to some data sets to test its goodness of fit.
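For reference (a standard fact, not the chapter's new result), the one-parameter Lindley density that the proposed family nests as a particular case, alongside the exponential density \( \theta e^{-\theta x} \), is

```latex
f(x;\theta) \;=\; \frac{\theta^{2}}{1+\theta}\,(1+x)\,e^{-\theta x},
\qquad x > 0,\; \theta > 0,
```

which is a two-component mixture of an exponential(\(\theta\)) density and a gamma(2, \(\theta\)) density with mixing weights \(\theta/(1+\theta)\) and \(1/(1+\theta)\).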
https://doi.org/10.1142/9789814704830_0022
For implementing the programs and policies of the urban planning department, we need to know the current populations of the urban areas of interest. Many times the required information is not easily available. In view of such scenarios, the paper deals with the estimation of population totals based on ranked set sampling (RSS), a relatively new sampling method that uses only ranking information about the randomly selected sampling units with respect to the characteristic of interest, without using their exact measurements. The estimator is more efficient than the corresponding estimator based on a simple random sample of the same size. Two illustrations based on reported data sets are given. The findings are expected to be of much help to policy and decision makers, and also to those who look for a cost-effective sampling method to estimate urban populations.
https://doi.org/10.1142/9789814704830_0023
The analysis of rainfall data strongly depends on its distribution pattern. Establishing a probability distribution that provides a good fit to rainfall has long been a topic of interest in meteorology. In this study, three major statistical distributions are fitted to daily precipitation data for nine meteorological subdivisions of the NWI region, namely Punjab, Haryana, Chandigarh & Delhi, Himachal Pradesh, Uttarakhand, East and West Uttar Pradesh, Gujarat, East and West Rajasthan, and Jammu and Kashmir, for the period 1982 to 2013. These distributions are the log-normal, gamma, and Weibull distributions. To study the goodness of fit, the Chi-square test is applied to the DJF data, along with the individual December, January, and February data, for the 91 days. The Chi-square test shows that, among these three distributions, the log-normal distribution is the most suitable for daily precipitation in the NWI region.
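A hedged sketch of this kind of analysis with SciPy, on synthetic stand-in data rather than the chapter's NWI precipitation records: each candidate distribution is fitted by maximum likelihood (location fixed at zero, an assumption) and compared with a binned Chi-square statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rain = stats.gamma.rvs(a=0.9, scale=8.0, size=500, random_state=3)  # stand-in for daily rainfall

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
edges = np.quantile(rain, np.linspace(0, 1, 11))   # 10 equal-count bins
obs, _ = np.histogram(rain, bins=edges)

for name, dist in candidates.items():
    params = dist.fit(rain, floc=0)                # fix location at 0 for rainfall amounts
    cdf = dist.cdf(edges, *params)
    exp = len(rain) * np.diff(cdf)                 # expected counts per bin
    chi2 = np.sum((obs - exp) ** 2 / exp)
    dof = len(obs) - 1 - 2                         # bins - 1 - two fitted parameters
    print(f"{name:9s} chi2 = {chi2:7.2f}, dof = {dof}")
```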
https://doi.org/10.1142/9789814704830_0024
In this paper, strategies based on linear systematic sampling (lss) and circular systematic sampling (css) procedures, for the case when the population size is not a multiple of the sample size, have been compared under certain models. It is found that the usual sample mean in lss can be employed in preference to the other competing known estimators in lss and css. A method of lss, which can be looked upon as an extension of centered systematic sampling, has been suggested for the purpose of eliminating the effect of a linear trend when the population size is not a multiple of the sample size, and it has been shown that, under a linear trend model, this method gives better results than the one due to Bellhouse and Rao [3], who extended Yates' end corrections to css. An empirical investigation based on two real populations confirms that a conventional strategy involving lss and the usual estimator based thereon is superior to two other well-known strategies when the population units are arranged in ascending order of a related variable.
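To fix ideas, here is a minimal sketch, using the standard textbook definitions rather than the chapter's new lss variant, of how linear and circular systematic samples are drawn when N is not a multiple of n:

```python
import numpy as np

def lss(N, n, seed=None):
    """Linear systematic sample: random start r in [0, k), then every k-th unit."""
    k = N // n                                   # N need not be a multiple of n
    r = np.random.default_rng(seed).integers(k)
    return np.arange(r, N, k)[:n]

def css(N, n, seed=None):
    """Circular systematic sample (Lahiri): valid when N is not a multiple of n."""
    k = round(N / n)                             # interval rounded to nearest integer
    r = np.random.default_rng(seed).integers(N)  # random start anywhere in 0..N-1
    return (r + k * np.arange(n)) % N            # wrap around the list of units

print("lss:", lss(23, 5, seed=0))
print("css:", css(23, 5, seed=0))
```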
https://doi.org/10.1142/9789814704830_0025
This paper considers the mean square error decomposition model for the estimation of the population mean, and the problem of estimating the measurement variance under two-stage sampling. The measurement technique considered here is that of employing personal interviewers for data collection. The sample mean has been defined taking care of interviewer effects, and the measurement variance associated with the estimator has been derived, with an effort toward estimating the measurement variance.
https://doi.org/10.1142/9789814704830_0026
In this paper, we present various developments that took place after the seminal work of Dr. N. Karmarkar of Bell Laboratories and indicate several directions in which future progress can be achieved. Finally, we present various consequences of the interior-point revolution in different disciplines of applied mathematics.
https://doi.org/10.1142/9789814704830_0027
The present work proposes combined exponential ratio-product type estimators of the study variable for estimating the population mean under non-response, using auxiliary information in a stratified random sampling setup. The mean squared error (M.S.E.) and the minimum M.S.E. of the suggested estimators are derived under proportional and Neyman allocation. An empirical study illustrates the relative efficiency performance of the constructed estimators.
https://doi.org/10.1142/9789814704830_0028
In this article, we illustrate an analytical method, namely the Homotopy Analysis Transform Method (HATM), which is a combination of the Homotopy Analysis Method (HAM) and the Laplace Decomposition Method (LDM). This scheme is simple to apply to linear and nonlinear fractional differential equations and requires less computational work than other existing methods. The most useful advantage of this method is that it solves fractional nonlinear differential equations without using Adomian polynomials or He's polynomials for the computation of nonlinear terms. The proposed method requires no linearization or restrictive assumptions throughout the process. Here, the homotopy analysis transform method is used to solve the time-fractional Fokker-Planck equation and similar equations. The series solution obtained by HATM converges very fast. Good agreement between the obtained solution and some well-known results has been demonstrated.
https://doi.org/10.1142/9789814704830_0029
This work considers a natural generalization of primitive words with respect to a language. Given a language L, a nonempty word w is said to be L-primitive if w is not a proper power of any word in L. After ascertaining the number of primitive words in submonoids of a free monoid, the work proceeds to count L-primitive words in submonoids of a free monoid. The work also studies the distribution of L-primitive words in certain subsets of a free monoid.
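A small sketch of the definitions for finite cases (the chapter works with general languages; the finite set L here is an illustrative assumption): a word is primitive iff it is not a proper power of any word, and L-primitive iff it is not a proper power of any word of L.

```python
def is_primitive(w: str) -> bool:
    """A nonempty word is primitive iff it is not u**k for any k >= 2.
    Classic test: w is a proper power iff w occurs inside (w+w)[1:-1]."""
    return w not in (w + w)[1:-1]

def is_L_primitive(w: str, L: set) -> bool:
    """w is L-primitive if w is not a proper power of any word in L
    (L here is a finite set, standing in for a general language)."""
    return not any(
        len(w) % len(u) == 0 and len(w) // len(u) >= 2 and u * (len(w) // len(u)) == w
        for u in L if u
    )

print(is_primitive("abab"))                 # False: abab = (ab)^2
print(is_primitive("aba"))                  # True
print(is_L_primitive("abab", {"ab", "ba"})) # False: abab = (ab)^2 with ab in L
```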
https://doi.org/10.1142/9789814704830_0030
We intend to examine and explore the capability of ranked set sampling (RSS), a relatively new sampling technique for data collection that provides more efficient estimators of some parameters of interest than the corresponding estimators based on simple random sampling (SRS). The main focus of the investigation is to develop a well-designed and cost-effective sampling technique, and RSS is one of the sampling methods that can help accomplish such objectives. In this paper, the performance of RSS is compared with that of the linear regression method based on SRS. These procedures are illustrated using real data sets consisting of the height and weight of sugarcane, provided by the Government of Bihar, India, and the data set on tuber referred to by Kumar (2013). The numerical investigation, in turn, suggests that the RSS method performs better than the linear regression method based on SRS in most real-life situations.
https://doi.org/10.1142/9789814704830_0031
Recent years have witnessed reverse logistics (RL) gaining immense importance as a profitable and sustainable business strategy. A systematic decision-making model is required to guide organizations wanting to engage in the reverse logistics business for market competence and environmental obligation. The main reverse logistics functions can be categorized as: collection of the returns; dismantling of the unwanted products; and repair and refurbishment of the recovered products and components. In this study, a mathematical model is developed which can assist the decision makers of a company in assessing the possibility of choosing one of the following three logistics operating channels (LOC) for managing each of the above reverse processes: the company takes control of the process itself; it completely outsources the services related to the process to a third-party reverse logistics service provider (3PRL); or it coordinates the process through a collaborative partnership with a 3PRL. To evaluate these three operating channels, the Analytic Hierarchy Process-Quality Function Deployment (AHP-QFD) methodology is used to incorporate the voice of the stakeholders. The requirements of the stakeholders determine the evaluating criteria through a series of houses of quality (HOQ), in which the weights of the evaluating criteria are obtained using AHP. The criteria for assessment are therefore based on the strategic goal of implementing a sustainable RL system. The result of the AHP-QFD approach, along with other quantitative data, is used to parameterize the mathematical model, which determines the most appropriate alternative for each of the RL activities. A case study is included to validate the proposed method.
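As a hedged illustration of the AHP step only (the pairwise judgments below are invented, and the QFD cascade is not shown), the sketch derives criterion weights from a reciprocal comparison matrix via Saaty's principal-eigenvector method and checks consistency:

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise comparison matrix via the principal
    eigenvector (Saaty's eigenvalue method), plus the consistency ratio."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    ci = (vals.real[i] - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random indices (subset)
    return w, ci / ri

# Hypothetical 3x3 comparison of evaluating criteria (reciprocal matrix).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```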
https://doi.org/10.1142/9789814704830_0032
A parallel system of two identical units has been studied by considering arbitrary distributions for the random variables associated with failure time, repair time, and replacement time. There is a single server who visits the system immediately to carry out repair activities. A unit does not work as new after repair and is therefore called a degraded unit. The degraded unit, at its failure, is replaced by a new one in order to avoid unnecessary expense on its repair. The system is considered to be in the up-state if one or both of the new and/or degraded units are operative. Some measures of system effectiveness, such as mean sojourn times, mean time to system failure (MTSF), availability, busy period of the server, expected number of visits by the server, and the profit incurred by the system model, are obtained using the semi-Markov process and the regenerative point technique. Giving particular values to various parameters and costs, numerical results for the MTSF and the profit function are obtained by considering the exponential distribution for all random variables.
https://doi.org/10.1142/9789814704830_0033
The paper reviews the practice and conceptual framework of Free and Open Source Software (FOSS), along with its evolution and global status. It is argued that open source software has come as a relief for programmers, especially when technologies have evolved and are rapidly changing in the current environment of business, enterprise, and academia. In this respect, the concept of a "FOSS Adoption Index (FAI)" is examined vis-a-vis the developing world, with special reference to India. It is clear from the specifics of the FAI model, including its parameters and the computational procedures involved, that this index may serve us well in understanding the evolution of FOSS. The FAI model shows medium to low levels of FOSS adoption among Indian organizations, including academia, which is particularly revealing given that the sharing of information has been a mainstay of the academic environment and of the development of science and technology. The paper further investigates the above issues with respect to the Apache web server's development as a robust and popular web server, currently used by 60% of all web servers. This suggests that there may exist some key factors in the development of the Apache web server which contribute to the popularity of open source software among various software developers on a FOSS platform.
https://doi.org/10.1142/9789814704830_0034
Electric machines are widely used in present-day industries, constituting about 75-80% of the total deployed machinery, owing to their user-friendly interface. Generally speaking, machinery is prone to multiple types of faults, and an intelligent monitoring layer can ensure smooth operation and issue maintenance alerts, mitigating abrupt failures. In this work, we present an Android/Tizen interface which integrates multiple AI and signal processing techniques to detect, in a very short span of time, the different faults interrupting the smooth performance of machinery. We demonstrate the thought process, simulation, final software interface, and test results to confirm its effectiveness.
https://doi.org/10.1142/9789814704830_0035
Broadband wireless access (BWA) is becoming quite popular today, as it satisfies the demands of users and also supports different real-time services and applications. WiMAX is a next-generation network technology which provides a BWA solution for Wireless Metropolitan Area Networks (WMANs), with Point-to-Multipoint (PMP) support and high throughput over a large coverage distance. WiMAX supports Quality of Service (QoS) features. Quality of Service is the main consideration for the different applications using network resources, such as voice over IP (VoIP), audio and video streaming, conferencing, gaming, and web browsing.
In this paper, the performance of video traffic over a WiMAX network using different service classes has been investigated. For this, a WiMAX module based on the network simulator NS-2 is used. The various parameters that determine the QoS of applications are analyzed, and QoS parameters such as throughput, packet loss, average delay, and average jitter are compared for the different service classes.
https://doi.org/10.1142/9789814704830_0036
Artificial Neural Networks (ANNs) are modelling tools with the ability to adapt to and learn complex topologies of inter-correlated multidimensional data. ANNs, inspired by biological neural processing, have been widely used in different fields of science and technology, incorporating time series forecasting, pattern recognition, and process control. ANNs have been successfully used for forecasting the groundwater table and quality parameters such as nitrate and total dissolved solids. For groundwater quality prediction, the availability of good-quality data of high precision is required. ANNs are classified as feed-forward neural networks (FFNNs), recurrent neural networks (RNNs), Elman backpropagation neural networks, input-delay feed-forward backpropagation neural networks, and Hopfield networks. The ability of ANNs to extract significant information provides a valuable framework for the representation of relationships present in the structure of the data. The evaluation of the output error after retraining an ANN shows that this procedure can substantially improve the achieved results. Through this review, it is observed that in most hydrological modeling cases published to date, FFNNs trained with the LM algorithm have performed well.
https://doi.org/10.1142/9789814704830_0037
Outlier detection is an area of great interest in the field of data mining. It has been observed that in several application domains a direct mapping is possible between outliers in the data and real-world anomalies. Outlier detection is an important research topic in various application domains and knowledge disciplines. This paper presents a new approach, DBOD, to overcome a disadvantage of the well-known outlier detection algorithm LOF: the computation of the LOF score is a tedious task because a large number of k-nearest-neighbor queries must be handled. Our proposed algorithm is based upon a new definition of the density of data points, known as point density, defined as k (the number of nearest neighbors) divided by the k-distance.
The results are compared with LOF on real-world data sets taken from the UCI repository, KDD Cup-99, and the Wisconsin breast cancer data set, and it is found that the proposed DBOD detects outliers more accurately and effectively than LOF.
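A minimal sketch of the point-density score as the abstract defines it (k divided by the k-distance), using scikit-learn's nearest-neighbor search; the data and the choice k = 5 are illustrative assumptions, and the DBOD decision rule itself is not reproduced:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def point_density_scores(X, k=5):
    """Point density as described in the paper: k divided by the k-distance
    (distance to the k-th nearest neighbor); low density suggests an outlier."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: each point is its own 0th neighbor
    dist, _ = nn.kneighbors(X)
    k_distance = dist[:, -1]
    return k / k_distance

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)),            # dense cluster
               np.array([[8.0, 8.0], [9.0, -7.0]])])  # two planted outliers
density = point_density_scores(X, k=5)
print("lowest-density points (candidate outliers):", np.argsort(density)[:2])
```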
https://doi.org/10.1142/9789814704830_0038
High-dimensional data spaces are very common in areas like medicine, biology, bioinformatics, web data, and text documents. This high dimensionality brings different challenges for algorithms, such as slowness, sensitivity to initial values, and either premature or slow convergence. Various algorithms for large data sets have been proposed in the literature. In this paper, a literature survey on nature-inspired particle swarm optimization (PSO) used for dimensionality reduction is presented, and a new variant of PSO, Enhanced Velocity PSO (EVPSO), is proposed. A convergence analysis of the proposed method has been conducted. This work is an attempt at computational, mathematical, and statistical analysis to provide direction to researchers who are working in the field of dimensionality reduction with PSO.
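For orientation, the sketch below implements the plain global-best PSO update that variants such as the proposed EVPSO modify; the velocity/position equations are the standard ones, and the inertia and acceleration constants are common textbook values, not the chapter's:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO (the baseline that the surveyed variants modify)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v                                                # position update
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best, val = pso_minimize(lambda z: np.sum(z**2), dim=10)   # sphere test function
print("best value found:", val)
```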
https://doi.org/10.1142/9789814704830_0039
In this paper, the authors propose a way to predict core human body temperature through Android smartphones. The touch screen in any Android smartphone works on self-capacitance. The capacitance value is converted into a corresponding frequency value. By slightly changing the source code of the Android operating system, this frequency value can be used to predict the body temperature. The temperature change values corresponding to their frequency change values are stored in a database beforehand.
https://doi.org/10.1142/9789814704830_0040
Wireless Sensor Networks (WSNs) are a popular research topic because of their multidimensional applications. Recent technological advancements in computation, communication, hardware, and software play the most important role in the development of low-cost, low-power, small-sized multifunctional sensor nodes. Since radio transmission consumes a lot of energy, limited battery power becomes an important issue in WSNs; hence, battery power is the most critical parameter deciding the lifetime of sensor nodes. A WSN requires an energy-efficient routing protocol to achieve a longer life for its sensor nodes. Different applications of WSNs (disaster management, border protection, security surveillance) need to deploy sensor nodes in large numbers, so the main requirement of an algorithm is to form clusters by grouping sensor nodes in a disjoint and non-overlapping manner. This paper presents a comparative study between energy-efficient hierarchical cluster-based routing and context-aware cluster-based routing in Wireless Sensor Networks (WSNs).
https://doi.org/10.1142/9789814704830_0041
Emotion plays an important role in speech processing. An experiment has been performed on speech emotion recognition using vowel regions. The starting and ending points of vowel regions are identified using vowel onset and vowel offset points. In most previous work, prosodic features have been extracted at the utterance level. This approach extracts Mel-frequency cepstral coefficient (MFCC) spectral features, which are used to classify eight different emotional classes: anger, disgust, fear, happiness, sadness, neutral, sarcastic, and surprise. The experiment has been performed on the IITKGP-SEHSC database. A Gaussian mixture model (GMM) is used as the classifier. The recognition rates of the model using utterance-level speech signals for male and female speakers are 65.47% and 65%, while the improved recognition rates using vowel regions for male and female speakers are 75.59% and 69.58%, respectively.
https://doi.org/10.1142/9789814704830_0042
Arranging numbers in various ways gives us amazing and surprising objects, which we try to explain historically or mathematically, or relate to religious contexts. This paper presents one such magical arrangement of numbers, the magic square, and is mainly concerned with algorithms for finding magic squares. In this text we discuss two such algorithms: the first finds magic squares of odd order, while the second can be used to find magic squares of doubly even order (i.e., orders that are multiples of 4, not merely of 2). The algorithms presented in this paper are explained by sample runs for magic squares of small order (3 for the odd-order case and 4 for the doubly even case), but we have successfully tested these algorithms for magic squares of higher order too.
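The standard odd-order construction (the Siamese method, which may or may not be the chapter's exact algorithm) is short enough to sketch in full:

```python
def odd_magic_square(n):
    """Siamese method for odd n: start in the middle of the top row, move
    up-right with wrap-around; on hitting an occupied cell, drop down one."""
    assert n % 2 == 1
    M = [[0] * n for _ in range(n)]
    i, j = 0, n // 2
    for num in range(1, n * n + 1):
        M[i][j] = num
        ni, nj = (i - 1) % n, (j + 1) % n
        if M[ni][nj]:                 # cell occupied: move down instead
            ni, nj = (i + 1) % n, j
        i, j = ni, nj
    return M

for row in odd_magic_square(3):
    print(row)                        # each row, column, and diagonal sums to 15
```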
https://doi.org/10.1142/9789814704830_0043
Saving electricity in street light systems is a significant challenge for researchers. The power consumption and control of street light systems are not economically feasible in most existing systems, due to resource and maintenance costs. Wireless technology is an efficient and compatible environment for most applications, owing to the wide utilization of Bluetooth, Wi-Fi, and the Global System for Mobile communications (GSM) with General Packet Radio Service (GPRS), where the data pass through as packets. In smart cities, people are moving towards modern communication, as wireless sensor networks provide comfort and satisfaction with minimum power utilization and other resources, without affecting the city's nature. Efficient control of street lights, with minimum power consumption and optimum usage of light, is a significant research problem in wireless sensor networks. This paper focuses on various methods used for street light control systems. It also discusses the replacement of existing lamps with light-emitting diode (LED) lamps.
https://doi.org/10.1142/9789814704830_0044
Software testing is one of the important and crucial phases of the software development life cycle. In the context of time, cost, and effective testing, the prime need is test case optimization. In the present testing scenario, meta-heuristic methods are used for the optimization problem and provide good optimized results. The most popular meta-heuristic algorithms used for software test optimization are the genetic algorithm, particle swarm optimization, artificial bee colony, ant colony optimization, the memetic algorithm, harmony search, and many more. In this paper, we provide a comparative study of these meta-heuristic approaches and identify which comes closest to the desired solution, i.e., coverage of requirements (fault coverage), while also taking less time (cost) for test case execution and supporting software quality.
https://doi.org/10.1142/9789814704830_0045
In India, speeding and red light violations are the main causes of accidents. Overspeeding reduces reaction time and makes it difficult to control a vehicle when an obstacle moves into its path, while violating a red light increases the chances of crashing. At present, India needs an efficient, automated system to detect overspeeding vehicles and red light violations. In this paper, we propose a framework for a smart city traffic management and surveillance system for automatic overspeeding vehicle detection and red light violation detection. The proposed model will help to detect vehicles that are overspeeding or violating red lights. The proposed system is based on pattern recognition and image processing techniques.
https://doi.org/10.1142/9789814704830_0046
Online social networks (OSNs) act as a platform to build online communities. Recent years have shown an unprecedented surge in OSN users, leading not only to interesting opportunities but also to new challenges. In the link prediction problem, we are given a snapshot of an OSN at time t and aim to predict links (e.g., friendships) that may emerge in the network between t and a later time t´; alternatively, we can imagine a setting in which some links existed at time t but are missing at t´. Link prediction problems can thus also be viewed as predicting hidden or missing links in OSNs. In the attribute inference problem, our aim is to infer missing or partial attribute data for network nodes. This situation is common when users in OSNs set their profiles to be publicly invisible or create an account without providing any attribute information. In this paper, we show that the accuracy of an attribute inference attack can be improved significantly by first inferring missing links.
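A toy sketch of the link prediction half of the pipeline, scoring unconnected node pairs by their number of common neighbors (one of the simplest standard predictors; the graph below is invented, and the chapter's actual inference method is not reproduced):

```python
import itertools
import networkx as nx

# Snapshot of a small OSN at time t (hypothetical friendships).
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")])

# Score each non-edge by its number of common neighbors; high-scoring pairs
# are predicted to become links between t and t'.
scores = sorted(
    ((u, v, len(list(nx.common_neighbors(G, u, v))))
     for u, v in itertools.combinations(G.nodes, 2) if not G.has_edge(u, v)),
    key=lambda e: -e[2],
)
for u, v, s in scores:
    print(f"predicted link {u}-{v}: {s} common neighbors")
```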
https://doi.org/10.1142/9789814704830_0047
An attempt has been made to develop a SIRA model to balance the number of nodes in a computer network. The SIRA model has Susceptible (S), Infectious (I), Recovered (R), and Appended (A) compartments. It has been assumed that an updated antivirus runs on the computer network at continuous intervals of time to avoid attacks of computer viruses. The concept of natural death has been incorporated. The threshold number and the stability of the system have been established. Numerical methods are implemented to solve and simulate the system of equations developed, under realistic parameter values.
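The chapter's exact equations are not reproduced in this abstract, so the sketch below integrates an illustrative SIRA-type compartment system of our own labeling (every rate constant and coupling term is an assumption), simply to show how such a model is simulated numerically:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative SIRA-type system (NOT the chapter's equations): b = inflow of
# new nodes, beta = infection contact rate, r = recovery via antivirus,
# a = rate of moving appended nodes into use, mu = "natural death" (removal).
def sira(y, t, b, beta, r, a, mu):
    S, I, R, A = y
    dS = b + a * A - beta * S * I - mu * S
    dI = beta * S * I - r * I - mu * I
    dR = r * I - mu * R
    dA = -a * A + mu * (S + I + R)    # removed nodes replaced via the appended class (assumption)
    return [dS, dI, dR, dA]

t = np.linspace(0, 100, 1000)
y0 = [0.9, 0.1, 0.0, 0.0]             # initial node fractions (assumed)
sol = odeint(sira, y0, t, args=(0.01, 0.5, 0.1, 0.05, 0.01))
print("final state (S, I, R, A):", np.round(sol[-1], 3))
```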
https://doi.org/10.1142/9789814704830_0048
In-service condition monitoring signifies a continuous assessment of the health of industrial plant and its equipment throughout its serviceable life, without any shutdown or hampering of its normal working. Various condition-based maintenance (CBM) techniques, comprising the detection of parameters like vibration, machine temperature, acoustics, and current, are attracting growing interest and showing considerable development in the current scenario of health monitoring of industrial machines; these techniques advise maintenance decisions based on different parameters, and by investigating these parameters one can determine the condition of the machine under consideration. The major purpose of implementing condition-based monitoring is not only planned maintenance but also increased productivity, lower overall maintenance cost, and enhanced safety. These techniques substantially lower the underlying operating cost by enabling the selection of the most suitable technique for a specific industrial scenario. A diverse literature has accumulated over the years on a variety of parameters of CBM techniques for machines, and an exhaustive survey is felt essential, with the aim of reflecting the current state-of-the-art developments, emphasizing in-service machinery monitored without a shutdown.
https://doi.org/10.1142/9789814704830_0049
Segmentation is needed for extracting useful information of interest. There are many segmentation algorithms available. Here, we discuss various segmentation strategies based on edges, thresholds, superpixels, and texture, together with some preprocessing techniques for dealing with images of complex structure. Experiments are carried out for the segmentation of pedestrians on the OSU thermal pedestrian dataset, based on the GBSF and ARMSF methods proposed by Z. Lieu et al. The results on pedestrian segmentation are promising.
https://doi.org/10.1142/9789814704830_0050
This paper presents reliability modeling of a transaction-oriented autonomic grid service. Transaction-oriented autonomic grid technology is aimed at providing reliable services for users by hiding the complexity of the service and protecting the system from various failures. The reliability of these systems is greatly affected by the occurrence of failures. A coloured Petri net model, CPN-TOGS (CPN-Transaction Oriented Grid Service), is presented in this paper for analyzing the empirical reliability of a transaction-oriented autonomic grid service. The model maintains the recovery of failed processes by using both local-level and replicated-level recovery.
https://doi.org/10.1142/9789814704830_0051
Spoken language identification refers to the set of algorithms and procedures for identifying a language with a high recognition rate. Accuracy and speed are two major concerns in language identification for real-time applications. The task becomes more challenging in noisy environments, due to the addition of different types and levels of noise from the environment. This paper covers language identification performance in a noisy environment in the context of Indian languages, using the speaker-independent Indian speech corpus (IITKGP-MLILSC). The experiment is carried out by extracting acoustic features (13 MFCCs, 1 Delta and 1 Delta-Delta) from the raw speech signal. Gaussian mixture models (GMMs) are trained for each language with varying numbers of mixture components. For the noisy environment, white noise with zero mean is added at different SNR (dB) values. The performance of the proposed LID system is evaluated for clean and noisy speech signals. The average recognition rate for clean speech is 56.48%, while that for noisy speech is 14.84%. The recognition rate of the proposed LID system varies according to the noise level, the duration of the speech utterance, and the number of GMM mixtures.
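One concrete piece of the experimental setup is easy to sketch: adding zero-mean white Gaussian noise to a signal at a prescribed SNR in dB (the sine wave below stands in for a speech utterance; MFCC extraction and GMM training are not shown):

```python
import numpy as np

def add_white_noise(signal, snr_db, rng=None):
    """Add zero-mean white Gaussian noise at a target SNR in dB:
    SNR_dB = 10 * log10(P_signal / P_noise)."""
    rng = rng or np.random.default_rng()
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

t = np.linspace(0, 1, 16000)                     # 1 s of a dummy 440 Hz tone
clean = np.sin(2 * np.pi * 440 * t)
for snr in (20, 10, 0):
    noisy = add_white_noise(clean, snr, np.random.default_rng(0))
    print(f"SNR {snr:2d} dB -> measured noise power {np.mean((noisy - clean)**2):.4f}")
```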
https://doi.org/10.1142/9789814704830_0052
Software reliability growth models help the software community in predicting and analyzing the quality of a software product. During the software testing phase, software reliability is highly related to the amount of development resources spent on detecting and correcting latent software errors, i.e., the amount of testing effort expenditure. Some research in the literature has studied the fault detection and fault correction processes, since software reliability can be enhanced considerably during testing as faults are detected and corrected by testers. The allocation of testing resources, such as manpower and CPU hours, during the testing phase can largely influence the fault detection speed and the time to correct a detected fault. The testing resource allocation is usually depicted by a testing effort function, which has been incorporated into software reliability models, and the fault correction process is usually modelled as a delayed fault detection process. In this paper, we propose a model of the fault detection and fault correction processes based on a log-logistic testing effort function. In addition, the methods of data analysis and the comparison criteria are presented. Model parameters are estimated by the least squares estimation method, and computational experiments are performed on an actual software failure data set. A comparative analysis evaluating the effectiveness of the proposed model against other existing models is also performed. Results show that the proposed model gives fairly better predictions.
https://doi.org/10.1142/9789814704830_0053
A Wireless Sensor Network (WSN) is a wireless network that consists of independent sensors which communicate with each other in a distributed fashion to sense and monitor the environment. The goal of a wireless sensor network is to have a long lifetime and high reliability with maximum coverage. LEACH is one of the first and most discussed hierarchical, cluster-based routing approaches for sensor networks. In WSNs, the LEACH protocol has received great attention due to its simplicity, energy efficiency, and load-balancing nature.
In this paper, we propose an improvement over the LEACH protocol for Wireless Sensor Networks by putting a check on how the cluster heads are chosen. For this, the clusters are sub-divided into groups. Choosing the locations of cluster heads properly can greatly improve the lifetime of the network. Simulation results demonstrate that our proposed modification achieves this claim.
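For context, the baseline LEACH cluster-head election that the proposed modification constrains uses the well-known rotating threshold T(n) = p / (1 - p * (r mod 1/p)); a minimal sketch follows (node energy and the paper's grouping step are omitted):

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) for a node that has not been
    a cluster head in the last 1/p rounds (p = desired CH fraction, r = round)."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, p=0.05, r=0, rng=random.Random(0)):
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]   # each node decides independently

print(elect_cluster_heads(range(100)))   # roughly 5 heads expected in round 0
```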
https://doi.org/10.1142/9789814704830_0054
In IEEE 802.11-based WLAN systems, the mobile nodes (MNs) are connected through access points (APs). When, during mobility, an MN leaves one AP and associates with a new AP, a handoff process occurs. To provide better seamless connectivity, the handoff process latency should be very small. Handoff latency is a combination of scanning, re-authentication, and re-association latency. Re-authentication latency is a major contributing factor that affects the performance of the handoff and increases the handoff latency. In this paper, we present a novel approach for reducing the re-authentication latency and the network overhead. To reduce the re-authentication latency, we apply a pre-authentication mechanism preceded by mobility prediction, so that user mobility behavior is considered as a contributing factor in the pre-authentication. With the help of mobility prediction, the central server sends a pre-authentication key to the APs and also sends the IDs of the APs to the MN, which stores them in its buffer. The simulation results show a good factor of improvement over the latency values in a WLAN environment.
https://doi.org/10.1142/9789814704830_0055
Most human feelings are expressed through the face, and by seeing a person's face one can easily identify whether they are happy, sad, or angry. So, to truly know the feeling behind words, facial expressions must be correctly recognized. This paper gives a brief overview of facial expression recognition systems and discusses a few approaches with different strategies, the available datasets, and the classifiers commonly used. We have used histogram of oriented gradients (HOG) features for facial expression recognition, and the results are promising.
https://doi.org/10.1142/9789814704830_0056
The term "cloud computing" comes from the internet-based network diagrams that represent the internet, or various parts of it, as schematic clouds; it denotes the characteristics and services associated with internet-based computing. These characteristics include infrastructure, provisioning, network access, and managed metering. This paper discusses the primary business models employed (software, platform, and infrastructure as a service) and the common deployment models used by service providers and users to deliver and maintain cloud services (private, public, community, and hybrid clouds). In this paper, cloud computing refers to different types of services and applications delivered in the internet cloud; in many cases, the devices used to access such services and applications do not require any special software. In that sense, cloud computing is everywhere. Cloud computing also promises to cut operational and capital costs.
https://doi.org/10.1142/9789814704830_0057
Forests are essential for the survival and sustenance of life, and their growth should be optimized so that greater benefits are derived from them. With such a large establishment and geographical base, monitoring and decision making become very critical, and the inherent delays hamper the decision process required at a particular time. The increasing area covered by forest plantations creates a demand for trustworthy mechanisms to ensure they are responsibly established and managed. However, most existing standards are focused exclusively or predominantly on natural or semi-natural forests, while only a few are specific to planted forests or plantations. The main aim, based on a series of comparative analyses, is to assess whether, and to what extent, planted forests are properly considered within the existing sets of standards/guidelines, and to identify areas for improvement. This paper focuses on realizing the full potential of the convergence of GIS and mobile technology for plantations, with emphasis on a technically viable infrastructure solution based on sustainability principles. The integration of GIS and mobile technology is proposed with the objective of enabling single-window access to the information and services provided by various formations and of establishing a collaborative environment.
https://doi.org/10.1142/9789814704830_0058
Internet traffic analysis is a popular research area because of its benefits for many applications, primarily the task of internet traffic classification. Much work in this field has progressed in the direction of classification and also of traffic prediction. This paper presents a survey of traffic classification and prediction, and also notes the importance of decision trees in the field of traffic classification.
https://doi.org/10.1142/9789814704830_0059
Search engines are an integral part of the web today. This paper attempts to give an overview of available search engines, with a main focus on Google and VisiNav, which use different techniques for searching web data.
https://doi.org/10.1142/9789814704830_0060
The importance of social networks is increasing day by day, as they are becoming a popular and convenient medium of communication between social users. They enable users to share their profile data, ideas, videos, and any other content they have, and they give public developers the opportunity to use and extend the applications of the social network service. The people associated with social networks are often not aware of the security threats and risks involved in sharing their data: they trust the network and readily make their data available to it, and they relax the original privacy alerts for their convenience, thus making it easier for attackers to attack. Users' awareness of these risks and challenges will protect against the loss of private and personal information. This survey paper examines the issues related to social networks and several possible ways for an attacker to attack.
https://doi.org/10.1142/9789814704830_0061
This paper shows that reduced (pruned) rules can be nearly as effective as unpruned decision tree rules for checking the genuineness of banknotes (currency). Pruning is required to minimize the decision tree by removing the additional nodes and rules that are irrelevant to reducing the classification error rate; pruning reduces complexity and makes the tree easier to understand. In this paper, pruned decision tree rules decide the classification of banknotes as genuine or fake with good accuracy. Sensitivity, accuracy, and specificity are the metrics applied to check the performance of all experiments, and all the experiments have been conducted using CART. The experimental results obtained by pruned decision tree rules and unpruned decision tree rules are also compared. We also present the extracted rules, which explain how a better prediction accuracy is reached. The major aim is to show, via the decision tree and its extracted rules, how fake notes and genuine notes are classified.
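A hedged sketch of the shape of such an experiment with scikit-learn's CART-style trees: synthetic four-feature data stands in for the UCI banknote authentication set, and cost-complexity pruning (ccp_alpha) stands in for whatever pruning procedure the chapter used; accuracy, sensitivity, and rule size are compared for pruned and unpruned trees.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score, recall_score

# Synthetic stand-in for the UCI banknote authentication data
# (4 wavelet-based features, genuine vs. fake).
X, y = make_classification(n_samples=1372, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, tree in [("unpruned", DecisionTreeClassifier(random_state=0)),
                   ("pruned", DecisionTreeClassifier(ccp_alpha=0.01, random_state=0))]:
    tree.fit(Xtr, ytr)
    pred = tree.predict(Xte)
    print(f"{name:8s} leaves={tree.get_n_leaves():3d} "
          f"accuracy={accuracy_score(yte, pred):.3f} "
          f"sensitivity={recall_score(yte, pred):.3f}")

# The extracted rules of the pruned tree are much shorter and easier to read.
print(export_text(DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(Xtr, ytr)))
```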
https://doi.org/10.1142/9789814704830_0062
In spite of standardization efforts, medical diagnosis is still considered an art. This status is owed to the fact that medical diagnosis requires a proficiency in coping with uncertainty that is simply not found in today's computing machinery. Diagnostic decisions in medicine frequently encounter uncertainty, and modeling this uncertainty in the process of disease diagnosis under a fuzzy environment is an important subject. Various efforts have been made to model the uncertainties in this area through fuzzy sets and their generalizations. The interval-valued fuzzy set is referred to as an i-v fuzzy set. This study proposes a comparative study for medical diagnosis based on cluster analysis, taking the lower and upper bounds from a given matrix of patient-symptom-disease relations.