
  Bestsellers

  • Article (No Access)

    The epidemic cost of interconnected networks

    We study how interconnecting two networks affects the epidemic cost over them. When the epidemic eventually dies out, we find that the total epidemic cost over the interconnected network is approximately the sum of the total epidemic costs of the two sub-networks. We also prove that interconnection reduces the epidemic threshold. Thus, interconnecting networks makes an epidemic easier to trigger while leaving its total cost almost unchanged.
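
    The threshold claim can be illustrated with a standard result that is broader than this particular paper: in the common SIS mean-field approximation, the epidemic threshold of a network with adjacency matrix A is 1/lambda_max(A), and adding interconnection links can only increase lambda_max, hence only lower the threshold. The sketch below (a toy illustration with random graphs, not the authors' model) shows this numerically.

```python
# Illustrative sketch (not the paper's model): in the standard SIS framework the
# epidemic threshold of a network with adjacency matrix A is 1 / lambda_max(A).
# Adding interconnection links can only increase lambda_max of the combined
# network, so the threshold can only decrease.
import numpy as np

rng = np.random.default_rng(0)

def random_adjacency(n, p):
    """Symmetric 0/1 adjacency matrix of an Erdos-Renyi-style graph."""
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    return a + a.T

def sis_threshold(adj):
    """Epidemic threshold estimate: inverse of the spectral radius."""
    return 1.0 / np.max(np.linalg.eigvalsh(adj))

n = 100
A, B = random_adjacency(n, 0.05), random_adjacency(n, 0.05)

# Interconnect the two sub-networks with a handful of random cross links.
C = np.zeros((2 * n, 2 * n))
C[:n, :n], C[n:, n:] = A, B
for _ in range(10):
    i, j = rng.integers(n), n + rng.integers(n)
    C[i, j] = C[j, i] = 1.0

print("threshold of sub-network A :", sis_threshold(A))
print("threshold of sub-network B :", sis_threshold(B))
print("threshold of interconnected:", sis_threshold(C))
```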

  • Article (No Access)

    RELIABLE INTERNET-BASED MASTER-WORKER COMPUTING IN THE PRESENCE OF MALICIOUS WORKERS

    We consider a Master-Worker distributed system where a master processor assigns, over the Internet, tasks to a collection of n workers, which are untrusted and might act maliciously. In addition, a worker may not reply to the master, or its reply may not reach the master, due to unavailability or failure of the worker or the network. Each task returns a value, and the goal is for the master to accept only correct values with high probability. Furthermore, we assume that the service provided by the workers is not free; for each task that a worker is assigned, the master is charged with a work-unit. Therefore, considering a single task assigned to several workers, our objective is to have the master accept the correct value of the task with high probability, with the smallest possible amount of work (the number of workers the master assigns the task to). We probabilistically bound the number of faulty processors by assuming a known probability p < 1/2 of any processor being faulty.

    Our work demonstrates that it is possible to obtain, with provable analytical guarantees, high probability of correct acceptance with low work. In particular, we first show lower bounds on the minimum amount of (expected) work required, so that any algorithm accepts the correct value with probability of success 1 - ε, where ε ≪ 1 (e.g., 1/n). Then we develop and analyze two algorithms, each using a different decision strategy, and show that both algorithms obtain the same probability of success 1 - ε, and in doing so, they require similar upper bounds on the (expected) work. Furthermore, under certain conditions, these upper bounds are asymptotically optimal with respect to our lower bounds.
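
    As a rough illustration of the kind of decision strategy and work bound discussed here (not necessarily either of the paper's two algorithms), the sketch below uses plain majority voting among workers that are each faulty independently with probability p < 1/2, and searches for the smallest odd number of workers achieving success probability at least 1 - epsilon.

```python
# Hedged sketch of one natural decision rule (majority voting), not necessarily
# either algorithm from the paper: assign the task to n workers, each faulty
# independently with probability p < 1/2, and accept the majority value.
from math import comb

def majority_success(n, p):
    """P(majority of n independent workers return the correct value)."""
    # Correct answers needed: strictly more than n // 2.
    return sum(comb(n, k) * (1 - p) ** k * p ** (n - k)
               for k in range(n // 2 + 1, n + 1))

p, eps = 0.3, 1e-3
n = 1
while majority_success(n, p) < 1 - eps:
    n += 2  # keep n odd to avoid ties
print(f"work needed for success prob >= {1 - eps}: {n} workers")
```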

  • Article (No Access)

    A Hybrid Approach for Task Scheduling Based Particle Swarm and Chaotic Strategies in Cloud Computing Environment

    This paper presents a hybrid approach based on discrete Particle Swarm Optimization (PSO) and chaotic strategies for solving the multi-objective task scheduling problem in cloud computing. The main purpose is to allocate the submitted tasks to the available resources in the cloud environment with minimum makespan (i.e. schedule length) and processing cost while maximizing resource utilization, without violating the Service Level Agreement (SLA) between users and cloud providers. The main challenges PSO faces when used to solve scheduling problems are premature convergence and trapping in local optima. This paper therefore presents an enhanced PSO algorithm hybridized with chaotic map strategies, called the Enhanced Particle Swarm Optimization based Chaotic Strategies (EPSOCHO) algorithm. Our approach employs two chaotic map strategies, the sinusoidal iterator and the Lorenz attractor, to enhance the PSO algorithm and obtain good convergence and diversity when optimizing task scheduling in cloud computing. The proposed approach is simulated and implemented in the Cloudsim simulator. Its performance is compared with the standard PSO algorithm, the improved PSO algorithm with longest-job-to-fastest-processor (LJFP-PSO), and the improved PSO algorithm with minimum completion time (MCT-PSO) using different sizes of tasks and various benchmark datasets. The results clearly demonstrate the efficiency of the proposed approach in terms of makespan, processing cost and resource utilization.
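
    The sketch below illustrates only the general idea of driving a PSO parameter with a chaotic map, not the EPSOCHO algorithm itself: particle positions encode a task-to-VM assignment, fitness is the makespan, and the inertia weight follows a simple sinusoidal iterator. Task lengths, VM speeds and all PSO constants are assumed toy values.

```python
# Minimal sketch of the general idea (chaotic inertia weight inside PSO), not the
# paper's EPSOCHO algorithm. Task lengths and VM speeds below are made-up values.
import numpy as np

rng = np.random.default_rng(1)
task_len = rng.uniform(10, 100, size=20)   # hypothetical task lengths
vm_speed = np.array([1.0, 2.0, 4.0])       # hypothetical VM speeds

def makespan(position):
    """Round the continuous position to a task-to-VM assignment and
    return the completion time of the most loaded VM."""
    assign = np.clip(position.round().astype(int), 0, len(vm_speed) - 1)
    loads = np.zeros(len(vm_speed))
    for task, vm in enumerate(assign):
        loads[vm] += task_len[task] / vm_speed[vm]
    return loads.max()

n_particles, dims, iters = 30, len(task_len), 200
x = rng.uniform(0, len(vm_speed) - 1, (n_particles, dims))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([makespan(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()
w = 0.7                                    # seed of the sinusoidal chaotic map

for _ in range(iters):
    w = np.sin(np.pi * w)                  # one common sinusoidal iterator form
    r1, r2 = rng.random((2, n_particles, dims))
    v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, len(vm_speed) - 1)
    vals = np.array([makespan(p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan found:", round(pbest_val.min(), 2))
```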

  • Article (Free Access)

    DIRTY INPUTS IN POWER PRODUCTION AND ITS CLEAN UP: A SIMPLE MODEL

    This paper develops a model in which a country that, in the short run, has access only to a dirty technology for producing electric power looks to expand its production in the long run by permitting only new power plants based on clean technology. The model mimics the current reality in which major developing countries are being pushed by factors such as the 2015 Paris Climate Agreement and the large burden of mortality and morbidity resulting from fossil fuel use to rely more on clean technologies. Our model shows how emissions and the emission intensity of power output after the adoption of clean technologies are increasing in the power production targets set by the government before such technology became available and in supply variables such as the wage rate and expenses on fixed capital, and decreasing in the tax on power production before the availability of clean technologies. Finally, we show that for a low enough cost of the clean resource input, a country with higher demand is able to set a higher production target with the dirty technology while the clean technology is unavailable and yet achieve lower emissions and emission intensity in the long run.

  • Article (No Access)

    COST ANALYSIS OF THE R-UNRELIABLE-UNLOADER QUEUEING SYSTEM

    This paper analyzes an unloader queueing model in which N identical trailers are unloaded by R unreliable unloaders. Steady-state analytic solutions are obtained under the assumption that trip times, unloading times, finishing times, breakdown times, and repair times are exponentially distributed. A cost model is developed to determine simultaneously the optimal number of unloaders and the optimal finishing rate that minimize the expected cost per unit time. Numerical results are provided in which several steady-state characteristics of the system are calculated for assumed values of the system parameters and cost elements. A sensitivity analysis is also carried out.
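
    A heavily simplified sketch of the underlying cost trade-off is given below: it models the trailers and unloaders as a finite-source (machine-repair) birth-death queue, ignores unloader breakdowns and finishing times entirely, and picks the number of unloaders R minimizing an assumed expected cost per unit time. It is not the paper's model, only an illustration of the optimization.

```python
# Simplified sketch of the cost trade-off (not the paper's full model: unloader
# breakdowns, repairs and finishing times are ignored here). N trailers cycle
# between trips (rate lam each) and an unloading station with R unloaders
# (rate mu each); we pick R minimizing expected cost per unit time.
import numpy as np

def steady_state(N, R, lam, mu):
    """Birth-death probabilities for the finite-source (machine-repair) queue."""
    p = np.ones(N + 1)
    for n in range(N):
        birth = (N - n) * lam          # trailers returning from trips
        death = min(n + 1, R) * mu     # busy unloaders in state n + 1
        p[n + 1] = p[n] * birth / death
    return p / p.sum()

def cost_rate(N, R, lam, mu, c_unloader, c_wait):
    p = steady_state(N, R, lam, mu)
    mean_in_system = np.dot(np.arange(N + 1), p)
    return c_unloader * R + c_wait * mean_in_system

N, lam, mu = 10, 0.5, 2.0            # hypothetical parameter values
c_unloader, c_wait = 30.0, 25.0      # hypothetical cost elements
best_R = min(range(1, N + 1),
             key=lambda R: cost_rate(N, R, lam, mu, c_unloader, c_wait))
print("optimal number of unloaders:", best_R,
      "cost per unit time:", round(cost_rate(N, best_R, lam, mu, c_unloader, c_wait), 2))
```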

  • Article (No Access)

    A HIGH ROBUSTNESS AND LOW COST CASCADING FAILURE MODEL BASED ON NODE IMPORTANCE IN COMPLEX NETWORKS

    In this paper, we investigate the trade-off between robustness against cascading failures and the cost of network construction in complex networks. Since important, highly connected nodes usually play a key role in network structure and dynamics, we propose an optimal capacity allocation model based on node importance. The model increases the capacities of important nodes while reducing the network construction cost through an appropriate capacity allocation parameter. Moreover, we show that our allocation model can enhance robustness against cascading failures on the IEEE 300 network.
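
    The flavor of such an allocation can be sketched as follows (an illustration, not the paper's exact rule): compared with the usual uniform tolerance C_i = (1 + alpha) * L_i, scaling each node's tolerance by its normalized importance keeps a large margin at the hubs while lowering the total construction cost. Loads and importance values below are synthetic.

```python
# Illustrative sketch only (not the paper's exact rule): in the usual cascading-
# failure setting each node i carries an initial load L_i and is assigned a
# capacity C_i >= L_i; construction cost is proportional to sum(C_i - L_i).
# Instead of a uniform tolerance C_i = (1 + alpha) * L_i, scale each node's
# tolerance by its (normalized) importance, so hubs keep a large margin while
# the total construction cost drops. Loads and importance values are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 200, 0.5
degree = rng.integers(1, 20, size=n).astype(float)   # stand-in node importance
load = degree ** 1.2                                  # hypothetical initial loads

uniform_capacity = (1 + alpha) * load                 # same tolerance for all nodes
importance = degree / degree.max()                    # normalized importance in (0, 1]
weighted_capacity = (1 + alpha * importance) * load   # full margin only for hubs

print("construction cost, uniform   :", round((uniform_capacity - load).sum(), 1))
print("construction cost, importance:", round((weighted_capacity - load).sum(), 1))
hub = degree.argmax()
print("spare capacity kept at the hub:", round(weighted_capacity[hub] - load[hub], 1),
      "of", round(uniform_capacity[hub] - load[hub], 1))
```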

  • Article (No Access)

    A Review of Cost and Makespan-Aware Workflow Scheduling in Clouds

    Scientific workflow is a common model for organizing large scientific computations. It borrows the concept of workflow from business activities to manage the complicated processes of scientific computing automatically or semi-automatically. Workflow scheduling, which maps tasks in workflows to parallel computing resources, has been studied extensively over the years. In recent years, with the rise of cloud computing as a new large-scale distributed computing model, it has become important to study the workflow scheduling problem in the cloud. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as the self-service resource management model and the pay-as-you-go billing model, so workflow scheduling in the cloud needs to be reconsidered. When scheduling workflows in clouds, the monetary cost and the makespan of the workflow executions concern both the cloud service providers (CSPs) and the customers. In this paper, we study a series of cost- and makespan-aware workflow scheduling algorithms in cloud environments, aiming to help researchers choose appropriate cloud workflow scheduling approaches for various scenarios. We conducted a broad review of different cloud workflow scheduling algorithms and categorized them based on their optimization objectives and constraints. We also discuss possible future research directions for cloud workflow scheduling.
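
    The two objectives being traded off can be made concrete with a tiny example (made-up numbers, data-transfer times ignored, one VM per task): given a DAG and a chosen VM type for each task, the makespan is the critical-path length and the monetary cost is the sum of runtime times price.

```python
# A small sketch of the two objectives a cloud workflow scheduler trades off:
# given a DAG of tasks and a chosen VM type for each task, compute the makespan
# (critical-path length, assuming each task runs on its own VM and data-transfer
# times are ignored) and the monetary cost. All numbers are made up.
import functools

runtime = {  # task -> {vm_type: execution time in hours}
    "A": {"small": 4, "large": 2}, "B": {"small": 3, "large": 1.5},
    "C": {"small": 6, "large": 3}, "D": {"small": 2, "large": 1},
}
price = {"small": 0.10, "large": 0.40}      # $ per hour
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
plan = {"A": "large", "B": "small", "C": "large", "D": "small"}

@functools.lru_cache(maxsize=None)
def finish_time(task):
    start = max((finish_time(p) for p in parents[task]), default=0.0)
    return start + runtime[task][plan[task]]

makespan = max(finish_time(t) for t in runtime)
cost = sum(runtime[t][plan[t]] * price[plan[t]] for t in runtime)
print(f"makespan = {makespan} h, cost = ${cost:.2f}")
```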

  • Article (No Access)

    A New Three-Level Design of Nano-Scale Subtractor Based on Coulomb Interaction of Quantum Dots

    Quantum-dot cellular automata (QCA) is a popular nanotechnology for processing at deep sub-micron levels. In recent years, numerous multi-layer QCA circuits of adders and subtractors have been developed. However, little attention has been paid to QCA implementations of subtractor schemes. This paper presents a three-layered subtractor with simple access to inputs and outputs as an essential block in QCA technology. The design was created, optimized, and simulated using QCADesigner-E. The results revealed that the proposal achieves improved efficiency, speed, and cost, owing primarily to the three-layer design. The architecture also offers easier access to the input and output lines. The multi-layer crossover technique is used to build the design. According to the simulation findings, the proposed subtractor employs 22 QCA cells and surpasses the majority of previous results in the literature.
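
    The QCA contribution is a cell-layout question, but for reference the Boolean function a full subtractor must realize is simple; the snippet below just tabulates the difference and borrow-out logic that any such design has to reproduce (it says nothing about the three-layer QCA implementation itself).

```python
# Reference logic only: the truth table of a full subtractor (difference and
# borrow-out), which the QCA cell layout has to reproduce.
def full_subtractor(a, b, borrow_in):
    difference = a ^ b ^ borrow_in
    borrow_out = (int(not a) & b) | (int(not (a ^ b)) & borrow_in)
    return difference, borrow_out

print(" a b bin | diff bout")
for a in (0, 1):
    for b in (0, 1):
        for bin_ in (0, 1):
            d, bo = full_subtractor(a, b, bin_)
            print(f" {a} {b}  {bin_}  |  {d}    {bo}")
```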

  • Article (No Access)

    An Optimal Selection and Placement of Distributed Energy Resources Using Hybrid Genetic Local Binary Knowledge Optimization

    In recent times, the virtual power plant (VPP) has been gaining attention in power system engineering due to its tremendous potential for enhancing sustainable urbanism, since it supplies clean energy from distributed generators. Electricity is deemed a basic requirement for future automotive and ultra-modern technologies. The scarcity of traditional energy resources and their complex generation process are making the production cost of electricity increase dramatically. Moreover, traditional power distribution systems are encountering issues in distributing electrical energy to fulfill customer demands. Therefore, this paper proposes a novel power management system, the hybrid genetic local binary knowledge (HGLBK) algorithm, to manage power distribution in the transmission lines and to optimize the total operation cost of the network. The hybrid optimization algorithm effectively controls the load by supplying the surplus power load to adjacent feeders, thereby optimally selecting and placing the distributed energy resources (DERs). The proposed concept is applied at Kayathar, Tamil Nadu, India, and its real-time data are utilized for modeling the VPP. The VPP concept is implemented on the IEEE 9-bus system and its performance is simulated in MATLAB. The performance of the proposed HGLBK algorithm is assessed by comparing its effectiveness with existing approaches.

  • Article (No Access)

    Unanticipated Software Evolution: Evaluating the Impact on Development Cost and Quality

    Unanticipated Software Evolution (USE) techniques enable developers to easily change any element of the software without having to anticipate and isolate extension points. However, we have not found empirical validations of the impact of USE on development cost and quality. In this work, we design and execute an experiment for USE, in order to compare its resulting metrics, namely time, lines of code, test coverage and complexity, using object-oriented (OO) systems as a baseline. Thirty undergraduate students were subjects in the experiment. The results suggest that USE has a significant impact on the lines-of-code and complexity metrics, reducing the number of lines changed and the McCabe cyclomatic complexity during software evolution.

  • Article (No Access)

    The Allocation Scheme of Software Development Budget with Minimal Conflict Attributes

    During software development, a significant challenge is accurately estimating the associated costs. The primary goal of project managers is to deliver a highly trustworthy product within the designated budgetary constraints. Nonetheless, the trustworthiness of software hinges upon a range of distinct attributes, and when a budget allocation scheme is implemented to enhance these attributes, conflicts among them may arise. Thus, it becomes imperative to select an allocation scheme that effectively mitigates conflict-associated costs. In this paper, we define the conflict costs, establish cost estimation models, and formulate the difficulty coefficient constraint for improving attributes. We then analyze the relative importance weights of these attributes. Drawing upon the conflict costs, importance weights, and difficulty coefficient constraint, we present an algorithm that determines a budget allocation scheme minimizing the conflict-associated costs. Finally, we provide an illustrative example that demonstrates the practicability of the proposed algorithm. This research offers software managers valuable insights for allocating budgetary resources reasonably, thereby maximizing overall benefits.
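
    To give a feel for the kind of allocation problem described (the model below is an invented stand-in, not the paper's formulation), one can score candidate splits of the budget by a weighted, difficulty-adjusted benefit minus a pairwise conflict cost and search a coarse grid of the budget simplex.

```python
# Hedged sketch of the flavor of the problem (not the paper's algorithm): split a
# budget across trustworthiness attributes so high-weight attributes get more,
# adjusted for difficulty, while keeping a pairwise conflict cost low. Conflict
# costs, weights and difficulty coefficients are made up.
import numpy as np

attributes = ["reliability", "security", "maintainability"]
weights = np.array([0.5, 0.3, 0.2])           # relative importance (assumed)
difficulty = np.array([1.0, 1.4, 0.8])        # effort multiplier per attribute
conflict = np.array([[0.00, 0.20, 0.05],      # conflict[i][j]: cost per unit of
                     [0.20, 0.00, 0.10],      # simultaneous investment in i and j
                     [0.05, 0.10, 0.00]])
budget = 100.0

def score(alloc):
    alloc = np.array(alloc, dtype=float)
    benefit = weights @ (alloc / difficulty)           # weighted improvement
    conflict_cost = alloc @ conflict @ alloc / budget  # grows with joint spending
    return benefit - conflict_cost

# Enumerate allocations on a coarse grid of the budget simplex.
grid = range(0, 101, 5)
best = max(((x, y, budget - x - y) for x in grid for y in grid if x + y <= budget),
           key=score)
print("best allocation:", dict(zip(attributes, best)), "score:", round(score(best), 2))
```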

  • Article (No Access)

    INSPECTION FREQUENCY OPTIMIZATION MODEL FOR DEGRADING FLOWLINES ON AN OFFSHORE PLATFORM

    Many offshore oil and gas installations in the North Sea are approaching the end of their design lifetimes. Technological improvements and higher oil prices have created favorable conditions for recovering more oil from these existing installations; however, in most cases an extended production period does not justify investment in new installations. Cost-effective maintenance of the existing platform infrastructure is therefore becoming very important.

    In this paper, an inspection frequency optimization model is developed that inspection and maintenance personnel in the industry can use to estimate the number of inspections, or the optimum preventive maintenance time, required for a degrading component at any age or interval in its life cycle at minimum total maintenance cost. The model can help in planning inspection and maintenance intervals for different components of the platform infrastructure. The model has been validated by a case study on flowlines installed on the topside of an offshore oil and gas platform in the North Sea. Reliability analysis has been carried out to arrive at the best inspection frequency for the flowline segments under study.
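
    A minimal sketch of the classic periodic-inspection trade-off behind such models is shown below (it is not the paper's model): a component fails at a random Weibull time, the failure is only revealed at inspections every T time units, and the interval T minimizing the expected cost per unit time is found by a renewal-reward estimate. All parameters are assumed.

```python
# Minimal sketch of the classic periodic-inspection trade-off (not the paper's
# specific model): a flowline segment fails at a random (Weibull) time, failures
# are only found at inspections every T time units, and we search for the T that
# minimizes expected cost per unit time. All parameters are made up.
import numpy as np

rng = np.random.default_rng(3)
shape, scale = 2.5, 10.0                   # assumed Weibull degradation parameters
c_insp, c_down, c_rep = 1.0, 20.0, 50.0    # assumed cost elements

def cost_per_unit_time(T, n_samples=100_000):
    x = scale * rng.weibull(shape, n_samples)   # time to hidden failure
    n_insp = np.ceil(x / T)                     # inspections until detection
    cycle = n_insp * T                          # failure detected at n_insp * T
    cost = c_insp * n_insp + c_down * (cycle - x) + c_rep
    return cost.sum() / cycle.sum()             # renewal-reward ratio

grid = np.arange(0.5, 10.01, 0.25)
best_T = min(grid, key=cost_per_unit_time)
print(f"best inspection interval ~ {best_T:.2f}, "
      f"cost rate ~ {cost_per_unit_time(best_T):.3f}")
```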

  • Article (No Access)

    SURVEILLANCE TEST INTERVAL OPTIMIZATION FOR NUCLEAR PLANTS USING MULTI OBJECTIVE REAL PARAMETER GENETIC ALGORITHMS

    The surveillance test interval is an important parameter for the standby systems of a plant with respect to availability, cost and other issues. Standby systems are not required during normal operation but are essential when demanded by plant operations, and they are tested and maintained periodically to ensure their serviceability.

    In this paper an attempt is made to optimize unavailability, cost and man-rem consumption with respect to the surveillance test interval for the standby systems of nuclear plants. A novel approach is introduced in which a real-parameter Genetic Algorithm (GA) is used for the multi-objective optimization problem at hand. Applications of genetic algorithms to similar problems have been reported in the literature, but the approach proposed here differs significantly from the existing methods. Using a real-parameter GA keeps the algorithm simple by avoiding the overhead of encoding and decoding solutions. Moreover, a multi-objective optimization method is proposed that optimizes not only unavailability but also cost and man-rem consumption. We concentrate mainly on the Emergency Core Cooling System of a research reactor, but the same idea can easily be extended to other standby systems.
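
    The sketch below shows the general shape of such an optimization, not the authors' formulation: a toy real-parameter GA minimizes a weighted sum of unavailability, yearly testing cost and yearly man-rem as functions of the test interval T, using the textbook standby approximation U(T) ~ lambda*T/2 + tau/T. All rates, costs and weights are assumptions.

```python
# Hedged sketch: a toy real-parameter GA minimizing a weighted sum of the three
# quantities mentioned in the abstract. The unavailability approximation and all
# parameter values are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(4)
lam, tau = 1e-4, 1.5                 # standby failure rate (/h), test duration (h) -- assumed
c_test, manrem_test = 500.0, 0.02    # cost and man-rem per test -- assumed
hours_per_year = 8760.0
w = np.array([1e6, 1e-2, 1e3])       # weights for (unavailability, cost, man-rem)

def objectives(T):
    tests_per_year = hours_per_year / T
    unavailability = lam * T / 2 + tau / T          # textbook standby approximation
    return np.array([unavailability,
                     c_test * tests_per_year,
                     manrem_test * tests_per_year])

def fitness(T):
    return float(w @ objectives(T))                 # weighted-sum scalarization

# Real-parameter GA on T in [24 h, 8760 h]: tournament selection,
# blend crossover and Gaussian mutation, with simple elitism.
pop = rng.uniform(24, hours_per_year, size=40)
for _ in range(100):
    elite = pop[np.argmin([fitness(t) for t in pop])]
    i, j = rng.integers(len(pop), size=(2, len(pop)))
    winners = np.where([fitness(a) < fitness(b) for a, b in zip(pop[i], pop[j])],
                       pop[i], pop[j])
    partners = rng.permutation(winners)
    alpha = rng.random(len(pop))
    children = alpha * winners + (1 - alpha) * partners + rng.normal(0, 50, len(pop))
    children = np.clip(children, 24, hours_per_year)
    children[0] = elite                             # keep the best solution so far
    pop = children

best = min(pop, key=fitness)
print(f"near-optimal test interval ~ {best:.0f} h, "
      f"(unavailability, cost/yr, man-rem/yr) = {objectives(best).round(4)}")
```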

  • Article (No Access)

    RELIABILITY ANALYSIS OF HYBRID ENERGY SYSTEM

    A hybrid energy system integrates renewable energy sources such as wind, solar, micro-hydro and biomass with fossil fuel generators (e.g., diesel generators) and energy storage. Hybrid energy systems are an excellent option for providing electricity to remote and rural locations where access to the grid is not feasible or economical. Reliability and cost-effectiveness are the two most important objectives when designing a hybrid energy system. One challenge is that existing methods do not consider the time-varying characteristics of the renewable sources and the energy demand over a year, even though the distributions of a power source or of demand differ over the period and multiple power sources can often complement one another. In this paper, a reliability analysis method is developed to address this challenge, with wind and solar as the two renewable energy sources considered. The cost evaluation of hybrid energy systems is also presented, and a numerical example is used to demonstrate the proposed method.
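
    A minimal illustration of time-series reliability evaluation for a wind-plus-solar system is given below (synthetic hourly profiles, no storage, not the paper's method): the loss of power supply probability (LPSP) is computed as the fraction of yearly demand the renewable supply fails to cover.

```python
# A small sketch of time-series reliability evaluation for a wind+solar system
# (made-up hourly profiles, no storage): the Loss of Power Supply Probability
# (LPSP) is the fraction of demand that the renewable sources fail to cover.
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(8760)
solar = (np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
         * rng.uniform(0.6, 1.0, 8760) * 50)                       # kW, daytime only
wind = rng.weibull(2.0, 8760) * 30                                 # kW
demand = 40 + 15 * np.sin((hours % 24 - 18) / 24 * 2 * np.pi) + rng.normal(0, 3, 8760)

supply = solar + wind
deficit = np.clip(demand - supply, 0, None)
lpsp = deficit.sum() / demand.sum()
print(f"LPSP over one simulated year: {lpsp:.3f}")
```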

  • Article (No Access)

    Two-Dimensional Generalized Framework to Determine Optimal Release and Patching Time of a Software

    Demand for highly reliable software is increasing day by day, which in turn has increased the pressure on software firms to deliver reliable software quickly. High reliability can only be ensured by prolonged testing, which consumes more resources and is not feasible in the existing market situation. To overcome this, software firms provide patches after the software release to fix the remaining bugs and give users a better product experience. An update or fix is a minor portion of software that repairs bugs, and with such patches organizations enhance the performance of the software. Delivering patches after release demands extra effort and resources, which is costly and hence not economical for the firms. Also, an early patch release might cause improper fixing of bugs, while a delayed release may increase the chances of failures during the operational phase. Therefore, determining the optimal patch release time is imperative. To address these issues we formulate a two-dimensional, time- and effort-based cost model to determine the optimal release and patch times of software so that the total cost is minimized. The proposed model is validated on a real-life data set.
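
    The sketch below shows a generic release-and-patch cost structure in this spirit, not the paper's two-dimensional model: faults follow an exponential SRGM mean value function m(t) = a(1 - exp(-b t)), fixing a fault costs more the later it is found, and effort costs accrue per unit of testing and patch-support time; the release time and patch window are found by grid search. All parameters are assumed.

```python
# Hedged sketch: a generic release-and-patch cost structure (not the paper's
# two-dimensional model). All parameter values are assumed.
import numpy as np

a, b = 120.0, 0.05                       # assumed SRGM parameters
c_test, c_support = 40.0, 15.0           # effort cost per unit time
c_fix_test, c_fix_patch, c_fix_field = 10.0, 40.0, 200.0   # per-fault fix costs

def m(t):
    return a * (1 - np.exp(-b * t))      # expected faults detected by time t

def total_cost(T, P):
    return (c_test * T + c_support * P
            + c_fix_test * m(T)                      # faults removed before release
            + c_fix_patch * (m(T + P) - m(T))        # faults fixed via patches
            + c_fix_field * (a - m(T + P)))          # faults left to the field

grid_T = np.arange(5, 200, 1.0)
grid_P = np.arange(0, 200, 1.0)
best = min(((T, P) for T in grid_T for P in grid_P),
           key=lambda tp: total_cost(*tp))
print("optimal release time %.0f, patch window %.0f, total cost %.0f"
      % (best[0], best[1], total_cost(*best)))
```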

  • Article (No Access)

    Bi-Criterion Problem to Determine Optimal Vulnerability Discovery and Patching Time

    In the last decade, we have seen enormous growth in software security problems, caused by attackers who watch for software vulnerabilities and create security breaches, from which software firms suffer huge losses. The problem facing software firms is two-fold: deciding the optimal time to discover software vulnerabilities and determining the optimal time to patch the discovered vulnerabilities. Optimal discovery timing matters because not finding a vulnerability in time may cause serious losses in the future. On the other hand, after vulnerabilities are discovered it is even more important to fix them, which is done by patching, and when to patch is also a great concern for software firms: delaying a patch may lead to more security breaches and disadoption of the software, while patching too early may reduce the risk yet a poorly prepared patch may increase the risk of a security breach even after the remedial patch is released. In the current work, we propose a bi-criterion framework that minimizes cost and risk together under risk and budgetary constraints to determine the optimal vulnerability discovery and patching times. The proposed model is validated using a real-life data set.
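
    A toy version of the bi-criterion idea (a weighted sum of cost and risk minimized under a budget and a risk cap) is sketched below; the functional forms and numbers are invented for illustration and are not the paper's model.

```python
# Hedged sketch of the bi-criterion idea (weighted sum of cost and risk under a
# budget and a risk cap), not the paper's formulation. Discovery effort falls
# with a later discovery deadline t_d, patch cost rises with the patching delay
# t_p, and exposure risk grows with both. All functional forms are assumptions.
import numpy as np

def cost(t_d, t_p):
    return 500.0 / t_d + 20.0 * t_p            # discovery effort + patch support

def risk(t_d, t_p):
    return 0.8 * t_d + 1.5 * t_p               # exposure grows with both delays

budget, risk_cap, w = 400.0, 80.0, (0.6, 0.4)

candidates = [(td, tp)
              for td in np.arange(1, 60, 0.5) for tp in np.arange(0.5, 30, 0.5)
              if cost(td, tp) <= budget and risk(td, tp) <= risk_cap]
t_d, t_p = min(candidates, key=lambda x: w[0] * cost(*x) + w[1] * risk(*x))
print(f"discovery time {t_d}, patch time {t_p}, "
      f"cost {cost(t_d, t_p):.1f}, risk {risk(t_d, t_p):.1f}")
```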

  • Article (No Access)

    Availability and Comparison of Spare Systems with a Repairable Server

    This article analyzes the availability characteristics of three different spare system configurations. Each configuration includes different numbers of primary and warm standby units, and an unreliable server is responsible for repairing the failed units. The server may break down while it is repairing, and its time to breakdown is assumed to be exponentially distributed. When a primary unit fails, a standby unit replaces it immediately, and the replacement is perfect. The repair times of failed units and the recovery time of a broken-down server are random variables. A model is built for the three system configurations, and we develop explicit expressions for the availability measures of each configuration. The cost/benefit of the three configurations is also compared for given distribution parameters and given costs of the primary and warm standby units.
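
    The sketch below works out one such configuration under stronger assumptions than the article makes (exponential repair and recovery, for tractability): one primary unit, one warm standby and a single repair server that can break down mid-repair are modeled as a small continuous-time Markov chain, and the steady-state availability is read off from pi Q = 0.

```python
# Hedged sketch (one configuration only; exponential repair and recovery assumed
# for tractability, which is stronger than the article's general distributions):
# one primary unit, one warm standby, and a single repair server that can break
# down while repairing. The system is available whenever at least one unit is up.
import itertools
import numpy as np

lam, theta = 0.02, 0.005   # active and warm-standby failure rates (assumed)
mu = 0.5                   # repair rate of a failed unit (assumed)
alpha, beta = 0.05, 0.4    # server breakdown / recovery rates (assumed)

states = list(itertools.product(range(3), range(2)))   # (failed units, server broken?)
idx = {s: k for k, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

def add(src, dst, rate):
    Q[idx[src], idx[dst]] += rate

for n, b in states:
    if n == 0:
        add((0, b), (1, b), lam + theta)          # active or warm unit fails
    elif n == 1:
        add((1, b), (2, b), lam)                  # last working unit fails
    if n >= 1 and b == 0:
        add((n, 0), (n - 1, 0), mu)               # repair completes
        add((n, 0), (n, 1), alpha)                # server breaks down mid-repair
    if b == 1:
        add((n, 1), (n, 0), beta)                 # server recovers
Q -= np.diag(Q.sum(axis=1))                        # diagonal = minus total out-rate

# Solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
pi = np.linalg.lstsq(A, np.append(np.zeros(len(states)), 1.0), rcond=None)[0]
availability = sum(p for (n, b), p in zip(states, pi) if n < 2)
print(f"steady-state availability: {availability:.4f}")
```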

  • Article (No Access)

    Optimizing Imperfect Coverage Cloud-RAID Systems Considering Reliability and Cost

    This paper considers a cloud provider selection problem for optimizing a cloud redundant array of independent disks (RAID) storage system subject to imperfect coverage (IPC), where an uncovered disk fault causes extensive damage to the entire system despite the presence of adequate redundancy. Given available cloud storage providers whose disks have different costs, failure parameters and coverage factors, the objective of the optimal design is to select the combination of cloud providers that minimizes system unreliability or cost. Both unconstrained and constrained optimization problems are considered. The solution methodology encompasses an analytical, combinatorial method for the reliability analysis of the considered cloud-RAID storage system with IPC behavior. Based on practical design parameters, a brute-force approach is used to obtain the optimal design configuration. Several case studies illustrate the proposed optimization problems and solution method.
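
    The flavor of the brute-force selection can be sketched as follows (an illustration with made-up catalog values, not the paper's analytical method): for a 3-out-of-4 array where any uncovered disk fault is fatal, enumerate all provider combinations and keep the cheapest one meeting a reliability goal.

```python
# Hedged sketch of the flavor of the optimization (not the paper's combinatorial
# method): pick 4 disks for a RAID that survives any single *covered* disk fault;
# an uncovered fault (probability 1 - c given a fault) brings the array down.
# The disk catalog (failure rate, coverage, cost) is made up.
import itertools
from math import exp, prod

catalog = {  # provider: (failure rate per hour, coverage factor, cost)
    "P1": (2e-6, 0.99, 80), "P2": (1e-6, 0.98, 120),
    "P3": (5e-7, 0.95, 150), "P4": (3e-6, 0.999, 60),
}
mission_hours, n_disks, goal = 8760, 4, 0.999

def system_reliability(disks):
    """3-out-of-4 RAID with imperfect coverage, independent disks, no repair."""
    q = [1 - exp(-catalog[d][0] * mission_hours) for d in disks]   # fault prob
    c = [catalog[d][1] for d in disks]
    r_all_up = prod(1 - qi for qi in q)
    r_one_covered_fault = sum(
        q[i] * c[i] * prod(1 - q[j] for j in range(n_disks) if j != i)
        for i in range(n_disks))
    return r_all_up + r_one_covered_fault

best = None
for combo in itertools.combinations_with_replacement(catalog, n_disks):
    rel, cost = system_reliability(combo), sum(catalog[d][2] for d in combo)
    if rel >= goal and (best is None or cost < best[1]):
        best = (combo, cost, rel)
print("cheapest design meeting the reliability goal:", best)
```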

  • Article (No Access)

    A Multi-Criteria Decision Model Considering Labor and Safety for Inspection Intervals of Condition Monitoring

    We develop a multi-criteria decision model to help decision-makers choose the best inspection interval policy when conflicting criteria such as cost, downtime, and reliability must be combined. The model simultaneously identifies the best inspection intervals for monitoring the failures of the system trails to be inspected and the best strategy for system maintenance, while considering the combined cost, downtime, and reliability criteria. We find the inspection interval that maximizes product reliability and minimizes cost and downtime. Multi-attribute utility theory and the properties of each utility function, considering dependent or independent relationships between attributes, are investigated. The analysis helps the company's decision-makers make appropriate decisions. The effect of various parameters on the optimal maintenance strategy is also investigated statistically. Finally, numerical examples with managerial insights illustrate the suggested approach.
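
    A small additive multi-attribute utility evaluation in this spirit is sketched below, assuming the attributes are utility independent (the article also treats dependent cases); the single-attribute utilities, weights and the toy mapping from inspection interval to (cost, downtime, reliability) are all assumptions.

```python
# Hedged sketch: additive multi-attribute utility over (cost, downtime,
# reliability), assuming utility independence. All functions and weights are
# assumptions, not the paper's elicited utilities.
import numpy as np

def u_cost(c):        return np.exp(-c / 50.0)           # lower cost is better
def u_downtime(d):    return np.exp(-d / 10.0)           # lower downtime is better
def u_reliability(r): return (np.exp(4 * r) - 1) / (np.exp(4) - 1)  # higher is better

weights = (0.4, 0.25, 0.35)

def evaluate(interval_hours):
    """Toy mapping from inspection interval to (cost, downtime, reliability)."""
    cost = 2000.0 / interval_hours + 0.4 * interval_hours
    downtime = 0.02 * interval_hours
    reliability = np.exp(-(interval_hours / 400.0) ** 2)
    return cost, downtime, reliability

def utility(interval_hours):
    c, d, r = evaluate(interval_hours)
    return (weights[0] * u_cost(c) + weights[1] * u_downtime(d)
            + weights[2] * u_reliability(r))

candidates = np.arange(10, 400, 5)
best = max(candidates, key=utility)
print(f"best inspection interval: {best} h, utility = {utility(best):.3f}")
```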

  • Article (No Access)

    An SRGM Using Fault Removal Efficiency and Correction Lag Function

    Software reliability growth models (SRGMs) have been the subject of extensive research since the early 1970s. Growth models are usually used throughout the software testing process to determine the failure pattern, the consumption of testing effort, the total expected cost and the reliability. Many existing models start from the premise that defects are found and immediately fixed, but in reality faults are first detected, then isolated, and finally corrected. This study enhances the SRGM by modeling the detection and correction procedures to address this discrepancy, considering a correction lag function and fault removal efficiency. In addition, the study examines the overall expected cost and reliability associated with the suggested model. Numerical results based on actual datasets verify the validity of the proposed model.
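
    The detection-correction idea can be illustrated with a simple pair of curves (a generic sketch, not the article's exact model): faults are detected following an exponential SRGM, only a fraction of detected faults is actually removed (the removal efficiency), and correction lags detection by a fixed delay.

```python
# Hedged sketch of the general detection-correction idea (not the article's exact
# model). All parameter values are assumptions.
import math

a, b = 150.0, 0.08        # total faults and detection rate (assumed)
efficiency = 0.9          # fraction of detected faults actually removed (assumed)
lag = 5.0                 # correction lag in time units (assumed)

def detected(t):
    """Expected faults detected by time t (exponential SRGM mean value function)."""
    return a * (1 - math.exp(-b * t))

def corrected(t):
    """Expected faults corrected by time t: efficiency-scaled, lagged detection."""
    return efficiency * detected(max(t - lag, 0.0))

for t in (10, 20, 40, 80):
    print(f"t={t:3d}  detected={detected(t):6.1f}  corrected={corrected(t):6.1f}")
```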