In a Wireless Sensor Network (WSN), node localization is a crucial requirement for precise data gathering and effective communication. However, high energy requirements, long inter-node distances and unpredictable limitations create problems for traditional localization techniques. This study proposes an innovative two-stage approach to improve localization accuracy and optimize route selection in WSNs. In the first stage, the Self-Adaptive Binary Waterwheel Plant Optimization (SA-BWP) algorithm is used to evaluate a node’s trustworthiness to achieve accurate localization. In the second stage, the Gazelle-Enhanced Binary Waterwheel Plant Optimization (G-BWP) method is employed to determine the most effective data transfer path between sensor nodes and the sink. To create effective routes, the G-BWP algorithm takes into account variables such as energy consumption, shortest distance, delay and trust. The goal of the proposed approach is to optimize WSN performance through precise localization and effective routing. MATLAB is used for both implementation and evaluation of the model, which shows improved performance over current methods in terms of throughput, delivery ratio, network lifetime, energy efficiency, delay reduction and localization accuracy across varying numbers of nodes and rounds. The proposed model achieves the highest delivery ratio of 0.97, the lowest delay of 5.39, and the lowest energy consumption of 23.3 across various nodes and rounds.
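As a hedged illustration of the routing criterion described above, the sketch below combines energy, distance, delay and trust into a single score that an optimizer such as G-BWP could minimize; the weights, normalization and field names are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: a weighted route-fitness function of the kind the
# G-BWP routing stage could minimize. Weights and field names are assumptions.
def route_fitness(route, w_energy=0.3, w_dist=0.3, w_delay=0.2, w_trust=0.2):
    """Lower is better; trust is rewarded, so it enters with a negative sign."""
    energy = sum(hop["tx_energy"] for hop in route)   # total transmit energy (J)
    dist   = sum(hop["distance"] for hop in route)    # total hop distance (m)
    delay  = sum(hop["latency"] for hop in route)     # end-to-end delay (s)
    trust  = min(hop["trust"] for hop in route)       # weakest-link trust in [0, 1]
    return w_energy * energy + w_dist * dist + w_delay * delay - w_trust * trust

# Example: compare two candidate sink paths and keep the fitter one.
path_a = [{"tx_energy": 0.02, "distance": 40, "latency": 0.01, "trust": 0.9}]
path_b = [{"tx_energy": 0.05, "distance": 90, "latency": 0.03, "trust": 0.6}]
best = min((path_a, path_b), key=route_fitness)
```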
We aim at mapping streaming applications that can be modeled by a series-parallel graph onto a 2-dimensional tiled chip multiprocessor (CMP) architecture. The objective of the mapping is to minimize the energy consumption, using dynamic voltage and frequency scaling (DVFS) techniques, while maintaining a given level of performance, reflected by the rate of processing the data streams. This mapping problem turns out to be NP-hard, and several heuristics are proposed. We assess their performance through comprehensive simulations using the StreamIt workflow suite, randomly generated series-parallel graphs, and various CMP grid sizes.
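A commonly used DVFS model for this kind of mapping problem, stated here as an illustrative assumption rather than the paper's exact formulation, relates speed, execution time and dynamic energy as follows:

```latex
% Illustrative DVFS model (an assumption, not necessarily the paper's exact one):
% a stage of work $w_i$ run at speed $s_i$ takes time $w_i/s_i$ and dissipates
% dynamic power proportional to $s_i^{\alpha}$, with $\alpha \approx 3$.
\[
t_i = \frac{w_i}{s_i}, \qquad
E_i^{\mathrm{dyn}} = \kappa\, s_i^{\alpha}\, t_i = \kappa\, w_i\, s_i^{\alpha-1},
\]
\[
\min \sum_i \kappa\, w_i\, s_i^{\alpha-1}
\quad\text{subject to a throughput constraint such as}\quad
\max_i \frac{w_i}{s_i} \le T .
\]
```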
We consider the problem of scheduling an application on a parallel computational platform. The application is a particular task graph, either a linear chain of tasks or a set of independent tasks. The platform is made of identical processors whose speed can be dynamically modified. It is also subject to failures: if a processor is slowed down to decrease the energy consumption, it has a higher chance to fail. Therefore, the scheduling problem requires us to re-execute or replicate tasks (i.e., execute the same task twice, either on the same processor or on two distinct processors) in order to increase the reliability. It is a tri-criteria problem: the goal is to minimize the energy consumption, while enforcing a bound on the total execution time (the makespan) and a constraint on the reliability of each task. Our main contribution is to propose approximation algorithms for linear chains of tasks and independent tasks. For linear chains, we design a fully polynomial-time approximation scheme. However, we show that there exists no constant-factor approximation algorithm for independent tasks unless P=NP, and we propose in this case an approximation algorithm with a relaxation on the makespan constraint.
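For concreteness, a standard exponential failure model often paired with DVFS (stated here as an assumption for illustration, not necessarily this paper's exact definitions) captures why slowing a processor down hurts reliability and why re-execution restores it:

```latex
% Failure rate grows as the speed s drops from s_max towards s_min:
\[
\lambda(s) = \lambda_0 \, 10^{\, d\,\frac{s_{\max}-s}{s_{\max}-s_{\min}}}, \qquad
R_i(s) = e^{-\lambda(s)\, w_i / s},
\]
% and executing task i twice (at speeds s_1 and s_2, on the same or on two
% distinct processors) yields
\[
R_i^{\mathrm{re}} = 1 - \bigl(1 - R_i(s_1)\bigr)\bigl(1 - R_i(s_2)\bigr).
\]
```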
Due to its large leakage power and low density, conventional SRAM is becoming less appealing for implementing large on-chip caches because of energy concerns. Emerging non-volatile memory technologies, such as phase change memory (PCM) and spin-transfer torque RAM (STT-RAM), have the advantages of low leakage power and high density, which make them good candidates for on-chip cache. In particular, STT-RAM has longer endurance and shorter access latency than PCM. There are two kinds of STT-RAM so far: single-level cell (SLC) STT-RAM and multi-level cell (MLC) STT-RAM. Compared to SLC STT-RAM, MLC STT-RAM has higher density and lower leakage power, which makes it an even more promising candidate for future on-chip cache. However, MLC STT-RAM improves density at the cost of almost doubled write latency and energy compared to SLC STT-RAM. These drawbacks degrade system performance and diminish the energy benefits. To alleviate these problems, we propose a novel cache organization, the companion write cache (CWC), a small fully associative SRAM cache that works with the main MLC STT-RAM cache in a master-and-servant way. The key function of the CWC is to absorb the energy-consuming write updates from the MLC STT-RAM cache. Experimental results show that the CWC can greatly reduce the write energy and dynamic energy, and improve the performance and endurance of the MLC STT-RAM cache compared to a baseline.
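A minimal behavioral sketch of the companion-write-cache idea follows, assuming a small LRU-managed fully associative SRAM buffer and a dictionary standing in for the MLC STT-RAM cache; the capacity and interface are illustrative assumptions, not the paper's implementation.

```python
# Behavioral sketch: a small SRAM buffer absorbs write updates so that only
# evictions pay the costly MLC STT-RAM write. Sizes and policy are assumptions.
from collections import OrderedDict

class CompanionWriteCache:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.lines = OrderedDict()          # address -> data, kept in LRU order
        self.stt_writes_avoided = 0

    def write(self, addr, data, stt_ram):
        if addr in self.lines:              # hit: update SRAM, skip STT-RAM write
            self.lines.move_to_end(addr)
            self.stt_writes_avoided += 1
        elif len(self.lines) >= self.capacity:
            victim, vdata = self.lines.popitem(last=False)
            stt_ram[victim] = vdata         # only evictions reach the MLC array
        self.lines[addr] = data

    def read(self, addr, stt_ram):
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return self.lines[addr]
        return stt_ram.get(addr)

# Usage: repeated writes to a hot line stay in the SRAM companion cache.
stt_ram = {}
cwc = CompanionWriteCache(capacity=4)
for _ in range(10):
    cwc.write(0x40, "hot-data", stt_ram)
print("STT-RAM writes avoided:", cwc.stt_writes_avoided)
```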
Moore’s law has been one of the reasons behind the evolution of multicore architectures. Modern multicore architectures offer a great amount of parallelism and on-chip resources that remain underutilized. This is partly due to inefficient resource allocation by the operating system or the application being executed. Consequently, poor resource utilization results in greater energy consumption and lower throughput. This paper presents a fuzzy logic-based design space exploration (DSE) approach to reconfigure a multicore architecture according to workload requirements. The target design space is explored for L1 and L2 cache size and associativity, operating frequency, and number of cores, while the impact of various configurations of these parameters is analyzed on throughput, miss ratios for the L1 and L2 caches, and energy consumption. MARSSx86, a cycle-accurate simulator, running various SPLASH-2 benchmark applications has been used to evaluate the architecture. The proposed fuzzy logic-based DSE approach resulted in reduced energy consumption along with overall improved system throughput.
This paper presents a method to characterize, identify and classify some pathological Electroencephalogram (EEG) signals. We use several Time Frequency Distributions (TFDs) to analyze their nonstationarity. The analysis is conducted with the spectrogram (SP), the Choi–Williams Distribution (CWD) and the Smoothed Pseudo Wigner Ville Distribution (SPWVD). The studies are carried out on real EEG signals collected from a known database. The best parameter value for each distribution is estimated using the Rényi entropy (RE). The time-frequency results have made it possible to characterize some pathological EEG signals. In addition, the Rényi Marginal Entropy (RME) is used to detect seizure peaks and to discriminate between normal and pathological EEG signals. The frequency bands are evaluated using the Marginal Frequency (MF). The EEG signal classification of two sets, A and E, containing normal and pathological EEG signals, respectively, is performed using our proposed method based on extracting signal energy from the time-frequency plane. The Moving Average (MA) is also used as a tool to obtain better classification results. The results obtained on real-life EEG signals illustrate the effectiveness of the proposed method.
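As a small, hedged illustration of the entropy-based tuning step, the sketch below computes the Rényi entropy of a spectrogram; the toy signal, sampling rate, window length and entropy order are assumptions for illustration only.

```python
# Sketch: Rényi entropy of a (spectrogram) time-frequency distribution, the kind
# of measure used to tune TFD parameters. The toy EEG signal is an assumption.
import numpy as np
from scipy.signal import spectrogram

def renyi_entropy(tfd, alpha=3):
    """Rényi entropy (bits) of a non-negative TFD normalized to unit energy."""
    p = np.abs(tfd)
    p = p / p.sum()
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

fs = 256.0                                                     # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy alpha-band signal
_, _, Sxx = spectrogram(eeg, fs=fs, nperseg=128)
print("Renyi entropy of the spectrogram:", renyi_entropy(Sxx))
```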
Cloud computing is a globally networked resource that allows data to be shared with users easily. With the widespread availability of network technologies, user requests increase day by day. Nowadays, the foremost complication in Cloud technology is task scheduling. Load placement and task arrangement are two important parameters in the Cloud domain that determine the Quality of Service (QoS). In this paper, we formulate the optimal minimization of makespan and energy consumption in task scheduling using the Local Pollination-based Gray Wolf Optimizer (LPGWO) algorithm. In this hybrid concept, the Gray Wolf Optimizer (GWO) algorithm and the Flower Pollination Algorithm (FPA) are combined. The best searching factor of GWO is used to increase the convergence speed, and FPA is used to distribute the data to the next packet of candidate solutions using the local pollination concept. Chaotic mapping and opposition-based learning (OBL) are used to provide suitable initial candidates for task solutions. The experiments delivered better task scheduling results under both low and high heterogeneity of physical machines. Ultimately, the comparison of simulation results showed the fastest convergence towards minimum makespan and energy consumption.
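The initialization idea mentioned above can be sketched as follows, assuming a logistic chaotic map and standard opposition-based learning; the population size, bounds and fitness function are illustrative assumptions rather than the paper's settings.

```python
# Sketch: logistic chaotic map generates candidates, opposition-based learning
# (OBL) adds their mirror images, and the fitter half is kept as the initial
# population. All sizes, bounds and the toy fitness are assumptions.
import numpy as np

def chaotic_obl_init(pop_size, dim, lb, ub, fitness, seed=0.7):
    x, rows = seed, []
    for _ in range(pop_size):
        row = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)              # logistic map in (0, 1)
            row.append(x)
        rows.append(row)
    pop = lb + np.array(rows) * (ub - lb)        # chaotic candidates in [lb, ub]
    opp = lb + ub - pop                          # opposition-based counterparts
    both = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(scores)[:pop_size]]   # keep the best pop_size rows

# Example: makespan-like fitness over per-machine loads (illustrative only).
init = chaotic_obl_init(10, 5, lb=0.0, ub=1.0, fitness=lambda v: v.max())
```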
In this paper, a monitoring technique based on a wireless sensor network is investigated. The sensor nodes used for monitoring are developed in a simulation environment. Accordingly, the structure and workflow of the wireless sensor network nodes are designed. The time-division multiple access (TDMA) protocol has been chosen as the medium access technique to ensure that the designed technique operates in an energy-efficient manner and packet collisions are not experienced. Channel conditions with no interference as well as Ricean and Rayleigh fading are taken into consideration. Energy consumption is decreased with the help of ad-hoc communication between sensor nodes. Throughput performance for different wireless fading channels and energy consumption are evaluated. The simulation results show that the sensor network can quickly collect medium information and transmit data to the processing center in real time. Moreover, the results suggest the usefulness of wireless sensor networks in terrestrial areas.
Wireless Sensor Networks (WSNs) perform tasks such as data collection, data handling and distribution for supervising specific applications, including required services and the monitoring of naturally occurring events. Because they depend entirely on their applications, WSNs are classified among the major network types; this is essential, as a WSN can be defined as a network of networks that supports the proper flow of data. The main characteristics of a WSN include continuously changing topologies, nodes connected through several chips, and dynamic routing protocols. The available resources should be utilized well so that the network lifespan is extended, and the available assets should be used effectively to avoid waste. In our research, we propose a hybrid approach, namely the Power Control Tree-Based Cluster (PCTBC), to identify Sybil attacks in WSNs. It employs multi-stage structured clustering of nodes based on position and identity verification. This approach reduces energy consumption and improves the effectiveness of detecting Sybil attacks inside clusters. The main aspects taken into consideration are efficient routing based on the inter-hop distance and the remaining energy, where the inter-hop distance is computed using the Received Signal Strength Indication (RSSI) and packet transmission is tuned on the basis of this distance. The proposed approach also considers the energy consumption for transmitting a given packet.
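As a hedged illustration of the RSSI-based distance step, the sketch below uses the common log-distance path-loss model; the reference power, path-loss exponent and power-tuning threshold are assumptions, not values from the paper.

```python
# Sketch: estimate inter-hop distance from an RSSI reading using the
# log-distance path-loss model, then pick a transmit power level from it.
# All constants (P0, n, threshold) are illustrative assumptions.
def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
    """Estimate distance (m) from RSSI = P0 - 10*n*log10(d/d0)."""
    return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

def tune_tx_power(distance_m, near_threshold_m=30.0):
    """Choose a transmit power level based on the estimated hop distance."""
    return "low" if distance_m <= near_threshold_m else "high"

d = rssi_to_distance(-67.0)        # e.g. a neighbor heard at -67 dBm
print(f"estimated hop distance: {d:.1f} m, tx power: {tune_tx_power(d)}")
```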
The emergence of cloud computing in the big data era has exerted a substantial impact on our daily lives. Conventional reliability-aware workflow scheduling (RWS) is capable of improving or maintaining system reliability through fault-tolerance techniques such as replication and checkpointing-based recovery. However, the fault-tolerance techniques used in RWS inevitably result in higher system energy consumption, longer execution time, and worse thermal profiles, which in turn lead to a decreased hardware lifespan. To mitigate the lifetime-energy-makespan issues of RWS in cloud computing systems for big data, we propose a novel methodology that decomposes the complicated problem under study. In this methodology, we provide three procedures to address the energy consumption, execution makespan, and hardware lifespan issues in cloud systems executing real-time workflow applications. We carry out numerous simulation experiments to validate the proposed methodology for RWS. Simulation results clearly show that the proposed RWS strategies outperform comparative approaches in reducing energy consumption, shortening execution makespan, and prolonging system lifespan while maintaining high reliability. The improvement in energy saving, the reduction in makespan, and the increase in lifespan can be up to 23.8%, 18.6%, and 69.2%, respectively. The results also show the potential of the proposed method for developing a distributed big data analysis system serving satellite signal processing, earthquake early warning, and so on.
Proper functioning of transmission line equipment is a prerequisite for ensuring the stability of the power supply system. However, the increasing deployment of transmission lines in modern power systems has introduced significant challenges to line inspection. While deep learning-based image detection techniques have shown promise in improving the efficiency and accuracy of insulator detection, they often require substantial computational resources and energy. This limitation hinders the consistent guarantee of accuracy and real-time performance on resource-constrained drones. To address this issue, this paper investigates the co-optimization of energy consumption and analytic accuracy in insulator image detection on unmanned aerial vehicles (UAVs). We propose a latency-aware end-edge cooperative insulator detection task offloading scheme with high energy efficiency and accuracy that aims to achieve optimal performance. First, we conduct an experimental analysis to examine the influence of input image resolution on the accuracy and latency of the CNN-based insulator detection model. Subsequently, we develop a model that takes into account the latency, analytic accuracy and energy consumption of image detection task offloading. Finally, we formalize a nonlinear integer optimization problem and design a particle swarm optimization (PSO)-based task offloading scheme to optimize task accuracy and energy consumption while adhering to latency constraints. Extensive experiments validate the effectiveness of the proposed end-edge cooperative insulator detection method in optimizing accuracy and energy consumption.
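A minimal binary-PSO sketch of the offloading decision follows, assuming each detection task is either run on the UAV or offloaded to the edge and that latency violations are penalized; the energy and latency values, weights and constants are illustrative assumptions, and the accuracy term of the full formulation is omitted for brevity.

```python
# Sketch: binary PSO over offloading vectors (0 = run on UAV, 1 = offload to
# edge), minimizing energy under a soft latency budget. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(0)
N_TASKS, N_PARTICLES, ITERS = 8, 20, 100
E = np.array([[5.0, 2.0]] * N_TASKS)        # energy per task: [local, offload] (J)
L = np.array([[0.8, 0.3]] * N_TASKS)        # latency per task: [local, offload] (s)
L_MAX = 4.0                                 # end-to-end latency budget (s)

def cost(x):                                # x is a 0/1 vector of length N_TASKS
    energy = E[np.arange(N_TASKS), x].sum()
    latency = L[np.arange(N_TASKS), x].sum()
    penalty = 1e3 * max(0.0, latency - L_MAX)    # soft latency constraint
    return energy + penalty

pos = rng.integers(0, 2, size=(N_PARTICLES, N_TASKS))
vel = rng.normal(0, 1, size=(N_PARTICLES, N_TASKS))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(int)  # sigmoid binarization
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best offloading vector:", gbest, "cost:", cost(gbest))
```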
Routing information is hard to maintain and energy is limited in highly dynamic wireless sensor networks. To address these problems, energy-saving geographic routing (ESGR) is proposed, which does not maintain the network topology and thus saves energy. A node broadcasts its position information to its neighboring nodes before transmitting data. The neighboring nodes compute the position of a virtual relay node using the transmitter position, the base station position, and the energy consumed by circuits and propagation. Each neighboring node then decides whether to become the relay node through competition based on its own position, the destination position and the virtual relay node position. The neighboring nodes compute their delay times in a distributed manner according to the competition strategy; the neighboring node with the shortest delay time responds to the data sender first and becomes the sole relay node. The handshake mechanism efficiently prevents collisions among the neighboring nodes during competition, resulting in high communication efficiency. When a routing hole is found, the relay region is changed and an approaching-destination relay strategy is adopted, which reduces the impact of routing holes. Simulations show that the proposed algorithm outperforms BLR, with lower energy consumption and a lower packet loss ratio. The ESGR algorithm is therefore well suited to highly dynamic wireless networks.
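The relay competition described above can be sketched as follows, assuming the virtual relay point is placed at a preferred hop distance along the sender-to-sink line and each candidate's back-off delay grows with its distance from that point; the hop length, maximum delay and radius are illustrative assumptions rather than the paper's exact rules.

```python
# Sketch: neighbors compete to relay by waiting a back-off delay proportional to
# their distance from a "virtual relay" point; the closest neighbor answers first.
import math

def virtual_relay(sender, sink, hop_len=80.0):
    """Point at the preferred hop distance along the sender->sink direction."""
    dx, dy = sink[0] - sender[0], sink[1] - sender[1]
    d = math.hypot(dx, dy)
    step = min(hop_len, d)
    return (sender[0] + dx / d * step, sender[1] + dy / d * step)

def backoff_delay(node, vrp, t_max=0.01, radius=100.0):
    """Delay grows with distance from the virtual relay point (capped at t_max)."""
    dist = math.hypot(node[0] - vrp[0], node[1] - vrp[1])
    return t_max * min(dist / radius, 1.0)

sender, sink = (0.0, 0.0), (300.0, 0.0)
neighbors = {"A": (70.0, 10.0), "B": (40.0, -30.0), "C": (90.0, 60.0)}
vrp = virtual_relay(sender, sink)
winner = min(neighbors, key=lambda n: backoff_delay(neighbors[n], vrp))
print("virtual relay point:", vrp, "-> elected relay node:", winner)
```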
Reducing energy consumption is important for using less power, cutting the toxic fumes released by plants, preserving natural resources, and protecting ecosystems against damage. Challenges in energy supply include the lack of renewable energy adoption, and policy and energy management are considered essential factors. An artificial-intelligence-based building with a multi-energy planning method (AIBMEM) has been proposed to design multi-energy systems that achieve the best policy and energy management techniques. The multi-energy intelligent building problem is framed as a predictive energy model to minimize the overall level of energy utilization. A normal distribution combined with the artificial intelligence model is introduced to address the renewable energy problem. The experimental results based on reliability, effectiveness, preservation, energy consumption, and control systems show that the suggested model outperforms existing models, producing good performance analysis results.
Intelligent transportation systems (ITS) are a collection of technologies that can enhance transport networks, public transit and individual decision-making about various elements of travel. ITS technologies comprise cutting-edge wireless, electronic and automated technologies intended to improve safety, efficiency and convenience in surface transit. In certain cases, reducing energy usage has proven to be an ITS advantage. In this report, the primary energy advantages of a range of ITS systems, established through models, pilot projects/field tests and widespread deployment, are examined and summarized. Internet of Things (IoT) solutions play a vital role in driving worldwide. Communication between cars via the IoT will usher in a new age of communication leading to ITS. The IoT combines data collection, data analysis, data storage and processing to manage the traffic system efficiently. Energy management is seen as an efficient, innovative approach to highly efficient energy generation plants. It simultaneously optimizes traditional energy sources of the IoT-based intelligent transport system and helps to automate railways, roads, airways and shipways, improving the customer experience in the process. Following an evaluation of the situation, a proposal named energy management in intelligent transportation (EMIT) is introduced to improve energy efficiency and economic efficiency in transportation. It improves energy management to reduce economic and ecological waste by decreasing global transport energy consumption. The sustainable development ratio is 85.7%, the accident detection ratio is 85.3%, the electric vehicle infrastructure ratio is 83.6%, the intelligent vehicle parking system acceptance ratio is 82.15%, and the reduction ratio of energy consumption is 91.4%.
One of the significant approaches to implementing routing in WSNs is clustering, which provides scalability and extends the network lifetime. In a clustered WSN, the cluster heads (CHs) consume the most energy compared to other nodes. Moreover, the load of the sensor nodes (SNs) is balanced among the CHs to enhance the network lifespan. The CH plays an important part in efficient routing and must therefore be selected in an optimal way. Thus, this work introduces a cluster-based routing approach for WSNs in which the CHs are selected by an optimization algorithm. A new hybrid seagull rock swarm with opposition-based learning (HSROBL) is introduced for this purpose, which hybridizes rock hyraxes swarm optimization (RHSO) and the seagull optimization algorithm (SOA). The optimal CH selection is based on various parameters, including distance, security, delay and energy. Finally, the outcomes of the presented approach are compared with those of existing algorithms in terms of delay, alive nodes, average throughput and residual energy. Across throughput, alive nodes, residual energy and delay, the overall improvement in performance is about 28.50%.
This paper proposes a new tri-objective scheduling algorithm, the Heterogeneous Reliability-Driven Energy-Efficient Duplication-based (HRDEED) algorithm, for heterogeneous multiprocessors. The goal of the algorithm is to minimize the makespan (schedule length) and energy consumption while maximizing the reliability of the generated schedule. Duplication is employed in order to minimize the makespan. There is strong interest among researchers in obtaining high-performance schedules that consume less energy; to address this, the proposed algorithm incorporates energy consumption as an objective. Moreover, in order to deal with processor and link failures, a system reliability model is proposed. The three objectives, i.e., minimizing the makespan and energy while maximizing the reliability, are balanced by employing the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), a popular Multi-Criteria Decision-Making (MCDM) technique used here to rank the generated Pareto-optimal schedules. Simulation results demonstrate the capability of the proposed algorithm to generate short, energy-efficient and reliable schedules. Based on the simulation results, we observe that the HRDEED algorithm improves both energy consumption and reliability with a reduced makespan. Specifically, it is shown that the energy consumption can be reduced by 5–47% and the reliability improved by 1–5%, with a 1–3% increase in makespan.
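Since TOPSIS itself is a standard MCDM procedure, a short hedged sketch of how it can rank candidate schedules by makespan, energy and reliability is shown below; the schedules, weights and numeric values are illustrative assumptions, not results from the paper.

```python
# Sketch: TOPSIS ranking of candidate schedules. Makespan and energy are cost
# criteria, reliability is a benefit criterion. All values are assumed.
import numpy as np

def topsis(matrix, weights, benefit_mask):
    X = matrix / np.linalg.norm(matrix, axis=0)           # vector-normalize columns
    V = X * weights                                        # weighted normalized matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                         # closeness: higher is better

# Columns: makespan (s), energy (J), reliability -- three candidate schedules.
schedules = np.array([[120.0, 400.0, 0.96],
                      [135.0, 320.0, 0.97],
                      [110.0, 500.0, 0.93]])
scores = topsis(schedules, weights=np.array([1/3, 1/3, 1/3]),
                benefit_mask=np.array([False, False, True]))
print("TOPSIS closeness:", scores, "-> best schedule index:", scores.argmax())
```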
To achieve China’s goal of “carbon peak and carbon neutrality”, the energy transition and its development are of vital importance and urgency, and many major issues still need to be studied. In this context, based on the existing literature, this article reviews the latest developments in the application of structural equation modeling methods in the energy industry along the dimensions of energy production safety, energy consumption behavior, energy enterprise economics, and new energy development, and concludes that structural equation modeling has broad application prospects in the future energy transition.