Cloud computing’s simulation and modeling capabilities are crucial for big data analysis in smart grids; they are the key to extracting practical insights, making the grid resilient, and improving energy management. Issues with data scalability and real-time analytics mean that advanced methods are required to extract useful information from the massive, ever-changing datasets produced by smart grids. This research proposes Dynamic Resource Cloud-based Processing and Analytics (DRC-PA), which integrates cloud-based processing and analytics with dynamic resource allocation algorithms. Computational resources must adapt to changing grid conditions, and DRC-PA ensures that big data analysis can scale accordingly. The DRC-PA method has several potential uses, including power grid optimization, anomaly detection, demand response, and predictive maintenance. The proposed technique thus enables smart grids to proactively adjust to changing conditions, boosting resilience and sustainability in the energy ecosystem. A thorough simulation analysis, using realistic scenarios within smart grids, is carried out to confirm the usefulness of the DRC-PA approach. The results show that DRC-PA is more efficient than traditional methods, being more accurate, more scalable, and more responsive in real time. In addition to resolving existing issues, the suggested method changes the face of contemporary energy systems by paving the way for innovations in grid optimization, decision support, and energy management.
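As an illustration of the kind of dynamic resource allocation the abstract above alludes to, the sketch below shows a minimal threshold-based autoscaling loop. All names, thresholds, and the doubling/halving policy are our own illustrative assumptions; the paper does not specify the DRC-PA algorithms.

```python
# Minimal sketch of threshold-based dynamic resource allocation for a
# stream of smart-grid telemetry. Thresholds and names are illustrative
# assumptions, not taken from the DRC-PA paper.

def scale_workers(current_workers, queue_len, per_worker_rate,
                  min_workers=1, max_workers=64,
                  high_load=0.8, low_load=0.3):
    """Return the worker count for the next control interval."""
    capacity = current_workers * per_worker_rate
    utilization = queue_len / capacity if capacity else float("inf")
    if utilization > high_load:       # falling behind: scale out
        current_workers = min(max_workers, current_workers * 2)
    elif utilization < low_load:      # over-provisioned: scale in
        current_workers = max(min_workers, current_workers // 2)
    return current_workers
```

A controller would call this once per interval with the current backlog; real systems add smoothing and cool-down periods to avoid oscillation.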
A mobile agent starting at an arbitrary node of an m × k grid, for 1 < m ≤ k, has to explore the grid by visiting all its nodes and traversing all its edges. The cost of an exploration algorithm is the number of edge traversals made by the agent. Nodes of the grid are unlabeled, and the ports at each node v have distinct numbers in {0, …, d − 1}, where d ∈ {2, 3, 4} is the degree of v. Port numbering is local, i.e., there is no relation between port numbers at different nodes. When visiting a node, the agent sees its degree. It also sees the port number by which it enters a node and can choose the port number by which it leaves a visited node. We are interested in deterministic exploration algorithms working at low cost.
We consider the scenario in which the agent is equipped with a stationary token situated at its starting node. The agent sees the token whenever it visits this node. We give an exploration algorithm working at cost O(k²) for 2 × k grids, and at cost O(m²k) for m × k grids, when 2 < m ≤ k.
The gathering over meeting nodes problem requires the robots to gather at one of the pre-defined meeting nodes. This paper investigates the problem with respect to the objective function that minimizes the total number of moves made by all the robots; in other words, the sum of the distances traveled by all the robots is minimized while accomplishing the gathering task. The robots are deployed on the nodes of an anonymous two-dimensional infinite grid in which a subset of nodes is marked as meeting nodes. The robots do not agree on a global coordinate system and operate under an asynchronous scheduler. A deterministic distributed algorithm is proposed that solves the problem for all solvable configurations, and the initial configurations for which the problem is unsolvable are characterized. The proposed gathering algorithm is optimal with respect to the total number of moves performed by all the robots to complete the gathering.
Bioinformatics can be considered a bridge between life science and computer science, where high-performance computational platforms and software are required to manage complex biological data. In this paper we present PROTEUS, a Grid-based Problem Solving Environment that integrates ontology and workflow approaches to enhance the composition and execution of bioinformatics applications on the Grid. The architecture and preliminary experimental results are reported.
The emergence of Grid infrastructures like EGEE has enabled the deployment of large-scale computational experiments that address challenging scientific problems in various fields. However, to realize their full potential, Grid infrastructures need to achieve a higher degree of dependability, i.e., they need to improve the ratio of Grid-job requests that complete successfully in the presence of Grid-component failures. To achieve this, however, we need to determine, analyze and classify the causes of job failures on Grids. In this paper we study the reasons behind Grid job failures in the context of EGEE, the largest Grid infrastructure currently in operation. We present points of failure in a Grid that affect the execution of jobs, and describe error types and contributing factors. We discuss various information sources that provide users and administrators with indications about failures, and assess their usefulness based on error information accuracy and completeness. We describe two real-life case studies, describing failures that occurred on a production site of EGEE and the troubleshooting process for each case. Finally, we propose the architecture for a system that could provide failure management support to administrators and end-users of large-scale Grid infrastructures like EGEE.
A federated, secure, standardized, scalable, and transparent mechanism for accessing and sharing resources, particularly data resources, across organizational boundaries, one that requires no application modification and does not disrupt existing data access patterns, has long been needed in the computational science community. The Global Federated File System (GFFS) addresses this need and is a foundational component of the NSF-funded eXtreme Science and Engineering Discovery Environment (XSEDE) program. The GFFS allows user applications to access (create, read, update, delete) remote resources in a location-transparent fashion. Existing applications, whether they are statically linked binaries, dynamically linked binaries, or scripts (shell, PERL, Python), can access resources anywhere in the GFFS without modification (subject to access control). In this paper we present an overview of the GFFS and its most common use cases: accessing data at an NSF center from a home or campus, accessing data on a campus machine from an NSF center, directly sharing data with a collaborator at another institution, accessing remote computing resources, and interacting with remote running jobs. We present these use cases and how they are realized using the GFFS.
In a context where networks grow larger and larger, their nodes become more likely to fail. Indeed, they may be subject to crashes, attacks, memory corruptions, and so on. To encompass all possible types of failure, we consider the most general model of failure: the Byzantine model, where any failing node may exhibit arbitrary (and potentially malicious) behavior.
We consider an asynchronous grid-shaped network where each node has a probability λ to be Byzantine. Our metric is the communication probability, that is, the probability that any two nodes communicate reliably. A number of Byzantine-resilient broadcast protocols exist, but they all share the same weakness: when the size of the grid increases, the communication probability approaches zero.
In this paper, we present the first protocol that overcomes this difficulty, and ensures a communication probability of 1 − 4λ on a grid that may be as large as we want (for a sufficiently small λ, typically λ < 10⁻⁵). The originality of the approach lies in the fractal definition of the protocol, which, we believe, could be used to solve several similar problems related to scalability. We also extend this scheme to a 3-dimensional grid and obtain a 1 − 2λ communication probability for λ < 10⁻³.
We consider the following periodic sorting procedure on two-dimensional meshes of processors: Initially, each node contains one number. We proceed in rounds, each consisting of a first phase in which the columns of the grid are sorted and a second phase in which the rows are sorted according to the snake-like ordering. We exactly characterize the number of rounds necessary to sort on an l × m grid in the worst case, where l is the number of rows and m the number of columns. An upper bound of ⌈log l⌉ + 1 was known before. This bound is tight when m is not a power of 2. Surprisingly, it turns out that far fewer rounds are necessary if m is a power of 2 (and m ≪ l): in this case, exactly min{log m + 1, ⌈log l⌉ + 1} rounds are needed in the worst case.
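The round procedure described above is easy to simulate. The sketch below counts the rounds a given input actually needs (not the worst case analyzed in the paper); the helper names are ours.

```python
# Simulation of the periodic (shearsort-style) procedure: each round
# sorts every column top-to-bottom, then sorts the rows in snake-like
# order (even-index rows ascending, odd-index rows descending).

def snake_round(grid):
    """Apply one round (column phase, then snake row phase) in place."""
    l, m = len(grid), len(grid[0])
    for j in range(m):                        # phase 1: sort columns
        col = sorted(grid[i][j] for i in range(l))
        for i in range(l):
            grid[i][j] = col[i]
    for i in range(l):                        # phase 2: snake-sort rows
        grid[i].sort(reverse=(i % 2 == 1))
    return grid

def rounds_to_sort(grid):
    """Number of rounds until the grid is sorted in snake-like order."""
    def is_sorted(g):
        seq = [x for i, row in enumerate(g)
               for x in (row if i % 2 == 0 else reversed(row))]
        return seq == sorted(seq)
    rounds = 0
    while not is_sorted(grid):
        grid = snake_round(grid)
        rounds += 1
    return rounds
```

For instance, the 2 × 2 grid [[4, 3], [2, 1]] is sorted into snake order after a single round.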
The DØ experiment faces many challenges in enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established which supply the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all important computing tasks of a high-energy physics experiment on systems distributed worldwide.
The distributed computing infrastructure of the ATLAS Experiment includes over 170 sites and executes up to 3 million computing jobs daily. PanDA (Production and Distributed Analysis) is the Workload Management System responsible for task and job execution; its key components are the broker and job scheduler that define the mapping of computing jobs to the resources. The optimization of this mapping is crucial for handling the expected computational payloads during the HL-LHC era. Considering the heterogeneity and the distributed structure of the Worldwide LHC Computing Grid (WLCG) infrastructure that provides computing resources for analyzing the data, there is a need for specific approaches for evaluating computing resources according to their ability to process different types of workflows. This evaluation can potentially enhance the efficiency of the Grid by optimally distributing different types of payloads in heterogeneous computing environments. To tackle this challenge, this research proposes a method for evaluating WLCG resources regarding their ability to process user analysis payloads. This evaluation is based on leveraging available information about job execution on PanDA queues within the ATLAS computing environment.
In this paper, we apply market mechanisms and agents to build grid resource management, where grid resource consumers and providers can buy and sell computing resources based on an underlying economic architecture. All market participants in the grid environment, including computing resources and services, can be represented as agents. Each market participant is registered with a Grid Market Manager. A grid market participant can be a service agent that provides the actual grid service to the other market participants. Grid market participants communicate with each other through a communication space, an implementation of a tuple space. In this paper, the Grid agent model is described, and the structure of the Grid Market is then described in detail. The design and implementation of agent-oriented and market-oriented grid resource management are presented.
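A tuple space of the kind mentioned above can be sketched in a few lines. This is a minimal illustration with hypothetical names; the paper's actual communication-space implementation is not described here.

```python
import threading

class TupleSpace:
    """Minimal tuple space for agent coordination: out() publishes a
    tuple; inp() removes and returns the first tuple matching a
    template, where None acts as a wildcard. A real grid-market
    implementation would add blocking reads, leases, and security."""

    def __init__(self):
        self._tuples = []
        self._lock = threading.Lock()

    def out(self, tup):
        with self._lock:
            self._tuples.append(tup)

    def inp(self, template):
        def matches(tup):
            return len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup))
        with self._lock:
            for i, tup in enumerate(self._tuples):
                if matches(tup):
                    return self._tuples.pop(i)
        return None
```

A provider agent might publish ("offer", "cpu-hours", 10, 0.05) and a consumer agent retrieve it with the template ("offer", "cpu-hours", None, None).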
Rooted trees are usually drawn planar and upward, i.e., without crossings and without any parent placed below its child. In this paper we investigate the area requirement of planar upward drawings of rooted trees. We give tight upper and lower bounds on the area of various types of drawings, and provide linear-time algorithms for constructing optimal-area drawings. Let T be a bounded-degree rooted tree with N nodes. Our results are summarized as follows:
• We show that T admits a planar polyline upward grid drawing with area O(N), and with width O(N^α) for any prespecified constant α such that 0 < α < 1.
• If T is a binary tree, we show that T admits a planar orthogonal upward grid drawing with area O(N log log N).
• We show that if T is ordered, it admits an O(N log N)-area planar upward grid drawing that preserves the left-to-right ordering of the children of each node.
• We show that all of the above area bounds are asymptotically optimal in the worst case.
• We present O(N)-time algorithms for constructing each of the above types of drawings of T with asymptotically optimal area.
• We report on an experimental evaluation of our algorithm for constructing planar polyline upward grid drawings, performed on trees with up to 24 million nodes.
We present the SweGrid Accounting System (SGAS) — a decentralized and standards-based system for Grid resource allocation enforcement that has been developed with an emphasis on a uniform data model and easy integration into existing scheduling and workload management software.
The system has been tested at the six high-performance computing centers comprising the SweGrid computational resource, and addresses the need for soft, real-time quota enforcement across the SweGrid clusters.
The SGAS framework is based on state-of-the-art Web and Grid services technologies. The openness and ubiquity of Web services combined with the fine-grained resource control and cross-organizational security models of Grid services proved to be a perfect match for the SweGrid needs. Extensibility and customizability of policy implementations for the three different parties that the system serves (the user, the resource manager, and the allocation authority) are key design goals. Another goal is end-to-end security and single sign-on, to allow resources to reserve allocations and charge for resource usage on behalf of the user.
We conclude this paper by illustrating the policy customization capabilities of SGAS in a simulated setting, where job streams are shaped using different modes of allocation policy enforcement. Finally, we discuss some of the early experiences from the production system.
Quick and accurate identification of the root cause of failures is an important prerequisite for any reliable system. However, with increasing Grid size and complexity, the manual diagnosis of application faults becomes impractical, tedious, and time-consuming. So far there has been a lack of systematic and comprehensive studies on the organization and classification of Grid faults. We address this gap with a multi-perspective Grid fault taxonomy that precisely describes an incident using eight different characteristics. Based on this, we develop a pragmatic model-based technique for application-specific fault diagnosis using indicators, symptoms, and rules. Customized wrapper services then apply this knowledge to reason about the root causes of failures. In addition to applying user-provided diagnosis models, we demonstrate that, given a set of past classified fault events, it is possible to automatically extract new models through learning. We investigated and compared several supervised classification learning and cluster analysis algorithms for this purpose. Our approach was implemented as part of the Otho Toolkit, a framework for "service-enabling" legacy applications by synthesizing tailor-made wrapper services.
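The indicator/symptom/rule style of diagnosis mentioned above can be illustrated with a tiny forward-chaining sketch. All indicator names, symptoms, and rules below are hypothetical examples, not the Otho Toolkit's actual models.

```python
# Hypothetical indicator -> symptom -> root-cause reasoning. Indicators
# are raw measurements; symptoms are boolean abstractions; diagnosis
# rules match symptom sets, most specific first.

SYMPTOM_RULES = {
    "exit_code_nonzero":  lambda ind: ind["exit_code"] != 0,
    "output_missing":     lambda ind: not ind["output_files_present"],
    "wall_time_exceeded": lambda ind: ind["wall_time"] > ind["wall_limit"],
}

DIAGNOSIS_RULES = [
    ({"wall_time_exceeded"}, "job killed by batch system time limit"),
    ({"exit_code_nonzero", "output_missing"}, "application crash"),
    ({"output_missing"}, "staging/transfer failure"),
]

def diagnose(indicators):
    """Derive symptoms from indicators, then return the first rule
    whose required symptom set is present."""
    symptoms = {name for name, test in SYMPTOM_RULES.items()
                if test(indicators)}
    for required, cause in DIAGNOSIS_RULES:
        if required <= symptoms:
            return cause
    return "unknown"
```

Learning new models, as the abstract describes, would amount to inducing such rules from labeled past fault events instead of writing them by hand.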
Dynamic load balancing is a key factor in achieving high performance for large-scale distributed simulations on grid infrastructures. In a grid environment, the available resources and the simulation's computation and communication behavior may experience critical run-time imbalances. Consequently, an initial static partitioning should be combined with a dynamic load balancing scheme to ensure the high performance of the distributed simulation. In this paper, we propose a dynamic load balancing scheme for distributed simulations on a grid infrastructure. Our scheme is composed of an online network analyzing service coupled with monitoring agents and a run-time model repartitioning service. We present a hierarchical, scalable, adaptive JXTA-service-based scheme and use simulation experiments to demonstrate that our proposed scheme exhibits better performance in terms of simulation execution time. Furthermore, we extend our algorithm from a local intra-cluster algorithm to a global inter-cluster algorithm, and we specify the proposed global design through a formalized Discrete Event System Specification (DEVS) model.
A red-white coloring of a nontrivial connected graph G of diameter d is an assignment of red and white colors to the vertices of G where at least one vertex is colored red. Associated with each vertex v of G is a d-vector, called the code of v, whose ith coordinate is the number of red vertices at distance i from v. A red-white coloring of G for which distinct vertices have distinct codes is called an identification coloring or ID-coloring of G. A graph G possessing an ID-coloring is an ID-graph. The minimum number of red vertices among all ID-colorings of an ID-graph G is the identification number or ID-number of G. It is shown that the grid Pm□Pn is an ID-graph if and only if (m,n) ≠ (2,2), and the prism Cn□K2 is an ID-graph if and only if n ≥ 6.
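The codes in this definition are straightforward to compute with breadth-first search. The sketch below checks whether a proposed red set is an ID-coloring of an arbitrary unweighted graph; the example graph in the test is ours, not from the paper.

```python
from collections import deque

def distances_from(graph, source):
    """BFS distances from source in a graph given as {vertex: [neighbors]}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_id_coloring(graph, red, diameter):
    """True iff coloring the vertices in 'red' red yields distinct codes,
    where code(v)[i-1] = number of red vertices at distance i from v."""
    codes = []
    for v in graph:
        dist = distances_from(graph, v)
        code = tuple(sum(1 for r in red if dist.get(r) == i)
                     for i in range(1, diameter + 1))
        codes.append(code)
    return len(codes) == len(set(codes))
```

On the path P4 (diameter 3), coloring a single endpoint red already gives distinct codes, so P4 is an ID-graph with ID-number 1.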
This paper presents the AVE and CPFR concepts and their characteristics, establishes and analyzes the AVE-based CPFR working flow, and illustrates the content of grid resource management and its mission in relation to the corresponding grid resource management system. It focuses on the working flow of AVE-based CPFR. On this basis, it proposes a grid-based AVE-related CPFR mechanism, and further analyzes its working principles, the grid methods matching the AVE-related CPFR working flow, the strengths of this mechanism, and the n-tier prediction working flow. In addition, it constructs a selection model for credit-granting guarantee approaches and provides evidence for it. Under this mechanism, credit risk in AVE enterprises can be mitigated, and the AVE chain matches the working mechanism of CPFR in its capacity for real-time resource sharing, n-tier resource allocation, mission assignment, control, and supervision. It is hoped that distance management and risk blockage can be achieved in supply chains within AVE enterprises by establishing a strongly self-organized and self-controlled working chain.
Weighted grids are linearly independent sets {g_w : w ∈ W} of signed tripotents in Jordan* triples, indexed by figures W in real vector spaces, such that {g_u g_v g_w} ∈ ℂ g_{u−v+w} (= 0 if u − v + w ∉ W). They arise naturally as systems of weight vectors of certain abelian families of Jordan* derivations. Based on Neher's grid theory, a classification of association-free non-nil weighted grids is given. As a first step beyond the setting of classical grids, the complete list of complex weighted grids of pairwise associated signed tripotents indexed by ℤ² is established.
Electricity services are crucial for human well-being and for a country’s socio-economic development. Despite this importance, low levels of electricity adoption continue to prevail in most rural areas in sub-Saharan Africa (SSA). Low socio-economic development has been attributed, among other factors, to the lack of modern energy sources, especially electricity, among rural households, which has been identified as a major setback to empowerment and development at the household and community level. There is minimal or no research conducted to understand the socio-economic dynamics of electricity adoption among households in Meru-South Sub-County. Household interviews were conducted with 150 randomly selected households using closed- and open-ended questionnaires. The data collected were analyzed using descriptive statistics and regression. Results revealed that the largest proportion of the respondents were non-adopters. Predictor factors that significantly influenced adoption were distance from the transformer, education level, gender, household size, and income. Results further indicated that accessibility (proximity of the transformer) and cost of connection were perceived as the foremost challenges to electricity adoption by households. It is recommended that rural electrification projects take household-level characteristics into consideration when planning electricity dissemination in rural areas, to account for heterogeneity in electricity adoption.
Renewable energy sources connected to distribution systems through power electronic interfaces lead to various power quality problems. This chapter presents a review of the power quality issues associated with grid-connected renewable energy systems and of mitigation techniques. In mitigating power quality issues, an effective role is played by power electronic devices and custom power devices such as active power filters (APFs) and flexible AC transmission systems (FACTS). This chapter also discusses the IEC and IEEE standards for grid-connected renewable energy systems.