In recent years, the axial radiation model has emerged as a pivotal framework for Wireless Sensor Networks (WSNs), particularly for enhancing intelligent sensing platforms. This study examines WSNs structured around the axial radiation model, covering critical aspects such as model reconstruction, node deployment, and routing optimization, with a focus on the performance metrics that determine the responsiveness of these intelligent platforms. We introduce the Multi-block Directed Radiation (MBDR) routing algorithm, designed to extend system operational time and boost data transmission efficiency. Comprehensive experimental analyses demonstrate that MBDR achieves significant improvements in survivability, data transmission rate, and regional balance in axial radiation model environments.
This paper investigates the use of weak convergence for Stratonovich stochastic differential equations (SDEs), shifting the focus from the strong convergence techniques previously employed. We introduce a novel application of the trivial coupling method within the weak convergence framework, specifically addressing equations with non-invertible coefficients. Our approach simplifies the handling of random scenarios and computational tasks, with potential applications spanning physics, biology, and engineering. We provide a detailed account of the method, including its theoretical background and its practical implementation in MATLAB. Our results confirm the validity of the approach, demonstrating its effectiveness even with degenerate diffusion coefficients. This advancement in weak convergence strategies offers new insights and practical solutions for complex systems and opens avenues for further research.
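The paper's implementation is in MATLAB; as a self-contained flavor of weak-convergence testing for Stratonovich SDEs, here is a Python sketch using the standard stochastic Heun scheme (a generic textbook integrator, not the paper's trivial-coupling construction) on a test equation with a degenerate diffusion coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)

def heun_stratonovich(f, g, x0, T, n_steps, n_paths):
    """Stochastic Heun scheme, a standard weak-order-1 integrator for
    Stratonovich SDEs dX = f(X) dt + g(X) o dW."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        xp = x + f(x) * dt + g(x) * dw                               # predictor
        x = x + 0.5 * (f(x) + f(xp)) * dt + 0.5 * (g(x) + g(xp)) * dw  # corrector
    return x

# Test problem with degenerate diffusion g(x) = x (g vanishes at 0):
# dX = -X dt + X o dW has exact solution X_t = X_0 exp(-t + W_t),
# so E[X_1] = exp(-1/2) ~ 0.6065; the weak error is |sample mean - exact|.
xT = heun_stratonovich(lambda x: -x, lambda x: x, 1.0, 1.0, 100, 200_000)
print(xT.mean(), abs(xT.mean() - np.exp(-0.5)))
```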
With the advent of the semiconductor age and, later, the age of nanotechnology, the thin-film and coating field has established its importance and the motivation for in-depth study. Various sophisticated techniques, such as chemical vapor deposition, sputtering, evaporation, and molecular beam epitaxy, as well as conventional spin coating and dip coating, have been employed to obtain thin films of specific materials or compounds. Of these, physical techniques are particularly preferred for their ability to produce high-quality thin films with high uniformity. In experimental materials science, tremendous effort has gone into thin-film development and the study of film properties, including topographical, electrical, electronic, and optical characteristics. At the same time, though less explored, theoretical understanding of the basic mechanisms of thin-film growth has also advanced. In this effort, growth mechanisms have been categorized into broad classes with specific features, chief among them the time dependence of the interface width and the values of the various scaling exponents. Beyond these, studies have also addressed morphological, optical, and electrical properties of the as-grown films of specific materials. This paper gathers the existing literature on simulation-based theoretical studies of thin-film growth using algorithms such as random deposition, ballistic deposition, random deposition with surface relaxation, and their combinations. The paper also summarizes reports on the simulation-based prediction of material properties. As the topic is relatively new, reports published within the last 20 years have been considered.
The paper is organized as follows. Section 1 introduces basic ideas related to thin-film development and film properties. Sections 2 and 3, respectively, cover the basics of the existing growth models and the steps involved in simulating them. Section 4 gathers the results reported by various researchers, followed by a short discussion and concluding remarks. To the best of our knowledge, this is the first review of this field, and it should therefore serve as a valuable source of information for future workers.
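To make the surveyed algorithms concrete, here is a minimal sketch of random deposition with and without surface relaxation, tracking the interface width; lattice size, duration, and sampling interval are our own illustrative choices, not taken from any particular paper reviewed.

```python
import numpy as np

rng = np.random.default_rng(0)

def deposit(L=200, monolayers=1000, relax=True):
    """1-D growth on a substrate of L sites with periodic boundaries.

    relax=False: random deposition (RD), the particle sticks where it lands.
    relax=True:  random deposition with surface relaxation (Family model),
                 the particle settles on the lowest of the chosen site
                 and its two neighbors.
    Returns the interface width W(L, t) sampled once per monolayer.
    """
    h = np.zeros(L, dtype=np.int64)
    widths = []
    for t in range(1, monolayers * L + 1):
        i = rng.integers(L)
        if relax:
            i = min([(i - 1) % L, i, (i + 1) % L], key=lambda j: h[j])
        h[i] += 1
        if t % L == 0:
            widths.append(h.std())   # W = rms height fluctuation
    return np.array(widths)

# Family-Vicsek scaling: W grows as t^beta before saturating at ~L^alpha.
# RD has beta = 1/2 (no saturation); RD with relaxation falls in the
# Edwards-Wilkinson class with beta = 1/4, alpha = 1/2 in 1+1 dimensions.
w_rd = deposit(relax=False)
w_rdsr = deposit(relax=True)
print(w_rd[-1], w_rdsr[-1])
```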
III-nitride materials have attracted wide interest for optoelectronic and electronic applications due to their unique and tuneable semiconducting and optical properties. Gallium nitride (GaN) exhibits a wide bandgap of 3.42 eV and high thermal and chemical stability, with strong potential to act as an electron-selective contact (i.e., emitter) in GaN/Si heterojunction solar cells. However, despite these advantages, no published research to date has utilized GaN as an emitter in GaN/Si heterojunction solar cells. In this work, SCAPS-1D simulation is used to investigate the electrical properties of a GaN emitter on a 150 μm-thick monocrystalline silicon (mono c-Si) absorber layer in a GaN/Si heterojunction solar cell architecture. GaN emitters with thicknesses of 30–120 nm are studied, and their effects on the open-circuit voltage (VOC), short-circuit current density (JSC), fill factor (FF), power conversion efficiency (PCE), and external quantum efficiency (EQE) of the solar cell are analyzed. The optimum thickness of the GaN emitter is found to be 60 nm, with a JSC of 37.53 mA/cm², a VOC of 628.73 mV, and a PCE of 19.43%. The findings show the potential of the GaN/Si heterojunction solar cell for the future of the PV industry, especially for applications in harsh and extreme conditions.
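As a quick consistency check on the reported figures (a sketch; the standard AM1.5G input power of 100 mW/cm² is our assumption, since the abstract does not state it), the basic relation PCE = JSC · VOC · FF / Pin implies a fill factor of roughly 82%:

```python
jsc = 37.53      # reported short-circuit current density, mA/cm^2
voc = 0.62873    # reported open-circuit voltage, V
pce = 0.1943     # reported power conversion efficiency
pin = 100.0      # assumed AM1.5G input power, mW/cm^2

ff = pce * pin / (jsc * voc)     # fill factor implied by PCE = Jsc*Voc*FF/Pin
print(f"implied FF = {ff:.1%}")  # ~82.3%
```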
Cloud computing’s simulation and modeling capabilities are crucial for big data analysis in smart grid power systems; they are key to extracting practical insights, making the grid resilient, and improving energy management. Because of challenges in data scalability and real-time analytics, advanced methods are required to extract useful information from the massive, ever-changing datasets produced by smart grids. This research proposes Dynamic Resource Cloud-based Processing Analytics (DRC-PA), which integrates cloud-based processing and analytics with dynamic resource allocation algorithms. Computational resources must adjust to changing grid conditions, and DRC-PA ensures that big data analysis can scale accordingly. The DRC-PA method has several potential uses, including power grid optimization, anomaly detection, demand response, and predictive maintenance. The proposed technique thus enables smart grids to adjust proactively to changing conditions, boosting resilience and sustainability in the energy ecosystem. A thorough simulation analysis using realistic smart grid scenarios confirms the usefulness of the DRC-PA approach, showing that it is more efficient than traditional methods because it is more accurate, scalable, and responsive in real time. In addition to resolving existing issues, the proposed method reshapes contemporary energy systems by paving the way for innovations in grid optimization, decision support, and energy management.
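The abstract does not specify DRC-PA's allocation rules, so the sketch below is only a generic threshold-based autoscaler of the kind dynamic resource allocation builds on; the function, its parameters, and the thresholds are all hypothetical.

```python
import math

def scale_workers(queue_len, per_worker_rate, target_latency_s,
                  min_workers=1, max_workers=64):
    """Toy dynamic resource allocation: size the worker pool so the
    current backlog can be drained within the latency target.
    (Hypothetical policy, not the DRC-PA algorithm itself.)"""
    needed = math.ceil(queue_len / (per_worker_rate * target_latency_s))
    return max(min_workers, min(max_workers, needed))

# e.g. 12,000 queued meter readings, 500 readings/s per worker, 2 s target
print(scale_workers(12_000, 500, 2.0))  # -> 12 workers
```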
A system using aerial vehicles to autonomously locate and retrieve ground packages of various colors and shapes within a designated area was developed. The system was notable for its use of an innovative rotor configuration that offered a higher degree of control than traditional designs. To accurately identify and track the packages, a multi-sensor approach combining vision and Light Detection and Ranging (LiDAR) technologies was implemented, with the data integrated into an extended Kalman filter framework for precise position and velocity estimates. An electro-permanent magnet was employed to securely grab and release the packages, which contained ferrous material. Path planning and collision avoidance were conducted in a decentralized manner, leveraging a global map shared among the airborne vehicles. This paper details the system's technical design, including the integration of the various technologies, and shares insights and outcomes derived from its application.
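As a flavor of the sensor-fusion step, here is a minimal sketch of fusing vision and LiDAR position fixes into position/velocity estimates. The paper uses an extended Kalman filter; a linear constant-velocity model is shown for brevity, and the update rate and all noise covariances are illustrative assumptions.

```python
import numpy as np

dt = 0.02                                        # 50 Hz update (assumed)
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])    # state: [x, y, vx, vy]
H = np.hstack([np.eye(2), np.zeros((2, 2))])     # both sensors observe position
Q = 1e-3 * np.eye(4)                             # process noise (assumed)
R = {"vision": 0.05 * np.eye(2), "lidar": 0.01 * np.eye(2)}  # assumed

def predict(x, P):
    """Propagate the state and covariance through the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, sensor):
    """Fuse a position fix z from the named sensor."""
    S = H @ P @ H.T + R[sensor]
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([0.10, -0.20]), "vision")  # fuse a vision fix
x, P = update(x, P, np.array([0.12, -0.19]), "lidar")   # then a LiDAR fix
```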
Prevotella copri is a prominent constituent of the human gastrointestinal microbiome, and its fluctuating abundance has been linked with positive and negative influences on diseases such as Parkinson's disease and rheumatoid arthritis. Prevotella copri demonstrates resistance to drugs, there is presently no FDA-approved vaccine against it, and treatment options are restricted. Hence, this research was designed to create an in silico vaccine for Prevotella copri. The protein sequences of two distinct strains of Prevotella copri were retrieved from NCBI. T-cell and B-cell epitopes were obtained and then analyzed for antigenicity, allergenicity, docking, and simulation. The peptide comprises linear B-cell and T-cell epitopes from proteins identified as potential novel vaccine candidates. Molecular dynamics (MD) simulations and protein-protein docking revealed that the vaccine exhibits a strong and sustained interaction with Toll-like receptor 4 (TLR4). The constructed sequence was integrated into the pET-30a (+) vector for subsequent expression analysis in E. coli through the SnapGene server. The constructed multi-epitope vaccine candidate was assessed for its structural, physicochemical, and immunological properties. The results demonstrated solubility, stability, antigenicity, and non-allergenicity, and showed a strong affinity for the target receptors. This in silico study represents a significant step toward designing a vaccine that could effectively eliminate Prevotella copri globally.
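As a flavor of the physicochemical assessment step, here is a minimal sketch using Biopython rather than the dedicated vaccine-design servers the study relied on; the sequence is a placeholder, not the actual construct.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

candidate = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder, not from the paper
pa = ProteinAnalysis(candidate)
print("MW (Da):          ", round(pa.molecular_weight(), 1))
print("Theoretical pI:   ", round(pa.isoelectric_point(), 2))
print("Instability index:", round(pa.instability_index(), 2))  # <40 suggests stability
print("GRAVY:            ", round(pa.gravy(), 3))              # <0 suggests solubility
```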
This paper uses two methodologies to explore the extent to which greater labor force participation among older Malaysians can expand Malaysia’s labor supply. The Milligan–Wise method estimates the potential to increase the labor force participation rate of older Malaysians by estimating how much they would work if they were to work as much as those with the same mortality rate in the past. The Cutler, Meara, and Richards-Shubik (2013) method estimates the same potential by estimating how much older Malaysians would work if they worked as much as their younger counterparts in similar health. We perform further simulations to quantify the capacity of older Malaysians to work past age 60. The results show significant additional work capacity among older people in Malaysia, particularly among males, urban dwellers, and those with low educational attainment.
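The Milligan–Wise counterfactual can be sketched in a few lines: match today's older workers to the past age with the same mortality rate, and read off that age's historical employment rate as the potential participation rate. All schedules below are hypothetical, for illustration only.

```python
import numpy as np

age = np.arange(55, 76)
mort_past = 0.006 * 1.09 ** (age - 55)   # hypothetical past mortality schedule
mort_now = 0.004 * 1.09 ** (age - 55)    # hypothetical current (lower) mortality
emp_past = np.clip(0.85 - 0.035 * (age - 55), 0.0, 1.0)  # hypothetical employment

def potential_employment(a):
    """Past employment rate at the age whose past mortality equals
    today's mortality at age a."""
    matched_age = np.interp(mort_now[a - 55], mort_past, age)
    return float(np.interp(matched_age, age, emp_past))

print(potential_employment(65))  # potential rate at 65 under the past benchmark
```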
This paper presents a modeling methodology devoted to the performance evaluation of parallel architectures. The methodology is based on the decomposition of the modeling process into seven stages. In each stage, techniques specific to parallel architectures are applied, such as aggregation methods, a strict distinction between the architecture model and the application-program model, and the construction of program classes using data analysis techniques. The methodology is then illustrated through a case study, the loosely Coupled Array of Processors (lCAP) system designed at IBM Kingston. The lCAP model makes it possible to predict the performance of the lCAP system, in terms of response time, resource utilization, and waiting times, and to investigate many alternatives regarding the system configuration (e.g., number of system components, component interconnection scheme, component characteristics) or the parallel program structure (e.g., parallel task granularity, load imbalance).
In this paper we introduce a class of trees, called generalized compressed trees. Generalized compressed trees can be derived from complete binary trees by performing certain ‘contraction’ operations. A generalized compressed tree CT of height h has approximately 25% fewer nodes than a complete binary tree T of height h. We show that these trees admit smaller 2-dimensional and 3-dimensional VLSI layouts (up to a 74% reduction) than complete binary trees. We also show that algorithms originally designed for T can be simulated by CT with at most a constant slow-down. In particular, algorithms with a non-pipelined computation structure that were originally designed for T can be simulated by CT with no slow-down.
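For scale, a back-of-the-envelope count (a sketch using only the standard size of a complete binary tree; the exact size of CT is as defined in the paper):

\[
|T| \;=\; 2^{h+1} - 1,
\qquad
|CT| \;\approx\; \tfrac{3}{4}\left(2^{h+1} - 1\right) \;\approx\; 3\cdot 2^{h-1},
\]

so for height \(h = 10\), roughly 2047 versus 1536 nodes.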
We formulate a microscopic model of the stock market and study the resulting macroscopic phenomena via simulation. In a market of homogeneous investors, periodic booms and crashes in stock price are obtained. When there are two types of investors in the market, differing only in their memory spans, we observe sharp irregular transitions between eras where one population dominates the market and eras where the other population dominates. When the number of investor subgroups is three, the market undergoes a dramatic qualitative change: it becomes complex. We show that complexity is an intrinsic property of the stock market. This suggests an alternative to the widely accepted but empirically questionable random walk hypothesis.
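To give a flavor of this style of simulation, here is a toy agent-based market with heterogeneous memory spans; the trading rule and every parameter are our own illustrative stand-ins, not the authors' microscopic model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each investor scores the stock by the mean of its last `m` returns
# and shifts its demand accordingly; price moves with net buying pressure.
n_agents, T = 100, 500
memory = rng.choice([5, 15, 50], n_agents)    # three investor subgroups
returns = list(rng.normal(0.0, 0.01, 60))     # seed return history
price, prices = 1.0, []

for t in range(T):
    score = np.array([np.mean(returns[-m:]) for m in memory])
    frac = np.clip(0.5 + 10.0 * score, 0.01, 0.99)  # fraction held in stock
    excess = frac.mean() - 0.5                      # net buying pressure
    new_price = price * (1.0 + excess + rng.normal(0.0, 0.005))
    returns.append(new_price / price - 1.0)
    price = new_price
    prices.append(price)

print(f"final price {price:.3f}, min {min(prices):.3f}, max {max(prices):.3f}")
```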
An efficient simulation algorithm using an algebra of transients for gate circuits was proposed by Brzozowski and Ésik. This algorithm seems capable of predicting all the signal changes that can occur in a circuit under worst-case delay conditions. We verify this claim by comparing simulation with binary analysis. For any feedback-free circuit consisting of one- and two-input gates, we prove that all signal changes predicted by simulation occur in binary analysis, provided that wire delays are taken into account. Two types of finite automata play an important role in our proof.
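To illustrate the worst-case flavor of the algebra of transients, here is a brute-force sketch: a transient is an alternating 0/1 word recording a signal's changes, and a gate's output transient is taken over all interleavings of its input changes. This matches the intent of the algebra as we read it; Brzozowski and Ésik define the extension algebraically and give efficient formulas, so this is an illustration, not their algorithm.

```python
def contract(w):
    """Collapse consecutive duplicates: '00110' -> '010'."""
    out = [w[0]]
    for c in w[1:]:
        if c != out[-1]:
            out.append(c)
    return "".join(out)

def worst_case_transient(f, t1, t2):
    """Worst-case output transient of a 2-input gate f when its inputs
    follow transients t1, t2, maximized over all interleavings of the
    input changes (exponential brute force, fine for short transients)."""
    best = ""
    def walk(i, j, out):
        nonlocal best
        out += str(f(int(t1[i]), int(t2[j])))
        if i == len(t1) - 1 and j == len(t2) - 1:
            w = contract(out)
            if len(w) > len(best):
                best = w
            return
        if i < len(t1) - 1:
            walk(i + 1, j, out)
        if j < len(t2) - 1:
            walk(i, j + 1, out)
    walk(0, 0, "")
    return best

print(worst_case_transient(lambda a, b: a & b, "010", "101"))  # AND gate
```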
In this paper, we study the problem of maintaining sensing coverage in wireless sensor networks by keeping a small number of sensor nodes active and keeping energy consumption low. This paper extends a result from [22], where a uniform sensing range is assumed for all sensors. We adopt an approach that allows non-uniform sensing ranges for different sensors; as opposed to the uniform-sensing-range node scheduling model of [22], two new energy-efficient models with differing sensing ranges are proposed. Our objective is to minimize the overlapped sensing area of the nodes, thereby reducing the overall energy consumed by sensing and communication and prolonging the network's lifetime, while maintaining a high coverage ratio. Extensive simulation is conducted to verify the effectiveness of our node scheduling models.
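A quantity central to such scheduling models is the coverage ratio under non-uniform sensing ranges. Here is a minimal Monte Carlo sketch for estimating it (illustrative; the node count, range distribution, and unit square are our own choices, and the paper's scheduling models decide which nodes may sleep):

```python
import numpy as np

rng = np.random.default_rng(3)

def coverage_ratio(nodes_xy, radii, n_samples=100_000):
    """Fraction of random points in the unit square covered by at
    least one sensor disk (non-uniform radii allowed)."""
    pts = rng.random((n_samples, 2))
    d2 = ((pts[:, None, :] - nodes_xy[None, :, :]) ** 2).sum(-1)
    covered = (d2 <= radii[None, :] ** 2).any(axis=1)
    return covered.mean()

nodes = rng.random((50, 2))
radii = rng.uniform(0.05, 0.15, 50)   # non-uniform sensing ranges
print(f"coverage ~ {coverage_ratio(nodes, radii):.2%}")
```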
The dominant trend in scientific computing today is the establishment of platforms that span multiple institutions to support applications at unprecedented scales. On most distributed computing platforms, achieving high performance requires the careful scheduling of distributed application components onto the available resources. While scheduling has been an active area of research for many decades, most of the platform models traditionally used in scheduling research, and in particular the network models, break down for platforms spanning wide-area networks. In this paper we examine network modeling issues for large-scale platforms from the perspective of scheduling. The main challenge we address is the development of models that are sophisticated enough to be more realistic than those traditionally used in the field, yet simple enough to remain amenable to analysis. In particular, we discuss issues of bandwidth sharing and topology modeling. While these models can be used to define and reason about realistic scheduling problems, we show that they also provide a good basis for fast simulation, the typical method for evaluating scheduling algorithms, as demonstrated in our implementation of the SIMGRID simulation framework.
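A standard bandwidth-sharing abstraction in this setting is max-min fairness, computed by progressive filling. The sketch below is a simplification of the kind of flow-level model used in simulators such as SimGrid, not SimGrid's actual implementation.

```python
def max_min_share(capacity, flow_paths):
    """Progressive filling for max-min fair bandwidth sharing.
    capacity: {link: capacity}; flow_paths: {flow: set of links}.
    Returns {flow: allocated rate}."""
    remaining = dict(capacity)
    active = dict(flow_paths)
    alloc = {}
    while active:
        # fair share each link could give its currently active flows
        share = {}
        for link, cap in remaining.items():
            n = sum(1 for path in active.values() if link in path)
            if n:
                share[link] = cap / n
        bottleneck = min(share, key=share.get)
        rate = share[bottleneck]
        # freeze all flows crossing the bottleneck at that rate
        for flow in [f for f, p in active.items() if bottleneck in p]:
            alloc[flow] = rate
            for link in active[flow]:
                remaining[link] -= rate
            del active[flow]
    return alloc

# Two links; f2 crosses both and is bottlenecked on the slower link B.
print(max_min_share({"A": 100.0, "B": 50.0},
                    {"f1": {"A"}, "f2": {"A", "B"}, "f3": {"B"}}))
# -> {'f2': 25.0, 'f3': 25.0, 'f1': 75.0}
```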
Transient simulation of a gate circuit is an efficient method of counting the signal changes occurring during a transition of the circuit. It is known that this simulation covers the results of classical binary analysis, in the sense that all signal changes appearing in binary analysis are also predicted by the simulation. For feedback-free circuits of 1- and 2-input gates, it had been shown that the converse also holds if wire delays are taken into account. In this paper we generalize this result. First, we prove that, for any feedback-free circuit N of arbitrary gates, there exists an expanded circuit, constructed by adding a number of delays to each wire of N, whose binary analysis covers the transient simulation of N; the number of delays added to a wire is obtained from the transient simulation. Our second result involves adding only one delay per wire, which leads to the singular circuit of N. This result is restricted to circuits consisting only of gates realizing functions from the set {AND, OR, XOR}, functions obtained by complementing any number of inputs and/or the output of a function from that set, and FORKs. The numbers of inputs of the AND, OR, and XOR gates are arbitrary, and all functions of two variables are included. We show that binary analysis of such a singular circuit covers the transient simulation of N. We also show that this result cannot be extended to arbitrary gates if we allow only a constant number of delays per wire.
Restarting automata were introduced by Jančar et al. to model the so-called analysis by reduction. A computation of a restarting automaton consists of a sequence of cycles such that in each cycle the automaton performs exactly one rewrite step, which replaces a small part of the tape content by another, even shorter word. Here we consider a natural generalization of this model, called the shrinking restarting automaton, where we only require that there exists a weight function such that each rewrite step decreases the weight of the tape content with respect to that function. While it is still unknown whether the two most general types of one-way restarting automata, the RWW-automaton and the RRWW-automaton, differ in their expressive power, we will see that the classes of languages accepted by the shrinking RWW-automaton and the shrinking RRWW-automaton coincide. As a consequence of our proof, it turns out that there exists a reduction by morphisms from the language class L(RRWW) to the class L(RWW). Further, we will see that the shrinking restarting automaton is a rather robust model of computation. Finally, we will relate shrinking RRWW-automata to finite-change automata. This will lead to some new insights into the relationships between the classes of languages characterized by (shrinking) restarting automata and some well-known time and space complexity classes.
Here we introduce cooperating distributed systems of restarting automata and establish that, in mode = 1, they correspond to the non-forgetting restarting automaton.
The Wireless Parallel Turing Machine (WPTM) is a new computational model recently introduced and studied by the authors. Its design captures important features of wireless mobile computing. In this paper we survey some results related to the descriptive complexity aspects of the new model. In particular, we show a tight relationship among (a) wireless parallel, (b) alternating, and (c) synchronized alternating Turing machines. This relationship opens, e.g., the road to circuit complexity by offering an elegant WPTM characterization of uniform bounded-fan-in circuit families such as NC and NC^i. The structural properties of the computation graphs of WPTM computations inspire definitions of new complexity measures capturing important aspects of wireless computation: energy consumption and the number of broadcasting channels used during a computation. These measures do not seem to have direct counterparts in alternating computations. We mention results related to these new structural measures, e.g., a polynomial-time-bounded complexity hierarchy based on channel complexity, lying between P and PSPACE, which seems to be incomparable to the standard polynomial-time alternating hierarchy.
Recently, many parallel computing models using dynamically reconfigurable electrical buses have been proposed in the literature. The underlying characteristics of these models are similar, but they do have certain differences, which can take the form of restrictions on the configurations allowed. This paper presents a constant-time simulation of an R-Mesh on an LR-Mesh (a restricted variant of the R-Mesh), proving that, in spite of the differences, the two models have the same complexity: the LR-Mesh can simulate a step of the R-Mesh in constant time with a polynomial increase in size. The simulation is based on Reingold's algorithm for solving USTCON in log-space, and it is the first such simulation to run in constant time. Its only drawback is that the resources required by the LR-Mesh to simulate the R-Mesh are quite large, making it unsuitable for practical applications.
Three variants of determinism are introduced for CD-systems of restarting automata, called strict determinism, global determinism, and local determinism. In mode = 1, globally deterministic CD-systems of restarting automata have the same expressive power as non-forgetting deterministic restarting automata of the same type, which corresponds to the situation for nondeterministic CD-systems. On the other hand, for the various types of restarting automata without auxiliary symbols, strictly deterministic CD-systems of restarting automata are strictly less expressive than the corresponding deterministic types of non-forgetting restarting automata. Further, globally deterministic CD-systems of restarting automata can be simulated by locally deterministic CD-systems of restarting automata of the same type. In fact, we conjecture that, for all types of restarting automata without auxiliary symbols, the latter are strictly more expressive than the former, but strictly less expressive than the corresponding nondeterministic CD-systems of restarting automata.