In this paper, we have considered a spatially flat FRW universe filled with pressureless matter and dark energy (DE). We have considered a phenomenological parametrization of the deceleration parameter q(z) and from it reconstructed the equation of state (EoS) of DE, ωϕ(z). This divergence-free parametrization of the deceleration parameter is inspired by one of the most popular parametrizations of the DE EoS, that of Barboza and Alcaniz [see E. M. Barboza and J. S. Alcaniz, Phys. Lett. B666 (2008) 415]. Using a combination of datasets (Type Ia Supernova (SN Ia) + Hubble + baryonic acoustic oscillations/cosmic microwave background (BAO/CMB)), we have constrained the transition redshift zt (at which the universe switches from a decelerating to an accelerating phase) and obtained its best-fit value. We have also compared the reconstructed results for q(z) and ωϕ(z) and found that they are compatible with a ΛCDM universe if we consider SN Ia + Hubble data, but the inclusion of BAO/CMB data makes q(z) and ωϕ(z) incompatible with the ΛCDM model. The potential term for the present toy model is found to be functionally similar to a Higgs potential.
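A minimal sketch of how such a reconstruction proceeds, assuming a hypothetical Barboza–Alcaniz-like divergence-free form q(z) = q0 + q1 z(1+z)/(1+z²) (the abstract does not give the explicit function) and the standard flat-FRW relation between q(z), H(z) and the DE EoS:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical Barboza-Alcaniz-inspired, divergence-free deceleration parameter
# (illustrative parameter values, not the paper's best fit):
def q(z, q0=-0.5, q1=1.0):
    return q0 + q1 * z * (1.0 + z) / (1.0 + z**2)

def E(z, q0=-0.5, q1=1.0):
    """Dimensionless Hubble rate, H(z)/H0 = exp( int_0^z [1 + q(z')]/(1 + z') dz' )."""
    val, _ = quad(lambda zp: (1.0 + q(zp, q0, q1)) / (1.0 + zp), 0.0, z)
    return np.exp(val)

def w_de(z, q0=-0.5, q1=1.0, Om0=0.3):
    """DE EoS in flat FRW with pressureless matter: w = (2q - 1) / [3 (1 - Omega_m(z))]."""
    Om_z = Om0 * (1.0 + z)**3 / E(z, q0, q1)**2
    return (2.0 * q(z, q0, q1) - 1.0) / (3.0 * (1.0 - Om_z))

# Transition redshift z_t, where q(z) changes sign
z_t = brentq(lambda z: q(z), 0.0, 3.0)
print(f"z_t ~ {z_t:.2f},  w_de(0) ~ {w_de(0.0):.2f}")
```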
In this work, using a detailed dataset furnished by the National Health Authorities for the Province of Pavia (Lombardy, Italy), we propose to determine the essential features of the ongoing COVID-19 pandemic in terms of contact dynamics. Our contribution is devoted to providing a possible planning of the needs of medical infrastructures in the Pavia Province and to suggesting different scenarios for the vaccination campaign that could help reduce the fatalities and/or the number of infected in the population. The proposed research combines a new mathematical description of the spread of an infectious disease, which takes into account both age and average daily social contacts, with a detailed analysis of the dataset of all traced infected individuals in the Province of Pavia. This information is used to develop a data-driven model in which calibration and feeding of the model are used extensively. The epidemiological evolution is obtained by relying on an approach based on statistical mechanics. This leads to studying the evolution over time of a system of probability distributions characterizing the age and social contacts of the population. One of the main outcomes shows that, as expected, the spread of the disease is closely related to the mean number of contacts of individuals. Thanks to an uncertainty quantification approach, the model makes it possible to forecast, over a short time horizon, the average number and the confidence bands of expected hospitalized individuals classified by age, and to test different options for an effective vaccination campaign with age-decreasing priority.
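As a greatly simplified stand-in for the kinetic, statistical-mechanics description above, the sketch below uses an age-stratified SIR system in which transmission is driven by a contact matrix of average daily contacts between age classes. All class sizes, contact numbers and rates are illustrative assumptions, not values fitted to the Pavia data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative age classes, population sizes and contact matrix C[i, j] =
# average daily contacts of an individual in class i with individuals in class j.
ages  = ["0-19", "20-59", "60+"]
N     = np.array([200_000.0, 300_000.0, 100_000.0])
C     = np.array([[8.0, 4.0, 1.0],
                  [4.0, 6.0, 2.0],
                  [1.0, 2.0, 3.0]])
beta  = 0.03      # transmission probability per contact (assumed)
gamma = 1 / 10    # recovery rate, 1/days (assumed)

def rhs(t, y):
    S, I, R = y[:3], y[3:6], y[6:]
    force = beta * C @ (I / N)          # age-specific force of infection
    return np.concatenate([-S * force, S * force - gamma * I, gamma * I])

I0  = np.array([10.0, 10.0, 10.0])
y0  = np.concatenate([N - I0, I0, np.zeros(3)])
sol = solve_ivp(rhs, (0, 180), y0, t_eval=np.linspace(0, 180, 361))

for a, peak in zip(ages, sol.y[3:6].max(axis=1)):
    print(f"peak infected in age class {a}: {peak:,.0f}")
```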
Automated reliability assessment is essential for systems that entail dynamic adaptation based on runtime mission-specific requirements. One approach in this direction is to monitor and assess the system using machine learning-based software defect prediction techniques. Due to the dynamic nature of the software data collected, instance-based learning algorithms are proposed for the above purposes. To evaluate the accuracy of these methods, the paper presents an empirical analysis of four different real-time software defect data sets using different predictor models.
The results show that a combination of 1R and instance-based learning along with a consistency-based subset evaluation technique provides relatively better consistency in achieving accurate predictions than the other models. No direct relationship is observed between the skewness present in the data sets and the prediction accuracy of these models. Principal Component Analysis (PCA) does not show a consistent advantage in improving the accuracy of the predictions. While random reduction of attributes gave poor accuracy results, simple feature subset selection methods performed better than PCA for most prediction models. Based on these results, the paper presents a high-level design of an Intelligent Software Defect Analysis tool (ISDAT) for dynamic monitoring and defect assessment of software modules.
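A rough scikit-learn sketch of this kind of setup, with a k-nearest-neighbour classifier standing in for the instance-based learner and univariate mutual-information selection standing in for consistency-based subset evaluation (which scikit-learn does not provide); the defect data set is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a software-defect data set (module metrics -> defective?).
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

# Instance-based learner preceded by a feature-subset selection step.
model = make_pipeline(SelectKBest(mutual_info_classif, k=8),
                      KNeighborsClassifier(n_neighbors=5))

scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```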
The worldwide evolution of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19, led the World Health Organization to declare a pandemic. The disease appeared in China in December 2019, and it has spread quickly around the world, especially in European countries like Italy and Spain. The first reported case in Brazil was recorded on February 26, and after that the number of cases grew quickly. In order to slow down the initial growth of the disease through the country, confirmed positive cases were isolated to prevent further transmission. To better understand the early evolution of COVID-19 in Brazil, we apply a Susceptible–Infectious–Quarantined–Recovered (SIQR) model to the analysis of data from the Brazilian Department of Health, obtained from February 26, 2020 through March 25, 2020. Based on analytical and numerical results, as well as on the data, the basic reproduction number is estimated to be R0 = 5.25. In addition, we estimate that the ratio between unidentified infectious individuals and confirmed cases at the beginning of the epidemic is about 10, in agreement with previous studies. We also estimate the epidemic doubling time to be 2.72 days.
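A minimal SIQR sketch under assumed parameter names and rates. The split between the quarantine rate and the recovery rates is an assumption; the values are merely chosen so that the linearized early-phase diagnostics roughly reproduce the figures quoted in the abstract (R0 ≈ 5.25, doubling time ≈ 2.7 days), not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

N     = 210_000_000   # Brazilian population (order of magnitude)
beta  = 0.315         # transmission rate, 1/day (assumed)
eta   = 0.04          # detection/quarantine rate, 1/day (assumed)
alpha = 0.02          # recovery rate of undetected infectious, 1/day (assumed)
gamma = 1 / 14        # recovery rate of quarantined, 1/day (assumed)

def siqr(t, y):
    S, I, Q, R = y
    dS = -beta * S * I / N                      # quarantined do not transmit
    dI =  beta * S * I / N - (eta + alpha) * I
    dQ =  eta * I - gamma * Q
    dR =  alpha * I + gamma * Q
    return [dS, dI, dQ, dR]

sol = solve_ivp(siqr, (0, 60), [N - 1, 1, 0, 0], t_eval=np.arange(0, 61))

# Early-phase diagnostics from the linearized infectious equation.
R0     = beta / (eta + alpha)
growth = beta - (eta + alpha)
print(f"R0 = {R0:.2f}, doubling time = {np.log(2) / growth:.2f} days")
print(f"quarantined after 28 days: {sol.y[2, 28]:,.0f}")
```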
In this paper, we have considered a flat Friedmann–Robertson–Walker (FRW) model of the universe and taken the modified Chaplygin gas as the fluid source. Associated with the scalar field model, we have determined the Hubble parameter as a generating function in terms of the scalar field. Instead of a hyperbolic function, we have taken a Jacobi elliptic function and an Abel function in the generating function and obtained the modified Chaplygin–Jacobi gas (MCJG) and modified Chaplygin–Abel gas (MCAG) equations of state, respectively. Next, we have assumed that the universe is filled with dark matter, radiation, and dark energy. The dark energy candidates are taken to be MCJG and MCAG. We have constrained the model parameters by recent observational data analysis. Using the χ² minimum test (maximum likelihood estimation), we have determined the best-fit values of the model parameters through a joint OHD+CMB+BAO+SNIa data analysis. To examine the viability of the MCJG and MCAG models, we have determined the deviations of the information criteria, ΔAIC, ΔBIC and ΔDIC. The evolutions of cosmological and cosmographical parameters (equation of state, deceleration, jerk, snap, lerk, statefinder, Om diagnostic) have been studied for our best-fit values of the model parameters. To check the classical stability of the models, we have examined whether the squared speed of sound v_s² lies in the interval (0,1) during the expansion of the universe.
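A generic sketch of the χ² fitting and information-criteria step, with a simple flat-ΛCDM H(z) standing in for the MCJG/MCAG expansion history (whose explicit form the abstract does not give) and synthetic H(z) "data" used purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def H_model(z, H0, Om0):
    """Placeholder expansion history (flat LCDM), km/s/Mpc."""
    return H0 * np.sqrt(Om0 * (1 + z)**3 + (1 - Om0))

rng   = np.random.default_rng(0)
z_obs = np.linspace(0.05, 1.5, 15)
sigma = np.full_like(z_obs, 8.0)
H_obs = H_model(z_obs, 70.0, 0.3) + rng.normal(0.0, sigma)   # synthetic data

def chi2(params):
    H0, Om0 = params
    return np.sum(((H_obs - H_model(z_obs, H0, Om0)) / sigma)**2)

best = minimize(chi2, x0=[65.0, 0.25], method="Nelder-Mead")
chi2_min, k, n = best.fun, len(best.x), len(z_obs)

# Standard definitions; the Delta-AIC / Delta-BIC quoted in such papers are
# differences of these values with respect to a reference model (e.g. LCDM).
AIC = chi2_min + 2 * k
BIC = chi2_min + k * np.log(n)
print(f"chi2_min = {chi2_min:.2f}, AIC = {AIC:.2f}, BIC = {BIC:.2f}")
```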
Bose–Einstein correlations of identical particles reveal the shape and size of the particle-emitting source of the given boson. These correlations are pivotal for a better understanding of the source dynamics and for developing techniques to examine the propagation of quantum chaos in the presence of coherence at a given temperature and momentum. Femtoscopy is of the utmost importance in this setting for data processing and for resolving discrepancies between coherence and chaos analyses. In this research, we introduce an evolving source to describe chaotic information within quantum entanglement and discuss the impact of the coherence order. We also investigate the role of the source size parameter and the particle number in the dynamics of the temperature profiles as functions of momentum. The influence of the modeling factors on the correlations and their normalized correlator is evaluated from the perspective of source coherence, and the orientation of the source is found to affect the diffusivity of the fluid under consideration. Such effects can increase or decrease the genuine correlations, owing to the velocity and temperature distributions. The main findings of the paper are illustrated through graphical representations of the considered correlations according to the geometry of the expanding source. These results also point to potential applications in engineering.
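The abstract does not give its explicit correlation function; as a point of reference, the standard Gaussian parametrization of the two-particle Bose–Einstein correlation function, in which the intercept (chaoticity) parameter λ quantifies the degree of coherence and R the source size, reads

```latex
C_2(q) \;=\; \frac{P_2(p_1, p_2)}{P_1(p_1)\,P_1(p_2)}
        \;\simeq\; 1 + \lambda\, e^{-q^2 R^2},
\qquad q = p_1 - p_2 ,
```

with λ → 1 for a fully chaotic source and λ → 0 in the fully coherent limit; in an evolving source of the kind described above, λ and R would in general acquire momentum and temperature dependence.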
The proliferation of fractal artificial intelligence (AI)-based decision-making has propelled advances in intelligent computing techniques. Fractal AI-driven decision-making approaches are used to solve a variety of real-world complex problems, especially in uncertain sports surveillance situations. To this end, we present a framework for deciding the winner in a tied sporting event. As a case study, a tied cricket match was investigated, and the issue was addressed with a systematic state-of-the-art approach by considering the team strength in terms of the player score, team score at different intervals, and total team scores (TTSs). The TTSs of teams were compared to recommend the winner. We believe that the proposed idea will help to identify the winner in a tied match, supporting intelligent surveillance systems. In addition, this approach can potentially address many existing issues and future challenges regarding critical decision-making processes in sports. Furthermore, we posit that this work will open new avenues for researchers in fractal AI.
This paper describes an expert system to predict National Hockey League (NHL) game outcomes. A new method based on both data and judgments is used to estimate hockey game performance. There are many facts and judgments that could influence an outcome. We employed a support vector machine to determine the importance of these factors before incorporating them into the prediction system. Our system combines data and judgments and uses them to predict the win–lose outcome of all 89 post-season games before they took place. The accuracy of our prediction with the combined factors was 77.5%. This is, to date, the best reported accuracy for hockey game prediction.
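A minimal sketch of this kind of pipeline, assuming hypothetical per-game features that mix objective statistics ("data") with expert ratings ("judgments"); the numbers are synthetic, and the coefficients of a linear SVM are used here as one simple notion of factor importance:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features, e.g. goal differential, shots, power-play %,
# goalie rating, expert momentum judgment (synthetic values only).
rng = np.random.default_rng(42)
n_games = 400
X = rng.normal(size=(n_games, 5))
true_w = np.array([1.5, 0.8, 0.4, 1.0, 0.6])
y = (X @ true_w + rng.normal(scale=0.5, size=n_games) > 0).astype(int)  # win/lose

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf.fit(X_tr, y_tr)

print("factor weights:", clf.named_steps["svc"].coef_.ravel().round(2))
print("hold-out accuracy:", clf.score(X_te, y_te).round(3))
```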
Decision support for planning and improving software development projects is a crucial success factor. The special characteristics of software development aggravate these tasks in contrast to the planning of many other processes, such as production processes. Process simulation can be used to support decisions on process alternatives on the basis of existing knowledge. Thereby, new development knowledge can be gained faster and more cost-effectively.
This chapter gives a short introduction to experimental software engineering, describes simulation approaches within that area, and introduces a method for systematically developing discrete-event software process simulation models. Advanced simulation modeling techniques will point out key problems and possible solutions, including the use of visualization techniques for better simulation result interpretation.
Taiji-1 is the first technology demonstration satellite of the Taiji Program in Space which, serving as the pre-PathFinder mission, has finished its nominal science operational phase and successfully accomplished its mission goal. The gravitational reference sensor (GRS) on board Taiji-1 is one of the key science payloads and couples strongly to the other instruments, the sub-systems, and the satellite platform itself. Fluctuations of the physical environment inside the satellite and mechanical disturbances of the platform generate important noise contributions in the GRS measurements; therefore, its science data can also be used to evaluate the performance of the μN-thrusters and the stability of the platform. In this work, we report on the methods employed in Taiji-1 GRS data processing for the systematic modeling of the spacecraft orbit and attitude perturbations, mechanical disturbances, and internal environment changes. The modeled noises are then carefully removed from the GRS science data to improve the data quality and the GRS in-orbit performance estimations.
Modern gait analysis results in large quantities of correlated data. A current challenge in the field is the development of appropriate data analysis techniques for the representation and interpretation of these data. Knee osteoarthritis is a common debilitating disease of the musculoskeletal system that has been the focus of many gait studies in recent years. Various data analysis techniques have been used to extract pathological information from gait data in these studies. The following review discusses the successes and limitations of many of these analysis techniques in the attempt to understand the biomechanics of knee osteoarthritis.
This study presents evidence that executable computer programs and human genomes contain similar patterns of repetitive code. When viewed with sequence visualization tools, these similarities are both striking and pervasive. The primary similarities, listed in order of scale, are: (1) homopolymers, (2) tandem repeats, (3) distributed repeats, (4) isochores, and (5) entire chromosome/file organization. Most strikingly, data visualization reveals that executable code regularly makes extensive use of tandem repeats, which exhibit visual patterns similar to those seen in higher genomes. In biology these tandem repeat patterns are normally attributed to replication errors, insertions, deletions, and substitutions. Similarly, on a larger scale, executable code displays regions with different ratios of 1's and 0's which parallel the isochore patterns within chromosomes, caused by local variation in the proportion of A/T vs. G/C. Further, blocks of data are stored at the beginning or end of a file, while the primary instructions occupy the middle of a file. This creates the same organizational patterns observed in human chromosome arms, where repetitive sequences are grouped near the telomeres and centromeres.
I propose that these similarities can be explained by universal constraints on efficient information encoding and execution. The genome may be viewed as the executable program that encodes life. Given the evidence that computer programs and genomes use many of the same patterns of organization, despite having very different contexts, it should be informative to explore the ways in which knowledge of computer architecture can be applied to biology and vice versa.
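A small sketch of the isochore-like analysis mentioned above: scanning an executable in fixed-size windows and reporting the fraction of set bits per window, a rough analogue of the GC-content profiles used to detect isochores (the file path is a placeholder):

```python
import numpy as np

def bit_density_profile(path, window=4096):
    """Fraction of set bits ('1's) per fixed-size window of a binary file."""
    data = np.fromfile(path, dtype=np.uint8)
    usable = len(data) - len(data) % window
    windows = data[:usable].reshape(-1, window)
    # unpackbits expands each byte into 8 bits; the mean over a window is its bit density
    return np.unpackbits(windows, axis=1).mean(axis=1)

# Example (path is a placeholder for any executable on the system):
# profile = bit_density_profile("/usr/bin/ls")
# print(profile.min(), profile.max())   # regions of low vs. high bit density
```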
In recent years, intense usage of computing has been the main strategy of investigation in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago.
This paper focuses on the strategies in use: it reviews the various components of an effective solution that ensures the storage, long-term preservation, and worldwide distribution of the large quantities of data required by a large scientific research project.
The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
The field of gravitational-wave astronomy has been opened up by gravitational-wave observations made with interferometric detectors. This review surveys the state of the art in gravitational-wave detectors and in the data analysis methods currently used by the Laser Interferometer Gravitational-Wave Observatory in the United States and the Virgo Observatory in Italy. These analysis methods will also be used in the recently completed KAGRA Observatory in Japan. Data analysis algorithms are developed to target one of four classes of gravitational waves. Short-duration, transient sources include compact binary coalescences and burst sources originating from poorly modeled or unanticipated sources. Long-duration sources include those which emit continuous signals of consistent frequency and the many unresolved sources forming a stochastic background. A description of potential sources and the search for gravitational waves from each of these classes are detailed.
In this paper, we have considered the generalized cosmic Chaplygin gas (GCCG) in the background of Brans–Dicke (BD) theory and assumed that the Universe is filled with GCCG, dark matter and radiation. To investigate the data fitting of the model parameters, we have constrained the model using recent observations. Using the χ² minimum test, the best-fit values of the model parameters are determined through a joint OHD+CMB+BAO+SNIa data analysis. We have drawn the contour figures for the 1σ, 2σ and 3σ confidence levels. To examine the viability of the GCCG model in BD theory, we have also determined ΔAIC and ΔBIC using the information criteria (AIC and BIC). Graphically, we have analyzed the behavior of the equation of state parameter and the deceleration parameter for our best-fit values of the model parameters. We have also studied the squared speed of sound v_s², which lies in the interval (0,1) during the expansion of the Universe. Thus, for the best-fit values of the model parameters obtained from the data analysis, the considered model is classically stable.
In this paper, we consider the problems of identifying the most appropriate model for a given physical system and of assessing the model contribution to the measurement uncertainty. The above problems are studied in terms of Bayesian model selection and model averaging. As the evaluation of the “evidence” Z, i.e., the integral of Likelihood × Prior over the space of the measurand and the parameters, becomes impracticable when this space has 20–30 dimensions, it is necessary to consider an appropriate numerical strategy. Among the many algorithms for calculating Z, we have investigated ellipsoidal nested sampling, a technique based on three pillars: the study of the iso-likelihood contour lines of the integrand, a probabilistic estimate of the volume of the parameter space contained within the iso-likelihood contours, and random sampling from hyperellipsoids embedded in the space of the integration variables. This paper lays out the essential ideas of this approach.
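A minimal "vanilla" nested-sampling sketch of the evidence estimate for a toy 2-D Gaussian likelihood with a uniform prior on [0, 1]²; the replacement of the worst live point uses naive rejection sampling, whereas the ellipsoidal variant discussed in the paper draws the new point from hyperellipsoids bounding the current live points:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
D, n_live, n_iter, sig = 2, 200, 1400, 0.05

def log_like(theta):
    # Normalized 2-D Gaussian centred at 0.5 with width sig (toy likelihood)
    return -0.5 * np.sum(((theta - 0.5) / sig)**2) - D * np.log(sig * np.sqrt(2 * np.pi))

live      = rng.uniform(size=(n_live, D))
live_logL = np.array([log_like(t) for t in live])

logZ, logX_prev = -np.inf, 0.0
for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    logX  = -i / n_live                                # expected log prior volume remaining
    logw  = np.log(np.exp(logX_prev) - np.exp(logX))   # weight of the discarded shell
    logZ  = np.logaddexp(logZ, live_logL[worst] + logw)
    logX_prev = logX
    while True:                                        # draw a replacement with L > L_worst
        cand = rng.uniform(size=D)
        if log_like(cand) > live_logL[worst]:
            live[worst], live_logL[worst] = cand, log_like(cand)
            break

# Add the contribution of the remaining live points, then report.
logZ = np.logaddexp(logZ, logsumexp(live_logL) + logX_prev - np.log(n_live))
print(f"log Z ~ {logZ:.2f}  (analytic value ~ 0 for this normalized toy likelihood)")
```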
With the rapid development of the sharing economy, bike-sharing has become essential because of its zero emissions, high flexibility and accessibility. The emergence of the public bicycle system not only alleviates traffic pressure to a certain extent, but also contributes to solving the “last kilometer” problem of public transportation. However, due to the concentrated use of shared bikes, many of them are left in disorder, which seriously affects the urban environment and causes traffic problems. How to manage the allocation of shared bikes and improve the city’s shared cycling system has become a highly discussed issue. Taking Beijing as an example, we study the allocation of shared bikes using open-source data provided by Amap, Baidu Map and bike-sharing websites, and we establish a comprehensive evaluation and optimization model to assess the required level of allocation. In the end, we look ahead to the future of the bike-sharing market.
In this paper, we examine the properties of the Jones polynomial using dimensionality reduction learning techniques combined with ideas from topological data analysis. Our data set consists of more than 10 million knots up to 17 crossings and two other special families up to 2001 crossings. We introduce and describe a method for using filtrations to analyze infinite data sets where representative sampling is impossible or impractical, an essential requirement for working with knots and the data from knot invariants. In particular, this method provides a new approach for analyzing knot invariants using Principal Component Analysis. Using this approach on the Jones polynomial data, we find that it can be viewed as an approximately three-dimensional subspace, that this description is surprisingly stable with respect to the filtration by the crossing number, and that the results suggest further structures to be examined and understood.
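A small sketch of the PCA-with-filtration idea, assuming a hypothetical encoding in which each knot is represented by the coefficient vector of its Jones polynomial over a common exponent range (the paper's exact representation is not given in the abstract); the data here are synthetic stand-ins, and the crossing number serves as the filtration parameter:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: random "coefficient vectors" and crossing numbers.
rng = np.random.default_rng(0)
n_knots, span = 5000, 40
coeffs    = rng.integers(-3, 4, size=(n_knots, span)).astype(float)
crossings = rng.integers(3, 18, size=n_knots)

# Filtration by crossing number: run PCA on the sub-sample with at most c
# crossings and track how many components explain 95% of the variance.
for c in (8, 12, 17):
    sub = coeffs[crossings <= c]
    pca = PCA(n_components=0.95).fit(sub)
    print(f"crossings <= {c:2d}: {pca.n_components_} components for 95% variance")
```

On real Jones-polynomial coefficient data, the stability of the leading components across filtration levels is what the abstract describes as the approximately three-dimensional structure.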
Several large-scale gravitational wave (GW) interferometers have achieved long term operation at design sensitivity. Questions arise on how to best combine all available data from detectors of different sensitivities for detection, consistency check or veto, localization and waveform extraction. We show that these problems can be formulated using the singular value decomposition (SVD) method. We present techniques based on the SVD method for (1) detection statistic, (2) stable solutions to waveforms, (3) null-stream construction for an arbitrary number of detectors, and (4) source localization for GWs of unknown waveforms.
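A minimal numerical illustration of the null-stream idea via SVD of the network antenna response matrix, with illustrative response values and toy whitened data (not the paper's formalism in detail):

```python
import numpy as np

# F is the (n_detectors x 2) antenna response matrix to the + and x
# polarizations for an assumed sky location (numbers are illustrative).
F = np.array([[ 0.6,  0.3],
              [-0.2,  0.7],
              [ 0.5, -0.4]])            # 3 detectors, 2 polarizations

U, s, Vt = np.linalg.svd(F)             # F = U diag(s) Vt

# The first two columns of U span the signal space; the remaining columns give
# detector combinations with no response to a GW from that sky position,
# i.e. the null stream(s).
null_vecs = U[:, 2:]

# Toy whitened data: a plus-polarized signal projected through F, plus noise.
rng = np.random.default_rng(0)
h = np.vstack([np.sin(2 * np.pi * 50 * np.linspace(0, 1, 4096)),
               np.zeros(4096)])
d = F @ h + 0.1 * rng.normal(size=(3, 4096))

null_stream = null_vecs.T @ d           # should contain noise only
print("rms of a single detector:", d[0].std().round(3))
print("rms of the null stream: ", null_stream[0].std().round(3))
```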
Exploring the cosmological variations of fundamental dimensionless constants, such as the fine-structure constant, α, the proton-to-electron mass ratio, μ, and the gravitational constant, G, is essential for testing new phenomena beyond the standard cosmological models. This study focuses on investigating the potential variation of constants by analyzing strong gravitational fields associated with white dwarf stars. Utilizing the observed spectrum of G191-B2B, we present a new constraint on the cosmological variation of μ over extended time scales, Δμ/μ = (0.084 ± 1.044) × 10⁻⁸, incorporating the gravitational redshift z ≈ 5 × 10⁻⁵. This finding represents a new tool for checking the parameters of Grand Unified Theories (GUTs).
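The abstract quotes the constraint and the gravitational redshift but not the extraction method; a relation of the following general form is standard in μ-variation analyses (the line list and the sensitivity coefficients K_i for G191-B2B are assumptions not stated above): each observed transition i satisfies

```latex
\frac{\lambda_i^{\mathrm{obs}}}{\lambda_i^{\mathrm{lab}}}
  = (1 + z)\left(1 + K_i\,\frac{\Delta\mu}{\mu}\right),
\qquad z \simeq z_{\mathrm{grav}} \approx 5\times10^{-5},
```

so that fitting many lines with different sensitivity coefficients K_i separates a genuine variation Δμ/μ from the common (gravitational) redshift.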