Network public opinion information resources include text, images, videos, and other modalities, which leads to high sharing loss values. This paper proposes a sharing method for network public opinion information resources based on data analysis and artificial intelligence algorithms. First, based on spatial theory, a spatial model of the emotional dimension of network public opinion big data is constructed to dynamically capture and express the multi-dimensionality and dynamism of public opinion emotions. Subsequently, multimodal neural network technology is used to accurately identify and extract deep features of network public opinion information resources, effectively addressing data heterogeneity. Furthermore, a resource sharing mechanism based on a semantic fusion algorithm is designed and implemented to promote efficient matching and sharing of resources through deep semantic alignment and composite semantic relationship mining. Finally, simulation tests were conducted on four aspects: data analysis, sharing loss values, feature recognition effectiveness, and sharing performance. The results show that the proposed method performs well in quantitative experiments, with lower sharing loss values (about 0.01), more accurate identification of network public opinion big data features, and sharing completion time, average waiting time, and resource download time significantly shorter than those of the comparative methods, at only 7.66 s, 2.03 s, and 5.04 s, respectively, demonstrating its stronger sharing capability and superior performance.
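As an illustration of the semantic-alignment step, the following minimal sketch (not the paper's implementation; the embeddings and resource counts are hypothetical) matches resources across modalities by cosine similarity of pre-computed vectors assumed to live in a shared semantic space.

```python
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Hypothetical pre-computed embeddings of text and image resources,
# assumed to live in the same shared semantic space (dimension 128).
rng = np.random.default_rng(0)
text_embeddings = rng.normal(size=(5, 128))    # 5 text resources
image_embeddings = rng.normal(size=(7, 128))   # 7 image resources

# Deep semantic alignment reduced to nearest-neighbour matching by cosine similarity.
similarity = cosine_similarity_matrix(text_embeddings, image_embeddings)
best_match = similarity.argmax(axis=1)
for i, j in enumerate(best_match):
    print(f"text resource {i} -> image resource {j} (similarity {similarity[i, j]:.3f})")
```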
With the deep integration of embedded network technology and data analysis technology, the content analysis and dissemination optimization of platform media have undergone unprecedented technological innovation. Traditional information dissemination models often rely on manually designed feature extraction methods, such as keyword matching and statistical measures like TF-IDF. These methods are limited in efficiency and accuracy when dealing with large-scale, high-dimensional data. Embedded network technology can automatically extract high-level, abstract features from raw text data, significantly improving the accuracy and efficiency of feature extraction. In this context, this paper innovatively integrates and reconstructs the core elements of traditional information dissemination models on the basis of embedded networks, and constructs a new knowledge graph model of the current state of news dissemination. The model takes full advantage of the feature extraction and learning capabilities of embedded networks. It can not only accurately depict the complex dynamic changes of news events over time, but also reveal the underlying mechanisms and laws of news dissemination. In the experimental verification phase, the model demonstrated excellent performance. In particular, for identifying feature words of breaking news, the average recall, accuracy, and F-score of the model reached about 45%, 35%, and 40%, respectively. These results indicate a significant improvement in the accuracy of feature word detection and validate the potential of embedded networks in the field of news dissemination analysis.
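For reference, the traditional TF-IDF baseline named above can be reproduced in a few lines with scikit-learn; the toy corpus below is a made-up example, not data from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for news items (illustrative only).
corpus = [
    "flood warning issued for coastal towns after heavy rain",
    "heavy rain disrupts rail services in the capital",
    "election results announced after record turnout",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

# Top-weighted terms per document: the hand-crafted features that an
# embedding-based model would replace with learned representations.
terms = vectorizer.get_feature_names_out()
for row in tfidf.toarray():
    top = row.argsort()[::-1][:3]
    print([terms[i] for i in top])
```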
For most high-precision experiments in particle physics, it is essential to know the luminosity to the highest accuracy. The luminosity is determined by the convolution of the particle densities of the colliding beams. In special van der Meer transverse beam-separation scans, the convolution function is sampled along the horizontal and vertical axes in order to determine the beam convolution and obtain an absolute luminosity calibration. For this purpose, the van der Meer data of luminometer rates are fitted separately in the two directions with the analytic functions giving the best description. Under the assumption that the 2D convolution shape is factorizable, it can be calculated from the two 1D fits. The task of XY factorization analyses is to check this assumption and to give a quantitative measure of the effect of nonfactorizability on the calibration constant, so as to improve the accuracy of luminosity measurements.
We perform a dedicated analysis to study XY nonfactorization on proton–proton data collected in 2022 at √s = 13.6 TeV by the CMS experiment [The CMS Collab., Luminosity measurement in proton–proton collisions at √s = 13.6 TeV in 2022 at CMS (2024)]. A detailed examination of the shape of the bunch convolution function is presented, studying various biases, and choosing the best-fit analytic 2D functions to finally obtain the correction and its uncertainty.
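A minimal sketch of the factorization assumption (illustrative only, with made-up Gaussian scan data rather than CMS rates): the X and Y scan profiles are fitted separately and the 2D overlap is taken as their product, which is exactly the assumption an XY nonfactorization analysis tests.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Simulated van der Meer scan points (beam separation in mm vs. luminometer rate).
sep = np.linspace(-0.6, 0.6, 25)
rng = np.random.default_rng(1)
rate_x = gauss(sep, 1.0, 0.0, 0.15) * (1 + 0.01 * rng.normal(size=sep.size))
rate_y = gauss(sep, 1.0, 0.0, 0.12) * (1 + 0.01 * rng.normal(size=sep.size))

# Separate 1D fits along the horizontal and vertical scan directions.
px, _ = curve_fit(gauss, sep, rate_x, p0=[1, 0, 0.1])
py, _ = curve_fit(gauss, sep, rate_y, p0=[1, 0, 0.1])
sigma_x, sigma_y = abs(px[2]), abs(py[2])

# Under the factorization assumption the effective overlap area is the
# product of the two 1D convolved widths (times 2*pi for Gaussian shapes).
print(f"Sigma_x = {sigma_x:.4f} mm, Sigma_y = {sigma_y:.4f} mm")
print(f"Factorized effective area ~ 2*pi*Sigma_x*Sigma_y = {2*np.pi*sigma_x*sigma_y:.5f} mm^2")
```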
By employing summary statistics obtained from Persistent Homology (PH), we investigate the influence of Redshift Space Distortions (RSD) on the topology of excursion sets formed through the super-level filtration method applied to three-dimensional matter density fields. The synthetic fields simulated by the Quijote suite in both real and redshift space are smoothed with a Gaussian smoothing function at different scales. RSD leads to a tendency for clusters (β̃0) to shift toward higher thresholds, while filament loops (β̃1) and cosmic voids (β̃2) migrate toward lower thresholds. Notably, β̃2 exhibits greater sensitivity to RSD compared to clusters and independent loops. As the smoothing scale increases, the amplitude of the reduced Betti number curves (β̃k) decreases, and the corresponding peak position shifts toward the mean threshold. Conversely, the amplitude of β̃k remains almost unchanged with variations in redshift for z ∈ [0, 3]. The analysis of persistent entropy and the overall abundance of k-holes indicates that the linear Kaiser effect plays a significant role compared to the nonlinear effect for R ≳ 30 Mpc h⁻¹ at z = 0, whereas persistent entropy proves to be a reliable measure against nonlinear influences.
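Betti curves of a super-level filtration can be computed, for example, with a cubical complex; the sketch below uses GUDHI (one possible tool, not necessarily the one used in the paper) on a smoothed random Gaussian field as a stand-in for the Quijote density grids, negating the field so that GUDHI's sub-level filtration corresponds to a super-level filtration of the density.

```python
import numpy as np
import gudhi
from scipy.ndimage import gaussian_filter

# Stand-in for a smoothed matter density contrast field (Quijote-like grids not included here).
rng = np.random.default_rng(2)
field = gaussian_filter(rng.normal(size=(32, 32, 32)), sigma=2.0)
nu = (field - field.mean()) / field.std()         # threshold in units of the field's sigma

# Super-level filtration of nu == sub-level filtration of -nu.
cc = gudhi.CubicalComplex(top_dimensional_cells=-nu)
diagram = cc.persistence()

def betti_curve(diagram, dim, filtration_values):
    """Number of dim-dimensional holes alive at each filtration value."""
    intervals = [(b, d) for k, (b, d) in diagram if k == dim]
    return [sum(b <= t < d for b, d in intervals) for t in filtration_values]

thresholds = np.linspace(-3, 3, 61)
for k in (0, 1, 2):                                # clusters, loops, voids
    curve = betti_curve(diagram, k, -thresholds)   # negate back to the super-level convention
    print(f"beta_{k} peak count: {max(curve)}")
```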
The paper presents results on factorization by similarity of fuzzy concept lattices with hedges. A fuzzy concept lattice is a hierarchically ordered collection of clusters extracted from tabular data. The basic idea of factorization by similarity is to work, instead of with a possibly large original fuzzy concept lattice, with its factor lattice. The factor lattice contains fewer clusters than the original concept lattice but, at the same time, represents a reasonable approximation of the original concept lattice and provides a granular view of it. The factor lattice results from factorization of the original fuzzy concept lattice by a similarity relation. The similarity relation is specified by the user by means of a single parameter, called a similarity threshold. Smaller similarity thresholds lead to smaller factor lattices, i.e. to more comprehensible but less accurate approximations of the original concept lattice. Therefore, factorization by similarity provides a trade-off between comprehensibility and precision.
We first describe the notion of factorization. Second, we present a way to compute the factor lattice directly from input data, i.e. without the need to compute the possibly large original concept lattice. Third, we provide an illustrative example to demonstrate our method.
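As a rough illustration of the idea (not the fuzzy machinery with hedges of the paper), the sketch below enumerates the formal concepts of a tiny crisp context and then greedily groups concepts whose extents are similar above a user-chosen threshold, which is the role the similarity threshold plays in the factorization; the context, the Jaccard similarity, and the greedy grouping are all simplifying assumptions.

```python
from itertools import combinations

# Toy binary context: objects x attributes (a crisp stand-in for the fuzzy case).
objects = ["o1", "o2", "o3", "o4"]
attributes = ["a", "b", "c"]
incidence = {
    "o1": {"a", "b"},
    "o2": {"a", "b", "c"},
    "o3": {"b", "c"},
    "o4": {"c"},
}

def up(extent):          # objects -> attributes shared by all of them
    return set(attributes) if not extent else set.intersection(*(incidence[o] for o in extent))

def down(intent):        # attributes -> objects having all of them
    return {o for o in objects if intent <= incidence[o]}

# Enumerate all formal concepts (extent, intent) by brute force (fine for a toy context).
concepts = []
for r in range(len(objects) + 1):
    for ext in combinations(objects, r):
        intent = up(set(ext))
        extent = down(intent)
        if (extent, intent) not in concepts:
            concepts.append((extent, intent))

# Similarity of two concepts: Jaccard index of their extents, a crude stand-in
# for the fuzzy similarity relation parameterized by the similarity threshold.
def sim(c1, c2):
    e1, e2 = c1[0], c2[0]
    return 1.0 if not (e1 | e2) else len(e1 & e2) / len(e1 | e2)

threshold = 0.5
blocks = []                          # greedy grouping of similar concepts into blocks
for c in concepts:
    for block in blocks:
        if all(sim(c, other) >= threshold for other in block):
            block.append(c)
            break
    else:
        blocks.append([c])

print(f"{len(concepts)} concepts reduced to {len(blocks)} blocks at threshold {threshold}")
```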
We have performed parallel large-scale molecular-dynamics simulations on the QSC machine at Los Alamos. The good scalability of the SPaSM code is demonstrated, together with its capability for efficient data analysis of enormous system sizes of up to 19 000 416 964 particles. Furthermore, we introduce a newly developed graphics package that renders, in a very efficient parallel way, the huge number of spheres necessary for the visualization of atomistic simulations. These capabilities pave the way for future atomistic large-scale simulations of physical problems with system sizes on the micrometer scale.
In recent years, intensive use of computing has been the main investigative strategy in several scientific research projects. Progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data and the associated analysis, which were considered impossible only a few years ago.
This paper focuses on the strategies in use: it reviews the various components necessary for an effective solution ensuring the storage, long-term preservation, and worldwide distribution of the large quantities of data required by a large scientific research project.
The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
Unsupervised statistical learning (USL) techniques, such as self-organizing maps (SOMs), principal component analysis (PCA), and independent component analysis (ICA), exploit different statistical properties to efficiently process information from multiple variables. USL algorithms have been successfully applied in experimental high-energy physics (HEP) and related areas for different purposes, such as feature extraction, signal detection, noise reduction, signal-background separation, and removal of cross-interference from multiple signal sources in multisensor measurement systems. This paper presents both a review of the theoretical aspects of these signal processing methods and examples of successful applications in experiments in HEP and related areas.
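A minimal example of the kind of processing reviewed here, using scikit-learn on simulated mixed sensor signals (an illustration, not a reproduction of the HEP applications): PCA for decorrelation and dimensionality reduction, and FastICA for separating statistically independent sources, i.e. cross-interference removal.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Simulated multisensor measurement: two independent sources mixed into three sensors.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 7 * t),             # periodic "signal"
                np.sign(np.sin(2 * np.pi * 3 * t))]     # square-wave "interference"
mixing = np.array([[1.0, 0.5], [0.4, 1.0], [0.8, 0.9]])
observations = sources @ mixing.T + 0.02 * rng.normal(size=(t.size, 3))

# PCA: decorrelate and reduce dimensionality (feature extraction / noise reduction).
pca = PCA(n_components=2)
whitened = pca.fit_transform(observations)
print("explained variance ratio:", pca.explained_variance_ratio_)

# ICA: recover statistically independent sources (cross-interference removal).
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observations)
print("recovered source matrix shape:", recovered.shape)
```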
The worldwide spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19, led the World Health Organization to declare a pandemic. The disease appeared in China in December 2019 and spread rapidly around the world, especially in European countries like Italy and Spain. The first reported case in Brazil was recorded on February 26, and after that the number of cases grew fast. In order to slow down the initial growth of the disease through the country, confirmed positive cases were isolated so as not to transmit the disease. To better understand the early evolution of COVID-19 in Brazil, we apply a Susceptible–Infectious–Quarantined–Recovered (SIQR) model to the analysis of data from the Brazilian Department of Health, obtained from February 26, 2020 through March 25, 2020. Based on analytical and numerical results, as well as on the data, the basic reproduction number is estimated to be R0 = 5.25. In addition, we estimate that the ratio between unidentified infectious individuals and confirmed cases at the beginning of the epidemic is about 10, in agreement with previous studies. We also estimate the epidemic doubling time to be 2.72 days.
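A sketch of integrating an SIQR system is given below. The rate values, their split between quarantine and recovery, and the population size are assumptions for illustration; only R0 = 5.25 and the 2.72-day doubling time come from the abstract, and they are imposed through the standard SIQR relations R0 = β/(η + α) and early growth rate β − η − α.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIQR compartments: Susceptible, Infectious (free), Quarantined, Removed.
# Rates chosen to reproduce the quoted R0 and doubling time:
#   R0 = beta / (eta + alpha) = 5.25,  doubling time = ln 2 / (beta - eta - alpha) = 2.72 d.
doubling_time = 2.72
R0 = 5.25
growth = np.log(2) / doubling_time           # early exponential growth rate (1/day)
eta_plus_alpha = growth / (R0 - 1)           # total removal rate of free infectious
beta = R0 * eta_plus_alpha                   # transmission rate
eta, alpha = 0.8 * eta_plus_alpha, 0.2 * eta_plus_alpha   # split assumed for illustration
gamma = 1 / 14                               # recovery rate of quarantined (assumed)
N = 210e6                                    # approximate Brazilian population

def siqr(t, y):
    S, I, Q, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - (eta + alpha) * I
    dQ = eta * I - gamma * Q
    dR = gamma * Q + alpha * I
    return [dS, dI, dQ, dR]

sol = solve_ivp(siqr, (0, 60), [N - 1, 1, 0, 0], dense_output=True)
days = np.arange(0, 61, 10)
I_t = sol.sol(days)[1]
print("free infectious individuals:", np.round(I_t).astype(int))
```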
The size of a random surface can be measured in at least two ways, via two lengths: the radius of gyration and the smallest length such that a box constructed with this length contains the surface. In this paper it is shown that both lengths are non-self-averaging.
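Non-self-averaging means that the relative variance of an observable over the ensemble does not vanish as the system size grows. The toy check below uses random walks (not the random surfaces of the paper) merely to illustrate how the radius of gyration and the relative variance used for such a diagnosis would be computed.

```python
import numpy as np

def radius_of_gyration_sq(points):
    """R_g^2 = mean squared distance of the points from their centre of mass."""
    com = points.mean(axis=0)
    return np.mean(np.sum((points - com) ** 2, axis=1))

rng = np.random.default_rng(4)
for n in (100, 1000, 10000):                      # increasing system size
    samples = []
    for _ in range(200):                          # ensemble of random configurations
        steps = rng.choice([-1, 1], size=(n, 3))  # toy stand-in: 3D random walks
        samples.append(radius_of_gyration_sq(np.cumsum(steps, axis=0)))
    samples = np.array(samples)
    rel_var = samples.var() / samples.mean() ** 2  # does not vanish if non-self-averaging
    print(f"n = {n:6d}  <R_g^2> = {samples.mean():10.1f}  relative variance = {rel_var:.3f}")
```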
The field of gravitational-wave astronomy has been opened up by gravitational-wave observations made with interferometric detectors. This review surveys the current state of the art in gravitational-wave detectors and the data analysis methods currently used by the Laser Interferometer Gravitational-Wave Observatory in the United States and the Virgo Observatory in Italy. These analysis methods will also be used in the recently completed KAGRA Observatory in Japan. Data analysis algorithms are developed to target one of four classes of gravitational waves. Short-duration, transient sources include compact binary coalescences and bursts originating from poorly modeled or unanticipated sources. Long-duration sources include those emitting continuous signals of nearly constant frequency and the stochastic background formed by many unresolved sources. A description of potential sources and the search for gravitational waves from each of these classes are detailed.
In this paper, we study the late accelerating expansion of the universe by incorporating bulk viscous matter with the running vacuum. The running vacuum is assumed to vary as the square of the Hubble parameter (ρΛ ∝ H²), while the coefficient of bulk viscosity of matter is taken to be proportional to the Hubble parameter (ξ ∝ H). We have analytically solved for the Hubble parameter and estimated the model parameters using the combined data set SNIa+CMB+BAO+OHD+QSO. The evolution of the cosmological parameters was analyzed, and the universe's age is estimated to be 13.94 Gyr. The evolution of the universe in the present model marked considerable improvement compared to bulk viscous matter-dominated models. The transition from the matter-dominated decelerated phase to the vacuum energy-dominated accelerating phase occurred at a transition redshift zT = 0.73, and the evolution asymptotically approaches a de Sitter epoch. We have obtained the coefficient of bulk viscosity of the matter component as 9.94×10⁴ kg m⁻¹ s⁻¹, which is two orders of magnitude less than the value predicted by most of the bulk viscous matter-dominated models. The statefinder analysis distinguishes our model from the ΛCDM model at present, and the r−s trajectory reveals the quintessence behavior of the vacuum energy. The model was found to satisfy the generalized second law of thermodynamics, and the entropy is maximized in the far future evolution.
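Under a simple parameterization of this class of models, ρΛ = ρΛ0 + 3ν(H² − H0²) and viscous pressure Π = −3ξ̃H² (in units 8πG = 1), the Friedmann equation and total energy conservation combine into a single first-order equation for H(a). The sketch below integrates it numerically; the parameter values and the specific parameterization are assumptions for illustration, not the paper's analytic solution or best-fit values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions for this sketch, not the paper's best-fit values):
# running vacuum rho_L = rho_L0 + 3*nu*(H^2 - H0^2) and bulk viscous pressure Pi = -3*xi*H^2,
# in units 8*pi*G = 1 with H expressed in units of H0.
nu, xi, omega_L0 = 0.01, 0.05, 0.70

def dh_da(a, h):
    # Combining 3H^2 = rho_m + rho_L with total energy conservation (including
    # the viscous pressure) gives one first-order equation for h(a) = H/H0.
    return (-3.0 * (1.0 - nu - xi) * h**2 + 3.0 * (omega_L0 - nu)) / (2.0 * a * h)

# Integrate backward from today (a = 1, h = 1) into the matter-dominated past.
sol = solve_ivp(dh_da, (1.0, 0.1), [1.0], dense_output=True, rtol=1e-8)

def deceleration(a):
    h = sol.sol(a)[0]
    return -1.0 - (a / h) * dh_da(a, h)

a_past = np.linspace(0.1, 1.0, 2000)
q = np.array([deceleration(a) for a in a_past])
a_T = a_past[np.argmin(np.abs(q))]                   # approximate q = 0 crossing
print(f"q(today) = {deceleration(1.0):.2f}, transition redshift z_T ~ {1/a_T - 1:.2f}")
```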
A key feature of collaboration in science and software development is having a log of what is being done and how, both for private use and reuse and for sharing selected parts with collaborators, who today are most often geographically distributed on an ever larger scale. Even better if this log is automatic, created on the fly while a scientist or software developer works in a habitual way, without the need for extra effort. The CAVES and CODESH projects address this problem in a novel way, building on the concepts of virtual state and virtual transition to provide an automatic persistent logbook for sessions of data analysis or software development in a collaborating group. A repository of sessions can be configured dynamically to record and make available the knowledge accumulated in the course of a scientific or software endeavor. Access can be controlled to define logbooks of private sessions and sessions shared within or between collaborating groups.
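The idea of automatically logging transitions between session states can be illustrated with a minimal sketch (not the CAVES/CODESH implementation; the class and file names are hypothetical): each executed command and the names it defines are appended to a persistent log that could later be shared or replayed.

```python
import datetime
import json

class SessionLog:
    """Minimal automatic logbook: records each command and the state change it causes."""

    def __init__(self, path="session_log.jsonl"):
        self.path = path
        self.namespace = {"__builtins__": __builtins__}

    def run(self, command: str):
        before = set(self.namespace)
        exec(command, self.namespace)                 # execute in the session's namespace
        entry = {
            "time": datetime.datetime.now().isoformat(),
            "command": command,
            # Newly defined names: a crude record of the "virtual transition".
            "new_names": sorted(set(self.namespace) - before),
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

log = SessionLog()
log.run("x = [1, 2, 3]")
log.run("total = sum(x)")
print(open(log.path).read())
```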
We present a global analysis of the latest solar and reactor neutrino data in the three-neutrino mixing scheme, using both simple grid scanning and Markov Chain Monte Carlo (MCMC) sampling methods. The accuracy and efficiency of the two sampling methods are compared and the advantages of the latter are discussed. The fitting results for the three oscillation parameters θ12, θ13, and Δm²21 are provided with both the old and new evaluations of the reactor antineutrino flux. Possible correlations between the fitted parameters are also discussed.
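The contrast between grid scanning and MCMC sampling can be illustrated on a toy two-parameter χ² surface; the correlated Gaussian below is a stand-in for the actual oscillation likelihood, and a plain Metropolis-Hastings sampler stands in for whatever MCMC implementation the analysis uses.

```python
import numpy as np

# Toy chi^2 surface standing in for the oscillation-parameter likelihood
# (a correlated Gaussian in two parameters; purely illustrative).
TRUE = np.array([0.31, 0.022])
COV_INV = np.linalg.inv(np.array([[4e-4, 1e-5], [1e-5, 4e-6]]))

def chi2(theta):
    d = theta - TRUE
    return d @ COV_INV @ d

# 1) Simple grid scan: the number of evaluations grows as N_points**ndim.
g1 = np.linspace(0.25, 0.37, 200)
g2 = np.linspace(0.012, 0.032, 200)
grid = np.array([[chi2(np.array([a, b])) for b in g2] for a in g1])
i, j = np.unravel_index(grid.argmin(), grid.shape)
print("grid-scan best fit:", g1[i], g2[j], "using", grid.size, "evaluations")

# 2) Metropolis-Hastings MCMC: the cost scales much more gently with dimension.
rng = np.random.default_rng(5)
theta = np.array([0.30, 0.020])
current = chi2(theta)
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(scale=[0.005, 0.0005])
    c = chi2(proposal)
    if c < current or rng.random() < np.exp(-0.5 * (c - current)):
        theta, current = proposal, c
    chain.append(theta.copy())
chain = np.array(chain[5000:])                                 # discard burn-in
print("MCMC posterior mean:", chain.mean(axis=0))
print("MCMC posterior std :", chain.std(axis=0))
```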
In this paper, we introduce model-independent data analysis procedures for identifying inelastic WIMP-nucleus scattering as well as for reconstructing the mass and the mass splitting of inelastic WIMPs simultaneously and separately. Our simulations show that, with 𝒪(50) observed WIMP signals from one experiment, one could already distinguish the inelastic WIMP scattering scenarios from the elastic one. By combining two or more data sets with positive signals, the WIMP mass and the mass splitting could even be reconstructed with statistical uncertainties of less than a factor of two.
In this paper, we revisit our model-independent methods developed for reconstructing the properties of Weakly Interacting Massive Particles (WIMPs) directly from the recoil energies measured in direct Dark Matter detection experiments, taking into account more realistically a non-negligible threshold energy. All expressions for reconstructing the mass and the (ratios between the) spin-independent and spin-dependent WIMP–nucleon couplings have been modified accordingly. We focus on low-mass (mχ ≲ 15 GeV) WIMPs and present numerical results obtained from Monte Carlo simulations. Constraints caused by the non-negligible threshold energy and technical treatments for improving the reconstruction results are also discussed.
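The effect of a non-negligible threshold energy can be seen in a toy Monte Carlo: recoil energies from an approximately exponential spectrum are generated, a threshold cut is applied, and the bias it induces in the measured moments (the kind of input the model-independent reconstruction uses) is displayed. The spectrum shape, slope, and event counts are assumptions for illustration, not the simulations of the paper.

```python
import numpy as np

# Toy recoil-energy spectrum: approximately exponential with slope E0 (keV),
# a crude stand-in for a low-mass WIMP recoil spectrum.
rng = np.random.default_rng(6)
E0 = 3.0                        # keV, assumed spectral slope
events = rng.exponential(E0, size=50_000)

for Q_thr in (0.0, 0.5, 1.0, 2.0):                  # threshold energies in keV
    above = events[events > Q_thr]
    # First two moments of the measured spectrum; reconstruction formulas use such
    # moments, so a threshold-induced shift propagates into the WIMP mass estimate.
    m1, m2 = above.mean(), (above**2).mean()
    print(f"Q_thr = {Q_thr:3.1f} keV  <Q> = {m1:5.2f}  <Q^2> = {m2:6.2f}  "
          f"(events kept: {above.size})")
```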
Taiji-1 is the first technology demonstration satellite of the Taiji Program in Space, which, serving as the pre-PathFinder mission, has finished its nominal science operation phase and successfully accomplished its mission goals. The gravitational reference sensor (GRS) on board Taiji-1 is one of the key science payloads and is strongly coupled to the other instruments, the sub-systems, and the satellite platform itself. Fluctuations of the physical environment inside the satellite and mechanical disturbances of the platform generate important noise contributions in the GRS measurements; their science data can therefore also be used to evaluate the performance of the μN-thrusters and the stability of the platform. In this work, we report on the methods employed in Taiji-1 GRS data processing for the systematic modeling of spacecraft orbit and attitude perturbations, mechanical disturbances, and internal environment changes. The modeled noises are then carefully removed from the GRS science data to improve the data quality and the GRS in-orbit performance estimates.
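One generic way to remove platform-induced disturbances from a payload time series is a linear regression against simultaneously recorded witness channels (attitude, temperature, thruster commands) followed by subtraction of the fitted contribution. The sketch below illustrates this approach on synthetic data; it is not the actual Taiji-1 pipeline, and all signals and coupling coefficients are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
t = np.arange(n) / 10.0                              # synthetic 0.1 s sampling

# Synthetic witness channels: attitude jitter and internal temperature drift.
attitude = 1e-3 * np.sin(2 * np.pi * 0.05 * t) + 1e-4 * rng.normal(size=n)
temperature = 0.01 * t / t[-1] + 1e-3 * rng.normal(size=n)

# Synthetic GRS readout: intrinsic noise plus couplings to the witness channels.
intrinsic = 1e-6 * rng.normal(size=n)
grs = intrinsic + 2.0e-3 * attitude + 5.0e-4 * temperature

# Least-squares fit of the coupling coefficients and subtraction of the model.
design = np.column_stack([attitude, temperature, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(design, grs, rcond=None)
cleaned = grs - design @ coeffs

print("rms before subtraction:", grs.std())
print("rms after  subtraction:", cleaned.std())
print("rms of intrinsic noise:", intrinsic.std())
```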
In this paper, we have considered the generalized cosmic Chaplygin gas (GCCG) in the background of Brans–Dicke (BD) theory and assumed that the Universe is filled with GCCG, dark matter, and radiation. To investigate the data fitting of the model parameters, we have constrained the model using recent observations. Using the χ² minimum test, the best-fit values of the model parameters are determined from a joint OHD+CMB+BAO+SNIa data analysis. We have drawn the contour figures for the 1σ, 2σ, and 3σ confidence levels. To examine the viability of the GCCG model in BD theory, we have also determined ΔAIC and ΔBIC using the information criteria (AIC and BIC). Graphically, we have analyzed the behavior of the equation of state parameter and the deceleration parameter for our best-fit values of the model parameters. Also, we have studied the squared speed of sound v²s, which lies in the interval (0, 1) during the expansion of the Universe. Thus, for the best-fit values of the model parameters obtained from the data analysis, the considered model is classically stable.
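For two displayed parameters, the 1σ, 2σ, and 3σ joint confidence contours of a χ² fit correspond to Δχ² = χ² − χ²_min levels of about 2.30, 6.18, and 11.83. The toy below evaluates such contours on a made-up χ² surface, not the actual OHD+CMB+BAO+SNIa likelihood.

```python
import numpy as np

# Standard Delta-chi^2 levels for joint confidence regions of two parameters.
levels = {"1 sigma": 2.30, "2 sigma": 6.18, "3 sigma": 11.83}

# Made-up chi^2 surface over two model parameters (illustrative only).
p1 = np.linspace(-1.0, 1.0, 400)
p2 = np.linspace(-1.0, 1.0, 400)
P1, P2 = np.meshgrid(p1, p2)
chi2 = 40.0 + 25.0 * (P1 - 0.2) ** 2 + 60.0 * (P2 + 0.1) ** 2 + 10.0 * (P1 - 0.2) * (P2 + 0.1)

delta = chi2 - chi2.min()
for name, lvl in levels.items():
    frac = (delta <= lvl).mean()
    print(f"{name}: Delta chi^2 <= {lvl:5.2f} covers {frac:6.2%} of the scanned grid")
```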
In this paper, we have considered the flat Friedmann–Robertson–Walker (FRW) model of the universe and reviewed the modified Chaplygin gas as the fluid source. Associated with the scalar field model, we have determined the Hubble parameter as a generating function in terms of the scalar field. Instead of a hyperbolic function, we have taken the Jacobi elliptic function and the Abel function in the generating function and obtained the modified Chaplygin–Jacobi gas (MCJG) and modified Chaplygin–Abel gas (MCAG) equations of state, respectively. Next, we have assumed that the universe is filled with dark matter, radiation, and dark energy, with MCJG and MCAG as the dark energy candidates. We have constrained the model parameters by recent observational data analysis. Using the χ² minimum test (maximum likelihood estimation), we have determined the best-fit values of the model parameters from a joint OHD+CMB+BAO+SNIa data analysis. To examine the viability of the MCJG and MCAG models, we have determined the deviations of the information criteria, ΔAIC, ΔBIC, and ΔDIC. The evolution of cosmological and cosmographic parameters (equation of state, deceleration, jerk, snap, lerk, statefinder, Om diagnostic) has been studied for our best-fit values of the model parameters. To check the classical stability of the models, we have examined whether the squared speed of sound v²s lies in the interval (0, 1) during the expansion of the universe.
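Once χ²_min is known, the information-criterion comparison reduces to simple arithmetic: AIC = χ²_min + 2k and BIC = χ²_min + k ln N for k free parameters and N data points, with Δ values taken relative to a reference model (the DIC additionally requires posterior samples, so it is omitted here). The numbers below are placeholders, not the paper's fit results.

```python
import numpy as np

def aic(chi2_min, k):
    return chi2_min + 2 * k

def bic(chi2_min, k, n_data):
    return chi2_min + k * np.log(n_data)

# Placeholder fit results for a reference model and a test model (illustrative only).
n_data = 600                                  # size of the combined data set, assumed
reference = {"chi2_min": 562.0, "k": 3}       # baseline model with 3 free parameters
test_model = {"chi2_min": 558.5, "k": 5}      # test model with 2 extra parameters

d_aic = aic(**test_model) - aic(**reference)
d_bic = (bic(test_model["chi2_min"], test_model["k"], n_data)
         - bic(reference["chi2_min"], reference["k"], n_data))
print(f"Delta AIC = {d_aic:+.2f}, Delta BIC = {d_bic:+.2f}")
# Conventionally, |Delta| < 2 is read as comparable support, while larger values
# indicate increasing evidence against the model with the higher criterion value.
```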
In this paper, we consider the problems of identifying the most appropriate model for a given physical system and of assessing the model contribution to the measurement uncertainty. These problems are studied in terms of Bayesian model selection and model averaging. As the evaluation of the “evidence” Z, i.e., the integral of Likelihood × Prior over the space of the measurand and the parameters, becomes impracticable when this space has 20–30 dimensions, an appropriate numerical strategy is necessary. Among the many algorithms for calculating Z, we have investigated ellipsoidal nested sampling, a technique based on three pillars: the study of the iso-likelihood contour lines of the integrand, a probabilistic estimate of the volume of the parameter space contained within the iso-likelihood contours, and random sampling from hyperellipsoids embedded in the space of the integration variables. This paper lays out the essential ideas of this approach.
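A minimal nested sampling loop (with the ellipsoidal bounding step replaced by naive rejection sampling from the prior) shows how Z is accumulated from the prior-volume shrinkage along iso-likelihood contours. The Gaussian likelihood, the uniform prior, and all run settings below are assumptions for this sketch, not the algorithm configuration of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
ndim, n_live, n_iter = 2, 200, 1200

def log_likelihood(theta):
    # Toy Gaussian likelihood (the "Likelihood" factor in Z), sigma = 0.2 per dimension.
    return -0.5 * np.sum((theta / 0.2) ** 2) - ndim * np.log(0.2 * np.sqrt(2 * np.pi))

def sample_prior(size=1):
    # Uniform prior on [-1, 1]^ndim; its mass is tracked through the volume variable X.
    return rng.uniform(-1.0, 1.0, size=(size, ndim))

live = sample_prior(n_live)
live_logL = np.array([log_likelihood(p) for p in live])
log_Z, log_X = -np.inf, 0.0                         # running evidence and prior volume

for i in range(n_iter):
    worst = np.argmin(live_logL)                    # lowest-likelihood live point
    log_X_new = -(i + 1) / n_live                   # expected shrinkage of the prior volume
    log_w = np.log(np.exp(log_X) - np.exp(log_X_new))   # weight of the discarded point
    log_Z = np.logaddexp(log_Z, live_logL[worst] + log_w)
    log_X = log_X_new
    # Replace the discarded point by a new prior draw above the likelihood constraint.
    # (Ellipsoidal nested sampling draws from a bounding hyperellipsoid instead of
    # using this naive rejection step.)
    while True:
        candidate = sample_prior(1)[0]
        cand_logL = log_likelihood(candidate)
        if cand_logL > live_logL[worst]:
            live[worst], live_logL[worst] = candidate, cand_logL
            break

# Contribution of the remaining live points.
log_Z = np.logaddexp(log_Z, np.log(np.mean(np.exp(live_logL))) + log_X)
print("estimated log-evidence log Z ~", log_Z)
```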