https://doi.org/10.1142/S012915642540347X
The ecological suitability evaluation of low-carbon urban planning is affected by the irregular distribution of ground buildings, which lowers evaluation accuracy. Therefore, an ecological suitability evaluation method for low-carbon urban planning based on big data technology is proposed. A joint feature detection method for building and block data is used to realize visual remote sensing detection of low-carbon urban spatial planning. The collected remote sensing images of low-carbon urban spatial planning are vertically connected, spatially embedded, and vertically stacked, and the spatial planning parameters are extracted. Edge contour detection and feature clustering analysis are then carried out on the remote sensing images. Irregular points in the low-carbon city ecological suitability spatial image are marked, and the multi-spectral and panchromatic images are fused to generate a change detection map for the ecological suitability evaluation of low-carbon city planning; the distribution of this detection map supports the optimized evaluation of the low-carbon city ecological suitability space. Experiments show that the proposed method achieves high accuracy in the ecological suitability evaluation of low-carbon city planning and good detection of the model parameters for urban building space matching, improving the ability to evaluate the ecological suitability space of low-carbon cities.
https://doi.org/10.1142/S0129156425403274
To solve the problem of unit noise and vibration causing inconvenience to residents, this paper designs a unit noise and vibration control system. Adaptive active noise and vibration control is adopted, with a DSP as the core of the entire system, which comprises an acoustic sensor part, a peripheral signal conditioning part, and a signal processing control part, enabling real-time control of unit noise and vibration. In addition, an improved independent component analysis method is used to achieve noise reduction of unit noise and vibration. Finally, a unit noise and vibration detection system is designed, in which MEMS microphone detection is applied to achieve real-time detection of unit noise and vibration. Experiments show that the noise reduction effect of the proposed system is good. At a distance of 1 m from the unit, the noise value is reduced from 67 dB to 48 dB; at a distance of 22 m from the unit, the noise drops to 37 dB, and the noise values detected by the system differ little from the actual noise values.
https://doi.org/10.1142/S0129156425403080
In view of the fact that hydropower station equipment is prone to multiple faults during operation, and to detect SF6 gas leakage faults, this paper proposes a gas detection method based on differential photoacoustic spectroscopy. First, differential absorption photoacoustic spectroscopy is used to detect flowing SF6 gas. The system includes a detection module, an acquisition module, a sampling module, an amplification module, a control module, and a calculation module. Nitrogen is used as the driving gas to control the six-way valve for sample exchange: SF6 gas is injected quantitatively, samples are transmitted, and the photoacoustic signal is amplified and analyzed. Then, the equipment state-space model is constructed and a particle filter algorithm is applied for state-variable estimation, with the process divided into state prediction, update, and resampling. Finally, the residual is obtained by comparing the real-time measured value with the estimated value, and an adaptive threshold method is added to detect equipment faults and avoid false alarms. The experimental results show that the system detects a vibration signal of 0.89 V at a current of 1000 A and provides good early warning of contact-system faults and point faults.
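The residual-plus-adaptive-threshold step described above can be sketched as follows. This is an illustrative outline only: the sliding-window statistics and the factor `k` are assumptions, not the authors' implementation.

```python
import statistics

def detect_fault(measured, estimated, window=5, k=3.0):
    """Flag samples whose residual exceeds an adaptive threshold.

    The residual is |measured - estimated|; the threshold adapts as
    mean + k * stdev of the residuals in a sliding window, so slow
    drifts in the signal do not trigger false alarms.
    """
    residuals = [abs(m - e) for m, e in zip(measured, estimated)]
    alarms = []
    for i, r in enumerate(residuals):
        history = residuals[max(0, i - window):i]
        if len(history) >= 2:
            thresh = statistics.mean(history) + k * statistics.stdev(history)
        else:
            thresh = float("inf")  # not enough history yet: never alarm
        alarms.append(r > thresh)
    return alarms
```

A sudden spike in the measured value relative to the filter estimate is flagged, while a residual level the window has already seen is absorbed into the threshold.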
https://doi.org/10.1142/S0129156425402840
This paper explores the application and benefits of the Federated Averaging (FedAvg) algorithm in optimizing power grid data quality. As the power grid evolves toward more intelligent, data-driven systems, ensuring high-quality data becomes critical to the effective operation and management of the grid. However, optimizing data quality is a complex challenge because multiple data holders are involved, each with privacy concerns that prevent the sharing of sensitive information. The FedAvg algorithm offers a promising solution by aggregating data insights across distributed systems without sharing raw data, thus preserving privacy while improving data quality. This study provides a comprehensive evaluation of the FedAvg algorithm’s impact on power grid data quality through a detailed implementation process. The research outlines the algorithm’s step-by-step optimization procedure, highlighting key design choices such as model aggregation strategies, communication protocols, and iterative updates. By analyzing real-world application cases, we demonstrate how FedAvg addresses challenges such as data heterogeneity, missing data, and inconsistencies across different grid regions. Additionally, we present a series of experimental results covering a range of data quality metrics, such as accuracy, consistency, and reliability, to assess the algorithm’s effectiveness. The findings of this study show that the FedAvg algorithm can significantly enhance the accuracy and consistency of power grid data. Through its distributed approach, it not only improves data quality but also enhances the operational efficiency and reliability of the grid. The paper provides a clearer understanding of how FedAvg can be effectively implemented in power grid systems and of its direct impact on data quality.
This research contributes to the broader field of grid management by offering practical insights into leveraging federated learning techniques for data optimization while maintaining privacy, providing a more comprehensive and scalable solution for modern power grids.
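The core aggregation step of FedAvg can be sketched in a few lines; this is a minimal illustration of the standard weighted average over client updates, not the specific grid deployment evaluated in the paper.

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights by data-size-weighted averaging.

    client_weights: list of model weight vectors, one per client.
    client_sizes:   number of local samples held by each client.
    Only model parameters are exchanged; the raw grid data stays local,
    which is what preserves each data holder's privacy.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            global_w[j] += (n / total) * w[j]
    return global_w
```

A client holding three times as much data pulls the global model three times as hard, so regions with richer measurement histories dominate the aggregate.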
https://doi.org/10.1142/S0129156425402426
Network security situational awareness is gaining increasing attention due to its capability to globally and dynamically detect potential network security risks. However, traditional security situational awareness models often exhibit poor classification performance, resulting in lower-than-expected acceleration and scalability ratios. In this paper, we propose a novel security situational awareness approach for wireless communication networks based on a decision tree model. First, the category division module is reconfigured to categorize the attack data into four different types. Then, time windows are used to segment the data flow between the network and the host, supporting the design of effective security-event detection mechanisms in the model. Finally, a comprehensive network security situational awareness model is constructed at the joint level using the decision tree algorithm. The experimental results show that the proposed method can significantly improve the acceleration ratio, and the space occupancy ratio can reach 80%, indicating that the proposed method offers high processing capability and accurate perception of network security situations, providing a guarantee for the security of wireless communication networks.
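The time-window segmentation step can be sketched as follows; the window length and the event representation are illustrative assumptions, not details taken from the paper.

```python
def segment_by_time_window(events, window):
    """Group (timestamp, record) events into fixed-length time windows.

    Each window collects the network/host events whose timestamp falls
    in [k*window, (k+1)*window); downstream security-event detection
    (e.g. the decision tree classifier) then runs per window.
    """
    windows = {}
    for ts, record in events:
        windows.setdefault(int(ts // window), []).append(record)
    return windows
```

Segmenting the flow first keeps each classification input bounded, which is what makes per-window detection parallelizable and supports the acceleration ratio discussed above.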
https://doi.org/10.1142/S012915642540213X
Aiming at the problems of low positioning accuracy, poor map readability, and weak robustness when a mobile robot runs SLAM in a working environment containing dynamic objects, a SLAM algorithm for mobile robot localization in dynamic environments based on semantic information is proposed. First, the front end of the ORB-SLAM2 framework is used, combined with the YOLO v4 object detection algorithm, to extract the ORB features of the input image. Meanwhile, the YOLO v4 detector obtains the dynamic and static regions of objects carrying semantic information in the scene image, yielding preliminary semantic dynamic and static regions, and semantic segmentation is performed on the image. Then, the dynamic object region is selected using the epipolar constraint, and the ORB feature points distributed on dynamic objects are eliminated. Finally, the processed feature points and adjacent keyframes are used for inter-frame matching to estimate the camera pose and build a static environment map. Experiments on the open TUM dataset compare the proposed algorithm with the traditional ORB-SLAM2; the results show that the proposed algorithm improves pose estimation accuracy by 75% over ORB-SLAM2 in dynamic environments, and the map construction effect is significantly enhanced. These results indicate that the algorithm can eliminate the influence of dynamic objects on SLAM, improve the positioning accuracy of the system, and expand its field of application.
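The epipolar test used to reject features on dynamic objects can be sketched as follows. This is a generic illustration under assumed conventions (pixel points, a known 3x3 fundamental matrix, a hypothetical threshold), not the paper's exact criterion.

```python
import math

def epipolar_distance(f, p1, p2):
    """Distance of p2 from the epipolar line of p1 under fundamental matrix f.

    Points are (x, y) pixels; f is a 3x3 nested list. For a static scene
    point, p2 should lie on the epipolar line of p1; a matched feature
    far from that line is treated as lying on a dynamic object.
    """
    x1 = (p1[0], p1[1], 1.0)
    # epipolar line l = F * x1 in homogeneous line coordinates
    l = [sum(f[i][j] * x1[j] for j in range(3)) for i in range(3)]
    num = abs(sum(c * v for c, v in zip(l, (p2[0], p2[1], 1.0))))
    return num / math.hypot(l[0], l[1])

def is_static(f, p1, p2, thresh=1.0):
    """Keep a feature match only if it satisfies the epipolar constraint."""
    return epipolar_distance(f, p1, p2) < thresh
```

For a camera translating purely along x, the fundamental matrix is the skew matrix of the translation, and the test reduces to checking that matched points share the same image row.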
https://doi.org/10.1142/S0129156425401913
There are significant errors in identifying the vibration displacement of deep-water pile-foundation bridge piers on soft soil foundations. This paper therefore proposes a new identification method for such piers based on multi-source information fusion. The characteristics of the soft soil structure are analyzed, and the stress relationship between the soft soil and the solid soil skeleton is calculated. The structural characteristics of the piers are determined, displacement curves are constructed from the elastic deformation curvature of the piers, and the curvature at each point under pier displacement is determined. The factors causing pier vibration are analyzed, and the vibration characteristics are determined by calculating the amplitude under each factor. The failure modes, shear strength, bending strength, and displacement ductility coefficient under pier vibration conditions are analyzed, together with the loads on the piers, completing the extraction of multi-source information on pier vibration displacement. The multi-source information fusion structure is then analyzed, and the least-squares support vector machine method is introduced to fuse the information on vibration displacement influences. On this basis, strain measurement points are selected on the piers, and displacement identification points are arranged at equal intervals along the entire pier to achieve accurate displacement identification. The experimental results show that the proposed method reduces displacement identification errors.
https://doi.org/10.1142/S0129156425401895
Basketball flight trajectory tracking suffers from problems such as large tracking deviation. To this end, this paper proposes an accurate tracking method for basketball flight trajectories based on YOLO and multi-level data association. Digital imaging equipment is used to collect video sequences of basketball flight, and the frame difference of the video sequence is defined. The spatially invariant feature decomposition method is used in the three-dimensional flight space to obtain the energy function of the binary multi-dimensional image and analyze the intrinsic mode functions of the image, and a visual feature extraction method is used to extract the basketball flight trajectory. A Gaussian kernel function sets a Gaussian distribution to approximate the two-dimensional convolution operator: each pixel value becomes the weighted sum of its neighborhood pixel values, with weights that grow as the distance to the center point shrinks, so that image information is retained. A Gaussian scale mixture model extracts the texture and noise information in the image, and the Rudin–Osher–Fatemi model eliminates the texture information to denoise the basketball images. The multi-level association rule mining algorithm ML_T2 mines multi-level data of basketball flight trajectory characteristics; support and confidence describe the importance of the mined data, all frequent itemsets are found, strong association rules are generated from the obtained frequent itemsets, and a minimum confidence threshold is set to realize multi-level data mining of basketball flight trajectory characteristics.
On this basis, a network structure for basketball flight trajectory tracking is built and the tracking image data are augmented. A loss function for trajectory tracking is set, and the multi-level data of basketball flight trajectory characteristics are integrated to build an accurate YOLO-based basketball flight trajectory tracking model. The feasibility of this method was verified through experiments.
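The distance-weighted Gaussian smoothing described above can be sketched as a kernel construction; the kernel size and sigma here are illustrative defaults, not values from the paper.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel for image smoothing.

    Weights fall off with distance from the center pixel, so nearby
    pixels dominate the smoothed value while distant ones barely count;
    convolving an image with this kernel approximates the 2-D Gaussian
    convolution operator while retaining the main image information.
    """
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]
```

Normalizing the weights to sum to one keeps the overall image brightness unchanged after smoothing.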
https://doi.org/10.1142/S0129156425401871
Player shadows in basketball games are complex and diverse owing to factors such as lighting conditions, player movements, and camera positions. In the shadow removal task, the attention mechanism helps the model focus on the key areas of the image, namely the players themselves and the shadows around them, but it may ignore some details, leading to poor shadow removal for embedded basketball game players. To this end, a shadow removal method for embedded basketball game players combining attention and multi-scale fusion is proposed. The texture of the local player shadow image region is extracted, a gray-value threshold is set at the center point of the image-region window, and the Local Binary Pattern (LBP) code of the local texture structure is obtained; scanning the LBP codes of the player shadow image yields its final characteristic parameters. The pixels of the shadow image are then balanced and adjusted, with the image scale determined by the slope of the transformation function so that the shadow image stays within the normal proportion range, and the consistency of multi-scale detail pixels is adjusted to achieve pixel segmentation of the player shadow images. Based on the principles of the attention mechanism, the shadow images are collected as key-value pairs representing action information, and a key-value set is constructed.
The encoder in the attention mechanism performs encoding, and the dependence of the encoded action is calculated and decoded to determine the player shadow image. Shadow recognition is achieved by obtaining the chromaticity-space color vector values of the shadow pixels, computing the chromaticity similarity difference, and comparing it with a threshold; a multi-scale fusion shadow removal model for embedded basketball players is then designed. Experimental results show that the proposed method effectively removes the shadows of embedded basketball players.
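The LBP coding step described above can be sketched for a single 3x3 window; the neighbor ordering is a common convention and an assumption here, not taken from the paper.

```python
def lbp_code(window):
    """Local Binary Pattern code of a 3x3 grayscale window.

    Each of the 8 neighbors is compared with the center pixel; a neighbor
    at or above the center threshold contributes a 1 bit, giving an
    8-bit texture code for the center pixel.
    """
    center = window[1][1]
    # clockwise neighbor order starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if window[r][c] >= center:
            code |= 1 << bit
    return code
```

A flat patch yields the all-ones code, while an isolated bright center yields zero, so the histogram of codes over a region characterizes its texture.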
https://doi.org/10.1142/S0129156425401858
Some terms contain multiple components, each with its own specific meaning, while the overall meaning may shift, which increases the difficulty of translation; as a result, consistency testing of translations of cultural external publicity terms often yields poor results. To this end, this paper designs an N-gram-based method for detecting translation consistency in cultural external publicity terms. A distance function characterizes the translation data vectors of the terms, word-frequency statistics determine the features of the translation data, and Laplace smoothing is used to analyze correlations among the translation data, realizing relationship analysis of the term translations. Consistency detection rules for the translations are set, including functional dependencies, conditional functional dependencies, inclusion dependencies, denial constraints, time constraints, fixing rules, and numerical rules; based on these rules, the translations of the terms are clarified. The overall structure of the N-gram-based consistency detection is designed, comprising a dictionary generation module, a data processing module, a translation data generation module, and a translation data filtering module. According to the functions of the different modules, the translation data of the terms are extracted, an N-gram-based consistency detection model is designed, the extracted translation data are input, and consistency detection is completed.
Experimental results show that this method effectively improves the translation consistency detection of cultural external publicity terms.
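One simple way to score consistency between two candidate translations with n-grams is set overlap; the Jaccard similarity used here is an illustrative choice, not necessarily the measure used in the paper.

```python
def ngrams(tokens, n=2):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def translation_consistency(trans_a, trans_b, n=2):
    """N-gram overlap score between two candidate term translations.

    Jaccard similarity of the n-gram sets: 1.0 means the translations
    share all n-grams, 0.0 means they share none. Pairs scoring below
    a chosen threshold are flagged as inconsistent renderings of the
    same source term.
    """
    a, b = set(ngrams(trans_a, n)), set(ngrams(trans_b, n))
    if not a and not b:
        return 1.0  # both too short to form n-grams: treat as consistent
    return len(a & b) / len(a | b)
```

Running the score over every pair of translations of one term surfaces the outlier renderings that a dictionary-based filter should review.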
https://doi.org/10.1142/S0129156425401810
The blended English teaching database may include both non-relational and relational databases, whose data structures, query methods, and storage mechanisms differ, increasing the difficulty of data integration and analysis and thereby reducing the effectiveness of personalized service recommendations for the blended English teaching database. To this end, a personalized recommendation method for blended English teaching database services based on content retrieval is proposed. The distinct attributes of the non-relational and relational databases are analyzed, the complex relationships between tables in the different databases are precisely defined, and a mapping relationship graph model of the blended English teaching database is built to optimize the graphical conversion of the database structure. Cluster analysis of the behavioral data of the database’s service objects captures user preference characteristics, and these preference data are efficiently organized with labels, achieving accurate matching of user preferences. Entropy-based vectorization efficiently processes the service data text in the database; through content retrieval, the similarity between graph data and retrieval data is calculated, and interference factors in the data are effectively eliminated. On this basis, a content-retrieval-based personalized recommendation model is constructed to provide users with more accurate and efficient recommendation services. Experimental results show that the proposed method offers significant advantages in improving recommendation effectiveness.
https://doi.org/10.1142/S0129156425401718
The NBA is one of the world’s top basketball leagues, and comprehensive, accurate prediction of NBA games is of great significance. This paper therefore studies NBA game prediction based on a hybrid machine learning model. A hybrid machine learning model is constructed: the preprocessing flow for NBA game prediction data is set, the probabilistic core clusters of the prediction data are solved, and a data-stream mining matrix for game prediction is formed. Based on ensemble learning, prediction indicators are constructed according to the principle of building NBA game prediction indices; based on deep learning, the weights of target factors serve as parameters for measuring prediction accuracy, and hierarchical evaluation realizes the prediction of NBA games. The experimental results show that the method achieves a high goodness of fit across NBA games, the predicted results agree closely with the actual results, and the root mean square error and mean absolute error are small, indicating good predictive performance.
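The two error metrics cited above are standard and can be computed directly; this sketch is generic and not tied to the paper's data.

```python
import math

def rmse(actual, predicted):
    """Root mean square error: penalizes large prediction misses strongly."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mae(actual, predicted):
    """Mean absolute error: the average size of a prediction miss."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

RMSE is always at least as large as MAE on the same data; a wide gap between the two signals that a few games were predicted very badly.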
https://doi.org/10.1142/S012915642540169X
Sports greatly benefit human health, and monitoring exercise intensity helps adjust the state of human health. However, current methods for monitoring exercise intensity have certain shortcomings, so this paper designs an intelligent exercise intensity monitoring method based on embedded web sensors. By analyzing changes in the human body’s state during exercise, the monitoring data for intelligent exercise intensity are determined, including heart rate intensity, blood oxygen saturation, body temperature during exercise, and changes in tilt angle during exercise. These data serve as the objects of exercise intensity monitoring. A non-integrated approach determines changes in the monitoring environment, a dedicated embedded network system with web server functionality is designed, and the design of the embedded web sensor monitoring system is completed. Based on this system, the periodic and non-periodic key points of human exercise are determined, the peak and valley values of acceleration changes during exercise are calculated, and a first-order autoregressive model is established to describe the relationship among the monitoring data. The monitoring data are input into the embedded web sensor monitoring system to complete the monitoring. The experimental results show that the proposed method performs well in monitoring exercise intensity.
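The first-order autoregressive model mentioned above can be fitted by least squares on consecutive sensor readings; the formulation with an intercept term is an assumption for illustration.

```python
def fit_ar1(series):
    """Least-squares fit of a first-order autoregressive model
    x[t] = a * x[t-1] + b.

    Returns (a, b); the coefficient a captures how strongly the current
    sensor reading (e.g. heart rate) depends on the previous one.
    """
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx
```

On a steadily ramping signal the fit recovers a slope of one and an intercept equal to the per-step increment.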
https://doi.org/10.1142/S0129156425401706
To address the low accuracy and significant noise sensitivity of edge continuity detection in art and design images, this paper proposes a visual-attention-based edge continuity detection method for such images. The edge position points of the images are determined with the Laplace operator, and the horizontal and vertical edge gradients are obtained by first-order differences to determine the edge gradient amplitude. Using the maximum and minimum operations of binary morphology in place of intersection and union, the image grayscale is determined and the HSI color-space features are extracted, completing edge feature extraction for the images. The SUSAN operator clarifies the kernel-similarity zone of the image edges, similar edge pixels are removed, and spatial-domain and frequency-domain denoising reduce the edge noise, achieving edge preprocessing of the images. A visual attention mechanism is introduced to transform the pixel space of the edge features; threshold segmentation is applied to the image edges, and the segmented edge pixels are continuously annotated. A loss function is introduced to improve detection convergence, and an edge continuity detection model based on the visual attention mechanism is constructed to output the detection results. The experimental results show that the proposed method achieves better continuity because the visual attention mechanism and related operations successfully avoid noise interference. As the number of detected pixels increases, the detection deviation rate of the proposed method remains between 0.02% and 0.03%.
This lower detection deviation rate shows that the method effectively improves edge continuity detection in art and design images.
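The two edge operators named above can be sketched per pixel; forward differences and the 4-neighbor Laplacian stencil are standard conventions assumed here, not details from the paper.

```python
def gradient_magnitude(img, r, c):
    """Edge gradient amplitude at pixel (r, c) via first-order differences.

    Horizontal and vertical gradients are forward differences; the
    amplitude is their Euclidean norm, large at edges and ~0 in flat areas.
    """
    gx = img[r][c + 1] - img[r][c]
    gy = img[r + 1][c] - img[r][c]
    return (gx * gx + gy * gy) ** 0.5

def laplacian(img, r, c):
    """Discrete Laplace operator; its zero-crossings mark edge positions."""
    return (img[r - 1][c] + img[r + 1][c]
            + img[r][c - 1] + img[r][c + 1] - 4 * img[r][c])
```

A flat patch gives zero for both operators, while a vertical intensity step gives a purely horizontal gradient.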
https://doi.org/10.1142/S0129156425401664
To maximize the teaching effect of public physical education courses and lay a foundation for teaching reform, this paper presents the design of a quality evaluation system for public physical education courses based on the “5V” characteristics of computer big data. The “5V” characteristics of big data are analyzed together with their application in public physical education courses. The action estimation, prior and update steps, and failure recovery of course teaching are analyzed to obtain three-dimensional visual positioning results for course teaching. Upper-arm action images are collected, their contour features are extracted by edge contour detection, the error characteristics of the contour features are analyzed, the weighted pixel-component values of the contour features are obtained, and the fuzzy error values of the contour features are computed; based on these fuzzy error values, the design of the course quality evaluation system is completed. The experimental results show that the average error recognition accuracy of the method is 99.63%, class hours are adjusted according to the proportion of disciplines with corresponding teaching plans formulated, and the accuracy of recommending internship posts for students is relatively high.
https://doi.org/10.1142/S0129156425401652
To address the limited expressive and generalization capability of shallow learning networks for complex functions, and to improve the accuracy of college education quality evaluation, a college education quality evaluation method based on a deep learning network is proposed. Starting from the three aspects of educational environment, educational quality, and student development, an education quality evaluation index system with three primary indicators and nine secondary indicators is constructed. The secondary indicators serve as inputs to the deep learning network. The weights of each layer are optimized with an unsupervised pre-training model, the conditional and joint probability distributions of each layer in the restricted Boltzmann machine (RBM) are determined through bottom-up unsupervised learning, and the output layer optimizes the parameters of each layer according to the input differential mean opinion score (DMOS) values, constructing a regression model between the abstracted primary indicators and the DMOS values. The objective education quality evaluation results are then obtained from the regression model’s predictions. The test results show that the linear correlation coefficient and rank correlation coefficient between the evaluation results of this method and the subjective evaluation results are closer to 1.
https://doi.org/10.1142/S0129156425401640
To recognize athletes’ arm trajectories at low data-labeling cost, a semi-supervised learning-based method for recognizing volleyball players’ arm trajectories is proposed. A support vector machine framework is employed for the recognition task. To augment the dataset of volleyball sports samples and minimize the expense of data labeling, semi-supervised learning techniques are incorporated: support vector machine optimization is combined with graph-based semi-supervised learning to develop a graph-based fuzzy least-squares support vector machine, whose classification results are solved via the dual form and the representer theorem, completing the training. The motion data of the volleyball player to be recognized are input into the trained graph-based fuzzy least-squares support vector machine, which outputs the recognition results for the player’s arm trajectory. The experimental results show that the method achieves its highest recognition accuracy when the width of the Laplace kernel function is 15, and in real-game recognition it can accurately lock onto the athlete’s arm and track its movement.
https://doi.org/10.1142/S0129156425401548
In the evolving landscape of English oral language instruction, the integration of online and offline (blended) teaching methodologies has emerged as a significant approach to enhance learning outcomes. This study addresses the need for a robust and nuanced evaluation framework capable of assessing the quality of blended English oral language teaching. Motivated by the limitations of traditional evaluation methods in capturing the complexities and subjective nuances of blended teaching environments, we propose a novel evaluation model that leverages the Analytic Hierarchy Process (AHP) and Fuzzy Association Analysis. This paper outlines a systematic approach to construct a comprehensive evaluation indicator system tailored to the specific demands of English oral language teaching. Utilizing AHP, we determine the relative weights of various evaluation indicators, laying a solid foundation for a structured assessment framework. Subsequently, Fuzzy Association Analysis is employed to handle the inherent uncertainties and subjectivities associated with teaching quality assessment, thereby facilitating a more accurate and holistic evaluation. An empirical case study is presented to demonstrate the practical application of our proposed AHP-Fuzzy model in evaluating the quality of a specific blended English oral language teaching program. The results underscore the model’s effectiveness in providing a detailed and nuanced assessment of teaching quality, highlighting its superiority over traditional evaluation methods in addressing the complexities of blended teaching environments.
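The AHP weighting step described above can be sketched with the geometric-mean approximation of the principal eigenvector; the paper may use the exact eigenvector method, so this is an illustrative simplification.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix.

    pairwise[i][j] states how much more important indicator i is than
    indicator j (Saaty's 1-9 scale, with pairwise[j][i] = 1/pairwise[i][j]).
    The geometric-mean method takes the geometric mean of each row and
    normalizes the means so the weights sum to 1.
    """
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]
```

For a two-indicator system where the first is judged three times as important as the second, the method assigns weights of 0.75 and 0.25.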
https://doi.org/10.1142/S0129156425401536
The classification and processing of multimedia audio and video teaching resource data is crucial for the development of next-generation multimedia technology. In music courses, the traditional approach to merging, classifying, and identifying multimedia audio and video teaching resources uses the conventional fuzzy C-means clustering method and treats each entire document as the object of study. However, this method cannot subdivide documents, let alone handle information about multimedia teaching resources specific to music courses. To address this issue, we propose an improved fuzzy C-means clustering algorithm to classify and identify multimedia audio and video teaching resources against a dual-subtraction background. First, we use information entropy as the criterion for classification and identification, and leverage the nonlinear mapping ability of neural networks to compute the weights of the fuzzy C-means clustering algorithm. This approach resolves the problems of inaccurate classification and poorly separated classes. For experimental verification, we used five categories of documents, with five documents in each category and three functional items. The results show that the improved fuzzy C-means clustering algorithm identifies and classifies multimedia audio and video teaching resources in music courses more effectively; it remains effective on sparse features and randomly distributed audio teaching resources, and exhibits strong convergence and high application value. Overall, this study demonstrates the effectiveness of the proposed method for classifying and identifying multimedia audio and video teaching resources in music courses, and the improved fuzzy C-means clustering algorithm can serve as a valuable tool for researchers and practitioners in the field of multimedia technology.
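The membership update at the heart of standard fuzzy C-means can be sketched for one-dimensional data; the fuzzifier value and the 1-D simplification are illustrative, and the paper's improved variant additionally reweights this step.

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy C-means membership update for 1-D data.

    u[i][k] is how strongly point i belongs to cluster k, computed from
    its distance to every cluster center (fuzzifier m > 1). Unlike hard
    clustering, a point between two centers belongs partly to both.
    """
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:  # point sits exactly on a center: crisp membership
            u.append([1.0 if dk == 0.0 else 0.0 for dk in d])
            continue
        row = [1.0 / sum((dk / dj) ** (2.0 / (m - 1)) for dj in d)
               for dk in d]
        u.append(row)
    return u
```

A point on a cluster center gets full membership in that cluster, while a point midway between two centers splits its membership evenly.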
https://doi.org/10.1142/S0129156425401524
With the rapid evolution of digital technology, graphic design has become increasingly pivotal across various domains. While traditional image enhancement methods have addressed issues in texture boundaries and information retrieval, they often neglect the challenges posed by noise in graphic design, leading to uneven enhancements. Therefore, this study proposes a multi-scale detail enhancement method to improve the visual perception quality of graphic design images. A nonlinear transformation is applied to the image to obtain a preliminary enhanced image. Subsequently, both the preliminary enhanced image and the low-brightness image are fed simultaneously into a multi-scale feature extraction block. To improve the network's ability to learn semantic features, a U-shaped feature enhancement module is introduced into each scale's feature extraction branch, strengthening the extraction of contextual information. Finally, the enhanced image is obtained by integrating the multi-scale feature information. The experimental results show that this method is superior in terms of visual effects and metrics, and significantly improves color restoration, texture preservation, and detail enhancement, providing a promising direction for image enhancement in graphic design.
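The nonlinear transformation that produces the preliminary enhanced image can be illustrated with a simple gamma correction on normalized pixel intensities; the gamma value of 0.5 is an assumption for illustration, not the paper's setting.

```python
def gamma_enhance(pixels, gamma=0.5):
    """Brighten a low-light image row: out = 255 * (p / 255) ** gamma."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# A dark row of pixels becomes noticeably brighter while 0 and 255 are fixed.
row = [0, 16, 64, 128, 255]
enhanced = gamma_enhance(row)
```

Values of gamma below 1 lift dark regions more than bright ones, which is why such a transform is a common first step before multi-scale feature extraction.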
https://doi.org/10.1142/S012915642540155X
This paper proposes a deep learning-based method for path planning of college students’ morality and faith instruction. Using a questionnaire and sampling inspection, the teaching effect index test and automatic monitoring of morality and faith teaching data are conducted. The feasible teaching strategy index of college students’ morality and faith teaching is based on students’ deep learning needs and expectations for ideological and political teaching in colleges and universities, with the actual effect of teaching reform and innovation serving as the basic monitoring coefficient. Using deep learning and dynamic optimization detection methods, a feature clustering model for data collection and for deep and surface learning of morality and faith instruction is developed. A deep learning model for teaching morality and faith to college students is then built using a salient feature analysis method, implementing multidimensional spatial path optimization and data fitting. Quantitative regression analysis is performed on college students’ critical, creative, and higher-order thinking within the index data of morality and faith teaching strategies, and this index data is detected and extracted. The test results show that this method improves the practicability and originality of morality and faith instruction for college students by optimizing the course planning and the index data of feasible teaching strategies.
https://doi.org/10.1142/S0129156425401561
Network public opinion information resources include text, pictures, videos and other modalities, resulting in high sharing loss values. A network public opinion information resource sharing method based on data analysis and artificial intelligence algorithms is therefore proposed. First, based on spatial theory, a spatial model of the emotional dimension of network public opinion big data is constructed to dynamically capture and express the multi-dimensionality and dynamism of public opinion emotions. Subsequently, advanced multimodal neural network technology is utilized to accurately identify and extract deep features of network public opinion information resources, effectively addressing data heterogeneity. Furthermore, a resource sharing mechanism based on a semantic fusion algorithm is designed and implemented, promoting efficient matching and sharing of resources through deep semantic alignment and composite semantic relationship mining. Finally, simulation tests are conducted on four aspects: data analysis, sharing loss values, feature recognition effectiveness, and sharing performance. The results show that the proposed method performs well in quantitative experiments, with lower sharing loss values (about 0.01), more accurate identification of network public opinion big data features, and sharing completion time, average waiting time, and resource download time of only 7.66 s, 2.03 s, and 5.04 s, respectively, significantly shorter than the comparative methods, proving its stronger sharing capability and superior performance.
https://doi.org/10.1142/S0129156425401391
The economic and social development in the digital era has put forward new requirements for the cross-border flow of financial data. Financial data are an important carrier of financial information. Due to the sensitivity of financial data, its cross-border flow involves citizens’ personal privacy, the interests of financial institutions and even national financial security. In practice, if there is no effective regulation on the cross-border flow of financial data, it will not only be difficult to explore the potential value of financial data, but also lead to various risks. At present, the United States and Europe adopt different regulatory models for the cross-border flow of financial data. In view of the problems existing in the supervision of cross-border capital flow in our country, this paper puts forward some policy suggestions for improving the existing cross-border capital flow monitoring system in our country. The system takes main supervision and off-site monitoring as the two main lines, sets up the framework of China’s cross-border capital flow monitoring institutions and off-site monitoring content framework, and constructs China’s cross-border capital flow monitoring index system on the basis of learning from international and domestic experience. The results show that cross-border capital flow has a significant effect on bank risk-taking. Considering the heterogeneity of cross-border capital, it is found that capital outflow and portfolio investment have a greater impact on bank risk-taking. A good economic development environment will not only bring profits to banks but also reduce their default risk.
https://doi.org/10.1142/S0129156425401366
Cyber language is a form of chat language originally created to improve the efficiency of online chatting, so some of its coinages do not conform to the grammatical rules of Chinese language and literature. This paper therefore puts forward a promotion model for the development of cyber language and Chinese language literature based on quasi-linear regression analysis. A hierarchical structure analysis model is established to analyze the quantitative characteristics of the hierarchical constraint indicators, yielding a quantitative table of the status distribution of the evaluation constraint indicators for the development of cyber language and Chinese language and literature. The quasi-linear regression analysis structure is used to build a clustering model for promotion evaluation, and feature clustering analysis is realized through quantitative index feature analysis. In the quasi-linear regression analysis model, the influence index parameters are configured, the parameter configuration results are clustered in combination with the fuzzy influence characteristic parameters, and classification prediction and index analysis models are constructed. Based on hierarchical distribution density and grid clustering, the development evaluation and influence model of cyber language and Chinese language literature are realized. The results of empirical analysis show that this method has strong quantitative feature analysis ability and accurate, reliable evaluation results, and improves the reliability and confidence level of the evaluation and influence model for the development of cyber language and Chinese language literature.
https://doi.org/10.1142/S0129156425401378
In order to effectively and accurately recognize students’ emotions in English teaching and regulate them in a timely manner, a method of emotion recognition and regulation in English teaching based on affective computing technology is proposed. Using a skin color model, skin-color regions are searched in students’ facial images captured by the classroom camera, and students’ facial expression images are detected. The detected facial expression images are preprocessed by size normalization and grayscale normalization. A binarization method is then used to locate the eyes and mouth, the main facial organs affecting emotion, in the preprocessed facial expression images. The edge features of the facial expression images and the features of the eyes and mouth are extracted, and all extracted features are taken as the input of the model, which outputs students’ emotion categories. Corresponding teaching strategy adjustments and student emotion regulation are made according to these categories, finally realizing emotion recognition and regulation in English teaching. Experiments show that this method can effectively and comprehensively detect the facial expression images of students learning English and is efficient for English teaching emotion recognition.
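The skin color model step can be sketched as a per-pixel rule: convert RGB to YCbCr and mark pixels whose chrominance falls inside a commonly used skin range. The thresholds below are typical textbook values, not the paper's calibration.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Skin pixel if Cb in [77, 127] and Cr in [133, 173] (common heuristic)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

mask = [is_skin(*px) for px in [(224, 172, 105),   # a light skin tone
                                (60, 120, 40)]]    # a green background pixel
```

Connected regions of `True` pixels would then be cropped as candidate face areas before the normalization steps described above.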
https://doi.org/10.1142/S012915642540138X
In recent years, with the rapid development of deep learning, neural networks and other related technologies and their wide application in image processing research, the degree of intelligence in the field of image processing is getting higher and higher. In the study of small target detection in particular, small image targets have relatively little and unclear feature and detail information because of their small size, which makes errors and omissions easy in recognition and detection. To improve the precision of small target detection in satellite images, this paper proposes a small-target super-resolution detection method based on ACGAN and an enhanced DETR. First, ACGAN is used to classify and generate the feature information of small targets in images, enhancing the feature information of the small targets to be measured. Second, the DETR algorithm model is improved, and a small target detection algorithm model with image super-resolution enhancement is designed to generate high-resolution images by super-resolution reconstruction of images with enhanced target feature information, improving the detection accuracy of small image targets. The experiments in this paper show that after target feature enhancement and image super-resolution quality enhancement, the detection accuracy of small image targets improves significantly.
https://doi.org/10.1142/S0129156425401342
Leveraging multiple data sources to enhance tourism resource management and visitor behavior analysis has become a key challenge in the context of the booming smart tourism industry. In this study, we explore how to integrate and optimize multiple data sources including social media activities, user reviews, tourism statistics, and geographic information to build a comprehensive information management platform for smart tourism resources. Given the limitations inherent in isolated and decentralized data processing approaches in the smart tourism domain, we propose a new approach using deep learning autoencoders for efficient extraction and fusion of meaningful features from heterogeneous datasets. Our methodology encompasses a rigorous data collection and preprocessing phase, ensuring data quality and consistency, followed by the application of autoencoders to learn high-level feature representations conducive to data integration. The fused data facilitate the development of strategies for the optimal allocation of tourism resources and nuanced analysis of visitor behavior patterns. Experimental evaluations demonstrate the model’s proficiency in capturing intricate data relationships, significantly enhancing the predictive accuracy for tourism demand forecasting, and enabling personalized visitor recommendations. The results underscore the potential of our approach to revolutionize smart tourism management practices by providing actionable insights into resource optimization and visitor engagement strategies, thereby contributing to the sustainable growth of the tourism sector.
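The autoencoder-based feature fusion can be illustrated with a toy tied-weight linear autoencoder that compresses 2-D points to one latent value and reconstructs them, trained by numerical gradient descent. The data, dimensions, and learning rate are illustrative assumptions, far smaller than anything the platform would actually use.

```python
def reconstruct(w, x):
    z = w[0] * x[0] + w[1] * x[1]      # encoder: 2-D point -> 1 latent value
    return [z * w[0], z * w[1]]        # tied-weight decoder: latent -> 2-D

def loss(w, data):
    """Sum of squared reconstruction errors over the dataset."""
    return sum((xh - xi) ** 2
               for x in data
               for xh, xi in zip(reconstruct(w, x), x))

def train(data, steps=1500, lr=0.01, eps=1e-6):
    w = [0.3, 0.1]                     # small asymmetric initial weights
    for _ in range(steps):
        base = loss(w, data)
        grad = []
        for j in range(2):             # forward-difference numerical gradient
            wp = list(w)
            wp[j] += eps
            grad.append((loss(wp, data) - base) / eps)
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# Points lying almost on the line x2 = x1: one latent dimension suffices.
data = [(1.0, 1.1), (2.0, 1.9), (-1.0, -1.0), (0.5, 0.6)]
w = train(data)
final_loss = loss(w, data)
```

A real system would use a deep nonlinear encoder over heterogeneous tourism features; the sketch only demonstrates the core idea of learning a compact representation that reconstructs the input well.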
https://doi.org/10.1142/S0129156425401354
Using deep learning methods, this study provides insights into the significant impact of corporate executive behavior on firm performance, particularly through the lens of vocal emotions. Considering that emotions play a crucial role in leadership effectiveness as well as corporate success, this paper employs a Long Short-Term Memory (LSTM) network to meticulously categorize the emotions in executive speeches into positive, neutral, and negative categories. The initial stage requires rigorous pre-processing of the speech signal, including collection, denoising and feature extraction using Mel Frequency Cepstrum Coefficients (MFCC). Subsequently, LSTM models are trained on these preprocessed data for sentiment classification. This study further innovates by combining sentiment analysis with Key Performance Indicators (KPIs) to scrutinize the correlation between executives’ emotional expressions and company performance. Through statistical analysis and machine learning techniques, we assess the significance of this correlation and present evidence that highlights the predictive power of executives’ emotional expressions on firm performance metrics. Our findings not only contribute to an understanding of the nuanced ways in which leadership behavior impacts firm performance, but also open avenues for enhancing executive training and performance assessment methods. This paper demonstrates the classification accuracy of our model and its effectiveness in correlating executive emotions with firm performance, providing valuable insights into the interplay between leadership emotional intelligence and firm success.
https://doi.org/10.1142/S0129156425401317
This study endeavors to introduce a method for measuring stock market investment risk, leveraging data mining techniques alongside decision trees (DTs). By harnessing extensive stock market data and integrating steps such as data cleaning, feature selection, and model construction within data mining technology, an effective risk measurement model is formulated. Specifically, DTs serve as the primary modeling tool, adept at capturing intricate relationships and nonlinear characteristics prevalent within the stock market, thereby facilitating precise measurement of investment risks. Through empirical analysis, the efficacy and viability of the proposed method in risk measurement are substantiated, furnishing investors with a pivotal decision-making reference. Overall, this study contributes to the ongoing discourse on stock market risk assessment by integrating advanced data mining methodologies, thereby enhancing the accuracy and reliability of risk evaluation in investment decision-making processes.
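The DT modeling idea can be sketched on toy "risk" data: choose the single threshold split of a feature that minimizes weighted Gini impurity, the basic step a full decision tree repeats recursively. The volatility feature and labels are hypothetical, not the study's market data.

```python
def gini(labels):
    """Gini impurity of a binary label list."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Return (threshold, weighted_gini) minimizing impurity of x <= t vs x > t."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical: daily volatility (%) vs. a "high risk" label.
vol = [0.5, 0.8, 1.0, 2.5, 3.0, 3.5]
risk = [0, 0, 0, 1, 1, 1]
threshold, impurity = best_split(vol, risk)
```

Here the split at volatility <= 1.0 separates the two risk classes perfectly (zero impurity), which is exactly the nonlinear, threshold-based partitioning that makes trees suited to risk measurement.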
https://doi.org/10.1142/S0129156425401329
In order to evaluate the effect of sports on users’ health, this paper puts forward the construction of a sports machinery error model based on wireless communication technology. A model for evaluating human health through sports is constructed, consisting of a data layer, a logic layer and a display layer. The data layer obtains sports event data, real-time sports data, and health monitoring data, and transmits them to the logic layer. The logic layer fuses human health data and extracts the characteristics of human health information. Combined with wireless communication technology, the characteristics are input into a long short-term memory (LSTM) neural network, which outputs the results of sports health pattern recognition after forward and backward operations, thus realizing the construction of the sports machinery error model. The experimental results show that the model can effectively improve the human BMI index value and reduce the maximum loss value, and the output results have higher reliability and fit, faster iteration speed and better performance.
https://doi.org/10.1142/S0129156425401330
In order to realize the rapid mining of sports data and intelligently formulate the sports training scheme that matches the athletes’ sports state, this paper studies the sports data mining and sports training decision support system based on big data technology. Leveraging MapReduce and OLAP, the system efficiently mines sports data and applies an improved Apriori algorithm to extract frequent itemsets tailored to athletes’ current states. These itemsets directly inform personalized training schemes, enhancing decision support for athletes. Experiments confirm the system’s scalability and precision in large-scale sports databases, effectively matching athletes’ training needs.
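The frequent-itemset mining step can be sketched with a minimal Apriori implementation: candidates of size k+1 are generated only from frequent k-itemsets, then pruned by minimum support. The training-session transactions below are illustrative, not real athlete data.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frozenset: support_count} of all frequent itemsets."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k, candidates = 1, [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Candidate generation: unions of frequent k-sets that form (k+1)-sets.
        keys = list(level)
        k += 1
        candidates = list({a | b for a, b in combinations(keys, 2)
                           if len(a | b) == k})
    return frequent

# Illustrative session records: which training elements co-occurred.
sessions = [{"sprint", "strength"}, {"sprint", "strength", "recovery"},
            {"sprint", "recovery"}, {"strength", "recovery"}]
freq = apriori(sessions, min_support=2)
```

Frequent pairs such as {sprint, strength} would then feed the rule extraction that matches training elements to an athlete's current state; the paper's improved Apriori would additionally prune this search.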
https://doi.org/10.1142/S0129156425401305
With the leap of network technology and the vigorous development of online teaching, many universities are actively adopting online means to optimize teaching and course management. This paper focuses on building an efficient and comprehensive auxiliary teaching platform that integrates functions such as student learning monitoring, course management, online testing, and teacher–student interaction, aiming to improve the quality of education. The design is based on a distributed B/S architecture and web technology to ensure efficient resource allocation and expansion, meet the needs of large-scale concurrent learning, and achieve cross-platform access that enhances the user experience. The platform features personalized learning support, enhanced interactive collaboration, and the construction of a multimedia teaching database through data analysis. The innovation lies in using neural network technology to create an intelligent question answering module, utilizing cosine similarity to automatically group teaching resources, and using graph convolution and variational autoencoder techniques to construct a student performance monitoring model. Experimental verification shows that the platform runs stably, effectively reduces the blocking rate, and significantly improves students’ academic performance, pass rate, and learning interest. The design not only successfully completes the teaching tasks but also provides valuable experience for applying online-assisted teaching systems in subject education.
https://doi.org/10.1142/S0129156425401172
To address the issues of high redundancy in sentence features and poor semantic similarity analysis in English translation, a boundary definition model for complex sentence clauses in English translation based on the Huffman tree and an objective function is proposed. This model analyzes the boundary structure of complex sentences and clauses in English translation, determines the mathematical expected value of the probability distribution of boundary features, introduces the maximum entropy model to assess the importance of different features, and utilizes conditional random fields to mine hidden features. Additionally, a hierarchical clustering algorithm is employed to analyze the distance between the boundary features of complex sentences and clauses, and similar clusters are merged based on the minimum distance between data points. Feature redundancy scores are obtained through an attention mechanism, and the weights of the boundary features of complex sentences and clauses in English translation are calculated. The edit distance in semantic similarity is used to determine the boundary distance of clauses, cosine similarity is then used to calculate the similarity between the boundary features of complex clauses, and a Huffman tree and objective function are introduced to construct the model for defining the boundaries of complex clauses in English translation. The boundary feature values of complex sentences and clauses are input to complete the final definition. The experimental results show that the proposed method performs well in the task of defining the boundaries of complex sentences and clauses in English translation, with semantic similarity analysis results remaining above 95% and reaching up to 99%, significantly better than the comparative methods.
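The two similarity primitives named above, edit distance for clause boundary distance and cosine similarity between feature vectors, can be sketched at the token level; the two example clauses are invented for illustration.

```python
from collections import Counter
from math import sqrt

def edit_distance(a, b):
    """Levenshtein distance between token sequences a and b (one-row DP)."""
    dp = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ta != tb))  # substitution
    return dp[len(b)]

def cosine_similarity(a, b):
    """Cosine of bag-of-words vectors built from token sequences a and b."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    return dot / (sqrt(sum(v * v for v in va.values())) *
                  sqrt(sum(v * v for v in vb.values())))

c1 = "although the task is hard".split()
c2 = "although the task was hard".split()
d = edit_distance(c1, c2)
sim = cosine_similarity(c1, c2)
```

In the described pipeline these scores operate on learned boundary feature vectors rather than raw tokens; the sketch shows only the underlying metrics.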
Meanwhile, the recall curve obtained by the proposed method is closest to the ideal curve and has a small fluctuation range, stable between 90% and 98%, further verifying its accuracy and robustness in boundary delineation. In addition, when the sample data size is 1000, the confidence level of the proposed method is as high as 99.6%, which is higher than the 95.6% and 95.1% of the comparison methods. As the sample size increased to 2500, the confidence level of the proposed method remained at a high level of 99.4%, while the confidence level of the comparative method decreased to 94.2% and 94.1%, respectively. These data results fully demonstrate the effectiveness of the proposed method in reducing redundant interference and improving confidence. In summary, the proposed method has improved the performance of defining the boundaries of complex sentences and clauses in English translation.
https://doi.org/10.1142/S0129156425401184
Marketing methods often have high costs and limited effectiveness, making it difficult for small and medium-sized enterprises to stand out in fierce market competition. A bidirectional personalized recommendation algorithm based on customer preferences is proposed to help these enterprises locate their target customers more accurately. First, the customer's purchase information is expanded based on the purchases of other customers and neighbors. The customer's product preference weights are then calculated to assess purchasing preferences and provide personalized product recommendations accordingly. Finally, customers similar to the sample customers provided by merchants are mined to form a community, giving merchants recommendations for potential customers and supporting precise customer maintenance. The algorithm's efficacy is demonstrated through experiments on real datasets, and it can be applied in personalized recommendation research.
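One direction of the recommendation step can be sketched as neighbor-based expansion: find the customer's most similar neighbor by cosine similarity over purchase vectors, then rank the neighbor's products the customer has not bought by a preference weight. Customers, products, and quantities are hypothetical.

```python
from math import sqrt

# Hypothetical purchase histories: customer -> {product: quantity}.
purchases = {
    "alice": {"tea": 3, "mugs": 1},
    "bob":   {"tea": 2, "mugs": 1, "kettle": 2},
    "carol": {"laptop": 1, "mouse": 2},
}

def cosine(u, v):
    """Cosine similarity between two sparse purchase vectors."""
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(customer, k=1):
    """Products the k most similar neighbors bought that `customer` has not."""
    me = purchases[customer]
    neighbors = sorted((c for c in purchases if c != customer),
                       key=lambda c: cosine(me, purchases[c]), reverse=True)
    recs = {}
    for n in neighbors[:k]:
        for item, qty in purchases[n].items():
            if item not in me:
                # Preference weight: quantity scaled by neighbor similarity.
                recs[item] = recs.get(item, 0) + qty * cosine(me, purchases[n])
    return sorted(recs, key=recs.get, reverse=True)

recs = recommend("alice")
```

The other direction (mining a community of customers similar to a merchant's samples) reuses the same similarity, grouping customers whose pairwise cosine exceeds a threshold.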
https://doi.org/10.1142/S0129156425401056
Quadtree is a widely used data structure for representing and managing two-dimensional spatial data. This paper proposes a grid business data management model utilizing the Quadtree method to address the challenges in handling grid data. The model integrates Quadtree’s spatial data structure with the specific needs of grid business data management to enhance storage efficiency and query performance. The implementation involves several key steps: First, the grid data is partitioned into distinct regions based on spatial characteristics. Second, a Quadtree is constructed to organize these regions hierarchically. Third, efficient data storage and querying mechanisms are developed based on this structure. Experimental results indicate that the proposed model significantly improves data management for grid systems, providing enhanced support for grid operation and management through increased efficiency in data storage and retrieval.
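The storage and query mechanism described above can be sketched with a minimal point quadtree: nodes hold points until capacity, then subdivide into four quadrants, and a rectangular range query visits only overlapping quadrants. The capacity and coordinates are illustrative.

```python
class Quadtree:
    def __init__(self, x, y, w, h, capacity=2):
        self.bounds = (x, y, w, h)     # top-left corner plus width and height
        self.capacity = capacity
        self.points = []
        self.children = None

    def _contains(self, px, py):
        x, y, w, h = self.bounds
        return x <= px < x + w and y <= py < y + h

    def insert(self, px, py):
        if not self._contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._subdivide()
        return any(c.insert(px, py) for c in self.children)

    def _subdivide(self):
        x, y, w, h = self.bounds
        hw, hh = w / 2, h / 2
        self.children = [Quadtree(x, y, hw, hh, self.capacity),
                         Quadtree(x + hw, y, hw, hh, self.capacity),
                         Quadtree(x, y + hh, hw, hh, self.capacity),
                         Quadtree(x + hw, y + hh, hw, hh, self.capacity)]
        for p in self.points:          # push existing points down a level
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qw, qh):
        x, y, w, h = self.bounds
        if qx >= x + w or qx + qw <= x or qy >= y + h or qy + qh <= y:
            return []                  # query box misses this node entirely
        found = [p for p in self.points
                 if qx <= p[0] < qx + qw and qy <= p[1] < qy + qh]
        for c in self.children or []:
            found += c.query(qx, qy, qw, qh)
        return found

tree = Quadtree(0, 0, 100, 100)
for p in [(10, 10), (15, 20), (80, 80), (82, 85), (85, 82)]:
    tree.insert(*p)
hits = tree.query(75, 75, 20, 20)
```

A grid data record would hang off each point (or each leaf region); pruning non-overlapping quadrants is what gives the query-performance gain the abstract reports.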
https://doi.org/10.1142/S0129156425401068
Accurately predicting population patterns within the spatial planning framework of township development is very important. This paper deeply studies the basic principles and application fields of population forecasting methods for urban spatial planning, and describes the applicability of a genetically evolved BP neural network method for predicting population size. The study first uses genetic algorithms to refine the initial weights and structure of BP neural networks to improve their proficiency and generalization ability in interpreting demographic data. The empirical results show that the method produces superior predictive performance on multiple township demographic data sets, especially when coping with complex population dynamics. In addition, when benchmarked against traditional forecasting models, the technique showed significant enhancements in the accuracy, stability, and adaptability of the predictive models. These results suggest that combining GA-driven evolution with BP neural networks provides a more robust and precise tool for population prediction.
https://doi.org/10.1142/S012915642540107X
Failure to account for users’ basic attributes, behavior characteristics, value attributes, social attributes, interest attributes, psychological attributes, and other factors leads to poor user experience, information overload, interference, and other negative effects. In order to develop more accurate marketing strategies, optimize the user experience, and improve the conversion rate and user satisfaction of e-commerce platforms, an accurate construction method for e-commerce user profiles based on artificial intelligence algorithms and big data analysis is proposed. Based on big data analysis technology, the basic attributes, behavior characteristics, value attributes, social attributes, interest attributes, and psychological attributes of e-commerce users are collected and integrated across multiple dimensions. An improved sequential pattern mining algorithm (PBWL) is applied to mine frequent sequential patterns in e-commerce user behavior and reveal users' behavior habits. A comprehensive attribute representation of e-commerce users is obtained by combining the LINE network model with a convolutional neural network. The firefly K-means clustering algorithm is used to cluster the e-commerce users, grouping them by the similarity of their attribute information, creating different types of user clusters, and achieving the accurate construction of e-commerce user profiles. The experimental results show that this method can build an accurate e-commerce user profile and provide strong support for personalized recommendation and precision marketing on e-commerce platforms. It digs deeply into the behavior habits of e-commerce users, accurately reflects their interest preferences and consumption characteristics, and clusters users quickly and stably with optimal user-profile clustering results. It can also divide the data into meaningful groups according to users’ consumption behavior and reveal the characteristics and values of different groups.
https://doi.org/10.1142/S0129156425401081
A control system for automatically generating a train operation evaluation curve based on a hybrid genetic algorithm is proposed to improve the safety of automatic train operation. The system is based on Internet of Things (IoT) mobile devices and utilizes sensors such as accelerometers, gyroscopes, barometers, and GPS to collect real-time data on the driver's acceleration, angular velocity, air pressure, position, and other parameters. 5G wireless communication technology is used to achieve high-speed data transmission and real-time communication with the cloud. Based on the cloud data, a spatial grid area planning model is constructed to automatically generate the train operation evaluation curve. Using a spatial dynamic programming method, the entire network of Electric Multiple Unit (EMU) trains is treated as a whole, and a dynamic model of the EMU train is constructed. The spatial area parameters of the EMU train’s automatic driving operation evaluation curve are combined with the dynamic model analysis method. By identifying and analyzing environmental parameters such as train speed and distance, the EMU train’s automatic driving operation evaluation curve is optimized. A hybrid genetic evolution learning optimization algorithm is used to fit the motion spatial parameters of the automatically generated driving operation evaluation curve, and a spatial behavior analysis simulation is created for the automatically generated control driving operation curve. Through hybrid genetic evolution learning optimization, adaptive control and simulation planning of the automatically generated driving operation evaluation curve are achieved, together with the simulation and algorithm optimization design of the curve.
The simulation results show that the method has good adaptability and strong automatic control capability. The experimental results demonstrate the control effect of the proposed method on energy consumption and train stopping error, indicating that the proposed method can effectively improve the parameter adjustment and offset correction ability of the evaluation curve generated by the high-speed train driving operation.
https://doi.org/10.1142/S0129156425400932
This paper investigates and compares the performance of three distinct models for monitoring abnormal electricity consumption behavior: Support Vector Regression (SVR), Long Short-Term Memory (LSTM), and a novel model that incorporates an attention mechanism known as PSO-ATT-LSTM (PAL), which integrates a Particle Swarm Optimization (PSO) algorithm with an attention mechanism into the LSTM framework. The PAL model is specifically designed to enhance forecasting accuracy by optimizing the network parameters through PSO and focusing on significant temporal features via the attention mechanism. This design allows PAL to outperform the other two models in predicting future electricity consumption, particularly in identifying anomalous patterns. The study utilizes a dataset with hourly electricity consumption values and evaluates the models using metrics such as Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). The experimental results demonstrate the superiority of the PAL model in terms of predictive accuracy and its ability to capture abnormal consumption behaviors more effectively than SVR and LSTM.
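The three evaluation metrics named above follow directly from their standard definitions; the two short series below are made-up hourly consumption values, chosen so that the last point is an anomalous spike the model misses.

```python
from math import sqrt

def mape(actual, pred):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root Mean Squared Error."""
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

actual = [100.0, 110.0, 120.0, 400.0]   # last point: an anomalous spike
pred = [98.0, 112.0, 118.0, 130.0]      # a forecast that misses the anomaly

err_mape = mape(actual, pred)
err_mae = mae(actual, pred)
err_rmse = rmse(actual, pred)
```

Note how RMSE penalizes the single missed spike far more heavily than MAE, which is why reporting all three metrics gives a fuller picture of anomaly-detection performance.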
https://doi.org/10.1142/S0129156425400944
Since distributed optical fiber sensors generate strong interference waves during seismic wave acquisition, a distributed optical fiber sensor interference wave suppression method for high-resolution seismic observation is proposed in order to obtain high-resolution seismic waves. By analyzing the generation process and characteristics of interference waves in distributed optical fiber sensors in high-resolution seismic observation, the interference waves are identified as cable waves and spring waves. A recursive cycle-spinning adaptive-threshold wavelet transform method is established. An improved multi-stage, multi-model particle swarm optimization algorithm is used to optimize the adaptive threshold and improve the interference wave suppression effect. The optimized recursive cycle-spinning adaptive-threshold wavelet transform method is used to decompose the seismic waves collected by the distributed optical fiber sensor, suppress the interference wave components, and obtain the effective seismic waves after interference wave suppression. The experimental results show that for both single-channel and multi-channel seismic waves, this method not only preserves the coherence of the effective seismic waves but also suppresses the interference waves more thoroughly. Under different noise levels, the correlation coefficient and amplitude attenuation of this method are high, giving a better interference wave suppression effect.
https://doi.org/10.1142/S012915642540097X
China and Russia have completely different cultural backgrounds, historical traditions and ways of thinking, which makes it difficult to directly map or accurately convey many culturally unique concepts, customs and expressions during translation. For example, some idioms, sayings or cultural symbols may have completely different meanings or no equivalents in the two cultures. As a result, consistency verification of the Chinese and Russian traditional cultural external publicity translation text corpus performs poorly. To solve this problem, this paper designs a consistency verification method for the Chinese and Russian traditional cultural external publicity translation text corpus based on CNN-BiGRU. Through the part-of-speech correspondence of the corpus, the texts are represented as vectors; the feature weights of the corpus are calculated, the collocation information between feature items is determined, and the maximum likelihood of the features is calculated using data smoothing technology. The rules are adjusted to extract the characteristics of the corpus. The key feature source documents of each corpus are preprocessed, the source documents and their corresponding key features are determined, and TF-IDF weighting is used to calculate the key feature sources of the corpus.
Based on the rarity of document data, the BERT model is introduced as an encoder to determine the key features of the Chinese and Russian traditional cultural external publicity translation text corpus, integrating the CNN-BiGRU algorithm to design a consistency verification algorithm for Chinese and Russian traditional cultural external publicity translation text corpus. Set up the convolution layer, weight sharing layer, pooling layer, and BiGRU verification layer to build a CNN-BiGRU model to achieve consistency verification of the Chinese and Russian traditional cultural external publicity translation text corpus. Experimental results show that this method can improve the consistency verification effect of Chinese and Russian traditional cultural external publicity translation text corpus.
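The TF-IDF weighting step mentioned above can be sketched as follows; this is a generic TF-IDF computation over tokenized documents, not the paper's exact feature pipeline:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weight for each term of each tokenized document:
    (term count / doc length) * log(N / document frequency)."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

A term appearing in every document gets weight zero (log of 1), while a term unique to one document is weighted up.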
https://doi.org/10.1142/S0129156425400993
The coupling between friction-induced negative damping and related damping in low-speed hydraulic cylinders generates vibration that degrades the accuracy of hydraulic equipment. To address this problem effectively, this paper builds a neural network model that accounts for the negative-damping and related damping effects under low-speed conditions and examines their coupling mechanism. Based on the structure of the hydraulic cylinder, the nonlinear stiffness characteristics of the instrumented cylinder were analyzed and its equivalent model was obtained. At the same time, the sensor system applies feedback control to the vibration velocity and displacement according to changes in the piston position, in order to analyze the coupling mechanism of hydraulic-cylinder vibration under low-speed operating conditions. The results show that the feedback controller operates systematically, and that the vibration-suppression control can minimize serious damage and effectively improve the efficiency and stability of the system. In the actual manufacturing process, we studied the vibration coupling effect of the hydraulic cylinder under low-speed conditions, verified the variation characteristics of its vibration amplitude and frequency, and revealed the role of the related damping in the vibration.
https://doi.org/10.1142/S012915642540083X
In the era of cloud computing, businesses are increasingly relying on cloud platforms to streamline their operations and deliver services efficiently. Recommendation systems play a pivotal role in suggesting suitable services and resources to enhance user experience and optimize resource allocation. This paper presents a novel approach, the Multi-Model Fusion Recommendation Algorithm (MMFRA), which integrates multiple recommendation models using advanced fusion techniques to enhance the accuracy of recommendations in cloud platform business scenarios. The implementation process of MMFRA involves combining diverse recommendation models, such as collaborative filtering, content-based filtering, and matrix factorization, into a unified framework that leverages their strengths while mitigating individual limitations. This fusion process is designed to achieve higher precision in service recommendations by considering various aspects of user behavior and preferences. Through a comprehensive evaluation in a simulated cloud platform environment, MMFRA demonstrates superior performance in terms of recommendation accuracy and user satisfaction. The proposed algorithm offers significant potential for enhancing the effectiveness of cloud platform services, ultimately benefiting both service providers and users.
https://doi.org/10.1142/S0129156425400841
This paper presents an innovative approach to analyzing and enhancing the effectiveness of cultural dissemination using a multi-layered algorithm framework based on feature selection and weight learning. We first employ the Least Absolute Shrinkage and Selection Operator (Lasso) regularization technique for feature selection, identifying the most informative features crucial for predicting the power of cultural transmission. Following this, a reinforcement learning framework based on Deep Q-Networks (DQN) is established, incorporating a reward mechanism that favors feature combinations promoting cultural dissemination. Through interaction with the environment, the model learns the weights of these features, reflecting their contribution to successful cultural transmission. The identified features and learned weights are then integrated into a multi-layered algorithmic framework. Each layer of this framework represents a different aspect of cultural transmission, such as content creation, dissemination channels, and audience feedback, ensuring effective interaction between layers. Finally, the model is applied to real-world cultural dissemination cases, like popular music, movies, or literary works, to validate its effectiveness. The results demonstrate the potential of this approach in providing insightful strategies for optimizing cultural dissemination.
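The Lasso feature-selection step can be illustrated with a minimal coordinate-descent solver; this generic sketch (tiny hand-made data, assumed regularization strength) shows how the L1 penalty zeroes out an uninformative feature:

```python
def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft thresholding.
    Minimizes (1/2n) * ||y - Xw||^2 + lam * ||w||_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        for j in range(d):
            # Partial residual excluding feature j.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(d) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-threshold: coordinates with |rho| <= lam are set to zero.
            w[j] = (max(abs(rho) - lam, 0.0)
                    * (1 if rho > 0 else -1)) / z if z else 0.0
    return w

# Feature 0 drives y (y = 2 * x0); feature 1 is nearly uncorrelated noise.
w = lasso_cd([[1, 1], [2, -1], [3, 1], [4, -1]], [2, 4, 6, 8], lam=0.1)
```

The penalty shrinks the informative weight slightly below its least-squares value and drives the uninformative one exactly to zero, which is what makes Lasso usable for feature selection.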
https://doi.org/10.1142/S0129156425400853
Given the massive and complex nature of new media advertising information, users need to search and filter relevant advertisements effectively. This paper proposes a multifunctional search optimization scheme for new media advertising information based on XML technology. The scheme first integrates user attribute data and constructs membership functions and deterministic index systems to accurately characterize the comprehensive attributes of advertisements and classify them. By calculating the average value of constant threshold elements, a model of the attractiveness of advertisements to users is established to quantify the similarity between text information and queries, as well as user browsing behavior. On this basis, a search model is designed to sort and merge advertising information and generate accurate search results. Experiments show that dynamic new media advertisements are more attractive than static ones, and that this method raises search efficiency above 90% while maintaining a 98% match between search content and keywords.
https://doi.org/10.1142/S0129156425400877
It is difficult for traditional generation methods to accurately match dance movements to dance music in automatic dance generation. This paper introduces deep learning (DL) technologies and proposes a DL-based system for automatic dance generation. The dance generation algorithm is the system’s linchpin. The first step is to extract dance and audio characteristics; identifying the skeletal data of the dance movement is crucial to this extraction, and this paper employs an enhanced 3D convolutional neural network to determine the dance movement skeleton sequence. In the second step, a generative model capable of producing dance moves that precisely match the dance music is designed. The experimental results demonstrate that the dance movement recognition method proposed in this paper is highly accurate, that the generated dances are very close to actual dance movements, that the music matching rate is higher, and that the overall dance generation effect is favorable.
https://doi.org/10.1142/S0129156425400865
Predicting the remaining service life of products often suffers from poor predictive performance. Therefore, this paper proposes a method for predicting the remaining service life of products under limited data based on Gaussian stochastic processes. The types of product failure and degradation under limited data are analyzed, and the factors affecting life-degradation performance are determined through constant-stress accelerated degradation tests, step-stress accelerated degradation tests, and sequential-stress accelerated aging tests. The Wiener process is used to model the performance-degradation characteristics of the product: a failure threshold is set, the remaining life is defined in terms of the degradation process, an equal-interval partitioning method determines the product health index, and the failure process under limited data is clarified. By calculating the mathematical expectation, the numerical characteristics of the remaining life are defined; the variance and moments determine the predicted second-order central moment and origin moment. Based on the monotonically increasing property of the inverse Gaussian process, the probability distribution of the product-failure random variable is determined. Under limited data, the actual working time of the product and the degradation amount on the time scale are identified, and a preset threshold is applied to the degradation amount. A remaining-life prediction model for product degradation is then built to output the prediction results. The experimental results show that this method accurately predicts the remaining service life of the product and has good predictive performance.
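The Wiener-process degradation model lends itself to a simple Monte Carlo illustration. The sketch below, under assumed drift and diffusion parameters, estimates mean remaining life as the first-passage time of the degradation path to a failure threshold:

```python
import random

def simulate_rul(x0, threshold, drift, sigma, dt=0.1, n_paths=2000, seed=0):
    """Monte Carlo estimate of mean remaining useful life for a Wiener
    degradation process X(t) = x0 + drift*t + sigma*B(t): average the
    first-passage times to the failure threshold."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_paths):
        x, t = x0, 0.0
        while x < threshold:          # degrade until the threshold is crossed
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return sum(times) / n_paths
```

For drift 0.5 and threshold 1.0 starting from 0, the analytic mean first-passage time is (1.0 - 0.0)/0.5 = 2.0, which the simulated estimate should approach.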
https://doi.org/10.1142/S0129156425401159
Graphs provide essential means for organizing and analyzing complex equipment data. Although link prediction techniques have been widely applied to enhance knowledge graphs, existing methods still show room for improvement in accuracy, especially when dealing with sparse data. To address this, we introduce ELPGPT (Large Language Models Enhancing Link Prediction in Electrical Equipment Knowledge Graph), a novel approach that integrates large language models into link prediction to enhance the accuracy of relation prediction within electrical equipment knowledge graphs. The core of the ELPGPT method lies in the combination of large language models with traditional knowledge graph link prediction techniques. By leveraging the deep semantic understanding capabilities of large language models, this method effectively extracts relational features and enhances the handling of sparse data. Additionally, we employ a Retrieval-Augmented Generation (RAG) approach, which, by integrating external data sources, further enhances the precision and relevance of predictions. Experiments on the Electrical Equipment Knowledge Graph (EEKG) demonstrate that ELPGPT significantly improves performance across several metrics, including Hit@k, Mean Rank (MR), and Mean Reciprocal Rank (MRR). These results validate the effectiveness and potential applications of this method in the domain of link prediction for electrical equipment knowledge graphs.
https://doi.org/10.1142/S0129156425401160
The stable operation of the power system is closely related to the national economy and people’s livelihoods, so the timely detection, qualitative assessment, and handling of major equipment defects are crucial. Classifying defect levels in main electrical equipment is a fundamental task in this process and is often performed manually, supplemented by knowledge bases or expert systems. However, this approach is time-consuming and labor-intensive, involves challenging human–machine interaction, and relies on expert experience. Conversational large language models such as ChatGPT, ERNIE Bot, and ChatGLM have garnered widespread recognition in various domains, but they may make errors in the reasoning process that yield biased or even erroneous outputs, a phenomenon referred to as “hallucination”. The hallucination problem of large language models poses challenges in specialized fields. To mitigate it, researchers often incorporate domain-specific knowledge into these models through methods such as fine-tuning or prompt learning. To enhance model performance while minimizing computational cost, this study adopts the prompt learning approach. Specifically, we propose a knowledge-graph-based prompt learning framework for large language models, aiming to supply the model with reasoning support drawn from the information stored in the knowledge graph and to obtain explainable reasoning results. Experimental results demonstrate that our module achieves superior results on the power defect dataset compared with the non-prompt method.
https://doi.org/10.1142/S0129156425400737
As the main power source, the lithium battery must ensure the safety of drivers and passengers not only under complex external conditions but also under harsh use conditions, and even when damaged. Throughout this process, the state of the battery itself must be evaluated in order to assure its safe usage and to develop a more effective battery management plan. State of charge (SOC), state of health (SOH), and state of power (SOP) are the variables commonly used to describe the state of a lithium battery; these three characteristics describe the battery’s ability to continue providing or receiving power, its remaining service life, and its ability to promptly output or receive power. To effectively evaluate battery health, this paper proposes a dual-mode extended Kalman filter (EKF) algorithm for the remote estimation of the SOC and SOH of high-energy lithium batteries. In the estimation procedure, the open-circuit voltage (OCV) is also included as a state variable in the iterative process, which allows more accurate results. The state-space equation is established from a first-order RC equivalent-circuit model, and battery state estimation and parameter identification are completed with the dual extended Kalman filtering (DEKF) algorithm, realizing the estimation of SOC and SOH.
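The paper’s DEKF estimates both states and parameters; as a hedged illustration of just the SOC branch, the scalar EKF step below combines coulomb counting (predict) with a terminal-voltage correction (update). The linear OCV curve, internal resistance, and noise covariances here are placeholders, not the paper’s values:

```python
def ekf_soc_step(soc, P, current, dt, capacity, v_meas, r_int,
                 ocv=lambda s: 3.0 + 1.2 * s,   # placeholder linear OCV curve
                 docv=lambda s: 1.2,            # its derivative d(OCV)/d(SOC)
                 q_proc=1e-7, r_meas=1e-3):
    """One predict/update step of a scalar EKF estimating SOC:
    coulomb counting predicts, the terminal-voltage measurement corrects."""
    # Predict: coulomb counting (discharge current taken as positive).
    soc_pred = soc - current * dt / capacity
    P_pred = P + q_proc
    # Update: linearize the OCV curve around the predicted SOC.
    h = docv(soc_pred)
    innov = v_meas - (ocv(soc_pred) - current * r_int)
    S = h * P_pred * h + r_meas
    K = P_pred * h / S
    return soc_pred + K * innov, (1.0 - K * h) * P_pred
```

Starting from a wrong SOC guess and feeding a terminal voltage consistent with the true SOC, repeated steps pull the estimate toward the true value.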
https://doi.org/10.1142/S0129156425400749
To realize automatic location selection for urban and rural emergency shelters, an optimal site-selection method based on an improved ant colony algorithm is proposed. Combined with grid-based spatial planning of urban and rural land parcels, spatial grid planning of the urban-rural distribution and remote sensing big-data detection are adopted to build a spatial-grid geospatial detection model of urban and rural emergency shelters. Remote sensing images of the shelters and wireless-sensor information sampled by unmanned aerial vehicles (UAVs) are used to establish a database of shelter remote sensing images and UAV wireless-sensor monitoring maps. The improved ant colony optimization algorithm optimizes the path in the site-selection process, and the overall geographic and geometric characteristics of candidate sites are analyzed. Texture, color, shape, and other features related to the site selection are extracted, and a distributed dynamic ant colony optimization control method detects the distribution of geometric deformation features and optimizes the site selection. Positioning is realized through spatial enhancement of dynamic remote sensing information and localization of spatial feature points. The simulation results show that this method has good positioning ability and a strong capacity to optimize the siting of emergency shelters in urban and rural areas.
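The improved ant colony algorithm itself is not detailed in the abstract; for orientation only, the following is a textbook ACO path search over a small directed distance matrix (all parameters illustrative):

```python
import random

def aco_shortest_path(dist, start, goal, n_ants=20, n_iter=30,
                      alpha=1.0, beta=2.0, rho=0.5, seed=1):
    """Basic ant colony optimization for a shortest path.
    dist[i][j] > 0 means an edge of that length; 0 means no edge."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]       # pheromone on each edge
    rng = random.Random(seed)
    best_path, best_len = None, float("inf")
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            path, node, visited = [start], start, {start}
            while node != goal:
                cand = [j for j in range(n)
                        if dist[node][j] > 0 and j not in visited]
                if not cand:                  # dead end: abandon this ant
                    path = None
                    break
                # Edge attractiveness: pheromone^alpha * (1/length)^beta.
                w = [tau[node][j] ** alpha * (1.0 / dist[node][j]) ** beta
                     for j in cand]
                node = rng.choices(cand, weights=w)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(dist[path[i]][path[i + 1]]
                             for i in range(len(path) - 1))
                paths.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # Evaporate, then deposit pheromone inversely proportional to length.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for path, length in paths:
            for i in range(len(path) - 1):
                tau[path[i]][path[i + 1]] += 1.0 / length
    return best_path, best_len
```

On a toy network with a long direct edge and a short two-hop detour, the colony converges on the detour.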
https://doi.org/10.1142/S0129156425400713
Tourism route planning is affected by factors such as the destination and tourists’ preferences, which leads to poor automatic matching of routes. A mathematical model of tourism route planning based on deep learning is therefore proposed. Under the total route constraints, a big-data model of the distribution of tourists’ personalized characteristics is established from their historical preference information and prior information; spatial-constraint and time-constraint parameters are input; deep learning matches geographic-location information against tourist-interest parameters; and the statistical features of the route-planning model parameters are extracted. Under the control of the deep learning and geographic-information data sets, a multi-constraint, multi-objective hierarchical analysis of tourists and destinations is carried out to realize the optimized design of the route-planning algorithm. The simulation results show that the method has high accuracy and small deviation. It improves tourist satisfaction and helps users complete visual analysis tasks such as route mining, route planning, and destination analysis.
https://doi.org/10.1142/S0129156425400695
To improve the accuracy of large-scale text recognition and reduce recognition time, this paper proposes a Transformer-based Seq2Seq text recognition method for large-scale corpus linguistic knowledge. The method first extracts linguistic-knowledge Seq2Seq text features from a corpus based on frequency cross-entropy, then represents word vectors with a fuzzy word model, and finally introduces the Transformer decoding concept to construct a large-scale-corpus linguistic-knowledge Seq2Seq text recognition model, realizing the Seq2Seq text recognition function. The experimental results show that the proposed method achieves a recognition accuracy above 93.5% and a recognition time below 3.2 ms for thousands of pending data items, outperforming the comparison methods and demonstrating good application performance.
https://doi.org/10.1142/S0129156425400804
In recent years, the rapid growth of mobile communication has led to the continuous expansion of network scale, and the energy consumption of wireless communication has also risen rapidly. Under high-density networking conditions, frequent communication activities greatly increase energy consumption. To reduce channel competition, optimize resource scheduling, and lower equipment power consumption, the target wake time (TWT) mechanism in the IEEE 802.11ax system plays an important role through timely scheduling and transmission. To improve channel efficiency for multi-user (MU) transmission and handle complicated traffic conditions, this paper presents a traffic-awareness-based TWT scheduling scheme (TAT). Traffic is predicted and classified using a spatio-temporal series model to generate traffic-based TWT parameters. Subsequently, a time–frequency scheduling table is developed to adaptively assign Resource Units (RUs) and time slices to active STAs. Simulation results show that TAT can guarantee the Quality of Service (QoS) under dynamic changes in power service and delay requirements while reducing power consumption, effectively meeting the energy-saving demands of intelligent access scenarios for power terminals.
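As a toy illustration of traffic-aware wake scheduling (not the 802.11ax TWT negotiation itself, whose parameters and frame exchanges are far richer), the sketch below allocates wake slots to stations in proportion to their predicted load and spreads them over the schedule period:

```python
def twt_schedule(predicted_load, n_slots=8):
    """Toy traffic-aware TWT schedule: each station gets wake slots in
    proportion to its predicted load, spread evenly over the period."""
    total = sum(predicted_load.values())
    schedule = {}
    for sta, load in predicted_load.items():
        k = max(1, round(n_slots * load / total))  # at least one wake slot
        step = n_slots / k
        schedule[sta] = sorted({int(i * step) % n_slots for i in range(k)})
    return schedule
```

A heavily loaded station wakes in most slots, while lightly loaded ones sleep through most of the period, which is the power-saving intent of TWT.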
https://doi.org/10.1142/S0129156425400920
The construction of new electric power systems has multiplied the number of access nodes for electric power IoT information communication, accompanied by a deepening trend of business cloudification. However, because the resource linkage and regulation capabilities of the electric power communication network on the cloud side, network side, and edge side are insufficient, and a convenient, unified cloud-network resource convergence and control mechanism is lacking, the existing electric power communication architecture increasingly struggles to meet the demand for deterministic multi-service bearing. It is therefore necessary to evolve the original network equipment toward a white box model, opening interfaces for fine-grained sensing and full-area control, in order to provide more accurate and controllable network transmission services. In this paper, we first propose a cloud-network cooperative resource scheduling architecture for the electric power white box network. Through white box switches, this architecture establishes a bearer network that keeps cloud, edge, and end computing power highly consistent and cooperative, uniformly schedules the global resources of the cloud-network convergence network, and improves the network service quality between the cloud side and the edge side. Furthermore, we design a multi-dimensional resource scheduling method for cloud-network synergy in the electric power white box network. This involves constructing a white box network virtualized heterogeneous resource (VHR) model and a heterogeneous resource control flow (HRCF) model. By jointly controlling the heterogeneous information, communication, storage, and computing resources, together with the white box switch-specific pipeline resources, we transform the scheduling problem into a merged deployment problem for service function chains (SFCs).
We then carry out merger optimization of function chains of the same kind and propose a heuristic G-GS algorithm. Simulation results demonstrate that the proposed method significantly reduces the consumption of white box pipeline resources in the electric power white box network, thereby reducing the processing delay of SFCs and improving the quality of electric power service bearing.
https://doi.org/10.1142/S012915642540066X
Addressing the challenges of complex movement trajectories and rapid action changes in martial arts performances, this study introduces a novel posture recognition algorithm based on Random Forest and Skeletal Feature Extraction (RF-SFE) for martial arts leg movements. Unlike traditional posture recognition methods that struggle with accuracy, RF-SFE aims to provide intelligent analysis of training postures to assist practitioners in efficient training. The algorithm initially employs advanced skeletal feature extraction techniques to identify and articulate the relative positions and movements specific to martial arts. These extracted spatial features enhance the flexibility in modeling the unique dynamics of martial arts. Subsequently, Random Forest classification is utilized to categorize different leg movements, leveraging its strength in handling high-dimensional data and providing robust classification. Comparative experiments on diverse martial arts posture datasets demonstrate a significant improvement in recognition rates over baseline methods. This validates the effectiveness of the RF-SFE method in recognizing martial arts postures, offering scientific guidance for practitioners’ training regimens.
https://doi.org/10.1142/S0129156425400610
This research focuses on improving the level of intelligent control of power systems, with special emphasis on accurate prediction of short-term power loads to meet the growing power demand and improve the quality of power supply. In the field of short-term power load forecasting, many factors affect the accuracy, so the core problem of this study is how to improve the accuracy of load forecasting. Two improved short-term load forecasting methods are proposed. First, the error compensation kernel random weight neural network is used to establish an input index system including seasonality, weather change, industrial production and other factors to improve the forecast accuracy. Second, the method based on multivariate chaotic time series and weighted kernel random weight neural network aims to reduce the prediction error caused by chaotic dynamics and sample weights, and better capture the complex dynamic changes of power loads. A highly accurate power load forecasting model with low levels of RMSE and MAE was obtained. Compared with the existing methods, these two improved methods have obvious advantages in accuracy, generalization performance and training efficiency, and provide a reliable tool for intelligent control of power system and promote the sustainable development of power system.
https://doi.org/10.1142/S0129156425400646
With the rapid advancement of the digital economy, the financial market confronts unprecedented complexity and interconnectivity, rendering the precise prediction of credit bond default risk particularly crucial. This paper introduces a novel credit bond default risk measurement model (GST-GRU) predicated on a spatio-temporal attention network and genetic algorithm, designed to enhance the accuracy and robustness of risk prediction. Initially, data preprocessing is undertaken, encompassing time series data cleaning, completion of missing values for historical financial information and bond default statuses, and extraction of spatial discrete information through independent vector coding. Subsequently, a spatio-temporal attention mechanism is employed to amplify feature information in both domains, while the GRU network captures the long-term dependencies within the time series data. Thereafter, model parameters are refined using a genetic algorithm to ensure global optimality. Experimental results demonstrate that the GST-GRU model markedly improves prediction accuracy across multiple public and self-constructed datasets, surpassing traditional models. This research furnishes robust technical support for risk management in the financial market, fosters the evolution of credit bond default risk prediction technology, and lays the groundwork for the intelligence and automation of future financial systems.
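The genetic-algorithm refinement step can be illustrated with a minimal real-coded GA; this generic sketch (elitist truncation selection, uniform crossover, Gaussian mutation) optimizes a toy quadratic rather than the GST-GRU parameters, and every setting here is an assumption:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, n_gen=40,
                     mut_rate=0.2, seed=7):
    """Minimal real-coded GA: keep the best half (elitism), breed the rest
    by uniform crossover, and apply Gaussian mutation clipped to bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = [p1[i] if rng.random() < 0.5 else p2[i]
                     for i in range(dim)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i]
                                           + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy objective: minimize squared distance to (0.3, -0.5) over [-1, 1]^2.
best = genetic_optimize(lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.5) ** 2,
                        [(-1.0, 1.0), (-1.0, 1.0)])
```

In the paper’s setting the chromosome would encode model parameters and the fitness would be a validation loss; the loop structure is the same.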
https://doi.org/10.1142/S0129156425400622
The velocity at the top of the upper mantle is an important parameter for studying the evolution of the Earth’s crust, and the formation and evolution of surface tectonic units are closely related to the velocity structure there. In modern seismology, the travel times of seismic waves between earthquakes and stations are measured with high accuracy and reliability. Studies of the Earth’s layered structure and of the deep driving forces of plate tectonics rely chiefly on interpretations of velocity structure, and extracting velocity information from travel-time information is one of the most important geophysical methods; it has been widely applied to obtain the velocity structure of the crust and mantle. Seismic tomography is among the most effective means of studying the velocity structure of the Earth’s interior. However, results obtained by manual extrapolation and distributional assumptions are sometimes unsatisfactory because knowledge acquisition is difficult and experts’ grasp of multidimensional data is limited. With the development of artificial intelligence technology, the self-organized learning ability and powerful classification capability of artificial neural networks can improve the interpretation accuracy and credibility of seismic tomography. The method studied in this paper is therefore a seismic travel-time tomography method that uses artificial intelligence technology and seismic data to invert the velocity structure at the top of the upper mantle. The method comprises four parts: model parameterization, calculation of travel times and paths, inversion calculation, and performance evaluation.
The first part divides the model space of the study area into blocks or grid nodes (model parameterization); the paths and travel times of seismic rays from source to station are then calculated; a neural network model is built from the difference between the theoretical and observed travel times of the seismic waves, i.e., the travel-time residual; and finally the seismic tomographic image of the study area is obtained.
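The travel-time residual that drives the inversion can be illustrated with a straight-ray toy model (real tomography traces curved rays through a 2D/3D grid; the cell lengths and slowness values here are purely illustrative):

```python
def travel_time(slowness, ds):
    """Travel time of a straight ray crossing cells of equal length ds:
    t = sum(slowness_i * ds), where slowness = 1 / velocity."""
    return sum(s * ds for s in slowness)

def residuals(observed, slowness_model, ds):
    """Travel-time residuals: observed minus theoretical travel times,
    the quantity the inversion (here, a neural network) is trained on."""
    t_calc = travel_time(slowness_model, ds)
    return [t_obs - t_calc for t_obs in observed]
```

A positive residual means the true medium is slower than the current model along that ray, so the inversion should raise the slowness (lower the velocity) of the crossed cells.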
https://doi.org/10.1142/S0129156425400555
Building Information Modeling (BIM) technology has revolutionized the architectural and construction industries by providing detailed digital representations of buildings, enhancing design efficiency and project management. Despite its widespread application, the utilization of BIM in optimizing interior space planning, particularly in boutique accommodations like guesthouses, remains underexplored. This research addresses the gap by introducing a novel method for interior environment space planning of guesthouses based on the topological relationships derived from BIM data. This study developed a new algorithm that utilizes detailed topology information from BIM in order to make space planning decisions more efficiently, thereby improving space utilization efficiency and guest experience. Through a comprehensive analysis of BIM topological relationships and the application of advanced optimization techniques, our method aims to optimize the use of space while considering the unique constraints and requirements of guesthouse environments. The proposed algorithm demonstrates significant improvements in spatial efficiency and design quality when tested against traditional planning methods on real-world BIM datasets. This research not only contributes a novel approach to the field of architectural design and planning but also offers practical implications for the enhancement of interior spaces in guesthouses, potentially influencing future applications of BIM technology in the hospitality industry.
https://doi.org/10.1142/S0129156425400567
At present, urban traffic systems face congestion and low efficiency, and traditional methods have limitations when dealing with complex urban road networks. This paper explores a new method for capacity optimization of road networks in a smart city environment and proposes a graph neural network-based method named the “Smart City Road Optimization Graph Neural Network” (SCRO-GNN). SCRO-GNN first collects and preprocesses multi-source data, including road network data, traffic flow, accident records, and environmental factors. Key node and edge features, including the number of lanes at each intersection, the traffic flow of each section, and the adjacency matrix of the road network, are defined to characterize the network structure. A graph neural network model is then constructed and trained on the graph structure of the road network to predict the traffic flow of different sections and evaluate road capacity. This paper tests the performance of SCRO-GNN on real road networks in multiple cities. The results show that, compared with traditional traffic flow prediction models, SCRO-GNN significantly improves prediction accuracy, especially on highly complex urban road network structures. Based on these predictions, the proposed optimization strategy performs well in reducing traffic congestion and improving road-use efficiency. This research demonstrates the potential of graph neural networks in smart city road network optimization and provides a new direction for future traffic system research. The successful implementation of SCRO-GNN is expected to provide more efficient and intelligent solutions for urban traffic management and planning.
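A single message-passing layer of the kind such road-network GNNs build on can be sketched in a few lines; this is a generic mean-aggregation layer over an adjacency matrix, not the SCRO-GNN architecture itself:

```python
def gnn_layer(adj, features, weight):
    """One mean-aggregation message-passing layer: each node averages its
    neighbors' features (plus its own), then applies a linear map + ReLU."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # self-loop included
        agg = [sum(features[j][k] for j in neigh) / len(neigh)
               for k in range(len(features[0]))]
        out.append([max(0.0, sum(agg[k] * weight[k][c]
                                 for k in range(len(agg))))
                    for c in range(len(weight[0]))])
    return out
```

In a road-network setting, nodes would be intersections, `adj` the road-network adjacency matrix, and `features` per-node attributes such as lane counts and observed flow; stacking such layers lets information propagate across the network.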
https://doi.org/10.1142/S0129156425400579
With the rapid development and wide application of a new round of information technologies, such as the global Internet of Things, the new generation of mobile Internet, and cloud computing, informatization has opened a new stage of smart tourism (ST), and ST has become the main trend of tourism development in China now and for a long time to come. Tourism has become one of the important pillar industries in Jiangxi, and it plays an increasingly prominent role in stimulating and supporting economic and social development. Smart tourism is an important means of improving tourism services, management, and marketing, and an important lever for upgrading Jiangxi Province’s tourism industry and achieving leapfrog development. This paper builds on and continually refines a data warehouse integration mechanism: under the data warehouse mechanism, spatial data, virtual reality data, and dynamic data are effectively integrated to form a comprehensive data warehouse oriented to tourism promotion, scenic-spot management, and departmental management. It aims to promote the rapid implementation of ST, give tourists a comfortable travel experience, and save costs and improve efficiency for tourism service providers.