Over the long history of humanity, continuous exploration of natural phenomena and social life has given rise to many scientific fields, and robots are a product of this technological development at a certain stage. At present, hundreds of different types of robots are applied in production and daily life worldwide, yielding significant economic benefits. However, their technical limitations have gradually emerged: shortcomings in visual perception remain unresolved, object recognition is not precise enough, and information resources cannot be effectively exploited for control. These are the main factors constraining further progress in robotics. The emergence of big data and Artificial Intelligence (AI) has brought unprecedented opportunities to robots. In particular, big data analysis is increasingly applied in intelligent manufacturing and smart city construction, providing new solutions for robot services. These technologies not only enable people to grasp large amounts of valuable knowledge quickly and accurately, but also better tap the enormous potential of human intelligence, which largely drives the robot industry toward intelligence. By summarizing existing research results, this paper explores the development trend of robot object recognition systems and focuses on their key technologies: feature matching-based pattern recognition and acceleration strategy-based improvement of detection efficiency. In response to current problems, corresponding solutions are proposed and comparative experiments are designed. The experiments show that the anti-interference detection accuracy of the robot object recognition system based on big data and AI algorithms improves by about 12.48%, which we hope provides a reference for future robot system development.
With the deepening reform of the power and carbon markets, great progress has been made in informatization. However, power information may be stored in many scattered places, and it is difficult to share data between different departments or systems. This leads to fragmentation and redundancy of information and makes information exchange difficult. Blockchain can improve the reliability of data processing in a Power-Carbon Management System (PCMS). Because a power information management system can effectively control information flow and resource allocation, PCMS informatization has become the basis for improving the quality and efficiency of project management and maximizing a project's environmental and economic benefits. Driven by the requirements of low-carbon and stable power production, PCMS emphasizes the application of information in power management but pays insufficient attention to the informatization of power production management. Therefore, this paper analyzes the current situation, characteristics, and existing problems of PCMS through machine learning algorithms, then formulates design principles, and finally proposes an optimization path for PCMS according to those principles. The information collection ability and system control ability of the optimized PCMS are better than those of the original PCMS: information collection ability is 14.2% higher and system control ability is 9.8% higher. In general, both blockchain and machine learning can improve the data reliability of PCMS.
The piano occupies an important place in music and has long been popular with the public. As more and more people take up the piano, traditional piano teaching can no longer meet learners' needs in time and space. To improve the quality and effectiveness of piano teaching, this paper integrates machine learning algorithms into the teaching system. Specifically, we construct a piano gesture recognition model based on the Extreme Learning Machine (ELM) algorithm to achieve accurate recognition and analysis of students' playing gestures. The experimental results show that the ELM-based gesture recognition model, combined with sensors, can effectively capture the dynamic information of the learner's hand joints and present it intuitively. Compared with traditional recognition methods, the model substantially improves recognition accuracy. In addition, in comparison experiments the k-means model shows better overall classification performance and can recommend appropriate piano learning resources for learners based on the classification results. Application experiments show that as the number of learner requests increases, the model still responds to requests within a short time and recommends the corresponding resources, with a success rate of over 99%.
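The distinctive feature of an ELM, as used in the abstract above, is that only the output layer is trained: input weights stay random and output weights are solved in closed form. Below is a minimal illustrative sketch in NumPy; the gesture data here are synthetic placeholders (random joint-angle vectors), not the paper's sensor data, and the layer sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for gesture data: n samples of d joint-angle features (hypothetical)
n, d, classes = 200, 10, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, classes, size=n)
T = np.eye(classes)[y]                  # one-hot targets

# ELM: random, untrained input weights; output weights solved in closed form
hidden = 64
W = rng.normal(size=(d, hidden))
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                  # hidden-layer activations
beta = np.linalg.pinv(H) @ T            # least-squares output weights

pred = np.argmax(H @ beta, axis=1)
train_acc = (pred == y).mean()
```

Because training reduces to one pseudoinverse, an ELM can be refit very quickly, which is one reason it suits interactive settings like gesture feedback.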
To overcome the shortcomings of traditional modulation identification methods in small-sample signal processing, this paper proposes a new model-transfer modulation identification algorithm that integrates machine learning. The algorithm first combines a convolutional neural network (CNN) and a long short-term memory network (LSTM) to fully exploit their feature extraction capabilities in the spatial and temporal dimensions. A parallel structure significantly improves the model's performance in both dimensions, ensuring high efficiency and accuracy in signal recognition tasks. Then, through model transfer, the common features in the pre-trained model are retained and fine-tuned to adapt to new signal recognition needs, effectively solving the small-sample signal recognition problem. Experimental results show that the proposed algorithm significantly improves signal recognition accuracy, with an average accuracy of 96%. In the 16APSK and 16QAM recognition tasks, the accuracy reaches 100% and 99%, respectively, demonstrating excellent performance.
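The parallel spatial/temporal structure can be illustrated with a toy NumPy sketch: a convolutional branch and a single minimal LSTM cell run over the same signal, with their feature vectors concatenated. This is a simplified stand-in for the paper's CNN-LSTM model, with random, untrained weights and arbitrary sizes, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_branch(x, kernels):
    # "Spatial" branch: 1-D valid convolutions + global max pooling,
    # yielding one feature per kernel (a stand-in for the CNN).
    feats = []
    for k in kernels:
        windows = np.lib.stride_tricks.sliding_window_view(x, len(k))
        feats.append(np.max(windows @ k))
    return np.array(feats)

def lstm_branch(x, Wx, Wh, b, hidden):
    # "Temporal" branch: one minimal LSTM cell stepped over the sequence;
    # the final hidden state summarizes temporal structure.
    h, c = np.zeros(hidden), np.zeros(hidden)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x_t in x:
        z = Wx * x_t + Wh @ h + b        # gates stacked as [i, f, g, o]
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

signal = rng.normal(size=64)             # toy 1-D signal
kernels = [rng.normal(size=5) for _ in range(8)]
H = 4
Wx = rng.normal(size=4 * H)
Wh = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)

# Parallel structure: run both branches, then concatenate their features
fused = np.concatenate([conv_branch(signal, kernels),
                        lstm_branch(signal, Wx, Wh, b, H)])
```

In a transfer setting, the two branches would be pre-trained on abundant signal classes and only the layers after the fused vector fine-tuned on the small-sample classes.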
University culture is an effective vehicle of ideological and political education for college students. The degree of identification with university culture reflects, and provides a reference for, the construction of university culture. This paper proposes that building cultural identity should become the mission and task of cultural quality education in Chinese universities in the new era. Chinese cultural identity is the Chinese people's positive recognition of the most significant attribute of the Chinese national community, and its core is the recognition of its basic values. The objects of cultural identity include both the excellent traditional culture of the Chinese nation and the culture with a sense of the times formed in the course of historical development. Strengthening the cultural identity of college students is a key problem that urgently needs to be solved in university talent training, and is the core content of moral cultivation and cultural quality education. At present, the study of university culture lacks a quantitative perspective. This paper combines qualitative and quantitative analysis, draws on the Delphi expert survey method, and uses embedded machine learning technology to build a university cultural identity evaluation index system covering six first-level indicators: value identity, institutional identity, behavioral identity, emotional identity, symbolic identity, and organizational identity. On this basis, an empirical analysis is carried out, providing a new perspective for university culture research.
Auditors have long used the audit risk model as a conceptual tool to assess and control the many risks that can arise during an audit. The tool guides the auditor in determining the necessary evidence for each relevant assertion and the required categories of evidence. The importance of refining audit risk assessment models rests on the crucial role these models play in allocating resources and reducing financial disparities. The challenge for such models is that inaccurate information can lead to incorrect risk assessments, legal exposure, and failure to detect material misstatements in financial statements. Hence, this research develops BP neural network-enabled machine learning (BPNN-ML) technologies for the audit risk assessment model. A regression algorithm is used to establish the audit risk assessment for data processing and to monitor it end to end. The suggested method provides a flexible framework that can be used by a wide range of organizations, including global enterprises and financial institutions, to optimize audit processes and ensure regulatory compliance. By addressing the issues inherent in traditional methods and proposing a practical alternative, this research advances auditing procedures and regulatory compliance efforts in contemporary business contexts. Experimental analysis shows that BPNN-ML outperforms manual monitoring in terms of feature importance ranking, contextual adaptability, sensitivity, performance, and optimized risk assessment.
Lacquer is a transparent coating that, once dry, creates a firm, long-lasting finish designed to be chip-resistant, waterproof, and breathable. Lacquer can be applied to a variety of materials, such as wood and metal, and comes in a range of hues and clear finishes. Lacquer painting encompasses ethnic, social, and temporal concerns, as well as unique theoretical and practical difficulties. Taking lacquer painting with realistic themes as a case, one can investigate the main issues associated with such creation in the contemporary context. The importance of emotional-cognitive expression in lacquer painting is established, along with the need for social computing in the disciplines of machine learning (ML) and decision control. Implementing a style transfer algorithm (TA) designed for lacquer painting has paved the way for the creation of artistic and cultural ceramic goods. Hence, a machine learning-transfer algorithm (ML-TA) has been designed for lacquer painting and the exquisite sheen it produces. Finishes can be matte, satin, or glossy depending on the number of layers and the method used, but they always have a smooth surface texture. Lacquer can exhibit a clean translucent appearance or be tinted to appear colored. Lacquer is comparable to varnish and urethane finishes but is applied differently: it is often sprayed onto the surface rather than brushed on and manually rubbed in with a cloth.
This paper evaluates the application of the Internet of Things (IoT) in designing smart teaching systems that optimize ideological and political education (IPE) resources using machine learning (ML). To increase the efficacy of educational reform, we also examine the problems with conventional IPE instruction. The study develops a cutting-edge smart framework for teaching ideological and political concepts in educational settings and adapts teaching to real-life scenarios. The IPE platform is a three-layer IoT architecture comprising perception, network, and application layers. The technology uses IoT and internet connections to receive information about IPE instructional activities in real time. The data are then sent over the internet to the data center, where they serve as input for applications, information analysis, and model training. By using IoT devices, we collect IPE teaching activity data sets on instructional and educational developments in courses. The principal component analysis (PCA) approach is used to remove interface, date, and time elements from the data to increase the evaluation's accuracy, and z-score normalization is used to remove noisy information. In addition, this paper proposes a novel efficient CatBoost method inspired by hunter-prey optimization (AHPO-ECB) for evaluating IPE performance; the AHPO approach is employed to reduce the misprediction rate of the CatBoost model. The proposed model is then used to analyze predicted performance using a Python program. The experimental results indicate that, compared with existing models, the proposed AHPO-ECB model performs best when assessing IPE teaching performance.
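The preprocessing pipeline the abstract describes (z-score normalization followed by PCA-based dimensionality reduction) can be sketched in NumPy. The data below are random placeholders for IPE activity features, and the 95% variance threshold is an illustrative choice, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))            # hypothetical IPE activity features

# Z-score normalization: zero mean, unit variance per feature
mu, sigma = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sigma

# PCA via SVD: keep the fewest components covering >= 95% of variance
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
var_ratio = S**2 / (S**2).sum()
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
X_reduced = Z @ Vt[:k].T                 # project onto top-k components
```

Normalizing before PCA matters: without it, features with large raw scales (e.g., timestamps) would dominate the principal components.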
This research focuses on generating urban public space color design schemes using Artificial Intelligence (AI). The designed AI-based system predicts colors suited to the environment, culture, and users, and designs color palettes for smart cities. Neural networks and clustering algorithms are used to determine the best hues and shades from input parameters such as climate, architectural style, and the general look of a specific region. For varied urban fabrics, the system proposes specific color solutions for the aesthetic and functional enhancement of the public realm. In addition to embedding AI elements into the design principles, the system enables dynamic colors that adapt to changing needs and external conditions. Testing the model in multiple cities demonstrated its ability to generate unique, context-sensitive designs, improving both aesthetic value and user satisfaction. This research highlights the role of AI in modern urban planning, presenting an innovative approach to color design that balances artistic creativity with data-driven insights. The findings offer practical implications for architects, urban planners, and designers seeking to enhance urban public spaces through personalized, AI-assisted color schemes.
Diabetes makes people more susceptible to disorders such as heart disease, kidney disease, Parkinson's disease, cataracts, and respiratory issues. Numerous tests are employed to capture the data essential for diagnosing diabetes, and the assessment is then used to decide the most appropriate course of action. Nevertheless, it remains challenging to identify the precise traits essential to the emergence of diabetes. Finding the critical factors that influence the target diabetes condition is the primary goal of this research. A Stochastic Learning Strategy with Varying Regularization Logistic Activation Multilayer Perceptron (MLP) model is implemented on the Indian Diabetes patient dataset extracted from the UCI database repository. Data preprocessing was performed on the dataset, and the correlation and distribution of all features were extracted. All classifiers demonstrate an accuracy of less than 80% on the Indian Diabetes patient dataset with and without feature scaling; the best among them, the Random Forest classifier, reaches only 78%. The dataset is fitted with an MLP under various learning rate strategies (constant, constant with momentum, constant with Nesterov momentum; inverse scaling, inverse scaling with momentum, inverse scaling with Nesterov momentum; adaptive, adaptive with momentum, adaptive with Nesterov momentum) for various activation layers, namely ReLU (Rectified Linear Unit), identity, tanh, and logistic, and the performance is analyzed. The MLP with a logistic activation layer under the adaptive momentum learning rate strategy is examined with a varying regularization parameter alpha to strengthen prediction accuracy.
The proposed Stochastic Learning Strategy with Varying Regularization Logistic Activation MLP model was further analyzed with the performance metrics precision, recall, F-score, runtime, and accuracy; the experimental results exhibit 97% accuracy for the MLP with a logistic activation layer under the adaptive momentum learning rate in predicting diabetes.
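The sweep over learning rate strategies and the regularization parameter alpha described above maps closely onto scikit-learn's MLPClassifier, which exposes a logistic activation, an adaptive learning rate schedule with momentum, and an L2 penalty alpha. The sketch below uses synthetic 8-feature data in place of the Indian Diabetes dataset, so its scores are not the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the diabetes data: 8 features, binary label
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best = None
for alpha in (1e-4, 1e-3, 1e-2, 1e-1):       # varying L2 regularization
    clf = MLPClassifier(activation="logistic",   # logistic activation layer
                        solver="sgd",
                        learning_rate="adaptive",  # adaptive learning rate
                        momentum=0.9,              # classical momentum
                        nesterovs_momentum=False,
                        alpha=alpha,
                        max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    if best is None or acc > best[1]:
        best = (alpha, acc)
```

Switching `learning_rate` to `"constant"` or `"invscaling"`, and toggling `nesterovs_momentum`, reproduces the other strategy combinations listed in the abstract.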
Music is a valuable instrument and conduit for human emotional communication, particularly for current generations. However, conventional music production processes are expensive and demand considerable time, labor, and money. Several elements, including tempo, mode, volume, and melody, contribute to the emotional valence of a musical composition; tempo is often considered the most essential. Automatic recognition of emotions in music relies on a valid model from emotional psychology. The challenges include automated recognition of music style and emotion, lack of musical depth, ethical concerns, and loss of human connection. Emotions are interpretive, which implies that mental models of musical emotion use diverse modeling approaches. Machine learning (ML) has been applied to high-frequency neurophysiological data to increase the accuracy of hit-song prediction: by applying ML to brain data acquired while individuals listened to new music, popular songs could be predicted with near-perfect accuracy. A strategy is provided for obtaining audio features and generating sequential data for learning networks with Long Short-Term Memory (LSTM) units. Hence, an ML-LSTM has been designed for expression depending on music style. It uses facial expression detection and learned computational methods to recommend music to users depending on their mood. Music emotion regression aims to discover more precise music emotions, rather than categorization, which requires differentiating emotions such as pleasure, rage, sorrow, and peacefulness. This saves the time and effort of manually categorizing music and helps create playlists appropriate for a specific individual based on their emotional traits.
In the era of burgeoning digital educational resources, tailoring personalized learning experiences for individuals has emerged as a paramount concern. This paper delineates the development of an innovative online learning resource recommendation system, underpinned by an advanced learner model. The study leverages a gamut of data mining methodologies, encompassing both machine learning and user behavior analytics, to craft learner models of high personalization. These models intricately consider various facets such as the learner’s prior knowledge, learning style, interests, and historical learning interactions. Central to our system is a sophisticated recommendation algorithm. This algorithm amalgamates decision tree methodologies with state-of-the-art natural language processing techniques, effectively sifting through an extensive corpus of online learning materials to pinpoint resources that resonate with individual learner profiles. The system’s efficacy was rigorously tested across multiple online learning platforms. Empirical results from these tests unequivocally demonstrate that our system surpasses conventional recommendation approaches, particularly in augmenting learner engagement and satisfaction. This research contributes a novel paradigm to the personalized recommendation of online learning resources. Moreover, it furnishes invaluable insights into the ongoing evolution of educational technologies, marking a significant stride in the realm of digital learning.
With the increasing complexity of financial markets, accurate stock market prediction is gradually becoming a research hotspot. Combining the ant lion optimization algorithm's excellent global search ability with machine learning models' advantages in data mining can further improve the accuracy of stock market prediction. Therefore, this paper explores how to combine the ant lion optimization algorithm with machine learning models to predict stock market trends effectively. First, a stock relationship graph is constructed using graph neural networks, with stocks as nodes and relationships between stocks as edges; node embedding is used to extract feature representations of stocks. Then, the ant lion algorithm optimizes the parameters of the graph neural network, enabling it to better fit historical stock market data. Finally, the trained model predicts future market trends. Experimental verification shows that the proposed method attains high fitness during training with a short training time; the loss of the ant lion-optimized machine learning algorithm is relatively small, and prediction accuracy is high. Applying the method to Apple, Facebook, and Tesla yields higher strategic trading returns. By optimizing machine learning algorithms with the ant lion algorithm, this method improves the accuracy of stock market prediction and provides investors with more reliable decision support.
Optical Character Recognition (OCR) is widely used to digitize printed documents, extract information from forms, automate data entry, and enable text recognition in applications ranging from license plate recognition to handwritten document conversion. Machine learning and deep learning models have recently improved OCR performance; however, hyperparameter tuning remains a challenge. To address this, this paper proposes an efficient method for recognizing Brahmi script characters that combines a Convolutional Neural Network (CNN) with a random forest classifier. First, a CNN-based autoencoder extracts features from Brahmi script images, which are then fed into the random forest classifier. The hyperparameters of both models are optimized using a genetic algorithm (GA). Extensive experimental results show that the proposed approach achieves a significantly better accuracy of 97%, outperforming competitive models.
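The GA-based hyperparameter search over a random forest can be sketched as below. The digits dataset stands in for the extracted Brahmi-script features, and the candidate values, population size, and genetic operators are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)      # stand-in for CNN-extracted features

N_EST = [10, 20, 40]                     # candidate n_estimators
MAX_DEPTH = [5, 10, None]                # candidate max_depth

def fitness(genome):
    # A genome encodes one hyperparameter choice per gene (indices into grids)
    clf = RandomForestClassifier(n_estimators=N_EST[genome[0]],
                                 max_depth=MAX_DEPTH[genome[1]],
                                 random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Tiny GA: rank selection, one-point crossover, random mutation
pop = [rng.integers(0, 3, size=2) for _ in range(4)]
for _ in range(2):                       # generations
    parents = sorted(pop, key=fitness, reverse=True)[:2]
    children = []
    for _ in range(2):
        child = np.array([parents[0][0], parents[1][1]])   # crossover
        if rng.random() < 0.5:                             # mutation
            child[rng.integers(0, 2)] = rng.integers(0, 3)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
best_acc = fitness(best)
```

In a full implementation the genome would also encode the autoencoder's hyperparameters (filter counts, latent size), with both models evaluated jointly per individual.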
In the field of news communication, the application of artificial intelligence technology, combined with software-defined network control and management of news communication algorithms, has driven a profound reshaping of news communication models. This paper constructs an advanced news production module based on natural language processing technology, which can accurately extract key information from massive data, significantly improving the accuracy and efficiency of news production. In addition, this paper integrates hybrid recommendation algorithms with LDA topic models and incorporates control and management mechanisms of software-defined networks, further enhancing the accuracy of personalized news recommendations. The experimental results show that compared to other models, the model proposed in this paper exhibits better performance in news generation and recommendation, and can maintain high stability and accuracy in different datasets, effectively improving users’ news click-through rates and browsing time. At the same time, the model greatly enriches the amount of news information, providing users with a more accurate personalized news service experience, and effectively enhancing the effectiveness and influence of news dissemination.
Predictive Maintenance (PdM) in industrial systems is crucial for minimizing downtime and reducing operational costs, yet a model's computational complexity can hinder its scalability in large, real-time data environments, limiting its applicability in some industrial settings. This study aims to enhance PdM by integrating the Stochastic Fractal Search algorithm with Enriched Support Vector Regression (SFS-ESVR) to improve performance in predicting machine failures in a big data environment. The dataset comprises real-time industrial sensor data, including equipment performance metrics, failure records, and environmental factors. Data preprocessing involves cleaning the raw sensor data by removing outliers and filling missing values. Z-score normalization is employed to standardize the data, ensuring consistency and improving the performance of machine learning models in PdM tasks. Linear Discriminant Analysis (LDA) is used for feature extraction, reducing dimensionality and enhancing the model's ability to differentiate between normal and failure events. The SFS optimization algorithm tunes the hyperparameters of the ESVR model, improving its failure-prediction performance. The SFS-ESVR model was implemented in Python. Evaluation metrics include accuracy (97.56%), precision (98.39%), recall (98.74%), and F1 score (98.68%), demonstrating the model's effectiveness in predicting failures and optimizing maintenance schedules for industrial systems. A comparative analysis with other existing algorithms demonstrates the efficacy of the proposed model for PdM in a big data environment.
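The cleaning steps the abstract describes (missing-value filling, outlier removal, z-score normalization) can be sketched in NumPy on synthetic sensor readings. The thresholds and injected faults below are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(loc=50, scale=5, size=(200, 3))   # hypothetical sensor readings
X[rng.integers(0, 200, 5), 0] = np.nan           # inject missing values
X[0, 1] = 500.0                                  # inject a gross outlier

# 1. Fill missing values with the column mean
col_mean = np.nanmean(X, axis=0)
X = np.where(np.isnan(X), col_mean, X)

# 2. Drop rows whose z-score exceeds 3 in any column (outlier removal)
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
X = X[(z < 3).all(axis=1)]

# 3. Z-score normalization of the cleaned data
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```

Statistics are recomputed after the outlier rows are dropped, so the final standardization is not skewed by the removed extremes.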
In the context of artificial intelligence, machine vision technology has become a research hotspot in the decoration industry. However, many problems remain in practical engineering applications. Mainly owing to uncertain factors such as environmental noise, there is still no comprehensive method for accurately determining the treatment plan for decoration pattern structures. Neural network technology has been applied to recognition in various scenarios with good results. This paper provides a reference for balancing model accuracy and speed by controlling the model parameters used to sample the characteristic structure of decoration patterns. The pruned model is pre-trained on an embedded system, making full use of a software-defined lightweight training model as a benchmark. Compared with traditional neural networks, it offers a flexible structure, high computational efficiency, and strong adaptability. Moreover, quantization experiments based on the benchmark model compared MobileNet and Adam quantization: the feature recognition rate and computational cost were 8.4% and 11.3% lower than the comparison scheme, respectively, improving the efficiency and quality of image recognition. The optimization algorithms are more precise and have distinctive machine learning analysis capabilities, which can enhance pattern recognition in the decoration industry and provide a reference for similar recognition needs in other industries.
This paper dissects the three-factor model by exploring the average impacts of factors on portfolio returns and how the factors interact with each other. To do this, we use the SHapley Additive exPlanations (SHAP) method to interpret results obtained by XGBoost. We find that the factors have different impacts on portfolio returns and interact with each other in different ways. We also find that the average impacts of factors on portfolio returns are similar before and after the publication of the three-factor model and the 2008 financial crisis, but the interactions between factors vary over time.
Using a unique panel dataset consisting of 2997 Chinese manufacturing firms publicly listed in the A-share market between 2003 and 2020, we examine whether and to what extent a firm's perception of uncertainty affects green innovation. After integrating textual analysis with a machine learning approach to measure perception of uncertainty, we find that a firm's perception of environmental uncertainty negatively affects the number of green patents submitted or approved. The negative effect is weaker for firms followed by more professional analysts, operating in more competitive markets, or located in regions with better institutional settings. In addition, there is significant heterogeneity in the negative effect between non-state-owned and state-owned firms, as well as between polluting and non-polluting firms. The results are robust to different measures of green innovation and perception of uncertainty, and to addressing potential endogeneity problems. Our study contributes to the literature on behavioral environmental economics by demonstrating that it is not only environmental uncertainty itself but also how firms perceive that uncertainty that matters for green innovation and corporate social responsibility.
Due to the rapid advancement of computational power and the Internet of Things (IoT), together with recent developments in digital twins and cyber-physical systems, the integration of big data analytics techniques (BDAT) (e.g., data mining, machine learning (ML), deep learning, and data farming) into traditional simulation methodologies, along with enhancements of earlier simulation optimization techniques, has given rise to an emerging field known as simulation learning and optimization (SLO). Unlike simulation optimization, SLO is a data-driven fusion approach that integrates conventional simulation modeling methodologies, simulation optimization techniques, and learning-based big data analytics (BDA). It enhances the ability to address stochastic and dynamic decision-making problems in real-world complex systems and to quickly find the best outcomes for decision support in both real-time and non-real-time analyses, achieving data-driven digital twin capability. Although some past literature has mentioned similar applications and concepts, no paper provides a structural explanation and comprehensive review of the relevant SLO literature from both methodological and applied perspectives. Consequently, in the first part of this paper we review the novel SLO methodology emerging from the fusion of traditional simulation methodology, simulation optimization algorithms, and BDAT. In the second part, we review applications of SLO in various contexts, with detailed discussions of manufacturing, maintenance, redundancy allocation, hybrid energy systems, humanitarian logistics, and healthcare systems. Lastly, we investigate potential research directions for SLO in this new era of big data and artificial intelligence (AI).