Over the long course of human history, continuous exploration of natural phenomena and social life has given rise to many scientific fields, and robots are the product of a certain stage of this technological development. At present, hundreds of different types of robots are applied in production and daily life worldwide and have achieved significant economic benefits. However, their technical limitations have gradually emerged: shortcomings in visual perception cannot be effectively addressed, object recognition is not precise enough, and information resources cannot be used effectively for control. These are the main factors constraining the further progress of robots. The emergence of big data and Artificial Intelligence (AI) has brought unprecedented opportunities: big data analysis is increasingly applied in intelligent manufacturing and smart city construction, providing new solutions for robot services. These technologies not only enable people to grasp a large amount of valuable knowledge quickly and accurately, but also better tap the enormous potential of human intelligence, driving the robot industry towards greater intelligence. By summarizing existing research, this paper explores the development trend of robot object recognition systems and focuses on two key technologies: feature matching-based pattern recognition and acceleration strategies for improving detection efficiency. Corresponding solutions to current problems are proposed and comparative experiments designed, showing that the anti-interference detection accuracy of the robot object recognition system based on big data and AI algorithms improved by about 12.48%, providing a reference for future robot system development.
Each student in an adaptive education system differs significantly in knowledge background, ability level and cognitive style. To build an adaptive teaching system, it is therefore necessary to establish an operable, reasonable and individualized student model by clarifying students’ abilities and differences. The improved Apriori algorithm under big data builds on the most classic association rule algorithm, which generates candidate sets and traverses the frequent itemsets in the transaction database using an iterative, level-wise search. After the frequent itemsets are found, association rules are selected according to confidence thresholds. This paper studies how to apply the improved Apriori algorithm to an adaptive online education system in a big data environment. With 80 evolutionary generations, the mean fitness is 0.28 at a population size of 20, 0.26 at a population size of 60, 0.25 at a population size of 80, and 0.24 at a population size of 200. The smaller the fitness value, the smaller the error between each indicator of the generated test paper and the corresponding value specified by the user. The improved Apriori algorithm in the big data environment designs five themes of rule mining, mainly used for class management: class linkage, class category linkage, student basic information linkage, lecture and basic information linkage, and lecture mode linkage, which together play the role of a teaching assistant.
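The level-wise, candidate-generating search described above can be sketched in a few lines. This is a minimal illustration of classic Apriori frequent-itemset mining, not the paper's improved variant; the sample transactions are illustrative:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset search: count 1-itemsets, then repeatedly
    join surviving items into larger candidates and prune by support."""
    n = len(transactions)
    sets = [set(t) for t in transactions]
    counts = {}
    for t in sets:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    all_frequent = dict(frequent)
    k = 2
    while frequent:
        # candidate generation: combine items that still appear in frequent sets
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)]
        counts = {c: sum(1 for t in sets if c <= t) for c in candidates}
        frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent
```

Confidence-based rule selection would then compare, for each frequent itemset, the support of the whole set against the support of its antecedent.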
Against the backdrop of the digital age, the openness, equality and interactivity of the Internet economy have injected new vitality into China’s traditional industries. The application of big data technology, especially in information integration and analysis, has become a key force in promoting the sustainable and healthy development of the national economy. This study focuses on the “Internet +” environment, discusses the impact of the aging of community workers on home care services, and proposes an optimization scheme based on a heuristic algorithm. The heuristic algorithm, inspired by the foraging behavior of ant colonies in nature, optimizes route selection by simulating ants choosing paths with high pheromone concentrations, and shows outstanding application potential in home care. The accuracy of the event detection algorithm directly affects the performance of the load decomposition algorithm; the change-point detection algorithm can effectively identify changes in the probability distribution of time-series data, providing important input for unsupervised clustering. Advanced computational methods, including the Hidden Markov model (HMM) and swarm intelligence optimization algorithms, are used in this research. By comparing different swarm intelligence algorithms, we find that the standard Gray Wolf optimization (SGWO) model outperforms the basic Gray Wolf optimization (BGWO) algorithm and the improved Gray Wolf optimization (DGWO) algorithm in stability and output quality. The SGWO model significantly improves the efficiency of the load decomposition algorithm, which has been verified in the application of the smart elderly care service platform. The platform not only supports the operation of related technologies and information products but also realizes seamless information sharing among the various parties in elderly care services.
In addition, the selectively activated hidden factors of the Markov model effectively monitor equipment status in the Internet of Things environment, provide real-time monitoring of user consumption behavior and fault information, and further enhance the quality and efficiency of smart elderly care services.
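As an illustration of the Gray Wolf family of optimizers compared above, here is a minimal sketch of the basic position-update rule, in which the pack moves toward its three best wolves. The parameter choices and the sphere test function are illustrative and are not the paper's SGWO configuration:

```python
import random

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal gray wolf optimizer: each wolf moves toward the average of
    positions suggested by the three best wolves (alpha, beta, delta)."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(n_iter):
        wolves.sort(key=objective)
        leaders = wolves[:3]                 # alpha, beta, delta
        a = 2.0 * (1 - it / n_iter)          # exploration weight decays to 0
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                pulls = []
                for leader in leaders:
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])
                    pulls.append(leader[d] - A * D)
                new.append(min(hi, max(lo, sum(pulls) / 3)))
            wolves[i] = new
    return min(wolves, key=objective)

# Example: minimize the sphere function over [-5, 5]^3.
best = gwo(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

Variants such as BGWO and DGWO differ mainly in how the decay schedule and the leader-weighting are chosen.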
The core of the cultural industry is product content, and the traditional cultural industry usually relies on manual effort to analyze and predict industrial data, which not only incurs high economic costs but also produces predictions with notable limitations and large errors. In view of this, this paper proposes a prediction system for the development quality of the cultural industry based on big data and artificial intelligence, which reduces the error across the hierarchical time series of the cultural industry through a TCN time-series prediction model and a DNN inter-layer error reconciler, improving the stability and accuracy of industrial development forecasts. Experiments on prediction performance show that, compared with other models, the proposed model achieves higher average prediction accuracy, effectively reduces inter-layer error, and demonstrates stronger stability and reliability. In addition, in application experiments the system effectively analyzes the macro development of the cultural industry, regional development, academic research hotspots, and investment and financing, and presents visualized prediction results to help users understand the development quality of the cultural industry more intuitively and clearly.
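The TCN forecaster mentioned above is built from dilated causal convolutions. A minimal sketch of one such layer, assuming a single channel and a hand-supplied kernel, is:

```python
import numpy as np

def causal_conv(x, kernel, dilation=1):
    """One dilated causal convolution, the building block of a TCN:
    the output at time t depends only on inputs at times <= t."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

Stacking such layers with growing dilation lets the network see long histories without leaking future values, which is what makes the architecture suitable for time-series prediction.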
In today’s information age, big data has become an indispensable and important resource in various fields, and the education sector is no exception. With the explosive growth of educational data, how to effectively mine and utilize this data to optimize the education and teaching process has become a focus of attention for educators and researchers. Among them, association rule mining, as an important data mining technique, is increasingly widely used in the field of education. This investigation delves into the deployment of association rule mining within the framework of English online teaching models, capitalizing on the burgeoning domain of big data. In the era of exponentially advancing information technology, big data has crystallized as an integral component for educational enhancement in both pedagogical quality and methodologies. Initially, this paper dissects a spectrum of extant teaching paradigms propelled by big data analytics. The discourse then pivots to scrutinize the contemporary landscape and the evolution of English online pedagogy. Employing association rule analysis, the study excavates a trove of significant patterns and linkages from voluminous datasets of online educational activities. The insights gleaned serve as a compass for refining instructional strategies and judiciously distributing educational resources. The empirical evidence underscores the proposition that granular examination of student engagement metrics and scholastic achievement empowers educators to tailor bespoke educational trajectories, thereby amplifying pedagogical efficacy and enriching the academic voyage. Beyond furnishing an avant-garde outlook on English online instruction, the findings proffer a substantive benchmark for e-pedagogy across diverse academic disciplines.
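The association rule analysis described here rests on three standard measures: support, confidence and lift. A small sketch over illustrative online-activity logs:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent,
    the three measures association rule analysis is built on."""
    n = len(transactions)
    a, c = set(antecedent), set(consequent)
    n_a = sum(1 for t in transactions if a <= set(t))
    n_c = sum(1 for t in transactions if c <= set(t))
    n_ac = sum(1 for t in transactions if (a | c) <= set(t))
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    lift = confidence / (n_c / n) if n_c else 0.0
    return support, confidence, lift
```

A lift above 1 indicates the antecedent activity genuinely raises the likelihood of the consequent, which is the kind of pattern that can guide resource allocation.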
Technological developments in education have contributed to the current surge in popularity of online courses. The rate of development and the availability of learning information increase every day. Education systems throughout the world are shifting toward a model that puts the student at the center and is tailored to each individual, and current technology can adapt to different human qualities. Finding an appropriate learning strategy for a course with a large body of material can be quite a challenge; the recommendation of a learning route helps students methodically complete their coursework and reach their objectives. Smart robots and computers are now able to comprehend individual demands, made possible by technological advancements like AI, Machine Learning, and Big Data. This paper suggests an AI-based learning-teaching model (AI-L-TM) for recommending learning paths that centers on analyzing learning performance and acquiring new information. Educational analytics improves a wide range of individualized English-language learning experiences by evaluating the supplied data to produce valuable learning results. Through the use of Internet of Things (IoT) devices, data mining methods, and classroom data gathering, this project seeks to enhance the English learning experiences of college and university students. AI methods are helpful here for a number of reasons, such as creating a learning-teaching model that mimics human thinking and decision-making and reducing uncertainty to make the process more efficient. This paper presents a range of topics related to using artificial intelligence techniques for adaptive educational systems within e-learning.
It discusses the pros and cons of these techniques, and how important they are for creating smarter and more adaptive environments for online learning.
Traditional marketing methods often have high costs and limited effectiveness, making it difficult for small and medium-sized enterprises to stand out in fierce market competition. A bidirectional personalized recommendation algorithm based on customer preferences is proposed to help such enterprises locate their target customers more accurately. First, a customer’s purchase information is expanded based on the purchases of other customers and neighbors. The algorithm then calculates the customer’s product preference weights, assesses purchasing preferences, and provides personalized product recommendations accordingly. Finally, customers similar to sample customers supplied by merchants are mined to form a community, providing merchants with recommendations for potential customers and supporting precise customer maintenance. The algorithm’s efficacy is demonstrated by experiments on real datasets, and it can be applied in personalized recommendation research.
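The expand-then-weight step can be sketched as follows. The choice to count a customer's own purchases double relative to neighbours' purchases is an illustrative assumption, not the paper's exact weighting formula:

```python
from collections import Counter

def preference_weights(history, neighbour_histories):
    """Expand a customer's record with neighbours' purchases, then weight
    each product by frequency (own purchases counting double here)."""
    counts = Counter()
    for item in history:
        counts[item] += 2.0
    for nh in neighbour_histories:
        for item in nh:
            counts[item] += 1.0
    total = sum(counts.values())
    return {item: c / total for item, c in counts.items()}

def recommend(history, neighbour_histories, top_n=3):
    """Recommend the highest-weighted products the customer has not bought."""
    w = preference_weights(history, neighbour_histories)
    unseen = [(item, s) for item, s in w.items() if item not in history]
    return [item for item, _ in sorted(unseen, key=lambda p: -p[1])[:top_n]]
```

The "bidirectional" half of the algorithm would run the same machinery in reverse, ranking customers for a given product rather than products for a given customer.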
China’s growing economy contributes significantly to worldwide consumption. Economic rebalancing promises opportunities for manufacturing exporters but can weaken commodity demand in the long term. China exerts increasing influence on emerging countries through trade, investment, and ideas. At the same time, China faces several important economic issues that can hinder future development, including distortive economic policies that have evolved into an overreliance on fixed investment and exports for growth, government assistance for state-owned businesses, and a weak banking system. Building an information infrastructure and enhancing the technical level and application capability of big data (BD) remain under the purview of the Chinese Government. An artificial intelligence (AI)-based hybrid artificial neural network (HANN) might boost total factor productivity by a large margin, influencing many sectors in China in ways that official statistics would miss, including changes to the labor market, investment patterns, and overall productivity. The BD-HANN approach draws on the coefficient of variation, quantity graph analysis, standard deviation, and the entropy index, some of the most conventional quantitative tools used to examine disparities in regional economic growth. According to the neoclassical growth model, regions with lower starting values of the capital-labor ratio are expected to show higher per capita income growth rates; thus, poor areas will grow faster than rich ones, assuming the only difference between regional economies is their initial capital stock.
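The convergence claim in the closing sentences is usually tested with a beta-convergence regression; the following worked equation is the standard form from the neoclassical growth literature, with illustrative symbols not taken from the paper:

```latex
g_i \;=\; \alpha + \beta \ln y_{i,0} + \varepsilon_i , \qquad \beta < 0 ,
```

where $g_i$ is region $i$'s average per capita income growth rate, $y_{i,0}$ its initial per capita income, and a significantly negative $\beta$ means poorer regions grow faster, exactly the prediction stated above.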
Labor education evaluation systems often suffer from problems such as strong subjectivity and a single evaluation index, which make it difficult to reflect students’ labor literacy and practical ability comprehensively and objectively. Therefore, a college labor education evaluation system based on big data and K-means is constructed. Relevant data are first collected, and an evaluation ratio threshold is set to eliminate low-quality data. Leveraging big data and cloud computing technology, a personalized labor education and teaching evaluation system is built, within which the K-means clustering algorithm classifies a vast amount of college labor education data. Particle swarm optimization is introduced to optimize the clustering centers, and a multi-level evaluation system is constructed using the AHP method. This approach enables a comprehensive and systematic assessment of the effectiveness of labor education. The experimental results show that the resource utilization efficiency of the designed method exceeds 90%, the lowest loss value is 0.39, the average iteration time is 4.462 s, and the evaluation time is 15 s. The clustering results also show higher clustering clarity.
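The classification step above is standard K-means; a minimal sketch of Lloyd's iteration follows. The particle swarm refinement of the cluster centres described in the abstract is omitted here, and the data are illustrative:

```python
import math
import random

def kmeans(points, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means over tuples of coordinates: assign each point
    to its nearest centre, then move each centre to its cluster's mean."""
    rng = random.Random(seed)
    centres = list(rng.sample(points, k))
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centres[c]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster empties out
                centres[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centres, clusters
```

Because Lloyd's iteration is sensitive to the initial centres, seeding or refining them with a global optimizer such as particle swarm optimization (as the system does) is a common remedy.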
The increasing reliance on New Energy Vehicles (NEVs) has highlighted the need for efficient fault prediction and maintenance strategies to ensure their optimal performance and longevity. This study introduces the Refined Puffin Algorithm-refined Scalable Random Forest Tree (RPA-SRFT), a novel hybrid model designed to predict faults and optimize maintenance decisions in NEVs. The dataset used covers fault prediction and optimization in NEVs, spans multiple vehicle models, and includes real-time operational data. Data preprocessing includes cleaning and normalization to handle missing values, remove noise, and scale the data for consistency across features. Feature extraction is performed using Principal Component Analysis (PCA), which reduces dimensionality while retaining the key information necessary for accurate fault prediction. The Refined Puffin Algorithm (RPA) fine-tunes the hyperparameters of the Scalable Random Forest Tree (SRFT), ensuring the model is well adapted to large-scale NEV data. The implementation of the RPA-SRFT algorithm in Python resulted in a significant improvement in fault prediction, with recall (97.60%), F1-score (98%), accuracy (98.50%), and precision (98%) outperforming conventional models in both predictive performance and scalability. By leveraging advanced machine learning techniques and a big data analysis model, the approach can proactively predict faults and optimize maintenance schedules, reducing downtime and maintenance costs.
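The PCA feature-extraction step named above has a compact linear-algebra core: centre the data, then project onto the covariance matrix's leading eigenvectors. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Feature extraction via PCA: project the centred data onto the
    eigenvectors of its covariance matrix with the largest eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]     # highest-variance directions first
    return Xc @ top
```

The reduced matrix then feeds the random-forest classifier, which is where the RPA hyperparameter search described in the abstract would operate.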
In the development of the Internet of Things (IoT), the server layer handles data structures and algorithms and can successfully solve the data silo problem of heterogeneous data, but it occupies a large amount of server resources and puts pressure on the system. To address this additional computational burden, the computing power of the edge layer must be fully utilized. Problems such as communication connections, data detection and channel inference between large-scale IoT terminal nodes in non-ideal environments are worthy of in-depth study. Traditional frequency offset compensation algorithms and clock synchronization technology can no longer be applied to large-scale IoT designs; a technology that can accurately infer frequency offset and increase the data transmission rate is needed. For large-scale direct-connection scenarios that involve frequency offset and data transmission rate, this paper considers IoT terminal equipment sending downlink transmission data to achieve signal coverage, combines the big data scenario of large-scale direct connection, and introduces the data transmission rate to build a homomorphic encryption transmission model for asynchronous data transmission in the IoT system. To reduce the impact of the data transmission rate on reconstruction quality and performance, a method combining asynchronous data transmission processing with channel inference prediction is constructed, and an optimization scheme for multi-node detection and channel inference based on Raft-algorithm homomorphic encryption and convex optimization is designed. In the simulation tests, the paper compares the detection bit error rate and the channel inference quality and performance of the traditional Lasso algorithm and the OMP algorithm based on serial communication interference-signal elimination.
The experimental results show that the proposed Raft-based algorithm outperforms the Lasso algorithm by 7.52% and the OMP algorithm by 10.64%, a notable technical advantage. Meanwhile, the innovative use of homomorphic encryption transmission to process big data asynchronously will advance IoT and big data information processing and enable blockchain technology to play an important role in various types of IoT systems.
In view of the various cracks currently found in old residential areas, and the safety risks these cracks may pose to buildings, real-time monitoring is particularly important so that early warnings can be issued and corresponding safety measures taken in advance. This paper proposes using steel rulers, feeler gauges and acoustic detectors to measure the length, width and depth of cracks, and establishing an intelligent, integrated big data monitoring platform to monitor cracks in old residential areas in real time and issue early warnings, ensuring that the houses remain in a safe state. Through the big data system, the platform solves the problems of lagging traditional manual inspection, inefficient resource allocation and extensive management, and lays the foundation for subsequent building safety appraisal and crack treatment.
The intersection of big data and artificial intelligence has significantly transformed computational techniques for predicting real estate market trends, aligning closely with the thematic scope of “Frontiers in Computer Science.” Traditional real estate forecasting models often fail to capture the intricate spatial, temporal, and economic dependencies essential for robust prediction. Challenges such as data sparsity, nonlinearity, and market volatility remain inadequately addressed, limiting their scalability and adaptability in dynamic market conditions. To address these gaps, we propose a novel framework comprising the Dynamic Relational Price Network (DRPN) and the Market Adaptive Optimization Strategy (MAOS). DRPN integrates graph-based reasoning for spatial dependencies, temporal forecasting with recurrent networks, and hierarchical feature learning to improve interpretability and predictive accuracy. Meanwhile, MAOS dynamically adjusts model parameters and regularization strategies based on real-time market conditions, ensuring robust generalization across diverse scenarios. Experimental results demonstrate the superior performance of our approach in predictive accuracy, stability, and scalability compared to conventional methods, providing actionable insights into market dynamics. This research offers a scalable and adaptive solution to real estate forecasting, contributing to the broader applications of AI in computational market analysis.
Online social media platforms have emerged as integral channels for facilitating social interactions, with celebrities utilizing these platforms to engage with their fan base and cultivate a substantial following. The group of engaged fans, commonly referred to as “active fans”, represents individuals who actively communicate with celebrities and participate in discussions pertaining to the celebrities’ endeavors. For celebrities, retaining and augmenting the count of active fans holds immense significance, as it amplifies their social impact and commercial value. Here, we construct dynamic weighted active-fan networks by leveraging 2021 data from Sina Weibo, China’s largest social media platform. Through a comparative analysis encompassing the network’s structure, the growth rate and the duration of active fans, we examine the influence wielded by six distinct thematic categories: endorsement, variety, public welfare, sports, music and national affairs. This analysis covers a cohort of nine celebrities spanning five diverse domains: actors, singers, online influencers, anchors and athletes. The growth trajectory and life cycle of celebrity fans show notable variations, both within and across these themes, and are further influenced by the inherent structural attributes of each celebrity’s personal fan network. Employing the K-Shape time-series clustering algorithm, we undertake an in-depth exploration of outburst growth patterns observed in active fans and determine the optimal number of clusters to be k=4 through comparative analysis. Our findings underscore that the themes of endorsement and public welfare exhibit all four growth patterns, namely Double-Peak, Oscillatory, Single-Peak and Continuous Growth Patterns.
In contrast, when considering all themes collectively, they demonstrate a single-peaked decaying growth pattern. The insights gleaned from this study not only serve as a valuable reference and guide for celebrities across diverse domains who aspire to bolster their social influence but also contribute to the burgeoning fan economy. Moreover, this research introduces novel perspectives for scrutinizing patterns of fan growth and their corresponding dynamics.
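K-Shape, used above to cluster the growth curves, relies on the shape-based distance (SBD): one minus the maximum normalised cross-correlation over all alignments, so that two series with the same shape at different time offsets are treated as close. A minimal sketch of the distance alone (the full clustering loop is omitted):

```python
import numpy as np

def sbd(x, y):
    """Shape-based distance used by K-Shape: one minus the maximum
    normalised cross-correlation over all relative shifts of the series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cc = np.correlate(x, y, mode="full")          # every alignment of x and y
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return 1.0 - cc.max() / denom
```

Two identical growth bursts that merely start on different days therefore get distance near zero, which is what lets K-Shape group Single-Peak curves together regardless of when the peak occurs.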
This conceptual paper exclusively focused on how artificial intelligence (AI) serves as a means to identify a target audience. Focusing on the marketing context, a structured discussion of how AI can identify the target customers precisely despite their different behaviors was presented in this paper. The applications of AI in customer targeting and the projected effectiveness throughout the different phases of customer lifecycle were also discussed. Through the historical analysis, behavioral insights of individual customers can be retrieved in a more reliable and efficient way. The review of the literature confirmed the use of technology-driven AI in revolutionizing marketing, where data can be processed at scale via supervised or unsupervised (machine) learning.
This study investigates the application of big data analytics (BDA) and its impact on consulting firms’ competitiveness. Descriptive statistics, ordinary least squares regression, and moderated regression analysis were used to analyze survey data obtained from 118 business and management consultants in Nigeria, and the robustness of the results was evaluated using structural equation modeling. Results show that the application level of BDA by consulting firms is generally moderate. BDA application has a significant positive impact on organizational competitiveness, although the relationship is weak. Further, data quality significantly moderates the relationship between BDA and organizational competitiveness. The study concludes that applying BDA can enhance the competitiveness of consulting firms; however, the extent of the benefit depends on the quality of the data used in the analysis. The study contributes to knowledge by providing empirical evidence that deploying BDA can be a source of competitive advantage for consulting firms, and it adds to the literature on management accounting in the digital economy and the application of big data to business and management consulting.
Fraud detection through the classification of highly imbalanced Big Data is an exciting area of Machine Learning research. On the one hand, in certain fraud detection application domains, the use of One-Class classifiers is an overlooked opportunity. On the other hand, for researchers faced with the task of building Machine Learning models for identifying fraud, when only legitimate transaction data is available, One-Class Classifiers are indispensable. We investigate the efficacy of SHapley Additive exPlanations (SHAP) as a feature selection technique for One-Class classification tasks. In this study we utilize authentic data from the Credit Card fraud and Medicare insurance fraud application domains. Our contribution is to show that researchers can use SHAP in conjunction with One-Class Classifiers to do feature selection on highly imbalanced datasets, and then build models, with the selected features, that yield performance similar to, or better than, models built using all features. Our results in Big Medicare data fraud detection show that an over 90% data reduction through feature selection can nevertheless coincide with the best performance in terms of Area under the Precision Recall Curve.
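The selection step the study leverages can be sketched compactly. This sketch assumes the per-sample attribution matrix has already been produced by a SHAP explainer run against the One-Class model's decision scores; only the ranking and pruning are shown, and the function name is illustrative:

```python
import numpy as np

def select_by_shap(shap_values, feature_names, keep=5):
    """Rank features by mean absolute SHAP attribution and keep the top
    `keep`, preserving the original feature order among those kept."""
    importance = np.abs(shap_values).mean(axis=0)
    top = np.argsort(importance)[::-1][:keep]
    return [feature_names[i] for i in sorted(top)]
```

A One-Class model retrained on only the surviving columns is then compared against the all-features baseline, which is how the over-90% data reduction in the Medicare experiments was evaluated.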
Since the 1970s, China’s economy has changed rapidly, especially after joining the World Trade Organization; with the increase in multinational enterprises, developing overseas markets has become more convenient, further strengthening China’s economic power. The financial industry plays an essential role in a country’s development, and commercial banks are the leaders of the financial sector, having made significant contributions to national economic growth. Against the background of substantial economic development, the economic management decisions of commercial banks have become more complex, and the risks they face continue to increase. The rapid growth of global informatization has effectively boosted the development of all walks of life. Based on the intelligent characteristics of big data, this paper analyzes the factors influencing the economic management decisions of commercial banks, hoping to enable them to achieve the ideal of financial management, “reducing risk and creating value”, and to enhance their competitiveness in the market.
In the past decades, there has been a wide increase in the number of people affected by diabetes, a chronic illness. Early prediction of diabetes remains challenging because it requires clean and sound datasets. In this era of ubiquitous information technology, big data helps collect a large amount of information about healthcare systems, but due to the explosion of digital data, selecting appropriate data for analysis remains complex; moreover, missing values and insignificantly labeled data restrict prediction accuracy. To improve dataset quality, missing values are handled effectively through three major phases: (1) pre-processing, (2) feature extraction, and (3) classification. Pre-processing involves outlier rejection and filling missing values; feature extraction is done by principal component analysis (PCA); and finally, precise prediction of diabetes is accomplished with an effective distance adaptive-KNN (DA-KNN) classifier. The experiments were conducted using the Pima Indian Diabetes (PID) dataset, and the performance of the proposed model was compared with state-of-the-art models. The analysis shows that the proposed model outperforms conventional models such as NB, SVM, KNN, and RF in terms of accuracy and ROC.
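The classification stage can be illustrated with a plain inverse-distance-weighted k-NN vote; this is a generic stand-in for the paper's DA-KNN (whose exact distance adaptation is not specified in the abstract), with illustrative toy data:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Inverse-distance-weighted k-NN vote: nearer neighbours count more.
    `train` is a list of (feature_tuple, label) pairs."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = defaultdict(float)
    for x, label in neighbours:
        votes[label] += 1.0 / (math.dist(x, query) + 1e-9)
    return max(votes, key=votes.get)
```

Weighting by inverse distance means a far-away third neighbour cannot outvote two close ones, which is the kind of adaptivity a distance-adaptive classifier aims for.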
Cloud computing’s simulation and modeling capabilities are crucial for big data analysis in smart grid power; they are key to finding practical insights, making the grid resilient, and improving energy management. Because of issues with data scalability and real-time analytics, advanced methods are required to extract useful information from the massive, ever-changing datasets produced by smart grids. This research proposes Dynamic Resource Cloud-based Processing Analytics (DRC-PA), which integrates cloud-based processing and analytics with dynamic resource allocation algorithms. Computational resources must adjust to changing grid circumstances, and DRC-PA ensures that big data analysis can scale accordingly. The DRC-PA method has several potential uses, including power grid optimization, anomaly detection, demand response, and predictive maintenance; it enables smart grids to adjust proactively to changing conditions, boosting resilience and sustainability in the energy ecosystem. A thorough simulation analysis under realistic smart grid scenarios confirms the usefulness of the DRC-PA approach, showing that it is more efficient than traditional methods in accuracy, scalability, and real-time responsiveness. In addition to resolving existing issues, the suggested method reshapes contemporary energy systems by paving the way for innovations in grid optimization, decision assistance, and energy management.
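The dynamic resource allocation idea at the heart of DRC-PA can be illustrated with a toy autoscaling rule; the thresholds, function name and the asymmetric scale-out/scale-in policy are assumptions for illustration, not the paper's algorithm:

```python
def autoscale(current, queue_len, per_worker=100, min_w=1, max_w=50):
    """Toy dynamic allocation: size the worker pool to the measurement
    backlog, scaling out immediately to meet demand but scaling in one
    worker at a time to avoid oscillation."""
    needed = max(min_w, min(max_w, -(-queue_len // per_worker)))  # ceil division
    if needed > current:
        return needed                      # scale out to meet the backlog
    if needed < current:
        return max(min_w, current - 1)     # scale in gradually
    return current
```

A real cloud scheduler would replace the queue length with richer grid telemetry, but the shape of the control loop, measure load, compute demand, adjust capacity within bounds, is the same.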