Failure to accurately model users’ basic attributes, behavioral characteristics, value attributes, social attributes, interest attributes, and psychological attributes leads to poor user experience, information overload, interference, and other negative effects. To develop more accurate marketing strategies, optimize user experience, and improve the conversion rate and user satisfaction of e-commerce platforms, an accurate construction method for e-commerce user profiles based on artificial intelligence algorithms and big data analysis is proposed. Big data analysis technology is used to collect and integrate the basic attributes, behavioral characteristics, value attributes, social attributes, interest attributes, and psychological attributes of e-commerce users across multiple dimensions. An improved sequential pattern mining algorithm (PBWL) is applied to mine frequent sequential patterns in e-commerce user behavior and reveal users’ behavioral habits. A comprehensive attribute representation of e-commerce users is obtained by combining the LINE network embedding model with a convolutional neural network. The firefly K-means clustering algorithm is then used to cluster e-commerce users: users are grouped by the similarity of their attribute information, different types of user clusters are created, and an accurate e-commerce user profile is constructed. The experimental results show that this method builds accurate e-commerce user profiles and provides strong support for personalized recommendation and precision marketing on e-commerce platforms. The method digs deeply into the behavioral habits of e-commerce users and accurately reflects their interest preferences and consumption characteristics; it clusters e-commerce users quickly and stably, with optimal clustering quality for user profiles. It also partitions the data into meaningful groups according to users’ consumption behavior, revealing the characteristics and value of each group.
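The clustering step above combines a firefly search with K-means. As a minimal sketch of that idea (not the paper’s PBWL/LINE pipeline), the firefly algorithm can be used to choose initial centroids that minimize within-cluster squared error before standard K-means refinement; all function names and parameter values here (`firefly_init`, `n_fireflies`, `beta0`, `gamma`, `alpha`) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sse(X, centroids):
    # within-cluster sum of squared errors for a candidate centroid set
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum()

def firefly_init(X, k, n_fireflies=10, iters=30, beta0=1.0, gamma=0.1, alpha=0.05):
    # each firefly encodes one candidate set of k centroids; brightness = -SSE
    n, dim = X.shape
    flies = X[rng.choice(n, (n_fireflies, k), replace=True)]
    light = np.array([-sse(X, f) for f in flies])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:
                    # dimmer firefly moves toward the brighter one, plus noise
                    r2 = ((flies[i] - flies[j]) ** 2).sum()
                    beta = beta0 * np.exp(-gamma * r2)
                    flies[i] += beta * (flies[j] - flies[i]) \
                                + alpha * rng.standard_normal((k, dim))
                    light[i] = -sse(X, flies[i])
    return flies[light.argmax()]

def firefly_kmeans(X, k, iters=50):
    # K-means refinement starting from firefly-optimized centroids
    c = firefly_init(X, k)
    for _ in range(iters):
        labels = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                c[j] = X[labels == j].mean(0)
    return labels, c
```

In practice the user-attribute vectors produced by the LINE/CNN representation stage would be the input `X`.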
Financial management operation data come from financial statements, accounting records, and other sources. The data volume is large, which increases the computational burden of risk prediction algorithms. To reflect business operation status and risk changes in a timely manner, provide comprehensive and multi-dimensional risk information, and exploit the ability of big data analysis to process massive data quickly, a prediction method for enterprise financial management operational risk based on big data analysis is proposed. Enterprise financial management and operation data are preprocessed through data cleaning, standardization, discretization, and fuzzy rough set mining to remove noise, redundancy, and incomplete or uncertain information, yielding reduced, lower-dimensional data. On this basis, a variety of risk indicators are selected, and a grey system model is used to extract the key enterprise financial management operational risk indicators. These indicators are fed into an SVM model as training samples; a classification hyperplane is established according to the constraints of the SVM classification model, and the enterprise financial management operational risk prediction results are obtained. A particle swarm optimization algorithm is used to optimize the kernel function parameters and the cost parameter of the SVM model to improve prediction accuracy. The experimental results show that the method is feasible and accurate: it not only predicts enterprise risk levels precisely, but its predictions are also highly stable and reliable. The method is robust and suitable for long-term risk prediction. While keeping prediction time short, it maintains a good balance between the generalization ability and performance of the model, providing an efficient and practical risk management tool for enterprises.
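The PSO-tuned SVM step can be sketched as follows. This is a simplified illustration, not the paper’s model: a linear soft-margin SVM trained by sub-gradient descent stands in for the kernel SVM, and PSO searches only the cost parameter `C` on a log scale against validation error; all names and constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_svm(X, y, C, epochs=60, lr=0.01):
    # soft-margin linear SVM via sub-gradient descent; labels y in {-1, +1}
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # margin violators
        w -= lr * (w - C * (y[mask, None] * X[mask]).sum(0))
        b += lr * C * y[mask].sum()
    return w, b

def error(w, b, X, y):
    # misclassification rate of the sign classifier
    return (np.sign(X @ w + b) != y).mean()

def pso_tune(Xtr, ytr, Xva, yva, n=8, iters=20):
    # particles search log10(C) in [-2, 2], minimizing validation error
    pos = rng.uniform(-2, 2, n); vel = np.zeros(n)
    pbest = pos.copy()
    pcost = np.array([error(*train_svm(Xtr, ytr, 10 ** p), Xva, yva) for p in pos])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        vel = 0.7 * vel + 1.5 * rng.random(n) * (pbest - pos) \
                        + 1.5 * rng.random(n) * (gbest - pos)
        pos = np.clip(pos + vel, -2, 2)
        cost = np.array([error(*train_svm(Xtr, ytr, 10 ** p), Xva, yva) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[pcost.argmin()]
    return 10 ** gbest
```

In the paper’s setting, the feature vectors would be the grey-model-selected risk indicators rather than synthetic data.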
In the big data heterogeneous Internet of Things (IoT), mobile edge computing (MEC) is required to process part of the data, and MEC-based data analysis systems often suffer from excessive terminal energy consumption (ECS) or long delays. This study therefore designed an energy-saving optimization algorithm for the task offloading processing module of a big data heterogeneous IoT analysis system, and designed and conducted simulation experiments to verify its performance. The experimental results show that the #04 scheme of the designed algorithm has the lowest terminal ECS under the same conditions. With the #04 scheme selected to build the algorithm, comparative analysis shows that when the edge server (ES) computing rate is 10 cycles/s, the weighted-sum values of terminal ECS for the EOPU, MPCO, exhaustive search, and local computing methods are 23.6 J, 23.9 J, 28.5 J, and 84.5 J, respectively. Moreover, the algorithm achieves a significantly higher percentage of remaining time than other methods under different numbers of SMD devices and subchannels. This indicates that the designed algorithm can markedly enhance the processing performance of the task offloading model of the big data heterogeneous IoT data analysis system while effectively reducing terminal ECS and system latency. The results provide a reference for improving the processing ability of heterogeneous IoT big data analysis systems. The academic contribution of this study is a model that effectively reduces the operational ECS and time consumption of heterogeneous IoT big data analysis systems containing mobile IoT devices. From an industrial perspective, the results help improve the efficiency of information exchange and processing in IoT computing, thereby promoting the adoption of IoT technology.
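The weighted-sum offloading trade-off described above can be illustrated with a minimal decision rule. This is a generic sketch of energy/delay-weighted task offloading, not the paper’s EOPU algorithm; all parameter names and the cost weights are assumptions:

```python
def cost(energy, delay, w_e=0.5, w_t=0.5):
    # weighted sum of terminal energy consumption and delay
    return w_e * energy + w_t * delay

def offload_decision(task_cycles, data_bits,
                     f_local, power_local, rate_up, power_tx, f_edge):
    # local execution: time = cycles / local CPU rate, energy = power * time
    t_loc = task_cycles / f_local
    e_loc = power_local * t_loc
    # offloading: transmit uplink, then compute at the edge server;
    # the terminal only spends energy on transmission
    t_off = data_bits / rate_up + task_cycles / f_edge
    e_off = power_tx * (data_bits / rate_up)
    if cost(e_off, t_off) < cost(e_loc, t_loc):
        return ('offload', e_off, t_off)
    return ('local', e_loc, t_loc)
```

A compute-heavy task with a small payload favors offloading, while a tiny task with a large payload stays local, which is the basic trade-off the optimization schemes in the paper navigate at scale.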
The constant elasticity of variance (CEV) model is widely studied and applied for volatility forecasting and optimal decision making in both financial engineering and operations management, especially in option pricing, owing to how well it fits the volatility processes of assets such as stocks and commodities. However, parameter estimation for the CEV model is extremely difficult in practice because its exact likelihood function cannot be derived. Motivated by this gap between theory and practice, this paper applies the Markov chain Monte Carlo (MCMC) method to parameter estimation for the CEV model. We first construct a theoretical framework for implementing the MCMC method in the CEV model, and then carry out an empirical analysis with big data on the CSI 300 index collected from the Chinese stock market. The empirical results reveal two insights. First, the convergence test results are convergent, demonstrating that the MCMC estimation method for the CEV model is effective. Second, compared with the two most frequently used alternatives, maximum likelihood estimation (MLE) and the generalized method of moments (GMM), our method proves to be highly accurate while being simpler to implement and more widely applicable.
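Because the exact CEV likelihood is unavailable, one common workaround (an assumption here, not necessarily the paper’s construction) is to approximate the transition density via an Euler discretization of dS = μS dt + σS^γ dW and sample the parameters (μ, σ, γ) with random-walk Metropolis:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_lik(params, S, dt):
    # Euler-approximate Gaussian transition density for the CEV diffusion
    mu, sigma, gamma = params
    if sigma <= 0:
        return -np.inf
    mean = S[:-1] + mu * S[:-1] * dt
    var = sigma ** 2 * S[:-1] ** (2 * gamma) * dt
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (S[1:] - mean) ** 2 / var)

def metropolis(S, dt, n_iter=4000, step=(0.02, 0.02, 0.05)):
    # random-walk Metropolis over (mu, sigma, gamma) with flat priors
    x = np.array([0.0, 0.2, 0.5])
    ll = log_lik(x, S, dt)
    chain = []
    for _ in range(n_iter):
        prop = x + np.array(step) * rng.standard_normal(3)
        llp = log_lik(prop, S, dt)
        if np.log(rng.random()) < llp - ll:   # accept/reject
            x, ll = prop, llp
        chain.append(x.copy())
    return np.array(chain)
```

The step sizes, starting point, and prior choice are illustrative; in the paper’s setting, `S` would be the CSI 300 index series, and convergence would be checked before reading off posterior means.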
Ethnic minority regions are rich in historical resources and culture. Under the impact of modern information technology, the development of minority resources and the inheritance of ethnic culture face many challenges, and current school education lags in exploring the ecological resources of minority groups, so the integration of creative education with minority culture has hit a bottleneck. In response, we make full use of the platform of creative education to actively explore the traditional skills embedded in the lives of ethnic minorities. Evaluation of creative education should focus not only on students’ works but also, across the whole learning process, on students’ knowledge of the various creative tools, their ability to apply knowledge from multiple subjects, and their hands-on, problem-solving, and creative abilities. On this basis, this paper proposes a back-propagation neural network (BPNN)-based quality evaluation method that evaluates creative education along multiple dimensions. Experiments and comparisons show that the proposed BPNN-based method better evaluates the whole process of creative education and helps the further development of creative education in minority regions.
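A BPNN mapping multi-dimensional evaluation indicators to a quality score can be sketched as a small two-layer network trained by back-propagation. This is a generic illustration under assumed dimensions (five indicators, one score), not the paper’s trained model:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNN:
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid)); self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out)); self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)     # hidden layer
        return sigmoid(self.h @ self.W2 + self.b2)  # output score in (0, 1)

    def train(self, X, Y, epochs=2000):
        # full-batch back-propagation on mean squared error
        for _ in range(epochs):
            out = self.forward(X)
            d2 = (out - Y) * out * (1 - out)             # output-layer delta
            d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)  # hidden-layer delta
            self.W2 -= self.lr * self.h.T @ d2 / len(X)
            self.b2 -= self.lr * d2.mean(0)
            self.W1 -= self.lr * X.T @ d1 / len(X)
            self.b1 -= self.lr * d1.mean(0)
        return ((self.forward(X) - Y) ** 2).mean()
```

In the evaluation setting, each row of `X` would hold one student’s normalized indicator values (tool knowledge, cross-subject ability, hands-on ability, and so on) and `Y` the expert-assigned quality score.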
Fast warning of enterprise financial risk has always been a practical need for managers. Currently it relies mainly on expert experience to analyze massive business data comprehensively. Benefiting from the strong computational performance of deep learning, this paper proposes a fuzzy neural network (FNN)-based intelligent warning method for enterprise financial risk. An improved FNN structure with time-varying coefficients and time-varying time lags is established to extract enterprise features from complex financial contexts. Fuzzy C-means clustering of the sample data is studied: the fuzzy C-means algorithm clusters the samples, the input sample set is preprocessed into a new set of learning samples, and the neural network is then trained on it. An enterprise financial risk sample set and its modular FNN model are established, and evaluation of the samples is simulated. A decision stage is then added after the FNN to output the warning results. Finally, a case study is conducted as a simulation experiment to evaluate the proposed technical framework. The results show that it performs well in the fast warning of enterprise financial risk.
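The fuzzy C-means preprocessing step can be sketched directly from its standard update equations (fuzzifier m, membership matrix U, weighted centers); this is the textbook algorithm, with parameter values chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def fuzzy_cmeans(X, c, m=2.0, iters=100, eps=1e-6):
    # standard fuzzy C-means: soft membership matrix U (n x c), fuzzifier m
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(1, keepdims=True)                     # rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]    # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + eps
        Unew = 1.0 / (d ** (2 / (m - 1)))            # u_ik ∝ d_ik^{-2/(m-1)}
        Unew /= Unew.sum(1, keepdims=True)
        if np.abs(Unew - U).max() < 1e-8:            # converged
            U = Unew
            break
        U = Unew
    return U, centers
```

In the warning pipeline, the cluster memberships (or per-cluster sample subsets) would form the new learning samples fed to the FNN.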
Big data and big data analysis constitute a multi-dimensional scientific and technological pursuit that has a profound impact on society as a whole. Although big data has become a catchy buzzword, to make any significant stride in this pursuit we must have a clear picture of what big data is and what big data analysis entails. In this paper, after a brief account of the landscape of big data and big data analysis, we focus attention on two issues: the granularities of knowledge content in big data, and the utility of inconsistencies in big data analysis.
This exploration aims to transfer, process, and store multimedia information in a timely, accurate, and comprehensive manner through integrated computer technology, and to combine the various elements organically in the context of big data analysis, forming a complete intelligent-platform design for multimedia information processing and application. An intelligent vehicle monitoring system is taken as the example. Data acquisition, data transmission, real-time data processing, data storage, and data application are realized through a real-time data stream processing framework built on the Flume + Kafka + Storm big data stack. Data interaction is realized through Spring, Spring MVC, the Vue front-end framework, and Ajax asynchronous partial-update technology. Data storage uses the Redis cache database, and the intelligent vehicle operation supervision system is realized through multimedia information processing. Its purpose is to manage vehicle information, monitor the running state of vehicles in real time, and raise an alarm when problems occur. The basic functions of a vehicle operation monitoring and management system based on big data analysis are realized. The research shows that big data technology can be applied to the design of computer multimedia intelligent platforms and provides a reference case for developing such platforms based on big data analysis.
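The acquisition → transmission → real-time processing → storage flow can be illustrated with a tiny in-memory simulation; the stages stand in for Flume, a Kafka topic, a Storm bolt, and the Redis cache respectively, and the speed threshold and field names are invented for the example:

```python
from collections import deque

SPEED_LIMIT = 120  # km/h; hypothetical alarm threshold

def acquire(readings):
    # stand-in for Flume: wrap raw (vehicle, speed) readings as events
    return [{"vehicle": v, "speed": s} for v, s in readings]

def transmit(events, queue):
    # stand-in for a Kafka topic: an in-memory FIFO buffer
    queue.extend(events)

def process(queue, store):
    # stand-in for a Storm topology: consume events, persist state,
    # and emit an alarm for any vehicle over the speed limit
    alarms = []
    while queue:
        e = queue.popleft()
        store[e["vehicle"]] = e["speed"]   # stand-in for the Redis cache
        if e["speed"] > SPEED_LIMIT:
            alarms.append(e["vehicle"])
    return alarms
```

The real system replaces each stand-in with the corresponding distributed component, but the dataflow contract between stages is the same.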
Mobile technology can make physical-skills lessons more vibrant and better prepare students. Its multicultural, consolidated, and communicative characteristics contribute to improving teaching performance, exciting learners, reforming learning, and rethinking the basic teaching mode. College physical education professionals face emotionally labile students, repetitive instruction, a lack of appropriate technology and resources, school violence, and behavioral issues. Because students with learning and attention difficulties experience high levels of bullying, misbehavior and absence are more likely to occur, and the school environment and negative emotions can both aggravate academic difficulties. The proposed methodology evaluates students’ physical training based on big data analysis (ESPT-BDA) to reinforce voluntary sport behavior while discouraging unwanted attitudes or values, and to solve the problem of uniform instruction so as to meet individual students’ requirements. Repetition hones a skill through repeated use, and the interval at which a skill is performed is another critical component of repetition: spaced repetition, as a learning strategy, gradually increases the time between repetitions of the same material. A regulating intelligent control framework is used to compute an effective way of steering an insufficient or improperly behaving system toward a specific objective in an uncertain context, and a convolutional neural network is implemented to automatically detect vital physical-activity characteristics without human monitoring. Experimental findings show that, across different sequences, up to 96.51% of test data are correctly classified, and the intelligent control framework achieves 94.3% in intelligent practice-intensity control.
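The convolutional feature-extraction idea can be shown on a one-dimensional activity signal. This is a minimal numpy sketch of a single conv → ReLU → max-pool layer, not the trained network from the study; the edge-detecting kernel and signal are toy assumptions:

```python
import numpy as np

def conv1d(signal, kernel):
    # 'valid' 1-D convolution (cross-correlation), as used in CNN layers
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # non-overlapping max pooling; trailing remainder is dropped
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(1)

def activity_features(signal, kernels):
    # one conv layer: each kernel yields one pooled feature map
    return np.concatenate([max_pool(relu(conv1d(signal, k)))
                           for k in kernels])
```

With a difference kernel `[1, -1]`, the pooled output highlights where the sensor signal rises or falls, which is the kind of motion cue a trained CNN learns to extract automatically.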
The evaluation system for education effect is an important part of the whole teaching process, and establishing an evaluation system for the effect of college English teaching is important for testing that effect. The traditional evaluation model, though widely used, cannot be applied to a variety of teaching situations. Therefore, this paper proposes an evaluation model of college English education effect based on big data analysis. The paper determines the selection principles for the evaluation indices of college English education effect and, on this basis, selects the evaluation index factors (experts, students, and teachers), calculates the weight and membership matrix of the evaluation indices, and outputs the comprehensive evaluation results, thereby constructing the evaluation model. The results show that, for the experimental subjects (first- and second-year students), the evaluation errors of English education effect meet the needs of colleges and universities, proving that the model is effective and feasible and providing a basis and support for the reform of college English education. The assessment errors range from 0.78% to 1.44%, all within the demands of evaluating the English education effect, which demonstrates that the model is successful.
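The weight-and-membership-matrix step follows the standard fuzzy comprehensive evaluation scheme B = W · R, where W weights the indices and each row of R gives one index’s membership across the grade levels. A minimal sketch (the weights and matrix below are invented for illustration):

```python
import numpy as np

def comprehensive_eval(weights, membership):
    # fuzzy comprehensive evaluation: B = W · R
    # W: index weights summing to 1; R: one row per index,
    # each row a membership distribution over the grade levels
    W = np.asarray(weights, float)
    R = np.asarray(membership, float)
    assert np.isclose(W.sum(), 1.0)
    B = W @ R
    return B / B.sum()   # normalize to a grade distribution
```

The grade with the largest component of B is taken as the comprehensive result (the maximum-membership principle).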
Big data analysis and deep learning are two fields of data science attracting considerable interest today. The relevance of big data has increased as a result of the massive amounts of domain-specific data that many public and private organizations have gathered; such data are useful for research on topics like national security, fraud detection, and medical informatics. Companies such as Google and Microsoft analyze huge volumes of data for business analysis and decision making that affect both present and future technology. Deep learning algorithms extract high-level information by expressing complex abstractions as data representations through a layered learning approach: sophisticated abstractions at a given level are learnt on the basis of comparatively simpler abstractions created at the level before. Big data-based image processing, by contrast, is still a young field. Agriculture, textiles, transportation, and other fields have all made use of image transformation, image coding, compression, image segmentation, and related technologies, but traditional image processing techniques cannot handle the huge quantity of image samples available today. As a result, a new methodical area has emerged that explores big data-based image processing technology and develops an image processing model in order to raise the level and efficiency of image processing. According to existing big data research, an image processing model based on big data offers advantages including strong repeatability, high accuracy, broad applicability, good flexibility, and a high potential for information reduction.
In this review, the potential of deep learning and image processing to address some of the most challenging problems in big data analytics is explored, including the extraction of complex patterns from massive data sets, semantic indexing, data tagging, rapid information retrieval, and the simplification of discriminative tasks. The integration and interaction of the three main topics, image processing, deep learning, and big data, is explained; all three areas hold great promise for a variety of industries. The research issues arising in the integration and interplay of these broad domains are examined, as are some potential avenues for future study.
Education is a dynamic system through which students acquire what they need to fit into society; it is mainly intentional learning that grooms individuals to succeed in their adult lives. Evaluation of teaching techniques, course management (CM), communication, and student monitoring are the main characteristics of today’s education system. The aim of planning the curriculum of education management in both schools and colleges leads to the implementation of an MS-BDA. The development process for evaluating teaching techniques and CM uses sentiment analysis, which assesses the emotional responses of students studying the course in order to manage curriculum quality. Big data analysis with MNN is developed for the communication and student monitoring system: it evaluates the monitoring model provided in MS-BDA for assessing student communication by merging voice-over with the communication language processing system. Simulation analysis of accessibility, adaptability, and efficiency demonstrates the proposed framework’s reliability, and the system achieves an accuracy of 99.1% compared with existing methods.
We evaluate regional differences in road recovery in Fukushima Prefecture following the 2011 Tohoku Earthquake. We divided Fukushima Prefecture into seven regions, i.e., the Soso, Iwaki, Kenhoku, Kenchu, Kennan, Aizu, and Minami-aizu regions. The cumulative usable road distance ratio of the main roads was precisely calculated for each city from telematics probe-car data using open-source geographical information software. Defining the cumulative usable distance up to September 30, 2011 as 100%, the percentages of usable road distance were calculated. According to the results of our study, we conclude that the recovery conditions of regional roads differed among the areas of Fukushima Prefecture following the 2011 Tohoku Earthquake. The road recovery speeds in the Coastal and Inland areas did not differ substantially. The road recovery speed in the Aizu region was about a month slower than in the Coastal and Inland areas, and that in the Minami-aizu region was about a month slower than in the Aizu region. We conclude that the roads in these two significantly delayed regions were narrow, steep-walled, and located in mountainous terrain.
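The cumulative usable road distance ratio used in this and the following road-recovery studies reduces to a simple calculation: each date’s cumulative usable distance is expressed as a percentage of the distance usable by the baseline date (September 30, 2011 here). A small sketch with invented sample figures:

```python
def recovery_ratios(cumulative_km):
    # cumulative_km: list of (date, cumulative usable distance) pairs,
    # chronologically ordered; the last entry is the 100% baseline date
    base = cumulative_km[-1][1]
    return [(date, 100.0 * km / base) for date, km in cumulative_km]

def first_date_reaching(ratios, threshold):
    # first date on which the usable-distance ratio meets the threshold
    for date, ratio in ratios:
        if ratio >= threshold:
            return date
    return None
```

Comparing, say, the first dates at which each region reaches 80% and 90% gives the region-by-region recovery-speed comparisons reported in these abstracts.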
This paper uses computer technology to analyze data on the Yangtze River Delta from 2003 to 2019 and studies the impact of borrowed size and borrowed functions on total factor productivity. The results show that borrowed size and borrowed functions accelerate total factor productivity through its two components, technical efficiency and technological progress: the former can be improved by borrowed size and borrowed functions, while the latter can be further advanced by upgrading the industrial structure. It is therefore necessary to strengthen accessibility between cities, improve technical efficiency and technological progress, and promote the matching of industrial structure with urban scale and function, thereby realizing the integrated development of the Yangtze River Delta.
We calculated the usable distance of the main roads in the coastal area of Iwate Prefecture following the 2011 Tohoku Earthquake. Calculations were based on a vehicle tracking map built from G-BOOK telematics data using QGIS, an open-source geographic information system. The main findings are as follows. First, the change in the cumulative usable road distance ratio during the research period differed from one municipality (i.e., city, town, or village) to the next. Second, the increases in the usable-distance ratio for Kuji, Iwaizumi, and Noda were extremely delayed. We could identify the related roads by analyzing the maps generated by QGIS. For Kuji and Iwaizumi, the delays depended mostly on Iwate Prefectural Road number 7 (Kuji-Iwaizumi line); for Noda, the delay depended mostly on Iwate Prefectural Road number 273 (Akka-Tamagawa line). Third, we had determined in a previous study that use of the main road in the southern coastal area of Iwate Prefecture was completely recovered by April 29, 2011. However, in this study, when we precisely observed the change in the usable road distance ratio for each municipality during the research period, the ratio increase for Kamaishi was delayed compared with the other southern coastal cities.
We evaluate regional differences in road recovery in Miyagi Prefecture following the 2011 Tohoku Earthquake. We divided Miyagi Prefecture into three areas, i.e., the Inland, Northern Coastal, and Southern Coastal areas. According to the results of our study, we conclude that the recovery conditions of regional roads differed among the areas of Miyagi Prefecture following the 2011 Tohoku Earthquake. In the Northern Coastal area, 80% of the road distance was usable by April 15, 2011 and 90% by May 27, 2011. In the Southern Coastal area, 80% of the road distance was usable by March 31, 2011 and 90% by April 8, 2011. Recovery in the Southern Coastal area was thus much faster than in the Northern Coastal area. We attribute this to the shape of the coastlines: those in the Northern Coastal area are primarily rias, while those in the Southern Coastal area are mostly sandy. Furthermore, we conclude that the recovery conditions of regional roads in the Northern Coastal area of Miyagi Prefecture were similar to those in the southern coastal area of Iwate Prefecture; in the disaster regions, similar recovery conditions were found for similar geographic positions and features.