The integration of Artificial Intelligence (AI) and sign language recognition is an active topic in AI+Science, aimed at addressing the communication barriers faced by deaf and hard-of-hearing communities. This paper examines that integration and its potential to bridge communication gaps. It reviews the evolution of sign language recognition from data gloves to computer vision and underscores the role of extensive databases. The paper also discusses the benefits of multi-modal AI models in enhancing recognition accuracy, and highlights the importance of government and industry support, ethical data practices, and user-centered design in advancing this technology. The challenges and opportunities of integrating the technology into daily life, including technical, interface, and ethical considerations, are explored, emphasizing the need for user-focused solutions and innovative technical approaches.
In heterogeneous wireless sensor networks, data collection based on compressed sensing is susceptible to packet loss and noise, which degrades reconstruction accuracy over unreliable links. Combining compressed sensing and matrix completion, we propose a clustering optimization algorithm based on structured-noise matrix completion, in which each cluster head transmits its compressed samples and compression strategy to the base station. The proposed algorithm reduces node energy consumption during data collection, as well as redundant data and transmission delay. A rank-1 matrix completion algorithm constructs an extremely sparse observation matrix, which the sink node uses to reconstruct the data of the whole network. Simulation experiments show that the proposed algorithm reduces the volume of transmitted data, balances node energy consumption, improves data transmission efficiency and reconstruction accuracy, and extends the network lifetime.
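To make the matrix-completion idea concrete, the following is a minimal sketch of generic low-rank completion via iterative singular-value soft-thresholding. It is not the paper's structured-noise, rank-1 scheme; the function names, the threshold tau, and the toy rank-1 data are assumptions for illustration only.

```python
# Illustrative sketch: fill missing entries of a partially observed matrix by
# alternating a low-rank SVD estimate with re-imposing the observed entries.
import numpy as np

def complete_matrix(X_obs, mask, tau=0.5, n_iters=200):
    """Recover missing entries of X_obs (mask==True where observed)."""
    X = np.where(mask, X_obs, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # soft-threshold singular values
        X_low = (U * s) @ Vt                # low-rank estimate
        X = np.where(mask, X_obs, X_low)    # keep observed entries fixed
    return X

# Toy example: a rank-1 "sensor readings" matrix with ~60% of entries observed.
rng = np.random.default_rng(0)
truth = np.outer(rng.normal(size=20), rng.normal(size=15))  # rank-1 ground truth
mask = rng.random(truth.shape) < 0.6
recovered = complete_matrix(truth, mask)
print("relative error:", np.linalg.norm(recovered - truth) / np.linalg.norm(truth))
```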
This paper proposes a cross-media intelligent perception and retrieval analysis technology based on deep learning, motivated by the inability of traditional cross-media retrieval to obtain retrieval information promptly and accurately in the complex knowledge environment of artificial intelligence education. Based on cross-media theory, the paper analyzes the pre-processing of cross-media data, moves from single media forms such as text, voice, image, and video to cross-media integration covering cyberspace and physical space, and designs a cross-media intelligent perception platform. Using a multi-kernel canonical correlation analysis algorithm, it develops a new joint learning framework for cross-modal retrieval based on joint feature selection and subspace learning. The algorithm is then tested experimentally. The results show that the proposed retrieval analysis technology significantly outperforms traditional media retrieval: it effectively identifies text semantics and classifies visual images, and better maintains the relevance of data content and the consistency of semantic information, offering important reference value for cross-media applications.
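As a rough illustration of the canonical-correlation idea behind such cross-modal retrieval, here is a minimal sketch using scikit-learn's linear CCA. It does not reproduce the paper's multi-kernel CCA or its joint feature-selection/subspace-learning framework; the feature dimensions and data are synthetic placeholders.

```python
# Project text and image features into a shared subspace with CCA, then rank
# images by cosine similarity to a text query in that subspace.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200
text_feats = rng.normal(size=(n, 50))                               # text descriptors
image_feats = text_feats[:, :30] + 0.1 * rng.normal(size=(n, 30))   # correlated image descriptors

cca = CCA(n_components=10)
cca.fit(text_feats, image_feats)
t_proj, i_proj = cca.transform(text_feats, image_feats)  # shared subspace

q = t_proj[0]  # treat the first text item as the query
sims = (i_proj @ q) / (np.linalg.norm(i_proj, axis=1) * np.linalg.norm(q) + 1e-12)
print("top-5 retrieved image indices:", np.argsort(-sims)[:5])
```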
A Battery Management System (BMS) can prolong battery life, but its effectiveness depends on the accuracy of the adopted scheme. Different techniques have been developed to enhance the BMS by monitoring the State of Health (SOH) of the battery. In this paper, battery voltage detection is analyzed using the conventional cycle counting method and compared with an Artificial Neural Network (ANN), a heuristic method. The advantage of the proposed ANN method is that SOH can be monitored without disconnecting the battery from the load. The sampling data for the ANN are derived from several techniques, including the Open Circuit Voltage (OCV) method, ambient temperature measurement, and valley point detection. A feed-forward backpropagation algorithm is used to achieve real-time monitoring of the LAB. The results show that a precise estimation of SOH can be obtained by a Feed-Forward Neural Network (FFNN) when trained with more sampling data.
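The following is a minimal sketch of SOH estimation with a feed-forward network trained by backpropagation, using scikit-learn's MLPRegressor. The input features (OCV, ambient temperature, valley-point voltage), the synthetic labels, and the network size are illustrative assumptions, not the paper's dataset or architecture.

```python
# Train a small FFNN to map battery measurements to an SOH estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
ocv = rng.uniform(11.5, 13.0, n)      # open-circuit voltage [V]
temp = rng.uniform(10.0, 40.0, n)     # ambient temperature [degC]
valley = rng.uniform(9.0, 11.0, n)    # valley-point voltage [V]
X = np.column_stack([ocv, temp, valley])
# Hypothetical SOH label in [0, 1], for demonstration only.
soh = np.clip(0.5 * (ocv - 11.5) / 1.5 + 0.5 * (valley - 9.0) / 2.0
              + 0.02 * rng.normal(size=n), 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, soh, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```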
This article reports a survey of knowledge, attitudes, and usage of complementary and alternative medicine in Singapore, describing the methods, results, and conclusions of the study.
Constraint-based Genome-scale In Silico Models for Systems Biology.
Korean Systems Biology Project.
Systems Biology's Promises and Challenges.
Many real-world networks exhibit behavior that contradicts what established theories of network formation and communication patterns would predict. A common prerequisite in network analysis is that information on nodes and links be complete, because network topologies are extremely sensitive to missing information of this kind. As a result, many real-world networks that fail to meet this criterion under the assumption of random sampling may be discarded.
In this paper, we offer a framework for interpreting missing observations in network data under the hypothesis that these observations are not missing at random. We demonstrate the methodology with a case study of a financial trade network, in which agents' awareness of data collection by a self-interested observer may lead to strategic revealing or withholding of information. Such non-random missingness has been overlooked, despite possibly being an important feature of the processes by which the network is generated. The analysis suggests that strategic information withholding may be a general phenomenon in complex systems. The evidence is sufficient to support the existence of an influential observer and to offer a compelling dynamic mechanism for the creation of the network.
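As a toy illustration of why not-missing-at-random links matter, the sketch below contrasts random edge loss with strategic withholding. The withholding rule (hide the highest-value links) is a hypothetical stand-in for the strategic behavior the paper studies, not the authors' model: both masks drop a similar share of links, but the strategic one hides far more of the total trade value.

```python
# Compare edges missing at random (MAR) with strategically withheld edges (MNAR).
import numpy as np

rng = np.random.default_rng(0)
n = 50
value = rng.lognormal(mean=0.0, sigma=1.0, size=(n, n))  # trade value per link
np.fill_diagonal(value, 0.0)

p_observe = 0.7
mar_mask = rng.random((n, n)) < p_observe       # each link observed with prob 0.7
threshold = np.quantile(value, p_observe)       # MNAR: hide the top 30% by value
mnar_mask = value < threshold

print("observed share of total value (MAR): ",
      value[mar_mask].sum() / value.sum())
print("observed share of total value (MNAR):",
      value[mnar_mask].sum() / value.sum())
```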
Studying school readiness is an interesting domain that has attracted the attention of the public and private sectors in education, and researchers have developed techniques for assessing the readiness of preschool children to start school. Here we use an integrated approach that combines Data Mining (DM) and social network analysis to work towards a robust solution. The main objective of this study is to explore socio-demographic variables (age, gender, parents' education, parents' work status, and class and neighbourhood peer influence), achievement data (Arithmetic Readiness, Cognitive Development, Language Development, Phonological Awareness), and other data that may impact school readiness. To achieve this, we apply DM techniques to predict school readiness. Real data on 306 preschool children were used from four different elementary schools: (1) the Life School for Creativity and Excellence, a private school located in Ramah village, (2) the Sisters of Saint Joseph missionary school located in Nazareth, (3) the Franciscan missionary school located in Nazareth, and (4) the Al-Razi public school located in Nazareth; white-box classification methods, such as induction rules, were employed. The experiments attempt to improve prediction accuracy for identifying which children might fail or drop out by first using all the available attributes, then selecting the best attributes, and finally rebalancing the data and using cost-sensitive classification. The outcomes have been compared, and the models with the best results are shown.
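Below is a minimal sketch of the kind of white-box, cost-sensitive classification the abstract describes, using a shallow decision tree whose branches read as induction rules. The feature names and synthetic data are illustrative assumptions, not the study's dataset, and the class weights stand in for its cost-sensitive step.

```python
# Fit a shallow, cost-sensitive decision tree and print its rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 306
X = np.column_stack([
    rng.uniform(60, 80, n),    # age in months
    rng.integers(0, 2, n),     # gender
    rng.integers(0, 4, n),     # parents' education level
    rng.uniform(0, 100, n),    # arithmetic readiness score
    rng.uniform(0, 100, n),    # language development score
])
y = (X[:, 3] + X[:, 4] + 10 * rng.normal(size=n) > 100).astype(int)  # ready?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# class_weight penalizes missing a not-ready child more than a false alarm.
tree = DecisionTreeClassifier(max_depth=3, class_weight={0: 3, 1: 1},
                              random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=[
    "age", "gender", "parent_edu", "arithmetic", "language"]))
print("test accuracy:", tree.score(X_te, y_te))
```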
The purpose of this paper is to develop a Knowledge Management (KM) model to investigate and monitor the performance of Higher Education Institutions (HEIs) in Oman. A KM model is proposed that comprises five KM components, a relative performance factor, and a critical set of 22 indicators selected from an original set of 180; using these KM categories and indicators, the performance of 30 HEIs was measured and studied. The literature and related approaches were reviewed, and a model based on the five main KM components, the relative performance factor, indicators, weights, and data collection was proposed. Three types of data were collected from institutions to calculate the indicators and the relative performance factor, and several tests were carried out. The conceptualisation and operationalisation of the model are also discussed. The proposed KM model was implemented in all the private institutions; the results show the applicability and accuracy of the technique, and the technique was compared with the classical approach. The statistical tests show a high level of accuracy, that the results of institutions differ significantly from one another, and that the differences between institutions are not due to random factors. The numerical results place the performance of HEIs into three categories. The findings give individual HEIs and stakeholders significant evidence on which to act, allowing them to distinguish institutions in terms of actual performance.
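To make the indicator-and-weight idea concrete, here is a small sketch of scoring institutions from weighted, normalized indicators, in the spirit of a relative performance factor. The weights, indicator values, banding thresholds, and the simple weighted-sum formula are assumptions for demonstration; the paper's exact calculation is not reproduced.

```python
# Score institutions as a weighted sum of normalized indicators, then express
# each score relative to the best performer and assign a performance band.
import numpy as np

indicators = np.array([   # rows: institutions, cols: normalized indicators
    [0.8, 0.6, 0.9],
    [0.5, 0.7, 0.4],
    [0.9, 0.9, 0.7],
])
weights = np.array([0.5, 0.3, 0.2])   # indicator weights, summing to 1

scores = indicators @ weights          # weighted performance score
relative = scores / scores.max()       # relative to the best performer
for i, r in enumerate(relative):
    band = "high" if r > 0.9 else "medium" if r > 0.7 else "low"
    print(f"institution {i}: relative performance {r:.2f} ({band})")
```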
Field measurements are important for understanding coastal processes, verifying and calibrating numerical and physical models, and providing direct input to coastal designs. Many aspects of coastal engineering are hampered by the lack of high-quality field data, particularly during storms. This paper introduces the Sensor Insertion System (SIS), which takes a somewhat different approach to measurement that has proven useful at the US Army Corps of Engineers Field Research Facility (FRF). The SIS is a pier-mounted, diverless instrument deployment and retrieval system that can make measurements under calm or storm conditions anywhere across the surf zone. It can operate in wave heights up to 5.6 m, winds of 20 m/s, and currents of 2 m/s. The mobility of the SIS permits measurements to evolve with the morphology, avoiding many of the problems of traditional stationary instrument installations. The SIS approach uses a single instrument array to reduce the cost and logistics of instrumenting the surf zone so that a long-term measurement capability can be maintained. This approach has helped overcome many of the obstacles to directly measuring storm longshore sediment transport processes at the FRF during the past six years. The utility of the SIS is demonstrated in this paper through examples of its application to a variety of coastal science investigations.
Nowadays, there is a huge number of digital television platforms and channels, so it is not easy for viewers to decide what they want to watch. Some television providers offer information about the programs they broadcast, but this information is usually scarce, and there is no possibility of performing advanced operations such as recommendations. For this reason, viewers could benefit from a system that integrates all the available information about content and applies semantic methodologies in order to provide a better television watching experience.
The main objective of this research is the design of a television content management system, called OntoTV, which retrieves television content information from various existing sources and represents all these data in the best way possible by using knowledge engineering and ontologies. These semantic computing techniques make it possible to offer viewers more useful operations on the stored data than traditional systems do, with a high degree of personalization. Additionally, OntoTV accomplishes this regardless of the TV platform installed and the client device used. Viewers' satisfaction when using the system has also been studied to prove its functionality.
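As a rough illustration of the ontology-based representation such a system relies on, here is a minimal sketch using the rdflib package: TV programs are stored as RDF triples and queried with SPARQL. The namespace, classes, and properties below are invented for illustration; they are not OntoTV's actual ontology.

```python
# Represent TV content as RDF triples and query them with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

TV = Namespace("http://example.org/tv#")   # hypothetical namespace
g = Graph()
g.add((TV.prog1, RDF.type, TV.Documentary))
g.add((TV.prog1, TV.title, Literal("Oceans of the World")))
g.add((TV.prog1, TV.channel, Literal("Channel 4")))
g.add((TV.prog2, RDF.type, TV.Movie))
g.add((TV.prog2, TV.title, Literal("The Long Voyage")))

# Find all documentaries, regardless of which source described them.
q = """
PREFIX tv: <http://example.org/tv#>
SELECT ?title ?channel WHERE {
    ?p a tv:Documentary ; tv:title ?title .
    OPTIONAL { ?p tv:channel ?channel }
}
"""
for title, channel in g.query(q):
    print(title, "-", channel)
```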
Microchips constitute the basic ingredients in almost all products that use electricity as a power source; they have not only revolutionized how we process information and communicate but also serve as indispensable parts of renewable energy systems and stable power grids. The production of leading-edge chips is of high military importance as support for advanced weapon systems using artificial intelligence, and thereby has geopolitical implications. Microchip production is economically efficient and technologically advanced, made possible through increased specialization and consolidation of activities along the global value chain. This has concentrated essential value chain activities in different geographical regions, including the US, Europe and Japan, with the most advanced, leading-edge production facilities located in East Asia (South Korea and Taiwan). The highly capital-intensive, specialized and concentrated production activities make the global chip value chain tightly coupled and less flexible to withstand major disruptions, as witnessed during the COVID-19 pandemic. One consequence is that various governments in the US and Europe, as well as incumbent market leaders in East Asia, are attempting to expand their local production capacities, the eventual effects of which remain to be seen.
This chapter reviews the regulation of digital markets within the EU and the associated legislation and enforcement of the same. It is noted that the implementation of the General Data Protection Regulation (GDPR) should lead to major advantages for personal data protection while avoiding the dissemination of misleading information and unlimited data exploitation by dominant data platforms. However, the enforcement of this legislation has largely failed within the EU due to widespread animosity between the Irish and Luxembourg supervisory authorities, which even manifested as legal actions against other EU supervisory authorities, thereby, in reality, paralyzing the enforcement of the common rules across the entire EU. The Digital Services Act (DSA) and the Digital Markets Act (DMA), soon to be implemented, are largely attempting to circumvent the adverse effects of the failed enforcement of the GDPR but are unlikely to have much impact given the negative relations between the national supervisory authorities across the EU. This situation jeopardizes the entire standing of EU legislation as a reliable global standard-setter.
This chapter argues that state-owned Chinese integrated maritime logistics enterprises are about to change the balance of power vis-à-vis the hitherto dominant, privately owned enterprises based in Europe. This shift, actively supported as part of China’s ambitious Belt and Road Initiative, will directly affect the European Union’s common transport and competition policy. Within the larger Belt and Road Initiative, the Maritime Silk Road project can be seen as the umbrella concept for the comprehensive management of the entire supply chain between China and Europe. We discuss possible policy implications for both China and the European Union in managing the subtle balance between geopolitical considerations and the efficient operation of trade and transport controlled by a few dominant actors. As part of our theoretical framework, we use two extensions of the classical obsolescing bargaining model: the one-tier bargaining model and a bargaining model of reciprocation. By combining the two models, we aim to explain the changing nature of bargaining relations between, on the one hand, the Chinese government and its state-owned enterprises and, on the other, privately owned European companies, as a function of the goals, resources and constraints of the involved parties.
This paper describes the design method and hardware circuitry of an intelligent temperature controller based on serial chips. The watchdog circuit of the controller is implemented with the serial chip 25045, and the LED display with the serial chip MC14489. Communication with the host computer is handled by a MAX232C. The hardware and software design of the A/D conversion uses the serial chip MAX187. For the D/A output, not only are the hardware connections and software provided, but the timing of the D/A conversion circuit is also analyzed. Compared with earlier designs, all of the chips used in the controller are serial chips, which yields a small footprint and saves the CPU's I/O resources. The system offers high reliability and stable performance and is convenient to use.
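To illustrate how a serial-output ADC of this kind is typically read, here is a simplified, self-contained sketch of bit-banging a chip-select and clock line for a 12-bit device in the MAX187 family. The pin helpers are hypothetical stubs (on real hardware they would drive GPIO or port pins), and the exact start-of-conversion and clocking sequence must follow the device datasheet; this is a sketch of the general pattern, not the paper's implementation.

```python
# Bit-bang read of a hypothetical 12-bit serial ADC (stubbed pin I/O).
def set_pin(name, level):        # stub: drive an output pin high/low
    pass

def read_pin(name):              # stub: sample an input pin (pretend it reads high)
    return 1

def read_adc_12bit():
    set_pin("CS", 0)             # falling CS edge starts a conversion
    while read_pin("DOUT") == 0: # data line goes high when conversion is done
        pass
    value = 0
    for _ in range(12):          # clock out 12 data bits, MSB first
        set_pin("SCLK", 1)
        set_pin("SCLK", 0)
        value = (value << 1) | read_pin("DOUT")
    set_pin("CS", 1)             # deselect the chip
    return value                 # raw 12-bit code, 0..4095

raw = read_adc_12bit()
voltage = raw / 4095 * 4.096     # assuming a 4.096 V reference, for illustration
print("raw code:", raw, "voltage:", round(voltage, 3))
```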
The biomedical sciences have experienced an explosion of data that threatens to overwhelm many current practitioners. Without easy access to data science training resources, biomedical researchers may find themselves unable to wrangle their own datasets. In 2014, to address the challenges posed by this data onslaught, the National Institutes of Health (NIH) launched the Big Data to Knowledge (BD2K) initiative. To this end, the BD2K Training Coordinating Center (TCC; bigdatau.org) was funded to facilitate both in-person and online learning and to open up the concepts of data science to the widest possible audience. Here, we describe the activities of the BD2K TCC and its focus on the construction of the Educational Resource Discovery Index (ERuDIte), which identifies, collects, describes, and organizes online data science materials from BD2K awardees, open online courses, and videos from scientific lectures and tutorials. ERuDIte now indexes over 9,500 resources. Given the richness of online training materials and the constant evolution of biomedical data science, computational methods applying information retrieval, natural language processing, and machine learning techniques are required - in effect, using data science to inform training in data science. In so doing, the TCC seeks to democratize the novel insights and discoveries brought forth by large-scale data science training.
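The following is a minimal sketch of the information-retrieval side of indexing training resources: TF-IDF vectors plus cosine similarity over resource descriptions. The toy corpus is invented, and ERuDIte's actual pipeline is far richer (natural language processing, machine learning, curated metadata); this only illustrates the basic retrieval pattern.

```python
# Rank training resources against a free-text query with TF-IDF + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resources = [
    "Introduction to machine learning for biomedical data",
    "Genomic data wrangling with command-line tools",
    "Statistics and reproducibility in clinical research",
    "Deep learning tutorial: neural networks for imaging",
]
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(resources)

query = vectorizer.transform(["neural networks for medical imaging"])
scores = cosine_similarity(query, matrix).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:.2f}  {resources[i]}")
```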
The Robert W. Woodruff Library at Emory University is working on a project to merge its reference, circulation and technology help desks into one service point. The timeframe to implement this project is very short. Therefore, reference staff, working with circulation staff, have developed quick methods to analyze existing data and quickly gather new data to help inform decision-making about staffing levels, training, and a location for the new service point. This paper is a case study detailing some of the methods and processes used by staff to make decisions.
This paper introduces the design of a high-speed, multi-channel, real-time data collection system based on an FPGA. The main components of the system and the FPGA implementation method are described. An EP2C5T144 serves as the FPGA hardware, realizing the control circuits, the multiplexer, and the memory circuit for the A/D converter. The FPGA software is written in the VHDL hardware description language, and the high-speed, high-accuracy TLC549 device is used as the A/D converter.