  Bestsellers

  • A Smart Tourism Resource Information Management Platform Based on Multi-Source Data Fusion

    Leveraging multiple data sources to enhance tourism resource management and visitor behavior analysis has become a key challenge in the context of the booming smart tourism industry. In this study, we explore how to integrate and optimize multiple data sources including social media activities, user reviews, tourism statistics, and geographic information to build a comprehensive information management platform for smart tourism resources. Given the limitations inherent in isolated and decentralized data processing approaches in the smart tourism domain, we propose a new approach using deep learning autoencoders for efficient extraction and fusion of meaningful features from heterogeneous datasets. Our methodology encompasses a rigorous data collection and preprocessing phase, ensuring data quality and consistency, followed by the application of autoencoders to learn high-level feature representations conducive to data integration. The fused data facilitate the development of strategies for the optimal allocation of tourism resources and nuanced analysis of visitor behavior patterns. Experimental evaluations demonstrate the model’s proficiency in capturing intricate data relationships, significantly enhancing the predictive accuracy for tourism demand forecasting, and enabling personalized visitor recommendations. The results underscore the potential of our approach to revolutionize smart tourism management practices by providing actionable insights into resource optimization and visitor engagement strategies, thereby contributing to the sustainable growth of the tourism sector.
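
    As a rough illustration of the fusion step described above, the sketch below trains a simple Keras autoencoder on concatenated per-source feature vectors and uses the bottleneck layer as the fused representation. The feature matrices, their dimensions, and the layer sizes are hypothetical stand-ins, not the paper's actual configuration.

```python
import numpy as np
import tensorflow as tf

# Hypothetical pre-extracted feature matrices, one per source (1000 records).
social  = np.random.rand(1000, 32).astype("float32")
reviews = np.random.rand(1000, 64).astype("float32")
stats   = np.random.rand(1000, 16).astype("float32")
geo     = np.random.rand(1000, 8).astype("float32")
x = np.concatenate([social, reviews, stats, geo], axis=1)  # (1000, 120)

# Autoencoder: the 16-unit bottleneck is the fused high-level representation.
inp  = tf.keras.Input(shape=(x.shape[1],))
h    = tf.keras.layers.Dense(64, activation="relu")(inp)
code = tf.keras.layers.Dense(16, activation="relu", name="fused")(h)
h2   = tf.keras.layers.Dense(64, activation="relu")(code)
out  = tf.keras.layers.Dense(x.shape[1])(h2)

autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=10, batch_size=32, verbose=0)

# The encoder alone maps each record to its fused feature vector.
encoder = tf.keras.Model(inp, code)
fused = encoder.predict(x, verbose=0)   # shape (1000, 16)
```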

  • Research on Intelligent Analysis Algorithm for Student Learning Behavior Based on Multi-source Sensor Data Fusion

    In the era of digital transformation, leveraging multi-source sensor data to analyze and enhance student learning behavior has become increasingly crucial. In this research, we propose a Learn-Sync Intelligent Fusion (LSIF) system built on Estimate-Adam-driven Intelligent Gradient Boosting Machines (EA-IGBM), which integrate sensor data from diverse sources to analyze student engagement and performance through multi-source sensor data fusion. The system uses sensor data from both online and offline classes to predict student learning behavior during class. To provide a holistic view of engagement and performance, the sensor data are normalized using min–max normalization. Recursive Feature Elimination (RFE) is then applied to the normalized multi-source data to select features, which are integrated using the feature-level fusion technique. The LSIF aggregates the fused behavioral data from online and offline activities and uses the EA-IGBM algorithm to predict academic success and provide actionable feedback. The model is simulated using TensorFlow 2.15. The proposed EA-IGBM algorithm performs significantly better at detecting engagement in both offline and online learning, drawing on parameters such as student engagement, interaction frequency, behavioral patterns, and distraction levels. The system highlights the effectiveness of multi-source sensor data fusion in monitoring and optimizing student learning behavior.
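
    A minimal sketch of the preprocessing pipeline the abstract names (min–max normalization, RFE, feature-level fusion, gradient boosting), using standard scikit-learn components in place of the authors' EA-IGBM; the data and feature counts are invented.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Invented stand-ins for online- and offline-class sensor features.
online  = np.random.rand(500, 20)
offline = np.random.rand(500, 15)
y = np.random.randint(0, 2, 500)          # engaged vs. disengaged labels

x = np.hstack([online, offline])          # feature-level fusion
x = MinMaxScaler().fit_transform(x)       # min-max normalization

# Recursive Feature Elimination keeps the most informative features.
selector = RFE(GradientBoostingClassifier(), n_features_to_select=10)
x_sel = selector.fit_transform(x, y)

x_tr, x_te, y_tr, y_te = train_test_split(x_sel, y, random_state=0)
gbm = GradientBoostingClassifier().fit(x_tr, y_tr)  # plain GBM, not EA-IGBM
print("engagement-detection accuracy:", gbm.score(x_te, y_te))
```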

  • The Key Role of Multi-Source Data Fusion in the Construction of Digital Transformation in Manufacturing Industry

    The manufacturing sector is regarded as the foundation of social and economic growth. Manufacturing is the processing of raw materials or components into finished products ready for consumer purchase. The industrial internet of things (IIoT) is an emerging technology with the potential to increase manufacturing productivity, reduce costs, and boost industrial intelligence. However, digitalization and automation are constrained by political unpredictability, economic volatility, and a shortage of skilled labor. We propose a novel transfer learning-based data fusion machine (TLDF) to address these challenges in IIoT, together with a multi-source deep Q-networks (MDQN) technique for task classification, task receiving, and privacy safeguarding in the manufacturing industry. Data fusion, which involves gathering and analyzing the massive volumes of internet of things (IoT) data produced by industrial applications and devices, is central to improving manufacturing applications in IIoT. The results demonstrate that the proposed technique achieves low latency, high throughput, and high accuracy. The research focuses on data fusion and its role in the digital transformation of the manufacturing sector, emphasizing the need for ongoing innovation in data integration technology and the implementation of a comprehensive data management strategy.

  • An Unmanned Traffic Command System for Controlled Waterway in Inland River: An Edge-centric IoT Approach

    Unmanned Systems, 10 Dec 2024

    The controlled waterway in the upper reaches of the Yangtze River has become a bottleneck for shipping due to its curved, narrow, and turbulent characteristics. Consequently, the competent authorities must establish controlled one-way waterways and signal stations to ensure traffic safety. These signal stations are often located in remote, uninhabited mountainous areas, making living and working conditions difficult for the staff. A trend has therefore emerged toward unmanned, remote traffic command at signal stations. Vessels passing through the waterway must obey the signals issued by the Intelligent Vessel Traffic Signaling System (IVTSS) to pass in one direction. The accuracy of these signals is directly related to traffic safety and efficiency. However, the unreliability of vessel-detection sensors in these areas and the latency of transmitting and computing large amounts of sensing data may negatively impact the IVTSS. More information from the physical world is therefore needed to ensure its stable operation, and we propose an edge-computing-centric sensing and execution system based on an IoT architecture to enhance the reliability of the IVTSS. We conducted experiments using plug-and-play methods, reducing the command and recording error rates by 89.47% and 86.27%, respectively, and achieving real-time perception and control.

  • DESIGNING A FUSION-DRIVEN SENSOR NETWORK TO SELECTIVELY TRACK MOBILE TARGETS

    Sensor networks that can support time-critical operations pose challenging problems for tracking events of interest. We propose an architecture for a sensor network that autonomously adapts in real time to data fusion requirements, so as not to miss events of interest, and provides accurate real-time mobile target tracking. In the proposed architecture, the sensed data are processed in an abstract space called the Information Space, and communication between nodes is modeled in an abstract space called the Network Design Space. The two abstract spaces are connected through an interaction interface called InfoNet, which seamlessly translates messages between them. The proposed architecture is validated experimentally on a laboratory testbed for multiple scenarios.

  • CLASSIFICATION OF MULTITEMPORAL REMOTE-SENSING IMAGES BY A FUZZY FUSION OF SPECTRAL AND SPATIO-TEMPORAL CONTEXTUAL INFORMATION

    A fuzzy-logic approach to the classification of multitemporal, multisensor remote-sensing images is proposed. The approach is based on a fuzzy fusion of three basic sources of information: spectral, spatial and temporal contextual information sources. It aims at improving the accuracy over that of single-time noncontextual classification. Single-time class posterior probabilities, which are used to represent spectral information, are estimated by Multilayer Perceptron neural networks trained for each single-time image, thus making the approach applicable to multisensor data. Both the spatial and temporal kinds of contextual information are derived from the single-time classification maps obtained by the neural networks. The expert's knowledge of possible transitions between classes at two different times is exploited to extract temporal contextual information. The three kinds of information are then fuzzified in order to apply a fuzzy reasoning rule for their fusion. Fuzzy reasoning is based on the "MAX" fuzzy operator and on information about class prior probabilities. Finally, the class with the largest fuzzy output value is selected for each pixel in order to provide the final classification map. Experimental results on a multitemporal data set consisting of two multisensor (Landsat TM and ERS-1 SAR) images are reported. The accuracy of the proposed fuzzy spatio-temporal contextual classifier is compared with those obtained by the Multilayer Perceptron neural networks and a reference classification approach based on Markov Random Fields (MRFs). Results show the benefit of adding spatio-temporal contextual information to the classification scheme, and suggest that the proposed approach represents an interesting alternative to the MRF-based approach, in particular, in terms of simplicity.
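
    One plausible reading of the fusion rule, sketched below for a single pixel: per-class membership values from the three information sources are combined with the MAX operator and weighted by class priors, and the class with the largest fused value is selected. The membership values and priors are illustrative, not from the paper.

```python
import numpy as np

# Per-pixel memberships for 3 land-cover classes from each source (invented).
spectral = np.array([0.7, 0.2, 0.1])   # MLP posteriors for the current image
spatial  = np.array([0.6, 0.3, 0.1])   # from neighboring single-time labels
temporal = np.array([0.5, 0.4, 0.1])   # from expert class-transition knowledge
priors   = np.array([0.5, 0.3, 0.2])   # class prior probabilities

# MAX-operator fusion weighted by priors; the largest fused value wins.
fused = priors * np.vstack([spectral, spatial, temporal]).max(axis=0)
print("assigned class:", int(np.argmax(fused)), "fused values:", fused)
```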

  • FACE AUTHENTICATION USING RECOGNITION-BY-PARTS, BOOSTING AND TRANSDUCTION

    The paper describes an integrated recognition-by-parts architecture for reliable and robust face recognition. Reliability and robustness refer, respectively, to the ability to deploy full-fledged, operational biometric engines and to the handling of adverse image conditions that include, among others, uncooperative subjects, occlusion, and temporal variability. The architecture proposed is model-free and non-parametric. The conceptual framework draws support from discriminative methods using likelihood ratios. At the conceptual level it links forensics and biometrics, while at the implementation level it links the Bayesian framework and statistical learning theory (SLT). Layered categorization starts with face detection using implicit rather than explicit segmentation. It proceeds with face authentication, which involves feature selection of local patch instances including dimensionality reduction, exemplar-based clustering of patches into parts, and data fusion for matching using boosting driven by parts that play the role of weak learners. Face authentication shares the same implementation with face detection. The implementation, driven by transduction, employs proximity and typicality (ranking) realized using strangeness and p-values, respectively. The feasibility and reliability of the proposed architecture are illustrated using FRGC data. The paper concludes with suggestions for augmenting and enhancing the scope and utility of the proposed architecture.
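
    A small sketch of the strangeness and p-value machinery the abstract mentions, under the usual transductive definitions (strangeness as a same-class/other-class nearest-neighbor distance ratio, p-value as a typicality rank). The feature dimensions and data are invented; the paper's patch features and boosting stage are not reproduced.

```python
import numpy as np

def strangeness(x, same_class, other_class, k=3):
    """Strangeness: summed distances to the k nearest same-class exemplars
    over summed distances to the k nearest other-class exemplars."""
    d_same = np.sort(np.linalg.norm(same_class - x, axis=1))[:k].sum()
    d_other = np.sort(np.linalg.norm(other_class - x, axis=1))[:k].sum()
    return d_same / d_other

def p_value(alpha_new, alphas):
    """Transductive p-value: the typicality rank of the new exemplar's
    strangeness among the training strangeness values."""
    return (np.sum(np.asarray(alphas) >= alpha_new) + 1) / (len(alphas) + 1)

rng = np.random.default_rng(0)
client = rng.normal(0.0, 1.0, (20, 8))    # invented client patch features
others = rng.normal(3.0, 1.0, (20, 8))    # invented impostor patch features

alphas = [strangeness(c, np.delete(client, i, axis=0), others)
          for i, c in enumerate(client)]
probe = rng.normal(0.0, 1.0, 8)           # a high p-value reads as "typical"
print("p-value:", p_value(strangeness(probe, client, others), alphas))
```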

  • EVALUATIONS OF PARTICLE FILTER BASED HUMAN MOTION VISUAL TRACKERS FOR HOME ENVIRONMENT SURVEILLANCE

    This paper presents a thorough study of several particle filter (PF) strategies dedicated to human motion capture from a trinocular vision surveillance setup. An experimental procedure based on a commercial motion capture ring is used to provide ground truth. Metrics are proposed to assess performance in terms of accuracy and robustness, as well as estimator dispersion, which is often neglected elsewhere. Relative performances are discussed through quantitative and qualitative evaluations on a video database. PF strategies based on Quasi Monte Carlo sampling, a scheme surprisingly seldom exploited in the Vision community, provide an interesting avenue to explore. Future work is discussed.
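
    For readers unfamiliar with the PF strategies being compared, the following is a generic bootstrap (SIR) particle filter for a 1-D state, assuming a random-walk motion model and Gaussian observation noise; it is a sketch, not the paper's trinocular tracker.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal(0.0, 1.0, N)      # initial 1-D state hypotheses
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, z, motion_std=0.5, obs_std=1.0):
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.size)
    # Update: reweight by the Gaussian observation likelihood.
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

for z in [0.2, 0.5, 1.1, 1.4]:            # fake observations of the target
    particles, weights = pf_step(particles, weights, z)
    print("state estimate:", float(np.sum(particles * weights)))
```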

  • Gesture Recognition Based on Kinect v2 and Leap Motion Data Fusion

    This study proposed a method for integrating gesture data from multiple motion-sensing devices (i.e. one Kinect v2 and two Leap Motions) in Unity; other depth cameras could replace the Kinect. The general steps for integrating gesture data from the motion-sensing devices were as follows. (1) A method was proposed to recognize fingertips in depth images from the Kinect v2. (2) The coordinates observed by the three devices were aligned in space in three steps: first, preliminary coordinate-conversion parameters were obtained through joint calibration of the three devices; second, the observations of the two device types were fitted to those of the reference Leap Motion by applying the least squares method twice (i.e. one Kinect and one Leap Motion on the first round, then the two Leap Motions on the second round). (3) Data from the three devices were aligned in time in Unity according to the data plan. On this basis, a human hand interacted with a virtual object in Unity. Experimental results demonstrated that the proposed method had a small recognition error for hand joints and realized natural interaction between the human hand and virtual objects.
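
    The spatial alignment step can be sketched as a least-squares rigid-transform fit (Kabsch/Procrustes) between corresponding points seen by two devices; the point sets and the reference rotation below are synthetic, and the paper's two-round procedure is reduced to a single fit.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/Procrustes), standing in for the joint-calibration fit."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic joint positions: Kinect frame vs. the reference Leap Motion frame.
rng = np.random.default_rng(1)
kinect_pts = rng.random((10, 3))
a = 0.3                                     # rotation about z by 0.3 rad
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
t_true = np.array([0.10, -0.20, 0.05])
leap_pts = kinect_pts @ R_true.T + t_true

R, t = fit_rigid_transform(kinect_pts, leap_pts)
aligned = kinect_pts @ R.T + t              # Kinect data in Leap coordinates
print("aligned:", np.allclose(aligned, leap_pts))
```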

  • Visual Target Interaction Modeling with Multichannel Data Fusion

    Facing the demand for accurate, fast, and natural human–computer interaction in multi-screen control environments, the traditional approach of frequently switching control across screens with a mouse, keyboard, and other manual methods suffers from low efficiency, weak flexibility, and a poor user experience. In this paper, we study multi-screen precision control technology based on eye movement, addressing the difficulty of precisely recognizing human visual focus in complex environments and the poor stability of gaze estimation when control switches frequently between screens. The experimental results of 1143 eye-movement capture tasks are studied, covering accurate recognition of human head posture and pupil gaze direction under complex backgrounds, illumination, and different visual-field conditions, and a theoretical modeling method for eye-movement-based multi-screen precise target interaction is explored. The expression data are processed using convolutional neural networks, and the dataset is trained through Python programming and tested on samples collected from simulated flight crews in the laboratory, with an accuracy rate of more than 85%.
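
    A minimal Keras CNN of the kind the abstract alludes to, assuming small grayscale eye-region patches and a handful of gaze-target classes; the input size, class count, and random data are placeholders, not the authors' setup.

```python
import numpy as np
import tensorflow as tf

# Invented stand-ins for eye-region patches: 48x48 grayscale, 9 gaze targets.
x = np.random.rand(200, 48, 48, 1).astype("float32")
y = np.random.randint(0, 9, 200)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(9, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)   # real training: lab eye-capture data
```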

  • BLACKBOARD CONCEPTS FOR DATA FUSION APPLICATIONS

    Data fusion has been defined as a process dealing with the association, correlation, and combination of data and information from multiple sources to achieve refined position and identity estimates for entities, and complete and timely assessments of related situations and threats, and their significance. This process (sometimes labeled a “technology”) is pervasive, i.e. capable of broad, multi-domain application. Indeed, data fusion has found extensive application in the commercial/industrial sector as well, in areas such as robotics and process control, and for numerous applications requiring intelligent, autonomous processes and capabilities. One of the purposes of this paper is to describe the evolving standard description of the data fusion process ascribed to by the U.S. Joint Directors of Laboratories (JDL) Data Fusion Subpanel (a Department of Defense organization), as well as components of the attendant lexicon and taxonomy.

    While the specific definitions of a “situation assessment (SA)” and a “threat assessment (TA)” have proven to be problem-dependent for most defense applications, these notions generally encompass a large quantity of knowledge that reflects the (dynamic) constituency-dependency relationships among objects of various classes as well as events and activities of interest. Formulation of hypotheses about situations and threats is a process having the following properties:

    • it employs many types of knowledge

    • it must consider multiple, asynchronous activities

    • multiple types of dynamic and static data must be processed

    • numerous sub-networks of interest in the situation/threat picture (numerous constituency-dependency relationships) exist—this leads to feedforward/backward inferencing requirements

    • information-processing strategies are required to produce estimates of aggregated force structures (given individual unit positions and identities), as well as aggregated behaviors (given individual events or activities)

    • the situational or threat state is often ephemeral and thus temporal reasoning capabilities must be part of the process

    The paper expands on the processes and techniques involved in SA and TA analysis, and describes, from various points of view, why the blackboard paradigm is properly applicable to problems of SA and TA analysis. This assessment includes various trade-off factors (features, benefits, and disadvantages or complexities) in applying blackboard concepts to data fusion related reasoning processes.

    Specific research and development by the authors, together with a synthesis of the results of a survey on data fusion applications (shown within), have led to the formulation of a recommended generic, ideal blackboard architecture for the defense problems described in the paper.
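
    A toy sketch of the blackboard paradigm discussed above: knowledge sources watch a shared blackboard and fire opportunistically when their preconditions hold. The two knowledge sources and their facts are invented illustrations of SA/TA-style aggregation, not the recommended architecture itself.

```python
class Blackboard:
    """Minimal blackboard: a shared store that knowledge sources watch."""
    def __init__(self):
        self.facts = {}

class TrackAggregator:
    # Hypothetical KS: fuses unit positions into a force-structure estimate.
    def can_contribute(self, bb):
        return "unit_positions" in bb.facts and "force_structure" not in bb.facts
    def contribute(self, bb):
        bb.facts["force_structure"] = {"units": len(bb.facts["unit_positions"])}

class ThreatAssessor:
    # Hypothetical KS: raises a threat hypothesis from the aggregate.
    def can_contribute(self, bb):
        return "force_structure" in bb.facts and "threat" not in bb.facts
    def contribute(self, bb):
        bb.facts["threat"] = bb.facts["force_structure"]["units"] > 2

def control_loop(bb, sources):
    # Opportunistic control: fire any KS whose preconditions hold,
    # repeating until no source can contribute anything new.
    progress = True
    while progress:
        progress = False
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                progress = True

bb = Blackboard()
bb.facts["unit_positions"] = [(1, 2), (3, 4), (5, 6)]
control_loop(bb, [TrackAggregator(), ThreatAssessor()])
print(bb.facts["threat"])   # True
```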

  • A MULTILEVEL FUSION APPROACH TO OBJECT IDENTIFICATION IN OUTDOOR ROAD SCENES

    The task of object identification is fundamental to the operations of an autonomous vehicle. It can be accomplished by using techniques based on a Multisensor Fusion framework, which allows the integration of data coming from different sensors. In this paper, an approach to the synergic interpretation of data provided by thermal and visual sensors is proposed. Such integration is justified by the necessity for solving the ambiguities that may arise from separate data interpretations.

    The architecture of a distributed Knowledge-Based system is described. It performs an Intelligent Data Fusion process by integrating, in an opportunistic way, data acquired with a thermal and a video (b/w) camera. Data integration is performed at various architecture levels in order to increase the robustness of the whole recognition process. A priori models allow the system to obtain interesting data from both sensors; to transform such data into intermediate symbolic objects; and, finally, to recognize environmental situations on which to perform further processing. Some results are reported for different environmental conditions (i.e. a road scene by day and by night, with and without the presence of obstacles).

  • AUTOMATED DATA FUSION AND SITUATION ASSESSMENT IN SPACE SYSTEMS

    Space systems are an important part of everyday life. They provide global positioning data, communications, and Earth science data such as weather information. All space systems require satellite operators to ensure high performance and continuous operations in the presence of off-nominal conditions due to space weather and onboard anomalies. Similar to other high-stress, time critical operations (e.g., piloting an aircraft or operating a nuclear power plant), situation awareness is a crucial factor in operator performance during these conditions. Because situation awareness is largely acquired by monitoring large numbers of parameters, it is difficult to rapidly and accurately fuse the data to develop an accurate assessment. To aid operators in this task, we have developed a prototype Multi-Agent Satellite System for Information Fusion (MASSIF) for automated data fusion and situation awareness. This system is based on human cognitive decision-making models and integrates a fuzzy logic system for semantic data processing, Bayesian belief networks for multi-source data fusion and situation assessment, and rule-bases for automatic network construction. This paper describes initial simulation-based results to establish feasibility and baseline performance. We describe knowledge engineering efforts, belief network construction, and operator-interfaces for automated data fusion and situation awareness for a hypothetical geosynchronous satellite.
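
    As a minimal illustration of the Bayesian fusion step, the snippet below updates belief in a satellite anomaly from a single semantic event of the kind the fuzzy front end would emit; the two-node structure and all probabilities are invented.

```python
# Two-node belief-network fragment (all probabilities invented):
# P(anomaly) prior, and P(flag | anomaly) for a semantic telemetry event.
p_anomaly = 0.05
p_flag_given_anomaly = 0.9
p_flag_given_ok = 0.1

# Posterior after the fuzzy front end reports the "flag" event (Bayes' rule).
p_flag = (p_flag_given_anomaly * p_anomaly
          + p_flag_given_ok * (1.0 - p_anomaly))
posterior = p_flag_given_anomaly * p_anomaly / p_flag
print(f"P(anomaly | flag) = {posterior:.3f}")   # ~0.321
```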

  • TOOLS FOR EXPERIENTIAL RECOGNITION

    Our objective in the interactive formulation of the "formal description schema - fds" model is the modeling of the prototypical, i.e. the subjective, perceptual ability of a human "expert", the ultimate human or robotic decision maker. In this paper, we present our fds-approach and methodology for solving the problem of modeling and exercising perceptual recognition [3–6]. We limit our discussion to one-dimensional variational profiles. We view the fds-model as a two-stage procedural model. Concerning the "early" (pre-attentive) recognition stage, we define the "structural identity of a k-norm class, k∈K" — SkID — as a tool for quick shadowing of sensory data and positioning instantiations of sufficient resemblance to interactively pre-defined spatio–temporal norm classes. Attentive recognition tools follow for assessing conformity of SkID-pointed occurrences.

  • ON THE SECURITY OF MICROAGGREGATION WITH INDIVIDUAL RANKING: ANALYTICAL ATTACKS

    Microaggregation is a statistical disclosure control technique. Raw microdata (i.e. individual records) are grouped into small aggregates prior to publication. With fixed-size groups, each aggregate contains k records to prevent disclosure of individual information. Individual ranking is a common criterion for reducing multivariate microaggregation to the univariate case: the idea is to perform microaggregation independently for each variable in the record. Using distributional assumptions, we show in this paper how to find interval estimates for the original data based on the microaggregated data. Such intervals can be considerably narrower than intervals resulting from subtraction of means, and can be useful for detecting lack of security in a microaggregated data set. Analytical arguments given in this paper confirm recent empirical results about the insecurity of individual ranking microaggregation.
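
    A sketch of fixed-size univariate microaggregation with individual ranking, as the abstract describes it: sort the variable, group into blocks of k, and release block means. The income values are invented; the paper's analytical interval estimates are only hinted at in the final comment.

```python
import numpy as np

def individual_ranking_microaggregate(x, k=3):
    """Fixed-size univariate microaggregation: sort, cut into groups of k
    (last group absorbs any remainder), release each group's mean."""
    order = np.argsort(x)
    out = np.empty_like(x, dtype=float)
    n_groups = len(x) // k
    for g in range(n_groups):
        idx = order[g * k:(g + 1) * k] if g < n_groups - 1 else order[g * k:]
        out[idx] = x[idx].mean()
    return out

# For a multivariate record, apply this independently to each variable
# (that is the "individual ranking" criterion the paper attacks).
income = np.array([21.0, 35.0, 29.0, 44.0, 52.0, 40.0, 33.0, 61.0, 27.0])
print(individual_ranking_microaggregate(income, k=3))
# An intruder knows each original value lies in its group's range; the paper
# derives narrower interval estimates under distributional assumptions.
```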

  • k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY

    Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, μ-Argus and k-Similar provide guarantees of privacy protection.
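
    A direct check of the definition: a release is k-anonymous over a set of quasi-identifiers if every combination of their values occurs at least k times. The toy records below are illustrative.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every quasi-identifier value combination covers >= k records,
    i.e. no record is distinguishable from at least k-1 others."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return all(c >= k for c in counts.values())

release = [
    {"zip": "021**", "age": "30-39", "dx": "flu"},
    {"zip": "021**", "age": "30-39", "dx": "cold"},
    {"zip": "021**", "age": "30-39", "dx": "flu"},
    {"zip": "148**", "age": "20-29", "dx": "flu"},
    {"zip": "148**", "age": "20-29", "dx": "cold"},
]
print(is_k_anonymous(release, ["zip", "age"], k=2))  # True
```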

  • ACHIEVING k-ANONYMITY PRIVACY PROTECTION USING GENERALIZATION AND SUPPRESSION

    Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k-anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k-anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and μ-Argus are compared to MinGen. Both Datafly and μ-Argus use heuristics to make approximations, and so they do not always yield optimal results. It is shown that Datafly can over-distort data and that μ-Argus can additionally fail to provide adequate protection.
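
    A toy sketch of the two operations MinGen combines, on a single quasi-identifier: generalize ZIP codes one digit at a time and suppress records whose group is still too small. The greedy loop and one-suppression tolerance are arbitrary simplifications for illustration, not MinGen itself.

```python
from collections import Counter

def generalize_zip(z, level):
    """Recode a ZIP code to a less specific, semantically consistent value."""
    return z[: len(z) - level] + "*" * level if level else z

def anonymize(rows, k, max_level=5):
    """Greedy sketch: raise the generalization level until all surviving
    groups have >= k records, suppressing at most one record."""
    for level in range(max_level + 1):
        recoded = [dict(r, zip=generalize_zip(r["zip"], level)) for r in rows]
        counts = Counter(r["zip"] for r in recoded)
        kept = [r for r in recoded if counts[r["zip"]] >= k]  # suppression
        if len(kept) >= len(rows) - 1:
            return kept, level
    return [], max_level

rows = [{"zip": z} for z in ["02138", "02139", "02141", "02142", "60602"]]
release, level = anonymize(rows, k=2)
print(level, release)   # level 1: "0213*"/"0214*" groups; "60602" suppressed
```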

  • LEARNING RETRIEVAL EXPERT COMBINATIONS WITH GENETIC ALGORITHMS

    The goal of information retrieval (IR) is to provide models and systems that help users identify the documents relevant to their information needs. Extensive research has been carried out to develop retrieval methods that meet this goal. These IR techniques range from purely syntax-based approaches, considering only the frequencies of words, to more semantics-aware ones. However, it seems clear that no single method works equally well on all collections and for all queries. Prior work suggests that combining the evidence from multiple retrieval experts can achieve significant improvements in retrieval effectiveness. A common problem of expert combination approaches is the selection of both the experts to be combined and the combination function. In most studies the experts are selected from a rather small set of candidates using some heuristics; thus, only a reduced number of possible combinations is considered and other, possibly better, solutions are left out. In this paper we propose the use of genetic algorithms to find a suboptimal combination of experts for the document collection at hand. Our approach automatically determines both the experts to be combined and the parameters of the combination function. Because we learn this combination for each specific document collection, the approach allows us to automatically adjust the IR system to specific user needs. To learn retrieval strategies that generalize well to new queries, we propose a fitness function based on the statistical significance of the average precision obtained on a set of training queries. We test and evaluate the approach on four classical text collections. The results show that the learned combination strategies perform better than any of the individual methods and that genetic algorithms provide a viable method to learn expert combinations. The experiments also evaluate the use of a semantic indexing approach, the context vector model, in combination with classical word matching techniques.
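
    A compact sketch of the idea: a genetic algorithm searching over expert weightings. The fitness here is a toy (mean of per-expert average precision under the weights) rather than the paper's significance-based fitness, and the expert/query data are random.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: rows = training queries, columns = retrieval experts;
# entries = average precision of each expert alone on that query.
expert_ap = rng.random((30, 4))

def fitness(w):
    # Toy fitness: mean AP of a weighted combination of experts.
    w = w / w.sum()
    return float((expert_ap @ w).mean())

pop = rng.random((20, 4)) + 1e-6          # initial population of weightings
for generation in range(50):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(4) < 0.5, a, b)  # uniform crossover
        child = child + rng.normal(0.0, 0.05, 4)     # Gaussian mutation
        children.append(np.clip(child, 1e-6, None))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("learned expert weights:", best / best.sum())
```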

  • A Generalization of the Theory of Expertons

    With the advent of fuzzy logic applications in the field of economics and in the context of expert systems, we are witnessing a new approach to data-gathering methods, as the aggregation of data provided by various experts brings with it new data fusion techniques. In 1987, the exploration of these techniques gave rise to the experton concept as an integrating element that allows the collection of all information expressed by a group of experts on the level or degree of truth of a statement, or the degree of fulfilment of a certain vague or imprecise characteristic. Over the thirty years since its formulation, the experton concept has been applied as a support element in decision-making processes in many areas of the social sciences. The aim of this article is to present a generalization of the experton concept for both the discrete and continuous cases, one that respects known properties and can be practically applied in situations where there is a need to simulate various opinion scenarios relating to a characteristic or statement, and thus to explore new approaches to decision-making models.
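
    A minimal discrete-case sketch of the construction: each expert states an interval opinion in [0,1], and the experton records, at each level of an 11-point scale, the cumulative proportion of experts whose lower and upper bounds reach that level. The five opinions are invented.

```python
import numpy as np

# Hypothetical interval opinions from five experts on one statement, in [0,1].
opinions = [(0.6, 0.8), (0.5, 0.7), (0.7, 0.9), (0.4, 0.6), (0.8, 1.0)]
levels = np.round(np.arange(0.0, 1.1, 0.1), 1)   # 11-point scale

lo = np.array([a for a, _ in opinions])
hi = np.array([b for _, b in opinions])

# Experton: at each level, the cumulative proportion of experts whose lower
# (resp. upper) bound reaches it -- an interval-valued distribution.
for level in levels:
    p_lo, p_hi = (lo >= level).mean(), (hi >= level).mean()
    print(f"{level:.1f}: [{p_lo:.1f}, {p_hi:.1f}]")
```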

  • DATA FUSION-BASED STRUCTURAL DAMAGE DETECTION UNDER VARYING TEMPERATURE CONDITIONS

    A huge amount of data can be obtained continuously from the many sensors in long-term structural health monitoring (SHM). Different sets of data measured at different times may lead to inconsistent monitoring results. In addition, structural responses vary with changing environmental conditions, particularly temperature; the variation in structural responses caused by temperature changes may mask the variation caused by structural damage. Integration and interpretation of various types of data are therefore critical to the effective use of SHM systems for structural condition assessment and damage detection. A data fusion-based damage detection approach under varying temperature conditions is presented. A Bayesian-based damage detection technique is developed in which both temperature and structural parameters are variables of the modal properties (frequencies and mode shapes). Accordingly, the probability density functions of the modal data are derived for damage detection. The damage detection results from each set of modal and temperature data may be inconsistent because of uncertainties. The Dempster–Shafer (D–S) evidence theory is then employed to integrate the individual damage detection results from the different data sets at different times to obtain a consistent decision. An experiment on a two-story portal frame is conducted to demonstrate the effectiveness of the proposed method, with consideration of model uncertainty, measurement noise, and temperature effects. The damage detection results obtained by combining the damage basic probability assignments from each set of test data are more accurate than those obtained from each test data set separately. Eliminating the temperature effect on the vibration properties improves damage detection accuracy; in particular, the proposed technique can detect even slight damage that is missed by common damage detection methods in which the temperature effect is not eliminated.
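
    The D–S combination step can be sketched as Dempster's rule over a two-hypothesis frame {damaged, undamaged}; the basic probability assignments below are invented stand-ins for those derived from two modal-data sets.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments; focal elements
    are frozensets, and the full frame represents ignorance."""
    combined, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q           # mass on empty intersections
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

d, u = frozenset(["damaged"]), frozenset(["undamaged"])
theta = d | u                               # the whole frame (ignorance)
# Invented BPAs from two modal-data sets measured at different temperatures.
m1 = {d: 0.6, u: 0.1, theta: 0.3}
m2 = {d: 0.5, u: 0.2, theta: 0.3}
print(dempster_combine(m1, m2))             # belief in "damaged" sharpens
```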