The development of vision-based human activity recognition and analysis systems has been of great interest to both the research community and practitioners over the last 20 years. Traditional methods that require a human operator to watch raw video streams are nowadays deemed ineffective and expensive. New, smart solutions for automatic surveillance and monitoring have emerged, propelled by significant technological advances in image processing, artificial intelligence, electronics and optics, embedded computing and networking, shaping the future of the applications that can benefit from them, such as security and healthcare. The main motivation is to exploit the highly informative visual data captured by cameras and perform high-level inference in an automatic, ubiquitous and unobtrusive manner, so as to aid human operators or even replace them. This survey attempts to comprehensively review current research and development on vision-based human activity recognition. Synopses of various methodologies are presented in an effort to capture the advantages and shortcomings of the most recent state-of-the-art technologies. A first-level self-evaluation of methodologies is also proposed, which incorporates a set of significant features describing the most important aspects of each methodology, in terms of operation, performance and other criteria, weighted by their importance. The purpose of this study is to serve as a reference for further research and evaluation, and to stimulate thoughts and discussion on improving each methodology towards maturity and usefulness.
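To make the idea of a weighted self-evaluation concrete, the following is a minimal Python sketch; the feature names, scores and weights are hypothetical illustrations, not values taken from the survey.

    # Hypothetical weighted self-evaluation of one methodology: feature
    # scores in [0, 1] are combined using importance weights that sum to 1.
    features = {"accuracy": 0.8, "real_time_operation": 0.6, "robustness": 0.7}
    weights = {"accuracy": 0.5, "real_time_operation": 0.2, "robustness": 0.3}

    overall = sum(weights[name] * score for name, score in features.items())
    print(f"overall score: {overall:.2f}")  # 0.5*0.8 + 0.2*0.6 + 0.3*0.7 = 0.73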
A person’s preference to select or reject certain meals is influenced by several aspects, including colour. In this paper, we study the relevance of food colour for such preferences. To this end, a set of images of meals is processed by an automatic method that associates mood adjectives capturing such meal preferences. These adjectives are obtained by analyzing the colour palettes in the image, using a method based on Kobayashi’s model of harmonic colour combinations. The paper also validates that the colour palettes calculated for each image are harmonic, by developing a rating model that predicts how much a user would like the palettes obtained. This rating is computed using a regression model trained on the COLOURlovers dataset to learn users’ preferences. Finally, the adjectives automatically associated with images of dishes are validated by a survey answered by 178 people, which demonstrates that the labels are adequate. The results obtained in this paper have applications in tourism marketing, helping in the design of marketing multimedia material, especially for promoting restaurants and gastronomic destinations.
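A minimal sketch of a pipeline of this general shape follows; it is not the paper's exact method. Palette extraction here uses k-means clustering, and the rating model is a generic scikit-learn regressor trained on placeholder (palette, rating) pairs standing in for COLOURlovers-style data; the image file name is hypothetical.

    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestRegressor

    def extract_palette(path, n_colours=5):
        """Cluster pixel colours to obtain a small dominant-colour palette."""
        pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        km = KMeans(n_clusters=n_colours, n_init=10).fit(pixels)
        return km.cluster_centers_ / 255.0  # (n_colours, 3), RGB in [0, 1]

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 15))   # placeholder flattened 5-colour palettes
    y_train = rng.random(200) * 5.0   # placeholder user ratings in [0, 5]
    rating_model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

    palette = extract_palette("meal.jpg")  # hypothetical input image
    print(rating_model.predict(palette.reshape(1, -1))[0])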
This paper presents empirical research aimed at studying what characterizes successful information technology (IT) projects. There are often doubts about what characterizes project success and who actually defines it. In this paper, we have reviewed the literature and present significant contributions to the discussion of what characterizes successful IT projects. Furthermore, a survey was conducted in Norway to collect data on successful IT projects. Research results show that the five most important success criteria are: (1) the IT system works as expected and solves the problems, (2) satisfied users, (3) the IT system has high reliability, (4) the solution contributes to improved efficiency and competitive power, and (5) the IT system realizes strategic, tactical and operational objectives.
IT business value research examines the organizational performance impacts of information technology. In this paper, we apply the value configuration of the value shop to describe and measure organizational performance. The value shop consists of five primary activities in an iterative problem-solving cycle: problem understanding, finding solutions to problems, deciding on actions, implementing actions, and evaluating actions. Police investigation work is defined in terms of value shop activities. Our empirical study of the Norwegian police finds significant relationships between information technology use and investigation performance for all primary activities. The most important primary activities for IT use are problem understanding and implementation of actions, as both significantly improve value shop performance.
Information-based prediction models using machine learning techniques have gained massive popularity during the last few decades. Such models have been applied in a number of domains, such as medical diagnosis, crime prediction and movie rating. The trend is similar in the telecom industry, where prediction models are used to identify dissatisfied customers who are likely to change service provider. Due to the immense financial cost of customer churn in telecom, companies all over the world have analyzed various factors (such as call cost, call quality and customer-service response time) using several learners, including decision trees, support vector machines, neural networks, and probabilistic models such as naive Bayes. This paper presents a detailed survey of models from 2000 to 2015, describing the datasets used in churn prediction, the influential features in those datasets, and the classifiers used to implement the prediction models. A total of 48 studies related to churn prediction in the telecom industry are discussed, covering 23 datasets (3 public and 20 private). Our survey aims to highlight the evolution from simple features and learners to more complex learners and feature engineering or sampling techniques. We also give an overview of the current challenges in churn prediction and suggest solutions to resolve them. This paper will allow researchers, data analysts in general and telecom operators in particular, to choose the best-suited techniques and features for their churn prediction models.
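As a minimal illustration of the kind of model the survey covers, the sketch below trains a decision tree on synthetic data over the feature types the abstract names (call cost, call quality, service response time); all data, coefficients and column meanings are invented for illustration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.normal(30, 10, n),    # monthly call cost
        rng.uniform(1, 5, n),     # call quality score
        rng.exponential(24, n),   # customer-service response time (hours)
    ])
    # Synthetic label: churn more likely with high cost, low quality, slow service.
    p = 1 / (1 + np.exp(-(0.05 * X[:, 0] - 1.0 * X[:, 1] + 0.02 * X[:, 2])))
    y = rng.random(n) < p

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
    print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))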
Technology-enabled feedback encompasses many significant features, including AI. It allows students greater flexibility in providing feedback and enables the university or institute to make data-driven decisions, as the collected data are free from bias and the analyses are accurate. There are various technology-centric feedback collection tools, such as Open edX Insight, speech-to-text, Kahoot, Dialogflow, and so on; these tools collect more realistic data from students. Both qualitative and quantitative data can be analyzed by applying technology-centric data analysis tools. AMOS, LABLEAU, SAS, SPSS, Stata, Statista, XLSTAT, and ROOT are a few of the quantitative data analysis packages, and Atlas, Airtable, Coda, Condens, MaxQDA, Notion, and NVivo are a few of the qualitative ones. To make student feedback and surveys more successful, we recommend sharing survey methods and questions with other educational institutions, conducting broader surveys, asking questions on broader topics, applying technology-based analysis to qualitative feedback comments, and including skills-development-focused questions.
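As a small illustration of combined quantitative and qualitative feedback analysis, the sketch below uses open-source Python tooling rather than the commercial packages listed above; the ratings and comments are invented sample data.

    import pandas as pd
    from collections import Counter

    feedback = pd.DataFrame({
        "rating":  [4, 5, 3, 2, 5],                           # quantitative (Likert scale)
        "comment": ["clear lectures", "great labs", "pace too fast",
                    "pace too fast", "great labs"],           # qualitative comments
    })

    print(feedback["rating"].describe())                      # quantitative summary stats
    words = Counter(w for c in feedback["comment"] for w in c.split())
    print(words.most_common(3))                               # crude recurring-theme extraction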
There has been a lot of interest in time series forecasting in recent years. Deep neural networks have shown their effectiveness and accuracy in various industries, and for this reason they are currently among the most extensively used machine-learning approaches for dealing with massive volumes of data. Forecasting is part of statistical modeling and is used for decision-making in many fields; its goal is to predict time-varying variables from their past values. Developing models and techniques for trustworthy forecasting is an important part of the forecasting process. This study combines a systematic mapping investigation with a literature review. Time series researchers have relied for decades on ARIMA approaches, notably the autoregressive integrated moving average model, but the requirement that the series be stationary makes this method somewhat rigid. Forecasting methods have improved and expanded with the introduction of computers, ranging from stochastic models to soft computing. Conventional approaches may not be as accurate as soft computing. In addition, the volume of data that can be analyzed and the efficiency of the process are two of the many benefits of soft computing.
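A minimal sketch of the classical ARIMA approach the abstract contrasts with soft computing is shown below, using statsmodels; the series is synthetic placeholder data, and the differencing term (d=1) is what addresses the stationarity requirement mentioned above.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    # Synthetic non-stationary series: random walk with drift (placeholder data).
    y = np.cumsum(rng.normal(0.5, 1.0, 200))

    model = ARIMA(y, order=(1, 1, 1))     # AR(1), first difference, MA(1)
    fitted = model.fit()
    print(fitted.forecast(steps=10))      # forecast the next 10 points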
Radio frequency identification (RFID) and wireless sensor networks (WSN) are two important wireless technologies that have a wide variety of applications and offer vast future potential. RFID facilitates the detection and identification of objects that are not easily detectable or distinguishable using current sensor technologies. However, it does not provide information about the condition of the objects it detects. Sensors, on the other hand, provide information about the condition of the objects as well as the environment. Hence, integrating these technologies will expand their overall functionality and capacity. This chapter first presents a brief introduction to RFID and then investigates recent research works, new patents, academic products and applications that integrate RFID with sensor networks. Four types of integration are discussed: (1) integrating tags with sensors; (2) integrating tags with wireless sensor nodes and wireless devices; (3) integrating readers with wireless sensor nodes and wireless devices; and (4) mixing RFID and wireless sensor networks. New challenges and future work are discussed at the end.
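As a small illustration of integration type (1), a sensor-enabled tag read could carry both identity and condition data; the data model below is a hypothetical sketch, and its field names and values are invented.

    from dataclasses import dataclass

    @dataclass
    class SensorTagReading:
        tag_id: str           # EPC-style identifier reported by the RFID tag
        temperature_c: float  # condition data from the on-tag sensor
        rssi_dbm: float       # signal strength observed by the reader

    reading = SensorTagReading(tag_id="urn:epc:id:sgtin:0614141.107346.2017",
                               temperature_c=4.2, rssi_dbm=-58.0)
    print(reading)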