
Bestsellers

Handbook of Machine Learning, Volume 1: Foundation of Artificial Intelligence
by Tshilidzi Marwala

Handbook on Computational Intelligence (In 2 Volumes)
edited by Plamen Parvanov Angelov

  • article (No Access)

    A Survey on the Metaheuristics Applied to QAP for the Graphics Processing Units

    The computational power requirements of real-world optimization problems are beginning to exceed the general performance of the Central Processing Unit (CPU). The modeling of such problems is in constant evolution and requires ever more computational power. Solving them is expensive in computation time, and even metaheuristics, well known for their efficiency, are becoming unsuitable for the increasing amount of data. Recently, thanks to the advent of languages such as CUDA, the development of parallel metaheuristics on the Graphics Processing Unit (GPU) platform to solve combinatorial problems such as the Quadratic Assignment Problem (QAP) has received growing interest. The QAP is one of the most studied NP-hard problems and is known for its high computational cost. In this paper, we survey several of the most important metaheuristic approaches to the QAP, focusing on parallel metaheuristics that use the GPU.
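
    To make the survey's subject concrete: the QAP asks for a permutation p of n facilities over n locations minimizing the sum of flow(i, j) * distance(p(i), p(j)). Below is a minimal, illustrative Python sketch (not taken from any surveyed paper) of the objective together with a 2-swap local search, the neighborhood that GPU metaheuristics typically evaluate in parallel, one thread per candidate swap:

        import random

        def qap_cost(perm, flow, dist):
            """Objective: sum of flow[i][j] * dist[perm[i]][perm[j]] over all pairs."""
            n = len(perm)
            return sum(flow[i][j] * dist[perm[i]][perm[j]]
                       for i in range(n) for j in range(n))

        def two_swap_descent(flow, dist, iters=1000, seed=0):
            """Keep a random 2-swap whenever it lowers the cost; undo it otherwise."""
            rng = random.Random(seed)
            n = len(flow)
            perm = list(range(n))
            rng.shuffle(perm)
            best = qap_cost(perm, flow, dist)
            for _ in range(iters):
                i, j = rng.sample(range(n), 2)
                perm[i], perm[j] = perm[j], perm[i]
                cost = qap_cost(perm, flow, dist)
                if cost < best:
                    best = cost
                else:
                    perm[i], perm[j] = perm[j], perm[i]  # revert the swap
            return perm, best

        flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]   # toy 3x3 instance
        dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
        print(two_swap_descent(flow, dist))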

  • article (No Access)

    AVAILABILITY OF UNUSED COMPUTATIONAL RESOURCES IN AN ORDINARY OFFICE ENVIRONMENT

    The study presented in this paper highlights an issue that was the subject of discussion and research about a decade ago and has now gained new interest with the current advances in grid computing and desktop grids. New techniques are being invented for utilizing desktop computers for computational tasks, but no other study, to our knowledge, has explored the availability of these resources. The general assumption has been that there are resources and that they are available. The study is based on a survey of the availability of resources in an ordinary office environment. Its aim was to determine whether there truly are usable, under-utilized networked desktop computers available for non-desktop tasks during off-hours. We found that in more than 96% of the cases the computers in the investigation were available for the formation of part-time (night and weekend) computer clusters. Finally, we compare the performance of a full-time and a metamorphosic cluster, based on a hypothetical linearly scalable application and a real-world welding simulation.
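
    As a back-of-the-envelope illustration of what such availability implies (all numbers below are hypothetical, not the study's data), a part-time cluster's weekly capacity can be estimated as:

        machines      = 100
        availability  = 0.96          # fraction of machines usable off-hours
        night_hours   = 12 * 5        # five weeknights
        weekend_hours = 24 * 2        # two full weekend days
        node_hours = machines * availability * (night_hours + weekend_hours)
        print(f"{node_hours:.0f} node-hours per week")  # 10368 node-hours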

  • article (No Access)

    A SURVEY OF ALGORITHMS AND ARCHITECTURES FOR H.264 SUB-PIXEL MOTION ESTIMATION

    This paper reviews recent state-of-the-art H.264 sub-pixel motion estimation (SME) algorithms and architectures. First, H.264 SME is analyzed and the impact of its functionalities on coding performance is investigated. Then, the design space of SME algorithms is explored, covering design problems, approaches, and recent advanced algorithms. In addition, the design challenges and strategies of SME hardware architectures are discussed and promising architectures are surveyed. Perspectives and future prospects are also presented to highlight emerging trends and the outlook for SME designs.
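
    The fractional samples that SME operates on are defined by the standard's interpolation filters: half-sample positions use a 6-tap filter with taps (1, -5, 20, 20, -5, 1), and quarter-sample positions average the neighboring integer and half samples. A small Python sketch of the one-dimensional case, with a made-up sample row:

        def half_pel(row, x):
            """H.264 6-tap half-sample value between row[x] and row[x + 1]."""
            a, b, c, d, e, f = (row[x - 2], row[x - 1], row[x],
                                row[x + 1], row[x + 2], row[x + 3])
            val = a - 5 * b + 20 * c + 20 * d - 5 * e + f
            return min(255, max(0, (val + 16) >> 5))  # round, shift, clip

        def quarter_pel(row, x):
            """Quarter-sample value: bilinear average of integer and half samples."""
            return (row[x] + half_pel(row, x) + 1) >> 1

        row = [10, 12, 20, 40, 80, 120, 140, 150]
        print(half_pel(row, 3), quarter_pel(row, 3))  # 58 49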

  • article (No Access)

    MAINTAINABILITY PREDICTORS FOR RELATIONAL DATABASE-DRIVEN SOFTWARE APPLICATIONS: EXTENDED RESULTS FROM A SURVEY

    Software maintainability is a very important quality attribute. Its prediction for relational database-driven software applications can help organizations improve the maintainability of these applications. The research presented herein adopts a survey-based approach: a survey of 40 software professionals was conducted to identify and rank the important maintainability predictors for relational database-driven software applications. The survey results were analyzed using frequency analysis. The results suggest that maintainability prediction for relational database-driven applications differs from that of traditional software applications in terms of the importance of the predictors used. The results also provide a baseline for creating maintainability prediction models for relational database-driven software applications.
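
    A minimal sketch of the kind of frequency analysis described, assuming hypothetical predictor names and responses rather than the survey's actual data:

        from collections import Counter

        # Each respondent names the predictors they consider most important.
        responses = [
            ["schema complexity", "SQL query complexity", "documentation"],
            ["schema complexity", "coupling to database", "documentation"],
            ["SQL query complexity", "schema complexity"],
        ]
        counts = Counter(p for answer in responses for p in answer)
        for predictor, freq in counts.most_common():
            print(f"{predictor}: named by {freq} of {len(responses)} respondents")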

  • article (Open Access)

    Dynamic Software Product Line Engineering: A Reference Framework

    Runtime adaptive systems are able to dynamically transform their internal structure, and hence their behavior, in response to internal or external changes. Such transformations provide the basis for new functionalities or improvements of the non-functional properties that match operational requirements and standards. Software Product Line Engineering (SPLE) has introduced several models and mechanisms for variability modeling and management. Dynamic software product lines (DSPL) engineering exploits the knowledge acquired in SPLE to develop systems that can be context-aware, post-deployment reconfigurable, or runtime adaptive. This paper focuses on DSPL engineering approaches for developing runtime adaptive systems and proposes a framework for classifying and comparing these approaches from two distinct perspectives: adaptation properties and adaptation realization. These two perspectives are linked together by a series of guidelines that help to select a suitable adaptation realization approach based on desired adaptation types.
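
    As a toy illustration of the kind of runtime variability DSPL engineering manages (the feature names and constraint below are invented, not part of the proposed framework), a configuration can be checked against a feature model and reconfigured when the context changes:

        FEATURES = {"video", "audio", "encryption", "low_power"}
        EXCLUDES = {("video", "low_power")}   # cross-tree constraint

        def valid(config):
            """Valid if it uses known features only and breaks no exclusion."""
            return config <= FEATURES and not any(
                a in config and b in config for a, b in EXCLUDES)

        def reconfigure(config, on=(), off=()):
            new = (config - set(off)) | set(on)
            if not valid(new):
                raise ValueError(f"invalid configuration: {sorted(new)}")
            return new

        cfg = {"audio", "video"}
        cfg = reconfigure(cfg, on=["low_power"], off=["video"])  # battery low
        print(sorted(cfg))  # ['audio', 'low_power']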

  • article (No Access)

    Software Industry Perception of Technical Debt and Its Management

    Technical debt (TD) expresses a lack of internal quality that directly affects software evolution, and it has therefore recently gained the attention of software researchers and practitioners. Researchers have performed empirical studies to observe how TD is perceived in different software cultures and organizations. However, it is important to replicate such studies in more places and with more practitioners to strengthen our understanding of how TD is perceived. In this paper, we present the results of a set of new research questions from an evolved survey design, applied as a survey replication in the Uruguayan software industry, to characterize how software industry professionals understand, perceive, and adopt TD management (TDM) activities. The results show that different participant contexts (startups, government, job roles) exhibit different levels of awareness and perception of TD. Details on the adoption of each TDM activity are presented. We observed difficulties in conducting some TDM activities that practitioners consider very important, especially TD monitoring. Differences in specific organizational contexts such as startups and government could indicate the need for research efforts in other software engineering communities that meet their specific TD challenges and needs.

  • article (No Access)

    A SURVEY OF OBJECT-SPACE HIDDEN SURFACE REMOVAL

    We present a survey of the hidden surface removal literature, focusing on object-space algorithms. We give a brief definition and history of the problem, followed by a discussion of problems and algorithms associated with the priority ordering of faces. We go on to examine object-space algorithms in order of increasing object complexity: xy-parallel rectangles, ordered triangles, c-oriented faces, c-oriented polyhedra, polyhedral terrains, and general polyhedra. We also review recent work on merging visibility maps and moving viewpoints. Finally, we present a list of open problems.
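
    For the simplest object class above, xy-parallel rectangles, a valid priority order is simply back-to-front depth order (the painter's approach); general polygons can overlap cyclically and may need splitting. A minimal illustration:

        # Paint xy-parallel rectangles back to front; depth alone orders them.
        rects = [
            {"name": "far",  "z": 9.0},
            {"name": "near", "z": 1.0},
            {"name": "mid",  "z": 4.5},
        ]
        for r in sorted(rects, key=lambda r: r["z"], reverse=True):
            print("paint", r["name"])   # far, mid, near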

  • article (No Access)

    A FIRST STAGE COMPARATIVE SURVEY ON HUMAN ACTIVITY RECOGNITION METHODOLOGIES

    The development of vision-based human activity recognition and analysis systems has been a matter of great interest to both the research community and practitioners during the last 20 years. Traditional methods that require a human operator watching raw video streams are nowadays deemed ineffective and expensive. New, smart solutions in automatic surveillance and monitoring have emerged, propelled by significant technological advances in image processing, artificial intelligence, electronics and optics, embedded computing and networking, molding the future of several applications that can benefit from them, such as security and healthcare. The main motivation is to exploit the highly informative visual data captured by cameras and perform high-level inference in an automatic, ubiquitous and unobtrusive manner, so as to aid human operators or even replace them. This survey attempts to comprehensively review current research and development on vision-based human activity recognition. Synopses of various methodologies are presented in an effort to weigh the advantages and shortcomings of the most recent state-of-the-art technologies. A first-level self-evaluation of methodologies is also proposed, incorporating a set of significant features that best describe the most important aspects of each methodology, in terms of operation, performance and other criteria, weighted by their importance. The purpose of this study is to serve as a reference for further research and evaluation, and to raise thoughts and discussions for future improvements of each methodology towards maturity and usefulness.
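
    The self-evaluation scheme amounts to a weighted score per methodology. A sketch with invented features, weights, and scores (not the survey's actual values):

        weights = {"accuracy": 0.5, "speed": 0.3, "robustness": 0.2}
        methods = {
            "method A": {"accuracy": 0.9, "speed": 0.4, "robustness": 0.7},
            "method B": {"accuracy": 0.7, "speed": 0.9, "robustness": 0.6},
        }
        for name, scores in methods.items():
            total = sum(weights[f] * scores[f] for f in weights)
            print(f"{name}: {total:.2f}")   # A: 0.71, B: 0.74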

  • article (No Access)

    A Study on How Food Colour May Determine the Categorization of a Dish: Predicting Meal Appeal from Colour Combinations

    A person’s preference to select or reject certain meals is influenced by several factors, including colour. In this paper, we study the relevance of food colour to such preferences. To this end, a set of images of meals is processed by an automatic method that associates mood adjectives capturing such meal preferences. These adjectives are obtained by analyzing the colour palettes in the image, using a method based on Kobayashi’s model of harmonic colour combinations. The paper also validates that the colour palettes calculated for each image are harmonic by developing a rating model that predicts how much a user would like the palettes obtained. This rating is computed using a regression model trained on the COLOURlovers dataset to learn users’ preferences. Finally, the adjectives automatically associated with images of dishes are validated by a survey answered by 178 people, which demonstrates that the labels are adequate. The results obtained in this paper have applications in tourism marketing, helping in the design of marketing multimedia material, especially for promoting restaurants and gastronomic destinations.
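
    A toy version of the adjective-assignment step: match an image's palette to the nearest reference palette and inherit its mood adjective. The reference palettes and adjectives below are placeholders, not Kobayashi's actual data:

        REFERENCE = {
            "fresh":   [(120, 200, 80), (240, 250, 230), (60, 160, 120)],
            "elegant": [(40, 40, 60), (150, 140, 170), (220, 215, 230)],
        }

        def palette_distance(p, q):
            """Sum of per-colour squared RGB distances (equal-length palettes)."""
            return sum((a - b) ** 2 for c1, c2 in zip(p, q) for a, b in zip(c1, c2))

        def adjective_for(palette):
            """Nearest reference palette wins."""
            return min(REFERENCE, key=lambda k: palette_distance(palette, REFERENCE[k]))

        print(adjective_for([(110, 190, 90), (230, 245, 225), (70, 150, 110)]))  # fresh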

  • article (No Access)

    IMAGE ENGINEERING AND RELATED PUBLICATIONS

    Image engineering is a discipline that includes image processing, image analysis, image understanding, and the applications of these techniques. To promote its development and evolution, this paper provides a well-regulated explanation of the definition of image engineering, as well as its intension and extension. It also introduces a new classification of the theories of image engineering and of the applications of image technology. A thorough statistical survey of the publications in this discipline is carried out, and an analysis and discussion of the statistics from the classification results are presented. This work shows a general and up-to-date picture of the status, progress, trends and application areas of image engineering.

  • article (No Access)

    Image Matting: A Comprehensive Survey on Techniques, Comparative Analysis, Applications and Future Scope

    In an era of rapid technological growth, image matting plays a key role in image and video editing, along with image composition. It has been widely used for visual effects, virtual zoom, image translation, and image and video editing in many significant real-world applications such as film production. With recent advancements in digital cameras, both professionals and consumers have become increasingly involved in matting techniques to facilitate image editing activities. Image matting estimates the alpha matte in the unknown region of an image in order to distinguish the foreground from the background, using an input image and a corresponding trimap that marks the foreground and the unknown region. Numerous image matting techniques have been proposed recently to extract a high-quality matte from image and video sequences. This paper provides a systematic overview of current image and video matting techniques, with emphasis on the advanced algorithms proposed recently. In general, image matting techniques can be categorized by their underlying approach: sampling-based, propagation-based, combined sampling- and propagation-based, and deep learning-based algorithms. Traditional image matting algorithms (sampling-based, propagation-based, or their combination) depend primarily on color information to predict the alpha matte. However, these techniques mostly use low-level features and tend to produce unwanted artifacts when the foreground and background colors are similar or the foreground object is semi-transparent. Image matting techniques based on deep learning have recently been introduced to address the shortcomings of traditional algorithms: rather than depending only on color information, they estimate the alpha matte from the input image and its trimap using a learned model. A comprehensive survey of recent image matting algorithms and an in-depth comparative analysis of these algorithms are presented in this paper.
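
    All of these techniques build on the compositing equation I = alpha * F + (1 - alpha) * B, which relates the observed pixel I to foreground F, background B, and opacity alpha. Given per-pixel estimates of F and B in the trimap's unknown region, alpha follows by least squares; a minimal sketch:

        def estimate_alpha(I, F, B):
            """Least-squares alpha for one RGB pixel; I, F, B are 3-tuples."""
            num = sum((i - b) * (f - b) for i, f, b in zip(I, F, B))
            den = sum((f - b) ** 2 for f, b in zip(F, B)) or 1e-8
            return min(1.0, max(0.0, num / den))

        # A pixel halfway between a red foreground and a blue background:
        print(estimate_alpha((128, 0, 128), (255, 0, 0), (0, 0, 255)))  # ~0.5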

  • article (No Access)

    WHAT CHARACTERIZES SUCCESSFUL IT PROJECTS

    This paper presents empirical research aimed at studying what characterizes successful information technology (IT) projects. There are often doubts about what characterizes project success and who actually defines it. In this paper, we have reviewed the literature and present significant contributions to the discussion of what characterizes successful IT projects. Furthermore, a survey was conducted in Norway to collect data on successful IT projects. Research results show that the five most important success criteria are: (1) the IT system works as expected and solves the problems, (2) satisfied users, (3) the IT system has high reliability, (4) the solution contributes to improved efficiency and competitive power, and (5) the IT system realizes strategic, tactical and operational objectives.

  • article (No Access)

    INFORMATION TECHNOLOGY IN THE VALUE SHOP: AN EMPIRICAL STUDY OF POLICE INVESTIGATION PERFORMANCE

    IT business value research examines the organizational performance impacts of information technology. In this paper, we apply the value configuration of the value shop to describe and measure organizational performance. The value shop consists of five primary activities in an iterative problem-solving cycle: problem understanding, solutions to problems, decisions on actions, implementation of actions, and evaluation of actions. Police investigation work is defined in terms of value shop activities. Our empirical study of the Norwegian police reveals significant relationships between information technology use and investigation performance for all primary activities. The most important primary activities for IT use are problem understanding and implementation of actions, as both significantly improve value shop performance.

  • article (No Access)

    A Survey of Evolution in Predictive Models and Impacting Factors in Customer Churn

    Information-based prediction models using machine learning techniques have gained massive popularity during the last few decades. Such models have been applied in a number of domains, such as medical diagnosis, crime prediction, and movie rating. The trend is similar in the telecom industry, where prediction models are used to identify dissatisfied customers who are likely to change service provider. Because of the immense financial cost of customer churn in telecom, companies all over the world have analyzed various factors (such as call cost, call quality, and customer service response time) using learners such as decision trees, support vector machines, neural networks, and probabilistic models such as Bayes. This paper presents a detailed survey of models from 2000 to 2015, describing the datasets used in churn prediction, the impactful features in those datasets, and the classifiers used to implement the prediction models. A total of 48 studies related to churn prediction in the telecom industry are discussed, covering 23 datasets (3 public and 20 private). Our survey aims to highlight the evolution of techniques from simple features and learners to more complex learners and feature engineering or sampling techniques. We also give an overview of the current challenges in churn prediction and suggest solutions to resolve them. This paper will allow researchers, data analysts in general and telecom operators in particular, to choose the techniques and features best suited to their churn prediction models.
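
    A minimal sketch of the kind of model the survey covers: a decision tree over telecom-style features. The data below is synthetic; the studies surveyed use the (mostly private) datasets discussed above.

        from sklearn.tree import DecisionTreeClassifier

        # features: [monthly call cost, dropped-call rate, support response hours]
        X = [[20, 0.01, 2], [80, 0.10, 48], [35, 0.02, 4],
             [90, 0.12, 72], [25, 0.01, 1], [70, 0.08, 24]]
        y = [0, 1, 0, 1, 0, 1]          # 1 = customer churned

        model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(model.predict([[85, 0.09, 36]]))  # high cost, poor quality -> [1]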

  • chapter (No Access)

    Chapter 14: Technologies and Student Feedback Collection and Analysis

    Technology-enabled feedback encompasses many significant features, including AI. It also allows students greater flexibility in providing feedback, and it enables the university or institute to make data-driven decisions, as the collected data are free from bias and the data analyses are accurate. There are various technology-centric feedback collection tools, such as Open edX Insights, speech-to-text, Kahoot, and Dialogflow, which allow more realistic data to be collected from students. Both qualitative and quantitative data can be analyzed by applying technology-centric data analysis tools: AMOS, Tableau, SAS, SPSS, Stata, Statista, XLSTAT, and ROOT are a few of the quantitative data analysis packages, and Atlas, Airtable, Coda, Condens, MaxQDA, Notion, and NVivo are a few of the qualitative ones. To make student feedback and surveys more successful, we recommend sharing survey methods and questions with other educational institutions, conducting broader surveys on broader topics, applying technology-based analysis to qualitative feedback comments, and including skills-development-focused questions.

  • chapter (No Access)

    8: A Comprehensive Study on Time Series Analysis in Healthcare

    There has been a lot of interest in time series forecasting in recent years. Deep neural networks have shown their effectiveness and accuracy in various industries, and for these reasons they are currently among the most extensively used machine-learning approaches for dealing with massive volumes of data. Statistical modeling includes forecasting, which is used for decision-making in various fields. The goal of forecasting is to predict time-varying variables from their past values. Developing models and techniques for trustworthy forecasting is an important part of the forecasting process. This study uses a systematic mapping investigation and a literature review. Time series researchers have relied for decades on ARIMA approaches, notably the autoregressive integrated moving average model, but the requirement that the data be stationary makes this method somewhat rigid. Forecasting methods have improved and expanded with the introduction of computers, ranging from stochastic models to soft computing. Conventional approaches may not be as accurate as soft computing, and the volume of data that can be analyzed and the efficiency of the process are two of the many benefits of using soft computing.
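
    A minimal sketch of the classical baseline discussed, using statsmodels; the series and the (p, d, q) order are illustrative, and a real healthcare series would first be checked for stationarity (the d term differences it):

        from statsmodels.tsa.arima.model import ARIMA

        series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
                  115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]

        model = ARIMA(series, order=(1, 1, 1)).fit()   # AR(1), 1 difference, MA(1)
        print(model.forecast(steps=3))                 # next three values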

  • chapter (No Access)

    Experimental Validation of New Software Technology

    Deciding when to apply a new technology is a critical decision for every software development organization. Earlier work defines a set of methods that the research community uses when a new technology is developed. This chapter presents a discussion of the set of methods that industrial organizations use before adopting a new technology. First there is a brief definition of the earlier research methods, followed by a definition of the set of industrial methods. A survey taken by experts from both the research and industrial communities provides insights into how these communities differ in their approach toward technology innovation and technology transfer.

  • chapter (No Access)

    Integration of RFID and Wireless Sensor Networks

    Radio frequency identification (RFID) and wireless sensor networks (WSN) are two important wireless technologies that have a wide variety of applications and hold great future potential. RFID facilitates the detection and identification of objects that are not easily detectable or distinguishable using current sensor technologies. However, it does not provide information about the condition of the objects it detects. Sensors, on the other hand, provide information about the condition of the objects as well as the environment. Hence, integrating these technologies will expand their overall functionality and capacity. This chapter first presents a brief introduction to RFID and then investigates recent research works, new patents, academic products and applications that integrate RFID with sensor networks. Four types of integration are discussed: (1) integrating tags with sensors; (2) integrating tags with wireless sensor nodes and wireless devices; (3) integrating readers with wireless sensor nodes and wireless devices; and (4) a mix of RFID and wireless sensor networks. New challenges and future works are discussed at the end.
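
    A sketch of integration type (1), a tag with an on-board sensor: each reading pairs an identity (what and where the object is) with a condition (the sensed value). The record layout below is illustrative, not a specific standard's format:

        from dataclasses import dataclass
        import time

        @dataclass
        class SensedTagReading:
            tag_id: str           # EPC-style identifier from the RFID tag
            reader_id: str        # which reader / sensor node saw the tag
            temperature_c: float  # the condition RFID alone cannot report
            timestamp: float

        reading = SensedTagReading("urn:epc:id:sgtin:0614141.107346.2017",
                                   "dock-door-3", 7.4, time.time())
        print(reading)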

  • chapter (No Access)

    Quality Management in Academic Library: A Case Study of the Science and Technology Area in Spain

    This paper presents partial results of research carried out from 2002 to 2006. The population analyzed consisted of science and technology teachers at Spanish universities. The investigation worked with a sample of academic users distributed among 19 Spanish universities. The main contribution of this study is the presentation of the BIQUAL tool, which is useful for evaluating service quality in university libraries, especially in science and technology. The tool is built from the user's point of view. The results identify the behaviour of these users and the aspects that affect service quality in this environment. We also discuss some problems and difficulties experienced in this research, and we analyze the use of quantitative methods, in particular surveys, and their effectiveness for library quality management.

  • chapter (No Access)

    Approaches to and Perceptions for Quality: Empirical Evidence for the Public Libraries in Greece

    Public libraries can surely play a significant social, cultural and economic role, but improvements in quality are a necessary prerequisite. On the other hand, quality is a complex and subjective concept, which should incorporate at any given time the true (expressed and implied) needs of all interested parties. This paper investigates and empirically assesses current perceptions of quality in Greek public libraries in order to suggest a way forward for quality management implementation. For that purpose, a survey based on semi-structured interviews with the directors of Greek public libraries was conducted, and the results are presented.