This work considers the Internet of Things (IoT) and machine learning (ML) applied to the agricultural sector in a real-world scenario. More specifically, the aim is to forecast, on an hourly basis, two of the most important meteorological parameters (solar radiation and rainfall) in order to determine the amount of water needed by a specific plantation under different boundary conditions. Three state-of-the-art ML approaches, coupled with boosting techniques, have been adopted and compared to obtain hourly forecasts. Real-world conditions refer to the situation in which training data are missing for the weather station closest to the field to be irrigated. A simple but effective approach, based on the correlation between available weather stations, is adopted to cope with this problem. Results, evaluated with different metrics as well as execution time, demonstrate the viability of the proposed solution in a real IoT scenario in which these forecasts serve as input data for the subsequent evaluation of irrigation needs.
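The correlation-based station selection described above can be sketched as follows; the station names, series lengths and synthetic diurnal signal are hypothetical stand-ins for illustration, not data from the paper:

```python
import numpy as np

def best_proxy_station(target, stations):
    """Pick the station whose overlapping history is most correlated
    (Pearson) with the short series available at the new site."""
    best_name, best_r = None, -np.inf
    for name, series in stations.items():
        r = np.corrcoef(target, series[: len(target)])[0, 1]
        if r > best_r:
            best_name, best_r = name, r
    return best_name, best_r

rng = np.random.default_rng(0)
hours = np.arange(720)
diurnal = np.sin(2 * np.pi * hours / 24).clip(0)      # idealized solar cycle
stations = {                                          # hypothetical stations
    "station_A": diurnal + rng.normal(0, 0.05, 720),  # similar climate
    "station_B": rng.normal(0, 1.0, 720),             # unrelated signal
}
target = diurnal[:72] + rng.normal(0, 0.05, 72)       # only 3 days on site
name, r = best_proxy_station(target, stations)
print(name)   # station_A, with r close to 1
```

The selected station's history can then serve as training data (or its fitted model as a proxy) for the station that lacks its own records.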
This paper describes the problem of developing working paradigms for advanced spatial data applications. The key role of interactive visualization in enabling the expertise of specialists, if effectively integrated into their working environments, is described. The scope for applying intelligence in designing visualizations to support, rather than to supplant, the expert is explored. A systematic framework describing the visualization design process, and an approach to applying intelligence around metavisualizations of the visualization design process, are summarized.
Contrary to popular belief, not all intelligent systems have Quick Intelligence. There are a surprisingly large number of intelligent systems, quasi-intelligent systems and semi-intelligent systems that have Slow Intelligence. Such Slow Intelligence Systems are often neglected in mainstream research on intelligent systems, but they are really worthy of our attention and emulation. I will describe the characteristics of Slow Intelligence Systems and present a general framework for Slow Intelligence Systems. I will then discuss evolutionary query processing and mission control in emergency management systems as two examples of Artificial Slow Intelligence Systems. Researchers and practitioners are both invited to explore the applications of Slow Intelligence Systems in software engineering and knowledge engineering, and publish their findings in IJSEKE.
Drawing upon past research on the neuromorphic implementation of an emotionally intelligent control system, a developed version of the brain emotional learning based intelligent controller, BELBIC, is proposed in this paper. The modified system, which essentially realizes a direct adaptive control scheme, performs very well for SISO plants and does not need any pre-training. It is adaptive and robust with respect to changes in the environment, since it learns to produce appropriate control actions on-line. An additional advantage of BELBIC is its simplicity as well as its low computational load. The performance of the system is demonstrated by applying it to previously studied benchmarks involving optimization and adaptive control of nonlinear systems with MLP-backpropagation (BP) nets.
AI tools have advanced to the point where they are integrated into decision support systems for real applications and are affecting decision making in substantive ways. This paper reviews decision-making theories and AI tools, and the intelligent decision systems that result from the integration of these concepts.
This work explores resources and alternatives for 3D facial biometry using binary patterns. A 3D facial geometry image is converted into two 2D representations, termed descriptors: a Depth Map and an Azimuthal Projection Distance Image. The first captures traditional facial geometry, while the second captures normal facial geometry and is able to encode complementary geometric information. The characteristics of Local Binary Patterns, Local Phase Quantisers and Gabor Binary Patterns were combined with the Depth Map and the Azimuthal Projection Distance Image to produce six new facial descriptors: 3D Local Binary Patterns, Local Azimuthal Binary Patterns, Local Depth Phase Quantisers, Local Azimuthal Phase Patterns, and Local Depth Gabor Binary Pattern Magnitudes and Phases. This work then draws on the immune concept to propose a new approach to facial biometry, in which the new facial descriptors are applied to an Artificial Intelligence algorithm named Artificial Immune Systems of Negative Selection. Analysis of the results shows the efficiency, robustness, precision and reliability of this approach, encouraging further research in this area.
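As an illustration of the binary-pattern family underlying these descriptors, here is a minimal plain LBP operator applied to a toy depth patch; the 3x3 values are made up, and the paper's descriptors add Gabor filtering, phase quantisation and azimuthal projection on top of this basic idea:

```python
def lbp_image(img):
    """Plain 8-neighbour Local Binary Patterns on a 2D grid (list of
    lists): each interior pixel is encoded by thresholding its 3x3
    neighbourhood against the centre value, clockwise from top-left."""
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            c = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di][j + dj] >= c:   # neighbour at least as deep?
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out

depth = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]          # a toy 3x3 patch of a depth map
print(lbp_image(depth))      # [[120]]
```

In practice the histogram of these codes over the image (or over image blocks) forms the feature vector fed to the classifier.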
Complex systems are interwoven collections of interacting entities that emerge and evolve through self-organization in a myriad of spiraling contexts. They exhibit subtleties on a global scale and steer the way toward understanding complexity, which has itself undergone evolutionary processes of an unfolding, cumulative nature in which order is viewed as the unifying framework. Because their components are strikingly non-separable, a complex system cannot be understood in terms of the properties of its isolated constituents alone; it is better comprehended through a multilevel approach to systems whose emergent behavior and patterns transcend the characteristics of the ubiquitous units composing the system itself. This observation marks a change of scientific paradigm, showing that a reductionist perspective does not by any means imply a constructionist view; in that vein, complex systems science, associated with multiscale problems, is regarded as the ascendancy of emergence over reductionism and of mechanistic insight evolving into complex systems. Evolvability relates to species and humans owing their existence to their ancestors' capacity to adapt, emerge and evolve; together with the relation between the complexity of models, designs, visualization and optimality, it calls for a horizon that can take into account the subtleties that make tailored means of solution applicable. Such views lend their due importance to the future science of complexity, which may best be regarded as a minimal history congruent with observable variations, namely the most parallelizable or symmetric process that can turn random inputs into regular outputs.
Interestingly enough, chaos and nonlinear systems enter this picture as cousins of complexity: a complex system's many components interact intensely with one another in a nonlinear fashion, alongside the other related systems and fields. In mathematics, a relation is a way of connecting two or more things (numbers, sets or other mathematical objects); it describes how those things are interrelated and thereby helps make sense of complex mathematical systems. Accordingly, mathematical modeling and scientific computing have proven to be principal tools for solving problems arising in the exploration of complex systems, with sound, stimulating and innovative aspects attributed to data science as a tailor-made discipline for making sense of voluminous (big) data. When computing the complexity of a mathematical model, run-time analyses depend on the kind of data chosen and on the methods employed; this makes it possible to examine the data used in a study, subject to the capacity of the computer at hand. Varying computer capacities affect the results; nevertheless, the step-by-step application of the method to the code must also be taken into consideration. In this sense, a definition of complexity evaluated over different data gains a broader range of applicability, with more realism and convenience, since the process rests on concrete mathematical foundations. All of this indicates that methods need to be investigated on the basis of their mathematical foundations, so that the level of complexity that will emerge for any given data becomes foreseeable.
With regard to fractals, fractal theory and analysis aim to assess the fractal characteristics of data, with several methods available for assigning fractal dimensions to datasets. From that perspective, fractal analysis expands our knowledge of the functions and structures of complex systems, acting as a potential means to evaluate novel areas of research and to capture the roughness, nonlinearity and randomness of objects. The idea of fractional-order integration and differentiation, and the inverse relationship between them, gives fractional calculus applications in various fields spanning science, medicine and engineering, among others. Within mathematics-informed frameworks for the reliable comprehension of complex processes that encompass an array of temporal and spatial scales, fractional calculus notably provides novel applicable models, from fractional-order calculus to optimization methods. Computational science and modeling, in turn, are oriented toward simulating and investigating complex systems by computer, drawing on domains ranging from mathematics and physics to computer science. A computational model consisting of the numerous variables that characterize the system under consideration allows many simulated experiments to be performed by computerized means. Furthermore, Artificial Intelligence (AI) techniques, whether or not combined with fractal and fractional analysis or with mathematical models, have enabled various applications, from predicting mechanisms in living organisms to modeling other interactions across vast spectra, besides providing solutions to real-world complex problems at both local and global scales. While maximizing model accuracy, AI can also minimize quantities such as the computational burden.
Relatedly, the notion of level of complexity, often employed in computer science for decision-making and problem-solving, aims to evaluate the difficulty of algorithms and thereby helps determine the resources and time required for task completion. Computational (algorithmic) complexity, the measure of the computing resources (memory and storage) a specific algorithm consumes when run, essentially characterizes the complexity of an algorithm, yielding an approximate sense of resource consumption as the input data vary in value and size. Together with search algorithms and solution landscapes, computational complexity ultimately points toward reductions and universality as means of exploring problems with different degrees of predictability. Taken together, this line of sophisticated, computer-assisted proof can meet the requirements of accuracy, interpretability, predictability and grounding in the mathematical sciences, with AI and machine learning at the foundation of, and at the intersection with, many related domains, alongside concurrent technical analyses, computing processes, computational foundations and mathematical modeling. Consequently, and distinctively, our special issue series provides a novel direction for stimulating, refreshing and innovative interdisciplinary, multidisciplinary and transdisciplinary understanding and research in model-based and data-driven modes, aimed at feasible and accurate solutions, designed simulations and optimization processes, among much else. Hence, we address theoretical reflections on how all these processes are modeled, merging advanced methods, mathematical analyses, computational technologies and quantum means, and exhibiting the implications of applicable approaches in real-world systems and other related domains.
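The idea of measuring an algorithm's resource consumption across input sizes can be made concrete by instrumenting a simple algorithm; bubble sort and the comparison count used here are illustrative choices, not methods from the text:

```python
def bubble_sort_counting(arr):
    """Bubble sort instrumented to count comparisons, i.e. the abstract
    resource that an analytical bound (here O(n^2)) talks about."""
    a = list(arr)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

for n in (10, 20, 40):
    _, c = bubble_sort_counting(range(n, 0, -1))
    print(n, c)   # 45, 190, 780: roughly 4x per doubling of n
```

The measured counts match the analytical n(n-1)/2 exactly, illustrating how empirical instrumentation and mathematical foundations can be checked against each other.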
We look at the issue of obtaining a variance-like measure associated with probability distributions over ordinal sets. We call these dissonance measures. We specify some general properties desired of these dissonance measures. The centrality of the cumulative distribution function in formulating the concept of dissonance is pointed out. We introduce some specific examples of measures of dissonance.
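One illustrative CDF-based candidate (an assumption for illustration, not necessarily one of the paper's measures) is the sum of F_i(1 - F_i) over the cumulative distribution F, which vanishes when all probability sits on a single ordinal category and peaks when mass is split between the two extremes:

```python
def cdf(p):
    """Cumulative distribution of a probability vector over ordered categories."""
    total, out = 0.0, []
    for pi in p:
        total += pi
        out.append(total)
    return out

def dissonance(p):
    """Variance-like dissonance: sum of F_i * (1 - F_i) over the CDF.
    The final CDF value is 1, so its term is always 0 and is dropped."""
    return sum(f * (1 - f) for f in cdf(p)[:-1])

print(dissonance([1.0, 0.0, 0.0, 0.0]))     # 0.0  (full consensus)
print(dissonance([0.5, 0.0, 0.0, 0.5]))     # 0.75 (polarized, maximal)
print(dissonance([0.25, 0.25, 0.25, 0.25])) # 0.625
```

Note how the uniform distribution scores below the polarized one: on an ordinal scale, mass at the extremes expresses more disagreement than mass spread evenly, which is exactly what a CDF-based formulation captures and a plain variance over unordered labels would not.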
Education is one of the areas most affected by digitalization, which takes various forms, such as education through digital devices and technology to improve the learning process. Online and tangible computing platforms have shown growing interest in pursuing active teaching through educational technologies within the curriculum. Several common problems and considerations must be addressed, such as access, capacity, financing, periodic progress measurement, and the evaluation of results. Productive learning activity can be monitored immediately and continuously by the proposed Digital Tangible Intelligent Monitoring System (DTIMS). The proposed system applies a decision-tree-based teaching methodology to address the former challenges, while the latter are resolved by a dynamic evaluation process based on the Internet of Things (IoT). The research is evaluated against currently adopted education systems. The results highlight the potential of the proposed model and provide insight into digital teaching. The simulation analysis reports an accuracy of 97.69%, a vulnerability of 91.09%, and an efficiency of 85.10%, supporting the reliability of the proposed framework.
The article presents the development of a classifier combination based on machine learning techniques (Artificial Neural Networks, Support Vector Machines, and the C4.5 algorithm) that increases the performance achieved by the Refractive Errors Measurement System (REMS), which analyzes Hartmann-Shack (HS) images of human eyes. The HS images are analyzed in order to extract data relevant to the identification of refractive errors (myopia, hypermetropia and astigmatism). Those data are extracted using the Gabor wavelet transform; afterwards, machine learning techniques are employed to carry out the image analysis.
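A classifier combination can be as simple as majority voting over the base learners' labels; the sketch below uses hypothetical predictions and plain voting, whereas the paper's actual combination scheme may differ:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by simple majority; in a tie,
    the first classifier (in listing order) that holds a top-count label
    wins, an arbitrary but deterministic choice."""
    combined = []
    for labels in zip(*predictions):       # one tuple per sample
        counts = Counter(labels)
        top = max(counts.values())
        winners = [l for l in labels if counts[l] == top]
        combined.append(winners[0])
    return combined

# Hypothetical outputs of three base classifiers (e.g. ANN, SVM, C4.5)
ann = ["myopia", "normal", "astigmatism"]
svm = ["myopia", "hypermetropia", "astigmatism"]
c45 = ["normal", "hypermetropia", "astigmatism"]
print(majority_vote([ann, svm, c45]))
```

Even this simple scheme can beat any single base classifier when their errors are uncorrelated, which is the motivation for combining diverse learners such as an ANN, an SVM and a decision tree.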
Systems with conversational interfaces are rather popular nowadays. However, their full potential is not yet exploited. For the time being, users are restricted to calling predefined functions. Soon, users will expect to customize systems to their needs and to create their own functions using nothing but spoken instructions. Thus, future systems must understand how laypersons teach new functionality to intelligent systems. Understanding natural language teaching sequences is a first step toward comprehensive end-user programming in natural language. We propose to analyze the semantics of spoken teaching sequences with a hierarchical classification approach. First, we classify whether an utterance constitutes an effort to teach a new function or not. Afterward, a second classifier locates the distinct semantic parts of teaching efforts: the declaration of a new function, the specification of intermediate steps, and superfluous information. For both tasks we implement a broad range of machine learning techniques: classical approaches, such as Naïve Bayes, and neural network configurations of various types and architectures, such as bidirectional LSTMs. Additionally, we introduce two heuristic-based adaptations tailored to the task of understanding teaching sequences. As a data basis we use 3168 descriptions gathered in a user study. For the first task, convolutional neural networks obtain the best results (accuracy: 96.6%); bidirectional LSTMs excel in the second (accuracy: 98.8%). The adaptations improve the first-level classification considerably (plus 2.2 percentage points).
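The two-stage pipeline (teaching-effort detection, then semantic-part labelling) can be sketched with simple keyword heuristics standing in for the trained classifiers; the cue phrases, labels and example utterance below are illustrative assumptions, not the paper's models or data:

```python
def is_teaching_effort(utterance):
    """Stage 1 (toy heuristic stand-in for the trained classifier):
    does the utterance try to teach a new function?"""
    cues = ("this is how", "to do that", "i will show", "you have to")
    return any(c in utterance.lower() for c in cues)

def label_parts(sentences):
    """Stage 2 (again a heuristic stand-in): tag each sentence as the
    DECLARATION of a new function, an intermediate STEP, or SUPERFLUOUS."""
    labels = []
    for s in sentences:
        low = s.lower()
        if "this is how" in low or "is called" in low:
            labels.append("DECLARATION")
        elif low.startswith(("first", "then", "next", "finally")):
            labels.append("STEP")
        else:
            labels.append("SUPERFLUOUS")
    return labels

utterance = "This is how you make coffee. First fill the tank. Then press start."
if is_teaching_effort(utterance):                     # stage 1 gate
    sents = [s.strip() for s in utterance.split(".") if s.strip()]
    print(label_parts(sents))                         # stage 2 labelling
```

The hierarchical structure matters: stage 2 only ever sees utterances that stage 1 accepted, which is why the two tasks are trained and evaluated separately in the paper.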
This study introduces the technique of Genetic Fuzzy Trees (GFTs) through a novel application to an air combat control problem for an autonomous squadron of Unmanned Combat Aerial Vehicles (UCAVs) equipped with next-generation defensive systems. GFTs are a natural evolution of Genetic Fuzzy Systems, in which multiple cascading fuzzy systems are optimized by genetic methods. In this problem, a team of UCAVs must traverse a battle space, counter enemy threats, utilize imperfect systems, cope with uncertainty, and successfully destroy critical targets. Enemy threats take the form of Air Interceptors (AIs), Surface-to-Air Missile (SAM) sites, and Electronic WARfare (EWAR) stations. Simultaneous training and tuning of a multitude of Fuzzy Inference Systems (FISs), with varying degrees of connectivity, is performed through the use of an optimized Genetic Algorithm (GA). The GFT presented in this study, the Learning Enhanced Tactical Handling Algorithm (LETHA), is able to create controllers that exhibit deep learning, resilience to uncertainties, and adaptability to changing scenarios. The resulting deterministic fuzzy controllers are easily understood by operators, achieve very high performance and efficiency, and are consistently capable of completing new and different missions they were not trained for.
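The core idea of genetically tuning fuzzy inference systems can be sketched in miniature: a one-parameter triangular-membership FIS whose breakpoint a tiny elitist GA tunes to match a reference ramp. Everything here (the membership shapes, the fitness target, the GA settings) is a toy assumption, far simpler than LETHA's cascaded FISs:

```python
import random

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fis(x, p):
    """One-input fuzzy system: 'near' membership peaks at 0 with foot p;
    'far' is its complement. Weighted-average defuzzification of the
    rule outputs near -> 1.0, far -> 0.0 reduces to the 'near' degree."""
    near = tri(x, -p, 0.0, p)
    far = 1.0 - near
    return (near * 1.0 + far * 0.0) / (near + far)

def fitness(p, samples):
    """Negative squared error against a reference ramp max(0, 1 - x/5)."""
    return -sum((fis(x, p) - max(0.0, 1.0 - x / 5.0)) ** 2 for x in samples)

random.seed(1)
samples = [i * 0.5 for i in range(11)]           # x in [0, 5]
pop = [random.uniform(0.5, 10.0) for _ in range(20)]
for _ in range(40):                              # tiny elitist GA
    pop.sort(key=lambda p: fitness(p, samples), reverse=True)
    parents = pop[:10]                           # keep the best half
    children = [max(0.5, p + random.gauss(0, 0.5)) for p in parents]
    pop = parents + children                     # mutate to refill
best = max(pop, key=lambda p: fitness(p, samples))
print(best)   # should land near 5, the reference ramp's breakpoint
```

A GFT scales this pattern up: the GA's genome encodes the parameters of many such FISs at once, arranged in a cascade so that one system's output feeds the next.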
Cervical cancer cell images play an important part in diagnosing the cancer among females worldwide. Noise, overlapping cells, mucus, blood and air artifacts in cervical cancer cell images make their classification a hard task, and make it difficult for both pathologists and intelligent systems to segment cells and classify them as normal, pre-cancerous or cancerous. However, accurate cell segmentation is needed for pathologists to make an accurate diagnosis. In this paper, a review of algorithms used for cervical cancer cell image classification is presented. This includes pre-processing steps (noise reduction and cell segmentation, or its omission), feature extraction, and intelligent diagnosis systems and their evaluation. Finally, future research trends toward fully accurate cervical cell classification are described.
The paper presents an approach to classification based on neuro-fuzzy systems and hybrid learning algorithms. A new method of rule generation is proposed. The rules are used to create a connectionist neuro-fuzzy architecture of the multi-NF system. Parameters of the rules are then adjusted by a gradient algorithm. Thus, the system can be employed to solve multi-class classification problems. Illustrative examples are presented.
Intelligent methods are increasingly applied to a variety of tasks, including those in the process industry. In industrial applications, the changing production environment creates an urgent need for adaptation. This paper concerns applications of Smart Adaptive Systems to industrial problems, using several case studies as examples.
This chapter provides an overview of the increasing use of intelligent software as a means of combating fraud. In particular, methods such as Neural Nets, Genetic Algorithms, and Fuzzy Logic are discussed in terms of their role in automated systems designed to detect and flag instances of fraud. Such methods are already employed in systems for detecting credit card fraud, insurance claims fraud, insider dealing, and irregularities in company auditing.