Transient algebra is a multi-valued algebra for hazard detection in gate circuits. Sequences of alternating 0s and 1s, called transients, represent signal values, and gates are modeled by extensions of Boolean functions to transients. Formulas for computing the output transient of a gate from the input transients are known for NOT, AND, OR and XOR gates and their complements, but, in general, even the problem of deciding whether the length of the output transient exceeds a given bound is NP-complete. We propose a method of evaluating extensions of general Boolean functions. We study a class of functions for which, instead of evaluating the extensions on a given set of transients, it is possible to obtain the same values by using transients derived from the given ones but having length at most 3. We prove that all functions of three variables, as well as certain other functions, have this property and can be evaluated efficiently.
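As a rough illustration of what evaluating such an extension involves, here is a minimal brute-force sketch, assuming the common definition of the extension as the longest (contracted) output obtainable when the inputs change one step at a time through their transients; the function names and the OR example are ours, for illustration only, and the exhaustive search is exactly the kind of exponential evaluation the paper aims to avoid.

    from itertools import permutations

    def contract(seq):
        # collapse consecutive duplicates, e.g. 0,0,1,1,0 -> 0,1,0 (a transient)
        out = [seq[0]]
        for b in seq[1:]:
            if b != out[-1]:
                out.append(b)
        return out

    def extension(f, transients):
        # Brute force: each input steps through its own transient one change at
        # a time; all interleavings of these single-input changes are tried, f
        # is applied along every path, and the longest contracted output is kept.
        steps = [(i, k) for i, t in enumerate(transients) for k in range(1, len(t))]
        best = contract([f(*(t[0] for t in transients))])
        for order in permutations(steps):
            pos = [0] * len(transients)
            outputs = [f(*(t[p] for t, p in zip(transients, pos)))]
            valid = True
            for i, k in order:
                if k != pos[i] + 1:        # changes of one input must stay in order
                    valid = False
                    break
                pos[i] = k
                outputs.append(f(*(t[p] for t, p in zip(transients, pos))))
            if valid and len(contract(outputs)) > len(best):
                best = contract(outputs)
        return best

    # A 2-input OR gate with input transients 010 and 101 yields 10101,
    # i.e. the output may change four times in the worst case.
    print(extension(lambda a, b: a | b, [[0, 1, 0], [1, 0, 1]]))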
The back-off procedure is one of the media access control mechanisms in the IEEE 802.11p communication protocol. It plays an important role in avoiding message collisions and allocating channel resources. Formal methods are effective approaches for studying the performance of communication systems. In this paper, we establish a discrete-time model for the back-off procedure. We use Markov Decision Processes (MDPs) to model the non-deterministic and probabilistic behaviors of the procedure, and use probabilistic computation tree logic (PCTL) to express different properties, which ensure that the discrete-time model performs its basic functionality. Based on the model and the PCTL specifications, we study the effect of the contention window length on the number of senders in the neighborhood of given receivers, and its effect on the expected cost a station requires to successfully send packets with the back-off procedure. Varying the window length may increase or decrease the maximum probability of correct transmissions within a time contention unit. We use the PRISM model checker to describe the proposed back-off procedure for the IEEE 802.11p protocol in vehicular networks, and define different probabilistic property formulas to automatically verify the model and derive numerical results. The obtained results are helpful for justifying the choice of values for the time contention unit.
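The paper's actual analysis is an MDP model checked with PRISM; purely to make the kind of question studied concrete (how the contention window length and the number of senders affect successful transmission within a bounded number of slots), a simplified Monte-Carlo sketch of binary exponential back-off is given below. All parameter names and defaults are our assumptions, and this toy model is not the paper's formal model.

    import random

    def backoff_success_prob(n_senders, cw_min=16, cw_max=1024, slots=200, trials=2000):
        # Probability that a tagged station (index 0) transmits without collision
        # within `slots` slots, under a crude slotted binary exponential back-off:
        # every station draws a counter uniformly from [0, CW), counts down one
        # slot at a time, transmits at zero, and doubles CW (up to cw_max) after
        # a collision.
        successes = 0
        for _ in range(trials):
            cw = [cw_min] * n_senders
            counter = [random.randrange(c) for c in cw]
            for _ in range(slots):
                transmitting = [i for i in range(n_senders) if counter[i] == 0]
                if len(transmitting) == 1 and transmitting[0] == 0:
                    successes += 1             # tagged station sent without collision
                    break
                for i in transmitting:         # collision, or another station's slot
                    cw[i] = min(2 * cw[i], cw_max) if len(transmitting) > 1 else cw_min
                    counter[i] = random.randrange(cw[i])
                for i in range(n_senders):
                    if counter[i] > 0:
                        counter[i] -= 1
        return successes / trials

    # Larger neighborhoods lower the success probability for a fixed window length.
    for n in (2, 5, 10, 20):
        print(n, backoff_success_prob(n))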
A key difficulty in the energy efficiency analysis of discrete manufacturing systems is the lack of an evaluation index system. In this paper, a novel evaluation index system with three layers and ten indices is presented to analyze the overall energy consumption level of a discrete manufacturing system. Then, considering the difficulty of directly obtaining machine energy efficiency, a prediction method based on recursive identification with a variable forgetting factor is put forward to calculate it. Furthermore, a comprehensive quantitative evaluation method combining rough sets and the attribute hierarchical model is designed on top of the index structure to evaluate the energy efficiency level. Finally, an experiment is used to illustrate the effectiveness of the proposed evaluation index system and method.
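The abstract does not give the identification equations; as a hedged sketch of what recursive least-squares identification with a variable forgetting factor typically looks like (the specific adaptation rule for the forgetting factor and all parameter names below are our illustrative choices, not necessarily the paper's), one could write:

    import numpy as np

    def rls_variable_forgetting(X, y, lam_min=0.95, k=0.5, delta=1000.0):
        # Recursive least squares with a variable forgetting factor: the factor
        # is pushed down when the prediction error is large (old data are
        # discounted faster) and stays near 1 when the error is small.  The
        # quadratic-error adaptation rule used here is just one common choice.
        T, n = X.shape
        theta = np.zeros(n)                       # parameter estimate
        P = delta * np.eye(n)                     # covariance matrix
        for t in range(T):
            x = X[t]
            e = y[t] - x @ theta                  # a-priori prediction error
            lam = max(lam_min, 1.0 - k * e * e)   # variable forgetting factor
            g = P @ x / (lam + x @ P @ x)         # gain vector
            theta = theta + g * e                 # parameter update
            P = (P - np.outer(g, x @ P)) / lam    # covariance update
        return theta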
Recommender systems have long engaged multiple criteria in the production of recommendations. Such systems, referred to as multicriteria recommenders, demonstrated early on the potential of applying Multi-Criteria Decision Making (MCDM) methods to facilitate recommendation in numerous application domains. On the other hand, systematic implementation and testing of multicriteria recommender systems in the context of real-life applications remain rather limited. Previous studies on the evaluation of recommender systems have outlined the importance of careful testing and parameterization of a recommender system before it is actually deployed in a real setting. In this paper, an experimental analysis of several design options for three proposed multiattribute utility collaborative filtering algorithms is presented for a particular application context (recommendation of e-markets to online customers), under conditions similar to those expected during actual operation. The results indicate that the performance of recommendation algorithms depends on the characteristics of the application context, as these are reflected in the properties of the evaluations data set. It is therefore important to experimentally analyze various design choices for multicriteria recommender systems before their actual deployment.
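As a purely illustrative toy of the underlying idea of multiattribute utility collaborative filtering (the weighted-sum utility, the cosine/k-nearest-neighbour choices and all names below are our assumptions, not the algorithms evaluated in the paper):

    import numpy as np

    def predict_utility(U, weights, user, item, k=5):
        # U: (n_users, n_items, n_criteria) multicriteria ratings, 0 = not rated.
        # Each user's overall utility for an item is a weighted sum of the
        # criterion ratings; the prediction for (user, item) is a similarity-
        # weighted average over the k most similar users who rated the item.
        util = U @ weights                        # overall utilities (users x items)
        sims = np.array([np.dot(util[user], util[v]) /
                         (np.linalg.norm(util[user]) * np.linalg.norm(util[v]) + 1e-12)
                         for v in range(util.shape[0])])
        sims[user] = -np.inf                      # never use the target user itself
        raters = [v for v in np.argsort(-sims) if util[v, item] > 0][:k]
        w = sims[raters]
        return float(np.dot(w, util[raters, item]) / (np.abs(w).sum() + 1e-12))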
This paper describes possible strategies for the implementation of a feature selection algorithm particularly suited to the realisation of an efficient automatic handwritten signature verification system, in which an active feature vector, optimised with respect to an individual signer, is constructed during an enrolment period. A range of configurations based on transputer arrays is considered and the possible implementation approaches are evaluated. The paper demonstrates how the inherent parallelism that exists within a generic model for verification can be exploited to provide an optimised general-purpose framework for verification processing.
At the SEKE'99 conference, knowledge engineering researchers held a panel on the merits of meta-knowledge (i.e. problem solving methods and ontologies) for the development of knowledge-based systems. The original panel was framed as a debate on the merits of meta-knowledge for knowledge maintenance [21]. However, the debate quickly expanded. In the end, we were really discussing the merits of different technologies for the specification of reusable components for KBS. In this brief article we record some of the lively debate from that panel and the email exchanges it generated.
In this paper, we present a logical framework to help users assess a software system in terms of required survivability features. Survivability evaluation is essential when linking foreign software components to an existing system or obtaining software systems from external sources; it is important to make sure that foreign components or systems will not compromise the current system's survivability properties. Given the increasingly large scope and complexity of modern software systems, there is a need for an evaluation framework that accommodates uncertain, vague, or even ill-known knowledge for a robust evaluation based on multi-dimensional criteria. Our framework incorporates user-defined constraints on survivability requirements. Necessity-based possibilistic uncertainty and user survivability requirement constraints are effectively linked to logical reasoning. A proof-of-concept system has been developed to validate the proposed approach. To the best of our knowledge, our work is the first attempt to incorporate vague, imprecise information into software system survivability evaluation.
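For readers unfamiliar with the underlying machinery, the standard possibility and necessity measures that "necessity-based possibilistic uncertainty" refers to are, in the usual possibilistic-logic formulation (the paper's own constraint encoding is of course more specific),

    \Pi(\varphi) = \max_{\omega \models \varphi} \pi(\omega), \qquad N(\varphi) = 1 - \Pi(\neg\varphi),

where \pi is a possibility distribution over interpretations \omega; a survivability requirement \varphi can then be demanded to hold with necessity at least a user-chosen threshold, N(\varphi) \ge \alpha.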
A code clone refers to two or more identical or similar source code fragments. Research on code clone detection has lasted for decades. Investigation and evaluation of existing clone detection techniques indicate that they handle function-level clone detection well, but there is still room for further research in block-level clone detection; in particular, type-3 clones that include large gaps remain an ongoing challenge. To address these problems, we propose a clone detection method based on multiple code features. It aims to improve the recall of code block clone detection and to handle large-gap, hard-to-detect type-3 clones. The method first splits source code files based on the program’s structural features and context features to obtain code blocks. The collection of code blocks obtained in this way is complete, and the large gaps in clone pairs are also removed. In addition, we only need to compute the similarity between code blocks with the same structural features, which also saves significant time and resources. The similarity is obtained by calculating the proportion of identical tokens between two code blocks. Moreover, since different types of tokens carry different weights in the similarity calculation, we use supervised learning to obtain a classifier model relating token features to code clones. We divide the tokens into 13 types and train the machine learning model with manually confirmed clone and non-clone pairs. Finally, we develop a prototype system and compare our tool with existing tools under the Mutation Framework and in several real C projects. The experimental results demonstrate the advancement and practicality of our prototype.
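As a rough sketch of the token-based similarity step (the 13 token types and their learned weights are the paper's; the weight values, type names and normalization below are invented purely for illustration):

    from collections import Counter

    # Illustrative token-type weights; the paper instead learns a classifier over
    # 13 token types from manually confirmed clone/non-clone pairs.
    TOKEN_WEIGHTS = {"keyword": 1.0, "identifier": 0.6, "literal": 0.4,
                     "operator": 0.8, "call": 1.2}

    def weighted_token_similarity(tokens_a, tokens_b):
        # tokens_a, tokens_b: lists of (token_text, token_type) pairs for two code
        # blocks with the same structural features.  The score is the weighted
        # size of the shared token multiset divided by the weighted size of the
        # larger block, so identical blocks score 1.0.
        ca, cb = Counter(tokens_a), Counter(tokens_b)
        weight = lambda tok: TOKEN_WEIGHTS.get(tok[1], 0.5)
        shared = sum(min(ca[t], cb[t]) * weight(t) for t in ca.keys() & cb.keys())
        total = max(sum(n * weight(t) for t, n in ca.items()),
                    sum(n * weight(t) for t, n in cb.items()))
        return shared / total if total else 0.0

    # A pair is reported as a clone candidate when this similarity (or a classifier
    # over per-type similarity features) exceeds a chosen threshold.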
Parameterization approaches for non-uniform Doo-Sabin subdivision surfaces are developed in this paper using the eigenstructure of Doo-Sabin subdivision. New non-iterative methods for evaluating values and derivatives of Doo-Sabin surfaces at arbitrary parameters are presented. Furthermore, generalized basis functions for non-uniform Doo-Sabin surfaces are derived. Thus, many algorithms and analysis techniques developed for parametric surfaces can be extended to Doo-Sabin surfaces.
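As a brief reminder of the eigenstructure idea behind such non-iterative evaluation (this is the standard argument for subdivision surfaces; the non-uniform Doo-Sabin specifics are worked out in the paper): if A is the local subdivision matrix around an irregular face and C_0 collects the initial control points, then n rounds of subdivision give

    C_n = A^n C_0 = V \Lambda^n V^{-1} C_0,

where A = V \Lambda V^{-1} is the eigen-decomposition of A. Since \Lambda^n is diagonal, C_n is available in closed form for every n, so the regular patch containing any given parameter value, together with its derivatives, can be evaluated directly instead of by iterating the subdivision rules.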
In this paper we present and discuss the results of the evaluation of an Intelligent Computer Assisted Language Learning (ICALL) system that operates over the Web. In particular, we aimed at evaluating the system along three dimensions: a) the effect of the intelligent features of the system on the learning outcome of students, b) the system's ability to provide individualized support to students that leads to more effective use of the system, and c) the general usability and friendliness of the ICALL system. To achieve this, we conducted an empirical study in which we compared the intelligent system with a non-intelligent version of it. The results of the study revealed that the students of the Web-based ICALL had gained more knowledge of the domain and had been able to interact with the system more effectively than the students who had used the non-intelligent version of the system. However, the students of the intelligent version found it more difficult and needed more time to get acquainted with the system than the students of the non-intelligent version.
Biomedical research is becoming increasingly multidisciplinary and collaborative in nature. At the same time, it has recently seen a vast growth in publicly and instantly available information. As the available resources become more specialized, there is a growing need for multidisciplinary collaborations between biomedical researchers to address complex research questions. We present an application of a data mining algorithm to genomic data in a collaborative decision-making support environment, as a typical example of how multidisciplinary researchers can collaborate in analyzing and interpreting biomedical data. Through the proposed approach, researchers can easily decide which data repositories should be considered, analyze the algorithmic results, discuss the weaknesses of the patterns identified, and set up new iterations of the data mining algorithm by defining other descriptive attributes or integrating other relevant data. Evaluation results show that the proposed approach helps users set their research objectives and better understand the data and methodologies used in their research.
Mobile learning offers, with the help of handheld devices, continuous access to the learning process. With the advent of mobile learning, educational systems are changing, offering the possibility of distance education without the restrictions of place and time. As such, new technological advancements are employed by mobile learning. This paper presents the design, development and evaluation of a novel artificial conversational entity, incorporated in a mobile learning system for personalized English language instruction. More specifically, it improves the domain knowledge model by adapting it to the students’ needs and the pace at which they prefer to learn. Moreover, it creates personalized tutoring advice in order to support students in the educational process. Finally, it can assist with assessment, since it automatically generates questions to gauge the knowledge level of students. The evaluation of the mobile tutoring system shows promising results regarding the incorporation of this new technology in digital education with the aim of creating a student-centric learning experience.
Collaborative filtering techniques have been studied extensively during the last decade. Many open source packages (Apache Mahout, LensKit, MyMediaLite, rrecsys, etc.) implement them, but typically the top-N recommendation lists are based only on a highest-predicted-ratings approach. However, exploiting frequencies in the user/item neighborhood for the formation of the top-N recommendation lists has been shown to provide superior accuracy in offline simulations. In addition, most open source packages use a time-independent evaluation protocol to test the quality of recommendations, which may lead to misleading conclusions, since it cannot simulate well the real-life systems that are strongly tied to the time dimension. In this paper, we therefore implement the time-aware evaluation protocol in the open source recommendation package for the R language, denoted rrecsys, and compare its performance across open source packages for reasons of replicability. Our experimental results clearly demonstrate that using the most frequent items in the neighborhood significantly outperforms the highest-predicted-rating approach on three public datasets. Moreover, the time-aware evaluation protocol is shown to be more adequate for capturing the life-time effectiveness of recommender systems.
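A minimal sketch of the "most frequent items in neighborhood" idea follows; the actual rrecsys implementation and evaluation protocol are described in the paper, and the cosine-similarity neighbourhood and parameter names here are illustrative assumptions only.

    import numpy as np

    def topn_most_frequent(R, user, k=50, n=10):
        # R: (n_users, n_items) rating matrix, 0 = unrated.  Find the k nearest
        # neighbours of `user` (cosine similarity) and rank items by how many of
        # those neighbours rated them, instead of by predicted rating; items the
        # target user already rated are excluded.
        norms = np.linalg.norm(R, axis=1) + 1e-12
        sims = (R @ R[user]) / (norms * norms[user])
        sims[user] = -np.inf                      # exclude the target user
        neighbours = np.argsort(-sims)[:k]
        freq = (R[neighbours] > 0).sum(axis=0).astype(float)
        freq[R[user] > 0] = -np.inf               # never recommend seen items
        return np.argsort(-freq)[:n]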
Recommender systems’ evaluation is usually based on predictive accuracy and information retrieval metrics, with better scores meaning that recommendations are of higher quality. However, new algorithms are constantly developed, and comparing algorithms within an evaluation framework is difficult, since different settings are used in the design and implementation of experiments. In this paper, we propose a guidelines-based approach that can be followed to reproduce experiments and results within an evaluation framework. We have evaluated our approach using a real dataset and well-known recommendation algorithms and metrics, showing that it can be difficult to reproduce results if certain settings are missing, which results in more evaluation cycles being required to identify the optimal settings.
As programs become more complex, it is increasingly important to analyze their behavior statistically. This article describes two separate but synergistic tools for statistically analyzing large Lisp programs. The first tool, CLIP (Common Lisp Instrumentation Package), allows the researcher to define and run experiments, including experimental conditions (parameter values of the planner or simulator) and the data to be collected. The data are written out to data files that can be analyzed by statistics software. The second tool, CLASP (Common Lisp Analytical Statistics Package), allows the researcher to analyze data from experiments using graphics, statistical tests, and various kinds of data manipulation. CLASP has a graphical user interface (using CLIM, the Common Lisp Interface Manager) and also allows data to be processed directly by Lisp functions. Finally, the paper describes a number of other data-analysis modules that have been added to work with CLIP and CLASP.
This paper examines the future of software engineering with particular emphasis on the development of intelligent and cooperating information systems (ICISs). After a brief historical overview, the applications of the 1990s are characterized as having open requirements, depending on reuse, emphasizing integration, and relying on diverse computational models. It is suggested that experience with TEDIUM, an environment for developing interactive information systems, offers insight into how software engineering can adjust to its new challenges. The environment and the methods for its use are described, and its effect on the software process is evaluated. Because the environment employs a knowledge-based approach to software development, there is an extended discussion of how TEDIUM classifies, represents, and manages this knowledge. A final section relates the experience with TEDIUM to the demands of ICIS development and evolution.
Many information fusion methods have been investigated to tackle group decision making (GDM) problems under a dual hesitant fuzzy (DHF) environment. Nevertheless, traditional DHF information fusion methods are not perfect and can be improved. In this study, we introduce the Archimedean t-conorm and t-norm (ATT) to the DHF environment. We first introduce ATT operations for DHF information and prove that some existing operations are special cases of the proposed ones. Then, several new aggregation operators for DHF information are proposed based on the ATT operations. Finally, a real case concerning the selection of a cooperative hospital as an emergency hospital for a company is studied with the proposed methods.
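For context, the Archimedean family is usually written in terms of additive generators, and ATT-based operators build on exactly this representation (our notation):

    T(x, y) = g^{-1}\big(g(x) + g(y)\big), \qquad S(x, y) = h^{-1}\big(h(x) + h(y)\big), \quad h(t) = g(1 - t),

where g : [0,1] \to [0,\infty] is strictly decreasing with g(1) = 0. Choosing g(t) = -\log t recovers the algebraic product and probabilistic sum, while g(t) = \log\frac{2-t}{t} gives the Einstein operations; this is the usual mechanism by which existing operations arise as special cases of generator-based ones.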
Quantitative models built as tools for evaluating human language performance can prove useful for both theoretical and applied areas of discourse comprehension, assessment, and education. The current paper offers a test case focused on the mathematical articulation of a model and a test of that model with existing corpora. The model presented has been developed to evaluate students’ free-text answers based on the fuzzy indiscernibility between the semantic spaces created by the model answer and the learners’ responses. The model semantic space, represented as a knowledge matrix, can be constructed out of one or more model answers prepared by human experts and closely resembles the knowledge of the human evaluator. The proposed model determines the indiscernibility between the types of word usage and scores each answer based on fuzzy indiscernibility measures stored in a graded thesaurus prepared specifically for this purpose. The results returned on experimental data correlate well with human evaluators. This simulated learner-friendly atmosphere, inspired by a teacher’s intelligent benevolence, supports effective attainment of learning objectives in an e-learning environment.
Evaluation models for long-term historical data are important in many applications. In this study, based on the Age measure defined by Yager, we propose the definitions of Age Sequence and Age Series. We then provide a Generalized Recursive Smoothing method; some classical smoothing models used in evaluation problems can be seen as special cases of it. In order to obtain more reasonable and effective aggregation results for historical data, we propose several different Age Sequences, e.g., the Generalized Harmonic Age Sequence and the p Age Sequence, which in principle yield infinitely many further recursive smoothing methods satisfying different preferences of decision makers.
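As one classical smoothing model of the kind the abstract refers to (whether it is exactly among the paper's special cases is our assumption), exponential smoothing can be written recursively as

    V_t = (1 - \alpha)\, V_{t-1} + \alpha\, x_t, \qquad V_1 = x_1,

which unfolds to V_t = (1-\alpha)^{t-1} x_1 + \sum_{k=2}^{t} \alpha (1-\alpha)^{t-k} x_k, i.e. an age-weighted aggregation whose weights decay geometrically with the age of each observation; a generalized recursive scheme presumably replaces this geometric decay with the weights induced by a chosen Age Sequence.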
This study proposes a novel concept of Scatter for probability distributions on [0,1]. The proposed measurement differs from the well-known Shannon entropy in that it treats [0,1] as a chain instead of an ordinary set. The measurement works easily and reasonably in practice and conforms to human intuition. Some interesting properties of this new measurement, such as symmetry, translation invariance, weak convergence and concavity, are also obtained, and the measurement has good potential in further theoretical studies and applications. The novel concept can also be suitably adapted to discrete OWA operators and RIM quantifiers. We then propose a new measurement, the Preference Scatter, with its normalized form, the Normalized Preference Scatter, for OWA weight collections. We analyze its reasonability as a new measurement for OWA weight collections in comparison with other measurements such as the Orness, the Normalized Dispersion and the Hurwicz Degree of OWA operators. In addition, the corresponding Preference Scatter for RIM quantifiers is defined.
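For reference, the established OWA measurements that the Preference Scatter is compared against are defined, for a weight vector W = (w_1, \ldots, w_n), as (standard definitions, not taken from the paper):

    \mathrm{orness}(W) = \frac{1}{n-1}\sum_{i=1}^{n} (n-i)\, w_i, \qquad \mathrm{Disp}(W) = -\sum_{i=1}^{n} w_i \ln w_i, \qquad \mathrm{NDisp}(W) = \frac{\mathrm{Disp}(W)}{\ln n},

so orness measures how optimistic ("or-like") the aggregation is, while the (normalized) dispersion measures how evenly the weights use the arguments; the Preference Scatter proposed here is meant to capture a different aspect of a weight collection.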