Multimedia services of cultural institutions need to be supported by content, metadata and workflow management systems to efficiently manage huge amounts of content items and metadata production. Online digital libraries and cultural heritage institutions, as well as publishers' portals, need an integrated multimedia back office in order to aggregate content collections and provide them to national and international aggregators while respecting Intellectual Property Rights (IPR). The aim of this paper is to formalize and discuss the requirements, modeling, design and validation of an institutional aggregator for metadata and content, coping with IPR models for conditional access and providing content to Europeana, the European international aggregator. This paper presents the identification of the Content Aggregator requirements for content management and IPR, and thus the definition and realization of a corresponding distributed architecture and workflow solution satisfying them. The main contribution of this paper is the formalization of an IPR model that shortens the activities for IPR resolution and avoids the assignment of conflicting rights/permissions during IPR model formalization, and thus during licensing. The proposed solution, models and tools have been validated in the case of the ECLAP service, and the results are reported in the paper. The ECLAP Content Aggregator has been established by the European Commission to serve Europeana for the thematic area of Performing Arts institutions.
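The conflict-avoidance idea in this abstract can be illustrated with a small sketch. The rights vocabulary below is hypothetical (the actual ECLAP IPR model is far richer): some permissions imply others, and a license check flags grants that clash with explicit denials before the license is issued.

```python
# Hypothetical rights vocabulary: granting a right implies the rights it needs.
IMPLIED = {
    "distribute": {"access"},
    "modify": {"access"},
}

def closure(grants):
    """Expand a set of granted rights with everything they imply."""
    out = set(grants)
    for g in grants:
        out |= IMPLIED.get(g, set())
    return out

def conflicts(grants, denials):
    """Rights that are simultaneously granted (directly or by implication) and denied."""
    return closure(grants) & set(denials)
```

For example, granting "distribute" while denying "access" is caught as a conflict even though "access" was never granted explicitly, which is the kind of inconsistency the formal IPR model is meant to rule out.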
Semantic computing addresses the transformation of data, both structured and unstructured, into information that is useful in application domains. One domain where semantic computing would be extremely effective is evacuation route planning, an area of critical importance in disaster emergency management and homeland defense preparation. Evacuation route planning, which identifies paths in a given transportation network to minimize the time needed to move vulnerable populations to safe destinations, is computationally challenging because the number of evacuees often far exceeds the capacity, i.e., the number of people that can move along the road segments in a unit of time. A semantic computing framework would help further the design and development of effective tools in this domain by providing a better understanding of the underlying data and its interactions with various design techniques. Traditional Linear Programming (LP)-based methods using time-expanded networks can take hours to days of computation for metropolitan-sized problems. In this paper, we propose a new approach, namely a capacity-constrained route planner for evacuation route planning, which models capacity as a time series and generalizes shortest-path algorithms to incorporate capacity constraints. We describe the building blocks and discuss the implementation of the system. Analytical and experimental evaluations that compare the performance of the proposed system with existing route planners show that the capacity-constrained route planner produces solutions comparable to those produced by LP-based algorithms while significantly reducing the computational cost.
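The capacity-as-a-time-series idea can be sketched as a generalized shortest-path search. The following is a simplified illustration, not the paper's actual planner: a Dijkstra-style search over arrival times in which a group may enter an edge only at a time step with enough remaining capacity, and otherwise waits at the node. Capacity reservation across multiple groups is omitted.

```python
import heapq

def earliest_arrival(graph, capacity, source, dest, group_size, horizon=100):
    """Earliest-arrival route for a single evacuee group.

    graph:    {node: [(neighbor, travel_time), ...]}
    capacity: {(u, v): {t: people that may enter edge (u, v) at time t}}
    The group may enter an edge only at a time step whose remaining
    capacity covers the whole group; otherwise it waits at the node.
    """
    best = {source: 0}                      # earliest known arrival per node
    pq = [(0, source, [source])]            # (arrival_time, node, path)
    while pq:
        t, node, path = heapq.heappop(pq)
        if node == dest:
            return t, path
        if t > best.get(node, horizon):
            continue                        # stale queue entry
        for nxt, travel in graph.get(node, []):
            cap = capacity.get((node, nxt), {})
            depart = t
            while depart <= horizon and cap.get(depart, 0) < group_size:
                depart += 1                 # wait for enough edge capacity
            arrive = depart + travel
            if depart <= horizon and arrive < best.get(nxt, horizon + 1):
                best[nxt] = arrive
                heapq.heappush(pq, (arrive, nxt, path + [nxt]))
    return None
```

A full planner would additionally decrement the consumed capacity after routing each group and iterate over groups, which is where the time-series capacity model pays off against time-expanded LP formulations.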
Current bioinformatics tools and databases are very heterogeneous in terms of data formats, database schemas, and terminologies. Additionally, most biomedical databases and analysis tools are scattered across different web sites, making interoperability across such different services difficult. It is desirable that these diverse databases and analysis tools be normalized, integrated and wrapped with a semantic interface so that users of biological data and tools could communicate with the system in natural language, and a workflow could be automatically generated and distributed to the appropriate tools. In this paper, the BioSemantic System is presented to bridge complex biological/biomedical research problems and computational solutions via semantic computing. Due to the diversity of problems in various research fields, the semantic capability description language (SCDL) plays an important role as a common language and generic form for problem formalization. Several queries, together with their corresponding SCDL descriptions, are provided as examples. For complex applications, multiple SCDL queries may be connected via control structures. For these cases, we present an algorithm that maps a user request to one or more existing services, if they exist.
Semantic Computing extends the Semantic Web in both breadth and depth. It bridges and integrates several computing technologies into a complete and unified theme. This article discusses the essence of Semantic Computing and describes SemanticServices.Net, a new paradigm that enables "problem-driven" search and may offer a new story for the Internet.
Semantics is the meaning of symbols, notations, concepts, functions, and behaviors, as well as their relations that can be deduced onto a set of predefined entities and/or known concepts. Semantic computing is an emerging computational methodology that models and implements computational structures and behaviors at semantic or knowledge level beyond that of symbolic data. In semantic computing, formal semantics can be classified into the categories of to be, to have, and to do semantics. This paper presents a comprehensive survey of formal and cognitive semantics for semantic computing in the fields of computational linguistics, software science, computational intelligence, cognitive computing, and denotational mathematics. A set of novel formal semantics, such as deductive semantics, concept-algebra-based semantics, and visual semantics, is introduced that forms a theoretical and cognitive foundation for semantic computing. Applications of formal semantics in semantic computing are presented in case studies on semantic cognition of natural languages, semantic analyses of computing behaviors, behavioral semantics of human cognitive processes, and visual semantic algebra for image and visual object manipulations.
Computing with words (CWW) is an intelligent computing methodology for processing words, linguistic variables, and their semantics, which mimics the natural-language-based reasoning mechanisms of human beings in soft computing, semantic computing, and cognitive computing. The central objects in CWW techniques are words and linguistic variables, which may be formally modeled by abstract concepts: basic cognitive units used to identify and model a concrete entity in the real world and an abstract object in the perceived world. Therefore, concepts are the most fundamental linguistic entities that carry certain meanings in expression, thinking, reasoning, and system modeling, and they may be formally modeled as abstract and dynamic mathematical structures in denotational mathematics. This paper presents a formal theory for concept and knowledge manipulation in CWW known as concept algebra. The mathematical models of abstract and concrete concepts are developed based on the object-attribute-relation (OAR) theory. The formal methodology for manipulating knowledge as a concept network is described. Case studies demonstrate that concept algebra provides a generic and formal means of knowledge manipulation, capable of dealing with complex knowledge and its algebraic operations in CWW.
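A toy rendering of a concept under the OAR view might look as follows. The operations shown, a conjunction and a Jaccard-style attribute comparison, are illustrative assumptions for the sketch, not the formal concept-algebra definitions from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """A toy concept: an extension (objects) and an intension (attributes)."""
    objects: frozenset
    attributes: frozenset

def conjunction(c1, c2):
    """Combined concept: objects shared by both, attributes of either."""
    return Concept(c1.objects & c2.objects, c1.attributes | c2.attributes)

def similarity(c1, c2):
    """Jaccard overlap of attribute sets as a crude concept comparison."""
    union = c1.attributes | c2.attributes
    return len(c1.attributes & c2.attributes) / len(union) if union else 1.0
```

Even this minimal structure lets words be compared and combined at the concept level rather than as opaque symbols, which is the core move of CWW.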
This paper presents an experiment that allows inference over data published in social networks, resulting in a potentially severe privacy leak: specifically, the inference of geo-location, which opens the door to cybercasing attacks. We present an algorithm that infers the geo-location of YouTube and Flickr videos based on their tag descriptions. Using the inferred locations, we find people for whom we can deduce both the home address and the fact that they are currently on vacation, which makes them potential targets for burglary. In doing so, we repeat an experiment from the literature that was originally meant to show the potential dangers of geo-tagging, but replace the geo-tags with Semantic Computing methods. We conclude that the only way to tackle potential threats like this is for researchers to develop an enhanced notion of privacy for Semantic Computing.
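The tag-based inference can be sketched as a gazetteer lookup. The miniature gazetteer below is hypothetical; a real system would match tags against a large resource such as GeoNames and disambiguate among competing candidates.

```python
# Hypothetical miniature gazetteer; a real system would use GeoNames or similar.
GAZETTEER = {
    "eiffel tower": (48.8584, 2.2945),
    "brooklyn bridge": (40.7061, -73.9969),
    "golden gate": (37.8199, -122.4783),
}

def infer_location(tags):
    """Return candidate (place, coordinates) pairs inferred from text tags."""
    hits = []
    for tag in tags:
        key = tag.strip().lower()
        if key in GAZETTEER:
            hits.append((key, GAZETTEER[key]))
    return hits
```

The privacy point of the paper is precisely that such a lookup needs no embedded geo-tags: the location leaks from the semantics of the text alone.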
Semantic Computing is an emerging research field that has drawn much attention from both academia and industry. It addresses the derivation and matching of semantics of computational "content" where "content" may be anything including text, multimedia, hardware, network, etc. which can be mapped to many areas in Computer Science that involve analyzing and processing the intentions of humans with computational content. This paper discusses some potential applications of Semantic Computing in Computer Science.
Socioeconomic needs combined with technological advances are creating a demand for an increasing number of systems for which high assurance is an essential attribute. These system designs, which increasingly include semantic computing components, span a broad spectrum of applications. They are incredibly diverse and their complexity is growing. The conditions under which these systems operate are such that system faults will occur and must be comprehensively accounted for in the systems' designs.
This article investigates the landscape of high-assurance systems, the challenges these systems face, and what must be done to address those challenges. Challenges include societal needs, scientific frontiers, dynamics of technological evolution, and drivers of current research models.
The ever-increasing amount of information flowing through Social Media presents numerous opportunities for the generation of Business Intelligence. Challenges exist in the leveraging of these data sources due to their heterogeneity and unstructured content. This paper presents the application of Semantic Computing to Social Media for industrial application, focusing on topic identification and behavior prediction. The methodologies described can benefit many areas of an organization including support of marketing, customer service, engineering and public relations. Results demonstrate that business operations can be substantially enhanced through application of Semantic Computing to Social Media.
This paper discusses principles for the design of natural language processing (NLP) systems that automatically extract data from doctors' notes, laboratory results and other medical documents in free-form text. We argue that rather than searching for "atomic units of meaning" in the text and then trying to generalize them to a broader set of documents through an increasingly complicated system of rules, an NLP practitioner should take concepts as a whole, as the meaningful unit of text. This simplifies the rules and makes the NLP system easier to maintain and adapt. The departure point is purely practical; however, a deeper investigation of typical problems in the implementation of such systems leads us to a discussion of the broader linguistic theories underlying NLP practice, such as theories of metaphor and models of human communication.
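The concept-as-a-whole approach can be contrasted with token-level rules in a small sketch. The clinical lexicon below is hypothetical: each concept carries its surface variants, so matching happens at the concept level, and new wording is handled by extending the lexicon rather than by adding new rules.

```python
import re

# Hypothetical concept lexicon: each clinical concept with its surface variants.
CONCEPTS = {
    "blood_pressure": ["blood pressure", "bp", "b.p."],
    "heart_rate": ["heart rate", "hr", "pulse"],
}

def extract_concepts(text):
    """Match whole concepts (via their variants) instead of atomic tokens."""
    found = set()
    low = text.lower()
    for concept, variants in CONCEPTS.items():
        for v in variants:
            # match a variant only at word boundaries, case-insensitively
            if re.search(r"(?<!\w)" + re.escape(v) + r"(?!\w)", low):
                found.add(concept)
                break
    return found
```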
There are challenges educators face in delivering the vast amount of teaching material to students. Using ontologies could solve some of these issues. We survey the different types of ontologies available that could aid educators in teaching students. We discuss how ontologies may help improve the education system for K-12, higher education, curriculum creation, e-learning, etc. We analyze the efficacy of the available ontologies as well as the challenges educators face and how to make improvements.
Biological and medical intelligence (BMI) has been studied in silos, lacking a systematic methodology. In this paper, we describe how Semantic Computing can enhance biological and medical intelligence. Specifically, we show how Structured Natural Language (SNL) can express many problems in BMI with a finite number of sentence patterns, and how biological tools, OLAP, data mining tools and statistical analysis tools may be linked to solve problems related to biomedical data.
One may expect the Internet to evolve from being information centric to knowledge centric. This paper introduces the concept of a Knowledge Society Operating System (KSOS) that allows users to form knowledge societies in which members can search, create, manipulate and connect geographically distributed knowledge resources (including data, documents, tools, people, devices, etc.) based on semantics (“meaning”, “intention”) in order to solve problems of mutual interest. Built on top of the current Internet infrastructure, a KSOS can take advantage of existing resources to enable the use of applications or services through a web browser. This paper discusses some crucial aspects of a KSOS.
This paper presents a method to infer the quality of sprayers based on collected drop-spectrum data and their physical descriptors, which are used to generate a knowledge base to support decision-making in agriculture. The knowledge base is formed by experimental data, obtained in a controlled environment under specific operating conditions, and the semantics used in the spraying process to infer the quality of the application. The electro-hydraulic operating conditions of the sprayer system, which include speed and flow measurements, are used to define experimental tests, calibrate the spray booms and select the nozzle types. Using the Grubbs test and the quantile-quantile plot, an exploratory analysis of the collected data was performed to determine the data's consistency, the deviation of atypical values, the independence between the data of each test, their repeatability and their fit to a normal distribution. By integrating measurements into a knowledge base, it was possible to improve decision-making in relation to the quality of the spraying process, defined in terms of a distribution function. Results show that the use of advanced models and semantic interpretation improved the decision-making processes related to the quality of agricultural sprayers.
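The Grubbs screening step can be sketched as follows. The critical value depends on the sample size and significance level and is normally taken from a t-distribution table (roughly 1.715 for n = 5 at a two-sided α = 0.05); here it is supplied by the caller rather than computed.

```python
from statistics import mean, stdev

def grubbs_statistic(data):
    """Two-sided Grubbs test statistic: G = max |x_i - mean| / s."""
    m, s = mean(data), stdev(data)
    return max(abs(x - m) for x in data) / s

def grubbs_outlier(data, critical):
    """Return the most extreme point if G exceeds the critical value, else None.

    The critical value for the chosen n and alpha is assumed to be supplied
    by the caller (e.g., from a published table or scipy.stats).
    """
    if grubbs_statistic(data) > critical:
        m = mean(data)
        return max(data, key=lambda x: abs(x - m))
    return None
```

In the paper's workflow, points flagged this way would be reviewed before the measurements feed the knowledge base, keeping atypical drop-spectrum values from skewing the inferred spraying quality.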
This paper presents the design process of an embedded stereo vision system, investigating the most relevant criteria for developing hardware and software architectures for plant phenotyping. In other words, this paper is the result of a preliminary study whose main motivation was to evaluate the viability of a low-cost visual system for this field. In addition, the implications for the system design of the adversities of an actual agricultural scenario are presented, since the system should meet not only portability requirements but also the quality and precision requirements of the measurements carried out by the cameras. With such a method, the resulting systems have a high chance of satisfying the set of constraints and of being usable for machine vision in agricultural decision-making processes related to plant architecture and in situ recognition.
As the world becomes more connected and instrumented, high-dimensional, heterogeneous and time-varying data streams are collected and need to be analyzed on the fly to extract actionable intelligence and make timely decisions based on this knowledge. This requires that appropriate classifiers be invoked to process the incoming streams and find the relevant knowledge. Thus, a key challenge becomes choosing online, at run-time, which classifier should be deployed to make the best possible predictions on the incoming streams. In this paper, we survey a class of methods capable of performing online learning in stream-based semantic computing tasks: multi-armed bandits (MABs). Adopting MABs for stream mining poses numerous new challenges and requires many new innovations. Most importantly, the MABs need to explicitly consider and track online the time-varying characteristics of the data streams and to learn quickly what the relevant information is within the vast, heterogeneous and possibly high-dimensional data streams. In this paper, we discuss contextual MAB methods, which use similarities in context (meta-data) information to make decisions, and discuss their advantages when applied to stream mining for semantic computing. These methods can be adapted to discover in real time the relevant contexts guiding the stream mining decisions, and to track the best classifier in the presence of concept drift. Moreover, we also discuss how stream mining of multiple data sources can be performed by deploying cooperative MAB solutions and ensemble learning. We conclude the paper by discussing the numerous other advantages of MABs that will benefit semantic computing applications.
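A minimal version of bandit-based classifier selection, using the classic UCB1 rule rather than the contextual methods the survey focuses on, might look like this; classifier accuracies are simulated as Bernoulli rewards, and the simulation parameters are assumptions for the sketch.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """UCB1 rule: pick the arm (classifier) maximizing mean + exploration bonus."""
    for a in range(len(counts)):
        if counts[a] == 0:
            return a                          # play every arm once first
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
                             + math.sqrt(2 * math.log(t) / counts[a]))

def run_stream(accuracies, horizon=2000, seed=0):
    """Simulate selecting among classifiers with fixed, unknown accuracies.

    A correct prediction yields reward 1, a wrong one reward 0; the bandit
    learns which classifier to deploy without knowing the accuracies.
    """
    rng = random.Random(seed)
    k = len(accuracies)
    counts, rewards = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        a = ucb1_select(counts, rewards, t)
        r = 1.0 if rng.random() < accuracies[a] else 0.0
        counts[a] += 1
        rewards[a] += r
    return counts
```

Contextual variants replace the single mean estimate per arm with per-context estimates (partitioning the meta-data space), which is what allows tracking the best classifier under concept drift.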
Bioinformatics conceptualizes biological processes in terms of genomics and applies computer science (drawing on disciplines such as applied modeling, data mining, machine learning and statistics) to extract knowledge from biological data. This paper introduces working definitions of bioinformatics and its applications and challenges. We also identify the resources that are popular in bioinformatics analysis, review some primary methods used to analyze bioinformatics problems, and review the data mining, semantic computing and deep learning technologies that may be applied in bioinformatics analysis.
There have been an enormous number of publications on cancer research. These unstructured cancer-related articles are of great value for cancer diagnostics, treatment, and prevention. The aim of this study is to introduce a recommendation system. It combines text mining (LDA) and semantic computing (GloVe) to understand the meaning of user needs and to increase the recommendation accuracy.
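The LDA-plus-GloVe blend can be sketched as a weighted combination of two cosine similarities: one over topic distributions (as LDA would provide) and one over embedding centroids (as GloVe would). The weighting `alpha` and the toy vectors are assumptions for the sketch, not the study's tuned values.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query, articles, alpha=0.5, top_k=3):
    """Rank articles by a blend of topic similarity and embedding similarity.

    query and each article are dicts with 'topics' (an LDA-style topic
    distribution) and 'embedding' (a GloVe-style centroid) vectors.
    """
    def score(art):
        return (alpha * cosine(query["topics"], art["topics"])
                + (1 - alpha) * cosine(query["embedding"], art["embedding"]))
    return sorted(articles, key=score, reverse=True)[:top_k]
```

The topic term captures coarse thematic match (e.g., the same cancer subfield), while the embedding term captures finer lexical meaning; blending the two is what lets the system go beyond keyword overlap with the user's query.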