The task of object identification is fundamental to the operation of an autonomous vehicle. It can be accomplished by using techniques based on a Multisensor Fusion framework, which allows the integration of data coming from different sensors. In this paper, an approach to the synergic interpretation of data provided by thermal and visual sensors is proposed. Such integration is justified by the need to resolve the ambiguities that may arise when each sensor's data are interpreted separately.
The architecture of a distributed Knowledge-Based system is described. It performs an Intelligent Data Fusion process by integrating, in an opportunistic way, data acquired with a thermal and a video (b/w) camera. Data integration is performed at various levels of the architecture in order to increase the robustness of the whole recognition process. A priori models allow the system to extract relevant data from both sensors; to transform such data into intermediate symbolic objects; and, finally, to recognize environmental situations on which to perform further processing. Some results are reported for different environmental conditions (i.e. a road scene by day and by night, with and without the presence of obstacles).
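The core idea of reinforcing one sensor's object hypotheses with the other's can be illustrated with a minimal sketch. All names, confidence values, and the noisy-OR combination rule below are illustrative assumptions, not the paper's actual method:

```python
# Minimal sketch of confidence-weighted fusion of object hypotheses from a
# thermal and a visual camera. The combination rule is an assumption for
# illustration, not the system described in the abstract.

def fuse_hypotheses(thermal, visual):
    """Merge per-label confidences from a thermal and a visual camera.

    Each input maps an object label to a confidence in [0, 1]. A label seen
    by both sensors is reinforced; a label seen by a single sensor keeps a
    discounted confidence, reflecting the ambiguity of a lone interpretation.
    """
    fused = {}
    for label in set(thermal) | set(visual):
        t, v = thermal.get(label), visual.get(label)
        if t is not None and v is not None:
            fused[label] = 1 - (1 - t) * (1 - v)  # noisy-OR reinforcement
        else:
            fused[label] = 0.5 * (t if t is not None else v)  # discount
    return fused

# Example: a warm object the b/w camera sees only weakly (e.g. a pedestrian
# in a night road scene) is confirmed by the thermal channel.
fused = fuse_hypotheses({"pedestrian": 0.8}, {"pedestrian": 0.4, "sign": 0.6})
print(fused)
```

In this toy scheme, agreement between the two sensors raises the fused confidence above either individual value, while an unconfirmed hypothesis is penalized rather than discarded.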
In this paper, concepts for goal-oriented reasoning within the blackboard development environment QBB are presented. The architecture of QBB supports the selection of problem-solving actions with respect to the achievement of quality goals. Furthermore, interactions of goals are explicitly taken into account in action selection.
The features of QBB that support goal-oriented reasoning are presented. In particular, it is described how the mutual influence of actions with respect to goal achievement can be explicitly modeled as relationships between actions, the so-called compensation relations. The usefulness of compensation relations has been tested by a goal-oriented modeling of the travelling salesman problem.
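The notion of rating candidate actions against quality goals, with compensation relations discounting an action whose contribution has already been "used up" by another, can be sketched as follows. The data model, goal names, action names, and numeric weights are all hypothetical assumptions for illustration, not QBB's actual interface:

```python
# Minimal sketch of goal-oriented action selection with compensation
# relations, in the spirit of the abstract. All names and numbers are
# illustrative assumptions, not QBB's API or its TSP model.

# Each action's estimated contribution to two hypothetical quality goals.
actions = {
    "greedy_extend": {"solution_quality": 0.3, "speed": 0.9},
    "two_opt_swap":  {"solution_quality": 0.8, "speed": 0.2},
}

# Compensation relation: once the first action has been applied, the second
# action can contribute less toward the named goal (discounted by `factor`).
compensations = [("two_opt_swap", "greedy_extend", "solution_quality", 0.5)]

def rate(action, goal_weights, applied):
    """Weighted goal contribution of an action, discounted by compensations."""
    score = sum(goal_weights[g] * c for g, c in actions[action].items())
    for done, affected, goal, factor in compensations:
        if affected == action and done in applied:
            score -= factor * goal_weights[goal] * actions[action][goal]
    return score

weights = {"solution_quality": 0.7, "speed": 0.3}
print(rate("greedy_extend", weights, applied=set()))            # undiscounted
print(rate("greedy_extend", weights, applied={"two_opt_swap"}))  # compensated
```

The point of the sketch is the control-level decision: the scheduler re-rates pending actions after each step, so an action whose goal contribution is compensated by work already done loses priority.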