Plenary Speakers

 

Role of Information Fusion in the Human-Machine Interface

 
 

Speaker: Belur Dasarathy

Abstract:

An important facet in the deployment of intelligent systems designed for real-world applications is the often-disregarded human-machine interface. For such human-machine interaction to be efficacious, it is advisable to have a matching information fusion capability built into the intelligent system alongside the requisite human interface. The talk addresses the role of multi-sensor and/or multi-source information fusion in the context of an effective human-machine interface, which is critical in many application domains such as robotics, hospital care, and homeland security. Just as humans process and fuse information from the five senses of vision, audition, taction, olfaction, and gustation, it is possible to envisage machines fusing information from multiple sensors and sources. The presentation will include a brief introduction to the field of information fusion and its underlying taxonomies. A sample of applications involving human-machine interaction in the industrial/robotics, security/defense, bio-medical, and other civilian fields will also be touched upon to illustrate the breadth of the application potential of information fusion technologies in this context.
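The abstract stays at the conceptual level; as a purely illustrative sketch (the reliability-weighted sum rule, the names, and the numbers below are assumptions, not material from the talk), decision-level fusion of two sensing modalities can be as simple as combining per-sensor class scores:

def fuse_decisions(sensor_scores, reliabilities):
    """Combine per-sensor class scores into one decision by a
    reliability-weighted sum (one simple decision-level fusion rule)."""
    fused = {}
    for scores, reliability in zip(sensor_scores, reliabilities):
        for label, score in scores.items():
            fused[label] = fused.get(label, 0.0) + reliability * score
    return max(fused, key=fused.get)

# e.g. a camera and a microphone reporting on the same event
print(fuse_decisions(
    [{"person": 0.7, "vehicle": 0.3},   # vision scores
     {"person": 0.4, "vehicle": 0.6}],  # audio scores
    [0.8, 0.5]))                        # per-sensor reliabilities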

 

Signal Fusion Using Novel Weighted Averages

 
 

Speaker: Jerry Mendel

Abstract:

The weighted average is arguably the earliest and most widely used form of signal fusion. By "signal" I mean data, features, decisions, recommendations, etc. Traditionally, the weighted average is limited to numerical values for signals and weights. In this talk I will describe a hierarchy of novel weighted averages that expand the weighted average from numerical values to interval sets that are uniformly distributed (in a non-probabilistic sense), sets that are non-uniformly distributed (i.e., type-1 fuzzy sets), and words that are modeled as interval type-2 fuzzy sets; these extensions apply to the weights and, in some cases, to the signals as well. Because weights appear in both the numerator and the denominator of a weighted average, calculating the weighted average when the weights are sets is challenging: general interval arithmetic cannot be used. Some results from type-1 and interval type-2 fuzzy set theory can be used instead, and I will explain them.
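The abstract notes only that general interval arithmetic cannot be used here. As a minimal illustrative sketch (not code from the talk), the exact interval of a weighted average whose weights are intervals can be found by a switch-point search in the spirit of the Karnik-Mendel procedures:

def interval_weighted_average(x, w_lo, w_hi):
    """Exact interval of sum(w_i * x_i) / sum(w_i) when each weight w_i
    may lie anywhere in [w_lo[i], w_hi[i]] (all weights positive).

    The extrema are attained with every weight at an endpoint and a single
    switch point in the x-sorted order, so enumerating all switch points
    is sufficient.
    """
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    lo = [w_lo[i] for i in order]
    hi = [w_hi[i] for i in order]
    n = len(xs)

    def avg(weights):
        return sum(w * v for w, v in zip(weights, xs)) / sum(weights)

    # left endpoint: large weights on small signals, small weights on large ones
    y_l = min(avg(hi[:k] + lo[k:]) for k in range(n + 1))
    # right endpoint: small weights on small signals, large weights on large ones
    y_r = max(avg(lo[:k] + hi[k:]) for k in range(n + 1))
    return y_l, y_r

# example: three signal values whose weights are only known as intervals
print(interval_weighted_average([1.0, 5.0, 9.0], [0.2, 0.5, 0.1], [0.6, 0.9, 0.4]))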

When signals and weights are modeled as type-1 fuzzy sets, the resulting average is called the Fuzzy Weighted Average (FWA), which first appeared in 1987. When signals and weights are modeled as interval type-2 fuzzy sets, the resulting average is called the Linguistic Weighted Average (LWA), which first appeared in 2006. The LWA is one instantiation of Lotfi Zadeh's 1996 Computing With Words paradigm. It lets signals and weights be described using a vocabulary of words, each of which has been pre-modeled as an interval type-2 fuzzy set. So, for the first time, signal fusion can be accomplished using words alone, by means of the LWA. Hopefully, the LWA will provide a new way for the attendees of this conference to think about signal fusion.
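The abstract does not spell out how the Fuzzy Weighted Average is computed; a standard alpha-cut formulation (the symbols here are assumed, not taken from the talk) reduces it to the interval computation sketched above:

Y(\alpha) = \left[ \min_{w_i \in [c_i(\alpha),\, d_i(\alpha)]} \frac{\sum_{i=1}^{n} a_i(\alpha)\, w_i}{\sum_{i=1}^{n} w_i},\;\; \max_{w_i \in [c_i(\alpha),\, d_i(\alpha)]} \frac{\sum_{i=1}^{n} b_i(\alpha)\, w_i}{\sum_{i=1}^{n} w_i} \right],

where [a_i(\alpha), b_i(\alpha)] and [c_i(\alpha), d_i(\alpha)] are the alpha-cuts of the type-1 fuzzy signals and weights. Each endpoint is itself an interval weighted average, so the switch-point search above is applied once per alpha level.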

 

Sensor Data Fusion and Integration for Visual Guidance of Autonomous Vehicles

 
 

Speaker: Ernst D. Dickmanns

Abstract:

A survey is given of the state of the art in dynamic vision for perceiving the environment to be traversed safely under autonomous guidance and control of ground vehicles. Recursive estimation and its extension to the 4-D approach to dynamic vision are detailed. Sensor data fusion takes place in several ways, corresponding to the characteristics of the measurement modalities and their properties. Examples discussed will be: inertial gaze stabilization in a 'physical' and a 'mental' mode; motion stereo through proper use of odometry, vision, and direct range measurements by radar (or lidar); and, finally, visual interpretation of multiple video data streams from sets of multi-focal cameras, including binocular real-time stereo for detecting and avoiding negative obstacles such as a ditch in a grass surface to be crossed. Experimental results with the test vehicles VaMoRs (a 5-ton van) and VaMP (a Mercedes 500 SEL) will be demonstrated by video sequences.
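As a purely illustrative sketch of the recursive estimation mentioned above (the scenario, noise values, and function below are assumptions, not the 4-D approach itself), a one-dimensional Kalman filter can fuse odometry with direct range measurements to track the distance to a static object ahead of the vehicle:

def fuse_range(odometry_steps, range_measurements, r0, q=0.05, r=1.0):
    """Minimal 1-D recursive estimator (Kalman filter) for the range to a
    static object: odometry drives the prediction, a direct range sensor
    (e.g. radar or lidar) drives the update."""
    x, p = r0, 10.0                      # initial range estimate and variance
    estimates = []
    for dx, z in zip(odometry_steps, range_measurements):
        x, p = x - dx, p + q             # predict: vehicle advanced by dx
        k = p / (p + r)                  # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p   # update with measurement z
        estimates.append(x)
    return estimates

# e.g. a vehicle advancing 1 m per step towards an object first seen at ~20 m
print(fuse_range([1.0] * 5, [19.2, 18.1, 17.3, 15.9, 15.1], r0=20.0))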

 