
  • Article

    Detection of Airborne Collision-Course Targets for Sense and Avoid on Unmanned Aircraft Systems Using Machine Vision Techniques

    Unmanned Systems, 01 Oct 2016

    Detecting collision-course targets in aerial scenes from purely passive optical images is challenging for a vision-based sense-and-avoid (SAA) system. Proposed herein is a processing pipeline for detecting and evaluating collision-course targets in airborne imagery using machine vision techniques. An evaluation of eight feature detectors and three spatio-temporal visual cues is presented. Performance metrics for comparing feature detectors include the percentage of detected targets (PDT), the percentage of false positives (PFP) and the range at earliest detection (Rdet). Contrast- and motion-based visual cues are evaluated against standard models and expected spatio-temporal behavior. The analysis is conducted on a multi-year database of imagery captured during actual airborne collision-course flights flown at the National Research Council of Canada. Datasets from two intruder aircraft, a Bell 206 rotorcraft and a Harvard Mark IV fixed-wing trainer, were compared for accuracy and robustness. Results indicate that the features from accelerated segment test (FAST) detector shows the most promise, as it maximizes the range at earliest detection while minimizing false positives. Temporal trends from visual cues analyzed on the same datasets are indicative of collision-course behavior. Robustness of the cues was established across collision geometries, intruder aircraft types, illumination conditions, seasonal environmental variations and scene clutter.
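    The FAST segment test that this abstract favors can be sketched compactly. The following is a minimal brute-force illustration of the FAST-9 variant in pure NumPy, with an illustrative threshold and a synthetic frame; it is not the paper's detection pipeline, whose parameters and pre-/post-processing are not given in the abstract.

    ```python
    import numpy as np

    # Bresenham circle of radius 3: the 16-pixel ring FAST compares against
    # the candidate pixel, listed as (dx, dy) offsets in clockwise order.
    RING = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

    def fast_corners(img, t=40, arc=9):
        """Return (row, col) pixels passing the FAST-9 segment test: at least
        `arc` contiguous ring pixels all brighter than centre + t or all
        darker than centre - t. Brute-force reference version, not optimized."""
        img = img.astype(np.int16)
        h, w = img.shape
        corners = []
        for y in range(3, h - 3):
            for x in range(3, w - 3):
                c = img[y, x]
                ring = np.array([img[y + dy, x + dx] for dx, dy in RING])
                for mask in (ring > c + t, ring < c - t):
                    wrapped = np.concatenate([mask, mask])  # circular arcs
                    run = best = 0
                    for hit in wrapped:
                        run = run + 1 if hit else 0
                        best = max(best, run)
                    if best >= arc:
                        corners.append((y, x))
                        break
        return corners

    # Toy frame: uniform bright "sky" with a small dark intruder-sized blob.
    sky = np.full((40, 40), 200, dtype=np.uint8)
    sky[18:21, 18:21] = 0
    print(fast_corners(sky)[:3])  # responses cluster around the blob
    ```

    On such a frame the detector fires only near the blob, which mirrors why FAST suits this application: a distant intruder appears as a small high-contrast point against relatively uniform sky.
    
    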

  • Article

    DFTNet: Dual Flow Transformer Network for Conveyor Belt Edge Detection

    Unmanned Systems, 10 Apr 2023

    Traditional conveyor belt edge detection methods face a trade-off: contact-based methods are costly, while noncontact methods have low precision. Methods based on convolutional neural networks are further limited by the local nature of the convolution operation itself, giving insufficient perception of long-distance and global information. To solve these problems, a dual flow transformer network (DFTNet) integrating global and local information is proposed for belt edge detection. DFTNet improves belt edge detection accuracy and suppresses the interference of belt image noise. In this paper, the authors merge the traditional convolutional neural network's strength in extracting local features with the transformer structure's ability to perceive global, long-distance information. The fusion block is designed as a dual flow encoder-decoder structure, which better integrates global context information and avoids the drawbacks of a transformer structure pretrained on large datasets; the structure of the fusion block is also flexible and adjustable. Extensive experiments on a conveyor belt dataset show that DFTNet effectively balances accuracy and efficiency and achieves the best overall performance on belt edge detection tasks, outperforming fully convolutional methods. Its processing rate of 53.07 frames per second meets industrial real-time requirements, and DFTNet handles belt edge detection across a variety of scenarios, giving it great practical value.
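    The abstract does not give DFTNet's internals, but the general dual-flow idea it describes, a local convolutional branch fused with a global self-attention branch, can be sketched as a toy NumPy illustration. All dimensions, the box-filter stand-in for convolution, the random projection weights, and the blending fusion are assumptions for illustration, not the authors' architecture.

    ```python
    import numpy as np

    def local_branch(x, k=3):
        """Local flow: a k-by-k box filter stands in for a convolution layer,
        whose receptive field is confined to a small neighbourhood."""
        pad = np.pad(x, k // 2, mode="edge")
        out = np.empty_like(x, dtype=float)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                out[i, j] = pad[i:i + k, j:j + k].mean()
        return out

    def global_branch(x, d=8, seed=0):
        """Global flow: single-head self-attention over every pixel as a
        token, so each output position can attend to the entire image."""
        rng = np.random.default_rng(seed)
        tokens = x.reshape(-1, 1).astype(float)            # (N, 1) scalar tokens
        Wq, Wk, Wv = (rng.normal(size=(1, d)) for _ in range(3))
        q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv    # (N, d) each
        scores = q @ k.T / np.sqrt(d)                      # (N, N) all-pairs
        scores -= scores.max(axis=1, keepdims=True)        # softmax stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)
        return (attn @ v).mean(axis=1).reshape(x.shape)    # back to a 2D map

    def dual_flow(x, alpha=0.5):
        """Fusion sketch: blend the local and global feature maps."""
        return alpha * local_branch(x) + (1 - alpha) * global_branch(x)

    # Toy "belt image": a vertical step edge between belt and background.
    belt = np.zeros((8, 8))
    belt[:, 4:] = 1.0
    fused = dual_flow(belt)
    print(fused.shape)  # (8, 8)
    ```

    The design point the sketch makes concrete is the complementarity the paper exploits: the attention matrix is N-by-N, so the global branch relates every pixel to every other regardless of distance, while the convolutional branch captures fine local structure at far lower cost.
    
    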