
Searching for the phrase "Vision-Based Navigation" by criterion: Subject


Displaying 1-4 of 4
Title:
Inertial navigation position and orientation estimation with occasional Galileo satellite position fixes and stereo camera measurements
Authors:
Vepa, R.
Petrakou, K.
Links:
https://bibliotekanauki.pl/articles/320500.pdf
Publication date:
2012
Publisher:
Polskie Forum Nawigacyjne
Subjects:
vision-based navigation
inertial navigation system (INS)
Galileo
GNSS
Description:
In this paper an adaptive, unscented-Kalman-filter-based mixing filter is used to integrate a kinematic, satellite-aided inertial navigation system with vision-based measurements of five representative points on a runway, in a modern receiver that incorporates carrier-phase smoothing and ambiguity resolution. Using high-resolution measurements of five points on the runway from multiple stereo cameras, in addition to a set of typical pseudo-range estimates obtainable from a satellite navigation system such as GPS or another GNSS equipped with a carrier-phase receiver, the feasibility of generating high-precision estimates of the typical outputs of an inertial navigation system is demonstrated. The methodology may be developed as a stand-alone system or employed in conjunction with a traditional strapdown inertial navigation system for purposes of initial alignment. Moreover, the feasibility of employing adaptive mixing was explored, as it opens up the possibility of using the system to develop a vision-based automatic landing controller.
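The mixing idea in this abstract can be sketched as a single unscented measurement update: an INS-predicted position prior is corrected with one nonlinear pseudo-range measurement without linearizing the range model. This is a minimal illustration, not the paper's adaptive filter; the beacon position, noise values and `kappa` parameter are illustrative assumptions.

```python
import numpy as np

def unscented_update(x, P, z, h, R, kappa=1.0):
    """One UKF measurement update: mix the prior (x, P) with a scalar
    measurement z = h(x) + noise, propagating sigma points through h."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * P)       # sigma-point spread
    X = np.vstack([x, x + L.T, x - L.T])          # 2n+1 sigma points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))     # sigma-point weights
    w[0] = kappa / (n + kappa)
    Z = np.array([h(s) for s in X])               # nonlinear measurement model
    zhat = w @ Z                                  # predicted measurement
    Pzz = w @ (Z - zhat) ** 2 + R                 # innovation variance
    Pxz = (w[:, None] * (X - x)).T @ (Z - zhat)   # state-measurement covariance
    K = Pxz / Pzz                                 # Kalman gain
    return x + K * (z - zhat), P - np.outer(K, K) * Pzz

# INS-predicted 2D position prior, corrected by one pseudo-range to a
# hypothetical beacon (a stand-in for a satellite range measurement).
beacon = np.array([10.0, 0.0])
h = lambda p: np.linalg.norm(p - beacon)
x_prior, P_prior = np.array([0.0, 0.0]), 4.0 * np.eye(2)
true_pos = np.array([1.0, 0.5])
x_post, P_post = unscented_update(x_prior, P_prior, h(true_pos), h, R=0.01)
```

The update pulls the estimate along the observable (range) direction and shrinks the covariance there, which is the mixing behaviour such a filter exploits when occasional satellite fixes arrive between inertial propagation steps.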
Source:
Annual of Navigation; 2012, No. 19, part 2; 131-153
1640-8632
Appears in:
Annual of Navigation
Content provider:
Biblioteka Nauki
Article
Title:
Low-cost navigation and guidance systems for Unmanned Aerial Vehicles. Part 1: Vision-based and integrated sensors
Authors:
Sabatini, R.
Bartel, C.
Kaharkar, A.
Shaid, T.
Rodriguez, L.
Zammit-Mangion, D.
Jia, H.
Links:
https://bibliotekanauki.pl/articles/320426.pdf
Publication date:
2012
Publisher:
Polskie Forum Nawigacyjne
Subjects:
Vision-Based Navigation
integrated navigation system
MEMS Inertial Measurement Unit
unmanned aerial vehicle
Low-cost Navigation Sensors
Description:
In this paper we present a new low-cost navigation system for small Unmanned Aerial Vehicles (UAVs) based on Vision-Based Navigation (VBN) and other avionics sensors. The main objective of our research was to design a compact, light and relatively inexpensive system capable of providing the Required Navigation Performance (RNP) in all phases of flight of a small UAV, with a special focus on precision approach and landing, where VBN techniques can be fully exploited in a multisensor integrated architecture. Various existing techniques for VBN were compared and the Appearance-Based Approach (ABA) was selected for implementation. Feature-extraction and optical-flow techniques were employed to estimate flight parameters such as roll angle, pitch angle, deviation from the runway and body rates. Additionally, we addressed the possible synergies between VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU (Micro-Electromechanical System Inertial Measurement Unit) sensors, as well as aiding from Aircraft Dynamics Models (ADMs). In particular, by employing these sensors/models we aimed to compensate for the shortcomings of VBN and MEMS-IMU sensors in high-dynamics attitude determination tasks. An Extended Kalman Filter (EKF) was developed to fuse the information provided by the different sensors and to provide real-time estimates of the position, velocity and attitude of the UAV platform. Two different integrated navigation system architectures were implemented. The first used VBN at 20 Hz and GPS at 1 Hz to augment the MEMS-IMU running at 100 Hz. The second mode also included the ADM (with computations performed at 100 Hz) to augment the attitude channel.
These two modes were simulated over a significant portion of the AEROSONDE UAV operational flight envelope, performing a variety of representative manoeuvres (straight climb, level turn, turning descent and climb, straight descent, etc.). Simulation of the first architecture (VBN/IMU/GPS) showed that the integrated system can reach position, velocity and attitude accuracies compatible with CAT-II precision approach requirements. Simulation of the second architecture (VBN/IMU/GPS/ADM) also showed promising results, since the attitude accuracy achieved with ADM/VBN/IMU was higher than with VBN/IMU alone. However, due to rapid divergence of the ADM virtual sensor, frequent re-initialisation of the ADM data module was needed, the frequency of which depended strongly on the UAV flight dynamics and the specific manoeuvring transitions performed.
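The multi-rate fusion scheme described above (inertial propagation at 100 Hz, vision-based position fixes at 20 Hz, GPS at 1 Hz) can be sketched on a one-dimensional position/velocity state with a linear Kalman filter. The noise levels, rates and trajectory below are illustrative assumptions, not the paper's tuned EKF.

```python
import numpy as np

dt = 0.01                                     # IMU propagation step (100 Hz)
F = np.array([[1.0, dt], [0.0, 1.0]])         # position/velocity transition
G = np.array([0.5 * dt ** 2, dt])             # acceleration input mapping
Q = np.diag([1e-6, 1e-4])                     # process noise (tuning assumption)
H = np.array([[1.0, 0.0]])                    # both aids observe position only

def update(x, P, z, r):
    """Standard Kalman position update with measurement variance r."""
    S = (H @ P @ H.T + r).item()              # innovation variance
    K = (P @ H.T / S).ravel()                 # Kalman gain
    x = x + K * (z - x[0])
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return x, P

rng = np.random.default_rng(1)
x, P = np.array([0.0, 0.0]), np.eye(2)        # state estimate and covariance
true_pos, true_vel = 0.0, 1.0                 # straight-and-level ground truth

for k in range(500):                          # 5 s of flight at the IMU rate
    true_pos += true_vel * dt
    accel = rng.normal(0.0, 0.05)             # noisy accelerometer (zero mean)
    x = F @ x + G * accel                     # 100 Hz inertial propagation
    P = F @ P @ F.T + Q
    if k % 5 == 0:                            # 20 Hz vision-based position fix
        x, P = update(x, P, true_pos + rng.normal(0.0, 0.2), 0.2 ** 2)
    if k % 100 == 0:                          # 1 Hz GPS position fix
        x, P = update(x, P, true_pos + rng.normal(0.0, 1.0), 1.0 ** 2)
```

The same prediction step runs at the sensor with the highest rate, and each slower aid simply contributes a correction whenever its sample arrives; this is the scheduling pattern behind the VBN/IMU/GPS architecture.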
Source:
Annual of Navigation; 2012, No. 19, part 2; 71-98
1640-8632
Appears in:
Annual of Navigation
Content provider:
Biblioteka Nauki
Article
Title:
Vision-Based Mobile Robot Navigation
Authors:
Berrabah, S. A.
Colon, E.
Links:
https://bibliotekanauki.pl/articles/384895.pdf
Publication date:
2008
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
robot navigation
vision-based SLAM
local and global mapping
adaptive fuzzy control
Description:
This paper presents a vision-based navigation system for mobile robots. It enables the robot to build a map of its environment, localize itself efficiently without the use of any artificial markers or other modifications, and navigate without colliding with obstacles. The Simultaneous Localization And Mapping (SLAM) procedure builds a global representation of the environment from several size-limited local maps built using the approach introduced by Davison [1]. Two methods for building the global map are presented: the first transforms each local map into a global frame before starting to build a new local map, while in the second the global map consists only of the set of robot positions at which new local maps are started (i.e., the base references of the local maps). In both methods, the base frame for the global map is the robot position at the initial instant. Based on the estimated map and its global position, the robot can find a path and navigate to a goal defined by the user without colliding with obstacles. Moving objects in the scene are detected and their motion is estimated using a combination of a Gaussian Mixture Model (GMM) background-subtraction approach and a Maximum a Posteriori Markov Random Field (MAP-MRF) framework [2]. Experimental results in real scenes are presented to illustrate the effectiveness of the proposed method.
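The background-subtraction step used for moving-object detection can be illustrated with its single-Gaussian simplification: each pixel keeps a running mean and variance of the background, and pixels far from that model are flagged as foreground. The learning rate, threshold and synthetic frames below are illustrative assumptions; the paper uses a full mixture of Gaussians per pixel.

```python
import numpy as np

rng = np.random.default_rng(2)

H, W = 16, 16
alpha = 0.05                   # background learning rate (assumption)
k = 3.0                        # foreground threshold, in standard deviations

mean = np.full((H, W), 100.0)  # per-pixel background mean (initial guess)
var = np.full((H, W), 25.0)    # per-pixel background variance

def segment(frame, mean, var):
    """Flag pixels far from the background model, then update the model
    only where the pixel matched the background."""
    d2 = (frame - mean) ** 2
    fg = d2 > (k ** 2) * var   # Mahalanobis-style test per pixel
    bg = ~fg
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * (d2[bg] - var[bg])
    return fg

# Static background with sensor noise, plus a bright object sweeping
# one column per frame across the image.
for t in range(50):
    frame = 100.0 + rng.normal(0.0, 3.0, size=(H, W))
    frame[4:8, t % W] = 200.0
    fg = segment(frame, mean, var)
```

Excluding foreground pixels from the model update is what keeps a slowly moving object from being absorbed into the background, at the cost of needing a mechanism (the mixture components in a full GMM) to adapt to genuine scene changes.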
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2008, 2, 4; 7-13
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Dense 3D Structure and Motion Estimation as an Aid for Robot Navigation
Authors:
De Cubber, G.
Links:
https://bibliotekanauki.pl/articles/384899.pdf
Publication date:
2008
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
outdoor mobile robots
behavior-based control
stereo vision and image motion analysis for robot navigation
modular control and software architecture (MCA)
Description:
Three-dimensional scene reconstruction is an important tool in many applications, ranging from computer graphics to mobile robot navigation. In this paper we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated; for this, the camera (or robot) motion parameters are iteratively estimated by reconstructing the epipolar geometry. Second, a dense depth map is calculated by fusing sparse depth information from point features with dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model from which information is extracted for the robot navigation algorithm. Here we present an integrated approach to the structure and egomotion estimation problem.
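The egomotion step — recovering the epipolar geometry from point correspondences — can be sketched with the classical linear eight-point algorithm on noise-free synthetic data. The camera poses and point cloud below are illustrative assumptions; a real pipeline would add coordinate normalization, RANSAC for outliers, and pose recovery from the essential matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: 3D points in front of both cameras (identity intrinsics).
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))

# Second camera pose: small yaw rotation R and translation t, with
# points expressed in camera 2 as X @ R.T + t.
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.5, 0.1, 0.0])

project = lambda P: P[:, :2] / P[:, 2:3]          # pinhole projection
x1 = project(X)                                   # view 1 (camera at origin)
x2 = project(X @ R.T + t)                         # view 2

# Eight-point algorithm: each correspondence gives one linear equation in
# the 9 entries of the essential matrix E, from x2^T E x1 = 0.
u1, v1 = x1[:, 0], x1[:, 1]
u2, v2 = x2[:, 0], x2[:, 1]
A = np.stack([u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2,
              u1, v1, np.ones_like(u1)], axis=1)
E = np.linalg.svd(A)[2][-1].reshape(3, 3)         # null-space solution

# Project onto the rank-2 set of valid essential matrices.
U, S, Vt = np.linalg.svd(E)
E = U @ np.diag([S[0], S[1], 0.0]) @ Vt

# Epipolar residuals |x2^T E x1| should vanish for noise-free data.
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])
residuals = np.abs(np.einsum('ij,jk,ik->i', h2, E, h1))
```

Decomposing the recovered E (via its SVD) yields the rotation and the translation direction, which is the iterative motion-parameter estimation the abstract refers to; translation scale remains unobservable from two views alone.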
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2008, 2, 4; 14-18
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article