
You are searching for the phrase "vision based control" by criterion: Subject


Showing 1-4 of 4
Title:
Design of a vision‐based autonomous turret
Authors:
Louali, Rabah
Negadi, Djilali
Hamadouche, Rabah
Nemra, Abdelkrim
Links:
https://bibliotekanauki.pl/articles/27314239.pdf
Publication date:
2022
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
autonomous turret
stepper motor
DC motor
vision based control
Tracking‐Learning‐Detection algorithm
TLD algorithm
Kalman based visual tracking
Description:
This article describes the hardware and software design of a vision-based autonomous turret system. A two-degree-of-freedom (2-DOF) turret platform is designed to carry a cannon equipped with an embedded camera and actuated by stepper motors or direct-current motors. The turret system includes a central calculator running a visual detection and tracking solution, and a microcontroller responsible for actuator control. The Tracking-Learning-Detection (TLD) algorithm is implemented for target detection and tracking. Furthermore, a Kalman filter is implemented to continue tracking in case of occlusion. The performance of the designed turret, in terms of response time, accuracy, and the execution time of its main tasks, is evaluated. In addition, an experimental scenario was performed for real-time autonomous detection and tracking of a moving target.
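The occlusion-handling idea the abstract mentions can be sketched as follows. This is a minimal illustration, not the authors' implementation: a constant-velocity Kalman filter predicts the target's image position so tracking can coast while the detector loses the target. The state layout, noise values, and the simulated motion are all illustrative assumptions.

```python
import numpy as np

# State x = [px, py, vx, vy] (pixel position and velocity);
# only position is measured by the visual detector.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observe position only
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 1.0                         # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Target moves +2 px/frame in x; the detector is "occluded" from frame 5 on.
x, P = np.zeros(4), np.eye(4)
for t in range(1, 10):
    x, P = predict(x, P)
    if t < 5:                               # detector still sees the target
        x, P = update(x, P, np.array([2.0 * t, 0.0]))
    # during occlusion (t >= 5) the filter coasts on its velocity estimate
print(round(x[0], 1))                       # predicted x keeps advancing
```

During the coasting frames the predicted position continues along the learned velocity, which is what lets the turret keep pointing at a briefly occluded target.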
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2022, 16, 4; 72-77
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Stereovision Safety System for Identifying Workers’ Presence: Results of Tests
Authors:
Grabowski, A.
Jankowski, J.
Dźwiarek, M.
Kosiński, R. A.
Links:
https://bibliotekanauki.pl/articles/89921.pdf
Publication date:
2014
Publisher:
Centralny Instytut Ochrony Pracy
Subjects:
human-machine interaction
neural networks
safety control
vision-based protective devices
human-machine interface
occupational safety control
protective measures
human-machine interaction
neural networks
Description:
This article presents the results of extensive tests of a stereovision safety system performed using real and artificial images. A vision-based protective device (VBPD) analyses images from two cameras to calculate the position of a worker and of the moving parts of a machine (e.g., an industrial robot's arm). Experiments show that the stereovision safety system works properly in real time even when subjected to rapid changes in illumination level. Experiments performed with a functional model of an industrial robot indicate that this safety system can be used to detect dangerous situations at workstations equipped with a robot, in human-robot cooperation. Computer-generated artificial images of a workplace simplify and accelerate testing procedures, and make it possible to compare the effectiveness of VBPDs and other protective devices at no additional cost.
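The geometric principle a two-camera protective device relies on can be stated in a few lines. This is a generic sketch of stereo triangulation, not the paper's system: for rectified cameras, the disparity d between matched pixels gives depth Z = f·B/d, where f is the focal length in pixels and B the baseline in metres. The numeric values below are illustrative assumptions.

```python
# Depth from stereo disparity for a rectified camera pair.
def depth_from_disparity(d_px, focal_px=700.0, baseline_m=0.12):
    if d_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / d_px  # Z = f * B / d

# A point matched with 28 px disparity sits 3 m from the rig
# (with the assumed f = 700 px and B = 0.12 m).
z = depth_from_disparity(28.0)
print(round(z, 2))  # → 3.0
```

Comparing such per-point depths of the worker and of the machine parts against a safety threshold is the basic test a VBPD performs.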
Source:
International Journal of Occupational Safety and Ergonomics; 2014, 20, 1; 103-109
1080-3548
Appears in:
International Journal of Occupational Safety and Ergonomics
Content provider:
Biblioteka Nauki
Article
Title:
Dense 3D Structure and Motion Estimation as an Aid for Robot Navigation
Authors:
De Cubber, G.
Links:
https://bibliotekanauki.pl/articles/384899.pdf
Publication date:
2008
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
outdoor mobile robots
behavior-based control
stereo vision and image motion analysis for robot navigation
modular control and software architecture (MCA)
Description:
Three-dimensional scene reconstruction is an important tool in many applications, ranging from computer graphics to mobile robot navigation. In this paper, we focus on the robotics application, where the goal is to estimate the 3D rigid motion of a mobile robot and to reconstruct a dense three-dimensional scene representation. The reconstruction problem can be subdivided into a number of subproblems. First, the egomotion has to be estimated. For this, the camera (or robot) motion parameters are iteratively estimated by reconstruction of the epipolar geometry. Secondly, a dense depth map is calculated by fusing sparse depth information from point features and dense motion information from the optical flow in a variational framework. This depth map corresponds to a point cloud in 3D space, which can then be converted into a model to extract information for the robot navigation algorithm. Here, we present an integrated approach for the structure and egomotion estimation problem.
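The epipolar-geometry step mentioned above rests on one algebraic constraint, sketched here with made-up motion values (this is the textbook relation, not the paper's estimator): for normalized image points x1, x2 of the same 3D point seen before and after a rigid motion (R, t), the essential matrix E = [t]ₓR satisfies x2ᵀ E x1 = 0.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]], dtype=float)

# Illustrative camera motion: a small yaw rotation plus a translation.
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.3, 0.0, 1.0])          # translation (scale is unobservable)
E = skew(t) @ R                         # essential matrix

X = np.array([0.5, -0.2, 4.0])          # a 3D point in the first camera frame
x1 = X / X[2]                           # normalized image point, camera 1
Xc2 = R @ X + t                         # same point in the second camera frame
x2 = Xc2 / Xc2[2]                       # normalized image point, camera 2
print(abs(x2 @ E @ x1) < 1e-9)          # → True (epipolar constraint holds)
```

In egomotion estimation this relation is used in reverse: point correspondences (x1, x2) are plugged into the constraint to solve for E, from which R and t are recovered.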
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2008, 2, 4; 14-18
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Vision-Based Mobile Robot Navigation
Authors:
Berrabah, S. A.
Colon, E.
Links:
https://bibliotekanauki.pl/articles/384895.pdf
Publication date:
2008
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
robot navigation
vision-based SLAM
local and global mapping
adaptive fuzzy control
Description:
This paper presents a vision-based navigation system for mobile robots. It enables the robot to build a map of its environment, localize itself efficiently without the use of any artificial markers or other modifications, and navigate without colliding with obstacles. The Simultaneous Localization And Mapping (SLAM) procedure builds a global representation of the environment based on several size-limited local maps built using the approach introduced by Davison [1]. Two methods for building the global map are presented: the first consists in transforming each local map into a global frame before starting to build a new local map, while in the second, the global map consists only of a set of robot positions where new local maps are started (i.e., the base references of the local maps). In both methods, the base frame for the global map is the robot position at instant . Based on the estimated map and its global position, the robot can find a path and navigate to a goal defined by the user without colliding with obstacles. Moving objects in the scene are detected and their motion is estimated using a combination of a Gaussian Mixture Model (GMM) background subtraction approach and a Maximum a Posteriori Probability Markov Random Field (MAP-MRF) framework [2]. Experimental results in real scenes are presented to illustrate the effectiveness of the proposed method.
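The background-subtraction step can be illustrated with a deliberately simplified model: a single running Gaussian per pixel rather than the full mixture the paper uses. A pixel is flagged as foreground (a moving object) when it deviates from its running mean by more than k standard deviations; the learning rate, threshold, and image values below are illustrative assumptions.

```python
import numpy as np

alpha, k = 0.05, 2.5                    # learning rate and threshold (assumed)

def update_model(mu, var, frame):
    """One background-subtraction step: classify, then adapt the model."""
    fg = np.abs(frame - mu) > k * np.sqrt(var)       # foreground mask
    d = frame - mu
    mu = mu + alpha * d * ~fg                        # adapt background pixels only
    var = np.maximum(1e-3, var + alpha * (d**2 - var) * ~fg)
    return mu, var, fg

# 4x4 static background at intensity 100; a moving object appears at (1, 1).
mu = np.full((4, 4), 100.0)
var = np.full((4, 4), 4.0)
frame = mu.copy()
frame[1, 1] = 180.0                                  # object pixel
mu, var, fg = update_model(mu, var, frame)
print(int(fg.sum()), bool(fg[1, 1]))                 # → 1 True
```

The full GMM approach keeps several Gaussians per pixel so that multimodal backgrounds (flickering lights, swaying foliage) are modelled; the MAP-MRF stage then regularizes the raw per-pixel mask spatially.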
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2008, 2, 4; 7-13
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article