dc.contributor: Mario Fernando Montenegro Campos
dc.contributor: Guilherme Augusto Silva Pereira
dc.contributor: Arnaldo de Albuquerque Araujo
dc.contributor: Alexei Manso Correa Machado
dc.contributor: Luiz Marcos Garcia Gonçalves
dc.creator: Jose Luiz de Souza Pio
dc.date.accessioned: 2019-08-10T23:10:33Z
dc.date.accessioned: 2022-10-03T23:37:19Z
dc.date.available: 2019-08-10T23:10:33Z
dc.date.available: 2022-10-03T23:37:19Z
dc.date.created: 2019-08-10T23:10:33Z
dc.date.issued: 2006-02-10
dc.identifier: http://hdl.handle.net/1843/RVMR-6QGJSQ
dc.identifier.uri: http://repositorioslatinoamericanos.uchile.cl/handle/2250/3825522
dc.description.abstract: Surveillance and security systems based on visual sensors are now commonplace. International terrorism and the growth of urban violence have spurred new applications of computer vision, and such systems can be found in international ports, airports, and the train and subway stations of every large urban center. A common approach relies on analog closed-circuit television, with scene analysis, control, and decision making centered on a human operator. Modern computer-vision-based surveillance systems, however, can integrate the views of many cameras into a single, consistent scene representation. This thesis addresses the problem of multi-camera target observation, which we call Cooperative Dynamic Observation: finding the camera poses that make it possible to observe moving targets of interest and their trajectories, based on visual information shared among the cameras. In this problem, observation means target identification and tracking; dynamic refers to the cameras' ability to move, provided by mobile robots with navigation and positioning capabilities; and cooperation refers to self-organized camera positioning based on visual information shared over a communication network. To address the problem, we developed a framework that determines camera poses from visual information acquired and shared across the camera network. The framework comprises three main modules: the tracker, the camera position planner, and the target/trajectory association module. The tracker is based on a distributed particle filter that fuses, in real time, the targets' motion and color-based visual information. Position planning is performed through an observation function subject to optical and environmental constraints. Target trajectories across cameras with disjoint fields of view are matched by an adaptation of the classical Expectation-Maximization (EM) algorithm. The robustness of the framework is analyzed and tested in real experiments following a systematic experimental protocol developed for this purpose.
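The abstract's tracker module fuses target motion with color cues in a distributed particle filter. The following is a minimal, hypothetical sketch of one predict/update step of such a color-weighted particle filter, not the thesis implementation; all function names, parameter values, and the random-walk motion model are illustrative assumptions.

# Sketch (assumed, not from the thesis): color-weighted particle filter step.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 500
MOTION_NOISE = 2.0    # pixels of random-walk jitter per step (assumption)
COLOR_BINS = 8        # histogram bins per color channel (assumption)

def color_histogram(patch, bins=COLOR_BINS):
    """Normalized per-channel color histogram of an image patch (H, W, 3)."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / (hist.sum() + 1e-9)

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def pf_step(particles, weights, frame, ref_hist, patch=16):
    """Diffuse particle positions, then reweight each particle by the color
    similarity between its image patch and the target's reference histogram."""
    particles = particles + rng.normal(0.0, MOTION_NOISE, particles.shape)
    h, w = frame.shape[:2]
    for i, (x, y) in enumerate(particles):
        x0 = int(np.clip(x, 0, w - patch))
        y0 = int(np.clip(y, 0, h - patch))
        hist = color_histogram(frame[y0:y0 + patch, x0:x0 + patch])
        weights[i] *= np.exp(-20.0 * (1.0 - bhattacharyya(hist, ref_hist)))
    weights /= weights.sum() + 1e-12
    # Resample when the effective sample size gets low.
    if 1.0 / np.sum(weights ** 2) < N_PARTICLES / 2:
        idx = rng.choice(N_PARTICLES, N_PARTICLES, p=weights)
        particles = particles[idx]
        weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

# Illustrative use with a synthetic frame and reference patch:
frame = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
ref_hist = color_histogram(frame[100:116, 150:166])
particles = rng.uniform([0.0, 0.0], [320.0, 240.0], (N_PARTICLES, 2))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
particles, weights, estimate = pf_step(particles, weights, frame, ref_hist)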
dc.publisher: Universidade Federal de Minas Gerais
dc.publisher: UFMG
dc.rights: Open Access
dc.subject: Robotics
dc.subject: Computer vision
dc.title: Observação dinâmica cooperativa
dc.type: Doctoral Thesis

