
RESEARCH

Introduction


The main purpose of creating robots is to perform useful and efficient tasks that benefit human activity and technological progress, such as search and rescue, space exploration, shipment of merchandise, collection of garbage or toxic waste, service robots, personal assistants, and so forth. The Robotics Lab's purpose is to educate graduate and undergraduate students majoring in robotics engineering through the development of research projects. The Lab takes a theoretical and practical training approach grounded in systematic project development. Different kinds of robotics engineering projects have been carried out in our laboratory: distributed robotic architectures, RTOS-based embedded systems, mechanical locomotion devices, sensor fusion architectures, and robot dynamics and control. The Lab is in especially high demand among students from Mechatronics Engineering programs.
 

  • Robot modeling and control.

 

Locomotion models are critical to establishing how the robotic platform interacts with the environment; they also determine its efficiency of movement in terms of stability and controllability. Kinematic models, on the other hand, take the locomotion constraints into account to determine the geometry of the robot's movement regardless of the causes that generate it. Dynamic models, or mobility equations, are in turn used to build intelligent control software.
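
As a minimal illustration of such a kinematic model, the sketch below integrates the forward kinematics of a hypothetical differential-drive platform in Python; the wheel radius, wheel separation, and time step are assumed example values, not parameters of any specific Lab robot.

import math

def diff_drive_step(x, y, theta, omega_l, omega_r, r=0.05, L=0.30, dt=0.01):
    """Advance a differential-drive pose (x, y, theta) by one time step.
    omega_l, omega_r: wheel angular velocities [rad/s]; r: wheel radius [m];
    L: wheel separation [m]; dt: integration step [s] (assumed example values)."""
    v = r * (omega_r + omega_l) / 2.0      # linear velocity of the platform
    w = r * (omega_r - omega_l) / L        # angular velocity (yaw rate)
    x += v * math.cos(theta) * dt          # purely geometric motion model:
    y += v * math.sin(theta) * dt          # no forces or torques involved
    theta += w * dt
    return x, y, theta

# Example: the right wheel spins faster than the left, so the robot arcs left.
pose = (0.0, 0.0, 0.0)
for _ in range(1000):
    pose = diff_drive_step(*pose, omega_l=8.0, omega_r=10.0)
print(pose)

Note that this geometric model predicts motion from wheel speeds alone; a dynamic model would additionally account for the forces and torques that produce those speeds.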
 

  • Robotic computer system architectures (hardware, software, intelligence).

 

Another aspect of critical importance is the computational organization of the robot's intelligence, grounded in the hardware and the operating system. Moreover, the robot's perception capacity refers to the use of sensory information to perform cognitive processes.
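
As a hedged sketch of one common way to organize that computation, the following sense-plan-act loop separates perception, decision making, and actuation into distinct components; the class and method names are illustrative placeholders, not the Lab's actual architecture or any particular framework.

import time

class SensePlanActRobot:
    """Illustrative decomposition of a robot's software into three layers.
    Each method is a stub standing in for drivers, perception, and planning."""

    def sense(self):
        # Read raw sensor data (odometry, range finders, cameras, ...).
        return {"range_front": 1.2}          # placeholder observation

    def plan(self, observation):
        # Turn perception into a decision; real systems use maps and planners.
        return "forward" if observation["range_front"] > 0.5 else "turn"

    def act(self, command):
        # Send the command to the motor controllers.
        print("executing:", command)

    def run(self, cycles=3, period=0.1):
        for _ in range(cycles):
            self.act(self.plan(self.sense()))
            time.sleep(period)               # fixed control period

SensePlanActRobot().run()

A real system would replace each stub with hardware drivers, perception pipelines, and planners, typically running on an RTOS or robot middleware.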

 

  • Robotic visual recognition.

  • Multisensor fusion.

 

For a mobile robot to perform tasks autonomously, localization is a critical issue: the robot must be able to answer where it came from, where it is, and where it is going. The robot's ability to localize itself therefore becomes the core of autonomy, allowing an intelligent robot to reason about its own mobility.
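
To make those questions concrete, the following sketch implements discrete Bayes (Markov) localization on a one-dimensional corridor: the robot maintains a belief over cells, shifts it when it moves, and sharpens it when a noisy door sensor matches the map. The map, sensor probabilities, and observations are invented solely for illustration.

# Minimal 1D Markov localization sketch (illustrative map and sensor model).
world = [1, 0, 0, 1, 0]                    # 1 = door, 0 = wall, per corridor cell
belief = [1.0 / len(world)] * len(world)   # uniform prior: "where am I?"

def move(belief, step):
    """Shift the belief with the robot's motion (perfect motion assumed)."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

def sense(belief, saw_door, p_hit=0.8, p_miss=0.2):
    """Weight each cell by how well the observation matches the map."""
    weights = [p_hit if (cell == 1) == saw_door else p_miss for cell in world]
    posterior = [b * w for b, w in zip(belief, weights)]
    total = sum(posterior)
    return [p / total for p in posterior]

belief = sense(belief, saw_door=True)   # the robot sees a door
belief = move(belief, step=1)           # it moves one cell to the right
belief = sense(belief, saw_door=False)  # now it sees a wall
print([round(b, 3) for b in belief])

After the second observation the belief concentrates on the cells consistent with seeing a door and then a wall, which illustrates how localization gives the robot a notion of where it is.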


Sensory information is acquired through extraction algorithms and pattern recognition applied to the raw data from the sensors. Sensing models are useful in multi-sensor fusion schemes. The sensors used in robotics are classified as proprioceptive (odometers, inclinometers, gyroscopes, accelerometers, etc.), exteroceptive (LADAR, sound, touch, vision, proximity, etc.), and proprio-exteroceptive (GPS); these are the means by which the robot measures the world in which it interacts.
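
As one hedged example of such a fusion scheme, the sketch below runs a one-dimensional Kalman-style filter that predicts position from proprioceptive odometry and corrects it with noisy proprio-exteroceptive GPS fixes; the noise variances and readings are made-up example values.

# 1D Kalman-style fusion of odometry (prediction) and GPS (correction).
# All variances and readings below are invented for illustration.
x, P = 0.0, 1.0                 # position estimate [m] and its variance
Q, R = 0.05, 4.0                # odometry (process) and GPS (measurement) noise

odometry_steps = [1.0, 1.0, 1.0]        # distance travelled per step [m]
gps_fixes      = [1.3, 1.8, 3.4]        # noisy absolute positions [m]

for u, z in zip(odometry_steps, gps_fixes):
    # Predict: dead-reckon with odometry, so uncertainty grows.
    x, P = x + u, P + Q
    # Correct: blend in the GPS fix, weighted by the Kalman gain.
    K = P / (P + R)
    x, P = x + K * (z - x), (1 - K) * P
    print(f"fused position = {x:.2f} m, variance = {P:.2f}")

The measurement with the smaller variance dominates the fused estimate, which is the basic idea behind weighting sensors by their reliability.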

 

Creating mobile robots of any modality (ground, aerial, or aquatic) requires the synergistic fusion of these diverse areas of engineering and science.

 

