Four TRACLabs scientists just returned from Washington, DC, where they attended the kick-off of the DARPA Robotics Challenge (DRC). The DRC involves either building a robot that can perform disaster relief functions (Track A) or writing software that allows a DARPA-provided robot to perform those same functions (Track B). TRACLabs received a Track B award from DARPA, potentially worth over $2 million, and is one of only eleven institutions worldwide to receive one. TRACLabs is partnered with SUNY-Buffalo for this work. TRACLabs will compete against the other Track B awardees in a Virtual Robotics Challenge (VRC) conducted in simulation. The VRC involves a simulated robot driving a cart, walking through a rubble field, and connecting a hose or cable to a socket. It will take place in June 2013. The popular press is already covering the DRC.
TRACLabs was pleased to have four interns this past summer at our Houston location and one intern in Buffalo, NY. Shown below from left to right are Brian Lemke from Texas A&M, Jeff Johnson from Indiana University, Joshua James from the University of Texas, and Nicholas Barrash from Georgia Tech. These interns worked with our robot (also shown below) as well as our procedure assistance software.
Dr. Robin Murphy, Raytheon Professor of Computer Science and Engineering at Texas A&M University, and her student Zach Henkel recently visited TRACLabs. Dr. Murphy is collaborating with TRACLabs on a robotic system for the Army that will evacuate wounded soldiers from the battlefield. She is a leading expert in the use of robots for search and rescue operations and directs the Center for Robot-Assisted Search and Rescue. Dr. Murphy and her student are shown in the picture below alongside the TRACLabs dual-arm manipulation robot with a Texas A&M survivor interface.
TRACLabs is now starting a new Army Phase I STTR with Robin Murphy of Texas A&M to investigate robotic extraction of wounded soldiers from the battlefield. Our research focuses less on the mechanical issues and more on the issues of human-robot interaction: interaction between the robot and a medic operating from a safe zone, and interaction between the robot and the victim. This project involves advancing the state of the art in user interfaces, robot architectures, and perception. Furthermore, these technologies should generalize to other domains such as bomb disposal, warehouse automation, search and rescue, and extraterrestrial construction.
This video shows a quick proof-of-concept demo (made by our outstanding summer interns) illustrating that control algorithms and architectures developed in simulation (post forthcoming) can be quickly transferred to a real robot. Here the robot drives to a person at a known location, picks them up using coordinated control of two 7-DOF arms with fixed end paddles, and carries the person back to a known starting location.
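The key to a coordinated carry like this is commanding both end effectors from a single frame attached to the load, so the person is moved rigidly. Below is a minimal sketch of that idea, not TRACLabs' actual controller; the grasp offsets and poses are illustrative assumptions, and a real system would pass each target to its arm's inverse kinematics.

```python
# Command both end effectors from one "load" frame so the carried person
# moves rigidly. Offsets and poses are illustrative assumptions.
import numpy as np

def transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Fixed grasp offsets: where each paddle sits relative to the load frame.
T_LOAD_LEFT = transform(np.eye(3), np.array([0.0, 0.25, 0.0]))
T_LOAD_RIGHT = transform(np.eye(3), np.array([0.0, -0.25, 0.0]))

def coordinated_targets(T_world_load: np.ndarray):
    """Both arms track poses rigidly attached to the commanded load frame."""
    return T_world_load @ T_LOAD_LEFT, T_world_load @ T_LOAD_RIGHT

# Example: command the load frame; each 7-DOF arm's IK tracks its target.
T_goal = transform(np.eye(3), np.array([0.6, 0.0, 0.85]))
left_target, right_target = coordinated_targets(T_goal)
```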
This video shows off a modular arm designed to operate with anywhere from 1 to 7 degrees of freedom. It shows the 7 joints (4 roll and 3 pitch joints) going from sitting on a table to a fully powered, operational arm in under 2 minutes. The arm uses our Universal Mating Adapter (UMA) technology, so all power and communications are handled inside each joint. The UMA allows hot-swappable joints, and each joint stores its own configuration and calibration data. At around time 1:54, you can hear the brakes of the arm disengage. The brakes default to engaged when unpowered, and the arm's last known state is reinitialized when the brakes are released.
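The following is a minimal sketch, not the actual UMA protocol, of why self-describing joints assemble so quickly: since each joint stores its own configuration and calibration, the controller simply enumerates whatever joints are present at power-up. The bus interface here is a hypothetical stand-in.

```python
# Hot-swap enumeration sketch: the kinematic chain is built from whatever
# joints answer on the bus. All names are hypothetical, not the UMA API.
from dataclasses import dataclass

@dataclass
class JointRecord:
    joint_id: int
    kind: str              # "roll" or "pitch"
    gear_ratio: float
    encoder_offset: float  # calibration data stored inside the joint

class SimulatedBus:
    """Stand-in for the power/comms link running through each joint."""
    def __init__(self, joints):
        self._joints = {j.joint_id: j for j in joints}

    def discover(self):
        return sorted(self._joints)    # attached joints answer a roll call

    def read_config(self, joint_id):
        return self._joints[joint_id]  # config lives in the joint itself

def enumerate_arm(bus):
    """Build the kinematic chain from whatever joints are attached."""
    return [bus.read_config(jid) for jid in bus.discover()]

# A 7-DOF configuration: 4 roll and 3 pitch joints, as in the video.
bus = SimulatedBus([JointRecord(i, "roll" if i % 2 else "pitch", 100.0, 0.0)
                    for i in range(1, 8)])
print(len(enumerate_arm(bus)))  # -> 7
```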
This video also shows joint control as well as coordinated Cartesian control as the arm descends to attach to an end effector (a drill bit). The wrist roll joint is equipped with an automated attach/detach mechanism so that different end effectors and manipulators can be used in complex scenarios.
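As a rough illustration of the descend-and-attach sequence, the sketch below interpolates the end effector straight down in Cartesian space to the tool interface before triggering the wrist's attach mechanism. send_cartesian_setpoint() and engage_tool_changer() are hypothetical stand-ins for the robot's API, not real calls.

```python
# Straight-line Cartesian descent to a tool-changer interface (sketch).
import numpy as np

def descent_setpoints(start, tool, step=0.005):
    """Yield straight-line Cartesian waypoints from start down to the tool."""
    distance = np.linalg.norm(tool - start)
    for alpha in np.linspace(0.0, 1.0, int(distance / step) + 1):
        yield (1.0 - alpha) * start + alpha * tool

start = np.array([0.4, 0.0, 0.30])  # hovering above the tool rack (meters)
tool = np.array([0.4, 0.0, 0.05])   # tool-changer interface position
for waypoint in descent_setpoints(start, tool):
    pass  # send_cartesian_setpoint(waypoint)  # orientation held fixed
# engage_tool_changer()  # automated attach on the wrist roll joint
```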
This arm was produced under a NASA Phase II SBIR.
National Robotics Week was April 9 – 17, 2011. TRACLabs scientists took several of our robots to Space Center Houston for demonstrations to the general public. Demonstrations included a pan-tilt head that used stereo vision to follow passers-by and pre-programmed scooping behaviors on our seven degree-of-freedom manipulator. Both children and adults enjoyed talking to our scientists and seeing real robots up close.
A previous post discussed human following and showed videos of robust human tracking indoors. This work has now been integrated with 2D beat-based gesture recognition from Brown University so that gestures can switch the robot among different human-tracking modes (a sketch of this mode switching follows the videos below). TRACLabs and Brown University are collaborating under an Army contract to create a passive sensing system that facilitates unmanned leader-following in potentially novel, cluttered, and dynamic environments. We call this self-contained system LIBERATION (Leader Informed Beacon Estimation for ReAl-Time, Intelligent, Onboard Navigation).
3D person tracking:
2D beat-based gesture recognition (with human location from 3D tracking):
Fully integrated system:
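Here is a hedged sketch of how the gesture and tracking pieces could be glued together: 2D beat-gesture events toggle the follower's mode, and the 3D tracker's leader position is issued as a navigation goal only while following is active. The gesture labels and modes are illustrative, not LIBERATION's actual command set.

```python
# Gesture-driven mode switching for a leader follower (illustrative only).
class LeaderFollower:
    FOLLOW, WAIT = "follow", "wait"

    def __init__(self):
        self.mode = self.WAIT

    def on_gesture(self, gesture):
        """Beat gestures recognized in 2D toggle the tracking mode."""
        if gesture == "beckon":
            self.mode = self.FOLLOW
        elif gesture == "halt":
            self.mode = self.WAIT

    def drive_target(self, leader_xyz):
        """Issue the tracked leader position as a goal only while following."""
        return leader_xyz if self.mode == self.FOLLOW else None

follower = LeaderFollower()
follower.on_gesture("beckon")
print(follower.drive_target((2.0, 0.5, 0.0)))  # -> (2.0, 0.5, 0.0)
```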
It would be quite useful for a robot operator to simply tell the robot "Follow me" and have it track the operator in a variety of situations and environments. TRACLabs and Brown University are collaborating under an Army contract to create a passive sensing system that facilitates unmanned leader-following in potentially novel, cluttered, and dynamic environments. We call this self-contained system LIBERATION (Leader Informed Beacon Estimation for ReAl-Time, Intelligent, Onboard Navigation). LIBERATION uses stereo vision (a technique using two cameras and triangulation to determine distance to objects) to build a depth map of the environment. We then use a simple 3D model composed of 8 overlapping spheres, one at each corner of a virtual box, to quickly and easily detect and track arbitrarily shaped people in the lab. The video below shows some preliminary results.
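To make the sphere model concrete, here is a minimal sketch under our own assumptions: a candidate person location is scored by counting depth points that fall inside any of 8 spheres placed at the corners of a person-sized virtual box. The box dimensions and sphere radius are illustrative, and a real tracker would search candidates near the last known position.

```python
# Score a candidate 3D person location against a stereo point cloud using
# 8 overlapping corner spheres. Parameters are illustrative assumptions.
from itertools import product
import numpy as np

BOX = np.array([0.5, 0.3, 1.7])  # person-sized box (x, y, z), meters
RADIUS = 0.9   # large enough that the corner spheres overlap and cover the box

def sphere_centers(candidate):
    """One sphere at each of the 8 corners of the box around the candidate."""
    corners = np.array(list(product([-0.5, 0.5], repeat=3))) * BOX
    return candidate + corners

def score(candidate, cloud):
    """Count depth points that fall inside any of the 8 spheres."""
    centers = sphere_centers(candidate)
    dists = np.linalg.norm(cloud[:, None, :] - centers[None, :, :], axis=2)
    return int(np.count_nonzero((dists < RADIUS).any(axis=1)))

# Example: random clutter plus a person-shaped column of points at x = 2 m.
rng = np.random.default_rng(0)
clutter = rng.uniform([-1, -1, 0], [4, 1, 2], size=(500, 3))
person = np.column_stack([rng.normal(2.0, 0.1, 200),   # body points spread
                          rng.normal(0.0, 0.1, 200),   # over the full height
                          rng.uniform(0.0, 1.7, 200)])
cloud = np.vstack([clutter, person])
print(score(np.array([2.0, 0.0, 0.85]), cloud))  # counts most body points
```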
TRACLabs is developing the Wearable Augmented Perception for Environmental Recognition (WRAPPER) system under a National Science Foundation grant. WRAPPER is a computer vision system that provides information about the world to individuals who have impaired vision. This information can be delivered via auditory cues (for those with severely impaired vision) or via small, embedded LCD screens (for those with partial vision). WRAPPER is designed to provide information to the visually impaired in five key areas:
1) Detection of obstacles, free corridors, and drop-offs in the immediate vicinity of the user;
2) Detection, identification, and location of people in the immediate vicinity of the user;
3) Detection, identification, and location of specific, pre-defined objects within reaching distance of the user;
4) Estimation of user location and distance traveled within a large-scale environment; and
5) General vision aids such as reading text, enhancing images, and magnifying objects.
All information is extracted from a pair of cameras mounted in unobtrusive eyewear connected to a portable, embedded computer system. Stereo vision algorithms produce depth maps that are fed to different algorithms, each of which identifies specific features in the world. After feature identification, audio or visual cues are produced and sent to the user (a sketch of this pipeline appears below). The picture below on the left shows the current prototype system, which is large and bulky but useful for testing algorithms. The picture on the right shows the target system. TRACLabs will be teaming with headset manufacturer Vuzix Corporation to develop the system.
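The paragraph above describes a fan-out pipeline: one depth map feeds several independent detectors, and each detection becomes a cue. Below is a minimal sketch of that data flow under our own assumptions; the obstacle detector and its 1-meter threshold are illustrative, not WRAPPER's actual algorithms.

```python
# Depth map -> detectors -> user cues: a sketch of the WRAPPER data flow.
from typing import Callable, List, Optional
import numpy as np

def obstacle_cue(depth: np.ndarray) -> Optional[str]:
    """Warn if anything in the central third of the view is within 1 m."""
    h, w = depth.shape
    center = depth[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    valid = center[center > 0]  # zeros mark failed stereo matches
    if valid.size and valid.min() < 1.0:
        return "obstacle ahead"
    return None

def run_pipeline(depth: np.ndarray,
                 detectors: List[Callable[[np.ndarray], Optional[str]]]) -> List[str]:
    """Apply every detector to the depth map and collect the resulting cues."""
    cues = []
    for detect in detectors:
        cue = detect(depth)
        if cue is not None:
            cues.append(cue)
    return cues

# Example: a synthetic 640x480 depth map with a close object in the center.
depth = np.full((480, 640), 3.0)
depth[200:280, 280:360] = 0.6
print(run_pipeline(depth, [obstacle_cue]))  # -> ['obstacle ahead']
```

Each cue collected this way would then be rendered as audio for severely impaired users or drawn on the embedded LCD screens for those with partial vision.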