The Huggable project started in the MIT Media Lab's Robotic Life Group in 2005. The Huggable is a new type of robotic companion for healthcare, education, and social communication, inspired by traditional companion animal therapy.
The Huggable is equipped with a full-body multimodal sensory skin (see the video below for details), quiet mechanical servos, inertial sensors, cameras in its eyes, microphones in its ears, and a speaker in its mouth. In addition, it has an embedded PC with WiFi (802.11) communication capability.
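To give a feel for what the sensory skin enables, here is a minimal sketch of classifying a window of touch readings from one skin patch. The sensor names, normalization, and thresholds are all assumptions for illustration; the actual Huggable skin combines several sensing modalities and uses its own classification pipeline.

```python
from statistics import mean

# Hypothetical normalized-force thresholds -- illustrative values only.
PET_PRESSURE_MAX = 0.3
POKE_PRESSURE_MIN = 0.7

def classify_touch(force_readings: list[float]) -> str:
    """Classify a short window of normalized force readings from one skin patch."""
    if not force_readings:
        return "none"
    if mean(force_readings) < PET_PRESSURE_MAX:
        return "petting"   # light, sustained contact
    if max(force_readings) > POKE_PRESSURE_MIN:
        return "poking"    # sharp, localized pressure
    return "holding"       # moderate, steady contact

print(classify_touch([0.1, 0.15, 0.2]))  # -> "petting"
```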
The Huggable project has two main components: the Huggable robot itself and a set of Huggable technologies. Additionally, the Huggable robot has two modes of operation: it can work as a fully autonomous robot interacting with the patient, or as a semi-autonomous robot avatar with some level of human control via the Internet, as sketched below.
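One simple way to picture the two modes is a control loop where operator input takes priority only in semi-autonomous mode. This is a hedged sketch, not the project's actual control architecture; the mode names and function signatures are assumptions.

```python
from enum import Enum, auto
from typing import Callable, Optional

class Mode(Enum):
    AUTONOMOUS = auto()
    SEMI_AUTONOMOUS = auto()

def next_action(mode: Mode, operator_cmd: Optional[str],
                autonomous_behavior: Callable[[], str]) -> str:
    """Pick the robot's next action for the current control mode.

    In SEMI_AUTONOMOUS mode a pending operator command (sent over the
    Internet) takes priority; otherwise the robot falls back to its own
    behavior engine, exactly as in fully AUTONOMOUS mode.
    """
    if mode is Mode.SEMI_AUTONOMOUS and operator_cmd is not None:
        return operator_cmd
    return autonomous_behavior()

# The operator's "wave" overrides idling only in semi-autonomous mode.
print(next_action(Mode.SEMI_AUTONOMOUS, "wave", lambda: "idle"))  # -> "wave"
print(next_action(Mode.AUTONOMOUS, "wave", lambda: "idle"))       # -> "idle"
```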
These capabilities make the Huggable robot a versatile platform for many applications in the fields of healthcare and education.
A human operator (say, a nurse) can collect remote data from the Huggable and the patient, such as live video, audio, and sensor feeds. Moreover, the human operator can send commands to the robot, so the interaction with the patient is done via the robotic avatar. The Huggable uses Microsoft Robotics Studio (MSRS) for its "communication avatar" service.
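The message flow of such a session might look like the sketch below: the operator's console reads a sensor frame streamed by the robot and answers with a motion command. This is not the actual MSRS "communication avatar" service (which is a C#/DSS-based framework); the host name, port, and JSON message shapes are all assumptions for illustration.

```python
import json
import socket

ROBOT_ADDR = ("huggable.local", 9000)  # hypothetical host and port

def operator_session() -> None:
    """One operator exchange: read a sensor frame, send back a command."""
    with socket.create_connection(ROBOT_ADDR, timeout=5) as sock:
        # Assume the robot streams newline-delimited JSON frames:
        # video/audio metadata plus raw sensor values.
        frame = json.loads(sock.makefile().readline())
        print("touch sensors:", frame.get("skin", []))

        # The operator answers with a motion command for the avatar.
        cmd = {"type": "gesture", "name": "nod"}
        sock.sendall((json.dumps(cmd) + "\n").encode())

if __name__ == "__main__":
    operator_session()
```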
The following video shows some of the main features of the Huggable.
More information can be found at: http://robotic.media.mit.edu/?url=Projects.Robots.Huggable.Overview