September 25, 2012

MIT Develops Wearable Automatic Building Mapping System


We hear a lot about “the fog of war.” And, since we recently observed the anniversary of the 9/11 attacks, we were once again reminded of the challenges first responders face in adverse conditions, where smoke, dust and the like can impair the ability to find those in need and get everyone involved to safety. Researchers at the Massachusetts Institute of Technology (MIT) have been looking into the problem, and they have gone public with a solution that is, in a word, amazing.

According to an MIT News item, “Automatic building mapping could help emergency responders,” the school’s researchers have built a wearable sensor system that automatically creates a digital map of the environment through which the wearer is moving. It is designed as a tool to help emergency responders coordinate disaster response.

As noted in the article, this is at the moment a prototype system. The picture below is of one of the researchers showing what the system looks like and how it is worn.


Above: Maurice Fallon, a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, demonstrates how the sensor is worn. Photo: Patrick Gillooly


Details of all of the technology that has gone into the system will be presented in a paper slated for the Intelligent Robots and Systems conference in Portugal next month. What we do know is that the system employs a stripped-down Microsoft Kinect camera and a laser rangefinder.

Putting it through its paces

The system may be a prototype, but it is a working one, and it has been put through tests on the MIT campus. A graduate student wearing the sensor system wandered the halls (no, they did not simulate an obstructed-vision environment). The sensors wirelessly relayed data to a laptop in a remote conference room, where observers tracked the student’s progress on a map created as he moved. A handheld pushbutton device lets the wearer annotate the map, designating a location as a point of interest. As the article notes, the researchers envision that emergency responders will be able to add voice or text tags to the map to specify what they have encountered.
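The article does not describe how the annotation mechanism is implemented, but the workflow above can be sketched conceptually. All names below are hypothetical, not MIT’s actual code: pressing the button stamps the wearer’s current estimated position on the map, and a voice or text tag can be attached to that point then or later.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A point of interest stamped onto the map (hypothetical structure)."""
    x: float        # wearer's estimated position, in meters
    y: float
    floor: int
    note: str = ""  # optional voice/text tag, possibly added afterward

@dataclass
class AnnotatedMap:
    points: list = field(default_factory=list)

    def mark(self, x: float, y: float, floor: int, note: str = "") -> Annotation:
        """Called when the wearer presses the handheld button."""
        point = Annotation(x, y, floor, note)
        self.points.append(point)
        return point

m = AnnotatedMap()
m.mark(3.2, 7.5, floor=1, note="blocked doorway")
m.mark(4.0, 9.1, floor=1)  # tag can be filled in back at the command post
```

The point of the design is that the button press itself carries no content; the position estimate supplies the “where,” and the tag supplies the “what.”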

Maurice Fallon, a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory and lead author on the new paper, states: “The operational scenario that was envisioned for this was a hazmat situation where people are suited up with the full suit, and they go in and explore an environment…The current approach would be to textually summarize what they had seen afterward — ‘I went into this room on the left, I saw this, I went into the next room,’ and so on. We want to try to automate that.”

Note: Fallon is joined on the paper by professors John Leonard and Seth Teller, of, respectively, the departments of Mechanical Engineering and of Electrical Engineering and Computer Science (EECS), and EECS grad students Hordur Johannsson and Jonathan Brookshire.

Context sensitive

The research extends previous work on systems that enable robots to map their environments, adapting it for use by a human wearer. This naturally led to some significant design modifications. A few examples are cited:

  • The laser rangefinder, which is accurate when held steady, had to be adapted to the shakiness of a human wearer as opposed to the steadiness of a robot.
  • Sensors in a robot’s wheels can provide accurate information about its physical orientation and the distances it covers, but a human wearer offers no such odometry, so the system had to compensate for that too.
  • The system also has to recognize changes in altitude, so it doesn’t inadvertently overlay the map of one floor with information about a different one, which in many catastrophes would be a real problem for responders.
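The floor-separation problem in the last bullet can be illustrated with a toy sketch. This is not the MIT team’s method, just one plausible approach: quantize an altitude estimate to a discrete floor index, with a little hysteresis so that sensor noise on a staircase doesn’t flip the map layer back and forth.

```python
class FloorTracker:
    """Hypothetical floor tracker: quantizes altitude to a floor index,
    with hysteresis so noise near a floor boundary doesn't cause flicker."""

    def __init__(self, floor_height: float = 3.0, hysteresis: float = 0.5):
        self.floor_height = floor_height  # assumed story height, meters
        self.hysteresis = hysteresis      # extra meters required to switch
        self.floor = 0

    def update(self, altitude: float) -> int:
        """Return the floor index for a new altitude reading (meters)."""
        raw = altitude / self.floor_height
        # Only switch once altitude moves clearly past the halfway boundary.
        if abs(raw - self.floor) > 0.5 + self.hysteresis / self.floor_height:
            self.floor = round(raw)
        return self.floor

ft = FloorTracker()
ft.update(0.2)  # ground floor
ft.update(1.9)  # partway up the stairs: hysteresis keeps us on floor 0
ft.update(2.5)  # clearly past the boundary: now floor 1
```

Each laser scan would then be filed under the current floor index, so two floors with identical layouts never get merged into one confusing map.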

The article briefly describes the science used to overcome these challenges, which will be presented in depth at the conference. It also notes that, “in principle, the whole system could be shrunk to about the size of a coffee mug.”

Wolfram Burgard, a professor of computer science at the University of Freiburg in Germany, is cited as saying the MIT researchers’ work is on the general topic of SLAM, or simultaneous localization and mapping. “Originally, this came out as a problem of robotics,” Burgard says. “This idea of having a SLAM system that is attached to a human’s body, for figuring out where it is, is actually innovative and pretty useful. For first responders, a technology like this one might be highly relevant.”
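To make the SLAM idea a bit more concrete, here is a minimal, hypothetical sketch of just the mapping half: projecting laser rangefinder returns from a known pose into a coarse occupancy grid. Real SLAM simultaneously estimates the pose itself, which is the hard part and is omitted here.

```python
import math

def update_grid(grid: set, pose, ranges, angle_min, angle_step, cell=0.1):
    """Mark laser endpoints as occupied cells in a coarse 2D grid.

    pose    -- (x, y, heading) in meters/radians, assumed already known
    ranges  -- rangefinder returns in meters, one per beam
    cell    -- grid resolution in meters

    A toy illustration of the mapping side of SLAM, not MIT's method.
    """
    x, y, heading = pose
    for i, r in enumerate(ranges):
        angle = heading + angle_min + i * angle_step
        gx = int((x + r * math.cos(angle)) / cell)
        gy = int((y + r * math.sin(angle)) / cell)
        grid.add((gx, gy))  # cell where this beam hit an obstacle
    return grid

grid = set()
# Two beams of 1 m each, at -45 and +45 degrees from a wearer at the origin.
update_grid(grid, (0.0, 0.0, 0.0), [1.0, 1.0], -math.pi / 4, math.pi / 2)
```

The “simultaneous” part of SLAM is that the pose fed into a function like this must itself be inferred from how successive scans line up, which is exactly what becomes difficult when the sensor is strapped to a walking, swaying human rather than a wheeled robot.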

MIT has provided an interesting video on the subject which is available for review below.



While first responders of all types are the ultimate targets of this system, it should come as no surprise that the U.S. Air Force and the Office of Naval Research supported the work. Lifting “the fog of war” in real time has for years been a top priority of those responsible for improving all aspects of the digital battlefield. Think no further than hostage situations: night scopes and other technology are precise for targeting, but a real-time understanding of the entire environment, especially if matched against old building blueprints, could be invaluable where split-second decisions can mean the difference between life and death, and between success and failure.

I wonder if they have one of these in a 42 Long. It would be fascinating to take it out for a test drive, but I think I will wait for the coffee mug version.






