Introduction and Significance

Following the 9/11 attacks on the World Trade Center (WTC), a dozen teams of mobile robots were called to Ground Zero to aid in the effort to find victims trapped underneath the debris of the collapsed buildings [1]. This marked the first time in history that robots were used in a real search and rescue environment [2]. The small size of these machines allowed them to investigate places where human workers could not go [Illustrations, Figure 1]. Through the robots' visual and audio sensors, rescuers were able to perceive and communicate with victims. The robot teams were deemed a great success [3], managing to discover five victims in the rubble [4].

Participating teams came from different organizations across the country and had not previously met. The robots were heterogeneous, varying in shape, size, and functionality [Illustrations, Figures 2 & 3]. Based on their abilities, robot teams were assigned particular objectives, including hazardous materials, medical, logistics, planning, and search-and-rescue [5]. Despite this delegation, the robots did not demonstrate teamwork. The overall goal was the same; however, each robot was aware only of its own assigned tasks. There was no communication between them, and thus no sharing of perceptual or status information. Missions were carried out individually and were limited to the abilities of a single robot. A failure of a single unit would therefore greatly hinder the completion of the overall search and rescue operation.

It is crucial in situations such as this that robots cooperate as a functional, fault-tolerant team to prevent human rescuers from becoming victims themselves. How can these robots, which have never interacted, collaborate to accomplish a joint objective? How can abilities, internal knowledge, perceptual experiences, and tasks be uniformly represented and understood by all members of an impromptu robot team? Once this information is obtained, how can robots reorganize and delegate individual and joint objectives, based on individual abilities and limitations?

All robots used in the WTC rescue effort were tethered and controlled through a remote interface. A minimum of two workers was required to operate each one: one to navigate and one to look for victims [5]. In an effort to reduce this operational overhead, the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF) have identified human-robot interaction by specialists as one of only two "Grand Challenge" projects [6]. A team of robots, however, could rely on its own autonomy and cooperation, greatly reducing the number of workers dedicated to supervision. Robots with similar objectives could work together to achieve their goals.

Adaptive, impromptu teams of mobile robots are an essential technology for environments that are too dangerous for human workers to investigate. In the case of search and rescue, such situations include the attacks on the WTC as well as disasters such as Hurricane Katrina in New Orleans and the recent mine collapses in Kentucky and West Virginia [7; Illustrations, Figure 4]. Teamwork between robots, however, extends beyond search and rescue. The topic has been applied to the RoboCup robotic soccer league [http://www.robocup.org/; Illustrations, Figure 5] in the form of a pickup player that attempts to join a team and adapt to that team's strategy [8]. Additionally, robot teams could serve numerous missions proposed by the National Aeronautics and Space Administration (NASA). Impromptu teams of heterogeneous mobile robots are nonetheless a new area of research; to my knowledge, no physical implementation has ever been proposed or attempted.

Context, Background, and Literature Review

Search and rescue is an active topic in robotics research. The University of South Florida's team CRASAR (Center for Robot-Assisted Search & Rescue) was among the rescuers at Ground Zero. Their focus, however, is not on robot teams, but rather on the way that specialists interact with the robots via a human-robot interface [5, 2]. Murphy [5, 2] observes that current technology requires numerous people to operate a single robot. Additionally, she provides a model for how data collected by geographically distributed teams of robots are combined for interpretation by workers. Information that is relevant to a particular robot's current task is manually communicated to that robot.

In Tejada et al. [9], a homogeneous team of robots (all functionally identical) is used in a search and rescue environment [Illustrations, Figure 6]. The team can be controlled by a group of users or can explore an area autonomously. Robots are assigned particular roles and demonstrate distributed planning and collaboration via a global communication system.

Bowling and McCracken [8] formally define "[the] problem of coordination in impromptu teams, where a team is composed of independent agents each unknown to the others." This work is based on RoboCup soccer. A single pickup player attempts to join an already coordinated team and, by observation, chooses a play and a role in that play. Plays are defined in and chosen from an enumerated "playbook," which may or may not contain the same plays used by the other robots. Two systems were tested: an adaptive version, which learns which plays integrate well with the team, and a predictive version, which determines plays based on the physical locations and trajectories of robots. The work is implemented solely in simulation and compared against three baseline types of pickup players: regular (not impromptu; already coordinating with a team), missing (not present on the field), and brick (present on the team, but not active). Results show that both the adaptive and the predictive systems perform surprisingly well against these baselines. This is the only prior work on impromptu robot teams that I am aware of.

Cohen and Levesque [10] provide a theoretical foundation for teamwork among mobile robots. This work establishes numerous theorems governing individual and joint operations between robots and addresses principles that a team should follow. A multi-layered approach to heterogeneous teams is discussed and implemented in Balch and Parker [11]. At Level 1, robots can operate without awareness of each other to accomplish their own goals. Levels 2 and 3 involve implicit and explicit communication about actions that may affect the tasks of another robot. At the highest level, Level 4, robots communicate representations of their environments and the status of their objectives. Three levels of knowledge representation are defined: iconic (physically similar to what it represents), indexical (links between icons), and symbolic (provides a relationship between icons, indices, and other symbols). They successfully demonstrate the layered system on a pair of robots, each with different physical attributes, assigned the task of vacuuming a floor. It is noted that symbolic communication does, in fact, improve the efficiency of completing the objective. However, the researchers identify that their implementation is simplified to a single symbol, and thus does not address the real-world problem of representing internal knowledge so that it can be meaningfully interpreted as concepts in the physical world. In artificial intelligence (AI), this is known as the symbol grounding problem [12].

Goals/Objectives

If heterogeneous robots are to organize and work together as an impromptu team, a language must be developed that allows them to share knowledge in a meaningful way. While it is not my goal to solve the symbol grounding problem, when new symbols are learned, a shared grounding, or meaning, must be established and communicated through labeling and syntax [11]. For consistent interpretation of communicated knowledge within a team, the symbols representing that knowledge must be grounded in concepts, which refer to categories of entities in an environment [13].

I propose a symbolic communication protocol based upon the Semantic Web. The Semantic Web is an extension of the World Wide Web that gives information well-defined meaning, relating data to the corresponding entity in the real world [14; Illustrations, Figures 7 & 8]. Thus, if a library of unified, grounded physical concepts were established, and a robot's abilities, perceptions, and goals were represented by symbols relating to those concepts, then the robot could meaningfully share its symbols with others based on the agreed-upon concepts. Information is presented in a structured form, similar to a descriptive sentence, containing a subject, a predicate, and an object. By enforcing this strict syntax, robot attributes can be broken down and related to the traits of others. In addition, this structure can easily be interpreted by both computers and people, making it simple to edit, search, and communicate.
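
To make this structure concrete, the following minimal Python sketch (my own illustration, not a final design for the protocol) represents a robot's attributes as subject-predicate-object statements; the robot names and concept labels are invented placeholders, not part of any established concept library.

```python
# A minimal sketch of subject-predicate-object statements (triples).
# All robot names and concept labels below are illustrative placeholders.
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str    # the entity being described
    predicate: str  # a relationship drawn from the shared concept library
    obj: str        # a value, or another entity in the concept library

# Describing one robot's ability, perception, and goal as triples:
knowledge = [
    Triple("robot:elmer", "concept:hasSensor",  "concept:ColorCamera"),
    Triple("robot:elmer", "concept:canPerform", "concept:VisualSearch"),
    Triple("robot:elmer", "concept:hasGoal",    "concept:LocateObject"),
]

# Because every predicate and object refers to a globally agreed concept,
# a teammate that has never met this robot can still interpret each statement.
for t in knowledge:
    print(t.subject, t.predicate, t.obj)
```

Reading each statement as a simple sentence is what keeps the representation searchable and interpretable by both computers and people.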

This new robot communication protocol would be based on the characteristics of XML, the markup language in which Semantic Web data is encoded. XML is able to encode metadata, or information about information, and is extensible. This allows for the categorization and interpretation of information on a webpage or, for our purposes, a robot. XML is designed so that new tags can be defined to describe new concepts and categories of information, which can then be shared with other programs searching for information. The communication protocol would be layered beneath a robot's own independent program, providing meaning to symbols relating to itself and to other robots in its immediate area.
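
As an illustration of this extensibility, the sketch below encodes a robot description in XML and parses it with Python's standard library. The tag and attribute names are hypothetical; the actual vocabulary would be drawn from the shared concept library.

```python
# A sketch of a robot description encoded as extensible XML.
# Tag and attribute names here are hypothetical, not a fixed schema.
import xml.etree.ElementTree as ET

description = """
<robot name="taz">
  <sensor concept="concept:Sonar" count="16"/>
  <sensor concept="concept:ColorCamera" count="1"/>
  <ability concept="concept:VisualSearch"/>
  <goal concept="concept:LocateObject" target="concept:FoamBall"/>
</robot>
"""

root = ET.fromstring(description)

# A receiving robot interprets the tags it understands and can safely
# ignore extensions it does not recognize.
print("robot:", root.get("name"))
for sensor in root.findall("sensor"):
    print("  sensor:", sensor.get("concept"), "count:", sensor.get("count"))
for goal in root.findall("goal"):
    print("  goal:", goal.get("concept"), "target:", goal.get("target"))
```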

Once an impromptu team of robots is established, a task management system is needed to complete individual and joint goals. Balch and Parker [11] identify terms characterizing multi-robot tasks, including time, criteria, subject of action, resource limits, group management, and platform capabilities. The team must consider the individual abilities, limitations, costs, and rewards of each robot to delegate tasks and coordinate activities appropriately, and it must be tolerant of communication failures and dysfunctional teammates.
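
The sketch below illustrates one simple form such delegation could take: a greedy assignment in which each task, labeled with the concept it requires, goes to the cheapest teammate advertising that ability. The robot names, concept labels, and costs are placeholders of my own; a full system would also weigh rewards, resource limits, and deadlines.

```python
# A sketch of ability-aware task delegation. All names, concepts, and
# costs are illustrative placeholders.

robots = {
    "elmer":  {"abilities": {"concept:VisualSearch"}, "cost": 2.0},
    "taz":    {"abilities": {"concept:VisualSearch", "concept:Manipulation"}, "cost": 3.0},
    "marvin": {"abilities": {"concept:SonarMapping"}, "cost": 1.0},
}

tasks = [
    ("find_ball", "concept:VisualSearch"),
    ("map_hall",  "concept:SonarMapping"),
    ("move_box",  "concept:Manipulation"),
]

def delegate(tasks, robots):
    """Greedily assign each task to the cheapest robot able to perform it."""
    assignments = {}
    for task, required in tasks:
        capable = [(info["cost"], name) for name, info in robots.items()
                   if required in info["abilities"]]
        # None signals that no teammate can perform the task; the team
        # would then report the gap rather than fail silently.
        assignments[task] = min(capable)[1] if capable else None
    return assignments

print(delegate(tasks, robots))
# Fault tolerance follows the same path: when a teammate stops responding,
# it is dropped from `robots` and its tasks are delegated again.
```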

Procedures and Timeline

The robot coordination system, consisting of symbolic communication and task management, will initially be implemented and tested in simulation in PyRo [http://pyrorobotics.org/]. PyRo provides a single programming environment for numerous different types of robots, as well as the ability to upload user programs to those robots. The specifics of robot actuators and sensors are abstracted; thus, a program written once can be shared across different platforms, making PyRo a strong foundation for heterogeneous robot code portability. It is worth noting that PyRo is written in the programming language Python, which is also widely used for processing Semantic Web data, potentially simplifying certain compatibility issues.
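
For a sense of what this abstraction looks like, the sketch below follows the Brain/step pattern from PyRo's documentation; the exact module and method names reflect my reading of that documentation and may differ slightly between PyRo versions.

```python
# A minimal PyRo "brain" following the documented Brain/step pattern.
# Module and method names are based on PyRo's documentation and may
# vary slightly between versions.
from pyrobot.brain import Brain

class Wander(Brain):
    def step(self):
        # PyRo abstracts sensor details: "front" range readings work the
        # same way whether the robot uses sonar, laser, or infrared.
        front = min([s.distance() for s in self.robot.range["front"]])
        if front < 0.5:
            self.robot.move(0.0, 0.3)   # obstacle ahead: rotate in place
        else:
            self.robot.move(0.5, 0.0)   # path clear: translate forward

def INIT(engine):
    # PyRo loads brains through an INIT hook tied to the running engine.
    return Wander("Wander", engine)
```

Because the same brain runs unmodified in simulation and on any supported robot, code developed in simulation should transfer directly to the ActivMedia platforms.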

This project will be developed within the framework of the Department of Computer Science (CS) Senior Assignment. The target task will be the American Association for Artificial Intelligence (AAAI) Robot Scavenger Hunt Challenge [http://www.aaai.org/Conferences/AAAI/2006/aaai06robots.php#exhibition]. Working with a team of senior CS students, I will help develop a robotic system that searches the Engineering Building for a list of objects. The code must be portable and the list of objectives must be easy to modify, as the team will be working with three of the CS Department's ActivMedia robots, Elmer, Taz, and Marvin [Illustrations, Figure 9; http://www.mobilerobots.com/]. Alongside the scavenger hunt project, I will be developing the robot coordination system with the intention of applying it to the three robots so that they can complete the search effort cooperatively rather than individually. PyRo supports the ActivMedia robots, so transferring the code to the physical platforms should be straightforward.

For comparison, the robot coordination system will be evaluated in both simulation and the physical world against the previously mentioned criteria of [8]. The scavenger hunt is, in many ways, analogous to urban search and rescue, and a successful implementation of the system will serve as a proof of concept for impromptu teams in a search and rescue environment.

Deadlines for research milestones

August 21, 2006 - September 9, 2006

Become familiar with the PyRo programming environment. PyRo provides a simulation interface for one or many robots and allows for seamless integration with numerous different types of robots. PyRo is based on the programming language Python, which must therefore be learned as well.

September 10, 2006 - September 30, 2006

Work with the Senior Project team to analyze and design a strategy for implementing a scavenger hunt robot. Preliminary data collected in this step will be used in designing world concepts and robot symbols.

October 1, 2006 - December 2, 2006

Design the communication protocol and interpreter based on robot symbols and their relationships to world concepts. A unified set of concepts must be globally defined for robots to meaningfully understand their own symbols and the symbols of others. Robot symbols must be easily modified and platform independent to be considered a useful tool in the robot community. The XML web language appears to provide the extensibility required to achieve this.
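
A first cut of the interpreter could be as simple as the sketch below: incoming statements are split into those whose symbols are grounded in the robot's local copy of the global concept library and those that require clarification. The concept labels are, again, invented placeholders.

```python
# A sketch of the interpreter step: separating grounded symbols from
# unknown ones. Concept labels are illustrative placeholders.

KNOWN_CONCEPTS = {
    "concept:VisualSearch",
    "concept:SonarMapping",
    "concept:ColorCamera",
}

def interpret(triples):
    """Split received statements into understood and unknown symbols."""
    understood, unknown = [], []
    for subject, predicate, obj in triples:
        (understood if obj in KNOWN_CONCEPTS else unknown).append(
            (subject, predicate, obj))
    return understood, unknown

message = [
    ("robot:marvin", "concept:hasAbility", "concept:SonarMapping"),
    ("robot:marvin", "concept:hasSensor",  "concept:ThermalCamera"),
]
understood, unknown = interpret(message)
print("understood:", understood)
print("needs clarification:", unknown)  # symbols absent from the library
```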

December 3, 2006 - January 6, 2007

Work with students from Senior Projects to interface actual robot abilities, perceptions, and tasks with their respective symbolic representations. A multi-layered approach will be required to abstract these symbols and maintain cross-platform functionality. User programs will remain independent of the protocol, improving usability from a programming perspective.
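
The layering might look like the hypothetical sketch below: the user program knows nothing about the protocol, while a symbol layer beneath it translates local ability names into globally grounded concepts. Class, method, and mapping names are all invented for illustration.

```python
# A sketch of the layered design: the protocol sits beneath the user
# program. All class, method, and mapping names are hypothetical.

class UserProgram:
    """The robot's own task code, written with no knowledge of the protocol."""
    def abilities(self):
        return ["camera_search", "sonar_sweep"]

class SymbolLayer:
    """Translates between local names and globally grounded concepts."""
    LOCAL_TO_CONCEPT = {
        "camera_search": "concept:VisualSearch",
        "sonar_sweep":   "concept:SonarMapping",
    }

    def __init__(self, program):
        self.program = program

    def advertise(self):
        # Publish the user program's abilities as shared symbols.
        return [self.LOCAL_TO_CONCEPT[a] for a in self.program.abilities()]

print(SymbolLayer(UserProgram()).advertise())
```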

January 7, 2007 - February 10, 2007

Based on initial experimental results, fully implement the communication protocol that interfaces with a robot. A comprehensive set of concepts must be generated so that many different symbols can be easily and intuitively represented and interpreted by both robots and humans.

February 11, 2007 - April 7, 2007

Design and implement a fault-tolerant task management system that allows robots to complete individual and joint tasks. Based on communicated symbolic representations of individual abilities, perceptions, and tasks, robots must delegate and coordinate their activities appropriately.
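
One mechanism for tolerating dysfunctional teammates is sketched below: robots that have been silent for too long are presumed failed, and their tasks return to the pool for re-delegation. The timeout value and all names are placeholders.

```python
# A sketch of heartbeat-based fault tolerance. The timeout and all
# names are illustrative placeholders.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a teammate is presumed failed

def prune_failed(last_seen, assignments, now):
    """Drop unresponsive robots; return their tasks to the unassigned pool."""
    failed = {name for name, t in last_seen.items()
              if now - t > HEARTBEAT_TIMEOUT}
    for task, robot in assignments.items():
        if robot in failed:
            assignments[task] = None  # eligible for re-delegation
    return failed

last_seen = {"elmer": time.time(), "taz": time.time() - 9.0}
assignments = {"find_ball": "elmer", "map_hall": "taz"}
print("failed:", prune_failed(last_seen, assignments, time.time()))
print("assignments:", assignments)  # map_hall is back in the pool
```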

References

1. S. Westphal (2001) "Robots Join Search and Rescue Teams," New Scientist, 19 September 2001. (Online) Accessed 09 March 2006.
2. R. Murphy (2004) "Human-Robot Interaction in Rescue Robotics," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews (special issue on Human-Robot Interaction), Vol. 34, No. 2, May 2004.
3. Committee on Science and Technology for Countering Terrorism, National Research Council (2002) Making the Nation Safer: The Role of Science and Technology in Countering Terrorism. Washington, DC: National Academies Press.
4. Computing Research Association (2003) Grand Research Challenges in Information Systems. Washington, DC: Computing Research Association.
5. R. Murphy (2004) "Lessons Learned in the Field in Robot-Assisted Search and Rescue," Safety, Security, and Rescue Robotics, Bonn, Germany.
6. R. Murphy & E. Rogers (2001) "Human-Robot Interaction: Final Report for the DARPA/NSF Study on Human-Robot Interaction." (Online) Accessed 09 March 2006.
7. R. Murphy, "A Quick Guide to Robots for Mine Disasters." (Online) Accessed 11 March 2006.
8. M. Bowling & P. McCracken (2005) "Coordination and Adaptation in Impromptu Teams," In Proceedings of AAAI-05.
9. S. Tejada, A. Cristina, P. Goodwyne, E. Normand, R. O'Hara, & S. Tarapore (2003) "Virtual Synergy: A Human-Robot Interface for Urban Search and Rescue," In Proceedings of the AAAI 2003 Robot Competition, Acapulco, Mexico.
10. P. Cohen & H. Levesque (1991) "Teamwork," Noûs, Vol. 25, No. 4, pp. 487-512.
11. T. Balch & L. Parker (2002) Robot Teams: From Diversity to Polymorphism. A K Peters, Ltd., Natick, Massachusetts.
12. S. Harnad (2003) "The Symbol Grounding Problem," Encyclopedia of Cognitive Science. Nature Publishing Group/Macmillan.
13. P. Davidsson (1993) "A Framework for Organization and Representation of Concept Knowledge in Autonomous Agents," In Scandinavian Conference on Artificial Intelligence-93. IOS Press.
14. E. Miller et al. (Eds.) "Semantic Web," W3C, 8 March 2006. (Online) Accessed 11 March 2006.

Budget Justification

Total Contractual Services Cost: $150
Total Commodities Cost: $150
Total Travel Costs: $500
Total Request: $800

Contractual Services

PyRo, the programming environment being used to simulate and test the communication protocol, is free to download [http://pyrorobotics.org/]. The ActivMedia robots used for the physical implementation generally cost around $20,000 apiece; however, this expense is avoided through the cooperation and generosity of the Department of Computer Science, which will provide the use of three of its robots.

To show proof of concept that the physical attributes of a robot can effectively be represented and communicated, various technical additions must be made to the robots. Items of interest include webcams for vision, Bluetooth adaptors for peer-to-peer communication, and flashlights for illumination, each ranging between $20 and $40 [http://www.newegg.com/]. A microphone and speakers have also been considered as notable interactive abilities, but these are already integrated into the ActivMedia robots and do not need to be purchased. A finalized list of items will not be known until I collaborate with the Senior Projects team in the Fall semester.

Commodities

For the scavenger hunt, robots must search the Engineering Building for a checklist of given objects, such as boxes, balls, cans, colors, people, and information. Cardboard boxes of various sizes can be purchased for $5-$20. Similarly, a package of a dozen foam balls costs about $20; many sizes will be needed. As part of the AAAI Challenge, robots were required to find a woman wearing a pink hat; following suit, a hat of a bright color will be purchased for $20. A variety of cans will be provided by the researcher. A package of colored construction paper costs around $1, providing a method for coloring different objects; it may also serve as a means of referencing and landmarking using a robot's color camera. Colored poster board is needed to indicate larger targets. Additionally, various materials are necessary in preparation for the URA Student Symposium poster session.

Travel

Registration for the AAAI Robot Competition and Exhibition costs $250. A block of rooms has been reserved for conference participants at a reduced rate of approximately $160 per night. Additional student funding for presenting at the conference is available from AAAI, the Undergraduate Research Academy, and the Department of Computer Science. Securing these funds will help cover travel and registration costs, but any remaining costs will be paid at the expense of the researcher.

Illustrations

Figure 1: A confined, sub-human-sized space at the WTC response. [2]

Figure 2: Examples of robots brought to the WTC response. [2]

Figure 3: Robots of different sizes (clockwise from upper left): shoebox-sized, carry-on-sized, lawnmower-sized, and an EOD bomb squad robot. [5]

Figure 4: A micro-robot useful for mine disasters. [7]

Figure 5: The small-size-league CMDragons robotic soccer team from RoboCup 2002.

Figure 6: A homogeneous team of Sony Aibos finding a victim in a test arena.

Figure 7: The current web, where links are not descriptive and resources are not classified.

Figure 8: The Semantic Web, where links are related to globally defined resources.

Figure 9: The CS Department's own ActivMedia robots (left to right): Elmer, Taz, and Marvin.