Abstract
The annual RoboCup soccer competition is an excellent
opportunity for our robotics and agent research. We view the
competition as a rigorous testbed for our methods and a unique
way of validating our ideas. After two years of competition,
we have begun to understand what works (we won the competition
in Tokyo 97) and what does not (we failed to advance to
the second round in Paris 98). This year, we continue to make
improvements in hardware and software, including a wireless
communication network and improved soccer-playing behaviors.
This paper presents an overview of our goals in RoboCup, our
philosophy in building soccer-playing robots, and the methods
we are employing in our efforts.
Philosophy and Goals
Our primary goal in the RoboCup project is to build
autonomous physical robots that can function robustly in a challenging
environment. Obviously, this implies two things about our robots:
Requirement 1: They must be autonomous.
Requirement 2: They must be robust.
These requirements have significant implications for the methodology we
use to build and program our robots. In particular, Requirement 1
implies that processing must be distributed and on-board: no remote
computing or centralized control is allowed. Requirement 2 implies
that algorithms and hardware must be simple enough to guarantee
reliability. Indeed, our guiding philosophy in building these robots is
to favor robustness over sophistication.
Adhering to the requirements outlined above, our efforts can
be decomposed into three specialties: hardware, vision, and
learning. The sections below describe, in turn, the hardware of our
physical robots, our vision system, and our learning efforts, followed
by a brief description of our soccer-playing approach.
Hardware
Our robots are constructed from scratch by our team. The
flexibility to modify our custom-built robots gives us an added
dimension for experimentation. As we learn more about what
capabilities are needed by an autonomous physical agent interacting
with its environment, we are able to easily adapt and extend our
custom-built robots. For example, in the past two years, we have been
able to add dual cameras, replace motors, and redesign the base. The
next section describes the hardware of our robots in detail.
The base of each robot is a modified 4-wheel, 2x4 drive DC
model car. Specifically, we have lowered and widened the base for
added stability. The wheels are independently controlled, allowing
in-place turning and easy maneuverability. We have replaced the stock
motors with stronger, heavy-duty motors to support the increased
weight of the car. Mounted above the base is an on-board computer:
an all-in-one 133 MHz 586 CPU board that can be extended with various
I/O devices. Attached to the top of the body are twin commercial
digital color QuickCam cameras made by Connectix Corp. One faces
forward, the other backward. Also, we have affixed fish-eye lenses to
each camera to provide a wide-angle view of the environment. The two
drive motors are independently controlled by the on-board computer
through two serial ports. The hardware interface between the serial
ports and the motor control circuits is custom built by our team.
Images from the cameras are sent to the computer through a
parallel port. On board are three batteries: one for each of the two
drive motors and one for the CPU and cameras.
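To make the control path concrete, the following is a minimal
sketch, in Python, of commanding the two drive motors over the serial
ports. The port names, baud rate, and one-byte command format are
illustrative assumptions, not the actual protocol of our custom-built
interface.

    import serial   # pyserial library

    left_motor  = serial.Serial("/dev/ttyS0", 9600)   # first serial port (assumed)
    right_motor = serial.Serial("/dev/ttyS1", 9600)   # second serial port (assumed)

    def set_speeds(left, right):
        """Send one signed speed byte (-100..100) to each drive motor."""
        left_motor.write(bytes([left & 0xFF]))    # two's-complement encoding
        right_motor.write(bytes([right & 0xFF]))

    set_speeds(60, 60)     # drive straight ahead
    set_speeds(60, -60)    # turn in place: the wheels are driven independently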
This year, we plan to incorporate additional hardware. In
particular, we are going to extend the sensory capabilities of the
robot by adding touch sensors to the body, allowing the robot to avoid
obstacles more effectively. We are also going to add shaft encoders,
which measure the actual rotation of the wheels. We hope this will
allow the robot to move about more accurately.
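As an illustration of how shaft encoders support more accurate
movement, the following sketch dead-reckons the robot's pose from
encoder ticks on a differential drive. The encoder resolution and
wheel geometry are assumed values, not measurements of our robots.

    import math

    TICKS_PER_REV = 512     # encoder counts per wheel revolution (assumed)
    WHEEL_DIAM    = 3.0     # wheel diameter in inches (assumed)
    WHEEL_BASE    = 8.0     # distance between the wheels in inches (assumed)

    def update_pose(x, y, theta, left_ticks, right_ticks):
        """Advance the pose estimate (x, y in inches, theta in radians)."""
        circ = math.pi * WHEEL_DIAM
        dl = left_ticks / TICKS_PER_REV * circ    # left wheel travel
        dr = right_ticks / TICKS_PER_REV * circ   # right wheel travel
        d = (dl + dr) / 2.0                       # travel of the robot center
        dtheta = (dr - dl) / WHEEL_BASE           # change in heading
        mid = theta + dtheta / 2.0                # midpoint heading approximation
        return x + d * math.cos(mid), y + d * math.sin(mid), theta + dtheta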
Vision
We view color-based vision as one of the salient challenges of the
RoboCup initiative and one of the scientific issues on which we intend
to focus. Building an accurate, reliable vision system that can work
under a variety of conditions is one of our team's primary goals. We
are continually improving this component of our robots. The following
describes our current vision system.
Our vision system is a custom-built, specialized software
component developed specifically for detecting balls, goals and other
robots. Visual information is extracted from an image of 658x496 RGB
pixels, received from the on-board camera via a set of basic routines
from a free package called CQCAM, provided by Patrick Reynolds from
the University of Virginia. Since the on-board computing resources
for an integrated robot are very limited, it is a challenge to design
and implement a vision system that is fast and reliable. In order to
make the recognition procedure fast, we have developed a sample-based
method that can quickly focus attention on certain objects. Depending
on the object to be identified, the method automatically selects a
certain number of rows or columns in the area of the frame where the
object is most likely to be located. For example, to search for a ball
in a frame, the method selectively searches only a few horizontal rows
in the lower part of the frame. If some of these rows contain segments
that are red, the program reports the existence of the ball. Using
this method, the speed of reliably detecting and identifying objects,
including image capture, is greatly improved; we have reached frame
rates of up to 6 images per second.
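The following sketch illustrates the sample-based search, using
the ball as an example. The sampled rows, the run-length threshold,
and the is_red pixel test (a version of which appears in the sketch
after the next paragraph) are illustrative choices rather than our
exact parameters.

    def find_ball(image, width, height):
        """Scan a few rows in the lower part of the frame for a red
        segment; return the x coordinate of its center, or None."""
        for y in range(height * 2 // 3, height, 16):   # sample every 16th row
            run_start, run_len = 0, 0
            for x in range(width):
                if is_red(image[y][x]):        # color test on a single pixel
                    if run_len == 0:
                        run_start = x
                    run_len += 1
                    if run_len >= 5:           # segment long enough to be the ball
                        return run_start + run_len // 2
                else:
                    run_len = 0
        return None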
To increase the reliability of object recognition, the above
method is combined with two additional processes. One is the
conversion of RGB to HSV, and the other is "neighborhood checking" to
determine the color of pixels. We convert RGB to HSV because HSV is
much more stable than RGB when lighting conditions change slightly.
Neighborhood checking is an effective way to deal with noisy pixels
when determining colors. The basic idea is that pixels are not
examined individually for their colors, but rather grouped into
segment windows, with a majority-vote scheme determining the color of
each window. For example, if the window size for red is 5 and the
voting threshold is 3/5, then a line segment of "rrgrr" (where r is
red and g is not red) will still be judged as red.
An object's direction and distance are calculated from its
relative position and size in the image. This is possible because the
sizes of the ball, goals, walls, and other objects are known to the
robot at the outset. For example, if an image contains a blue
rectangle 40x10 pixels in size (width by height) centered at x=100 and
y=90, then we can conclude that the blue goal is currently 10 degrees
to the left and 70 inches away.
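The following sketch shows the underlying pinhole-camera
geometry: the apparent width of a known object gives its distance, and
its horizontal offset from the image center gives its bearing. The
focal length and goal width used here are assumed values, not our
calibration.

    import math

    FOCAL_LEN_PX = 160.0   # effective focal length in pixels (assumed)
    GOAL_WIDTH   = 18.0    # real width of the goal in inches (assumed)

    def goal_bearing_and_distance(center_x, pixel_width, image_width=658):
        """Return (bearing in degrees, distance in inches) to the goal.
        A negative bearing means the goal is left of the image center."""
        offset = center_x - image_width / 2.0          # horizontal pixel offset
        bearing = math.degrees(math.atan2(offset, FOCAL_LEN_PX))
        distance = GOAL_WIDTH * FOCAL_LEN_PX / pixel_width
        return bearing, distance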
A significant drawback of our current vision system is its
sensitivity to lighting conditions. Its parameters must be hand-tuned
to a specific environment, and this is a time-consuming task. We are
currently exploring more automated approaches to reduce this burden.
Learning
Any robot situated in a dynamic environment must be able to
discover new things at run-time. We view autonomous learning as our
holy grail and the dream we are striving for. We also consider
it the most difficult scientific issue. Many well-known learning
algorithms work well in simulation or on a desktop, but what happens
when one attempts to run them on a physical, situated robot? In our
RoboCup project, we have found that the gap is large. In particular,
our efforts in learning have been limited by the previous two areas,
hardware and vision. But we feel we are finally reaching the critical
mass in those areas needed to implement on-board learning algorithms. Some
of our team members are actively involved in research in the field of
multiagent learning and our team has significant expertise in the area
of agent learning. We are trying to apply this work to our physical
robots as a validation of the research.
Programming Approach
Our robotic soccer team consists of four identical robots.
They all share the same basic hardware, but they differ in their
behavior programming. We have developed three specialized roles: the
forward role, the defender role, and the goalie role. Each role
consists of a set of behaviors organized as a state machine. For
example, the forward role contains a shoot_ball behavior, a
dribble_ball behavior, a search_for_ball behavior, and so on. The
state transitions
occur in response to percepts from the environment. For example, the
forward will transition from the search_for_ball behavior to the
shoot_ball behavior if it detects the ball and the goal from its
sensory input. At game time, each robot is loaded with the program
for the role it has been assigned. Note that each robot has the
integrated physical abilities to play any role (e.g., detect_ball,
move_forward, turn). We feel this is a natural, flexible, and
efficient approach to programming the robots to play soccer.
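As an illustration, the following is a minimal sketch of the
forward role as a state machine over the behaviors named above; the
percept tests stand in for the vision outputs described earlier.

    class ForwardRole:
        """Forward role: a small state machine over soccer behaviors."""

        def __init__(self):
            self.state = "search_for_ball"

        def step(self, percepts):
            """Choose the next behavior from the current state and percepts."""
            if self.state == "search_for_ball" and percepts["ball_seen"]:
                # Ball found; shoot if the goal is also visible, else dribble.
                self.state = "shoot_ball" if percepts["goal_seen"] else "dribble_ball"
            elif self.state != "search_for_ball" and not percepts["ball_seen"]:
                self.state = "search_for_ball"   # lost the ball; resume searching
            return self.state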
Conclusion
In summary, we have stated that our primary goal is to build
autonomous, robust physical robots. We aim to accomplish this goal by
focusing on three important areas: physical hardware, robot vision,
and agent learning. We view RoboCup as an exciting, strenuous testbed
for our project and hope to prove the viability of our ideas and
approaches in the soccer tournaments.