Issues with your account? Bug us in the Discord!

About the Liandra's Weapons systems *** SPOILERS ***

samuelksamuelk The Unstoppable Mr. 'K'
JMS has even commented on how the following article, posted on Usenet, could be very useful if the LotR becomes a series.

Read this:

[quote]

From: Charles J. Cohen
Organization: University of Michigan AI Lab
Date: Monday, January 21, 2002 3:01 PM
Newsgroups: rec.arts.sf.tv.babylon5.moderated
Subject: Review of Liandra's Weapons System


This article is a review of the Liandra's gesture and virtual reality
based fire control system as seen on Babylon 5: Legends of the
Rangers. I wanted to review it because gesture recognition based
systems ares my main area of study, and I am always excited to see
representations of gesture based systems television and films.

This review by its nature contains spoilers, but I will be avoiding
major plot points as much as I can.

Quick review: I believe that this Liandra's method of weapon control
is a well designed system to allow a person to actively target ships,
while ensuring appropriate system response (i.e. weapons firing) when
desired without false positives (i.e. a weapon firing when it is not
desired). There are some clear areas for enhancing the system, such
as adding higher level gesture commands, as well as voice.

Long review.


My Qualifications

So, who am I to be doing such a review? Human-Computer Interaction
(HCI), and specifically gesture recognition, are my area of expertise.
While I'm not *the* foremost expert in this area, I am well versed in
the field. My Ph.D. in gesture recognition (Electrical Engineering
Systems, minors in Artificial Intelligence and Robotics) is from the
University of Michigan in 1996 (thesis: Dynamical System
Representation, Generation, and Recognition of Basic Oscillatory
Motion Gestures, and Applications for the Control of Actuated
Mechanisms). Since then I've published a variety of papers in the
fields of gesture recognition, HCI, and machine vision [1], have been
interviewed by a small number of news publications [2], and have given
talks on this subject [3]. My company (Cybernet Systems) has produced
a very basic gesture/tracking software product called "Use Your Head"
[4], that allows a person to use their head motions as an additional
game device (which could be considered a precursor to the Liandra's
targeting system!). I've installed prototype gesture recognition
systems for NASA and the Army [5], and still have government funding
to continue this research.

Again, I'm not *the* authority in this area, but I like to consider my
an authority. Since I also love Science Fiction in general, and
Babylon 5 in specific, I decided it would be fun to review the
Liandra's system.


Overview of the Liandra's Fire Control System.

This overview is taken from my viewings of Babylon 5: Legends of the
Rangers. I will try to point out three things: What I know, what I
think I know, and what I don't know. That is, I'll try to keep
assumptions to a minimum, and if I can't, I will at least point out
what it is I'm assuming. I'd appreciate any feedback letting me know
what I got wrong.

What I know:

1. The weapon's control officer (WCO) is suspended in a full zero
gravity Virtual Reality (VR) environment.

2. The WCO has three degrees of rotational motion (roll, pitch, and
yaw) centered around her center of gravity. That is, she can
rotate herself to any orientation to view the combat environment as
desired.

3. The VR environment is similar to that found on the Minbari command
ships [6]. That is, a full 3D representation of the battle space,
with gestures used to focus attention on various aspects of the
battle environment, terrain, and assets.

4. Each of the WCO's extremities (hands and feet) are linked to
specific weapons on the Liandra.

5. Ship tracking and targeting is performed using an eye-tracking
system.

6. Multiple ships could be targeted off of the eye-tracking system.

7. A "fire" gesture consists of pumping a limb (arm or leg). There is
a limited vocabulary in the set of gesture commands. The direction
of fire is the direction of the gesture, matched up to the
eye-tracking software to determine the target. The ship's computer
actually aims/targets the weapons.

8. A list of possible targets from eye-tracking is kept, so that if
the WCO points at a ship behind her current viewing, that ship will
be attacked.


What I think I know:

1. The gravity isn't quite zero g because the WCO remains centered in
the VR environment. Therefore, some forces must be acting upon her
to keep her centered and not bumping into walls.

2. Although only one eye was shown to track ships, I believe that both
eyes were probably tracked to allow full three-dimensional target
acquisition (that is, both eyes would be needed to determine which
ship should be targeted when two ships are in the same
line-of-site, the near one or the far one).

3. Extra battle information was either being presented (drawn with
light) on to the WCO's eyes or on the VR screens directly, so she
saw more than what we saw. This information was probably targeting
information, status, etc.

4. It looks like the rate of fire of the Liandra's weapons were based
off of the speed the WCO could pump her limbs.

5. The system never misinterpreted a command. That is, a shot was
fired only when the WCO wanted it to fire, and it did not fire when
the WCO did not want it to fire. In other words, it is a very
robust system.


What I don't know:

1. I could not tell if voice recognition was used at all.

2. Can the WCO's firing commands be overridden from the bridge?

3. What happens if the ship is so damaged as to lose artificial
gravity?

4. Aside from firing gestures, are there other hand/body gestures
available to the WCO?


Discussion:

As stated in the Lurker's Guide to Babylon 5, this fire control system
is probably based on taking advantage of the Ranger's physical combat
training [7]. However, I think there is more to it than that.

What would a WCO want from a human-computer interface for a battle
system? I believe it is the following:

1. Quick identification of target(s) to fire at.

2. Full control of all weapons simultaneously.

3. Instant information, but only that which is desired (too much
information is just as bad as too little).

4. Ability to view the entire battle-space, with proper orientation and
perspective (that is, in a way we humans (and apparently Minbari)
can understand it).

It is my opinion that the VR/gesture system as portrayed in B5:LotR
achieves all the above desires, better than a keyboard, mouse, or
button interface could. Specifically:

1. Quick target identification. Without using eye-tracking and
gesture, to target a ship (among multiple targets), either a mouse
must be moved over the iconic representation, or multiple
keyboard/button commands must be given to cycle through to the
target. If touch screen capability is allowed (pointing at a
specific ship), then that is just an instantiation of the Liandra's
gesture system. The WCO simply looks at a target, and that target
is the one a weapon will fire at if the gesture command is given in
the target's direction. Multiple targets could be tracked this
way, even the huge number of space mines that were targeted and
destroyed (but see more below).

2. Full simultaneous weapon control. Each weapon was linked to the
WCO's limbs. For example - left cannon = left arm. With practice,
using the Liandra's ship board weapons would be just like using any
other hand held weapon in combat, which is what the Rangers excel
at. A question that arises is what if there are more than four
weapons on a ship - how are they controlled? (See Improvement's
below.)

3. Instant information. The VR environment probably provides more
information than just a camera view. Status of enemy
power/weapons, full location, various weapon ranges (for enemy,
friend, and the Liandra herself), etc., would all be available and
either drawn on the eye or displayed on the VR screen.

4. Full battle-space view. I think this my even be my favorite part of
the system. Current battlefield commanders have a difficult time
viewing a battle-space environment. Here, the WCO just has to rotate
her body/head and she can see everything, and still keep relative
orientations of friends and foes in view. Yes, she has to be in
excellent shape, but that is not unheard of in the military.


So, overall, I do find that this weapon interface is a viable one, and
has definite advantages over traditional computer input devices.
However, there is always room for improvements, which is my next
section.


Improvements and Issues:

First, a note: the improvements and issues I list have probably
already been thought of by the Babylon 5 staff, and just not shown due
to budget/time/plot considerations. Also, these are just my opinions,
and I could easily see other people (or weapon officers) disagreeing
with me. But then again, that is why we have discussion groups!

I am a firm believer that gestures will be an important input device
in the future. However, I do not believe that gestures are the "be all
and end all" in human-computer interaction devices. For example, I
would not use gestures to replace a computer mouse (which is great for
simple pointing, clicking, and pull-down menu operations) because,
well, we already have a device that works as well as a computer mouse
- the computer mouse itself!

So, while I think that gestures as shown works well as an input
device for the WCO, I think more is needed, and in this case, that
would be voice recognition. Now, I'm not talking about the Star Trek
type of voice recognition which can parse sentences and never make a
mistake. I'm referring to specific limited vocabulary of voice
commands (just like the limited vocabulary of gesture commands used)
to aid in controlling the "state" of the system. Here is where I think
voice recognition would be useful:

1. Change weapons. If there were multiple weapons on a ship, then
voice commands would allow the WCO to change weapon/limb
configurations instantaneously.

2. Change firing modes. From the movie, it seemed that one shot was
fired per pumping gesture. Therefore, to take out the dozens of
mines, the WCO had to flail around to keep firing at the closest
(or most dangerous) targets. Instead, a word could be used to set
up continuous firing.

3. Switch gestures. Instead of using the 'pumping' gesture to attack
the mines, instead each finger could be attached to the same or
different weapons. Then only faster finger flicks would be needed
to fire the weapon. This would probably lead to more false
positives, but when you want to all out fire, that usually isn't a
problem.

Voice commands, as well as meta-gesture commands to achieve the same
result as some voice commands, would add great utility to the
system. For #3 above, a different gesture could have been used to fire
a weapon repeated at the mines. Training would be needed, but not
much more than is probably required now.

Another issue is multiple WCOs to handle a larger number of weapons.
I wonder if bigger ships would require multiple officers, and how they
would interact. Maybe their relative orientation in the VR gravity
environment would correspond to the relative location of the weapon
banks.

I noticed gestures being used to control the VR environment. That is
fine, but care must be taken to only use *purposive* gestures. It
would be unfortunate if a random gesture (such as one used to change
the WCO's orientation) resulted in the system performing an unwanted
action. This is why, for example, fingers weren't used to control
weapons during normal combat operations.


Is such as system possible?

Yes! Well, okay, we don't have the anti-gravity yet. If you do, please
email me - we'll do lunch.

But the eye-tracking, body tracking, and gesture recognition systems
are not that far off from what was shown on B5:LotR.

Gesture recognition and body tracking is my area, so let me discuss
that first. The system we've developed at Cybernet can do full body
tracking in complex unstructured backgrounds. We can recognize hand
and body motions (not American Sign Language) - specifically the types
performed by the CWO of the Liandra! That is, repeatable hand/arm
motions, similar to those used by Army scouts, construction crane
operators, and the like. It is beyond the scope of this review to
explain the various mathematical methods of gesture recognition
(geometric, Hidden Markov Model, dynamic based, etc.). You can look
up my dissertation if you really want a full overview!

As a point of information, tracking is a much harder problem than
gesture recognition. Tracking needs to work in a variety of lighting
conditions, backgrounds, and targets (skin color, clothing, hair,
etc.). This problem is not yet completely solved, but it does get
better every year, to the point that products can be made now.

For those interested in gesture recognition (and associated tracking
software), head on over to the Gesture Recognition Home Page [8].

Some commercial body tracking devices are listed below. Many of these
are used by computer game developers to track athlete's movements for
their sports games. Note that all of these are 'tagged' trackers, that
is, something must be worn on the body for tracking to occur. For
Cybernet's gesture recognition system (and for other systems out
there), untagged systems are used and preferred, although they are much
less accurate.

Manufacturer Product Method Output
Cybernet Firefly IR Optical 3D position

Ascension MotionStar DC magnetic field 6DOF position and
orientation

Northern Opto Trak IR Optical 3D position
Digital Inc. 3020

Intersense IS-300 Inertial 3D 6DOF

Polhemus Ultra Trak AC magnetic field 6DOF


There are a small number of companies that produce eye-trackers. While
of course not on the level of fidelity as shown in Babylon 5
(specifically, in order to work the camera has to be extremely close
to the person's eye, which was not the case in B5:LotR!), these
eye-tracking systems are pretty robust. Companies include AmTech,
Applied Science Laboratories, Cybernet, DBA Systems, LC Technologies,
Microguide, NAC, SensoMotoric Instruments [9]. Methods used include
CCD Line Scan cameras, Infra-red oculography, and video imagining.
Precision for these systems is typically at around 0.5 degrees or
less, with a 40 degrees (though some have 80 degrees). The sampling
rate can be anywhere for 50 Hz to 1,000 Hz (with 50-60 Hz typical).

For the Liandra, I would imagine a large array of camera like devices
with an extremely high resolution, sampling at 1,000 Hz or more.


Conclusion:

I do think that this gesture based interactive fire control system for
the Liandra is not only a viable option for a battle environment, it
might even be optimal. Fast accurate targeting, robust weapon
control, full view of the environment, and instant information are all
part of this system. With the addition or showing of meta-control
using voice or other gestures, I think this system would be one that
would practical, even for the control of today's Uninhabited Combat
Aerial Vehicles.

Thank you for reading my article. Comments are always welcome. If you
wish to respond to me directly, please use my personal email of
charles@umich.edu. The work address below should be used only for low
volume work related messages.

Charles J. Cohen, Ph.D.
Vice President, Research and Development
Cybernet Systems Corporation
ccohen@cybernet.com [url="http://www.cybernet.com"]www.cybernet.com[/url]

Footnotes:

[1] Some of my papers and talks are:

Program Chair: Applied Imagery Pattern Recognition 2001 - Analysis and
Understanding of Time Varying Imaging. Cosmos Club, Washington, DC,
October 10-12, 2001.

Cohen, Charles J., Glenn Beach, and Gene Foulk. "A Basic Hand Gesture
Control System for PC Applications." Applied Imagery Pattern
Recognition 2001 - Analysis and Understanding of Time Varying
Imaging. Cosmos Club, Washington, DC, October 10-12, 2001.

Cohen, Charles J. "Gesture Recognition Interface for Controlling
Virtual Displays." Virtual Design Technology and Applications.
Somerset Inn, Troy, MI, 15 November 2000.

Cohen, Charles J., Glenn Beach, Doug Haanpaa, and Chuck Jacobus. "A
Real-Time Pose Determination and Reality Registration System." SPIE
AIPR'99 Conference. Washington, DC, 13-15 October 1999.

Cohen, Charles J., Glenn Beach, Brook Cavell, Gene Foulk, Jay
Obermark, and George Paul. "The Control of Self Service Machines
Using Gesture Recognition." SCI'99 and ISAS'99 Conference. Orlando,
FL, 31 July 1999 - 4 August 1999.

Beach, Glenn, Charles J. Cohen, Jeffrey Braun, and Gary Moody. "Eye
Tracking System for Use With Head Mounted Displays." IEEE SMC'98
Conference. San Diego, CA, 11-14 October 1998.

Cohen, Charles J., Glenn Beach, George Paul, Jay Obermark, and Gene
Foulk. "Issues Of Controlling Public Kiosks And Other Self Service
Machines Using Gesture Recognition." IEEE SMC'98 Conference. San
Diego. CA, 11-14 October 1998.

Conway, Lynn and Charles J. Cohen. "Video Mirroring and Iconic
Gestures: Enhancing Basic Videophones to Provide Visual Coaching and
Visual Control." IEEE Transactions on Consumer Electronics, May 1998.

Obermark, Jay, Charles Jacobus, Charles Cohen, and Brian George.
"Building Terrain Maps and Virtual Worlds from Video Imagery."
AeroSense 1998. Orlando FL, 13-17 April 1998.

Conway, Lynn and Charles Cohen. "Apparatus and Method for Remote
Control Through the Visual Information Stream." U.S. Patent 5,652,849,
29 July 1997.

[2] For example: New York Times, 31 August 2000, buried on page D7:
"A Wave of the Hand May Soon Make a Computer Jump to Obey" by Anne
Eisenberg.

[3] Cohen, Charles J. "The Bleeding Edge: New Technologies, New Ways of
Learning." SchoolTech Expo. Chicago Hilton & Towers, Chicago, IL,
17-20 October 2001.

[4] [url="http://www.gesturecentral.com/"]http://www.gesturecentral.com/[/url]

[5] Our current Army project is with STRICOM to allow training of
their scouts in their Dismounted Infantry Semi-Automated Forces
(DISAF). See [url="http://source.asset.com/orl/disaf/"]http://source.asset.com/orl/disaf/[/url] for details of their
system.

[6] See the Babylon 5 episode "Shadow Dancing."

[7] [url="http://www.midwinter.com/lurk/guide/117.html"]http://www.midwinter.com/lurk/guide/117.html[/url]

[8] [url="http://www.cybernet.com/~ccohen/"]http://www.cybernet.com/~ccohen/[/url]

[9] This data is about a year old, so I can't guarantee if any of the
companies are still around. Well, except for ours.

[/quote]

Comments

  • SanfamSanfam I like clocks.
    That sure is an informative, not to mention very well written little discovery you made there!
  • Falcon1Falcon1 Elite Ranger
    Yeah you sure are right there Sanfam. Thats was an excellent read! Even JMS is taking it on board. I can't wait to see more of this system in use in the new series! [img]http://216.15.145.59/mainforums/wink.gif[/img]

    ------------------
    'The future is all around us' G'kar
    'I have no surviving enemies! None what so ever!' Galen

    Visit my B5 site at: [url="http://www.nialb5.com"]B5 site[/url].
  • Jon_SJon_S Earthforce Officer
    A nice piece.

    As a quick real world example of eye tracking, the AH-64 Apache attack helecoptor, uses a head / eye tracking system to handle weapons targeting. The chaingun (when active) tracks l/r up/down following the gunners head, and aims where the gunner is focused on.

    A similar system is being considered / designed for next generation fighters which mount air to air missile capable to attacking targets up to 90 degrees away from the centerline of the aircraft. Having the missle sensors track where the pilot is looking is a good way of ensuring that the missle can get a lock on the correct target given its very wide field of view.


    Just thought some examples of current systems that employ some of the same techniques as the Lisandra's weapons systems would be interesting.
Sign In or Register to comment.