
Chapter 3. Robot’s Rules of Order > Bots in Legal Briefs

Bots in Legal Briefs

In the previous chapter, we looked at the many branches on the family tree of robot evolution. We also profiled some of the pioneering scientists and technologists who are feverishly engineering that tree, each focused on a different branch. From these various schools of robotic thought have emerged a number of operating principles (such as looking to nature for inspiration, or using human competition to accelerate robotic innovation). In this chapter, we’ll look at some of the “laws,” maxims, words of wisdom, and other pithy thought compressions that guide many robot builders. Some of these are from science fiction, some from engineering; some are whimsical, others more serious. They are all worth chewing over; hearty food for thought to keep you stoked as you think about, design, and build robots of your own.

Asimov’s Three (er...Four) Laws of Robotics

You can’t call yourself a sci-fi fan, a deep geek, or a robot builder if you aren’t familiar with Asimov’s Three Laws of Robotics:

  0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

  1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.

  3. A robot must protect its own existence, as long as such protection does not conflict with the Zeroth, First, or Second Law.

The laws first appeared, explicitly anyway, in the short story “Runaround” in 1942. The story was later reprinted in the wildly popular Asimov collection I, Robot in 1950. Asimov’s Laws were basically created as a literary device, something for Asimov to work off of as he tried to think intelligently and rationally about the future of robots (and how intelligent robots might interact with humans). The “Zeroth Law” appeared in a later story as a necessary addition to safeguard all of humanity (not just individuals) from robot aggression. In the real world, the Three Laws aren’t taken that seriously by most robot researchers, especially because we aren’t even close to having a robot that can parse the full grammatical import of the words in the sentences that make up the laws, let alone comprehend their meaning.

Some roboticists, such as BEAM (Biology, Electronics, Aesthetics, Mechanics) creator Mark Tilden, have even suggested that these laws would create laughably wimpy robots. As Dave Hrynkiw and Tilden point out in their book Junkbots, Bugbots & Bots on Wheels, “If an Asimovian robot has enough power to push a vacuum cleaner into your toe (assuming it could even recognize the difference between your toe and a toy lying on the floor), it’d be too nervous to get any practical work done.” Still, Asimov and his laws deserve their props. Just as the laws gave Asimov something to push against in writing his positronic robot stories, they’ve also inspired countless other sci-fi writers, and real-world robot builders. Which brings us (as a for instance) to Tilden’s Laws.

Tilden’s Laws

BEAM innovator Mark Tilden (see Heroes of the Robolution trading cards in Chapter 2, “Robot Evolution”) likes his robots a little more feral than Asimov did.

  1. A robot must protect its existence at all costs.

  2. A robot must obtain and maintain access to a power source.

  3. A robot must continually search for better power sources.

The more...ah...earthy expressions of these laws are:

  1. Protect thy ass.

  2. Feed thy ass.

  3. Move thy ass to better real estate.

BEAMbots are survivors. They are built to be hardy and to suit the environment in which they find themselves. This is one reason BEAM developers focus mainly on tried-and-true analog technologies, and why they look to biological inspirations (millions of years of evolution can’t be all bad). A fussy big-brained bot with wheels, cameras, multiple processors, and other high-end gear is not going to last very long in, say, a jungle environment. A robot built like a Rhinoceros Beetle, with relatively low-tech parts and primitive sense-act behaviors, is more likely to survive. The other main feature of Tilden’s Laws concerns power autonomy. Tilden sees a robotic future in which robots go about their (programmed) business without having to be fiddled with very often by human operators (see Figure 3.1). So far, in BEAM, this has translated to solar power as the best way of delivering this autonomy.

Figure 3.1. And you thought it was a pain when Fido mangled your slippers. Would bots based on Tilden’s Laws be a little too autonomous?

Moore’s Law

Moore’s Law was proposed by Gordon Moore, one of the founders of computer chip juggernaut Intel:

The number of transistors on a computer microprocessor (basically a measure of processing power) will double every eighteen months.

When he first presented his forecasts on computer chip manufacturing, in a 1965 issue of Electronics magazine, Moore said this doubling would occur every 12 months. That figure actually held true for a decade. In the mid-1970s, the “law’s” doubling period was stretched to 18 months, and that has held true ever since. Just when we think that manufacturers can’t possibly fit another transistor on a chip, some new breakthrough makes the impossible possible, and Moore’s Law remains in effect.
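To get a feel for how powerful steady doubling is, here’s a minimal sketch of the law’s math (the function name and starting figures are illustrative, not from the text):

```python
# Project growth under Moore's Law: a doubling every 18 months.
def moores_law(start_count, years, doubling_period_years=1.5):
    """Return the projected count after `years` of steady doubling."""
    doublings = years / doubling_period_years
    return start_count * 2 ** doublings

# Ten years of 18-month doublings is roughly a hundredfold increase.
print(round(moores_law(1, 10)))  # about 102
```

Run it with a 12-month doubling period instead (Moore’s original 1965 figure) and a decade multiplies the count by over a thousand, which is why the mid-1970s revision mattered.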


The perennial truth of Moore’s Law is impressive, but one might ask: Is there an equivalent law for digital storage capacity? Each year, more storage is available, for less money, on ever-shrinking storage media. In fact, storage capacity advances actually exceed Moore’s Law. In 1983, a 10MB (megabyte) hard drive (which was nearly the size of a small car, and a forklift was required to get it onto your desk) cost nearly $1,000. If 10MB cost that much in the ’80s, a modern 60GB (gigabyte) hard drive (which now sells for under $100) would cost $6,000,000! In the robot world, this storage boon translates to ever-more sophisticated control programs that can fit into tinier and tinier robot brains and require much less power.
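The $6,000,000 figure checks out with the chapter’s own numbers (treating 1GB as 1,000MB for round figures):

```python
# Check the storage-cost comparison: 1983 prices applied to a 60GB drive.
old_price_per_mb = 1_000 / 10       # $1,000 bought a 10MB drive in 1983
modern_drive_mb = 60 * 1_000        # a 60GB drive, with 1GB ~ 1,000MB
cost_at_1983_prices = old_price_per_mb * modern_drive_mb
print(f"${cost_at_1983_prices:,.0f}")  # $6,000,000
```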

Ohm’s Law

Ohm’s Law (named after German physicist Georg Ohm) is a formula used to figure out the interdependent relationships between voltage, current, and resistance in an electrical circuit:

One volt will push 1 amp of current through 1 ohm of resistance. Change a value, and they all change.

The basic formula is voltage (V) equals current (I) times resistance (R), or V=I × R (see Figure 3.2). If you know two of these values, you can calculate the third (I=V/R, R=V/I). We won’t go into this any further here (we’ll cover electronic fundamentals in Chapters 6–9), but knowing Ohm’s Law is extremely important to anyone doing work in electronics (and that includes us bot builders!).
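The three rearrangements above can be rolled into one little solver; this is just an illustrative sketch (the function name is made up), but the math is exactly the V=I × R triangle:

```python
# Ohm's Law: V = I * R. Given any two values, solve for the third.
def ohms_law(v=None, i=None, r=None):
    """Pass exactly two of voltage (volts), current (amps), and
    resistance (ohms); the missing quantity is returned."""
    if v is None and i is not None and r is not None:
        return i * r       # V = I * R
    if i is None and v is not None and r is not None:
        return v / r       # I = V / R
    if r is None and v is not None and i is not None:
        return v / i       # R = V / I
    raise ValueError("supply exactly two of v, i, r")

print(ohms_law(i=2, r=3))   # 6 (volts)
print(ohms_law(v=9, r=3))   # 3.0 (amps)
print(ohms_law(v=12, i=4))  # 3.0 (ohms)
```

For example, a robot motor drawing 2 amps through 3 ohms of winding resistance needs 6 volts pushed across it.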

Figure 3.2. It might look like a drug tablet, but this is Ohm’s Little Helper, a handy pie chart to help you remember how to do Ohm’s calculations (V=I × R, I=V/R, R=V/I).

Moravec’s Timeline

Carnegie Mellon robot researcher Hans Moravec (see Heroes of the Robolution trading cards, Chapter 2) sees machine intelligence as basically a hardware problem, or at least, a problem not solvable with the computing hardware of today. Using animal brainpower as a guide, and roughly calculating the processing power of various animal brains (in MIPS, or Millions of Instructions Per Second), Moravec has created a timeline for when machine intelligence will be possible (according to him, anyway). So, for instance, an insect brain can handle about 1,000 MIPS. By comparison, a modern Pentium 4 PC can deal with about 1,700 MIPS. Using Moore’s Law (see previous), Moravec believes that a computer will reach (and maybe even surpass) human MIPS power (approximately 100,000,000 MIPS) by 2050 (see Figure 3.3).

Figure 3.3. Moravec’s Timeline predicts that the computing muscle needed to handle human-level instruction processing will arrive around 2050. Image courtesy of Hans Moravec.


If voltage is abbreviated V and resistance is designated by an R, then why is current marked with an I? Well, just to confuse you and make you feel inferior, of course! Logic would dictate that it might be C (for current), but nooo... So, what does the I stand for? Bet you never guessed intensity.

The Turing Test

Considered to be one of the founding fathers of digital computing, British mathematician Alan Turing came up with this test in the 1950s:

If a human judge engages in a conversation with two parties, one human and one machine, and cannot tell which is which, then the machine is said to pass the Turing Test.

The idea is simple: If a human being can interact with another human intelligence and a machine “intelligence” (through written communications), and is unable to tell the difference, the machine is, for all intents and purposes, intelligent (see Figure 3.4).

Figure 3.4. From party game to artificial intelligence assessment, the Turing Test lives on. Gender guessing is optional.

Over the years, there has been growing criticism of the test. Does effectively simulating conversation equal intelligence? Can’t a machine be smart without having to engage in conversation? A 10-year-old child or an illiterate person wouldn’t pass the Turing Test. Does that make them stupid? Although there is an annual competition (called the Loebner Prize) to find the most “human-like” machine, to date, no machine has passed the Turing Test.

Amdahl’s First Law

Offered in Kenn Amdahl’s hysterical and enlightening book, There Are No Electrons: Electronics for Earthlings (see Chapter 6, “Acquiring Mad Robot Skills” and Chapter 11, “Robot Books, Magazines, and Videos”), this law basically reminds us not to mistake scientific models of the world for the world itself:

Don’t mistake your watermelon for the universe.

If you use a watermelon to describe the universe to children or particularly slow adults (“the universe is like a watermelon, and the stars are its seeds”), it’s easy for them to start thinking “watermelon” whenever they hear “universe.” Models can (and often do) become conceptual traps. The idea behind this law, and the inherent dangers of models and analogies, has been expressed in numerous other ways. Alfred Korzybski, the father of General Semantics, was famous for the quote, “The map is not the territory, the name is not the thing named.” This is the same basic idea. A related maxim from the cyberneticist Stafford Beer: “Models are not true or false, they are more or less useful.” Let your neurons fire that one for a few minutes!

Brooks’s Research Heuristic

In Rodney Brooks’s book Flesh and Machines, he reveals how he came upon many of his radical ideas regarding robots and AI:

Figure out what is so obvious to all of the other researchers that it’s not even on their radar, and put it on yours.

Essentially, Brooks would look at how everyone else was tackling a given problem, and what assumptions were so implicit to them that these assumptions weren’t even being questioned. Brooks would then question them.


The Turing Test was actually inspired by a party game. In the game, participants try to guess the gender of players (hidden in another room) by asking written questions and reading answers sent back to them. In Turing’s original proposal for the test, he had the human participant pretending to be the opposite gender (the machine was not asked to switch hit), although this feature was quickly dropped.


If you’d like to know more about the Loebner Prize, check out the competition’s Web site (www.loebner.net). It’s also worth doing a Web search on Hugh Loebner, creator of the prize, to read up on some of the controversy surrounding him and the contest. And you thought the Turing Test had its critics!

Braitenberg’s Maxim

This idea lies at the heart of Valentino Braitenberg’s groundbreaking book:

Get used to a way of thinking in which the hardware for realizing an idea is not as important as the idea itself.

Braitenberg’s book, Vehicles: Experiments in Synthetic Psychology (see Chapter 11) is a series of thought experiments using hypothetical autonomous robot vehicles to demonstrate increasingly complex, lifelike behaviors. What’s amazing, and a testament to this type of freeform thinking, is how useful these ideas have proven in real-world robotics (and even in possibly understanding the building blocks of human psychology). Braitenberg’s “vehicles” have inspired many real-world robot designs. Our “Mousey the Junkbot” project in Chapter 8 is basically Vehicle 2, the “Fear and Aggression” robot, described in Braitenberg’s book.
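In the spirit of Braitenberg’s maxim, the idea behind Vehicle 2 can be captured in a few lines without any hardware at all. This is a hedged, minimal sketch (the function name and sensor values are illustrative): two light sensors drive two motors, and the only design decision is whether the wiring is straight or crossed.

```python
# A minimal sketch of Braitenberg's Vehicle 2: two light sensors, two
# motors. Uncrossed wiring (2a, "fear") makes the vehicle veer away from
# a light source; crossed wiring (2b, "aggression") turns it toward one.
def vehicle2(left_sensor, right_sensor, crossed):
    """Return (left_motor, right_motor) speeds for one time step."""
    if crossed:
        return right_sensor, left_sensor  # 2b: each sensor drives the far motor
    return left_sensor, right_sensor      # 2a: each sensor drives the near motor

# A light off to the vehicle's left makes the left sensor read stronger.
left, right = vehicle2(0.9, 0.2, crossed=False)
assert left > right   # fear: faster left motor, vehicle veers right, away

left, right = vehicle2(0.9, 0.2, crossed=True)
assert right > left   # aggression: faster right motor, vehicle turns toward
```

That’s the whole trick: complex-looking “emotional” behavior from a one-line wiring choice, which is why these thought experiments translate so readily into real builds like Mousey the Junkbot.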

The Krogh Principle

August Krogh (1874–1949) was a Danish physiologist who wrote:

For a large number of problems there will be some animal of choice, on which it can be most conveniently studied.

He strongly believed that studying the structures of the natural world could solve most engineering problems encountered in the human world. Many roboticists, such as Robert Full of the Poly-PEDAL Lab (see Chapter 2), have been inspired by Krogh’s working principle.

The Sugarman Caution

A colleague of mine, Peter Sugarman, a pioneer of pre-Web hypermedia and a constant supplier of potent bumper sticker wisdom (he reads way too many comic books), once told me this one after my hard drive fried itself in the middle of a hellish book deadline (no, not this hellish book deadline):

A computer can smell your fear.

The animistic paranoia behind this maxim suggests that machines will heartlessly pick the worst possible time to crap out on you (see Figure 3.5). And the more nervous and uptight you are around them, the more likely they are to check out. So relax, stay sharp, and back up frequently!

Figure 3.5. Caution: Your high-tech machines (including robots) are waiting for the worst possible time to fail you. Be prepared!


(Kenn) Amdahl’s First Law is not to be confused with (Gene) Amdahl’s Law. The more widely known Amdahl’s Law deals with the performance trade-offs of a single (large) computer processor versus multiple parallel processors.
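For the curious, Gene Amdahl’s Law is usually written as speedup = 1 / ((1 − p) + p/n), where p is the fraction of a program that can run in parallel and n is the number of processors. A quick illustrative sketch (the function name and figures are my own, not from the text):

```python
# Gene Amdahl's Law: the serial fraction of a program caps the speedup
# you can get from parallel processors, no matter how many you add.
def amdahl_speedup(p, n):
    """Speedup for parallel fraction p (0..1) on n processors."""
    return 1 / ((1 - p) + p / n)

# A 95%-parallel program: decent gains early, but a hard ceiling of 20x.
print(round(amdahl_speedup(0.95, 10), 2))         # about 6.9x on 10 CPUs
print(round(amdahl_speedup(0.95, 1_000_000), 2))  # about 20x, even with a million
```

The punch line is that the 5% serial remainder, not the processor count, sets the limit, which is why the single-big-processor versus many-small-processors trade-off was worth a law of its own.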
