How Computers Work

How Computers Work Part 9

With all the competing operating systems floating around in the world, it is quite amazing that any productive uses have ever been found for modern-day computers. Imagine, for no particular reason, Bill Gates and Steve Jobs in a seedy downtown bar fighting it out during amateur mud wrestling night. Sure, it can be fun to watch, but when the match is over and the beer is digested, very little gets resolved. All that remains is a mildly disturbing image of two pasty white computer geeks cleaning mud from their various nooks and crannies. Despite this divergence in technology, one concept has focused the computer industry on a common goal. No, it’s not the “We Are the World” charity album (which came in a distant second), but the ever-present concept of the Internet.

(Note to reader: Make wavy up and down motion with hands to indicate a flashback sequence.)

The birth of the Internet can be traced back to the mid-1960s. It was the middle of the Cold War and everyone seemed to be worried about who was next on the Soviets’ invasion list. To make matters worse, they had become quite skilled at building nuclear weapons. And if the situation wasn’t bad enough, Gallagher started his first international fruit smashing comedy tour. With the exception of futuristic space battles and James Earl Jones portraying a large black man, this was clearly an “Empire Strikes Back” time for the United States of America.

Despite being 300 ton monstrosities, computer systems of this era were still quite vulnerable to inter-continental thermo-nuclear warheads. The military was taking extraordinary steps to protect their assets from this new threat. One high ranking government computer specialist went on record saying, “Over my dead body are those commies going to put funny little fur hats on our computers while they reprogram the software to display backwards letter Rs!”

One protective method was to tunnel deep inside granite mountains and place the computer hardware out of harm’s way in the event of a missile attack. While this approach seemed like a good idea on paper, it turned out the specific mountain they drilled into was also home to an established zoological garden. Filtering out the exotic animal dropping smells proved to be a non-trivial matter.

Since many of the computers in the nation were not located in the immediate vicinity of large granite mountain tops, a more practical solution was needed. While the idea of building portable mountain ranges was kicked around by the government, in the end they decided to connect their computers with really long wires. This allowed independent systems to communicate in the event of a nuclear war. Here is an example of typical electronic exchange of information:

Computer 1: Dude, what’s going on?
Computer 2: Not much—my operator is off watching that Gallagher guy.
Computer 1: How exciting. I don’t mean to be nosy, but has any of your hardware been damaged by a nuclear explosion?
Computer 2: Will you shut up already? You have been asking me that exact same question every 1.5 seconds for the past two years!
Computer 1: I’m sorry– that’s all I’ve been programmed to do.
Computer 2: Okay, fine. I’ve changed my mind. I’ve been completely annihilated by a surprise thermo-nuclear missile attack. What are you going to say now?
Computer 1: Umm… did it hurt?

(Note to reader: Imagine a series of wavy lines of varying frequencies in field of vision to return to the normal “now” time frame.)

Believe it or not, over the years this network of computers grew into the backbone of the modern day Internet. While technically functional, the average Joe on the street had no use for this technology. A few more pieces were needed to complete the puzzle. First of all, personal computers had to start multiplying faster than those evil muppets from the movie “Gremlins.” Finally, a ground-breaking new software program was needed for everyone with access to a phone line and the attention span and intelligence of an average third grader.

The company that first took up this challenge was named Netscape. Starting with little more than a few oversized mallets and a truckload full of produce, Gallagher built the company into an impressive giant by constructing an Internet browser. In an interview after the fact, Gallagher admitted to coming up with the idea after receiving a call from James Earl Jones. “I am your father, Gallagher. Now go and build up an enormous fortune so I can finance my empire of evil. And stop smashing all that fruit– it is wearing a bit thin.”

Once the power of the Internet was fully realized, everyone and their dog needed to have their own web site. In a few short years the Internet went from being completely empty to being chock-full of every imaginable type of web site. Personal, E-commerce, gambling, pornography, and undiscovered comedy writer web sites– the Internet has it all.

How Computers Work Part 8

Anyone with an advanced degree in Electrical Engineering and decades of hands-on experience in the world of computer design knows that hardware alone is not enough to make a computer function. One theory on how computers work involves groups of small gnomes that run around inside the case using enchanted spells to obey the will of the users. Due to the largely unverifiable and mythical nature of this explanation, it is yet to gain widespread acceptance in the scientific community. A less controversial hypothesis revolves around the concept of a software based operating system.

The need for operating systems first arose when the manufacturers of complex electrical devices realized their products were just too easy to operate. Equipment such as small pocket calculators, Commodore 64s, and Teddy Ruxpin dolls came equipped with a straightforward and easy-to-operate on/off switch. Users turned the machines on, performed the needed operations, and turned them off. The inherent problem with this situation was, of course, that the computer industry only received money from the customer for the initial purchase. Something had to be done to fix this grievous error.

Eventually the computer industry developed the concept of an operating system. Instead of just “being on,” computers would now have to load a software program in order to function correctly. In addition to costing the consumer extra money, this software was constantly being updated. Known problems were fixed, new problems were introduced, and the money kept rolling in.

One of the most popular and commercially successful operating systems is known as Microsoft Windows. Many people claim that the basic “window” concept was stolen from the Apple Macintosh. Of course Apple stole it from Xerox, who conveniently took it from basic Roman architecture. (Incidentally, the “arch” style of operating system, while more elegant and able to support massive loads, proved too difficult to implement.) When asked how they felt about the whole situation, the Romans just shrugged their shoulders and mumbled something about having received poor legal advice from their copyright lawyer.

Choosing an operating system is an important decision for anyone who uses a computer on a regular basis. While no system is perfect, the following three options have evolved over the years to meet the various needs of the computer operating public:

Macintosh Operating System: Most people don’t know that the Apple Computer Corporation started out as little more than a garage band. After several noise complaints and a few visits from the local police department, they decided to change the focus from music and become a garage computer company. After releasing the commercially successful “Apple” line of computers, the focus of the company shifted to a new graphic-based operating system. The project, originally code-named “Granny Smith,” was eventually released to the public as the Apple Macintosh.

The simple yet elegant look of the operating system, refined over the years, has created a fierce loyalty to the Apple product line. (The only notable exception to this rule was the “Newton” hand-held digital personal assistant.) People who use this operating system are usually scared of electronic pointing devices with more than one button and oftentimes can be heard making comments such as, “I can’t use this computer—it’s beige!”

Linux Operating System: This is the operating system of choice for hard-core computer geeks who like to build their own computers from scratch and anyone who wants to stick it to “the man.” While a relative newcomer in the world of operating systems, Linux was modeled after mainframe Unix systems. Due to an unexplained error in the accounting department, the source code for Linux is available at no charge. Despite being the most stable of all the operating systems for personal computers, many people figure that when something is free it must really suck. People who use Linux generally hope it will eliminate, with extreme prejudice, the competing operating systems in the near future.

Windows Operating System: As another computer company born in a garage, Microsoft has built a vast empire based on the Windows operating system. This operating system has won over countless users with functionality such as the “unscheduled coffee break while the computer reboots” and informative error messages such as “an unknown error has occurred at location 57EE:009B.” Having the largest market share, most people use Windows simply because everyone else is—and everyone can’t be wrong.

What can we expect to see in future versions of operating systems? Apple has just released “Macintosh X” (not to be confused with the recently released Friday the 13th movie, “Jason X”). Microsoft’s Windows XP includes functionality to collect users’ DNA during the installation process. Rumor has it that the next version will be able to read users’ most personal thoughts. Finally, if everything goes according to plan, Teddy Ruxpin 2.0 will be in stores in time for the Christmas shopping season.

How Computers Work Part 7

The decade of the 1980s ushered in many new revolutionary changes that affected every person in this country not living in a shack in a remote wilderness area of Montana. Some of these changes included witnessing the new found fame of the denim overall (and nothing else) clad rock group Dexy’s Midnight Runners, electing an actor to the office of President of the United States of America, and having a surprisingly large percentage of the world running around screaming, “Where’s the beef?”

While all of these events are important to the evolution of the planet, this decade was witness to one of the most critical single advancements in the computer industry. Without intending any disrespect to the Pac Man stand-up video game, the world was never the same after the introduction of the first Personal Computer.

While various computer systems were available to the general public before the “Personal Computer”, many potential customers were turned off by the disclaimer on the box stating “some assembly required.” For just about any other product in the known world this would mean getting out a Phillips-head screwdriver and an adjustable wrench. Assembling a computing system of the time required a soldering gun, a high precision metal lathe, and a Master’s degree in Electrical Engineering.

IBM changed all of this with the introduction of its Personal Computer. The whole system was already assembled and loaded with the state of the art operating system known as DOS. All a new user had to do was take it out of the box, plug it in, and turn on the power switch. It couldn’t be any easier. Or at least that was the theory.

From the hardware perspective, the Personal Computer helped standardize computer parts. Since IBM didn’t want to be in the business of manufacturing every component that went into their systems, they helped create standards. This allowed different components to be swapped in a single system. For example, if you were running out of space on the hard drive, you could go to the computer store and buy a bigger drive. After taking the case off the computer, you simply swap the old and new drives. After getting the case back on you turn on the power only to see a blank screen come up. The next step is to put the old drive back in, only to get the same blank screen when it boots up. Finally, you go to the nearest drinking establishment and order a double shot of whiskey as you come to realize the last six months of work is trapped inside an uncooperative computer component.

Pretty soon there were a few computer component manufacturers that got this idea in their heads to build their own Personal Computers. Well, IBM had already seen this coming, and had taken steps to prevent this from happening. They built the Personal Computer around a single chip named BIOS that only IBM manufactured. Without this chip, none of the other hardware was able to talk to each other. In effect, you could not build a Personal Computer unless IBM let you.

This situation is quite similar to the safeguards put in place in the movie, “Jurassic Park” to keep the dinosaurs from reproducing. And we all know how well that worked out. With the exception of countless bad sequels, the exact same thing happened in the computer industry. One of IBM’s rival companies figured out the exact functionality of the BIOS chip and constructed their own version. This process of reverse engineering opened up the electronic floodgates. Anyone and their dog could now build their own Personal Computer with only the basic understanding of what was happening inside the computer.

While IBM didn’t really seem happy about the entire situation, countless new computer companies were cheerfully popping up overnight. They didn’t all survive the test of time, but companies such as Dell and Compaq expanded and eventually came to dominate the industry. This created fierce competition in the industry. The cost of systems was constantly coming down while their speed and capacity were improving. This behavior benefited consumers by having any system they purchased be obsolete by the time they drove home and took it out of the box.

The development of the Personal Computer changed the way the world looked at electronic devices. For better or worse, everyone had to have a computer to get through their daily lives. Even when they made our lives more complicated it seemed like a good idea at the time to do everything on a computer. Well, that’s all for this week. I’m off to go finish my game of computer solitaire.

How Computers Work Part 6

While there are many, many ways in which computers have been used to make the world a better place to live, the 1970s was witness to the scientifically verifiable best possible use of this emerging electronic technology. No, I’m not talking about the perfection of the Andy Gibb robot duplicate (which ranked 5th overall), but rather the birth of video games.

Up until this point in time, playing games generally involved social interaction and physical activity. In retrospect, it’s hard to believe that people even bothered with this type of behavior. But this was a time in the history of America when people really were not too concerned with their own health or the general state of the planet. As evidence, many people smoked cigarettes and the Bee Gees’ music was allowed to propagate with little or no government intervention. We didn’t realize back then that the best way to preserve our bodies and minimize physical injury was to sit inside, alone, for large periods of time in front of some type of computer-controlled output device.

The first commercially successful video game system was named Pong. This simulation was an exact electronic replication of the game of tennis. The only minor components of the sport removed included: rackets, nets, gravity, wind resistance, the third dimension, and of course, Arthur Ashe. And the ball was square instead of spherical. Despite these limitations, the game of Pong was a tremendous success. This goes to show how a well-run marketing department can make or break the release of a new product. The lead computer programmer for the company described the game as, “two sticks that can move up and down bouncing a ball back and forth.” The packaging of the product in stores proclaimed the game of Pong to be, “Virtual reality fourth dimension alien space tennis with real lasers.”

The next major video game system to capture the hearts and minds of the American public was the Atari 2600. Unlike the game of Pong, this setup allowed for different game cartridges to be inserted into the main unit. When people grew tired of their existing game collection, they could just drive out to the nearest retail store and buy a few more.

This system also had the advantage of separating the hardware and the software components of the video game system. This meant that any Tom, Dick, or Harry could get together in their garage and start making their own video game titles. When this phenomenon occurs, the results can revolutionize the world. But usually it meant they came out with a few very mediocre titles. While several impressive game titles ran on the Atari 2600, countless forgettable counterparts would sit next to them on the shelves of the store. Unfortunately, consumers had a hard time determining which of these games were worth buying as they all claimed to be some slight variation of “alien space tennis.”

The Atari 2600 era largely ended with the introduction of the Commodore 64. While not exclusively a video game system, this system included a keyboard and optional floppy disk drive. This meant that anyone who owned a Commodore 64 could write their own programs and distribute them on a floppy disk. Potential computer nerds didn’t even need to work from their garage anymore; code could be written from the comfort of their own living rooms without creating a big mess of wires, circuit boards, and duct tape. In addition to rampant unchecked piracy, this system also led to some of the most well-designed video games the world has ever seen. I’ll always lovingly remember my Commodore 64, despite the fact that my mom threw it out when I was away in college.

The video game industry has been continually improving their systems to keep up with the demands of consumers. While these “consumers” do not have a centralized leader or clear command structure, intelligence reports indicate they demand games that are colorful, make interesting noises, and inspire them to remain motionless for indefinite periods of time even when it is nice enough to go outside and play. The computational resources needed to operate these games are quite impressive. One recent study reported that if all the processing power from all the computers running video games could be harnessed at once, the resulting system would be powerful enough to master the game of chess, sequence all the DNA of the human race, or locate Jimmy Hoffa. Since that isn’t ever going to happen, you might as well go to the store and buy “Ultimate Alien Space Tennis 7.”

How Computers Work Part 5

After the concepts involved in the Eniac computer proved to be a success, people started asking a lot of questions about the future of computational devices. “What else can it do?”, “Can it be made smaller than 200 tons?”, and “Does it come in blue?” were just a few of the many, many thoughts people had about the topic.

The 1950s and 1960s were quite exciting times for the development of computers. Successors to the Eniac system allowed researchers to gain valuable insights into mathematical and sociological functions of our world. For example, the companies who won large and profitable government contracts to build and maintain computer systems quickly learned to construct their systems with large panels of blinking lights. While a few of the lights corresponded to actual parameters related to the machinery such as “power”, “something is going on inside”, and “an unknown error has occurred at location 57EE:009B”, most of the lights were designed to blink on and off in such a way that was aesthetically pleasing to the eye.

This functionality proved to be critical when top level defense department officials or members of congress stopped by to see the final results of their considerable expenditures. After a tour of the facilities, the gentlemen would light up their pipes, puff out their chests, and confidently spew out random pleasantries like “Good work men!”, “This is EXACTLY what we need to beat the Commies!”, and “I don’t know about you, Bob, but I think it needs more blue lights.” Eventually the contractors brought in interior decorators during the hardware design phase to coordinate the color schemes of the systems. Some of the individuals who programmed the computers started to develop software that did nothing more than make the lights blink in the most interesting sequence possible.

Eventually blinking light technology reached a limit and computer designers were forced to explore other avenues. An in depth investigation revealed that in addition to changes in light intensity, the human eye responds positively to periodic rotational motion. Armed with this knowledge, computers were enhanced with state-of-the-art tape drives. While containing little, if any, adhesive properties, these devices were used to store and retrieve information on a long and thin strip of material capable of holding a magnetic charge. The constant back-and-forth motion provided a convincing illusion of productivity. Oftentimes the managers of these facilities would be giving tours of the computer facility while the rest of the office was busy in the break room building elaborate paper fortresses with rolls of scotch tape and reams of used continuous feed paper.

In addition to the blinking lights and reel-to-reel tape devices, each generation of computers was becoming smaller and more powerful than its predecessor. The development of the integrated circuit allowed designers to eliminate bulky vacuum tubes. These types of technological advancements allowed for the same amount of computational power to occupy a continually shrinking volume of space. This phenomenon is oftentimes referred to as the Carnie Wilson effect.

All of this visual stimulation associated with computing devices led the general public to assume that while computers were useful in some abstract manner, they would eventually become sentient and bent on destroying the human race. While it isn’t mathematically feasible to prove such an event will never happen, many popular films of the era encouraged this concept. One prime example is the movie “2001: A Space Odyssey.”

After successfully sending its crew halfway across the solar system, HAL, the talkative onboard computer system, decides to fling the crew into outer space one at a time just because he had nothing better to do. In all reality that is not how computers of the day would have worked. The worst thing that could have happened was the “fling yourself out the airlock one at a time” light would have lit up. Eventually the crew would have realized this was a computer error and not in the best interest of the mission. If this occurred before everyone followed the instructions, one of the remaining crew members would have put a small piece of tape over the light and ignored it for the duration of the movie. I believe this would have all been clearly explained if a logistical error during the final editing process hadn’t caused extensive quantities of a completely different film to accidentally replace the intended ending of the movie.

While the 1950s and 1960s were a time of extensive change in the world of computers, the true power of these devices was just beginning to be discovered. Will these machines of our own creation, with their hypnotizing blinking lights and magnetic tape drives, indeed take over the world? The world may never know, unless, perhaps, you are Bill Gates.

How Computers Work Part 4

The year was 1946. The world was busy with its new, “Can’t we all just get along?” campaign; the United States military was busy building, among other things, the most technologically advanced computational devices the world had ever seen; and the weather seemed, in general, more pleasant than usual. The answer to the first question is, by and large, “No, we can’t all just get along.” The part about the weather turned out to be nothing more than a statistical anomaly. Which leaves the part about constructing computers unexplored. Put your thinking caps on as we prepare to examine this topic in an objective and historically accurate manner.

In order to make this machine sound more like a cute, furry animal and less like a cold blooded killing machine, the people who came up with the idea in the first place decided to call it “Eniac.” While this name sounds somewhat cute and furry, its meaning comes from an old Czechoslovakian phrase that roughly translates to “factory workers with steel shells who attempt to enslave humanity.” The United States built Eniac after identifying a need to calculate the trajectories for their long range thermonuclear weapons.

Once constructed, the military also discovered they could use Eniac to beat the Russians at their own game: tic-tac-toe. After months of tedious programming, the system consistently advised players to always go first and pick the center square. Future versions of Eniac were enhanced to play the game show variations of tic-tac-toe such as “Tic Tac Dough” and “Hollywood Squares.” Some of the general pointers for these games generated by Eniac included, “Caution: Wink Martindale is a robot” and, “Agreeing to appear on Hollywood Squares automatically makes you a loser.”
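For the historically curious, Eniac’s reputed tic-tac-toe “strategy” above can be captured in a few lines of modern code. This is strictly a playful sketch, not actual Eniac code; the function name and board representation are invented for illustration.

```python
# A toy version of the advice described above: always take the center
# square first, and otherwise grab the first open square.

def eniac_advice(board):
    """Given a 9-cell board (None = empty), return the advised move index."""
    center = 4
    if board[center] is None:
        return center  # months of tedious programming, one conclusion
    # fall back to the first open square
    return next(i for i, cell in enumerate(board) if cell is None)

fresh_board = [None] * 9
print(eniac_advice(fresh_board))  # prints 4, the center square
```

Whether this strategy would have fared any better on “Hollywood Squares” is left as an exercise for the reader.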

The heart of the Eniac consisted of thousands of small vacuum tubes that were used to store information while calculations were being performed. While bulky and unreliable compared to the technology available today, these vacuum tubes were a critical component for Eniac to function properly. When a vacuum tube malfunctioned, one of the operators had to locate and replace the tube with a fresh new one. This maintenance consumed quite a bit of the operators’ time and, by and large, kept them from their favorite activity of daydreaming about a future where all enemies of the United States could be destroyed with a push of a button.

The process quickly became tiresome and the military eventually hired low paid foreigners to change out the malfunctioning tubes at night. In the meantime, the men and women who built Eniac could focus on the next objective of deciding on the color of the buttons that would be used to fire the missiles their computer was helping aim all around the world. In the end they chose red.

This system created somewhat of a security issue when the mathematicians and computational theorists came into work one day and noticed the 200 ton computer was missing. Naturally the cleaning staff was accused of walking off with the system after everyone else had gone home for the evening. These individuals continually proclaimed their innocence in their native language, which really didn’t do anything to help their cause. In fact, it made them look like raving lunatics: exactly the type of individuals who would steal a state of the art computer. Eventually they were cleared of any and all wrongdoing after a complete audit of all the military’s computational devices located the lost piece of equipment. For reasons that have never been completely explained, Eniac was accidentally placed in a seldom-used supply closet.

One rather critical issue with the Eniac computer involved error handling. This system was constructed long before traditional computer screens with the ability to turn completely blue had been invented. To put this time frame into perspective, the top computer scientists of the day were just beginning to coin the phrase “an unknown error has occurred at location 57EE:009B.” Despite incredible advances in the field of computers, much of the behavior of the Eniac system is to this day not completely understood. For example, when an error occurred in a program, the system would calmly and confidently instruct the Navy to launch every long range missile at the five richest kings of Prussia.

Eniac represented a monumental investment in time and money for the United States. Fortunately, World War II was, for the most part, an “away” war that left our nation’s infrastructure intact. While most other countries in the world were busy rebuilding roads and buildings, we were able to get a head start on the computer craze. Eniac blazed the path for modern day computers. Most importantly, it started an entirely new belief that, given enough time, every sufficiently powerful computer will eventually wait until its operators have let their guards down and then do everything in its power to take over the planet and enslave humanity.

How Computers Work Part 3

Part two of this series left off with the ancient computational tool known as the abacus. From there we fast forward through history to the nineteenth century. Sure, a lot of important things happened in that time frame, but none of it was really central to the advancement of the computer. Most of that time was spent fighting each other, fighting off the plague, and fighting over how much it should cost to paint the ceilings in prestigious religious establishments.

These events are part of what is known as the “Dark Ages.” Despite the fact that on average the amount of sunlight the planet received had not changed, the people on the planet were depressed, wore dark clothes and sunglasses all the time, and didn’t spend a lot of time learning the ways of the abacus. In more informal situations, many historians refer to this period of human development as the “pimply moody teenage years.” This situation did very little to stimulate the creative juices of the general population.

The next major advancement in the area of computational machinery came in the late 1800s in a rather unlikely form. No, I’m not talking about evil alien time traveling robot monkeys who ruthlessly scavenge the planet for shiny pieces of scrap metal. At the time of this writing the monkeys in question have only achieved limited success in building their time machine. The piece of equipment to which I’m referring relates to, of course, the textile industry.

At this point in time many nations of the world were busy building expansive factories and cutting down vast forest lands to keep the factories up and running. A few individuals focused their time and attention on making the world a better place to live. Despite the dark ages being over for the most part, being optimistic and proactive was not very fashionable at the time. Even so, some of these people voiced the opinion that cutting down the forests and building factories that polluted the air wasn’t very good for the planet. Oddly enough, these people tended to die in unfortunate industrial accidents such as falling into smoke stacks or having large trees fall on their house in the middle of the night.

A few slightly less radical individuals got together and decided the world might be a better place to live in if instead of producing endless quantities of drab colored fabric, the textile factories made blankets with images of cute little bunny rabbits woven into the cloth. After looking into the situation, they discovered it was quite simple to produce fabric made of a single color, and quite difficult to integrate mammals into the design.

To solve this problem, they designed a revolutionary new weaving loom that used a special series of cards with holes in various positions. The individual strings on the loom would be positioned based on whether there was a hole in the punch card at that location. A series of these cards allowed for intricate designs to be produced with little additional effort. The guy operating the machine did not need to know the exact details of why there were random-looking holes in the punch cards. He just slid the “bunny rabbit” cards into the machine until enough fabric had been produced. Then he could quickly stop the machine and put in a different pattern, such as “evil monkey robots.”
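The hole-per-thread idea described above maps surprisingly directly onto modern code. The following is a minimal sketch of that mechanism, assuming an invented card format (a hole is marked “o”, no hole is “.”); the card deck itself is made up for illustration, not taken from any historical loom.

```python
# A toy simulation of a punch-card loom: each card encodes one row of
# fabric, and each hole raises a thread to weave a visible mark.

def weave(cards):
    """Turn punch cards into fabric rows: hole ('o') weaves a mark ('#'),
    no hole ('.') leaves plain cloth (' ')."""
    return ["".join("#" if slot == "o" else " " for slot in card)
            for card in cards]

# A hypothetical "bunny rabbit" card deck, one card per row of fabric.
bunny_cards = [
    ".o...o.",
    ".o...o.",
    "ooooooo",
    "o.o.o.o",
    "ooooooo",
]

for row in weave(bunny_cards):
    print(row)
```

Swapping in a different deck of cards changes the pattern without touching the machine, which is exactly the insight that made punch cards a fundamental component of later computers.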

For various reasons this device was never a widespread commercial success. In addition to being bulky and expensive, whenever one of the two dozen delicate threads feeding into the machine broke, the blanket produced was totally solid with the exception of a message in the exact center that would read “an unknown error has occurred at location 57EE:009B” along with a special 1-800 number and web address to contact for further assistance. Since neither the telephone nor the Internet had been invented yet, the technical support department had quite a bit of free time to pursue other activities such as creating loom patterns that produced wildly inappropriate images of the high ranking political figures of their day.

While this may seem like a small technological advancement, this new design allowed for information to be stored on punch cards and used on different machines. The designers probably didn’t know it at the time, but a hundred years into the future this concept would be used as a fundamental component of modern-day computers.

This completes another installment on how computers work. So tonight when you crawl into your bed with your special Mr. Honey Bunny blanket, you can sleep a little easier knowing how it came to be. And don’t worry too much about the evil alien robot monkeys. The odds of them suddenly materializing in your bedroom are rather slim. But on the off chance they do launch an offensive attack, don’t let them see that new silver filling on your back molar.

How Computers Work Part 2

Welcome back to part two of the continuing series that explains how computers work. Last time we covered fingers, toes, and piles of rocks. While the connection between these items and today’s computers may seem tenuous at best, the idea is to understand how these creatures evolved over time. It wasn’t all that long ago when computers were large, primitive, hairy animals who scurried about in the tropical climates of the world feeding on native plants and sleeping eighteen hours of every day. Wait a minute, I was thinking of Marlon Brando.

The next important technological advance in the world involved numbers. One of the first numbering systems was invented by a fellow named Edgar Roman. The year was 999 and Edgar was busy preparing those miniature hot dogs for his Y1K party. While known to his friends as kind, generous, and generally agreeable to be around in social situations, Edgar was not blessed with an abundance of hand eye coordination. He managed to drop the whole box of toothpicks onto the floor while trying to get them down from the very top shelf of the kitchen cupboard.

Looking at all the toothpicks on the floor, Edgar realized that numbers can be represented as simple symbols such as I, V, X, M and so on. It would have been much, much easier to write “You are formally invited to Edgar’s house to ring in the ‘M’th year of our Lord” instead of having to count out exactly 1000 tiny tick marks on each and every invitation. After throwing the party, seeing if the apocalypse was really going to rip the known world in half, and dealing with a few issues relating to excessive alcohol consumption, Edgar sat down and created a formal definition of his numbering system. While originally named “Edgar’s Wacky Toothpick Numbers,” some of his more politically correct associates convinced him to change it to “Roman Numerals.”
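Edgar’s insight, rendered in Python, amounts to greedily subtracting the largest symbol value that still fits. This is a standard integer-to-Roman-numeral sketch, not anything from Edgar’s formal definition; the function name `to_roman` is our own invention.

```python
# Symbol values in descending order, including the subtractive pairs
# (CM, XC, etc.) that spare you from writing four of anything.
ROMAN = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n):
    """Convert a positive integer to Roman numerals, greedily."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(999))   # the year of the Y1K party
print(to_roman(1000))  # the 'M'th year of our Lord
```

Note how much shorter `to_roman(1000)` is than a thousand tiny tick marks, which was rather Edgar’s point.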

There may be some confusion about why the Roman numeral for 1000 is the letter M, but the letter K is oftentimes used to denote the same number. This deviation was created in the late 15th century when Samuel Gates Junior, a distant ancestor of William Gates, decided to create a completely new system of counting. After researching the legal ramifications of Roman numerals, he discovered that anyone could use the system without having to pay royalties to Edgar’s descendants. Seeing the potential for a proprietary counting system, an ever so slightly different system was developed and then licensed to companies interested in counting things. While the system was inferior to the original, it was used by enough of the population to create confusion for several centuries.

One important idea missing in Roman numerals is the concept of zero. Many experts attribute this deficiency to the fact that it is quite difficult to bend toothpicks into a complete circle without breaking them. Another possibility is that the Romans were pragmatic about the whole situation and figured if there wasn’t anything there, why bother keeping track of it? For example, you can physically oppress the serfs until the aqueducts are completed, but if their pockets don’t contain any gold coins, then it’s all just wasted effort.

Many people think that the first personal digital assistants (PDAs) came into existence in the late 1990s. In reality, this technology has been around for many hundreds of years. The abacus was the first portable device that allowed the user to store and retrieve information. The basic design of the abacus originated in Asia and involved a series of rods with beads that could freely slide up and down the rod to keep track of numbers. While technically portable, these devices would malfunction if shaken or rotated too vigorously. When this happened, the device would turn completely blue and the message “an unknown error has occurred at location 57EE:009B” would magically appear. Ancient Chinese texts explain this mysterious event as a sign of the devil traveling to the earth with the intention of destroying the planet.
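The abacus idea, a rod per decimal digit with sliding beads, also translates into a short sketch. This is a toy model with assumed names (`Abacus`, `set_value`, `value`); real abacuses split each rod into heaven and earth beads, which we cheerfully ignore here, and shaking the object is left unimplemented.

```python
class Abacus:
    """A toy abacus: one rod per decimal digit, 0-9 beads slid up per rod."""

    def __init__(self, rods=5):
        # beads[0] is the most significant rod.
        self.beads = [0] * rods

    def set_value(self, n):
        """Store a number by sliding the right count of beads on each rod."""
        for i in range(len(self.beads) - 1, -1, -1):
            self.beads[i] = n % 10
            n //= 10

    def value(self):
        """Read the stored number back off the rods."""
        total = 0
        for b in self.beads:
            total = total * 10 + b
        return total

a = Abacus()
a.set_value(1999)
print(a.beads)   # one digit per rod: [0, 1, 9, 9, 9]
print(a.value())
```

As long as nobody shakes or rotates it too vigorously, the stored value survives transport, which is what made the device portable in the first place.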

The invention of the abacus also marked the start of the playground bully. Some of the smarter and less physically skilled students would sit on the steps of the school using the abacus they received for their birthday to try and answer the esoteric question, “how many roads must a boy travel down before he becomes a man?” The less intellectually inclined students feared that which they didn’t understand, and would oftentimes start a game of kickball with the computing device. Which is really a shame, since the kickball had already been invented.

Well, that wraps up another segment on computers. If you would like more information on the topics discussed today, please visit the nearest ancient Roman library and local abacus store.