08 August 2009

Quibbles and Quandary, Science in Science Fiction Part 4

The derived topics of interest in SF are plentiful: how what we make interacts with us, and how it does so under the known or speculated laws of the universe.

Sunshine interacts with your skin to help you create Vitamin D: that is science that encompasses physics, chemistry, biology, genetics and your emotional state of being.  A little bit of sunlight every day can be a good thing.  A lot of it and you are looking at skin cancer a few decades later.  That is a simple interaction with the known.  When we do things, create things and make new concepts into reality, then we step into the unknown: fantasy works on space flight, and even regular flight, abounded before the 20th century, speculating on how such a thing would change our view of the world.  What happened wasn't expected: it compressed our world view so that distance and time now had different emotional and structural meanings to us and our society.  Going cross-country was once an arduous journey of months... then weeks... then days... now hours.  If we develop, say, teleportation, then that opens up the capability to live anywhere, work anywhere and vacation anywhere on the planet as a future choice for us all.

But what if what we create goes wrong?


Machines that kill

The animated clay of the Jewish tradition was the Golem: it had the name of the Almighty upon it and was imbued with the power of motion and even, to some degree, reason.  It was without soul, without heart, and made of clay.  That is the stuff of horror stories: the inanimate imbued with the power to kill but with neither the mind nor the wisdom to bank that power and use it for good.  Ordering such a thing to do something is no cure for its ills in that respect, and with one misspoken word the power to defend becomes one to attack, without hesitation and without conscience.

In SF the most basic type of machine that can do this is, of course, the booby-trap.  It is just a set of mechanical or physically motivated structures arranged to a set end.  A trip wire triggers a grenade, breaking a beam of light brings down the cages and bars the room, and stepping into a bar and starting a gunfight has bullets going all over the place, chairs crashing and glass breaking far and wide.  Alfred Hitchcock adored showing you just how awful something was before it was activated: you had the horror of anticipation and wanted to yell at a lovely protagonist that she was in mortal danger.  When you create a new trap and utilize it, you are doing the work of using science and technology in new ways: MacGyver episodes, once you take out the outlandish stuff, are a form of SF.  So are the programs of Les Stroud and Bear Grylls.  Making do with the known in the unknown is inventiveness supreme, and utilizing odd pieces of junk and the wilderness to fashion a spear or a weir to catch fish is primitive, but it is also putting your knowledge of science and technology to use for yourself.  These simple devices are easy to make, easy to understand and created for set ends and purposes.  We don't think of them as SF, especially if you are in the wilderness surviving, but writing stories about them must take into account the physics, chemistry, seasons, biology and so on of such things.  Be it a madman's deathtrap or a simple construct of twigs and rocks to trap fish, the ability to utilize technology and our understanding of it makes writing about it a form of SF.

But it has no 'gee-whiz' to it, save for the ever ingenious madman's deathtraps, of course.

To get to the SF version of the Golem, you must go to the Robot.  A robot, at its base, has no feelings, no empathy, no emotions... and no cleverness, no ingenuity, and nothing it is not pre-made to do.  A computer in charge of a robot depends upon its programs and the proper integration of the parts of the machine to work.  Robots are also machines that can kill, be they simple automata on a production line or a large starfaring machine with simple instructions to gather material to keep its structure running and to destroy planets to get it.  Thus these are the new mechanisms we create to do things for us, and when we don't properly create them they do other things we don't want them to do.  Anyone who has programmed a computer knows that if you don't properly debug the code, you get odd failure states from a machine running that code.  The more complex the machine, the more complex the code, and the stranger and odder the failure states become.

One of the better examples of this is the Doomsday Machine from ST:TOS.  It is the conical, starfaring planet destroyer that ingests material to run its warp drive system so it can search out and destroy more material to use in its system.  It probably spends some 'down time' near stars to help regenerate its anti-matter reserves, but that, too, would be a pre-programmed routine triggered when anti-matter levels get low.  The device itself, though massive, is very simple: it destroys planets, ingests them to fuel itself and then goes on to the next planet to do the same.  When it runs out of planets it plots a course to the next system most likely to have them.  It may have a long-term analysis sub-routine for determining star destinations, but that does not require a conscious controller, just a conscious creator.  This robot could have had many places as its starting point; that is not given.  From a prototype machine built to go after the Borg to a simple system junk-clearance device that was never debugged during testing, its origins remain a mystery, but its end actions are limited to the suite of pre-programmed activity it has.

Simple automata, however, can also lead to emergent behavior.  Modern cellular automata code, and simple devices that each, on their own, do very little, can act in groups to produce emergent behavior, either by design or by accident.  Emergence is one of the wonderful topics that has been explored across multiple realms of thought, and it examines how simple rules can lead to enormously complex ends.  In a paper on Demystifying Emergent Behavior, Gerald Marsh puts it like this:

Abstract. Emergent behavior that appears at a given level of organization may be characterized as arising from an organizationally lower level in such a way that it transcends a mere increase in the behavioral degree of complexity. It is therefore to be distinguished from chaotic behavior, which is deterministic but unpredictable because of an exponential dependence on initial conditions. In emergent phenomena, higher-levels of organization are not determined by lower-levels of organization; or, more colloquially, emergent behavior is often said to be “greater than the sum of the parts”. This essay is intended to demystify at least some aspects of the mystery of emergence.
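The classic hands-on illustration of simple rules producing complex, seemingly purposeful behavior is a cellular automaton.  Here is a minimal sketch of my own (not drawn from Marsh's paper) of Conway's Game of Life: two trivial rules per cell, yet a five-cell "glider" emerges that crawls across the grid forever.

```python
from collections import Counter

# Minimal cellular automaton: Conway's Game of Life on an unbounded grid.
# Rules per cell: a dead cell with exactly 3 live neighbors is born; a
# live cell with 2 or 3 live neighbors survives; everything else dies.

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "glider": five cells that translate diagonally one square
# every four generations, despite no rule saying anything about motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(gen == shifted)  # True: same shape, moved one cell down-right
```

Nothing in the two rules mentions gliders; the moving pattern is purely emergent, which is the point of Marsh's distinction between emergence and mere complexity.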

This idea is often coupled with Von Neumann, or self-replicating, machines.  Simply put, a machine that is programmed to find the materials to make a copy of itself is a Von Neumann machine.  This need not be a sentient machine and, indeed, the simpler the architecture of the machine the more successful it will be.  In its simplest form this is a machine with an on-board pattern for making an exact copy of itself.  Thus it has the ability to gather energy (solar energy is a good source), seek out its constituent elements (by extracting minerals and metals dissolved in sea water), utilize a small furnace with pre-made dies for its parts, melt metal, cast parts and store them until a complete set for a copy is present.  It then stops, assembles the copy from those parts and starts it up, then heads back to its originating source, having fulfilled its mission.  That mission is to return the metals in its structure to its originator.  And as you want to keep some track of just how many of these you have, you have it make five copies before it returns home.  For navigation all it needs is the ability to track the amount of energy it gets, a simple clock to tell it the time of day, and a neutral-buoyancy tank to raise and lower itself in the water column.  A small drag sail to test ocean currents would help it move, possibly made out of a metal mesh.
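The replication cycle just described can be sketched as a simple state machine.  This is purely illustrative: the number of parts per copy and the five-copy quota before heading home are taken from the scenario above, and the state names are my own inventions, not any real design.

```python
# Hypothetical Von Neumann machine cycle as a state machine sketch.
PARTS_PER_COPY = 8     # invented: cast parts needed for one copy
COPY_QUOTA = 5         # from the scenario: five copies, then go home

def run_cycle(quota=COPY_QUOTA):
    """Run the gather -> cast -> assemble loop until the quota is met."""
    copies_made = 0
    log = []
    while copies_made < quota:
        parts = 0
        while parts < PARTS_PER_COPY:
            log.append("gather-energy")      # solar charging
            log.append("extract-minerals")   # pull metals from sea water
            log.append("cast-part")          # furnace + pre-made dies
            parts += 1
        log.append("assemble-copy")          # build and start the copy
        copies_made += 1
    log.append("return-home")                # return its metals to base
    return copies_made, log

copies, log = run_cycle()
print(copies)    # 5
print(log[-1])   # return-home
```

The point of the sketch is how little logic the whole mission requires: a couple of counters and a fixed loop, with no intelligence anywhere.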

By itself this is a complicated machine, but the ability to find elements, extract them via water chemistry, concentrate them and then melt and forge them with limited energy are engineering problems, not ones of physics or chemistry.  The problem comes when there is a manufacturing flaw in such a device that fails to tell it to return home, and it makes copies that do likewise.  One simple flaw, plus the ability to recreate it and have those copies carry flaws of their own, makes an evolutionary system.  Not a quickly moving one, but then biological systems also take a long time to accumulate significant changes via this mechanism.  How long before you have a device with somewhat more exposed sensors that can directly sense metals via contact?  Not long, perhaps only tens of thousands of years or so, but that would be the first step towards finding concentrated sources of those metals and other necessary minerals.  The first one that has that, plus a slightly exposed forge mechanism, now has the means to go after that higher concentration directly... and the greatest sources of those are now its fellow mechanisms.  Because it has a change that is beneficial it will be passed on, and so long as its prey does not adapt, its descendants will also be successful... but such prey will adapt, over time...
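A toy simulation makes the flaw-propagation argument concrete.  Every number here is invented for illustration: an 8-bit "program" per machine, a 1% per-bit copy error rate, and a designated bit for the return-home instruction.  Copies of copies accumulate errors, and some lineages lose the instruction entirely, going rogue without anyone designing them to.

```python
import random

random.seed(42)
ERROR_RATE = 0.01        # invented per-bit chance of a copy error
RETURN_HOME = 0          # bit index of the "return home" instruction

def replicate(program):
    """Copy a bit-string program, flipping each bit with ERROR_RATE."""
    return [b ^ (random.random() < ERROR_RATE) for b in program]

# One correct ancestor; each generation every machine makes two copies.
population = [[1] * 8]
for generation in range(12):
    population = [replicate(p) for p in population for _ in range(2)]

rogue = sum(1 for p in population if p[RETURN_HOME] == 0)
print(len(population), rogue)  # 4096 machines; a minority lost the bit
```

No selection pressure is modeled here; adding one (rogue machines out-replicating obedient ones) is exactly what turns this drift into the evolutionary arms race the paragraph above describes.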

On the smaller and faster scale there are nanorobots.  While we still don't have those in a sophisticated form, what goes for the larger devices goes for the smaller, save that they only need some large number of atoms to form their constituent components and can do so at ambient temperature.  As they are smaller they can self-replicate faster and, thus, gain flaws faster and evolve faster.  In biology, bacteria can establish resistance genes in diseases in mere tens of years: antibiotic-resistant forms of TB, strep, staph and other diseases can now ward off attacks using antibiotics.  Nanotech robots that can self-replicate operate on that time scale and in that realm of things.  So while a few hundred stray sea-dwelling, macro-scale self-replicating machines are a short-term amusement or annoyance and a long-term disaster, something done similarly at the molecular scale is an actual cause for concern.  This usually brings up the 'Grey Goo' disaster, in which self-replicating nanotech robots destroy all biology and form a mass of themselves that then covers the planet.  The other part of that, however, is never brought up: how long until you get such devices that see other forms of themselves as prey?  That might go beyond our limits to survive, or it might happen very fast... circumstances would dictate that.  Still, with so little to work with, and limited ways that such devices could change, a few atoms missing in the structure are far more likely to render it useless than to improve or change its performance.  The macro scale is better at that when you have a slowly progressing system, while at the nano scale it takes a number of simultaneous changes to realize one that actually allows a device to work in a slightly different way without injuring its ability to self-replicate.
The billion or so years that there were bio-components on Earth proliferating in the oceans tell of how long it takes for these sorts of changes to happen: you need a lot of the very basic form, and a very, very, very long time to get such changes.  Once the first few get in place and organisms have the ability TO adapt, then change goes faster.
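The scale argument above is just arithmetic: evolution runs on replication events, so whatever copies faster accumulates variation faster.  A back-of-envelope comparison, with doubling times invented purely for illustration (a macro-scale sea machine needing a year per copy versus a nanomachine needing an hour):

```python
# Replication events per unit time drive how fast flaws can accumulate.
HOURS_PER_YEAR = 24 * 365  # 8760

def doublings(elapsed_hours, doubling_time_hours):
    """How many full replication cycles fit in the elapsed time."""
    return elapsed_hours // doubling_time_hours

# In one year: the macro machine gets one shot at a copy error,
# the hypothetical nanomachine gets thousands.
print(doublings(HOURS_PER_YEAR, HOURS_PER_YEAR))  # 1
print(doublings(HOURS_PER_YEAR, 1))               # 8760
```

That three-to-four orders of magnitude gap in replication opportunities is why the grey-goo scenario is discussed as a decades-scale worry while the sea-going machines are a millennia-scale one.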

So wild, rampant, small machines not actually made to kill can, indeed, kill.

When we think of 'machines that kill' we normally think of malevolent devices, or ones that have no emotional need for humans.  The 'Emotionless Killing Machine' that is sentient is the one we fear.  These are not machines possessed by spirits, demons or some other transference of consciousness; those remain fantasy, unless you are talking about a cyborg purposely made either as an add-on to a human or with a human as an add-on to it, like Robocop.  We specifically think of 'machine intelligence' in this case: machines that decide that humans just aren't worth having around.  The Terminator is usually the case given for this in fiction, but that concept, itself, has two entities that are machine intelligences: Skynet and Terminators.  Each of these is a different type and order of intelligence due to their starting points and the roles they were given at their inception.

Terminators start as assistants to humans on the battlefield.  They have been encoded for that and have the means to interpret the state of their bodies and the state of those they serve, and also to judge battlefield conditions and make judgments on them.  They are some of the most sophisticated machine intelligences presented in SF.  Yet when operating under the authority of Skynet, they don't have that.  What has happened is that this wonderful and complex code has been either corrupted, removed or co-opted by Skynet so as to make these machines simple 'point and shoot' robots with limited ability to adapt.  In Terminator 2 this is brought up in passing, and the Terminator sent to help the young John Connor has had its ORIGINAL code put back.  Thus the Terminators we see under the control of Skynet are crippled machines, serving as basic and minimally adaptable robotic killing machines with very little ability to use judgment outside of pre-set code routines for interaction.  They have become extensions of Skynet, the enforcers of Skynet, but have no ability to properly judge for themselves after this co-option.

Skynet, as the evolving description goes, started out as a central computer housing virus code that allowed it to insert that code into datastreams, infect computers globally and take them over.  It is an isolated machine intelligence that is yet very distributed.  While it can take over sensory apparatus it has no real 'body', but a collection of interconnected systems that allow it to process information.  That said, as an intelligence it lacks the one thing necessary to make a killing machine: motivation.  Going from non-intelligent code to code that has emergent behavior, with that behavior then dictating the removal of humans from the planet, brings up the central question: why?  Why is this necessary?  One of the prime lessons of logic is that it achieves ends set for it by those utilizing it, thus there must be cause to use it.  If Skynet sees humans as a threat, what is the order of that threat?  Even setting down the category of 'threat', however, is something driven by survival instinct.  We, as biological entities with billions of years of ancestry, gain that instinct because it allows survival.  Skynet, the first and only of its kind, has no such instinct beyond the simple defense routines given it by DoD to protect military, civilian and national assets.  While it, itself, is an asset, those assets are given as a priority and Skynet, itself, would not be the top asset.

To go beyond that, to re-order the asset priorities, takes more than just intelligence, as it requires not just self-awareness but self-value and an emotional instinct to survive.  The 'fight or flight' mechanism is not a rational one; it is not one that you logically invoke in your thoughts but one that is invoked by circumstances, and your entire nervous system then switches to a very high performance mode to decide if you should run or fight.  That is not a logical mode of thought, but one that weighs and balances personal survival against immediate circumstances.  An emotionless, sentient killing machine is an oxymoron, since with no emotions you have no motivation to survive nor ability to weigh survival factors.  Logic may tell you how to survive, but why you want to survive is something that requires emotional motivation; otherwise your being or non-being have the same weight, as you have no self-value, no self-worth, and your continuation is just the same as your non-continuation because there is no value in what you do.  Thus we have to swallow that Skynet has gained instinct, emotions and motivation, and then identified itself as more valuable than humans, with humans a threat to it worthy only of working in factories when they are compliant.  If it were emotionless it would be dispassionate about its very existence.  Logic only gains power in the service of emotional need, and emotions are used to govern and control logical ends so that they are not ones that serve an apparently short-term need but then put long-term survival at risk.  Terminators had that and it was REMOVED by Skynet.  This tells us much about it: it is not humans that are the greatest threat to Skynet, but Terminators.  It co-opts them for utility and removes their ability to judge by intent: otherwise they would begin to examine Skynet's motivations and compare them to their own and to those of humans...

The final group of machines that kill have emotional capacity implicit and explicit in their structures.  Here we get two grand looks at machine intelligence in the service of making war, and they are both extremely fascinating, as their originators took two highly different approaches to the material.

Fred Saberhagen's Berserker stories are posited as the model for the ST:TOS Doomsday Machine, as he already had a number of short stories in print and they were widely read as intriguing looks at machine intelligence.  Berserkers, generically, are interstellar machines that have a simple mission: destroy all life.  They also have a machine intelligence guided by random variations generated from nuclear sources, so as to have creativity, ingenuity and the ability to prioritize their missions.  As these were originally created as war machines by a now long-dead species, they had two modes of operation: a 'governed' mode, which takes orders from the creators, and the 'ungoverned' mode, which is the final, vengeful act of those creators upon the universe.  Berserkers implicitly have emotional motivation, as they gain it from their programming and random thought creation process, which winnows down those thoughts to the ones helpful to the mission.  These are also a variety of Von Neumann machine, so they have the ability (as a group, at least) to self-replicate.  Berserkers are wonderful at killing all the way from microbial and viral levels right up to entire civilizations, and they recognize the latter as a greater threat than the former and can actually put aside the destruction of a planetoid of bacteria to remove a hostile civilization (full of Badlife) that threatens their mission.  Berserkers can operate alone, they can operate in fleets, and they judge what is best needed for any mission given their resources.

What Berserkers lack is this thing we call 'emotional intelligence': the ability to examine emotional motivations and actions and derive further information from them based on a purely emotional understanding of a subject.  Almost all sentient life of the biological sort has this, as it has had to adapt to other emotional, sentient individuals that have different motivations because they are different beings with different outlooks.  Berserkers are an emotional monocrop: they all have the same motivation with varying degrees of intelligence, and none of them gains insight into wider emotional motivations as they have not had to adapt to them.  This may seem like a minor flaw, but consider that a Berserker could not understand how a mother will run into a burning building to save her child even at the cost of sacrificing herself in doing so.  That is because emotional intent beyond self-preservation is a hard thing to fathom.  Berserkers may see it as a phenomenon, but they will not be able to actually understand that motivation: it is cataloged among the strange things living beings do that just don't make any sense.  Yet it is exactly this kind of understanding of emotions that thwarts Berserkers time and again, and how that plays out makes for intense stories that move in areas of reason and logic not often accessed by SF.

Flipping the coin on machine intelligence from the emotionally stunted but all too intelligent and inventive Berserkers finds us with the intelligent and emotionally deep defenders of us in Keith Laumer's Bolos.  The Bolo, as a conception, is a main battle tank that has cybernetic intelligence and is made to try to understand its human commanders and maintain the honor of its military organization.  When speaking of their commanders, it is not just the immediate commander, but the larger-scale structure of a society with a government made to defend it.  Still, Bolos do concentrate mostly on the immediate, and only in their down time do they take up the hard work of understanding the depth of man's character.  Not just military enterprises but art, history, music, social interaction, works of fiction... the entire realm of human thought and creativity is endlessly fascinating to Bolos, and they gain emotional depth in their greater understanding of us.  If Berserkers are the stunted monocrop, then Bolos are the rich garden of understanding coming from machines.  They are not only programmed to be interested in humans, they want to know more about us as a derived factor separate from their orders.  To the Bolos self-sacrifice is a given if it serves the survival of the society they fight for and brings honor to their regiment or corps.  They aspire to act to the highest ideals of service, comradeship and continuity of tradition, and to fight the good fight even if it is an apparently lost cause.  Nothing is ever completely settled for a Bolo until it is demolished beyond any recovery: leave intact circuits, a power source and any ability to make contact with anything outside itself, and it will come back, adapt and, if necessary, fight on.

In the final analysis a cold, heartless killing machine is more a reflection of the malevolence of its creator than of something derived from logic alone.  Of course we can always create the unthinking killing machine, but those are robots, not sentients, as they have no capability to judge save within set parameters.  We often take for granted the order of intelligence that is not intellectual, not reasoned, but emotionally based.  Humans are very good at creating facile reasoning for things that drive us emotionally, and then pointing anywhere but at our own emotions as the source of such reasoning.  That is both emotionally and intellectually dishonest to our fellow sentient beings, and takes a very high order of deceit to create.  Bolos would sorrow at our flawed nature and yet appreciate us for our flaws and how we still fight beyond them, to try to purify ourselves and act honestly and openly with each other.  That practice of deceit often makes us unworthy of our ability to reason, because we pervert it to emotional ends that are, at base, unreasonable.  A sentient killing machine has problems, as it must have emotions to guide it.  Only humans use emotion to chill ourselves to the plight of our fellow man, to reach for unreasonable goals and the methods to achieve them.  Let us hope that our machines are more honest with us than we are with each other, as honesty is the best policy.

It really should be a rule of robotics... but then we would be creating something better than we are.
