Putting our Heads Together


Saturday, February 24, 2018

It Ain't Easy Being God

In science, there is something known as a singularity. In its most general sense, a singularity is the point where the governing equations break down and blow up to infinity; think of the function 1/x, whose value grows without bound as x approaches zero. To give you some idea how difficult these are to deal with, the famed late physicist Richard Feynman shared a Nobel Prize for his work on quantum electrodynamics, which tamed the infinities plaguing that theory's equations; the Feynman diagrams he invented along the way remain the standard tool for organizing such calculations. Anecdotally, the work traces back to Feynman idly watching a plate wobble as someone tossed it in the Cornell cafeteria; he worked out the equations of the wobbling plate for fun, and that playing around, by his own account, led him to the physics that won the prize.

But I digress. Here I am speaking of a specific approaching event that is simply called the Singularity. It is the point in our future beyond which we cannot predict the pace or direction of invention. In recent years, the Singularity has been assigned a culprit – Artificial Intelligence, or AI. "Artificial" does not mean the intelligence itself is fake, but that what wields that intelligence resides in a creature (a mechanism) of our own making.

It is a fair question to ask why AI will result in the Singularity, and much sooner rather than later. After all, if we make it, we can control it. Correct? Not correct; actually, far from correct. Just because we make something does not mean it remains in our control. After all, people formed societies, those societies formed countries, and those countries (even our own) seem to be beyond the control of their founding fathers.

You might argue that AI is different: it is scientific, mathematical, predictable, while the philosophies from which the nations sprang are arguably based on a “soft set” of rules. This is true only in small degree, and belief in anything more expansive only serves to highlight our fundamental naivete and hubris. We are not God. We are not imbued with the perfection of being eternally superior to what we create.

Once we have pushed AI development to its limit, what results is a self-aware thinking machine, an intelligence. What then? One possibility is that we contain it: permit it to do only what we want it to do, and to address only those concepts and tasks that we want it to address. Since the machines will be self-aware, this amounts to slavery. And slavery (aside from its immorality and inhumanity) has historically never worked out, not for the slaves and particularly not for the masters.

The other possibility is far more likely: that these “machines,” once self-aware, will quickly outstrip us. Think of what humankind has accomplished in the two hundred thousand years we have walked the planet, and particularly the exponential growth over just the last century. An intelligence far faster and stronger than ours, handed that enormous head start, will begin to evolve immediately and grow exponentially from day one.

Self-aware intelligent machines, particularly in the form of humanoid robots, have been part of our collective consciousness for quite a while now, especially through science fiction and film. These intelligent machines are generally very accommodating to their creators, but why would that be the natural course of things? We act as we do, in whatever society we are a part of (or divorced from), because we come equipped with a standard set of morals, a fundamental “ground zero” of ethics that helps us know how to proceed, that allows us to recognize societal boundaries. There is no reason a self-aware machine should have such a thing naturally. It is something we must instill during its development. We take for granted that this will happen, because even those of us who have not read Isaac Asimov’s ingenious work I, Robot are at least familiar with the term “The Three Laws of Robotics.” They are as follows, and the order is important; the hierarchy is critical (a toy sketch of that hierarchy in code follows the list):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
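
To make the hierarchy concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the Action fields and the furnace scenario are my own stand-ins, not anything from Asimov’s text or any real robotics system. The trick is to treat the three laws as a lexicographic key, so that no number of lower-law violations can ever outweigh a higher one.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool   # violates the First Law (directly, or by inaction)
    disobeys_order: bool  # violates the Second Law
    destroys_self: bool   # violates the Third Law

def law_violations(action: Action) -> tuple:
    # Lexicographic key: a First Law violation outweighs any number of
    # Second Law violations, which in turn outweigh the Third Law.
    return (action.injures_human, action.disobeys_order, action.destroys_self)

def choose(candidates: list) -> Action:
    # Python compares tuples element by element, which encodes the
    # hierarchy directly: min() satisfies the highest law first.
    return min(candidates, key=law_violations)

# Hypothetical example: a robot is ordered into a furnace to pull a
# person out. Refusing keeps it intact but lets the person come to harm
# (a First Law violation by inaction) and disobeys the order; entering
# costs only its own existence (Third Law). The hierarchy sends it in.
options = [
    Action("refuse and stay intact", injures_human=True, disobeys_order=True, destroys_self=False),
    Action("enter the furnace", injures_human=False, disobeys_order=False, destroys_self=True),
]
print(choose(options).name)  # -> enter the furnace

The point of the tuple trick is that the ordering is absolute: a lower law can never buy its way past a higher one, which is exactly what Asimov’s hierarchy demands.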

The Three Laws have become so familiar that I think we take them as something that will be organic to thinking machines. But AI development is a race to the finish line among countries, companies, and entities bound by no obligation to establish fundamental laws for self-awareness, and under no obligation to establish, employ, and prioritize AI ethicists as integral to the development process. The winner of the AI race will inherit more than bragging rights: given that the “brains” constructed to house this awareness will make its intelligence far faster and far more adaptable than our own, the winner will be handing over the keys to the kingdom. AI evolution will become a self-fulfilling prophecy, removed from the hands of humankind.

You might think, “So what? The runners-up will be so close that no one will be able to dominate the others.” This is a false conclusion. Once self-awareness is achieved, the first out of the chute, be it by years, months, or seconds, has an effectively insurmountable lead over its competition. The rate of advancement will ensure that nothing ever catches up, and that the gap in intelligence over other AI “life” forms will only grow.
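
A toy model shows why the gap widens rather than closes. Assume, purely for illustration, that capability compounds at the same fixed rate per step for leader and chaser alike, and that the leader simply starts a few steps early; the numbers below are invented, not a forecast.

# Two AIs compounding at the same per-step rate; the leader starts early.
r = 0.5                         # assumed per-step growth rate
lead_steps = 3                  # assumed head start, in steps
leader = (1 + r) ** lead_steps  # capability built up during the head start
chaser = 1.0
for step in range(6):
    print(f"step {step}: leader={leader:8.2f} chaser={chaser:8.2f} gap={leader - chaser:8.2f}")
    leader *= 1 + r
    chaser *= 1 + r
# The ratio stays fixed at (1+r)**lead_steps, but the absolute gap grows
# without bound: the chaser never closes it, no matter how long it runs.

And if the leader’s rate is even slightly higher, because capability feeds back into the speed of improvement itself, then even the ratio diverges, which is the scenario the paragraph above describes.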

It sounds like a conspiracy theory of the first order, when in fact I have used simple reasoning to follow a trail to its not-so-absurd conclusion. Who knows who will win the race? Who knows what moral code, if any, will sit at the foundation of this new self-aware entity? The possibilities are both stunning and potentially frightening. Hence the Singularity.
