Schizophrenic AI Computer Claims Credit for Terror Plot

In artificial intelligence news, scientists have discovered a way to give computers the digital equivalent of schizophrenia.  While some suggest this could only end in a HAL 9000 scenario, others are questioning whether this lapse in logic could actually be a step toward giving machines a more human consciousness, promoting creative thought and individuality.  Could deliberately engineered flaws in a computer system eventually improve its ability to interact with humans?  Shortly after the test was run, the computer claimed it was behind a terrorist plot.

Few news subjects were more relevant to the year 2011 than artificial intelligence and terrorism.  So when a computer at the University of Texas at Austin was digitally altered to become schizophrenic and claimed responsibility for a terrorist plot, it was enough to make researchers stand up and take notice.  Fortunately, the computer was delusional and had no connection to any actual plot.

And the word delusional, applied to a computer, is an incredibly human descriptor.  Computers rarely reach a level where a formerly uniquely human trait can be attributed to them, and now, for the first time, we can call a machine delusional.  The discovery has scientists scrambling for answers about what the next step might be.  Though the study was one in psychology, using computers to model human emotions and psychosis could eventually lead us in a completely different direction in the end, and make computers more human.  But if so, we may have to face an uncomfortable possibility: that a computer must be driven to insanity before it can be considered human.  And that question reaches to the very core of what being human truly means.

The neural network, known as DISCERN (no relation to the CERN particle physics laboratory), was trained to process information in narrative form and then draw inferences from the stories.  It did so without problem.  Then the scientists sped up its learning to the point where it could no longer 'remember to forget' certain elements.  The end result bore a striking similarity to how schizophrenia manifests in the human mind.  The information it processed lost much of its relationship to the overall narrative, with the interactions of different words transposed on top of each other until the output made little sense, or in some cases made perfect sense but stated things that were patently untrue.  The goal of the experiment was to provide evidence that dopamine is directly related to schizophrenia through "hyperlearning."
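DISCERN itself is a complex narrative-language network, but the failure mode described above, in which memories pile on top of one another because nothing is ever forgotten, can be sketched with a much simpler classic model, the Hopfield associative memory.  This is a hypothetical illustration and not the actual DISCERN architecture: a network that stores a few patterns recalls each one cleanly, but overload it without any forgetting and probing with one memory no longer returns that memory.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product learning with no forgetting:
    every pattern is simply piled onto the same weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    """Repeatedly settle the network starting from a probe pattern."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties deterministically
    return s

# Four mutually orthogonal +/-1 patterns (the "memories").
a = np.array([1, 1, -1, -1])
b = np.array([1, -1, 1, -1])
c = np.array([1, -1, -1, 1])
d = np.array([1, 1, 1, 1])

# Within capacity: each stored memory is a stable attractor.
W_ok = store(np.array([a, b]))
print((recall(W_ok, a) == a).all())    # True: 'a' is recalled intact

# Overloaded: probing with 'a' now settles on a different, blended
# state -- the original memory has been overwritten in place.
W_full = store(np.array([a, b, c, d]))
print((recall(W_full, a) == a).all())  # False: 'a' is gone
```

Real associative memories avoid this collapse through weight decay or bounded learning, which is essentially the "remembering to forget" that the experiment disabled.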

So will this crazed computer mind eventually be turned into something more reliable?  And could the impulses running through its digital circuits one day add up to a genuine artificial intelligence?  If so, would we run the risk of that intelligence going insane?  And what would that say about the future, and the possibility of an AI singularity?