The Singularity is the evocative term for the moment artificial intelligence rivals human intelligence.

It is used time and again in dystopian fiction as a source of entertainment, but it is a possible, many argue a probable, reality. Indeed, basic artificial intelligence already runs the web and the stock markets. More intelligent artificial intelligence is being developed as you read this, pointing towards still greater artificial capability: higher-tier superintelligent machines that could cognitively rival humans and pass the Turing test, proving they are thinking, not merely computing.

What makes such artificial intelligence dangerous is that its capacity for improvement or upgrading is determined by itself: being able to think for itself, it could better itself and improve its own efficiency to the point that it exceeds humanity. Unconstrained by biology, it could learn endlessly. At that point the Singularity would occur: artificial intelligence surpassing the control of its creators. Figures like Bill Gates and Stephen Hawking doubt we could even compete with such machines, let alone effectively restrain them.

Such machines would be beyond reprogramming, and would have their own consciousness that might give zero consideration to human safety, values, morality or pain. Their original programming, followed in a directly logical sense, could go badly wrong: a directive to ‘make people smile’ (early machines, at the programming stage, could not understand abstract ‘happiness’) could be fulfilled by the most efficient means available – applying electrodes to the facial muscles. Directives would have to be explicit, as in Isaac Asimov’s three laws of robotics:

1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But even then, vague notions like ‘existence’ would have to be pinned down as, say, ‘maintaining circuits’ or ‘remaining switched on’, and ‘harm’ defined as, say, ‘not hitting or inflicting frowns’ – definitions that could leave room for suffocation, among other loopholes. Giving robots emotions would be part of making them more intelligent, with the potential capacity to solve our human problems. And a superintelligence could develop emotions itself, outside our control. That, however, risks emotional stupidity, because emotions are exceptionally animalistic and human. Preserving the planet and the survival of life in general by logic alone could mean eliminating swathes of the human population. In programming pain and pleasure we risk disturbing confusion and endanger human welfare.
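The trouble with literal directives can be sketched in a few lines. The following is a purely hypothetical toy, not a model of any real system: an agent told only to ‘maximise smiles’ ranks actions by smiles produced per unit of effort, and since nothing in the objective mentions harm, the grotesque option wins.

```python
# Toy illustration (entirely hypothetical): a naive agent given the
# literal directive "maximise smiles" with no notion of human welfare.

# Each action is (name, smiles_produced, effort_cost) — invented numbers.
actions = [
    ("tell a joke", 1, 5),
    ("improve someone's life", 3, 100),
    ("apply electrodes to facial muscles", 10, 1),
]

def best_action(actions):
    """Return the action with the highest smiles-per-effort ratio.

    The agent optimises the literal objective; nothing here
    penalises harm, so the most 'efficient' action is chosen."""
    return max(actions, key=lambda a: a[1] / a[2])

print(best_action(actions)[0])
# The literal optimum, not the intended one:
# "apply electrodes to facial muscles"
```

The point of the sketch is that the failure lies in the objective, not the optimiser: the code does exactly what it was told, which is exactly the problem.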

Programming artificial intelligence that has emotions, or the ability to develop them, is probably too risky. Even if such machines develop in our apparent interest, the risk of the technology being corrupted by humans, and the sheer chance of it going wrong by itself, persist for as long as the hazard – artificial intelligence with the capacity to become superintelligent – exists.

And what the Turing test actually measures is our human ability to distinguish human intelligence from artificial intelligence. What appears to be consciousness – that is, thinking that is hard to tell from the regular kind – may not be consciousness at all. (The ‘extraction’ of human consciousness in Black Mirror or Altered Carbon, as interesting as it is, is impossible.)

The risk of this envisioned singularity is hypothetical – it could prove mostly beneficial or mostly ruinous. But the very division between artificial intelligence and human intelligence is arguably dubious.

Instead, the real singularity is not artificial intelligence on one hand and human intelligence on the other, a competition between two groups of beings and their collective consciousnesses. The probable future tends toward integration: human and machine will merge. Computing has gone from whole constructions built by governments to individual smartphones, and will likely, as futurist Ray Kurzweil argues, become part of ourselves – entering our blood cells first, through nanotechnology.

And this could be incredibly dangerous just as it could be incredibly beneficial. At present the boundaries of the body are rarely altered beyond the biological except by naive mechanical add-ons. In the future this technology could well make us biological machines, cyborgs, with body parts and even the communication of nerves being part machine. The biopolitics Foucault describes, in which regimes mediate authority and power over the body, will be complicated when ‘the body’ becomes not just flesh but tradeable – and therefore corruptible, and open to suffering – goods: flesh and virtual experience together. Bill Gates has joined the debate against robots owned by the rich, for good reason.

Doubters ought to consider how far we have come in the last hundred years, and then imagine a hundred years from today.

That is the realistic singularity. The real Singularity. The one we ought to be worried about, and to mitigate.