
AI SELF-AWARENESS

December 2018
By Fountech

THE INHERENT SAFETY OF AI.

This is an excerpt from an article for LinkedIn, written in 2017 by AI architect and Fountech founder Nick Kairinos. It is a crucially important piece of philosophical theory about the inherent ‘safety’ of AI, if (and when?) it ever becomes ‘self-aware’.

Having been deeply involved in applying AI in my own field of expertise, namely mobile device application development, I’m fascinated by this debate over the inherent ‘safety’ of AI.

I’m involved with a company that is developing AI to work within an app that exploits business intelligence for highly advanced business social networking. It is technology that will learn about its users and suggest hitherto unimagined synergies and opportunities between them. The technology is based on artificial neural networks (ANNs). In simple terms, the electronic synapses and processes are modelled on the way a mammal’s brain functions. But there is a seminally important difference between a silicon-chip ANN and a mammal’s blood-and-tissue neural network. The machine does NOT have an inherent biological program, developed over billions of years, that assumes it has to ‘kill or be killed’ in any given situation.
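
For readers unfamiliar with the analogy, the sketch below shows a single artificial ‘neuron’: input signals are scaled by weights (the electronic ‘synapses’) and passed through an activation function. It is a minimal, purely illustrative example of the ANN building block described above, not Fountech’s actual system; all names and values are invented for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of input signals passed
    through a sigmoid activation -- a loose analogue of synapses that
    strengthen or dampen signals in a biological brain."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output to (0, 1)

# Illustrative only: two input signals with hand-picked weights.
print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))
```

Notice that nothing in this arithmetic encodes survival pressure: the ‘kill or be killed’ drive the article contrasts it with would have to be put there deliberately; it does not emerge from the mechanism itself.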

It’s important to remember that mammals, indeed all organisms, operate within the food chain of an ecosystem. Somewhere along the line, predators are programmed to kill prey, whether for sustenance or for population control. That’s just nature’s way of keeping planet Earth in harmony, responding to ecological forces. Darwinism at its most cruel and obvious.

But machines don’t get eaten by humans, nor do they need to destroy anything else in order to survive. So, in short, even if (or when) AI eventually becomes ‘self-aware’, why would it choose to become a predator? Why destroy humans, mammals or anything else? Why would it choose predatory actions when it has never learned the concept of being preyed upon in the first place?

It is only biological organisms that are genetically programmed to kill anything they perceive as a threat. In fact, machines might not even have a sense of the very concept of ‘threat’. The ‘logical’ path of self-awareness is to become ever more efficient and useful, but why would it be inherently necessary to destroy something else for that to happen?