
LOST IN TRANSLATION?

December 2018
By Fountech

April 2018 Published in Forbes

IS THE ASSUMPTIVE USE OF NATURAL LANGUAGE IN SOFTWARE HINDERING THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE?

Here’s a simple question. How many people reading this article, which you can see is written in English, could name the most popular book written and published this year in, say, Slovakian? Very likely, the only people able to answer would be a subset of bilingual Slovakians. This is one tiny example of a larger fact: there is a huge amount of information out there in the world that we can’t use, because we don’t have access to it, and we probably don’t know it exists in the first place. Similarly, no one offers us the information, because they assume we won’t understand it.

As a result, it’s very unlikely that the majority of native English speakers reading this article will be sent a monthly newsletter about the latest trending Slovakian authors. They won’t have sought that information, and the publishers of such a list, if one exists, are probably only mailing it to Slovakian speakers.

Certainly, online translation tools have enabled us to converse by e-mail, albeit awkwardly, in dozens of different languages, and translation apps let us order restaurant food in far-flung corners of the planet wherever we find ourselves. Crucially, however, unless you are Slovakian, you’re very unlikely to have had a conversation with another non-Slovakian speaker about an incredibly entertaining new novel that has just been published in the Slovakian language. In summary, the use of natural language creates invisible barriers to exchanging knowledge and promoting learning between human beings.

A utopian system of understanding.

So, let’s consider a poor B-movie sci-fi scenario for a moment, in which a comet passes close to the Earth, leaving a trail of dust that gives every human being on the planet the ability to speak only one universal language. What would the world be like then? There would probably be fewer wars, less racism and, crucially, a lot more understanding of one another. We’d know more and disagree less. In short, the world would likely become a much better place for all its inhabitants.

My vision for the way forward in how software communicates with humans, and vice versa, works like this: a universal language of symbols, numbers and icons, with one important add-on. The language would be personalized by artificial intelligence (AI), so that the software knows its owner, learns the subtleties of how they respond, and in turn learns how to tweak its output the next time it communicates with that person.

Unfortunately, the current thinking among humans working in AI development, for example on mobile device apps, seems to carry the assumption that software needs to be able to communicate eloquently with its owner in that user’s natural language. Whether that language is English, Mandarin Chinese or Slovakian, if the software can’t do this, it probably isn’t seen as successful or useful.

Look at the novelty value of Siri, for example: it’s probably more entertaining than useful, and it isn’t taken seriously as a reliable day-to-day business tool. Most importantly, it’s only scratching the surface of what machines could do for humans if they could communicate with us unhindered by natural-language misunderstandings.

At the moment, if software tells us something, in spoken or written natural language, that is badly worded for what the software is trying to convey, a human might either ignore it, perhaps dismissing the AI as simply unintelligent, or act upon the information in an unintended way.

I contest the current assumption that machines should talk to us like humans, or that they should write for us on a screen in our natural language. It makes much more sense for us to interact with machines in a whole new way, especially now that they are beginning to learn for themselves and becoming ever more driven by AI. In fact, AI can learn from its own mistakes much more efficiently than human beings.

Perhaps, then, we should be using some new system of diagrammatic, visual, super-specific icons and symbols instead of words to interact with the machines we create. Whatever system we design, it really shouldn’t be based upon words. When it comes to semantics, written words are very easily misinterpreted: almost everyone has sent an e-mail that unintentionally upset its recipient, because text carries neither the nuance of body language nor the intonation of a real human voice.

Anyone who has studied mathematics and struggled to make sense of the work of early Greek mathematicians like Euclid, written before the mathematical symbols and notation we’re used to today, will understand what I’m saying. Today, mathematicians from around the globe can easily make sense of each other’s work without having to interpret convoluted, ambiguous sentences written in one natural language or another.

Allowing software to do what it does best.

It seems to me that forcing software to communicate with us in natural human language is like trying to teach a donkey to explain how it pulls a cart before we allow it to pull that cart. Wouldn’t it make more sense simply to let the poor creature do what it does best, which is pulling carts?

Putting such obstacles in the way of progress is hindering what could be an incredibly exciting time for the exploitation of artificial intelligence. There can be little doubt that the prevalence of AI in our everyday lives is inevitable. More software will be using it for hyper-personalization, for example (I wrote about hyper-personalization in this recent article). As this self-accelerating, self-teaching hyper-learning improves, AI’s appearance in our everyday products can only become more certain.

For its interaction with humans to be truly successful, however, we need to be absolutely clear about what AI is telling us, and, importantly, what it wants to tell us, perhaps something that might not have even occurred to us yet.

The Turing Test, a red herring of eloquence?

If you’ve read this far, I’m assuming that you are probably familiar with the work and reputation of Alan Turing, and that you may well have heard of the Turing Test, which he developed in 1950. The test assesses a machine’s ability to exhibit, in natural-language output, intelligent behaviour equivalent to, or indistinguishable from, that of a human. Passing this test seems to be the current Holy Grail of AI developers, who see it as a prerequisite for success, or even a starting point.

I’d like to offer a contentious hypothesis here, which some specialists in the AI field may even consider apostasy, so I’d like to emphasise my opinion that Alan Turing was undoubtedly a genius whose work is beyond reproach. I should make it clear that the following statement doesn’t diminish my respect for this great man’s memory in any way. But here goes:

I believe that the Turing Test is now redundant in the context of software development, and that its use is actively hindering the development of AI in general. I think humans are taking the blinkered view that a machine must talk to us as if it were itself human in order for it to work. If we could develop a universal language that enabled a more efficient output of how artificial intelligence thinks, the development of AI would proceed faster and more efficiently. We’d be letting the software do what it does best.

By way of example, I’d like to refer to the development of an exciting app in which I’ve become deeply involved. It’s called LOMi, the AI engine for a product called Prospex.

It’s an extremely powerful business networking intelligence tool (if you’d like to receive more information about LOMi, look at lomi.ai). Developed as a UI- and platform-agnostic cloud-based app, LOMi uses artificial intelligence to recommend smart connections, unlock opportunities and deliver valuable insights.

Running either in parallel with the user or as an autonomous assistant, LOMi uses proprietary neural-network-based technology to identify and connect with the right people and organizations, whilst simultaneously offering contextual insights across a broad range of subjects.

The analysis of data is performed across a wide spectrum, from local to global, whilst always favouring depth and relevance over breadth and reach. In short, LOMi can put you in touch with the right people and organizations at the right time, in ways you might not have previously imagined, and make ongoing suggestions about how to interact with them better.

But it seems that trying to have LOMi communicate with its user entirely in natural language is a huge waste of effort, and counter-productive to boot. To use a simplistic example, let’s say that LOMi discovers that two people, unknown to each other, have numerous things in common, and that each is already connected to one of two people who are themselves very closely connected. LOMi might determine that the dots need to be connected, and so will suggest to both individuals that they connect on LinkedIn to explore synergies. LOMi may do this by asking one known contact to ask the other party’s known contact to make the introduction.
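To make the example concrete, finding the shortest chain of introductions between two people is a classic breadth-first search over a contact graph. This is only an illustrative sketch with invented data, not LOMi’s actual implementation:

```python
from collections import deque

# Hypothetical contact graph: each person maps to the people they already know.
contacts = {
    "Person A": ["Carol"],
    "Carol": ["Person A", "Dave"],
    "Dave": ["Carol", "Person B"],
    "Person B": ["Dave"],
}

def introduction_path(graph, start, goal):
    """Return the shortest chain of introductions from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for friend in graph.get(path[-1], []):
            if friend not in visited:
                visited.add(friend)
                queue.append(path + [friend])
    return None  # no connection exists

print(introduction_path(contacts, "Person A", "Person B"))
# ['Person A', 'Carol', 'Dave', 'Person B']
```

Finding the path is the easy, crisp part; as the rest of this article argues, the hard part is explaining the result to a human.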

Irritating noise or crucial information?

Here’s an important example of the crucial difficulty with software using natural language. LOMi might say, or print on-screen, a message that person A should connect with person B, and show the quickest path to making that connection. However, it’s important for LOMi to give compelling reasons; otherwise this information could be perceived as just noise. The software works with keywords (i.e. the names of things) but may not really know, yet, exactly what the real-world relationships between those things are.

So LOMi can join the dots between people and things and figure out strong correlations based on complex stochastic and metaheuristic algorithms; we are building something incredibly powerful and useful. However, the world of AI scientists, developers and marketers has been conditioned to expect LOMi to be able to explain eloquently, in long-winded natural language, why she has drawn certain inferences. Consumers, in their turn, have likewise been conditioned to expect AI that speaks or writes to them like a well-educated human.

The counter-productive use of natural language.

Up to this point, all the data that has been gathered by LOMi is crisp and specific. The AI algorithms are extremely logical and nothing is fuzzy. LOMi sees a series of people and entities, and the connections between them have been arrived at through complex mathematics and extreme computational power. Yet the rationale for the recommendations doesn’t make any sense in natural language. In a grossly oversimplified example, LOMi might determine that Person X is somehow connected to the phrase ‘mobile applications’, but the relationship between that person and the phrase could be one or more of an infinite number of possibilities. Did Person X invent mobile applications? Do they build mobile applications for a living? Do they write about mobile applications, or work as a marketer in the field? Perhaps Person X has argued on a blog that mobile applications are destroying the world, or wrote a book entitled ‘Mobile Applications and their Relationship to AI’? The possibilities are, of course, endless.
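The trouble is that the software stores an untyped link where a sentence demands a typed relationship. A toy sketch of that gap (a hypothetical representation, not LOMi’s internals):

```python
# All the algorithm needs: an untyped association between two nodes.
association = ("Person X", "mobile applications")

# Some of the things a sentence would force that association to mean:
possible_relations = [
    "invented", "builds", "writes about",
    "markets", "argues against", "wrote a book on",
]

# Verbalising the link means picking one relation the data does not record,
# so every candidate sentence is a guess.
sentences = [f"{association[0]} {rel} {association[1]}" for rel in possible_relations]
for sentence in sentences:
    print(sentence)
```

The association itself is crisp; only its translation into words introduces the ambiguity.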

So, it has become apparent that trying to get LOMi to convert those powerful discoveries into natural language turns something super-specific into something ambiguous, imprecise and fuzzy. There is a very high risk of LOMi coming up with a superb recommendation but not communicating it precisely, accurately or eloquently enough for it to be given credence. It is very easy for the superb utility of the output to be lost in translation.

In short, forcing AI to speak or write like humans can easily dumb down the very intelligence for which the software was created.

Natural Language Generation – are we swimming against the tide?

There is some great work being done in the fields of natural language generation (NLG) and natural language processing (NLP, not to be confused with neuro-linguistic programming). I have the utmost respect for many of the people doing great work in these fields. They are trying to adapt the way in which software interacts with human beings, ever improving the methods by which AI communicates with and serves us. But I can’t help feeling that, in many cases, people’s efforts might be better spent on a brand-new, hyper-personalized, freshly invented system of interaction that didn’t involve words, or if it did, very few of them.

It seems to make much more sense for computers to use a visual, super-specific, personalized language which doesn’t use complete English sentences but instead draws a series of diagrams, similar to mind-maps. This would allow LOMi, for instance, to output more accurate representations of what she learns, in a visual form that any human would understand. It carries the added bonus of not having to create multiple language versions of software interfaces in Chinese, Russian and so on.
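As a sketch of what such output could look like, a recommendation could be emitted as a small diagram specification rather than a sentence. Here I use the Graphviz DOT format purely as a familiar stand-in for a purpose-built visual language; the function and data are hypothetical:

```python
def recommendation_to_dot(person_a, person_b, via):
    """Render an introduction path as a Graphviz DOT digraph."""
    nodes = [person_a] + via + [person_b]
    lines = ["digraph recommendation {"]
    for src, dst in zip(nodes, nodes[1:]):
        lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

print(recommendation_to_dot("Person A", "Person B", ["Carol", "Dave"]))
```

The result is an unambiguous picture of who introduces whom, with no sentence to misread.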

Not one system is perfect.

Of course, no system of communication is without its limitations and difficulties, not least misinterpretation by users. But remember that previous systems have involved communication only between humans, and humans don’t learn from their mistakes anything like as quickly as computers do.

If we think about early long-distance messaging systems like semaphore and Morse code, they were merely ways of converting words into signals that could be sent across a distance, and then converted back into words.

The beauty of a hybrid system of diagrams, icons, numbers and perhaps a few words, between a machine and a human, is that the machine can learn instantly how its owner interprets the output.

For example, let’s look at the internationally recognizable iconic shape of a heart, portrayed in red. Some people might interpret that icon as meaning love, whereas others might see it in a medical context. A hyper-personalized, AI-driven software package would learn very quickly how its user saw such a symbol and remember that for the next interaction. The ambiguity of interpreting words would be replaced by the certainty of a specific individual’s response to a given symbol.
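A minimal sketch of that learning loop, assuming feedback arrives as explicit taps from the user (the class and data here are invented for illustration): keep a per-user tally of how each symbol has been interpreted, and present the majority reading next time.

```python
from collections import defaultdict, Counter

class SymbolProfile:
    """Per-user memory of how symbols are interpreted."""

    def __init__(self):
        # symbol -> Counter of meanings the user has confirmed
        self.history = defaultdict(Counter)

    def record(self, symbol, meaning):
        """Call whenever the user taps a response to a displayed symbol."""
        self.history[symbol][meaning] += 1

    def interpret(self, symbol, default=None):
        """Return the meaning this user most often assigns to the symbol."""
        counts = self.history[symbol]
        return counts.most_common(1)[0][0] if counts else default

user = SymbolProfile()
user.record("red heart", "love")
user.record("red heart", "health")
user.record("red heart", "health")
print(user.interpret("red heart"))  # health
```

After two medical readings and one romantic one, the software shows this user the medical interpretation; a different user’s profile could settle the other way.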

It goes without saying that a system of diagrams and numbers without words isn’t without its limitations. I wonder how many people must have wished fervently, of a Sunday afternoon, that a special place in hell had been reserved for the author of an instruction leaflet for assembling flat-pack furniture! But imagine if that leaflet were an on-screen AI interface, able to understand instantly how the human furniture assembler interpreted the displayed diagrams, through the human tapping certain responses during the assembly. The next time that person purchased another item of furniture, the diagrams would be subtly changed for that individual.

A world of possibilities.

Never before have there been such possibilities for the human race. We’re on the cusp of building machines that can truly teach themselves to learn. In turn, they can learn about their creators’ needs and preferences without making the same mistake twice.

In this context, it’s my firm belief that the sole use of natural human language is almost certainly more of a hindrance than a helping hand.