Artificial Intelligence is Dumber Than a Bee. For Now!
The nervous system of a mosquito contains about half a million neurons; that of a bee, about 800,000; that of a dog, 160 million. A human brain has roughly 85 billion. Modern artificial neural networks contain only a few hundred thousand neurons – in rare cases a few million – which, in crude terms, puts them on a par with a bee. By the same crude measure, a bee is about 200 times “dumber” than a dog and roughly 100,000 times dumber than a human.
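The ratios above are easy to check with a quick calculation, using the neuron counts quoted in the text:

```python
# Approximate neuron counts as quoted above.
mosquito = 500_000
bee = 800_000
dog = 160_000_000
human = 85_000_000_000

# How many times larger is a dog's (or human's) nervous system than a bee's?
dog_vs_bee = dog / bee        # 200.0
human_vs_bee = human / bee    # 106250.0, i.e. roughly 100,000

print(dog_vs_bee, human_vs_bee)
```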
However, if computer technologies continue to develop at their current pace, then, according to futurist Ray Kurzweil and other researchers, desktop computers will match or even surpass the human brain in computational capability by 2030-2040.
Does this mean that robots will triumph over people in 2030-2040? Not quite. But the future will still be exciting. Artificial intelligence will learn to create other artificial intelligences more efficient and powerful than human-designed systems. And by that time, AI will be in use in every business and in every part of our lives.
The evolution of the electronic intellect
In the 1990s, the first AI technologies required hand-written rules. Engineers and experts laboured long, teaching intelligent systems to generate and test different hypotheses and rules.
For example, this is how text is recognised when there are millions of fonts: an expert breaks letters down into their elements and creates a rule – if you see a small bar attached to the left side of a circle, it is the letter “p”. When the system detects a circle and a bar, it generates competing hypotheses – “p”, “d” or “b” – and each is either proven or refuted. This is how ABBYY FineReader software learned to recognise even fonts it had never seen before. That was magic.
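A rule-based recogniser of this kind can be caricatured in a few lines. This is only an illustrative toy – the feature names, the bar/circle convention and the letter set are invented here, not taken from any real OCR system:

```python
# A toy rule-based character recogniser, in the spirit described above.
# An "expert" hand-crafts the features and the rules that combine them.
def classify(has_circle, bar_side, bar_extends):
    """bar_side: 'left' or 'right'; bar_extends: 'up' or 'down'."""
    if has_circle and bar_side == "left" and bar_extends == "down":
        return "p"   # bar on the left of a circle, going down
    if has_circle and bar_side == "left" and bar_extends == "up":
        return "b"   # same bar, but extending upward
    if has_circle and bar_side == "right" and bar_extends == "up":
        return "d"   # bar on the other side
    return "?"       # no rule matched: the hypothesis is refuted

print(classify(True, "left", "down"))  # p
```

The weakness of this approach is visible even in the toy: every new letter, and every new font quirk, demands another hand-written rule.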
Modern machine learning technology is even more magical. Modern artificial intelligence does not need anyone to define the data structure or invent rules. You just feed it a million texts and show it a thousand characters similar to the letter “p”. The artificial neural network learns from these examples, finds consistent patterns in them and starts generating its own solutions, picking out all the “p”s. This is very much a black box, and very like how a human thinks: the neural network builds its own connections in a way that lets it make sense of the incoming signal.
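The idea of learning from examples instead of rules can be sketched minimally with a perceptron – the simplest artificial neuron – trained to tell apart two hand-made 3x3 “letter” bitmaps. The bitmaps and labels here are invented for illustration; real systems learn from millions of examples, not two:

```python
# A minimal example-driven learner: a single perceptron that separates
# two toy bitmaps with no hand-written rules at all.
P = [1, 1, 0,
     1, 1, 0,
     1, 0, 0]   # a crude "p"-like shape (illustrative only)
Q = [0, 1, 1,
     0, 1, 1,
     0, 0, 1]   # its mirror image, standing in for another letter

examples = [(P, 1), (Q, -1)]   # label +1 for "p", -1 for the other
weights = [0.0] * 9
bias = 0.0

for _ in range(10):                                # a few passes suffice here
    for pixels, label in examples:
        score = bias + sum(w * x for w, x in zip(weights, pixels))
        if score * label <= 0:                     # misclassified:
            weights = [w + label * x               # nudge weights toward
                       for w, x in zip(weights, pixels)]  # the right answer
            bias += label

def predict(pixels):
    score = bias + sum(w * x for w, x in zip(weights, pixels))
    return 1 if score > 0 else -1

print(predict(P), predict(Q))  # 1 -1
```

No rule about bars or circles appears anywhere: the decision boundary is encoded in the learned weights, which is exactly what makes such systems feel like a black box.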
More advanced artificial neural networks are capable of training themselves without any human input. There is no need to show them the letter “p”: the system itself works out that sentences consist of words, that words consist of letters, and that the English alphabet, for example, has 26 letters. This is the highest league – self-learning neural networks.
One such network taught itself to play the game of Go and defeated an earlier, human-trained version with a score of 100:0 – despite the number of possible positions in Go exceeding the number of atoms in the universe, which means the game cannot be won by brute force.
Self-learning artificial neural networks are already able to pick out cats or dogs from a million animal images. Next comes the ability to distinguish between soft and hard objects, between water and trees. Intelligent technologies understand the meaning of words and sentences in vast, complex texts; they can extract the necessary information – about people, dates and locations, for example – and see the connections between them. Neural systems have already started to learn how to make complex decisions.
There are still some broad challenges for AI, as many hypotheticals demonstrate.
If a self-driving car sees a person running across the street, it will brake or swerve to the side of the road. But the situation may be more complex: suppose a group of children is crossing an icy road while an elderly man stands on the roadside. Every possible outcome involves a victim. What must AI do when some sacrifice is inevitable? Should we entrust this decision to the “black box” of artificial intelligence, or should we impose rules for such situations? We still have to answer many questions about how intelligent systems should operate.
What is around the corner?
Progress in technology is irreversible. “AI is the new electricity,” said Andrew Ng. The question is whether we will use its high-voltage wires for development or get a short circuit.
We can expect real businesses to apply intelligent technologies in the near future and see a rise in efficiency as AI helps make business decisions. These are some of the projects already underway:
- Banks are using AI technologies as a much faster way to analyse documents for customer-onboarding, assessing risks when issuing loans, and identifying financial irregularities.
- In large corporations, AI checks tender documents and determines the best supplier.
- In telecom and retail networks, AI can process client requests, respond to comments in social networks, analyse open sources and internal documents to identify reputational risks.
- In construction and manufacturing, AI sends notifications about incidents so workplace emergencies can be fixed quickly, verifies project documentation and helps reduce project costs at an early stage.
Another emerging trend is recognition in the video stream. When you point a camera at any surface or object, such intelligent technologies instantly extract information. Very soon they will be used everywhere to recognise data from documents – passports and ID cards, driving licences – as well as number plates, signs, meters, monitors and much more.
In addition, systems that analyse images from video cameras and instantly understand what is happening will soon come into everyday use. They will be able to understand who went to the pool – a dog, a child or a kangaroo, to analyse the actions of the object and decide how to react. In retail outlets, analysis of the video stream will allow owners to monitor and evaluate the behaviours of both staff and buyers. So, the elements of artificial intelligence will be present in all spheres of life.
Will artificial intelligence replace people and provoke unemployment? I don’t think so. Most likely, we will simply reduce the working week to 3 or 4 days. The rest of the time can be devoted to self-development.
David Yang is a co-founder of ABBYY, the leading developer of document recognition, data capture and linguistic software. He holds an M.S. in Applied Mathematics and Physics, is the author of a large number of scientific publications and holds many patents.