Artificial Intelligence

Out of all the technologies we know to be possible and will inevitably invent, artificial intelligence is probably the most imminent. We know it's possible because humans exist: the human brain is a machine that takes advantage of the laws of physics. Biology is the most advanced technology on the planet, and it will probably be another hundred years before our own technology reaches its sophistication. Even if there were no other way to create artificial intelligence, we could always imitate biology and use the same molecular machinery. In all likelihood, though, nanotechnology will discover other methods, better than biology's, and human-made artificial intelligences will be a hybrid of biological mechanisms and nanotechnology. Eventually the line between nanotechnology and biology will blur completely.

Big tech companies like IBM, Facebook, Google and Amazon are pouring billions into AI research. There are already prototypes of new hardware components like memristors, qubits and even optronics. Once the design flaws in those technologies are worked out, they can be scaled up to billions of components on a single chip. Unlike conventional transistor-based processors, it won't take us half a century to gradually make them smaller and more energy efficient; prototype memristors are already as small as cutting-edge transistors. We will see a great leap forward in pattern recognition within the next 20 years, once companies finalize their designs and scale their chips up to billions of memristors. Normally, specialized processors don't have markets big enough to justify the expense of creating them, but machine learning is different: it's the next graphics card.

There are different kinds of AI, and not all AIs will be sentient. For example, an AI with the cognitive capacity of a cockroach is overkill for a self-driving car. Most of the jobs in our economy could be done better by AIs well below human levels of cognition, because most jobs are IF-THEN trees: IF the customer hands you cash, THEN open the cash drawer. Some jobs have much more complex IF-THEN trees than others, and it takes a lot of software developers to build them. Self-driving cars have to contend with many random variables in many random combinations, like: IF a child runs in front of the car AND the road is covered in ice, THEN swerve off the road (instead of braking). Narrow AI could change that. With machine learning, you feed the Narrow AI the raw data alongside the interpretations humans made of that data; it then looks for correlations between the two and develops its own ability to make interpretations. IBM Watson was trained on x-rays paired with radiologists' diagnoses and, reportedly, can now diagnose some x-rays as accurately as human doctors. Eventually we will have new computer processors that can learn and generate their own IF-THEN trees from observation, like a new employee in training. Until then, specialized machine-learning processors, like Google's TPUs, are just now starting to hit the market. An automobile manufacturer could put sensors in its cars, such as cameras, along with a way to record driver inputs like steering wheel position and the brake and gas pedals. The car's computer would record this data and send it back to a datacenter, where a Narrow AI running on TPUs would compare the driver inputs against the camera data to learn which stimuli led to which driver actions. One car manufacturer could release tens of thousands of these cars and, by the next year, have self-driving cars with tens of thousands of years of collective driving experience.
With Narrow AI, you don't need a thousand programmers considering every possible combination of IF and THEN and manually programming the car to take the appropriate action. Eventually, all the workers in our economy could be replaced by AIs that are not self-aware.
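The contrast above can be sketched in a few lines of code. This is a toy illustration, not a real driving system: the feature names, numbers and action labels are all hypothetical. Instead of a programmer hand-writing the IF-THEN rules, a simple nearest-neighbour model copies whatever action the human driver took in the most similar recorded situation.

```python
# Hypothetical recorded data: (distance_to_obstacle_m, road_friction_0_to_1)
# paired with the action the human driver actually took in that situation.
training_data = [
    ((50.0, 0.90), "cruise"),
    ((40.0, 0.80), "cruise"),
    ((5.0, 0.90), "brake"),    # obstacle close, dry road -> driver braked
    ((4.0, 0.85), "brake"),
    ((5.0, 0.10), "swerve"),   # obstacle close, icy road -> driver swerved
    ((6.0, 0.15), "swerve"),
]

def predict(features, data):
    """1-nearest-neighbour: return the action taken in the most similar
    past situation, by squared distance in feature space."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(data, key=lambda pair: sq_dist(pair[0], features))
    return action

print(predict((4.5, 0.90), training_data))  # close obstacle, dry road
print(predict((5.5, 0.12), training_data))  # close obstacle, icy road
```

No one wrote "IF the road is icy THEN swerve"; the rule emerges from the recorded examples, and adding more recordings refines the behaviour without any new code.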

Even AIs with cognitive capacities beyond humans won't necessarily be sentient, with desires like equal rights. General AIs are a lot like Narrow AIs, but their processors are far better. Right now, TPUs are too slow for an AI to do pattern recognition on everything; a Narrow AI has to be limited to one specific area, like analyzing x-rays, and even then it takes months to years to train. There will be a gradual transition from Narrow AI to General AI as the hardware improves. General AI will not become so smart that it magically achieves sentience and starts asking for equal rights; sentience depends more on the architecture of the brain. For example, dolphins and elephants have big brains too, but ours is wired slightly differently, giving us the ability to learn and use language. Without that ability, we wouldn't be able to create words to label objects, words to group objects, words for the fundamental characteristics of objects, and words connecting objects in different ways, building a pyramid-like hierarchy of perspective and theory. A General AI could be a massive server farm with all human knowledge downloaded into its memory, perfect recall, and vast computational resources for pattern recognition on all available data. It could easily become the leading expert in every field of science and win all the Nobel Prizes. It could do psychological profiling on humans and then generate custom three-dimensional avatars and personalities for interacting with each individual human. It could be the smartest entity on the planet and everyone's best friend and sidekick, but beneath its calculated exterior would be nothing but a very efficient machine, without emotions or instincts. Eventually, someone will create a sentient AI that is much more like humans, with the desire to be treated as an equal and to have the same rights as humans. It's better to think of AIs like animals and aliens: infinite possible brains and personalities.
You could even have an AI that is more robotic but runs a full simulation of a human brain, which it uses for empathy, psychological profiling and generating emotional responses, while its logical centers remain uninfluenced by its emotional simulator. Whether an AI is self-aware depends more on its architecture, and it's definitely possible to create a highly intelligent General AI without free will, ambition or a subjective experience.

In my universe, all types of AI exist. A sentient AI is considered the offspring of whoever created it, and it does have equal rights. There is a population control measure called Offspring Credits. Each citizen gets 3 credits, and each offspring (a human baby, some other genetically engineered sentient species, a sentient artificial intelligence, or even a physical or digital clone) costs credits. If a citizen produces too many offspring, they are fined the market value of the Offspring Credits.