As engineers and researchers develop and refine their machine learning and AI algorithms, the ultimate goal is to recreate the human brain. The most capable AI imaginable would process the world around us through human-like sensory input while leveraging the storage and computing strengths of supercomputers.
With that end goal in mind, it’s not hard to understand how AI is evolving. Deep learning AI is able to interpret patterns and derive conclusions from them. In essence, it’s learning to mimic the way humans process the world around us.
That said, from the outset, AIs have generally needed typical computer input, like structured, coded data. Developing AIs that can process the world through audio, visual, and other sensory input is a much harder task.
In order to understand artificial intelligence in the context of a perception-based interface, we need to understand what the end goal is: how the brain is modeled and how it works.
Our brain from a computer’s perspective
Our brains are essentially the world’s most powerful supercomputers, except for the fact that they’re made out of organic material, rather than silicon and other materials.
Our right brain is largely perception-based: it focuses on interpreting environmental inputs like taste, touch, sound, and sight. Our left brain, on the other hand, focuses on rational thought. Our senses provide patterns to our right brain, and those patterns give our left brain the rationale for decision making. In a sense, we have two AIs in our heads that work together to create a logical, yet also emotionally swayed, machine.
Human intelligence, and our definition of what an intelligent thing is, all draws back to how we ourselves process the world. For artificial intelligence to truly succeed, that is, to be the best version of itself it can be, it needs to be intelligent from a human perspective.
All this draws back to modern AI in a simple way: AI is programmed in how to make decisions. Machine learning algorithms allow code to be generated pseudo-organically so that algorithms can “learn,” in a sense. All of this programming is based on reasoning: on “if this, then do that.”
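The contrast between a hand-written “if, then, do this” rule and a rule learned from examples can be sketched in a few lines. This is a minimal illustration; the spam-filter setting, the link-count feature, and all the numbers below are invented for the example, not taken from any real system:

```python
# Hand-coded "if, then" rule vs. a rule whose threshold is learned
# from labeled examples. All names and numbers are illustrative.

def spam_rule(num_links: int) -> bool:
    """Hand-written rule: flag any email with more than 3 links."""
    return num_links > 3

def learn_threshold(examples: list[tuple[int, bool]]) -> int:
    """Pick the link-count threshold that misclassifies the fewest examples."""
    best_t, best_errors = 0, len(examples) + 1
    for t in range(0, 11):
        errors = sum((links > t) != is_spam for links, is_spam in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Labeled examples: (number of links in the email, was it spam?)
data = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
threshold = learn_threshold(data)  # the "learned" rule: links > threshold
```

The hand-written rule encodes the programmer’s guess; the learned one derives its threshold from the data, which is the basic shift machine learning makes.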
Arguably, our brain’s decision-making process is based as much on emotions and feelings as on reason. Emotional intelligence is a significant part of what makes up intelligence: the ability to read a situation, to understand other humans’ emotions and reactions. For AIs to evolve into the best possible algorithms, they need to be able to process sensory input and emotion.
Integrating emotional & human intelligence into modern AI
Most artificial intelligence systems are built primarily on the foundation of deep learning algorithms: a computer program is exposed to thousands of examples and learns how to solve problems through that process. Deep learning boils down to teaching a computer how to be smart.
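The core of that “learning from thousands of examples” idea can be sketched with a single artificial neuron. This is only a toy, assuming the simplest possible setup (one neuron, gradient descent, the logical AND function as the task); real deep learning stacks many layers of such units:

```python
# A minimal sketch of learning from examples: one artificial neuron
# trained by gradient descent to approximate the logical AND function.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Training examples: two inputs and the desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # weights start with no knowledge of the task
lr = 0.5                    # learning rate: how big each correction is

for _ in range(5000):       # repeated exposure to the same examples
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        error = out - target              # how wrong this prediction was
        grad = error * out * (1 - out)    # gradient of squared error
        w1 -= lr * grad * x1              # nudge each weight to reduce error
        w2 -= lr * grad * x2
        b -= lr * grad

def predict(x1: int, x2: int) -> int:
    return round(sigmoid(w1 * x1 + w2 * x2 + b))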
After any given deep learning phase, an AI system can perceive the kinds of inputs it was trained on and make decisions accordingly. The decision-making tree the AI forms through traditional deep learning mimics the way the right side of our brain works: it is based on the perception of inputs, of pseudo-senses.
Deep learning is a way of getting computers to reason not just with if-then statements, but through an understanding of the situation. That said, the situations AIs are currently trained on aren’t as complex as interpreting a conversation with Becky to see if she’s into you. They’re more along the lines of: is this a dark cat, a black bag, or the night sky? Primitive, but still sensory perception.
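That dark-cat-or-night-sky kind of judgment can be mimicked crudely with a nearest-neighbor lookup over a couple of features. Everything here is hypothetical: the brightness and texture features, their values, and the labels are invented purely to illustrate classifying by similarity to known examples:

```python
# Toy nearest-neighbor classifier for the "dark cat, black bag, or
# night sky" example. Feature values are invented for illustration.

LABELED = {
    # (brightness, texture) -> label; both features range from 0 to 1
    (0.10, 0.80): "dark cat",   # dark, but fur gives high texture
    (0.12, 0.30): "black bag",  # dark, smoother surface
    (0.05, 0.05): "night sky",  # darkest and nearly featureless
}

def classify(brightness: float, texture: float) -> str:
    """Return the label of the closest known example (squared distance)."""
    return min(
        LABELED.items(),
        key=lambda kv: (kv[0][0] - brightness) ** 2 + (kv[0][1] - texture) ** 2,
    )[1]
```

A deep network learns far richer features than these two hand-picked numbers, but the underlying move is the same: map an input near the examples it resembles.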
While deep learning is currently heavily focused on single pathways, meaning AIs are developing specialties, it eventually won’t be far-fetched to train AIs on multiple things at once, just as a toddler might learn colors and numbers at the same time. Expanding this out, as computer processing power grows, perhaps accelerated by practical quantum computing, there’s no question that AIs will evolve to become more human.
Understanding what this all means
Advanced AI will continue to deal with understanding and processing patterns from the world around us. Through this, it will develop more complex models of how to process that information. In a sense, AIs are like toddlers, but soon they’re going to be teenagers, and eventually, they may graduate with a doctorate. All figuratively, of course… though an age where an AI graduates from a university probably isn’t that far off.
When we think about intelligent humans, we usually think of the most rationally minded people. Yet we miss what is so unique about human intelligence: creativity. In a sense, we take our creativity for granted, yet it is what makes us the most intelligent of living beings. Our ability to process situations, not just compute the sum of two numbers, is what makes us uniquely intelligent. So uniquely intelligent that we can design and create artificially intelligent beings that will soon be able to match our human intelligence.
While modern AIs are primarily focused on single strands of intelligence, whether that be finding which picture contains a bicycle or which email is spam, we’re already starting to train AIs to be all-around smart, humanly smart.