IBM Developing AI That One Day Might Rival The Human Brain
Thanks to some of humanity's greatest minds, and to science fiction to an extent, we have long dreamt of a world filled with technological advances, where bleeding-edge machine learning and artificial intelligence take their cues from the natural world that produced the most advanced life forms we know of.
Now IBM has taken inspiration from the most advanced computational organ in the natural world, the human brain, and is building computational models of attention and memory using machine learning techniques.
Mimicking our gray matter isn't just a clever means of building better AIs faster. It's absolutely necessary for their continued development. IBM's ultimate goal for this project is to build lifelong learning AI systems, able to adapt to new environments while retaining what they have learned so far. The challenge breaks down into short-term adaptation, where there is little time to retrain a system on what to pay attention to, and long-term adaptation, which is inspired by how the human brain forms memories and how neuroplasticity shapes that process.
To this end, the tech giant has developed two important innovations enabling short-term and long-term adaptation: reward-driven attention techniques and network "plasticity."
The first project reportedly revolves around an algorithm that solves a problem we experience every day: choosing a small subset of important features to focus on, out of a potentially endless number of possibilities.
The algorithm learns to quickly focus its attention on the right input based on a reward (i.e., feedback from its environment) obtained during the task. The higher the reward, the more attention it places on a certain piece of input. Take an antelope evading a lion: the antelope learns which parts of its environment to glance at, and when it detects an unusual movement in the bushes, the reward is survival when it takes action to escape the path of a potential predator.
A doctor's work offers another instance: faced with a large number of possible tests and treatments to prescribe, the doctor must decide on the most effective ones. Much like an AI system, through training and experience the doctor learns to choose the combination of tests and treatments that maximizes the expected reward, namely, that the patient gets better.
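The article does not describe IBM's actual algorithm, but the idea of learning where to attend from reward feedback alone can be sketched as a simple bandit-style loop: maintain a score per input feature, sample a feature to attend to, and nudge its score up or down depending on the reward received. All names and parameters below are illustrative assumptions, not IBM's implementation.

```python
import math
import random

def softmax(scores):
    """Convert raw attention scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reward_driven_attention(n_features, rewarding_feature, n_steps=2000,
                            lr=0.1, seed=0):
    """Toy reward-driven attention: learn which of n_features inputs to
    attend to, given only a scalar reward signal.

    Illustrative sketch only. `rewarding_feature` is the one input whose
    attention pays off (the rustling bush, the effective treatment)."""
    rng = random.Random(seed)
    scores = [0.0] * n_features  # attention logits, one per feature
    for _ in range(n_steps):
        probs = softmax(scores)
        # Attend to a feature in proportion to its current attention weight.
        chosen = rng.choices(range(n_features), weights=probs)[0]
        # Environment feedback: reward only when the informative feature
        # is attended to.
        reward = 1.0 if chosen == rewarding_feature else 0.0
        # Policy-gradient-style update: rewarded choices gain attention,
        # unrewarded ones lose it.
        scores[chosen] += lr * (reward - probs[chosen])
    return softmax(scores)

weights = reward_driven_attention(n_features=5, rewarding_feature=2)
```

After training, the attention distribution concentrates on the feature whose selection was rewarded, mirroring how higher reward leads to more attention on a given input.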
The second technique in development is based on neuroplasticity, which allows for long-term learning and is inspired by adult neurogenesis, a process that takes place in the hippocampus, the part of the human brain responsible for forming memories.
An algorithm developed for this purpose expands and compresses the hidden layers of a network, imitating the birth and death of neurons. It not only adapts to a new environment (e.g., a new domain) but also preserves memories of previous domains, a step towards lifelong learning AI systems.
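The expand-and-compress idea can be made concrete with a toy hidden layer whose width grows when adapting to a new domain ("neuron birth") and is then pruned back to its most important units ("neuron death"). This is a minimal sketch under assumed mechanics (magnitude-based pruning, ReLU units), not IBM's published method.

```python
import random

class PlasticLayer:
    """Toy hidden layer whose width can grow and shrink, loosely imitating
    adult neurogenesis. Illustrative sketch only."""

    def __init__(self, n_in, n_hidden, seed=0):
        self.rng = random.Random(seed)
        self.n_in = n_in
        # weights[j] holds the incoming weights of hidden unit j.
        self.weights = [[self.rng.gauss(0.0, 0.1) for _ in range(n_in)]
                        for _ in range(n_hidden)]

    def forward(self, x):
        """ReLU activation of each hidden unit for input vector x."""
        return [max(0.0, sum(w * xi for w, xi in zip(unit, x)))
                for unit in self.weights]

    def expand(self, n_new):
        """Neuron birth: add freshly initialised units, giving the network
        capacity to learn a new domain."""
        for _ in range(n_new):
            self.weights.append(
                [self.rng.gauss(0.0, 0.1) for _ in range(self.n_in)])

    def compress(self, keep):
        """Neuron death: prune units with the smallest weight magnitude,
        keeping the `keep` most important ones, and with them the
        memories they encode."""
        self.weights.sort(key=lambda unit: -sum(abs(w) for w in unit))
        self.weights = self.weights[:keep]

layer = PlasticLayer(n_in=4, n_hidden=8)
layer.expand(4)    # grow to 12 units while adapting to a new domain
layer.compress(6)  # consolidate back down to the 6 strongest units
```

Pruning by weight magnitude is one common heuristic for deciding which units to keep; the key point is only that capacity is added for new learning and then consolidated without discarding everything learned before.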
It might be scary to think of AI networks building and improving themselves, but if monitored, initiated, and controlled correctly, these advancements could provide a boon to the AI research community.