Could these 3 clues unlock the future of AI? – Recknsense

There are a number of developments in the field of Artificial Intelligence that signal it might be time for a directional change. Progress has been rapid in the last few years, and AI is being implemented faster than we’ve had time to fully assess or prepare for. However, the democratization of ML, the limitations of deep learning, and accountability questions around biased models and data may be good reasons to start looking for clues to build AI in a new way.

The Commoditization of AI and Limits of Deep Learning

AI, in the form of machine learning, mostly of the ‘deep learning’ variety, is being used everywhere today: in recommendation engines, finance-based predictions, social media, and much, much more. You only have to look at tech job listings to see how high demand is right now. As demand grows, the supply of highly qualified professionals can’t keep up, and democratizing access becomes highly desirable. This new, ambitious approach is referred to as ‘no code AI’. Many companies, big and small, claim to offer non-developers an opportunity to create their own AI (usually machine learning in the form of deep learning neural networks). Check out this list of no code ML platforms.

However, it’s just at this moment, when the technology is ripe enough to be commoditized in this way, that we should take notice of a big red flag. Usually, if a technology is mature enough to be packaged up in a ‘no code’ bundle, it’s time for technologists to look at what’s next.

This isn’t the only reason we could be at an impasse in the progress of AI. Deep learning itself is only getting us so far in the quest for human-like intelligence. For higher-level intelligence tasks where the stakes are high, like self-driving cars or complex language understanding, deep learning isn’t enough. It’s either unable to cope with the complexity, or it is inefficient and inaccessible.

Responsibility and Accountability in Models and Data

If you need more convincing that it’s time to move on, there is the ‘accountability of AI’, for which our current methods seem inadequate. This has been called many things: human-centered AI, responsible AI, accountable AI, and ethical AI. The purpose is largely the same, with many people and organizations becoming increasingly aware of the impact. In short, the current process of machine learning lends itself to bias and discrimination because it relies on training data and models built from past data to predict what might happen next. This isn’t easy to avoid, as data commonly includes bias, just as humans are prone to it.

Many efforts are underway to move away from a ‘black box’ model of ML, which can mask bias and allow it to affect decision-making undetected. Solutions include explaining how ML models work, what data is being used, and so on. This is usually referred to as ‘interpretability’, and much research is being done to advance the field. Another solution could be designing a new type of AI that mitigates the impact.
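To make the bias problem concrete, here is a minimal, entirely hypothetical sketch: a toy “model” that predicts loan approvals purely from historical outcomes will faithfully reproduce whatever disparity exists in that history. The data, groups, and threshold are all invented for illustration; the final loop is the simplest possible ‘interpretability’ step, surfacing what the model learned instead of hiding it in a black box.

```python
from collections import defaultdict

# Hypothetical past decisions: (group, approved). Group "B" was historically
# approved far less often, for reasons unrelated to creditworthiness.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Training": record the approval rate per group -- exactly what a naive
# frequency-based predictor learns from past data.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group):
    approved, total = counts[group]
    return approved / total >= 0.5  # approve if the historical rate >= 50%

# The model simply replays the historical disparity:
print(predict("A"))  # True  -- group A is approved
print(predict("B"))  # False -- group B is rejected; the bias is preserved

# A minimal "interpretability" step: expose the learned per-group rates so
# the disparity is visible rather than hidden.
for group, (approved, total) in sorted(counts.items()):
    print(group, approved / total)
```

Nothing here is malicious; the bias emerges purely from learning past outcomes, which is the article’s point.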

All these signs suggest this is the perfect time to discover what comes next in AI. Some clues have surfaced recently. Some are already taking shape with early successes; others are only ideas right now, inspired by discoveries we are making about human intelligence via cutting-edge biology and neuroscience research. Below are three that give clues to the directions that may be taken.

Clue 1 – Understanding Symbols

Neuro-symbolic AI brings us closer to machines with common sense

Being able to understand symbols and geometry has been proposed as a uniquely human skill. The article provides some great information on the theory of neuro-symbolic systems, which may be the missing ingredient in our AI technologies. Humans can make sense of symbols in a way that machines (and other non-humans) cannot. Not every decision the brain makes comes through pattern matching or prediction, as deep learning is designed to do. Often we can model and imagine the world based on what we see, without requiring huge amounts of data from past experience. This is likely to be an area that could bring us closer to the general AI, or common sense, that has so far eluded the field. This new type of neuro-symbolic system also lends itself to physical problems and has the capability to advance robotics.
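As a rough illustration of the hybrid idea (not a description of any specific system), the sketch below pairs a stand-in “perception” function, playing the role of a learned classifier, with a symbolic component that derives new facts by logical rule. The transitive `left_of` inference is something pure pattern matching can’t do without seeing every combination in training data; all names here are invented.

```python
def perceive(x):
    """Stand-in for a learned classifier: maps a raw measurement to a symbol."""
    return "small" if x < 10 else "large"

def left_of_closure(pairs):
    """Symbolic reasoning: transitive closure of the 'left_of' relation."""
    facts = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(facts):
            for c, d in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))  # infer left_of(a, d) by transitivity
                    changed = True
    return facts

# Perception grounds the symbols; reasoning derives facts never observed directly.
facts = left_of_closure({("cup", "plate"), ("plate", "fork")})
print(("cup", "fork") in facts)  # True -- inferred, not observed
print(perceive(3))               # 'small'
```

The point of the pairing is that the symbolic half generalizes from two observations to an unseen relation with no extra data, which is the efficiency argument made above.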

Clue 2 – AI that is able to formulate rules and teach a human

The Guardian view on bridging human and machine learning: it’s all in the game

This story recently surfaced about a new AI called Nook, which can beat humans at the game of bridge. Digging further, there is a hybrid mix of AI technologies at play, but at the heart of the solution appears to be the ‘explainability’ of AI, so that humans can learn from it and, in a human-computer partnership, together become the winners in this team game. This blended solution (with some application of the symbol understanding described above) may be the future of AI. Instead of relying on one ML model that only ingests more data to improve accuracy, there is an element that formulates new rules and explains them, much like the way humans learn from each other to collectively improve our intelligence.

Clue 3 – Modules and Pattern completion

New clues about the origins of biological intelligence

This clue takes inspiration from recent biological intelligence and neuroscience research. Much of the article explains discoveries that are improving our understanding of how cells have the intelligence to influence the overall organism. This has been a mystery in biology, but recent proposals have attempted to explain how the body might work. The theory describes a modular system in which lower-level modules are able to manipulate higher levels. A concept of ‘pattern completion’ allows triggers within a module to turn on the module itself. In this way, small tweaks can be made within a module that change the whole system without having to recreate it all.

Applying this to AI suggests that our designs should be broken down into modules, with goal-based learning to make each individual module behave like a living human cell. Each module is intelligent and capable of effecting change in the higher-level system. I’ve not come across any application in AI using this theory, but the idea of breaking a large foundational ML-based model into modules that incorporate this type of biological control could be significant.

Have you come across any other new developments in AI that might lead to a major shift in direction? Let us know – contact@recknsense.com

