AIs are Smart but it’s Humans that know where to look for Wisdom – Recknsense

We have enormous amounts of knowledge at our disposal, so you’d think it would be easy to become an expert these days. If only this were true! Sadly, we aren’t all super smart just because the internet exists. Vast quantities of easily accessible information are available online, but that doesn’t turn you into an instant expert. You need to know where to look to find the good stuff, i.e. the salient information. To make matters worse, the problem gets harder as the internet grows.

We know that gaining expertise or wisdom is not simply down to the quantity of information we have. In fact it may only require a few select items of knowledge to shortcut what’s important. The art of effective knowledge gathering is a skill humans are continuously working on. While great progress has been made to get the world’s information online and organize it, we now need more assistance to pinpoint the most valuable dig sites.


The quantity of knowledge needed to gain wisdom is likely to be surprisingly small, maybe even single digits. Under 10 items of knowledge (a book, a video clip, a research paper, etc.) might sound too few to get wise on a subject, but as humans we seem to do better with less. The inspiration for small quantities comes from other theories about the brain’s inherent capacity to retain important information. For example, there is an average number of items our brains can easily remember at one time, and an average number of close friends we tend to stay connected with. It seems plausible, therefore, that there is a similarly low average number of things we need to know before we can be wise about something. I’ve written a bit about numbers and wisdom before in A Formula for Wisdom.

Learning about something is even more challenging now because there are so many options available that claim to help. Where do you start, and who can you trust? Perhaps there are one or two important papers that have to be read, or books that clearly explain the key knowledge. There is so much choice in how to learn something new that we could be forgiven for giving up before we even start.

If only we could go somewhere that listed only the most insightful items – like a cheat sheet. The problem remains, though: who chooses those essential things, and can we trust them? Everyone has an opinion about the best book to read or the best technique or process in a given scenario. Ideally, a search engine would be the place to get those essential items. However, when put to the test, pitting typical search engine results against what ‘someone in the know’ would tell you, we find the human response quite different from the machine’s.

Here’s a real example using a well-known search engine compared with a list compiled from human sources:

  1. Search for ‘AI resources’ – the first four results are ads, including Palantir and Berkeley, followed by ‘Education – Google AI’, a few free resources, a Medium article, and a list of companies in AI.
  2. Human recommendations, gathered by asking a few people in the business – an intro course, a website with the latest AI news, a book on the issues around AI bias, and a good starter book on the technology. (You can see this list on our recommend page.)

The human results consist of items tried by real people, diverse in both medium and opinion, i.e. not just the most popular and highest ranked. Sometimes people have a good tip-off that never makes it to the top of search results. The downside of search results is that we may be less likely to trust them, given the search advertising business model that exists today.

When so much of the internet is powered by adverts or can be gamed, it can make us suspicious of the motives behind recommendations. Humans can have their own biases when they recommend too, but using our natural instincts we seem to be good at sniffing out the obvious culprits. The other difference between search results and a list a human might come up with is the lower quantity of knowledge. While the search engine has scoured the entire internet to arrive at pages of results (even though we know the first page is usually the only one viewed), human recommendation lists are more succinct. After all, we can only hold so many thoughts at one time, so a natural filtering emerges.

The power of the simple model


In his highly recommended new book about AI and human values, ‘The Alignment Problem’, Brian Christian discusses the impact of simplicity in the chapter on transparency. He points to research comparing complex AI models with the simplest of models, which came to the remarkable conclusion that simple models can be as good as, or even better than, intricate ones. Christian cites research into real decisions, such as parole judgements and medical assessments, where simple models were shown to work more effectively than complex ones.
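To make the chapter’s point concrete, here is a minimal sketch of one deliberately simple kind of model – a unit-weighted linear rule, where every feature gets a weight of just +1 or −1. The feature names and values below are invented for illustration; they are not taken from the studies Christian cites:

```python
def unit_weight_score(case, directions):
    """Score a case with an 'improper' linear model: each feature value
    is multiplied by +1 or -1 (the direction of its effect) and summed.
    No weight magnitudes are ever fitted from data."""
    return sum(directions[name] * value for name, value in case.items())

# Hypothetical standardized features for a toy risk assessment.
case = {"prior_incidents": 1.2, "age": -0.5, "stable_employment": 0.8}
directions = {"prior_incidents": +1, "age": -1, "stable_employment": -1}
score = unit_weight_score(case, directions)  # 1.2 + 0.5 - 0.8
```

Rules of roughly this shape – no fitted weights at all, only the sign of each effect – are the sort of radically simple model the research found could rival far more intricate ones.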


It turns out that a small body of knowledge can yield the best results, not a big complex list of variables or data sets. Humans are particularly good at figuring out what this small body of knowledge is. As Christian’s book puts it, ‘Human expertise is that we’re good at knowing what to look for’. We are naturally efficient at it too. Maybe it’s an inclination to conserve energy and effort that makes us look for the easiest route to a solution; we possess a super skill for picking out salient information fast. AI doesn’t lean this way, at least not yet. Machine learning is designed to consume a lot of data, and that in itself can be celebrated, sometimes even held up as an achievement. Being trained on huge amounts of data doesn’t always give machines the advantage, though. While there are many situations where huge quantities of data are exactly what’s needed, there are plenty of scenarios where more data doesn’t lead to better results – especially where ‘general AI’ is required. The more we attempt to give AI a general, common-sense understanding in the hope of making it more human, the less our current data-heavy models look like the answer.

 

By combining a small number of things you need to know with a simple rule set, we can attempt to find a solution using a collection of ‘wisdoms’ on a given theme. Somewhere between the big-data search engine and a fully human recommendation lies a simple algorithm for finding wisdom that may one day be coded.
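Such an algorithm might look something like the following – purely a hypothetical sketch, where the scoring rule, the two-per-medium diversity cap, and the default limit of seven items are all assumptions for illustration, not anything that exists today:

```python
from collections import namedtuple

# One recommended item of knowledge: a book, video clip, paper, etc.
Item = namedtuple("Item", ["title", "medium", "recommenders"])

def wisdom_shortlist(items, max_items=7):
    """Distil human recommendations into a small, diverse shortlist.

    Simple rule set (assumptions, for illustration only):
    - rank items by how many independent people recommended them
    - keep the media mix varied: at most two items per medium
    - cap the list at a handful of items, echoing the idea that
      under ten pieces of knowledge may be enough to get wise.
    """
    ranked = sorted(items, key=lambda i: len(i.recommenders), reverse=True)
    shortlist, per_medium = [], {}
    for item in ranked:
        if per_medium.get(item.medium, 0) >= 2:
            continue  # already have enough from this medium
        shortlist.append(item)
        per_medium[item.medium] = per_medium.get(item.medium, 0) + 1
        if len(shortlist) == max_items:
            break
    return shortlist
```

The human half of the idea lives in the `recommenders` sets – the machine merely filters and caps what real people have already vouched for.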


If you’re interested in human recommendations, check out our list of shortcuts on a few selected themes. 


Can we ask a quick favor? If you have a minute, please respond to our very brief survey about how you gain knowledge – it’s just 4 multiple-choice questions, and the user research will help us better design Recknsense.

