What is intelligence anyway?

I have briefly mentioned before a definition of intelligence that is useful to consider when investigating AI. I’d like to go deeper into what my understanding of intelligence is. I have a deep interest in understanding the fundamentals of intelligence: what is it really that allows humans to outperform machines that are, by most measures, far superior in their computational power?


I often discuss with friends and colleagues what they believe intelligence to be and what its different elements mean.


This search has led not just to a Master’s in artificial intelligence but also to reading numerous books on epistemology, a topic I believe was missing from my degree. Epistemology is the study of knowledge: the school of philosophy dedicated to making arguments about what it means to know something. By most measures of intelligence, and especially in its colloquial use, intelligence has something to do with knowledge. Epistemology goes a lot deeper into the topic than the quote that used to be my maths teacher’s screensaver:

“Knowledge is to know that a tomato is a fruit. Wisdom is to know not to put one in a fruit salad”

An example is the following scenario (a classic Gettier-style case) about a $10 note in your pocket.

  • Say you have $10 in your pocket and you know this.
  • Someone steals the $10 from your pocket without you realising.
  • Another person puts $10 back in your pocket, again without you noticing (a generous person; allow it for the sake of the example).

You now have $10 in your pocket and you believe you have $10 in your pocket. But your reasoning about why you still have $10 is wrong: you thought it had been there the whole time. Does that mean you are unjustified in calling your belief that you have $10 in your pocket knowledge?

There are different camps on this: some argue that you do know, some argue that you don’t.

I’d like to look deeper in this post into what a computational basis for intelligence might be. Clearly, if you are unable to recall things then you are not going to be able to make plans or learn. So memory (which is slightly different from knowledge) is an important component.

I think we can rest assured that we have enough memory in modern machines. Perhaps in days gone by, when kilobytes were scarce, it might have been an issue. However, in a world of 1TB/£13 (the average HDD cost per terabyte in my recent build) it isn’t.

Knowledge, on the other hand, feels like a little more than memory. As opposed to the cold notion of memory, it carries some understanding: perhaps not the understanding of wisdom, but the understanding of what it means for something to be a fruit.


But there is more to intelligence than simply memory. Clearly, computation plays a part as well. The assumption that there is a component of computation, the raw ability to follow through on the logical premises held in memory, is so entrenched in our understanding that it isn’t ever really mentioned. However, much like cold memory, we have ample computation: although we might not be able to simulate a human brain in a machine, machines are much quicker at following through logical premises in general. The human mind has the benefit of massive parallelism, but since a neuron can only fire a few hundred times per second, any thought process that a person completes in a second or less can only have travelled along a serial path at most a few hundred neurons long.

Clearly, there must be something more to intelligence than raw speed. We need the right logical premises, not just the ability to plough through lots of them.

David Deutsch argues in Possible Minds that the study of artificial intelligence is doomed to fail on its current path because it is seeking to reduce the number of branches which are searched. I couldn’t disagree more; in fact, I believe there are generalisable techniques for reducing the number of branches even further.


It has something to do with my last post on goals. I believe that the missing ingredient in modern artificial intelligence is an automated way to create useful abstractions.

Yes, modern convolutional neural networks create a form of abstraction in order to carry out the recognition that they do. In fact, when I was looking around for a thesis topic, one of the options was investigating what structures were being learnt in the convolutional layers.


I’m not arguing that the universal function approximators that are neural networks can’t be trained to carry out abstraction. My argument is that perhaps there is a more direct way to go about creating abstractions.


I have recently been reading a 90s sourcebook on the foundations of AI. Yes, that’s quite out of date, but given that most of my degree focused on the 50s, it’s relatively modern. The majority of the advancements in the last 20 years have come from advancing neural network models, mainly by simply throwing more layers at the problem. So when looking for where the search for intelligence through GOFAI (Good Old-Fashioned AI) failed, anything published around the AI winter should be insightful.


The problem I have come across is that although there are some models which create abstractions, such as adaptations of SRI’s STRIPS planning program, there doesn’t seem to be a general model of what it means to create an abstraction. Trying to delve deeper into the issue using Google is fruitless, as any search terms mentioning abstraction will inevitably lead to someone's blog on “How to produce abstractions for junior devs”. Unfortunately, structures that help human developers produce useful abstractions in their programs presume an existing body of knowledge in the domain, which is perhaps a problem as challenging as producing abstractions itself.
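
For context, the kind of representation STRIPS-style planners work over looks roughly like the sketch below (a toy construction of my own, not the original program): states are sets of facts, and each operator carries preconditions, an add list and a delete list. The abstraction-building adaptations then, as I understand them, plan first while ignoring the less critical preconditions and fill in the detail afterwards.

    # A minimal, illustrative STRIPS-style representation and planner
    # (my own toy sketch, not the original system).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Operator:
        name: str
        preconditions: frozenset   # facts that must hold to apply the operator
        add_list: frozenset        # facts made true by applying it
        delete_list: frozenset     # facts made false by applying it

        def applicable(self, state: frozenset) -> bool:
            return self.preconditions <= state

        def apply(self, state: frozenset) -> frozenset:
            return (state - self.delete_list) | self.add_list

    def plan(state, goal, operators, depth=5):
        """Naive depth-limited forward search for a sequence of operator names."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for op in operators:
            if op.applicable(state):
                rest = plan(op.apply(state), goal, operators, depth - 1)
                if rest is not None:
                    return [op.name] + rest
        return None

    ops = [
        Operator("pick-up-key", frozenset({"at-door", "key-on-floor"}),
                 frozenset({"holding-key"}), frozenset({"key-on-floor"})),
        Operator("open-door", frozenset({"at-door", "holding-key"}),
                 frozenset({"door-open"}), frozenset()),
    ]
    print(plan(frozenset({"at-door", "key-on-floor"}), frozenset({"door-open"}), ops))
    # -> ['pick-up-key', 'open-door']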

A related problem, I believe, is why mathematics has not been automated yet. Yes, we have automated theorem provers and Wolfram Alpha, but there hasn’t been a computer program, at least not one I am aware of, which has produced a deep insight into some mathematical object. This could be due to the need to understand the world in order to come up with reasonable axioms, something which is still the realm of humans. Machines can then be used to explore the implications of those axioms without that body of knowledge.

Even carrying the implications back into the world isn’t something that can necessarily be automated without the same body of knowledge needed for the axiomatisation of the world in the first place.

There are problems, however, which truly are separate from the outside world: higher-level abstractions like group theory and category theory. Surely, if computers can perform calculations following the implications of logic quicker than all of humanity can, it should be possible for them to produce higher-level abstractions.


This is something I haven't found anything on in the literature, however, which has made me question whether the ability to produce simplifying abstractions might be core to intelligence.

In Superintelligence, Nick Bostrom talks about what he categorises as the three kinds of superintelligence:

Parallel (Bostrom calls this collective superintelligence) - Where the individual units aren’t necessarily more intelligent than humans, but there is a step-change in the efficiency with which we organise ourselves in the development of further knowledge.

Speed - Where the conceptual intelligence isn’t any greater than an individual human’s, but the speed of thought is much, much faster.

Quality intelligence - This is perhaps the one I’d like to focus on, because I think it is related to the ability to produce abstractions.

The way that quality intelligence is put forward in Superintelligence is that there are some leaps, some concepts, which require the great minds of Einstein, Feynman, Newton or Dirac to achieve; that even with a million years of thought the average human wouldn’t be able to come up with calculus; that even with 100 billion average humans (presuming everyone is fixed at 100 IQ rather than it being normally distributed), relativistic quantum field theory would still be elusive.


I think that if quality intelligence is a real thing, and society doesn't just idolise people who make the most of their situation or happen to be in the right place at the right time, then their gift is the ability to create abstractions at levels which others have to be shown.

For a lot of human history calculus was unknown to anyone; now it's even taught in high school.

Often the conceptual leap is the genius. What conceptual leaps have been provided by artificial intelligence so far?

A new strategy in Go, from AlphaZero?

Perhaps that indicates a road to new conceptual leaps, but remember that to beat Lee Sedol, AlphaGo had to check far more future positions than Sedol ever could. Sedol wasn’t searching thousands of moves in his head; he was still thinking at a higher level of abstraction than AlphaGo.

What is abstraction anyway?

  • Hiding details
  • Being more meta

If you look for abstraction as a software engineering principle you will find articles talking about hiding the implementation detail. There is a lot of truth to that for this more general form of abstraction too.
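
In code, that usually looks something like the toy sketch below (my own example): the caller only ever sees a small interface, and the representation and arithmetic behind it stay hidden.

    # A toy illustration of abstraction as hiding detail (my own example).
    from math import hypot

    class Route:
        def __init__(self):
            self._points = []  # hidden detail: how positions are stored

        def add_point(self, x: float, y: float) -> None:
            self._points.append((x, y))

        def total_distance(self) -> float:
            # hidden detail: how the distance is computed
            pts = self._points
            return sum(hypot(x2 - x1, y2 - y1)
                       for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

    route = Route()
    route.add_point(0, 0)
    route.add_point(3, 4)
    print(route.total_distance())  # 5.0, without the caller ever touching the internals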

When I was talking about this topic with a friend recently, he suggested that you could create abstractions by correlation. I disagree. I feel that the hiding in abstraction does more than cover up correlations. Either way you cut a correlation you don’t end up with useful abstractions: either you create simplifications by grouping where things don’t vary, or you group the variability together and so lose all the structure.

I feel like there is something more meta in abstraction. It’s about things “that change in the same way”, which, although only subtly, is different from things “that change together”.

I would make the argument that correlation is about things that change together, whereas abstraction is about grouping things that change in the same way. It seems to me that almost the entirety of statistical AI techniques, machine learning included, focuses on correlation over abstraction.
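
To make that distinction a little more concrete, here is a rough numerical sketch (my own construction, not from any paper): two signals driven by completely independent causes barely correlate, yet they respond to their own drivers in exactly the same way, which is the kind of regularity I mean by an abstraction.

    # "Change together" vs "change in the same way" (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    driver_a = rng.normal(size=1000)
    driver_b = rng.normal(size=1000)      # independent of driver_a

    def same_rule(d):
        return 2.0 * d + 1.0              # the shared "way of changing"

    x = same_rule(driver_a)
    y = same_rule(driver_b)

    print(np.corrcoef(x, y)[0, 1])        # near 0: x and y do not "change together"
    print(np.polyfit(driver_a, x, 1))     # ~[2, 1]
    print(np.polyfit(driver_b, y, 1))     # ~[2, 1]: both follow the same rule,
                                          # which is what the abstraction would group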

Neural networks are really a sophisticated way to find the correlation between input and output data.

I would make the argument that the bucket of other tools (SVMs, decision trees, random forests, k-NN, linear regression, naive Bayes) is all about correlations too.
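
As a minimal illustration of that claim (my own sketch, with a single weight standing in for a whole network), gradient descent on input-output pairs simply recovers the statistical relationship already present in the data; deeper networks do a far more elaborate version of the same thing.

    # Fitting a one-weight "network" by gradient descent (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)
    y = 3.0 * x + rng.normal(scale=0.1, size=500)   # hidden relation: y ≈ 3x

    w, lr = 0.0, 0.1
    for _ in range(200):
        grad = np.mean((w * x - y) * x)             # gradient of (half) the mean squared error
        w -= lr * grad

    print(w)                          # ≈ 3.0, learnt by "training"
    print(np.dot(x, y) / np.dot(x, x))  # essentially the same number, read straight off the data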

Now you might be thinking: if things “change in the same way”, is that not just a correlation in the way that they change? Well, perhaps. I’m sure that abstraction can be fully represented with correlation over meta-representations of the underlying data, but I feel that is a more obtuse way to describe something better understood by thinking down the path of “things that change in the same way”.


One of the open problems in modern AI approaches is transfer learning: taking the knowledge gained in one domain and applying it to another. I believe this is related to the problem of creating abstractions.

