Rambling about stuff: intelligence and computers

Yeah, this blog should be about audio related stuff. However, I will allow myself to ramble about some other topics I am interested in. Many times in my life I have been left wondering what intelligence is. It is the kind of question nobody really has an answer to, and many definitions of intelligence can be given. Not long ago I stumbled across this article. I read it with deep interest and curiosity. To my surprise, I found that the article, although aiming to discredit the idea of the brain as a computer, actually reinforced that idea in my mind.

But let’s try to put some order into this ramble.

Disclaimer

Well, I am not a specialist in the field. I am a Physicist and Acoustician; all I know about Psychology and Neurology comes from what I studied about Psychoacoustics. Maybe I am not 100% naive about the subject, but I cannot by any means claim to be an expert. To be honest, I don’t like to claim I am an expert in Physics even though I am a Physicist. The fact is that every discipline is so vast and deep that it is simply arrogance to claim to be an expert. Maybe just call us competent… Anyway, I am only (very) marginally competent in these subjects. Still, I think that sharing my thoughts online will not really hurt. People with more competence might reply, shedding some light on what I did not really grasp.

I would also like to make clear that I am not trying to dismantle Robert Epstein’s article. Rather, I am just testing my comprehension of things outside my comfort zone.

Even if I am not an expert, I would like to write this as if I were, to some extent, just because I feel it will make my points clearer.

Finally, like a Romantic poet, I decided to write this a few days after I last read the article, recollecting emotions in tranquillity. This should clearly show how much of the article I have actually understood, given that what follows involves my personal rephrasing of the concepts contained in it. However, this comes with a drawback. I have the feeling that a few of the points I am discussing were not explicitly expressed by the author, but were rather implied by the way the article was written. I hope this does not cause any ambiguity.

The author’s point

Robert Epstein, the author of the article, appears to claim that the brain cannot by any means be said to work like a computer. While a computer stores information to be retrieved and used, the brain works more by finding ways to recreate experiences. Where a computer can only run immutable code instructions, the brain keeps changing, adapting and transforming the way it operates. Any single thought is related to the present state of our mind, but also to all the previous ones. More or less, any single object that comes as a product of our mind is the result of a constant stream of creation, whose path is continuously shaped by present and past creations.

Well, maybe you will be surprised, given the incipit, that I mostly agree with that. What bothers me is that none of this is in contradiction with signal processing and computer theory.

You will understand better as I go on.

The banknote example

The author asked a few people to draw a one dollar banknote in two different situations: at first just from memory, then pretty much by copying a banknote placed in front of them. If you look at the drawings, the differences are pretty evident. None of the details are there when we draw from memory!

Then, think about computers. A computer could have acquired the banknote “visually” through a scanner. Once stored in memory, the banknote is saved with all its details, very sharply captured. If we were like a computer, we would have had the “sharp file” of the banknote ourselves after visually acquiring it the first time. So we are not a computer.

Right? I don’t think so…

Let’s look at the banknote again. There are actually many accurate things. The proportions, as drawn from memory, are just right! The overall symmetry is spot on. A few attributes needed to identify the bill (the numbers in the corners) are supplied. For sure, in our memory the “banknote” concept is accompanied by some grasp of its texture and colour. Maybe even its smell.

The author says that these are all signs that the brain, having been exposed to some experiences, is able to recreate them, keeping them consistent with a few attributes, which we may keep redefining experience after experience, thought after thought.

Think about dreams. It may have happened to you that, while dreaming, you had the feeling that everything around you was sharp. The truth is often that this was an illusion: pretty much everything was vague and deeply linked to emotional states. In other words, just the frame of it was really in your head, the underlying minimal regularities needed to discern things rather than the complete things themselves.

But is all of this not signal processing?

How are the regularities (proportions and such) acquired from the banknote? We see that humans have a strong “geometrizing” instinct. Ask a sample of people to randomly select an object from a set of a few identical ones: the symmetry with which the objects are placed will very strongly influence the results. Couldn’t this “geometrizing” instinct be a sort of signal processor?

We don’t know that for sure. But there is a deeper reason why this example fails to show that the brain is not a signal processor, to some extent.

What is processing a signal?

But first,

What is a signal?

Pretty much everything to which a given system has the ability to respond. This definition is very broad and general.

Take a guitar effect pedal. It is a Physical system. It responds to voltage variations: a varying voltage is a signal for the pedal. Its geographical position? Not really… Unless the environmental conditions prevent the circuitry from working correctly, the response of the pedal is not affected whatsoever by its position. Position is not a signal for a correctly functioning pedal.

Take a person. A person is a Physical system too, one that responds to visual stimuli. As such, an intermittent light (like a turn signal) is a signal.

We can make a very long list. Signals can belong to every domain. They can be any variation of a Physical entity to which something is able to react.

And then what is signal processing?

The reaction of the system to the signal. That’s it. My guitar pedal might process the signal by, say, filtering it. I can process the turn signal and understand that a person is about to turn right.

The processing is the action of the system. The result of this action can belong to the same signal domain or not. For example, an AD converter takes voltage signals as input and turns them into numbers: the input and the output belong to completely different domains. Just like the intermittent light impinging on my eyes and my act of slowing down, for example.

We can say that signal processing is just the behaviour of systems that can react to a stimulation, transforming it in whatever way the dynamics governing them allows. Needless to say, unless this transformation is the identity, the signal will be turned into something else that may or may not belong to the same domain.
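Just to make this concrete, here is a minimal Python sketch of a signal processor that also changes domain, in the spirit of the AD converter above: a toy quantizer that maps voltages to integer codes. The bit depth, range and sample values are made up for the example.

    # Toy signal processor that changes domain: continuous "voltages" in,
    # integer codes out, like an idealized AD converter.

    def quantize(voltages, full_scale=1.0, bits=3):
        """Map continuous voltages (one domain) to integer codes (another domain)."""
        levels = 2 ** bits
        codes = []
        for v in voltages:
            v = max(-full_scale, min(full_scale, v))                       # clip to the input range
            code = round((v + full_scale) / (2 * full_scale) * (levels - 1))
            codes.append(code)
        return codes

    if __name__ == "__main__":
        analog = [0.0, 0.3, 0.9, -0.4, 1.2]    # input: voltages, in volts
        print(quantize(analog))                # output: plain integers, a different domain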

Now look at this chain:

experience of a banknote -> brain does its stuff -> ability of recreating the experience of the banknote

The experience was transmitted by our senses, which took Physical stimuli to begin with (light in our eyes, sound in our ears, pressure on our hands…), and it got transformed into the ability to create (simulate?) analogous experiences.

This is signal processing! The fact that this something, this ability, was constructed involving emotions and previous analogous objects (if we can call them that) does not change the fact that from Physical signals (and other ingredients) the brain was able to create something else!

Moreover…

Who said that computers can only store complete, sharp information? This is not the case. The regularity of signals is exploited to compress them and reconstruct them procedurally. Yes, with a margin of error, but allowing us to be very conservative with physical memory. These are main paradigms in computer vision and procedural audio, for example. I mean, imagine if a robot had to save to its hard drive every image recorded by its cameras. It would take terabytes and it would be terribly inefficient to access… Too much information is as bad as no information at all.
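As a sketch of what I mean, here is a toy version of this idea: keep only a handful of Fourier coefficients of a signal and rebuild an approximation from them. The test signal and the number of retained coefficients are arbitrary choices for the example, not a real codec.

    # Exploit the regularity of a signal to store only a few numbers and
    # reconstruct the rest procedurally, accepting a small error.

    import numpy as np

    def compress(signal, keep=8):
        """Keep only the 'keep' largest Fourier coefficients of the signal."""
        spectrum = np.fft.rfft(signal)
        order = np.argsort(np.abs(spectrum))[::-1]
        compact = np.zeros_like(spectrum)
        compact[order[:keep]] = spectrum[order[:keep]]    # only these get stored
        return compact

    def reconstruct(compact, length):
        """Rebuild an approximation of the original signal from the kept coefficients."""
        return np.fft.irfft(compact, n=length)

    if __name__ == "__main__":
        t = np.linspace(0, 1, 256, endpoint=False)
        x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)
        approx = reconstruct(compress(x, keep=8), len(x))
        print("max reconstruction error:", np.max(np.abs(x - approx)))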

Take a look at just these few examples:

Data Compression – A Generic Principle of Pattern Recognition?

The baseball example

I have the impression that this one is plainly and simply contradictory.

The author argues that if we ask a computer to catch a baseball, it will have to measure the motion state of the ball, model the forces acting on it, derive its trajectory and then calculate where along the trajectory it should place itself to catch the ball. And, of course, it actually has to do all of that.

On the other hand, a person simply maintains a sort of “optical alignment” with the target while moving, so that their motion is driven directly towards the ball without needing to calculate or model anything.

Completely different from a computer, isn’t it? It doesn’t really seem so to me…

What is an algorithm?

  • Go to an ATM
  • Insert your card
  • Enter your PIN
  • Select “cash withdrawal with receipt”
  • Collect your card, cash and receipt
  • Leave the ATM

This is probably more or less what you do every time you need cash. And it is an algorithm. A recipe is an algorithm too. Anything that is a set of instructions, sequential or not, is an algorithm. It does not need to involve equations. The recipe for Risotto alla Milanese doesn’t!

In this regard, the “optical alignment” strategy is also an algorithm. Look:

  • Start moving as the ball starts moving
  • Move so that the ball appears to move in a linear optical trajectory
  • Continue to move until you catch the ball or the ball hits the ground

A computer can be programmed to do the same thing. Yes, this will involve pattern recognition, which we are used to coding into computers as equations anyway. It does not really matter if our brain does it differently, though, as long as it keeps on being an algorithm.
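To show how little machinery this needs, here is a bare-bones sketch of the alignment strategy written as a feedback loop. The sensor and motor functions (see_ball, move, ball_in_flight) are hypothetical placeholders, faked here with a short list of made-up apparent positions; nothing in it claims to be how a brain, or any real robot, actually does it.

    # The "optical alignment" strategy as a plain feedback loop: no equations
    # of motion anywhere, just "keep the ball's image on a straight optical path".

    def optical_alignment_catch(see_ball, move, ball_in_flight):
        first = see_ball()                               # apparent position at the first sighting
        previous = see_ball()
        while ball_in_flight():
            current = see_ball()
            expected = previous + (previous - first)     # where a straight optical path would put it
            drift = current - expected                   # how far the image has drifted from that path
            move(-drift)                                 # step so as to cancel the drift
            first, previous = previous, current

    if __name__ == "__main__":
        sightings = [0.0, 1.0, 2.3, 3.1, 4.4, 5.0]       # fake apparent ball positions
        commands = []
        optical_alignment_catch(
            see_ball=lambda: sightings.pop(0),
            move=lambda step: commands.append(round(step, 2)),
            ball_in_flight=lambda: len(sightings) > 0,
        )
        print("motor commands issued:", commands)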

To be clear, I am not stating that the optical alignment is for sure an algorithm hard-coded in our brain. I don’t know if that is the case. The fact is that this is a completely legitimate algorithm that would ensure that a person is able to catch a ball. Imagine we had two baseball players, one using the algorithm and one achieving the same without it, but you don’t know who is who. There would be no way of distinguishing them until we looked inside their brains. Which means we cannot rule out the algorithm as a valuable model, at least in this moment in which we are deeply ignorant about how the brain really works under the hood.

We may have no way to state whether our brain is really using the algorithm or not, at least not at this point. But the author says that the catching of a ball cannot really be modelled by algorithms, which turns out to be false.

The concept of program

A program is a very close relative of an algorithm. It is indeed a set of instructions. The main feature is that a computer is able to understand them. Also, it can be decomposed into sequential instructions.

Here is where I probably almost agree with the author: I would be rather surprised if the brain worked by using programs. The program was spawned by the way we created our programmable machines. Nature created other systems, which for sure differ a lot from our creations (although they are subject to the same mysterious laws).

Still, think of the stated ability to recreate experiences. One could think of it as a program which is constantly rewritten by the cycle of experience and recreation. I don’t think this is what is happening in reality: written how? Where? Executed how? Where?

Still, we might think that experiences and the brain’s creativity supply us with paradigms. I will not define this sharply. Just imagine them as something that makes the act of creation possible, yet is not a classic program and is not executed in the classical meaning of the word. Think of the paradigm as constantly reshaped by brain activity.

Does the fact that the brain makes use of objects radically different from programs make it not a computer?

Not really. A computer is first of all a machine that can be adapted to different tasks. Our electronic computers are adapted by means of different programs. The use of paradigms does not make our brain any less computerish, as it appears to be an adapting machine anyway.

Moreover…

Who said that a program is immutable? There are plenty of examples of self-modifying or self-adapting code around. These usually modify themselves at run time (the source code is not altered), but current research involves writing programs able to write other programs. In principle, they could be made to rewrite themselves.
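As a toy illustration (the gains and targets are invented, and real self-adaptive systems are of course far more sophisticated), here is a tiny Python program that writes the source code of one of its own functions and swaps it in at run time whenever its response misses the mark:

    # A program that rewrites one of its own functions at run time by
    # generating fresh source code and executing it.

    TEMPLATE = "def respond(x):\n    return x * {gain}\n"

    def make_respond(gain):
        namespace = {}
        exec(TEMPLATE.format(gain=gain), namespace)    # compile the freshly written source
        return namespace["respond"]

    respond = make_respond(gain=1.0)

    for target, stimulus in [(10.0, 5.0), (12.0, 3.0)]:
        if abs(respond(stimulus) - target) > 1e-9:
            # The current version misses the target: rewrite it with a new gain.
            respond = make_respond(gain=target / stimulus)
            print(f"rewrote respond() with gain {target / stimulus:.2f}")
        print("response is now", respond(stimulus))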

Take a look at these few examples:

Self adaptive software: A position paper

Automatic Quantum Computer Programming

A note about memory

What is memory? You may be surprised that a Physical system, in order to have memory, does not need to be a computer or to be alive.

We have seen that Physical systems can react to stimuli. When the reaction at a given time depends on the previous reactions and/or previous values of the stimulation and/or previous states of the system, we say that the system has memory.

That’s it, and it makes a lot of sense, given that it means that the response at a given time is produced taking into account the relevant variables at past times.

Most linear dynamic systems have memory. The fact is that the stimulation alters the state of the system itself, which in turn alters its response. For the vast majority of systems the response is a function of the previous states. This is particularly true for transients, quickly varying stimuli. Nonlinear systems often depend on previous states very sharply, especially in the case of hysteresis, where the response is a function not only of the input signal and all the previous states of the system, but also of the whole history of the system. Which means that not only the previous states are important, but also how they were reached. Elastic bodies and magnetized bodies work exactly like this.
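Hysteresis is easy to sketch in code. Here is a toy Schmitt-trigger-like comparator (the thresholds and samples are made up): the very same input value produces different outputs depending on the path along which it was reached, because the system remembers its previous state.

    # A system with memory and hysteresis: the output depends not just on the
    # current input but on the state reached through the whole past history.

    def schmitt_trigger(samples, low=0.3, high=0.7):
        state = 0                        # the system's memory: its previous output
        outputs = []
        for x in samples:
            if state == 0 and x > high:
                state = 1                # switch on only above the upper threshold
            elif state == 1 and x < low:
                state = 0                # switch off only below the lower threshold
            outputs.append(state)
        return outputs

    if __name__ == "__main__":
        rising  = [0.1, 0.5, 0.8, 0.5]   # 0.5 reached from below...
        falling = [0.9, 0.5, 0.2, 0.5]   # ...and 0.5 reached from above
        print(schmitt_trigger(rising))   # [0, 0, 1, 1]: at the final 0.5 the output is 1
        print(schmitt_trigger(falling))  # [1, 1, 0, 0]: at the final 0.5 the output is 0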

Now, the author argues that the concept of memory found in signal processing and computer theory is radically different from the brain’s. I imagine the brain to be, indeed, much more complex than a piece of iron. However, he states that every instance of thought, every creation of the mind, is deeply linked to previous mind states. Which means that the brain has memory according to the very same signal processing definition of memory, which is very broad and general and encompasses many different subtypes of memory. All of this makes the article appear to focus too much on the “desktop computer” implementation of memory, glossing over the fact that the concept of memory is a much broader thing than that. Again, in the light of this, the signal processing definitions appear well suited to at least describe brain behaviour.

Scientific inaccuracy

I think a few things in this article feel like scientific inaccuracy. Here are my impressions:

Models VS reality

Drop a small sphere in air in a constant thermodynamic state. The only forces acting on the small sphere are due to gravity and viscous friction. It can be shown that the velocity of the sphere increases towards a limit velocity, approaching it exponentially; at that limit, the deceleration due to viscous friction (proportional to the velocity itself) is equal and opposite to the acceleration due to gravity, which prevents the velocity of the sphere from changing any further.
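For the record, under the usual assumption of a drag force proportional to the velocity (the Stokes regime), the whole model is one differential equation and its solution:

    m \frac{dv}{dt} = mg - bv, \qquad v(0) = 0
    \;\Longrightarrow\; v(t) = \frac{mg}{b}\left(1 - e^{-t/\tau}\right),
    \qquad \tau = \frac{m}{b}, \qquad v_{\infty} = \frac{mg}{b}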

Does that mean that the velocity is an exponential? I wouldn’t say so.

It means that the reality of the process can be modelled, with well defined accuracy margins, constraints and hypotheses, by an exponential law.

Exponentials do not exist in reality. They were invented by us and belong to our minds. Still, we have found many phenomena that can be described by this tool.

They make for very good models of many natural processes, of the most diverse kinds. For example, they model the activity of radioactive materials very well.

Models are not meant to be identified with reality. Unless we are a very lucky species, the way we create theories has little to do with how nature really works. We care about models and theories so that, discovery after discovery, we increase the degree of truth contained in them while retaining predictive power. Still, it is not guaranteed that we will ever produce a theory that is the whole truth, the real explanation of how nature works. For sure, we are far from that today.

This is why this article felt odd from the very beginning. Who said that brains are computers? I bet many think that many aspects of the brain can be modelled with concepts from signal and computer theory. Hopefully, they have also provided proofs and reasons. Maybe not. But models should not be blindly identified with the things they model.

How to use models

In the case of our little sphere we were faced with a fairly simple problem: all the forces acting on the sphere can be studied independently. As such, once they are known, the principles of dynamics can be used to calculate a representation of the motion of the sphere.

Now think about nuclei, the cores of atoms. They are modelled as held together by the weak interaction and the strong interaction, binding the particles that make up the nuclei. Can these interactions be studied independently? No. The problem is that they have such a short range that two particles cannot be put at a distance big enough for us to characterize the forces while preventing the particles from reacting and producing a third body, whose characteristics are different and into which all the forces are folded.

As such, Physicists study nuclei as a whole. From their structures they hypothesize models for the forces, so that the structures can be calculated as they appear. Then they test the models outside this environment, to see whether they can also predict different phenomena, not only the ones they are based upon, so that the models are not useless. After the tests the models are updated.

As a result there is a plethora of models, each able to describe a few properties of nuclei, each with its own strengths and weaknesses. The shell model is good for binding energies, the superfluid model for the shape. They have to be used in tandem in order to make complex and extensive predictions.

This is what happens when the building blocks of a system cannot really be studied independently: there is no way a single model can describe the whole system.

Yet, the author claims that the computer model is pure madness and that we should get rid of it. He says that it is as naive as the humour model.

The truth is that not even the humour model is naive. It can be reformulated in modern terms as an equilibrium of chemicals, which clearly have strong neurological effects. In this light, the intuition of the ancient philosophers appears powerful rather than stupid.

And I imagine it is the same with the computer model: chances are that it can be used to account for at least a subset of the features of the brain. To model them, not necessarily to state what they are!

For example, let’s suppose that the brain cannot be modelled as a whole as a computer.

Each single neuron could still be modelled as a signal processor, though. We see that they react to signals indeed, so they can be modelled as some kind of signal processors. Now, they are all interfaced. As such, maybe the brain is not a computer, but it resembles much more a network of very simple computers. We can still use the computer as a modelling device with impunity!

(as long as it gives us some predictive power)

These networks are known as neural networks. The simplest way to model a neuron is by giving it this rule: when the action potential exceeds a threshold, fire a signal. Using this model, neural network programs have been written that are able to cope with complex problems, like optimization and nonlinear system identification (for example). They seem a powerful modelling technique indeed, and I feel that getting rid of them before having done proper research to assess their potential is not the definition of wise.
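Just to make the rule concrete, here is a toy sketch of such threshold neurons, with hand-picked weights and thresholds (the wiring and the numbers are invented for the example, not taken from the article or from any real model of the brain):

    # Threshold neurons: sum the weighted inputs and "fire" only above a threshold.

    def neuron(inputs, weights, threshold):
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if activation > threshold else 0

    def tiny_network(x1, x2):
        """Two threshold neurons feeding a third one: together they compute XOR."""
        h1 = neuron([x1, x2], weights=[1, 1], threshold=0.5)     # fires if x1 OR x2
        h2 = neuron([x1, x2], weights=[1, 1], threshold=1.5)     # fires if x1 AND x2
        return neuron([h1, h2], weights=[1, -1], threshold=0.5)  # OR but not AND

    if __name__ == "__main__":
        for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            print(a, b, "->", tiny_network(a, b))

Which brings me to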

The argument against research

It is very hard to keep my cool in front of this. We are bombarded every day by people who think that studying particle Physics is useless, sending space probes is useless, exploring the oceans is useless, and so on and so forth.

They all argue that money should be invested elsewhere, where it is more urgent. It is false. Deal with it!

Science is one of the many Philosophical arts devoted to the building of knowledge. Without knowledge we are stones. There is no useless knowledge. The beauty of and love for knowledge alone is a good enough reason to pursue research, but nothing is really useless anyway. Which means that there is no such thing as useless research.

Just.

Deal.

With.

It.

What if the brain cannot by any means really be modelled by computer and signal theory? Better to know it with a good confidence level, after the required amount of research.

And by the way, I have already supplied examples (computer vision, neural networks, adaptive programs…) where trying to merge what the brain appears to do with what computers do has been very generous with results. Which completely invalidates the last paragraph of the article: we have gained many insights in many different disciplines.

Conclusion

I hope I did not come across as arrogant or anything. Robert Epstein seems to be a well respected specialist and there are no doubts that he knows his subject far better than I do.

I mean, he has many qualifications and publications. I, on the other hand, am just a random duck on the web without even a real name!

In fact, I would like to repeat that this is an exercise for me, to test my comprehension, rather than an attempt to invalidate Robert Epstein’s points.

Still, I could barely believe that a specialist could gloss over so many things when writing an article. It almost feels like the author barely knows what signal processing and computers are. Which for sure is not the case; I swear I am not even remotely thinking that the author is ignorant about that. But it feels like it. Maybe it is just the way he simplified the topic, to reach and popularize it for a wider audience…

Still, this makes the whole article feel as if it was written with the point decided before the facts. Which I am sure is not the case… But it makes for a weird reading experience: I felt as if each single line urged me to provide some kind of counter-argument…

 
