The human brain is not intelligence
The human brain is not intelligence; it's a survival mechanism optimized for selecting immune and other compatibilities. In other words, cognition is an illusion: the human brain is just a feedback algorithm, a simulator built on that feedback. Causality is perceived, simulated, remembered, and used to manipulate. It's quite elegant if you understand feedback loops, and better still if you understand the algorithmic pattern programming that runs from the genetic level up, a biological differential stability network.
Nothing humans do is complicated or very complex, at least in terms of systems... lots of refinements and integrated ideas, but nothing is truly complex all by itself. Nobody just invented fission devices or the like; it was simply a refinement of knowledge that led to the discovery, and further refinement that led to the devices being usable.
Same goes for everything else. Every invention in this world is time and refinement, building on other people's time and refinements. People for the most part act the same way they did millennia ago: they can't organize more than about 200 people in one time and place without breaking apart into subunits, and unless they are organized by a "master" respected by the workers, they fail to be productive.
We grow into a context of demands, which then drives the growth of whatever level of complexity is required to manage the knowledge and acquire the details that fill in those domains. For the most part, this means the "lazy intelligence" only executes when it is necessary for survival.
There are just two areas where superior intelligence is potentially enabling:
1. (Partially) understanding certain useful hypercomplex things like health and physics, which first awaited modern tools like microscopes to observe them, and
2. Competing with others for procreation, wealth, power, etc. HOWEVER, game theory dictates that most decisions in "games" be made on a weighted-random basis, and further, Nash's famous equilibrium theorem tells us that if the other player has selected his weights correctly, it makes no difference whether you get yours right or wrong. As a result, limitless "intelligence" provides little if any competitive advantage beyond the low threshold of simply understanding what the game is about.
I once sat down with a winning high-school football coach, and we looked at various common situations that would seem to benefit from game-theory computation. Amazingly, his off-the-top-of-his-head weights were spot-on to my carefully computed weights. We only looked at 2x2 zero-sum situations, but this demonstrated that there would be NO advantage in having computer assistance. A better random number generator might have helped, though seeing the coach throw dice before deciding each play would have been pretty demoralizing to the players. As in spread-spectrum communication, weighted-random play is as good as limitless intelligence can even approach.
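The 2x2 zero-sum computation the coach was matching by intuition has a closed form. A minimal sketch, with purely hypothetical yardage payoffs for a run/pass call against two defensive looks (the numbers and situation are illustrative, not from the coach's actual games):

```python
import random

def mixed_strategy_2x2(a, b, c, d):
    """Optimal row-player mix for a 2x2 zero-sum game with payoff
    matrix [[a, b], [c, d]] (row player's gains).  Assumes no
    saddle point, so the equilibrium is fully mixed."""
    denom = a - b - c + d
    p = (d - c) / denom              # probability of playing row 1
    value = (a * d - b * c) / denom  # expected payoff to the row player
    return p, value

# Hypothetical yardage payoffs: rows = (run, pass),
# cols = (run defense, pass defense).
p_run, value = mixed_strategy_2x2(1, 6, 8, 0)

# The coach's "dice": call run with probability p_run.
call = "run" if random.random() < p_run else "pass"
```

Because the mix makes the opponent indifferent between his two responses, the expected yardage is the same whichever defense is called; that indifference is exactly why, per Nash, extra intelligence buys nothing once the weights are right.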
How The Brain Works
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. The theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns, and how this process leads to predictions of what will happen in the future. The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. Hawkins' basic idea is that the brain is a mechanism to predict the future: specifically, hierarchical regions of the brain predict their future input sequences. Perhaps not always far in the future, but far enough to be of real use to an organism. As such, the brain is a feed-forward hierarchical state machine with special properties that enable it to learn.
The state machine actually controls the behavior of the organism. Since it is a feed-forward state machine, it responds to future events predicted from past data.
The hierarchy is capable of memorizing frequently observed sequences of patterns (cognitive modules) and developing invariant representations. Higher levels of the cortical hierarchy predict the future on a longer time scale, or over a wider range of sensory input. Lower levels interpret or control limited domains of experience, or sensory or effector systems. Connections from the higher-level states predispose some selected transitions in the lower-level state machines.
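The "memorize sequences, then predict the next input" idea at one level of the hierarchy can be sketched very simply. This is my own minimal illustration of the pattern, not Hawkins' actual cortical learning algorithm; the class and training sequence are invented for the example:

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Minimal sketch of one hierarchical level: memorize observed
    transitions and predict the most likely next input."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, prev, nxt):
        # Count how often `nxt` followed `prev` in past input.
        self.transitions[prev][nxt] += 1

    def predict(self, current):
        # Predict the next input as the most frequent successor seen.
        nexts = self.transitions[current]
        return nexts.most_common(1)[0][0] if nexts else None

# Train the level on a frequently observed sequence.
level = SequenceMemory()
song = "do re mi do re mi".split()
for prev, nxt in zip(song, song[1:]):
    level.observe(prev, nxt)

level.predict("do")  # memorized sequence says "re" follows "do"
```

A higher level would name whole memorized sequences and bias which transitions the lower level expects, which is the "predispose selected transitions" role described above.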
For example, if you knew the structural breakdown of the incoming input and had already processed it through the various layers emulating the human visual system, you'd already have the post-processing structure set up that the "functions" require. You'd only need a matching algorithm, just like the damn cortical columns that take in the various areas of the visual field and the vectorized elements coming from the higher-stage breakdowns. You might end up with 800 million potential elements to search for a match, but with the right data representation the most basic binary search can find your elements down any tree branch, and you'd end up with only a few hundred pattern-matching elements firing at any time: constantly validating the existing visual field for elements of change and identifying them. You'd never have to fire all 800 million unless you somehow shut down and had to reboot the whole image, and even then you'd only have to fire as many elements as there are in the image.
This is why I hate CS and AI people: they brute-force everything, when you can work from a post-processing standpoint where you only need rudimentary functions that can easily be parallelized on a GPU/APU, even on commodity hardware. The only limitation is that you really have to understand the mesh of representation and functional form that lets you do *knowledge*-based processing without algorithmic complexity. Although, I will admit, it is just trading representational processing to hold the complexity, but data storage is definitely cheaper than GPU time.
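The contrast between brute force and representation-driven matching can be shown in a few lines. A toy sketch, with invented feature codes standing in for stored visual elements: keep the codes sorted, binary-search each lookup, and only re-match regions of the field that actually changed:

```python
import bisect

# Hypothetical feature codes for stored visual patterns.  Sorted
# storage makes each lookup O(log n) via binary search, even over
# millions of entries -- no need to scan every stored element.
stored = sorted([17, 203, 4096, 88_000, 7_500_000])

def match(code):
    """Binary search for an exact match among stored codes."""
    i = bisect.bisect_left(stored, code)
    return i < len(stored) and stored[i] == code

# Only re-check regions of the visual field that changed between
# frames, instead of re-firing a matcher for the whole field.
previous = {"top-left": 17, "center": 4096}
current  = {"top-left": 17, "center": 88_000}
changed = {k: v for k, v in current.items() if previous.get(k) != v}
# Only the changed regions need a new match() call.
```

The design point is the same as in the prose: the cost lives in building and maintaining the sorted representation, after which the per-frame work collapses to a handful of logarithmic lookups.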
If I built my AI, there would be only one language for expressing knowledge, and it would include itself in that expression of knowledge running on the CPU/logic-processing platform. The major problem with that is that it shrinks the logic sets to almost nothing and would wipe out the programmer occupation as we now have it (redundancy reduced to zero).
Additional reading material: