The human brain is not intelligence
The human brain is not intelligence; it’s a survival mechanism optimized for selecting immune and other compatibilities. Generally speaking, intelligence is pattern matching, but it still requires an active context: do you need to calculate, within two seconds, whether the mountain lion is going to eat you or run away from you? Turn up the risk aversion to compensate for slow thinking, and the person survives by never being near a mountain lion; turn down the risk aversion, and you have to speed up immediate-surroundings risk assessment, or the person never survives the mountain lion.
The high-level processes of the brain coordinate the retrieval of a context mode and maintain it, while activating the different coherency regions that context mode might use. This is similar to an OS maintaining the partitioning of programs and scheduling their access to resources like the memory or storage systems.
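The OS analogy above can be sketched loosely in code: a "context mode" gets scheduled in, and only the coherency regions registered for that mode are activated, the way an OS schedules a process and maps in its resources. All mode and region names here are invented for illustration.

```python
# Hypothetical mapping of context modes to the coherency regions they use.
REGION_MAP = {
    "foraging": {"visual_search", "spatial_memory", "motor_planning"},
    "social":   {"face_recognition", "language", "theory_of_mind"},
}

class ContextScheduler:
    """Toy stand-in for the brain's high-level coordination:
    swap one context mode in at a time, activating its regions."""

    def __init__(self, region_map):
        self.region_map = region_map
        self.active_mode = None
        self.active_regions = set()

    def switch(self, mode):
        # Like an OS context switch: deactivate the old mode's
        # regions, activate the new mode's regions.
        self.active_mode = mode
        self.active_regions = set(self.region_map.get(mode, ()))
        return self.active_regions

sched = ContextScheduler(REGION_MAP)
print(sorted(sched.switch("social")))
# ['face_recognition', 'language', 'theory_of_mind']
```

The point of the sketch is only that one mode is resident at a time and everything else stays dormant until scheduled, which mirrors the "lazy" execution described later.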
In other words, cognition is an illusion: the human brain is just a feedback algorithm, a simulator built on that feedback. Causality is perceived, simulated, remembered, and used to manipulate. It’s quite elegant if you understand feedback loops, and even better if you understand the algorithmic pattern programming at the genetic level, the biological differential-stability network.
Nothing humans do is complicated or very complex, at least in terms of systems: lots of refinements and integrated ideas, but nothing is truly complex all by itself. Nobody just invented fission or other types of devices; it was simply a refinement of knowledge that led to the discovery, and further refinement that led to the devices being usable.
The same goes for everything else. Every invention in this world is time and refinement, building on other people's time and refinements. People for the most part act the same way they did millennia ago: they can't organize more than about 200 people in one place at one time without breaking apart into subunits, and unless they are organized by a "master" respected by the workers, they fail to be productive.
We grow into a context of demands, which then drives the growth of the level of complexity required to manage the knowledge and the acquisition of details to fill in the domains. For the most part, this means that the “lazy intelligence” only executes when it is necessary for survival.
How The Brain Works
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. The theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns, and how this process leads to predictions of what will happen in the future. The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. Hawkins' basic idea is that the brain is a mechanism to predict the future: specifically, hierarchical regions of the brain predict their future input sequences, perhaps not always far in the future, but far enough to be of real use to an organism. As such, the brain is a feed-forward hierarchical state machine with special properties that enable it to learn.
The state machine actually controls the behavior of the organism. Since it is feed-forward, the machine responds to future events predicted from past data.
The hierarchy is capable of memorizing frequently observed sequences of patterns (cognitive modules) and developing invariant representations. Higher levels of the cortical hierarchy predict the future on a longer time scale, or over a wider range of sensory input. Lower levels interpret or control limited domains of experience, or sensory or effector systems. Connections from higher-level states predispose selected transitions in the lower-level state machines.
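A minimal sketch of that scheme (my own illustration, not Hawkins' actual algorithm): one level memorizes observed transitions between patterns and predicts the next input, while an optional bias from a higher level predisposes selected transitions, as described above.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy stand-in for one level of the cortical hierarchy:
    memorizes observed transitions and predicts the next pattern."""

    def __init__(self):
        # state -> Counter of observed next states
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        # Memorize each adjacent pair in the sequence.
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current, bias=None):
        """Return the most likely next pattern. `bias` models a
        higher-level connection predisposing selected transitions:
        it multiplies the observed count of favored candidates."""
        candidates = self.transitions.get(current)
        if not candidates:
            return None
        def score(item):
            state, count = item
            return count * (bias.get(state, 1.0) if bias else 1.0)
        return max(candidates.items(), key=score)[0]

memory = SequenceMemory()
memory.observe("abcabcabd")
print(memory.predict("b"))                   # 'c' (seen twice, vs. 'd' once)
print(memory.predict("b", bias={"d": 3.0}))  # 'd' (higher level predisposes it)
```

A real hierarchy would stack levels, with the upper level operating over longer time scales and feeding its prediction down as the `bias`; this sketch shows only a single level and the downward predisposition.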
For example, if you knew the structural breakdown of the incoming input and had already processed it through the various layers emulating the human visual system, you'd already have the post-processing structure set up that the "functions" require. All you'd need then is a matching algorithm, just like the damn cortical columns that take in the various areas of the visual field and the vectorized elements coming from the higher-stage breakdowns. You might end up with 800 million potential elements to search for a match, but with the right data representation, the most basic binary search down any tree branch finds your elements, and you'd end up with only a few hundred pattern-matching elements you'd always be firing for: constantly validating the existing visual field for elements of change and identifying them. You'd never have to fire all 800 million unless you somehow shut down and had to reboot the whole image, and even then you'd only have to fire the number of elements actually in the image. This is why I hate CS and AI people: they brute-force everything, when you can work from a post-processing standpoint, and at that point you only have rudimentary functions that can easily be parallelized on a GPU/APU, even on commodity hardware. The only limitation is that you really have to understand the mesh of representation and functional form that allows you to do *knowledge*-based processing without algorithmic complexity. Although, I will admit, it is just trading representation processing to hold the complexity; but data storage is definitely cheaper than GPU.
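A toy sketch of that idea (the signatures, labels, and region keys are all invented for illustration): keep a sorted catalog of element signatures so lookup is a binary search rather than a brute-force scan, and on each frame re-match only the regions whose signature changed since the last frame.

```python
import bisect

# Hypothetical element catalog: (signature, label) pairs, sorted by signature.
CATALOG = sorted([
    (0x1A2B, "edge/vertical"),
    (0x3C4D, "edge/horizontal"),
    (0x5E6F, "corner"),
    (0x7A8B, "blob"),
])
SIGNATURES = [sig for sig, _ in CATALOG]

def match(signature):
    """Binary-search the catalog for an exact signature match."""
    i = bisect.bisect_left(SIGNATURES, signature)
    if i < len(SIGNATURES) and SIGNATURES[i] == signature:
        return CATALOG[i][1]
    return None

def process_frame(prev_frame, frame):
    """Re-match only regions whose signature changed since the last
    frame, instead of re-firing every element in the catalog."""
    hits = {}
    for region, signature in frame.items():
        if prev_frame.get(region) != signature:  # change detection
            hits[region] = match(signature)
    return hits

prev = {"r0": 0x1A2B, "r1": 0x7A8B}
curr = {"r0": 0x1A2B, "r1": 0x5E6F}   # only region r1 changed
print(process_frame(prev, curr))       # {'r1': 'corner'}
```

The cost per frame scales with the number of changed regions times log of the catalog size, not with the full catalog, which is the point of working from a post-processing representation.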
If I built my AI, there would be only one language for the expression of knowledge, and it would include itself in that expression of knowledge, running on the CPU/logic-processing platform. The major problem with that is that it shrinks the logic sets to almost nothing and would wipe out the programmer occupation as we now have it (redundancy reduced to 0).
Additional reading material: