
Mind Programs – Resource – Emotion Recording

You are at work and you start to feel hungry. You’re not even consciously aware of the hunger at this stage, yet it is automatically detected, and your lunch is ordered from your restaurant of choice. You know those days when you just want to have some sushi, and the next day you don’t want to hear about sushi? Your meal choice is correlated with your neuro-signals, and a program will be able to “feel” and guess what you would like to have for lunch.

Your cognitive and mental states will one day become as priceless as the pictures that you took on your vacation. Captured cognitive and emotional signals, combined with GPS location data and recorded audio, will allow us to learn important correlations between places, activities, events and mental states. These metrics will be the next stage in the revolution that biofeedback started, and will allow us to reach new levels of self-control. Once we learn to recreate emotions, we could augment movies and pictures with the emotional state you were in when you captured them, creating a more comprehensive record of reality.

Emotion-based interactive movies

Imagine an interactive movie experience in which you implicitly navigate the plot using your emotions. Your emotional state is constantly fed into the playing device, which chooses the branch that will create the most enjoyable experience for you. The movie literally learns your taste and lets your hidden emotions navigate the plot.
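As a toy illustration of that feedback loop, here is a minimal sketch. Everything in it is invented for the example: the plot graph, the scene names, and the crude thresholding of an "arousal" signal into two moods.

```python
# Purely illustrative sketch: a branching plot steered by an emotion signal.
# The plot graph, scene names and threshold are all invented for the example.

PLOT = {
    "opening":     {"calm": "slow_reveal", "excited": "chase"},
    "slow_reveal": {"calm": "ending_a",    "excited": "twist"},
    "chase":       {"calm": "twist",       "excited": "ending_b"},
    "twist":       {"calm": "ending_a",    "excited": "ending_b"},
}

def classify(arousal: float) -> str:
    """Collapse a continuous arousal reading in [0, 1] into a coarse label."""
    return "excited" if arousal > 0.5 else "calm"

def play(read_arousal, start="opening"):
    """Follow plot branches until a terminal scene is reached."""
    scene = start
    path = [scene]
    while scene in PLOT:
        # Sample the viewer's emotional state before each branch point.
        scene = PLOT[scene][classify(read_arousal())]
        path.append(scene)
    return path

# A simulated viewer who stays excited throughout:
print(play(lambda: 0.9))  # ['opening', 'chase', 'ending_b']
```

In a real system the `read_arousal` callback would be fed by an EEG headset rather than a constant, and the classifier would be far richer than a single threshold.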

Communication augments the individual, creates communities, and is vital for the survival of the majority of species. It shaped, and was shaped by, the course of evolution. It is impossible to imagine a day in our lives without communicating with our family, friends and colleagues.

Is there any pattern in the course of communication paradigms?

What will the next major paradigm shift in communication be like?

Is there any fundamental limit to the rate of information exchange between humans?

The above questions are all intertwined. The evolution of communication was shaped and limited by the inherent physical characteristics of each species. A clear pattern in the evolution of communication would allow us to speculate on the next paradigm shift. The evolution of communication between living organisms is tightly tied to the evolution of sensing, processing and emitting organs. Nature has always found fascinating ways for animals to communicate.

The history of communication goes back 4 billion years, to the age of bacteria. Single-cell organisms exchange information through chemical signals. Bacteria talk to each other, distinguish between species, and have both common inter-species and intra-species languages. Some species of bacteria have hundreds of different behaviors, governed by a cell-community voting mechanism. Cells within our body communicate through a variety of chemical, thermal and electrical signals. Chemical communication works only over short distances: it is impossible to distinguish the exact source of the information, and the information exchange rate is limited.

Around 600 million years ago, during a burst of rapid evolution dubbed the “Cambrian explosion”, primitive nervous systems and vision started to evolve. A new medium emerged, and it was soon utilized for communication. Sea creatures developed complex light-emitting capabilities, based on symbiotic relations with luminescent bacteria or on their own bioluminescent organs, lenses and color filters. This was a significant paradigm shift, which introduced new dimensions into language: fine granularity of the light spectrum, and the ability to create moving, colorful, three-dimensional light shapes. Interestingly, these luminescence capabilities are dominated by primitive deep-sea creatures. We can only imagine what could be done by covering our skin with LCDs and wiring every pixel to our intellectually and socially evolved brains.

Mammals and birds communicate mostly through sound, due to its ubiquitous nature: no line of sight is required, and as long as you are close enough to the source, you’ll get the message. Due to the sexual nature of all mammals and birds, rituals have evolved, whether to impress a female or to beat a rival male. Primates have developed more complex language structures involving hoots and gestures.

Humans, during their short period of existence, have reinvented communication more times and in more ways than ever seen in nature before. This time it wasn’t due to the introduction of a new sense, but rather to a very complex processing organ. We have learned and evolved to think metaphorically. Combined with our ability to produce consonants in addition to vowels, this gave us a completely new way to communicate. Sequences of vowels and consonants were assigned to abstract concepts, feelings, metaphors and everyday objects. From the dawn of humanity, we’ve been very social creatures, for a good evolutionary reason: we always had to collaborate in order to survive, and collaboration is simply impossible without an effective way of communicating. Information, however, wasn’t really liberated at this point.

Then, around 30,000 years ago, there came the first human who painted a picture on a cave wall. He projected a metaphor onto the visual medium, and changed history again. Writing was born. For the first time, one could say something that could be heard over and over again, at different times. Information could finally be saved, perpetuated, persisted. The bandwidth increased dramatically, because the producer of the information did not have to repeat it over and over again. The wall literally augmented, enhanced and empowered him.

The wall was stationary, and thus required the participants to physically stand in front of it. This all changed when humans started to write on portable objects, such as clay tablets (3500 BC, Sumer), bones (1400 BC, China), stone tokens, papyrus (2200 BC) and paper. New concepts and metaphors were created all the time, and it just wasn’t practical to create corresponding symbols for every new concept and spread their meaning among the people. Pictographs appeared around 3500 BC; the oldest one discovered to date is from Sumer.

Then came the alphabet. It literally liberated language. The alphabet is basically a projection from the auditory space to the visual space. Literate people could finally say what they could read, and write what they could say, without the need to constantly learn new symbols.

Once you have an alphabet, it’s much easier to automate the process of writing: a prototype is created, and cloned over and over again. Printing was so inevitable that it was invented independently in China around 200 AD (woodblock printing), in the Holy Roman Empire by the German Johannes Gutenberg around 1440, and several more times throughout history. Writing got faster. This was a step forward in the distribution of information, rather than a paradigm shift.

The ability to transmit information without physically moving matter is extremely powerful. Hundreds of different telegraphs have appeared over the last few millennia. The Greek telegraph, used around 500 BC, included the use of trumpets, drums, shouting, beacon fires, smoke signals, mirrors and semaphores. When we gained the ability to generate and conduct electric current through wires, it was only a matter of time until ideas started to travel along those wires. Now there were even easier, faster and cheaper ways to spread information.

The next information-liberation step is not related to transfer speed, but rather to the associative nature of our minds. Hypertext was born. Nonlinear reading was used hundreds of years before the invention of hypertext: dictionaries and encyclopedias included references and interlinking, and books had tables of contents. However, for the first time, nonlinear reading became practical, simply because hyperlinks eliminated manual paging and lookup.

We’re at the dawn of the next revolution: the Semantic Web revolution, which takes nonlinear reading even further. It makes the interlinking dynamic and automatic. Personalized nonlinear reading flows can be created for readers, based on their intent, implicit goals, reading history, past knowledge, reading style, time constraints and interests.

Roughly at the same time, another revolution is taking place: computers are learning to understand voice. We talk much more than we type, and we can talk while we drive, walk or eat. However, most verbal information doesn’t leverage the technological advances described above, simply because it is not digitized. There are two major limiting factors: computers are not yet ubiquitous, and speech recognition quality is still relatively low. When those limiting factors disappear, exabytes of verbal information will be processed, stored, transmitted, shared, indexed, searched, interlinked and consumed.

It’s 2010. We’ve come a long way in developing our communication capabilities. Information is transmitted at the speed of light and broadcast to billions of people across the globe as fast as it is created. It can be instantly found, edited and shared.

Are we done here?

The speculation part starts here. What is the next paradigm shift? No one knows, but we have billions of years of history in which to spot the trends. We’ve created a ridiculous situation where the rate of information transmission is limited by how fast we can move our tongues, and by how fast we can parse and understand human speech. The average reading, listening, typing and braille-reading rate rarely exceeds 400 words per minute. There may be simple reasons for these limitations: we never needed to pass information faster than our lips can move, and our hearing and brains never had the need to understand speech at higher rates. These days we’re literally flooded with information: the news industry is exploding as users share and contribute events, and the increasingly connected nature of information creates an infinite stream of juicy information to consume.

EEG-based mind-reading headsets are becoming affordable these days. A decent Emotiv EPOC headset can be purchased for $300. That’s just the first generation of the technology, but it gives a clue to the exciting possibilities that will be created once it matures.

What if we could leverage the high-bandwidth optic nerve to create a direct broadband connection to the brain, bypassing language’s 400 words-per-minute bottleneck? We would need to create an engineered language that could be naturally transmitted through the optic nerve and processed by the visual cortex, and we would have to train our brains to understand that language. Some kinds of ideas would probably be transmitted more naturally this way than others; some would have to be transformed on many levels (e.g. semantic abstractions/metaphors, visual transformations, aggregations/summarizations). An efficient communication framework should have a common vocabulary and ontology, an extensible concepts framework, be multi-layered, and impose low information redundancy.
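One small, well-understood piece of the "low information redundancy" requirement can be illustrated with classic Huffman coding: frequent concepts in a shared vocabulary get shorter codes. The concept names and frequencies below are invented for the sketch; a real engineered language would need far more than this.

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix-free binary code from symbol frequencies:
    more frequent symbols receive shorter codewords."""
    # Each heap entry: (total frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix the lighter subtree's codes with "0", the heavier's with "1".
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

# Hypothetical concept frequencies in a conversation:
codes = huffman_codes({"agree": 40, "disagree": 10, "question": 25, "idea": 25})
print(codes)  # the frequent "agree" gets a 1-bit code, rarer concepts longer ones
```

The same idea, applied at the level of semantic concepts rather than characters, is one plausible ingredient of a low-redundancy mind-to-mind vocabulary.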

The optic nerve is responsible for transmitting visual information from the eye to the brain. It contains around 1.2 million nerve fibers (compared to about 30,000 in the auditory nerve), capable of transmitting roughly 8.7 megabits per second. This ultra-high-bandwidth channel might hold the key to the next paradigm shift in communication. During a typical one-hour professional meeting, about two thousand words are exchanged between the participants, which is less than 10 kilobytes of information. Theoretically, direct mental communication via the optic nerves could enable us to complete such a meeting faster than we can pronounce a single word.
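A quick back-of-envelope check of that claim, using only the figures above:

```python
# How long would a one-hour meeting's worth of words take to transmit
# over an optic-nerve-bandwidth channel? Figures taken from the text.

meeting_bytes = 10 * 1024      # generous upper bound: < 10 kB of words
optic_nerve_bps = 8.7e6        # ~8.7 megabits per second

seconds = meeting_bytes * 8 / optic_nerve_bps
print(f"{seconds * 1000:.1f} ms")  # ~9.4 ms, far less than one spoken word
```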

The first adopters are likely to be the military, police, hospitals and other emergency services, where the ability to shorten discussions to fractions of a second could be the difference between life and death. Businesses will benefit greatly from the productivity boost. Academics and students, who deal with information and knowledge transfer most of the time, will adopt the new medium to speed up their learning and research. It is interesting to ask whether and when we will adopt telepathy for personal conversations among friends and family. This paradigm shift will drive further profound changes, which will make us question the purpose of everyday concepts such as business trips, meeting rooms, podcasts, lectures, keyboards and text.

There are still many unanswered questions, regarding the feasibility and implications of such technology.

Will the adoption of telepathy boost human evolution and technological development?

What are the ethical implications?

Are our brains capable of processing such quantities of information? How will it affect our psychology?

Will it require us to rebuild the existing Internet infrastructure?

Will a new language be required for telepathy?

The Mysterious Brain

It is capable of having more ideas than there are atoms in the known universe, by constantly firing chemical and electrical signals across its more than 500 trillion synaptic connections between 100 billion neurons.

All this magic fits in a 1400 cm³ box, weighs only 1.4 kg and consumes just 10-20 watts of energy.
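Combining the figures above gives a feel for the scale:

```python
# Simple ratios derived from the numbers quoted in the text.

synapses = 500e12      # ~500 trillion synaptic connections
neurons = 100e9        # ~100 billion neurons
volume_cm3 = 1400      # total volume of the "box"

print(synapses / neurons)    # ~5,000 synapses per neuron on average
print(neurons / volume_cm3)  # ~70 million neurons per cubic centimeter
```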

While the mystery is being uncovered little by little by neuroscientists around the world, we still have very little idea of how physical brain activity translates to the intellectual and emotional planes, as illustrated by the top 10 unsolved brain mysteries from Discover Magazine:

• How is information coded in neural activity?

• How are memories stored and retrieved?

• What does the baseline activity in the brain represent?

• How do brains simulate the future?

• What are emotions?

• What is intelligence?

• How is time represented in the brain?

• Why do brains sleep and dream?

• How do the specialized systems of the brain integrate with one another?

• What is consciousness?

An engineering perspective

A computer architect will easily notice that the structure of the brain exhibits many desired characteristics of a distributed system: cheap, highly redundant, simple, poorly specialized units, each responsible for just a few basic functions. Interconnections autonomously evolve toward the organic flow of information. Certain parts of the system are specialized and optimized for specific functionality, whether it’s a rapid, hard-wired response or a slower, logic- and inference-driven rational decision.

An Artificial Intelligence perspective

Humankind has a long history of attempts to understand, formalize, model and recreate consciousness, reasoning and intelligence; some of those attempts date back to Aristotle. The history of AI is strongly intertwined with the history of scientific and technological development, with paradigm shifts running from mechanical “thinking” machines, through electric circuitry, analog electronics and digital electronics, to the age of software AI, which itself spans rule-based symbolic reasoning and logical inference as well as uncertainty-embracing probabilistic machine-learning approaches such as artificial neural networks. In February 2010, MIT research scientist Noah Goodman introduced a “grand unified theory of AI”, combining the rule-based and probabilistic approaches. The theory hasn’t yet been proven in industrial applications, but is believed to hold the potential of becoming the holy grail of AI.

Brain modeling and simulation approach

The Blue Brain project takes the opposite approach: reverse-engineering the brain, modeling the neural topology on the macro scale and building empirical models of individual neurons on the micro scale. Those models are already the basis of a brain-activity simulation: 8,192 processors power the simulations in a distributed manner, totaling 28 teraflops of computing power. Currently, in 2010, a rack of servers is required to simulate small subsets of the brain’s functionality. The Blue Brain project leads me to the main question I would like to raise in this post.

What would it take to create a complete simulation of the human brain?

I’d like to quote four questions from the FAQ section of the Blue Brain website:

Q: What computer power would you need to simulate the whole brain?

A: The human neocortex has many millions of NCCs. For this reason we would need first an accurate replica of the NCC, and then we will simplify the NCC before we begin duplications. The other approach is to convert the software NCC into a hardware version – a chip, a blue gene on a chip – and then make as many copies as one wants.

The number of neurons varies markedly in the neocortex, with values between 10-100 billion in the human brain and millions in small animals. At this stage the important issue is how to build one column. This column has 10-100’000 neurons depending on the species and particular neocortical region, and there are millions of columns.

We have estimated that we may approach real-time simulations of an NCC with 10’000 morphologically complex neurons interconnected with 10^8 synapses on an 8-12’000 processor Blue Gene/L machine. Simulating a human brain, with its millions of NCCs, will probably require more than proportionately more processing power. That should give an idea of how much computing power will need to increase before we can simulate the human brain at the cellular level in real time. Simulating the human brain at the molecular level is unlikely with current computing systems.

Q: Do you believe a computer can ever be an exact simulation of the human brain?

A: This is neither likely nor necessary. It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today. Mammals can make very good copies of each other; we do not need to make computer copies of mammals. That is not our goal. We want to try to understand how the biological system functions and malfunctions so that this knowledge can benefit mankind.

Q: The Blue Gene is one of the fastest supercomputers around, but is it enough?

A: Our Blue Gene is only just enough to launch this project. It is enough to simulate about 50’000 fully complex neurons close to real-time. Much more power will be needed to go beyond this. We can also simulate about 100 million simple neurons with the current power. In short, the computing power and not the neurophysiological data is the limiting factor.

Q: You are using 8’000 processors to simulate 10’000 neurons — is this a 1 neuron/processor model?

A: There is no software in the world currently that can run such simulations properly. The first version will place about one neuron per processor – some will have more because the neurons are less demanding. We can in principle simulate about 50’000 neurons, placing many neurons on a processor. The first version of Blue Gene cannot hold more than a few neurons on each processor. Later versions will probably be able to hold hundreds.
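Taking the quoted figures at face value, a rough order-of-magnitude extrapolation is possible. Note that "millions of NCCs" is assumed here to mean about two million, which is my reading, not a Blue Brain figure:

```python
# Order-of-magnitude extrapolation from the quoted FAQ figures:
# one NCC (~10,000 neurons) runs near real time on an 8-12,000
# processor Blue Gene/L, and a human brain has millions of NCCs.

processors_per_ncc = 10_000    # within the quoted 8-12,000 range
nccs_in_human_brain = 2e6      # "millions" -- assumed as 2 million

processors_needed = processors_per_ncc * nccs_in_human_brain
print(f"{processors_needed:.0e} processors")  # ~2e10 for cellular-level real time
```

The FAQ itself warns that scaling is likely worse than proportional, so this is a lower bound on the required hardware, not an estimate of it.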

Conclusions

Today’s CPUs are capable of performing several billion operations per second.

Assuming there are no more than 10 billion computers on Earth, simulating the human brain using current architectures would require the computing resources of the entire planet.

The famous diagram by Ray Kurzweil reflects the long-term exponential increase in calculations per second per $1,000 over the years.

Moore’s law predicts a doubling of the number of transistors on an integrated circuit roughly every two years.

Hans Moravec, principal research scientist at the Robotics Institute of Carnegie Mellon University, estimates the human brain’s processing power at around 100 teraflops, with a memory capacity of 100 terabytes.

According to Moore’s law, we’re around 25 years away from the point of a “brain in a box” for $1,000.
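A sanity check on that figure, under assumed inputs: Moravec's ~100 teraflops estimate for the brain, an assumed ~10 gigaflops per $1,000 in 2010 (my assumption, not a figure from the text), and performance per dollar doubling roughly every two years:

```python
import math

# How many doublings until $1,000 of hardware matches the brain?
brain_flops = 100e12            # Moravec's estimate: ~100 teraflops
flops_per_1k_now = 10e9         # ASSUMED: ~10 gigaflops per $1,000 in 2010
doubling_period_years = 2       # Moore's-law-style doubling

doublings = math.log2(brain_flops / flops_per_1k_now)
print(f"{doublings * doubling_period_years:.0f} years")  # ~27
```

The result lands in the same ballpark as the 25-year figure above; the exact number is obviously sensitive to the assumed starting point.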

Others argue that human consciousness can’t be replicated in silicon, because most of its important features are the result of unpredictable, nonlinear interactions among billions of cells.