
Device may contribute to creating a neuromorphic computer

19th July 2017
Enaie Azambuja

Of all the things that contemporary science can observe in the universe, nothing outperforms the human brain, or even comes close to it, in functionality, plasticity and efficiency. The brain is a massively parallel processor of information, consuming energy on the order of one femtojoule (10⁻¹⁵ J) per synaptic event. By comparison, an ordinary 100 W bulb consumes one hundred quadrillion (10¹⁷) times that amount of energy every second.
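To put those figures side by side, here is a quick back-of-the-envelope check (a sketch using the order-of-magnitude values quoted above, not figures from the paper):

```python
# Energy comparison: a 100 W bulb vs. one synaptic event (~1 fJ).
energy_per_synaptic_event = 1e-15  # joules, order of magnitude quoted above
bulb_energy_per_second = 100.0     # a 100 W bulb dissipates 100 J every second

ratio = bulb_energy_per_second / energy_per_synaptic_event
print(f"{ratio:.0e}")  # 1e+17, i.e. one hundred quadrillion
```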

Emulating, or even approaching, the interconnectivity, information density and energy efficiency of the human brain is an ideal pursued by the most advanced and innovative research in medicine, engineering and informatics.

An organic electrochemical device produced recently in the United States has made an original contribution to this quest. Called ENODe (Electrochemical Neuromorphic Organic Device), it can be fabricated on flexible substrates, operates at energy levels below one picojoule (10⁻¹² J), displays more than 500 distinct nonvolatile conductance states, and accurately simulates synaptic functions.

The research that led to the development of this device is described in the article “A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing”, published in Nature Materials.

The group led by Italian chemist Alberto Salleo, a professor in Stanford University’s Department of Materials Science & Engineering, conducted the study. Brazilian researcher Gregório Couto Faria, affiliated with the University of São Paulo’s São Carlos Physics Institute (IFSC-USP), also participated. Faria was supported by FAPESP via a scholarship for research abroad.

“Although the ENODe is very simple, it has a property typical of neural structures, which is multilevel memory,” Faria told Agência FAPESP. “Each unit of our brain displays about a hundred states of potentiation, which correspond to different levels of memory. In our experiment, by varying voltage we were able to change the conductivity of the active material and thereby also access differentiated memory patterns.”
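A toy software analogue may help picture this. The sketch below is purely illustrative, not the device physics reported in the paper: it treats the artificial synapse as a ladder of discrete, nonvolatile conductance levels, with each voltage pulse stepping the stored state up or down (all names and parameter values here are assumptions for illustration).

```python
import numpy as np

class MultilevelSynapse:
    """Toy model of a nonvolatile multilevel artificial synapse."""

    def __init__(self, n_states=500, g_min=1e-6, g_max=1e-4):
        # Evenly spaced conductance levels (siemens); 500 states
        # echoes the figure reported for the ENODe.
        self.levels = np.linspace(g_min, g_max, n_states)
        self.state = n_states // 2  # start mid-range

    def pulse(self, voltage):
        # A positive pulse potentiates (steps the state up), a negative
        # pulse depresses it; the state persists between pulses.
        if voltage > 0:
            self.state = min(self.state + 1, len(self.levels) - 1)
        elif voltage < 0:
            self.state = max(self.state - 1, 0)
        return self.levels[self.state]

syn = MultilevelSynapse()
for _ in range(10):      # ten potentiating pulses
    syn.pulse(+0.5)
print(f"conductance after potentiation: {syn.levels[syn.state]:.3e} S")
```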

The device essentially consists of a conjugated polymer, which conducts not only electrons but also ions and thus can act as a translator – or more accurately, a transducer – of electric current into ionic current. The main advantage of this property is that it enables the material to mimic biological systems.

“These systems communicate primarily via ion fluxes. When a neuron interacts with another, it opens up channels for the passage of ions and thereby polarises the surrounding area. By reproducing this function, our polymer establishes an interface between artificial and living systems,” Faria said.

Some immediate applications would be to use the device in sensors that detect the presence of certain substances, or in prostheses that stimulate living tissue, such as heart cells. Another, far more ambitious, application would be to leverage the properties of these materials to design and produce electronic devices that mimic biological structures, such as neurons.

Alan Heeger (USA), Alan MacDiarmid (New Zealand) and Hideki Shirakawa (Japan) won the 2000 Nobel Prize in Chemistry for the discovery and development of conductive polymers. The family of conjugated polymers includes a specific class of organic mixed conductors.

One of the advantages of these materials over their inorganic analogues is their highly porous morphology, which is akin to that of a sponge. When immersed in a liquid medium in the presence of ions, they absorb the solution in their interstices, and the network of interstitial channels becomes the pathway along which the ions can travel.

During ion transit, oxidation and reduction reactions occur between the polymer and the ions. This phenomenon is known as doping in materials science. The gain or loss of electrons affects the electrical conductivity of the material, and the conductivity can be modulated by varying the voltage applied.

“This is what enables the device to display different levels of memory, one of the initial conditions for the creation of a neuromorphic structure,” Faria said. Neuromorphic units integrated in neural networks are a highly promising option for surmounting the impasse currently facing computing.

“The development of computing is in fact at a crossroads,” Faria said. “Moore’s Law, according to which the number of transistors on the integrated circuits in computers doubles every 18 months, remains valid, but processing or clock speed isn’t rising proportionally. In fact, it’s basically standing still.”

This standstill results from several factors, such as the Joule effect, i.e., the dissipation of energy in the form of heat (P = I²R for a resistive element), which intensifies as components are increasingly miniaturised.

“One of the approaches people have looked at to break through the impasse is verticalising memory units. This is the same principle you see in urban growth: when the area available for a city’s expansion runs out, the city starts to grow vertically.

“The same principle could apply to processors. Another, more innovative, solution is to replace von Neumann computation [named for Hungarian mathematician John von Neumann (1903-1957)], the model followed since the onset of the information technology revolution, with neuromorphic computation, which seeks to mimic the functioning of the brain,” Faria said.

In traditional computing, the bit (binary digit), 1 or 0, is the smallest unit of information that can be stored or transmitted. The numeral 1 corresponds to a closed circuit that allows an electric current to pass, and 0 corresponds to an open circuit that blocks the signal. In short, the paradigm is yes or no, with nothing in between. In neuromorphic computing, each basic unit can display several possible states – yes, no, and various forms of maybe.
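In code, the contrast might be sketched like this (an illustration only; the 500-level figure is borrowed from the ENODe result above):

```python
import numpy as np

bit = 1  # a conventional bit: 1 or 0, yes or no, nothing in between

# A neuromorphic unit can rest at any of many stored states; here,
# 500 levels normalised to [0, 1] stand in for distinct conductance states.
levels = np.linspace(0.0, 1.0, 500)
unit_state = levels[237]  # one of the many possible "maybes"

print(f"bit = {bit}, multilevel unit state = {unit_state:.4f}")
```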

Moreover, neural networks would display levels of plasticity and learning capacity that would increasingly resemble those of biological structures. Hence, many different units could be synchronised so that they would be able to learn and mimic each other’s electrical patterns.

“Alongside multilevel memory, learning capacity is a very important characteristic of the ENODe,” Faria said. “To demonstrate this capacity, we subjected the device to ‘Pavlov’s dog test’.”

The test, based on experiments performed by Russian physiologist Ivan Petrovich Pavlov (1849-1936), associates the offer of food with the sound of a buzzer or bell. The dog salivates initially on seeing or sniffing the food, and later, whenever it hears the sound, it salivates even in the absence of food.

“We reproduced this kind of animal conditioning using two neuromorphic systems, one for vision, mimicking the sight of food, and the other for hearing, mimicking the sound of the bell. Changes in conductivity due to electric potential occurring in the vision system were ‘learned’ by the hearing system when the two systems were integrated,” Faria said.
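A minimal software analogue of that conditioning experiment (a sketch under simplifying assumptions, not the circuit used in the study) pairs a strongly coupled “vision” channel with an initially weak “hearing” channel and lets their co-activity potentiate the weaker synapse, Hebbian-style:

```python
# Two input channels: "sight of food" (strong) and "sound of bell" (weak).
w_sight, w_sound = 1.0, 0.0   # initial synaptic weights
threshold = 0.5               # output fires (the dog salivates) above this
lr = 0.2                      # potentiation per paired trial

def salivates(sight, sound):
    return w_sight * sight + w_sound * sound > threshold

# Conditioning: food and bell are presented together for a few trials;
# whenever the output fires while the bell is active, the bell's
# synapse is strengthened (a Hebbian update).
for _ in range(5):
    sight, sound = 1.0, 1.0
    if salivates(sight, sound):
        w_sound += lr * sound

print(salivates(sight=0.0, sound=1.0))  # True: the bell alone now triggers salivation
```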

It may sound complicated, but in essence these findings boil down to the ability to dope the polymer structure by means of an ion flux.

Today’s von Neumann computers are already able to learn by trial and error. In technical language, this is called machine learning, and it can be found even in the latest generation of smartphones. A milestone in this evolution was reached in 1997, when world chess champion Garry Kasparov, thought by many to be the best chess player of all time, lost to IBM’s Deep Blue supercomputer.

Another landmark event occurred in 2011 when Watson, an even more powerful IBM supercomputer, defeated two former human champions of Jeopardy!, a popular TV quiz program in the US. With 15 trillion bytes of random-access memory (a byte equals eight bits), equivalent to 5,000 massively parallel processors, Watson can read half a billion pages in 3 seconds.

During the quiz program, its machine learning capability enabled it to understand the questions (many of them tricky), find all the possible answers, rank them statistically by plausibility, and choose the most likely option, within seconds or fractions of a second.

The target now, however, is a different type of learning based not on the accumulation of bytes in gigantic machines but on multilevel processing units integrated into networks capable of mimicking the plasticity of the human brain.

The device tested in Stanford’s experiment is macroscopic in size, less than a millimetre thick, and can be miniaturised to the nanometric scale by using photolithography. “Our prime goal is to connect several neuromorphic devices and mimic neural networks capable of performing increasingly complex functions,” Faria said. Neuromorphic computing is on the horizon.
