Magnetic track memory can work like neurons in the human brain

Machine learning and artificial intelligence algorithms need processors of their own rather than general-purpose ones: only dedicated hardware lets neural networks process large data arrays efficiently. Ideally, one would build a silicon analogue of the human brain, but silicon falls short of that goal. A possible solution is electronics based on interacting magnetic fields.

A group of researchers from the Cockrell School of Engineering at the University of Texas at Austin conducted a series of experiments on using magnetic circuits for energy-efficient processing of big data. The work is published in the IOP journal Nanotechnology (paywalled). In these experiments, the scientists demonstrated a productive mutual interaction between a pair of magnetic transitions known as domain walls (boundaries between regions of magnetization).

In track memory, each domain wall encodes a logical 0 or 1, and the magnetic interaction between two adjacent logical elements weakens one of them. In circuits built from classical silicon logic, conveying one element's reaction to another would require special corrective circuitry. The magnetic interaction, it turned out, suppresses the neighboring element's signal automatically, through space itself, without any additional circuits. Essentially for free.
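The pairwise suppression described above can be sketched as a toy numerical model. Everything here is an illustrative assumption: the `couple` function, the linear suppression rule, and the `strength` coefficient are hypothetical, not taken from the paper.

```python
def couple(a: float, b: float, strength: float = 0.5) -> tuple[float, float]:
    """Toy model of two magnetically coupled elements.

    Each element's signal is reduced in proportion to its neighbor's
    signal, mimicking the automatic suppression that in silicon logic
    would need extra corrective circuitry. The coupling `strength` is
    an assumed parameter for illustration only.
    """
    return a - strength * b, b - strength * a

# The stronger signal survives the interaction better than the weaker one.
left, right = couple(1.0, 0.4)
```

In this sketch the element that starts out stronger ends up dominating after the interaction, with no explicit comparison logic: the suppression falls out of the coupling itself, which is the "for free" effect the researchers describe.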

How IBM Racetrack Memory works


Neurons in the human brain act in a similar way: the most strongly excited neuron inhibits the activity of the other neurons in its layer. It hardly needs repeating that, after millions of years of evolution, the brain performs its tasks with great efficiency. The same goes for magnetic domains: if, instead of complex silicon logic with a mass of feedback paths, we build mutually influencing arrays of domain walls with a far simpler implementation of the connections, the energy cost of processing data drops significantly.

In machine learning, the effect described above is called lateral inhibition, and it is normally implemented with complex logic. Magnetic elements, as we see, simplify the circuitry needed to implement the same algorithms. The University of Texas researchers were able to show this on a model of two magnetic elements and derived a mathematical model for an array of 1,000 elements. At the next stage, they intend to conduct experiments with many magnetic elements.
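For readers unfamiliar with the term, lateral inhibition over an array of units can be sketched in a few lines. This is a generic one-step illustration of the concept, not the researchers' mathematical model; the `strength` coefficient and the linear suppression rule are assumptions chosen for clarity.

```python
def lateral_inhibition(activations: list[float], strength: float = 0.2) -> list[float]:
    """One inhibition step: each unit is suppressed in proportion to the
    summed activity of all the *other* units, and cannot go negative.

    `strength` is an assumed inhibition coefficient for illustration.
    """
    total = sum(activations)
    return [max(0.0, a - strength * (total - a)) for a in activations]

# The strongest unit is suppressed least, so the "winner" stands out
# while weak units are silenced entirely.
out = lateral_inhibition([0.9, 0.5, 0.3, 0.1])
```

In conventional hardware, each of those subtractions corresponds to explicit wiring between every pair of units; the point of the magnetic approach is that the coupling between domain walls performs this suppression physically, with no such circuitry.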
