How a single-neuron AI "brain" may outperform humans

A multidisciplinary team of researchers at Technische Universität Berlin has developed a neural "network" that might one day surpass the capacity of the human brain using just one neuron.

There are around 86 billion neurons in our brains. Together, they form one of the most sophisticated biological neural networks known to exist.

Current artificial intelligence systems seek to mimic the human brain by building multi-layered neural networks that pack as many artificial neurons as possible into as little space as feasible.

Unfortunately, such designs consume a lot of energy and generate results that aren't as good as the human brain's robust and energy-efficient outputs.

According to Katyanna Quach of The Register, experts anticipate that the expense of training a single neural "super network" could soon rival the cost of a nearby space mission:

The scale of neural networks, as well as the amount of technology required to train them using large data sets, is increasing. Take GPT-3, for example: it has 175 billion parameters, which is 100 times more than GPT-2.

When it comes to performance, bigger is better, but at what cost to the environment? According to Carbontracker, training GPT-3 only once consumes the same amount of energy that 126 Danish households consume in a year, or the equivalent of travelling to the Moon and back.
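A quick back-of-envelope check makes that comparison concrete. The figures below are assumptions for illustration rather than numbers from the article: Carbontracker's widely cited estimate of roughly 190,000 kWh for one GPT-3 training run, and roughly 1,500 kWh of electricity per Danish household per year (Danish homes rely heavily on district heating, hence the low figure).

```python
# Back-of-envelope check of the Carbontracker comparison quoted above.
# Both figures are assumptions for illustration, not from the article.
gpt3_training_kwh = 190_000            # assumed estimate for one training run
danish_household_kwh_per_year = 1_500  # assumed annual electricity use

households = gpt3_training_kwh / danish_household_kwh_per_year
print(f"~{households:.0f} Danish households' annual electricity")  # ~127
```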


A network usually requires more than one node. In this scheme, however, a single neuron can network with itself by extending out through time rather than space.

According to the study paper written by the team:

A method for a complete folding-in-time of a multilayer feed-forward DNN has been developed. This Fit-DNN approach requires only a single neuron with feedback-modulated delay loops. By temporally sequentializing the nonlinear operations, an arbitrarily deep or wide DNN can be realised.

Each neuron in a conventional neural network such as GPT-3 can be weighted to fine-tune results. Typically, more neurons mean more parameters, and more parameters mean finer-grained outcomes.

However, the Berlin team discovered that instead of spreading differently weighted neurons across space, they could do a comparable job by weighting the same neuron differently across time.
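To make that idea concrete, here is a minimal Python sketch of the folding-in-time trick. It is a software caricature rather than the team's laser-driven device: the layer sizes and weights are made up for illustration, a plain list stands in for the optical delay line, and one shared nonlinearity plays the single neuron, firing once per time step.

```python
import numpy as np

def f(x):
    """The single neuron's nonlinearity."""
    return np.tanh(x)

# Hypothetical layer sizes, chosen only for illustration.
layers = [4, 3, 2]
rng = np.random.default_rng(0)
# One weight matrix per layer transition; in the Fit-DNN hardware these
# would be realised as modulated gains on the feedback delay loops.
W = [rng.normal(size=(n_out, n_in)) for n_in, n_out in zip(layers, layers[1:])]

x = rng.normal(size=layers[0])  # input signal

# Conventional evaluation: many neurons working in parallel per layer.
a = x
for Wl in W:
    a = f(Wl @ a)
parallel_out = a

# Folded-in-time evaluation: ONE neuron fires once per time step. Its past
# outputs sit on a delay line (here, a plain list) and are fed back with
# time-varying weights, so the network's depth and width unfold in time.
history = list(x)  # delay line initialised with the input samples
t0 = 0             # index where the current layer's signal starts
for Wl in W:
    n_in = Wl.shape[1]
    for i in range(Wl.shape[0]):  # one time step per "virtual" neuron
        # Weighted sum of delayed outputs = feedback-modulated delay loops.
        s = sum(Wl[i, j] * history[t0 + j] for j in range(n_in))
        history.append(f(s))      # the single neuron fires once
    t0 += n_in                    # the next layer reads newer history
serial_out = np.array(history[-layers[-1]:])

assert np.allclose(parallel_out, serial_out)
```

The closing assertion confirms that the serial, one-neuron evaluation reproduces the conventional layer-parallel result exactly; the speed of the physical version then comes down to how fast the feedback loops can be driven.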

According to a Technische Universität Berlin news release:

This would be similar to a single guest recreating a huge dinner-table discussion by swiftly switching seats and speaking each part of the conversation.

However, "swiftly" is an understatement. The team claims that by driving the neuron's time-based feedback loops with lasers, effectively networking at or near the speed of light, the device could hypothetically approach the fastest speeds the universe allows.

What does this mean for artificial intelligence? The researchers believe it could help offset the growing energy cost of training powerful networks. If larger networks keep doubling or tripling energy requirements over time, we will eventually run up against practical limits on the energy available.

The essential question is whether a single neuron trapped in a temporal loop can achieve the same effects as billions of neurons.

In early testing, the researchers used the new system to perform computer vision tasks. It was able to remove manually added noise from images of clothing and recover an accurate picture, a feat considered sophisticated by today's AI standards.
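For a sense of what that denoising task involves, here is a minimal sketch of the setup in NumPy. It uses synthetic 8×8 "images" and a tiny linear autoencoder rather than the paper's actual architecture or data, purely to illustrate the formulation: train on noisy inputs, target the clean originals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the clothing images mentioned above: 8x8 "images"
# flattened to 64-dimensional vectors. (The paper's real data pipeline is
# not detailed in this article.)
n, d, h = 512, 64, 16
clean = rng.random((n, d))
noisy = clean + rng.normal(scale=0.3, size=(n, d))  # manually added noise

# A tiny linear denoising autoencoder trained by gradient descent:
# encode to h dimensions, decode back, regress toward the clean images.
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, d))
lr = 0.01
for step in range(2000):
    z = noisy @ W1     # encode
    out = z @ W2       # decode
    err = out - clean  # reconstruction error against the CLEAN targets
    gW2 = (z.T @ err) / n                  # mean-squared-error gradients
    gW1 = (noisy.T @ (err @ W2.T)) / n
    W1 -= lr * gW1
    W2 -= lr * gW2

mse = np.mean((noisy @ W1 @ W2 - clean) ** 2)
print(f"reconstruction MSE after training: {mse:.4f}")
```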

The scientists think that with future refinement, the technique might be expanded to establish "an infinite number" of neural connections from neurons trapped in time.

It's conceivable that such a system could outperform the human brain and become the world's most powerful neural network, or "superintelligence," as AI researchers call it.
