We all know AI has a power problem. In aggregate, global AI usage already draws as much energy as the entire nation of Cyprus did in 2021.
But engineering researchers at the University of Minnesota Twin Cities have developed and demonstrated a new computer memory design that could drastically reduce the amount of energy AI systems consume, which could help temper this problem. Their research was recently published in the journal npj Unconventional Computing.
Most modern computing systems are built on what is known as the von Neumann architecture, in which the logic and memory subsystems are separate. During normal operation, data is shuttled back and forth between the memory modules and the central processor. This is the fundamental basis of how modern computers operate.
However, as processing speeds rapidly outpace I/O technology, this data transfer becomes a bottleneck in terms of both processing speed (also known as the memory wall problem) and power consumption. As the researchers point out, just shuffling the data back and forth can consume as much as 200 times the power that the computations themselves do.
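To see why data movement dominates, here is a minimal back-of-the-envelope sketch in Python. The per-operation energy figures are illustrative assumptions chosen to reproduce the roughly 200x ratio the researchers cite; they are not measurements from the paper.

```python
# Back-of-the-envelope estimate of compute vs. data-movement energy.
# The per-event energies below are assumed, illustrative values, not
# figures from the paper: on the order of a picojoule per on-chip
# multiply-accumulate versus hundreds of picojoules per DRAM fetch.

MAC_ENERGY_PJ = 1.0        # assumed energy per multiply-accumulate (pJ)
DRAM_ACCESS_PJ = 200.0     # assumed energy per operand fetched from DRAM (pJ)

def layer_energy(macs: int, dram_fetches: int) -> tuple[float, float]:
    """Return (compute_pj, movement_pj) for one neural-network layer."""
    return macs * MAC_ENERGY_PJ, dram_fetches * DRAM_ACCESS_PJ

# Toy fully connected layer: 1,000 x 1,000 weights, each fetched once.
compute_pj, movement_pj = layer_energy(macs=1_000_000, dram_fetches=1_000_000)
print(f"compute: {compute_pj / 1e6:.2f} uJ, movement: {movement_pj / 1e6:.2f} uJ")
print(f"movement / compute ratio: {movement_pj / compute_pj:.0f}x")
```

Under these assumptions the data movement costs 200 times what the arithmetic does, which is the imbalance that near- and in-memory designs try to eliminate.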
Developers have sought to work around this issue by bringing the logic and memory physically closer together with "near-memory" and "in-memory" computing designs. Near-memory systems stack the logic and memory on top of one another in a 3D array, layered PB&J-style, while in-memory systems intersperse clumps of logic throughout the memory on a single chip, more like a peanut butter and banana sandwich.
The Twin Cities research team's solution is a novel, fully digital, in-memory design dubbed computational random-access memory (CRAM), wherein "logic is performed natively by the memory cells; the data for logic operations never has to leave the memory," per the researchers. The team achieved this by integrating a reconfigurable spintronic compute substrate directly into the memory cell, an advance the researchers found could reduce an AI process's energy consumption by an "order of 1,000x over a state-of-the-art solution."
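The key architectural idea, logic executed by the memory cells themselves so that operands never cross a bus, can be illustrated with a deliberately simplified toy model. The Python sketch below is a conceptual analogy only, with assumed semantics (a NAND between two stored rows written into a third); the actual CRAM device performs this electrically through spintronic magnetic tunnel junctions, not software.

```python
# Toy model of in-memory bitwise logic: the "array" computes a new row
# from two stored rows without the data ever leaving the array object.
# Conceptual sketch only; real CRAM does this with spintronic cells.

class ToyCRAM:
    def __init__(self, rows: int, cols: int):
        self.cells = [[0] * cols for _ in range(rows)]

    def write_row(self, r: int, bits: list[int]) -> None:
        self.cells[r] = list(bits)

    def row_nand(self, a: int, b: int, dest: int) -> None:
        """Compute dest = NAND(a, b) column by column, inside the array."""
        self.cells[dest] = [1 - (x & y)
                            for x, y in zip(self.cells[a], self.cells[b])]

mem = ToyCRAM(rows=3, cols=8)
mem.write_row(0, [1, 1, 0, 0, 1, 0, 1, 1])
mem.write_row(1, [1, 0, 1, 0, 1, 1, 0, 1])
mem.row_nand(0, 1, dest=2)   # result stays in the array
print(mem.cells[2])          # -> [0, 1, 1, 1, 0, 1, 1, 0]
```

Because NAND is a universal gate, chains of such row operations can in principle build up arbitrary computation entirely inside the array, which is what eliminates the energy-hungry trips to and from a separate processor.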
And that 1,000x improvement could be just the baseline. The research team tested CRAM on an MNIST handwritten digit classification task and found it to be "2,500× and 1,700× less in energy and time, respectively, compared to a near-memory processing system at the 16 nm technology node."
The emerging AI industry is already confronting significant resource issues. The ever faster, ever more powerful and capable GPUs that underpin AI software are immensely energy hungry. NVIDIA's new top-of-the-line Blackwell B200 draws up to 1,200W, for example, and generates so much waste heat that it requires liquid cooling, another resource-intensive operation.
With hyperscalers like Google, Amazon, and Microsoft all scrambling to build out the physical infrastructure needed to power the oncoming AI revolution (gigawatt-scale data centers, some with their own attached nuclear power plants), creating more energy-efficient compute and memory resources will become increasingly critical to the long-term viability of AI technology.