One of the most obvious, and frankly dullest, trends in the smartphone industry over the past couple of years has been the incessant talk about AI experiences. Chipmakers, in particular, often touted how their latest silicon would enable on-device AI processing such as video generation.
We're already there, albeit not universally. Amid all the hoopla around hit-and-miss AI tricks for smartphone users, the debate rarely went beyond glitzy presentations about new processors and ever-evolving chatbots.
It was only when Gemini Nano's absence on the Google Pixel 8 raised eyebrows that the masses came to learn about the vital importance of RAM capacity for AI on mobile devices. Shortly after, Apple also made it clear that it was keeping Apple Intelligence locked to devices with at least 8GB of RAM.
Micron / Digital Trends
But the "AI phone" picture is not all about memory capacity. How well your phone can execute AI-powered tasks also depends on invisible memory optimizations, as well as the storage module. And no, I'm not just talking about the capacity.
Memory innovations headed to AI phones
Digital Trends sat down with Micron, a global leader in memory and storage solutions, to break down the role of RAM and storage for AI processes on smartphones. The advancements made by Micron should be on your radar the next time you go shopping for a top-tier phone.
The latest from the Idaho-based company includes the G9 NAND mobile UFS 4.1 storage and 1γ (1-gamma) LPDDR5X RAM modules for flagship smartphones. So, how exactly do these memory solutions push the cause of AI on smartphones, apart from boosting capacity?
Let's begin with the G9 NAND UFS 4.1 storage solution. The overarching promise is frugal power consumption, low latency, and high bandwidth. The UFS 4.1 standard can reach peak sequential read and write speeds of 4100 MBps, which amounts to a 15% gain over the UFS 4.0 generation while trimming the latency numbers, too.
Micron / Digital Trends
Another crucial benefit is that Micron's next-gen mobile storage modules go all the way up to 2TB capacity. Moreover, Micron has managed to shrink their size, making them an ideal solution for foldable phones and next-gen slim phones such as the Samsung Galaxy S25 Edge.
Moving over to the RAM advancements, Micron has developed what it calls 1γ LPDDR5X RAM modules. They deliver a peak speed of 9200 MT/s, pack 30% more transistors thanks to a size shrink, and consume 20% less power while at it. Micron has already shipped the slightly slower 1β (1-beta) RAM solution packed inside the Samsung Galaxy S25 series smartphones.
The interplay of storage and AI
Ben Rivera, Director of Product Marketing in Micron's Mobile Business Unit, told me that Micron has made four crucial enhancements atop its latest storage solution to ensure faster AI operations on mobile devices. They include Zoned UFS, Data Defragmentation, Pinned WriteBooster, and Intelligent Latency Tracker.
"This feature enables the processor or host to identify and isolate or 'pin' a smartphone's most frequently used data to an area of the storage device called the WriteBooster buffer (akin to a cache) to enable quick, fast access," explained Rivera about the Pinned WriteBooster feature.
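The pinning behavior Rivera describes can be pictured with a toy cache: entries marked as pinned stay in the small, fast buffer even when ordinary entries get evicted. The sketch below is purely illustrative, under the assumption that the buffer behaves like a capacity-limited cache; the class and method names are invented and have nothing to do with Micron's actual implementation.

```python
from collections import OrderedDict

class PinnedWriteBuffer:
    """Toy model of a WriteBooster-style buffer: a small, fast cache
    in which 'pinned' entries survive eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> data, oldest first
        self.pinned = set()

    def pin(self, key):
        # Mark frequently used data so it is never evicted.
        self.pinned.add(key)

    def put(self, key, data):
        self.entries[key] = data
        self.entries.move_to_end(key)
        # Evict the oldest *unpinned* entries when over capacity.
        while len(self.entries) > self.capacity:
            victim = next((k for k in self.entries if k not in self.pinned), None)
            if victim is None:
                break  # everything left is pinned
            del self.entries[victim]

    def get(self, key):
        # A hit here models the fast path; a miss would fall back
        # to the slower main storage area.
        return self.entries.get(key)

buf = PinnedWriteBuffer(capacity=2)
buf.put("model_weights", b"...")
buf.pin("model_weights")        # hot data: keep it in the fast buffer
buf.put("photo", b"...")
buf.put("video", b"...")        # over capacity: "photo" is evicted, weights stay
```

The point of the sketch is simply that pinning exempts hot data from eviction, which is why frequently accessed AI files keep hitting the fast path.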
Every AI model that seeks to perform on-device tasks (think Google Gemini or ChatGPT) needs its own set of instruction files that are stored locally on the mobile device. Apple Intelligence, for instance, takes up 7GB of storage for all its tricks.
Micron / Digital Trends
To perform a task, you can't commit the entire AI package to the RAM, because the RAM also needs room for handling other critical chores such as calls or interactions with other important apps. To deal with that constraint on the Micron storage module, a memory map is created that loads only the needed AI weights from the storage onto the RAM.
When resources get tight, what you need is fast data swapping and reading. That ensures your AI tasks are executed without affecting the speed of other important tasks. Thanks to Pinned WriteBooster, this data exchange is sped up by 30%, so AI tasks are handled without delays.
So, let's say you need Gemini to pull up a PDF for analysis. The fast memory swap ensures that the required AI weights are quickly shifted from the storage to the RAM module.
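Loading only the needed weights, rather than the whole model, is essentially what memory mapping gives an application: pages are pulled from storage into RAM on demand. Here is a minimal, self-contained sketch using Python's standard mmap module; the file name, file contents, and tensor offset are made up for illustration and do not reflect any real model layout.

```python
import mmap
import os

# Create a stand-in "model file" so the example is self-contained.
# In a real app this would be a multi-gigabyte weights file on flash storage.
with open("toy_weights.bin", "wb") as f:
    f.write(bytes(range(256)) * 16)  # 4 KB of fake weight data

with open("toy_weights.bin", "rb") as f:
    # Map the file without reading it all into RAM. The OS pages in
    # only the regions that are actually touched.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # Hypothetical layout: the block we need lives at offset 1024.
    attention_block = mm[1024:1024 + 64]  # only these pages are faulted in
    mm.close()

os.remove("toy_weights.bin")
```

The same principle is how mobile inference runtimes avoid holding an entire multi-gigabyte model resident in RAM alongside everything else the phone is doing.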
Micron / Digital Trends
Next, we have Data Defrag. Think of it as a desk or cabinet organizer, one that ensures objects are neatly grouped across different categories and placed in their own drawers so that it's easy to find them.
In the context of smartphones, as more data is saved over an extended period of usage, all of it is normally stored in a rather haphazard manner. The net impact is that when the onboard system needs access to a certain kind of file, it becomes harder to gather it all, leading to slower performance.
According to Rivera, Data Defrag not only helps with orderly storage of data, but also changes the route of interaction between the storage and the device controller. In doing so, it enhances the read speed of data by an impressive 60%, which naturally hastens all kinds of user-machine interactions, including AI workflows.
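The cost of scattered data is easy to model: every jump to a non-adjacent block is an extra lookup, while a contiguous run can be read in one pass. A toy comparison, with invented block addresses (this is a simplification for intuition, not a model of Micron's actual defragmentation logic):

```python
def count_seeks(block_addresses):
    """Count how many times a reader must jump to a non-adjacent
    block; contiguous runs are read in a single pass."""
    seeks = 0
    prev = None
    for addr in block_addresses:
        if prev is None or addr != prev + 1:
            seeks += 1  # jump to a new location required
        prev = addr
    return seeks

fragmented = [10, 57, 11, 93, 12, 4]   # same file scattered across storage
defragged = [10, 11, 12, 13, 14, 15]   # same data laid out contiguously

print(count_seeks(fragmented), "vs", count_seeks(defragged))  # 6 vs 1
```

Fewer jumps means less per-request overhead, which is the intuition behind the read-speed gains the defragmentation feature claims.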
Micron / Digital Trends
"This feature can help expedite AI features, such as when a generative AI model, like one used to generate an image from a text prompt, is called from storage into memory, allowing data to be read quicker from storage into memory," the Micron executive told Digital Trends.
Intelligent Latency Tracker is another feature that essentially keeps an eye on lag events and factors that might be slowing down the usual pace of your phone. It subsequently helps with debugging and optimizing the phone's performance so that regular tasks, as well as AI tasks, don't run into speed bumps.
The final storage enhancement is Zoned UFS. This system ensures that data with a similar I/O nature is stored in an orderly fashion. That is crucial because it makes it easier for the system to locate the necessary files, instead of wasting time rummaging through all the folders and directories.
Nadeem Sarwar / Digital Trends
"Micron's ZUFS feature helps organize data so that when the system needs to locate specific data for a task, it's a faster and smoother process," Rivera told us.
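The idea behind zoned storage can be sketched as routing writes into per-type zones so that data with a similar I/O profile lands together. The zone names and classification rules below are invented for the example; they are not Micron's actual ZUFS interface.

```python
from collections import defaultdict

class ZonedStore:
    """Toy zoned store: each write is appended to a zone chosen by the
    data's assumed I/O profile, so similar data stays together."""

    def __init__(self):
        self.zones = defaultdict(list)

    @staticmethod
    def classify(name):
        # Hypothetical classification by access pattern.
        if name.endswith(".log"):
            return "sequential-append"
        if name.endswith((".db", ".sqlite")):
            return "random-update"
        return "cold-read"

    def write(self, name, data):
        zone = self.classify(name)
        self.zones[zone].append((name, data))
        return zone

store = ZonedStore()
store.write("app.log", b"...")          # lands in the sequential zone
store.write("contacts.db", b"...")      # lands in the random-update zone
store.write("model_weights.bin", b"...")  # lands in the cold-read zone
# A lookup now scans one small zone instead of the whole store.
```

Grouping by access pattern is what lets the system go straight to the right region instead of rummaging through everything, which is the benefit Rivera describes.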
Going beyond the RAM capacity
When it comes to AI workflows, you need a certain amount of RAM. The more, the better. While Apple has set the baseline at 8GB for its Apple Intelligence stack, players in the Android ecosystem have moved to 12GB as the safe default. Why so?
"AI experiences are also extremely data-intensive and thus power-hungry. So, to deliver on the promise of AI, memory and storage need to deliver low latency and high performance at the utmost power efficiency," explained Rivera.
With its next-gen 1γ (1-gamma) LPDDR5X RAM solution for smartphones, Micron has managed to reduce the operating voltage of the memory modules. Then there's the all-too-important question of local execution. Rivera says the new memory modules can hum at up to 9.6 gigabits per second, ensuring top-notch AI performance.
Micron says improvements in the Extreme UV (EUV) lithography process have opened the doors not only for higher speeds, but also a healthy 20% jump in energy efficiency.
The road to more private AI experiences?
Micron's next-gen RAM and storage solutions for smartphones are targeted not just at improving AI performance, but also the general pace of your day-to-day smartphone tasks. I was curious whether the G9 NAND mobile UFS 4.1 storage and 1γ (1-gamma) LPDDR5X RAM enhancements would also speed up offline AI processing.
Smartphone manufacturers, as well as AI labs, are increasingly shifting toward local AI processing. That means instead of sending your query to a cloud server where the processing is handled, and then beaming the result back to your phone over an internet connection, the entire workflow is performed locally on your phone.
From transcribing calls and voice notes to processing your complex research material in PDF files, everything happens on your phone, and no personal data ever leaves your device. It's a secure approach that is also faster, but at the same time, it requires beefy system resources. A faster and more efficient memory module is one of those prerequisites.
Can Micron's next-gen solutions help with local AI processing? They can. In fact, they will also speed up processes that require a cloud connection, such as generating videos using Google's Veo model, which still requires powerful compute servers.
"A native AI app running locally on the device would have the most data traffic since not only is it reading user data from the storage device, it's also conducting AI inferencing on the device. In this case, our features would help optimize data flow for both," Rivera tells me.
So, how soon can you expect phones equipped with the latest Micron solutions to land on shelves? Rivera says all major smartphone manufacturers will adopt Micron's next-gen RAM and storage modules. As far as market arrival goes, "flagship models launching in late 2025 or early 2026" should be on your shopping radar.