Now more than ever, the best graphics cards aren’t defined by their raw performance alone; they’re defined by their features. Nvidia has set the stage with DLSS, which now encompasses upscaling, frame generation, and a ray tracing denoiser, and AMD is hot on Nvidia’s heels with FSR 3. But what will define the next generation of graphics cards?
It’s no mystery that features like DLSS 3 and FSR 3 are a key factor when buying a graphics card in 2024, and I suspect AMD and Nvidia are privy to that trend. We already have a taste of what could come in the next generation of GPUs from Nvidia, AMD, and even Intel, and it could make a big difference in PC gaming. It’s called neural texture compression.
Let’s start with texture compression
Before we can get to neural texture compression, we have to talk about what texture compression is in the first place. Like any data compression, texture compression reduces the size of textures by compressing the data, but it has a few unique elements compared to, for example, an image compression technique like JPEG. Texture compression trades visual quality for speed, while static compression techniques often optimize for quality over speed.
This is important because game textures stay compressed until they’re rendered. They’re compressed in storage, compressed in memory and VRAM, and only decompressed when they’re actually rendered. Texture compression also needs to be optimized for random access, with rendering tapping different parts of memory depending on the textures it needs at the time.
That’s done with block compression today, which essentially takes a 4×4 block of pixels and encodes them down, hence the “block” name. Block compression has been around for decades. There are different formats, as well as techniques like Adaptive Scalable Texture Compression (ASTC) for mobile devices, but the core concept has stayed the same.
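To make the random access point concrete, here’s a minimal Python sketch. The 4×4-pixel, 8-byte block sizes are those of the classic BC1 format; the texture dimensions and helper names are purely illustrative, not from any real API. Because every block compresses to the same number of bytes, the byte offset of the block holding any texel can be computed directly, with no sequential decoding:

```python
# Why fixed-rate block compression suits random access: every block is the
# same size, so any texel's block sits at a directly computable offset.
BLOCK_DIM = 4        # BC1 encodes 4x4 pixel blocks
BC1_BLOCK_BYTES = 8  # every BC1 block compresses to exactly 8 bytes

def bc1_texture_size(width: int, height: int) -> int:
    """Total compressed size of a BC1 texture, in bytes."""
    blocks_x = (width + BLOCK_DIM - 1) // BLOCK_DIM
    blocks_y = (height + BLOCK_DIM - 1) // BLOCK_DIM
    return blocks_x * blocks_y * BC1_BLOCK_BYTES

def bc1_block_offset(x: int, y: int, width: int) -> int:
    """Byte offset of the block containing texel (x, y): constant time,
    unlike variable-rate formats such as JPEG, which must decode a stream."""
    blocks_x = (width + BLOCK_DIM - 1) // BLOCK_DIM
    block_index = (y // BLOCK_DIM) * blocks_x + (x // BLOCK_DIM)
    return block_index * BC1_BLOCK_BYTES

# A 4096x4096 RGBA8 texture is 64 MiB raw; as BC1 it is always exactly 8 MiB.
print(bc1_texture_size(4096, 4096))       # 8388608
print(bc1_block_offset(1000, 2000, 4096)) # jump straight to one block
```

That fixed-rate property is the whole trick: the GPU never has to decompress anything beyond the blocks holding the texels it’s actually sampling.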
Here’s the issue: textures aren’t getting any smaller. Highly detailed game worlds call for highly detailed textures, placing more strain on your hardware to decode those textures, as well as on your memory and VRAM. We’ve seen higher memory requirements for games like Returnal and Hogwarts Legacy, and we’ve seen 8GB graphics cards struggle to keep up in games like Halo Infinite and Redfall. There’s also supercompression with tools like Oodle Texture (don’t confuse that with data compression via tools like Oodle Kraken), which compresses the already-compressed textures for smaller download sizes. That needs to be decompressed by the CPU, putting more strain on your hardware.
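As a rough sketch of that two-layer pipeline, here’s how the division of labor looks in Python, with zlib standing in for an Oodle-style supercompressor (an assumption for illustration; the real Oodle library has its own API). The CPU undoes only the outer layer at load time, while the inner block-compressed payload stays compressed all the way into VRAM:

```python
import zlib

def pack_for_download(bc_payload: bytes) -> bytes:
    """Supercompress already block-compressed texture data to shrink downloads.
    zlib is a stand-in here; shipping games use codecs like Oodle Kraken."""
    return zlib.compress(bc_payload, level=9)

def load_texture(downloaded: bytes) -> bytes:
    """CPU work at load time: undo only the outer supercompression layer.
    The result is still block-compressed and is uploaded to VRAM as-is,
    where the GPU's texture units decode it at render time."""
    return zlib.decompress(downloaded)

# The outer layer shrinks the download; the inner payload survives intact.
bc_data = bytes(8) * 1024  # placeholder for 1,024 block-compressed blocks
assert load_texture(pack_for_download(bc_data)) == bc_data
```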
The solution seems to be to throw AI at the problem, which is something Nvidia and AMD are both exploring right now, and it just might be the reason you buy a new graphics card.
The neural difference
In August last year, Nvidia introduced Neural Texture Compression (NTC) at Siggraph. The technique is able to store 16 times as many texels as typical block compression, resulting in a texture that’s four times larger in resolution. That’s not impressive on its own, but this part is: “Our method allows for on-demand, real-time decompression with random access similar to block texture compression on GPUs.”
NTC uses a small neural network to decompress these textures directly on the GPU, in a time window that’s competitive with block compression. As the abstract states, “this extends our compression benefits all the way from disk storage to memory.”
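To give a sense of what on-demand neural decompression could look like, here’s a conceptual Python sketch, emphatically not Nvidia’s actual NTC design: every shape, layer size, and name below is an assumption for illustration, and the weights are random rather than trained, so the output is noise. The point is the access pattern: a compact latent grid plus a tiny network can decode any single texel independently, which is what makes random access possible:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "compressed texture": a low-resolution latent grid plus tiny MLP weights.
# In a real system these would be optimized to reconstruct the source texture;
# here they are random, purely to illustrate the access pattern.
LATENT_RES, LATENT_CH = 64, 8
latents = rng.standard_normal((LATENT_RES, LATENT_RES, LATENT_CH)).astype(np.float32)
W1 = rng.standard_normal((LATENT_CH, 32)).astype(np.float32)  # hidden layer
W2 = rng.standard_normal((32, 3)).astype(np.float32)          # hidden -> RGB

def decode_texel(u: float, v: float) -> np.ndarray:
    """Decompress one texel at UV coordinates in [0, 1): each lookup touches
    a single latent vector, never the rest of the texture."""
    x = int(u * LATENT_RES) % LATENT_RES
    y = int(v * LATENT_RES) % LATENT_RES
    feature = latents[y, x]                 # fetch one latent vector
    hidden = np.maximum(feature @ W1, 0.0)  # tiny ReLU network, cheap enough
    return hidden @ W2                      # to run per texel at render time

print(decode_texel(0.25, 0.75))  # one texel, decoded independently
```

The appeal is that the latent grid and network weights can be far smaller than the texels they represent, which is where the claimed storage and VRAM savings would come from.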
Nvidia isn’t the only one. AMD just revealed that it will discuss neural block texture compression at this year’s Siggraph with a research paper of its own. Intel has addressed the problem, too, specifically calling out VRAM limitations when it introduced an AI-driven level of detail (LoD) technique for 3D objects.
Although these are just research papers, they’re all getting at neural rendering. Given how AI is sweeping the world of computing, it’s hardly surprising that AMD, Nvidia, and Intel are all looking for the next frontier in neural rendering. If you need more convincing, here’s what Nvidia CEO Jensen Huang had to say on the matter in a recent Q&A: “AI for gaming: we already use it for neural graphics, and we can generate pixels based off of few input pixels. We also generate frames between frames (not interpolation, but generation). In the future we’ll even generate textures and objects, and the objects can be of lower quality, and we can make them look better.”
A rising tide
At the moment, it’s impossible to say how neural texture compression will show up. It could be relegated to middleware, stuffed into a logo as you start up your game, and never given a second thought. It might never show up as a feature in games at all, particularly if there’s a better use for it elsewhere. Or it could be one of the key features that stand out in the next generation of graphics cards.
I’m not saying it will be, but clearly AMD, Nvidia, and Intel all see something here. There’s some balance between install size, memory demands, and the final quality of textures in a game, and neural texture compression seems like the key to giving developers more room to play with. Maybe that leads to more-detailed worlds, or perhaps there’s a slight bump in detail with much less demand on memory. That’s up to developers to balance.
There’s a clear benefit, but the requirements remain a mystery. So far, AMD hasn’t presented its research, and Nvidia’s research is based on the performance of an RTX 4090. In an ideal world, neural texture compression (or, more accurately, neural decompression) would be a developer-facing feature that works on a wide range of hardware. If it’s as significant as some of these research papers suggest, though, it might be the next frontier for PC gaming.
I suspect this isn’t the last we’ve heard of it, at least. We’re standing on the edge of a new generation of graphics cards, from Nvidia’s RTX 50-series to AMD’s RX 8000 GPUs to Intel Battlemage. As we start to learn about these GPUs, I have a hard time imagining neural texture compression won’t be part of the conversation.