Everyone has been talking about ChatGPT’s new image-generation feature lately, and it seems the excitement isn’t over yet. As always, people have been poking around inside the company’s apps, and this time they’ve found mention of a watermark feature for generated images.

Spotted by X user Tibor Blaho, the line of code image_gen_watermark_for_free seems to suggest that the feature would only slap watermarks on images generated by free users, giving them yet another incentive to upgrade to a paid subscription.

This isn’t the first time OpenAI has played around with the idea of watermarking, however. Reports last year revealed that the company had developed a tool for watermarking AI-generated text as well, but ultimately didn’t release it.

AI watermark for images. Google

This decision was criticized by many, since it appeared to put profits ahead of responsible behavior: watermarking would help keep AI-generated content from appearing in places it shouldn’t be, but if people can’t use the product freely, they’re more likely to lose interest in it.

Watermarking images generated by free users, on the other hand, could have a positive impact on profits if handled right. There’s no saying yet whether this feature will see the light of day, but users could have plenty to say about it if it does.

The most important thing is how these watermarks look. The term itself suggests text or a logo would be overlaid on top of the image, but that might not be the case. Google’s watermarking system for AI images, for example, isn’t visible to humans. It tweaks a small number of pixels to create a pattern for watermark detection tools to find.

This kind of approach is useful on multiple fronts: it doesn’t spoil the image for human viewers, and it also makes the watermark hard to remove by cropping, adjusting, or photoshopping the image.
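To make the general idea concrete, here is a minimal, purely illustrative Python sketch of an invisible watermark: it hides a short bit pattern in the least significant bits of a few pixel values, changing each value by at most 1, so a detector can find the mark while a viewer sees no difference. The PATTERN constant and both functions are hypothetical, and unlike Google’s SynthID or whatever OpenAI might ship, this toy scheme would not survive cropping or editing; real systems spread a much more robust signal across the whole image.

```python
import numpy as np

# Hypothetical 8-bit mark; real watermarking systems use far more sophisticated signals.
PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Hide PATTERN in the least significant bits of the first few pixel values."""
    marked = image.copy()
    flat = marked.reshape(-1)  # flat view into the copy
    # Clear each target LSB, then write the pattern bit; each pixel value shifts by at most 1.
    flat[:PATTERN.size] = (flat[:PATTERN.size] & 0xFE) | PATTERN
    return marked

def detect_watermark(image: np.ndarray) -> bool:
    """Return True if the least significant bits of the first pixels match PATTERN."""
    flat = image.reshape(-1)
    return np.array_equal(flat[:PATTERN.size] & 1, PATTERN)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(img)
    print(detect_watermark(marked))  # True
    # Maximum per-pixel change is 1, far below anything a human eye could notice.
    print(int(np.abs(marked.astype(int) - img.astype(int)).max()))
```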

It would certainly keep free users happy, but on the other hand, the decision to remove it for paid users would seem strange. If the images look the same either way, then the only benefit paid users would get from losing the watermark is the ability to try to pass their AI-generated images off as human-made or as real photographs. And if that doesn’t sound dodgy, I don’t know what does.

This is all speculation, though: there are no details available about this possible feature, and OpenAI might never release it anyway. Perhaps if enough people ask Sam Altman about it on X, he’ll respond.