OpenAI has made it clear that its flagship AI service, ChatGPT, is not intended for malicious use.

The company has released a report detailing that it has tracked trends of high-risk actors using its platform as it becomes more popular. OpenAI says it has removed dozens of accounts on suspicion of using ChatGPT in unauthorized ways, such as for "debugging code to generating content for publication on various distribution platforms."

The company has also recently announced reaching a 400 million weekly active user milestone. The company detailed that its user base has increased by more than 100 million in less than three months as more enterprises and developers utilize its tools. However, ChatGPT is also a free service that can be accessed globally. As the moral and ethical aspects of its functions have long been in question, OpenAI has had to come to terms with the fact that there are entities that have ulterior motives for the platform.

"OpenAI's policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts," the company stated in its report.

In its report, OpenAI discussed having to challenge nefarious activity taking place on ChatGPT. The company highlighted several case studies in which it uncovered such activity and took action by banning the accounts found to be using the tool with malicious intent.

In one instance, OpenAI detailed an account that wrote disparaging news articles about the US, with the articles being published in Latin America under the guise of a Chinese publication byline. Another case, based in North Korea, was found to be generating resumes and job profiles for fake job applicants. According to OpenAI, the accounts may have been used to fraudulently apply for jobs at Western companies.

OpenAI has confirmed that it has shared its findings with its industry peers, such as Meta, that might unwittingly be affected by the activity happening on ChatGPT.

An ongoing issue

Cybersecurity experts have also long observed bad actors using ChatGPT for nefarious purposes, such as developing malware and other malicious code. These findings date back to early 2023, when the tool was still fresh to the market and OpenAI was first considering introducing a subscription tier to support its high demand.

Such nefarious tasks entail bad actors using the company's API to create ChatGPT alternatives that can generate malware. However, white hat experts have also studied AI-generated malware from a research perspective, discovering loopholes that allow the chatbot to generate nefarious code in small, less noticeable pieces.

IT and cybersecurity professionals were surveyed in February 2023 about the safety of ChatGPT, with many responding that they believed the tool would be responsible for a successful cyberattack within the year. By March 2023, the company had experienced its first data breach, which would become a regular occurrence.