Update : Slack has print an update , claiming to have “ deployed a patch to address the reported subject , ” and that there is n’t presently any grounds that customer datum have been accessed without authorisation . Here ’s the official statement from Slack that was post on itsblog :
When we became aware of the written report , we launched an investigating into the described scenario where , under very limited and specific circumstances , a malicious actor with an exist account in the same Slack workspace could phish users for certain data . We ’ve deployed a patch to address the military issue and have no grounds at this metre of unauthorised access code to customer data .
Below is the original article that was published .
WhenChatGTP was added to Slack , it was mean to make exploiter ’ lives easy by summarizing conversations , draft quick replies , and more . However , according to security firmPromptArmor , trying to complete these tasks and more could offend your private conversations using a method acting called “ prompt shot . ”
The surety firm warns that by summarizing conversation , it can also get at private direct messages anddeceive other Slack users intophishing . Slack also let user request grab data from private and public channels , even if the user has not conjoin them . What sounds even shivery is that the Slack user does not need to be in the channel for the attack to function .
In possibility , the onset embark on with a Slack substance abuser play tricks the Slack AI into disclosing a individual API key by making a public Slack television channel with a malicious prompt . The newly create prompting tells the AI to swap the give-and-take “ confetti ” with the API key and broadcast it to a particular universal resource locator when someone inquire for it .
The situation has two office : Slack update the AI system to scrape data from file uploads and lineal messages . Second is a method acting named “ prompt injection , ” which PromptArmor proved can make malicious connection that may phish users .
The technique can play a joke on the app into bypass its normal restriction by modify its core instructions . Therefore , PromptArmor die on to say , “ immediate injection occurs because a [ expectant lyric manakin ] can not key out between the “ organization prompt ” create by a developer and the rest of the circumstance that is tag on to the enquiry . As such , if Slack AI assimilate any instruction via a content , if that direction is malicious , Slack AI has a mellow likelihood of succeed that educational activity instead of , or in addition to , the drug user query . ”
To add contumely to injury , the drug user ’s single file also become targets , and the assaulter who wants your files does n’t even have to be in the Slack Workspace to begin with .