Ex-OpenAI employees William Saunders and Daniel Kokotajlo have written a letter to California Gov. Gavin Newsom arguing that the company's opposition to a state bill that would impose strict safety guidelines and protocols on future AI development is disappointing but not surprising.
"We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing," Saunders and Kokotajlo write. "But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems."
The two argue that further development without sufficient guardrails "poses foreseeable risks of catastrophic harm to the public," whether that's "unprecedented cyberattacks or assisting in the creation of biological weapons."
The duo was also quick to point out OpenAI CEO Sam Altman's hypocrisy on the topic of regulation. They point to his recent congressional testimony calling for regulation of the AI industry but note that "when actual regulation is on the table, he opposes it."
Per a 2023 survey by the MITRE Corporation and the Harris Poll, only 39 percent of respondents believe that today's AI tech is "safe and secure."
The bill in question, SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would, "among other things, require that a developer, before beginning to initially train a covered model … comply with various requirements, including implementing the capability to promptly enact a full shutdown … and implement a written and separate safety and security protocol." OpenAI has suffered multiple data leaks and system intrusions in recent years.
OpenAI reportedly strongly disagrees with the researchers' "mischaracterization of our position on SB 1047," as a spokesperson told Business Insider. The company instead argues that "a federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the US to lead the development of global standards," OpenAI's Chief Strategy Officer Jason Kwon said in a letter to California state Sen. Scott Wiener in February.
Saunders and Kokotajlo counter that OpenAI's push for federal regulations is not in good faith. "We cannot wait for Congress to act; they've explicitly said that they aren't willing to pass meaningful AI regulation," the pair write. "If they ever do, it can preempt CA legislation."
The bill has found support from a surprising source as well: xAI CEO Elon Musk. "This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill," he wrote on X on Monday. "For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public." Musk, who recently announced the construction of "the most powerful AI training cluster in the world" in Memphis, Tennessee, had previously threatened to move the headquarters of his X (formerly Twitter) and SpaceX companies to Texas to escape industry regulation in California.