
Even as OpenAI continues to cling to its assertion that the only path to AGI runs through massive financial and energy expenditure, independent researchers are leveraging open-source techniques to match the performance of its most powerful models, and to do so at a fraction of the price.

Last Friday, a combined team from Stanford University and the University of Washington announced that they had trained a math- and coding-focused large language model that performs as well as OpenAI's o1 and DeepSeek's R1 reasoning models. It cost just $50 in cloud compute credits to build. The team reportedly used an off-the-shelf foundation model, then distilled Google's Gemini 2.0 Flash Thinking Experimental model into it. Distilling an AI involves extracting the knowledge needed to perform a specific task from a large model and transferring it to a smaller one.
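The distillation described above can be sketched in miniature. The Stanford/UW team reportedly fine-tuned their student model on reasoning traces generated by the Gemini teacher, but the classic formulation of distillation trains the student to match the teacher's temperature-softened output distribution. The toy example below (made-up logits, a bare logit vector standing in for a student model) illustrates that core idea, not the team's actual pipeline:

```python
import numpy as np

def soften(logits, T):
    """Temperature-scaled softmax: higher T spreads probability mass,
    exposing the teacher's 'dark knowledge' about wrong answers."""
    z = logits / T
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical teacher logits over 4 answer tokens (illustrative numbers).
teacher_logits = np.array([4.0, 1.0, 0.5, 0.2])
T = 2.0
target = soften(teacher_logits, T)

# Toy "student": a raw logit vector trained to match the teacher's
# softened distribution by gradient descent on the KL divergence.
student_logits = np.zeros(4)
lr = 0.5
for _ in range(500):
    p = soften(student_logits, T)
    # Gradient of KL(target || softmax(z/T)) w.r.t. z is (p - target) / T.
    student_logits -= lr * (p - target) / T

kl = float(np.sum(target * np.log(target / soften(student_logits, T))))
print(f"final KL divergence: {kl:.6f}")
```

In a real setup the student's logits come from a neural network and the loss is backpropagated through its weights, but the objective is the same: make the small model's output distribution track the large model's.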

What's more, on Tuesday, researchers from Hugging Face released a competitor to OpenAI's Deep Research and Google Gemini's (also) Deep Research tools, dubbed Open Deep Research, which they developed in just 24 hours. "While powerful LLMs are now freely available in open-source, OpenAI didn't disclose much about the agentic framework underlying Deep Research," Hugging Face wrote in its announcement post. "So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!" It reportedly cost an estimated $20 in cloud compute credits, and required less than 30 minutes, to train.

Hugging Face's model subsequently notched a 55% accuracy on the General AI Assistants (GAIA) benchmark, which is used to test the capabilities of agentic AI systems. By comparison, OpenAI's Deep Research scored between 67% and 73% accuracy, depending on the response methodology. Granted, the 24-hour model doesn't perform quite as well as OpenAI's offering, but it also didn't require billions of dollars and the energy generation capacity of a mid-sized European nation to build.

These efforts follow news from January that a team out of the University of California, Berkeley's Sky Computing Lab managed to train its Sky-T1 reasoning model for around $450 in cloud compute credits. The team's Sky-T1-32B-Preview model proved the equal of the early o1-preview reasoning model release.

As more of these open-source challengers to OpenAI's industry dominance emerge, their mere existence calls into question whether the company's plan of spending half a trillion dollars to build out AI data centers and energy production facilities is really the answer.