The saying “the chickens have come home to roost” may well apply to OpenAI co-founder and chief scientist Ilya Sutskever. Like a personal trainer forced to quit when the weights become too heavy, the computer scientist formally announced his departure barely six months after playing a crucial role in pushing out CEO Sam Altman.
Altman was forced out of OpenAI in November 2023 after the board accused him of not being fully candid in his communications with it. Immediately after his ouster, Microsoft snapped him up. But hell hath no fury like employees scorned: OpenAI staff began baying for the blood of Sutskever and the other board members, threatening to resign en masse and follow Altman to Microsoft unless the entire board stepped down.
Related: OpenAI Co-Founder’s Token Surges 140% in Just Seven Days
Tongue-In-Cheek Comment
The coup d’état against Altman appears to have backfired: he has been reinstated, leaving a bitter taste in the mouths of Sutskever and his fellow board members. Nor is it surprising that Sutskever dragged the other board members down with him, as they too resigned.
For most of the San Francisco tech crowd, it was a matter of when, not if, the scientist who helped create the AI chatbot ChatGPT would step down, since he had ceased to be part of the firm’s decision-making process. Now it’s finally over – for good, perhaps. Offering a tongue-in-cheek comment on the development, CEO Sam Altman said in an X (formerly Twitter) post:
This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend […] His brilliance and vision are well known; his warmth and compassion are less known but no less important.
It will be interesting to watch the direction the artificial intelligence space takes following Sam Altman’s return to the helm of OpenAI. Altman is among the AI luminaries who believe the technology must spread quickly and widely so that humanity can enjoy its benefits, a view most players in Silicon Valley seem to embrace. AI doomsayers (“Doomers”), however, along with some luminaries of the field and high-profile tech billionaires, fear that runaway AI could escape human control and enslave or destroy the human species.
Have Doomers Lost the AI Fight?
And now, like the revolution that eats its own children, Doomers fear that Sutskever has become a victim of the advanced AI they believe is on its way to becoming a “second intelligent species” with which humans must share the Earth. Doomers have long warned that AI could be the “Great Filter” answer to the Fermi Paradox – the step at which intelligent civilizations wipe themselves out.
AI ethicists, for their part, believe AI can fool people and spread lies, and that it risks becoming a tool for discrimination, magnifying humankind’s biases and entrenching them in automated systems. They argue these dangers have grown since the launch of ChatGPT – yet, sadly, the engineers and leading lights raising the alarm are being sidelined or departing in protest.
Leave His Brainchild Behind
In the BBC documentary iHuman, Sutskever said that AI models will “solve all the problems that we have today” but could also create “infinitely stable dictatorships.” More strikingly, he predicted that future AIs would not necessarily want to kill humans but, as they grew ever smarter, would prioritize their own survival. And now Sutskever, who in 2022 expressed excitement about ChatGPT, has had to quit and leave his brainchild behind.
More Info:
- Releasing the Dragon: China’s AI Startups Take on Global Giants
- Project Astra: Google’s Multimodal Answer to OpenAI’s ChatGPT 4o
- Virtual AI Priest Fired from Catholic Faith After Proposing Brother to Sister Marriage
Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice of this sort, HODL FM strongly recommends contacting a qualified industry professional.