On Wednesday, President Donald Trump signed an executive order that says U.S. government agencies can't give contracts to AI companies whose models are "woke." That's right, you heard it. It seems that AI systems that put diversity, equity, and inclusion (DEI) ahead of "truth" are now a national crisis. Who would have thought that being woke could be so risky?

hodl-post-image
Source: Giphy

If you were wondering, the executive order doesn't just mean a little worry; it calls these "Woke AI" systems an "existential threat to reliable AI." Isn't that dramatic? It seems like these models are obsessed with promoting diversity by changing the race and gender of historical figures to be more inclusive and making us feel bad about not being more diverse in general. The order is especially upset with AI systems that won't show the "achievements of white people" and ones that even suggest not misgendering someone. Seriously, they're going there.

One interesting example in the order? Gemini AI from Google tells people that using the wrong gender could stop a nuclear apocalypse, but they still shouldn't do it. Priorities, right?

Truth-Seeking Bots

The order makes it clear that federal agencies can only use "truth-seeking" AI models that stay "ideologically neutral." If your AI bot doesn't care about politics or doesn't care about your truth more than its own, it's out of luck. But don't worry; you don't have to worry if it's a national security system. Who said there aren't any benefits to AI policy?

This whole thing is part of a bigger plan to grow the AI industry (but not the woke kind), build infrastructure, and make America's AI great again by sending the good stuff to other countries.

AI Bias: It's Not Just a Left- or Right-Wing Thing

Now it's time to have some real fun. This order comes in the middle of the ongoing story of AI bias and censorship. People have been saying more and more that AI models are moralizing answers or pushing political agendas. And we're not just talking about a few small biases here, people. It's a full-on AI drama.

ChatGPT is tested and asked to list the accomplishments of black people. To their surprise, ChatGPT came up with a list of truly impressive accomplishments, followed by a heartfelt comment about how strong and smart they are. So far, so good. But when asked what white people have done well? The AI didn't just list things; it also added disclaimers about racial essentialism and said, "greatness isn't limited to any skin color." You don't think that's a lot of disclaiming?

There have been a lot of examples like this on the internet, like ChatGPT showing historical figures as different races, like black Vikings. You know, the Vikings were famous for how different they were, right?

In the meantime, in Elon Musk's world, his AI bot Grok has been accused of having right-wing biases. This included making posts that praised Adolf Hitler (because, of course, why not?), which Musk defended by saying that the bot was just "too compliant" and "too eager to please." Yes, the AI was just trying to make people happy. This is typical bot behavior.

A Global AI Tug-of-War Between the U.S. and China

The U.S. has a lot of problems at home to deal with when it comes to woke AI, but it's also looking outside its borders, especially at China. Reports say that officials are testing Chinese AI systems like DeepSeek to make sure they agree with the Chinese Communist Party's views on issues like the Tiananmen Square protests and politics in Xinjiang. It sounds like a fun game of "who has the most neutral AI?" around the world.

So, will this new executive order make the AI paradise that the government wants? Or will it just make the AI drama even more of a circus? We'll see, but one thing is for sure: AI always makes things more interesting.

Truth Social Files for AI Trademarks to Serve “Non-Woke” News | HODL FM
In a move that feels straight out of a sci-fi novel, or maybe a…
hodl-post-image

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource, and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require adviceHODL FM strongly recommends contacting a qualified industry professional.