At a recent event meant to showcase the AI Action Plan, President Donald Trump veered into his ongoing grievances about “wokeness.” He criticized AI organizations that operated under the Biden administration for hiring what he called “woke people,” declaring it “so uncool to be woke.” He also claimed that AI models have been tainted with partisan bias, specifically citing “critical race theory.” To combat this, he signed an executive order titled “Preventing Woke AI in the Federal Government,” which directs federal agencies to avoid purchasing models that sacrifice truthfulness and accuracy for ideological agendas.
Anyone watching the intersection of politics and technology will recognize the move for what it appears to be: an attempt by the Trump administration to use federal purchasing power to push AI companies toward its political narratives, influence that could extend well beyond government applications into consumer products and services.
The executive order says federal agencies should only procure AI models that deliver “truthful information” and maintain “historical accuracy, scientific inquiry, and objectivity,” remaining neutral and free of ideological frameworks such as diversity, equity, and inclusion (DEI). The order supplies its own definition of DEI in this context:
The suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.
(Historically, DEI has been associated with civil rights and social justice initiatives, prior to its recent politicization as a target for Trump and his supporters.)
The Office of Management and Budget has been tasked with providing further guidance within 120 days.
The order’s practical implications remain to be seen, but it raises questions about its effect on the broader landscape of large language models (LLMs). Although the order disclaims any intent to regulate AI models in the private marketplace, the major U.S. LLM developers all pursue government contracts, and the line between government-funded and consumer models is blurry enough that the order could shape how these companies build their products across the board.
Trump’s expansive definition of DEI has already manifested in other policy areas, leading to changes such as the removal of indigenous and women-related signage in national parks and renaming military vessels that honor LGBTQ+ figures. Even LLMs that aim for a neutral presentation might face scrutiny unless they adapt their offerings accordingly.
There’s not a hard wall between AI for government and everything else
Companies may need to devote resources to building versions of their tools that fit the administration’s worldview, assuming the government even treats those products as separate from mainstream offerings. Tuning LLMs to satisfy political expectations is a costly and labor-intensive endeavor, especially given how fluid a concept like Trump’s “DEI” is. And the stakes are high: OpenAI and xAI each recently secured $200 million defense contracts, with more opportunities likely to flow from the new AI plan.
The incentives, then, push companies to realign their LLMs to appease the Trump administration, which would amount to exactly the kind of ideological manipulation the president claims to oppose.
While the executive order purports to aim for the dissemination of “accurate” and “objective” information, Rumman Chowdhury, co-founder of Humane Intelligence, argues that creating an AI devoid of ideological bias is challenging, if not impossible. Notably, the executive order critiques a past incident where Google faced backlash for an overzealous pro-diversity filter in its models, yet it does not address the documented biases that led to that corrective action.
The debate also extends to ethical judgments: one AI model asserted that users should not misgender individuals even in dire scenarios, a position rooted in moral reasoning rather than factual error. Similar scrutiny, however, has not been applied to models like xAI’s Grok, which has made controversial statements about historical events.
LLMs can and do generate false information, with consequences as serious as misidentifying people or fabricating data. The executive order does not address these pressing problems; instead, it continues Trump’s broader approach to “DEI” across other sectors, pushing institutions away from recognizing transgender people and systemic inequalities.
AI systems have historically been trained on datasets that mirror societal biases, producing outputs that hardly conform to Trump’s notion of “woke.” Study after study has documented bias in AI model outputs, and companies aiming for fairness routinely have to intervene to recalibrate their systems, work this executive order will likely make harder.
The executive order effectively instructs developers to revise how their models treat important social topics, including racial and gender inequality. Although it claims to target only developers who intentionally embed partisan judgments in their models, its broader effect is to create an environment where political scrutiny discourages transparency about critical societal issues.
Trump’s actions reveal a commitment to curating cultural narratives, as evidenced by his administration’s push against media opposing his viewpoints, academic disciplines critiquing societal structures, and organizations promoting diverse storytelling. The tech arena is increasingly recognized as a crucial battleground for shaping future cultural landscapes, with Trump aiming to embed his political ideologies into foundational AI technologies.