Applied commerce
Please review each article prior to use: grade-level applicability and curricular alignment might not be obvious from the headline alone.
Farmers again caught in geopolitical crossfire
Show her the money
International Women’s Day spotlight on invisible work
It takes a village to raise AI responsibly
Anthropic, maker of the popular Claude artificial intelligence model, has been facing heat from the U.S. government over the ethics of military AI. Thanks to its safety-first approach, Anthropic’s model was considered the best available and was approved for use on classified military networks. The company signed a lucrative contract with the Pentagon, and its AI was integrated into military systems. Sounds ominous, for sure.
But the contract specified that the AI could not be used for fully autonomous weapons systems that kill targets without human judgment, or for mass domestic surveillance of Americans. The Pentagon pushed back against these restrictions, even though it had signed the contract on those terms, insisting that the AI could be used for “all lawful purposes,” and quickly sought to punish Anthropic for not capitulating to its demands.
Anthropic stood by its guardrails, both on principle and on contract, resisting the dangerous use of AI at the risk of losing government contracts and drawing punishment from the autocratic regime. In solidarity, Sam Altman of OpenAI, Google’s AI division (Gemini) and others have affirmed that such guardrails are necessary in a safe and democratic society. It is good news that there are red lines AI should not cross, and that the companies themselves are willing to defend them.
But what struck me about this battle was a statement from an Anthropic executive in response to the Pentagon’s demands, which read: “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.” This defence draws a clear line around the limits of the AI model, grounded in the deep understanding of its abilities that only the technology’s creator can have. That understanding becomes apparent when you look at how the model was developed.