Lobbying is just bribery wrapped in some nice legal verbiage.
Marketing 101! These systems always have the same mechanics:
1. Legally driven, not ethically driven
2. Use money to fund lobbying (cough cough)
3. Adjust the laws to suit!
This was the discussion in the P&N Tesla thread earlier this year about the political blowback & the lack of PR crisis management. Tesla stock has since recovered & is back up to $429 USD, its owner's net worth is up to $482 billion, and he just secured a $1 trillion bonus package. From an operational mechanics standpoint, it's not about public ethics; it's about game navigation within the framework of capitalism, where you are free to adjust the rules of the game based on your funding abilities. Case in point:
The US Government’s Use of Grok AI Undermines Its Own Rules
Executive Order 14319, “Preventing Woke AI in the Federal Government,” mandates that all government AI systems be “truth-seeking, accurate, and ideologically neutral.” OMB’s corresponding guidance memos (M-25-21 and M-25-22) go further: they require agencies to discontinue any AI system that cannot meet those standards or poses unmitigable risks.
Grok fails these tests on nearly every front. Its outputs have included Holocaust denial, climate misinformation, and explicitly antisemitic and racist statements. Even Elon Musk himself has described Grok as “very dumb” and “too compliant to user prompts.” These are not isolated glitches. They are indicators of systemic bias, poorly curated and troll-laden training data, inadequate safeguards, and dangerous deployment practices.
In Senate testimony, White House Science Adviser Michael Kratsios acknowledged that such behavior directly violates the administration’s own executive order. When asked about Grok’s antisemitic responses and ideological training, Kratsios agreed that such outputs “obviously aren’t true-seeking and accurate” and are “the type of behavior” the order sought to avoid.
That acknowledgment should have triggered a pause in deployment. Instead, the government expanded Grok’s footprint to every agency. This contradiction, banning “biased AI” on paper while deploying a biased AI system in practice, undermines both the letter and the spirit of federal AI policy.
OpenAI is under a lot of pressure:
1. The entire world is coming to eat its lunch, especially China. This incentivizes them to take the safety rails off.
2. There is no way they can generate the zillions in revenue needed to cover the current investments, but that's simply how every "gold rush" works (re: the dot-com bust of the early 2000s).
3. From a corporate perspective, "asking forgiveness instead of asking permission!" (re: Sora 2's initial lack of copyright guardrails) is sort of the go-to policy, because the worst government response simply requires paying a fee (re: Anthropic paid out a $1.5 billion settlement in order to secure $300 billion in funding).
AI is like guns: at this point, it simply exists; what matters is how people choose to use it as a tool. Guns can be used for hunting, fun, and self-protection, as well as for murder or in a war of aggression. I spend roughly 50% of my time with AI these days; it lets me do incredible things, but the same technology is also used poorly by others:
AI-Enabled Killing (www.habtoorresearch.com): “The tools of twenty-first-century warfare are no longer confined to conventional weapons such as missiles, tanks, and aircraft. They have expanded to encompass cloud-computing platforms, artificial-intelligence systems, and data-processing capabilities developed and managed by major U.S.-based...”
However, the fundamental danger inherent in these systems lies in their heavy dependence on data that may be biased or flawed, the well-known principle of “garbage in, garbage out.” Reports indicate that the digital tools employed by the Israeli military in Gaza rely on inaccurate data, rough estimates, and systematic biases, thereby exposing civilians to extreme risk. When algorithms such as “Lavender” are trained on loosely defined, politically motivated threat classifications (for instance, equating legitimate Palestinian human rights organisations with “terrorist groups”), their outputs inevitably replicate and amplify those biases, heightening the likelihood of erroneous and unlawful targeting of non-combatants. Consequently, the immense computational capacities of cloud infrastructure function as accelerants of destruction, magnifying the consequences of embedded AI errors and generating systematic, large-scale harm to civilians.
It's always been the same sad story throughout recorded history; it's just more visible & more powerful thanks to modern communication & technology.