Welcome to TACO Tuesday! In today’s post, we focus on the newly created National Strategic AI and Defense Advisory Board, featuring some of our usual suspects: Meta, OpenAI, and that paragon of privacy and individual rights, Peter Thiel’s Palantir. This is potentially one of the most destructive federal boards yet, stacked with Broligarch-led companies in thrall to Nick Land, Curtis Yarvin, and the NRx assault on our individual liberties, privacy, and political voice through rapid AI proliferation. Carry on, brave reader: to be informed is to be armed.
When the Trump administration announced the creation of the National Strategic AI and Defense Advisory Board in June 2025, it received barely a ripple of attention in the press. Framed as a necessary step to “safeguard national security and economic competitiveness,” the executive order establishing the board reads like any other technocratic update.
But dig deeper, and a far more troubling pattern emerges, one that should alarm anyone who still believes in democratic checks and balances. The same corporate ecosystem behind the board has also been embedded directly into the military via Detachment 201, aka the Executive Innovation Corps, marking a dramatic fusion of corporate tech, military power, and executive control.
Behind the jargon and national security gloss is an attempt to bypass Congress, hollow out statutory oversight, and concentrate AI governance in an executive-led, privately captured shadow government.
Executive Overreach: Rule by Directive
The advisory board was not created by legislation. It was created by executive order, without hearings, public comment, or a congressional vote. That means:
Its authority flows directly from the president, not from any statutory mandate.
Its recommendations can be adopted through National Security Presidential Memoranda (NSPMs), many of which are classified and not subject to Freedom of Information Act (FOIA) disclosure.
It answers not to the public, but to the National Security Council—a body already exempt from FOIA and judicial review under current interpretation.
In short: Congress was not consulted, and the public has no recourse.
This fits a broader pattern in the Trump administration’s second term: governing by executive instrument, particularly in areas framed as “national security” or “critical technology.” Since January 2025, over two dozen major tech and AI policy shifts have been pushed through via executive order, including:
Expanded Section 702 powers for domestic AI surveillance via the FISA Amendments Act.
A new classified interagency framework for “AI-enhanced civil unrest detection” (reported in a redacted IG memo).
Authorization of Palantir’s 10-year DOGE contract without competitive bidding, under a “national emergency” waiver.
Congress, meanwhile, has struggled to even get briefed. Several Democratic members of the House Homeland Security Committee complained in a closed-door session on June 20, 2025, that they were not informed about the board’s creation or the DOGE data transfer until it had already been finalized.
The Legislative Loopholes They’re Exploiting
The advisory board and its MAGA backers are relying on a handful of legal loopholes that allow extraordinary power with minimal oversight:
The Defense Production Act (DPA) – Originally created during the Korean War, the DPA allows the president to prioritize contracts and allocate resources for “national defense.” Under the Trump administration, the definition of “national defense” has been expanded to include AI infrastructure, enabling Stargate and DOGE’s data centralization under Palantir without Congressional appropriations.
CFIUS Authority Extensions – The 2018 Foreign Investment Risk Review Modernization Act (FIRRMA) expanded CFIUS oversight over “critical technologies.” Now, the board is using that power to argue that domestic AI companies fall under CFIUS review if there is any foreign partnership or investment—effectively chilling cross-border AI collaboration and enabling executive intervention without legislative input.
The National Security Act of 1947 – This Cold War-era statute allows the President to define “national intelligence priorities” and reassign agency resources. It’s being used to shift personnel and budgets from public-facing civil agencies (like NIST and NSF) to military-affiliated tech hubs like DARPA, IARPA, and the DoD’s Chief Digital and AI Office (CDAO), all without new legislation.
Constitutional Red Flags: Privacy, Due Process, and Civil Liberties
Legal scholars are beginning to raise the alarm.
The Electronic Frontier Foundation has already filed FOIA requests regarding the scope of DOGE, the process used to award the Palantir contract, and any coordination with the advisory board. As of June 2025, no documents have been released. The EFF argues that the board’s consolidation of mass behavioral data without consent constitutes a de facto violation of the Fourth Amendment, particularly in light of the Supreme Court’s 2018 Carpenter v. United States decision, which held that individuals have a reasonable expectation of privacy in their historical cell-site location records.
Further legal concerns include:
First Amendment violations if AI is used to preemptively target protest movements, activists, or political organizers based on predictive models.
Fifth Amendment concerns around due process if algorithmic threat scores are used to justify detentions, investigations, or immigration actions without judicial oversight.
The lack of judicial review, since NSPMs and most NSC-directed programs are shielded from court challenge under the current interpretation of the state secrets privilege.
As civil liberties attorney Faiza Patel noted at a recent Brennan Center panel:
“We are witnessing a technocratic drift toward a form of governance where predictive analytics determine suspicion, and that suspicion replaces evidence.”
Where Is Congress?
The short answer: 404 File Not Found.
Many lawmakers, particularly within the House AI Caucus and Senate Judiciary Committee, have expressed concern over AI surveillance, but they lack the votes or public pressure to mount an effective challenge. And with the 2026 midterms looming, few are willing to appear “soft” on national security.
However, some efforts are underway:
Sen. Ron Wyden is preparing an AI Surveillance Transparency Act, which would require public disclosure of any federal agency using AI to make or inform decisions about U.S. citizens.
Rep. Pramila Jayapal has demanded a full investigation into the DOGE contract and Palantir’s access to immigration data.
A bipartisan group led by Sen. Josh Hawley and Sen. Elizabeth Warren is attempting to amend FIRRMA to include stronger civil liberties protections when “critical tech” is used on American soil.
But these efforts face opposition not only from the administration but also from a lobbying blitz by Meta, OpenAI, and Palantir, which collectively spent over $85 million on lobbying in 2024 alone, according to OpenSecrets.org.
Authoritarian Tech Dressed in Patriotism: Detachment 201
Formed earlier this month, Detachment 201 commissions senior tech leaders as Army Reserve lieutenant colonels, including:
Meta CTO Andrew “Boz” Bosworth,
Palantir CTO Shyam Sankar,
OpenAI CPO Kevin Weil,
Former OpenAI researcher Bob McGrew
Tasked with advising on “rapid and scalable tech solutions” as part of the Army Transformation Initiative, they’re embedded within military modernization efforts, from AI-driven logistics to human–machine battlefield integration, without going through typical acquisition processes.
As with the advisory board, there is no transparency about their exact missions or about how conflicts of interest will be managed, despite their companies holding lucrative defense contracts.
But this alignment reveals a deeper ideological convergence: tech accelerationism and NRx (Neoreaction) influence.
Tech leaders (read: douchebags) like Sankar and Bosworth will wear military fatigues to signal a “weapons-grade” patriotism.
Palantir and Peter Thiel’s network explicitly promote “speed, lethality, and data supremacy”, hallmarks of NRx accelerationism.
Embedding industry in government, circumventing oversight, and aligning with nationalist doctrine echoes NRx critiques of democratic stagnation.
Let’s be clear: neither the National Strategic AI and Defense Advisory Board nor Detachment 201 is about protecting America. They are about accelerating a convergence of corporate power and national security prerogatives: what some call algorithmic governance, and what others might just call techno-fascism.
It is a system that doesn’t need stormtroopers or tanks. It needs data, dashboards, and plausible deniability. And it is being built in plain sight, using the tools of national defense and public-private partnerships to evade the checks and balances that were supposed to protect us.
We are living through a constitutional stress test. And if Congress does not reclaim its authority to legislate, oversee, and protect the public interest, these boards may become the first of many. Today they govern AI and surveillance, both civilian and military, through an unholy alliance of commerce and government. Tomorrow they may decide the future of policing, education, voting, and civil protest itself.