AI Moonshot!
How to Take AI Back from Big Tech and Build Something Safer, Fairer, and Actually Useful
We’ve let a handful of companies turn AI into a toll road, frittering away trillions of dollars on only partially usable models that are prone to errors, proprietary, and far too expensive to operate. We need a public option for AI—open rails, worker-first rules, and guardrails with teeth. One where innovation is prized over fundraising, where honest capability assessments can be performed and published, and where real use cases with real solutions can be built, deployed, and used for far less money and fewer resources than are currently required. In short: a public-private moonshot. AI for the rest of us, not just the Hateful 8 (Altman, Bezos, Huang, Musk, Nadella, Pichai, Thiel, & Zuckerberg). With that, on with the show…
Why we need a public option for AI (right now)
AI is rapidly becoming essential infrastructure, just as roads, power, and the internet did before it. But today’s rails are owned and tuned by a few firms whose incentives (ad arbitrage, enclosure, dependency) don’t line up with the public interest. We can do better: build a public-private AI moonshot that couples open systems and public compute with enforceable safety standards and worker protections, so the gains accrue to all of us, not just the Broligarchs.
We’re not starting from zero. The U.S. National AI Research Resource (NAIRR) Pilot already connects public researchers and educators to compute, data, and models; it launched in January 2024 and runs for two years. That’s a template for scaling public compute beyond academia into cities, small businesses, and civil society. (NSF - National Science Foundation)
We also have robust, widely adopted safety and ethics frameworks we can operationalize: the NIST AI Risk Management Framework (AI RMF) in the U.S., the OECD AI Principles (updated 2024), and UNESCO’s global ethics recommendation. These give us interoperable scaffolding for risk controls, documentation, and oversight. (NIST, NIST Publications, OECD AI, UNESCO)
And regulation isn’t a blank slate: the EU AI Act has a risk-based approach with regulatory sandboxes due across Member States by Aug 2026; in the U.S., the 2023 AI Executive Order and the creation of the U.S. AI Safety Institute (AISI) at NIST established safety testing and model evaluation efforts that we can plug into. (Digital Strategy, Artificial Intelligence Act, The White House, Federal Register, NIST)
The moonshot, in one sentence
Build a public-private “AI public option”: shared public compute, open models and data trusts, interoperable APIs, and strong guardrails for safety and labor—delivered through local institutions (universities, libraries, workforce boards) and backed by federal/state procurement to guarantee demand.
The 7 pillars (and how we’d actually do them)
1) Public compute, not private chokepoints
Scale NAIRR into a permanent Public Compute Service: state and municipal “AI zones” piggyback on DOE/NSF resources (e.g., exascale systems at Oak Ridge) and regional clouds. Offer at-cost compute to SMEs, schools, hospitals, and co-ops with transparent queues. (NSF - National Science Foundation, The Department of Energy's Energy.gov)
Tie access to safety & transparency: recipients adhere to NIST AI RMF profiles and disclose model cards, data documentation, and evaluation results to the AISI. (NIST Publications, NIST)
2) Open models with reproducible science
Fund fully open foundation models (weights, code, data recipes, evals) through public grants; prioritize projects like AI2’s OLMo/OLMo 2 that publish training pipelines and checkpoints so others can inspect, verify, and improve. (Allen Institute for AI)
Require retrieval-grounding by default for high-risk use cases (so outputs cite sources), building on open RAG research. (Allen Institute for AI)
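To make “grounded by default” less abstract, here’s a minimal sketch of the pattern: retrieve citable passages first, then constrain the model to answer only from them. The toy corpus, the keyword scoring, and the prompt wording below are hypothetical placeholders, not AI2’s (or anyone’s) actual pipeline.

```python
# Minimal sketch of "retrieval-grounding by default": the model may only answer
# from retrieved, citable passages. The toy corpus, keyword scoring, and prompt
# wording are illustrative placeholders, not any specific project's pipeline.

def retrieve(query: str, corpus: dict[str, str], k: int = 3) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap; return the top-k (doc_id, text) pairs."""
    terms = set(query.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Build a prompt that requires the model to cite source IDs for every claim."""
    sources = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer ONLY from the sources below, citing each claim as [source-id]. "
        "If the sources do not cover the question, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

# Usage: hand the prompt to any open model; answers then arrive with citations attached.
corpus = {
    "transit-faq-01": "Route 7 buses run every 15 minutes on weekdays.",
    "transit-faq-02": "Reduced fares are available for riders 65 and older.",
}
question = "How often does Route 7 run?"
print(grounded_prompt(question, retrieve(question, corpus)))
```

A production retriever would swap the naive keyword overlap for real ranking, but the contract stays the same: no source, no claim.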
3) Ethical guardrails with teeth (not vibes)
Standardize risk controls using the NIST AI RMF and align with OECD and UNESCO norms for transparency, fairness, and human oversight; publish “assurance profiles” that map controls to sector risks. (NIST Publications, OECD AI, UNESCO)
Regulatory sandboxes for civic and health uses operate under EU-style rules: faster iteration, real-world guardrails, mandatory documentation, and exit criteria. (Digital Strategy, Artificial Intelligence Act)
Create a Model & Use Registry at AISI for any model used in safety-critical or rights-impacting contexts (employment, housing, credit, public benefits) with public evals and incident reporting. (NIST)
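What would one record in that registry actually hold? A rough sketch follows; the field names are illustrative assumptions, not an AISI or NIST schema.

```python
# Rough sketch of a single Model & Use Registry record; field names are
# illustrative assumptions, not an AISI or NIST schema.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    model_id: str                  # e.g., "benefits-triage-v2" (hypothetical)
    deployer: str                  # agency or vendor operating the system
    use_context: str               # "employment", "housing", "credit", "public benefits", ...
    rmf_profile: str               # which NIST AI RMF profile the deployment maps to
    eval_reports: list[str] = field(default_factory=list)   # links to published evaluations
    incidents: list[str] = field(default_factory=list)      # public incident-report IDs

entry = RegistryEntry(
    model_id="benefits-triage-v2",
    deployer="Example County Human Services",
    use_context="public benefits",
    rmf_profile="rights-impacting decision support",
    eval_reports=["evals/2025-001-bias-audit"],
)
print(entry)
```

The point isn’t the schema; it’s that evals and incidents live next to the deployment record, in public.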
4) Worker protections baked in, not bolted on
Adopt the U.S. Department of Labor’s AI Principles in public procurement: human oversight, notice to workers, bargaining obligations in union shops, and bans on AI uses that undermine existing labor law. (AIHA, DOL, Privacy + Security Academy)
In employment decisions, follow NYC’s AEDT rule: bias audits before deployment, public posting of audit summaries, and applicant notice. Make it the default standard for any public contractor nationwide. (NYC Government, Greenberg Traurig)
Encourage state-level consumer AI protections à la Colorado SB24-205—impact assessments, governance duties, and duties to mitigate risk. (Colorado General Assembly, NAAG)
5) Interoperability and the right to exit
Mandate data portability and API interoperability in any publicly funded system to prevent “AI enclosures.” Doctorow’s “enshittification” isn’t a meme; it’s what happens when exit gets expensive. We can reverse it with hard interoperability and right-to-repair. (Medium, Wikipedia, bhgs.dca.ca.gov, LegiScan)
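Concretely, “right to exit” means any publicly funded system can dump your records in a documented, open format that a competitor can import. A toy sketch, with the bundle name and record shape invented for illustration:

```python
# Toy sketch of a data-portability export: every record a user owns leaves the
# system in a documented, open format (here, plain JSON). The format name and
# record shape are invented for illustration.
import json

def export_user_data(user_id: str, records: list[dict]) -> str:
    """Serialize one user's records into a versioned, self-describing JSON bundle."""
    bundle = {
        "format": "aipon-export",   # hypothetical format name
        "version": "0.1",
        "user_id": user_id,
        "records": records,
    }
    return json.dumps(bundle, indent=2)

# Usage: another provider only needs the published schema to import this bundle.
print(export_user_data("user-123", [{"type": "chat", "created": "2025-01-01", "text": "hello"}]))
```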
6) Data trusts and civic datasets
Stand up sectoral data trusts (health, transit, climate, small business) governed by communities and researchers, with opt-in rules, purpose limits, and revenue-sharing when data powers commercial systems. EU sandboxes provide an implementation path; NAIRR provides hosting. (Digital Strategy, NSF - National Science Foundation)
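In code terms, “opt-in rules and purpose limits” are just checks that run before any data leaves the trust. A hypothetical sketch; the consent records and purpose names are invented for illustration:

```python
# Hypothetical sketch of a data trust's purpose-limit gate: data is released only
# if the contributor opted in AND the requested purpose matches their consent.
# Consent records and purpose names are invented for illustration.

CONSENTS = {
    "patient-42": {"opted_in": True, "purposes": {"public-health-research"}},
    "patient-77": {"opted_in": False, "purposes": set()},
}

def may_release(contributor_id: str, requested_purpose: str) -> bool:
    """Return True only when the contributor opted in for this specific purpose."""
    consent = CONSENTS.get(contributor_id)
    return bool(consent and consent["opted_in"] and requested_purpose in consent["purposes"])

print(may_release("patient-42", "public-health-research"))   # True
print(may_release("patient-42", "ad-targeting"))              # False: purpose not consented
print(may_release("patient-77", "public-health-research"))    # False: never opted in
```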
7) Procurement as the flywheel
Make the public option real with demand guarantees: cities, states, and agencies commit a share of AI procurement to open systems that meet the registry + RMF criteria. Pair with federal matches (akin to CHIPS-style incentives) to de-risk local adoption. (NIST)
Governance: who runs this and how it stays trustworthy
The AI Public Option Network (AIPON): a federated consortium of universities, public libraries, DOE/NSF compute centers, and workforce boards that operate regional nodes under shared rules (RMF profiles, model/use registry reporting, labor standards). Independent oversight by civil-society boards. (NSF - National Science Foundation, NIST Publications)
Safety & evaluation: the U.S. AI Safety Institute coordinates test methods and red-teaming across nodes; collaborates with the UK AISI and EU bodies for cross-border benchmarks. (NIST, GOV.UK, AI Security Institute)
Public dashboards: publish compute allocations, model cards, incidents, and audit summaries so communities can see who’s using what and to what effect (and can veto harmful deployments). (NIST)
What this delivers in 12–18 months
Lower costs for SMEs and co-ops: at-cost public compute and interoperable APIs cut dependence on vertically integrated platforms. (NAIRR-style access expanded beyond academia.) (NSF - National Science Foundation)
Safer hiring & HR: NYC-style bias audits and registries prevent black-box hiring and improve due-process rights for applicants and employees. (NYC Government)
Transparent models: open models (e.g., OLMo 2) with reproducible training let hospitals, schools, and newsrooms verify behavior and adapt to local needs. (Allen Institute for AI)
Reduced “platform tax”: interoperability + right-to-repair rules ease switching and maintenance, pushing incumbents toward service quality instead of lock-in. (Medium, bhgs.dca.ca.gov)
Policy checklist you can send to your mayor/governor/rep
Fund a regional AIPON node (partner with the state flagship university + library system; plug into NAIRR). (NSF - National Science Foundation)
Adopt NIST AI RMF profiles for all public AI procurement; publish assurance profiles and impact assessments. (NIST Publications)
Require NYC-style bias audits for any public-sector hiring tool and any vendor that sells into the region. (NYC Government)
Pass right-to-repair + interoperability ordinances for public-facing AI systems and devices. (CA SB-244 is a working model.) (bhgs.dca.ca.gov, LegiScan)
Stand up sectoral data trusts (health, transit, climate) with public governance; prefer EU-style regulatory sandboxes for pilots. (Digital Strategy)
Join the AISI consortium testing program; require vendors to contribute evals and incidents to the public registry. (NIST)
Anticipating the pushback (and why it fails)
“Open models are unsafe.” Openness with registries, evals, and incident reporting is safer than proprietary opacity—especially when grounded in NIST RMF and overseen by AISI/independent boards. (And the open ecosystem is advancing fast.) (NIST Publications, NIST, Allen Institute for AI)
“Public compute can’t scale.” DOE/NSF already run some of the world’s fastest systems; the NAIRR pilot shows the coordination model. Scaling is a budget and governance question—not a technical impossibility. (The Department of Energy's Energy.gov, NSF - National Science Foundation)
“Regulation kills innovation.” The EU AI Act’s sandboxes and the U.S. EO’s testing regime are designed to enable responsible iteration while protecting rights. That’s market-making, not market-breaking. (Digital Strategy, The White House)
The bigger point
This isn’t about punishing Big Tech; it’s about ending dependency. If AI is going to mediate the way we learn, diagnose, build, hire, plan cities, and care for one another, then the rails can’t be private chokepoints. A public option for AI—public compute, open models, interoperable data, worker-first rules, and serious safety science—gives everyone the tools and the rights to use them.
We’ve done moonshots before. This time, the rocket is governance - and it’s not from SpaceX.
No More Tech Lords - Smash the Machines!
Sources & further reading
NAIRR Pilot (NSF/OSTP): program overview and “democratizing AI R&D” announcement. (NSF - National Science Foundation)
NIST AI Risk Management Framework (AI RMF 1.0): overview + full PDF. (NIST, NIST Publications)
EU AI Act: risk-based rules and national sandbox requirements. (Digital Strategy, Artificial Intelligence Act)
U.S. Executive Order on AI (EO 14110): federal direction for safety, privacy, workers. (The White House, Federal Register)
U.S. AI Safety Institute (AISI) at NIST: formation and industry research agreements. (NIST)
UK AI Security/Safety Institute: mission and scope. (GOV.UK, AI Security Institute)
DOE Exascale (Frontier) background: foundation for public compute. (The Department of Energy's Energy.gov)
Open models: AI2’s OLMo/OLMo 2 releases and documentation. (Allen Institute for AI)
NYC Automated Employment Decision Tools (AEDT): bias-audit rule and FAQs. (NYC Government)
Colorado SB24-205: consumer protections in AI. (Colorado General Assembly)
Right-to-Repair: California SB-244 and state advisory (effective July 2024). (bhgs.dca.ca.gov, LegiScan)
Interoperability & platform decay: Cory Doctorow’s “enshittification.” (Medium, Wikipedia)
DOL AI & worker well-being principles: press and guidance summaries. (AIHA, DOL)