Neutral, transparent governance for AI-generated open source software: ensuring quality, security, and ethical standards for the vibe coding era.
The first open source foundation purpose-built for the age of AI-generated code, bridging the gap between explosive growth and the governance frameworks needed to make it trustworthy.
AOSF is an agentic-native foundation that uses AI agents for its own operations while hosting, validating, and stewarding the next generation of AI-created open source projects. We provide the governance, security validation, and ethical frameworks the industry needs as AI-generated code becomes mainstream.
Curated home for AI-generated open source projects with automated quality tiering
Automated SAST/DAST scanning for every hosted project submission
AI-assisted rewrites from C/C++ to memory-safe Rust at scale
Agent-assisted reviews, automated docs, and transparent audit trails
Attribution, licensing, and human-in-the-loop frameworks for AI code
Open-source LLM code evaluation with fully reproducible methodology
AI-generated code is no longer a novelty; it's mainstream. Over 60,000 projects already include AGENTS.md, signaling a new era of AI-native development. Yet nearly half of AI-generated code contains security vulnerabilities, and existing foundations weren't built for this reality. AOSF fills the gap, purpose-built for the age of vibe coding.
AOSF is built on five interconnected pillars that together provide comprehensive governance for AI-generated open source software.
The first curated home for AI-generated open source projects with automated security scanning, AI provenance tracking, and quality tiering.
AI-assisted translation of critical C/C++ infrastructure to memory-safe Rust. Over 1 billion lines of code need rewriting; AI can accelerate this by orders of magnitude.
AOSF practices what it preaches: it uses AI agents for its own governance, creating transparent audit trails for every decision.
No existing framework addresses the unique ethical challenges of AI-assisted development. AOSF creates the standards the industry needs.
The first fully open-source LLM code evaluation benchmark: 100% open models, 100% open tooling, fully reproducible methodology.
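To make the memory-safety pillar concrete, here is a hypothetical before/after sketch of the kind of transformation an AI-assisted rewrite targets. The C fragment in the comment and the `copy_bounded` helper are illustrative examples, not output of AOSF tooling.

```rust
// A classic C buffer overflow:
//
//   char buf[8];
//   strcpy(buf, user_input);   // writes past buf when input exceeds 7 bytes
//
// A memory-safe Rust equivalent truncates instead of overflowing, and the
// compiler's ownership and bounds rules make the unsafe version unrepresentable
// without an explicit `unsafe` block.
fn copy_bounded(user_input: &str, capacity: usize) -> String {
    // Keep characters only while they fit inside `capacity` bytes,
    // respecting UTF-8 character boundaries.
    user_input
        .char_indices()
        .take_while(|(i, c)| i + c.len_utf8() <= capacity)
        .map(|(_, c)| c)
        .collect()
}

fn main() {
    // "hello, world" is 12 bytes; an 8-byte budget keeps only the first 8.
    assert_eq!(copy_bounded("hello, world", 8), "hello, w");
    // Short inputs pass through untouched.
    assert_eq!(copy_bounded("hi", 8), "hi");
}
```

At scale, the win is that entire classes of CVEs (out-of-bounds reads and writes, use-after-free) are ruled out by the type system rather than caught by review.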
Every project hosted on AOSF progresses through quality tiers based on security, testing, documentation, and community validation.
Initial submission. Basic scanning passed. No security guarantees.
SAST/DAST passed. Provenance verified. Community reviewed.
Full test coverage. Security audit passed. Docs complete.
Compliance verified. SLA commitments. Continuous monitoring.
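The tier ladder above can be sketched as an ordered progression, where each tier requires everything below it. "Experimental" and "Enterprise-Certified" are named in AOSF materials; the intermediate names here ("Validated", "Certified") and the exact check fields are placeholders, not the published criteria.

```rust
// Hypothetical model of the four quality tiers described above.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Tier {
    Experimental,        // basic scanning passed, no security guarantees
    Validated,           // SAST/DAST passed, provenance verified, community reviewed
    Certified,           // full test coverage, security audit passed, docs complete
    EnterpriseCertified, // compliance verified, SLA commitments, continuous monitoring
}

// Validation checks a project has passed so far (all false on submission).
#[derive(Default)]
struct Checks {
    sast_dast_passed: bool,
    provenance_verified: bool,
    community_reviewed: bool,
    full_test_coverage: bool,
    security_audit_passed: bool,
    docs_complete: bool,
    compliance_verified: bool,
    continuous_monitoring: bool,
}

// A project holds the highest tier whose requirements, and every lower
// tier's requirements, it satisfies.
fn tier(c: &Checks) -> Tier {
    let validated = c.sast_dast_passed && c.provenance_verified && c.community_reviewed;
    let certified =
        validated && c.full_test_coverage && c.security_audit_passed && c.docs_complete;
    let enterprise = certified && c.compliance_verified && c.continuous_monitoring;
    if enterprise {
        Tier::EnterpriseCertified
    } else if certified {
        Tier::Certified
    } else if validated {
        Tier::Validated
    } else {
        Tier::Experimental
    }
}

fn main() {
    // A fresh submission starts at the bottom of the ladder.
    assert_eq!(tier(&Checks::default()), Tier::Experimental);

    let reviewed = Checks {
        sast_dast_passed: true,
        provenance_verified: true,
        community_reviewed: true,
        ..Checks::default()
    };
    assert_eq!(tier(&reviewed), Tier::Validated);

    // Tiers are totally ordered: a higher tier implies every lower requirement.
    assert!(Tier::EnterpriseCertified > Tier::Experimental);
}
```

Modeling tiers as an ordered enum lets downstream tooling gate actions with a simple comparison, e.g. "deploy only if `tier >= Certified`".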
AI-generated code is growing exponentially, but existing foundations weren't designed for this reality. Here's what's missing, and how AOSF fills the gap.
How existing open source foundations compare across the domains that matter for AI-generated code.
| Domain | Linux Foundation | Apache | CNCF | OSI | AOSF |
|---|---|---|---|---|---|
| AI-Native Project Hosting | ✗ | ✗ | ✗ | ✗ | ✓ |
| AI Code Security Scanning | ~ | ✗ | ~ | ✗ | ✓ |
| Agentic Governance | ✗ | ✗ | ✗ | ✗ | ✓ |
| AI Ethics Framework | ~ | ✗ | ✗ | ~ | ✓ |
| Memory Safety Initiative | ~ | ✗ | ✗ | ✗ | ✓ |
| OSS Model Evaluation | ✗ | ✗ | ✗ | ✗ | ✓ |
| AI Code Provenance | ✗ | ✗ | ✗ | ✗ | ✓ |
| Traditional OSS Governance | ✓ | ✓ | ✓ | ✓ | ✓ |
AOSF doesn't compete with existing foundations; it fills the gaps they weren't designed to address. The Linux Foundation, Apache, CNCF, and OSI do excellent work for traditional open source. But the vibe coding era demands new infrastructure: automated security validation, provenance tracking, ethical frameworks, and governance models that incorporate AI agents as first-class participants.
A machine-readable standard for tracking the origins, review status, and quality of AI-generated code, essential for enterprise trust and compliance.
Every piece of AI-generated code gets tracked from generation through deployment.
Each field serves a specific purpose in establishing trust and traceability for AI-generated code.
Identifies the exact AI model and version that generated the code. Critical for reproducibility.
Timestamp of code generation. Enables tracking of model versions and vulnerability windows.
Identity of the human who reviewed and approved. Ensures accountability and human oversight.
Complete record of security analysis: tool, version, findings, pass/fail status.
The model's CELLO benchmark score at generation time. Provides a quality baseline.
Current AOSF quality tier, from Experimental to Enterprise-Certified.
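The six fields above can be sketched as a typed record. These field and type names are illustrative, not the published AOSF schema; a real provenance record would likely travel as signed, serialized JSON alongside the code.

```rust
// Hypothetical provenance record covering the six fields described above.
#[derive(Debug)]
struct SecurityScan {
    tool: String,         // scanner used (SAST/DAST)
    tool_version: String, // exact version, for reproducibility
    findings: u32,        // number of reported issues
    passed: bool,         // overall pass/fail status
}

#[derive(Debug)]
struct ProvenanceRecord {
    generator_model: String, // exact AI model and version that produced the code
    generated_at: String,    // ISO-8601 timestamp of generation
    reviewed_by: String,     // identity of the approving human reviewer
    security_scan: SecurityScan,
    cello_score: f64,        // model's CELLO benchmark score at generation time
    quality_tier: String,    // current AOSF tier, e.g. "Experimental"
}

fn main() {
    // Hypothetical record for one AI-generated module.
    let record = ProvenanceRecord {
        generator_model: "example-model-v1".into(),
        generated_at: "2025-01-15T09:30:00Z".into(),
        reviewed_by: "reviewer@example.org".into(),
        security_scan: SecurityScan {
            tool: "example-sast".into(),
            tool_version: "0.1.0".into(),
            findings: 0,
            passed: true,
        },
        cello_score: 72.5,
        quality_tier: "Experimental".into(),
    };

    // A consumer (CI gate, auditor, compliance tool) can make deployment
    // decisions from verifiable fields rather than trust.
    assert!(record.security_scan.passed);
    assert!(!record.reviewed_by.is_empty());
}
```

Keeping every field machine-readable is what allows the compliance and auditing workflows described below to run without a human in the loop for each check.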
As AI-generated code enters production systems, enterprises need answers: Who generated this? Was it reviewed? Is it secure? The AOSF provenance standard provides machine-readable, verifiable answers, enabling compliance, auditing, and informed decision-making at scale.
AOSF is a community effort. Whether you're a developer, enterprise, or researcher, there's a place for you.
Build the tools and infrastructure that power the next generation of open source.
Shape the standards that will govern AI-generated code in production systems.
Advance the science of AI code quality, security, and evaluation.
In-depth research documents backing AOSF's mission and positioning.
Join the first foundation purpose-built for the age of AI-generated code.