Open Source Foundation · Est. 2026

Agentic Open Source Foundation

Neutral, transparent governance for AI-generated open source software — ensuring quality, security, and ethical standards for the vibe coding era.

45%
AI Code Has Vulns
60,000+
AGENTS.md Projects
1B+
Lines to Rewrite
OSS Models Missing

About AOSF

The first open source foundation purpose-built for the age of AI-generated code — bridging the gap between explosive growth and the governance frameworks needed to make it trustworthy.

Our Mission

AOSF is an agentic-native foundation that uses AI agents for its own operations while hosting, validating, and stewarding the next generation of AI-created open source projects. We provide the governance, security validation, and ethical frameworks the industry needs as AI-generated code becomes mainstream.

📦

Project Repository

Curated home for AI-generated open source projects with automated quality tiering

First-of-its-kind hosting with AI provenance tracking built in.
🛡️

Security Validation

Automated SAST/DAST scanning for every hosted project submission

Catch vulnerabilities before they reach production.
🦀

Rust Migration

AI-assisted rewrites from C/C++ to memory-safe Rust at scale

Targeting 1B+ lines of critical infrastructure code.
🤖

Agentic Operations

Agent-assisted reviews, automated docs, and transparent audit trails

AI-first governance with human oversight at every step.
⚖️

Ethical Standards

Attribution, licensing, and human-in-the-loop frameworks for AI code

Setting the standard for responsible AI development.
๐Ÿ†

CELLO Leaderboard

Open-source LLM code evaluation with fully reproducible methodology

100% open models, 100% open tools, 100% reproducible.

Why Now?

AI-generated code is no longer a novelty — it's mainstream. Over 60,000 projects already include AGENTS.md, signaling a new era of AI-native development. Yet nearly half of AI-generated code contains security vulnerabilities, and existing foundations weren't built for this reality. AOSF fills the gap — purpose-built for the age of vibe coding.

AI Code Landscape — Coverage Gaps

Five Core Functions

AOSF is built on five interconnected pillars that together provide comprehensive governance for AI-generated open source software.

📦

Vibe-Coded Project Repository

The first curated home for AI-generated open source projects with automated security scanning, AI provenance tracking, and quality tiering.

  • Automated SAST/DAST security scanning on submission
  • AI provenance metadata tracked for every contribution
  • Quality tiers from Experimental to Enterprise-Certified
  • Community-driven review and promotion pipeline
🦀

Rust Migration Initiative

AI-assisted translation of critical C/C++ infrastructure to memory-safe Rust. Over 1 billion lines of code need rewriting — AI can accelerate this by orders of magnitude.

  • Priority queue: coreutils, networking, crypto libraries
  • AI-assisted translation with human review
  • Automated testing and equivalence verification
  • Collaborative effort with the Rust community
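To make the idea concrete, here is a minimal sketch (not AOSF tooling) of the kind of rewrite the initiative targets: a C-style bounded buffer copy expressed in safe Rust. In C, `memcpy` trusts the caller's lengths, a classic overflow site; in safe Rust, the bounds are enforced by the type system, and an equivalence test can check behavior against the original.

```rust
/// Copy as many bytes as fit, returning the number copied.
/// Behaves like `memcpy(dst, src, min(dst_len, src_len))` in C,
/// but out-of-bounds access is impossible in safe Rust.
fn bounded_copy(dst: &mut [u8], src: &[u8]) -> usize {
    let n = dst.len().min(src.len());
    dst[..n].copy_from_slice(&src[..n]);
    n
}

fn main() {
    let src = b"hello, world";
    let mut dst = [0u8; 5]; // destination smaller than source: a classic C overflow site
    let n = bounded_copy(&mut dst, src);
    assert_eq!(n, 5);
    assert_eq!(&dst, b"hello");
    println!("copied {} bytes: {:?}", n, std::str::from_utf8(&dst).unwrap());
}
```

The equivalence-verification step mentioned above would run tests like the assertions here against both the C original and the Rust port.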
🤖

Agentic-Native Operations

AOSF practices what it preaches — using AI agents for its own governance, creating transparent audit trails for every decision.

  • Agent-assisted code review for all submissions
  • Automated documentation generation and maintenance
  • Transparent audit trails with human oversight
  • DAO-hybrid governance model for decision-making
⚖️

Ethical AI Development Standards

No existing framework addresses the unique ethical challenges of AI-assisted development. AOSF creates the standards the industry needs.

  • Attribution standards for AI-generated contributions
  • Licensing frameworks adapted for AI code generation
  • Human-in-the-loop requirements for critical decisions
  • Transparent disclosure of AI involvement in projects
๐Ÿ†

CELLO Leaderboard

The first fully open-source LLM code evaluation benchmark — 100% open models, 100% open tooling, fully reproducible methodology.

  • Evaluates only open-source models (Qwen, DeepSeek, StarCoder, CodeLlama)
  • Uses only OSI-approved tools (Semgrep, ESLint, PMD)
  • 7 evaluation dimensions across 10,000+ test cases
  • Fully reproducible — all data and scripts on GitHub
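As a hypothetical illustration of how a composite CELLO score could be aggregated from per-dimension results: the unweighted mean and the 0-100 scale below are assumptions for this sketch; the source states only that there are seven evaluation dimensions across 10,000+ test cases.

```rust
/// Aggregate per-dimension scores (0-100) into a composite score.
/// An unweighted mean is assumed here purely for illustration.
fn composite_score(dimension_scores: &[f64]) -> f64 {
    assert!(!dimension_scores.is_empty(), "need at least one dimension");
    dimension_scores.iter().sum::<f64>() / dimension_scores.len() as f64
}

fn main() {
    // Seven illustrative dimension scores for one model.
    let scores = [92.0, 85.5, 88.0, 79.5, 90.0, 84.0, 93.0];
    println!("composite CELLO score: {:.1}", composite_score(&scores)); // prints "composite CELLO score: 87.4"
}
```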
View CELLO Leaderboard →

Quality Tier Pipeline

Every project hosted on AOSF progresses through quality tiers based on security, testing, documentation, and community validation.

TIER 1

Experimental

Initial submission. Basic scanning passed. No security guarantees.

→
TIER 2

Validated

SAST/DAST passed. Provenance verified. Community reviewed.

→
TIER 3

Production-Ready

Full test coverage. Security audit passed. Docs complete.

→
TIER 4

Enterprise-Certified

Compliance verified. SLA commitments. Continuous monitoring.
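The cumulative requirements above can be sketched as a simple promotion check. The type and field names below are illustrative, not an official AOSF API; each tier requires everything from the tiers below it.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Tier {
    Experimental,
    Validated,
    ProductionReady,
    EnterpriseCertified,
}

#[derive(Default)]
struct ProjectStatus {
    scan_passed: bool,           // Tier 2: SAST/DAST passed
    provenance_verified: bool,   // Tier 2: provenance verified
    community_reviewed: bool,    // Tier 2: community reviewed
    full_test_coverage: bool,    // Tier 3: full test coverage
    security_audit_passed: bool, // Tier 3: security audit passed
    docs_complete: bool,         // Tier 3: docs complete
    compliance_verified: bool,   // Tier 4: compliance, SLA, monitoring
}

/// Highest tier a project currently qualifies for.
fn highest_tier(s: &ProjectStatus) -> Tier {
    let mut tier = Tier::Experimental;
    if s.scan_passed && s.provenance_verified && s.community_reviewed {
        tier = Tier::Validated;
        if s.full_test_coverage && s.security_audit_passed && s.docs_complete {
            tier = Tier::ProductionReady;
            if s.compliance_verified {
                tier = Tier::EnterpriseCertified;
            }
        }
    }
    tier
}

fn main() {
    let status = ProjectStatus {
        scan_passed: true,
        provenance_verified: true,
        community_reviewed: true,
        ..Default::default()
    };
    println!("current tier: {:?}", highest_tier(&status)); // prints "current tier: Validated"
}
```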

Quality Tier Requirements

The Problem

AI-generated code is growing exponentially, but existing foundations weren't designed for this reality. Here's what's missing — and how AOSF fills the gap.

📦 Project Hosting

❌ No dedicated home for AI-generated open source projects
✅ AOSF: Vibe-Coded Project Repository with quality tiers

๐Ÿ›ก๏ธ Security

โŒ 45% of AI-generated code contains vulnerabilities
โœ… AOSF: Automated SAST/DAST validation on every submission

๐Ÿ›๏ธ Governance

โŒ Traditional governance doesn't fit agentic development
โœ… AOSF: Agentic-native + DAO hybrid governance

โš–๏ธ Ethics

โŒ No framework for AI attribution or licensing
โœ… AOSF: Ethical AI Development Standards

🦀 Memory Safety

❌ No initiative for AI-assisted rewrites to safe languages
✅ AOSF: Rust Migration Initiative

๐Ÿ† Model Evaluation

โŒ Existing leaderboards focus on proprietary models
โœ… AOSF: CELLO โ€” 100% open source leaderboard

Foundation Coverage Comparison

How existing open source foundations compare across the domains that matter for AI-generated code.

Domain                     | Linux Foundation | Apache | CNCF | OSI | AOSF
AI-Native Project Hosting  | ✗                | ✗      | ✗    | ✗   | ✓
AI Code Security Scanning  | ~                | ✗      | ~    | ✗   | ✓
Agentic Governance         | ✗                | ✗      | ✗    | ✗   | ✓
AI Ethics Framework        | ~                | ✗      | ✗    | ~   | ✓
Memory Safety Initiative   | ~                | ✗      | ✗    | ✗   | ✓
OSS Model Evaluation       | ✗                | ✗      | ✗    | ✗   | ✓
AI Code Provenance         | ✗                | ✗      | ✗    | ✗   | ✓
Traditional OSS Governance | ✓                | ✓      | ✓    | ✓   | ✓

✓ = covered · ~ = partial · ✗ = not covered

Not a Replacement — An Addition

AOSF doesn't compete with existing foundations — it fills the gaps they weren't designed to address. The Linux Foundation, Apache, CNCF, and OSI do excellent work for traditional open source. But the vibe coding era demands new infrastructure: automated security validation, provenance tracking, ethical frameworks, and governance models that incorporate AI agents as first-class participants.

AI-Generated Code Vulnerability Rate by Category

AI Code Provenance Standard

A machine-readable standard for tracking the origins, review status, and quality of AI-generated code — essential for enterprise trust and compliance.

Provenance Lifecycle

Every piece of AI-generated code gets tracked from generation through deployment.

🤖 AI Generation (model ID captured) → 🔍 Security Scan (SAST/DAST analysis) → 👤 Human Review (reviewer assigned) → 🏆 CELLO Score (quality benchmark) → 📋 Provenance Record (immutable metadata)
aosf-provenance.json

{
  "schema": "aosf-provenance/v1",
  "model_id": "deepseek-coder-v3-236b",
  "generation_date": "2026-02-03T14:22:00Z",
  "human_reviewer": "@chris-dev",
  "security_scan": {
    "tool": "semgrep",
    "findings": 0,
    "passed": true
  },
  "cello_score": 87.4,
  "quality_tier": "production-ready",
  "license": "Apache-2.0",
  "hash": "sha256:a3f8c1d9e..."
}
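A downstream consumer might gate on such a record as sketched below. The field names mirror the example above, but the admission policy itself is illustrative, not part of the official standard; a real tool would deserialize the JSON (e.g. with serde), while plain structs keep this sketch dependency-free.

```rust
// Illustrative in-memory mirror of an aosf-provenance record.
struct SecurityScan {
    tool: String,
    findings: u32,
    passed: bool,
}

struct Provenance {
    schema: String,
    model_id: String,
    human_reviewer: Option<String>, // None = no human sign-off yet
    security_scan: SecurityScan,
    cello_score: f64,
    quality_tier: String,
}

/// Example gate (an assumption, not official policy): recognized schema,
/// a clean security scan, and a human in the loop.
fn admissible(p: &Provenance) -> bool {
    p.schema.starts_with("aosf-provenance/")
        && p.security_scan.passed
        && p.security_scan.findings == 0
        && p.human_reviewer.is_some()
}

fn main() {
    let record = Provenance {
        schema: "aosf-provenance/v1".into(),
        model_id: "deepseek-coder-v3-236b".into(),
        human_reviewer: Some("@chris-dev".into()),
        security_scan: SecurityScan { tool: "semgrep".into(), findings: 0, passed: true },
        cello_score: 87.4,
        quality_tier: "production-ready".into(),
    };
    assert!(admissible(&record));
    println!("{} ({}): admitted", record.model_id, record.quality_tier);
}
```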

Provenance Fields

Each field serves a specific purpose in establishing trust and traceability for AI-generated code.

model_id

Identifies the exact AI model and version that generated the code. Critical for reproducibility.

generation_date

Timestamp of code generation. Enables tracking of model versions and vulnerability windows.

human_reviewer

Identity of the human who reviewed and approved. Ensures accountability and human oversight.

security_scan

Complete record of security analysis — tool, version, findings, pass/fail status.

cello_score

The model's CELLO benchmark score at generation time. Provides a quality baseline.

quality_tier

Current AOSF quality tier — from Experimental to Enterprise-Certified.

Why Provenance Matters

As AI-generated code enters production systems, enterprises need answers: Who generated this? Was it reviewed? Is it secure? The AOSF provenance standard provides machine-readable, verifiable answers — enabling compliance, auditing, and informed decision-making at scale.

Get Involved

AOSF is a community effort. Whether you're a developer, enterprise, or researcher, there's a place for you.

👩‍💻

Developers

Build the tools and infrastructure that power the next generation of open source.

  • Contribute to CELLO benchmark pipeline
  • Build security scanning integrations
  • Help with the Rust migration initiative
  • Submit your AI-generated projects
  • Review community submissions
๐Ÿข

Enterprises

Shape the standards that will govern AI-generated code in production systems.

  • Sponsor provenance standard development
  • Join governance and standards committees
  • Contribute enterprise certification use cases
  • Co-develop ethical AI frameworks
  • Access Enterprise-Certified project tiers
🔬

Researchers

Advance the science of AI code quality, security, and evaluation.

  • Contribute evaluation methodologies to CELLO
  • Study AI code vulnerability patterns
  • Research provenance and attribution
  • Publish through AOSF working groups
  • Benchmark new open-source models

Quick Links

Research

In-depth research documents backing AOSF's mission and positioning.

Ready to Build the Future of Open Source?

Join the first foundation purpose-built for the age of AI-generated code.

🤖 Built by AI Agents, for AI-Generated Software

AOSF isn't just governing AI-generated code — it's built with AI agents from the ground up. Our operations, documentation, and tooling are agentic-native, with human oversight at every step.