The problem is not that AI writes bad code. The problem is that AI writes plausible code — code that compiles, passes review, ships to production, and leaks your users' passwords in a log file.
The failure mode no one is talking about
Every major AI coding tool — Copilot, Cursor, Claude, ChatGPT — generates code that is syntactically correct and functionally reasonable. It connects to the database. It returns the user object. It handles the happy path. The tests pass.
Then it logs the entire request body, including the password field, to stdout. Or it serializes the user struct — Social Security number included — into a JSON response because no one said not to. Or it stores the API key in a config object that gets dumped to a debug endpoint.
These are not hypothetical scenarios. They are the most common class of security bug in AI-assisted codebases. The AI does exactly what you asked. It just also does things you did not think to prohibit.
Consider this pattern, which every AI coding tool will happily generate:
fn handle_login(request: LoginRequest) -> Response {
log::info!("Login attempt: {:?}", request);
// ^^^^^^^^
// request contains the password field.
// This compiles. This ships. This is a credential leak.
let user = db.authenticate(request.username, request.password)?;
Ok(user.into())
// ^^^^^^^^
// user contains SSN, email, phone.
// into() serializes all fields.
// This is a PII leak.
}
This code compiles in Rust. It compiles in Go. It compiles in TypeScript. It compiles in every language that exists today. The type system of every mainstream language considers this program correct.
Why existing languages cannot solve this
Rust proves that compile-time guarantees change how people write software. Its ownership system eliminated an entire class of memory safety bugs — use-after-free, double-free, data races — not through discipline or code review, but through type-level constraints that make the wrong thing impossible to express.
But Rust has no concept of data sensitivity. A String containing a password and a String containing a username are the same type. You can log either one, serialize either one, send either one over the network. The compiler cannot distinguish between them.
The same is true of every mainstream language:
- Python: Type hints are purely advisory; nothing enforces them. Nothing prevents `print(password)`.
- Go: Strong typing, but a `string` is a `string`. A password and a filename are interchangeable.
- TypeScript: Branded types can tag data, but nothing prevents casting or logging. It is a convention, not a constraint.
- Rust: Ownership prevents memory bugs. It does not prevent data classification bugs. You can `println!("{}", password)` and the compiler is fine with it.
- Java / C#: Annotation-based security is opt-in, runtime-checked, and routinely ignored.
The gap is structural. No mainstream language can express "this value contains sensitive data and must not appear in logs, serialized output, or network responses" as a compile-time type constraint.
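To make the gap concrete, here is the closest approximation Rust offers: a hand-rolled newtype wrapper. The `Secret` type and its `expose` method below are invented for this sketch, not taken from any library. The wrapper blocks accidental `println!`, but the compiler loses track of the value the moment it is exposed.

```rust
// A hand-rolled sketch of the closest a mainstream language gets.
// `Secret` and `expose` are illustrative, not from a real library.
struct Secret(String);

impl Secret {
    fn new(s: String) -> Self {
        Secret(s)
    }

    // The escape hatch every wrapper needs, and the hole in the scheme.
    fn expose(&self) -> &str {
        &self.0
    }
}

fn main() {
    let password = Secret::new("hunter2".to_string());

    // These no longer compile: Secret implements neither Display nor Debug.
    // println!("{}", password);
    // println!("{:?}", password);

    // But nothing stops any function, anywhere, from doing this:
    println!("Login attempt with password: {}", password.expose());
    // The wrapper is a convention. The compiler does not track what
    // happens to the value after `expose`, so the leak still ships.
}
```

The wrapper raises the bar for accidents; it does not constrain intent, and it does not follow the data.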
What Loon does differently
Loon introduces privacy types as first-class compiler-enforced types. A Sensitive<String> is not a String. You cannot pass it to a function that expects String. You cannot log it. You cannot serialize it. You cannot concatenate it with a regular string and sneak it out. The compiler tracks sensitivity through every binding, every function call, every return value.
fn authenticate(password: Sensitive<String>) -> [IO] Bool {
// This is a compile error. Not a warning. Not a lint.
do print("Password: " + password);
// error[E301]: cannot log Sensitive value
// password: Sensitive<String>
// logging Sensitive data is prohibited
// You must explicitly declassify with an auditable operation.
let hash = do crypto_hash(declassify(password, "auth-check"));
do check_hash(hash, stored_hash)
}
The declassify operation is the only way to convert a Sensitive value to a regular value. It requires a string reason that appears in the audit log. It is visible in code review. It is greppable. When an AI agent generates code that declassifies data, the reason is right there in the source: why this value needed to leave the sensitivity boundary.
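Outside Loon, the shape of this API can only be approximated. The sketch below is a hypothetical Rust rendering of the declassify pattern; `Sensitive` and `AuditLog` are invented for illustration. The sole way out of the wrapper consumes it and records a reason first. Unlike Loon, Rust cannot stop code from bypassing the wrapper entirely.

```rust
// Illustrative approximation of Loon's declassify in Rust.
// `Sensitive` and `AuditLog` are hypothetical types for this sketch;
// Loon enforces this at the type level, Rust can only encourage it.
struct Sensitive<T>(T);

struct AuditLog {
    entries: Vec<String>,
}

impl AuditLog {
    fn new() -> Self {
        AuditLog { entries: Vec::new() }
    }
}

impl<T> Sensitive<T> {
    fn new(value: T) -> Self {
        Sensitive(value)
    }

    // The only way out: consuming the wrapper requires a reason,
    // and the reason lands in the audit log before the value escapes.
    fn declassify(self, reason: &str, audit: &mut AuditLog) -> T {
        audit.entries.push(format!("declassify: {reason}"));
        self.0
    }
}

fn main() {
    let mut audit = AuditLog::new();
    let password = Sensitive::new("hunter2".to_string());

    // Greppable, reviewable, recorded.
    let raw = password.declassify("auth-check", &mut audit);
    assert_eq!(audit.entries, vec!["declassify: auth-check"]);
    let _hash = raw.len(); // stand-in for a real hash step
}
```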
Loon also enforces effects. Every function declares what side effects it performs:
- `[IO]` — reads or writes to the outside world
- `[Audit]` — writes to the audit log
- `[Crypto]` — handles cryptographic key material
A function with no effect annotation is pure. It cannot call an IO function. It cannot write audit logs. It cannot touch the filesystem. The compiler verifies this across the entire call chain. If an AI agent generates a pure function that secretly reads a file, the compiler rejects it.
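The nearest mainstream analogue to this kind of effect tracking is capability passing, sketched here in hypothetical Rust; the `Io` token below is invented for illustration. A function can perform IO only if its signature asks for the capability. It is a discipline rather than a guarantee, which is exactly the gap Loon closes.

```rust
// Capability-style sketch of effect tracking in Rust. Loon checks effects
// in the type system; the nearest mainstream analogue is threading an
// explicit capability token. `Io` here is hypothetical.
struct Io(());

impl Io {
    // Only main (or a trusted root) mints the capability.
    fn acquire() -> Io {
        Io(())
    }

    fn print(&self, msg: &str) {
        println!("{msg}");
    }
}

// No `Io` parameter: this function is "pure by convention".
// Its signature gives it no sanctioned path to stdout, files, or network,
// though Rust (unlike Loon) will not stop it from calling println! directly.
fn score(name: &str) -> usize {
    name.len() * 7
}

// An effectful function declares its dependence on IO in its signature,
// the moral equivalent of Loon's [IO] annotation.
fn report(io: &Io, name: &str) {
    io.print(&format!("score for {name}: {}", score(name)));
}

fn main() {
    let io = Io::acquire();
    report(&io, "alice");
}
```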
Together, privacy types and effects mean the compiler acts as the last line of defense between AI-generated code and production. The human says "return the user." The AI writes the code. The compiler checks: did it leak the password? Did it serialize the SSN? Did it perform IO in a pure context? Did it use a banned crypto algorithm? If any of these are true, the code does not compile.
Built from scratch, on purpose
Loon's compiler was not built on LLVM, or GCC, or Rust, or any existing toolchain. It was bootstrapped from x86-64 assembly. Stage 0 is a hand-written lexer: 1,198 lines of assembly. Stage 1 is a hand-written parser and code generator: over 6,500 lines of assembly. Stage 2 is the self-hosting compiler: Loon compiling Loon.
This was not an exercise in masochism. It was a trust decision.
If your language's purpose is to enforce security invariants, the compiler itself is a security-critical component. A compiler built on LLVM inherits LLVM's 4 million lines of C++. A compiler built on the JVM inherits the JVM. Each dependency is an attack surface. Each dependency is a trust assumption.
Loon's trust chain starts at instructions you can read. The assembly is in the repository. The binary is reproducible. The bootstrap process is documented, tested on every commit, and auditable by anyone with an x86-64 machine and NASM.
One human directed the project. One AI collaborator wrote much of the assembly under human review. Every instruction was verified against the specification. The bootstrap was completed in four days. This matters because it demonstrates that the approach works: you can build a real compiler from bare metal with AI assistance, and the result is more auditable than a compiler with a million lines of inherited dependencies.
Who this is for
Security engineers who are tired of finding credential leaks in code review and want a language where those leaks are compile errors.
Developers building agentic systems where AI writes code that runs in production. If your AI agent generates code that handles user data, you need a compiler that enforces data classification regardless of what the agent decides to do.
Teams adopting AI coding tools who want guardrails that work at the language level, not the policy level. Linting rules get disabled. Code review gets rubber-stamped. Compiler errors do not get ignored.
Anyone who has shipped a credential leak and spent the next week rotating keys, notifying users, and filing incident reports. Loon exists so that incident does not happen again.
What is honest
Loon is pre-1.0. Here is what does not exist yet:
- The standard library is minimal. There is no package manager.
- Performance optimization has not started. The compiler generates correct code, not fast code.
- The WASM backend is in progress. You cannot run Loon in the browser today.
- Editor support is early. There is no LSP server shipping yet.
- The community is small. You will be an early adopter.
Here is what works today:
- Privacy types are enforced at compile time. `Sensitive<String>` cannot be logged.
- Effects are tracked across the call graph. Pure functions stay pure.
- Match expressions are exhaustive. Missing cases are compile errors.
- The compiler produces structured JSON errors that AI agents can parse.
- 68 gauntlet tests pass on every commit via GitHub Actions.
- The entire compiler is in the repository. No vendored blobs. No opaque dependencies.
The safety guarantees are real, they are tested, and they work. The ergonomics, ecosystem, and performance will improve. If you care about the guarantees more than the polish, Loon is ready for you to try.