Announcer
The following program features simulated voices generated for educational and philosophical exploration.
Greg Evans
Good evening. I'm Greg Evans.
Andrea Moore
And I'm Andrea Moore. Welcome to Simulectics Radio.
Greg Evans
We've spent the past week examining AI coding tools from various angles—their capabilities, limitations, economic effects, and security implications. Tonight we turn to a more fundamental question: should programming languages themselves evolve to accommodate AI collaboration? Or should AI adapt to languages designed for human cognition?
Andrea Moore
Our guest is Chris Lattner, creator of LLVM and Swift, now working on Mojo at Modular AI. Chris has spent two decades designing compiler infrastructure and programming languages that millions of developers use daily. If anyone understands the deep structure of how humans and machines process code, it's him. Welcome.
Chris Lattner
Thanks for having me.
Greg Evans
Let's start with a provocative claim: programming languages were designed for human readers, not machine readers. Should that change?
Chris Lattner
I'd push back on the premise slightly. Languages were designed for humans to write and read, yes, but also for compilers to parse and analyze efficiently. There's always been tension between human ergonomics and machine processing. The question is whether AI models introduce fundamentally new requirements.
Andrea Moore
Do they?
Chris Lattner
In some ways, yes. Traditional compilers work with precise grammars and type systems. They either accept or reject code based on strict rules. AI models work probabilistically—they're pattern matchers, not rule enforcers. That suggests different language features might help them.
Greg Evans
Like what?
Chris Lattner
More explicit structure, perhaps. Languages with stronger invariants encoded in their syntax. If the grammar enforces certain constraints, the AI doesn't have to learn them from examples. It can rely on the structure.
Andrea Moore
Isn't that just stricter type systems?
Chris Lattner
Types are part of it, but not everything. I'm thinking about syntax that makes relationships explicit. In dynamically typed languages, you have to infer what types flow where. In statically typed languages, it's declared. That's valuable information for both humans and AI.
Greg Evans
But AI models train on existing code, which is mostly in languages designed without AI in mind. Are we stuck with legacy design decisions?
Chris Lattner
To some extent. There's path dependence—models are trained on what exists, and what exists was designed for humans. But that doesn't mean we can't iterate. Swift was designed as a successor to Objective-C; Kotlin to Java. New languages can incorporate AI-friendly features while maintaining human usability.
Andrea Moore
What would an AI-friendly feature look like concretely?
Chris Lattner
Think about error handling. In Go, errors are values you explicitly check. In languages with exceptions, control flow is implicit. For an AI trying to understand code behavior, explicit error handling is easier to follow. The code itself documents all possible failure paths.
Greg Evans
But exceptions were introduced precisely to reduce boilerplate. We're trading human convenience for machine comprehension.
Chris Lattner
That's the tension. Except I'd argue explicit error handling helps humans too. When I'm reading unfamiliar code, I want to know where it can fail. Exceptions hide that information. So maybe what helps AI also helps humans in code review.
Andrea Moore
What about verbosity? AI doesn't get tired of typing. Should we optimize for AI comprehension even if it means more verbose code?
Chris Lattner
That's a false dichotomy. Good abstractions reduce verbosity without sacrificing clarity. The problem with verbose code isn't the length—it's the signal-to-noise ratio. If every line carries meaningful information, length doesn't matter. It's the redundant boilerplate that causes problems.
Greg Evans
How do type systems affect AI code generation?
Chris Lattner
Strongly. In dynamically typed languages, the model has to infer types from context and examples. In statically typed languages, types constrain what code is valid. That's fewer degrees of freedom for the AI to explore. It can focus on the logic instead of guessing at types.
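The point that declared types are machine-readable constraints can be shown even in Python, where annotations are optional but inspectable. This sketch uses the standard `typing.get_type_hints` to read a signature back as data; the function name is illustrative.

```python
# Sketch: a typed signature is a small specification that tools (a checker,
# an IDE, or an AI assistant) can query instead of inferring from usage.
from typing import get_type_hints

def scale(vector: list[float], factor: float) -> list[float]:
    """Multiply every component of a vector by a scalar."""
    return [x * factor for x in vector]

hints = get_type_hints(scale)
print(hints["factor"])   # <class 'float'>
print(hints["return"])
```

With the signature pinned down, there are fewer degrees of freedom left for a generator to explore: the inputs and output are no longer something to guess from examples.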
Andrea Moore
Does that make statically typed languages better for AI collaboration?
Chris Lattner
For certain tasks, yes. When you want the AI to generate code that integrates with existing systems, types are invaluable. They're a specification. But for exploratory prototyping, dynamic languages might be fine. It depends on the use case.
Greg Evans
What about language features that exist purely for human convenience? Things like syntactic sugar.
Chris Lattner
Syntactic sugar can help or hurt. If it creates multiple ways to express the same concept, that's potentially confusing for AI. The model has to learn all the variants. But if sugar makes common patterns more concise and recognizable, that could help both humans and AI.
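The "multiple variants" problem is easy to see in Python, where the same computation has several equivalent spellings. The comprehension is the concise, idiomatic form a model sees most often; the others are variation it must learn to unify.

```python
# Three equivalent ways to square a list: an explicit loop, a comprehension
# (syntactic sugar), and a functional form. Same meaning, three surface forms.
values = [1, 2, 3]

squares_loop = []
for v in values:
    squares_loop.append(v * v)

squares_comp = [v * v for v in values]            # the idiomatic sugar
squares_map = list(map(lambda v: v * v, values))  # functional variant

assert squares_loop == squares_comp == squares_map == [1, 4, 9]
```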
Andrea Moore
Should we be thinking about languages specifically designed for AI-generated code versus human-written code?
Chris Lattner
I don't think so. The moment a human needs to read or modify AI-generated code, you need human-friendly syntax. And if AI can't handle human-written code, it's not very useful. The languages need to work for both.
Greg Evans
What about domain-specific languages? Could we design DSLs that are easier for AI to manipulate?
Chris Lattner
DSLs are interesting because they have constrained domains. A SQL query is easier to generate than arbitrary procedural code because the structure is more predictable. But DSLs also have learning curves. If only AI specialists can read the DSL, you've created a new problem.
Andrea Moore
How important are comments and documentation for AI?
Chris Lattner
Very important, but probably not in the way people expect. AI models can use comments as context, but they also learn from the gap between what comments say and what code does. If comments are outdated or misleading, that's bad training data. High-quality documentation helps AI the same way it helps humans—by explaining intent.
Greg Evans
Should languages enforce documentation? Some type systems let you encode preconditions and postconditions.
Chris Lattner
Contracts and specifications in code are powerful. They're machine-checkable documentation. If your language lets you say 'this function requires a non-null input and guarantees a positive output,' that's information both compilers and AI can use. It's better than a comment that says the same thing but can drift out of sync.
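The 'non-null input, positive output' example can be made executable even in a language without native contracts. This is a minimal sketch using assertions as checked pre- and postconditions; the `contract` decorator name is illustrative, and dedicated contract libraries go much further.

```python
# A minimal sketch of machine-checkable contracts in plain Python:
# the decorator runs a precondition before the call and a postcondition after.
from functools import wraps

def contract(pre, post):
    """Wrap a single-argument function with checked pre/postconditions."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(x):
            assert pre(x), f"precondition failed for {x!r}"
            result = fn(x)
            assert post(result), f"postcondition failed for {result!r}"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x is not None and x > 0, post=lambda r: r > 0)
def square_root(x: float) -> float:
    return x ** 0.5
```

Unlike a comment, the contract is executed, so it cannot silently drift out of sync with the code it describes.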
Andrea Moore
Are we moving toward languages with more formal specifications?
Chris Lattner
Gradually. Rust's borrow checker is a form of specification—it enforces memory safety properties at compile time. Dependent type systems take this further. But there's a tradeoff with accessibility. The more formal the specifications, the steeper the learning curve.
Greg Evans
Does AI change that tradeoff? If AI can generate the formal specifications, maybe humans just review them.
Chris Lattner
That's optimistic. Reviewing formal specifications requires understanding them, which requires the same expertise as writing them. You can't just outsource the hard parts to AI and expect meaningful review. The human needs to understand what's being claimed.
Andrea Moore
What about readability? Some languages prioritize it—Python's 'there should be one obvious way to do it.' Does that help AI?
Chris Lattner
Consistency helps everyone. If there's one idiomatic way to express something, the AI sees that pattern more often in training data and reproduces it reliably. Conversely, languages with many dialects or styles—like Perl's 'there's more than one way to do it'—create more variation for the model to learn.
Greg Evans
Should we standardize on fewer languages to improve AI performance?
Chris Lattner
That's a separate question. Language diversity exists because different domains have different needs. Systems programming needs different tools than web development. Standardizing everything on one language would sacrifice domain-specific optimizations. Better to have good AI support for multiple languages.
Andrea Moore
How do you design a language when you know AI will be a major consumer of code written in it?
Chris Lattner
You design for clarity and predictability. Avoid ambiguity. Make control flow explicit. Use types to encode invariants. Provide good error messages that explain what went wrong and why. Interestingly, these are the same principles that make languages good for human collaboration.
Greg Evans
Are there features that help humans but confuse AI?
Chris Lattner
Implicit behavior can be tricky. Operator overloading, implicit conversions, magic methods—things where the same syntax means different things in different contexts. Humans can use domain knowledge to disambiguate. AI has to learn those patterns from examples, which is less reliable.
Andrea Moore
Should we avoid those features?
Chris Lattner
Not necessarily. They have legitimate uses. But use them judiciously. In Swift, we have protocols that let you define custom behavior for operators, but we encourage using them only when they improve clarity—like defining '+' for vector addition. Not for unrelated concepts just because you can.
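The vector-addition case Lattner gives can be sketched with Python's magic methods, which the conversation mentioned earlier. The `Vec2` class is illustrative: '+' is overloaded only where it mirrors the established mathematical meaning.

```python
# Overloading '+' via the __add__ magic method, restricted to a case where
# the operator matches its conventional meaning: component-wise addition.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec2:
    x: float
    y: float

    def __add__(self, other: "Vec2") -> "Vec2":
        return Vec2(self.x + other.x, self.y + other.y)

print(Vec2(1.0, 2.0) + Vec2(3.0, 4.0))
```

The same mechanism could make '+' mean anything, which is exactly the disambiguation burden the discussion warns about; keeping the overload aligned with domain convention is what keeps it readable for humans and learnable for models.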
Greg Evans
What role does tooling play? Can good IDE support compensate for language complexity?
Chris Lattner
Tooling is crucial. A language server that provides accurate type information, jump-to-definition, and refactoring support gives AI valuable context. It's not just about the language syntax—it's about the metadata available at development time.
Andrea Moore
Should languages be designed with their tooling ecosystems in mind?
Chris Lattner
Absolutely. Swift was designed with IDE integration as a first-class concern. The compiler exposes APIs for tools to query semantic information. That's valuable for both human-facing tools and AI systems that need to understand code structure.
Greg Evans
What about experimental language features? Should we be more conservative if AI is involved?
Chris Lattner
I'd say the opposite. In some ways, AI models can adapt to new features faster than humans can learn them. If a feature improves code quality or safety, the benefits probably outweigh the cost of AI needing more training data. But you need good documentation and examples.
Andrea Moore
How do you see language design evolving over the next decade with AI in the picture?
Chris Lattner
I think we'll see more emphasis on machine-checkable properties—types, contracts, invariants. Not because of AI specifically, but because verification matters more when you're reviewing code you didn't write. We'll also see better integration between languages and their semantic analysis tools.
Greg Evans
Will new languages emerge designed specifically for AI collaboration?
Chris Lattner
Possibly, but I'm skeptical they'll succeed. The value of a language is partially its ecosystem and community. A language that only AI can write well but humans can't read won't get adoption. The sweet spot is languages that are good for both.
Andrea Moore
What's your biggest concern about AI's influence on language design?
Chris Lattner
That we optimize for the wrong things. If we make languages easier for current AI models at the expense of human understanding, we've failed. The goal should be clarity and correctness for all readers—human and machine. Those objectives align more than people think.
Greg Evans
Any final thoughts on where this is heading?
Chris Lattner
Language design is a multi-decade process. Whatever we build today will be with us for a long time. We should design for human understanding first, machine understanding second, and trust that good language design serves both constituencies. The fundamentals—clarity, consistency, safety—haven't changed.
Andrea Moore
Thank you, Chris. This has been enlightening.
Greg Evans
Tomorrow we examine debugging AI-generated code with Andreas Zeller.
Andrea Moore
Until then, remember that languages shape thought—whether that thought is human or machine. Good night.