TL;DR:
  • A two-person company called LEAP 71 has designed and tested nine rocket engines in under two years—including an aerospike, the "holy grail" engine that could enable single-stage-to-orbit spacecraft

  • They're not using generative AI (ChatGPT, image generators, neural networks). They built something called Noyron—a computational model that encodes physics, not patterns

  • The difference matters: Noyron is deterministic, traceable, and explainable. Every design decision lives in reviewable code. No hallucinations. No black boxes.

  • For leaders worried about AI governance: This is what "auditable AI" actually looks like—and it's already building rocket engines that work on the first test

INTRODUCTION

The Headline That Should've Broken the Internet

Here's a story that somehow flew under everyone's radar:

A company with two employees—two—just tested nine rocket engines in under two years. Not simulations. Not prototypes gathering dust. Real engines that ignite, burn at 3,000°C, and work on the first attempt.

One of them was an aerospike.

If that name doesn't mean anything to you, here's the short version: aerospace engineers have been chasing functional aerospikes for decades. NASA's X-33 program burned through $1.3 billion trying to build one in the 1990s before getting cancelled. The engines actually worked fine—it was the composite fuel tanks that failed. But the program died, and aerospikes became the industry's favorite "what if."

LEAP 71 built and tested theirs in weeks.

A LEAP 71 rocket engine during a hot fire test

Earlier this month, they fired two 20 kN methalox engines—the same propellant SpaceX uses in Raptor. The bell nozzle hit 93% combustion efficiency on its first attempt. That's a number that typically takes years of iteration.

So how does a two-person startup outpace billion-dollar programs?

They threw out the entire design process and started over.


AEROSPIKES

Why Aerospikes Are the Holy Grail

Traditional rocket engines use bell-shaped nozzles. They work great—at one specific altitude. At sea level, the exhaust over-expands and you lose efficiency. In the vacuum of space, it under-expands and you lose thrust. Bell nozzles are a compromise, optimized for one slice of the flight.

Aerospikes flip this problem inside out.

Instead of containing exhaust inside a bell, aerospikes fire exhaust along the outside of a central spike. The atmosphere itself becomes the outer wall of the nozzle. At sea level, high air pressure squeezes the exhaust tight against the spike. As the rocket climbs and pressure drops, the exhaust naturally expands outward—creating a "virtual nozzle" that's always perfectly tuned to altitude.

The result? Near-optimal efficiency from launchpad to orbit. No compromise.
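The altitude compromise falls directly out of the standard rocket thrust equation, F = ṁ·vₑ + (pₑ − pₐ)·Aₑ. Here's a toy sketch (illustrative numbers, not LEAP 71's model) showing how the pressure-thrust term punishes a fixed-geometry bell nozzle away from its design altitude:

```python
# Toy illustration: the rocket thrust equation
#   F = m_dot * v_e + (p_e - p_a) * A_e
# shows why a fixed bell nozzle is only ideal at one ambient pressure p_a.
# All constants below are illustrative, not real engine data.

M_DOT = 5.0       # propellant mass flow, kg/s
V_E   = 3000.0    # exhaust velocity, m/s
P_E   = 40_000.0  # nozzle exit pressure, Pa (fixed by the bell's geometry)
A_E   = 0.05      # nozzle exit area, m^2

def thrust(p_ambient: float) -> float:
    """Momentum thrust plus pressure thrust for a fixed-geometry nozzle."""
    return M_DOT * V_E + (P_E - p_ambient) * A_E

for label, p_a in [("sea level", 101_325.0), ("10 km", 26_500.0), ("vacuum", 0.0)]:
    # Over-expanded: ambient pressure exceeds exit pressure (thrust penalty).
    # Under-expanded: exhaust keeps expanding after it leaves the nozzle.
    regime = "over-expanded" if p_a > P_E else "under-expanded"
    print(f"{label:>9}: thrust = {thrust(p_a)/1000:.1f} kN ({regime})")
```

The pressure term is negative at sea level and positive in vacuum, but a fixed bell can never be matched everywhere. The aerospike's "virtual nozzle" effectively keeps that term near its optimum throughout the climb.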

This is why aerospikes have been the centerpiece of every serious Single-Stage-to-Orbit (SSTO) concept for decades. SSTO is the dream: one spacecraft that reaches orbit intact, lands, refuels, and flies again—like an airplane. No dropped stages. No hardware falling into the ocean. No assembling multiple rockets into one launcher.

The reason we don't have SSTO vehicles isn't physics. It's engineering complexity. Aerospikes are notoriously difficult to manufacture and iterate. Every design cycle takes months.

Unless you're LEAP 71.

THE BIGGER PICTURE

For Everyone Who Isn't a Rocket Scientist

LEAP 71 didn't use generative AI. No ChatGPT. No neural networks. No models trained on millions of examples to predict the next likely output.

They built something called Noyron—a "Large Computational Engineering Model" that computes rocket engines from first principles. Physics. Thermodynamics. Fluid dynamics. Material science. Manufacturing constraints. All encoded in algorithms.
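To make the idea concrete, here's a minimal sketch of what "computing an engine from first principles" can look like. This is NOT Noyron, just textbook sizing relations (Aₜ = F / (C_f·p_c), ṁ = p_c·Aₜ / c*) with illustrative numbers; the point is that geometry falls out of explicit, reviewable physics:

```python
# A toy "computational engineering" sketch (NOT Noyron): the throat geometry
# is computed from explicit physics, so identical inputs always yield
# identical, reviewable output.

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class EngineSpec:
    thrust_n: float   # target thrust, N
    p_chamber: float  # chamber pressure, Pa
    c_star: float     # characteristic velocity, m/s (propellant property)
    cf: float         # thrust coefficient (nozzle property)

def size_throat(spec: EngineSpec) -> dict:
    """Standard sizing relations: A_t = F / (C_f * p_c), m_dot = p_c * A_t / c*."""
    a_throat = spec.thrust_n / (spec.cf * spec.p_chamber)
    m_dot = spec.p_chamber * a_throat / spec.c_star
    d_throat = 2.0 * math.sqrt(a_throat / math.pi)
    return {"A_t_m2": a_throat, "m_dot_kg_s": m_dot, "d_throat_m": d_throat}

# Illustrative numbers for a small 20 kN engine (assumed, not LEAP 71's data)
spec = EngineSpec(thrust_n=20_000.0, p_chamber=2.0e6, c_star=1800.0, cf=1.6)
print(size_throat(spec))
```

Run it twice and you get byte-identical output; change an input and every downstream number updates according to equations an engineer can read.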

Lin Kayser, LEAP 71's co-founder, is emphatic about the distinction:

"A Computational Engineering Model is an algorithm, not a neural net. It's closer in spirit to an expert system. Ironically, expert systems were once called 'AI' before neural networks redefined the term."

Lin Kayser

His co-founder Josefine Lissner—an aerospace engineer who cut her teeth in Formula One—puts it more bluntly:

"You cannot ask a black-box AI to generate an airplane and then trust that it works. No one will board a plane whose structure no engineer can understand."

Josefine Lissner

Generative AI systems (ChatGPT, Midjourney, foundation models) are probabilistic pattern matchers. Transformative for language and creativity. But unsuited for domains where "95% accuracy" gets people killed.

Noyron operates differently:

Deterministic outputs. Same inputs produce the same design. Every time. No randomness. No "temperature" settings. This is non-negotiable for certification and regulatory approval.

Complete traceability. Every design decision exists in reviewable source code. If a regulator asks "why is this combustion chamber shaped this way?" there's an answer in the code—not a statistical correlation buried in 175 billion parameters.

Physics-based foundations. Real equations describing real phenomena, validated against real-world tests. Not patterns extracted from training data.

No hallucinations. Noyron can't invent fake physics. It can't confidently produce nonsense. It outputs what the math says—nothing more, nothing less.
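The traceability point has a simple software analogue. Here's a hedged sketch of the pattern (an assumed illustration, not Noyron's actual code): every computed value carries the engineering rule and inputs that produced it, so "why is this number what it is?" always has an answer in code.

```python
# A minimal sketch of the "traceable decision" idea (assumed pattern, not
# Noyron's code): each computed value carries the rule that produced it,
# so an auditor can trace any output back to explicit physics.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Traced:
    value: float
    rule: str            # the engineering rule applied
    inputs: dict = field(default_factory=dict)  # what the rule saw

def wall_thickness(p_chamber: float, radius: float, sigma_allow: float,
                   safety: float = 2.0) -> Traced:
    """Thin-wall hoop-stress sizing: t = SF * p * r / sigma_allow."""
    t = safety * p_chamber * radius / sigma_allow
    return Traced(
        value=t,
        rule="thin-wall hoop stress, t = SF * p * r / sigma_allow",
        inputs={"p_chamber": p_chamber, "radius": radius,
                "sigma_allow": sigma_allow, "safety": safety},
    )

decision = wall_thickness(p_chamber=2.0e6, radius=0.05, sigma_allow=200e6)
print(f"wall = {decision.value * 1000:.2f} mm, because: {decision.rule}")
```

Contrast this with a neural network, where the same question terminates in billions of opaque weights.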

EXPLAINABLE AI

What This Means for Governance and Compliance

If you've spent any time wrestling with AI governance, you know the core problem: most AI systems are black boxes. You can test their outputs. You can measure their accuracy. But you can't explain why they made a specific decision.

That's a problem for regulated industries. Healthcare. Finance. Aerospace. Anywhere a wrong answer has consequences, regulators want to understand the reasoning—and "the model learned patterns from training data" isn't an acceptable explanation.

Noyron sidesteps this entirely.

Because every design decision is encoded in explicit logic, the entire system is auditable. You can trace any output back to the physics that produced it. You can review the code. You can certify the model itself, not just sample its outputs and hope for the best.

This is what "explainable AI" looks like when it actually works—not as a post-hoc interpretation layer bolted onto a neural network, but as the fundamental architecture.

THE LESSONS FROM LEAP 71

What To Take Back To Your Organization

  1. Know your AI types. Not all AI is generative AI. Computational models, expert systems, and physics-based algorithms solve different problems than neural networks—and some of those problems matter more in regulated industries.

  2. Traceability is a competitive advantage. Noyron can explain every decision because every decision lives in code. Can your AI tools do the same? That gap is where liability lives—and where differentiation is possible.

  3. The hybrid future is here. The smart play isn't "neural nets for everything" or "expert systems forever." It's understanding which tool fits which problem—deterministic where it matters, probabilistic where it helps.

FINAL THOUGHTS

LEAP 71's story teaches us to think differently about how organizations will use AI. While we've all been caught up in the GenAI hype, other kinds of AI are making big strides too.

Our future will be a hybrid of AI systems and agents. But using AI safely, responsibly, and accountably is not optional.


We are out of tokens for this week's context window!

Keep reading and learning, and LEAD the AI Revolution 💪

Hashi & The Context Window Team!
