An AI invention laboratory · 25 USPTO claims live

Invent. Patent. License. Repeat.

AI is making code easy to copy. The invention underneath isn't. We discover novel primitives in language models, file on them at the USPTO before public disclosure, and license the IP. That's the whole company.

Why now

Scale has stopped
being the moat.

For a decade, AI progress was measured in GPUs. That era is ending. The frontier is no longer who can compute the most — it's who can invent the primitives that make compute matter, and who files on them first. An invention-first, IP-native lab is the shape of a company built for the window that just opened. The labs that move now own the next decade.

Priority date locked · PCT window open · Pipeline filing · Licensing active

Scale has plateaued

Gains from bigger runs and more parameters are slowing. The next breakthroughs are primitive-level, not scale-level — and invention, not compute, is what finds them.

Open weights, closed IP

Frontier-quality base models are becoming commodity. The advantage moves to what you can invent to run inside them — and who owns the rights to those mechanisms.

The licensing vacuum

Every downstream AI company needs defensible, differentiated mechanisms. Few have any. IP-first invention labs are the natural supply side of that market.

Priority windows are narrow

Once a primitive is publicly disclosed, it is no longer patentable. Labs that file before the field rediscovers a mechanism own it for the next decade.

About

An independent lab
at the edge of the known.

01

An AI invention laboratory.

Named after the black hole in Interstellar — a place at the edge of what's known, past which the rules change. That's the work: invent where the published literature hasn't looked yet, then patent and license what we find.

02

Not a SaaS. Not a model lab. Not a consultancy.

We don't host inference. We don't sell per-seat subscriptions. We don't staff your team. We invent novel capabilities in language models, file patents on what we find, and license the IP to the companies that need it.

03

Independent. US-based. Privately held.

Founded and based in the United States. Currently fundraising. Invention first, IP second, commerce third. The base model stays neutral — our licensees own their deployments end-to-end.

04

Execution is no longer the moat.

Agentic coding is collapsing the cost of building software. What stays scarce is the underlying invention — and the patents that surround it. We file broadly on purpose. In a world where anyone can build anything, defensibility moves from code to IP.

Process

Invent. Patent. License.
The whole business.

invention.py

# conjecture
hypothesis = "models can be written to, not just trained"

# validate across families
for model in [qwen, llama, mistral, phi, gemma]:
    prove(model, production_scale=True)

status = "replicated"
What we're building

A portfolio of
patentable primitives.

Each invention starts with a conjecture the published literature treats as expensive, impossible, or irrelevant. We test it, patent it, and license it.

01

DeltaWrite

Patent pending · USPTO · April 2026 · 25 claims

A method for writing information directly into the weights of a frozen pretrained language model — in milliseconds, using only forward passes. The output is a portable Knowledge Module: loadable, stackable, swappable, and revocable at will. It replaces fine-tuning, RAG, and prompt engineering with a single persistent write operation on the model itself.
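The loadable/stackable/revocable lifecycle described above can be pictured with a minimal sketch. Everything here is illustrative: the names `KnowledgeModule`, `FrozenModel`, `load`, and `revoke` are invented for this example and are not the actual DeltaWrite API, which is not publicly documented.

```python
# Hypothetical interface sketch only; all names are illustrative,
# not the real DeltaWrite API.
from dataclasses import dataclass, field


@dataclass
class KnowledgeModule:
    """A portable unit of knowledge: loadable, stackable, revocable."""
    name: str
    facts: dict  # fact id -> content written into the weights


@dataclass
class FrozenModel:
    """Stands in for a frozen pretrained model that accepts weight writes."""
    loaded: list = field(default_factory=list)

    def load(self, km: KnowledgeModule) -> None:
        # One persistent write; no gradient updates to the base weights.
        self.loaded.append(km)

    def revoke(self, name: str) -> None:
        # Revocable at will: removing the module removes the capability.
        self.loaded = [km for km in self.loaded if km.name != name]

    def knows(self, fact_id: str) -> bool:
        return any(fact_id in km.facts for km in self.loaded)


model = FrozenModel()
model.load(KnowledgeModule("hr-policy", {"pto": "20 days"}))
model.load(KnowledgeModule("tax-2026", {"bracket": "updated"}))  # stackable
assert model.knows("pto") and model.knows("bracket")
model.revoke("hr-policy")                                        # revocable
assert not model.knows("pto") and model.knows("bracket")
```

The point of the sketch is the contract, not the mechanism: a write is a discrete, reversible operation on the deployed model rather than a training run.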

02

LRM v2

Novel architecture · In active build

A transformer architecture trained from scratch to be populated by Knowledge Modules at runtime. Reasoning layers handle language, logic, math, and code. Knowledge-bank layers start empty and take their content from whichever KMs are loaded. A small reasoning engine plus a strong capability module approaches frontier performance at a fraction of the serving cost.
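The reasoning/knowledge-bank split can be sketched structurally. This is a toy under loud assumptions: layer names, counts, and the attach mechanism are invented for illustration and are not the actual LRM v2 design.

```python
# Toy structural sketch of the described split; every name and shape
# here is an assumption for illustration, not the real architecture.
class KnowledgeBankLayer:
    """Starts empty; takes its content from whichever KM is loaded."""

    def __init__(self) -> None:
        self.payload = None

    def attach(self, payload: dict) -> None:
        self.payload = payload  # populated at runtime, not at training time


class ToyLRM:
    def __init__(self, n_reasoning: int = 4, n_banks: int = 2) -> None:
        # Reasoning layers: trained once for language, logic, math, code.
        self.reasoning = [f"reasoning_layer_{i}" for i in range(n_reasoning)]
        # Knowledge banks: ship empty, filled per deployment.
        self.banks = [KnowledgeBankLayer() for _ in range(n_banks)]

    def active_banks(self) -> int:
        return sum(1 for b in self.banks if b.payload is not None)


lrm = ToyLRM()
assert lrm.active_banks() == 0              # empty at serve time
lrm.banks[0].attach({"domain": "tax-law"})  # capability arrives with the KM
assert lrm.active_banks() == 1
```

The design choice the sketch highlights: capability is a property of what is loaded, not of what was trained, so one small reasoning engine can serve many verticals.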

Next primitives are in private research. Ask us about the pipeline under NDA.

Mission

Work at a different
edge of the stack.

Most AI research optimizes a single axis: bigger models, longer training, more expensive inference. Valuable work — but crowded. We work on new operations on language models that change what a deployed model can do in milliseconds, without retraining, retrieval, or context-window bloat.

25 patent claims filed
5 foundation families
500 live knowledge modules

Primitive Catalog · Priority locked
DeltaWrite
Write to frozen weights in ms
Patent pending
LRM v2
KM-native transformer architecture
In build
Knowledge Modules
Portable. Stackable. Revocable.
Validated
Cross-family replication
Qwen · Llama · Mistral · Phi · Gemma
5 / 5
Scale curve
1.5B · 7B · 14B parameters
Stronger with scale
Pipeline research
Next primitives under test
Ongoing
The business architecture

Why this becomes
a compounding IP engine.

One patent is not a company. A portfolio that compounds across primitives, model families, and verticals is. Four reasons this goes venture-scale.

01

Each license funds the next invention.

The cadence is designed to compound. A licensed primitive generates revenue. That revenue funds the team building the next one. Every new invention is a separable asset the prior work did not have.

02

Patents outlive models.

A fine-tune is obsolete when the next base model ships. A granted patent runs for roughly twenty years. We accumulate long-dated, cross-model assets in a field that churns quarterly.

03

Licensing fits the buyer.

Enterprise and regulated teams cannot ship a black-box SaaS. They need capability they can run in their own stack, with their own data, under their own compliance. That is what licensing gives them — and what a hosted product cannot.

04

Distribution is already built.

Every company running a language model is a potential licensee. We do not sell infrastructure or hosting. The foundation-model ecosystem — open-source and frontier — does the scaling for us.

What we've achieved

Traction.
In IP and in proof.

Priority locked · USPTO · 2026-04-11

25
Patent claims filed
USPTO provisional · April 2026 · covering the primitive, the mechanism, and the system-level use. Non-provisional window open.
5
Frontier model families replicated
Same mechanism, same code path — no model-specific tuning. Replicated independently on Qwen, Llama, Mistral, Phi, and Gemma.
Live demo, shareable under NDA
A 7B-parameter production model, hundreds of capabilities loaded simultaneously. Qualified investors and licensees can run their own evaluation prompts.
Faster construction than LoRA
On the same knowledge-injection benchmark. ~112 ms per fact versus hours or days for the best-known adapter method. Reproducible numbers, shared on request.
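The ~112 ms/fact figure can be put in perspective with back-of-envelope arithmetic. Only the per-fact latency comes from the text; the 10,000-fact corpus and the 3-hour LoRA run are assumptions chosen purely for scale.

```python
# Back-of-envelope check on the ~112 ms/fact figure quoted above.
# The corpus size and the LoRA runtime are assumptions, not measured values.
ms_per_fact = 112
n_facts = 10_000                      # assumed corpus size
deltawrite_s = ms_per_fact * n_facts / 1000
assumed_lora_s = 3 * 3600             # assumed 3-hour adapter fine-tune

print(f"DeltaWrite: {deltawrite_s:.0f} s (~{deltawrite_s / 60:.0f} min)")
print(f"Speedup vs the assumed LoRA run: {assumed_lora_s / deltawrite_s:.1f}x")
```

Under these assumptions, 10,000 facts land in roughly 19 minutes; a longer or repeated fine-tune baseline widens the gap proportionally.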
A different kind of AI company

Built like a lab.
Wired for IP.

Gargantua Labs invents novel AI primitives, files patents on them, and licenses the IP. That structure is deliberate — and it rules out the four positions most AI companies occupy.

Not a frontier lab
Compute isn’t the moat. Novel primitives are.
Not an AI SaaS
We don’t wrap existing models. We invent what runs inside them.
Not a research nonprofit
We publish after priority dates lock. The IP is the asset.
Not a consultancy
Licensed inventions scale without a custom engagement.
Patent-first research
Priority dates locked at the USPTO before public disclosure.
Replicated results
Findings verified across Qwen, Llama, and Mistral families.
Licensable IP
Non-exclusive, exclusive, or field-of-use. Customer-owned deployments.
A repeatable pipeline
Invent. Validate. File. License. Then the next primitive.
The thesis

"A world where a language model is an operating system and knowledge is a file."


The end state

Our vision, Gargantua Labs

The upshot

Where we're headed

Invention first · IP second · Commerce third

Deep-tech · Patent-backed · Independent · Fundraising now
Who we work with

Three ways
to engage

Investors. Licensees. Research partners. We're sequencing a short list of each this quarter. Every engagement starts the same way: a private conversation under NDA.

01

Investors

We're raising now.

  • Licensing engine, not SaaS
  • Data room access after first call
  • Direct founder channel
02

Licensees

Your model is the bottleneck. Your knowledge is the moat.

  • Bring your own evaluation prompts
  • Non-exclusive, exclusive, or field-of-use
  • Your deployment, your infrastructure
  • Reproducible benchmark package
03

Research partners

Academic collaborations and strategic co-development.

  • Joint research with our team
  • Early access to pipeline work
  • Structured IP arrangements
  • Long-horizon roadmap alignment

Invest in the lab.
License the IP.

Capital raise is live. Licensing slate is opening. We're prioritizing a small number of flagship investors and licensees this quarter — early conversations are how we set the order. Whichever side you're on, the first step is a private one.

Investor intros and licensing inquiries welcome. NDA standard.