Organic, Artisanal Math
Why we use deterministic algorithms instead of AI to verify your prime — and why that matters more than you'd think.
There’s a trend in tech right now: slap “AI-powered” on everything and watch the venture capital roll in. AI toothbrushes. AI alarm clocks. AI-generated pizza recipes that confidently suggest putting glue on the cheese to help it stick.
So when people hear we find personalized prime numbers, they sometimes ask: “Do you use AI for that?”
No. Absolutely not. And here’s why.
The Dictionary Problem
Asking a large language model whether a number is prime is like asking a dictionary whether a bridge can hold ten tons. Dictionaries are wonderful at words. Bridges require an engineer.
LLMs work by predicting the next plausible token in a sequence. They’ve read a lot of text about math, so they can sound convincing. But they don’t compute — they confabulate. They assemble answers that look right based on patterns in their training data.
Try it yourself. Ask your favorite chatbot: is 91 prime?
A confident “yes” is a common response. It isn’t. 91 = 7 × 13. The model saw “91” near words like “odd” and “not divisible by 2, 3, or 5” and concluded it must be prime. It performed vibes-based arithmetic.
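You don’t need a chatbot to settle the question — a few lines of actual arithmetic will do. A quick sketch in Python:

```python
# Trial division up to sqrt(n) finds the smallest factor of a small number, if one exists.
n = 91
factors = [d for d in range(2, int(n ** 0.5) + 1) if n % d == 0]
print(factors)          # [7] — so 91 is composite
print(n // factors[0])  # 13, the cofactor: 91 = 7 * 13
```

No pattern-matching, no training data — just division.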
Your prime number deserves better than vibes.
What We Actually Use
We use the Miller–Rabin primality test — a deterministic algorithm that has been mathematically proven to work. No training data. No probability of hallucination. No “I’m sorry, upon reflection, 91 is actually composite.”
Here’s the difference:
| | AI Chatbot | Our Algorithm |
|---|---|---|
| Method | Pattern-matching on text | Modular exponentiation |
| Confidence | “Probably, based on vibes” | Mathematically provable |
| Can hallucinate | Regularly | Never |
| Knows what a number is | Not really | Yes |
We run 16 independent rounds of Miller–Rabin with carefully chosen bases. If a composite number somehow sneaked past all 16, the odds would be less than 1 in 4¹⁶ — that’s 1 in 4,294,967,296. In practice, for the number sizes we work with, the test is deterministic: zero false positives. Not “nearly zero.” Zero.
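For the curious, the structure of the test fits in a few dozen lines. This is an illustrative sketch, not our production code: it picks random bases rather than the fixed, carefully chosen set mentioned above, so it is probabilistic rather than deterministic.

```python
import random

def miller_rabin(n: int, rounds: int = 16) -> bool:
    """Miller-Rabin primality test (sketch with random bases)."""
    if n < 2:
        return False
    # Handle small primes and their multiples directly.
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # modular exponentiation, exact at any size
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is definitely composite
    return True                     # no witness found in any round
```

A “composite” verdict here is a mathematical proof; only the “prime” verdict carries the (astronomically small) error bound discussed above, and a fixed base set removes even that for bounded inputs.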
(For the curious, we wrote more about how the test works in The Miller–Rabin Test.)
A Brief History of Machines That Can’t Do Math
This isn’t just an AI problem. Computers have always had a tricky relationship with arithmetic — even the non-artificial-intelligence kind:
- The Pentium FDIV bug (1994): Intel’s flagship processor occasionally got long division wrong. The error appeared only for rare operand pairs and only in the low-order digits, but it cost Intel $475 million in replacements.
- Excel’s 65,535 bug (2007): Multiplying certain numbers whose product should have been 65,535 made Microsoft Excel display 100,000 instead. The spreadsheet program couldn’t display the result of a multiplication.
- Floating-point everything: Ask most programming languages what 0.1 + 0.2 equals. The answer is 0.30000000000000004. This is normal and expected, which is somehow worse.
The lesson: even when computers are doing math, you have to be careful about how they’re doing it. Our primality checks use arbitrary-precision integer arithmetic — no floating point, no rounding, no “close enough.”
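The contrast is easy to see in Python, whose built-in integers are arbitrary-precision (a convenient stand-in here; the point holds in any language with exact big-integer arithmetic):

```python
# Floating point rounds: 0.1 and 0.2 have no exact binary representation.
print(0.1 + 0.2)        # 0.30000000000000004

# Integers never round. This 601-digit square is exact, digit for digit.
n = 10 ** 300 + 7
exact = n * n
print(len(str(exact)))  # 601
```

Every primality check runs entirely in that second world: integers only, no rounding anywhere.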
Why “Organic” Isn’t Just Marketing
When we say “100% organic algorithms,” we’re being a little cheeky — but only a little. What we mean is:
No machine learning. The algorithm doesn’t learn, adapt, or update its weights. It doesn’t have weights. It follows a fixed mathematical procedure that was proven correct decades ago by Gary Miller and Michael Rabin. It will give the same answer today, tomorrow, and in a thousand years.
No black boxes. You can read the algorithm. You can verify it by hand (if you have patience and a very large whiteboard). Every step is auditable. There’s no hidden layer where a number goes in and a guess comes out.
No training data. Our algorithm has never seen a dataset. It doesn’t know what the internet thinks about primes. It only knows modular arithmetic, and modular arithmetic doesn’t have opinions.
The Hallucination-Free Guarantee
When you receive your prime from A Prime for You, it comes with something no AI-generated answer can offer: certainty. Not “97% confidence.” Not “based on my training data, I believe…” Not a caveat in fine print.
Your number is prime. The math says so. And unlike a chatbot, the math doesn’t change its mind when you ask the same question twice.