
TL;DR
We built a LinkedIn growth system used by $100M+ founders, then studied the AI detection science and rebuilt the humanizer from scratch — three tiers, real forensic signal only, no dress code. Then we open-sourced everything: 10 tools, 43 files, MIT license, free on GitHub. Works with Claude Code, Claude Desktop, or any AI tool you already use.
For a year, this stayed private
We built a LinkedIn growth system for founders running $100M+ companies. Their posts got seen. Their comments earned CEO-to-CEO replies. Their rhythm printed real inbound.
Most of what's inside isn't invented — it's distilled from our Continuous Intelligence: tracking thousands of top creators daily, stress-testing what still holds up in 2026. The real work was building the scaffolding around it.
My team didn't want me to give it away. Fuck that.
10 tools. 43 files. Free on GitHub. MIT licensed.
- 10 hook formulas with verified engagement multipliers
- Comment templates — including the one that got a SaaS CEO to reply in 22 hours
- A 20-point pre-publish checklist for LinkedIn's 2026 ranking quirks
- AI-tell scrubber so posts stop reading like ChatGPT wrote them
- A 7-day content planner and profile rewriter (9 sections, 220-char headline formula)
- Benchmarks from real pharma, law, and SaaS teams
Why the humanizer in that toolkit is different
When we built the first version of the AI-tell scrubber, we did what most people do: read every "signs of AI writing" article, aggregated the patterns, and turned them into rules. The rules felt rigorous. Then we read the actual research.
Stanford tested 7 AI detectors against non-native English writers: 61.3% false positive rate. OpenAI shut down its own AI text classifier in 2023 — it peaked at 26% accuracy. A federal court ordered Adelphi University to expunge an AI cheating violation against an autistic student whose paper was flagged as "too advanced."
And Wikipedia's own "signs of AI writing" page literally says these patterns are "potential signs, not the problem itself." Most people skip that line and quote the rest as a dress code.
A detector that flags 61% of ESL writers as bots isn't detecting AI. It's doing class detection. We had baked some of that into our own tool.
The distinction that changed everything: forensic vs. aesthetic
Some AI signals are forensic — real artifacts from language model internals. Others are aesthetic — stylistic patterns that correlate with AI but have centuries of human precedent.
Forensic rules (actual signal):
- Em dash overuse — LLMs use it at 4–6× the rate of human writers
- Knowledge-cutoff disclaimers — the "as of my last update" tic
- Broken `oaicite` citation markers leaking into output
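Forensic checks like these are mechanical enough to sketch. Here's an illustrative version, not the toolkit's actual code; the regexes, the `forensic_flags` name, and the baseline em dash rate are all assumptions you'd tune on your own corpus:

```python
import re

# Illustrative forensic checks; patterns are assumptions based on the
# artifacts named above, not the toolkit's actual rules.
CITATION_LEAK = re.compile(r"oaicite|contentReference", re.IGNORECASE)
CUTOFF_PHRASES = re.compile(
    r"as of my last (update|training)|knowledge cutoff", re.IGNORECASE
)
HUMAN_EM_DASH_BASELINE = 0.002  # assumed em dashes per word in human prose

def forensic_flags(text: str) -> list[str]:
    """Return the forensic artifacts found in text (empty list = clean)."""
    flags = []
    if CITATION_LEAK.search(text):
        flags.append("citation-leak")
    if CUTOFF_PHRASES.search(text):
        flags.append("knowledge-cutoff-disclaimer")
    words = max(len(text.split()), 1)
    # LLMs reportedly use em dashes at 4-6x the human rate; flag at ~4x baseline
    if text.count("\u2014") / words > 4 * HUMAN_EM_DASH_BASELINE:
        flags.append("em-dash-overuse")
    return flags
```

The point of keeping these checks separate from the stylistic ones is that each flag here points at a concrete generation artifact, not a taste judgment.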
Aesthetic rules that punish good writing:
- The rule-of-three flag targets Lincoln: "of the people, by the people, for the people"
- The em dash flag targets Emily Dickinson — she used it more than anyone
- The "AI vocabulary" list calls "robust" inhuman — a word every epidemiologist has used for 100 years
We were treating a thermometer like a thermostat. So we rebuilt.
The scrubber now runs in three tiers
Forensic — always on. Real LLM artifacts only: citation leaks, cutoff language, statistical over-patterns.
Strict — on by default. Patterns genuinely rare in human writing at scale, with override available per case.
Aesthetic — opt-in only. The dress code. Em dash preferences, transition ratios, rhythm checks. Some writers want it. It's not a verdict.
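The tier policy above reduces to a small dispatch: forensic always fires, strict fires unless overridden per case, aesthetic fires only on opt-in. A minimal sketch, with hypothetical names (`Rule`, `ScrubberConfig` are not the toolkit's API):

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    tier: str  # "forensic" | "strict" | "aesthetic"

@dataclass
class ScrubberConfig:
    strict_overrides: set[str] = field(default_factory=set)  # per-case opt-outs
    aesthetic_opt_in: bool = False

    def active(self, rule: Rule) -> bool:
        if rule.tier == "forensic":
            return True  # always on: real LLM artifacts
        if rule.tier == "strict":
            return rule.name not in self.strict_overrides  # on by default
        return self.aesthetic_opt_in  # the dress code: opt-in only
```

The design choice worth copying: overrides live in config, not in the rules themselves, so a writer can silence one strict rule without demoting it for everyone.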
Three new skills shipped alongside:
- detector-tester — runs your post through 5 AI detectors simultaneously and shows where they disagree (they always disagree)
- emoji-detector — catches the lightbulb / rocket / sparkles cluster that LLMs favor
- rules-explainer — every rule with the human writer who fails it, so you understand what you're actually enforcing
Get it
GitHub: github.com/sergebulaev/linkedin-skills — MIT license, fork it, adapt it, ship it.
I don't know if it'll work for everyone. It worked for the people who paid for it. Take what's useful, leave the rest.
Star it if it helps. Open an issue if a rule punishes a good writer. That's the whole point.