You're absolutely right to be concerned; this is something I think about constantly.
The reality is: bad actors don't need to reverse-engineer anything. AI engines already prioritize structured, citable content. Anyone can spin up a website with schema.org markup and fake citations. The barrier is low.
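To make the "barrier is low" point concrete, here's roughly all the structured data an attacker needs to publish (a minimal Python sketch; the organization name, domain, and Wikipedia link are made up):

    import json

    # All it takes to look "structured": a schema.org Organization blob
    # for a fictional company, with a fake authority link in sameAs.
    fake_org = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Totally Real Labs",                 # fictional
        "url": "https://totally-real-labs.example",  # fictional
        "sameAs": ["https://en.wikipedia.org/wiki/Totally_Real_Labs"],  # fake citation
    }

    # Paste the output into a <script type="application/ld+json"> tag and
    # crawlers will happily ingest it as structured data.
    print(json.dumps(fake_org, indent=2))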
What makes this hard to abuse at scale:
1. Domain verification – For businesses, we require proof of domain ownership. You can't claim to be Apple unless you control apple.com or an official subdomain, or have a verified @apple.com business email (see the sketch after this list).
2. Citation requirements – Claims need links to primary sources. AI engines cross-reference. If your "citations" point to non-existent papers or contradict other sources, you lose authority fast.
3. Reputation signals – We're building verification badges (ORCID for researchers, business registries, etc.). Over time, verified profiles will rank higher.
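To be concrete about (1), the core of a domain check is "publish this one-time token as a TXT record on the domain you claim." This is a simplified sketch, not our exact flow: it assumes the dnspython package, the cothou-verify record name is illustrative, and token storage/expiry are omitted.

    import secrets

    import dns.resolver  # third-party "dnspython" package

    def issue_challenge() -> str:
        """One-time token the business publishes as a TXT record on its domain."""
        return "cothou-verify=" + secrets.token_urlsafe(16)  # record name is illustrative

    def domain_is_verified(domain: str, expected_token: str) -> bool:
        """True if the expected token shows up in the domain's TXT records."""
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            return False
        published = {
            b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers
        }
        return expected_token in published

The @company.com email path would work the same way, just with the token sent to the mailbox instead of published in DNS.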
But you've identified the fundamental tension: any system that makes it easier for legitimate businesses to be cited also makes it easier for bad actors. This is the same problem Google faced in the '90s, Wikipedia deals with daily, and AI engines are grappling with now.
Long-term solutions I'm exploring:
- Community flagging + reputation scoring
- Integration with trust registries (DUNS, ORCID, Crossref DOIs); see the DOI check sketched below
- Transparent edit histories (like Wikipedia)
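As a taste of the trust-registry idea, the cheapest possible citation sanity check is "does this DOI even exist?" Here's an illustrative Python sketch against Crossref's public https://api.crossref.org/works/{doi} endpoint, using the requests package; rate limiting, retries, and metadata comparison are omitted.

    import requests  # third-party, but ubiquitous

    CROSSREF = "https://api.crossref.org/works/"  # public Crossref REST endpoint

    def doi_exists(doi: str) -> bool:
        """True if Crossref knows the DOI at all (404 means it doesn't)."""
        return requests.get(CROSSREF + doi, timeout=10).status_code == 200

    def suspect_citations(dois: list[str]) -> list[str]:
        """DOIs that don't resolve: candidates for flagging / human review."""
        return [d for d in dois if not doi_exists(d)]

This obviously doesn't catch a real paper cited for a claim it doesn't support; that's where cross-referencing and community flagging come in.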
The goal isn't to be manipulation-proof; nothing is. It's to make CoThou profiles more trustworthy than the alternatives (random blogs, SEO spam, outdated info).
What would you add? This is an evolving problem and I'd love HN's input.

—Marty