Comment by maxall4
I can’t tell if this is some complex joke or a real product. This is literally string.contains() as a service.
Edit: 300ms?!
Sure, but I'm not convinced that producing a blacklist and filtering system is that difficult. More importantly, it's little things like this that slowly and insidiously degrade the user experience. Sure, it starts with one 300ms API call that most people won't notice. But when you reach for solutions like this for every minor technical problem, the next thing you know it takes 5 seconds to sign up.
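For the kind of in-process check this comment has in mind, a minimal version really is only a few lines. A Python sketch (the word lists and matching rules here are placeholders, not the product's actual data):

```python
# Minimal in-process username filter: exact-match blacklist plus a
# substring check for a handful of reserved/offensive terms.
BLACKLIST = {"admin", "root", "support", "billing"}   # placeholder entries
BLOCKED_SUBSTRINGS = ("admin", "official")            # placeholder entries

def is_username_allowed(username: str) -> bool:
    name = username.strip().lower()
    if name in BLACKLIST:
        return False
    return not any(term in name for term in BLOCKED_SUBSTRINGS)

print(is_username_allowed("Admin"))         # False
print(is_username_allowed("grace_hopper"))  # True
```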
My take on latency in general: you can use the API just to flag (not act) asynchronously. That way you alert/monitor and decide later whether to take any action, while keeping the sign-up flow non-blocking. Another approach would be to run it against existing handles to see what opportunities exist (e.g. premium usernames, impersonators, etc.).
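A rough sketch of that fire-and-forget pattern in Python; the endpoint URL and response fields are invented for illustration, not the product's real API:

```python
import json
import threading
import urllib.request

def flag_username_async(username: str) -> None:
    """Check the username in the background and only log the result,
    so the sign-up path never waits on the API call."""
    def _check() -> None:
        # Hypothetical endpoint and payload; the real API will differ.
        req = urllib.request.Request(
            "https://example.com/v1/username-check",
            data=json.dumps({"username": username}).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                result = json.load(resp)
            if result.get("flagged"):
                print(f"review later: {username} -> {result}")
        except (OSError, ValueError):
            pass  # never let the check break sign-up

    threading.Thread(target=_check, daemon=True).start()

# The sign-up handler stays non-blocking:
flag_username_async("free_bitcoin_support")
```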
Not a joke (I'm taking this in the spirit intended), but I can see there are TONS of things I need to improve on:
1. Latency: my original goal was to make it sub-10ms, but with the auth check, cold starts, and the actual lookup, I couldn't get it below 200-300ms. I need to improve this, and I will (see the lookup-timing sketch below).
2. Increased list size: currently the lookup happens across 1.7 million records (going up to 2.5 million in the next days/weeks), but I don't think that will ever cover ALL scenarios.
3. Better categorisation.
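To put the lookup itself in perspective: an in-memory set over a couple of million strings answers membership queries in microseconds, which suggests the 200-300ms is dominated by auth and cold starts rather than the lookup. A quick, self-contained timing sketch using synthetic data (not the real record set):

```python
import time

# Synthetic record set roughly the size of the real one (1.7M entries).
records = {f"user{i:07d}" for i in range(1_700_000)}

queries = ["admin", "user0012345", "support", "user1699999"]

start = time.perf_counter()
for q in queries:
    q in records  # O(1) average-case set membership
elapsed_us = (time.perf_counter() - start) * 1e6
print(f"{len(queries)} lookups in {elapsed_us:.1f} microseconds total")
```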
I think there's some value in providing a huge dictionary of terms to test against, tagged by category so you can filter for what matters to you. This product doesn't do a great job of that yet, and it would make 100x more sense as a library, but it is a little more than just string.contains().
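As a library, that tagged-dictionary idea might look something like the sketch below; the categories and entries are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    term: str
    tags: frozenset

# Tiny stand-in for the "huge dictionary with tags".
DICTIONARY = [
    Entry("admin", frozenset({"reserved", "impersonation"})),
    Entry("paypal", frozenset({"brand", "impersonation"})),
    Entry("xxx", frozenset({"profanity"})),
]

def matches(username: str, blocked_tags: set) -> list:
    """Return dictionary entries whose term appears in the username
    and whose tags intersect the caller's blocked set."""
    name = username.lower()
    return [e for e in DICTIONARY if e.term in name and e.tags & blocked_tags]

# A site that only cares about brand impersonation:
print(matches("paypal_support", {"impersonation"}))  # matches the "paypal" entry
print(matches("xxx_gamer", {"impersonation"}))       # [] - profanity not blocked here
```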