Comment by fine_tune
I'm no Ruby expert, so forgive my ignorance, but it looks like a small NER model packaged behind a string convenience wrapper named `filter` that tries to strip sensitive info from input strings.
I assume the NER model is small enough to run on CPU at under ~1s per pass, trading storage per instance for that speed (1s is fast enough in dev, but in prod with long conversations that's a lot of inference time). Generally a neat idea, though.
Couple of questions:
- NER often doesn't transfer well across domains; how accurate is the model?
- How do you actually allocate compute/storage for inferring on the NER model?
- Are you batching these `filter` calls, or is it just sequential one-by-one calls?
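
To make the batching question concrete, here's a minimal Ruby sketch. `SensitiveFilter`, `filter`, and `filter_batch` are all hypothetical names I've invented for illustration, and the regex stand-in below is not NER — the point is only the API shape: a per-string call pays any model setup/inference overhead on every invocation, while a batch call could amortize it across inputs.

```ruby
# Hypothetical sketch -- SensitiveFilter stands in for the gem's wrapper.
# A regex substitutes for the real NER model here, purely for illustration.
class SensitiveFilter
  # Sequential: one inference pass per string.
  def filter(text)
    text.gsub(/\b\d{3}-\d{2}-\d{4}\b/, "[REDACTED]") # SSN-like pattern
  end

  # Batched: accepts many strings at once, so a real model could run
  # a single batched inference instead of N separate passes.
  def filter_batch(texts)
    texts.map { |t| filter(t) }
  end
end

f = SensitiveFilter.new
puts f.filter("my ssn is 123-45-6789")
# => "my ssn is [REDACTED]"
```

With long conversations, the batched shape is what keeps total inference time from scaling linearly with call count.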
> - NER often doesn't transfer well across domains; how accurate is the model?
https://github.com/mit-nlp/MITIE/wiki/Evaluation
The page was last updated nearly 10 years ago.