Comment by llbbdd 2 days ago

I'm not sure it would work in either case anymore. For better or worse, LLMs make it a lot easier to determine whether text is hidden, whether explicitly through CSS attributes, implicitly through color-contrast or height/overflow tricks, or by basically any other method you could think of to hide the prompt. I'm sympathetic, and I'm not sure what the actual rebuttal here is for small sites, but stuff like this seems like a bitter Hail Mary.
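(The hiding tricks mentioned above are mechanical enough to check for. A minimal sketch of a hidden-text heuristic over computed styles, assuming `style` is a dict of computed CSS properties you've already extracted from a real browser; the property names and the 1.2 contrast cutoff are illustrative, not an exhaustive or tuned filter:)

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance for an (r, g, b) tuple in 0-255."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1 (lighter luminance on top)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def looks_hidden(style):
    """Flag an element's computed style as visually hidden.

    Covers the tricks from the comment: explicit CSS hiding,
    zero-height overflow clipping, and low-contrast text.
    """
    if style.get("display") == "none" or style.get("visibility") == "hidden":
        return True
    if float(style.get("opacity", 1)) == 0:
        return True
    # height/overflow clipping trick
    if style.get("overflow") == "hidden" and float(style.get("height", 1)) == 0:
        return True
    # contrast trick: text nearly the same color as its background
    fg, bg = style.get("color"), style.get("background-color")
    if fg and bg and contrast_ratio(fg, bg) < 1.2:
        return True
    return False
```

Note this flags screen-reader-only content too, which is exactly the collision bryanrasmussen raises below.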

bryanrasmussen 2 days ago

Does it, though? Are LLMs actually used to filter this stuff out currently? If so, do they filter out visually hidden content that is meant for screen readers, and if so, is that a potential issue? I don't know; it just seems like a conceptual bug, a concept that has not been fully thought through.

Second thought: sometimes you have text that is hidden but expected to become visible when you click on something. You probably want that initially hidden content to be caught in the crawl, as it is still potentially meaningful content, just hidden for design reasons.

  • llbbdd 2 days ago

    I don't know what the SOTA is, especially because these types of filters get expensive, but it's definitely plausible if you have the capital; it just requires spinning up a real browser environment of some kind. I know from experience that I can very easily set up a system to deeply understand every web page I visit, and it's not hard to imagine doing that at scale in a way that handles any kind of "prompt poisoning" at a human level. The popular Anubis bot gateway has skipped past that and just requires a minimum of computational work to let you in, to keep the cost of data acquisition above the threshold where scraping stops being good ROI.
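    (Anubis's actual challenge scheme differs in its details, but the economic idea is plain hashcash-style proof of work: cheap once per human visitor, expensive when multiplied across millions of scraped pages. A minimal sketch, with a made-up difficulty value:)

    ```python
    import hashlib
    from itertools import count

    DIFFICULTY = 12  # leading zero bits required; an assumption, real gateways tune this

    def verify(challenge: str, nonce: int, bits: int = DIFFICULTY) -> bool:
        """Server side: check that sha256(challenge:nonce) starts with `bits` zero bits."""
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - bits) == 0

    def solve(challenge: str, bits: int = DIFFICULTY) -> int:
        """Client side: brute-force a nonce; expected ~2**bits hash attempts."""
        for nonce in count():
            if verify(challenge, nonce, bits):
                return nonce
    ```

    Each extra difficulty bit doubles the client's expected work while verification stays one hash, which is why the knob can be turned until bulk crawling is no longer worth it.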