Comment by 9dev 9 days ago

Maybe cache the response for a given query-page hash pair? That way the LLM is only consulted when the page content hash changes, and the previous answer is reused otherwise.
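
Something like this minimal sketch, assuming a generic `ask_llm(query, page_text)` call (names here are just placeholders for illustration):

```python
import hashlib

# (query, page_hash) -> cached answer
_cache: dict[tuple[str, str], str] = {}

def answer(query: str, page_text: str, ask_llm) -> str:
    # Hash the page content so the cache key changes whenever the page does.
    page_hash = hashlib.sha256(page_text.encode("utf-8")).hexdigest()
    key = (query, page_hash)
    if key in _cache:
        # Same query, unchanged page content: reuse the previous answer.
        return _cache[key]
    # New query or changed content: consult the LLM and store the result.
    result = ask_llm(query, page_text)
    _cache[key] = result
    return result
```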