Comment by rglynn 3 days ago

I suppose one shouldn't be surprised that an LLM-adjacent technology is treated like magic.

I wonder though, for cases where you genuinely are trying to match like to like, rather than question to answer, are vector embeddings with cosine similarity still the way to go?

My understanding, as stated in TFA, is that if you put careful thought (and prompt engineering) into the content before vectorisation, you can get quite far with just cosine similarity. But how far has "tool use" come along? Could it be better in some scenarios?
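For concreteness, here's a toy sketch of what "matching like to like with cosine similarity" means in practice. The 4-dimensional vectors below are invented for illustration; a real system would get them from an embedding model:

```python
# Minimal sketch of like-to-like matching via cosine similarity.
# The toy 4-dim "embeddings" are made up; real ones come from a model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical document vectors: doc_a and doc_b point in similar
# directions; doc_c points elsewhere.
docs = {
    "doc_a": np.array([0.9, 0.1, 0.0, 0.2]),
    "doc_b": np.array([0.8, 0.2, 0.1, 0.1]),
    "doc_c": np.array([0.0, 0.1, 0.9, 0.1]),
}

query = np.array([1.0, 0.0, 0.0, 0.1])
scores = {name: cosine_similarity(query, v) for name, v in docs.items()}
best = max(scores, key=scores.get)
print(best)  # doc_a: its direction is closest to the query's
```

The point the article makes is that most of the leverage is in what goes into those vectors, not in the similarity function, which is just a normalised dot product.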