themanmaran a day ago

This is a big problem when it comes to conversational agents. Sometimes users ask questions that are really prying, potentially misleading, or just annoying repeats (like asking for a cheaper price 50 times).

In these situations a real person would just ignore them. But most LLMs will cheerfully continue the conversation, and potentially make false promises or give away information they shouldn't.
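One way to approximate the "real person would just ignore them" behavior is to track repeated requests and have the agent disengage past a cutoff. A minimal sketch (my own illustration, not anything from the thread; the threshold and normalization are hypothetical):

```python
# Hypothetical guard: count how often a user repeats the same (normalized)
# request, and signal the agent to stop engaging once a limit is exceeded.
from collections import Counter

REPEAT_LIMIT = 3  # assumed cutoff; a real system would tune this


def should_disengage(history: list[str], new_message: str) -> bool:
    """Return True when the new message repeats an earlier request too often."""
    def normalize(s: str) -> str:
        # Crude normalization: lowercase and collapse whitespace.
        return " ".join(s.lower().split())

    counts = Counter(normalize(m) for m in history)
    return counts[normalize(new_message)] + 1 > REPEAT_LIMIT


# Example: the "cheaper price 50 times" scenario from above.
history = ["Can I get a cheaper price?"] * 3
should_disengage(history, "can I get a CHEAPER price?")  # repeats exceed the limit
```

On its own this only catches literal repeats; semantically similar rephrasings would need embedding-based matching, and the harder part remains getting the model to hold a firm refusal rather than cheerfully re-engaging.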

notahacker a day ago

Indeed, I suspect if anything the weighting is the opposite (being annoyingly persistent weights an LLM towards spitting out text that approximates what the annoyingly persistent person wants to get), whereas with humans it weights them towards being less helpful...