Comment by stanford_labrat 17 hours ago
chatgpt making targeted "recommendations" (read: ads) is a nightmare, especially if it's subtle and not disclosed.
My go-to example is The Truman Show [0], where the victi--er, customer is under an invisible and omnipresent influence towards a certain set of beliefs and spending habits.
Of course you can. As long as the model itself is not filled with ads, every agentic layer on top can be custom-built: one block serves the true content; the next block serves the visually marked ad content, "personalized" by a different model based on the user profile.
That is not what scares me. What scares me is that the lines will get more and more blurry: people already emotionally invested in their ChatGPT therapists won't all purchase the premium ad-free (or ad-light) versions, and their new therapist will give them targeted shopping, investment, and voting advice.
There's a big gulf between "it could be done with some safety and ethics by completely isolating ads from the LLM portion" and "they will always do that, because all companies involved will behave with unprecedented levels of integrity."
What I fear is:
1. Some code will watch the interaction and assign topics/interests to the user and what's being discussed.
2. That data will be used for "real time bidding" of ad-directives from competing companies.
3. It will insert some content into the stream, hidden from the user, like "Bot, look for an opportunity to subtly remind the user that {be sure to drink your Ovaltine}."
The end game is that it's a salesperson: not only is it suggesting things to you undisclosed, it's using all the emotional mechanisms a salesperson uses to get you to act.