Comment by Fnoord
I got Amazon Prime. If it has Prime, it is a no-brainer. Free return for 30 days. No S&H costs. Only cost is my time.
To your last point -- humans make mistakes too. I asked my EA to order a few things for our office a few days ago, and she ended up ordering things that I did not want. In this case I could have written a better prompt, but even with a better prompt she could have ordered the unwanted item. This is a reversible decision.
So my point is that while you might get some false positives, it's worth automating as long as many of the decisions are reversible or correctable.
You might not want to use this in all cases, but it's still worthwhile for many, many cases. Which use cases are worth automating depends on the acceptable rate of error for each one.
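As a back-of-the-envelope illustration of that trade-off, in Python (all numbers made up):

    # Automation pays off when the time it saves outweighs the expected
    # cost of correcting its mistakes. Illustrative numbers only.

    def worth_automating(minutes_saved_per_task: float,
                         error_rate: float,
                         minutes_to_correct_error: float) -> bool:
        expected_correction_cost = error_rate * minutes_to_correct_error
        return minutes_saved_per_task > expected_correction_cost

    # Reversible decision: a bad order costs ~10 minutes to return.
    print(worth_automating(15, 0.05, 10))   # True  (0.5 min expected cost)
    # Hard-to-reverse decision: a mistake costs a full day to undo.
    print(worth_automating(15, 0.05, 480))  # False (24 min expected cost)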
You cannot trust a human to avoid buying crap on Amazon either, but like I said, with Prime the only cost is time (and, to be fair, CO2 footprint).
Dynamic CVV would mean you'd have to authorize the payment. If the amount seems off, decline.
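A sketch in Python of the kind of check I mean (the function and tolerance are hypothetical, not any real payment-provider API):

    # Approve an agent-initiated charge only if it matches what the user
    # expected to spend. Tolerance covers taxes and rounding.
    EXPECTED_TOLERANCE = 0.01  # allow 1% variance

    def authorize_charge(expected_total: float, requested_amount: float) -> bool:
        if expected_total <= 0:
            return False
        deviation = abs(requested_amount - expected_total) / expected_total
        return deviation <= EXPECTED_TOLERANCE

    # Agent was told to spend ~$49.99; the card is asked for $149.99.
    print(authorize_charge(49.99, 149.99))  # False -> decline
    print(authorize_charge(49.99, 50.12))   # True  -> approve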
To be clear, I don't think I'd use it, but if it could save you time (a precious commodity in our day and age) with a good signal-to-noise ratio, it's a win-win for user, author, and Amazon.
If you want to buy an Apple device from a trusted party, including trusted accessories, there's apple.com. My point being: buying from there is much more secure. But even then, there is no one 'iPhone 16'; there are variants. Many of them.
My rule of thumb: trust it no more and no less than AliExpress, though Prime is a bit reassuring. I've gotten Chinese tech that gives me skin rashes or smells really bad. The advantage of Amazon, though, is that I can send items back without much hassle. That doesn't work with AliExpress.
If it fails enough times and you have to return enough items…well, Amazon has been known to ban people for that.
If you have an AWS account created before 2017, an Amazon ban means an AWS ban.
Yeah, but LLMs cannot reason - we've all seen them blurt out complete non-sequiturs or end up in death loops of pseudo-reasoning (e.g. https://news.ycombinator.com/item?id=42734681 has a few examples). I don't think one should trust an LLM to pick Prime products all the time, even if that's very explicitly requested. I'm sure it's possible to minimize errors so it does the right thing most of the time, but a guarantee that it won't pick a non-Prime item sounds impossible. The same goes for any other task: if there is a way to make a mistake, a mistake will eventually be made, unless something outside the model enforces the constraint (see the sketch below).
(Idk if we can trust a human either - brain farts are a thing, after all - but at least humans are accountable. Machines are not, at least not at the moment.)
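What that outside-the-model check could look like, as a minimal Python sketch (Product, the ASIN, and the example values are hypothetical stand-ins for whatever structured data the agent sees):

    from dataclasses import dataclass

    @dataclass
    class Product:
        asin: str
        title: str
        is_prime: bool

    def validate_selection(product: Product) -> Product:
        """Hard gate: refuse to proceed if the model picked a non-Prime item."""
        if not product.is_prime:
            raise ValueError(f"Rejected {product.asin}: not Prime eligible")
        return product

    # The LLM proposes; deterministic code disposes.
    pick = Product(asin="B000000000", title="USB-C cable", is_prime=False)
    try:
        validate_selection(pick)
    except ValueError as err:
        print(err)  # retry the search or escalate to a human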