Comment by _pdp_
You put a warning where it is most likely to be seen by a human coder.
Besides, no amount of prompting will prevent this situation.
If it is a concern, then you add a linter rule or unit tests to prevent it altogether, or you make a wrapper around the tricky function with a warning in its docstring; see the sketch below.
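A minimal sketch of that wrapper idea (the function names here are made up, just to illustrate the pattern):

```python
import warnings

def _tricky_delete(path: str, recursive: bool = False) -> None:
    """Hypothetical 'tricky' function standing in for whatever needs guarding."""
    ...

def safe_delete(path: str, recursive: bool = False) -> None:
    """Wrapper around _tricky_delete.

    WARNING: recursive=True removes the whole tree without confirmation.
    Prefer recursive=False unless you are certain the path is correct.
    """
    if recursive:
        # Surface the risk at call time too, not only in the docstring.
        warnings.warn("safe_delete called with recursive=True", stacklevel=2)
    _tricky_delete(path, recursive=recursive)
```

The point is that the warning lives next to the code, where both a human and a tool reading the code will actually encounter it.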
I don't see how this is any different from how you typically approach making your code more resilient to accidental mistakes.
Documenting for AI exactly the way you would document for a human ignores how these tools work.