Comment by godelski
For sure, some can. LLMs are susceptible to priming: the OpenAI demo contained an error in the airplane-wing explanation because of it. It's a very common mistake, one included in many textbooks, and the LLM repeated it. More importantly, I saw someone get it to give the right answer without spoiling it in the prompt.