Comment by btrettel
I'm a computational fluid dynamics (CFD) engineer. Like others have said, I'm pretty confident basic CFD algorithms are in the training data of many LLMs. I would say a bigger problem is the accuracy of the generated simulator. An LLM would not be able to generate good tests. You need both tests of the math ("verification") and tests of the physics ("validation"), and LLMs can't do either at the moment.
Gold-standard verification tests are constructed using the "method of manufactured solutions" (MMS), which can be largely automated with computer algebra software but is still quite tedious to set up. I know from experience. I don't believe LLMs can handle the algebraic manipulation here particularly well.
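To make the computer-algebra step concrete, here is a minimal sketch of how the source-term derivation at the heart of MMS looks in SymPy, using the 1D heat equation as a stand-in (the equation, symbol names, and the particular manufactured solution are my choices for illustration, not anything specific to a real CFD code; real applications do this for far messier operators, which is where the tedium comes in):

```python
import sympy as sp

# MMS source-term derivation for the 1D heat equation u_t = alpha * u_xx.
# Pick a manufactured solution u_m, plug it into the governing equation,
# and let the CAS compute the residual. That residual becomes the source
# term s(x, t): solving u_t = alpha*u_xx + s exactly reproduces u_m.
x, t, alpha = sp.symbols("x t alpha", positive=True)

# Manufactured solution: smooth, with nonzero derivatives in every term.
u_m = sp.sin(sp.pi * x) * sp.exp(-t)

# Residual of the PDE applied to u_m is the manufactured source term.
s = sp.simplify(sp.diff(u_m, t) - alpha * sp.diff(u_m, x, 2))

# Mathematically, s equals (alpha*pi**2 - 1) * sin(pi*x) * exp(-t).
print(s)
```

For a real solver you would lambdify `s` into the source-term hook of the code and then run a grid-refinement study against `u_m`; the algebra above is the part that a CAS automates well and that gets painful by hand for full Navier-Stokes plus turbulence-model terms.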
Worse, LLMs can't produce the actual experimental data needed for validation. You'll need to track down one or more experiments from the literature or run your own experiment. LLMs might in the future be able to point you to appropriate experiments in the literature, but they don't seem able to do that at present. I think LLMs might provide useful advice when a simulation ends up not matching the experimental data. LLMs seem to know a thing or two about turbulence modeling, though I would question their knowledge of the most recent advances.
(If you're only interested in fluid simulation for games or computer graphics, then physical accuracy is not a priority. But you probably should still use MMS to make sure you've implemented the math correctly. MMS is an interesting technique with no real parallel in general software testing. Abstractly, the idea is to make a minimal modification to the software so that you have an exact oracle, where the nature of the modification guarantees that if the modified software passes the test, the unmodified software would too. This idea could probably be applied in other areas.)
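To show the "oracle" idea end to end, here is a self-contained sketch of an MMS order-of-accuracy test on a toy 1D heat-equation solver. Everything here (the scheme, the manufactured solution, the grid sizes) is my own illustrative choice; for this simple case the source term is derived by hand in a comment, where a real code would get it from a CAS:

```python
import math

# MMS verification sketch for u_t = u_xx on x in [0, 1].
# The "minimal modification" is adding the source term s(x, t), chosen so
# that the manufactured function u_exact solves the modified equation
# exactly. u_exact is then an oracle: the discretization error against it
# should shrink at the scheme's formal order as the grid is refined.

def u_exact(x, t):
    # Manufactured solution: u(x, t) = sin(pi x) * exp(-t)
    return math.sin(math.pi * x) * math.exp(-t)

def source(x, t):
    # s = u_t - u_xx = (pi**2 - 1) * sin(pi x) * exp(-t), derived by hand
    # here; a CAS would produce this automatically (see SymPy sketch).
    return (math.pi ** 2 - 1.0) * math.sin(math.pi * x) * math.exp(-t)

def max_error(n_cells, t_end=0.1):
    """Forward-time centered-space solve of u_t = u_xx + s.

    Returns the max-norm error against the manufactured solution at t_end.
    """
    dx = 1.0 / n_cells
    dt = 0.25 * dx * dx  # well inside the FTCS stability limit dt <= dx^2/2
    steps = int(round(t_end / dt))
    xs = [i * dx for i in range(n_cells + 1)]
    u = [u_exact(xi, 0.0) for xi in xs]  # initial condition from the oracle
    t = 0.0
    for _ in range(steps):
        new = u[:]
        for i in range(1, n_cells):
            lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / (dx * dx)
            new[i] = u[i] + dt * (lap + source(xs[i], t))
        # Dirichlet boundary values also come from the manufactured solution.
        new[0] = u_exact(0.0, t + dt)
        new[-1] = u_exact(1.0, t + dt)
        u, t = new, t + dt
    return max(abs(u[i] - u_exact(xs[i], t)) for i in range(n_cells + 1))

coarse = max_error(20)
fine = max_error(40)
# Halving dx should cut the error by about 4x for this second-order scheme.
print(coarse, fine, coarse / fine)
```

If a code bug breaks the discretization, the observed convergence ratio falls away from the formal order and the test fails, even though the bug might be invisible in an eyeball check of the flow field. That's the sense in which passing with the source term implies the unmodified solver is discretizing the PDE correctly.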