Comment by lazide
Notably - in most cases, there is no way the compiler can know which of these scenarios are going to happen at compile time.
At runtime, the CPU can figure it out though, eh?
I mean, theoretically it's possible. A super basic example would be if the data is known at compile time, it could be auto-parallelized, e.g.

    int buf_size = 10000000;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec) {
        do_expensive_thing(val);
    }

this could clearly be parallelised at compile time. In a C++ world that doesn't exist, we can see that it's valid.

If I replace it with

    int buf_size = 10000000;
    cin >> buf_size;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec) {
        do_expensive_thing(val);
    }

the compiler could generate some code that looks like:

    if buf_size >= SOME_LARGE_THRESHOLD {
        DO_IN_PARALLEL
    } else {
        DO_SERIAL
    }

With some background logic for managing threads, etc. In a C++-style world where "control" is important it likely wouldn't fly, but if this was Python...
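To make that concrete, here's a rough hand-written C++ sketch of the size-based dispatch being described. It is not what any real compiler emits; the threshold value, the chunk-per-thread strategy, and the stand-in helpers (make_large_array, do_expensive_thing) are all illustrative assumptions.

    // Sketch of the "if large enough, run in parallel; otherwise run serially"
    // dispatch described above. Threshold, chunking, and helpers are assumptions.
    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Stand-ins for the hypothetical helpers used in the comment.
    std::vector<int> make_large_array(std::size_t n) { return std::vector<int>(n, 1); }
    void do_expensive_thing(int) { /* pretend this is costly */ }

    constexpr std::size_t SOME_LARGE_THRESHOLD = 1000000; // assumed cut-off

    void process(const std::vector<int>& vec) {
        if (vec.size() >= SOME_LARGE_THRESHOLD) {
            // DO_IN_PARALLEL: split the range across hardware threads.
            unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
            std::size_t chunk = (vec.size() + n_threads - 1) / n_threads;
            std::vector<std::thread> workers;
            for (unsigned t = 0; t < n_threads; ++t) {
                std::size_t begin = t * chunk;
                std::size_t end = std::min(vec.size(), begin + chunk);
                if (begin >= end) break;
                workers.emplace_back([&vec, begin, end] {
                    for (std::size_t i = begin; i < end; ++i) do_expensive_thing(vec[i]);
                });
            }
            for (auto& w : workers) w.join();
        } else {
            // DO_SERIAL: a plain loop avoids thread overhead for small inputs.
            for (const auto& val : vec) do_expensive_thing(val);
        }
    }

    int main() {
        std::size_t buf_size = 10000000;
        std::cin >> buf_size; // size only known at runtime
        auto vec = make_large_array(buf_size);
        process(vec);
    }

Something like OpenMP's parallel-for pragma or std::for_each with std::execution::par would cover the parallel half without hand-rolled threads, but the size-based decision still has to live somewhere, at compile time or at runtime.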