Comment by lazide 12 hours ago

Notably - in most cases, there is no way the compiler can know which of these scenarios are going to happen at compile time.

At runtime, the CPU can figure it out though, eh?

maccard 10 hours ago

I mean, theoretically it's possible. A super basic example: if the data is known at compile time, the loop could be auto-parallelised, e.g.

    int buf_size = 10000000;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec)
    {
        do_expensive_thing(val);
    }
This could clearly be parallelised; in a hypothetical dialect of C++ where the compiler were allowed to make that transformation, we can see it would be valid.
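In Python terms (where this thread ends up anyway), the transformation amounts to splitting a trip count that is known up front into fixed chunks, one per worker. A hedged sketch, not a real compiler feature: `do_expensive_thing`, the worker count, and the chunking scheme are all assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def do_expensive_thing(val):
    # stand-in for real per-element work
    return val * val

def run_chunk(vec, start, end):
    # one worker's share of the loop
    return [do_expensive_thing(vec[i]) for i in range(start, end)]

def parallel_for(vec, n_workers=4):
    # the trip count is known before any work starts, so the chunk
    # boundaries can be computed statically -- the analogue of what a
    # hypothetical parallelizing compiler would emit for this loop
    n = len(vec)
    base, extra = divmod(n, n_workers)
    bounds, start = [], 0
    for w in range(n_workers):
        end = start + base + (1 if w < extra else 0)
        bounds.append((start, end))
        start = end
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(run_chunk, vec, s, e) for s, e in bounds]
        out = []
        for f in futures:
            out.extend(f.result())
        return out
```

Note that CPython threads share the GIL, so a real speedup for CPU-bound work would need processes instead; the chunking structure is the same either way.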

If I replace it with

    int buf_size = 10000000;
    cin >> buf_size;
    auto vec = make_large_array(buf_size);
    for (const auto& val : vec)
    {
        do_expensive_thing(val);
    }

the compiler could generate some code that looks like:

    if (buf_size >= SOME_LARGE_THRESHOLD) { DO_IN_PARALLEL } else { DO_SERIAL }

With some background logic for managing threads, etc. In a C++-style world where "control" is important, it likely wouldn't fly, but if this were Python...

    arr_size = 10000000
    buf = [None] * arr_size
    for x in buf:
        do_expensive_thing(x)
could be parallelised at compile time.
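A hedged sketch of that runtime dispatch in Python: `SOME_LARGE_THRESHOLD`, `DO_IN_PARALLEL`, and `DO_SERIAL` are the placeholder names from the pseudocode above, and the threshold value, pool choice, and `do_expensive_thing` body are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

SOME_LARGE_THRESHOLD = 1000  # assumed cutover point

def do_expensive_thing(val):
    return val * val  # stand-in for real work

def process(vec):
    if len(vec) >= SOME_LARGE_THRESHOLD:
        # DO_IN_PARALLEL: fan the loop out to a worker pool
        # (processes, not threads, would be needed for a genuine
        # CPU-bound speedup under the GIL)
        with ThreadPoolExecutor() as pool:
            return list(pool.map(do_expensive_thing, vec))
    # DO_SERIAL: small input, plain loop
    return [do_expensive_thing(v) for v in vec]
```

Both paths return the same result; only the execution strategy differs, which is exactly the property the generated code would need.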
  • lazide 10 hours ago

Which no one really does (data is generally provided at runtime). Which is why ‘super smart’ compilers kinda went nowhere, eh?

    • maccard 4 hours ago

      I dunno. I was promised the same things when I started programming and it never materialised.

      It doesn’t matter what people do or don’t do because this is a hypothetical feature of a hypothetical language that doesn’t exist.