Comment by cogman10 a day ago

29 replies

What I've learned is that fewer flags is the best path for any long-lived project.

-O2 is basically all you usually need. As you update your compiler, it'll keep adjusting exactly what that general optimization level does based on what its authors know today.

Because that's the thing about these flags, you'll generally set them once at the beginning of a project. Compiler authors will reevaluate them way more than you will.

Also, a trap I've observed is setting flags based on bad benchmarks. This applies more to the JVM than a C++ compiler, but nevertheless, a system's current state is somewhat random. 1-2% fluctuations in performance for even the same app are normal. A lot of people won't realize that and ultimately add flags based on those fluctuations.

But further, how code is currently laid out can affect performance. You may see a speed boost not because you tweaked the loop unrolling variable, but rather because your tweak relocated a hot path to be slightly more cache friendly. A change in the code structure can eliminate that benefit.

tmtvl 21 hours ago

I'd say -O2 -march=native -mtune=native is good enough; you get (some) AVX without the O3 weirdness.
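
A quick way to check what -march=native actually resolves to on a given build machine (the output format varies a bit between GCC versions):

  gcc -march=native -Q --help=target | grep -E 'march|mtune'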

  • pedrocr 20 hours ago

    That's great if you're compiling for use on the same machine or those exactly like it. If you're compiling binaries for wider distribution it will generate code that some machines can't run and won't take advantage of features in others.

    To be able to support multiple arch levels in the same binary, I think you still need to do the manual work of annotating specific functions for which several versions should be generated and dispatched at runtime.
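
    A minimal sketch of what that per-function approach can look like with GCC's target_clones attribute (also in newer Clang, on targets with ifunc support); the function name and target list here are just illustrative:

      /* The compiler emits one clone per listed target plus a resolver
         that picks the best clone at load time. */
      __attribute__((target_clones("default", "avx2")))
      void scale(float *dst, const float *src, float k, int n)
      {
          for (int i = 0; i < n; i++)
              dst[i] = src[i] * k;
      }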

alberth a day ago

Doesn't -O2 still exclude CPU features from the past ~15 years (like AVX)?

If you know the architecture and the oldest CPU model, we're better served by adding a bunch more flags, no?

I wish I could compile my server code to target CPU released on/after a particular date like:

  -O2 -cpu-newer-than=2019
  • cogman10 20 hours ago

    It's not an -O2 thing. Rather it's a -march thing.

    -O2 in GCC has vectorization flags set, which will use AVX if the target CPU supports it. It is just less aggressive about vectorization than -O3.

  • singron 19 hours ago

    You can use -march=x86-64-v2 or -march=x86-64-v3. Dates are tricky since CPU features aren't included on all SKUs from all manufacturers on a certain date.
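
    For example (assuming GCC 11+ or a similarly recent Clang, which accept the psABI level names; foo.c is a hypothetical file):

      gcc -O2 -march=x86-64-v2 -c foo.c   # roughly Nehalem (2008) and later: SSE4.2, POPCNT
      gcc -O2 -march=x86-64-v3 -c foo.c   # roughly Haswell (2013) and later: adds AVX, AVX2, FMA, BMI1/2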

  • SubjectToChange 21 hours ago

    A CPU produced after a certain date is not guaranteed to have every ISA extension, e.g. SVE for Arm chips. Hence things like the microarchitecture levels for x86-64.

    • cogman10 20 hours ago

      For x86 it's a pretty good guarantee.

      • teo_zero 17 hours ago

        I don't understand if your comment is ironic. Intel is notorious for equipping different processors produced in the same period with different features. Sometimes even among different cores on the same chip. Sometimes later products have fewer features enabled (see e.g. AVX-512 on Alder Lake).

vlovich123 19 hours ago

You should at a minimum add flags to enable dead object collection (-fdata-sections and -ffunction-sections for compilation and -Wl,--gc-sections for the linker).
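
For example (file names are hypothetical; --print-gc-sections is optional but shows what the linker actually discarded):

  gcc -O2 -ffunction-sections -fdata-sections -c foo.c bar.c
  gcc foo.o bar.o -Wl,--gc-sections -Wl,--print-gc-sections -o app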

201984 a day ago

What's your reason for -O2 over -O3?

  • cogman10 a day ago

    Historically, -O3 has been a bit less stable (producing incorrect code) and more experimental (doesn't always make things faster).

    Flags from -O3 often flow down into -O2 as they are proven generally beneficial.

    That said, I don't think -O3 has the problems it once did.

    • sgerenser a day ago

      -O3 gained a reputation of being more likely to "break" code, but in reality it was almost always "breaking" code that was invalid to start with (invoked undefined behavior). The problem is C and C++ have so many UB edge cases that a large volume of existing code may invoke UB in certain situations. So -O2 thus had a reputation of being more reliable. If you're sure your code doesn't invoke undefined behavior, though, then -O3 should be fine on a modern compiler.

      • uecker 20 hours ago

        Oh, there are also plenty of bugs. And Clang still does not implement the aliasing model of C. For C, I would definitely recommend -O2 -fno-strict-aliasing.
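
        The sort of code that motivates that advice, a textbook strict-aliasing violation (the conforming fix is memcpy or a union; -fno-strict-aliasing just makes the compiler tolerate the cast):

          /* UB under the C aliasing rules: a float object is read
             through an lvalue of type unsigned. */
          unsigned float_bits(float f)
          {
              return *(unsigned *)&f;
          }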

      • drob518 a day ago

        Exactly. A lot of people didn’t understand the contract between the programmer and the compiler that is required to use -O3.

        • MaxBarraclough 18 hours ago

          That's a little vague; I'd put it more pointedly: they don't understand how the C and C++ languages are defined, have a poor grasp of undefined behaviour in particular, and mistakenly believe their defective code to be correct.

          Of course, even with a solid grasp of the language(s), it's still by no means easy to write correct C or C++ code, but if your plan is to go with "this seems to work", you're setting yourself up for trouble.

      • afdbcreid 19 hours ago

        Indeed. Rust, for example, uses -O3 by default for release builds.

  • o11c 19 hours ago

    Compile speed matters. I will confess to having less practical knowledge of -O3, but -O2 is usually reasonably fast to compile.

    For cases where -O2 is too slow to compile, dropping a single nasty TU down to -O1 is often beneficial (see the Makefile sketch at the end of this comment). -O0 is usually not useful: while faster for tiny TUs, -O1 is still pretty fast for them, and for anything larger, the binary bloat of -O0 is likely to kill your link time compared to -O1's slimness.

    Also debuggability matters. GCC's `-O2` is quite debuggable once you learn how to work past the possibility of hitting an <optimized out> (going up a frame or dereferencing a casted register is often all you need); this is unlike Clang, which every time I check still gives up entirely.

    The real argument is -O1 vs -O2 (since -O1 is a major improvement over -O0 and -O3 is a negligible improvement over -O2) ... I suppose originally I defaulted to -O2 because that's what's generally used by distributions, which compile rarely but run the code often. This differs from development ... but does mean you're staying on the best-tested path (hitting an ICE is pretty common as it is); also, defaulting to -O2 means you know when one of your TUs hits the nasty slowness.

    While mostly obsolete now, I have also heard of cases where 32-bit x86 inline asm has difficulty fulfilling constraints under register pressure at low optimization levels.
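
    One way to do that per-TU downgrade with GNU Make target-specific variables (nasty.c is a hypothetical file name; everything else keeps -O2):

      CFLAGS = -O2 -g
      # only this one slow-to-compile translation unit gets -O1
      nasty.o: CFLAGS = -O1 -g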

  • wavemode a day ago

    You have to profile for your specific use case. Some programs run slower under O3 because it inlines/unrolls more aggressively, increasing code size (which can be cache-unfriendly).

    • grogers a day ago

      Yeah, -O3 generally performs well in small benchmarks because of aggressive loop unrolling and inlining. But in large programs that face icache pressure, it can end up being slower. Sometimes -Os is even better for the same reason, but -O2 is usually a better default.
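
      An easy sanity check before committing to a level (hot.c is a hypothetical file; size reports the text/data/bss sizes of each object):

        gcc -O2 -c hot.c -o hot-O2.o && size hot-O2.o
        gcc -O3 -c hot.c -o hot-O3.o && size hot-O3.o
        gcc -Os -c hot.c -o hot-Os.o && size hot-Os.o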

  • bluGill a day ago

    Most people use -O2 and so if you use -O3 you risk some bug in the optimizer that nobody else noticed yet. -O2 is less likely to have problems.

    In my experience a team of 200 developers will see 1 compiler bug affect them every 10 years. This isn't scientific, but it is a good rule of thumb and may put the above in perspective.

    • macintux a day ago

      Would you say that bug estimate is when using -O2 or -O3?

      • bluGill a day ago

        The estimate includes Visual Studio and other compilers that are not open source, at whatever optimization options we were using at the time. As such your question doesn't make sense (not that it is bad, but it doesn't make sense).

        In the case of open source compilers the bug was generally fixed upstream and we just needed to get on a newer release.

  • nickelpro a day ago

    People keep saying "O3 has bugs," but that's not true. At least no more bugs than O2. It did and does more aggressively expose UB code, but that isn't why people avoid O3.

    You generally avoid O3 because it's slower. Slower to compile, and slower to run. Aggressively unrolling loops and larger inlining windows bloat code size to the degree it impacts icache.

    The optimization levels aren't "how fast do you want the code to go", they're "how aggressive do you want the optimizer to be." The most aggressive optimizations are largely unproven and left in O3 until they prove generally useful, at which point they move to O2.
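
    You can see exactly which flags each level enables for your own compiler version (the lists change from release to release, which is rather the point):

      gcc -Q --help=optimizers -O2 > o2.txt
      gcc -Q --help=optimizers -O3 > o3.txt
      diff o2.txt o3.txt    # the flags -O3 adds on top of -O2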

    • uecker 20 hours ago

      I would say there is a fair share of cases where programmers were told it is UB when it actually was a compiler bug - or non-conformance.

      • saagarjha 10 hours ago

        That share is a vanishingly small fraction of cases.

        • uecker 6 hours ago

          I am not sure. I saw quite a few of these bugs where programmers were told it is UB but it isn't.

          For example, people showed me

            extern void g(int x);
          
            int f(int a, int b)
            {
              g(b ? 42 : 43);
              return a / b;
            }
          
          as an example of how compilers exploit "time-travelling" UB to optimize code, but it is just a compiler bug that got fixed once I reported it (folding b ? 42 : 43 to 42 based on the later division is invalid, because g might never return, in which case an execution with b == 0 never reaches the division at all):

          https://developercommunity.visualstudio.com/t/Invalid-optimi...

          Other compilers have similar issues.

    • SubjectToChange 21 hours ago

      More aggressive optimization is necessarily going to be more error prone. In particular, the fact that -O3 is "the path less traveled" means that a higher number of latent bugs exist. That said, if code breaks under -O3, then either it needs to be fixed or a bug report needs to be filed.