meisel 4 hours ago

> It’s kind of like enabling LTO (Link-Time Optimization) across the libc boundary, except it’s done properly in the frontend instead of too late, in the linker

Why is the linker too late? Is Zig able to do optimizations in the frontend that, e.g., a linker working with LLVM IR cannot?

ibejoeb 4 hours ago

Seems like it ought to be able to do inlining and dead-code stripping, which, I think, wouldn't be viable at link time against already-optimized static libraries.
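
A minimal C sketch of the distinction (my_memcpy is a made-up name standing in for a libc routine): when the frontend can see the implementation source, the call can be inlined and constant-folded; a prebuilt, already-optimized .a only offers opaque machine code.

    // callee.c — stands in for the libc implementation source
    void *my_memcpy(void *dst, const void *src, unsigned long n) {
        char *d = dst;
        const char *s = src;
        while (n--) *d++ = *s++;
        return dst;
    }

    // caller.c
    void *my_memcpy(void *dst, const void *src, unsigned long n);

    int main(void) {
        int a = 42, b = 0;
        my_memcpy(&b, &a, sizeof b);
        return b;  // with the definition visible to the frontend, this can
                   // fold to "return 42"; against a prebuilt .a it stays an
                   // opaque call the compiler can't inline or strip
    }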

  • comex 4 hours ago

    It is viable against the IR that static libraries contain when LTO is enabled.

    LTO essentially means “load the entire compiler backend into the linker and do half of the compilation work at link time”.

    It’s a great big hack, but it does work.
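
    Roughly what that looks like with clang (a sketch — the exact setup,
    e.g. which ar and which linker plugin you need, varies by toolchain):

        clang -O2 -flto -c lib.c          # lib.o holds LLVM bitcode, not machine code
        llvm-ar rcs libfoo.a lib.o        # a static library full of IR
        clang -O2 -flto main.c libfoo.a   # the "link" step runs the LLVM backend
                                          # and can inline lib.c into main.c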

    • gary_0 an hour ago

      As I understand it, compiling each source file separately and linking the results together was historically something of a hack too, or at least a compromise: early Unix machines didn't have enough memory to compile a whole program at once, or even to hold multiple source files in memory at a time. Later on, separate compilation did allow faster recompilation, since you didn't need to re-ingest source files that hadn't changed (though that stopped being true for template-heavy C++ code).

    • ibejoeb 4 hours ago

      Right, but I think that's what the question "Why is the linker too late?" is getting at. With Zig's libc, the compiler can do it, so you don't need fat objects and all that.

      ---

      Expanding: so this means you can do cross-boundary optimizations without LTO and with pre-built artifacts, I think.
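
      A crude C analogue of that idea (not literally what Zig does — the
      file names here are hypothetical): if the library source is part of
      the same compilation, plain -O2 optimizes across the boundary with
      no LTO machinery at all.

          // mylib.c — hypothetical "library" source
          int lib_add(int a, int b) { return a + b; }

          // main.c — unity-style build: the frontend sees the definition
          #include "mylib.c"

          int main(void) { return lib_add(2, 2); }  // folds to "return 4" at -O2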