pizlonator a day ago

This is a good write up and I agree with pretty much all of it.

Two comments:

- LLVM IR is actually remarkably stable these days. I was able to rebase Fil-C from llvm 17 to 20 in a single day of work. In other projects I’ve maintained a LLVM pass that worked across multiple llvm versions and it was straightforward to do.

- LICM register pressure is a big issue, especially when the source isn’t C or C++. I don’t think the problem here is necessarily LICM. It might be that regalloc needs to be taught to rematerialize.

  • theresistor a day ago

    > It might be that regalloc needs to be taught to rematerialize

    It knows how to rematerialize, and has for a long time, but the backend is generally more local/has less visibility than the optimizer. This causes it to struggle to consistently undo bad decisions LICM may have made.

    • pizlonator a day ago

      > It knows how to rematerialize

      That's very cool, I didn't realize that.

      > but the backend is generally more local/has less visibility than the optimizer

      I don't really buy that. It's operating on SSA, so it has exactly the same view as LICM in practice (to my knowledge LICM doesn't cross function boundary).

      LICM can't possibly know the cost of hoisting. Regalloc does have decent visibility into cost. Hence why this feels like a regalloc remat problem to me

      • CalChris 19 hours ago

        > to my knowledge LICM doesn't cross function boundary

        LICM is called with runOnLoop() but is called after function inlining. Inlining enlarges functions, possibly revealing more invariants.

        • pizlonator 19 hours ago

          Sure. Any pass that is scoped to functions (or even loops, or basic blocks) will have increased scope if run after inlining, and most passes run after inlining.

          In the context of this thread, your observation is not meaningful. The point is: LICM doesn't cross function boundary and neither does regalloc, so LICM has no greater scope than regalloc.

  • weinzierl 20 hours ago

    "LLVM IR is actually remarkably stable these days."

    I'm by no means an LLVM expert, but my takeaway from when I played with it a couple of years ago was that it is more like the union of different languages. Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands. The IR is more like a common vocabulary than a common language.

    My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.

    Do you think I misunderstood?

    • pizlonator 19 hours ago

      > like the union of different languages

      No. Here are two good ways to think about it:

      1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.

      2. It's a low level representation. It's suitable for lowering other languages to. Theoretically, you could lower anything to it since it's Turing-complete. Practically, it's only suitable for lowering sufficiently statically-typed languages to it.

      > Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands.

      Definitely not. All of those tools have a shared understanding of what happens when LLVM executes on a particular target and data layout.

      The only flexibility is that you're allowed to alter some of the semantics on a per-target and per-datalayout basis. Targets have limited power to change semantics (for example, they cannot change what "add" means). Data layout is its own IR, and that IR has its own semantics - and everything that deals with LLVM IR has to deal with the data layout "IR" and has to understand it the same way.

      > My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.

      Not parsing this statement very well, but bottom line: LLVM IR is remarkably stable because of Hyrum's law within the LLVM project's repository. There's a TON of code in LLVM that deals with LLVM IR. So, it's super hard to change even the smallest things about how LLVM IR works or what it means, because any such change would surely break at least one of the many things in the LLVM project's repo.

      • jcranmer 19 hours ago

        > 1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.

        This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.

        (In terms of frontends, I've seen "Rust needs/wants this" as much as Clang these days, and Flang and Julia are also pretty relevant for some things.)

        There's currently a working group in LLVM on building better, LLVM-based semantics, and the current topic du jour of that WG is a byte type proposal.

      • weinzierl 18 hours ago

        Thanks for your detailed answer. You encouraged me to give it another try and have closer look this time.

    • enos_feedler 20 hours ago

      This take makes sense in the context of MLIR's creation, which introduced dialects, i.e. namespaces within the IR. Given that it was created by Chris Lattner, I would guess he saw these problems with LLVM as well.

  • fooker 21 hours ago

    There is a rematerialize pass, there is no real reason to couple it with register allocation. LLVM regalloc is already somewhat subpar.

    What would be neat is to expose all the right knobs and levers so that frontend writers can benchmark a number of possibilities and choose the right values.

    I can understand this is easier said than done of course.

    • pizlonator 21 hours ago

      > There is a rematerialize pass, there is no real reason to couple it with register allocation

      The reason to couple it to regalloc is that you only want to remat if it saves you a spill

      • fooker 21 hours ago

        Remat can produce a performance boost even when everything has a register.

        Admittedly, this comes up more often in non-CPU backends.

hoyhoy 17 hours ago

I asked the guy working on compiler-rt to change one boolean so the LLVM 18 build would work on macOS, and he locked the whole issue down as "heated" and it's still not fixed four years later.

I love LLVM though. clang-tidy, ASAN, UBSAN, LSAN, MSAN, and TSAN are AMAZING. If you are coding C and C++ and NOT using clang-tidy, you are doing it wrong.

My biggest problem with LLVM rn is that -fbounds-safety is only available on Xcode/AppleClang and not LLVM Clang. MSAN and LSAN are only available on LLVM and not Xcode/AppleClang. Also Xcode doesn't ship clang-tidy, clang-format, or llvm-symbolizer. It's kind of a mess on macOS rn. I basically rolled my own darwin LLVM for LSAN and clang-tidy support.

The situation on Linux is even weirder. RHEL doesn't ship libcxx, but Fedora does ship it. No distro has libcxx instrumented for MSAN at the moment which means rolling your own.

What would be amazing is if some distro would just ship native LLVM with all the things working out of the box. Fedora is really close right now, but I still have to build compiler-rt manually for MSAN support.

jcranmer a day ago

Given some of the discussions I've been stuck in over the past couple of weeks, one of the things I especially want to see built out for LLVM is a comprehensive executable test suite that starts not from C but from LLVM IR. If you've ever tried working on your own backend, one of the things you notice is there's not a lot of documentation about all of the SelectionDAG stuff (or GlobalISel), and there is also a lot of semi-generic "support X operation on top of Y operation if X isn't supported." And the precise semantics of X or Y aren't clearly documented, so it's quite easy to build the wrong thing.

tvali 3 hours ago

https://tatsu.readthedocs.io/en/stable/ - this is what I found when looking for lightweight syntax parsers. As for LLVM: in my experience, using it to play with little languages or ideas (such as an additional tag) is so heavyweight that it's as hard to learn as the Isabelle Proof Assistant. Large systems are interesting, but it's worth mentioning that you often only need 1% of the API for 99% of the functionality.

ggggffggggg 21 hours ago

> This is somewhat unsurprising, as code review … may not provide immediate value to the person reviewing (or their employer).

If you get “credit” for contributing when you review, maybe people (and even employers, though that is perhaps less likely) would find doing reviews to be more valuable.

Not sure what that looks like; maybe whatever shows up in GitHub is already enough.

  • loeg 20 hours ago

    Honestly, the same phenomenon is a problem inside companies as well. My employer credits review quality and quantity relatively well (i.e., in annual performance review), but it still isn't a strong enough motivator to really get the rate up to a satisfactory level.

pja 18 hours ago

Six years ago I was building LLVM pretty regularly on an 8GB Dell 9360 laptop whilst on a compiler related contract. (Still have it actually - that thing is weirdly indestructible for a cheap ultrabook.)

Build time wasn’t great, but it was tolerable, so long as you reduced link parallelism to squeeze inside the memory constraints.

Is it still possible to compile LLVM on such a machine, or is 8GB no longer workable at all?

  • adgjlsfhk1 14 hours ago

    If you don't build with parallelism and have a couple gigs of swap available, it should work (although you might need to set some command line flags to use the right linker settings).

  • mungaihaha 10 hours ago

    > or is 8GB no longer workable

    LLVM compiles in less than an hour on my old M1 Mac in all the build configurations I have tried so far

    • pja an hour ago

      Pretty much the same then - good news!

ksec 21 hours ago

>Compilation time

I remember part of the selling point of LLVM during its early stage was compilation time being so much faster than GCC.

LLVM started about 15 years after GCC. Considering LLVM is 23 years old already, I wonder if something new will pop up again.

  • pjmlp 3 hours ago

    If it wasn't for Apple wanting to get rid of GCC due to licensing, and Google as well on Android, LLVM would have remained like the Andrew Compiler Toolkit, MSR Phoenix, and similar endeavours: another compiler research project at the University of Illinois.

    Thus, what would be the commercial reason to support an LLVM successor, especially since the companies responsible for LLVM going mainstream are happy with the current C and C++ support, mostly using LLVM for other programming language frontends?

    • flamedoge 2 hours ago

      A non-C/C++-centric, performant compiler, maybe. Aliasing support in C is pretty limited, and a performant language like Fortran (or its more modern equivalents) may want a more efficient, concise IR with less of the overhead that LLVM carries.

      • pjmlp an hour ago

        Yeah, but those already exist, as plenty of compiled languages are bootstrapped already, thus I don't see the business value of an LLVM-vNext.

        One might argue GraalVM could be such a successor; however, it has a history that traces back to Sun Labs' Maxine VM, it is focused on the Java ecosystem and serverless deployments into Oracle Cloud, and for compiler development its target audience doesn't overlap with LLVM folks (C++ vs Java tooling).

  • mungaihaha 10 hours ago

    > Considering LLVM is 23 years old already. I wonder if something new again will pop up

    LLVM is actually really, really good at what it does (compiling C/C++ code). Not perfect, but good enough that it would take tens of thousands of competent man-hours to match it

  • [removed] 20 hours ago
    [deleted]
anarazel 19 hours ago

FWIW, the article says "Frontends are somewhat insulated from this because they can use the largely stable C API." but that's not been my/our experience. There are parts of the API that are somewhat stable, but other parts (e.g. Orc) that change wildly.

  • nikic 19 hours ago

    Yes, the Orc C API follows different rules from the rest of the C API (https://github.com/llvm/llvm-project/blob/501416a755d1b85ca1...).

    • anarazel 19 hours ago

      I know, but even if it's not breaking promises, the constant stream of changes makes it rather painful to utilize LLVM. Not helped by the fact that unless you embed LLVM, you have to deal with a lot of different LLVM versions out there...

      • lhames 15 hours ago

        FWIW eventual stability is a goal, but there's going to be more churn as we work towards full arbitrary program execution (https://www.youtube.com/watch?v=qgtA-bWC_vM covers some recent progress).

        If you're looking for stability in practice: the ORC LLJIT API is your best bet at the moment (or sticking to MCJIT until it's removed).

pklausler 17 hours ago

> There are thousands of contributors and the distribution is relatively flat (that is, it’s not the case that a small handful of people is responsible for the majority of contributions.)

This certainly varies across different parts of llvm-project. In flang, there's very much a "long tail": according to "git blame", 80% of its 654K lines are attributed to the 17 contributors (out of 355 total) who are each responsible for 1% or more of them.

  • nikic 17 hours ago

    That was ambiguously phrased. The point I was trying to make here is that we don't have the situation that is very common for open-source projects, where a project might nominally have 100 contributors, but in reality it's one person doing 95% of the changes.

    LLVM of course has plenty of contributors who only ever landed one change, but the thing that matters for project health is that the group of "top contributors" is fairly large.

    (And yes, this does differ by subproject, e.g. lld is an example of a subproject where one contributor is more active than everyone else combined.)

    • pklausler 17 hours ago

      There may be a difference of degree here, but not a difference of kind.

Panzerschrek 8 hours ago

ABI / calling convention handling - that's exactly my pain. As a compiler developer, I need to manage argument passing myself in my frontend code, which sometimes even requires counting registers.

apitman 17 hours ago

My main concern with LLVM is that it adds 30+ million lines of code dependency to any language that relies on it.

Part of the reason I'm not ready to go all in on Rust is that I'm not willing to externalize that much complexity in the programs I make.

sixthDot 6 hours ago

Also, the C API is a bit of a poor relation. Plenty of useful options (or even opt passes!) are not available.

  • r2vcap 5 hours ago

    I think a large part of this comes from the fact that the expressiveness of LLVM’s C++ APIs does not translate well into a “plain old C” style interface. Many of the abstractions and extension points are simply awkward or impractical to expose in C.

    On top of that, there is little incentive for contributors to invest in the C API: most LLVM users and developers interact with the C++ API directly, so new features and options tend to be added there first, and often exclusively. As a result, the C API inevitably lags behind and remains a second-class citizen.

Panzerschrek 8 hours ago

LLVM also has (in my opinion) no capacity to review issues. None of the issues I have created were addressed, including a couple of really painful bugs.

[removed] 20 hours ago
[deleted]
hu3 19 hours ago

Hey Nikita, if you're reading this, Thank You! for your contributions to PHP!

We miss you!

sylware 2 hours ago

I remember the time I dove into LLVM: the object orientation was so aggressive that you had to embrace the whole object-oriented model just to have a chance of understanding what the code was actually doing.

[removed] a day ago
[deleted]
phplovesong 21 hours ago

Compile times are an issue, not only for LLVM itself, but also for its users; a prime example is Rust. Rust has horrible compile times for anything larger, which makes it a real PITA to use.

  • zbentley 29 minutes ago

    I think that’s primarily a Rust issue, not an LLVM issue. LLVM is at least competitive performance-wise in every case where I’ve used it, and is usually the fastest option outright (given a specific linker behavior). That’s especially true on larger code bases (e.g. Chromium, or ZFS).

    Rust is also substantially faster to compile than it was a few years ago, so I have some hope for improvements in that area as well.

neuroelectron a day ago

It's amazing to me that this is trusted to build so much of software. It's basically impossible to audit yet Rust is supposed to be safe. It's a pipe dream that it will ever be complete or Rust will deprecate it. I think infinite churn is the point.

  • pornel 21 hours ago

    Rust does its own testing, and regularly helps fix issues in LLVM (which usually also benefits clang users and other LLVM languages).

    Optimizing compilers are basically impossible to audit, but there are tools like alive2 for checking them.

  • bigstrat2003 20 hours ago

    > I think infinite churn is the point.

    That would require the LLVM devs to be stupid and/or evil. As that is not the case, your supposition is not true either. They might be willing to accept churn in the service of other goals, but they don't have churn as a goal unto itself.

  • hu3 a day ago

    Go is sometimes criticised for not using LLVM but I think they made the right choice.

    For starters the tooling would be much slower if it required LLVM.

    • phplovesong 21 hours ago

      Also OCaml. Having your own compiler is THE way for language development, IMHO.

      • anonymous908213 19 hours ago

        Personally I think a happy medium is to compile to C99. Then, after your own compiler's high-level syntax transformation pass, you can run the output through the Tiny C Compiler, which is somewhere on the order of 10x faster than Clang -O0. When you need performance optimizations at the cost of build speed, or to support a compilation target that TCC does not, you can freely switch to compiling with Clang, getting much of the value of LLVM without ever specifically targeting it. This is what I do for my own language, and it makes my life significantly easier and is perfectly sufficient for my use, since, as with most languages, my language will never be used by millions of people (or perhaps only ever one person, as I have not deigned to publish it).

        I think writing a compiler targeting machine code from scratch only really makes sense if you have Google's resources, as Go did. That includes both the money and the talent pool of employees that can be assigned to work on the task full-time; not everyone has Ken Thompson lying around on payroll. To do better than LLVM is a herculean feat, and most languages will never be mainstream enough to justify the undertaking; indeed I think an undertaking of that scale would prevent a language from ever getting far enough along to attract users/contributors if it doesn't already have powerful backing from day 0.