Svetlitski 5 days ago

It’s just a huge pain to build and link against. Before the bazel 7.4.0 change your options were basically:

1. Use it as a dynamically linked library. This is not great because you’re taking at a minimum the performance hit of going through the PLT for every call. The forfeited performance is even larger if you compare against statically linking with LTO (i.e. so that you can inline calls to malloc, get the benefit of FDO, etc.). Not to mention all the deployment headaches associated with shared libraries.

2. Painfully create a static library by hand. I’ve done this, and it’s awful, especially if you want to go the extra mile to capture as much performance as possible and at least get partial LTO (i.e. LTO of TCMalloc independent of your application code, compiling all of TCMalloc’s compilation units together to create a single object file; a sketch of the idea follows below).
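
For the curious: that single-object-file trick is essentially a unity build. A minimal sketch of the idea, with made-up file names since this isn't TCMalloc's actual layout:

    // tcmalloc_unity.cc -- hypothetical unity translation unit.
    // Compiling the whole allocator as one TU lets the compiler optimize
    // across all of it (inlining, cross-function analysis) even when the
    // rest of your build doesn't use LTO at all.
    #include "tcmalloc/common.cc"
    #include "tcmalloc/page_heap.cc"
    #include "tcmalloc/tcmalloc.cc"
    // ... every remaining .cc in the library ...

Compile that one file, then archive it with `ar rcs libtcmalloc.a tcmalloc_unity.o`. The catch is that internal-linkage names (file-level statics, anonymous namespaces) from different .cc files can now collide in one TU, which is part of why this is so painful to do by hand.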

When I was at Meta I imported TCMalloc to benchmark against (to highlight areas where we could do better in Jemalloc) by painstakingly hand-translating its bazel BUILD files to buck2, because there was legitimately no better option.

As a consequence of being so hard to use outside of Google, TCMalloc has many more unexpected (sometimes problematic) behaviors than Jemalloc when used as a general-purpose allocator in other environments (e.g. it basically assumes you are using a certain set of Linux configuration options [1] and behaves rather poorly if you’re not).

[1] https://google.github.io/tcmalloc/tuning.html#system-level-o...

MaskRay 5 days ago

Thanks for sharing the insight!

As I observed when I was at Google: tcmalloc didn't have a dedicated team; it was a project driven by server performance optimization engineers aiming to improve the performance of important internal servers. Extracting it to github.com/google/tcmalloc was complex due to intricate dependencies (https://abseil.io/blog/20200212-tcmalloc). As internal performance priorities demanded more focus, less time was available for maintaining the CMake build system. Maintaining the repo could at best be described as a community contribution activity.

> Meta’s needs stopped aligning well with those of external uses some time ago, and they are better off doing their own thing.

I think Google's needs diverged from external uses even longer ago :) (For a long time the google3 and gperftools tcmalloc implementations were very different.)

mort96 4 days ago

Everything from Google is an absolute pain to work with unless you're in Google using their systems, FWIW. Anything from the Chromium project is deeply entangled with everything else from the Chromium project, as part of one gigantic Chromium source tree with all dependencies and toolchains vendored. They do not care about ABI whatsoever, to the point that a lot of Google libraries change their public ABI based on whether address sanitizer is enabled or not, meaning you can't enable ASAN for your code if you use pre-built (e.g. package manager provided) versions of their code. Their libraries also tend to break if you link against them from a project with RTTI enabled, a compiler set to a slightly different version, or any number of other minute differences that most other developers don't let affect their ABI.
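
To make the ASAN point concrete, here's a contrived illustration (hypothetical header, not actual Google code) of how a header-level #ifdef silently changes ABI:

    // foo.h -- hypothetical third-party header
    #include <cstddef>

    struct Buffer {
      char* data;
      std::size_t size;
    #if defined(__SANITIZE_ADDRESS__)  // defined by GCC under -fsanitize=address
      void* poison_state;              // extra member only in ASan builds
    #endif
    };

    // On a 64-bit target, sizeof(Buffer) is 16 in the package manager's
    // build and 24 in your ASan build; pass one across the library
    // boundary and you get silent memory corruption.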

And if you try to build their libraries from source, that involves downloading tens of gigabytes of sysroots and toolchains and vendored dependencies.

Oh and you probably don't want multiple versions of a library in your binary, so be prepared to use Google's (probably outdated) version of whatever libraries they vendor.

And they make no effort whatsoever to distinguish between public header files and their source code, so if you wanna package up their libraries, be prepared to write scripts to extract the headers you need (including headers from vendored dependencies); you can't just copy some 'include/' folder.

And their public headers tend to do idiotic stuff like `#include "base/pc.h"`, where that `"base/pc.h"` path is not relative to the file doing the include. So you're gonna have to pollute the include namespace. Make sure not to step on their toes! There's a lot of them.

I have had the misfortune of working with Abseil, their WebRTC library, their gRPC library, and their protobuf library, and it's all terrible. For personal projects where I don't have a very, very good reason to use Google code, I try to avoid it like the plague. For professional projects where I've had to use libwebrtc, the only reasonable approach is to silo off libwebrtc into its own binary which only deals with WebRTC, typically with a line-delimited JSON protocol on stdin/stdout. For things like protobuf/gRPC where that hasn't been possible, you just have to live with the suffering.

...This comment should probably have been a blog post.

  • ahartmetz 4 days ago

    I think your rant isn't long enough to include everything relevant ;) The Blink web engine (which I sometimes compile for qtwebengine) takes a really long time to compile, several times longer than Gecko according to some info I found online. Google has a policy of not using forward declarations, #including everything instead. That's a pretty big WTF for anyone who has ever optimized build times. Google probably just throws hardware and (distributed) caching at the problem, not giving a shit about anyone else building it. Oh, it also needs about 2 GB of RAM per build thread - basically nothing else does.

    • LtdJorge 4 days ago

      Even with Firefox using Rust and requiring a build of many crates, qtwebengine takes more time. It was so bad that I had to remove packages from my system (Gentoo) that were pulling in qtwebengine.

      And I build all Rust crates (including rustc) with -O3, same as C/C++.

    • bialpio 4 days ago

      Chromium deviates from Google-wide policy and allows forward-declarations: https://chromium.googlesource.com/chromium/src/+/main/styleg..., "Forward declarations vs. #includes".

      • ahartmetz 4 days ago

        That is really nice to hear, but AFAICS it only means that it may change in the future, because in current code it was ~all includes last time I checked.

        Well, I remember one - very biased - example where I had a look at a class that was especially expensive to compile, like 40 seconds (on a Ryzen 7950X) and maybe 2 GB of RAM. It had under 200 LOC and didn't seem to do anything that's typically expensive to compile... except for the stuff it included. Which also didn't seem to do anything fancy. But transitive includes can snowball if you don't add any "compile firewalls".
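
        For reference, the standard "compile firewall" is just a forward declaration plus out-of-line definitions; a minimal sketch with made-up names:

            // widget.h -- everything that includes this recompiles whenever
            // it changes, so keep heavyweight includes out of it.
            #include <memory>

            class RenderContext;  // forward declaration, no #include needed

            class Widget {
             public:
              Widget();
              ~Widget();  // defined in widget.cc, where Impl is a complete type
              void Draw(RenderContext& ctx);  // references and pointers only
                                              // need the forward declaration
             private:
              struct Impl;  // members (and their includes) live in widget.cc
              std::unique_ptr<Impl> impl_;
            };

        Consumers of widget.h now pull in essentially nothing transitively, which is exactly what stops the include snowball.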

  • fc417fc802 4 days ago

    Reading this perspective was interesting. I can appreciate that things didn't fit into your workflow very well, but my experience has been the opposite. Their projects seem to be structured from the perspective of building literally everything from source on the spot. That matches my mindset - I choose to build from scratch in a network isolated environment. As a result google repos are some of the few that I can count on to be fairly easy to get up and running. An alarming number of projects apparently haven't been tested under such conditions and I'm forced to spend hours patching up cmake scripts. (Even worse are the projects that require 'npm install' as part of the build process. Absurdity.)

    > Oh and you probably don't want multiple versions of a library in your binary, so be prepared to use Google's (probably outdated) version of whatever libraries they vendor.

    This is the only complaint I can relate to. Sometimes they lag on rolling dependencies forward. Not infrequently there are minor (or not so minor) issues when I try to do so myself, and I don't want to waste time patching up my dependencies, so I get stuck for a while until they get around to it. That said, rolling forward usually works without issue.

    > if you try to build their libraries from source, that involves downloading tens of gigabytes of sysroots and toolchains and vendored dependencies.

    Out of curiosity which project did you run into this with? That said, isn't the only alternative for them moving to something like nix? Otherwise how do you tightly specify the build environment?

    • mort96 4 days ago

      I don't really have the care or time to respond as thoroughly as you deserve, but here are some thoughts:

      > Out of curiosity which project did you run into this with?

      Their WebRTC library for the most part, but also the gRPC C++ library. Unlike WebRTC, grpc++ is in most package managers so the need to build it myself is less, but WebRTC is a behemoth and not in any package manager.

      > That said, isn't the only alternative for them moving to something like nix? Otherwise how do you tightly specify the build environment?

      I don't expect my libraries to tightly specify the build environment. I expect my libraries to conform to my software's build environment, to use versions of other libraries that I provide, etc. I don't mind that Google builds their application software the way they do; Google Chrome can tightly constrain its build environment if Google wants, but their libraries should fit into my environment.

      I'm wondering, what is your relationship with Google software that you build from source? Are you building their libraries to integrate with your own applications, or do you just build Google's applications from source and use them as-is?

      • fc417fc802 4 days ago

        Yeah fair enough, controlling the build environment probably ought to be optional. Sounds like I dodged the issues you ran into due to the combination of specific library plus use case. My experience is limited to abseil as well as the full dawn stack. In all cases I'm statically linking into my own applications, building everything except glibc & co. from source in a network-isolated environment, using the same toolchain, compiler flags, etc.

    • bluGill 4 days ago

      > I choose to build from scratch in a network isolated environment. As a result google repos are some of the few that I can count on to be fairly easy to get up and running.

      If you are building a single Google project, they are easy to get up and running. If you are building your own project on top of theirs, things get difficult. Those library issues will get you.

      I don't know about OP, but we have our own in house package manager. If Conan was ready a couple years sooner we would have used that instead.

  • pavlov 4 days ago

    This matches my own experience trying to use Google's C++ open source. You should write the blog post!

  • ewalk153 4 days ago

    I’ve hit similar problems with their Ruby gRPC library.

    The counter-example is the Go language. The team running Go has put considerable care and attention into making the project welcoming for developers to contribute to, while still adhering to Google code contribution requirements. Building from source is straightforward, and IIRC it's one of the easier cross compilers to set up.

    Install docs: https://go.dev/doc/install/source#bootstrapFromBinaryRelease

    • rstat1 4 days ago

      Go is kind of a pain to build from source: build one version to build another, and another...

      Or rather it was the last time I tried.

      • bbkane 4 days ago

        I think that's how most languages bootstrap.

  • rstat1 4 days ago

    I agree to a point. grpc++ (and protobuf and boringssl and abseil and....) was the biggest pain in the ass to integrate into a personal project I've ever seen. I ended up having to write a custom tool to convert their Bazel files to the format my projects tend to use (GN and Ninja). Many hours wasted. Thankfully there were no library-specific "sysroots" or "toolchains" involved, because I'm sure that would have made things even worse.

    Upside is (I guess) if I ever want to use grpc in another project the work's already done and it'll just be a matter of copy/paste.

  • rfoo 4 days ago

    > they make no effort what so ever to distinguish between public header files and their source code

    They did, in a different way. The rest of the world is used to distinguishing by convention, putting them in different directory hierarchies (src/, include/). google3 depends on the build system to do so: "which header file is public" is documented in BUILD files. You are then required to use their build system to grasp the difference :(

    > And their public headers tend to do idiotic stuff like `#include "base/pc.h"`, where that `"base/pc.h"` path is not relative to the file doing the include.

    I have to disagree on this one. Relying on relative include paths suck. Just having one `-I/project/root` is the way to go.

    • mort96 4 days ago

      > I have to disagree on this one. Relying on relative include paths suck. Just having one `-I/project/root` is the way to go.

      Oh to be clear, I'm not saying that they should've used relative includes. I'm complaining that they don't put their includes in their own namespace. If public headers were in a folder called `include/webrtc` as is the typical convention, and they all contained `#include <webrtc/base/pc.h>` or `#include "webrtc/base/pc.h"` I would've had no problem. But as it is, WebRTC's headers are in include paths which it's really difficult to avoid colliding with. You'll cause collisions if your project has a source directory called `api`, or `pc`, or `net`, or `media`, or a whole host of other common names.
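
      Concretely, the failure mode looks like this (hypothetical header name, but this is the shape of it):

          // Suppose your tree and WebRTC's both happen to contain api/stats.h.
          // WebRTC's own sources say:
          #include "api/stats.h"

          // Built with both roots on the search path:
          //   clang++ -Imy_project -Iwebrtc ...
          // the include resolves to my_project/api/stats.h first, and WebRTC
          // silently compiles against the wrong header.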

      • rfoo 4 days ago

        Thanks for the clarification. Yeah, that's pretty frustrating.

        Now I'm curious why grpc, webrtc and some other Chromium repos were set up like this. Google projects which started in google3 and were later exported as open source don't have this defect, for example tensorflow, abseil, etc. They all have a top-level directory containing all their code, so it becomes `#include "tensorflow/...`.

        Feels like a weird collision of coding style and starting a project outside of their monorepo.

    • alextingle 4 days ago

      >> `#include "base/pc.h"`, where that `"base/pc.h"` path is not relative to the file doing the include.

      > I have to disagree on this one.

      The double-quotes literally mean "this dependency is relative to the current file". If you want to depend on a -I, then signal that by using angle brackets.

      • mort96 4 days ago

        Eh, no. The quotes mean "this is not a dependency on a system library". Quotes can include relative to the including file, or relative to directories specified with -I. The only thing they can't do is include things relative to directories specified with -isystem and the system include directories.

        I would be surprised if I read some project's code where angle brackets are used to include headers from within the same project. I'm not surprised when quotes are used to include code from within the project but relative to the project's root.

        • alextingle 2 days ago

          The only difference between "" and <> is that the former adds the current file's directory to the beginning of the search path.

          So the only reason to use "" instead of <> is when you need that behaviour, because the dependency is relative to the current file.

          If you use "" in any other situation, then you are introducing a potential error, because now someone can change the meaning of your code simply by creating a file with a name and location that happens to match your dependency.

          (Yes, some compilers have -isystem and -iquote which modify that behaviour, but those options are not standard, and can't be relied upon. I'd strongly advise against their use.)
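
          A tiny illustration of that shadowing hazard, with made-up file names:

              // mylib/util.h says:
              #include "config.h"  // intends the config.h found via -Ithird_party

              // But "" searches mylib/ (the including file's directory) first,
              // so anyone who later adds an unrelated mylib/config.h silently
              // changes what util.h includes. With <config.h>, only the -I
              // search path is consulted and the meaning stays fixed.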

kstrauser 5 days ago

Wow. That does sound quite unpleasant.

Thanks again. This is far outside my regular work, but it fascinates me.

prpl 5 days ago

I’ve successfully used LLMs to migrate Makefiles to bazel, more or less. I’ve not tried the reverse but suspect (2) isn’t so bad these days. YMMV, of course, but food for thought

  • benced 4 days ago

    Yep I've done something similar. This is the only way I managed to compile Google's C++ S2 library (spatial indexing) which depends on absl and OpenSSL.

    (I managed to avoid infecting my project with BoringSSL.)

  • rfoo 4 days ago

    Dunno why you got downvoted, but I've also tried to let Claude translate a bunch of BUILD files to equivalent CMakeLists.txt. It worked. The resulting CMakeLists.txt looks super terrible, but so do 95% of the CMakeLists.txt in this world, so why bother; it's doomed anyway.

    • mort96 4 days ago

      They got downvoted because 1) comments of the form "I gave a chat bot a toy example of a task and it managed it" are tired and uninformative, and 2) nobody was talking about anything which would make translating a Makefile into Bazel relevant. Nobody here has a Makefile we wish was Bazel; we wish Google code was easier to work with.

      • jeffbee 4 days ago

        The person above was saying they did a tedious manual port of tcmalloc to buck. Since tcmalloc provides both bazel and cmake builds, it seems relevant that these days a person could have forced a robot to do the job of writing the buck file given the cmake or bazel files.

      • prpl 4 days ago

        People are discussing things that are tedious work. I think the conversion to Bazel from a Makefile is much more tedious and error-prone than the reverse, in part because of Bazel sandboxing, although that shouldn't make much of a difference for a well-defined collection of Makefiles for a C library.

        The reverse should be much easier, which was the point of the post. Pointing out a capability (translation of build systems) that is handled well is, well, informative. The future isn't evenly distributed, and people aren't always aware of capabilities, even on HN.

        • mort96 4 days ago

          What's really tedious is the constant chat bot spam.