Comment by web3aj 2 days ago

75 replies

The internal tools at Meta are incredible tbh. There’s an ecosystem of well-designed internal tools that talk to each other. That was my favorite part of working there.

Random_BSD_Geek 2 days ago

Polar opposite of my experience. To achieve the technical equivalent of changing a lightbulb, spend the entire day wrangling a dozen tools which are broken in different ways, maintained by teams that no longer exist or have completely rolled over, only to arrive at the finish line and discover we don't use those lightbulbs anymore. Move things and break fast.

  • loeg 2 days ago

    IMO there's a mix of a few really good, widely used, well-supported tools, plus a long tail of cruftier random tiny tools whose original teams are gone.

  • extr 2 days ago

    Yeah 100%. I found it immensely frustrating to be using tools with no community (except internally), so-so documentation, and features that were clearly broken in a way that would be unacceptable for a regular consumer product. If you have a question or error not covered by an internal search or documentation, good luck, you'll need it. Literally part of the reason I left the company.

    • landedgentry 2 days ago

      Well, you're supposed to read the code and figure it out. And if you can't, you're not a good enough engineer. According to people at Meta.

      • extr 2 days ago

        People probably think you're exaggerating, but it's true. Sometimes when I would get blocked, the suggestion was to “read the source code” or “submit a fix” on some far-flung internal project. Huge fucking waste of time and effort, completely unserious.

      • KaiserPro a day ago

        Welcome to Meta! Where everything is a murder mystery.

        Except you're not really sure if there has been a murder, or sometimes you wonder if you're the murderer, because at every turn you're told that you've been a bad dev for trying x, y, and z.

      • moandcompany 2 days ago

        Same as Google. Many internal tools have painful interfaces and poor or missing documentation, because the hiring bar was high and it was acceptable to assume the user's skill level is high enough to figure things out. That attitude becomes a bigger problem when trying to sell tools to the public (e.g. Google Cloud Platform).

      • fsociety a day ago

        Or, you know, go chat with the tool maintainers; they want people using their tools for impact.

    • zer0zzz 2 days ago

      Agreed. I often get my work done using open-source build instructions and tools, and then when everything works I port it to internal infra. Other people do the opposite, which for open-source-based codebases has the nasty side effect of the work having no upstreamable tests!

  • aprilthird2021 a day ago

    But you're both talking about different things. The tools are often neglected, lacking documentation, etc. But they also integrate really tightly with each other, which allows unparalleled visibility into, and control over, enormous systems with many moving parts.

  • bozhark 2 days ago

    Move Smooth and Fix Things (tm) is our nonprofit corporation’s version of this atrocious motto.

  • ElonChrist a day ago

    It's been a while, but I recall fighting with the massive checkout sizes needed to do anything of consequence with the internal tooling; they'd cause the VMs to run out of disk space and corrupt my work. I got very used to rsyncing to my laptop every few minutes and rebuilding the VM multiple times per day. A totally frustrating and pointless waste of time.

moandcompany 2 days ago

My opinion: many Meta tools and processes seem like they were created during the Google->FB exodus by former Googlers who sought to recreate something they'd had at Google, while changing the aspects that annoyed them or diverged from their needs. This is not a bad thing.

Since Bento doesn't appear to be usable by the public, a parallel that gives people a feel for this kind of cross-tool integration is Google's Colaboratory / Colab notebooks (https://colab.research.google.com/), which have many baked-in integrations driven by actual internal use (i.e. dogfooding).

  • kridsdale3 2 days ago

    As someone from both, I confirm/support your opinion 100%.

  • mark_l_watson a day ago

    I agree; the paid Pro version of Colab just seems to have the features I need. I often use it because it simply saves me time and hassle.

KaiserPro a day ago

You and I must be working in different areas.

For any kind of general Python/C++ work, it's a _massive_ pain.

The integrated debugger rarely works, and it's a 30-minute recompile to figure that out. The documentation for actually being efficient in build/run/test is basically "ask the old guy in the corner". You'd best hope they know and are willing to share.

The code search is great! The downside is that nobody bothers to document stuff, so that's all you've got. (Comments/docstrings are apparently for weaklings.)

You want to use a common third-party library? You'd best hope it's already ingested; otherwise you're going to spend the next few days trying to get it into the codebase. (Yes, there are automated tools; no, they don't always work.) Also, you're now on the hook for its security upgrades.

JohnMakin 2 days ago

One of the crazier things an L4 Meta colleague of mine told me, which I still don't entirely believe, is that Meta pretty much has their own fork of everything, even tools like git. Is this true?

  • tqi 2 days ago

    Facebook actually doesn't use git; they use Mercurial (https://graphite.dev/blog/why-facebook-doesnt-use-git).

    That decision is also illustrative of why they end up forking most things: Facebook's usage patterns are at the far extreme end for almost any tool, and things that are non-issues with fewer engineers or a smaller codebase become complete blockers.

    • kridsdale3 2 days ago

      Yes. When I used to talk about this with interviewees, I explained that every tool people commonly use sits somewhere on the Big-O curves for scaling. Most of the time we don't really care whether a tool is O(n) or O(10n) or whatever.

      At Meta, N tends to be hundreds of billions to hundreds of trillions.

      So your algorithm REALLY matters. And git has worse Big-O behavior than Mercurial, so we had to switch.
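
      A back-of-envelope sketch of that point (a minimal illustration with made-up per-file costs, not Meta's real numbers): the constant factor that is invisible at normal repo sizes becomes the whole story at monorepo scale.

          # Same O(n) walk over the working copy, two hypothetical per-item costs.
          # All numbers are invented purely for the comparison.
          PER_FILE_FAST = 2e-7  # seconds per file, lean implementation
          PER_FILE_SLOW = 2e-6  # seconds per file, a 10x cruftier one

          for n in (10_000, 1_000_000, 100_000_000_000):
              print(f"n={n:>15,}  fast={n * PER_FILE_FAST:>10.1f}s  slow={n * PER_FILE_SLOW:>10.1f}s")

          # At n=10k both round to "instant"; at n=100B the same 10x constant
          # is the gap between roughly five hours and two days for one status-style walk.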

      • steventhedev a day ago

        I'm gonna disagree with you there. The difference was in stat patterns, and the person at Facebook who ran the tests had something wrong with their disk setup that was causing it to run slowly. They ignored multiple responses that reproduced very different results.

        The nail in the coffin was a benchmark GitHub ran two years ago that got the results FB should have gotten: git status within seconds.

        Facebook didn't use Mercurial because of Big-O; they used it because of hubris and a bad disk config.

      • master_crab a day ago

        If git is blocking you, you are using it wrong. Lotta instances of people treating it as an artifact repository. Use it correctly with a branching strategy that works for your use case and it's bulletproof.

        Plenty of other customers with the same magnitude problems as Meta are using Git perfectly fine.

    • LarsDu88 2 days ago

      They use Sapling, an in-house clone of Mercurial that was open-sourced two years ago.

    • herval a day ago

      FB uses Mercurial _for most things_, but like any company that size, there are teams that use git and even teams that use Perforce.

  • ipsum2 2 days ago

    Yep. Zeus is a fork of ZooKeeper, Hack is a fork of PHP, etc. The forking is usually needed to make things work with the internal environment.

    The few things that don't have forks are usually the open-source projects like React or PyTorch, but even those have some custom features added to make them work with FB internals.

    • gcr 2 days ago

      This is also how things work at Google.

      Google also maintains a monorepo with "forks" of all software that they use. History diverges, but is occasionally synchronized for things like security updates etc.

      • zhengyi13 2 days ago

        Am I completely off-base/confused thinking that the GFE originally started life (like back under csilver) as a fork of boa[0]?

        [0]: http://www.boa.org/

    • grantsucceeded 2 days ago

      Few companies experienced the explosive growth FB did, though many will claim to have done so. Hack made the existing PHP codebase scale to insane levels while the company was growing too fast to even attempt transitioning away from, or shrinking, the PHP codebase, as I recall (I was an SRE, not a dev).

      Zeus, likewise.

      • ipsum2 2 days ago

        You worked at FB, but you call yourself an SRE, not a PE? ;)

    • ahupp a day ago

      Nit: HHVM was a completely new implementation of a runtime for a PHP-like language; it wasn't a fork of Zend.

  • jamra 2 days ago

    Meta doesn't use git; it uses Mercurial. They do fork it, because they have a huge monorepo. They created the concept of stacked commits, which is a way of not having branches: each commit sits in a stack and is then merged into master. Lots of things are built for scaling.

  • sdenton4 2 days ago

    It wouldn't be terribly surprising. Forking everything provides a liiiitle bit of protection against things like the 'left pad' incident.

    • 3eb7988a1663 2 days ago

      Left-pad happened because the creator pulled the package from the public registry, not because of a destructive code change.

      I assume all of the big tech companies host internal mirrors of every single code dependency + tooling. Otherwise they could not guarantee that they can build all of their code.

jchonphoenix 2 days ago

Meta tools are best in class when the requirement is scale, or when the external tools haven't matured yet.

crabbone 2 days ago

A friend of mine is doing his PhD while interning at Meta. He does not share your excitement... at all. To summarize his complaints: a framework written a long while ago, with design flaws that were cast in stone, that requires exorbitant effort to accomplish simple things (under the pretense of global integration that usually isn't needed, and that even if it were needed, still wouldn't work).

  • sangnoir a day ago

    How long has he been interning? Long enough to have learned the timescale big-tech roadmaps operate on? If he wants a feature, he'd better write it himself (assuming his PR doesn't conflict with an upcoming rewrite, coming "soon"), or lobby to get it slotted for the second quarter of 2026.

    • crabbone a day ago

      He started right about the time COVID started, so... about four years now, I think. I'm not sure if those were contiguous though.

      I'm not sure what your idea about PRs and features has to do with the above... he's not there to work on the internal infra framework. He's there for ML stuff. Unfortunately, the road to the latter goes through the former, but he's not really the kind of programmer who'd deal with Facebook's infrastructure and plumbing.

      The point is, it's inconvenient. Whether it's inconvenient because Facebook works on a five-year-plan basis, or for whatever other reason, doesn't really matter. It's just not good.

      I also have no problem admitting that all the big companies I've worked for so far (two in total, one being Google) had bad internal tools. I don't imagine Facebook is anything special in this respect. I just don't feel it's necessary to justify it in any way. It's just a fact of life: large companies have a tendency to produce bad internal tools (but small ones often have none whatsoever!). It's a water-is-wet kind of thing...

      • sangnoir 18 hours ago

        > I'm not sure what your idea about PRs and features has to do with the above... he's not there to work on the internal infra framework.

        My idea is that if he's not making the monorepo changes himself, he's going to wait an awfully long time for any non-trivial improvement he'd like, because the responsible teams have different priorities sketched out for the next calendar year. It's a function of organization size: unless you have the support of someone very high up the org chart, ICs can't unilaterally adjust another team's priorities.

  • almostgotcaught a day ago

    > A friend of mine is doing his PHD while being an intern at Meta

    I interned thrice as a PhD student at FB. Your friend isn't entirely wrong, but he also just doesn't have enough experience to judge; all enormous companies are like this. FB is far and away better than almost all such companies (with the probable exception of Google/Netflix).

    • jonathanyc a day ago

      Agreed. I'm reading some complaints in the thread about being told to "just read the source code" for internal tools at Meta. When I worked at Apple we didn't even get the source code!

    • crabbone a day ago

      I don't see why saying that Facebook's tools are bad should be invalidated by saying that Google's or others' tools are bad too. Google being bad doesn't vindicate or improve Facebook's tools. There's no need for perspective: if it doesn't work well for what it's designed to do, then that's all there is to it.

      • almostgotcaught 15 hours ago

        > Google's or others' tools are bad too

        lol bruh, read my response again - FB's, Google's, and Amazon's tools are light-years ahead of #ARBITRARY_F100_COMPANY. You haven't a clue what "bad" means if you've never worked in a place that has > 1000 engineers.

  • slt2021 2 days ago

    How else can you build an empire as an Engineering Manager and get a promo?

    Fork open source, then demand resources to maintain the resulting monster.

    Easiest promotion + job security.

    It's even called "Platform Engineering" these days.

Qshdg 2 days ago

Looking at some of the bureaucracy in their open-source projects, I'd say they need less tooling and more thinking. These tools help keep spaghetti codebases from totally imploding.

baggiponte 2 days ago

Uuuh, can you tell us a bit more about wasabi, the Python LSP? I saw a post years ago and have been eager to see whether it'd be open-sourced (or why it wouldn't be).