Comment by web3aj 2 days ago
The internal tools at Meta are incredible tbh. There’s an ecosystem of well-designed internal tools that talk to each other. That was my favorite part of working there.
Yeah 100%. I found it immensely frustrating to be using tools with no community (except internally), so-so documentation, and features that were clearly broken in a way that would be unacceptable for a regular consumer product. If you have a question or error not covered by an internal search or documentation, good luck, you'll need it. Literally part of the reason I left the company.
Well, you're supposed to read the code and figure it out. And if you can't, you're not good enough an engineer. According to people at Meta.
Same as Google. Many internal tools have painful interfaces and poor or missing documentation because the hiring bar was high and it was acceptable to assume that the user's skill level is high enough to figure it out. That attitude becomes a bigger problem when trying to sell tools to the public (e.g. Google Cloud Platform).
Agreed. I often get my work done using open source build instructions and tools and then, when everything works, I port it to internal infra. Other people work the other way around, which for open-source-based codebases has the nasty side effect that the work ends up with no upstreamable tests!
But you're both talking about different things. The tools are often left in disrepair, lacking documentation, etc. But they also have a really tight integration with each other that allows for unparalleled visibility into, and control over, enormous systems with many moving parts.
It's been a while, but I recall fighting with the massive checkout sizes: doing anything of consequence with the internal tooling caused the VMs to run out of disk space and corrupt my work. I got very used to rsyncing to my laptop every few minutes and rebuilding the VM multiple times per day. A totally frustrating and pointless waste of time.
Large checkouts are a solved problem now https://github.com/facebook/sapling/blob/main/eden/fs/docs/O...
My opinion: Many Meta tools and processes seem like they were created by former Googlers that sought to recreate something they previously had at Google, during the Google->FB Exodus, but also changed aspects of the tool that were annoying or diverged from their needs. This is not a bad thing.
Since Bento doesn't appear to be usable by the public, a parallel version of this that people can get a feel for cross-tool integration would be Google's Colaboratory / Colab notebooks (https://colab.research.google.com/) that have many baked-in integrations driven by actual internal use (i.e. dogfooding).
As someone from both, I confirm/support your opinion 100%.
I agree, the paid-for Pro version of Colab just seems to have the features I need. I often use it because it simply saves me time and hassle.
You and I must be working in different areas.
For any kind of general Python/C++ work, it's a _massive_ pain.
The integrated debugger rarely works, and it's a 30-minute recompile to figure that out. The documentation for actually being efficient in build/run/test is basically "ask the old guy in the corner". You'd best hope they know and are willing to share.
The code search is great! The downside is that nobody bothers to document stuff, so that's all you've got. (Comments/docstrings are for weaklings, apparently.)
You want to use a common third-party library? You'd best hope it's already ingested, otherwise you're going to be spending the next few days trying to get that into the codebase. (Yes, there are auto tools; no, they don't always work.) Also, you're now on the hook to do security upgrades.
One of the crazier things an L4 Meta colleague of mine told me, that I still don't believe entirely, is that Meta pretty much has their own fork of everything, even tools like git. Is this true?
Facebook actually doesn't use git, they use mercurial (https://graphite.dev/blog/why-facebook-doesnt-use-git).
That decision is also illustrative of why they end up forking most things - Facebook's usage patterns are at the far extreme end for almost any tool, and things that are non-issues with fewer engineers or a smaller codebase become complete blockers.
Yes, when I used to talk about this with interviewees, I explained that every tool people commonly use sits somewhere on the Big-O curves for scaling. Most of the time we don't really care if a tool is O(n) or O(10 n) or whatever.
At Meta, N tends to be hundreds of billions to hundreds of trillions.
So your algorithm REALLY matters. And git has a Big-O that is worse than Mercurial, so we had to switch.
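A back-of-the-envelope sketch of the point above (my own illustrative numbers, not actual measurements of git or Mercurial): the same asymptotic gap that is negligible at ordinary repo sizes becomes enormous at hundreds-of-billions scale.

```python
import math

# Illustrative only: comparing a hypothetical O(n) tool against an
# O(n log n) one at two very different values of N.
def ops_linear(n: int) -> float:
    return float(n)

def ops_nlogn(n: int) -> float:
    return n * math.log2(n)

small = 10_000            # an ordinary repo-sized N
huge = 100_000_000_000    # "hundreds of billions", per the comment above

# At small N the gap is a modest constant-ish factor...
print(round(ops_nlogn(small) / ops_linear(small), 1))   # ~13.3x
# ...at huge N it's ~36.5x, and each "x" now means 1e11 extra operations.
print(round(ops_nlogn(huge) / ops_linear(huge), 1))     # ~36.5x
```

The ratio is just log2(N), which is why "O(n) vs O(n log n)" is invisible on a laptop-sized repo and decisive on a Meta-sized one.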
I'm gonna disagree with you there. The difference was with stat patterns, and the person at Facebook who ran the tests had something wrong with the disk setup that was causing it to run slowly. They ignored multiple responses that reproduced very different results.
The nail in the coffin on this was a benchmark GitHub ran two years ago that got the results FB should have gotten: git status completing within seconds.
Facebook didn't use mercurial because of big O, they used it because of hubris and a bad disk config.
If git is blocking you, you are using it wrong. Lotta instances of people treating it as an artifact repository. Use it correctly with a branching strategy that works for your use case and it's bulletproof.
Plenty of other customers with the same magnitude problems as Meta are using Git perfectly fine.
Yep. Zeus is a fork of Zookeeper, Hack is a fork of PHP, etc. It's usually needed to make it work with the internal environment.
The few things that don't have forks are usually the open source projects like React or PyTorch, but even those have some custom features added to make it work with FB internals.
Am I completely off-base/confused thinking that the GFE originally started life (like back under csilver) as a fork of boa[0]?
[0]: http://www.boa.org/
Few companies experienced the explosive growth FB did, though many will claim to have done so. Hack made the existing PHP codebase scale to insane levels; the company was growing too fast to even attempt transitioning away from, or shrinking, the PHP codebase, as I recall (I was an SRE, not a dev).
zeus likewise.
Meta doesn't use git; it uses Mercurial, and it does fork it, because they have a huge monorepo. They created the concept of stacked commits, which is a way of not having branches: each commit sits in a stack and then gets merged into master. Lots of things built for scaling.
Left-pad was from the creator pulling the code from the public package registry (npm), not from a destructive code change.
I assume all of the big tech companies host internal mirrors of every single code dependency + tooling. Otherwise they could not guarantee that they can build all of their code.
Meta tools are best in class when the requirement is scale, or when the external tools haven't matured yet.
A friend of mine is doing his PhD while being an intern at Meta. He does not share your excitement... at all. To summarize his complaints: a framework written a long while ago with design flaws that were cast in stone, that requires exorbitant effort to accomplish simple things (under the pretense of global integration that usually isn't needed, but even if it was needed, would still not work).
How long has he been interning? Is it long enough for him to have learned how long the timescale big-tech roadmaps operate on? If he wants a feature, he better write it himself (if his PR doesn't conflict with an upcoming rewrite, coming "soon"), or lobby to get it slotted for the second quarter of 2026.
He started right about the time COVID started, so... about four years now, I think. I'm not sure if those were contiguous though.
I'm not sure what your idea about PRs and features has to do with the above... he's not there to work on the internal infra framework. He's there for ML stuff. Unfortunately, the road to the latter goes through the former, but he's not really the kind of programmer who'd deal with Facebook's infrastructure and plumbing.
The point is, it's inconvenient. Whether it's inconvenient because Facebook works on a five-year-plan basis, or for whatever other reason they have, doesn't really matter. It's just not good.
I also have no problem admitting that all big companies I've worked for so far (two in total, one being Google) had bad internal tools. I don't imagine Facebook is anything special in this respect. I just don't feel like it's necessary to justify it in any way. It's just a fact of life: large companies have a tendency to produce bad internal tools (but small ones often have none whatsoever!). It's a water-is-wet kind of thing...
> I'm not sure what your idea about PRs and features has to do with the above... he's not there to work on the internal infra framework.
My idea is that if he's not making the monorepo codebase changes himself, he's going to have to wait an awfully long time for any non-trivial improvements he'd like, because the responsible teams have different priorities sketched out for the next calendar year. It's a function of organization size: unless you have the support of someone very high up the org chart, ICs can't unilaterally adjust another team's priorities.
> A friend of mine is doing his PHD while being an intern at Meta
I interned thrice as a PhD student at FB. Your friend isn't entirely wrong, but he also just doesn't have enough experience to judge. All enormous companies are like this. FB is far and away better than almost all such companies (probably with the sole exception of Google/Netflix).
Agreed. I'm reading some complaints in the thread about being told to "just read the source code" for internal tools at Meta. When I worked at Apple we didn't even get the source code!
I don't see why saying that Facebook's tools are bad should be invalidated by saying that Google's or others' tools are bad too. Google being bad doesn't vindicate or improve Facebook's tools. There's no need for perspective: if it doesn't work well for what it's designed to do, then that's all there is to it.
> Google's or others' tools are bad too
lol bruh read my response again - FB's and Google's and Amazon's tools are lightyears ahead of #ARBITRARY_F100_COMPANY. you haven't a clue what "bad" means if you've never worked in a place that has > 1000 engineers.
Uuuh can you tell a bit more about wasabi, the Python LSP? Saw a post years ago and been eager to see whether it’d be open sourced (or why it wouldn’t).
Polar opposite of my experience. To achieve the technical equivalent of changing a lightbulb, spend the entire day wrangling a dozen tools which are broken in different ways, maintained by teams that no longer exist or have completely rolled over, only to arrive at the finish line and discover we don't use those lightbulbs anymore. Move things and break fast.