Mark Zuckerberg freezes AI hiring amid bubble fears
(telegraph.co.uk)
793 points by pera 3 days ago
I'm sure everyone is doing just fine financially, but I think it's common knowledge that these kinds of comp packages are usually a mix of equity and cash earned out over multiple years, with bonuses contingent on milestones, etc. The eye-popping top-line number is insane, but it's also unlikely to be fully realized.
They’re pretty sophisticated people and weighed the trades. It’s not as if they’re deserving of any sort of sympathy.
I think we are stretching the term "corporate rat race" a bit in this case.
Taking rockstar players 'off the pitch' is the best way second-rate competitors can neutralize their opponents' advantage.
Patrick Boyle on youtube has a good explanation of what's going on in the industry: https://youtu.be/3ef5IPpncsg?feature=shared
tl;dw: some of it is anti-trust avoidance and some of it is knee-capping competitors.
It's a great way to kneecap collective growth and development.
Who would be the first-rate companies in this analogy?
It's all in RSUs.
Supposedly, everyone who joins Meta is on the same contract, with the same RSU vesting schedule.
That means these "rockstars" will get a big sign-on bonus (but it's repayable if they leave inside 12 months), then ~$2m every 3 months in shares.
It's not even in RSUs. No SWEs/researchers are getting $100M+ RSU packages. Zuck said the numbers in the media were not accurate.
If you still think they are, do you have any proof? any sources? All of these media articles have zero sources and zero proof. They just ran with it because they heard Sam Altman talk about it and it generates clicks.
I have no idea what’s going on behind the scenes, but Zuckerberg saying “nah that’s not true” hardly seems like definitive proof of anything.
I have never heard of anyone getting a sign on bonus that was unconditional. When I have had signing bonuses they were owed back prorated if my employment ended for any reason in the first year.
This is a very weird take. Lots of people want to actively work on things that are interesting to them or impactful to the world. Places like Meta potentially give the opportunity to work on the most impactful and interesting things, potentially in human history.
Setting that aside, even if the work was boring, I would jump at the chance to earn $100M for several years of white collar, cushy work, purely for the impact I could have on the world with that money.
It's not such a weird take from a perspective of someone who's never had quite enough money. If you've never had enough, the dream is having more than enough, but working for much much more than enough sounds like a waste of time and/or greed. Also, it's hard to imagine pursuing endeavors out of passion because you've never had that luxury.
I was at a startup where someone got an unconditional signing bonus. It wasn't deliberate; they just kept it simple because it was a startup, and they thought they could trust the guy because he was an old friend of the CEO.
The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.
From that point forward, signing bonuses had the standard conditions attached.
Is it imminent? Reading the article, the only thing that's actually changed is that the CEO has stopped hand-picking AI hires and has placed that responsibility on Alexandr Wang instead. The rest is just fluff to turn it into an article. The tech sector being down is happening in concert with the non-tech sector sliding too.
If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.
I am very certain that AI will slowly kill the rest of "social" in the social web outside of closed circles. And they made their only closed-circle app (WhatsApp) unusable and ad-infested. Imo either way they are still in the process of slowly killing themselves.
I've heard of high 7-figure salaries but no 9 figure salaries. Source for this?
"Why Andrew Tulloch Turned Down a $1.5 Billion Offer From Mark Zuckerberg" - https://techiegamers.com/andrew-tulloch-rejects-zuckerberg/
"according to people familiar with the matter."
aka, made up. They can make up anything by saying that. There are numerous false articles published by WSJ about Tesla also. I would take what they say here with a grain of salt. Zuck himself said the numbers in the media were widely exaggerated and he wasn't offering these crazy packages as reported.
"The New Orleans Saints have signed Taysom Hill to a record $40M contract"
I'm somewhere in the middle on this, with regards to the ROI... this isn't the kind of thing where you see immediate reflection on quarterly returns... it's the kind of thing where if you don't hedge some bets, you're likely to completely die out from a generational shift.
Facebook's product is eyeballs... they're being usurped on all sides between TikTok, X and BlueSky in terms of daily/regular users... They're competing with Google, X, MS, OpenAI and others in terms of AI interactions. While there's a lot of value in being the option for communication between friends and family, and the groups on FB don't have a great alternative, the entire market can shift greatly depending on AI research.
I look at some of the generated terrain/interaction demos (I think it was OpenAI's) and can't help but think that's a natural coupling to FB/Meta's investments in their VR headsets. They could potentially completely lose on a platform they largely pioneered. They could wind up like Blackberry if they aren't ready to adapt.
By contrast, Apple's lack of appropriate AI spending should be very concerning to any investors... Google's assistant is already quite a bit better than Siri and the gap is only getting wider. Apple is woefully under-invested, and the accountants running the ship don't even seem to realize it.
I think Apple is fine. When AI works without one-in-five hallucinations, then it can be added to their products. Showing up late with features that exist elsewhere, but polished with Apple presentation, is the way.
I don’t think that’s the point. Yes, Siri is crap, but Apple is already working on integrating LLMs at the OS level and those are shipping soon. It’s a quick fix to catch up in the AI game, but considering their track record, they’re likely to eventually retire third party partnerships and vertically integrate with their own models in the future. The incentive is there—doing so will only boost their stock price.
In general I don't think Google or Apple need AI.
In practice, though, their platform is closed to any assistant other than theirs, so they have to come up with a competent service (basically Ben Thompson's "strategy tax" playing out in full)
That question will be moot the day Apple allows other companies to ingest everything that's happening on the device and operate the whole device in reaction to the user's requests, and some company actually does a decent job at it.
Today Google is doing a decent job and Apple isn't.
You're right, they don't need AI. I finally stopped using Google search after they added the AI summary and didn't add a way to turn it off. I'm just as bothered by Apple's lack of AI as I am by their lack of a touch screen on MacBooks. I use AI when I need AI.
True. I mean, how long did it take us to get a right click button.
oh absolutely. They have had support for aftermarket mice for a while. Their track pads have supported "right click" for a long time too.
Then again, they've always been way better at making track pads than mice. They have probably the best track pad in the business, and also the Magic Mouse, which everyone hates.
or maybe Apple realizes the whole thing will crash in 18 months and is waiting for the fallout
The AI technology might be nice imo, but it's nowhere near worth the amount of money being spent. It's dumpster-fire amounts of money, and the weirdness of just everything being AI wrapper slop is so.. off-putting.
Things can be good and still be a bubble, just as the internet was cool but the dot-com bubble still existed.
Things become a bubble when they stop making economic sense.
AI ticks this box.
> they're being usurped on all sides
They did it to themselves. Facebook is not the same site I originally joined. People were allowed to people. Now I have to worry about the AI banning me.
I deleted my Facebook account 10 years ago, and I've been off Instagram for half a year. I recently tried to create a new Facebook account so that I could create a Meta Business account to use the WhatsApp API for my business. Insta-ban with no explanation. No recourse.
You don't have to use the WhatsApp API for business if all you want is simple automation.
Beeper, for example, can theoretically do the same; you might have to trust their cloud, but they are offering local options too.
In the meantime, you can use what Beeper uses under the hood, https://github.com/tulir/whatsmeow, for automation.
I used this for some time and didn't seem to get banned, but do be careful. Maybe use it on a side SIM. I am not sure, but just trying to help.
> They're competing with Google, X, MS, OpenAI and others in terms of AI interactions
Am I the only one who finds the attempts to jam AI interactions into Meta's products useless, and that they only detract from the product? Like, there'll be comedy posts, and then there are suggested 'Ask Meta AI' prompts asking earnest questions about things the comedy mentions - not only irrelevant, but I guess it's kind of funny how random and stupid the questions are. The 'Comment summaries' are counter-productive because I want to have a chuckle reading what people posted; I literally don't care to have it summarised when I can just skim a few in seconds - literally useless. It's the same with Gemini summaries on YouTube - I feel they actually detract from the experience of watching the videos, so I actively avoid them.
On what Apple is doing - I mean, literally nothing Apple Intelligence offers excites me, but at the same time nothing anybody else is doing with LLMs really does either... And I'm highly technical; general people are not actually that interested, apart from students getting LLMs to write their homework for them...
It's all well and good to be excited about LLMs, but plenty of these companies' customers just... aren't... If anything, Apple is playing the smart move here - let others spend (and lose) billions training the models without making any real ROI, and license the best ones for whatever turns out to actually have commercial appeal once the dust settles and the models are totally commodified...
I was thinking about this.. if you look at the scene generation and interaction demos (I think from OpenAI), it's a pretty natural fit for their VR efforts. Not that I'm sold on VR social networks, but there's definitely room for VR/AR enhancements... and even then, AI has a lot of opportunities beyond just LLM integration into FB/Groups.
Aside: Groups is about the only halfway decent feature in FB, and they seem to be trying to make it worse. The old chat integration was great; then they removed it, and now you get these invasive Messenger rooms instead.
"technology too expensive to be offered at a profit (yet)" != hype
The MIT report that has everyone talking was about 95% of companies not seeing return on investment in using AI, and that is with the VC subsidised pricing. If it gets more expensive that math only gets worse.
I can't predict the future, but one possibility is that AI will not be a general purpose replacement for human effort like some hope, but rather a more expensive than expected tool for a subset of use cases. I think it will be an enduring technology, but how it actually plays out in the economy is not yet clear.
writing spam emails, that's what LLMs' enduring market niche will end up being.
without structural comprehension, babbling flows of verbiage are of little use in automation.
CAD is basically the opposite of such approaches, as structural specifications extend through manufacturing phases out to QA.
Deja vu: Zuck already scaled down their AI research team a few years ago, as I remember, because they didn't deliver any tangible results. Meta culture likes improving metrics like retention/engagement, and promotes managers if they show some improvement in their metrics. No one generally cares about long shots, and a research team is always a long shot.
Monthly active users:
From DemandSage:
Facebook - 12 billion!?
TikTok - 1.59 billion
X - 611 million
Bsky - 38 million
That's according to DemandSage ... I'm not sure I can trust the numbers; FB jumped up from around 3b last year, which again I don't trust. 12b is more than the global population, so it's got to be all bots. And even the 3b number is hard to believe (at close to half the global population) - no idea how much of the population of Earth even has internet access.

From Grok:
Facebook - 3.1 billion
TikTok - 1.5-2 billion
X - 650 million
Bsky - 4.1 million
Looks like I'm definitely in a bubble... I tend to interact 1:1 as much on X as on Facebook, which is mostly friends/family and limited discussions in groups. A lot of what I see on feeds is copypasta from TikTok though. That said, I have a couple of friends who are diehard on Telegram.
Pardon me, but I am just a little surprised as to how Telegram came up in the last paragraph? We were talking about social media, and Telegram is a messaging app...
I'm far from being a fan of the company, but I think this article is substantially overstating the extent of the "freeze" just to drum up news. It sounds like what's actually happening is a re-org [1] - a consolidation of all the AI groups under the new Superintelligence umbrella, similar to Google merging Brain and DeepMind, with an emphasis on finding the existing AI staff roles within the new org.
From Meta itself: “All that’s happening here is some basic organisational planning: creating a solid structure for our new superintelligence efforts after bringing people on board and undertaking yearly budgeting and planning exercises.”
[1] https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4?s...
It's a bit frustrating that most don't read TFA and instead vent their AI angst at the first opportunity they get.
Since "AI bubble" has become part of the discourse, people are watching for any signs of trouble. Up to this point, we have seen lots of AI hype. Now, more than in the past, we are going to see extra attention paid to "man bites dog" stories about how AI investment is going to collapse.
yes, because meta has no incentive to act like there's no bubble
So it's not clickbait, even though the headline does not reflect the contents of the article, because you believe the headline is plausible?
I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.
Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.
So newer chips will not be exponentially better but will offer more incremental improvements, so unless the price of electricity comes down exponentially, we might never see AGI at a price point that's cheaper than hiring a human.
Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
The reason the internet, smartphones, and computers have seen exponential growth since the 90s is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
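For what it's worth, the 486-to-modern-CPU jump above can be put in compound-growth terms. This is a back-of-the-envelope sketch using only the commenter's figures (50 MHz single-thread then, 16 threads at 5 GHz now, roughly 30 years apart); the clock-times-threads proxy deliberately ignores IPC gains, so the real growth rate is higher:

```python
# Crude throughput proxy: clock speed * hardware threads (ignores IPC gains).
old = 50e6 * 1          # 50 MHz 486, single thread
new = 5e9 * 16          # 5 GHz, 8 cores / 16 threads
years = 30

growth = new / old                    # total multiple over the period
cagr = growth ** (1 / years) - 1      # implied compound annual growth rate
print(f"{growth:.0f}x total, ~{cagr:.0%}/year")
```

Even by this crude measure that's about a 1600x improvement, or roughly 28% compounded per year - the kind of curve the commenter doubts will repeat.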
We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.
I think algorithms is a unique limit because it changes how much data or compute you need. For instance, we probably have the algorithms we need to brute force solving more problems today, but they require infeasible compute or data. We can almost certainly train a new 10T parameter mixture of experts that continues to make progress in benchmarks, but it will cost so much to train and be completely undeployable with today’s chips, data, and algorithms.
So I think the truth is likely we are both compute limited and we need better algorithms.
There are a few "hints" that suggest to me algorithms will bear a lot more fruit than compute (in terms of flops):
1) there already exist very efficient algorithms for rigorous problems that LLMs perform terribly at! 2) learning is too slow and is largely offline 3) "llms aren't world models"
We are limited by both compute and available training data.
If all we wanted was to train bigger and bigger models we have more than enough compute to last us for years.
Where we lack compute is in scaling AI to consumers. Current models take too much power and specialized hardware to be profitable. If AI improved your productivity by 20-30% but cost you even 10% of your monthly salary, no one would use it. I have used up $10 worth of credits in an hour with Claude Code multiple times. Assuming I use it continuously for 8 hours every day over 24 working days, that's 10 * 8 * 24 = $1920 a month. So it's not that far off the current cost of running the models. If the size of the models scales faster than the speed of the inference hardware, the problem is only going to get worse.
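The arithmetic above can be sketched as a simple linear extrapolation (the $10/hour, 8 hours/day, and 24 days/month figures are the commenter's own assumptions, not measured pricing):

```python
def monthly_cost(dollars_per_hour: float, hours_per_day: float,
                 days_per_month: float) -> float:
    """Linear extrapolation of hourly API spend to a monthly bill."""
    return dollars_per_hour * hours_per_day * days_per_month

# The comment's scenario: $10/hour of credits, 8 hours/day, 24 working days.
print(monthly_cost(10, 8, 24))  # 1920.0
```

That $1920/month is the point of comparison with "10% of your monthly salary" - for many salaries the two figures are uncomfortably close.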
I too believe that we will eventually discover an algorithm that gives us AGI. The problem is that we cannot will a breakthrough. We can make one more likely by investing more and more into AI but breakthroughs and research in general by their nature are unpredictable.
I think investing in new individual ideas is very important and gives us lot of good returns. Investing in a field in general hoping to see a breakthrough is a fool's errand in my opinion.
We are a few months into our $bigco AI push and we are already getting token constrained. I believe we really will need massive datacenter rollouts in order to get to the ubiquity everyone says will happen.
Mission accomplished: who'd tell disrupting your competition poaching their talent and erasing value (giving it away for free) would make people realize there is no long term value in the core technology itself.
Don't get me wrong, we are moving to commoditization, as any new tech it'd be transparent to our lifestyle and a lot of money will be done as an industry, but it'd be hard to compete as a core business competence w/o cheating (and by cheating I mean your FANG company already has a competitive advantage)
Whoa that's actually a brilliant strategy: accelerate the hype first by offering 100M comp packages, then stop hiring and strategically drop a few "yeah bubble's gonna pop soon" rumours. Great way to fuck with your competition, especially if you're meta and you're not in the lead yourself
But if Meta believe it's a bubble then why not let the competition continue to waste their money pumping it up? How does popping it early benefit Meta?
> We are truly only investing more and more into Meta Superintelligence Labs as a company. Any reporting to the contrary of that is clearly mistaken.
it's not really a fair characterisation, because he persisted for nearly 10 years dumping enormous investment into the VR business, and still is to this day. Furthermore, Meta's AI labs predated all the hype and the company was investing and highly respected in the area way before it was "cool".
If anything, I think the panic at this stage is arising from the sense of having his lunch stolen after having invested so much and for so long.
Quality over quantity.
Apparently it's better to pay $100 million each to 10 people than $1 million each to 1000 people.
1000 people can't get a woman to have a child faster than 1 person.
So it depends on the type of problem you're trying to solve.
If you're trying to build a bunch of Wendy's locations, it's clearly better to have more construction workers.
It's less clear that if you're trying to build SGI that you're better off with 1000 people than 10.
It might be! But it might not be, too. Who knows for certain til post-ex?
> 1000 people can't get a woman to have a child faster than 1 person.
I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.
Sure, if you want one child. But that's not what business is often doing, now is it?
The target is never "one child". The target is "10 children", or "100 children" or "1000 children".
You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.
IOW, this is a facile comparison not worthy of consideration.[1]
> So it depends on the type of problem you're trying to solve.
This[1] is not the type of problem where the analogy applies.
=====================================
[1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
>> Sure, if you want one child. But that's not what business is often doing, now is it?
You're designing one thing. You're building one plant. Yes, you'll make and sell millions of widgets in the end, but the system that produces them? Just one.
Engineering teams do become less efficient above some size.
The analogy is a good analogy. It is used to demonstrate that a larger workforce doesn’t always automatically give you better results, and that there is a set of problems that are clear to identify a priori where that applies. For some problems, quality is more important than quantity, and you structure your org respectively. See sports teams, for example.
In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
Ironically, rather than being facile the point of the comparison is to explain https://en.wikipedia.org/wiki/Amdahl%27s_law to people who are clearly not familiar with it.
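For readers unfamiliar with it, Amdahl's law bounds the speedup from adding workers when part of the job is inherently serial. A minimal sketch (the 90%-parallel figure is illustrative, not taken from the thread):

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of the work can be split evenly across `workers`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even if 90% of a project parallelizes perfectly, 1000 workers buy
# less than a 10x speedup -- the serial 10% dominates.
print(round(amdahl_speedup(0.9, 1000), 2))  # 9.91
```

Which is the gestation analogy in formula form: past a point, headcount stops helping and only the quality of work on the serial path matters.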
Ah the new strategy - hire one rockstar woman who can gestate 1000 babies per year for $100 mil!
In re Wendy’s, it depends on whether you have a standard plan for building the Wendy’s and know what skills you need to hire for. If you just hire 10,000 random construction workers and send them out with instructions to “build 100 Wendy’s”, you are not going to succeed.
The reason they paid $100m for “one person” is because it was someone people liked to work for, which is why this article is a big deal.
It’s me. I’ve figured it out. Who’s got the offer letter so I can start?
What I don't get is that they are gunning for the people that brought us the innovations we are working with right now. How often does it happen that someone really strikes gold a second time in research at such a high level? It's not a sport.
You're falling victim to the Gambler's Fallacy - it's like saying "the coin just flipped heads, so I choose tails, it's unlikely this coin flips heads twice in a row".
Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.
Even if they do not strike gold the second time, there can still be a multitude of reasons:
1. The innovators will know a lot about the details, limitations and potential improvements concerning the thing they invented.
2. Having a big name in your research team will attract other people to work with you.
3. I assume the people who discovered something still have a higher chance to discover something big compared to "average" researchers.
4. That person will not be hired by your competition.
5. Having a lot of very publicly extremely highly paid people will make people assume anyone working on AI there is highly paid, if not quite as extreme. What most people who make a lot of money spend it on is wealth signalling, and now they can get a form of that without the company having to pay them as much.
Exactly this - people that understood the field well enough to add new knowledge to it has to be a pretty decent signal for a research-level engineer.
At the research level it’s not just about being smart enough, or being a good programmer, or even completely understanding the field - it’s also about having an intuitive understanding of the field where you can self pursue research directions that are novel enough and yield results. Hard to prove that without having done it before.
Reworded from [1]: Earlier this year Meta tried to acquire Safe Superintelligence. Sutskever rebuffed Meta’s efforts, as well as the company’s attempt to hire him
[1] https://www.cnbc.com/2025/06/19/meta-tried-to-buy-safe-super...
AlexNet, AlphaGo, ChatGPT. I would argue he did strike gold a few times.
Right, what about him? Didn't he start his own company and raise a billion a while ago? I haven't heard about them since.
They didn't just invest; they made it core to their identity with the name change, and it fell so, so flat because the claims were nonsense hype for crypto pumps. We already had stuff like VRChat (still going pretty strong) - it just wasn't corporate and sanitized for sale and mass monetization.
They're still on it though. The new headset prototypes with high FOV sound amazing, and they are iterating on many designs.
They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.
I haven't seen any evidence that Meta is backtracking on VR. They've got more than enough money to focus on both; in fact, they probably need to. Gen AI is a critical complement to the metaverse. Without gen AI, metaverse content is too time-consuming to make.
They spent billions on GPUs and were well positioned to enter the LLM wars
I can see the value in actual AI. But it seems like in many instances how it is being utilized or applied is more related to terrible search functionality. Even for the web, it seems like we’re using AI to provide more refined search results, rather than just fixing search capabilities.
Maybe it’s just easier to throw ‘AI’ (heavy compute of data) at a search problem, rather than addressing the crux of the problem…people not being provided with the tools to query information. And maybe that’s the answer but it seems like an expensive solution.
That said, I’m not an expert and could be completely off base.
LLMs are not the way to AGI and it's becoming clearer to even the most fanatic evangelists. It's not without reason GPT-5 was only a minor incremental update. I am convinced we have reached peak LLM.
There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think maybe there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator, but even then, that translation layer will be inherently unreliable due to the nature of LLMs.
It'll be somewhere in between. A lot of capital will be burned, quite a few marginal jobs will be replaced, and AI will run into the wall of not enough new/good training material because all the future creators will be spoiled by using AI.
I've seen a few people convince themselves they were building AGI trying to do that, though it looked more like the psychotic ramblings of someone entering a manic episode committed to github. And so far none of their pet projects have taken over the world yet.
It actually kind of reminds me of all those people who snap, thinking they've solved P=NP, and start spamming their "proofs" everywhere.
>> Mr Zuckerberg has said he wants to develop a “personal superintelligence” that acts as a permanent superhuman assistant and lives in smart glasses.
Yann LeCun has spoken about this so much that I thought it was his idea.
In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?
“ In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?”
People probably said the same thing about "what if someone doesn't want to carry a phone with them everywhere". If it's useful enough, the culture will change (which, I unequivocally think it won't be, but I digress)
Very few will not want to wear the glasses.
https://memory-alpha.fandom.com/wiki/The_Game_(episode)
Last night I had a technical conversation with ChatGPT that was so full of wild hallucinations at every step, it left me wondering if the main draw of "AI" is better thought of as entertainment. And whether using it for even just rough discovery actually serves as a black hole for the motivation to get things done.
I'm actually a little shocked that AI hasn't been integrated into games more deeply at this point.
Between Whisper and lightweight tuned models, it wouldn't be super hard to have onboard AI models that you can interact with in much more meaningful ways than we have traditionally interacted with NPCs.
When I meet an NPC castle guard, it would be awesome if they had an LLM behind it that was instructed to not allow me to pass unless I mention my Norse heritage or whatever.
I imagine tuning this to remain fun would be a real challenge.
“ if the main draw of "AI" is better thought of as entertainment.”
Crazy but true - though that would somewhat follow most tech advancements, right?
I don't want to come across as a shill, but I think superintelligence is being used here because the end result is murky and ill-defined at this point.
I think the concept is: "a tool with the utility of a 'personal assistant', so much so that you wouldn't have to hire one of those." (Not so much that the "superintelligence" will mimic a human personal assistant.)
Obviously this is just a guess though
Metaverse (especially) or AI might make more sense if you could actually see your friends' posts (and vice versa), if the feed made sense (which it hasn't for years now), and if you could message people you aren't friends with yet without it getting lost in some 'other' folder you won't discover until 3 years from now (Gmail has a spam-folder problem... but the difference is you can see you have messages there and can at least check it for yourself)
What I'm trying to say is: make your product the barest minimum of usable first, maybe? (Also, don't act like, as Jason Calacanis has called it, a marauder, copying everything from everyone all the time. What he's done with Snapchat is absolutely tasteless, and in the case of spying on them - which he's done - very likely criminal.)
I feel like the giant 100 mil /1 billion salaries could have been better spent just hiring a ton of math, computer science, data science graduates and just forming an an AI skunkworks out of them.
Also throw in a ton of graduates from other fields: sciences, arts, psychology, biology, law, finance, or whatever else you can imagine, to help create data and red-team their fields.
Hire people with creative writing and musical skills to give it more samples of creative writing, songwriting, summarization, etc.
And people who are good at teaching and at breaking complex problems into easier-to-understand chunks for different age brackets.
Their userbase is big but it's not the same as ChatGPT's; they won't get the same tasks to learn from users that ChatGPT does.
Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.
A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "it's all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.
This seems to just be a rewrite of https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4. Can we replace the link?
Selective Freeze? Frank Chu seems to have broken through the ice if so: https://www.macrumors.com/2025/08/22/apple-loses-another-key...
Maybe this time investors will realize how incompetent these leaders are? How do you go from 250mil contracts to freezes in under a month?
I really don't understand this massive flip flopping.
Do I have this timeline correct?
* January, announce massive $65B AI spend
* June, buy Scale AI for ~$15B, massive AI hiring spree, reportedly paying millions per year for low-level AI devs
* July, announce some of the biggest data centers ever that will cost billions and use all of Ohio's water (hyperbolic)
* Aug, freeze, it's a bubble!
Someone please tell me I've got it all wrong.
This looks like the Metaverse all over again!
The bubble narrative is coming from the outside. More likely is that the /acquisition/ of Scale has led to an abundance of talent that is being underutilised. If you give managers the option to hire, they will. Freezing hiring while reorganising is a sane strategy regardless of how well you are or are not doing.
This. TFA says this explicitly: Alexandr Wang, the former Scale CEO, is to approve any new hires.
They're taking stock of internal staff + new acquisitions and how to rationalize before further steps.
Now, I think AI investments are still a bubble, but that's not why FB is freezing hiring.
Maybe they are poisoning the well to slow their competitors? Get the funding you need secured for the data centers and the hiring, hire everyone you need and then put out signals that there is another AI winter.
Zuckerberg's leadership style feels very reactionary and arrogant, defined by flailing around for the new fad and new hyped thing, scrapping everything when the current obsession doesn't work out, and then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.
Remember when he pivoted the entire company to the metaverse and it was all about avatars with no legs? And how proudly they trumpeted that the avatars were "now with legs!!", though they still looked so pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses, and he was spamming those goofy cringe glasses no one wants in all his Instagram posts; seriously, if you check out his Insta, he wears them constantly.
Then this spring/summer it was all about AI, stealing rockstar AI coders from competitors, and pouring endless money into flirty chatbots for lonely seniors. Now we have some bad press from that and are realizing it isn't the panacea we thought it was, so we're in the phase where this languishes; in about 6 months we'll abandon it and roll out a new obsession that will be endlessly hyped.
Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta. Wish they would just focus on increasing user functionality and enjoyment and trying to resolve the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.
> Zuckerberg's leadership style feels very reactionary and arrogant, defined by flailing around for the new fad and new hyped thing, scrapping everything when the current obsession doesn't work out, and then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.
Maybe he's like this because the first few times he tried it, it worked.
Insta threatening the empire? Buy Insta, no one really complains.
Snapchat threatening Insta? Knock off their feature and put it in Insta. Snap almost died.
The first couple times Zuckerberg threw elbows he got what he wanted and no one stopped him. That probably influenced his current mindset, maybe he thinks he's God and all tech industry trends revolve around his company.
IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill. There’s probably a proper term for this.
I think that meta is bad for the world and that zuck has made a lot of huge mistakes but calling him a one hit wonder doesn't sit right with me.
Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.
The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Zuck hired Sheryl Sandberg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies struggled to convert large user bases into dollars.
This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.
No one else is adding the context of where things were at the time in tech...
> The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Facebook's API was incredibly open and accessible at the time and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast and onboarding users so easily that it was driving more content to news feeds than built-in tools. Buying Instagram was a defensive move, especially since the API became quite closed-off since then.
Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought a mobile VPN service used predominantly by younger smartphone users, Onavo(?), and realized the amount of traffic WhatsApp generated by analyzing the logs. Given the insight and growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.
Buying competitors is not insane or a weird business practice. He was probably advised to do so by the competent people under him
And what did he do to keep G+ from becoming a valid competitor? It killed itself. I signed up but there was no network effect and it kind of sucked. Google had a way of shutting down all their product attempts too
I hate pretty much everything about Facebook but Zuckerberg has been wildly successful as CEO of a publicly traded company. The market clearly has confidence in his leadership ability, he effectively has had sole executive control of Facebook since it started and it's done very well for like 20 years now.
>has been wildly successful as CEO of a publicly traded company.
That has a lot to do with the fact that it's a business centric company. His acumen has been in user growth, monetization of ads, acquisitions and so on. He's very similar to Altman.
The problems start when you try to venture into hard technological topics, like the Metaverse fiasco, where you have to have a sober and engineering oriented understanding of the practical limits of technology, like Carmack who left Meta pretty frustrated. You can't just bullshit infinitely when the tech and not the sales matter.
Contrast it with Gates who had a serious programming background, he never promised even a fraction of the cringe worthy stuff you hear from some CEOs nowadays because he would have known it's nonsense. Or take Apple, infinitely more sane on the AI topic because it isn't just a "more users, more growth, stonks go up" company.
He's really not. Facebook is an extremely well run organization. There's a lot to dislike about working there, and there's a lot to dislike about what they do, but you cannot deny they have been unbelievably successful at it. He really is good at his job, and part of that has been making bold bets and aggressively cutting unsuccessful bets.
Facebook can be well run without that being due to Zuck.
There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).
A basketball team can be great even if their coach sucks.
You can't attribute everything to the person at the top.
WhatsApp is certainly worth less today than what they paid for it plus the extra funding it has required over time. Let alone producing anything close to ROI. Has lost them more money than the metaverse stuff.
Insta was a huge hit for sure, but since then Meta's capital allocation has been a disaster, including a lot of badly timed buybacks.
> IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill.
It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.
Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether this was because of his skills or because he got lucky), but turning his good idea(s) into a successful business was done by other people (led by Sheryl Sandberg).
And isn’t the job of a good CEO to put the right people in the right seats? So if he found a superstar COO that took the company into the stratosphere and made them all gazillionaires…
Wouldn’t that indicate, at least a little bit, a great management move by Zuck?
And he didn't even come up with the idea, he stole it all. And then he stole the work from the people he started it with...
You're probably going to get comments like "Social networking existed before. You can't steal it." Well, on top of derailing someone else's execution of said non-stolen idea (or something), which makes you a jerk, in the case of those he 'stole'/stole from: for starters maybe it was existing code (I don't know if that was ever proven), but maybe it was also the Winklevosses' idea of using .edu email addresses, and possibly other concepts.
Do I think he stole it? Dunno. (Though Aaron Greenspan did log his houseSYSTEM server requests, which seems pretty damning) But given what he's done since (Whatsapp, copying every Snapchat feature)? I'd say the likelihood is non-zero
The term you're looking for is "billionaire". The amount of serendipity in these guys' lives is truly baffling, and only becomes more apparent the more you dig. It makes sense when you realize their fame is all survivorship bias. After all, there must be someone at the tail end of the bell curve.
It is at least a little suspicious that one week he's hiring like crazy, then next week, right after Sam Altman states that we are in an AI bubble, Zuckerberg turns around and now fears the bubble.
Maybe he's just gambling that Altman is right, saving his money for now so he can pick up AI researchers and developers at a massive discount next year. Meta doesn't have much of a presence in the space right now, and they have other businesses, so waiting a year or two might not matter.
Ehh. You don’t get FB to where it is by being incompetent. Maybe he is not the right leader for today. Maybe. But you have to be right way, way more often than not to create a FB and get it to where it is. To operate from where it started to where it is just isn’t an accident or Dunning-Kruger.
Maybe this time the top posters on HN should stop criticizing one of the top performing founder CEOs of the last 20 years who built an insane business, made many calls that were called stupid at the time (WhatsApp), and many that were actually stupid decisions.
Like do people here really think making some bad decisions is incompetence?
If you do, your perfectionism is probably something you need to think about.
Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years, and then tell us how you would run a trillion-dollar company. Oh, and please revisit your comment on those timeframes.
I think many people just really dislike Zuckerberg as a human being and Meta as a company. Social media has seriously damaged society in many ways.
It’s not perfectionism, it’s a desire to dunk on what you don’t like whenever the opportunity arises.
Sure, but society is full of fools. Plenty of people say social media is the primary way they get news. Social media platforms are super spreaders of lies and propaganda.
I don't think it's about perfect predictions. It's more about going all in on Metaverse and then on AI and backtracking on both. As a manager you need to use your resources wisely, even if they're as big as what Meta has at its disposal.
The other thing: the Peter principle says people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go; maybe no one is really ready to operate at that level? It seems both he and Elon have made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?
> Like do people here really think making some bad decisions is incompetence?
> If you do, your perfectionism is probably something you need to think about.
> Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.
It's the effect of believing (and being sold) meritocracy, if you are making literal billions of dollars for your work then some will think it should be spotless.
Not saying I think that way, but it's probably what a lot of people consider: being paid that much signals that your work should be absolutely exceptional, and big failures just show they are also normal, flawed people, so perhaps they shouldn't be worth a million times more than other normal, flawed people.
He’s not “being paid that much”
He’s earned almost all his money through owning part of a company that millions of shareholders think is worth trillions, and does in fact generate a lot of profits.
A committee didn’t decide Zuckerberg is paid $30bn.
And id say his work is pretty exceptional. If it wasn’t then his company wouldn’t be growing. And he’d probably be pressured into resigning as CEO
Yes, I know all of that; thanks for stating the obvious.
Being rewarded for creating a privacy destroying advertising empire, exceptional work. Imagine a world where the incentives were a bit different, we might have seen other kind of work rewarded instead of social media and ads.
Well, that's the incompetent piece. Setting out to write giant, historically large employment contracts without a plan is not something competent people do. And seemingly it's not that they overextended a bit, either, since reports claimed the window to accept the contracts was extremely limited; under 30 minutes in some cases.
Yes.
Perhaps it was this: Lets hit the market fast, scoop up all the talent we can before anybody can react, then stop.
I don't think there is anybody that would expect they would 'continue' offering 250million packages. They would need to stop eventually. They just did it fast, all at once, and now stopped.
> How do you go from 250mil contracts to freezes in under a month?
Easy, you finished building up a team. You can only have so many cooks.
Some people actually accepted the contracts before the uno reverse llamabot could activate and block them
I'm still waiting for a single proof that there was any contract in the hundreds of millions that was signed.
Because you want the ability to low-ball prospective candidates sooner rather than later.
So had he also read the recent article on Sam Altman saying it was a bubble?
The problem with sentiment driven market phenomena is they lack fundamental support. When they crash, they can really crash hard. And as much as I see real value in the progress in AI, 95% of the investment I see happening is all based on sentiment right now. Actually deploying AI into real operational scenarios to unlock the value everyone is talking about is going to take many years and it will look like a sink hole of cost well before that. Buckle up.
I really do wonder if any of those rockstar $100M++ hires managed to get a 9-figure sign-on bonus, or if the majority have years-long performance clauses.
Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.