godelski 2 days ago

It's kinda funny, Meta has long had some of the best in the field, but left them untapped. I really think if they just took a step back and stopped being so metric-focused and let their people freely explore, they'd be winning the AI race. But with this new team, I feel like Meta mostly hired the people who are really good at gaming the system. The people that care more about the money than the research.

A bit of this is true at every major lab. There's tons of untapped potential. But these organizations are very risk averse. I mean, why not continue with the strategy that got us to this point in the first place? Labs used to hire researchers and give them a lot of free rein. But those times ended and AI progress also slowed down. Maybe if you want to get ahead you gotta stop thinking like everyone else.

Well Meta... you can "hold me hostage" for a lot cheaper than those guys. I'm sure this is true for hundreds of passionate ML researchers. I'd take a huge pay cut to have autonomy and resources. I know for a fact there are many working at Meta right now who would do the same. So maybe if you're going to throw money at the problem, diversify a bit and look back at what made SV what it is today and what made AI take leaps forward.

hamasho 2 days ago

My theory is that as more people compete, the top candidates become those who are best at gaming the system rather than actually being the best. Someone has probably studied this. My only evidence is job applications for GAFAM and Tinder tho.

  • crystal_revenge 2 days ago

    I've spent most of my career working, chatting and hanging out with what might be best described as "passionate weirdos" in various quantitative areas of research. I say "weirdos" because they're people driven by an obsession with a topic, but don't always fit the mold by having the ideal combination of background, credentials and personality to land them on a big tech company research team.

    The other day I was spending some time with a researcher from DeepMind and I was surprised to find that while they were sharp and curious to an extent, nearly every ounce of energy they expended on research was strategic. They didn't write about research they were fascinated by; they wrote and researched on topics they strategically felt had the highest probability of getting into a major conference in a short period of time to earn them a promotion. While I was a bit disappointed, I certainly didn't judge them because they are just playing the game. This person probably earns more than many rooms of smart, passionate people I've been in, and that money isn't for smarts alone; it's for appealing to the interests of people with the money.

    You can see this very clearly by comparing the work being done in the LLM space to that being done in the Image/Video diffusion model space. There's much more money in LLMs right now, and the field is flooded with papers on any random topic. If you dive in, most of them are not reproducible or make very questionable conclusions based on the data they present, but that's not of very much concern so long as the paper can be added to a CV.

    In the Stable Diffusion world it's mostly people driven by personal interest (usually very non-commercial personal interests), and you see tons of innovation in that field but almost no papers. In fact, if you really want to understand a lot of the most novel work coming out of the image generation world, you often need to dig into PRs made by anonymous users with anime-themed profile pics.

    The bummer of course is that there are very hard limits on what any researcher can do with a home GPU training setup. It does lead to creative solutions to problems, but I can't help but wonder what the world would look like if more of these people had even a fraction of the resources available exclusively to people playing the game.

    • kcexn 2 days ago

      This is such a nuanced problem. Like any creative endeavour, the most powerful and significant research is driven by an innate joy of learning, creating, and sharing ideas with others. How far the research can be taken is then shaped by resource constraints. The more money you throw at the researchers, the more results they can get. But there seems to be a diminishing-returns effect as individual contributors become less able to produce results independently. The research narrative also gets distorted by who has the most money and influence, and not always for the better (as recent events in Alzheimer's research have shown).

      The problem is once people's livelihoods depend on their research output rather than the research process, the whole research process becomes steadily distorted to optimise for being able to reliably produce outputs.

      Anyone who has invested a great deal of time and effort into solving a hard problem knows that the 'eureka' moment is not really something that you can force. So people end up spending less time working on problems that would contribute to 'breakthroughs' and more time working on problems that will publish.

    • RataNova 2 days ago

      The tragedy is exactly what you said: all that energy, creativity, and deep domain obsession locked out of impact because it’s not institutionally “strategic.”

    • smokel 2 days ago

      > I certainly didn't judge them because they are just playing the game.

      Please do judge them for being parasitical. They might seem successful by certain measures, like the amount of money they make, but I for one simply dislike it when people only think about themselves.

      As a society, we should be more cautious about narcissism and similar behaviors. Also, in the long run, this kind of behaviour makes them an annoying person at parties.

      • danielmarkbruce a day ago

        There is an implication that passionate weirdos are good by nature. You either add value in the world or you don't. A passionate, strange actor or musician who keeps trying to "make it" but isn't good enough to be entertaining is a parasite and/or a narcissist. A plumber who does the job purely for money is a value add (assuming they aren't ripping people off), and they are playing the game too: the money-for-work game.

      • idiotsecant 2 days ago

        This take is simply wrong in a way that I would normally just sigh and move on from, but it's such a privileged, HN-typical POV that I feel I need to address it. If a plumber did plumbing specifically because someone needed it and would pay for it, would you call them a narcissist? If a gardener built a garden the way their customer wanted, would you call them a narcissist? Most of the world doesn't get to float around in a sea of VC money doing whatever feels good. They find a need, address it, and get to live another day. Productively addressing what other people need and making money from it isn't narcissism; it's productivity.

        • lkey 2 days ago

          You are comparing a skilled trade that commands ~100k in annual compensation to positions that have recently commanded 100 million dollars upon signing, with no immediate productivity required, since denying that talent to competitors is considered strategic.

          And you consider the person who expects eventual ethical behavior from people who have "won" capitalism (they never have to labour again) to be the privileged one.

      • bradleyjg 2 days ago

        > but I for one simply dislike it when people only think about themselves

        The key word there is only. Nothing in the post suggested only. You have one vignette about one facet of this guy's life.

        I really dislike the resurgence in Puritanism.

      • what-the-grump 2 days ago

        But this is in itself selfish, right?

        You dislike them because they don't benefit you indirectly by benefiting society at large.

        The incentive structure is wrong; the solution is to incentivize things that benefit society, not to judge those who exist within the current system while pretending altruism is somehow not part of the same game.

  • godelski 2 days ago

      > Someone has probably studied this
    
    There's even a name for it

    https://en.wikipedia.org/wiki/Goodhart%27s_law

    • ivanbelenky 2 days ago

      Thanks for sharing. I did not know this law existed and had a name. I know next to nothing here, but it appears that interpreting metrics for policies implicitly assumes the "shape" of the domain. E.g. in RL for games we see a bunch of outlier behavior from policies just gaming the signal.

      There seem to be two types (a toy sketch of the first one is below):

      - Specification failure: the signal is bad-ish and the behavior is completely broken --> policies reach locally optimal points that phenomenologically do not represent what was expected/desired --> signaling that the reward definition can be improved

      - Domain constraint failure: the signal is still good and the optimization is "legitimate", but you are prompted with the question "do I need to constrain my domain of solutions?"

        - finding a bug that reduces time to completion of a game in a speedrun setting becomes a new acceptable baseline, because there are no rules about finishing the game earlier

        - shooting amphetamines before a 100m run would probably minimize time, but other factors will make people consider disallowing such practices.
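
      To make the first failure mode concrete, here is a minimal toy sketch (my own made-up example, not from any paper): the proxy reward pays per visit to a respawning coin tile, so a policy can rack up reward forever without ever reaching the actual goal.

        # Toy sketch of specification failure: the proxy reward is gameable.
        # True goal: reach position 10. Proxy reward: +1 each time the agent
        # stands on the coin tile at position 3 (the coin respawns every visit).

        def run(policy, steps=100):
            pos, proxy_reward, reached_goal = 0, 0, False
            for _ in range(steps):
                pos += policy(pos)                 # policy returns -1 or +1
                if pos == 3:
                    proxy_reward += 1              # respawning coin
                if pos == 10:
                    reached_goal = True
                    break
            return proxy_reward, reached_goal

        honest = lambda pos: 1                      # always walk toward the goal
        gamer = lambda pos: 1 if pos < 3 else -1    # oscillate around the coin

        print(run(honest))  # (1, True): low proxy reward, goal reached
        print(run(gamer))   # (~49, False): high proxy reward, goal never reached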

      • Eisenstein 2 days ago

        I view Goodhart's law more as a lesson for why we can never achieve a goal by offering specific incentives if we are measuring success by the outcome of the incentives and not by the achievement of the goal.

        This is of course inevitable if the goal cannot be directly measured but is composed of many constantly moving variables such as education or public health.

        This doesn't mean we shouldn't bother having such goals, it just means we have to be diligent at pivoting the incentives when it becomes evident that secondary effects are being produced at the expense of the desired effect.

    • julienreszka 2 days ago

      It’s a false law tho. Collapses under scrutiny

      • NBJack 2 days ago

        If I hadn't seen it in action countless times, I would believe you. Changelists, line counts, documents made, collaborator counts, teams led, reference counts in peer-reviewed journals... the list goes on.

        You are welcome to prove me wrong though. You might even restore some faith in humanity, too!

      • godelski 2 days ago

        Sorry, remind me; how many cobras are there in India?

      • epwr 2 days ago

        Could you elaborate or link something here? I think about this pretty frequently, so would love to read something!

  • t_serpico 2 days ago

    But there is no way to know who is truly the 'best'. The people who position and market themselves to be viewed as the best are the only ones who even have a chance to be viewed as such. So if you're a great researcher but don't project yourself that way, no one will ever know you're a great researcher (except for the other great researchers, who aren't really invested in communicating how great you are). The system seems to incentivize people to optimize not only their output but also their image. This isn't a bad thing per se, but it is sort of antithetical to the whole shoulders-of-giants ethos of science.

    • kcexn 2 days ago

      The problem is that the best research is not a competitive process but a collaborative one. Positioning research output as a race or a competition is already problematic.

      • bwfan123 2 days ago

        Right. Also, the idea that there is a "best" researcher is already problematic. You could have 10 great people on a team, and it would be hard to rank them. Rating people in order of performance within a team is contradictory to the idea of building a great team, i.e., you could have 10 people all rated 10, which is really the goal when building a team.

  • bjornsing 2 days ago

    Yeah I think this is a general principle. Just look at the quality of US presidents over time, or generations of top physicists. I guess it’s just a numbers game: the number of genuinely interested people is relatively constant while the number of gamers grows with the compensation and perceived status of the activity. So when compensation and perceived status skyrockets the ratio between those numbers changes drastically.

    • godelski 2 days ago

      I think the number of genuinely interested people goes up. Maybe the percentage stays the same? But honestly, I think we kill passion for a lot of people. To be cliche, how many people lose the curiosity of a child? I think the cliche exists for a reason. It seems the capacity is in all of us, and at some point it even existed in all of us.

  • xvector 2 days ago

    I have seen absolutely incredible, best in the world type engineers, much smarter than myself, get fired from my FAANG because of the performance games.

    I persist because I'm fantastic at politics while being good enough to do my job. Feels weird man.

  • nathan_compton a day ago

    It is pretty simple - if the rewards are great enough and the objective difficult enough, at some point it becomes more efficient to kneecap your competitors rather than to try to outrun them.

    I genuinely think science would be better served if scientists got paid modest salaries to pursue their own research interests and all results became public domain. So many universities now fancy themselves startup factories, and startups are great for some things, no doubt, but I don't think pure research is always served by this strategy.

    • godelski 9 hours ago

        > if scientists got paid modest salaries to pursue their own research interests and all results became public domain
      
      I would make that deal in a heartbeat[0,1].

      We made a mistake by making academia a business. The point was that certain research creates the foundation for others to stand on; it is difficult to profit off those innovations, but by making them public, society at large profits by several orders of magnitude more than you ever could have on your own. Newton and Leibniz didn't become billionaires by inventing calculus, yet we wouldn't have the trillion-dollar businesses and half the technology we have today if they hadn't. You could say the same about Tim Berners-Lee's innovation.

      The idea that we have to justify our research and sell it as profitable is insane. It is as if we are unaware of our own past. Yeah, there are lots of failures in research; it's hard to push the bounds of human knowledge (surprise?). But there are hundreds, if not millions, of examples where an innovation results in so much value that the entire global revenue is not enough to measure it, because the entire global revenue stands on that very foundation. I'm not saying scientists need to be billionaires, but it's fucking ridiculous that we have to fight so hard to justify buying a fucking laptop. It is beyond absurd.

      [0] https://news.ycombinator.com/item?id=45422828

      [1] https://news.ycombinator.com/item?id=43959309

  • bwfan123 2 days ago

    I would categorize people into 2 broad extremes. 1) those that care two hoots about what others or the system expects of them and in that sense are authentic and 2) those that only care about what others or the system expects of them, and in that sense are not authentic. There is a spectrum in there.

  • b00ty4breakfast a day ago

    that's what happens at the top of most competitive domains. Just take a look at pro sports; guys are looking for millimeters to shave off, and they turn to "playing the game" rather than merely improving athletic performance. Watch a football game (either kind) and a not-small portion of the action is guys trying to draw penalties or exploit the rules to get an edge.

  • RataNova 2 days ago

    Anytime a system gets hyper-competitive and the stakes are high, it starts selecting for people who are good at playing the system rather than just excelling at the underlying skill

  • rightbyte 2 days ago

    This is an interesting theory. I think there is something to it. It is really hard to do good in a competitive environment. Very constrained.

contrarian1234 2 days ago

> Labs used to hire researchers and give them a lot of free rein.

I can't think of it ever really paying off. Bell Labs is the best example: amazing research that was unrelated to the core business of the parent company. Microsoft Research is another great one. Lots of interesting research that .. got MS some nerd points? But it has materialized into very, very few actual products and revenue streams. Advancing AI research doesn't help Meta build any moats or revenue streams. It just progresses our collective knowledge.

On the "human progress" scale it's fantastic to put lots of smart people in a room and let them do their thing. But from a business perspective it seems to almost never pay off. Waiting on the irrational charity of business executives is probably not the best way to structure things.

I'd tell them to go become academics.. but all the academics I know are just busy herding their students and attending meetings

  • Gigachad 2 days ago

    Perhaps these companies just end up with so much money that they can't possibly find ways to spend all of it rationally for purely product driven work and just end up funding projects with no clear business case.

    • trenchpilgrim 2 days ago

      Or they hire researchers specifically so a competitor or upstart can't hire them and put them to work on something that disrupts their cash cow.

  • zipy124 14 hours ago

    W. L. Gore (of Gore-Tex fame, among other chemicals) and similar companies are excellent examples. They use a super interesting management structure called open allocation, which is exactly this: employees get to choose what they work on. Valve is similar but slightly less formal.

  • iisan7 2 days ago

    It paid off for PARC, iirc the laser printer justified lots of other things that Xerox didn't profit from but turned out to be incredibly important.

  • gopher_space 2 days ago

    The problem here is management expecting researchers to dump out actionable insights like a chicken laying eggs. Researchers exist so that you can rifle through their notes and steal ideas.

  • whiplash451 2 days ago

    Indeed. And it feels like there is this untold in-between where if you belong to an unknown applied AI team, you don’t have to deal with academia’s yak shaving, you don’t have to deal with Meta’s politics and you end up single handedly inventing TRMs.

  • heavyset_go 2 days ago

    How many patents did that research result in that paid off in terms of use, licensing and royalties?

  • godelski 2 days ago

      > I can't think of it ever really paying off
    
    Sure worked for Bell Labs

    Also it is what big tech was doing until LLMs hit the scene

    So I'm not sure what you mean by it never paying off. We were doing it right up until one of those things seemed to pay off, and then we hyper-focused on it. I actually think this is a terrible thing we frequently do in tech. We find promise in a piece of tech and hyper-focus on it. Specifically, we hyper-focus on how to monetize it, which ends up stunting the technology because it hasn't had time to mature; we're trying to monetize the alpha product instead of trying to get that thing to beta.

      > But from a business perspective it seems to almost never pay off.
    
    So this is actually what I'm trying to argue. It actually does pay off. It has paid off. Seriously, look again at Silicon Valley and how we got to where we are today. And look at how things changed in the last decade...

    Why is it that we like off-the-wall thinkers? That programmers used to be known as a bunch of nerds and weirdos? How many companies were started out of garages (Apple)? How many started as open source projects (Android)? Why did Google start giving work-lifestyle perks and 20% time?

    So I don't know what you're talking about. It has frequently paid off. Does it always pay off? Of course not! It frequently fails! But that is pretty true for everything. Maybe the company stocks are doing great[0], but let's be honest, the products are not. Look at the last 20 years and compare it to the 20 years before that. The last 20 years have been much slower. Now maybe it is a coincidence, but the biggest innovation in the last 20 years has been in AI, and from 2012 to 2021 there were a lot of nice free-rein AI research jobs at these big tech companies where researchers got paid well, had a lot of autonomy in research, and had a lot of resources at their disposal. It really might be a coincidence, but a number of times things like this have happened in history and they tend to be fairly productive. So idk, you be the judge. It's hard to conclude that this is definitely what creates success, but I find it hard to rule out.

      > I'd tell them to go become academics.. but all the academics I know are just busy herding their students and attending meetings
    
    Same problem, different step of the ladder

    [0] https://news.ycombinator.com/item?id=45555175

didip 2 days ago

I always wonder about that. Those $100m mathematicians... how can they have room to think under Meta's crushing IMPACT pressure?

  • trhway 2 days ago

    For just 10% of that money, a $100M mathematician can hire 10 $1M mathematicians, or a whole math department at some European university, to do the work and the thinking for them, and thus beat any pressure while resting and vesting on the remaining 90%.

    • lblume 2 days ago

      Sure, but they weren't hired as managers, right?

      • vasco 2 days ago

        Ok ok, another $1m/year to hire a manager.

RataNova 2 days ago

The money chase is real. You can kind of tell who's in it for the comp package vs. who'd be doing the same work on a laptop in their garage if that's what it took

bboygravity 2 days ago

AI progress has slowed down?! By what metric?

Quite the statement for anybody who follows developments (without excluding xAI).

rhetocj23 2 days ago

"Maybe if you want to get ahead you gotta stop thinking like everyone else"

Well for starters you need a leader who can rally the troops who "think(s) different" - something like a S Jobs.

That person doesn't seem to exist in the industry right now.

ProofHouse 2 days ago

winning the AI race? Meta? Oh that was a good one. Zuck is a follower not a leader. It is in his DNA

zer0zzz 2 days ago

> I really think if they just took a step back and stopped being so metric-focused and let their people freely explore, they'd be win..

This is very true, and more than just in ai.

I think if they weren’t so metric focused they probably wouldn’t have hit so much bad publicity and scandal too.

bobxmax 2 days ago

I thought Alex Wang was a very curious choice. There are so many foundational AI labs with interesting CEOs... I get that Wang is remarkable in his own right, but he basically just built MTurk and timed the bubble.

Doesn't really scream CEO of AGI to me.

  • godelski 2 days ago

    A lot of people also don't know that many of the well known papers are just variations on small time papers with a fuck ton more compute thrown at the problem. Probably the feature that correlates most strongly with being a successful researcher is compute. Many have taken this to claim that the GPU poor can't contribute, but that ignores so many other valid explanations... and we wonder why innovation has slowed... It's also weird because if compute were all you need, then there's a much cheaper option than what Zuck paid. But he's paying for fame.

    • crystal_revenge 2 days ago

      > A lot of people also don't know that many of the well known papers are just variations on small time papers with a fuck ton more compute thrown at the problem.

      I worked for a small research-heavy AI startup for a bit, and it was heartbreaking how many people I would interact with in that general space who had worked hard and passionately on research, only to have been beaten to the punch by a famous lab that could rush the paper out quicker and at a larger scale.

      There were also more than a few instances of high-probability plagiarism. A paper of my team's that had existed for years was basically rewritten without citation by a major lab. After some complaining they added a footnote. But it doesn't really matter, because no big lab is going to have to defend itself publicly against some small startup, and their job at the big labs is to churn out papers.

      • godelski 2 days ago

          > only to have been beaten to the punch by a famous lab that could rush the paper out quicker and at a larger scale.
        
        This added at least a year to my PhD... Reviewers kept rejecting my work with comments like "add more datasets." That's nice and all, but on the few datasets I did use I beat out top labs while using a tenth of the compute. I'd have loved to add more datasets, but even at a tenth of the compute I blew my entire compute budget. Guess state-of-the-art results, a smaller model, higher throughput, and third-party validation were not enough (when you use an unpopular model architecture).

        I always felt like my work was being evaluated as an engineering product, not as research.

          > a few instances of high-probability plagiarism
        
        I was reviewing a paper once and I actually couldn't tell if the researchers knew that they had ripped me off or not. They compared to my method, citing it and showing figures using it, but then dropped the performance metrics from the table. So I asked. I got them in return and saw that there was no difference... So I dove in and worked out that they were just doing 99% my method, with additional complexity (computational overhead). I was pretty upset.

        I was also upset because otherwise the paper was good. The results were nice and they even tested our work in a domain we hadn't. Had they just been upfront, I would have gladly accepted the work. Though I'm pretty confident the other reviewers wouldn't have, due to "lack of novelty."

        It's a really weird system that we've constructed. We're our own worst enemies.

          > their job at the big labs is to churn out papers.
        
        I'd modify this slightly. Their job is to get citations. Churning out papers really helps with that, but so does all the tweeting and evangelizing of their work. It's an unfortunate truth that as researchers we have to sell our work, and not just on the scientific merit it holds. People have to read it, after all. But we should also note that it is easier for some groups to get noticed than others. Prestige doesn't make a paper good, but it sure acts as a multiplying factor for all the metrics we use to determine whether it is good.

    • BobbyTables2 2 days ago

      It’s funny.

      I learnt the hard way that communications/image/signal processing research basically doesn’t care about Computer Architecture at the nuts and bolts level of compiler optimization and implementation.

      When they encounter a problem whose normal solution requires excessive amounts of computation, they reduce complexity algorithmically using mathematical techniques, and quantify the effects.

      They don’t quibble about a 10x speed up; they reduce the “big O()” complexity. They couldn’t care less whether it was implemented in interpreted Python or hand-optimized assembly code.
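
      To illustrate with a minimal sketch of my own (assuming NumPy; the classic DFT-vs-FFT example, not anything from a specific project): the naive transform is O(n^2), while the FFT computes the same result in O(n log n), and no amount of low-level tuning of the naive version closes that gap as n grows.

        import time
        import numpy as np

        def naive_dft(x):
            # O(n^2): explicitly build the n x n matrix of complex exponentials
            n = len(x)
            k = np.arange(n)
            M = np.exp(-2j * np.pi * np.outer(k, k) / n)
            return M @ x

        x = np.random.rand(2048)

        t0 = time.perf_counter(); naive = naive_dft(x); t1 = time.perf_counter()
        t2 = time.perf_counter(); fast = np.fft.fft(x); t3 = time.perf_counter()

        print(np.allclose(naive, fast))                       # same answer
        print(f"naive: {t1 - t0:.3f}s  fft: {t3 - t2:.6f}s")  # very different cost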

      On one hand, I know there’s a lot of talent in AI today. But throwing hardware at the problem is the dumbest way forward.

      WiFi adapters would be wheeled luggage if we'd had the same mentality during their development.

      • shwaj 2 days ago

        At some point it becomes difficult to improve the O() complexity. How do you do better than the O(n^2) of the Transformer, with acceptable tradeoffs? Many big brains in all the big labs are very aware of the importance of algorithmic advances. There is no low-hanging fruit, but they're doing their best.

        Then, in parallel to that, there's work on compiler optimizations and other algorithmic innovations such as Flash Attention (a classic at this point), which had a drastic impact on performance due to cache awareness without changing the O() complexity.
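
        For concreteness, here's a rough sketch of my own (plain NumPy, not anyone's production kernel) of where the O(n^2) comes from: standard attention materializes an n x n score matrix. Flash Attention computes the same function but tiles it so that the full matrix never sits in slow memory, which is why it helps so much without changing the asymptotics.

          import numpy as np

          def attention(Q, K, V):
              # scores is (n, n): this is the quadratic time and memory term
              scores = Q @ K.T / np.sqrt(Q.shape[-1])
              weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
              weights /= weights.sum(axis=-1, keepdims=True)
              return weights @ V

          n, d = 1024, 64
          Q, K, V = (np.random.rand(n, d) for _ in range(3))
          out = attention(Q, K, V)
          print(out.shape)  # (1024, 64), but the intermediate scores were (1024, 1024)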

        • tomrod 2 days ago

          Sometimes it's the theory, sometimes it's the engineering, and often it's both.

      • godelski 2 days ago

          > They don’t quibble about a 10x speed up; they reduce the “big O()” complexity. They couldn’t care less whether it was implemented in interpreted Python or hand-optimized assembly code.
        
        I can at least say that's not all of us. But you're probably right that this is dominating. I find it so weird since everyone stresses empirics yet also seems to not care about them. It took me my entire PhD to figure out what was really going on. I've written too many long-winded rants on this site though.

      • helix278 2 days ago

        You make it sound like reducing the big O complexity is a dumb thing to do in research, but this is really the only way to make lasting progress in computer science. Computer architectures become obsolete as hardware changes, but any theoretical advances in the problem space will remain true forever.

        • BobbyTables2 a day ago

          No, my point was the opposite, I agree with you. But the commercial focus on throwing hardware at the problem seems to have gotten entirely out of hand.

    • rhetocj23 2 days ago

      Frankly, this is the reason why I'm not convinced the current LLM movement will yield anything close to the dream.

      The right people to deliver immense progress don't exist right now.

      • godelski 2 days ago

          > The right people to deliver immense progress don't exist right now.
        
        I wouldn't go this far. But I would say that we're not giving them a good shot.

        The people are always there, you just need to find them and enable them.

          How do you manage genius? You don’t.
          — Mervin Kelly

  • thereitgoes456 2 days ago

    The reporting at the time said that he was Mark's 5th choice or something similar. It is fairly clear he would have preferred Ilya, Murati, Mark Chen, and perhaps others, but they said no, and Alex Wang was the first one to say yes.

    • tsunamifury 2 days ago

      Why in the world would he want Murati? She has absolutely no technical chops and was not functionally CTO of OpenAI.

      • hn_throwaway_99 2 days ago

        > was not functionally CTO of OpenAI.

        Why do you say that?

      • shuckles 2 days ago

        Because she was CTO of OpenAI.

        • CuriouslyC 2 days ago

          Pretty ironic when access to trade secrets and people skills is seen as more important in a technical field than technical competence.

      • bobxmax 2 days ago

        What technical chops does Sam Altman have?

  • tsunamifury 2 days ago

    Alexandr Wang is not interesting, and he's a few steps short of a fraud whom Mark had to bail out because he was so co-invested.

    Shareholders should be livid if they knew a single thing about what was going on.

    • typpilol 2 days ago

      Tell me more

      • tsunamifury 2 days ago

        Scale promised cutting-edge data pipelines and model-training infra but mostly sold outsourced labeling with a tech veneer. Great margins, weak moat — classic Valley overclaim, not outright fraud.