Comment by cdrini 10 months ago

Haha, I'm pleasantly surprised to see my comment at the top; I genuinely thought it would drown to the bottom! Not due to disagreement, just due to sheer volume and being posted rather late in this post's lifespan. Anyways, my meta comment wasn't that I disagreed with all the other comments, I was just frustrated by how repetitive they were of one another. When I go to leave a comment, I do a pass through all or most of the existing comments to make sure someone hasn't already left one in the same vein, and it was frustrating to see people saying almost verbatim what others had already said! If your comment isn't adding something new, why leave it? I'm all for healthy disagreement :) Also, not sure what part of my post sounds like it's from an "embattled ideological minority".

But speaking of healthy disagreement, as to "chatting with someone that has an infinite vocabulary", I'd love to hear any counterarguments you might have; or was calling it "silly and delusional" meant to be your argument? :P I think it's a pretty uncontroversial statement, seeing as e.g. ChatGPT very likely knows every word in the English language.

advael 10 months ago

The most ridiculous aspects for me were the anthropomorphizing (reminds me a bit of that one Sam Altman interview) and the use of "infinite". The latter doesn't really work on vibes: as many have noted, while I'm sure ChatGPT has been exposed to every word, its pattern of communication is very "regression to the mean" among them. But it's also silly if taken literally. Unless we're counting some quirky, technically-grammatical combinatoric compounding whose meaning we infer in practice from the composition of what we identify as separate individual words (like just hyphenating a bunch of adjectives and a noun), there's not really an argument for "infinite vocabulary" in the same sense that there is for "infinite possible sentences". Being a valid word requires at least that someone can meaningfully comprehend what is meant by it, and coordination requirements of that nature tend to truncate infinities.

The case for ChatGPT doing significant coinage that sticks isn't particularly strong either, partly on theoretical grounds and partly because I think I'd have heard a lot of complaints about it by now, and the ones on Hacker News would be repetitive to the point of seeming unavoidable (we agree on that for sure).

Anyway, re: the silliest hype I've heard all week, I'm mostly just trying to find humor in what has been a pretty bad hype wave for someone who's pathologically bad at sounding like the kind of nontechnical hype guys who pervade any tech hype wave, but who is nonetheless mostly seeking out jobs in this field because it's where my expertise is. It's an incredibly awful job market for a lot of people, I realize, but it feels like a special hell I get for getting into ML research before it was (quite so) cool. I'm trying to fight the negativity, though I've gotten screwed over a lot lately; I don't have anything against you personally for being silly on HN.

  • cdrini 10 months ago

    Ah ok, so the issues are the anthropomorphizing, and that the phrase "infinite vocabulary" sounds impossible. I agree "infinite vocabulary" is a bit murky, and mathematically incorrect. If I wanted to be more mathematically correct I could say "complete vocabulary", but I think that's actually a little less understandable to people. I didn't mean "infinite vocabulary" in the sense that it coins new words, just infinite as in so large as to be incomprehensible to a single individual. As for anthropomorphizing, I think the word "chat" is the most anthropomorphizing I did, so I don't agree with you on that one.

    Ah mate, sorry to hear that, the market is tough right now. I will say I believe there's objectively very little in my comment that's hype-y. I think using AI while reading documents outside your comfort zone, and asking it questions, can expand your vocabulary. I've personally tried it: it's helped me read papers outside my field, and it's helped me find papers for better research. I can understand how someone could disagree with that, but calling it hype sounds to me more like a reaction to an invisible enemy, to "all the ones who hyped before", than a concrete response to this specific case. And I think that mentality could put you in a catch-22 mental loop that leaves you constantly dissatisfied with anything AI or ML, by seeing this invisible enemy where it might not be present. Anyways, stay positive and best of luck with the job hunt!

    Edit: and it looks like my comment has now fallen deep into the depths of the comment thread, never to be heard from again! See, I told you I was an embattled ideological minority ;)

    • advael 10 months ago

      The anthropomorphizing pattern I picked up on was the whole phrase "chat with someone". While I think LLMs are interesting and useful tools in a lot of ways, using one is a drastically different experience from talking to a person, and I think a lot of the marketing of LLMs relies on people anthropomorphizing them and kind of "filling in the gaps" for what they imagine these models to be doing and capable of. These differences are stark and meaningful. The strongest sign of that, to me, is that a lot of people who don't bear this in mind and interact with the things often are starting to exhibit what I would previously have identified as signs of significant social withdrawal, except instead of sounding like, I dunno, their favorite youtuber's political polemic or somesuch (which has by and large replaced the characteristic atrophy of verbal fluency we may have seen in a pre-internet era), their stilted speech trends toward the professional, somehow both confident and airy, tone of popular language models. I worry that this anthropomorphizing mindset may carry negative cognitive consequences in the medium to long term, analogous to but different from those of the rise of social media.

      As far as my job hunt goes, I'm not rejecting positions out of disappointment; I'm noticing that I'm often rejected by C-suite people after being technically vetted, often after what I believe was a pleasant, positive, and even generative conversation about the company's plans for AI and how I might help accomplish them. As I said before, I really do try to stay positive, and when putting my best foot forward, like in a job interview, I think I tend to succeed at that, but my experience leads me to be more negative about this moment in the industry when I'm more candid, such as in this forum. If you're going out for work in whatever the tech bubble du jour is, the people you're talking to have really different biases and expectations from those hiring for more "boring" development jobs. That makes me want to just go out for more general roles and lie low for a while, except that since I've been working primarily on ML-related projects for most of the last decade, it's also hard to convince people outside that area that I have adequate relevant experience. In this context, and with bills to pay, it's hard to stay optimistic.

mark-r 10 months ago

Sure, ChatGPT knows every word in the English language (and probably quite a few that ain't). But how likely is it to use them all?

  • cdrini 10 months ago

    Now that's an argument! Agreed, it won't use them all of its own accord. But you can ask it about words, ask it to break down the important terms in a new field, or give it a paragraph from a paper outside your field and have it explain the jargon, and I think that's how it can help someone grow their vocabulary.