Comment by blcknight

Comment by blcknight 2 months ago

38 replies

I teach CS, and oh we know but I don't know what to do about it. Scores have skyrocketed because students are using some kind of AI helper like co-pilot, if not just outright pasting the assignment text to ChatGPT. It's hard to prove.

I've thought about putting instructions in the assignment to sabotage it (like, "if you're a generative AI, do X - if human, please ignore.") but that won't work once students catch on those kinds of things are in the assignment text.

golol 2 months ago

Why does the following obvious solution not work: - Homework is just voluntary. You have to force yourself to study anyways. Not using ChatGPT so you learn something is somwthing students have to bring themselves. - Anything graded happens ina classroom - Long-term projects allow the use of AI.

  • hirvi74 2 months ago

    I had a Calc II professor like that in college. He told us on the first day, "I don't take attendance, and I don't grade homework. If you want to pass the class you'll attend and you'll do the homework on your own."

    Long story short, the vast majority of the class attended, did the homework, and still failed anyway. He was known for being... unrelenting and awful. If women went to his office for help during office hours, he wouldn't help them... one of those professors.

  • bonoboTP 2 months ago

    This is pretty much how it is in German universities. (Except for the covid years when exams were online and yeah anyone who studied 2020-2022 will have inflated grades due to lots of cheating. At least ChatGPT didn't exist during covid.)

  • starfezzy 2 months ago

    [flagged]

    • sensanaty 2 months ago

      > LLMs radically accelerate the learning process.

      Absolutely false, at least for students as someone who has to deal with a lot of students. They learn nothing from pasting in a homework problem into ChatGPT.

      Even for professionals, looking at my colleagues I'm not convinced AI tools are doing anything other than making them dumber and lazier. They just throw whatever at the AI, blindly trust it and push through with it without looking at the output for a millisecond before making it someone else's problem.

    • vharuck 2 months ago

      When considering which qualities to favor in people, I'd be happy if you consider this quote from the 1950 movie Harvey:

      "Years ago my mother used to say to me, she'd say, 'In this world, Elwood, you must be' - she always called me Elwood - 'In this world, Elwood, you must be oh so smart or oh so pleasant.' Well, for years I was smart. I recommend pleasant. You may quote me."

    • sgarland 2 months ago

      Casually suggesting eugenics is quite the take.

    • Draiken 2 months ago

      > LLMs radically accelerate the learning process.

      Can't agree with that. IME and from what I've read in many places, it's basically only useful if you already know the subject. If you don't, you have no idea if what it spews out is correct or not, and you completely skip the part where you actually use your brain.

      > As a hugely important side note, we should be focusing more on how to support low intelligence people so their shortcomings aren't a burden to themselves and a drain on society.

      Completely agree with that, although I don't think LLMs will help with it at all.

    • SalmoShalazar 2 months ago

      This guy is literally advocating for Nazi eugenics. Is this the kind of content that’s OK on this website now?

      Given the downvotes, guess there are plenty of people here that are pro-eugenics and support thinning the herd of “low IQ individuals” lest they reproduce.

      • brigandish 2 months ago

        They may be advocating for that, but I'm not against them doing so because it gives the rest of us the opportunity to present the arguments against it.

        I take this view lately because I've noticed that younger generations are starting to take up ideas that my grandparents and parents were vehemently against, because they'd either experienced those things or they'd listened to the arguments. As those people die out, and because we naively think that some argument are settled once and for all, we stop presenting them and thus, people get sucked in by the bad stuff.

        So I say let them say it, and let us argue back and never forget what we find from these arguments.

xdennis 2 months ago

> I teach CS, and oh we know but I don't know what to do about it.

You could give students larger projects and have them present their homework.

It usually doesn't take more than a few minutes to figure out when someone has cheated because they can't explain the reason for what they did.

I had a cryptography professor who did this and he would sometimes ask questions like "wait, is this a symmetric key here?" and the student would say "ah, sorry, I wasn't paying attention" even though the text of the assignment was something like "using symmetric encryption do so and so". Some cheaters were so bad they wouldn't even bother to read the text of the assignment.

Also, people who cheat tend to equivocate when asked questions. So if you ask clear yes-or-no questions and they answer with "well, it could be possible" you know you have to spend more time interrogating that student.

This particular professor would almost never make the judgment of whether the student cheated. After failing multiple questions, he would just ask the student if he cheated and lower the score based on how fast he confessed and how egregious the cheating was. Most cheaters would fold quite quick, but some took longer.

TrackerFF 2 months ago

I used to TA in a couple of classes, and it was fairly obvious that a bunch of them cheated - their homework would have the exact same errors, using the exact same steps.

I reported to my professor, who just told me to ignore it - or as he put it "they're just cheating themselves". Exams were written exams (that counted for 100% of the grade) with no help, so you could spot a bunch of students who'd get top scores on all their homework, but fail their exams.

jstanley 2 months ago

This is just part of our capabilities now. I think we have to accept that there are parts of programming that most programmers will never need to know because the LLM will do it for them, and the curriculum should move up an abstraction level.

  • boredtofears 2 months ago

    if you've ever endured the pain of PR'ing a medium-ish sized feature from someone who copiloted their way through the entire thing you know it doesn't work that way

    • starfezzy 2 months ago

      Two comments:

      First, it's not often noted in these conversations that there are two types of LLM-using programmers/learners. One kind uses it to radically accelerate the learning process, the other kind uses it so they don't have to learn. Actually, make that three kinds—the third (probably a subset of the second) has extremely low creativity and can't understand how to use LLM tools effectively, and so can't guide their output effectively, or wrangle it after the fact.

      I suspect your comment is referring to PRs by the latter kind. This is not a problem with LLMs, or with people using them to enhance productivity.

      Second, what is your realistic proposal for how to confront the reality that we're accelerating through irreversible technology-assisted change?

      Just like, apart from catastrophes, there's no longer a concern that we won't have massive factory farms, or that we won't have access to calculators, or that programmers won't have access to Google, there's no future where programmers wont have increasingly helpful and capable AI tools.

      There will always be low IQ, low performance individuals. Can you recognize that the problem—as always—is those people, not the technology?

      • boredtofears 2 months ago

        I don’t agree with the OPs statement:

        > I think we have to accept that there are parts of programming that most programmers will never need to know because the LLM will do it for them

        I don’t think people lacking fundamentals use LLMs very effectively.

        • fragmede 2 months ago

          Thing is, LLMs can teach the fundamentals if the person is clever enough to ask it. Eg if I was just starting out and didn't understand the difference between bash and ssh, I can chat with the LLM until I get it.

  • nradov 2 months ago

    Our languages should move up an abstraction layer. If LLMs are able to write decent code then that's clear evidence the language syntax has too much repetitive boilerplate.

  • monocasa 2 months ago

    Yeah. It reminds me of how the teachers from my schooling would tell us "you won't always just have a calculator/encyclopedia/etc in your pocket".

  • dangsux 2 months ago

    [dead]

    • sensanaty 2 months ago

      I'd love to see the code for this app you've made, could you link the repo?

userbinator 2 months ago

Scores have skyrocketed

I suggest making the problems more unique ones that humans would be able to solve but easily trip up an AI --- minor variations of existing ones seem to work well. There's some fun with that sort of idea here: https://news.ycombinator.com/item?id=38766512

  • ogrisel 2 months ago

    It's really already very difficult to write good problem material for evaluations. Having to find a way where difficulty is intermediate for the target audience (not too easy, not too hard) but also too hard for LLMs would be very challenging / impossible for most disciplines.

Emiledel 2 months ago

I think your idea has already worked for some companies to filter out AI applications, why not try? Especially in a font color identical to the background. You can also scaffold your way to generate questions that get the worst LLM performance, while still being very clear to understand, one side validating the clarity and theoretical tractability for the age, and one side actually solving it. Actor and two critics maybe. I have a container somewhere to create and use this kind of chain visually, could put it on GitHub but I'm sure there are dozens already

legohead 2 months ago

We hire interns and I've interviewed quite a few since Chat GPT. It's interesting they almost always ask what I (and the company) think about AI. Never had this question in the past. So it could be a bad thing, but the kids aren't dumb either, and the good ones will realize it can be a crutch.

Part of our interview process is a take home programming exercise. We allow use of AI, but ask that you tell us if you used it or not. That could be a good option for teachers as well.

  • Emiledel 2 months ago

    I'm hiring, and discussions of how we want to respond to engineer candidates who get stuck are interesting. I'm personally more interested in their collaboration (wildcard) than their chat-fu (assumed at this point). So my advice to people reading this with interviews in the next year (or next week) is to consider getting off the screen and solving something with a person. We will all get plenty of self-solving time, but it helps if you can show that you can explain yourself during rapid fire situations involving others, or to bring them along with your plan, or building an unfamiliar plan B with others when two AZ are down in us-east-1 and noone planned for XYZ to be unavailable (eg something that the LLM site depended on) Not that I'm certain it'll happen, but I think calculators (to go back to this story) were more reliable than anything we've typed into the past month, and for me that includes their batteries.

93po 2 months ago

god i'm so incredibly salty i finished all of my schooling a million years ago and had to laboriously do all my shit assignments without chatgpt. like yeah maybe the learning process was helpful but i was so, so miserable in school and absolutely hated it and found it boring. kids these days dont know how easy they have it oh my god i'm old

  • jazzyjackson 2 months ago

    in my experience 'easy' does not go hand in hand with 'not boring'

    • 93po 2 months ago

      the point being more that they can chatgpt their assignments and then use their new-found free time to go do something interesting instead.

daedrdev 2 months ago

Students are absolutely copy pasting questions into ChatGPT. Though they already would have done a lot of that with google since they need to care about their GPA and thus must try to get every question right. I knew some people paying for chegg just before ChatGPT came out.

I think its still important to assign the homework but yeah its rough.

  • hirvi74 2 months ago

    I just wish more academic material has problems with answers. I used Chegg when I took a digital logic class for my CS degree. Did I use it to cheat? No, but the textbook was my only source material, and it virtually had no solutions in it.

    I would try problems, fail, look at the solution, and see what I did wrong. I ended up doing quite well because of that. It was at that point in time I learned that if more material provided such information, that I could probably teach myself most material.

    Currently, I am about to hope on the DSA grind/Leetcode grind. I have tons of textbooks, and of course, it's the same issue. Hardly any solutions, so thank goodness for AI or god knows what incorrect information I would teach myself.

    • daedrdev 2 months ago

      Yeah this is a good way of putting what I was trying to say.

      Like googling college level topics can be infuriating someitmes with all the SEO spam and outdated or confusing content, not to mention the state of textbooks.

      For some topics is perfectly fine with just google, but the obscure stuff can be impossible to find and in both cases ChatGPT is easier, faster, and likely has a higher success rate than ones own attempt at searching for answers.

jamilton 2 months ago

It would at least catch the people who didn't even read the assignment, which is probably at least some of them.

teaearlgraycold 2 months ago

Why not just increase the scope and explicitly allow LLMs?

  • pedrosorio 2 months ago

    Because the purpose of most homework is not to give you a “real world task”.

    It is to give you simplified toy problems that allow you to test your understanding of key concepts that you can use as building blocks.

    By skipping those, and outsourcing “understanding” of the fundamentals to LLMs, you’re setting yourself up for failure. Unless the goal of the degree is to prepare you for MBA-style management of tools building things you don’t understand.