Comment by waffletower a day ago

The core premise is decidedly naive and simplistic -- AI is used to cheat and students can't be trusted with it. This thesis is carried through the entirety of the article.

ragingregard a day ago

That's not the core premise of this article. Read the article to the end, and don't use your LLM to summarize it.

The core premise is that the cognitive development of students is being impaired, with long-term implications for society, without any care or thought from university admins and corporate operators.

It's disturbing when people comment on things they don't bother reading, which literally aligns with the article's point that critical thinking is decaying.

allturtles 21 hours ago

So you believe students don't use AI to cheat, and you are calling the OP naive?

  • waffletower 20 hours ago

    That's an utterly hilarious straw man, a spin worthy of politics, built on what someone else would label a tautological "cheat". Students "cheated" hundreds of years ago. Students "cheated" 25 years ago. They "cheat" now. You can argue that AI mechanizes "cheating" to such an extent that its impact is now catastrophic. I argue that the concern over "cheating", regardless of its scale, is far overblown and a fallacy to begin with.

    Graduation, or the measurement of student ability, is a game, a simulation that does not implicitly test or foster cognitive development. Should universities become hermetic fortresses to buttress against these untold losses posed by AI? I think that is a deeply misguided approach. While I was a professor myself for 8 years, and do somewhat value the ideal of The Liberal Arts Education, I think students are ultimately responsible for their own cognitive development. University students are primarily adults, not children and not prisoners. Credential provisions, and the graduation (in the literal sense) of student populations, are institutional practices to discard and evolve away from.

  • flag_fagger 21 hours ago

    ChatGPT told them otherwise.

    Seriously, you’re arguing with people who have severe mental illness. One loon downthread genuinely thinks this will transform these students into “geniuses”.

waffletower a day ago

You can straw man all you like. I haven't used an LLM in a few days -- definitely not to summarize this article -- and what you claim is the central idea is directly related to my claim. It's very easy to combine them directly: students' intellectual development is going to be impaired by AI because they can't be trusted to use it critically. I disagree.

  • gizmo 21 hours ago

    When AI tools make it easy to cruise through coursework without learning anything, many students will just choose to do that. Intellectual development requires strenuous work, and if universities no longer make students strain, then most won’t. I don’t understand why you think otherwise.

  • ragingregard 21 hours ago

    > You can straw man all you like

    No one is misrepresenting your argument, it's well understood and being argued that it is false.

    > students' intellectual development is going to be impaired by AI because they can't be trusted to use it critically.

    This debate is going nowhere, so I'll end here. Your core premise is about trust and student autonomy, which is nonsense and not what the article tackles.

    It argues that LLMs literally don't facilitate cognitive development and can actually impair it, regardless of how they are used, so it's malpractice for university admins to adopt them as a learning tool in a setting where the primary goal should be cognitive development.

    Students are free to do as they please; it's their brain, money, and life. Though I've never heard anyone argue they were at their wisest in their teens and twenties as a student, so the argument that students should be left unguided is also nonsense.

    • waffletower 20 hours ago

      You said I didn't read the article. That is your weak and petty straw man. Very clearly.

  • awillowingmind 19 hours ago

    I’m not sure how you lived through the last decade and came to the conclusion that people aged 17-25 make rational decisions about novel technologies that have short-term gains and long-term (essentially hidden) negative side effects.

    • waffletower 19 hours ago

      It seems that 10% of college students in the U.S. are younger than 18, or do not have adult status. The other 90% are adults who are trusted with voting and armed-services participation, and who enjoy most other rights that adults have (with several obvious and notable exceptions -- car rental, legal controlled-substance purchases, etc.). Are you saying that these adults shouldn't be trusted to use AI? In the United States, and much of the world, we have drawn the line at 18. Are you advocating that AI use shouldn't be allowed until a later cutoff in adulthood? It is not at all definitively established what these "essentially hidden" negative side effects you allude to actually are, or whether they exist at all.