That’s not universal. For instance, last week I got help writing a bash script. But I hope they’re helping lots of you in lots of ways.

  • HStone32@lemmy.world · 4 days ago

    I TA for an electrical engineering class. It's amusing to look at students' code these days: everything is needlessly wrapped up in 3-line functions, students keep trying to do in 25 lines what can be done in 2, and it all becomes impossible to debug.

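    A made-up sketch of the kind of thing I mean (hypothetical names, not from any actual assignment):

    ```python
    # The needlessly wrapped-up version: every trivial step gets its
    # own function, and the actual logic ends up buried.
    def get_sample(samples, i):
        return samples[i]

    def scale_sample(value, gain):
        return value * gain

    def add_offset(value, offset):
        return value + offset

    def convert_one(samples, i, gain, offset):
        raw = get_sample(samples, i)
        scaled = scale_sample(raw, gain)
        return add_offset(scaled, offset)

    def convert_all(samples, gain, offset):
        out = []
        for i in range(len(samples)):
            out.append(convert_one(samples, i, gain, offset))
        return out

    # The same thing, done directly:
    def convert_all_simple(samples, gain, offset):
        return [s * gain + offset for s in samples]
    ```
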
    When their code inevitably breaks, they ask me to tell them why it isn't working. My response is to ask them what it's meant to be doing, but they can't answer, because they don't know.

    The sad thing is we try to make it easy on them. Their assignment specs are filled with tips, tricks, hints, warnings, and even pseudo-code for the more confusing algorithms. But these days, students would rather prompt ChatGPT than read the docs.

    I've never seen ChatGPT benefit a student. Either it misunderstands and just confuses the student with nonsense code and functions, or, in rare cases, it does its job too well and the student doesn't end up learning anything. The department has collectively decided to ban it and all other genAI chatbots starting next semester.

    • comfy@lemmy.ml · 4 days ago (edited)

      I don’t understand why it would be acceptable to submit generated code in the first place. I’d say it’s functionally asking others to complete your assignment. Sampling code excessively and without attribution is plagiarism.

      And seconding that concern about people not even learning how code works. This was an issue even before ChatGPT, when people would look up Stack Overflow snippets or existing algorithms by default instead of thinking and training their minds to solve actual real problems, but it's probably much more widespread now that there's an easier way out. If the school is able to run a code exam in an offline environment, even with manual docs available, it should weed out the ones who didn't learn pretty quickly.

    • perishthethought@lemm.ee (OP) · 4 days ago

      This is my big concern at my day job. Management keeps pushing AI chat on my younger co-workers, but they can’t tell when it’s hallucinating. And since there’s no feedback loop (our chatbot doesn’t learn from us as we type), it just keeps spewing the same lies.

      • morbidcactus@lemmy.ca · 4 days ago

        Yeah, I've been dealing with that a bunch lately too. I've started pushing them towards the documentation directly (though to be fair, sometimes that's ass or nearly nonexistent), with some success.

    • tee9000@lemmy.world · 4 days ago (edited)

      How do you know it doesn't benefit a student? If their work is exceptional, do you assume they didn't use an LLM? Or do you not see any good code anymore?

      • hemko@lemmy.dbzer0.com · 4 days ago

        It replaces the work required to research and think about the problem. You know, the part where you'd normally learn and understand the issue at hand.

        • tee9000@lemmy.world · 4 days ago

          I'm asking about this individual's experience as a TA, not for an opinion on LLMs.

    • WbrJr@lemmy.ml · 1 day ago

      A friend of mine works in a similar position and we discussed it a bit.

      Since AI became a thing, and since we have some newer, younger, and motivated profs, they actually teach and discuss the use of AI in class, which is pretty important.

      In my opinion we won't get rid of it, just like the internet.

      And we have no reliable way to determine whether AI was used or not.

      So the only way to deal with this situation is to accept the existence and use of AI and create different kinds of tasks.

      For example, make students explain their code and make it clear up front that there will be questions. That way they have to understand the code; whether they used AI or not doesn't matter.

      And create tasks that require human interaction, like collaborative work; AI can't do that, and you still have to structure the project yourself.