Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the “routine” responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register.

  • UnspecificGravity@piefed.social · ↑43 · 18 days ago

    Honestly, a lot of these predicted problems with AI are actually overly optimistic, because they assume AI can actually DO the work we're talking about, and the current state of AI very much cannot.

    I sometimes think that articles like this are plants by the AI industry to create the narrative that their shit is even capable of causing this problem.

      • UnspecificGravity@piefed.social · ↑14 · 18 days ago

        I would say that THIS is the biggest risk of AI. It's not what it does, it's what people believe it does, especially people who aren't capable of actually assessing its performance.

        • Ænima@lemmy.zip · ↑4 · 18 days ago

          Or C-level execs who are so out of touch with what their employees do that they're convinced it can and/or should replace some or all of an employee's job duties.

    • chisel@piefed.social · ↑7 · 18 days ago

      The problem is AI replacing the busy work that brand-new workers have historically tended to be assigned. AI, in its current state, absolutely can do a lot of this work and replace the need for junior employees in a lot of cases. The problem is, junior employees need this work in order to improve, so if we delegate all of it to AI, there will be no future senior employees to do the more advanced work that AI can't do.

      • rebelsimile@sh.itjust.works · ↑8 ↓1 · 18 days ago

        I think it's actually just the perception that it's doing that, from people who are so far away from the work that they don't have any clue.

        Like 300 years ago, if you wanted to be a sailor, maybe you started by rowing oars, or swabbing decks, or dealing with ropes and rigging or whatever. If you're a 19-year-old seaman in 2026, you're probably using a throttle controller that's 26 different systems removed from the actual mechanical work. No one says "oh, that guy's not a novice seaman" or "the boat literally drives itself hurr".

        If you were a graphic designer in 1970, you'd cut out and hand-lay a magazine page on a big glowing table, page by page, resetting the type so it fit alongside giant physical images. In 2026, you'd use Illustrator and lay it out on your computer. No one says "oh, the magazines just lay themselves out" or "that's not how you lay out a magazine". It's just done with the tools available.

        What’s going on is very few people seem to understand the difference between a tool and a solution. A tool is a thing that does something. A hammer is a tool. A solution is something that solves a problem. A hammer is a tool but it is not a good solution for a crying baby. Applying random tools to non-problems does not generate solutions randomly, it just creates even more intractable problems.

        • chisel@piefed.social · ↑4 · 18 days ago

          I feel like that goes without saying, yeah. AI can do a lot of a junior software engineer's job, but it's going to fail miserably at being a junior house cleaner.

      • Bustedknuckles@lemmy.world · ↑3 · 18 days ago

        I agree, and that's a great way of putting it. We're kneecapping ourselves collectively because enough individual companies are deprecating the junior dev experience. We'll see if it holds up when senior devs are in such short supply that companies have to pay them 4x the margin they saved on junior devs. I think they're hoping that the machine learning gets good enough to do senior dev work before the humans retire. Or else they're just line-go-up types.

    • Virtvirt588@lemmy.world · ↑3 · 18 days ago

      I do agree; it may be that the situation is being exaggerated to the point where doomerism is essentially encouraged.

      The thing is, there was an article where the students themselves didn't want AI on the mandated Chromebooks. It feels force-fed, and the fact is, nobody, no matter their age, wants garbage screwing up their workflow.

      Honestly, a lot of these articles are repeating the same story. In the end it all leads to a similar conclusion - and it isn’t age related.

    • Ephera@lemmy.ml · ↑3 · 18 days ago

      LLMs do tend to be pretty good at textbook problems, because they’ve been trained on the textbooks. We have working students at $DAYJOB, who tell us that you can often get a flawless grade by handing in something AI-generated.

      But then, yeah, you don't learn anything, and that will become a problem sooner or later, because none of the problems at work are textbook problems.