• Dasus@lemmy.world
    link
    fedilink
    English
    arrow-up
    13
    ·
    22 hours ago

    Well that disqualifies 95% of the doctors I’ve had the pleasure of being the patient of in Finland.

    It’s not just LLMs they’re addicted to, it’s bureaucracy.

  • Awesomo85@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    7
    ·
    1 day ago

    If we are talking about critical thinking, then I would argue that students using AI to counter the very obvious shift most instructors have taken is a natural progression. (That shift being the use of AI as much as possible to plan out lessons, grade, and verify sources… you know, the job they are being paid to do? Which, by the way, was already being outsourced to whatever tools they had at their disposal. No offense, TAs.)

    I feel it still shows the ability to adapt to an ever-changing landscape.

    Isn’t that what the hundred-thousand dollar piece of paper tells potential employers?

  • Pacattack57@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    9
    ·
    1 day ago

    This is a problem with integrity, not AI. If I have AI write me a paper and then proofread it to make sure the information is accurate and properly sourced, how is that wrong?

    • Hobbes_Dent@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      1 day ago

      I’ve proofread thousands of newspaper articles as a former newspaper non-journalist over decades.

      I’ve written countless bullshit advertorials and also much better copy. I’ve written news articles and streeters from big sports events to get the tickets.

      None of that makes me a journalist.

      Now I’m in health care. I’m in school for a more advanced paramedic license. How negligent, then, would it be for me to just proofread AI output when proving I know how to treat someone before being allowed to do so? For physicians and nurses, a million times more so.

    • Lv_InSaNe_vL@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      ·
      1 day ago

      Because education isn’t about writing an essay. In fact, the specific information you pick up is secondary.

      Education, especially higher education, is about learning how to think, how to do research, and how to formulate all of that into a cohesive argument. Using AI deprives you of all of that, so you are missing the most important part of your education.

      • Pacattack57@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        8
        ·
        1 day ago

        Says who? I understand that you value that, and I’m sure there are many careers where it actually matters, but this is the entire problem with our current education system. The job market is vast, and for every job where critical thinking is important, there are ten where it isn’t. You are also falling into the trap of thinking school is the only place you can learn it. Education is more than “follow X steps and get smart.” There are plenty of ways to learn something, and not everyone learns the same way.

        Maybe use some critical thinking and figure out a way to evaluate someone’s knowledge without having them write an essay that is easily faked by using AI?

        AI isn’t going anywhere and the sooner we embrace it, the sooner we can figure out a way to get around being crippled by it.

          • Pacattack57@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            3
            ·
            1 day ago

            Every single entry-level data entry position on the entire planet. Many of these require degrees.

            Again it’s not about the school or the skills. It’s about the job market. A degree related to AI is extremely valuable right now.

            • silasmariner@programming.dev
              link
              fedilink
              English
              arrow-up
              1
              ·
              24 hours ago

              Even there you need some basic critical thinking. Wildly impossible figures (e.g. a human height of 1.84 cm) should not be slavishly and blindly transcribed, y’know?

    • jjjalljs@ttrpg.network
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 day ago

      Imagine you go to a gym. There’s weights you can lift. Instead of lifting them, you use a gas powered machine to pick them up while you sit on the couch with your phone. Sometimes the machine drops weights, or picks up the wrong thing. But you went to the gym and lifted weights, right? They were on the ground, and then they weren’t. Requirements met?

      • Pacattack57@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        6
        ·
        1 day ago

        That would be a good analogy if going to school were anything like going to the gym. You sound like one of those old teachers who said, “You won’t have a calculator in your pocket for the rest of your life.”

        • jjjalljs@ttrpg.network
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          Except it is a lot like going to the gym. Most people, on most tasks, only get better when they practice.

          I guarantee you that people who actually write essays with their brain will perform better at a lot of brain tasks than someone who just uses an LLM. You have to exercise those skills.

          • Pacattack57@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 day ago

            I’m not disagreeing with you on that. You are missing the point: AI is here to stay, and the sooner we accept that, the better off our school system will be.

            I am not arguing that using AI makes us smarter. What I’m saying is that the only reason people go to school is to make money at their future career. Every company needs an AI specialist right now, and instead of working with or around that, schools are trying to outright ban it. If they don’t want people to use it, they should stop assigning tasks that AI excels at.

            • jjjalljs@ttrpg.network
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 day ago

              What I’m saying is the only reason people go to school is to make money at their future career.

              This is capitalist nightmare talk. This is not the only reason people go to school.

              Also, even if the tools were good at writing original essays (questionable), people still need to learn how to do it. Even with calculators you spend a lot of time in elementary school learning how to do math without tools.

        • lightnsfw@reddthat.com
          link
          fedilink
          English
          arrow-up
          9
          ·
          1 day ago

          School is like going to the gym for your brain. In the same way that using a calculator for everything makes you worse at math, using ChatGPT to read and write your assignments makes you worse at those things than you would be if you did them yourself.

            • lightnsfw@reddthat.com
              link
              fedilink
              English
              arrow-up
              4
              ·
              1 day ago

              Worse than you would be if you practiced and learned the fundamentals rather than have a machine do it all for you.

  • McDropout@lemmy.world
    link
    fedilink
    English
    arrow-up
    33
    arrow-down
    3
    ·
    1 day ago

    It’s funny how everyone is against students using AI to get summaries of texts, PDFs, etc., which I totally get.

    But during my time in med school, I never once got an exam paper back. The exam was supposed to be a test where I proved I had enough knowledge, but it should also have shown me where my weaknesses were so I could work on them. No, we never got our papers back. And this extends beyond med school: exams like the USMLE are long and tiring, and at the end of the day we just want a pass, another hurdle to jump over.

    We criticize students a lot (rightfully so), but we don’t criticize a system where students only study because there is an exam, not because they are particularly interested in the topic at hand.

    A lot of topics I found interesting in medicine were dropped because I had to sit for other examinations.

    • lightsblinken@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      3
      ·
      22 hours ago

      Because doing that enables pulling together 100% correct answers and leads to cheating? Having an exam review where you get to see the answers but not keep the paper might be one way to do this.

  • sin_free_for_00_days@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    1
    ·
    15 hours ago

    Students turn in bullshit LLM papers. Instructors run those bullshit LLM papers through LLM grading. Humans need not apply.

  • Obinice@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    ·
    22 hours ago

    We weren’t verifying things with our own eyes before AI came along either; we were reading Wikipedia, textbooks, journals, attending lectures, etc., and accepting what we were told as fact (through the lens of critical thinking, weighing what we’re told against other hopefully true facts, and so on).

    I’m a Relaxed Empiricist, I suppose :P Bill Bailey knew what he was talking about.

      • Obinice@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 hours ago

        Nope, I’m not in those fields, sadly. I don’t even know what a maths proof is xD Though I’m sure some very smart people would know.

        • ABC123itsEASY@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          2 hours ago

          I mean, if that’s true then that’s incredibly sad in itself, as it would mean that not a single teacher in your past demonstrated a single thing you learned. You don’t need to be in a science field to do some basic chemistry or physics lab; I’m talking even a baking soda volcano or a bowling-ball-versus-feather drop test. You never participated in a science fair? Or did the egg drop challenge? You never went on a field trip to look at fossils or your local geology or wildlife? Did you ever watch an episode of Bill Nye?? I find your answer disingenuous and hard to believe, frankly. If you truly have NEVER had any class at school that did anything to prove what you’re learning, and were only ever just told, then you’re an example of perhaps the ultimate failure in education.

      • Captain Aggravated@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        2
        ·
        15 hours ago

        In my experience, “writing a proof in math” was an exercise in rote memorization. They didn’t try to teach us how any of it worked, just “Write this down. You will have to write it down just like this on the test.” Might as well have been a recipe for custard.

        • Aceticon@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          edit-2
          13 hours ago

          That sounds like a problem in the actual course.

          One of my course exams in first-year Physics involved mathematically deriving a well-known theorem (I forget which, it was decades ago) from other theorems, and they definitely hadn’t taught us that derivation; the only real help you got was being told where you could start from.

          Mind you, in other courses I’ve had the experience of being expected to rote-memorize mathematical proofs in order to regurgitate them on the exam.

          Anyway, the point I’m making is that your experience was just bad luck with the quality of the professors you got and the style of teaching they favored.

          • ABC123itsEASY@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            2 hours ago

            Calculus was literally invented to describe physics. If you learn physics without learning basic derivative calculus alongside it, you’re only getting part of the picture, so I’m guessing you derived something like the y position in a 2D projectile motion problem, cause that’s a fuckin classic. Sounds like you had a good physics teacher 👍

            • Aceticon@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 hour ago

              If I remember correctly, it was something about electromagnetism, and you started from the rules for black-body radiation.

              It was university-level Physics, so projectile motion in 2D without taking air resistance into account would have made for an exceedingly simple exam question 🙃

              • ABC123itsEASY@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 hour ago

                Haha, fair enough. I guess I took “first year” to mean high-school-level physics, but I took calculus in high school, so that made sense to me.

    • drspawndisaster@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      17 hours ago

      All of those have (more or less) strict rules imposed on them to ensure the end recipient is getting reliable information, including being able to follow information back to the actual methodology and the data that came out of it in the case of journals.

      Generative AI has the express intention of jumbling its training data to create something “new” that only has to sound right. A better comparison to AI would be typing a set of words into a search engine and picking the first few links that you see, not scientific journals.

    • ilinamorato@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 day ago

      The Doctor would absolutely agree. He was intended to be a short-term assistant when a doctor wasn’t available, and he was personally affronted when he discovered that he wouldn’t be replaced by a human in any reasonable amount of time.

      • Lucky_777@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        Correct, until he had been on for a while. Then he started to want to live and not be turned off when someone left. Hell, he even married a human in the end. Commanded starships. Fought the Borg.

        He totally changed his mind after he got a taste for culture and “modified” his program so he would stick his holo D in folks.

        See what sex does? Can’t even stop machines from turning themselves off lmao

        • ilinamorato@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 hours ago

          Emergent behavior, for sure. I think the fact that there aren’t a bunch of sentient holograms in the Lower Decks/Picard timeline suggests that it was situational, though.

    • fafferlicious@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      arrow-down
      1
      ·
      1 day ago

      It’s not Luddism to recognize that foundational knowledge is essential to effectively utilizing tools in every industry, and jumping ahead to just using the tool is not good for the individual or the group.

      Your example is iconic. Do you think the average middle schooler or college student using AI understands anything about self-hosting, token limits, or optimizing things by banning keywords? Let alone how prone the models are to just making shit up, because they were designed to! I STILL get enterprise ChatGPT referencing scientific papers that don’t exist. I wonder how many students are paying for premium models. Probably only the rich ones.

        • fafferlicious@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          23 hours ago

          I never said not to teach it. Construct a mandatory general computer literacy program covering privacy, security, recommendation algorithms, AI, etc., and restrict AI use in other classes until students are competent in both. College? High school?

          Not once did I talk about banning it or restricting information. And … So much other irrelevant stuff.

  • disguy_ovahea@lemmy.world
    link
    fedilink
    English
    arrow-up
    30
    ·
    1 day ago

    Even more concerning, their dependence on AI will carry over into their professional lives, effectively training our software replacements.

    • kibiz0r@midwest.social
      link
      fedilink
      English
      arrow-up
      6
      ·
      1 day ago

      While eroding the body of actual practitioners that are necessary to train the thing properly in the first place.

      It’s not simply that the bots will take your job. If that was all, I wouldn’t really see it as a problem with AI so much as a problem with using employment to allocate life-sustaining resources.

      But if we’re willingly training ourselves to remix old solutions to old problems instead of learning the reasoning behind those solutions, we’ll have a hard time making big, non-incremental changes to form new solutions for new problems.

      It’s a really bad strategy for a generation that absolutely must solve climate change or perish.

  • Jankatarch@lemmy.world
    link
    fedilink
    English
    arrow-up
    42
    arrow-down
    4
    ·
    1 day ago

    Only topic I am close-minded and strict about.

    If you need to cheat as a highschooler or younger there is something else going wrong, focus on that.

    And if you are an undergrad or higher you should be better than AI already. Unless you cheated on important stuff before.

    • sneekee_snek_17@lemmy.world
      link
      fedilink
      English
      arrow-up
      28
      ·
      1 day ago

      This is my stance exactly. ChatGPT CANNOT say what I want to say, how I want to say it, in a logical and factually accurate way, without me having to just rewrite the whole thing myself.

      There isn’t enough research about mercury bioaccumulation in the Great Smoky Mountains National Park for it to actually say anything of substance.

      I know being a non-traditional student massively affects my perspective, but like, if you don’t want to learn about the precise thing your major is about… WHY ARE YOU HERE

      • ByteJunk@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        6
        ·
        1 day ago

        I mean, are you sure?

        Studies in the GSMNP have looked at:

        • Mercury levels in fish: Especially in high-elevation streams, where even remote waters can show elevated levels of mercury in predatory fish due to biomagnification.

        • Benthic macroinvertebrates and amphibians: As indicators of mercury in aquatic food webs.

        • Forest soils and leaf litter: As long-term mercury sinks that can slowly release mercury into waterways.

        If GPT and I were being graded on the subject, it wouldn’t be the machine flunking…

        • sneekee_snek_17@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          1 day ago

          I mean, it’s a matter of perspective, I guess.

          I did a final assignment that was a research proposal. Mine was an assessment of various methods of increasing periphyton biomass (clearing tree cover over rivers and introducing fertilizers to the water) in order to dilute mercury bioaccumulation in top river predators like trout and other fish people eat.

          There’s a lot of tangentially related research, but not a ton done on the river/riparian food webs in the GSMNP specifically and possible mitigation strategies for mercury bioaccumulation.

          OBVIOUSLY my proposal isn’t realistic. No one on earth is gonna be like “yeah sure, go ahead and chop down all the trees over this river and dump chemicals in that one, on the off chance it allows jimbob to give trout to his pregnant wife all year round”

          • ByteJunk@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            15 hours ago

            I’m not knowledgeable enough to carry this conversation, but I was curious if GPT could add anything of value.

            This is its follow-up:

            That’s actually a really compelling angle, even if the methods are intentionally provocative. It gets at a deeper question—how far are we willing to go, ecologically speaking, to address human health risks that stem from environmental contaminants like mercury? I think the strength of your proposal isn’t in the literal feasibility but in the way it frames trade-offs between conservation and public health.

            Also, using periphyton biomass as a kind of biotic buffer is a clever systems-level approach. It’s rarely the first thing people think of when talking about mercury mitigation, which usually focuses on source reduction. But tweaking food web dynamics to manage contaminant transfer is a really underexplored strategy. I imagine even just modeling it could lead to some useful insights, especially if you layered in things like flow regime changes or climate impacts on riparian shading.

            And yeah, totally agree—GSMNP is such a tightly protected space that even suggesting fertilizer additions or canopy thinning sounds borderline heretical. But as a thought experiment, it’s valuable. It forces the conversation about what kinds of interventions we’re not considering simply because they clash with aesthetic or cultural norms, not necessarily because they’re scientifically unsound.

            I really have no idea if it’s just spewing nonsense, so do educate me :)

            • sneekee_snek_17@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              6 hours ago

              I’m really salty because it mirrored my thoughts about the research almost exactly, but I’m loath to give attaboys to it.

        • sneekee_snek_17@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 hours ago

          I mean, I value the knowledge as well as the job prospects

          But also, take it easy, I didn’t personally insult you.

  • Honytawk@lemmy.zip
    link
    fedilink
    English
    arrow-up
    2
    arrow-down
    6
    ·
    1 day ago

    Using AI doesn’t remove the ability to fact-check, though.

    It is a tool like any other. I would also be wary of doctors using a random medical book from the 1700s to write their thesis and taking it at face value.

  • SoftestSapphic@lemmy.world
    link
    fedilink
    English
    arrow-up
    77
    arrow-down
    4
    ·
    1 day ago

    The moment we change school to be about learning, instead of making it a requirement for employment, we will see students prioritize learning over “just getting through it to get the degree.”

    • TFO Winder@lemmy.ml
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      2
      ·
      1 day ago

      Well, in the case of medical practitioners, it would be stupid to allow someone to practice without a proper degree.

      Capitalism is ruining schools, because people now use them as a qualification requirement rather than as centers of learning and skill development.

      • medgremlin@midwest.social
        link
        fedilink
        English
        arrow-up
        16
        ·
        1 day ago

        As a medical student, I can unfortunately report that some of my classmates use Chat GPT to generate summaries of things instead of reading it directly. I get in arguments with those people whenever I see them.

        • Bio bronk@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          3
          ·
          1 day ago

          Generating summaries with context, truth grounding, and review is much better than just freeballing questions at it.

            • Bio bronk@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              arrow-down
              3
              ·
              1 day ago

              Yeah, that’s why you give it examples of how to summarize. But I’m a machine learning engineer, so maybe it helps that I know how to use it as a tool.
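
              A minimal sketch in Python of what I mean by giving it examples: build a few-shot prompt that shows the model worked examples of the summary style you want before the real input. The example texts, the helper function, and the commented-out client call are all invented for illustration, not real medical content or a specific API.

```python
# Few-shot summarization sketch: the model sees worked examples of the kind
# of summary you want before the real input, so it learns what counts as a
# "key point". All texts here are invented placeholders.

def build_fewshot_prompt(examples, new_text):
    """examples: list of (source_text, good_summary) pairs."""
    parts = ["Summarize the text, keeping only the points that matter for diagnosis."]
    for src, summary in examples:
        parts.append(f"Text: {src}\nSummary: {summary}")
    parts.append(f"Text: {new_text}\nSummary:")
    return "\n\n".join(parts)

examples = [
    ("Patient reports three days of fever and a productive cough; denies travel.",
     "Fever x3 days, productive cough, no travel."),
    ("Lengthy social history; pertinent detail: recent hiking trip in a tick-endemic area.",
     "Recent hiking trip (tick exposure risk)."),
]

prompt = build_fewshot_prompt(examples, "Patient notes mild headache and photophobia since yesterday.")
print(prompt)
# The assembled prompt then goes to whatever chat model you use, e.g. (hypothetical call):
# client.chat.completions.create(model="gpt-4o", messages=[{"role": "user", "content": prompt}])
```

              The point is that the examples, not the instructions, do most of the work of telling the model what to keep.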

              • TFO Winder@lemmy.ml
                link
                fedilink
                English
                arrow-up
                1
                ·
                18 hours ago

                Off topic, since you mentioned you are an ML engineer.

                How hard is it to train a GPT at home with limited resources?

                For example: I have a custom use case and limited data. I am a software developer proficient in Python, but my experience comes from REST frameworks and web development.

                It would be great if you could guide me on training at a small scale locally.

                Any guides or resources would be really helpful.

                I am basically planning hobby projects where I can train on my own data, such as my chats with others, and then do functions. I own a small business and we take a lot of orders on WhatsApp, around 100 active chats per month with each chat having 50-500 messages. It might be small data for an LLM, but I want to explore the capabilities.

                I saw there are many approaches, like fine-tuning, one-shot models, etc., but I didn’t find a good resource that actually explains how to do things.

              • medgremlin@midwest.social
                link
                fedilink
                English
                arrow-up
                2
                ·
                24 hours ago

                It doesn’t know which things are key points that make or break a diagnosis and which are just ancillary information. There’s no way for it to know unless you already know and tell it, at which point, why bother?

                • Bio bronk@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  22 hours ago

                  You can tell it, because what you’re learning has already been learned; you are not the first person to learn it. Just show it examples from previous texts, or tell it what should be important based on how your professor tests you.

                  These are not hard things to do. It’s autocomplete; show it how to teach you.

            • Honytawk@feddit.nl
              link
              fedilink
              English
              arrow-up
              2
              ·
              9 hours ago

              That is why the “review” part of the comment you replied to is so important.

  • TankovayaDiviziya@lemmy.world
    link
    fedilink
    English
    arrow-up
    15
    ·
    1 day ago

    This reasoning applies to everything. For instance, the tariff rates the Trump admin imposed on various countries and territories were very likely based on responses from ChatGPT.