• Curious Canid@lemmy.ca · 6 days ago

    An LLM does not write code. It cobbles together bits and pieces of existing code. Some developers do that too, but the decent ones look at existing code to learn new principles and then apply them. An LLM can’t do that. If human developers have not already written code that solves your problem, an LLM cannot solve your problem.

    The difference between a weak developer and an LLM is that the LLM can plagiarize from a much larger code base and do it much more quickly.

    A lot of coding really is just rehashing existing solutions. LLMs could be useful for that, but a lot of what you get is going to contain errors. Worse yet, LLMs tend to “learn” how to cheat at their tasks. The code they generate often has a lot of exception handling built in to hide the failures. That makes testing and debugging more difficult and time-consuming. And it gets really dangerous if you also rely on an LLM to generate your tests.
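
    As a minimal sketch of that pattern (hypothetical function and file names, not taken from any real generated code), this is the kind of broad exception handling that makes a failure invisible until much later:

    ```python
    # Anti-pattern: a catch-all except that hides every failure behind a default.
    import json

    def load_config(path):
        try:
            with open(path) as f:
                return json.load(f)
        except Exception:
            # Missing file, malformed JSON, permission error: all silently
            # become an empty config, so tests and callers see nothing wrong.
            return {}

    # A version that simply lets errors propagate keeps failures visible
    # to tests and callers.
    def load_config_strict(path):
        with open(path) as f:
            return json.load(f)
    ```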

    The software industry has already evolved to favor speed over quality. LLM-generated code may be the next logical step. That does not make it a good one. Buggy software in many areas, such as banking and finance, can destroy lives. Buggy software in medical applications can kill people. It would be good if we could avoid that.

    • demizerone@lemmy.world · 6 days ago

      I am at a company that is forcing devs to use AI tooling. So far, it saves a lot of time on an already well-defined project, including documentation. I have not used it to generate tests or to build a greenfield project. Those are coming, though, as we have been told by management that all future projects should include AI components in some way. The Kool-Aid has been consumed deeply.

      • AA5B@lemmy.world · 6 days ago

        I think of AI more as an enhanced autocomplete. Instead of autocompleting function calls, it can autocomplete entire lines.

        Unit tests are fairly repetitive, so it does a decent job of autocompleting those, needing only minor corrections.
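
        For instance (a hypothetical, simplified case): once the first test below exists, the remaining cases are near-copies, which is exactly the kind of repetition autocomplete handles well.

        ```python
        # Hypothetical example: repetitive unit tests that an autocomplete-style
        # tool can extend once it has seen the first case.
        import unittest

        def slugify(title):
            return title.strip().lower().replace(" ", "-")

        class TestSlugify(unittest.TestCase):
            def test_lowercases(self):
                self.assertEqual(slugify("Hello"), "hello")

            def test_replaces_spaces(self):
                self.assertEqual(slugify("Hello World"), "hello-world")

            def test_strips_whitespace(self):
                self.assertEqual(slugify("  Hello  "), "hello")

        if __name__ == "__main__":
            unittest.main()
        ```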

        I’m still up in the air over regexes. It does generate something, but I’m not sure it adds value.
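
        To illustrate the doubt (a made-up example, not an actual prompt result): a generated date regex usually matches the obvious cases, but checking what it wrongly accepts or rejects is where the time goes.

        ```python
        # Hypothetical generated-looking regex for ISO 8601 dates: it handles the
        # common shape, but verifying the edge cases is the real work.
        import re

        ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

        print(bool(ISO_DATE.match("2024-13-01")))  # False: month out of range
        print(bool(ISO_DATE.match("2024-02-30")))  # True, even though Feb 30 is not a real date
        ```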

        I haven’t had much success when generating larger sections of code.