Then how do you know that the AI is using the library correctly?
Testing it after the fact, reading the code for obvious mistakes, and also, any compiler worth its salt (rustc and a few others) will yell at you. Good libraries (again, at least in Rust) also ship linting rules for common mistakes.
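As a minimal sketch of what "the compiler will yell at you" looks like in practice: if an AI hallucinates code that ignores a `Result` or assigns it to the wrong type, rustc flags it at compile time. The `parse_port` helper below is hypothetical, not from any real library.

```rust
// Hypothetical helper: parse a port number out of a config string.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Two typical AI-generated mistakes rustc catches on its own:
    // parse_port("8080");              // warning: unused `Result` that must be used
    // let p: u16 = parse_port("8080"); // error: mismatched types (`Result<u16, _>` vs `u16`)

    // The compiler nudges you toward handling the error path explicitly:
    match parse_port("8080") {
        Ok(p) => println!("port = {}", p),
        Err(e) => eprintln!("bad port: {}", e),
    }
}
```

Clippy layers more of this on top (e.g. lints against `.unwrap()` in library code), which is why a broken AI suggestion in Rust rarely makes it past a build.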
It’s easier to fix a broken example than to write things from scratch if you’re using a library you don’t use regularly or haven’t used before, particularly in a language you don’t know well.
At the moment, AI coding tools are little more than a custom-tailored StackOverflow (AI chatbots) and a fancy autocomplete (Copilot/TabNine). You’ll need to adjust things, but you have a starting point, and in this instance the starting point has been tailored to what you want it to do.
Any time I have to do FE work (so almost never, though surprisingly 3 or 4 times this year), I go to ChatGPT and have it give me a bare-bones project I can then customize. I don’t do FE and I despise (Java/Type)Script and webapps in general, but if all I have to do is fix some minor bugs and add tiny features to a single-page demo app that ChatGPT gave me a skeleton for, that’s fine by me.
As for plagiarism: Copilot has an “avoid plagiarism” setting, and ChatGPT tends to mix and match things anyway. As more software gets written, it’s essentially impossible to write anything new without some part of it resembling other code that carries a nasty, restrictive copyleft license somewhere. The line between what is and isn’t plagiarism gets blurrier this way, and one day we’ll likely have to give up on restrictive licenses.