I mean, autocorrect has written about 50 percent of this comment, but that doesn’t mean my phone is writing it for me so much as accelerating what I wanted to type in the first place.
Maybe that’s how they got to a 30% estimate, cause ain’t no other way that would have worked
I wonder how they measure that. Having AI write 30% of the lines of code outright seems like it would be terrible. Having AI write 30% of each line (i.e. autocomplete) seems feasible.
Yeah, what percentage of their code was previously written by IntelliSense? Because I suspect this is just Copilot replacing IntelliSense plus a little more.
Copilot is great for:
a) replacing IntelliSense
b) minor refactors / very short method writing
c) writing out boilerplate / test code (something like the sketch below)
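For (c), a hypothetical example of the kind of test boilerplate autocomplete-style AI fills in quickly; the `parse_version` function and its cases are invented for illustration, not from any real codebase:

```python
import pytest

# Hypothetical function under test, made up for illustration.
def parse_version(raw):
    return tuple(int(part) for part in raw.split("."))

# Repetitive input/output cases: tedious to type, easy to autocomplete.
@pytest.mark.parametrize(
    "raw, expected",
    [
        ("1.2.3", (1, 2, 3)),
        ("10.0.1", (10, 0, 1)),
        ("0.0.0", (0, 0, 0)),
    ],
)
def test_parse_version(raw, expected):
    assert parse_version(raw) == expected
```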
We can tell.
I wonder if Microsoft has a model that learns from their own source code. I have tried having Copilot write SQL queries, and they work (without modification) about 10% of the time.
I could believe this.
But how is their OS or any other product even close to functional at that point?
Is it tho?
I’m using Windows daily.
I never said it was good, but it is functional.
That’s like, not a good thing dude
@casmael @Ninjazzon @technology It would be an amazing thing, but there is so little support for workers in America that we all fear the destitution that will result.
Imagine if a worker replaced by AI could still maintain their life through UI; it would actually be awesome to be replaced by machines.
They’re saying it’s not a good thing because code produced by AI today is riddled with problems.
Edit: AI code could explain the unstable experience in some recent updates to Microsoft Teams…
Given that they made a big hoo-ha a few years ago about getting rid of most of their QA, this in combination is a particularly bad look.
That explains a lot…
I do use AI to assist my programming, but I always treat what it suggests as likely highly flawed. It frequently sends me in the right direction but is almost never fully correct. I read the answers carefully, throw many of them away, and never use a solution without modifying it in some way.
Also, it is terrible at handling more complex tasks. I just use it to help me construct small building blocks while I design and build the larger code.
If 30% of my code was written by AI it would be utter trash.
And presumably most developers at Microsoft take a similar approach (all the ‘this explains everything’ comments notwithstanding), so it’s ridiculous that they’re even tracking this as a metric. If 30% is AI-generated but the devs had to throw away 90% of it, that doesn’t mean you could get rid of the developer; they did a huge amount of work just checking the AI and fixing its output afterwards.
This metric is misleading and will cause management to make the wrong decisions.
AI is like a utils library: it can do well-known boilerplate like sorting very well, but it’s not likely to actually write your code for you.
AI is like fill-down in spreadsheets: it can repeat a sequence with slight, obvious modifications, but it’s not going to invent the data for you.
AI is like static analysis for tests: it can roughly write test outlines, but they might not actually tell you anything about the state of the code under test (see the sketch below).
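To make that last analogy concrete, here’s a hypothetical sketch of the sort of outline an assistant drafts; the `Cache` class and both tests are invented for illustration. It runs green while telling you almost nothing about the behaviour that matters.

```python
import unittest

# Hypothetical class under test, made up for illustration.
class Cache:
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

class TestCache(unittest.TestCase):
    # An auto-drafted outline: it passes, but never exercises overwrites,
    # missing keys after a put, or anything else that could actually break.
    def test_put(self):
        cache = Cache()
        cache.put("a", 1)
        self.assertIsNotNone(cache)  # passes trivially, tells you nothing

    def test_get(self):
        cache = Cache()
        self.assertIsNone(cache.get("missing"))

if __name__ == "__main__":
    unittest.main()
```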
Well said. Fully agreed.
Pathetic