I spent 45 minutes trying to get ChatGPT to give me a quick fix for something I was querying in M.
It ended with me telling ChatGPT that if it worked for me it would be fired, because it kept trying to re-optimise my query (causing syntax and load errors) and then “fixing” those errors by ignoring my query’s criteria.
I ended up going old school and taking an extra 30 mins to just figure it out myself. Now that I know how it’s done, it’s surprisingly easy to understand.
So either I should take that as a compliment, or ChatGPT just sucks at Power Query.
It probably learned, though. If anyone gets ChatGPT’s help with transform queries involving multi-level filtering criteria, that’s because of my suffering.
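For what it’s worth, the shape of the fix was just nesting the conditions inside a single `Table.SelectRows` step. A sketch with made-up table and column names (your criteria will obviously differ):

```m
// Hypothetical multi-level filter: keep rows where Region is "EMEA"
// and either Sales exceeds 10000 or Status is "Priority".
let
    Source = Excel.CurrentWorkbook(){[Name = "SalesTable"]}[Content],
    Filtered = Table.SelectRows(
        Source,
        each [Region] = "EMEA"
            and ([Sales] > 10000 or [Status] = "Priority")
    )
in
    Filtered
```

The trick is that one `each` expression can hold the whole nested condition, so you don’t need a chain of separate filter steps that ChatGPT can “optimise” into the wrong result.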
Does ChatGPT really learn from user inputs? I thought it always restarted from the same base model.
It will eventually incorporate user inputs into the model. So no, it won’t learn in real time from other users, but at some point those inputs will be fed back into it.
In each session, the last several thousand words (from both the user and the AI) are kept in a context buffer and fed back as additional input to the neural network. But I don’t think ChatGPT lets you curate which responses go into that buffer, so you can’t really “train” it in any meaningful sense. If you want that kind of control, use LLaMA.
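Since we’re in Power Query land anyway, here’s a toy sketch in M of what a rolling context buffer does: keep only the newest messages that fit a fixed word budget. Purely illustrative — real models count tokens rather than words, and ChatGPT’s internals aren’t public.

```m
// Toy sketch: walk the transcript from newest to oldest, keeping
// messages until the word budget is used up (made-up messages).
let
    Messages = {"user: hello", "ai: hi there", "user: one more thing"},
    MaxWords = 6,
    Kept = List.Accumulate(
        List.Reverse(Messages),
        [Buffer = {}, Total = 0, Done = false],
        (state, msg) =>
            if state[Done] then
                state
            else
                let
                    Words = List.Count(Text.Split(msg, " "))
                in
                    if state[Total] + Words > MaxWords then
                        [Buffer = state[Buffer], Total = state[Total], Done = true]
                    else
                        [Buffer = state[Buffer] & {msg},
                         Total = state[Total] + Words, Done = false]
    ),
    // Restore chronological order for the final buffer
    Context = List.Reverse(Kept[Buffer])
in
    Context
```

Everything before the buffer’s cutoff is simply invisible to the model on the next turn, which is why long sessions “forget” their beginnings.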