Recently there’s been a surge of excitement around ChatGPT. I’ve made a few posts expressing my apprehension about recommending it as a learning resource, and to avoid rewriting the same thing over and over, I took to my blog to gather my thoughts and try to be reasonable.
I’ve published my post and thought I’d open a discussion here to see what other developers think. I’ve heard it’s better at writing better-known languages; my post is specific to what I’ve seen regarding Xojo. What are your thoughts?
Tim, what is wrong with the answer where you asked it to explain classes in an abstract way? That looks to me exactly like Xojo code when it’s written out plainly. The file formats don’t count, but that’s what the compiler gets after the IDE produces the code from the project.
Or what am I missing?
We don’t define classes in a plain-text manner in Xojo. The code is clearly lifted and tweaked from an example in another language, and a human would know it’s useless to a Xojo learner. And yes, it also annoyed me that it wasn’t even the actual plain-text project format. You can’t put that code block anywhere in Xojo, by paste or text file, and have it actually work as a class.
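For illustration, this is the kind of plain-text class definition ChatGPT tends to produce (a hypothetical `Person` class, not copied from the actual transcript):

```
Class Person
  Property FirstName As String

  Sub Constructor(firstName As String)
    Self.FirstName = firstName
  End Sub
End Class
```

It reads like Xojo, but there is no `Class ... End Class` declaration syntax in the language. A class is a project item you create through the IDE (Insert > Class), with properties added through the Inspector; only the method bodies are typed as code.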
How else should the AI display the code? Should it instruct you how to add each property in the GUI? The code is correct, and if users don’t understand how to enter it into the IDE, then that’s Xojo’s fault, not the AI’s.
Many already agree, even here, that the AI is not meant for beginners but for those who are able to validate what it suggests.
I don’t believe I’ve ever copied something wrong from SO, and I use it a lot.
Same with AIs: you have to read the comments, or otherwise verify that the suggestion is valid.
If a programmer doesn’t understand that, they need to be taught better or to leave it alone.
Not specifically SO, but from where Joe & Aaron work.
Don’t get me wrong.
I use SO just like anyone, but I try hard to review the code carefully before just blindly copy-pasting it.
And that’s where I see this AI-generated code as being problematic, as Tim does.
If new coders use it thinking it generates great code, or even passable code, it may lull them into a false sense of security and expertise.
For one thing, ChatGPT kept including incorrect information in its explainer, sometimes mixing up basic facts about the history of its own technology (factual inaccuracy has been an ongoing problem for the program).
A robot could be writing articles for news sites tomorrow. Would the articles be any good? Based on this experiment, the answer is: no, probably not. They would be pretty boring and, given ChatGPT’s penchant for making shit up, would have to be heavily fact-checked.
So if it gets prose wrong, what about code?
_would have to be heavily checked_
And to do that you have to know what you’re doing.
So where’s the leg up with an AI writing code?
Well, taking part in AdventOfCode, I can tell you that coming up with the right algorithm is often the smaller part; I waste a lot of time just writing out the code (I’m a terrible typist). So if the AI can help me with that, I’d already find it quite useful.