Microsoft getting into AI coding

With many major tech companies cutting back on staff amid a potential recession, Microsoft is working with ChatGPT to create programs with AI.

Nadella also said that Microsoft would be applying AI to its Power platform, which allows those with no knowledge of coding to code apps through a visual user interface. “We are making it easier for anyone to streamline repetitive tasks introducing new AI powered features to turn natural language prompts into complex workflows,” Nadella said.

PCWorld: Microsoft Surface sales are tanking, Microsoft says.

Should programmers be worried?

Not in the long run, and not really even in the short run. Seasoned devs will have plenty of work cleaning up after people using code they don't understand: code that superficially works but doesn't scale and/or breaks around the margins.

It has always chafed the hides of executives, even in tech companies, that devs get paid a lot of $ (often more than they do) but do not produce results on imposed schedules, have hard-to-predict timelines and resource requirements, and carry maintenance overhead even then. Executives are always grasping at straws to try to tame the work to behave as they want it to, rather than understand its nature and how to properly flow with it.

Right now there is a lot going on – backlash against the reforms of the pandemic, for example – in a doomed attempt to “return to normal”. On so many levels, that isn’t going to happen, but it will take a year or two of stubbing their toes on reality to figure that out. And I’m not just talking about the fact that the pandemic isn’t over; workers have had a taste of the power they actually wield, the leverage they actually have, in the workplace, and are flexing that newfound muscle. People in general are getting tired of being exploited for more and more life force over the same or less pay, being worked to the bone in the service of unsustainable profits, while producing lower quality work and not delighting customers. People want to feel good about their work, and don’t. All that is coming to a head and will have to be worked out.

AI won’t be a magic fix. Particularly since current AI does little more than offer rote advice, bullshitting its way to an answer when it has low confidence. It is, at best, a bright but misguided noob with hazy judgment. You still have to check and integrate its suggestions. At best it might point you in a (not very creative or out-of-the-box) direction in an area you’re not super familiar with, and even that direction has to be vetted. It is a minor time-saver, perhaps.

It will doubtless improve, but I am dubious that true sentience / creativity / intuitive grasp of the best way forward, design-wise, will just fall out of a sufficient level of compute power. If AI of the sort we now have starts replacing some low-level devs, it will say more about those devs than about the AI. There is some missing je ne sais quoi that we still need before tireless AIs can generate more than fragments of code.

I don’t think this will affect application developers much, if at all. But casual, dare I say citizen, developers could be affected. I could see this being added into Excel and other office apps, where a user could say “make it do this” and the AI would do it for you. That’s mostly repetitive work and a good use of AI, I think.
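To make the “tell the app what to do” idea concrete, here is a purely illustrative sketch, assuming nothing about how Microsoft actually implements it: a hypothetical front end that maps a user's request onto a canned spreadsheet operation. A real product would put a language model here; simple keyword matching stands in for it.

```python
# Toy stand-in for a natural-language front end to spreadsheet operations.
# Keyword matching substitutes for the AI; the function name and the
# vocabulary are invented for illustration.

def run_request(request, column):
    """Map a natural-language request to an operation on a column of numbers."""
    text = request.lower()
    if "total" in text or "sum" in text:
        return sum(column)
    if "average" in text or "mean" in text:
        return sum(column) / len(column)
    if "largest" in text or "max" in text:
        return max(column)
    raise ValueError("Sorry, I don't understand that request yet.")

sales = [120, 90, 310, 75]
print(run_request("Give me the total of the sales column", sales))  # 595
print(run_request("What's the average sale?", sales))               # 148.75
```

The interesting (and hard) part in a real product is the mapping step, not the operations themselves, which is exactly why it degrades into the “superficially works” failure mode discussed above when the mapping guesses wrong.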

The article doesn’t make any claim to cause and effect and you had me extremely worried. If *tech companies* were working to create programs with AI *because of* their staff cutbacks, there’s a serious problem with competency.

As I’ve mentioned before, AI writes bad code. As @bgrommes describes, the code it writes only superficially works. It’s the kind of trash heap you’d get from outsourcing to someone working for pennies. In addition to the quality issues, for undocumented languages like Xojo it struggles to write even functional code.

The things it does are neat, but the folks who have been impressed really need to stop being so aggressively defensive. You need to be a skilled developer to review the output and make good use of it. That’s not success.

Someone has to program / train the AI to NOT turn out garbage

I wonder if some tech CEOs are actual AIs, given their output (canned marketing statements, firing of troves of qualified staff at random intervals, strategic decisions no one understands…). :upside_down_face:

My guess is that AI, in its infancy, would be used to create snippets of code for professional languages such as C/C++, JavaScript, etc. Further learning would eventually produce more competent code, and this would take time. AI is going to make plenty of mistakes during the learning process.

What makes me not trust AI is that the algorithm can be changed for political reasons.

Possibly in the near future, I could see AI being created for something like a help system. If I could replace the Xojo Documentation with a version of AI that I can ask and get information from by merely typing in a chat-like screen, then this would be a helpful game changer for new and professional programmers alike.
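A minimal sketch of that “ask the docs” idea, under heavy assumptions: the entries and the scoring below are made up for illustration, and keyword overlap stands in for the language model a real assistant would sit on top of the actual Xojo documentation.

```python
# Toy documentation-chat helper. DOCS entries are invented one-line summaries,
# not real Xojo documentation text; word overlap substitutes for an AI ranking.

DOCS = {
    "MsgBox": "Displays a modal message box with the given text.",
    "RecordSet.MoveNext": "Advances the RecordSet cursor to the next row.",
    "Database.SQLSelect": "Executes a SELECT statement and returns a RecordSet.",
}

def ask(question):
    """Return the doc entry whose words best overlap the question."""
    q_words = set(question.lower().replace("?", " ").split())
    def score(item):
        name, text = item
        entry_words = set((name + " " + text).lower().replace(".", " ").split())
        return len(q_words & entry_words)
    name, text = max(DOCS.items(), key=score)
    return name + ": " + text

print(ask("how do I move to the next row of a recordset?"))
# RecordSet.MoveNext: Advances the RecordSet cursor to the next row.
```

Even this trivial lookup shows the appeal: the answer arrives in the chat, not after a hunt through pages. The open question is whether a model’s answer can be trusted without the reader checking the real docs anyway.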

Just thinking out loud… :slight_smile:

Sufficiently trained, it's possible it could be used in targeted areas - no question

But reading what has been posted of ChatGPT's output leaves me with the impression that it does nothing more than accumulate and regurgitate; there is no synthesis or extrapolation based on what is “known” from whatever sources it uses

Probably why one assessment of it concluded that ChatGPT is a C+ student

I don’t think AI will have any impact on true developers because it can only reproduce what has already been produced and fed to it!

It will never be able to build something that has never been developed before.

And most commercial applications are closed source so AI cannot learn from these commercial applications.

If using AI to churn out software were so easy, someone would have created an M$ clone or a Window$ clone or a MacO$ clone by now.

But to date, nothing of such complexity has surfaced.

This is just hype and will subside in a few years.

Even with the current hype / hysteria, no one who actually knows development and is honest / not a marketroid is suggesting it would be anything but an aid for, at most, roughing in a very contained bit of code, perhaps a single method, providing more intelligent code completion suggestions and the like. The “FUD factor” comes more from the concern that it will get better and better at doing more and more in a fairly immediate time frame such as the career arc of people entering the field now. But I think there’s a pretty hard ceiling on how far it can advance until it is a true general purpose AI, which this most certainly isn’t. It isn’t just a matter of throwing more compute resources at it and tuning the models better. It’s impressive what it can do, but I’m not sensing we yet have all the ingredients for true sentience, and I believe that is what would be needed to actually and substantively compete with human sentience.

This is more of a philosophical answer/question.

We, as humans, have finite lives, and AI will likely slowly become better and better, much like a child learning to program and improving with experience, which includes making mistakes. In its beginnings, AI will likely make MANY mistakes. Slowly it will get better at programming, which may take years or generations. My guess is that AI will eventually (years? generations?) be better at programming than humans.

Philosophically, is it a human obligation to help AI get better at programming, law, politics, or whatever the topic is? This is the basis of our entire education system: to learn from prior generations and become better at STEM and the social arts, making society better by learning from our mistakes.

When our human bodies eventually pass away (hopefully after long and happy lives), AI will continue to learn, and a piece of ourselves will remain in its code for future generations to learn from. This might help future human generations eventually live in a utopia.

Isaac Asimov wrote a novella in 1957 titled Profession in which students are educated at age 18 on Learning Day using a computer download to their brains in a process called “taping”. The protagonist of the story is one of a tiny percentage of students whose brain is considered incompatible with the learning interface and is sent off to an institute for the simple-minded. But it turns out to be an institute of higher learning in the old style, as we do it today. The point being that any innovation or invention or advances must come from humans, not from old “tapes”.

In the story’s universe, employers actually complain whenever the tapes are updated, as it represents an expense to keep up with the changes. Yet without the changes, society would stagnate.

I think that this story gets at the notion that simply possessing data is not the same as knowing what to do with it or how to improve on it. ChatGPT is, at best, an idiot savant, a good imitator and bullshitter that is regurgitating knowledge it has without itself having any understanding of the info in the sense that humans do. The most it might ever do is accidentally make (or suggest to an attentive human) a connection that no one has yet followed up on.

Regurgitate is a good word for what it does
It's not doing any synthesis of new thoughts from everything it “understands” or making new inferences from that understanding

Sometimes I think the likes of ChatGPT are impressive to us only because we’ve let the bar drift so low in terms of the intellectual abilities of the general populace. Does the fact that ChatGPT can hold its own against a human writing a college essay say more about ChatGPT, or about the human? Can anyone here honestly say that prior to college – and probably even IN college – they had any tutoring on logical thinking or the spotting of logical fallacies? About epistemology – the theory of how one comes by knowledge and determines whether it is true? About how to research in general?

If anything, Google and Bing, much less ChatGPT, arguably have made us poorer researchers rather than better ones. We have access to more than a library would have provided a few generations ago, but how much rigor and skepticism is typically applied to the information once we have it in front of us?

Reminds me of that cartoon where the protagonist seizes on the first search result that even slightly supports their preconceived notion on the topic of interest …

None before
But required classes included statistics and that covered logic and the various fallacies that people lean on

Yes - high school was VERY good this way
My kids less so
Once they were allowed to use the internet for research I had to teach them how to tell whether a search result from Google was or wasn't an authoritative reference
And their school did also talk to kids about this subject
I don't know if that still occurs

Far too many people say “well, Google said…” based on the first hit Google shows them :frowning:

Not formally, before or in college … but it was implicit in some of the coursework in both high school and college. One could not function effectively without it in a physical science major or career.


I asked it to write a Xojo desktop app to access a MySQL database. Here is its response.

Here’s an example of a simple Xojo desktop application that connects to a MySQL database and retrieves data:

Dim db As New SQLDatabase

If db.Connect("DRIVER={MySQL ODBC 8.0 Driver}; SERVER=localhost; DATABASE=testdb; UID=root; PASSWORD=root;", "", "") Then
  Dim rs As RecordSet
  rs = db.SQLSelect("SELECT * FROM customers")
  Do Until rs.EOF
    MsgBox(rs.Field("Name").StringValue)
    rs.MoveNext
  Loop
  db.Disconnect
Else
  MsgBox("Error connecting to database")
End If


This code uses the SQLDatabase class to connect to a MySQL database running on localhost and named testdb, using the username root and password root. If the connection is successful, a RecordSet is created and used to execute a SELECT statement to retrieve all rows from the customers table. The SQLSelect method returns a RecordSet containing the results of the query.

The RecordSet is then looped through to display the Name field of each customer. If the connection to the database is not successful, an error message is displayed.

Finally, the Disconnect method is called to close the connection to the database.