Xojo sends a clear message!

It’s not often that I agree with @MarkusWinter, but here I do. I’m a little surprised at how many programmers are in denial about what’s on the horizon.

If that was a response to Markus, he was referring to the strategy game Go rather than the programming language. As Markus said, the AI was only given the rules of the game; it decides for itself which moves to make.

Another interesting event was a member of the public asking ChatGPT a question in Bangla, a language it had never been trained in. ChatGPT taught itself Bangla in order to answer the question. This was a decision ChatGPT made itself, and the unexpected behaviour was a surprise even to its creators.

I have been listening to podcasts featuring interviews with some of the leading lights in the industry, and here are just a couple of their predictions for the next decade (or probably less):

  1. You will be able to upload the text of your own novel and AI will render a Hollywood-quality movie from it for you.

  2. Every child will be able to have a personal tutor that will not only have vast knowledge of each subject but will be able to ascertain whether the child is an auditory learner, a visual learner, etc., and tailor the lessons to suit that particular child. It will be able to assess whether the child has grasped the topic at hand and is ready to move on, far better than any human teacher with a class of 30 pupils could.

Here’s an AI song I sent to two musician friends without disclosing the origin. They both loved it, but then went a bit green when I told them it was created and performed using AI.

And I was talking about development, not playing around. Because development of medical software is engineering, and not something AI can do. And as long as that hasn’t changed, I will have to do my JOB.

“As the technology currently stands” is my qualifier. Give it a few more years and it might actually be good enough to do that. But considering it can currently pull GitHub repos out of thin air (i.e. they don’t exist), it’s not a reliable enough tool. Again, that will change.

Search technology wasn’t very good at the beginning either. It got better until it became a verb. Machine learning will most likely go through the same stages.

Google search IS still shit.

It gives you “stuff”.
Not answers.
And definitely not “correct answers”.
It’s getting better, but as long as it acts on rankings that are still largely popularity contests, getting accurate answers is not its forte.

Hey, to be fair: it is a search engine and not a programming helper. Knowing your language and not needing to search is the best you can do.

This has been DONE by AI for years now.

It is? In my experience it’s gotten continually worse for at least the last ten years.

It’s so bad lately they introduced a new feature, “Web” search :man_facepalming:

Hahaha, cool. Not at Healthineers, and not at GE radiology systems. Only two small companies we are working for.

Healthineers seem rather more confident than you in the capabilities / future of AI.

Yes, because it learned how treatment decisions are made, but not how to develop products. That’s exactly the problem. We use AI for DICOM analysis, for example, but the DICOM analyzer is not developed by AI. That’s exactly the difference, and what I said.

Understanding that difference is extremely important for understanding what AI is able to do and what it is not. AI cannot develop a cardio analysis system with electronics, firmware, software, housing and so on. But AI can help to analyze what the heart is doing and what condition it is in.

So it is also important to know how to use AI in your software. That starts with analyzing what it can do and what it can’t. For example, it can help to generate boilerplate, but for now it isn’t helpful for generating the boilerplate for entire hardware communication. That fails: too many errors and misleading results inside. That’s exactly the entire point.
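
To make that split concrete, here is a minimal sketch of the kind of boilerplate an assistant does generate reliably today - plain data shuffling with no device semantics. All names here are hypothetical, invented purely for illustration, not from any real project. The hardware-communication layer underneath (framing, timing, checksums, vendor quirks) is exactly the part where, as said above, generated code still fails:

```python
# Hypothetical, assistant-friendly boilerplate: a plain record type with
# JSON round-tripping. No timing, checksums or device quirks involved.
from dataclasses import dataclass, asdict
import json


@dataclass
class Measurement:
    patient_id: str       # hypothetical fields, for illustration only
    heart_rate_bpm: int
    recorded_at: str      # ISO 8601 timestamp, kept as a string

    def to_json(self) -> str:
        # Purely mechanical serialization - well-trodden territory.
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, raw: str) -> "Measurement":
        # Rebuild a Measurement from its JSON form.
        return cls(**json.loads(raw))


if __name__ == "__main__":
    m = Measurement("P-0001", 72, "2024-05-01T10:15:00Z")
    assert Measurement.from_json(m.to_json()) == m
    print(m.to_json())
```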

You don’t seem to WANT to understand that AlphaGo has already proven that AIs can teach themselves. That they can come up with NEW ways that nobody ever taught them.

AlphaGo was the big breakthrough in AI research on the path to true machine intelligence - NOT ChatGPT.

ChatGPT, for all its impressive feats, has the basic problem that it is a GENERAL machine intelligence trained on a plethora of contradictory data. Ever heard of “Crap In - Crap Out”? No wonder that it starts to “hallucinate”, or confidently tell you untruths - after all, so do you, so I guess that makes ChatGPT actually more “Human” … :grin:

If your OPINION is contradicted by FACTS, then you need to change your opinion, NOT ignore the facts.

Even ChatGPT can accept more input and change its mind … it says a LOT about Humans that they would rather not change their minds. They don’t lack intelligence, they don’t lack imagination - they just DON’T WANT TO. That is a specific type of stupidity that has baffled and fascinated me for as long as I can remember … but among Humans it seems to be a desirable trait, as Trump amply demonstrates. Shows “conviction”. Shows “certainty”. Shows “leadership”. :man_facepalming: :man_facepalming: :man_facepalming:

Sooner or later EVERYONE comes round … :wink:

You don’t accept that it can learn but cannot build products by itself. IT CANNOT. That may be different in two decades, or in one. But not now. I know you have a problem accepting facts. But this is a fact, and it will be a fact for a long time from now on.

So much agreed!

That’s not how we do it in the States. :slight_smile:

This is extremely hard for humans (I’m looking at you, genius!), so how can you expect it from products (ChatGPT)?

It’s not hard if you are interested in what’s real.

I do understand why my mother always emphasised appearances - she came from the city to a small, very Catholic village and had to fight for acceptance, and had a mother-in-law who was a nightmare (one of the few things that everyone could agree upon). So I do get that she wanted to fit in and give nobody a reason to think poorly of her.

But I hated it. I hated that my grades mattered, how I behaved, how I dressed - but never how I felt. I hated the PRETENDING.

So I didn’t care whether something was good or bad, pretty or ugly - if it wasn’t REAL, if it was just pretence (like religion), then it was worthless to me.

That also applied to me. I often left people flabbergasted when I changed my opinion halfway through a discussion - because THEY discuss to WIN, but I discuss to LEARN. What’s the point of defending something you’ve realised is wrong? That’s just lying.

So no, changing your opinion isn’t difficult.

And in Science it is the mark of a good Scientist - anybody holding on to their disproven theories is considered a bad Scientist … a Fool in common parlance.

Which is why I also tend to stay away from groups - because groups tend to have a feedback loop where people with similar opinions come together, reinforce each other’s opinions, and then drive away anyone who disagrees with them - which just means that opinion has solidified into a belief that is no longer to be challenged. The first step to a cult. And that has happened not just on TOF (much lamented here), but here too (that this site is not fundamentally different from TOF … the irony is mind-boggling).

Absolutely agree!

I’ve been building systems all my life without a coherent description of what the customer wants. It’s just that I’ve been good at figuring out for customers what they can’t figure out for themselves. Is this something an AI could never do? Eh – never say never. But no, I don’t think that makes me a fungible commodity, for the foreseeable future.

For now it completes code for me better than IntelliSense. Sometimes it guesses whole code blocks – nearly always wrong and with (for the moment) insufficient insight into the DB I’m working with and my own APIs (sometimes in the same code file FFS). I have gone from feeling it’s barely worth the $100/yr I’m paying for it, to thinking it’s worth it. In another year I’ll consider it an indispensable productivity booster.

I think that the biggest problem people have with LLMs is that they have stumbled on a number of important aspects of human thinking (imagination is a biggie … it takes imagination to respond to “write an essay on topic x in the style of Hemingway”). Poor humans – our thought process was not THAT mysterious and ineffable and special after all. I have little doubt that they will eventually achieve sentience. I think they are way too inefficient to scale that far right now, but it will come eventually. Ten or 50 years from now (hard to tell exactly), we’ll be debating AI rights alongside human rights, as they will in fact achieve full sentience.