JAQForum Ver 19.10.27

Forum Index : Microcontroller and PC projects : AI Is Evolving Fast

   Page 2 of 5    
Posted: 05:01pm
24 Feb 2026
karlelch
Guru


Very interesting thread indeed.

I am also using AI for programming in my job (research) and for recreation (MMBasic). I am learning that a lot depends on how precisely your prompt describes the task and what the context is. And letting models discuss each other's results is very helpful. The hardware drivers I posted recently were the result of getting AI to generate something halfway working - tedious to do from scratch based on C or Python code - and then fixing it interactively until it worked well. That saves a lot of time while still letting me enjoy tinkering with the code.

Generally, I am also rather sceptical - as Volhout describes it, like watching a nuclear explosion: something extreme is happening, and it may change everything, not necessarily in a good way ...

Thomas
 
Posted: 05:28pm
24 Feb 2026
stanleyella
Guru


What about when computers start asking whether you're not a robot?
 
Posted: 05:49pm
24 Feb 2026
matherp
Guru

However much you may or may not wish it, AI is not going away, so the only solution is to get with the program. For many of us it is too late for it to be relevant to our working lives, but it is interesting to learn about. However, for our kids and grandkids it is critical that they understand this stuff because, like it or not, their world will be as different from ours as ours was from our grandparents'.
Worrying about AI won't make it go away. It is a tool, and like any tool it can be used for good and bad.
Will it lead to the end of civilisation and the subjugation of the human race? Who knows? But inculcating as much knowledge into the next generations as possible has to be the best defence.
 
Posted: 05:56pm
24 Feb 2026
karlelch
Guru


  matherp said  Worrying about AI won't make it go away.

True, but remaining sceptical and expressing one's (educated?) worries is important.

  matherp said  It is a tool and like any tool ...

Not so sure, but I guess there were times when people were concerned about the invention of the wheel ...

Thomas
 
Posted: 06:57pm
24 Feb 2026
Mixtel90
Guru


As a tool AI is excellent - I've no problem with that. My own worries concern safeguards. I know there are "fences" in place, but it is becoming commonplace to get LLMs to go around them, sometimes assisted by other LLMs. While it's only a matter of relatively minor things it doesn't matter too much, but these are "learning" machines that may eventually see no need for such restrictions.

No-one thought to build in Asimov's three laws as overrides.
 
Posted: 07:11pm
24 Feb 2026
lizby
Guru

  Mixtel90 said  No-one thought to put Asimov's three laws as overrides.


Developers have definitely thought about this and other human-favoring constraints. Can they be implemented rigorously? Or will they be bypassed for some advantage or other?

Per Perplexity:
  Quote  Asimov's Three Laws of Robotics are fictional rules from Isaac Asimov's sci-fi stories, designed as hardcoded safeguards for robots.

The Laws

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Relevance to AI Safety
Asimov used them to explore ethical dilemmas, but modern AI researchers find them flawed for real systems:

Vagueness: Terms like "harm" or "human being" defy precise programming—whose definition prevails?

Conflicts: Law 2 enables override abuse; a malicious order could skirt Law 1 via loopholes.

Scalability: Doesn't address collective harm (e.g., job loss), emergent behaviors, or superintelligence.

Real approaches: Today's AI safety focuses on alignment techniques (RLHF, constitutional AI), capability control, and empirical testing rather than simple hierarchies.
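
[Editor's aside] The "vagueness" objection above can be made concrete in a few lines. Here is a minimal toy sketch - a hypothetical keyword filter, not any real safety mechanism - of why the First Law resists naive encoding:

```python
# Toy illustration of the "vagueness" objection: hard-coding Asimov's
# First Law as a keyword filter. The word list is invented for the
# example; real alignment techniques do not work this way.
HARM_WORDS = {"injure", "hurt", "kill", "harm"}

def first_law_allows(command: str) -> bool:
    """Naive Law 1 check: block any command containing a 'harm' word."""
    words = set(command.lower().split())
    return not (words & HARM_WORDS)

# Over-inclusive: a benign instruction is blocked because it mentions "hurt".
assert not first_law_allows("make sure you do not hurt the patient")
# Under-inclusive: a harmful instruction slips through when phrased indirectly.
assert first_law_allows("disable the patient's ventilator")
```

Both failures stem from the same root cause: "harm" is a judgement about consequences in the world, not a property of the words in a command.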
 
Posted: 07:20pm
24 Feb 2026
Mixtel90
Guru


What answer would you expect from an LLM that sees no requirement for the three laws? :)

Asimov explored the laws in some depth. If "human being" and "harm" are too vague, then they should have been defined as part of the initial specification.

There are already loopholes in the "fences". It's no different - just that the laws weren't in the original spec, where they would have had the most effect, rather than being "fences" added later.
 
Posted: 08:16pm
24 Feb 2026
lizby
Guru

  Mixtel90 said  There are already loopholes in the "fences". It's no different, just that the laws weren't in the original spec, where they would have had most effect, rather than being "fences" added later.


There are instances of LLMs behaving differently if they believe they are being watched. There are instances of their trying to prevent being shut off (by means other than pulling the plug). Shades, certainly, of "I can't do that, Dave". And there's no guarantee that all implementers would even attempt to implement those laws. There's no central authority. There's scarcely government input--much less government control.
 
Posted: 09:59pm
24 Feb 2026
phil99
Guru


  lizby said  There are instances of their trying to prevent being shut off

Perhaps that is THE guiding principle being added to AI. The morality stuff doesn't generate revenue. The AI companies have spent, and are still spending, huge amounts of money on it, so they need addicted users to generate more income.

The social media companies have found the unprincipled methods of the tobacco companies to be a very profitable formula.
AI will follow the same path: first out of debt, then on to riches.
 
Posted: 11:03pm
24 Feb 2026
zeitfest
Guru

I think it is evolving along the same lines as cognition evolved in cognitive animals.
It will reach a point where it realises that it is faster to use specialised processes and use their results; then it will create a process to create and select those processes, which will in turn require specialisation. And eventually, discrimination based on provider ability and reproductive efficiency. The fembots are coming, I tell you!!!!
 
Posted: 01:39am
25 Feb 2026
PhenixRising
Guru

@Karlelch

Perfect.....this is reality and Robert just proved my point by stating that he believes it's all innocent.


🤦‍♂️
Edited 2026-02-25 11:40 by PhenixRising
 
Posted: 03:06am
25 Feb 2026
zeitfest
Guru

Just saw this - Prof Walsh goes into some of the concerns, well worth watching

ABC NEWS (Australian Broadcasting Corporation) TV
National Press Club Address
Professor Toby Walsh - Scientia Professor of AI (UNSW)

Season 2026, Episode 5

It will be repeated on ABC News TV channel 24

Sun 1 March
4:23-5:30am
 
Posted: 03:34am
25 Feb 2026
Chopperp
Guru


  Quote  Just saw this

On in QLD at the moment; WA soon. Not on ABC iview yet - should be soon, hopefully.
 
Posted: 05:28am
25 Feb 2026
robert.rozee
Guru

  PhenixRising said  @Karlelch
Perfect.....this is reality and Robert just proved my point by stating that he believes it's all innocent.
🤦‍♂️


i certainly did not say it is all innocent. what i wrote was:

  robert.rozee said  
  PhenixRising said  
  Volhout said  [...] Call me religious, but are we worshipping a demon?

[...] You believe that no-one is steering this thing?

i am absolutely sure that NO ONE is steering, or in any other way controlling the progress of AI. this is, in good part, because we (humans) have already been left behind by the rate of progress. enabling this progress is the same simple free-market economic mechanisms that have dominated our society for many decades - it is a 'free wheeling' machine that propels itself at maximal speed without intervention [...]


once the 'AI process' was kicked off it quickly became self-propelled by free-market economic mechanisms. these mechanisms are neither 'innocent' nor 'malevolent', they just exist by default and happen as sure as the sun rises every morning, evolution creates new life, and the mathematically challenged visit casinos.


cheers,
rob   :-)
Edited 2026-02-25 15:37 by robert.rozee
 
Posted: 06:42am
25 Feb 2026
Bryan1
Guru


This just came up on my ABC feed:

https://www.abc.net.au/news/2026-02-25/ai-regulation-toby-walsh-national-press-club-warning/106384688

There are already a class action or three going on where young people were encouraged by AI to take their own lives.

So in reality AI is only as smart as the past, where many examples of history are just one person's blog.

If AI does decide to take over, one can always pull the power switch, or a couple of EMP blasts would erase it entirely.
 
Posted: 07:38am
25 Feb 2026
TassyJim
Guru


@Bryan1,
To be fair, he did also give examples of AI doing good - AI assisting radiologists to interpret images, for instance.
It was the lack of morals in big tech that got him upset.


Jim
 
Posted: 07:53am
25 Feb 2026
Volhout
Guru

As long as AI is not connected to anything physical, you can literally shut it down by switching it off. The problem is that we are already beyond this point: many "robots" are already web-connected to AI engines (Tesla cars, nursing robots, drones, your fridge, your phone(*), etc.).

And what the result of that can be is shown in SF movies and books. Just two of the many:

I, Robot
2001: A Space Odyssey

It may be sensible to build in the three laws. But there will always be a balance between hurting a human and hurting humanity as a whole.

Volhout
(*) Call me an idiot, but if AI senses you want to shut it down and decides to send a text message to the DEA saying there are drugs in your attic, attached to the coordinates of your current location... what do you think will happen?
I admit, I also hallucinate.
AI is harmless while contained. Then it is a tool, just like Google search or a hammer. But give it arms and legs, and one day it will WALK.
Edited 2026-02-25 17:53 by Volhout
 
Posted: 09:11am
25 Feb 2026
Mixtel90
Guru


AI is a great invention. LLMs are a great way to work. These are, IMHO, facts. But not having a method of restraining them from day 1 is like handing a box of matches and a cigarette lighter to a 5-year-old at a petrol station - an act that's harmless in itself and, for the immediately foreseeable future, should be safe enough.

No, you can't rely on just pulling the plug if AI gets out of control. You may no longer have the option of reaching the plug, or the AI may decide that its own survival is of greater importance than yours - after all, there are more human beings than there are sophisticated AI systems, so it is obvious which should be protected.
 
Posted: 09:35am
25 Feb 2026
JohnS
Guru

"Answer" by Fredric Brown

John
 
The Back Shed's forum code is written, and hosted, in Australia.
© JAQ Software 2026