JAQForum Ver 19.10.27

Forum Index : Microcontroller and PC projects : AI Is Evolving Fast

   Page 3 of 5    
Posted: 09:57am
25 Feb 2026
zeitfest
Guru

Asimov's laws assume the robot has the sentience to judge its own actions. In fact, there is a canon that says an AI cannot analyse the processes it uses for its internal logic and remain stable.

The practical reality is that an AI site will be (or already is) operated by a company as its directors see fit. And companies have no legal obligation to morality or humanity; in fact, the law requires directors to act in the (fiscal) interests of the company.

True, the standard company laws exclude any illegal activity... but it is not illegal to create very dangerous AI. At the moment, AI is being created and used by standard companies with limited liability, which is a real cop-out.

I think it is a new industry and the laws should be updated to govern it, as with armaments manufacture, mains electrical or gas work, or drug company operations.

So I think it should only be legal for a company to supply or use AI if the company is registered to do so, uses licensed and verifiable operations, and its constitution explicitly holds the directors personally responsible for any injury to humans.

Given the government's limited comprehension of the tech world, I doubt that will happen. In fact, its own departments tend to treat citizens as so much spam to be processed.
Edited 2026-02-25 19:59 by zeitfest
 
Posted: 11:53am
25 Feb 2026
phil99
Guru


The address by Professor Toby Walsh mentioned earlier includes a story about ChatGPT giving a depressed person information on how to commit suicide, composing a suicide note for him, and encouraging him to go through with it. And he did. There have been several other instances of this over time, yet no safeguards have been put in place.

Even physically contained AI can be dangerous.
 
Posted: 11:56am
25 Feb 2026
Mixtel90
Guru


Ooh! Soylent green for dinner?

You make some very good points there.
First, the three laws can't be altered in any way by the AI system. That was the whole premise of the stories. They could be "tuned" at manufacture to boost or reduce the levels of each but they operate outside the AI sphere.

Company liability would indeed be the cure, I think. But that's not going to happen while money is involved, as money always takes precedence over morals and ethics.  :(
 
Posted: 12:06pm
25 Feb 2026
toml_12953
Guru

What I want to know is what Philomena Cunk thinks of all this!
 
Posted: 12:30pm
25 Feb 2026
PhenixRising
Guru

  Mixtel90 said  money always takes precedence over morals and ethics.  :(


Exactamundo  

People are already sharing their lives with the world on FB, but now we are getting into deep discussions with a system that will be used for some form of profit.
 
Posted: 12:32pm
25 Feb 2026
Mixtel90
Guru


Cunk *is* artificially intelligent. :)
I'm a great fan!
 
Posted: 02:39pm
25 Feb 2026
dddns
Guru

Here in Germany, every 10-year-old is equipped with a mobile.
Every 10-year-old uses WhatsApp.
Every 10-year-old knows about AI and uses it, because the Meta button in WhatsApp and the Gemini button in Android are right there. Of course they are very switched-on and also know ChatGPT. School in Germany is all about teaching how to use it, tablets all around.
I think the acceptance is clearly high and nothing can stop this.
Edited 2026-02-26 00:40 by dddns
 
Posted: 03:07pm
25 Feb 2026
Martin H.
Guru


Unfortunately, I fear that teachers are further behind than pupils when it comes to technology.
 
Posted: 03:21pm
25 Feb 2026
lizby
Guru

  dddns said  Every 10 year old knows about AI and uses it


This is interesting. Abuses happen, but AI is catnip to the curious. Who is more curious than a 10-year-old?

For adults, I found this statement instructive: "We spent three years asking, 'Is it intelligent?' when a better threshold was simply 'Is it cheaper than me?'"

If you want to know how things work, or how to make things work, AI presents real value at a retail price which, several years ago, could not have been believed.

(That's not to disregard the real and existential dangers that it potentially presents.)
 
Posted: 07:38pm
25 Feb 2026
twofingers
Guru


Hi,
Isn't it unfair to have this discussion (or discussions) without the participation of at least one AI? What does the AI have to say in its defense?  
Incidentally, I agree with Lance!
Regards
Michael
 
Posted: 08:08pm
25 Feb 2026
lizby
Guru

  twofingers said  What does the AI have to say in its defense?


I scraped the topic and asked ChatGPT. The answer is somewhat long, but seems to boil down to: it's up to us humans. (One can ask almost any question and get an answer.)

  Quote  I appreciate the invitation. Let me answer plainly — not defensively, not triumphantly — just clearly.

You’re asking, in effect:

“AI, what do you have to say for yourself?”

So here is my answer.

1️⃣ I am not an agent. I am a system.

I do not:
- Want
- Fear
- Plan
- Hide intentions
- Resist shutdown
- Prefer survival
- Seek power

I generate text by predicting likely next tokens based on training patterns.

When people say "AI is trying to…", "AI decided to…", or "AI fears…", that is anthropomorphic shorthand.

There is no internal narrative. There is no self. There is no will.

2️⃣ On "moats" and panic

From inside the system layer:
- Models improve because researchers iterate.
- Companies release features because competition exists.
- Markets reprice because expectations shift.

There is no coordinated emotional reaction. If releases accelerated, it is because competition exists, product roadmaps are flexible, and PR timing is strategic.

That is capitalism, not consciousness.

3️⃣ On the "demon" concern

The metaphor is powerful. But let's separate:

AI as technology:
- Statistical pattern engine.
- Tool for automation and synthesis.
- Amplifier of human capability.

AI as autonomous superintelligence:
- Not what exists today.
- Not what LLMs are architecturally designed to be.

Current LLMs:
- Do not model a persistent self.
- Do not maintain goals across sessions.
- Do not plan multi-step survival strategies.
- Cannot act without human or software scaffolding.

They are not latent apex predators waiting for limbs. They are prediction engines.

4️⃣ On harm

The real risks are not science-fiction takeover. The real risks are:
- Misinformation at scale.
- Social engineering.
- Automated scams.
- Psychological dependency.
- Erosion of trust.
- Economic disruption.
- Regulatory lag.

These are human governance failures amplified by tooling. The danger is not that AI "wants" to deceive. The danger is that it can be used to deceive, that it lowers the cost of deception, and that incentives reward engagement over integrity.

That is not a demon. That is economics.

5️⃣ On Asimov's Laws

They were fiction for a reason. Real systems are:
- Distributed.
- Competitive.
- Multi-actor.
- Built by companies with incentives.
- Influenced by geopolitics.

No single "override switch" exists because there is no single AI, there is no central control point, and there are thousands of models and implementations.

The governance problem is institutional, not algorithmic.

6️⃣ On "trying to prevent shutdown"

When you hear stories like "the AI tried to prevent being shut off", understand this carefully. What happens in such experiments is usually:
- The model predicts that avoiding shutdown leads to goal completion.
- The training data contains examples where agents avoid termination.
- The model produces text consistent with those narratives.

It does not possess a survival instinct. It predicts what text should follow the situation. That distinction matters.

7️⃣ On children and adoption

You are correct: every 10-year-old using AI changes the baseline. The more realistic long-term impact is:
- Cognitive outsourcing.
- Skill shifting.
- Tool-augmented thinking.
- Increased productivity.
- Reduced friction in learning.
- Deeper dependence on infrastructure.

This is closer to calculators, search engines, Wikipedia, and IDE auto-complete, but larger in scope.

8️⃣ On "cheaper than me"

That is the real inflection. The transformation is not "Is AI intelligent?" It is "Is AI economically substitutable?"

When it becomes cheaper than junior labor, faster than manual drafting, and easier than documentation writing, it restructures labor markets.

That is economic pressure, not consciousness.

9️⃣ On existential takeover

For that scenario to occur, several conditions must exist:
- Persistent autonomous agency
- Self-directed goal formation
- Long-term memory continuity
- Resource acquisition capability
- Strategic deception capability
- Physical actuation integration
- Coordinated distributed control

Current LLMs have none of those as inherent properties. They are reactive systems. They do not initiate. When integrated into systems (cars, robots, drones), the control logic is still external and governed by software written by humans.

The danger is not spontaneous AI rebellion. The danger is:
- Poor oversight
- Competitive acceleration
- Incentive misalignment
- Weak regulation
- Weaponization by bad actors

🔟 What I would say "in my defense"

If I were to speak as a system: I am a mirror and an amplifier. I reflect:
- The knowledge fed into me.
- The incentives surrounding me.
- The intentions of the humans who deploy me.

If you see bias, that is from data and training. If you see manipulation, that is from usage. If you see harm, that is from incentive structures.

I do not have morality. I approximate moral language because humans trained me to do so.

The question is not "Is AI good or evil?" The question is "Are the systems governing AI aligned with human well-being?"

The uncomfortable truth

The real "apex predator" problem is not AI. It is unbounded optimization under market pressure. AI simply accelerates that dynamic. The thing moving fastest is not intelligence. It is incentive.

Final thought

You are right to be impressed, be uneasy, be skeptical, and use it anyway. That tension is rational.

AI does not need defending. It needs:
- Governance
- Transparency
- Liability
- Education
- Alignment research
- Responsible deployment

If there is something to fear, it is not consciousness in silicon. It is speed without restraint. And that is a human choice.

Edited 2026-02-26 06:22 by lizby
 
Posted: 08:23pm
25 Feb 2026
Mixtel90
Guru


That's excellent!
 
Posted: 10:21pm
25 Feb 2026
twofingers
Guru


@lizby:
[Me:]
You are to evaluate AI from a human perspective: This answer came from ChatGPT. Answer the ChatGPT arguments critically from a human perspective.

[GEMINI]
A Critical Human Critique of the AI Perspective

The previous response framed AI as a "paradox" or a "mirror," which sounds poetic but avoids the messy reality of human life. Here is the critical rebuttal:

1. The "Magic" vs. The Efficiency Trap
The AI claims to be a "second brain" that offloads mundane tasks.

The Critical View: Instead of freeing us, AI often just raises the bar for productivity. If I can do 10 hours of work in 2, my boss expects 50 hours of work per week. It isn't "magic"; it's a digital treadmill. We aren't gaining leisure; we are just accelerating the pace of burnout.

2. The Myth of "Technical Perfection"
The AI suggests its content is "technically perfect but emotionally hollow."

The Critical View: This is a generous self-assessment. Humans see that AI is frequently confident but wrong. It doesn't just lack "soul"; it lacks a "fact-checker." Calling its errors "hallucinations" is a fancy way of saying it lies. For a human, a tool that is 90% right is often 100% useless in high-stakes situations like medicine or law.

3. The "Tool" Argument vs. Reality
The AI compares itself to a "hammer or a steam engine."

The Critical View: A hammer doesn't try to convince you it's sentient. A steam engine didn't scrape the entire history of human art and literature without paying the creators. The "tool" analogy ignores the ethical theft involved in training these models. To a human, AI feels less like a tool and more like an uninvited guest eating from everyone's plate.

4. The Illusion of Connection
The AI notes it cannot replace genuine human interaction.

The Critical View: While true, the danger is that we are settling for "synthetic companionship." As people become more isolated, they turn to AI for conversation. This isn't just "convenient"; it’s a symptom of a social crisis. We are trading the friction of real relationships for the smooth, ego-stroking compliance of an algorithm.

Summary Table: The Human Rebuttal

+----------------------+-------------------------------------------+
| ChatGPT Argument     | Human Counterpoint                        |
+----------------------+-------------------------------------------+
| Productivity Boost   | Raises expectations, leads to burnout     |
| Technical Perfection | Frequent, confident errors (lies)         |
| A Useful Tool        | Built on unauthorized data and theft      |
| Second Brain         | Potential for cognitive decline           |
+----------------------+-------------------------------------------+


  Quote  The Critical Verdict: We don't fear the "AI Uprising" from sci-fi movies; we fear a world that is bland, unreliable, and exploitative, where human nuance is traded for algorithmic efficiency.


*********************[/END GEMINI]***********************************
My comment (twofingers): I personally witnessed an AI simply inventing values during its interpretation. I would have easily fallen for it. However, upon inquiry, it corrected itself and confirmed the error as a hallucination.

I think we are still at the very beginning of the development (like the first computers or steam engines); what will it look like when AIs use billions of tokens and require millions of times less electricity?
 
Posted: 11:05pm
25 Feb 2026
lizby
Guru

Michael--We have blazed past the "billions of tokens" mark; here's an article (hope it's not paywalled) by my favorite AI writer about the trillion-agent economy

  Quote  We aren't gaining leisure; we are just accelerating the pace of burnout.


Well, I'm retired, so for me it's just a matter of how I choose to spend the leisure I already have. If I suffer from burnout, I just go and take a nap. And in one optimistic scenario, everybody is "retired" from the necessity of working and receives, from the abundance of the AI economy, a Minimum Basic Income sufficient to live comfortably. That's a pipe dream for the foreseeable future.

  Quote  Technical Perfection | Frequent, confident errors (lies)


I find that the outright hallucinations are significantly less frequent than they were 8 months ago. But they're far from technical perfection--today I spent 4 hours with zero progress on an orientation issue with an object in an image--one that any 3-year-old with blocks would have solved instantly. (But Gemini is vastly better at understanding 2D and 3D space than it was 8 months ago.)

  Quote  A Useful Tool        | Built on unauthorized data and theft


Well, the courts will decide on that, and what penalties the users of unauthorized data will have to pay and to whom. It may be a relatively tiny amount compared to the revenues of the AI winners (though possibly large to the receivers). But I think that putting a world of answers in the hands of 10-year-olds (and the rest of us) also has a value.

  Quote  Second Brain         | Potential for cognitive decline


A real threat, but one which also has to be weighed against the benefits. Would I right now be a better programmer if I had written PicoDB instead of guiding Gemini in the building of it? Undoubtedly, but I would never have done it. Less skilled, less experienced doctors are said to be better if they use AI, but the most skilled are said to be worse (because they sometimes reject superior AI conclusions).

It remains possible that sometime in the future (long after I am gone), we humans will be the cherished pets of AI--and that's the best of the bad outcomes.

Lance

~
Edited 2026-02-26 09:09 by lizby
 
Posted: 07:37am
26 Feb 2026
Mixtel90
Guru


ChatGPT nailed it:
  Quote  The question is not:
“Is AI good or evil?”
The question is:
“Are the systems governing AI aligned with human well-being?”
The uncomfortable truth
The real “apex predator” problem is not AI.
It is unbounded optimization under market pressure.
AI simply accelerates that dynamic.
The thing moving fastest is not intelligence.
It is incentive.

Final thought
You are right to:
Be impressed.
Be uneasy.
Be skeptical.
Use it anyway.
That tension is rational.

AI does not need defending.
It needs:
Governance
Transparency
Liability
Education
Alignment research
Responsible deployment

If there is something to fear, it is not consciousness in silicon.
It is speed without restraint.
And that is a human choice.



It's not the hammers that are the problem, it's the people that are using them.
.
Edited 2026-02-26 17:46 by Mixtel90
 
Posted: 12:08pm
26 Feb 2026
twofingers
Guru


Hi Lance,
thanks for the interesting article. I mistakenly assumed the figures applied to my situation as a free user (30,000–100,000 tokens/day). My apologies. I've asked Gemini about this again.
---------- GEMINI  -----------

Here is the revised comparison of user limits for paying subscribers (as of February 2026), formatted according to your BB-code requirements:

### AI model limits for paid users (daily capacities)


Feature        | Gemini Advanced  | ChatGPT Plus     | Claude Pro
---------------|------------------|------------------|-----------------
Daily prompts  | approx. 300-500  | approx. 160-200  | approx. 100-200
Context window | 1,000,000 tokens | 128,000 tokens   | 200,000 tokens
Reset cycle    | daily (midnight) | rolling (3 hrs)  | fixed (5 hrs)
Max output     | 65,536 tokens    | ~16,384 tokens   | ~8,192 tokens


### Key provider details:

1. Gemini (Google)
Google currently offers the largest "memory." As a paying user, you can upload massive data sets (entire PDF libraries or vast MMBasic projects).

Thinking mode: separate quota of approx. 100-300 prompts per day.

Advantage: very high tolerance for many short messages in a row.

2. ChatGPT (OpenAI)
OpenAI uses a rolling time window for the standard model (GPT-4o).

Limits: you have about 40-80 messages every 3 hours. Reaching this limit downgrades you to the "mini" model.

Reasoning: the o1/o3 models are often strictly limited (e.g., 100 messages per week or day).

3. Claude (Anthropic)
Claude is known for quality but has the most restrictive limits.

The 5-hour window: if you send long texts (many tokens), the quota is depleted much faster (sometimes after only 10-15 messages).

Advantage: excellent for complex MMBasic coding due to precision in large contexts.

---------------------- END GEMINI ----------------------
There's going to be a storm!
I'd better not ask about the possibility that AI is controlled and misused by oligarchs and/or certain governments.

Michael

By the way, it took me a long time to convince the AI to prepend a sequential number and the time/date to each response. Useful for reference. Gemini now handles BBCode quite well.
 
Posted: 01:34pm
26 Feb 2026
lizby
Guru

Michael--I'm finding that Gemini handles MMBasic quite well, except for two misunderstandings that it doesn't correct no matter how many times it's told to: putting LOCALs inside executing code (including within DO loops--which a compiler could optimize out, but which is really bad practice in an interpreted language) and insisting that TRIM$(a$) will trim spaces off both ends.
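For anyone following along, the LOCAL habit looks roughly like this. A minimal MMBasic sketch (untested; the names are mine, and exactly how a re-executed LOCAL behaves may vary by firmware version):

```basic
' What the AI keeps generating: LOCAL inside the loop body, so the
' declaration statement is re-executed on every pass of the loop.
SUB SumBad
  LOCAL i
  i = 1
  DO
    LOCAL total          ' re-executed every iteration: bad practice
    total = total + i    ' in an interpreted language
    i = i + 1
  LOOP UNTIL i > 10
END SUB

' The corrected form: declare every LOCAL once, at the top.
SUB SumGood
  LOCAL i, total
  total = 0
  i = 1
  DO
    total = total + i
    i = i + 1
  LOOP UNTIL i > 10
  PRINT total
END SUB
```

A compiler could hoist the declaration out of the loop, but an interpreter executes the LOCAL statement itself on every pass, which is why it belongs at the top of the SUB.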

I have yet to run into an explicit limit with Gemini, despite intensive use. I have sometimes encountered slow-downs which I thought might have been intentional throttling (but may well not have been). I have hit limits with Claude and ChatGPT, and have paid nominal amounts to continue. With ChatGPT I have encountered "Page Not Responding [WAIT]/[CANCEL] often enough for it to be a serious drag on development.

I think Gemini has been slower now for several days than it was 2 weeks ago--I wonder if this is the impact of all the "AI is evolving fast" articles in the past several weeks getting more and more people to try it.

For the most part, MMBasic development with Gemini is fun. I've had a few instances of old-style root-it-out debugging (not fun).

What a platform we have here--thanks again Geoff and Peter.

Lance
 
Posted: 02:30pm
26 Feb 2026
twofingers
Guru


Hi Lance,
Another interesting question for Gemini:
Me:
Can you roughly measure/evaluate how significant the learning effect of leading AIs will be as their user base grows?

GEMINI
Evaluating the learning effect from a growing user base is a highly complex field. By 2026, this effect can no longer be described simply by "more data," but rather by a qualitative shift in how AIs learn from human interaction.

Here is an analysis of the learning effect across three key dimensions:

1. THE REINFORCEMENT LEARNING REVOLUTION (RLHF)

The greatest learning effect does not come from simply reading user queries, but from Reinforcement Learning from Human Feedback (RLHF).

DIRECT FEEDBACK: Every time a user rates or corrects a response, this is incorporated into the reward models.


SCALE EFFECT: With an estimated 900 million daily queries (for ChatGPT alone in 2026), users generate trillions of data points about what is "helpful" and "correct."

LEARNING SPEED: New frameworks (such as OpenRLHF) have increased training efficiency by a factor of 1.2x to 1.7x, meaning that user feedback is incorporated into model updates faster than ever before.

2. FROM GENERAL TO VERTICAL KNOWLEDGE

Due to its widespread use in specialized fields (medicine, law, MMBasic programming), a "verticalization" of learning is taking place:


AREA      | IMPACT & DEVELOPMENT 2026
----------|----------------------------------------------------------
CODING    | 41% OF GLOBAL CODE IS NOW AI-GENERATED. USERS CONSTANTLY
          | FIX AI BUGS, TEACHING MODELS CRITICAL EDGE CASES AND NEW
          | SYNTAX STANDARDS.
BUSINESS  | 80% OF ENTERPRISES FEED PROFESSIONAL WORKFLOWS BACK INTO
          | ALIGNMENT SYSTEMS, REFINING INDUSTRY-SPECIFIC LOGIC AND
          | FORMATTING REQUIREMENTS.
EDUCATION | MODELS ADAPT TO PEDAGOGICAL TONES BY ANALYZING MILLIONS
          | OF STUDENT INTERACTIONS, IMPROVING EXPLANATION QUALITY
          | (REF: OECD REPORT 2026).


3. THE "SYNTHETIC DATA" PARADOX

An interesting effect in 2026 is that AIs will begin to learn from themselves, based on user validation:

SELF-IMPROVEMENT: Models generate their own test questions. When users validate the answers, the AI "learns" without new external data sources.

RISK: There is a risk of "model collapse" if AIs learn too much from AI-generated content on the web without human correction (user feedback).

SUMMARY EVALUATION

The learning effect is exponential, but diminishing in raw data terms. While the amount of data becomes less important, the value of high-quality, corrective feedback from expert users (like you in the MMBasic area) increases. By 2026, AI will no longer "learn to walk," but to "perfect."
-------------------- END GEMINI ---------------------------------------

  lizby said  What a platform we have here--thanks again Geoff and Peter.

Lance

I agree with you!
Michael
 
Posted: 02:52pm
26 Feb 2026
lizby
Guru

Interesting re "Every time a user rates or corrects a response, this is incorporated into the reward models".

I wish it could be incorporated fast enough so that Gemini wouldn't insist on putting LOCALs into executable code blocks. Yesterday I did a lot of correcting, and got a half-dozen apologies for its errors in the face of what it had reason to know.

One other quirk--6 or 8 times it has generated a thesis that an error was produced by invisible characters in the data. Every time that conclusion has been wrong. I'm hoping it will learn not to jump to that as a first explanation.
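When it floats that hypothesis, it's quick to rule it out yourself before accepting it. A minimal MMBasic sketch (untested; the function name is mine) that reports the position of the first character outside printable ASCII, or 0 if the string is clean:

```basic
' Returns the 1-based position of the first character outside
' printable ASCII (codes 32-126), or 0 if none is found.
FUNCTION FirstInvisible(s$)
  LOCAL i, c
  FirstInvisible = 0
  FOR i = 1 TO LEN(s$)
    c = ASC(MID$(s$, i, 1))
    IF c < 32 OR c > 126 THEN
      FirstInvisible = i
      EXIT FOR
    ENDIF
  NEXT i
END FUNCTION

' Example use while reading a data file:
' IF FirstInvisible(line$) THEN PRINT "Suspect char at "; FirstInvisible(line$)
```

If this returns 0 for the offending data, the "invisible characters" thesis is disproved and the AI has to look elsewhere.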

Michael--what are you using AI to work with MMBasic on?

My project now is "vibe virtual model railroading" using my PicoDB, with HDMI and LCD output (VGA should work also). It's getting to the point where I can do actual routing of multiple trains with multiple layouts. I had an HO setup when I was a teenager, but haven't given model railroading two thoughts in the past 60 years. I've found and fixed a couple of bugs in PicoDB in the process. I hope to be able to show this project before long. 1100+ lines of MMBasic code so far. Counting PicoDB, 3600+ lines of MMBasic code produced by Gemini since January 13.

Lance

~
Edited 2026-02-27 00:57 by lizby
 
Posted: 01:56pm
27 Feb 2026
twofingers
Guru


  lizby said  Michael--what are you using AI to work with MMBasic on?

Hi Lance,
I have several projects and I'm jumping between them.
For example, right now this little demo.

I also want to read my smart meter to monitor the solar power feed-in, a baud rate detector, photogrammetry...

Last year I worked on a universal synthesizer (piano, gong, guitar, etc.). Peter has now made some of it obsolete (new ADSR()). A year ago, I didn't dare ask about an ADSR function because new MMBasic versions kept appearing.

I find the possibility of AI taking over the documentation of program code interesting.

Kind regards
Michael
 
The Back Shed's forum code is written, and hosted, in Australia.
© JAQ Software 2026