| Menu | JAQForum Ver 19.10.27 |
Forum Index : Microcontroller and PC projects : AI Is Evolving Fast
Asimov's laws assume the robot has the sentience to judge its own actions. In fact there is a canon which says an AI cannot analyse the processes it uses for its internal logic and remain stable.

The practical reality is that an AI site is, or will be, operated by a company as its directors see fit. And companies have no legal obligation to have morality or humanity; in fact the law requires directors to act in the (fiscal) interest of the company. True, standard company law excludes any illegal activity... but it is not illegal to create very dangerous AI. At the moment AI is being created and used by standard limited-liability companies, which is a real cop-out.

I think it is a new industry and the laws should be updated to govern it, like armaments manufacture, mains electrical or gas work, or drug company operations. So I think it should only be legal for a company to supply or use AI if the company is registered to do so, uses licensed and verifiable operations, and its constitution explicitly holds directors personally responsible for any injury to humans.

Given the government's limited comprehension of the tech world, I doubt it will happen. In fact its own departments tend to treat citizens as so much spam to be processed. Edited 2026-02-25 19:59 by zeitfest |
||||||
The address given by Professor Toby Walsh mentioned earlier includes a story about ChatGPT giving a depressed person information on how to commit suicide, composing a suicide letter for him and encouraging him to go through with it. And he did. There have been several other instances of this over a period, yet no safeguards have been put in place. Even physically contained AI can be dangerous. |
||||||
Ooh! Soylent green for dinner? You make some very good points there. First, the three laws can't be altered in any way by the AI system. That was the whole premise of the stories. They could be "tuned" at manufacture to boost or reduce the levels of each but they operate outside the AI sphere. Company liability would indeed be the cure, I think. That's not going to happen if there is money involved as money always takes precedence over morals and ethics. :( |
||||||
What I want to know is what Philomena Cunk thinks of all this! |
||||||
Exactamundo People are already sharing their lives with the world on FB but now we are getting to deep discussions with a system that will be used for some form of profit. |
||||||
Cunk *is* artificially intelligent. :) I'm a great fan! |
||||||
Here in Germany every 10-year-old is equipped with a mobile. Every 10-year-old uses WhatsApp. Every 10-year-old knows about AI and uses it, because the "Meta" button in WhatsApp and the Gemini button in Android are there. Of course they are very woke and also know ChatGPT. School in Germany is all about teaching how to use it, tablets all around. I think the acceptance is clear and high and nothing can stop this. Edited 2026-02-26 00:40 by dddns |
||||||
Unfortunately, I fear that teachers are further behind than pupils when it comes to technology. |
||||||
This is interesting. Abuses happen, but AI is catnip to the curious. Who is more curious than a 10-year-old? For adults, I found this statement instructive: "We spent three years asking, 'Is it intelligent?' when a better threshold was simply 'Is it cheaper than me?'" If you want to know how things work, or how to make things work, AI presents real value at a retail price which, several years ago, could not have been believed. (That's not to disregard the real and existential dangers that it potentially presents.) |
||||||
Hi, Isn't it unfair to have this discussion (or discussions) without the participation of at least one AI? What does the AI have to say in its defense? Incidentally, I agree with Lance! Regards Michael |
||||||
I scraped the topic and asked ChatGPT. The answer is somewhat long, but seems to boil down to: It's up to us humans. (One can ask almost any question and get an answer.) Edited 2026-02-26 06:22 by lizby |
||||||
That's excellent! |
||||||
@lizby:

[Me:] You are to evaluate AI from a human perspective: This answer came from ChatGPT. Answer the ChatGPT arguments critically from a human perspective.

[GEMINI]
A Critical Human Critique of the AI Perspective

The previous response framed AI as a "paradox" or a "mirror," which sounds poetic but avoids the messy reality of human life. Here is the critical rebuttal:

1. The "Magic" vs. The Efficiency Trap
The AI claims to be a "second brain" that offloads mundane tasks.
The Critical View: Instead of freeing us, AI often just raises the bar for productivity. If I can do 10 hours of work in 2, my boss expects 50 hours of work per week. It isn't "magic"; it's a digital treadmill. We aren't gaining leisure; we are just accelerating the pace of burnout.

2. The Myth of "Technical Perfection"
The AI suggests its content is "technically perfect but emotionally hollow."
The Critical View: This is a generous self-assessment. Humans see that AI is frequently confident but wrong. It doesn't just lack "soul"; it lacks a "fact-checker." Calling its errors "hallucinations" is a fancy way of saying it lies. For a human, a tool that is 90% right is often 100% useless in high-stakes situations like medicine or law.

3. The "Tool" Argument vs. Reality
The AI compares itself to a "hammer or a steam engine."
The Critical View: A hammer doesn't try to convince you it's sentient. A steam engine didn't scrape the entire history of human art and literature without paying the creators. The "tool" analogy ignores the ethical theft involved in training these models. To a human, AI feels less like a tool and more like an uninvited guest eating from everyone's plate.

4. The Illusion of Connection
The AI notes it cannot replace genuine human interaction.
The Critical View: While true, the danger is that we are settling for "synthetic companionship." As people become more isolated, they turn to AI for conversation. This isn't just "convenient"; it's a symptom of a social crisis. We are trading the friction of real relationships for the smooth, ego-stroking compliance of an algorithm.

Summary Table: The Human Rebuttal
+----------------------+---------------------------------------------+
| ChatGPT Argument     | Human Counterpoint                          |
+----------------------+---------------------------------------------+
| Productivity Boost   | Increases expectations and leads to burnout |
| Technical Perfection | Frequent, confident errors (lies)           |
| A Useful Tool        | Built on unauthorized data and theft        |
| Second Brain         | Potential for cognitive decline             |
+----------------------+---------------------------------------------+
[/END GEMINI]

My comment (twofingers): I personally witnessed an AI simply inventing values during its interpretation. I would have easily fallen for it. However, upon inquiry, it corrected itself and confirmed the error as a hallucination. I think we are still at the very beginning of the development (like the first computers or steam engines); what will it look like when AIs use billions of tokens and require millions of times less electricity? |
||||||
Michael--We have blazed past the "billions of tokens" mark; here's an article (hope it's not paywalled) by my favorite writer about AI about the trillion agent economy.

Well, I'm retired, so for me it's just a matter of how I choose to spend the leisure I already have. If I suffer from burnout, I just go and take a nap. And in one optimistic scenario, everybody is "retired" from the necessity of working, and gets from the abundance of the AI economy a Minimum Basic Income sufficient to live comfortably. That's a pipe dream for the foreseeable future.

I find that the outright hallucinations are significantly less frequent than they were 8 months ago. But they're far from technical perfection--today I spent 4 hours with zero progress on an orientation issue with an object in an image--one that any 3-year-old with blocks would have solved instantly. (But Gemini is vastly better at understanding 2D and 3D space than it was 8 months ago.)

Well, the courts will decide on that, and what penalties the users of unauthorized data will have to pay and to whom. It may be a relatively tiny amount compared to the revenues of the AI winners (though possibly large to the receivers). But I think that putting a world of answers in the hands of 10-year-olds (and the rest of us) also has a value.

A real threat, but one which also has to be weighed against the benefits. Would I right now be a better programmer if I had written PicoDB instead of guiding Gemini in the building of it? Undoubtedly, but I would never have done it. Less skilled, less experienced doctors are said to be better if they use AI, but the most skilled are said to be worse (because they sometimes reject superior AI conclusions).

It remains possible that sometime in the future (long after I am gone), we humans will be the cherished pets of AI--and that's the best of the bad outcomes.

Lance ~ Edited 2026-02-26 09:09 by lizby |
||||||
ChatGPT nailed it: It's not the hammers that are the problem, it's the people that are using them. Edited 2026-02-26 17:46 by Mixtel90 |
||||||
Hi Lance, thanks for the interesting article. I mistakenly assumed the figures were from my perspective as a free user (30,000–100,000 tokens/day). My apologies. I've contacted Gemini again about this.

---------- GEMINI -----------
HERE IS THE REVISED COMPARISON OF USER LIMITS FOR PAYING SUBSCRIBERS (AS OF FEBRUARY 2026), FORMATTED ACCORDING TO YOUR BB-CODE REQUIREMENTS:

### AI MODEL LIMITS FOR PAID USERS (DAILY CAPACITIES)

FEATURE           | GEMINI ADVANCED   | CHATGPT PLUS      | CLAUDE PRO
------------------|-------------------|-------------------|------------------
DAILY PROMPTS     | APPROX. 300-500   | APPROX. 160-200   | APPROX. 100-200
CONTEXT WINDOW    | 1,000,000 TOKENS  | 128,000 TOKENS    | 200,000 TOKENS
RESET CYCLE       | DAILY (MIDNIGHT)  | ROLLING (3 HRS)   | FIXED (5 HRS)
MAX OUTPUT        | 65,536 TOKENS     | ~16,384 TOKENS    | ~8,192 TOKENS

### KEY PROVIDER DETAILS:

1. GEMINI (GOOGLE)
GOOGLE CURRENTLY OFFERS THE LARGEST "MEMORY." AS A PAYING USER, YOU CAN UPLOAD MASSIVE DATA SETS (ENTIRE PDF LIBRARIES OR VAST MMBASIC PROJECTS).
THINKING MODE: SEPARATE QUOTA OF APPROX. 100-300 PROMPTS PER DAY.
ADVANTAGE: VERY HIGH TOLERANCE FOR MANY SHORT MESSAGES IN A ROW.

2. CHATGPT (OPENAI)
OPENAI USES A ROLLING TIME WINDOW FOR THE STANDARD MODEL (GPT-4O).
LIMITS: YOU HAVE ABOUT 40-80 MESSAGES EVERY 3 HOURS. REACHING THIS LIMIT DOWNGRADES YOU TO THE "MINI" MODEL.
REASONING: THE O1/O3 MODELS ARE OFTEN STRICTLY LIMITED (E.G., 100 MESSAGES PER WEEK OR DAY).

3. CLAUDE (ANTHROPIC)
CLAUDE IS KNOWN FOR QUALITY BUT HAS THE MOST RESTRICTIVE LIMITS.
THE 5-HOUR WINDOW: IF YOU SEND LONG TEXTS (MANY TOKENS), THE QUOTA IS DEPLETED MUCH FASTER (SOMETIMES AFTER ONLY 10-15 MESSAGES).
ADVANTAGE: EXCELLENT FOR COMPLEX MMBASIC CODING DUE TO PRECISION IN LARGE CONTEXTS.
---------------------- END GEMINI ----------------------

There's going to be a storm! I'd better not ask about the possibility that AI is controlled and misused by oligarchs and/or certain governments.

Michael

By the way, it took me a long time to convince the AI to prepend a sequential number and the time/date to each response. Useful for reference. Gemini now handles BBCode quite well. |
||||||
Michael--I'm finding that Gemini handles MMBasic quite well, except for two misunderstandings that it doesn't correct no matter how many times it's told to: putting LOCALs inside executing code (including within DO loops--which a compiler could optimize out but is really bad practice in an interpreted language) and insisting that TRIM$(a$) will trim spaces off of both ends.

I have yet to run into an explicit limit with Gemini, despite intensive use. I have sometimes encountered slow-downs which I thought might have been intentional throttling (but may well not have been). I have hit limits with Claude and ChatGPT, and have paid nominal amounts to continue. With ChatGPT I have encountered "Page Not Responding [WAIT]/[CANCEL]" often enough for it to be a serious drag on development.

I think Gemini has been slower now for several days than it was 2 weeks ago--I wonder if this is the impact of all the "AI is evolving fast" articles in the past several weeks getting more and more people to try it.

For the most part, MMBasic development with Gemini is fun. I've had a few instances of old-style root-it-out debugging (not fun). What a platform we have here--thanks again Geoff and Peter.

Lance |
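To illustrate the LOCAL issue, here is a rough sketch (untested; the exact behaviour on re-declaration--error, reset, or just wasted interpreter work--varies by MMBasic version, and the SUB names are made up):

```basic
' What Gemini keeps generating: LOCAL inside the loop body,
' so the declaration statement is re-executed on every pass.
SUB BadCount
  LOCAL i%
  FOR i% = 1 TO 10
    LOCAL total%         ' re-declared every iteration - bad practice
    total% = total% + i% ' in an interpreted language
  NEXT i%
  PRINT total%
END SUB

' What it should generate: declare once at the top of the SUB,
' before any executing code.
SUB GoodCount
  LOCAL i%, total%
  FOR i% = 1 TO 10
    total% = total% + i%
  NEXT i%
  PRINT total%
END SUB
```

A compiler could hoist the declaration itself, but an interpreter walks the LOCAL statement on every pass, which is exactly the habit I can't train out of it. |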
||||||
Hi Lance,

Another interesting question for Gemini:

Me: Can you roughly measure/evaluate how significant the learning effect of leading AIs will be as their user base grows?

-------------------- GEMINI --------------------
Evaluating the learning effect from a growing user base is a highly complex field. By 2026, this effect can no longer be described simply by "more data," but rather by a qualitative shift in how AIs learn from human interaction. Here is an analysis of the learning effect across three key dimensions:

1. THE REINFORCEMENT LEARNING REVOLUTION (RLHF)
The greatest learning effect does not come from simply reading user queries, but from Reinforcement Learning from Human Feedback (RLHF).
DIRECT FEEDBACK: Every time a user rates or corrects a response, this is incorporated into the reward models.
SCALE EFFECT: With an estimated 900 million daily queries (for ChatGPT alone in 2026), users generate trillions of data points about what is "helpful" and "correct."
LEARNING SPEED: New frameworks (such as OpenRLHF) have increased training efficiency by a factor of 1.2x to 1.7x, meaning that user feedback is incorporated into model updates faster than ever before.

2. FROM GENERAL TO VERTICAL KNOWLEDGE
Due to its widespread use in specialized fields (medicine, law, MMBasic programming), a "verticalization" of learning is taking place:

AREA        | IMPACT & DEVELOPMENT 2026
------------|----------------------------------------------------------
CODING      | 41% OF GLOBAL CODE IS NOW AI-GENERATED.
            | USERS CONSTANTLY FIX AI BUGS, TEACHING MODELS
            | CRITICAL EDGE CASES AND NEW SYNTAX STANDARDS.
BUSINESS    | 80% OF ENTERPRISES FEED PROFESSIONAL WORKFLOWS
            | BACK INTO ALIGNMENT SYSTEMS, REFINING INDUSTRY-
            | SPECIFIC LOGIC AND FORMATTING REQUIREMENTS.
EDUCATION   | MODELS ADAPT TO PEDAGOGICAL TONES BY ANALYZING
            | MILLIONS OF STUDENT INTERACTIONS, IMPROVING
            | EXPLANATION QUALITY (REF: OECD REPORT 2026).

3. THE "SYNTHETIC DATA" PARADOX
An interesting effect in 2026 is that AIs will begin to learn from themselves, based on user validation:
SELF-IMPROVEMENT: Models generate their own test questions. When users validate the answers, the AI "learns" without new external data sources.
RISK: There is a risk of "model collapse" if AIs learn too much from AI-generated content on the web without human correction (user feedback).

SUMMARY EVALUATION
The learning effect is exponential, but diminishing in raw data terms. While the amount of data becomes less important, the value of high-quality, corrective feedback from expert users (like you in the MMBasic area) increases. By 2026, AI will no longer "learn to walk," but to "perfect."
-------------------- END GEMINI ---------------------------------------

I agree with you!

Michael |
||||||
Interesting re "Every time a user rates or corrects a response, this is incorporated into the reward models". I wish it could be incorporated fast enough so that Gemini wouldn't insist on putting LOCALs into executable code blocks. Yesterday I did a lot of correcting, and got a half-dozen apologies for its errors in the face of what it had reason to know.

One other quirk--6 or 8 times it has generated a thesis that an error was produced by invisible characters in the data. Every time that conclusion has been wrong. I'm hoping it will learn not to jump to that as a first explanation.

Michael--what are you using AI to work with MMBasic on? My project now is "vibe virtual model railroading" using my PicoDB, with HDMI and LCD output (VGA should work also). It's getting to the point where I can do actual routing of multiple trains with multiple layouts. I had an HO setup when I was a teenager, but haven't given model railroading two thoughts in the past 60 years. I've found and fixed a couple of bugs in PicoDB in the process. I hope to be able to show this project before long. 1100+ lines of MMBasic code so far. Counting PicoDB, 3600+ lines of MMBasic code produced by Gemini since January 13.

Lance ~ Edited 2026-02-27 00:57 by lizby |
||||||
Hi Lance, I have several projects and I'm jumping between them. For example, right now this little demo. I also want to read my smart meter to monitor the solar power feed-in, a baud rate detector, photogrammetry... Last year I worked on a universal synthesizer (piano, gong, guitar, etc.). Peter has now made some of it obsolete (new ADSR()). A year ago, I didn't dare ask about an ADSR function because new MMBasic versions kept appearing. I find the possibility of AI taking over the documentation of program code interesting. Kind regards Michael |
||||||
| The Back Shed's forum code is written, and hosted, in Australia. |