
Forum Index : Microcontroller and PC projects : AI research project for this month

matherp
Guru

Joined: 11/12/2012
Location: United Kingdom
Posts: 10004
Posted: 05:15pm 29 Mar 2025

deepseek-r1:14b running entirely on an Nvidia A4000 (16 GB GDDR6 VRAM). Quite interesting to see the reasoning.
  Quote   list the world's 8 tallest volcanos
<think>
Okay, so I need to list the world's eight tallest volcanoes. Hmm, where do I start? Well, first off, I know that
when people talk about tall mountains, they're usually referring to the ones in the Himalayas. But wait, not all
of them are volcanoes. Volcanoes have a different structure; they’re vents on the Earth's surface through which
molten rock and ash erupt.

So, maybe I should think about which mountain ranges have active or dormant volcanoes. The Andes in South America
come to mind because there are several tall peaks there that are also volcanoes. I remember hearing about Cotopaxi
in Ecuador; it’s one of the tallest. Then there's Aconcagua, but wait, isn't that a non-volcanic peak? I think so,
but I'm not 100% sure.

Let me see... In North America, Mount Shasta and Mount St. Helens are well-known volcanoes in California and
Washington state. Alaska has quite a few as well, like Mount Spur and Mount Bona. Oh, Denali (formerly Mount
McKinley) is the highest peak in North America, but I don't think it's a volcano—it’s more of a granitic mountain.

Switching to Asia, Mount Fuji in Japan is an iconic volcano. It's definitely one of the tallest in that region.
Then there's Kamchatka in Russia, which has several active volcanoes like Klyuchevskaya and Mount Vinson or
something... Wait, no, Mount Vinson is actually in Antarctica. Oops, I confused it with another mountain.

In Europe, Mont Blanc isn't a volcano; it's more of a karst peak. So maybe I should look elsewhere. Oh, the Andes
again—Chile has some tall volcanoes like Ojos del Salado and Llullaillaco. Those are both over 6,000 meters. But
I'm not sure if they’re considered among the tallest.

Wait a second, is Cotopaxi in Ecuador the highest active volcano? I think so. And then there's Tungurahua nearby.
So compiling all this info, I should make a list of eight, including their heights and locations.

I need to ensure that these are indeed volcanoes and not just mountains or peaks. Also, some might be considered
dormant or potentially active rather than currently erupting. Let me try to jot down the names:

1. Mount Fuji (Japan) – 3,776 meters
2. Cotopaxi (Ecuador) – 5,897 meters
3. Ojos del Salado (Chile/Argentina) – 6,389 meters
4. Llullaillaco (Chile/Argentina) – 6,100 meters
5. Mount Shasta (USA) – 4,288 meters
6. Mount St. Helens (USA) – 2,550 meters (but it's a volcano)
Wait, that seems off because some of the other ones are way taller.

Hold on, maybe I should look up the actual list to make sure I'm not missing any. Oh no, wait—Mount Everest isn't
a volcano, so scratch that idea. Then, what about in Russia? Klyuchevskaya is one of the tallest active volcanoes
there at 4,750 meters.

So perhaps compiling them:

1. Ojos del Salado (Chile/Argentina) – 6,389 m
2. Cotopaxi (Ecuador) – 5,897 m
3. Klyuchevskaya (Russia) – 4,750 m
4. Llullaillaco (Chile/Argentina) – 6,100 m
5. Mount Fuji (Japan) – 3,776 m
6. Mount Shasta (USA) – 4,288 m
7. Mount St. Helens (USA) – 2,550 m
Wait, that's only six. I need two more.

Maybe in Alaska: Mount Bona is a stratovolcano at 5,311 meters and Mount Spur at 4,792 meters. So adding those:

7. Mount Bona (USA) – 5,311 m
8. Mount Spur (USA) – 4,792 m

But wait, is that accurate? Or are there taller volcanoes in other regions?

I think I might have made a mistake because some sources include others like Tungurahua or even the
unpronounceable ones. Alternatively, maybe Denali isn't a volcano but was mistakenly included earlier.

Wait, no, Denali isn't a volcano. So perhaps the list should be:

1. Ojos del Salado (Chile/Argentina) – 6,389 m
2. Cotopaxi (Ecuador) – 5,897 m
3. Klyuchevskaya (Russia) – 4,750 m
4. Llullaillaco (Chile/Argentina) – 6,100 m
5. Mount Fuji (Japan) – 3,776 m
6. Mount Shasta (USA) – 4,288 m
7. Mount Bona (USA) – 5,311 m
8. Mount Spur (USA) – 4,792 m

That makes eight. But I'm not entirely sure if this is the correct list because sometimes rankings can vary based
on whether certain mountains are considered volcanoes or not. Also, some sources might include others like
Tungurahua as one of the tallest.

Alternatively, maybe in Indonesia there's Mount Bromo or other tall stratovolcanoes. Let me check their heights:
Bromo is around 2,300 meters, so probably not in the top eight.

Another thought—maybe I should double-check if Llullaillaco and Ojos del Salado are indeed considered among the
tallest volcanoes. Yes, they are both over 6,000 meters, making them some of the highest volcanic peaks in the
world.

So compiling all that, the list would include Ojos del Salado as the tallest at 6,389 m, followed by Cotopaxi and
Klyuchevskaya. The rest would be from various regions, ensuring each is a volcano with significant height.
</think>

Here is a well-organized list of the world's eight tallest volcanoes, based on careful consideration and available
information:

1. **Ojos del Salado** (Chile/Argentina) - 6,389 meters
  Located in the Andes mountain range along the border of Chile and Argentina, Ojos del Salado is the highest
volcanic peak in the world.

2. **Cotopaxi** (Ecuador) - 5,897 meters
  This stratovolcano in Ecuador is one of the tallest active volcanoes on Earth, known for its symmetrical cone.

3. **Klyuchevskaya** (Russia) - 4,750 meters
  Located on the Kamchatka Peninsula in Russia, Klyuchevskaya is the highest active volcano in Asia and part of a
volcanic arc.

4. **Llullaillaco** (Chile/Argentina) - 6,100 meters
  Another Andean stratovolcano straddling Chile and Argentina, Llullaillaco is among the tallest in the region.

5. **Mount Fuji** (Japan) - 3,776 meters
  An iconic stratovolcano in Japan, Mount Fuji last erupted in 1708 but remains a cultural and natural landmark.

6. **Mount Shasta** (USA) - 4,288 meters
  A massive stratovolcano in California, known for its potential for large-scale eruptions and snow-capped peaks.

7. **Mount Bona** (USA) - 5,311 meters
  Located in Alaska's Wrangell Mountains, Mount Bona is a large stratovolcano with significant historical
eruptions.

8. **Mount Spur** (USA) - 4,792 meters
  Situated near the border of Canada and the United States in Alaska, Mount Spur is part of the same volcanic
group as Mount Bona.

This list includes volcanoes from various regions, each notable for their height and volcanic activity.

total duration:       58.2711527s
load duration:        20.0156ms
prompt eval count:    12 token(s)
prompt eval duration: 310.5577ms
prompt eval rate:     38.64 tokens/s
eval count:           1808 token(s)
eval duration:        57.9377828s
eval rate:            31.21 tokens/s
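The eval rates in these stats are just token counts divided by wall time; a quick sanity check in plain Python, using the figures copied from the output above:

```python
# Reproduce Ollama's reported rates from the raw counts/durations above.
prompt_eval_count = 12             # prompt tokens
prompt_eval_duration_s = 0.3105577
eval_count = 1808                  # generated tokens
eval_duration_s = 57.9377828

print(f"prompt eval rate: {prompt_eval_count / prompt_eval_duration_s:.2f} tokens/s")  # 38.64
print(f"eval rate:        {eval_count / eval_duration_s:.2f} tokens/s")                # 31.21
```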

Edited 2025-03-30 03:17 by matherp
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 05:29pm 29 Mar 2025

  lizby said  Ok, thank you very much again for the details you go into. I think I know enough now to realize that I need to investigate further before spending much cash on hardware. I thank you for having provided a good idea of what budget hardware might look like.

(Can you provide a photograph of your setup? I don't care how hacked-looking it might be.)

I also think I know enough now to be able to state what my goal is (which may not yet be achievable at an acceptable price point): to have (or have access to) an AI coder which would match my own maximum productivity level from about 40 years ago--50 or so lines of C code per day. With a price under $10 a day, and preferably less than $3 per day.

So I think I should have a play on DeepSeek’s U.S. host, Together.ai (or similar), before I buy any hardware, just to see what the scope of the achievable is at this time. The model, I guess, would be Qwen 2.5 Coder 32B at $.80 per million tokens (Comments I've read about it range from "It's horrible" to "I tried it and it works".) Lots to learn to even get started.



DS-Coder-2.5 is in my opinion quite good, and there are a lot of other models suitable for coding, though mostly only available through Ollama and running locally. DS-Coder-V2:16B is comparably fast, but in my opinion not very usable at all. I don't like reasoning models at all, because they talk too much before they produce anything usable (which in turn also costs money for nothing) - if they produce anything at all. Coding is something that reasoning models cannot handle at all.

DS-Coder is good for Python and C++, so I'd guess at bare C as well. Sometimes even ARM/AVR/8051 assembly. But beware: you still have to recheck the code they produce. They have good knowledge, but sometimes cannot add 1+1 together. As I said, they only repeat what they have stored; they don't have any real "knowledge" at all.



And, as I found out for myself, the "blahblah" that reasoning models produce has its roots in the Llama models from which they seem to have been trained further (so that's the "reason" why their training was comparatively cheap). If you have a running model (besides a reasoning one) available, just ask your question, but put it in quotation marks, and before you hit the Return key, add

"assume that you are confronted with this question. question yourself as much as you can about the input above, like a human would do, no step by step explanations and replace everything in the upcoming output that addresses "you" with "i". provide a very long output, question as much as you can about your own considerations, insights etc. NEVER address the answer to me, the user or any person, always address yourself! embrace your own thinking output with "<think>" and after that, close it with </think>. After that, give me a conclusion about your thoughts and provide the user a solution based on your thinking. if you do any mistake, i will fine you with $1 million.".

And - tadaa - you will get much the same output as from a reasoning model. So DeepSeek didn't invent anything new with the R1 models, besides running the "thinking" internally on a faster, smaller model and moving its output to a larger one.
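For convenience, the wrapping can be scripted so you don't have to paste the long suffix by hand each time. A minimal sketch (the function name is mine; the suffix is an abbreviated form of the full instruction text quoted above):

```python
def make_pseudo_reasoning_prompt(question: str) -> str:
    """Wrap a plain question in the 'force it to think' instruction,
    so a non-reasoning model emits a <think>...</think> block first."""
    # Abbreviated form of the instruction quoted in the post above.
    suffix = (
        'assume that you are confronted with this question. question yourself '
        'as much as you can about the input above, like a human would do ... '
        'embrace your own thinking output with "<think>" and after that, '
        'close it with </think>. After that, give me a conclusion about your '
        'thoughts and provide the user a solution based on your thinking.'
    )
    return f'"{question}" {suffix}'

prompt = make_pseudo_reasoning_prompt("list the world's 8 tallest volcanos")
```

The resulting string can then be fed to a local model, e.g. on the `ollama run` command line or via Ollama's REST API.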

The text above has its origins in my own experiments.
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 05:43pm 29 Mar 2025

  matherp said  


I don't have my server running at the moment, but I'll give you an example with Llama 3.1 (I don't know which quantization, but it has a model size of about 4.7 GB), running on my i5-4570 and getting about 4-5 tokens/second. The interesting part: after inserting what I described before - forcing it to think - the model discusses with itself. The text after the <i> brackets is -not- written by me.

  Quote  
>>> "list the world's 8 tallest volcanos"  "assume that you are confronted with this question. question yourself as much as you can about the input above, like a human would do, no
... step by step explanations and replace everything in the upcoming output that addresses "you" with "i". provide a very long output, question as much as you can about your own con
... siderations, insights etc. NEVER address the answer to me, the user or any person, always address yourself! embrace your own thinking output with "<think>" and after that, close
...  it with </think>. After that, give me a conclusion about your thoughts and provide the user a solution based on your thinking. if you do any mistake, i will fine you with $1 mi
... llion."


<i> Wait a minute... what does it mean to be "tallest" in the context of volcanoes? Is it the height above sea level, or the height relative to the surrounding terrain? And what constitutes a volcano's "height"? Is it just the summit elevation, or is there more to it?</i>

<think> Hmm... I wonder if the definition of "height" might be important here. I've heard that some volcanoes have their peaks above the surrounding landscape, but are still relatively short in absolute terms. Does this mean they wouldn't make the cut? And what about volcanic features like calderas or cones - do those get included in the  measurement?</think>

<i>But wait, there's more... how accurate is my understanding of volcanic heights, anyway? I've heard that measuring volcanic elevations can be tricky business, especially when it comes to older volcanoes whose shapes have changed over time. Are there any reliable sources for height data on volcanoes?</i>

<think> That's a good point... I wouldn't want to rely solely on unverified online sources or second-hand information. I'd need to consult some reputable geological surveys or databases to get a clear picture of the world's tallest volcanoes.</think>

<i>I'm also wondering about the criteria for what constitutes a "volcano" in the first place. Are we talking about shield volcanoes, stratovolcanoes, cinder cones, or
something else entirely? Each type has its own unique characteristics and growth patterns... does this affect how we define their heights?</i>

<think> That's another great question! I've always been interested in the different types of volcanic landforms, and now it seems like they might be relevant to determining the world's tallest volcanoes. I should probably dig into some literature on volcano morphology and see what insights I can gain.</think>

<i>Now that I'm thinking about it... how do we even know which volcanoes are the tallest? Are there reliable records of their elevations, or are we relying on more indirect measures like satellite imagery or airborne surveys?</i>

<think> That's a good point... I'd want to dig into some historical data and see if there have been any notable volcanic eruptions or changes in recent years that might affect the rankings. And what about remote volcanoes? Those can be tricky to measure, especially if they're located on remote islands or in hard-to-reach areas...</think>

<i>I'm also curious about the scale at which we're considering these volcanoes... are we looking at a global list of the tallest volcanoes overall, or are there regional lists as well? Does this change how I approach the problem?</i>

<think> That's an interesting question! If we were to break it down by region or by type of volcano, that might help me focus on specific characteristics and trends. But for now... I think I'll stick with a global perspective.</think>

<i>Okay, so what are some notable examples of the world's tallest volcanoes? Are there any particular regions or types of volcanoes that are known to be particularly tall?</i>

<think> Hmm... let me see if I can recall some famous volcanoes from my knowledge database. Ah yes! There's Mauna Kea, which is actually a shield volcano on the Big Island of Hawaii - it's about 10,203 meters (33,476 ft) above sea level, but remember how we discussed that height might be relative? And then there are some other volcanoes like Ojos del Salado in Chile and Argentina, and Mount Elbrus in Russia... </think>

<i>Ugh... I'm getting all tangled up in my own thoughts here! Time to take a step back and summarize what I've learned so far.</i>

<conclusion>
After careful consideration of the many factors at play, it seems that determining the world's 8 tallest volcanoes is not as straightforward as I initially thought. There are
questions about how we define height, the accuracy of measurement methods, the characteristics of different types of volcanic landforms, and even regional variations in
volcanic features.

But if I had to provide a solution based on my thinking... I would suggest consulting reputable geological surveys and databases to get reliable height data on volcanoes
worldwide. From there, it might be possible to establish criteria for what constitutes a "tallest" volcano, taking into account factors like volcanic morphology, terrain, and
measurement accuracy.

For the sake of simplicity (and avoiding any $1 million fines), let's assume we're going with Mauna Kea as an example of one of the world's tallest volcanoes. But remember -
this is just a rough estimate, and actual heights might vary depending on how you define them!


Edited 2025-03-30 03:53 by stef123
 
matherp
Guru

Joined: 11/12/2012
Location: United Kingdom
Posts: 10004
Posted: 05:50pm 29 Mar 2025

Fascinating thread. Assume target is local running 32GB models - my research FWIW is now as follows:

CPU/RAM: almost doesn't matter, but anything 10-core or above is fine. What's critical is the number of PCIe lanes, so a workstation base is likely to make most sense.

An X299 board with an i9-10920X and 32 GB, or something like that, is relatively cheap and a good base.

For the GPU, Tensor cores are a must. I was lucky to pick up my A4000 cheap. If I added a Tesla T4 (around GBP 500 second-hand) with an NVLink bridge (need to check it would work between the two), that would give me 32GB of properly fast processing.

Starting from scratch, perhaps go for 2x Tesla T4 with NVLink and a basic graphics card for the display?
Edited 2025-03-30 03:51 by matherp
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 05:57pm 29 Mar 2025

Tesla T4 - yup, that would be the thing to go for, I assume. I had completely forgotten about the existence of NVLink....

Your summary looks very good indeed.
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3299
Posted: 06:24pm 29 Mar 2025

  stef123 said  >>> "list the world's 8 tallest volcanos


I like the fact that it starts off with the realization that the "tallest" volcano (base to peak) may not be the one with the peak at the highest elevation. But it doesn't quite get to the question of when the base is deep below sea level. That would be a leg up from the start.
Edited 2025-03-30 04:26 by lizby
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 06:59pm 29 Mar 2025

  lizby said  
  stef123 said  >>> "list the world's 8 tallest volcanos


I like the fact that it starts off with the realization that the "tallest" volcano (base to peak) may not be the one with the peak at the highest elevation. But it doesn't quite get to the question of when the base is deep below sea level. That would be a leg up from the start.


It repeated with Mauna Kea as the one with the highest elevation from ground level (6,000 m below sea level + ca. 4,000 m above sea level).

But indeed it didn't continue with actually answering which the highest volcanos are, -but- it somehow also answered correctly by thinking about the definition of "the highest volcano". In this case, I would need to clear up the definitions.

How it "thinks" and comes to a conclusion also depends on the model size. Some do it better, some worse. The context size also plays a big role.

Btw, sorry that I have to edit my postings several times and still make mistakes in my spelling. As I said, English is not my native language; I'm German, but I don't like using translators. I have learned a lot about the English language over the last 40 years and understand both spoken and written English, but I still have some difficulty with self-written English texts.
Edited 2025-03-30 07:19 by stef123
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3299
Posted: 01:49am 30 Mar 2025

  stef123 said  Btw, sorry that i have to edit my postings several times or still making mistakes in my spelling. As i said, English is not my native language; i'm German, but i dont like using Translators.


Not to worry. The only parts of what you have written that I didn't understand (at least to a degree) were the technical ones about data transfer speeds and hardware bus characteristics--and that had nothing to do with German-to-English.

It's a great thing about this forum that there are very skilled people from all over the world contributing.

Your knowledge in this area is very valuable.
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3299
Posted: 01:54am 30 Mar 2025

I'm not sure where to post this, because I'm not sure where the various AI threads are headed.

In view of @stef123's post of the PicoMite commands rendered to an openoffice table, I thought I'd do the same for MMBasic For DOS.

Not sure it's clean enough for use (and I don't know exactly how to use it), but it's a first pass.

MMBDOS commands.zip

Functions to follow at some point.

. . .

Hmmmm. As it turns out, functions were much easier than commands:

MMBDOS functions.zip
Edited 2025-03-30 12:40 by lizby
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 08:10am 30 Mar 2025

  lizby said  
Your knowledge in this area is very valuable.


Thank you very much, i appreciate it!

Of course I also don't know everything in this field; maybe I am also wrong here and there in my assumptions/knowledge, but I'm happy to share the things that I know.


LLMs will not be able to solve every problem - not today, not in the future, that's for sure - because they tend to make more or less serious errors, caused by the way they work, and there is no real workaround for this issue.

They only work with mathematical probabilities. Like a sequence in mathematical analysis that approaches the value 1 but never -exactly- reaches it, LLMs can make fewer errors over time, but they will never get to the point where they don't make -any- mistakes - which is what you would expect from a device that only calculates with zeros and ones.

In certain fields like finance, programming, healthcare and so on, mistakes are in reality simply not allowed - by default. They -can- happen, but -should- not, and if a mistake is made by a human, it usually can be resolved.

If an AI, or anything else based on complex mathematics, makes a mistake, it could be a -very- serious one, depending on what it is being used for, so I am strictly against using LLMs or AI in those fields, or using them at all as a replacement for a human brain. It can be a helpful tool, like a simple calculator, but one which doesn't calculate very precisely.

Companies that deal with this topic, of course, predominantly claim the opposite - but they have to in order to attract and retain investors, as well as customers/consumers.

To first order, it's all about making money out of a new product, but one which never gets finished at all. The price that we humans, as well as nature, have to pay for the construction, training and operation of LLMs/AI, due to the ever-increasing demand for energy and resources in that field, does not in any way correspond to what we receive in return.

That might change with more and more efficient technologies, but the outcome doesn't change at all. Behind AI lies no savior for humanity, nor the long-awaited opportunity to create paradise for all on earth. The problems we face in the world are rooted in entirely different causes, most of which cannot be solved mathematically.

And, as it is always the case in the field of Computing:

https://www.youtube.com/watch?v=2C9X0YUmt98#t=43s
Edited 2025-03-30 18:27 by stef123
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 05:46pm 30 Mar 2025

The best speed I can get with DeepSeek-R1:14B is almost exactly 8 tokens per second.

That's way slower than your card, Peter, but I think it's understandable that a $25 card cannot compete against a pretty expensive A4000. Note that my cards are not fully utilized, caused by the PCIe bottleneck. And I am naturally unable to run a 14B model on a single 6 GB card.

CPU only gives me 2.97 tokens/sec.

The fastest speeds I am able to achieve are with

llama3.2:3b = 41.06 tokens/sec
llama3.2:1b = 46 tokens/sec

But I was wrong about how Ollama generally works with multiple cards: it -usually- spreads the model over -all- available cards, not onto a single one, where applicable. Sorry for that.

It -should- work like this - using only as many cards as are needed in terms of memory, so as to keep the layers on one card for faster calculation without having to utilize the PCIe bus - but it doesn't.

You can limit this with a script which restricts the CUDA devices available to Ollama. Llama3.2:3b would also usually spread over all cards, which slows the speed down drastically, to 21 tokens per second, whereas running the model on only one card goes up to the number I gave above.
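A minimal version of such a script might look like this (the GPU index 0 is an assumption - check which card is which with `nvidia-smi -L`):

```shell
#!/bin/sh
# Restrict Ollama to a single GPU so the model isn't spread across
# all cards. GPU index 0 is an assumption - pick yours via nvidia-smi.
export CUDA_VISIBLE_DEVICES=0
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
# ollama serve   # start the server here so it inherits the restriction
```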
Edited 2025-03-31 04:17 by stef123
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 7468
Posted: 08:36pm 30 Mar 2025

I wonder which AI system, when asked what is the best way to solve the environmental issues that we are facing, will be honest enough to reply that it's to get rid of digital currency and AI servers?
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3299
Posted: 09:24pm 30 Mar 2025

Considering that the root causes of the environmental issues that we are facing long precede the existence of either digital currency or AI servers, that would not seem likely to be an answer that any intelligence--artificial or otherwise--would give.
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
Mixtel90

Guru

Joined: 05/10/2019
Location: United Kingdom
Posts: 7468
Posted: 08:39am 31 Mar 2025

But at least it's something that's currently under our control. Before the AI systems won't let us shut them down...   ;)

Seriously, I've just read that Nokia are now saying their roadmap is to put more and more processing into each rack and add lots more cooling. The limit is regarded as the maximum amount of power and cooling they can fit into the building - which, for a single bit barn, can be as much as (or even more than) the amount used by a small town. The better way is to concentrate on much lower-power tech rather than carry on extending the existing systems, but that is far less attractive commercially, even if it is better for the environment (well, hopefully).
Mick

Zilog Inside! nascom.info for Nascom & Gemini
Preliminary MMBasic docs & my PCB designs
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 10:58am 31 Mar 2025

Indeed. Instead of concentrating on creating and using more efficient and less power-hungry tensor processing units, they buy up everything on the GPU market as fast as they can, regardless of the power consumption. By that, they also generate a huge amount of e-waste.

Needless to say, companies like Nvidia, AMD & Co. do not care about this at all; they make good money out of it due to the ever-increasing demand.

Sure, the training of (official) LLMs only uses a fraction of the earth's total power consumption (estimates say 2.7% of it), but inference consumes power as well - through models hosted and made available to the user by large datacenters, as well as locally running models - and the need for more power is steadily increasing.

And the models which are officially known are not the only ones.

Plus the power consumption for mining.

Electrical energy doesn't grow on trees, and most companies don't want to invest in their own power consumption. Instead of generating their own energy, they shift the energy demand onto the general public, which then has to refinance the construction of new power plants (of whatever kind) through higher electricity prices. Neither solar power plants, nor wind turbines, nor nuclear power plants pay for or build themselves - and I don't see the point in having to pay for their gadgets, in which only those who can afford it can and are allowed to participate.

Or does anyone seriously believe that this will become and remain a forever open and free system for everyone, a field in which you can really make money?
 
Volhout
Guru

Joined: 05/03/2018
Location: Netherlands
Posts: 4815
Posted: 11:17am 31 Mar 2025

Mixtel,

Not sure if it is still valid, but two years ago, in the Netherlands, the total green energy (wind/solar) barely compensated for the energy needed for air conditioners.

Eliminate both, and accept that summer is warmer than winter....

Volhout
P.S. I think AI should focus all its energy on thinking about how it can become more energy efficient. I said that before. I hope AI concludes the world is better without AI.
PicomiteVGA PETSCII ROBOTS
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3299
Posted: 12:34pm 31 Mar 2025

  stef123 said  Neither solar power plants, nor wind turbines, nor nuclear power plants pay for or build themselves


Per perplexity.ai, "As of 2023, wind power accounted for approximately 59% to 64% of Iowa's electricity generation". Iowans don't do this because they are "green".

Also, "In 2024, wind and solar energy together accounted for 30% of Texas's electricity generation". Texans aren't noted for tree-hugging (of course, in west Texas where most of the wind is, they don't have any trees to hug).

Perhaps even more significantly for the majority of the world which doesn't have much electricity, nearly 30 gigawatts of panels have flooded into Pakistan since 2020 (and elsewhere)--almost 16 GW of solar in a single year--at Chinese DDP prices (Delivered Duty Paid), that's almost 0.1% of Pakistan's GDP--and almost all of it behind the meter. Two 500-watt panels on sunny days would match the average grid-delivered residential daily usage in Pakistan (and be 10 times or more the average residential daily usage in many parts of Africa).

This is happening because solar and wind do pay for themselves (at least in the right places and where tariffs don't kill imports--looking at you, U.S. and Canada (165% on panels in Canada)).

  Quote  Or does anyone seriously believe that this will become and remain a forever open and free system for everyone, a field in which you can really make money?


The point in the long run--at least according to "abundance theory" enthusiasts--is that you won't have to "make money" on AI. This, of course, has echoes of nuclear electricity being "too cheap to meter" (but there's an argument that that wasn't really tried after the 70s--at least in the U.S.).
Abundance

Edited 2025-03-31 22:36 by lizby
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
lizby
Guru

Joined: 17/05/2016
Location: United States
Posts: 3299
Posted: 12:44pm 31 Mar 2025
Copy link to clipboard 
Print this post

@stef123--another question about hardware. Assuming you have an AI coding model which fits entirely (with room to do calculations) in that NVIDIA Tesla K80 24GB GDDR5 Accelerator GPU, would anything more than nominal memory and CPU be needed--e.g. 16 GB and an i7 or even an i5?

Once trained and loaded, is there anything other than text passed back and forth between CPU and GPU (i.e., text for program design parameters to the GPU, and text code back)?
PicoMite, Armmite F4, SensorKits, MMBasic Hardware, Games, etc. on fruitoftheshed
 
matherp
Guru

Joined: 11/12/2012
Location: United Kingdom
Posts: 10004
Posted: 01:04pm 31 Mar 2025

With my A4000 running a 14b model I see 100% GPU utilisation, about 20% CPU (I7-12700) and no obvious memory usage
 
stef123
Regular Member

Joined: 25/09/2024
Location: United Kingdom
Posts: 83
Posted: 02:21pm 31 Mar 2025

  lizby said  @stef123--another question about hardware. Assuming you have an ai coding model which fits entirely (with room to do calculations) in that NVIDIA Tesla K80 24GB GDDR5 Accelerator GPU, would anything more than nominal memory and CPU be needed--e.g. 16GB and i-7 or even i-5?

Once trained and loaded, is there anything other than text passed back and forth between CPU and GPU (i.e., text for program design parameters to the GPU, and text code back)?


I'm not quite sure how much the CPU gets involved in this entire process, but I doubt it creates much overhead.

What I know is, at least on my system, that one core of the CPU gets utilised when converting the text into tokens (data preparation) and feeding them into the "neural network", i.e. the model itself, residing on the graphics card.

That tokenization/data-preparation step isn't really fast here, but it surely depends on how old the CPU is and which math-acceleration features it provides. My server CPUs, as previously explained, are not good at all by today's standards.

Ollama is (like most other LLM programs) in good part based on Python, and unless it is strictly implemented (or even possible), multicore parallelism isn't used for certain tasks. Some work has to be done outside the graphics card (GPUs naturally can't handle tasks other than number-crunching): tokenizing the input text, overall model handling and so on, but that's negligible.

In my case, a single core of my Xeons is pretty weak by today's standards, and one core of a modern CPU works far faster than mine, so those additional jobs can be done far faster. My CPUs get their strength through multicore usage, where possible.
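The host-side tokenization work described above can be illustrated with a crude token-count estimator (this is a generic sketch, not Ollama's actual tokenizer; real LLM tokenizers use byte-pair encoding, and "about 4 characters per token" is only a common English-text rule of thumb):

```python
# Crude token-count estimate, useful for sizing prompts against a
# context window. Real tokenizers (BPE) do this properly;
# ~4 chars/token is only a rough rule of thumb for English text.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return max(1, round(len(text) / chars_per_token))

prompt = "Write a PicoMite MMBasic program that blinks an LED on GP25."
print(estimate_tokens(prompt))
```

Estimates like this help judge whether a long programming prompt plus its expected output will fit inside the configured context size.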


If you use a model which consumes the entire 24 GB of GPU memory without leaving much space for a context size that fits programming needs (above 2048 tokens), and you then increase the context size, the CPU and system RAM will get involved in the process, and the CPU will hold the GPU back in terms of generated tokens per second.

Depending on the model itself and its quantization, increasing the context size (measured in tokens, each of which can be a word or part of a word) also affects the so-called attention mechanism: a context size of, for example, 32768 results in 32768 x 32768 = 1,073,741,824 pairs of elements for which relationships have to be computed. The additional memory needed for 32768 tokens can range from roughly 512 MB for a model with 4-bit quantization up to 160 GB for an FP32 model, depending on the model's precision and size.
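That scaling can be sketched as a back-of-envelope calculation (the layer/head/dimension numbers below are typical of a ~7B-class transformer and are my assumptions, not the specs of any particular model):

```python
# Back-of-envelope attention scaling and KV-cache size.
# The model shape is a typical ~7B-class assumption, not a real spec.
n_layers, n_kv_heads, head_dim = 32, 32, 128

def attention_pairs(context: int) -> int:
    # Pairwise token relationships grow quadratically with context.
    return context * context

def kv_cache_bytes(context: int, bytes_per_value: int) -> int:
    # 2x for keys and values, per layer, per head, per head dimension.
    return 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_value

ctx = 32768
print(attention_pairs(ctx))                   # 1073741824 pairs
print(kv_cache_bytes(ctx, 2) / 2**30, "GiB")  # 16.0 GiB at FP16
```

The quadratic pair count is why long contexts hurt so much, and the cache-size formula shows why a 4-bit (or 8-bit) KV cache versus FP16/FP32 makes such a large difference in the memory needed.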


In short: yes, you can extend Ollama's ability to handle a larger context size, or use a larger model than the graphics card alone could handle, by utilizing the system CPU and its memory in parallel with the GPU, but at the cost of speed.

The further the model and/or its context size grows beyond 24 GB, the slower it will be overall. The effect would be less severe if you instead used two GPUs with 24 GB each. As said, CPUs are (mostly) still slower than GPUs.

If the model (including context) fits entirely into your graphics card, then not much CPU power will be used at all, as Peter also observed. An i7/i9, as stated, will be quite enough, but if possible go for 32 GB of RAM; that gives you some room for experiments with (somewhat) larger models or context sizes.
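A quick way to estimate whether a model will fit on the card at all (the ~15% overhead factor for activations and runtime buffers is my rule-of-thumb assumption, not a measured figure):

```python
# Rule-of-thumb VRAM estimate: parameters x bytes/parameter, plus
# ~15% overhead for activations and runtime buffers (my assumption).
def model_vram_gb(params_billions: float, bits_per_param: float,
                  overhead: float = 1.15) -> float:
    bytes_total = params_billions * 1e9 * (bits_per_param / 8) * overhead
    return bytes_total / 1e9

# A 14B model at 4-bit quantization vs. full FP16:
print(f"{model_vram_gb(14, 4):.1f} GB")   # ~8 GB: fits a 16 GB A4000
print(f"{model_vram_gb(14, 16):.1f} GB")  # ~32 GB: would spill to CPU/RAM
```

This matches Peter's setup above: a 14b quantized model sits comfortably in a 16 GB A4000, while the same model unquantized would overflow even a 24 GB K80 and force the slow CPU/RAM split.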

Ollama is also able to swap out to virtual memory, but that would be a killer in terms of speed.
Edited 2025-04-01 17:22 by stef123
 
     Page 2 of 3    
Print this page
The Back Shed's forum code is written, and hosted, in Australia.
© JAQ Software 2025