Forum Index : Microcontroller and PC projects : Losing seconds when using CPU nn
Page 1 of 2
Author | Message
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
Greetings,

Did anyone notice that, when used periodically, the CPU <speed> command causes the internal time (TIME$) to slow down, or more exactly to lose seconds? In one of my projects, in order to keep power consumption to a minimum (and no, CPU SLEEP can't be used in this project), the CPU command is issued 6 times per second or so from the main loop, to reduce the CPU speed to 5MHz during a 300ms PAUSE and raise it back to 48MHz after the PAUSE command returns. When doing so, the TIME$ variable loses several seconds (!) every minute, so even resyncing each minute with an RTC IC leaves the time drifting within each minute!

I looked at the code for the firmware and found nothing that could explain such a huge drift; the interrupts are of course disabled during the CPU speed change and re-enabled after, but that's not the only place in the code where such a thing happens (it's also done during serial comms, for example), and it doesn't seem to cause any issue in those other places... |
||||
robert.rozee Guru Joined: 31/12/2012 Location: New Zealand Posts: 2290 |
i suspect no one envisioned the CPU <speed> command being used in quite this way. amongst other things, i can see problems too with the block of flash memory where the speed is stored being rapidly worn out - i've not seen the mmbasic source, but presume that whenever the speed is changed the new value is saved to flash (the setting being persistent).

cheers,
rob :-) |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
Nope, it's not. But you almost gave me a heart attack! There is no trace of a CPU speed byte in the flash-saved MMBASIC configuration, and for the CPU command the manual clearly states: "The default speed of the CPU when power is applied is 40MHz." |
||||
Justplayin Guru Joined: 31/01/2014 Location: United States Posts: 309 |
I just tested this and CPU <speed> is not saved... at least not on a 28-pin Micromite v5.1. I checked the speed and it was reported as 40MHz, then set it to 48MHz and verified the change had taken place. I cycled the power and the speed had returned to 40MHz.

--Curtis

I am not a Mad Scientist... It makes me happy inventing new ways to take over the world!! |
||||
matherp Guru Joined: 11/12/2012 Location: United Kingdom Posts: 8592 |
I think I see how this may be happening, depending on how often you switch speeds.

At 40MHz the timer is decrementing at 20MHz (2:1 prescaler), so the timer reload value for 1 msec is 19999. Now we switch the clock to 5MHz - let's say the timer is at 15000 when we switch. It will then take 6 msec (40/5 * 15000/19999) before the new, smaller reload value (2499) is used, i.e. we will have lost >5 msec. This is then added to the time taken while interrupts are disabled, and I don't know how long the clock takes to switch:

OSCCONbits.OSWEN = 1;
while(OSCCONbits.OSWEN); // switch to it and wait for the switch to complete
|
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
I would expect this to be very fast (it's not as if we were switching to an external crystal oscillator and had to wait for it to start and stabilize): the time for the PLL to lock, I guess. |
||||
robert.rozee Guru Joined: 31/12/2012 Location: New Zealand Posts: 2290 |
oops, my mistake! but at least my comment did get this thread a little attention, and peter has now spotted the reason for the lost time.

perhaps the trick would be to simply issue an RTC GETTIME just before every occurrence of TIME$, effectively not using the micromite's own timekeeping? indeed, an argument could be made for having some way to configure TIME$ so that it always collects the time from the RTC.

cheers,
rob :-) |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
That's actually an interesting idea... The cost, when the RTC read is done from the firmware, would be a bit lower (no 'RTC GETTIME' BASIC instruction to interpret), and the firmware could probably keep track of how many milliseconds have elapsed since the last TIME$ invocation, so as to query the RTC only after at least one second has elapsed (thus avoiding redundant/useless calls)... |
||||
TassyJim Guru Joined: 07/08/2011 Location: Australia Posts: 5913 |
Rather than spending time reading the RTC, I would think there is less overhead in setting the RTC to interrupt once every second and then keeping your own clock registers. Depending on your needs, a simple seconds counter may be all that's needed, converting to hours/minutes/seconds for display purposes only.

Jim
VK7JH
MMedit MMBasic Help |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
Thing is, the project does display the full time and date, so this is not an option. |
||||
matherp Guru Joined: 11/12/2012 Location: United Kingdom Posts: 8592 |
piclover

Please could you try adding the following command either immediately before or after the CPU change to 5MHz:

poke word &HBF800C10,0
and see if that fixes the problem of losing time |
||||
MicroBlocks Guru Joined: 12/05/2012 Location: Thailand Posts: 2209 |
On a current design for a PIC32MX170 28/44-pin module I have room for an RTC (DS3231) which has a SQW pin that can pulse every second. I made a solder bridge so that you can connect this to a pin with a COUNT peripheral, the idea being to read the RTC, reset the counter, and then, when you need an exact time, take the last time read and add the seconds accumulated in the counter. Would that still work with changing CPU speed and pauses? I presumed yes, but I have not tested it yet.

Microblocks. Build with logic. |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
Yep, it pretty much fixes it (losing only one second every several minutes now, which is easily handled with an RTC GETTIME every three minutes or so)! Indeed, resetting the TMR4 counter when lowering the CPU frequency seems to do the trick just fine! Congratulations on this great finding (which I think Geoff could integrate into the firmware, in cmd_cpu() (*)), and thanks a bunch for the solution!

(*) Line 468 of MM_Misc.c, just before the "ClockSpeed = NewClock;", add:

if (NewClock < ClockSpeed) TMR4 = 0; // Reset TMR4 when PR4 is lowered to avoid/reduce clock drift |
||||
matherp Guru Joined: 11/12/2012 Location: United Kingdom Posts: 8592 |
You can easily tune this out by setting timer4 to a non-zero value. If we assume that, when you call CPU 5, there is an equal chance that we are in the first half or the second half of the millisecond, then

poke word &HBF800C10,1250

may work even better. |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
Since there will *always* be some *penalty* (loss of timer ticks) when calling CPU <freq>, because of the disabled interrupts, the wait loop for PLL locking, etc., the TMR4 value should probably be set so that the next tick after the CPU command causes an increment of all the timing variables, so a reset to 0 seems appropriate. |
||||
matherp Guru Joined: 11/12/2012 Location: United Kingdom Posts: 8592 |
The timer interrupts when the value in timer4 exactly equals the reload value in PR4; it then starts counting again from zero. This is why you saw such a big effect before. I think it happens as follows:

reload register at 40MHz = 19999 (you can peek this at &HBF800C20)
reload register at 5MHz = 2499

Say the timer is at 5000 when we issue the CPU command. In this case the timer will count all the way to 0xFFFF before restarting at zero and eventually reaching 2499. So if you issue the write before the CPU command you want a value less than 1250; if after, then somewhat greater than 1250 (but less than 2499).

Of course you should tune the raw clock to be accurate first (OPTION CLOCKTRIM), otherwise there are two different errors involved |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
So, in fact, we should preload TMR4 with the new value of PR4 (in all cases, including when we increase the CPU speed)... |
||||
matherp Guru Joined: 11/12/2012 Location: United Kingdom Posts: 8592 |
No, this would then always run slightly fast. The perfect solution is to read timer4, scale it by the ratio of the before and after CPU speeds, add a fudge factor to compensate for the time taken while interrupts are disabled, and then write it back. However, a much simpler alternative is to use PR4 divided by 2 (+/-). |
||||
piclover Senior Member Joined: 14/06/2015 Location: France Posts: 134 |
[quote]No this will then always run slightly fast.[/quote]

I'm currently testing (with all RTC GETTIME calls commented out, except on program start, before the main loop):

main:
.../...
CPU 5:POKE WORD &hBF800C10,PEEK(WORD &hBF800C20)
PAUSE 300
CPU 48:POKE WORD &hBF800C10,PEEK(WORD &hBF800C20)
.../...
goto main

It doesn't seem to be losing or gaining time for now, but the accuracy of the PIC32 internal oscillator (which, even when trimmed properly, can lose/gain several seconds per hour) makes it hard to evaluate.

[quote]The perfect solution is to read timer4, scale it by the ratio of the before and after CPU speeds, add a fudge factor to compensate for the time taken when interrupts are disabled and then rewrite it.[/quote]

Indeed, but this cannot be done in BASIC (too slow: TMR4 would change during the computations), only in the firmware...

[quote]However, a much simpler alternative is to use PR4 divided by 2 (+/-).[/quote]

Yes, statistically this would give a good result. |
||||
Geoffg Guru Joined: 06/06/2011 Location: Australia Posts: 3165 |
OK, this will be fixed in 5.2 by scaling the timer count to keep it as accurate as possible. Also, while I was at it, I found out how to get 20MHz, 10MHz and 5MHz on the MX470 (cool).

Finally, while we are talking about clock speeds: you have always been able to overclock a 50MHz-spec MX170 to 60MHz. For some reason you must start at 30MHz. Eg:

CPU 30
CPU 60

This is not guaranteed to work but it seems to be OK on most chips.

Geoff

Geoff Graham - http://geoffg.net |
||||