Regarding 12057 RTC accuracy - real-time-clock

I am a beginner in embedded programming. In my project I am using an ISL12057IBZ RTC with a VT-200-F crystal and 5 pF of parallel capacitance. The stray capacitance of the board varies from 7 to 8 pF (we cannot change the PCB design). Under these conditions the RTC loses about 3 seconds every 24 hours.
Please help me reduce this drift.
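For scale, here is a minimal C sketch (no chip-specific registers assumed) that turns the observed drift into a ppm figure and into the interval at which a one-second software correction would be due:

    #include <stdio.h>

    /* Observed drift from the question: the RTC loses ~3 s per 24 h.
     * This converts that into ppm and into a correction interval that
     * host software could use to nudge the RTC; no chip-specific
     * trimming facility is assumed here. */
    int main(void)
    {
        const double drift_seconds = 3.0;          /* seconds lost per day */
        const double day_seconds   = 24.0 * 3600.0;

        double error_ppm = drift_seconds / day_seconds * 1e6;
        /* interval after which the accumulated error reaches 1 second */
        double correction_interval = day_seconds / drift_seconds;

        printf("frequency error : %.1f ppm\n", error_ppm);      /* ~34.7 ppm */
        printf("add 1 second every %.0f s (~%.1f h)\n",
               correction_interval, correction_interval / 3600.0);
        return 0;
    }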

Related

How do GPU/PPU timings work on the Gameboy?

I'm writing a Game Boy emulator and I've come to implementing the graphics. However, I can't quite figure out how it works with the CPU as far as timing/clock cycles go. Does the CPU execute a certain number of cycles (if so, how many) and then hand it off to the GPU? Or is the Game Boy always in an HBlank/VBlank state, with the GPU using the CPU in between them? I can't find any information that helps me with this, only how to use the control registers.
This has been answered at https://forums.nesdev.com/viewtopic.php?f=20&t=17754&p=225009#p225009
It turns out I had it completely wrong; the CPU and PPU work quite differently from what I assumed.
Here is the post:
The Game Boy CPU and PPU run in parallel. The 4.2 MHz master clock is
also the dot clock. It's divided by 2 to form the PPU's 2.1 MHz memory
access clock, and divided by 4 to form a multi-phase 1.05 MHz clock
used by the CPU.
Each scanline is 456 dots (114 CPU cycles) long and consists of mode 2
(OAM search), mode 3 (active picture), and mode 0 (horizontal
blanking). Mode 2 is 80 dots long (2 for each OAM entry), mode 3 is
about 168 plus about 10 more for each sprite on a given line, and mode
0 is the rest. After 144 scanlines are drawn, there are 10 lines of mode 1
(vertical blanking), for a total of 154 lines or 70224 dots per
screen. The CPU can't see VRAM (writes are ignored and reads are $FF)
during mode 3, but it can during other modes. The CPU can't see OAM
during modes 2 and 3, but it can during blanking modes (0 and 1).
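As an aside, here is a minimal C sketch (my own naming, not from any particular emulator) of how the scanline budget described above can be turned into a mode lookup, taking the quoted ~168-dot mode 3 as a flat length and ignoring the per-sprite penalty:

    #include <stdint.h>
    #include <stdio.h>

    #define DOTS_PER_LINE   456
    #define LINES_PER_FRAME 154
    #define VISIBLE_LINES   144

    /* PPU modes as exposed in the STAT register (bits 0-1). */
    enum ppu_mode { MODE0_HBLANK = 0, MODE1_VBLANK = 1,
                    MODE2_OAM    = 2, MODE3_DRAW   = 3 };

    /* dot: 0..70223, position within the current frame. */
    static enum ppu_mode mode_for_dot(uint32_t dot)
    {
        uint32_t line = dot / DOTS_PER_LINE;
        uint32_t x    = dot % DOTS_PER_LINE;

        if (line >= VISIBLE_LINES)
            return MODE1_VBLANK;          /* lines 144..153             */
        if (x < 80)
            return MODE2_OAM;             /* OAM search, 80 dots        */
        if (x < 80 + 168)
            return MODE3_DRAW;            /* assumed flat ~168-dot draw */
        return MODE0_HBLANK;              /* remainder of the line      */
    }

    int main(void)
    {
        /* a few sample positions: OAM search, drawing, HBlank, VBlank */
        uint32_t samples[] = { 0, 200, 400, 144u * DOTS_PER_LINE };
        for (int i = 0; i < 4; i++)
            printf("dot %6u -> mode %d\n",
                   (unsigned)samples[i], mode_for_dot(samples[i]));
        return 0;
    }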
The link gives more of a general answer than implementation specifics, so I want to give my 2 cents.
The CPU is usually the main part of your emulator and the thing that actually counts cycles. Each time your CPU does something that takes some number of cycles, you pass that number of cycles to the other components of your emulator so that they can synchronize themselves.
For example, some CPU instructions read and write memory as part of a single instruction. That means it would take the Game Boy CPU 4 (read) + 4 (write) cycles to complete the instruction. So in the emulator you do the read, pass 4 cycles to the GPU, do the write, pass 4 cycles to the GPU. You do the same for other components that run in parallel with the CPU, such as timers and sound.
It's actually important to do it that way instead of emulating the whole instruction and then synchronizing everything else. I don't know about real ROMs, but there are test ROMs that verify this exact behavior. 8 cycles is a long time, and in the middle of multiple memory accesses some other Game Boy component might make a change.
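To illustrate that tick-passing pattern, here is a small self-contained C sketch; the ppu_tick/apu_tick/timers_tick names and the flat 64 KiB bus are placeholders for whatever your emulator actually uses:

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t mem[0x10000];          /* flat 64 KiB bus, stand-in only */

    /* Hypothetical component steppers; each would advance its component
     * by the given number of master-clock cycles.  Stubbed here. */
    static void ppu_tick(int cycles)    { (void)cycles; }
    static void apu_tick(int cycles)    { (void)cycles; }
    static void timers_tick(int cycles) { (void)cycles; }

    static void tick(int cycles)
    {
        ppu_tick(cycles);
        apu_tick(cycles);
        timers_tick(cycles);
    }

    static uint8_t bus_read(uint16_t a)             { return mem[a]; }
    static void    bus_write(uint16_t a, uint8_t v) { mem[a] = v;    }

    /* The read/write part of a single instruction, synchronizing the
     * other components after each memory access rather than once at
     * the end of the whole instruction. */
    static void read_then_write(uint16_t src, uint16_t dst)
    {
        uint8_t value = bus_read(src);
        tick(4);                          /* 4 cycles for the read  */
        bus_write(dst, value);
        tick(4);                          /* 4 cycles for the write */
    }

    int main(void)
    {
        mem[0x0100] = 0x42;
        read_then_write(0x0100, 0xC000);
        printf("copied: %02X\n", mem[0xC000]);
        return 0;
    }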

RTC calibration in energy meter IC 71M6541F

How do I calibrate the RTC of energy meter IC 71M6541F?
Here are the details:
I am working on a smart-meter project based on the above-mentioned SoC. With the 32.768 kHz crystal the time is nominally accurate, but the crystal has a small deviation, so I see a 2 to 4 second difference from real time over 24 hours. How do I rectify this problem?
The problem is that affordable crystals always have some deviation; typical values are around 30 ppm.
And since each crystal has its own deviation, you either need to calibrate each system individually or use an external sync mechanism.
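If you don't use whatever trim facility the part itself may offer (I am not assuming any particular 71M6541F register here), the usual fallback is a periodic software correction derived from the measured drift. A minimal C sketch using the 2-4 s/day figure from the question:

    #include <stdint.h>
    #include <stdio.h>

    #define DRIFT_MS_PER_DAY   3000     /* RTC runs ~3 s/day slow (midpoint of 2-4 s) */
    #define MS_PER_DAY         (24L * 3600L * 1000L)

    /* Hypothetical hook: however this firmware actually steps its RTC. */
    static void rtc_step_seconds(int32_t step)
    {
        printf("RTC stepped by %+d s\n", (int)step);
    }

    /* Call once per (RTC) second.  Accumulates the known drift and steps
     * the clock forward by one second each time a full second of error
     * has built up. */
    static void rtc_drift_compensate_1hz(void)
    {
        static int64_t acc = 0;

        acc += DRIFT_MS_PER_DAY;        /* error grows by 3000/86400 ms each second */
        if (acc >= MS_PER_DAY) {        /* a whole second of error reached          */
            rtc_step_seconds(+1);       /* clock is slow, so push it forward        */
            acc -= MS_PER_DAY;
        }
    }

    int main(void)                      /* simulate one day of 1 Hz calls */
    {
        for (long s = 0; s < 24L * 3600L; s++)
            rtc_drift_compensate_1hz();
        return 0;
    }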

Does quad-core perform substantially better than a dual-core for web development?

First, I could not ask this on most hardware forums, because they are mostly populated by
gamers. Additionally, it is difficult to get an opinion from sysadmins, because they have a fairly different perspective as well.
So perhaps, amongst developers, I might be able to deduce a realistic trend.
What I want to know is: if I regularly fire up NetBeans/Eclipse, MySQL Workbench, 3 to 5 browsers with multiple tabs, along with Apache/PHP and MySQL running in the background, and perhaps GIMP/Adobe Photoshop from time to time, does a quad-core perform considerably faster than a dual-core, assuming the quad has the slower clock speed (e.g. ~2.8 GHz vs a 3.2 GHz dual-core)?
My only relevant experience is that my old 2.8 GHz Core 2 Duo with 4 GB of RAM performed considerably slower than my new 2.8 GHz quad-core Core i5 (both desktops). That is only one data point, so I can't tell whether it holds true in general.
The end purpose of all this is to help me decide on buying a new laptop (4-core and 2-core models currently differ quite a bit).
http://www.intel.com/content/www/us/en/processor-comparison/comparison-chart.html
I did a comparison for you based on that chart.
Here the quad-core is 2.20 GHz while the dual-core is 2.3 GHz.
Now look at the "Max Turbo Frequency" in that comparison. You will notice that even though the quad-core has a lower base clock, once it hits turbo it passes the dual-core.
The second thing to consider is cache size, which does make a big difference. The quad-core generally has more cache; in this example it has 6 MB, and some have up to 8 MB.
Third is max memory bandwidth: the quad-core has 25.6 GB/s vs 21.3 GB/s for the dual-core, which means faster memory access on the quad.
The fourth factor is graphics: the graphics base frequency is 650 MHz on the quad and 500 MHz on the dual.
Fifth, the graphics max dynamic frequency is 1.30 GHz for the quad and 1.10 GHz for the dual.
The bottom line is that if you can afford it, the quad not only gives you more punch but also lets you add more memory later: the maximum memory size with the quad is 16 GB, while the dual restricts you to 8 GB. Just to be future-proof, I would go with the quad.
One more thing to add: simultaneous thread count is 4 on the dual-core and 8 on the quad, which does make a difference.
The problem with multi-processor/multi-core systems has been, and still is, memory bandwidth. Most applications in daily use have not been written to economize on memory bandwidth, which means that for typical everyday use you will run out of bandwidth whenever your apps are doing something (i.e. not waiting for user input).
Some applications - such as games and parts of operating systems - attempt to address this. Their parallelism loads a chunk of data into a core, spends some time processing it without further memory accesses, and finally writes the modified data back to memory. While a core is processing, the memory bus is free and the other cores can load and store their own data.
In well-designed parallel code, essentially any number of cores can be working on different parts of the same task, so long as each core's memory traffic can be hidden behind the others' processing - roughly, (number of cores - 1) × (read time + write time) per chunk should not exceed the processing time per chunk.
Code designed and balanced for a specific number of cores will be efficient with fewer cores, but not with more.
Some processors have multiple data buses to increase the overall memory bandwidth. This works up to a certain point, after which the next level up - the L3 cache - becomes the bottleneck.
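To put rough numbers on that balance condition, here is a toy calculation in C; all the timing figures are invented purely for illustration:

    #include <stdio.h>

    /* Toy numbers: per chunk, time spent on the shared memory bus
     * versus time spent computing inside the core. */
    int main(void)
    {
        const double read_us = 2.0, write_us = 2.0, process_us = 20.0;
        const double mem_us  = read_us + write_us;

        /* The bus stays out of the way as long as the other cores'
         * memory phases fit inside this core's processing phase:
         *   (cores - 1) * mem_us <= process_us                      */
        int max_cores = (int)(process_us / mem_us) + 1;

        printf("memory %.1f us, compute %.1f us per chunk\n", mem_us, process_us);
        printf("scales to about %d cores before the bus saturates\n", max_cores);
        return 0;
    }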
Even if they ran at equivalent clock speeds, the quad-core can execute up to twice as many instructions per cycle as the dual-core simply because it has twice the cores. 0.4 GHz isn't going to make a huge difference.

Drive high frequency output (32 kHz) on 20 MHz Renesas microcontroller IO pin

I need to drive a 32 kHz square wave on pin 19 of a Renesas R8C/36C microcontroller. The pin is non-negotiable (the circuit design is already complete).
The software design uses a 250 µs interrupt for simulating multitasking, but that is only good for a 2 kHz full wave.
Do I need to create another, higher-priority interrupt for driving the 32 kHz, or is there some other trick that I'm not aware of?
R8C/36C Hardware Manual
R8C/36C Software Manual
I am not familiar with the R8C, and Renesas don't say much on the subject of performance, but it is a CISC processor with typically 4 cycles per instruction, so let's estimate about 4 MIPS. Some instructions are much longer, with division taking up to 30 cycles.
So if you create a 64 kHz timer interrupt and flip the output on each one, you have about 63 instructions between interrupts, out of which you must pay the interrupt latency plus the code to flip the bit. If it works at all, it is likely to constitute a significant CPU load and may affect the timeliness of other operations.
Realistically, without a redesign the project may not be viable. You are already stressing it with the 4 kHz OS tick in my opinion; the software overhead at that rate is likely to be a significant chunk of your CPU load.
[ADDED]
I previously suggested 6 instructions between interrupts - finger trouble on the calculator. I have changed that estimate to 63 and moderated my conclusion to "barely feasible".
However, I looked again at the data sheet: interrupt latency is variable because instruction execution time is variable, and the current instruction must complete before the interrupt is serviced. The worst case is when a DIVX instruction is executing, when it takes up to 51 cycles before the first instruction of the interrupt routine runs. That is 2.55 µs when you need the interrupt to trigger every 15.625 µs, so the variable latency will impose significant jitter and consumes 6 to 16% of your total CPU time, without even considering the time used by the ISR itself. On top of that, if the interrupt itself is pre-empted, or a higher-priority interrupt is running when this one becomes due, further jitter will be imposed.
Whether it works will depend on the accuracy and jitter constraints of the 32 kHz signal, and on whatever else your code needs to get done.
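For reference, the 64 kHz toggle approach sketched above would look roughly like this in C. The register address, bit position and ISR hookup are placeholders, not real R8C/36C symbols; they would have to be mapped to whatever the Renesas device header defines for pin 19 and the chosen timer:

    #include <stdint.h>

    /* Placeholder register definitions - NOT the real R8C/36C names.
     * Replace with the device header's symbols for the port that pin 19
     * belongs to and for the timer you dedicate to this job. */
    volatile uint8_t * const PIN19_PORT = (volatile uint8_t *)0x00E1; /* assumed */
    #define PIN19_BIT  (1u << 3)                                      /* assumed */

    /* Timer ISR, expected to fire every 15.625 us (64 kHz), i.e. twice
     * per 32 kHz period: each call inverts the pin, giving a square
     * wave at half the interrupt rate.  Hook it into the vector table
     * as the hardware manual describes. */
    void timer_32k_isr(void)
    {
        *PIN19_PORT ^= PIN19_BIT;     /* toggle the output pin */
    }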
As many people have pointed out, this design doesn't seem very good from a hardware standpoint if the 32 kHz clock is meant to be generated with a GPIO.
However, I don't know how desperate your situation is, nor do I know the volumes involved. But if it is a prototype or a very short series, and pin 20 is free, you can short-circuit pins 19 and 20, set up pin 19 as an input and pin 20 as an output. Since pin 20 can be used as an output from timer RD, you could set up that timer to output the 32 kHz without using any interrupts.
I am not a Renesas micro expert, but I'm going from what I've seen in the data sheet you attached and previous experience with other MCUs.
I hope this helps.
Looking at the datasheet for that chip, it looks like your only real option is to use pin 19 as a generic output port; no other usable output mode seems to be available on it.
Can you strap pin 19 to another pin that has the hardware to generate 32 kHz, and just make pin 19 an input? Not a proud moment, but it was easy on a DIL package.
Alternatively, you could take an interrupt every 15.6 µs, toggle pin 19 each time, and do the multitasking work on every sixteenth interrupt, but that is likely to be wasteful. With an interrupt rate of 32 kHz you could instead set pin 19 on every interrupt, spend one interrupt in eight doing the multitasking decisions, and in the other seven wait until the point where you can clear pin 19 and then run some background code, for somewhat less than half the CPU time.

Accurate Windows timer? System.Timers.Timer() is limited to 15 msec

I need an accurate timer to interface a Windows application to a piece of lab equipment.
I used System.Timers.Timer() to create a timer that ticks every 10 msec, but this clock runs slow. For example, 1000 ticks with an interval of 10 msec should take 10 wall-clock seconds, but it actually takes more like 20 wall-clock seconds (on my PC). I am guessing this is because System.Timers.Timer() is an interval timer that is reset every time it elapses. Since it will always take some time between when the timer elapses and when it is reset (to another 10 msec), the clock will run slow. This is probably fine if the interval is large (seconds or minutes) but unacceptable for very short intervals.
Is there a function on Windows that will trigger a procedure every time the system clock crosses a 10 msec (or whatever) boundary?
This is a simple console application.
Thanks
Norm
UPDATE: System.Timers.Timer() is extremely inaccurate for small intervals.
I wrote a simple program that counted 10 seconds several ways:
Interval=1, Count=10000, Run time = 160 sec, msec per interval=16
Interval=10, Count=1000, Run time = 16 sec, msec per interval=15
Interval=100, Count=100, Run time = 11 sec, msec per interval=110
Interval=1000, Count=10, Run time = 10 sec, msec per interval=1000
It seems like System.Timers.Timer() cannot tick faster than about 15 msec, regardless of the interval setting.
Note that none of these tests seemed to use any measurable CPU time, so the limit is not the CPU, just a .NET limitation (bug?).
For now I think I can live with an inaccurate timer that triggers a routine every 15 msec or so, with the routine then reading an accurate system time. Kinda strange, but...
I also found a shareware product ZylTimer.NET that claims to be a much more accurate .net timer (resolution of 1-2 msec). This may be what I need. If there is one product there are likely others.
Thanks again.
You need to use a high resolution timer such as QueryPerformanceCounter
On the surface of it the answer is something like "use a high-resolution timer", but that is not quite right: the question needs regular tick generation, and the Windows high-resolution performance counter API does not generate such a tick.
I know this is not an answer in itself, but the popular answer to this question so far is wrong enough that I feel a simple comment on it is not sufficient.
The limitation is given by the system's heartbeat. This typically defaults to 64 beats/s, which is 15.625 ms. However, there are ways to modify these system-wide settings to achieve timer resolutions down to 1 ms, or even 0.5 ms on newer platforms:
Going for 1 ms resolution by means of the multimedia timer interface (timeBeginPeriod()):
See Obtaining and Setting Timer Resolution.
Going to 0.5 ms resolution by means of NtSetTimerResolution():
See Inside Windows NT High Resolution Timers.
You may obtain 0.5 ms resolution by means of the hidden API NtSetTimerResolution().
I've given all the details in this SO answer.
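As an illustration of the 1 ms route, here is a minimal Win32/C sketch (not taken from the linked articles) that raises the timer resolution with timeBeginPeriod() and then runs a nominally 10 ms loop, measuring the actual spacing with QueryPerformanceCounter(); error handling is omitted and the achievable resolution still depends on the platform:

    /* Compile with: cl periodic.c winmm.lib  (or link -lwinmm with MinGW) */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask the OS for 1 ms scheduling/timer resolution for this process. */
        timeBeginPeriod(1);

        LARGE_INTEGER freq, start, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);

        for (int tick = 1; tick <= 100; tick++) {    /* 100 x 10 ms = ~1 s */
            Sleep(10);                               /* now close to 10 ms */
            QueryPerformanceCounter(&now);
            double elapsed_ms = (now.QuadPart - start.QuadPart) * 1000.0
                                / (double)freq.QuadPart;
            printf("tick %3d at %8.3f ms\n", tick, elapsed_ms);
        }

        timeEndPeriod(1);    /* always pair with timeBeginPeriod */
        return 0;
    }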
In System.Diagnostics, you can use the Stopwatch class.
Off the top of my head, I could suggest running a thread that mostly sleeps, but when it wakes, it checks a running QueryPerformanceCounter and occasionally triggers your procedure.
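A rough C sketch of that idea follows; the 10 ms period and the do_work() callback are placeholders. The thread sleeps in short intervals and fires whenever the performance counter crosses the next scheduled boundary, so occasional oversleeps cause jitter but do not accumulate into drift (combine with timeBeginPeriod(1) from the previous sketch for tighter sleeps):

    #include <windows.h>
    #include <stdio.h>

    static void do_work(int n)             /* placeholder for the real handler */
    {
        printf("tick %d\n", n);
    }

    static DWORD WINAPI timer_thread(LPVOID arg)
    {
        (void)arg;
        const double period_ms = 10.0;
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&now);

        LONGLONG period = (LONGLONG)(freq.QuadPart * period_ms / 1000.0);
        LONGLONG next   = now.QuadPart + period;

        for (int n = 1; n <= 100; n++) {   /* ~1 second of 10 ms ticks */
            for (;;) {
                QueryPerformanceCounter(&now);
                if (now.QuadPart >= next)
                    break;
                Sleep(1);                  /* coarse sleep, fine-checked by QPC */
            }
            do_work(n);
            next += period;                /* schedule against the boundary,
                                              not against "now", to avoid drift */
        }
        return 0;
    }

    int main(void)
    {
        HANDLE h = CreateThread(NULL, 0, timer_thread, NULL, 0, NULL);
        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
        return 0;
    }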
There's a nice write up at the MSDN: Implement a Continuously Updating, High-Resolution Time Provider for Windows
Here's the sample source code for the article (C++).