RTC calibration in energy meter IC 71M6541F - embedded

How do I calibrate the RTC of energy meter IC 71M6541F?
Here are the details:
I am working on the above-mentioned SoC in a smart meter project. With the 32.768 kHz crystal I initially get accurate time with no mismatch against global time, but the crystal has a small deviation, so I end up with a 2 to 4 second difference from real time over 24 hours. How do I rectify this problem?

The problem is that affordable crystals always have some deviation; typical values are around 30 ppm.
And since each crystal has its own deviation, you either need to calibrate each system individually or use an external sync mechanism.
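For a rough sense of the numbers: 2 to 4 seconds per day is about 23 to 46 ppm, which is consistent with an uncompensated 32.768 kHz crystal. Below is a minimal sketch of turning a measured drift into a trim value; the trim step size and the register write are placeholders, since the actual RTC adjustment registers and their resolution are defined in the 71M6541F data sheet.

/* Sketch: convert an observed RTC drift into a ppm error and a trim step
 * count. PPM_PER_STEP and write_rtc_trim() are hypothetical placeholders;
 * consult the 71M6541F data sheet for the real RTC adjustment mechanism. */
#include <stdio.h>

#define SECONDS_PER_DAY  86400.0
#define PPM_PER_STEP     0.1          /* hypothetical trim resolution */

int main(void)
{
    double drift_s_per_day = 3.0;     /* measured: RTC runs 3 s/day slow */
    double error_ppm = drift_s_per_day / SECONDS_PER_DAY * 1e6;   /* ~34.7 ppm */
    int trim_steps = (int)(error_ppm / PPM_PER_STEP + 0.5);

    printf("error: %.1f ppm -> %d trim steps\n", error_ppm, trim_steps);
    /* write_rtc_trim(trim_steps);    hypothetical register write */
    return 0;
}

Each meter would run this kind of calibration once against a reference clock and store the resulting trim value in non-volatile memory.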

Related

Need assistance with building a LabVIEW setup for Pressure/Temperature/RPM/Voltage/Amperage

I'm working on assembling a LabVIEW setup that can measure Pressure, Temperature, RPM, Voltage and Amperage. I believe I have identified the correct modules but am looking for another opinion before giving anyone a lot of money.
Requirements:
Temperature: Able to measure 7 channels of temperatures ranging from ambient to 300 degrees F.
RPM: Able to measure shaft RPM up to 3600.
Voltage: Able to measure up to 500 Volts, 3 phase AC.
Amperage: Able to measure up to 400 Amps, 3 phase AC.
Pressure: Able to measure 2 channels of various ranges of PSI (specific transducers to be identified at a later date).
The Gear:
Chassis: cDAQ-9174 with the PS-14.
Temperature: T type thermocouples and NI-9212.
RPM: Monarch Instrument Remote Optical Laser Sensor and NI-9421. Laser uses 24 volts but returns 19 volts when target is present and 0 volts when the target is not present.
Voltage*: ATO three-phase AC voltage sensor ATO-VOS-3AC500 outputting 0-5 volts and either NI-9224 or NI-9252.
Amperage*: 3× Fluke i400 units returning 1 mV per Amp and either NI-9224 or NI-9252.
Pressure: 2× 4-20 mA 2- or 3-wire pressure transducers to be identified at a later date, and either NI-9203 or NI-9253.
*Voltage and amperage will be measured on the same unit
Questions:
RPM: Will the NI-9421 record a pulse of 19 volts?
Voltage and Amperage: What is the difference between the NI-9224 and the NI-9252, and which one would work best for my application?
Pressure: What is the difference between the NI-9203 and the NI-9253 other than input resolution, and which one would work best for my application? Resolution is not a priority.
Overall: Anything stand out as a red flag?
I have not tried any of this equipment out myself.
Thanks in advance for your expertise and patience.
First things first, I would encourage you to strike up a conversation with whichever NI distributor is local to you. Checking specifications and compatibility between sensors, modules, chassis, etc. is very much in their wheelhouse, and typically falls in the pre-sales phase of discussion so you shouldn't need to spend money to get their expertise.
(Also, if you're new to LabVIEW and NI: I very much recommend checking out the NI forums in addition to Stack Exchange. Both are generally pretty helpful communities!)
One thing I'm not seeing in the requirements you listed that would be very helpful is timing requirements/sample rates. What frequency do you need to sample each of these inputs at, and for how long? How much jitter and skew between samples is acceptable? Building a table of signal characteristics including the original project specification, the specification in units of the measurement device, minimum sample rate, analog/digital, and which module the channel is on will make configuring a chassis to meet your needs a lot easier.
For a cDAQ system, the sample rates you measure at, and how many different ones you can run at one time, are determined by the chassis rather than the module. (PCI/PXI data acquisition cards have the timing engine on the card.) For the cDAQ-9174 you can run multiple tasks per chassis but only one task per module. You may need to group your inputs onto modules that run at similar rates to fit into the available tasks. I put a link to NI's documentation of the cDAQ timing engine at the bottom.
Now to try to summarize the questions:
Homer512 is correct about voltage: 11 V is the ON threshold. However, the NI-9421 can only count pulses up to 10 kHz into the counter. How many pulses are generated per rotation? Napkin math says one pulse per rotation at 3600 RPM is only a 60 Hz pulse stream, but a sensor or reflective target that produces many pulses per rotation pushes you toward that 10 kHz limit. (This is why timing is everything. You also probably don't want to transfer every single pulse to calculate the RPM constantly; more likely you want the counter to sum up the pulses as fast as they happen, and at a slower rate you check how many counts went by since your last check-in, as sketched below.)
Homer512 is correct again: the NI-9252 has additional hardware filtering before the ADCs. This is for frequency filtering on the input source, and not usually something you would use if you're just reading a 5 V signal from a sensor.
The NI-9203 uses a SAR ADC (200 kS/s), while the NI-9253 uses a delta-sigma ADC (50 kS/s/ch). Long story short: the NI-9253 is more accurate but slower. I'd need more information to make a best-for-application judgement, specifically numerical requirements on resolution and timing.
Red flags: I kinda captured them in the other points, but the project requirements have some gaps. I've had "...is not a priority" requirements and requirements stated in a unit other than what the measurement device reports (RPM vs. pulses/s or Hz) bite me enough times that I highly recommend writing it all down, even when it's blatantly obvious.
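To make the counter idea above concrete, here is a rough generic sketch of deriving RPM from periodic counter snapshots. It is not NI/DAQmx code; read_pulse_counter() is a made-up stand-in for whatever counter read your driver provides.

/* Generic sketch of deriving RPM from a free-running pulse counter that the
 * application samples at a fixed interval. Not NI/DAQmx code;
 * read_pulse_counter() is a made-up stand-in for the driver's counter read. */
#include <stdint.h>
#include <stdio.h>

#define PULSES_PER_REV   1.0     /* depends on the optical sensor/target setup */
#define READ_INTERVAL_S  0.5     /* how often the application checks the counter */

static uint32_t read_pulse_counter(void)   /* simulated counter for demonstration */
{
    static uint32_t count = 0;
    return count += 30;                    /* pretend 30 pulses arrived per call */
}

int main(void)
{
    uint32_t prev  = read_pulse_counter();
    uint32_t now   = read_pulse_counter();
    uint32_t delta = now - prev;                        /* unsigned wrap-safe */
    double rpm = (delta / PULSES_PER_REV) / READ_INTERVAL_S * 60.0;
    printf("%u pulses in %.1f s -> %.0f RPM\n", delta, READ_INTERVAL_S, rpm);
    return 0;
}

With one pulse per revolution, 30 pulses in 0.5 s works out to 3600 RPM; only the count deltas cross to the host, not individual pulses.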
Links may move in the future, and the titles are weird, but here are a few relevant NI docs:
"Number of Concurrent Tasks on a CompactDAQ Chassis Gen II" https://www.ni.com/en-us/support/documentation/supplemental/18/number-of-concurrent-tasks-on-a-compactdaq-chassis-gen-ii.html
"cDAQ Module Support for Accessing On-board Counters" https://www.ni.com/en-us/support/documentation/supplemental/18/cdaq-module-support-for-accessing-on-board-counters.html

overhead of changing sensor sampling time

I am looking for the potential overhead of changing the sampling time (not the sampling rate) of sensors in embedded systems/robotics/IoT. For example, say the sensor is a camera connected to a Raspberry Pi capturing pictures every 100 ms at 0 ms, 100 ms, 200 ms, 300 ms, ... What will happen if I change the sampling times to 50 ms, 150 ms, 250 ms, ...? Do I have to initialize the sensor again, or does it reduce the performance of applications that are using the sensor? Any example is appreciated; you can consider any other type of sensor as well if you have a sensor/system in mind. Again, I am looking for the potential overhead of changing the sampling time.
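Whether the sensor itself has to be reconfigured is entirely device- and driver-specific (a camera pipeline may need its capture loop restarted, while a polled I2C sensor usually does not). At the pure scheduling layer, though, shifting the sampling time is just a matter of moving the next deadline, as in this rough Linux sketch, where capture_frame() is a hypothetical placeholder for the actual sensor read:

/* Rough Linux sketch: a 100 ms sampling loop driven by absolute deadlines.
 * Shifting the sampling *time* (phase) by 50 ms only moves the next deadline;
 * nothing about the sensor is reinitialized at this layer. Whether the sensor
 * or its driver needs reconfiguration is device-specific. capture_frame() is
 * a hypothetical placeholder for the camera/sensor read. */
#define _GNU_SOURCE
#include <time.h>

#define PERIOD_NS  100000000L   /* 100 ms */

static void capture_frame(void) { /* read the sensor here */ }

static void add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) { t->tv_nsec -= 1000000000L; t->tv_sec++; }
}

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 20; i++) {
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        capture_frame();                    /* sample at the current deadline */
        if (i == 9)
            add_ns(&next, PERIOD_NS / 2);   /* shift the phase by 50 ms, once */
        add_ns(&next, PERIOD_NS);           /* same 100 ms period as before */
    }
    return 0;
}

In a setup like this the 50 ms shift costs essentially nothing at the scheduling level; any real overhead comes from the sensor or driver side (re-arming triggers, flushing a frame queue, etc.), which is why there is no single general answer.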

Regarding 12057 RTC accuracy

I am a beginner in embedded programming. In my project I am using the ISL12057IBZ RTC; we connected a VT-200-F crystal to it with 5 pF of parallel capacitance. The stray capacitance of the board varies from 7-8 pF (we can't change the PCB design). In this condition the RTC falls behind by 3 seconds every 24 hours.
Please help me reduce this time delay problem.
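Losing 3 seconds per 24 hours is about 35 ppm slow, which is plausible if the effective load capacitance is well above what the VT-200-F expects. If the board really cannot be changed and the ISL12057 offers no suitable trim, one common workaround is a periodic software correction applied by the host MCU; a rough sketch follows, where rtc_add_seconds() and rtc_hours_elapsed() are hypothetical hooks into your own RTC driver code.

/* Rough sketch of a periodic software correction for an RTC that runs a known
 * amount slow (here, 3 s per 24 h, i.e. ~35 ppm). rtc_add_seconds() and
 * rtc_hours_elapsed() are hypothetical hooks into your own RTC driver. */
#include <stdint.h>

#define DRIFT_SECONDS_PER_DAY  3     /* measured: RTC loses 3 s every 24 h */
#define CORRECTION_INTERVAL_H  (24 / DRIFT_SECONDS_PER_DAY)   /* add 1 s every 8 h */

extern void     rtc_add_seconds(int s);      /* hypothetical driver call */
extern uint32_t rtc_hours_elapsed(void);     /* hypothetical: hours since last correction */

void rtc_drift_correction_poll(void)
{
    if (rtc_hours_elapsed() >= CORRECTION_INTERVAL_H)
        rtc_add_seconds(1);                  /* nudge the clock forward by 1 s */
}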

ALSA Sample Rate drift vs monotonic clock

I have an application that samples audio at 8 kHz using ALSA. This is set via snd_pcm_hw_params() and can be confirmed by looking at /proc:
cat /proc/asound/card1/pcm0c/sub0/hw_params
access: MMAP_INTERLEAVED
format: S32_LE
subformat: STD
channels: 12
rate: 8000 (8000/1)
period_size: 400
buffer_size: 1200
The count of samples read over time is effectively a monotonic clock.
If I compare the number of samples read with the system monotonic clock I note there is a drift over time. The sample clock appears to lose 1s roughly every 5 hours relative to the monotonic clock.
I have code to compensate for this at the application level (i.e. to correctly map sample counts to wall clock times) but I am wondering if we can or why we can't do better at a lower level?
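For reference, that kind of application-level compensation can be as simple as estimating the effective sample rate against CLOCK_MONOTONIC and using it to timestamp samples. The sketch below illustrates the idea; it is not the code described above.

/* Illustration of application-level compensation: estimate the effective
 * sample rate against CLOCK_MONOTONIC and use it to map sample counts to
 * monotonic timestamps. Not the original application's code. */
#include <stdint.h>
#include <time.h>

static double t0;                          /* monotonic time of sample 0 */
static double effective_rate = 8000.0;     /* nominal rate until re-estimated */

static double now_monotonic(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

void capture_started(void)
{
    t0 = now_monotonic();
}

void on_samples_read(uint64_t total_samples_read)
{
    double elapsed = now_monotonic() - t0;
    if (elapsed > 60.0)                                   /* wait for enough data */
        effective_rate = total_samples_read / elapsed;    /* e.g. ~7999.56 Hz */
}

double sample_index_to_monotonic_time(uint64_t sample_index)
{
    return t0 + sample_index / effective_rate;
}

Losing 1 s every 5 hours is roughly 56 ppm, which is on the large side for a crystal but plausible for an uncompensated on-board oscillator; when you read the hardware device directly, ALSA does not resample or drop samples to hide that error.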
Both clocks are based on oscillators of some kind which may have some small error. So likely we are sampling at, say, 7999.5 Hz rather than 8000 Hz and the error builds up over time. Equally, the system clock may have some small error in it.
The system clock is corrected periodically by NTP, so perhaps it can be permitted more error, but even so this deviation seems much larger than I would intuitively expect.
However, see for example http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
In theory NTP can generate a drift file which you could use to see the drift rate of your system clock.
I would have thought that, knowing there is some small error, something would try to autocorrect itself, either by swapping between two differently wrong sample rates, e.g. 8000.5 Hz and 7999.5 Hz, or by dropping the occasional sample. In fact I thought this kind of thing was done at the hardware or firmware level in order to stabilize the average frequency given a crystal with a known error.
Also I would have thought quartz crystals are put in circuits these days with at least temperature compensation.

Crystal core MPU Clock rate differences

I have an embedded system which on boot-up shows the following:
Clocking rate (Crystal/Core/MPU): 12.0/400/1000 MHz
Can anybody explain the differences between these three clock rates?
The processor is an ARMv7, OMAP3xxx.
As Clement mentioned, the 12.0 is the frequency in MHz of the external oscillator. Core and MPU are the frequencies of the internal PLLs.
The MPU is the Microprocessor Unit Subsystem. This is the actual Cortex-A8 core as well as some closely related peripherals. So your MPU is running at 1000 MHz or 1 GHz. This is similar to the CPU frequency in your computer.
In the AM335x, the Core PLL is responsible for the following subsystems: SGX, EMAC, L3S, L3F, L4F, L4_PER, L4_WKUP, PRUSS IEP, DebugSS. The subsystems may differ slightly based on the particular chip you are working with. Yours is running at 400 MHz. This can be thought of as similar to the Front Side Bus (FSB) frequency in your computer, though the analogy isn't exact.
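As a purely illustrative example of how a PLL turns the 12 MHz reference into those rates (the multiplier/divider values below are made up for illustration; the real OMAP3 DPLL register fields and formula are in the TRM):

/* Illustration only: how a PLL-style multiply/divide turns a 12 MHz crystal
 * reference into the Core and MPU clocks. The multiplier/divider values are
 * made-up examples, not the actual OMAP3 DPLL register settings (see the TRM). */
#include <stdio.h>

static double pll_out(double f_ref_mhz, int mult, int div)
{
    return f_ref_mhz * mult / div;
}

int main(void)
{
    double xtal = 12.0;                                /* MHz, external crystal */
    printf("Core: %.0f MHz\n", pll_out(xtal, 100, 3)); /* 12 * 100 / 3 = 400  */
    printf("MPU : %.0f MHz\n", pll_out(xtal, 250, 3)); /* 12 * 250 / 3 = 1000 */
    return 0;
}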
12 MHz is the frequency of the crystal oscillator present on the board to give a time reference.
A TI OMAP contains 2 cores: an ARM and a DSP. The terminology used here is not clear, but it may be the frequencies of these cores. Check your datasheet to be sure.