Two analog channels affect each other on a PIC - embedded

I am doing a project to recognize gestures by reading ADC values on a PIC16F73 using embedded C. Everything works fine while using a single ADC channel, but when I use multiple channels the values affect each other. Is this a hardware error or a software problem?

Probably. It's very likely to be one, or the other, or both. Split the problem in half.
Eliminate one at a time. Put a scope or meter on both analog inputs. Change one input - does the other change too? If it does, there is at least a hardware issue. If not, it's software.
This is debugging 101.

It's a hardware effect, but not an error.
From the datasheet:
11.1 A/D Acquisition Requirements
For the A/D converter to meet its specified accuracy, the charge holding capacitor (CHOLD) must be allowed to fully charge to the input channel voltage level. The analog input model is shown in Figure 11-2. The source impedance (RS) and the internal sampling switch (RSS) impedance directly affect the time required to charge the capacitor CHOLD. The sampling switch (RSS) impedance varies over the device voltage (VDD); see Figure 11-2. The source impedance affects the offset voltage at the analog input (due to pin leakage current). The maximum recommended impedance for analog sources is 10 kΩ. After the analog input channel is selected (changed), the acquisition period must pass before the conversion can be started.
To calculate the minimum acquisition time, TACQ, see the PICmicro™ Mid-Range MCU Family Reference Manual (DS33023). In general, however, given a maximum source impedance of 10 kΩ and at a temperature of 100°C, TACQ will be no more than 16 µs.

It will likely be because you have high-impedance sources driving all the ADC pins. When the multiplexer switches from one input to the next, any charge stored on the ADC's sampling capacitor from the previous input will still be there.
If you drive each input with the output of a suitable op amp, then when the ADC's multiplexer switches, the op amp can drive charge into, or pull charge out of, the sampling capacitor, and the settling time needed for the new input can be significantly reduced. Plus, with this method you are not loading the voltage you want to read.
If you cannot drive with a low impedance source, then ensure you have plenty of time for the new input's value to settle.

Related

Need assistance with building a LabVIEW setup for Pressure/Temperature/RPM/Voltage/Amperage

I'm working on assembling a LabVIEW setup that has the ability to measure Pressure, Temperature, RPM, Voltage and Amperage. I believe I have identified the correct modules but am looking for another opinion before giving anyone a lot of money.
Requirements:
Temperature: Able to measure 7 channels of temperatures ranging from
ambient to 300 degrees F.
RPM: Able to measure shaft RPM up to 3600.
Voltage: Able to measure up to 500 Volts, 3 phase AC.
Amperage: Able to measure up to 400 Amps, 3 phase AC.
Pressure: Able to measure 2 channels of various ranges of PSI
(specific transducers to be identified at a later date).
The Gear:
Chassis: cDAQ-9174 with the PS-14.
Temperature: T type thermocouples and NI-9212.
RPM: Monarch Instrument Remote Optical Laser Sensor and NI-9421.
Laser uses 24 volts but returns 19 volts when target is present and 0
volts when the target is not present.
Voltage*: ATO three phase AC voltage sensor ATO-VOS-3AC500 outputting
0-5 volts and either NI-9224 or NI-9252.
Amperage*: 3, Fluke i400 units returning 1mV per Amp and either
NI-9224 or NI-9252.
Pressure: 2, 4-20mA 2 or 3 wire pressure transducers to be identified
at a later date, and either NI-9203 or NI-9253.
*Voltage and amperage will be measured on the same unit
Questions:
RPM: Will the NI-9421 record a pulse of 19 volts?
Voltage and Amperage: What is the difference between the NI-9224 and the
NI-9252, and which one would work best for my application?
Pressure: What is the difference between the NI-9203 and the NI-9253
other than input resolution and which one would work best for my
application? Resolution is not a priority.
Overall: Anything stand out as a red flag?
I have not tried any of this equipment out myself.
Thanks in advance for your expertise and patience.
First things first, I would encourage you to strike up a conversation with whichever NI distributor is local to you. Checking specifications and compatibility between sensors, modules, chassis, etc. is very much in their wheelhouse, and typically falls in the pre-sales phase of discussion so you shouldn't need to spend money to get their expertise.
(Also, if you're new to LabVIEW and NI: I very much recommend checking out the NI forums in addition to Stack Exchange. Both are generally pretty helpful communities!)
One thing I'm not seeing in the requirements you listed, and that would be very helpful, is timing requirements/sample rates. What frequency do you need to sample each of these inputs at, and for how long? How much jitter and skew between samples is acceptable? Building a table of signal characteristics - including the original project specification, the specification in units of the measurement device, the minimum sample rate, analog/digital, and which module the channel is on - will make configuring a chassis to meet your needs a lot easier.
For a cDAQ system, the sample rates you measure at, and how many different ones you can run at one time, are determined by the chassis rather than the module. (PCI/PXI data acquisition cards have the timing engine on the card.) For the cDAQ-9174 you can run multiple tasks per chassis but only one task per module. You may need to group your inputs onto modules that run at similar rates to fit into the available tasks. I put a link to NI's documentation of the cDAQ timing engine at the bottom.
Now to try to summarize the questions:
Homer512 is correct about voltage, 11 V is the ON threshold. However, the NI-9421 can only count pulses up to 10 kHz into the counter. How many pulses are generated per rotation? Napkin math says one pulse per rotation at 3600 RPM is only a 60 Hz pulse stream, so the counter limit only becomes a concern if the sensor emits on the order of 160 or more pulses per revolution. (This is why timing is everything. You also probably don't want to transfer every single pulse to calculate the RPM constantly; more likely you need the counter to sum up the pulses as fast as they happen, and at a slower rate you check to see how many counts went by since your last check-in.)
Homer512 is correct again, NI 9252 has additional hardware filtering before the ADCs. This would be for frequency filters on the input source, not usually something you would use if you're just reading a 5V signal from a sensor.
NI 9203 uses a SAR ADC (200kS/s), NI 9253 uses a Delta-Sigma (50kS/s/ch). Long story short: NI 9253 is more accurate but slower. I'd need more information to make a best for application judgement, specifically numerical requirements on resolution and timing.
Red flags: kinda captured it in the other points, but the project requirements have some gaps. I've had "… is not a priority" requirements and requirements stated in a unit other than the measurement device's (RPM vs. pulses/s or Hz) bite me enough times that I highly recommend having them written down even if they seem blatantly obvious.
Links may move in the future, and the titles are weird, but here are a few relevant NI docs:
"Number of Concurrent Tasks on a CompactDAQ Chassis Gen II" https://www.ni.com/en-us/support/documentation/supplemental/18/number-of-concurrent-tasks-on-a-compactdaq-chassis-gen-ii.html
"cDAQ Module Support for Accessing On-board Counters" https://www.ni.com/en-us/support/documentation/supplemental/18/cdaq-module-support-for-accessing-on-board-counters.html

So while the chip select is enabled, if the clock speed varies but stays within the specified range, will there be SPI communication?

While communicating with the slave, will there be SPI communication if the clock speed and frequency vary?
Usually the specification for the part gives minimum timing requirements for the signal, but it usually gives no upper limit.
That means the time between level changes must be at least what the specification says, but the clock is not required to run at any particular frequency or to have equal periods for different pulses. You can even pause communication by holding the levels unchanged for a long period of time.
Still, some devices may have special requirements on the frequency of pulses and their maximum duration. For example, ADC parts can rely on the SPI clock to perform measurements; uneven or overly long SPI clock periods may influence the result.
So the answer is: in either case, carefully read the datasheet for the part you're using.

How do you measure RSSIs of different parts of the spectrum(like FM, DVB-T and so on) using LabView?

I am doing a project on indoor localisation using fingerprinting. Is it possible to build a system in LabView which can scan the entire spectrum and provide me the RSSI measurements of different types of signals?(say FM, GSM, DVB-T and so on.) In case it has to be done separately, can someone please point me to some resources that would help me to find the RSSIs of say, FM signals? I am new to SDRs and would really appreciate some help. I have used this paper as a reference:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7444902
There can be no general method. RSSI is inherently signal-specific, so for each signal type you will need a different estimator. An estimator that estimates the received signal strength of FM broadcasts will only see noise in a DVB-T signal - incredibly high noise power with no stable carrier at all - whereas a DVB-T RSSI indicator would only see a slowly moving interferer in an FM signal.
I see you're using the indoor-positioning-system tag. That is a bad sign: in an indoor scenario, fading is extremely important, and your received signal strength allows absolutely no conclusions about the distance to the transmitter. This is pretty much the definition of the indoor channel, where lots and lots of reflections overlap and interfere, and there's not always a direct line of sight between transmitter and receiver.
I'm afraid you still have quite some theory to read up on.

Labview: I can't read the voltage from more than one channel (DAQmx read)

I have a SCB 68A connector from National Instruments and I want to read out the open voltage from it. So I used the example code provided by National Instruments (https://decibel.ni.com/content/docs/DOC-28502):
I got 5 mV, which is a reasonable value (I measured the noise signal with an oscilloscope). Now I want to read out the noise signal from a few channels, so I slightly changed the VI (according to the documentation, I need to create an array of channels and flatten them):
But now I read out approximately 200 mV on both channels (and one of them is the same as in the first VI). It doesn't make any sense.
What am I doing wrong?
I want the user to be able to choose the channels, so I can't just write "Dev1/ai0:4".
Edit: I'm using the DAQ 14.0.0.
Edit 2: 1) There is nothing connected to the device; I just want to read out the noise signal.
2) I'm using the connector in the MIO with the disabled temperature sensor mode (the default configuration).
You are observing charge injection from the DAQ device's multiplexer. Connect each aiN terminal to aignd and you will be able to measure the noise of the DAQ device.
Charge Injection
Most NI DAQ boards have a single analog to digital converter (ADC) and provide multiple input channels by using a multiplexer (MUX) to switch the input of the ADC to the different analog input terminals ai0, ai1, etc:
As NI explains, when the DAQ device's multiplexer moves from one channel to the next, it can introduce a small charge on each channel. Since the open channel does not have a path for this charge to dissipate, the voltage of the channel will increase. This can also cause the channel to rail, slowly floating up to the maximum input voltage (usually 10 V).
Characterizing Noise
You can determine the noise of each component in your system by:
Measuring the noise of the DAQ device
Measuring the noise of the DAQ device and terminal block
Subtracting the DAQ device noise (step 1) from the system noise (step 2)
When you're finished, the value from step 1 is the noise of the DAQ device, and the value from step 3 is the noise of the SCB-68.
To measure the noise of an electric path, there must be a complete circuit for the ADC to sample. For step 1, connect each aiN terminal to aignd and run your VI. For step 2, connect the terminal block to the DAQ device, disconnect the sensor, and connect the terminal block's channel terminals to its ground terminal and run your VI.
Minimizing Noise
In addition to charge injection, noise can be introduced to a DAQ system from several sources, including the environment. Open terminals act like small antennas and receive radiated energy from other electronics, lights, and the AC mains.
The link also outlines how to find and minimize noise, but the gist is:
Systematically identify the sources of the noise.
Remove sources of noise that aren't necessary for your measurements.
Depending on the nature and source of the remaining noise, use appropriate shielding, cabling, and terminal configuration.
Over-sample and average the signal.
Please have a look at the links below:
http://forums.ni.com/t5/Multifunction-DAQ/How-to-use-DAQmx-Read-to-measure-multiple-analog-channels/td-p/2620949
http://digital.ni.com/public.nsf/allkb/A3A05920BF915F1486256D210069BE49
They contain a complete solution to your question.

Plot a graph of Time vs RSSI for a 433Mhz RF ASK Receiver

Hi, I'm using the following RF module
http://www.apogeekits.com/rf_receiver_module_rx433.htm
on an embedded board with the PIC16F628A. Sadly, I realized that the signal strength output is in analog form, and I couldn't come up with any ideas for getting the RSSI reading off the pin because, well, my PIC is digital, duh!
My basic idea was
To get the RSSI value from my Receiver
Send it to the PIC
Link the PIC to a PC via RS232
Plot a graph of time vs RSSI of the receiver (so I can make out how close my TX is to my RX)
I thought it was bloody brilliant at first, but I've hit a dead end here. Any ideas on getting the RSSI data to my PC from this receiver would be nice.
Thanks in Advance
You can get a PIC that has an integrated ADC for sampling the analog signal. Or, you can use an external ADC chip to do the conversion. You would connect that to your PIC using SPI or I2C.
The simplest thing to do is obviously to use a more appropriate microcontroller - one with an ADC! There are many (most), including PICs (though that wouldn't be my first choice).
Attaching an external SPI or I2C ADC might be a bit tedious since, having no SPI or I2C peripheral on your part, you'd have to bit-bang it. If you do that, use an SPI part - it's simpler. Your sample rate will suffer and may end up being a bit jittery if you are not careful.
Another solution is to use a voltage-controlled PWM, then use the timer input capture to time the pulse width. That will give you good regularity and potentially good resolution. You can get a chip (example) to do that, or roll your own. That last option requires a triangle wave input as well as the measured (control) voltage, but it's on the same site...
In a similar vein, you could use a low-frequency VCO (example) and use its output to clock one of the timers, then use a second timer to periodically sample the first and reset it. The count will relate to the voltage, though not necessarily linearly; linearisation could be done on the PIC or at the receiving PC - I'd go for the latter, since your micro will suck at arithmetic (performance-wise), even integer arithmetic, especially if it involves division.