Gas sensor sensitivity characteristics - air

The gas sensor can detect multiple gases, but if it is exposed to several gases at once, the net resistance indicates only the total gas concentration. If that is the case, then we cannot determine the concentration of a single gas from the given sensitivity graph. How can the concentration of an individual gas be found?


How to count peaks above a specific value on a chart in LabVIEW (Heart Rate Monitor)

I want to create a simple heart rate monitor in LabVIEW.
I have a sensor which gives me the heart waveform (upper graph).
The second (lower) graph is a thresholded version of that voltage waveform (0 - valley, 1 - hill), and the hills are heartbeats. From this I want to count the hills, then multiply that number by 6 to get the heart rate per minute.
The measuring card I use is an NI USB-6009.
Any idea how to do that?
I can send a VI file if anyone is able to help me.
You could use the Threshold Peak Detector VI:
This VI does not identify the locations or the amplitudes of peaks
with great accuracy, but the VI does give an idea of where and how
often a signal crosses above a certain threshold value.
You could also use the Waveform Peak Detection VI:
The Waveform Peak Detection VI operates like the array-based Peak
Detector VI. The difference is that this VI's input is a waveform data
type, and the VI has error cluster input and output terminals.
Locations displays the output array of the peaks or valleys, which is
still in terms of the indices of the input waveform. For example, if
one element of Locations is 100, that means that there is a peak or
valley located at index 100 in the data array of the input waveform.
Figure 6 shows you a method for determining the times at which peaks
or valleys occur.
NI has a great tutorial that should answer all your questions; it can be found here:
I had some fun recreating some of your exercise here. I simulated a square wave. In my sample of the square wave, I know how many samples I have and the sampling frequency, so I can calculate how much time my data sample represents. I then count the number of positive edges in the sample, do some division to get beats/second, and multiply to get beats/minute. The sampling frequency, Fs, and the number of samples, N or #s, are required to calculate your beats-per-minute metric. Their uses are shown below.
The contrived VI
Does that lead you to a solution for your application?
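In text form, the same arithmetic looks like the short Python sketch below (a hypothetical stand-in, not the poster's VI): threshold the waveform, count the rising edges, divide by the capture time and scale to a minute.

import numpy as np

def beats_per_minute(signal, fs, threshold=0.5):
    # Count 0 -> 1 transitions of the thresholded signal and scale to BPM.
    binary = (np.asarray(signal) > threshold).astype(int)
    rising_edges = np.count_nonzero(np.diff(binary) == 1)
    duration_s = len(binary) / fs            # time the N samples represent
    return rising_edges / duration_s * 60.0  # beats/second -> beats/minute

# Example: 10 s of a 1.2 Hz square wave sampled at Fs = 1 kHz -> 72 BPM
t = np.arange(0, 10, 1 / 1000)
square = (np.sin(2 * np.pi * 1.2 * t) > 0).astype(float)
print(beats_per_minute(square, fs=1000))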

Adapting Smartphone Camera to derive Blackbody temperature

At first blush this presumably means:
(1) looking only at lower IR frequencies,
(2) selecting an IR frequency cut-off for the low-frequency buckets of the u/v FFT grid,
(3) once we have that, deriving the power distribution - squares of amplitudes - for the IR range of frequency buckets the camera supports,
(4) fitting that distribution against the classical Rayleigh-Jeans blackbody radiation formula:
(https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Jeans_law#Other_forms_of_Rayleigh%E2%80%93Jeans_law)
(5) assigning a temperature of 'best fit' (see the sketch after this list).
The units of B(ν,T) are power per unit frequency per unit surface area (per unit solid angle) at equilibrium temperature T.
Of course, this leaves many details out, such as (6) cancelling the background, etc., but one could perhaps use the opposite-facing camera to assist with that. Where buckets do not straddle the temperature of interest, (7) use a one-sided distribution to derive an inferred Gaussian curve and fit the Rayleigh-Jeans curve at the derived central frequency ν for the measured temperature T.
Finally, (8) check whether this procedure can consistently distinguish a high from a low surface temperature, and (9) check whether it can consistently identify a 'fever' temperature (say, 101 Fahrenheit / 38.3 Celsius) when pointed at a forehead.
If all that can be done, (10) voila! A body-fever detector.
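If the per-frequency power estimates of step (3) were available, the fit in steps (4) and (5) would be straightforward, because the Rayleigh-Jeans radiance B(ν,T) = 2ν²kT/c² is linear in T. A hypothetical Python sketch (the frequency grid and 'measured' powers below are made-up placeholders):

import numpy as np

k_B = 1.380649e-23    # Boltzmann constant, J/K
c = 2.998e8           # speed of light, m/s

def rayleigh_jeans(nu, T):
    # Spectral radiance per unit frequency, area and solid angle.
    return 2.0 * nu**2 * k_B * T / c**2

def fit_temperature(nu, power):
    # B is linear in T, so least squares reduces to a one-line projection:
    # T* = sum(B1 * P) / sum(B1 * B1), where B1 = B(nu, 1 K).
    basis = rayleigh_jeans(nu, 1.0)
    return np.dot(basis, power) / np.dot(basis, basis)

# Placeholder 'measurement': a 311 K (~38 C) blackbody with 5% noise
nu = np.linspace(1e13, 3e13, 50)       # low-IR frequency buckets
power = rayleigh_jeans(nu, 311.0) * (1 + 0.05 * np.random.randn(nu.size))
print(fit_temperature(nu, power))      # should come back near 311 K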
So can those who are capable fill us in on whether this is possible, for eventual posting at an app store as a free Covid-19 safe body-temperature app? I have a strong sense there are quite a few out there who could verify this in a week or two!
It appears that the analog signal assumed in (1) and (2) is not available in the Android digital Camera2 interface.
The Android RAW image stream, that is, uncompressed YUV, is already encoded: Y is the green monochrome channel, and U and V are blue and red shifts from zero used to convert the green monochrome to colour.
The original analog frequency/energy signal is not immediately accessible, so the adaptation is not possible (yet).

Getting fuel% from analog data

I am getting analog voltage data, in mV, from a fuel gauge. The calibration readings were taken at every 10% change on the fuel gauge, as listed below:
0% - 2000mV
10% - 2100mV
20% - 3200mV
30% - 3645mV
40% - 3755mV
50% - 3922mV
60% - 4300mV
70% - 4500mV
80% - 5210mV
90% - 5400mV
100% - 5800mV
The tank capacity is 45L.
Post calibration, I am getting a reading from the ADC of, let's say, 3000mV. How do I calculate the exact % of fuel left in the tank?
If you plot the transfer function of ADC reading against percentage tank contents, you get a graph like this:
There appears to be a fair degree of non-linearity in the relationship between the sensor and the measured quantity. This could be down to a measurement error made while performing the calibration, or it could be a true non-linear relationship between the sensor reading and the tank contents. Using these results will give fairly inaccurate estimates of tank contents due to the non-linearity of the transfer function.
If the relationship is linear, or can be described by another mathematical relationship, then you can interpolate between known points using that relationship.
If the relationship is not linear, then you will need many more known points in your calibration data so that the errors due to interpolation between points are minimised.
The percentage value corresponding to the ADC reading can be approximated by finding the entries in the calibration table above and below the reading that has been taken - for the example ADC reading in the question these would be the 10% and 20% values:
Interpolation_Proportion = (ADC - ADC_Below) / (ADC_Above - ADC_Below) ;
Percent = Percent_Below + (Interpolation_Proportion * (Percent_Above - Percent_Below)) ;
Applied to the example reading of 3000mV:
Interpolation proportion = (3000-2100)/(3200-2100)
= 900/1100
= 0.82
Percent = 10 + (0.82 * (20 - 10))
= 10 + 8.2
= 18.2%
Capacity = 45 * 18.2 / 100
= 8.19 litres
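The same lookup-and-interpolate procedure, written out as a small Python sketch over the calibration table from the question (function and variable names are my own):

CAL = [(0, 2000), (10, 2100), (20, 3200), (30, 3645), (40, 3755),
       (50, 3922), (60, 4300), (70, 4500), (80, 5210), (90, 5400),
       (100, 5800)]   # (percent, millivolts) calibration pairs

def fuel_percent(adc_mv):
    # Clamp readings outside the calibrated range.
    if adc_mv <= CAL[0][1]:
        return 0.0
    if adc_mv >= CAL[-1][1]:
        return 100.0
    # Find the bracketing calibration points and interpolate linearly.
    for (pct_lo, mv_lo), (pct_hi, mv_hi) in zip(CAL, CAL[1:]):
        if mv_lo <= adc_mv <= mv_hi:
            proportion = (adc_mv - mv_lo) / (mv_hi - mv_lo)
            return pct_lo + proportion * (pct_hi - pct_lo)

pct = fuel_percent(3000)          # -> 18.18%, matching the worked example
print(pct, 45.0 * pct / 100.0)    # -> about 8.2 litres in the 45 L tank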
When plotted, it appears that the data is broadly linear, with some outliers. It is likely that this is experimental error, possibly influenced by confounding factors such as electrical noise or temperature variation, or even just the liquid slopping around! Without details of how the data was gathered and how carefully, it is not possible to determine, but I would ask: how many samples were taken per measurement, were these averaged or instantaneous, and are the results exactly repeatable over more than one experiment?
Assuming the results are "indicative" only, it is probably wisest, from the data you do have, to assume that the transfer function is linear and to perform a linear regression on the scatter plot of your test data. That can be done most easily using any spreadsheet's "trendline" charting function:
From your data, the transfer function is:
Fuel% = (0.0262 x SensormV) - 54.5
So for your example 3000mV, Fuel% = (0.0262 x 3000) - 54.5 = 24.1%
For your 45L tank that equates to about 10.8 Litres.
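For reference, the same trendline can be reproduced outside a spreadsheet with an ordinary least-squares fit; a minimal Python sketch using numpy:

import numpy as np

mv = np.array([2000, 2100, 3200, 3645, 3755, 3922,
               4300, 4500, 5210, 5400, 5800])
pct = np.arange(0, 101, 10)

slope, intercept = np.polyfit(mv, pct, 1)   # degree-1 (linear) fit
print(slope, intercept)                     # roughly 0.0262 and -54.5
print(slope * 3000 + intercept)             # roughly 24% for 3000mV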

NRZ/PM demodulation for an old satellite in GNU Radio

There is an old S-band satellite that I'm trying to receive telemetry data from using a USRP board and GNU Radio. Below are the specs:
Modulation - NRZ/PM
Modulation index - 1.86rad
Data rate - 720896bps
Required bandwidth (taking into account Doppler and carrier drift) - 4367285.12 Hz
Based on the specs above, I found the following aspects challenging. I'm looking for tips on how to proceed.
Sampling
The total required bandwidth, Δf = 4367285.12 Hz, has to be captured. Therefore, I have upsampled the signal by a factor of 16, giving a sample rate of Rs = 69876561.92 Hz. Given that the data rate is R = 720896 bps, the number of samples per symbol becomes sps = Rs/R = 96.93. To get a good sps value, I upsample the signal by 1600 and downsample it by 9693, which gives sps = 16 and is easier to deal with. Is my approach correct? Any suggestions on how to set the USRP clock rate to accommodate this sampling rate would also be appreciated.
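In flowgraph terms, the rate change I have in mind looks roughly like the fragment below (a sketch against the GNU Radio 3.8+ Python API; the null source stands in for the USRP, and block signatures should be checked against the installed version):

from gnuradio import gr, blocks, filter

class resample_stage(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "NRZ/PM resampling sketch")
        # Stand-in for the USRP source running at Rs = 69876561.92 S/s.
        src = blocks.null_source(gr.sizeof_gr_complex)
        # Rational rate change: Rs * 1600 / 9693 ~ 11.53 MS/s ~ 16 sps.
        resamp = filter.rational_resampler_ccc(interpolation=1600,
                                               decimation=9693)
        sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(src, resamp, sink)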
Carrier frequency tracking
In my other satellite applications, I have relied on GPredict for Doppler mitigation, which can't be used in my case [the tracking software is not GPredict]. Doppler shift and carrier drift account for 242 kHz of overall carrier offset. The approach I have in mind is to use something like a phase-locked loop for carrier tracking. An example of how to do this in GNU Radio would be highly appreciated.
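Something like the following fragment is what I picture, using GNU Radio's pll_carriertracking_cc block, which locks to the residual carrier and re-centres it while passing the modulation through; its frequency limits are given in radians per sample (untested, corrections welcome):

import math
from gnuradio import analog

samp_rate = 11.534e6        # post-resampler rate from the sketch above
max_dev_hz = 242e3          # Doppler + drift budget from the specs

loop_bw = 2 * math.pi / 200                    # rule of thumb; needs tuning
max_w = 2 * math.pi * max_dev_hz / samp_rate   # rad/sample limits
pll = analog.pll_carriertracking_cc(loop_bw, max_w, -max_w)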
NRZ/PM Demodulation
To my understanding, this modulation scheme encodes data in the phase of a sinusoid. It is quite different from the standard modulation schemes I'm familiar with, such as PSK, FSK, etc. Any information about this modulation scheme is highly appreciated. Also, there is no ready-made NRZ/PM demodulator block in GNU Radio; any suggestions on how to implement one would also be appreciated.
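My best guess at assembling a detector from stock blocks is below: after the PLL locks to the residual carrier, the NRZ data should sit in the instantaneous phase (roughly +/-1.86 rad swings given the modulation index), so complex_to_arg followed by a binary slicer would give hard bits. Symbol timing recovery (e.g. digital.symbol_sync_ff) and polarity resolution are omitted; corrections welcome:

import math
from gnuradio import gr, blocks, analog, digital

class pm_demod_sketch(gr.top_block):
    def __init__(self, samp_rate=11.534e6):
        gr.top_block.__init__(self, "NRZ/PM demod sketch")
        src = blocks.null_source(gr.sizeof_gr_complex)   # stand-in for USRP
        max_w = 2 * math.pi * 242e3 / samp_rate
        pll = analog.pll_carriertracking_cc(0.035, max_w, -max_w)
        phase = blocks.complex_to_arg()       # +/-1.86 rad phase -> float
        slicer = digital.binary_slicer_fb()   # sign of phase -> 0/1 bytes
        sink = blocks.null_sink(gr.sizeof_char)
        self.connect(src, pll, phase, slicer, sink)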

iBeacons: "Accuracy" unit

What is the unit in which the Core Location framework gives the "accuracy" (distance) for iBeacons? To my knowledge it should be meters, but in my app I have placed some beacons at a distance of 19 meters (63 feet) and the accuracy value reported by the framework sometimes comes out greater than 25.
The unit of CLBeacon.accuracy is in meters, but as you have witnessed, it is only a rough estimate. At short distances of 3 meters or less, the estimate will usually be within a meter. At longer distances it can be off by 10 meters or more.
This error is due to radio noise, multipath and attenuation. Estimation errors are a fundamental limitation of the technology, so you must set expectations appropriately.
Read more here: http://developer.radiusnetworks.com/2014/12/04/fundamentals-of-beacon-ranging.html
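For intuition, the accuracy figure is derived from received signal strength, and a common log-distance path-loss approximation (not necessarily Apple's exact, unpublished formula) looks like this in Python, where txPower is the calibrated RSSI at 1 meter and n the environment's path-loss exponent:

def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_n=2.0):
    # Log-distance path-loss model: d = 10 ** ((txPower - RSSI) / (10 n))
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

print(estimate_distance_m(-59))   # ~1 m at the calibrated power
print(estimate_distance_m(-85))   # ~20 m in free space; noisier indoors

Because the curve flattens at long range, a few dB of RSSI fluctuation at 19 meters translates into several meters of distance error, which is why readings above 25 are unsurprising.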