I have a program in MATLAB which uses a power profile and a delay profile to model the channel. For example:
power profile = [0 -8 -10 -15]
delay profile = [0 117 183 333]
However, I have to use another path-loss equation, which is
38.7 + 16.7*log10(d) + 18.2*log10(fc) + Xs + Ad
where d is the distance between transmitter and receiver, fc is the carrier frequency, Xs is the shadowing loss, and Ad is some attenuation factor that I want to add. Should I multiply this expression with the power profile?
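For what it's worth, here is a small Python sketch of how that path-loss expression evaluates; the values of d, fc, Xs and Ad are made-up placeholders, and you should check which units the formula expects. Note that the result is in dB, so if the power profile is also in dB the two would normally be combined by adding/subtracting dB values rather than by multiplication, but verify that against the model the formula comes from.

    import numpy as np

    # Placeholder values, only to show how the expression is evaluated
    d = 100.0    # transmitter-receiver distance (check which units the formula expects)
    fc = 2.4e9   # carrier frequency (check whether the formula expects Hz, MHz or GHz)
    Xs = 4.0     # shadowing loss in dB (often modelled as a log-normal random term)
    Ad = 3.0     # additional attenuation in dB

    path_loss_dB = 38.7 + 16.7 * np.log10(d) + 18.2 * np.log10(fc) + Xs + Ad
    print(path_loss_dB)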
I want to create some simple heart rate monitor in LabVIEW.
I have a sensor which gives me the heart signal (upper graph): Waveform
On the second graph (lower graph) is the number of hills (0 = valley, 1 = hill), and those hills are heart beats (it is a voltage waveform). From this I want to get the number of those hills, then multiply that number by 6, and I'll get the heart rate per minute.
Measuring card I use: NI USB-6009.
Any idea how to do that?
I can send a VI file if anyone is able to help me.
You could use the Threshold Peak Detector VI:
This VI does not identify the locations or the amplitudes of peaks
with great accuracy, but the VI does give an idea of where and how
often a signal crosses above a certain threshold value.
You could also use the Waveform Peak Detection VI:
The Waveform Peak Detection VI operates like the array-based Peak
Detector VI. The difference is that this VI's input is a waveform data
type, and the VI has error cluster input and output terminals.
Locations displays the output array of the peaks or valleys, which is
still in terms of the indices of the input waveform. For example, if
one element of Locations is 100, that means that there is a peak or
valley located at index 100 in the data array of the input waveform.
Figure 6 shows you a method for determining the times at which peaks
or valleys occur.
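In code terms, the conversion Figure 6 describes is just index times sample spacing, offset by the waveform's start time. A rough Python sketch of the idea (the t0, dt and location values are invented for the example; this is not the LabVIEW block diagram itself):

    # locations: peak/valley indices returned by the detector
    # t0: start time of the waveform, dt: time between samples (1/Fs)
    t0, dt = 0.0, 0.001
    locations = [100, 350, 612]

    peak_times = [t0 + loc * dt for loc in locations]   # e.g. index 100 -> 0.1 s
    print(peak_times)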
NI has a great tutorial that should answer all your questions; it can be found here:
I had some fun recreating some of your exercise here. I simulated a square wave. In my sample of the square wave, I know how many samples I have and the sampling frequency, so I can calculate how much time my data sample represents. I then count the number of positive edges in the sample, do some division to get beats/second, and multiply to get beats/minute. The sampling frequency, Fs, and the number of samples, N (or #s), are required to calculate your beats-per-minute metric. Their uses are shown below.
The contrived VI
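If it helps to see the same arithmetic outside LabVIEW, here is a rough Python sketch of the counting; Fs, the window length, and the fake 0/1 pulse train are all made up for the example (in your VI the lower graph already provides the 0/1 signal):

    import numpy as np

    fs = 1000.0                     # sampling frequency in Hz (assumed for the example)
    t = np.arange(0, 10, 1.0 / fs)  # 10 s of data, so N / fs = 10 s
    pulse = (np.sin(2 * np.pi * 1.2 * t) > 0.9).astype(int)  # fake 0/1 "hill" signal

    duration_s = len(pulse) / fs                          # time the sample represents
    rising_edges = np.count_nonzero(np.diff(pulse) == 1)  # one rising edge per hill/beat

    beats_per_minute = rising_edges / duration_s * 60
    print(beats_per_minute)   # ~72 for this fake 1.2 Hz pulse train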
Does that lead you to a solution for your application?
At first blush this presumably means:
(1) looking only at lower IR frequencies,
(2) selecting an IR frequency cut-off for the low-frequency buckets of the u/v FFT grid,
(3) once we have that, deriving the power distribution (squares of amplitudes) for that IR range of frequency buckets the camera supports,
(4) fitting that distribution against the Rayleigh-Jeans classical black-body radiation formula (a rough fitting sketch follows this list):
(https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Jeans_law#Other_forms_of_Rayleigh%E2%80%93Jeans_law)
(5) assigning a temperature of 'best fit'.
The units of B(ν, T) are power per unit surface area per unit frequency (per unit solid angle), for a body at equilibrium temperature T; in the Rayleigh-Jeans limit, B(ν, T) = 2 ν^2 k_B T / c^2.
Of course, this leaves many details out, such as (6) cancelling background, etc, but one could perhaps use the opposite facing camera to assist in that. Where buckets do not straddle the temperature of interest, (7) use a one-sided distribution to derive an inferred Gaussian curve to fit the Rayleigh-Jeans curve at that derived central frequency ν, for measured temperature T.
Finally, (8) check whether this procedure can consistently distinguish a high from a low surface temperature, and (9) check whether it can consistently identify a 'fever' temperature (say, 101 Fahrenheit / 38.3 Celsius) when pointed at a forehead.
If all that can be done, (10) Voila! a body fever detector
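As a rough Python sketch of steps (3)-(5): the frequency buckets and "measured" power values below are invented stand-ins for what the camera would provide, and the fit simply exploits the fact that the Rayleigh-Jeans law B(ν, T) = 2 ν^2 k_B T / c^2 is linear in T, so the least-squares best-fit temperature has a closed form.

    import numpy as np

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    c = 2.998e8          # speed of light, m/s

    # (3) power distribution over the IR frequency buckets the camera supports
    #     (made-up numbers standing in for squared FFT amplitudes)
    nu = np.linspace(1e13, 3e13, 50)                 # Hz, a hypothetical IR band
    measured_B = 2.0 * nu**2 * k_B * 310.0 / c**2    # fake data for a ~310 K (body-temperature) source
    measured_B *= 1.0 + 0.05 * np.random.randn(nu.size)   # a little noise

    # (4)-(5) least-squares fit of T in B(nu, T) = (2 nu^2 k_B / c^2) * T
    basis = 2.0 * nu**2 * k_B / c**2
    T_fit = np.dot(basis, measured_B) / np.dot(basis, basis)
    print("best-fit temperature: %.1f K" % T_fit)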
So can those who are capable fill us in on whether this is possible, for eventual posting on an app store as a free COVID-19 safe body-temperature app? I have a strong sense there are quite a few people out there who can verify this in a week or two!
It appears that the analog signal assumed in (1) and (2) is not available in the Android digital Camera2 interface.
The Android RAW image stream, that is uncompressed YUV, is already encoded: Y is the (green-dominated) monochrome luma, and U and V are the blue and red shifts from zero used to convert that monochrome image to colour.
The original analog frequency/energy signal is not immediately accessible, so this adaptation is not possible (yet).
Is there a way that the following process:
https://www.tensorflow.org/performance/quantization
And the call:
tf.contrib.quantize.create_eval_graph()
Could be tuned in the same way that the following call allows?
https://www.tensorflow.org/versions/master/api_docs/python/tf/quantize
I would like the weights to be scaled to 8 bits with symmetric ranges, with an exact 0 and the max/min being a power of 2, as with the SCALED mode. For example, I would prefer -31 to 31 instead of -10 to 30. Even though -10 to 30 would give better resolution at 8 bits, an accurate 0, symmetry, and a power-of-2 range are more important for DSP devices.
TOCO (tf.lite.TocoConverter) so far does not have an option to control the quantization type, while what you actually want is symmetric quantization instead of the asymmetric approach. However, the real value 0.0 is guaranteed to be exact during quantization: it is mapped to a uint8 value q without any rounding error.
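For illustration only, here is a minimal NumPy sketch of the symmetric, power-of-two scaling described in the question; it is not a TensorFlow or TOCO API, just the arithmetic such a scheme would perform:

    import numpy as np

    def quantize_symmetric_pow2(w, num_bits=8):
        # Symmetric quantization: zero maps exactly to 0, the range is +/- qmax,
        # and the scale is rounded up to the nearest power of two so that
        # rescaling is a pure bit-shift on DSP hardware.
        qmax = 2 ** (num_bits - 1) - 1                      # 127 for 8 bits
        max_abs = np.max(np.abs(w))
        scale = 2.0 ** np.ceil(np.log2(max_abs / qmax))     # power-of-two step size
        q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    w = np.random.uniform(-0.6, 0.6, size=(4, 4)).astype(np.float32)
    q, scale = quantize_symmetric_pow2(w)
    w_hat = q.astype(np.float32) * scale    # dequantized approximation of w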
I am implementing the ValveLinear model from the Modelica standard fluid library into a model of mine using Dymola. I have some questions regarding its parameters which I can hopefully clear up:
The key parameters for this valve are as follows:
parameter Medium.MassFlowRate m_flow_nominal
  "Nominal mass flowrate at full opening";
parameter Medium.AbsolutePressure dp_nominal
  "Nominal pressure drop at full opening";
final parameter Types.HydraulicConductance k = m_flow_nominal/dp_nominal
  "Hydraulic conductance at full opening";
Modelica.Blocks.Interfaces.RealInput opening(min=0, max=1)
  "=1: completely open, =0: completely closed";
The mass flow over the valve is then calculated as
m_flow = opening*k*dp;
Am I right in assuming that m_flow_nominal is the maximum mass flow rate, with a linear drop-off in m_flow down to zero as opening goes from 1 to 0?
Furthermore, is dp_nominal the corresponding minimum pressure drop across the valve (i.e. at fully open)? Would we therefore see a linear increase in dp from dp_nominal to some maximum value as opening goes from 1 to 0?
The answer may seem trivial, but I have run some examples with valves in Dymola, and in some cases it seems that dp remains constant across the valve as the opening is varied, which doesn't make sense to me.
The nominal mass flow rate and pressure drop are just design values used to calculate the valve coefficient k (a fixed relation between pressure drop and mass flow). Since no "nominal opening degree" can be specified in ValveLinear, the valve opening at the design point is assumed to be one (fully open valve).
The mass flow rate through the valve is not limited to m_flow_nominal. If you double the pressure drop the mass flow through the valve will double, regardless of the nominal mass flow rate.
An example model is shown below:
m_flow_nominal is 5 kg/s and dp_nominal is 10 bar.
At time = 0 s the (fixed) pressure drop over the valve is 10 bar and the valve is fully open. Therefore, the mass flow through the valve is 5 kg/s.
At time = 1 s the pressure drop over the valve is increased by 50 pct (from 10 to 15 bar). The mass flow increases by 50 pct as well (to 7.5 kg/s).
At time = 3 s the valve opening is reduced by 50 % (from fully to
half open). The pressure drop remains at 15 bar (of course, since
it's a boundary value) while the mass flow rate is reduced to 50 pct (= 3.75 kg/s).
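These numbers follow directly from m_flow = opening*k*dp. As a quick numeric check (plain Python, with bar converted to Pa and the example's values):

    m_flow_nominal = 5.0                 # kg/s
    dp_nominal = 10.0e5                  # 10 bar in Pa
    k = m_flow_nominal / dp_nominal      # hydraulic conductance at full opening

    def m_flow(opening, dp):
        return opening * k * dp

    print(m_flow(1.0, 10.0e5))   # t = 0 s: fully open, 10 bar -> 5.0 kg/s
    print(m_flow(1.0, 15.0e5))   # t = 1 s: fully open, 15 bar -> 7.5 kg/s
    print(m_flow(0.5, 15.0e5))   # t = 3 s: half open, 15 bar  -> 3.75 kg/s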
Regarding your second question: the pressure drop is not limited. If the mass flow through the valve is given as a boundary condition (e.g. if the source in the model is replaced with a MassFlowSource_T) and the mass flow rate is reduced to half of the nominal value (from 5 to 2.5 kg/s), the pressure drop will also be reduced to half of the nominal value (from 10 to 5 bar). If the mass flow rate is zero, so will the pressure drop be.
If, on the other hand, you fix the mass flow rate to a value > 0 kg/s and ramp the valve opening towards zero, the pressure drop will approach infinity.
Best regards,
Rene Just Nielsen
I can change the parameters C and epsilon manually to obtain an optimised result, but I found that there is parameter optimization of SVM by PSO (or any other optimization algorithm), yet no algorithm is given. What does this mean: how can PSO automatically optimize the SVM parameters? I read several papers on this topic, but I'm still not sure.
Particle Swarm Optimization is a technique that uses the ML parameters (SVM parameters, in your case) as its features.
Each "particle" in the swarm is characterized by those parameter values. For instance, you might have initial coordinates of
      degree   epsilon   gamma   C
p1    3        0.001     0.25    1.0
p2    3        0.003     0.20    0.9
p3    2        0.0003    0.30    1.2
p4    4        0.010     0.25    0.5
...
pn    ...
The "fitness" of each particle (p1-p4 shown here, out of a population of n particles) is measured by the accuracy of the resulting model: the PSO algorithm trains and tests a model for each particle and uses that model's error rate in the role of a training-loss value (which is how the fitness is computed).
On each iteration, particles move toward their fittest neighbours. The process repeats until a maximum (hopefully the global one) emerges as the convergence point. In spirit this works like the familiar gradient-descent family of methods, although PSO itself computes no gradients.
There are two basic PSO variants. In gbest (global best), every particle affects every other particle, sort of a universal gravitation principle. It converges quickly, but may well miss a global max in favor of a local max that happened to be nearer to the swarm's original center. In lbest (local best), a particle responds to only its k closest neighbors. This can form localized clusters; it converges more slowly, but is more likely to find the global max in a non-convex space.
I'll try to briefly explain enough to answer your clarification questions. If that doesn't work, I'm afraid you'll probably have to find someone to discuss this in front of a white board.
To use PSO, you have to decide which SVM parameters you'll try to optimize, and how many particles you want to use. PSO is a meta-algorithm, so its features are the SVM parameters. The PSO parameters are the population size (how many particles you want to use), the update neighbourhood (the lbest size and a distance function; gbest is the all-inclusive case), and the velocity (a learning rate for the SVM parameters).
For a bit of illustration, let's assume the particle table above, extended to a population of 20 particles. We'll use lbest with a neighbourhood of 4, and a velocity of 0.1. We choose (randomly, in a grid, or however we think might give us nice results) the initial values of degree, epsilon, gamma, and C for each of the 20 particles.
Each iteration of PSO works like this:
# Train the model described by each particle's "position"
for each of the 20 particles:
    train an SVM with the SVM input and the given parameters
    test the SVM; use its error rate as the PSO loss value for this particle

# Update the particle positions
for each of the 20 particles:
    find the nearest 4 neighbours (using the PSO distance function)
    identify the neighbour with the lowest loss (SVM error rate)
    adjust this particle's features (degree, epsilon, gamma, C)
        0.1 of the way toward that neighbour's features;
        0.1 is our learning rate / velocity

(Yes, I realize that changing degree is not likely to happen, since it's a discrete value, without a special case in the update routine.)

Continue iterating through PSO until the particles have converged to your liking.
gbest is simply lbest with an infinite neighbourhood; in that case, you don't need a distance function on the particle space.
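To make the lbest procedure above concrete, here is a rough Python sketch using scikit-learn. It is simplified to two continuous parameters (gamma and C) so no special case for the discrete degree is needed, it uses the basic "move a fraction toward the fittest neighbour" update described above rather than the full velocity-with-inertia PSO update, and the dataset, population size, neighbourhood size, velocity, and iteration count are just illustrative choices.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    n_particles, n_neighbours, velocity, n_iters = 20, 4, 0.1, 30
    rng = np.random.default_rng(0)

    # Each particle is a point in (log10(gamma), log10(C)) space.
    particles = rng.uniform(low=[-3.0, -2.0], high=[1.0, 2.0], size=(n_particles, 2))

    def loss(p):
        # PSO loss = SVM error rate (1 - cross-validated accuracy) for this particle.
        gamma, C = 10.0 ** p
        return 1.0 - cross_val_score(SVC(kernel="rbf", gamma=gamma, C=C), X, y, cv=3).mean()

    for _ in range(n_iters):
        losses = np.array([loss(p) for p in particles])            # train/test step
        dists = np.linalg.norm(particles[:, None] - particles[None, :], axis=-1)
        new_particles = particles.copy()
        for i in range(n_particles):
            neighbours = np.argsort(dists[i])[1:n_neighbours + 1]  # 4 nearest, excluding itself
            best = neighbours[np.argmin(losses[neighbours])]       # fittest neighbour (lbest)
            new_particles[i] += velocity * (particles[best] - particles[i])
        particles = new_particles

    best = particles[np.argmin([loss(p) for p in particles])]
    print("best gamma = %.4g, best C = %.4g" % tuple(10.0 ** best))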