Calculate A-weighting and C-weighting values - objective-c

I am working on fetching the current peak and average peak values using Core Audio; however, I would also like to fetch A-weighted, flat, and C-weighted dB values. I've searched the documentation, but failed to find anything on this concept.

Core Audio does not provide such options. You can filter the signal first, then calculate the peak/average values from the filtered signal.

You could apply (full- or third-) octave band filters, calculate the signal level in each band, and add the standard tabulated A- or C-weighting value for each band.
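For illustration, here is a minimal Python sketch of that approach (the question is Objective-C, but the arithmetic translates directly). It evaluates the closed-form IEC 61672 A-weighting curve at each band's center frequency and sums the weighted band levels as powers; the band levels themselves are hypothetical inputs, and C-weighting has an analogous closed form:

import math

def a_weight_db(f):
    # IEC 61672 analogue A-weighting response at frequency f (Hz)
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0  # +2.0 dB normalizes to 0 dB at 1 kHz

def weighted_level_db(band_levels_db, centers_hz):
    # add the weighting to each band level, then sum the bands as powers
    total = sum(10.0 ** ((level + a_weight_db(f)) / 10.0)
                for level, f in zip(band_levels_db, centers_hz))
    return 10.0 * math.log10(total)

# hypothetical octave-band levels measured from the filtered signal
centers = [63, 125, 250, 500, 1000, 2000, 4000, 8000]
levels = [70, 68, 66, 65, 64, 62, 60, 55]
print(weighted_level_db(levels, centers))  # single A-weighted dB figure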

Related

Detect breakdown voltage in an AC waveform

I need to monitor an AC voltage waveform and record the RMS value when breakdown happens. I roughly know how to acquire the data from videos I have watched; however, it is difficult for me to produce a solution that reads the breakdown voltage value. Ideally, I would also take a screenshot along with the breakdown voltage value.
In case you are not familiar with this topic: when a breakdown happens, the voltage drops immediately to zero. So what I need is to measure the voltage just before it falls to zero, and if possible take a screenshot. (The accompanying image shows a normal waveform in black with a breakdown one in red.)
Naive solution*:
Take the data and get the Y values (this would depend on the datatype you have, which would depend on how you acquire the data).
Find the breakdown point by iterating over the values and maintaining a couple of flags (I would probably say track "got higher than X" and once that's true, track "got lower than Y").
From that, I would just say take the last N points (Get Array Subset) and get the array max. Or just track the maximum value as you run.
Assuming you have the graph in a control, you can just right click and select Create>>Invoke Node>>Export Image.
I would suggest playing with this in a VI with static data, which you can run repeatedly to check how your code behaves (the Python sketch after this answer spells out the same logic).
*I don't know the problem domain and am not overly familiar with the various analysis VIs that ship with LV, so there are quite possibly more efficient ways of doing this.
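Since LabVIEW is graphical, here is the same flag-based logic as a minimal Python sketch. The thresholds x_high and y_low and the window size n are hypothetical stand-ins for values you would tune, and the function assumes you feed it an envelope or RMS trace rather than the raw AC samples (a raw AC signal crosses zero every half-cycle, which would trip the second flag immediately):

def find_breakdown(samples, x_high, y_low, n):
    # flag 1 ("armed"): the signal has exceeded x_high at least once
    # flag 2 (the return): it has since fallen below y_low, i.e. breakdown
    armed = False
    recent = []  # rolling window of the last n points (Get Array Subset)
    for i, v in enumerate(samples):
        recent.append(v)
        if len(recent) > n:
            recent.pop(0)
        if not armed:
            armed = v > x_high
        elif v < y_low:
            # breakdown detected: report the peak just before the drop
            return i, max(recent)
    return None  # no breakdown in this data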

What's the best way to account for missing records when performing aggregate queries?

I have a table in QuestDB with IoT sensor data. The usual operation pattern is that sensors write info to a table while they have an active internet connection. This means they may be sending me data anywhere from a few minutes to a few hours per day, or constantly. When I want to run an aggregate query on top of this, how can I account for missing values?
If I want an average by minute over a 24-hour period, but 4 hours of data are missing, will my results be skewed? For example:
select avg(tempFahren) from (iot_logger timestamp(ts)) sample by 1m
When graphing, it becomes obvious that I'm skipping directly to the next reported value: instead of a cyclical pattern, I get a sudden cliff when the sensor comes online again.
If you want to fill missing values, there is also the option to use the FILL keyword in SAMPLE BY aggregations. There are a few ways you can use this, such as filling with the previous value, interpolating linearly, or specifying a constant:
select ts, avg(tempFahren) from (iot_logger timestamp(ts)) sample by 1m fill(linear);
There are some more examples of how to use this in the official documentation.
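If it helps to see the effect outside QuestDB, here is a minimal pandas sketch of the same idea; the data is made up, and ffill and interpolate stand in for FILL(PREV) and FILL(LINEAR) respectively:

import numpy as np
import pandas as pd

# hypothetical day of per-minute readings with a 4-hour outage
idx = pd.date_range("2021-01-01", periods=24 * 60, freq="min")
temps = pd.Series(70 + 10 * np.sin(np.linspace(0, 2 * np.pi, len(idx))), index=idx)
temps["2021-01-01 08:00":"2021-01-01 12:00"] = np.nan  # sensor offline

per_min = temps.resample("1min").mean()    # gaps stay empty: the "cliff"
fill_prev = per_min.ffill()                # like SAMPLE BY ... FILL(PREV)
fill_linear = per_min.interpolate("time")  # like SAMPLE BY ... FILL(LINEAR)

Plotting fill_linear instead of per_min restores the smooth cyclical shape across the outage.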
Aggregation functions like avg() ignore missing data (for example null values).
So no, your results will not be skewed if your sensors do not send data for some time.

python - pandas - dataframe - data padding multidimensional statistics

I have a dataframe with columns accounting for different characteristics of stars and rows accounting for measurements of different stars, something like this:
property   A   A_error   B   B_error   C   C_error   ...
star1
star2
star3
...
In some measurements the error for a specific property is -1.00, which means the measurement was faulty.
In such a case I want to discard the measurement.
One way to do so is by eliminating the entire row (along with the other properties whose error was not -1.00).
I think it's possible instead to fill in the faulty measurement with a value generated from the distribution of all the other measurements; that is, given the other properties, which are fine, this property should take the value that reduces the error of the entire dataset.
Is there a proper name for the idea I'm referring to?
How would you apply such an algorithm?
I'm a student on a solo project, so I would really appreciate answers that also elaborate on the theory (:
Edit
After further reading, I think what I was referring to is called regression imputation.
So I guess my question is: how can I implement multidimensional linear regression on a dataframe in the most efficient way?
Thanks!
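One practical route is scikit-learn's IterativeImputer, which regresses each column with missing values on the other columns, which is essentially the regression imputation you found. A minimal sketch, assuming the column layout from above (A, A_error, B, ...) and a hypothetical stars.csv holding the data:

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("stars.csv")  # hypothetical file with the layout shown above
props = ["A", "B", "C"]        # hypothetical property column names

# mark faulty measurements (error == -1.00) as missing
for p in props:
    df.loc[df[f"{p}_error"] == -1.00, p] = np.nan

# regression imputation: each property with NaNs is modeled as a function
# of the other properties, iterating until the filled-in values stabilize
df[props] = IterativeImputer(max_iter=10, random_state=0).fit_transform(df[props])

By default IterativeImputer fits a Bayesian linear regression per column, so this is multidimensional linear regression imputation; you can swap in other estimators if you want something nonlinear.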

GPS error value on Eclipse

I am working on a project that gets the longitude and latitude, and it works everywhere, but I need to get the error estimate, at least to draw a circle around my GPS point. How can I find this error in Eclipse?
It is known that the total potential error is around 15 m and the total typical error is around 10 m. Do I need the average deviation to find the error?
I just need to know how to find the accuracy of my GPS and what values I can fetch using Eclipse.
Thanks a lot.
Not sure about your "Eclipse" part. As far as GPS error goes, there are some things to consider...
Some receivers can output an NMEA GST message, which includes error estimates.
You can watch GDOP and HDOP values to get an idea, as well as the number of satellites tracked.
Assuming a GDOP < 6 and an HDOP < 3 with more than 4 tracked satellites, you should achieve the receiver's specified RMS accuracy.
Another, more complicated method, which should be the basis for the results in the aforementioned GST message, is range residuals. Advanced receivers may output these RRE measurements for each satellite. Using this information you can even detect errors such as multipath and RF interference. RRE values tell how well a satellite's range measurements match the resolved position result.
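As an illustration of the GST route, here is a minimal Python sketch that pulls the error estimates out of an NMEA GST sentence. The field layout follows the NMEA 0183 GST definition, and the sample sentence is only an example:

def parse_gst(sentence):
    # $--GST,hhmmss.ss,rms,err_maj,err_min,orient,lat_sd,lon_sd,alt_sd*cs
    fields = sentence.split("*")[0].split(",")
    return {
        "utc": fields[1],
        "rms_residual_m": float(fields[2]),  # RMS of the pseudorange residuals
        "lat_sigma_m": float(fields[6]),     # 1-sigma latitude error, meters
        "lon_sigma_m": float(fields[7]),     # 1-sigma longitude error, meters
        "alt_sigma_m": float(fields[8]),     # 1-sigma altitude error, meters
    }

gst = parse_gst("$GPGST,172814.0,0.006,0.023,0.020,273.6,0.023,0.020,0.031*6A")
radius_m = max(gst["lat_sigma_m"], gst["lon_sigma_m"])  # crude circle radius

Since those are 1-sigma figures, scale the radius up (say, by 2) if you want a more conservative circle around the fix.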

How to distinguish between master data and calculated interpolated data?

I'm getting a bunch of vectors with data points for a fixed set of values. Below you see an example of a vector with a value per time point:
1D:2
2D:
7D:5
1M:6
6M:6.5
But alas, a value is not available for all of the time points. All vectors are stored in a database, and with a trigger we calculate the missing values by interpolation, or possibly a more advanced algorithm. Somehow I want to be able to tell which data points have been calculated and which were originally delivered to us. Of course I can add a flag column to the table, with values indicating whether the value was a master value or a calculated one, but I'm wondering whether there is a more sophisticated way. We probably won't need to make this determination on a regular basis, so CPU cycles are not an issue, either for the determination or for the insertion.
The example above shows some nice-looking numbers, but in reality they would look more like 3.1415966533.
The database used for storage is Oracle 10.
Cheers.
Could you deactivate the trigger temporarily?