CoreWLAN - RSSI value difference between CWNetwork and CWInterface - objective-c

I'm using the CoreWLan framework for Mac OS X to read RSSI values from an access point.
I can do it two different ways:
Using the Interface (that is connected to my network)
currentInterface = [CWInterface interface];
[currentInterface rssiValue];
Using the network:
currentInterface = [CWInterface interface];
networks = [[currentInterface scanForNetworksWithName:@"mySSID" error:nil] allObjects];
[networks[0] rssiValue];
However, these two methods (which should give the same value, since the interface is connected to that network) give different results. The second method consistently reports values 3-5 dB stronger than the first.
Any ideas as to why this discrepancy is happening? Which one is more "legitimate"?

The rssiValue property of the current interface (first case) gives you an aggregate RSSI.
The rssiValue of a scan result (second case) gives you the RSSI at the moment of the scan.
That's why, in general, the first one is lower than the second. But sometimes you can also see very low RSSI values in scan results because of interference.
And there is another thing: there can be several access points with the same SSID, in which case you can't be sure that networks[0] is the one you are currently connected to. You should check the bssid value first, as in the sketch below.
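For example, a minimal sketch of that BSSID check (assuming the interface is associated; @"mySSID" is a placeholder):
#import <CoreWLAN/CoreWLAN.h>

CWInterface *currentInterface = [CWInterface interface];
NSString *connectedBSSID = [currentInterface bssid];

// Only trust the scan result that matches the AP we are actually associated with.
for (CWNetwork *network in [currentInterface scanForNetworksWithName:@"mySSID" error:nil]) {
    if ([[network bssid] isEqualToString:connectedBSSID]) {
        NSLog(@"RSSI of the connected AP at scan time: %ld", (long)[network rssiValue]);
    }
}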


SUMO - simulating a traffic scenario

How can I simulate continuous traffic flow from historical data which consists of:
1. Vehicle ID;
2. Speed;
3. Coordinates
without knowing the routes of each vehicle ID.
This is a commonly asked question but probably hasn't been answered here before. Unfortunately the answer largely depends on the quality of your input data, mainly on the frequency / distance of your location updates (it would also be helpful to have a time stamp for each datum) and how precisely the locations fit your street network. In the best case there is a location update on each edge of the route in the street network, and you can simply read off the route by mapping each location to the nearest street. This mapping can be done using the python sumolib coming with sumo:
import sumolib

net = sumolib.net.readNet("myNet.net.xml")
route = []
radius = 1
for x, y in coordinates:
    # find the edge closest to this coordinate within the search radius
    minDist, minEdge = min((dist, edge) for edge, dist in net.getNeighboringEdges(x, y, radius))
    # only append when the vehicle enters a new edge
    if len(route) == 0 or route[-1] != minEdge.getID():
        route.append(minEdge.getID())
See also http://sumo.dlr.de/wiki/Tools/Sumolib#locate_nearby_edges_based_on_the_geo-coordinate for additional geo conversion.
This will fail when there is an edge on the route which did not get hit by a data point, or if you have a mismatch (for instance matching an edge which goes in the "wrong" direction). In the former case you can easily repair the route using sumo's duarouter:
> duarouter -n myNet.net.xml -r myRoutesWithGaps.rou.xml -o myRepairedRoutes.rou.xml --repair
The latter case is considerably harder both to detect and to repair, because it largely depends on your definition of a wrong edge. There are near-clear cases, like suddenly hitting the opposite direction (which can still happen in real traffic), and a lot of small detours which are hard to decide and deserve a separate answer.
Since you are asking for continuous input you may also be interested in doing this live with TraCI and in this FAQ on constant input flow.
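A rough sketch of the live variant with TraCI (assuming the route edge list built above, that sumo's tools directory is on your PYTHONPATH, and that the sumo binary is on your PATH):
import traci

traci.start(["sumo", "-n", "myNet.net.xml"])
# 'route' is the edge ID list built by the mapping loop above
traci.route.add("route0", route)
traci.vehicle.add("veh0", "route0")
for step in range(1000):
    traci.simulationStep()  # advance the simulation one step at a time
traci.close()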

nvmlDeviceGetPowerManagementMode() always returning NVML_ERROR_INVALID_ARGUMENT?

I am writing code to measure the power usage of an NVIDIA Tesla K20 GPU (Kepler architecture) periodically, using the NVML API.
Variables:
nvmlReturn_t result;
nvmlEnableState_t pmmode;
nvmlDevice_t nvmlDeviceID;
unsigned int powerInt;
Basic code:
result = nvmlDeviceGetPowerManagementMode(nvmlDeviceID, &pmmode);
if (pmmode == NVML_FEATURE_ENABLED) {
    result = nvmlDeviceGetPowerUsage(nvmlDeviceID, &powerInt);
}
My issue is that nvmlDeviceGetPowerManagementMode always returns NVML_ERROR_INVALID_ARGUMENT; I verified this by checking the return value.
The NVML API Documentation says that NVML_ERROR_INVALID_ARGUMENT is returned when either nvmlDeviceID is invalid or pmmode is NULL.
nvmlDeviceID is definitely valid, because I am able to query its properties and they match my GPU. And I don't see why I should set the value of pmmode to anything, because the documentation says it is a reference in which to return the current power management mode. For the record, I tried assigning an enable value to it, but the result was still the same.
I am clearly doing something wrong because other users of the system have written their own libraries using this function, and they face no problem. I am unable to contact them. What should I fix to get this function to work correctly?
The problem here was not directly in the API call - it was in the rest of the code - but the answer might be useful to others. Before attempting this solution, one must know for a fact that Power Management mode is enabled (check with nvidia-smi -q -d POWER).
In case of the invalid argument error, it is very likely that the problem lies with the nvmlDeviceID. I said I was able to query the device properties and at the time I was sure it was right, but be aware of any API calls that modify the nvmlDeviceID value later on.
For example, in this case, the following API call had some_variable as an invalid index, so nvmlDeviceID became invalid.
nvmlDeviceGetHandleByIndex(some_variable, &nvmlDeviceID);
It had to be changed to:
nvmlDeviceGetHandleByIndex(0, &nvmlDeviceID);
So the solution is to either remove all API calls that change or invalidate the value of nvmlDeviceID, or at least to ensure that any existing API call in the code does not modify the value.
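As a general pattern, checking every NVML return code catches an invalid handle at its source. A minimal sketch of the full sequence (error handling abbreviated):
#include <stdio.h>
#include <nvml.h>

int main(void) {
    nvmlReturn_t result;
    nvmlDevice_t device;
    nvmlEnableState_t pmmode;
    unsigned int power_mw;

    result = nvmlInit();
    if (result != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(result));
        return 1;
    }

    /* A bad index here is exactly what invalidated the handle above. */
    result = nvmlDeviceGetHandleByIndex(0, &device);
    if (result != NVML_SUCCESS) {
        fprintf(stderr, "GetHandleByIndex failed: %s\n", nvmlErrorString(result));
        nvmlShutdown();
        return 1;
    }

    result = nvmlDeviceGetPowerManagementMode(device, &pmmode);
    if (result == NVML_SUCCESS && pmmode == NVML_FEATURE_ENABLED) {
        result = nvmlDeviceGetPowerUsage(device, &power_mw);
        if (result == NVML_SUCCESS)
            printf("Power usage: %u mW\n", power_mw);  /* NVML reports milliwatts */
    }

    nvmlShutdown();
    return 0;
}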

Objective C - Cross-correlation for audio delay estimation

I would like to know if anyone knows how to perform a cross-correlation between two audio signals on iOS.
I would like to align the FFT windows that I get at the receiver (I am receiving the signal from the mic) with the ones at the transmitter (which is playing the audio track), i.e. make sure that the first sample of each window (after a "sync" period) at the transmitter is also the first sample of a window at the receiver.
I injected into every chunk of the transmitted audio a known waveform (in the frequency domain). I want to estimate the delay through cross-correlation between the known waveform and the received signal (over several consecutive chunks), but I don't know how to do it.
It looks like there is the function vDSP_convD to do it, but I have no idea how to use it, or whether I first have to perform the real FFT of the samples (probably yes, because I have to pass double[]).
void vDSP_convD(
    const double __vDSP_signal[],
    vDSP_Stride  __vDSP_signalStride,
    const double __vDSP_filter[],
    vDSP_Stride  __vDSP_strideFilter,
    double       __vDSP_result[],
    vDSP_Stride  __vDSP_strideResult,
    vDSP_Length  __vDSP_lenResult,
    vDSP_Length  __vDSP_lenFilter
);
The vDSP_convD() function calculates the convolution of the two input vectors to produce a result vector. It’s unlikely that you want to convolve in the frequency domain, since you are looking for a time-domain result — though you might, if you have FFTs already for some other reason, choose to multiply them together rather than convolving the time-domain sequences (but in that case, to get your result, you will need to perform an inverse DFT to get back to the time domain again).
Assuming, of course, I understand you correctly.
Then once you have the result from vDSP_convD(), you would want to look for the highest value, which will tell you where the signals are most strongly correlated. You might also need to cope with the case where the input signal does not contain enough of your reference signal, and in that case you may wish to (for example) ignore values in the result vector below a certain level.
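For illustration, a minimal sketch of that approach (per Apple's documentation, vDSP_convD performs correlation when the filter stride is positive and convolution when it is negative, so it can be called directly for cross-correlation; the buffer lengths here are arbitrary assumptions):
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    const vDSP_Length refLen = 64;           /* known waveform length (assumed) */
    const vDSP_Length resLen = 1024;         /* number of lags to evaluate (assumed) */
    double received[1024 + 64 - 1] = {0};    /* signal needs resLen + refLen - 1 samples */
    double reference[64] = {0};
    double result[1024];

    /* ... fill `received` from the mic and `reference` with the known waveform ... */

    /* positive filter stride => correlation; negative => convolution */
    vDSP_convD(received, 1, reference, 1, result, 1, resLen, refLen);

    /* the lag with the largest value is the delay estimate, in samples */
    double peak;
    vDSP_Length peakIndex;
    vDSP_maxviD(result, 1, &peak, &peakIndex, resLen);
    printf("Estimated delay: %lu samples\n", (unsigned long)peakIndex);
    return 0;
}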
Cross-correlation is the solution, yes. But there are many obstacles you need to handle. If you get samples from the audio files, they contain padding, which the cross-correlation function does not like. It is also very inefficient to perform correlation over all those samples; it takes a huge amount of time. I have made sample code which demonstrates the time shift of two audio files. If you are interested, look at my Github project.

Custom EQ AudioUnit on iOS

The only effect AudioUnit on iOS is the "iTunes EQ", which only lets you use EQ presets. I would like to use a customized EQ in my audio graph.
I came across this question on the subject and saw an answer suggesting using this DSP code in the render callback. This looks promising, and people seem to be using it effectively on various platforms. However, my implementation produces a ton of noise even with a flat EQ.
Here's my 20 line integration into the "MixerHostAudio" class of Apple's "MixerHost" example application (all in one commit):
https://github.com/tassock/mixerhost/commit/4b8b87028bfffe352ed67609f747858059a3e89b
Any ideas on how I could get this working? Any other strategies for integrating an EQ?
Edit: Here's an example of the distortion I'm experiencing (with the EQ flat):
http://www.youtube.com/watch?v=W_6JaNUvUjA
In the code in EQ3Band.c, the filter coefficients are used without being initialized. The init_3band_state method initializes just the gains and frequencies, but the coefficients themselves - es->f1p0 etc. - are not initialized and therefore contain garbage values. That might be the reason for the bad output.
This code seems wrong in more than one way.
A digital filter is normally represented by the filter coefficients, which are constant; the filter's inner state history (since in most cases the output depends on history); and the filter topology, which is the arithmetic used to calculate the output given the input and the filter (coeffs + state history). In most cases, and certainly when filtering audio data, you expect to get 0's at the output if you feed 0's to the input.
The problems in the code you linked to:
The filter coefficients are changed in each call to the processing method:
es->f1p0 += (es->lf * (sample - es->f1p0)) + vsa;
The input sample is usually multiplied by the filter coefficients, not added to them. It doesn't make any physical sense - the sample and the filter coeffs don't even have the same physical units.
If you feed in 0's, you do not get 0's at the output, just some values which do not make any sense.
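For contrast, a correctly structured filter keeps the coefficients fixed and updates only the state. A generic biquad sketch (not the linked code):
typedef struct {
    double b0, b1, b2, a1, a2;  /* coefficients: fixed once designed */
    double z1, z2;              /* state history: must start at 0 */
} Biquad;

double biquad_process(Biquad *f, double in) {
    /* transposed direct form II: only z1 and z2 change per sample */
    double out = f->b0 * in + f->z1;
    f->z1 = f->b1 * in - f->a1 * out + f->z2;
    f->z2 = f->b2 * in - f->a2 * out;
    return out;  /* zero input with zeroed state yields zero output */
}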
I suggest you look for other code - the alternative is debugging this one, and that would be harder.
In addition, you'd benefit from reading about digital filters:
http://en.wikipedia.org/wiki/Digital_filter
https://ccrma.stanford.edu/~jos/filters/

Ada - Modifying a variable's address at runtime

I have an array and a variable declared like this:
NextPacketRegister : array (1 .. Natural (Size)) of Unsigned_32;
PacketBufferPointer : Unsigned_32;
for PacketBufferPointer'Address use To_Address (SPW_PORT_0_OUT_REG_ADDR);
for NextPacketRegister'Address use To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));
PacketBufferPointer points to a HW register that you access through the PCI bus of our board.
NextPacketRegister uses this register's value + 16#A000_0000#.
The thing is, every time I access NextPacketRegister, behind the scenes I perform a PCI access; these accesses are very slow and we are trying to remove this limitation.
But I can't seem to find a way to modify NextPacketRegister'Address at runtime (I'd like to read the PacketBufferPointer register ONCE, then add 16#A000_0000# to that value only once, so I don't have to perform a PCI access every time).
I looked around but I have no clue how I could achieve this.
That is correct; if you use for ...'Address use to overlay an object at a specific address, you cannot change it later.
Generally I try to avoid overlays. What you show is one drawback to them. Another is that if the object has any parts that require initialization, they will be reinitialized every time the object is elaborated.
One thing I do have to ask up front though: This looks like a device driver. If you don't like the hit from going to the PCI bus then, fine. The obvious way around your problem of course is to just read the object into a temporary variable and use that when you don't want to hit the PCI bus. But obviously when you do that you are no longer reading directly from the device, and thus won't see changes it made to its memory-mapped registers (and your changes won't go straight to those memory-mapped registers). That's what you want, right? Ada contains no magic to allow you to get data on and off the PCI bus without hitting the PCI bus.
It almost looks like you are thinking that this line:
for NextPacketRegister'Address use To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));
Means: "Every time I access NextPacketRegister, go find the value of PacketBufferPointer and overlay it where it happens to be right now". That is not the case. This will only happen once when your declaration is processed. Thereafter, every access to something like NextPacketRegister[12] will go to the same place, without any access to PacketBufferPointer.
Another way would be to use pointers and Unchecked_Conversion. That's generally my preferred solution for overlays. It looks hairer, but what you are doing is hairy, so it should look that way. Also, it doesn't perform initializations on the overlaid memory area. I suppose that could be a bad thing though, if you count on those. Of course overlaying this way could cause an access to PacketBufferPointer, if you want. You'd have control over it depending on how you code it.
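For completeness, that Unchecked_Conversion variant might look like this hedged sketch (it reuses the Next_Packet_Array type declared just below; converting a System.Address to an access value this way is a common but implementation-dependent idiom):
with Ada.Unchecked_Conversion;

type Next_Packet_Access is access all Next_Packet_Array;
function To_Next_Packet_Access is
   new Ada.Unchecked_Conversion (System.Address, Next_Packet_Access);
--  To_Next_Packet_Access (Some_Saved_Address).all then reads the array
--  without re-evaluating any address clause.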
Since you asked about pointers, in this case I think you have a very valid case for using the package System.Address_to_Access_Conversions. I don't have the compiler handy, but I think it would go something like this:
type Next_Packet_Array is array (1 .. Natural (Size)) of Unsigned_32;

package Next_Packet_Array_Convert is new
   System.Address_To_Access_Conversions (Next_Packet_Array);

Synced_Next_Packet_Address : System.Address;
Now when you "sync", I guess you'd want to hit that PacketBufferPointer to get the register value (as a SYSTEM.ADDRESS), and save it into a variable for later use:
Synced_Next_Packet_Address :=
   To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));
And when you want to access the Next_Packet_Array, it would be something like this: Next_Packet_Array_Convert.To_Pointer (Synced_Next_Packet_Address).all
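Putting those pieces together, a hedged sketch under the question's names (the values of Size and SPW_PORT_0_OUT_REG_ADDR are placeholders, as they were not given in the question):
with System;                  use System;
with System.Storage_Elements; use System.Storage_Elements;
with System.Address_To_Access_Conversions;
with Interfaces;              use Interfaces;

procedure Read_Packets is
   Size                    : constant := 16;            --  placeholder value
   SPW_PORT_0_OUT_REG_ADDR : constant := 16#8000_0000#; --  placeholder value

   type Next_Packet_Array is array (1 .. Natural (Size)) of Unsigned_32;

   package Convert is new System.Address_To_Access_Conversions (Next_Packet_Array);

   PacketBufferPointer : Unsigned_32;
   for PacketBufferPointer'Address use To_Address (SPW_PORT_0_OUT_REG_ADDR);
   pragma Volatile (PacketBufferPointer);

   Synced_Next_Packet_Address : System.Address;
begin
   --  One slow PCI read, performed exactly once:
   Synced_Next_Packet_Address :=
     To_Address (16#A000_0000# + Integer_Address (PacketBufferPointer));

   --  All later accesses reuse the saved address; no PCI access here:
   declare
      Packets : constant Convert.Object_Pointer :=
        Convert.To_Pointer (Synced_Next_Packet_Address);
   begin
      if Packets (1) /= 0 then
         null;  --  process the packet ...
      end if;
   end;
end Read_Packets;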
Make a structure (an array of buffers?) that matches what your set of packet buffers looks like, and sit that at the address of the start of the array.
Read the array index from the register.
You can write C in any language, even Ada.
At least it works, and you get some sensible bounds checks.