How to add double tap detection to sample app gatt_sensordata_app?

I am developing an Android app that gets data from Movesense using the GATT profile from the sample app GATT Sensor Data App here.
I followed the tutorial available here. Building the app and getting the DFU worked fine. I can get IMU, HR and temperature data with no issues.
Now I'd like to add a tap detection feature to my app. I understand that I have to subscribe to /System/States, but first I need to be able to receive the system state data.
I understand that I need a modified DFU for that, but I don't understand what changes I should make in which files of the gatt_sensordata_app before rebuilding and generating the new DFU.
What changes should I make in order to broadcast /System/States data?
(I usually just deal with Android so apologies for the very basic question.)
I tried adding #include "system_states/resources.h" to GATTSensorDataClient.cpp but I don't know how to continue.

The normal data streaming in the gatt_sensordata_app uses the SBEM-encoding code that the build process generates when building the firmware. However, /System/States is not among the paths that this code can serialize, so the only possibility is to implement the States support in the firmware yourself.
The easiest way is to do the following:
In your Python app, call data subscription with "/System/States/3" (3 == DOUBLE_TAP).
Add a special case to the switch in onNotify that matches the localResourceId against WB_RES::LOCAL::SYSTEM_STATES_STATEID::LID.
In that handler, return the data the way you want. The easiest route is to copy-paste the "default" handler but replace the code between the getSbemLength() and writeToSbemBuffer(...) calls with your own serialization code, along the lines of the sketch below.
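For illustration, the new case inside onNotify in GATTSensorDataClient.cpp could look roughly like this. This is an untested sketch: WB_RES::StateChange and its stateId/newState members are my reading of the generated States API (verify against the generated system_states/resources.h), and sendPacket() is a placeholder for however the sample's "default" handler pushes its buffer to the subscribed GATT client.

case WB_RES::LOCAL::SYSTEM_STATES_STATEID::LID:
{
    // /System/States notifications carry a StateChange object
    // (assumed members; check the generated resources.h)
    const WB_RES::StateChange& stateChange = value.convertTo<const WB_RES::StateChange&>();

    // Hand-rolled serialization in place of getSbemLength()/writeToSbemBuffer():
    // pack the state id and the new state value into a two-byte payload.
    uint8_t buffer[2];
    buffer[0] = static_cast<uint8_t>(stateChange.stateId);
    buffer[1] = static_cast<uint8_t>(stateChange.newState);

    // Placeholder: send the payload the same way the "default" handler
    // sends its SBEM buffer over GATT.
    sendPacket(buffer, sizeof(buffer));
    break;
}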
Full disclosure: I work for the Movesense team

Related

How to perform continuous speech-to-text on WebRTC communication audio stream in mobile app

I am trying to add a continuous speech-to-text recognizer to a mobile application during a WebRTC audio-only call.
I'm using React Native on the mobile side, with the react-native-webrtc module and a custom web API for the signaling part. I have control over the web API, so I am able to add the feature on its side if that's the only solution, but I would prefer to perform it on the client side to avoid consuming bandwidth if there is no need.
First, I worked on and tested some ideas in my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
I merged the audio-only WebRTC demo with the audio visualiser demonstration in one page, but I did not find how to connect a MediaElementAudioSourceNode (created via AudioContext.createMediaElementSource(remoteStream) at line 44 of streamvisualizer.js) to a Web Speech API SpeechRecognition instance. In the Mozilla documentation, the audio stream seems to come with the constructor of the class, which may call the getUserMedia() API.
Second, during my research I found two open-source speech-to-text engines: CMUSphinx and Mozilla's DeepSpeech. The first one has a JS binding and seems promising with the audioRecorder that I can feed with my own MediaElementAudioSourceNode from the first attempt. However, how do I embed this in my React Native application?
There are also Android and iOS native WebRTC modules, which I may be able to connect with the CMUSphinx platform-specific bindings (iOS, Android), but I don't know about native class interoperability. Can you help me with that?
I haven't yet created any "grammar" or defined "hot words" because I am not sure which technologies are involved, but I can do that later if I am able to connect a speech recognition engine to my audio stream.
You need to stream the audio to the ASR server, either by adding another WebRTC party to the call or over some other protocol (TCP/WebSocket/etc.). On the server you perform recognition and send the results back.
First, I worked on and tested some ideas in my laptop browser. My first idea was to use the SpeechRecognition interface from the Web Speech API: https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition
This is experimental and does not really work in Firefox. In Chrome it only takes microphone input directly, not a dual stream from caller and callee.
The first one has a JS binding and seems promising with the audioRecorder that I can feed with my own MediaElementAudioSourceNode from the first attempt.
You will not be able to run this as local recognition inside your React Native app.
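To make the server-side option concrete, here is a bare-bones sketch of the "some other protocol" route: a POSIX TCP server that receives raw 16-bit PCM frames from the mobile client and hands them to a recognizer. feedRecognizer() and the port are placeholders I made up; a real deployment would use WebSocket/TLS and send results back on the same connection.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// Placeholder: hand the PCM chunk to your speech engine (CMUSphinx, etc.).
void feedRecognizer(const int16_t* samples, size_t count)
{
    (void)samples; (void)count;
}

int main()
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5050);                  // example port
    bind(srv, (sockaddr*)&addr, sizeof(addr));
    listen(srv, 1);

    int cli = accept(srv, nullptr, nullptr);      // one caller for the sketch
    int16_t buf[1024];
    ssize_t n;
    while ((n = read(cli, buf, sizeof(buf))) > 0)
        feedRecognizer(buf, n / sizeof(int16_t)); // stream chunks as they arrive

    close(cli);
    close(srv);
    return 0;
}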

Worklight JSON Store, can we get race conditions?

We are using Worklight 6.1 on both Windows (colleague) and Mac (me), building a hybrid app destined for Android devices, but to speed up development we do initial testing as a Mobile Web App in the Chrome browser on the desktop.
We get a weird symptom that I'm trying to narrow down to a reproducible test case. I think I see different behaviours when stepping in the debugger versus just letting it run. I want to check whether a certain coding pattern could be the cause of the symptom before I go any further.
Fundamental question: should we wait for the resolution of a promise returned by a JSONStore request for an action on a collection before issuing another request? More explanation below.
The overall intent is to load some data into the JSONStore, with some intelligent replace/merge action if a record is already present. Pseudocode:
for each record retrieved from back-end:
    if record already present in store:
        do some data merging
        replace record
    else:
        add record
The application code actually works like this, considering just the add() case; the problem manifests when the store is empty and all records need to be added:
for each record to add:
    addPromise = store.get().add(record);
    listOfPromises.insert(addPromise);
examine the list of promises, recording any errors
That is, there is no "wait" for one add() to finish before issuing the next add request. In effect we've initiated a set of adds "in parallel", whatever that might mean in JavaScript in Chrome.
The code appears to run just fine and no errors are reported. On an Android device it works reliably. In Chrome under normal running (no stepping in the debugger) we end up with no reported errors but only one record inserted, as though a snapshot of the initial "empty" store had been taken and each add were working on that "empty" copy.
After writing this I'm now pretty convinced that the coding pattern described above is vulnerable to a kind of race, and that the better approach is to build a list of documents to be added and insert them in a single operation.
A more detailed answer will be coming later, but I now know that this:
"the coding pattern described above is vulnerable to a kind of race and that the better approach is to build a list of documents to be added and insert them in a single operation"
is true. In the browser, JSONStore does require that we wait for the result of one request before issuing another. The recommended approach is:
var dataToAdd = buildArrayOfDataToAdd(responseFromServer);
var dataToReplace = buildArrayOfDataToReplace(responseFromServer);
jsonstore.add(dataToAdd).then(function () {
    // return the promise so the caller can wait for the whole sequence
    return jsonstore.replace(dataToReplace);
});

Working around Windows Store App Sandbox

I want to create a Metro app, for learning and personal consumption purposes, that does all sorts of low-level device API work, such as tracking power consumption, enumerating processes, calculating CPU usage per process, and so on. Unfortunately, these desktop APIs are forbidden in Metro applications.
My first attempt to work around this was to create a non-Windows-Store C++ library with the WINAPI_FAMILY variable set appropriately, in order to use functions like QueryIdleProcessorCycleTime() and CallNtPowerInformation(). Unfortunately, it is the latter call that fails when I pass it the ProcessorInformation token as the first argument, returning STATUS_ACCESS_DENIED.
Interestingly, CallNtPowerInformation() works just fine when given SystemBatteryState as the first argument, so I imagine there is some kind of access privilege I am missing for getting processor info when running as a Metro app. I have read that Metro apps run with quite restricted privileges, so I am looking for a way to raise these privileges enough to let my API calls go through. To verify that it is the process privileges and not a coding error, I used the C++ library from a console application, and everything worked just fine; a reconstruction of that test is sketched below.
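This is roughly what the console test looks like (my reconstruction rather than the exact code; it prints the raw NTSTATUS values, where STATUS_ACCESS_DENIED is 0xC0000022):

#include <windows.h>
#include <winternl.h>   // NTSTATUS
#include <powrprof.h>   // CallNtPowerInformation; link against PowrProf.lib
#include <cstdio>
#include <vector>

int main()
{
    // SystemBatteryState succeeds even from the restricted app context...
    SYSTEM_BATTERY_STATE battery = {};
    NTSTATUS status = CallNtPowerInformation(SystemBatteryState, nullptr, 0,
                                             &battery, sizeof(battery));
    printf("SystemBatteryState:   0x%08lX\n", (unsigned long)status);

    // ...while ProcessorInformation fails there with STATUS_ACCESS_DENIED,
    // although it works fine from a normal console process.
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    std::vector<PROCESSOR_POWER_INFORMATION> ppi(si.dwNumberOfProcessors);
    status = CallNtPowerInformation(ProcessorInformation, nullptr, 0,
                                    ppi.data(),
                                    (ULONG)(ppi.size() * sizeof(PROCESSOR_POWER_INFORMATION)));
    printf("ProcessorInformation: 0x%08lX\n", (unsigned long)status);
    return 0;
}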
I would really like to not have to create a second, desktop background process that does all the dirty work and communicates the results to the Metro app over a socket. I realize this can work, but I would rather have everything housed in the same process space.

Is there any private API to monitor network traffic on iPhone?

I need to implement an app that monitors inbound/outbound connections made by different apps on the iPhone. My app is going to run in the background using Apple's background multitasking feature for VoIP and navigation apps.
I can use private APIs, as my client doesn't need this app on the App Store.
Thanks.
I got through this.
I didn't need any private API to get information about inbound/outbound active connections; you just need basic C programming and some patience.
I wrote to the Apple dev forums about this and got this response:
From my perspective most of what I have to say about this issue is covered in the post referenced below.
https://devforums.apple.com/message/748272#748272
The way to make progress on this is to:
o grab the relevant headers from the Mac OS X SDK
o look at the Darwin source for netstat to see how the pieces fit together
WARNING: There are serious compatibility risks associated with shipping an app that uses this technique; it's fine to use this for debugging and so on, but I recommend against shipping code like this to end users.
What I did, step by step:
1) Download the netstat code from the BSD open source releases.
2) Add it to your new iPhone project.
3) Some header files are not present in the iOS SDK, so you need to copy them from opensource.apple.com and add them to your iPhone SDK at the relevant path under
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.0.sdk/usr/include
My Xcode version is 4.5.2, so this path is relevant to my Xcode; yours may differ depending on the Xcode version. Remember to add those headers to both the iOS SDK and the iOS Simulator SDK, so that the code works on the device as well as on the simulator.
4) You may find some minor errors in the netstat code about missing definitions of some structures in the header files, e.g. struct xunpcb64. Don't worry, the definitions are present; you need to comment out some "#if !TARGET_OS_EMBEDDED" / #else lines in those header files so that the iOS SDK can reach those conditional sections and access the definitions (this needs some trial and error, so be patient).
5) Finally you will be able to compile the code. Cheers!! (The core sysctl call the code revolves around is sketched below.)
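For orientation, the heart of what netstat does on Darwin is a sysctl query. Here is a minimal sketch; the "net.inet.tcp.pcblist_n" OID and the record layout are from my reading of the Darwin netstat source, so double-check them against the code you downloaded.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <cstdio>
#include <vector>

int main()
{
    // First call asks how big the TCP connection table is...
    size_t len = 0;
    if (sysctlbyname("net.inet.tcp.pcblist_n", nullptr, &len, nullptr, 0) < 0) {
        perror("sysctlbyname");
        return 1;
    }
    // ...second call fetches it.
    std::vector<char> buf(len);
    if (sysctlbyname("net.inet.tcp.pcblist_n", buf.data(), &len, nullptr, 0) < 0) {
        perror("sysctlbyname");
        return 1;
    }
    // buf now holds a chain of xinpgen-framed records; walking them needs the
    // xtcpcb_n/xinpcb_n definitions from the headers copied in step 3.
    printf("pcblist_n returned %zu bytes\n", len);
    return 0;
}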
In case you haven't seen this already, I have used it successfully on my iPhone.
https://developer.apple.com/library/ios/ipad/#qa/qa1176/_index.html
I realize it is not exactly what you want.
Generally, I agree with Suneet. All you need is a network sniffer.
You can try to do a partial port of Wireshark (it's open source and it works on macOS) to iOS. iOS and OS X share most of the kernel, so if something is not explicitly prohibited on iOS, it should work for you.
Wireshark is quite a big product, so you may be interested in looking for another open-source network sniffer that works on OS X.

IAT/EAT hooking "gethostbyname"

I wrote this code to hook API functions by changing the address in the IAT and EAT: http://pastebin.com/7d9N1J2c
This works just fine when I want to hook recv or connect. However, for some unknown reason, when trying to hook gethostbyname, my hook function is never called.
I tried to find gethostbyname in a debugger by taking the base address of the wsock32.dll module + 0x375e, which is what ordinal 52 of my wsock32.dll shows as its offset. But that just lands me in some random asm code, not at the beginning of a function.
The same method however works fine for trying to find the "recv" entry point.
Does anyone see what I might be doing wrong?
I recommend this tool:
http://www.moduleanalyzer.com/
It does exactly the same thing and shows the URL that was resolved with that API.
The problem is that there is more than one API for translating a URL to an address. The application you are hooking may be using another API that you're not intercepting.
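Two notes on that. Modern code usually resolves names through getaddrinfo rather than gethostbyname, so hook both; and wsock32.dll mostly forwards its exports to ws2_32.dll, so an export-table offset taken from wsock32.dll can land on a forwarder entry rather than real code, which would match your debugger observation. For reference, a minimal sketch of the IAT-patching part (by-ordinal imports are skipped for brevity):

#include <windows.h>
#include <cstring>

// Minimal IAT patch: redirect one named import of `module`.
// Returns the original function pointer, or nullptr if not found.
FARPROC PatchIat(HMODULE module, const char* importDll, const char* funcName, FARPROC hook)
{
    BYTE* base = reinterpret_cast<BYTE*>(module);
    IMAGE_DOS_HEADER* dos = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    IMAGE_NT_HEADERS* nt = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    IMAGE_IMPORT_DESCRIPTOR* imp =
        reinterpret_cast<IMAGE_IMPORT_DESCRIPTOR*>(base + dir.VirtualAddress);

    for (; imp->Name; ++imp) {
        if (_stricmp(reinterpret_cast<char*>(base + imp->Name), importDll) != 0)
            continue;
        // Walk the original (name) thunks and the bound IAT in parallel.
        IMAGE_THUNK_DATA* orig = reinterpret_cast<IMAGE_THUNK_DATA*>(base + imp->OriginalFirstThunk);
        IMAGE_THUNK_DATA* iat  = reinterpret_cast<IMAGE_THUNK_DATA*>(base + imp->FirstThunk);
        for (; orig->u1.AddressOfData; ++orig, ++iat) {
            if (IMAGE_SNAP_BY_ORDINAL(orig->u1.Ordinal))
                continue;  // imported by ordinal; compare ordinals here instead
            IMAGE_IMPORT_BY_NAME* byName =
                reinterpret_cast<IMAGE_IMPORT_BY_NAME*>(base + orig->u1.AddressOfData);
            if (strcmp(reinterpret_cast<const char*>(byName->Name), funcName) != 0)
                continue;
            DWORD old;
            VirtualProtect(&iat->u1.Function, sizeof(iat->u1.Function), PAGE_READWRITE, &old);
            FARPROC prev = reinterpret_cast<FARPROC>(iat->u1.Function);
            iat->u1.Function = reinterpret_cast<ULONG_PTR>(hook);
            VirtualProtect(&iat->u1.Function, sizeof(iat->u1.Function), old, &old);
            return prev;
        }
    }
    return nullptr;
}

Call it as, e.g., PatchIat(GetModuleHandle(nullptr), "ws2_32.dll", "gethostbyname", (FARPROC)MyGethostbyname), and again for "getaddrinfo" (and for "wsock32.dll" if the target links against that directly).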
Run a disassembler like IDA and attach it to your process after you hook these functions (IDA applies changes on attach), then resume the process and check what is wrong.
Alternatively, there are many libraries for doing hooks with trampolines, such as Microsoft Detours, NCodeHook, etc.