I have a few TinkerForge sensors connected to a Raspberry Pi. The humidity and barometer bricklets work fine; the only exception is a single sensor, the Illuminance bricklet, for which no data is sent to Cumulocity. The TinkerForge demo application works perfectly and displays luminosity values, so the sensor itself is producing data, but the agent does not recognize this sensor, or this type of sensor. The log contains the following warning:
Jun 17 15:12:51 raspberrypi root: 15:12:51.719 [Callback-Processor] WARN c8y.tinkerforge.Discoverer - Unsuported device identifier: 2131
Please advise.
Short answer: Some bricklets are just not supported.
I downloaded the source code for the Cumulocity agent (written in Java), browsed through it, and found that the Illuminance bricklet is not listed.
Until Cumulocity updates their agent, it just won't work.
If and when this happens, I will update this answer.
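For illustration, a discoverer of this kind typically keeps a registry of supported TinkerForge device identifiers and ignores everything else. The sketch below is hypothetical, not the actual Cumulocity agent code, and the identifiers in the map are placeholders; it only shows why an unregistered identifier such as 2131 produces the warning above and never results in measurements being sent.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a discoverer that only knows a fixed set of
// TinkerForge device identifiers. Not the real Cumulocity agent code.
public class DiscovererSketch {

    // Registry of supported device identifiers -> driver name.
    // The values shown here are illustrative placeholders.
    private static final Map<Integer, String> SUPPORTED = new HashMap<>();
    static {
        SUPPORTED.put(27, "HumidityDriver");
        SUPPORTED.put(221, "BarometerDriver");
    }

    static void handleDeviceFound(int deviceIdentifier, String uid) {
        String driver = SUPPORTED.get(deviceIdentifier);
        if (driver == null) {
            // This is the path the Illuminance bricklet ends up on:
            // a warning is logged and no measurements are ever sent.
            System.out.println("WARN Unsupported device identifier: " + deviceIdentifier);
            return;
        }
        System.out.println("Starting " + driver + " for UID " + uid);
    }

    public static void main(String[] args) {
        handleDeviceFound(2131, "abc"); // mirrors the warning from the log
    }
}
```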
I have built the latest version of the WSO2 EMM Android agent (cdmf-agent-android v3.1.30) and got some initial tests working in BYOD mode with IoT Server 3.1.0.
When the agent is built for COSU, it waits to be provisioned by another device via NFC, but I want to provision devices without NFC. What options do I have? Could I trigger a custom provisioning option programmatically?
There are a few options for this, depending on your Android version.
I will start with the simplest one. If you have Android 7+, you can use QR code provisioning, which follows the exact same process as NFC provisioning; you can see some specifications from Google regarding this.
The second option is a bit trickier and requires some custom development on your side. The first thing to do is make your device a Device Owner (which is needed for COSU mode; read up about Device Owner here), using the command: adb shell dpm set-device-owner org.wso2.iot.agent/org.wso2.iot.agent.services.AgentDeviceAdminReceiver
Note: only one device owner can be set; to remove a device owner, the device has to be factory reset.
Once this is done, you can launch your app using adb shell am start -n "org.wso2.iot.agent/org.wso2.iot.agent.activities.SplashActivity".
The above will get your app running, but it still has to authenticate itself to communicate with the server. With NFC provisioning, an access token is delivered in the extras bundle as 'android.app.extra.token'; you can insert this extra in the launch intent as follows: adb shell am start -n "org.wso2.iot.agent/org.wso2.iot.agent.activities.SplashActivity" --es android.app.extra.token generated_access_token. You will have to edit the SplashActivity class to accept this token and follow the general authentication process built into the app, for example as sketched below.
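For reference, a rough sketch of what accepting the token in SplashActivity could look like. Only the intent extra name comes from the commands above; the class body and onTokenReceived() are placeholders for the agent's real code.

```java
import android.app.Activity;
import android.os.Bundle;

// Hypothetical sketch: read the access token from the launch intent so the
// device can authenticate without NFC provisioning. This stands in for
// org.wso2.iot.agent.activities.SplashActivity; onTokenReceived() is a
// placeholder for the agent's existing authentication entry point.
public class SplashActivity extends Activity {

    private static final String EXTRA_TOKEN = "android.app.extra.token";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        String token = getIntent().getStringExtra(EXTRA_TOKEN);
        if (token != null && !token.isEmpty()) {
            // Hand the token over to the normal authentication flow.
            onTokenReceived(token);
        }
        // ...the activity's existing start-up logic continues here...
    }

    private void onTokenReceived(String token) {
        // Placeholder: the real agent would store the token and continue
        // with its built-in registration/authentication process.
    }
}
```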
This may be a little bit late but I hope it is still helpful!
Some extra information you may appreciate: here is a string representation of the NFC message used; these are the specifications set in the NFC Provisioning app:
```
#Thu Apr 12 13:42:11 GMT+02:00 2018
android.app.extra.PROVISIONING_LOCAL_TIME=1523533331087
android.app.extra.PROVISIONING_TIME_ZONE=Asia/Colombo
android.app.extra.PROVISIONING_SKIP_ENCRYPTION=true
android.app.extra.PROVISIONING_WIFI_SECURITY_TYPE=WPA
android.app.extra.PROVISIONING_WIFI_PASSWORD=PASSWORD
android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION=LOCATION_OF_APK
android.app.extra.PROVISIONING_WIFI_SSID="WIFI_SSID_NAME"
android.app.extra.PROVISIONING_LOCALE=en_US
android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_CHECKSUM=E8PtiqUOcqKi5IXeRBF-5Br0zXg
android.app.extra.PROVISIONING_ADMIN_EXTRAS_BUNDLE=\#admin extras bundle\n\#Thu Apr 12 13\:42\:11 GMT+02\:00 2018\nandroid.app.extra.token\=GENERATED_ACCESS_TOKEN\n
android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME=org.wso2.iot.agent
```
An example of a QR Code representation would be:
```
{
  "android.app.extra.PROVISIONING_DEVICE_ADMIN_COMPONENT_NAME": "org.wso2.iot.agent/org.wso2.iot.agent.services.AgentDeviceAdminReceiver",
  "android.app.extra.PROVISIONING_DEVICE_ADMIN_SIGNATURE_CHECKSUM": "CSGeivCEHdJrPT0qy4W67LZSy32Fus7GyUn0jE5o028",
  "android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION": "APK_DOWNLOAD_LOCATION",
  "android.app.extra.PROVISIONING_SKIP_ENCRYPTION": false,
  "android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME": "org.wso2.iot.agent",
  "android.app.extra.PROVISIONING_ADMIN_EXTRAS_BUNDLE": {
    "android.app.extra.token": "GENERATED_ACCESS_TOKEN"
  }
}
```
I am running a simple Spark Streaming application that consists of sending messages through a socket server to the Spark Streaming context and printing them.
This is my code, which I am running in IntelliJ IDE:
SparkConf sparkConfiguration = new SparkConf().setAppName("DataAnalysis").setMaster("spark://IP:7077");
JavaStreamingContext sparkStrContext = new JavaStreamingContext(sparkConfiguration, Durations.seconds(1));
JavaReceiverInputDStream<String> receiveData = sparkStrContext.socketTextStream("localhost", 5554);
I am running this application in a standalone cluster mode, with one worker (an Ubuntu VM) and a master (my Windows host).
This is the problem: when I run the application, I see that it successfully connects to the master, but it doesn't print any lines; it just stays this way permanently.
If I go to the Spark UI, I find that the SparkStreaming Context is receiving inputs, but they are not being processed:
Can someone help me please? Thank you so much.
You need to add the following calls:
sparkStrContext.start(); // Start the computation
sparkStrContext.awaitTermination(); // Wait for the computation to terminate
Once you do this, you need to post messages to port 5554. For that, first run Netcat (a small utility found in most Unix-like systems) as a data server and start pushing the stream.
For example:
TERMINAL 1:
# Running Netcat
$ nc -lk 5554
hello world
TERMINAL 2: Running your streaming program
-------------------------------------------
Time: 1357008430000 ms
-------------------------------------------
hello world
...
...
You can check a similar example here.
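Putting the pieces together, here is a minimal sketch of the complete program under the same assumptions as the question (standalone master at spark://IP:7077, netcat feeding port 5554). Note that an output operation such as print() is needed in addition to start() and awaitTermination(), otherwise Spark has nothing to execute per batch.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// Minimal sketch: read lines from a socket and print each batch.
public class SocketPrinter {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
                .setAppName("DataAnalysis")
                .setMaster("spark://IP:7077"); // replace IP with your master host

        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));

        JavaReceiverInputDStream<String> lines =
                ssc.socketTextStream("localhost", 5554);

        lines.print();          // output operation: print the first lines of each batch

        ssc.start();            // start the computation
        ssc.awaitTermination(); // wait for the computation to terminate
    }
}
```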
My Node Red application in IBM BlueMix is repeatedly crashing - once an hour - with no real error message other than "exited with status: 1."
How can I troubleshoot this issue?
Is there someone from IBM BlueMix support that monitors this that could take a look?
I looked at my logs and there's nothing in there that really says what's going on.
Edit per requests:
The regular log for "OUT/ERR" is scrolling so fast with HTTPD logs that I can't get it to copy/paste. Filtering to the "ERR" channel, the only thing I see is below. I believe this is an error that occurs during deployment, when the application restarts.
[App/0] ERR js-bson: Failed to load c++ bson extension, using pure JS version
My Node-RED application gathers data from Wink, LIFX, and other IoT services and compiles it into a Freeboard dashboard.
Caught crash on screenshot here -- not enough cred to post images so it'll only post as a link
The zlib error was fixed in the Node-RED 0.13.2 release (shipped 19/02/16).
If you re-stage your application, it should pick up the new version of Node-RED.
You can re-stage the application using the cf command line management application:
cf restage <app name>
I set a policy for Android devices on EMM. The device I enrolled in EMM violates the policy, but in the Device Compliance Monitoring report the device status is "Policy Compliance", while in Devices Compliance Monitoring (Current Status: Active) it is shown as violated!
I think the problem is the value of the status variable in devices_complience.hbs, which was "Policy Compliance". Where is this data loaded from?
How can I fix this issue?
More info:
WSO2 EMM version 1.1.0
Server: Windows 7
This seems to be a bug:
https://wso2.org/jira/browse/EMM-764
The data comes from the mdm_reports.js module in EMM. You can find that file here:
https://github.com/wso2/product-emm/blob/master/modules/apps/emm/modules/mdm_reports.js
It uses a SQL query to retrieve the information from the database.
I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play with MP4Client, it shows only a black screen, and I don't even get an error message while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here, given the actual difference!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live content, because for on-demand content all segments are assumed to be available all the time.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may differ from the system time on the server, for instance if they use different NTP servers, so system time is not reliable. MP4Client therefore tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set (see for example this code). If that header is not set, it looks at the HTTP "Date" header, even though it is not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and rely on its own system time instead; since you are running client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
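To see why an offset of roughly 3563 seconds matters, here is a rough sketch (plain Java, not GPAC code) of how a live DASH client derives the segment index to request from the wall clock. The availabilityStartTime and segment duration below are illustrative; only the drift value comes from your log.

```java
import java.time.Duration;
import java.time.Instant;

// Rough sketch of live DASH segment selection: the client computes the
// current segment index from "now - availabilityStartTime". A large clock
// offset makes it request segments the server has not produced yet
// (or has already deleted), so nothing is displayed.
public class SegmentIndexSketch {
    public static void main(String[] args) {
        Instant availabilityStartTime = Instant.parse("2015-08-05T11:39:21Z"); // illustrative, normally read from the MPD
        Duration segmentDuration = Duration.ofSeconds(2);                      // illustrative

        Instant correctNow = Instant.parse("2015-08-05T12:38:45Z");
        Instant driftedNow = correctNow.plusMillis(3563501);                   // the drift reported in your log

        long goodIndex = Duration.between(availabilityStartTime, correctNow).toMillis()
                / segmentDuration.toMillis();
        long badIndex = Duration.between(availabilityStartTime, driftedNow).toMillis()
                / segmentDuration.toMillis();

        System.out.println("segment index with correct clock: " + goodIndex);
        System.out.println("segment index with drifted clock: " + badIndex);
    }
}
```

With the drifted clock the computed index jumps far ahead of what the server has actually produced, which is why forcing DASH:UseServerUTC=no (or fixing the server's time configuration) restores playback.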
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that is not working, open an issue on GPAC's GitHub providing as much information as possible, in particular the result of MP4Box -version.