I am trying to develop a smart home Action that works with local fulfillment, but my device doesn't receive the UDP broadcast request. I have a Google Home device at home which is connected to my account.
I have taken the following steps:
added the otherDeviceIds field to my SYNC response
added a device scan configuration (I chose UDP)
implemented a UDP server on the device; it listens on port 8888 and responds to echo -n "test data" | nc -u -b 255.255.255.255 8888 (a rough sketch of this listener is shown below)
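Roughly, the listener on the device looks like the following (assuming the device can run Node.js; the reply payload is illustrative and in my setup matches the deviceId from otherDeviceIds):

import * as dgram from 'dgram';

// Minimal UDP responder for the local fulfillment scan (port 8888, as configured above).
const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
  console.log(`scan packet from ${rinfo.address}:${rinfo.port}: ${msg.toString()}`);
  // Reply to the sender; the local home app later uses this payload to identify the device.
  socket.send('sdfta87sd6f', rinfo.port, rinfo.address);
});

socket.bind(8888, () => {
  console.log('listening for UDP scan packets on 0.0.0.0:8888');
});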
Packets from my laptop are received by my DIY smart home device, but I don't see any packets from the Google Home device (it is on the same network as my laptop). It seems that Google Assistant does not send the UDP broadcast at all.
I have added an example of my SYNC response and a screenshot of the device scan configuration for this Action.
How can I make my device receive the UDP broadcast? Please tell me if I have misunderstood how local fulfillment works.
SYNC response example below
{
  "payload": {
    "agentUserId": "fas87df6a8s7d6f",
    "devices": [
      {
        "otherDeviceIds": [
          {
            "agentId": "fasf87da",
            "deviceId": "sdfta87sd6f"
          }
        ],
        "deviceInfo": {
          "model": "LIGHT",
          "manufacturer": "87sd6f87asd",
          "swVersion": "1.0",
          "hwVersion": "LIGHT"
        },
        "customData": {},
        "id": "light-1234112",
        "attributes": {},
        "type": "action.devices.types.LIGHT",
        "name": {
          "defaultNames": [
            "light"
          ],
          "nicknames": [
            "light"
          ],
          "name": "7f6as87fa8"
        },
        "traits": [
          "action.devices.traits.OnOff",
          "action.devices.traits.Brightness"
        ],
        "willReportState": false
      }
    ]
  },
  "requestId": "7298347129347192374"
}
Below is an example of the Google Action UDP config.
As you mentioned in the comments, you didn't create a local fulfillment app, which is necessary for developing and testing local fulfillment for any of your existing smart home Actions. You can find more information about this at https://developers.google.com/assistant/smarthome/develop/local?hl=en
Additionally, make sure that you have added the devices correctly after enabling the test suite.
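For reference, a minimal sketch of such a local fulfillment app written against the Local Home SDK is shown below. The verificationId and handler bodies are illustrative: the verificationId returned from IDENTIFY must match the deviceId you report in otherDeviceIds, and the app must be uploaded and configured in the Actions console before the Google Home device starts scanning.

/// <reference types="@google/local-home-sdk" />

const app = new smarthome.App('1.0.0');

app
  .onIdentify((request) => {
    // The scan payload returned by your device arrives here.
    const device = request.inputs[0].payload.device;
    console.log('IDENTIFY scan data:', device.udpScanData);
    return {
      intent: smarthome.Intents.IDENTIFY,
      requestId: request.requestId,
      payload: {
        device: {
          id: device.id || '',
          // Must equal the deviceId from otherDeviceIds in the SYNC response.
          verificationId: 'sdfta87sd6f',
        },
      },
    };
  })
  .onExecute((request) => {
    // Forward EXECUTE commands to the device over the LAN here.
    return new smarthome.Execute.Response.Builder()
      .setRequestId(request.requestId)
      .build();
  })
  .listen()
  .then(() => console.log('Local home app ready'));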
I am working on a sample IoT project where one IoT device is registered on IoT Hub. It exposes one direct method to control the device temperature. On device startup, it registers a callback with IoT Hub to listen for method invocation requests.
As per my understanding, there is no way on the cloud side to know how many direct methods a particular device exposes or what their names are (because MQTT/AMQP is used internally).
Still, I want to be sure whether there is any workaround to get the direct methods registered by the end device. Is there any SDK function or REST API to get the list of direct methods registered by an end device?
You're correct in assuming there is no built-in support that lists the direct methods of your device. A device doesn't publish what methods it has implemented by default.
Options:
IoT PnP
Microsoft created IoT Plug and Play, which focuses on "device models". When a Plug and Play device connects to IoT Hub, it can make its device model known. Part of this model is the concept of "commands", which translates to a direct method in IoT Hub. Your device probably doesn't have such a device model yet, as PnP is pretty new. A device manufacturer/developer can integrate this model into the device.
Create your own index command
If you are the one writing code for this device, and you don't want to do PnP, you could create a direct method that lists all the methods supported by your device. Of course, you would need to know the name of that direct method to call it.
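For example, a rough device-side sketch of such an index method, assuming the Node.js azure-iot-device SDK (the method names and the connection-string environment variable are illustrative):

import { Client } from 'azure-iot-device';
import { Mqtt } from 'azure-iot-device-mqtt';

const client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING!, Mqtt);

const supportedMethods = ['ListMethods', 'SetTemp'];

// "Index" method: the caller only needs to know this one name
// to discover everything else the device handles.
client.onDeviceMethod('ListMethods', (request, response) => {
  response.send(200, { methods: supportedMethods }, (err) => {
    if (err) console.error('Failed to send method response:', err);
  });
});

client.onDeviceMethod('SetTemp', (request, response) => {
  // Apply the requested temperature here.
  response.send(200, { applied: request.payload }, (err) => {
    if (err) console.error('Failed to send method response:', err);
  });
});

client.open((err) => {
  if (err) console.error('Could not connect:', err);
  else console.log('Device connected; direct method handlers registered.');
});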
Recently, Azure IoT Hub (API version 2020-09-30) has been publicly enabled for IoT Plug and Play, where the device model is the "glue" between the device-facing and service-facing sides. See more details about this concept here. The device twin has been extended with a new property, modelId, which represents an identifier of the PnP model in the repository; see more details here.
Once the modelId has been populated in the device twin, the device knows all expected direct methods, including their request/response schemas, C2D messaging, reported and desired properties, and telemetry data. On the other side, a service-side invoker knows how to invoke a direct method on the device, etc.
The following is an example of a short PnP model with one telemetry item (Temperature) and one command for invoking a direct method SetTemp on the device in a synchronous manner (no C2D message). It was created in an IoT Central app:
PnP model (modelId = "dtmi:rk2021iotcfree:Test6vj;1"):
{
  "@id": "dtmi:rk2021iotcfree:Test6vj;1",
  "@type": "Interface",
  "contents": [
    {
      "@id": "dtmi:rk2021iotcfree:Test6vj:Temperature;1",
      "@type": [
        "Telemetry",
        "Temperature"
      ],
      "displayName": {
        "en": "Temperature"
      },
      "name": "Temperature",
      "schema": "double",
      "unit": "degreeCelsius"
    },
    {
      "@id": "dtmi:rk2021iotcfree:Test6vj:SetTemp;1",
      "@type": "Command",
      "commandType": "synchronous",
      "displayName": {
        "en": "SetTemp"
      },
      "name": "SetTemp",
      "request": {
        "@id": "dtmi:rk2021iotcfree:Test6vj:SetTemp:__request:temp;1",
        "@type": "CommandPayload",
        "displayName": {
          "en": "temp"
        },
        "name": "temp",
        "schema": "double"
      }
    }
  ],
  "displayName": {
    "en": "Test"
  },
  "@context": [
    "dtmi:iotcentral:context;2",
    "dtmi:dtdl:context;2"
  ]
}
Based on the modelId, the simulated device device10 has been connected as a PnP device to Azure IoT Hub, and the screen snippet shows a message received by the direct method SetTemp, invoked from the Azure IoT Explorer tool:
The following screen snippet shows the device twin of device10; as you can see, there is a modelId property:
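For illustration, a rough sketch of both sides using the Node.js SDKs (azure-iot-device on the device, azure-iothub on the service side). The connection strings, the device ID device10, and the SetTemp handling are placeholders taken from the example above.

// device.ts - announce the PnP model and handle the SetTemp command it declares
import { Client } from 'azure-iot-device';
import { Mqtt } from 'azure-iot-device-mqtt';

const modelId = 'dtmi:rk2021iotcfree:Test6vj;1';
const deviceClient = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING!, Mqtt);

async function runDevice(): Promise<void> {
  // IoT Hub stores the announced model as the modelId property of the device twin.
  await deviceClient.setOptions({ modelId });
  await deviceClient.open();

  deviceClient.onDeviceMethod('SetTemp', (request, response) => {
    console.log('SetTemp called with temp =', request.payload);
    response.send(200, { ack: true }, (err) => {
      if (err) console.error('Failed to respond:', err);
    });
  });
}

runDevice().catch(console.error);

// service.ts - read the announced modelId and invoke the command as a direct method
import { Client as ServiceClient, Registry } from 'azure-iothub';

const hubConnectionString = process.env.IOTHUB_CONNECTION_STRING!;

const registry = Registry.fromConnectionString(hubConnectionString);
registry.getTwin('device10', (err, twin) => {
  if (err) return console.error(err);
  console.log('device10 modelId:', (twin as any).modelId);
});

const serviceClient = ServiceClient.fromConnectionString(hubConnectionString);
serviceClient.invokeDeviceMethod(
  'device10',
  { methodName: 'SetTemp', payload: 23.5, responseTimeoutInSeconds: 30 },
  (err, result) => {
    if (err) console.error('SetTemp failed:', err);
    else console.log('SetTemp result:', result);
  }
);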
I do recommend using the PnP model for your solution. If you are interested only in commands, you can create a small subset of the model just for that; see the following example:
{
  "@id": "dtmi:rk2021iotcfree:Test6vj;1",
  "@type": "Interface",
  "contents": [
    {
      "@type": "Command",
      "commandType": "synchronous",
      "name": "SetTemp"
    }
  ],
  "@context": [
    "dtmi:dtdl:context;2"
  ]
}
where:
"#id": "dtmi:rk2021iotcfree:Test6vj;1"
represents the modelId
"commandType": "synchronous"
represents the direct method call
"name": "SetTemp"
represents the method name
I recently went through the tutorial for load balancing apps in DCOS using marathon-lb (in the example they balance some nginx containers: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/). I am trying to use this approach to internally load balance my own custom application, a Play (Scala) app. I have the internal marathon-lb set up and can successfully use it for the nginx container, but when I try to use my own Docker image I cannot get this to work.
I start up my service with my custom image and can access it fine by using the IP and port that get assigned to it (i.e. if the service gets deployed on 10.0.0.0 and is available on port 1234, then curl http://10.0.0.0:1234/ works as expected, and I can also make my API calls as defined in my application routes). However, when I try to access the app through the load balancer (curl -i http://marathon-lb-internal.marathon.mesos:10002, where 10002 is the service port), I get this message:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
For reference, here is the JSON file I'm using to start my custom service:
{
  "id": "my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my_repo/my_image:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 9000, "servicePort": 10002, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "USER_NAME=user" },
        { "key": "env", "value": "USER_PASSWORD=password" }
      ],
      "forcePullImage": true
    }
  },
  "instances": 1,
  "cpus": 1,
  "mem": 1000,
  "healthChecks": [{
    "protocol": "HTTP",
    "path": "/v1/health",
    "portIndex": 0,
    "timeoutSeconds": 10,
    "gracePeriodSeconds": 10,
    "intervalSeconds": 2,
    "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "uris": [ "https://s3.amazonaws.com/my_bucket/my_docker_credentials" ]
}
I had the same problem and found the solution here:
marathon-lb health check failing on all spray.io containers
You need to add
"HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
to your config so that the REST layer doesn't reject the health check from Marathon.
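With that label added, the labels section of the app definition above becomes:

"labels": {
  "HAPROXY_GROUP": "internal",
  "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
}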
I am developing a video calling application and am currently using Xirsys's STUN and TURN servers. I am using the result of https://service.xirsys.com/ice as my configuration. Are these the right username and credential to use in the JavaScript page, or should it be something else? If this is wrong, please guide me to where I can get the correct iceServers values.
iceServers = [
{ "url": "stun:turn01.uswest.xirsys.com" },
{ "username": "0xxxxxx8-fxxc-1xx6-bxxb-bxxxxxxxxxx8", "url": "turn:turn01.uswest.xirsys.com:80?transport=udp", "credential": "0xxxxxxe-fxxc-1xx6-axx0-axxxxxxxxxx9" },
{ "username": "0xxxxxx8-fxxc-1xx6-bxxb-bxxxxxxxxxx8", "url": "turn:turn01.uswest.xirsys.com:3478?transport=udp", "credential": "0xxxxxxe-fxxc-1xx6-axx0-axxxxxxxxxx9" },
{ "username": "0xxxxxx8-fxxc-1xx6-bxxb-bxxxxxxxxxx8", "url": "turn:turn01.uswest.xirsys.com:80?transport=tcp", "credential": "0xxxxxxe-fxxc-1xx6-axx0-axxxxxxxxxx9" },
{ "username": "0xxxxxx8-fxxc-1xx6-bxxb-bxxxxxxxxxx8", "url": "turn:turn01.uswest.xirsys.com:3478?transport=tcp", "credential": "0xxxxxxe-fxxc-1xx6-axx0-axxxxxxxxxx9" },
{ "username": "0xxxxxx8-fxxc-1xx6-bxxb-bxxxxxxxxxx8", "url": "turns:turn01.uswest.xirsys.com:443?transport=tcp", "credential": "0xxxxxxe-fxxc-1xx6-axx0-axxxxxxxxxx9" },
{ "username": "0xxxxxx8-fxxc-1xx6-bxxb-bxxxxxxxxxx8", "url": "turns:turn01.uswest.xirsys.com:5349?transport=tcp", "credential": "0xxxxxxe-fxxc-1xx6-axx0-axxxxxxxxxx9" }
];
Note: It works on the same network but not across different networks. Even on a different network I can get the incoming call, but after answering the call the iceConnectionState becomes failed.
I have also raised a similar question here, where I was using numb as the STUN and TURN server.
Thanks in advance.
The ICE string should be used 'as-is' in the ice configuration for your WebRTC application. Note, however, that the ICE credentials are only valid for 30 seconds. You need to request a fresh ICE string immediately before each connection.
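For example, a sketch of requesting fresh credentials per connection; getXirsysIce below is a placeholder for however you already call https://service.xirsys.com/ice:

// Placeholder for your existing call to https://service.xirsys.com/ice;
// it should resolve to the iceServers array returned by Xirsys.
declare function getXirsysIce(): Promise<RTCIceServer[]>;

async function createPeerConnection(): Promise<RTCPeerConnection> {
  // Fetch a fresh ICE string right before creating the connection; the credentials expire in ~30 seconds.
  const iceServers = await getXirsysIce();
  const pc = new RTCPeerConnection({ iceServers });
  pc.oniceconnectionstatechange = () =>
    console.log('iceConnectionState:', pc.iceConnectionState);
  return pc;
}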
Local network connections will work WITHOUT a valid ICE string, because your NAT translation will NOT use TURN/STUN; your NAT translates your local IPs itself. Therefore, local network connections will always work (unless you have an uncommon NAT situation). If you were using the ICE credentials without refreshing them before each call, that will certainly be why your external connections were failing (or will at least contribute to the problem).
Lee
I have a hybrid app and am having some issues with notifications.
I get the error
com.ibm.pushworks.server.exceptions.PushWorksException: FPWSE0009E: Internal server error. No devices found
I am running on local MFP (Eclipse, V7.1). I see the device in the Worklight console, the app is installed on my phone (Xcode -> phone via USB), and I see the opt-in notification message. However, I get the error when I try to send a push.
I am using Postman and the REST API:
http://localhost:10080/worklightadmin/management-apis/1.0/runtimes/MyMobile/notifications/applications/myProj/messages
Here is the body of the POST request:
{
  "message": {
    "alert": "Test message"
  },
  "settings": {
    "apns": {
      "badge": 1,
      "iosActionKey": "Ok",
      "payload": {},
      "sound": "song.mp3"
    },
    "gcm": {
      "payload": {},
      "sound": "song.mp3"
    }
  },
  "target": {
    "consumerIds": [],
    "deviceIds": ["166CB698-45C2-4C61-9074-248EA4F8AA8F"],
    "platforms": ["A", "G"]
  }
}
Can you give me some hints to solve this issue?
Thanks
As Vivin mentioned in the comments, your device ID may be wrong.
The same JSON works for me in my local setup. Is it possible the device ID you entered is wrong? With the current parameters within "target", you can get this error message only if the device ID is wrong.
How can I access the complete avContent API service on a Sony ActionCam AS-200V when operating the camera in WiFi mode? The avContent API responds to "getMethodTypes" with only the following methods when the camera is operating in WiFi mode. When operating the camera in direct-attached mode, this API shows all of its methods.
Results in WiFi mode:
{
  "results": [
    [
      "getMethodTypes",
      [
        "string"
      ],
      [
        "string",
        "string*",
        "string*",
        "string"
      ],
      "1.0"
    ],
    [
      "getVersions",
      [],
      [
        "string*"
      ],
      "1.0"
    ]
  ],
  "id": 1
}
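For reference, the result above was obtained with a call along the following lines (a sketch; 192.168.122.1:8080 is the camera's usual direct-mode address and may differ in your setup, so use the avContent service URL from the camera's device description):

// Ask the avContent service which methods it exposes (Camera Remote API-style JSON body).
async function getAvContentMethodTypes(): Promise<unknown> {
  const response = await fetch('http://192.168.122.1:8080/sony/avContent', {
    method: 'POST',
    body: JSON.stringify({
      method: 'getMethodTypes',
      params: ['1.0'], // API version to list; an empty string lists all versions
      id: 1,
      version: '1.0',
    }),
  });
  return response.json();
}

getAvContentMethodTypes().then((result) => console.log(JSON.stringify(result, null, 2)));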
The HDR-AS200 supports both single connection and multi connection as ways for end users to connect to another device, but the Camera Remote API beta only supports single connection, which is one-to-one access between the camera and the other device.
So any functionality of the avContent service while in WiFi mode is not officially supported.