How can I integrate a web app with the Fablo REST API?

I have deployed the smart contract (Go) and the Fablo network, and I also have my own Fablo REST API, but I need to integrate this API with my web app. How can I do it?
I'm working with this repository https://github.com/fablo-io/fablo-rest
I followed these steps from the README file:
1- Run the script ./hyperledger-citc.sh to install some necessary software
2- Run the command sudo ./fablo recreate to start the network
3- You may open the file fablo-config.json to view the network components. It also includes the installation of the chaincode "asset-transfer-basic" in Golang
4- You may use the commands sudo ./fablo [down | start | stop | up | prune | reset] to interact with the network
5- We use the Fablo REST API (https://github.com/fablo-io/fablo-rest) to interact with chaincodes and execute transactions.
6- Create an authorization token using the command below. This token expires in a few minutes, so it needs to be regenerated
I have learned that I can use the fetch API, but it only receives a URL as a parameter, and I don't have the URL of my API.

In fabric-samples, check the rest-api-typescript project, which is available under asset-transfer-basic, or search for "Asset Transfer REST API Sample" in the Fabric samples and you will find the project info.
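Beyond that sample, your web app can also talk to fablo-rest directly over HTTP. Here is a rough sketch of the flow, using Python's requests only to show the shape of the calls; in the browser the same two POSTs become fetch() calls with the same URLs, headers, and JSON bodies. The port (8801), the admin credentials, and the channel/chaincode placeholders below are assumptions taken from the fablo-rest README and should be checked against your own fablo-config.json:
import requests

BASE_URL = "http://localhost:8801"  # assumed fablo-rest port for your org; verify with docker ps

# 1. Enroll to get a short-lived bearer token (default admin credentials assumed)
enroll = requests.post(f"{BASE_URL}/user/enroll",
                       json={"id": "admin", "secret": "adminpw"})
token = enroll.json()["token"]

# 2. Invoke (or query) a chaincode transaction on your channel with that token
response = requests.post(
    f"{BASE_URL}/invoke/CHANNEL_NAME/CHAINCODE_NAME",  # placeholders for your config values
    headers={"Authorization": f"Bearer {token}"},
    json={"method": "GetAllAssets", "args": []},  # e.g. a method from asset-transfer-basic
)
print(response.json())
Since the token is what expires after a few minutes, the web app should re-run the enroll call (or reenroll) before invoking transactions.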

Related

Calling an API that runs on another GCP project with Airflow Composer

I'm running a task with SimpleHttpOperator on Airflow Composer. This task calls an API that runs on a Cloud Run service living in another project. This means I need a service account in order to access that project.
When I try to make a call to the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What's a solution to such an issue?
Context: Cloud Composer and Cloud Run live in two different projects.
This specific error is unrelated to the cross-project scenario. It seems that you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission needed to read the connection (call_to_api) you have configured for the API.
Check this part of the documentation.
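If that is indeed the cause, granting the Composer service account the Secret Manager accessor role on that secret should resolve it; for example (the service account email is a placeholder for your Composer environment's account):
gcloud secrets add-iam-policy-binding airflow-connections-call_to_api --member="serviceAccount:YOUR_COMPOSER_SA@PROJECT_ID.iam.gserviceaccount.com" --role="roles/secretmanager.secretAccessor"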

How to use realm-cli from a React Native application?

I need to execute a realm-cli command (disable or delete a user) from a mobile application that uses RealmDB, but I didn't find any part of the docs related to doing this.
I thought I could use MongoClient, but I didn't find any method that allows me to execute raw CLI commands.
I need to execute commands like:
realm-cli users disable --app=<Your App ID> --user=<User ID>
Source:
https://docs.mongodb.com/realm/users/delete-or-revoke/
Is there any other way?
You may need to host realm-cli somewhere and write an HTTP interface middleware to make these calls.
I don't believe you can run realm-cli inside a mobile application, as you would need access to process-spawning libraries. realm-cli is open source, so it would be possible to port it to something like C++ (from Go) to make it executable on Android or iOS, but it may be cheaper to just buy a VPS somewhere (or even host it locally for a spell) and pass the arguments to a web route, along the lines of the sketch below.
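A minimal sketch of such a middleware, assuming realm-cli is installed and already logged in on the host; the route name is made up here and the app ID is a placeholder:
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)
APP_ID = "<Your App ID>"  # placeholder

@app.route("/users/<user_id>/disable", methods=["POST"])
def disable_user(user_id):
    # Shell out to realm-cli; the mobile app only ever calls this HTTP route.
    result = subprocess.run(
        ["realm-cli", "users", "disable", f"--app={APP_ID}", f"--user={user_id}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return jsonify({"error": result.stderr.strip()}), 500
    return jsonify({"output": result.stdout.strip()})

if __name__ == "__main__":
    app.run(port=3000)
In practice you would also want to authenticate callers of this route, since it can disable arbitrary users.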

How does Ambari detect a service state?

I'm adding a new custom service to Ambari.
I have successfully created the service and installed it via the Ambari web UI. After starting the master component of my new service, Ambari claims that the master is stopped; however, the master is running successfully on the intended node and I can use its API.
I wonder how Ambari checks a component's status.
Does it use the status function I have provided in the component definition? I don't see any calls to my status function in the Ambari logs.
Or does it use the PID file? My component does not have a PID file.
@TailofGodzilla (cool name btw), when I make custom services I start with existing open source examples and then finally create management packs. You can easily reverse-engineer these, including the service status function.
I checked three of these services (Hue, ELK, NiFi) and all of them use a PID file, with entries for the status function and a status_params file, along the lines of the sketch below.
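For reference, the PID-file pattern in those services looks roughly like this (only the status method is shown; the class name, script path, and status_params contents come from your own service definition, not something Ambari mandates):
# package/scripts/master.py of the custom service (sketch)
from resource_management.libraries.script.script import Script
from resource_management.libraries.functions.check_process_status import check_process_status

class Master(Script):
    def status(self, env):
        import status_params          # defines status_params.pid_file
        env.set_params(status_params)
        # Raises ComponentIsNotRunning when no live process matches the PID file,
        # which is what makes Ambari display the component as stopped.
        check_process_status(status_params.pid_file)
If your component does not write a PID file, you can still implement status() with another liveness check (for example a call to its API), as long as it raises ComponentIsNotRunning when the component is down.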

Enable Cloud Vision API to access a file on Cloud Storage

I have already seen that there are some similar questions, but none of them actually provides a full answer.
Since I cannot comment in that thread, I am opening a new one.
How do I address Brandon's comment below?
"...
In order to use the Cloud Vision API with a non-public GCS object,
you'll need to send OAuth authentication information along with your
request for a user or service account which has permission to read the
GCS object."?
I have the JSON file the system gave me when I created the service account, as described here.
I am trying to call the API from a Python script.
It is not clear how to use it.
I'd recommend using the Vision API client library for Python to perform the call. You can install it on your machine (ideally in a virtualenv) by running the following command:
pip install --upgrade google-cloud-vision
Next, you'll need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. For example, on a Linux machine you'd do it like this:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Finally, you'll just have to call the Vision API client method you need (for example, the label_detection method here), like so:
from google.cloud import vision
from google.cloud.vision import types  # in google-cloud-vision >= 2.0, use vision.Image() directly

def detect_labels():
    """Detects labels in the file located in Google Cloud Storage."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    image.source.image_uri = "gs://bucket_name/path_to_image_object"
    response = client.label_detection(image=image)
    labels = response.label_annotations

    print('Labels:')
    for label in labels:
        print(label.description)
By initializing the client with no parameters, the library will automatically look for the GOOGLE_APPLICATION_CREDENTIALS environment variable you've previously set and run on behalf of that service account. If you granted it permission to access the file, the call will succeed.

WSO2 EMM Agent with COSU not using NFC

I have built the latest version of the WSO2 EMM Android agent (cdmf-agent-android v3.1.30) and got some initial tests working in BYOD mode with IoT Server 3.1.0.
When built for COSU, it waits to be provisioned by another device via NFC. But I want to provision devices without NFC. What options do I have? Could I programmatically trigger a custom provisioning option?
There are some options to do this, depending on your Android version.
I will start with the simplest option. If you have Android 7+, you can use QR code provisioning, which follows the exact same process as NFC provisioning. You can see some specifications from Google regarding this.
The second option is a bit trickier and requires some custom development on your side. The first thing is to make your device a Device Owner (which is needed for COSU mode; read up about Device Owner here), using the command: adb shell dpm set-device-owner org.wso2.iot.agent/org.wso2.iot.agent.services.AgentDeviceAdminReceiver
Note: only one device owner can be set; to remove a device owner, the device has to be factory reset.
Once this is done you can launch your app using adb shell am start -n "org.wso2.iot.agent/org.wso2.iot.agent.activities.SplashActivity".
The above will get your app to run correctly, but now it has to authenticate itself to communicate with the server. When using NFC provisioning, an access token is delivered in the extras bundle as 'android.app.extra.token'; you can insert this extra in the launch intent as follows: adb shell am start -n "org.wso2.iot.agent/org.wso2.iot.agent.activities.SplashActivity" --es android.app.extra.token generated_access_token. You will have to edit the SplashActivity class to accept this token and follow the general authentication processes built into the app.
This may be a little bit late but I hope it is still helpful!
Some extra information you may appreciate: here is a string representation of the NFC message used; these are the specifications set in the NFC Provisioning app:
`
#Thu Apr 12 13:42:11 GMT+02:00 2018
android.app.extra.PROVISIONING_LOCAL_TIME=1523533331087
android.app.extra.PROVISIONING_TIME_ZONE=Asia/Colombo
android.app.extra.PROVISIONING_SKIP_ENCRYPTION=true
android.app.extra.PROVISIONING_WIFI_SECURITY_TYPE=WPA
android.app.extra.PROVISIONING_WIFI_PASSWORD=PASSWORD
android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION=LOCATION_OF_APK
android.app.extra.PROVISIONING_WIFI_SSID="WIFI_SSID_NAME"
android.app.extra.PROVISIONING_LOCALE=en_US
android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_CHECKSUM=E8PtiqUOcqKi5IXeRBF-5Br0zXg
android.app.extra.PROVISIONING_ADMIN_EXTRAS_BUNDLE=\#admin extras bundle\n\#Thu Apr 12 13\:42\:11 GMT+02\:00 2018\nandroid.app.extra.token\=GENERATED_ACCESS_TOKEN\n
android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME=org.wso2.iot.agent
`
An example of a QR Code representation would be:
`
{
"android.app.extra.PROVISIONING_DEVICE_ADMIN_COMPONENT_NAME": "org.wso2.iot.agent/org.wso2.iot.agent.services.AgentDeviceAdminReceiver",
"android.app.extra.PROVISIONING_DEVICE_ADMIN_SIGNATURE_CHECKSUM": "CSGeivCEHdJrPT0qy4W67LZSy32Fus7GyUn0jE5o028",
"android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_DOWNLOAD_LOCATION": "APK_DOWNLOAD_LOCATION",
"android.app.extra.PROVISIONING_SKIP_ENCRYPTION": false,
"android.app.extra.PROVISIONING_DEVICE_ADMIN_PACKAGE_NAME": "org.wso2.iot.agent",
"android.app.extra.PROVISIONING_ADMIN_EXTRAS_BUNDLE": {
"android.app.extra.token":"GENERATED_ACCESS_TOKEN"
}
}
`