I am trying to estimate costs in Google Cloud using the Cloud Billing cost estimation API. To test the API and check its output, I am using Postman and passing the JSON representation in the request body. I was able to put together the JSON representation for the Cloud Storage and compute workloads, but I cannot work out the one for Cloud CDN. Can anyone help me out?
https://cloud.google.com/billing/docs/reference/cost-estimation/rest/v1beta/CostScenario#cloudcdnworkload
{
  "cacheLookUpRate": {
    object (Usage)
  },
  "cacheFillOriginService": enum (CacheFillOriginService),
  "cacheFillRegions": {
    object (CacheFillRegions)
  },
  "cacheFillRate": {
    object (Usage)
  }
}
I am looking for the proper JSON representation so that I can pass it in Postman and get an estimate for the Cloud CDN workload, just like the one I got for compute.
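For what it's worth, here is a rough sketch of how a cloudCdnWorkload entry might be filled in, reusing the same costScenario wrapper and Usage/usageRateTimeline shape that already works for the compute workload. The workload name, units, rates, and especially the enum placeholders below are assumptions, not verified values; the valid strings for cacheFillOriginService, and the exact field names inside cacheFillRegions, have to be confirmed against their reference pages:

{
  "costScenario": {
    "workloads": [
      {
        "name": "cdn-workload-example",
        "cloudCdnWorkload": {
          "cacheLookUpRate": {
            "usageRateTimeline": {
              "unit": "count/s",
              "usageRateTimelineEntries": [
                {
                  "effectiveTime": { "estimationTimeFrameOffset": "0s" },
                  "usageRate": 100
                }
              ]
            }
          },
          "cacheFillOriginService": "<value from the CacheFillOriginService enum>",
          "cacheFillRegions": {
            "sourceRegion": "<source region enum value>",
            "destinationRegion": "<destination region enum value>"
          },
          "cacheFillRate": {
            "usageRateTimeline": {
              "unit": "GiBy/s",
              "usageRateTimelineEntries": [
                {
                  "effectiveTime": { "estimationTimeFrameOffset": "0s" },
                  "usageRate": 1
                }
              ]
            }
          }
        }
      }
    ],
    "scenarioConfig": { "estimateDuration": "3600s" }
  }
}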
NOAA provides tidal and weather data through their HTTP API, and I would like to use that API to pull data into ThingsBoard (Professional) every six minutes to overlay with my device data (their data are updated every 6 minutes). Can someone walk me through the details of using the correct integrations or rule chains to get the time-series data added to the database? It would also be nice to only use the metadata once. Below you can see how to get the most recent tide gauge level (water level) using their API.
For example, to see the latest water level for a tide gauge (in this case, station 8638610), the API allows getting the most recent water level reading -- https://api.tidesandcurrents.noaa.gov/api/prod/datagetter?date=latest&station=8638610&product=water_level&datum=navd&units=metric&time_zone=lst_ldt&application=web_services&format=json
That call produces the following JSON: {"metadata":{"id":"8638610","name":"Sewells Point","lat":"36.9467","lon":"-76.3300"},"data":[{"t":"2022-02-08 22:42", "v":"-0.134", "s":"0.003", "f":"1,0,0,0", "q":"p"}]}
The Data Converter was fairly easy to construct (except maybe the noaa_data.data[0, 0] used in the code below):
//function Decoder(payload, metadata)
var noaa_data = decodeToJson(payload);

var deviceName = noaa_data.metadata.id;
var dataType = 'water_level';
var latitude = noaa_data.metadata.lat;
var longitude = noaa_data.metadata.lon;
// first (and only) entry; the original data[0, 0] relied on the JavaScript
// comma operator and was simply equivalent to data[0]
var waterLevelData = noaa_data.data[0];

// helper functions
function decodeToString(payload) {
    return String.fromCharCode.apply(String, payload);
}

function decodeToJson(payload) {
    var str = decodeToString(payload);
    var data = JSON.parse(str);
    return data;
}

var result = {
    deviceName: deviceName,
    dataType: dataType,
    time: waterLevelData.t,
    waterLevel: waterLevelData.v,
    waterLevelStDev: waterLevelData.s,
    latitude: latitude,
    longitude: longitude
};

return result;
which produces the following output:
{
"deviceName": "8638610",
"dataType": "water_level",
"time": "2022-02-08 22:42",
"waterLevel": "-0.134",
"waterLevelStDev": "0.003",
"latitude": "36.9467",
"longitude": "-76.3300"
}
I am not sure what process to use to get the data into ThingsBoard to be displayed as a device alongside my other device data.
Thank you for your help.
If you have a specific (and small) number of stations to grab, then you can do the following:
Create the devices in ThingsBoard manually
Go into rule chains and create a water stations rule chain
For each water station, place a 'Generator' node, selecting the originator as required.
Route these into an external "REST API call" node.
Route the result of the call into a blue script node and put your decoder logic in there (see the sketch after the example rule chain below).
Route the result to telemetry
Example rule chain
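As a rough illustration only (not the exact node configuration), the script node in that chain could reshape the NOAA response into telemetry along these lines, assuming the upstream REST API call node has placed the parsed JSON body in msg:

// Transformation script node (sketch): reshape the NOAA response into telemetry
var reading = msg.data[0];
var telemetry = {
    dataType: 'water_level',
    time: reading.t,
    waterLevel: parseFloat(reading.v),
    waterLevelStDev: parseFloat(reading.s),
    latitude: parseFloat(msg.metadata.lat),
    longitude: parseFloat(msg.metadata.lon)
};
// POST_TELEMETRY_REQUEST lets a downstream 'save timeseries' node store the values
return {msg: telemetry, metadata: metadata, msgType: "POST_TELEMETRY_REQUEST"};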
A more complex but more scalable solution:
Use a single generator node
Route the message into a blue script node. This will contain a list of station ids that you want to pull info for. By setting the output of the script to an array like the following, you can make it send out multiple messages in sequence (a fuller sketch follows the rough example below):
return [{msg: {...}, metadata: {...}, msgType: "..."}, ...]
Route the blue script into the REST API call node and get the station data
Do some post-processing with another blue script node if you need to. Don't decode the data here though.
Route all this into another REST API node and POST the data back to your HTTP integration endpoint (if you don't have one, you will need to create it; it's fairly simple)
Connect your data converter to this integration.
Finally, modify your output so that it matches the format the converter expects:
{
"deviceName": "8638610",
"deviceType": "water-station",
"telemetry": {
"dataType": "water_level",
"time": "2022-02-08 22:42",
"waterLevel": "-0.134",
"waterLevelStDev": "0.003",
"latitude": "36.9467",
"longitude": "-76.3300"
}
}
Rough example
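A minimal sketch of the fan-out script node mentioned above, assuming a hard-coded list of station ids (the ids and metadata keys here are just illustrative):

// Script node (sketch): fan one generator tick out into one message per station
var stationIds = ["8638610", "8632200"]; // hypothetical station list
var messages = [];
for (var i = 0; i < stationIds.length; i++) {
    messages.push({
        msg: { stationId: stationIds[i] },
        metadata: { stationId: stationIds[i] },
        msgType: msgType // keep the generator's message type
    });
}
return messages;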
Above is how I would do it if I didn't want to use any external services. If you're AWS-savvy, I'd say set up a cron job to trigger a Lambda function every 6 minutes and post into your platform. Either will work.
I've tried booking confirmation emails from a dozen providers (which I don't want to post for privacy reasons), and every time the API returns 'Unable to parse' with no additional diagnostic information.
As a self-service API they don't offer support through any channel other than Stack Overflow, but I'm hoping someone has successfully used the endpoint.
I'm mostly using Gmail to access sample flight booking emails, then selecting "View Original" to download the original MIME-format email.
This is what I use to read the .eml file into code:
const fs = require('fs');

function base64_encode(file) {
    // read binary data from the .eml file
    const bitmap = fs.readFileSync(file);
    // convert binary data to a base64-encoded string
    // (Buffer.from replaces the deprecated new Buffer() constructor)
    return Buffer.from(bitmap).toString('base64');
}
However, every single email I submit to the endpoint eventually returns:
{ data:
   { data:
      { type: 'trip-parser-job',
        id: 'REDACTED',
        self: [Object],
        status: 'ERROR',
        detail: 'Unable to parse' } } }
and at this point, I'm starting to think that either the API is broken, or they haven't correctly documented what data should be submitted as content. I've decoded the sample document they provide and can't see any major difference between that and my inputs.
Does someone have either some working samples that the API was able to process, or some NodeJS code which seems to reliably get a result from the API?
I'm trying to work out whether the Google Analytics Reporting API's userActivity.search method applies any sampling to the returned data.
The documentation for the API:
https://developers.google.com/analytics/devguides/reporting/core/v4/rest/v4/userActivity/search
The documented response JSON:
{
"sessions": [
{
object(UserActivitySession)
}
],
"totalRows": number,
"nextPageToken": string,
"sampleRate": number
}
For the sampleRate, it states:
This field represents the sampling rate for the given request and is a number between 0.0 to 1.0. See developer guide for details.
But what does 1.0 actually mean? Does it mean it returns 100% of the sessions? The link it provides doesn't actually mention anything about the sample rate as a number.
I've tried to compare the JSON response with the Google Analytics UI, but this didn't confirm whether the data is sampled in the API response.
It means that the dimensions and attributes gathered/calculated are based on 100% of the sessions. If it is, say, 0.6, then they are based on only 60% of the sessions.
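As a quick programmatic check, a minimal sketch (assuming response holds the parsed JSON body returned by userActivity.search):

// Sketch: inspect the sampling rate of a userActivity.search response
const sampleRate = response.sampleRate;   // between 0.0 and 1.0
const isUnsampled = sampleRate === 1.0;   // 1.0 => based on 100% of the sessions
console.log('Based on ' + (sampleRate * 100) + '% of sessions; unsampled: ' + isUnsampled);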
I'm taking my first experimental steps with Google's pre-built templates in Google Cloud Dataflow (Cloud Pub/Sub to BigQuery).
As a milestone toward my final goal (having physical gadgets report a data stream to Google Cloud Pub/Sub), I wish to achieve something like this:
Postman (make an authenticated POST request with a JSON message to a Google Cloud Platform, GCP, endpoint) --> GCP Pub/Sub --> GCP Dataflow --> GCP BigQuery.
Right now I am following the tutorial found in Executing Templates, https://cloud.google.com/dataflow/docs/templates/executing-templates, "Example 2: Custom template, streaming job". This section states:
...This example projects.templates.launch request creates a streaming job
from a template that reads from a Pub/Sub topic and writes to a
BigQuery table. The BigQuery table must already exist with the
appropriate schema. If successful, the response body contains an
instance of LaunchTemplateResponse. ...
and furthermore shows how to do a POST:
https://dataflow.googleapis.com/v1b3/projects/[YOUR_PROJECT_ID]/templates:launch?gcsPath=gs://[YOUR_BUCKET_NAME]/templates/TemplateName
{
"jobName": "[JOB_NAME]",
"parameters": {
"topic": "projects/[YOUR_PROJECT_ID]/topics/[YOUR_TOPIC_NAME]",
"table": "[YOUR_PROJECT_ID]:[YOUR_DATASET].[YOUR_TABLE_NAME]"
},
"environment": {
"tempLocation": "gs://[YOUR_BUCKET_NAME]/temp",
"zone": "us-central1-f"
}
}
There are two things that confuse me. For the sake of a simple example, let's say that I have multiple vehicles that should continuously report their current status. I have already created my MQTT topic: VEHICLE_STATUS. Each of my vehicles should be able to report its:
Position [String]
Speed [Float]
Time [String]
VehicleID [Integer]
I'm aware of the prototype for a PubsubMessage:
{
"data": string,
"attributes": {
string: string,
...
},
"messageId": string,
"publishTime": string,
}
My questions:
How should my BigQuery table schema look (which columns do I need to create)?
How should the entire corresponding JSON message look? What should my vehicle report to the endpoint each time?
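Not a definitive answer, but as a sketch of how this usually fits together with the Pub/Sub-to-BigQuery template: the template expects each Pub/Sub message's data field to be a JSON object whose keys match the BigQuery column names. Under that assumption, the table schema could simply be position (STRING), speed (FLOAT), time (TIMESTAMP), vehicleID (INTEGER), and each vehicle would publish something like this (the field names, values, and optional attributes below are illustrative):

POST https://pubsub.googleapis.com/v1/projects/[YOUR_PROJECT_ID]/topics/VEHICLE_STATUS:publish
{
  "messages": [
    {
      "data": "<base64 encoding of the JSON payload below>",
      "attributes": {
        "vehicleID": "17"
      }
    }
  ]
}

where the decoded data payload is one JSON object per reading, matching the table columns:

{"position": "59.3293,18.0686", "speed": 12.3, "time": "2018-06-01 12:00:00", "vehicleID": 17}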
I control SoftLayer resources (server, storage, etc.) through the Java API.
I am verifying an upgrade of the EVault storage space (20 GB => 40 GB) via the API, but the API returns an error message
"error": "EVault service already exists for the requested location (Seoul 1).",
"code": "SoftLayer_Exception_Public"
from the POST request
URL(POST) https://IBMxxxx:xxxxx#api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder.json
Here is the request body:
{"parameters":
[
{"complexType":"SoftLayer_Container_Product_Order"
,"orderContainers":[
{"complexType":"SoftLayer_Container_Product_Order_Network_Storage_Backup_Evault_Vault"
,"location":"1555995"
,"packageId":0
,"quantity":1
,"virtualGuests":[
{"complexType":"SoftLayer_Virtual_Guest"
,"id":376047
}
],
"useHourlyPricing":false
,"prices":[
{"complexType":"SoftLayer_Product_Item_Price","id":66257}
]
}
]
}
]
}
What you are doing with that request is ordering a new eVault storage service; besides, the price id you set corresponds to a 60 GB eVault disk capacity, not 40 GB.
UPDATE
Retrieve item prices only for eVault storage capacities.
https://IBMxxxx:xxxxx#api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/0/getItemPrices?objectMask=mask[id,locationGroupId,item[id,keyName,description],pricingLocationGroup[locations[id,name,longName]]]&objectFilter={"itemPrices":{"item": {"keyName":{"operation":"*=EVAULT"}}}}
Currently, to perform an upgrade, what you need is the method SoftLayer_Network_Storage::upgradeVolumeCapacity; please see the following request:
Perform a capacity upgrade on a specific eVault storage volume:
(url POST) https://IBMxxxx:xxxxx#api.softlayer.com/rest/v3/SoftLayer_Network_Storage/eVaultId/upgradeVolumeCapacity
with the following request BODY:
{
"parameters":
[
559
]
}
Do not forget to replace eVaultId in the request with the id of your eVault storage; try this REST request to retrieve the specific eVault id:
Retrieve an account's associated EVault storage volumes:
https://IBMxxxx:xxxxx#api.softlayer.com/rest/v3/SoftLayer_Account/getEvaultNetworkStorage?objectMask=mask[id, serviceResourceName,guestId,billingItem[id,location]]
Once obtained, you may then specify an upgrade item (e.g. "itemId": 559, which might be the itemId for a 40 GB eVault disk).
To retrieve the upgrade itemIds for the different upgrade capacities allowed, use the following request:
https://IBMxxxx:xxxxx#api.softlayer.com/rest/v3/SoftLayer_Network_Storage/eVaultId/getObject?objectMask=mask[id, billingItem[id, upgradeItems[prices]]]
(don't forget to change the eVaultId).
Review the upgradeItems property and choose the capacity required; use the id value of the capacity you need with the upgradeVolumeCapacity method.
For more information about eVaults, see below:
Sample code to handle the upgrade of EVault?
How to find location of an EVault using SoftLayer API?
Sample code for ordering an EVault backup in SoftLayer