I am trying to load a CSV file into BigQuery using a Python script modelled on the Python sample code here: https://developers.google.com/bigquery/docs/developers_guide
But I'm running into the following error when I try to load a table with the REST API:
{'status': '200', 'content-length': '1492', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jun 14 2012 02:12:09 (1339665129)', 'etag': '"tcivyOj9QvKAbuEJ5MEMf9we85w/-mxYhUDjvvydxcebR8fXI6l_5RQ"', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Fri, 06 Jul 2012 22:30:55 GMT', 'content-type': 'application/json'}
{
  "kind": "bigquery#job",
  "etag": "\"tcivyOj9QvKAbuEJ5MEMf9we85w/-mxYhUDjvvydxcebR8fXI6l_5RQ\"",
  "id": "firespotter.com:firespotter:job_d6b99265278b4c0da9c3033acf39d6b2",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/firespotter.com:firespotter/jobs/job_d6b99265278b4c0da9c3033acf39d6b2",
  "jobReference": {
    "projectId": "firespotter.com:firespotter",
    "jobId": "job_d6b99265278b4c0da9c3033acf39d6b2"
  },
  "configuration": {
    "load": {
      "schema": {
        "fields": [
          {
            "name": "date",
            "type": "STRING"
          },
          {
            "name": "time",
            "type": "STRING"
          },
          {
            "name": "call_uuid",
            "type": "STRING"
          },
          {
            "name": "log_level",
            "type": "STRING"
          },
          {
            "name": "file_line",
            "type": "STRING"
          },
          {
            "name": "message",
            "type": "STRING"
          }
        ]
      },
      "destinationTable": {
        "projectId": "385479794093",
        "datasetId": "telephony_logs",
        "tableId": "table_name"
      },
      "createDisposition": "CREATE_IF_NEEDED",
      "writeDisposition": "WRITE_TRUNCATE",
      "encoding": "UTF-8"
    }
  },
  "status": {
    "state": "DONE",
    "errorResult": {
      "reason": "notFound",
      "message": "Not Found: Dataset 385479794093:telephony_logs"
    },
    "errors": [
      {
        "reason": "notFound",
        "message": "Not Found: Dataset 385479794093:telephony_logs"
      }
    ]
  }
}
The projectId listed in the error, "385479794093", is not the projectId that I pass in; it's the "project number". The projectId should be "firespotter.com:firespotter":
{
  "kind": "bigquery#datasetList",
  "etag": "\"tcivyOj9QvKAbuEJ5MEMf9we85w/ZMa8z6LKMgWZIqLWh3ti2SsSs4g\"",
  "datasets": [
    {
      "kind": "bigquery#dataset",
      "id": "firespotter.com:firespotter:telephony_logs",
      "datasetReference": {
        "datasetId": "telephony_logs",
        "projectId": "firespotter.com:firespotter"
      }
    }
  ]
}
Why does the REST API insist on supplying its own incorrect projectId, when I pass the correct value in three different places? Is there another place where I need to pass in or set the Project ID?
For reference, here is the relevant code snippet:
# Imports assumed by this snippet (oauth2client-era client libraries)
import sys

import httplib2
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.file import Storage
from oauth2client.tools import run

PROJECT = 'firespotter.com:firespotter'
DATASET = 'telephony_logs'

FLOW = OAuth2WebServerFlow(
    client_id='385479794093.apps.googleusercontent.com',
    client_secret='<a_secret_here>',
    scope='https://www.googleapis.com/auth/bigquery',
    user_agent='firespotter-upload-script/1.0')
def loadTable(http, projectId, datasetId, tableId, file_path, replace=False):
    url = "https://www.googleapis.com/upload/bigquery/v2/projects/" + projectId + "/jobs"

    # Create the body of the request, separated by a boundary of xxx
    mime_data = ('--xxx\n' +
                 'Content-Type: application/json; charset=UTF-8\n' + '\n' +
                 '{\n' +
                 '  "projectId": "' + projectId + '",\n' +
                 '  "configuration": {\n' +
                 '    "load": {\n' +
                 '      "schema": {\n' +
                 '        "fields": [\n' +
                 '          {"name":"date", "type":"STRING"},\n' +
                 '          {"name":"time", "type":"STRING"},\n' +
                 '          {"name":"call_uuid", "type":"STRING"},\n' +
                 '          {"name":"log_level", "type":"STRING"},\n' +
                 '          {"name":"file_line", "type":"STRING"},\n' +
                 '          {"name":"message", "type":"STRING"}\n' +
                 '        ]\n' +
                 '      },\n' +
                 '      "destinationTable": {\n' +
                 '        "projectId": "' + projectId + '",\n' +
                 '        "datasetId": "' + datasetId + '",\n' +
                 '        "tableId": "' + tableId + '"\n' +
                 '      },\n' +
                 '      "createDisposition": "CREATE_IF_NEEDED",\n' +
                 '      "writeDisposition": "' + ('WRITE_TRUNCATE' if replace else 'WRITE_APPEND') + '",\n' +
                 '      "encoding": "UTF-8"\n' +
                 '    }\n' +
                 '  }\n' +
                 '}\n' +
                 '--xxx\n' +
                 'Content-Type: application/octet-stream\n' +
                 '\n')

    # Append data from the specified file to the request body
    f = open(file_path, 'r')
    header_line = f.readline()  # skip header line
    mime_data += f.read()

    # Signify the end of the body
    mime_data += ('--xxx--\n')

    headers = {'Content-Type': 'multipart/related; boundary=xxx'}
    resp, content = http.request(url, method="POST", body=mime_data, headers=headers)
    print str(resp) + "\n"
    print content
# --- Main ----------------------------------------------
def main(argv):
    csv_path = argv[1]  # path to the CSV file passed on the command line

    # If the credentials don't exist or are invalid, run the native client
    # auth flow. The Storage object will ensure that if successful the good
    # credentials will get written back to a file.
    storage = Storage('bigquery2_credentials.dat')  # Choose a file name to store the credentials.
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run(FLOW, storage)

    # Create an httplib2.Http object to handle our HTTP requests and authorize it
    # with our good credentials.
    http = httplib2.Http()
    http = credentials.authorize(http)

    loadTable(http, PROJECT, DATASET, 'table_name', csv_path, replace=True)

if __name__ == '__main__':
    main(sys.argv)
Did you recently set the project id to firespotter.com:firespotter? If the dataset was created before the project was named, there will be a mismatch between the old project id and the new. There is an automated system that updates project ids, but it is possible that it hasn't run yet or is having a problem (I'm on vacation right now, so can't check). Hopefully, if you retry again some time soon it will just work. If not, let us know.
There are a few questions here:
Why did my load job fail? Just to check, was that the entire request you sent? If so, it looks like there's no data to be loaded, i.e. sourceUris is empty. If so, that's the problem, and we're apparently returning the world's worst error message.
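For reference, when the CSV data is not inlined in the multipart body, a load job instead points at files already in Google Cloud Storage via sourceUris. A minimal sketch of such a configuration (the gs:// path below is a placeholder, not a value from the original request):

load_job_body = {
    "configuration": {
        "load": {
            # Hypothetical GCS path; with sourceUris present there is no need
            # to append CSV data to the multipart request body.
            "sourceUris": ["gs://your-bucket/telephony_logs.csv"],
            "schema": {"fields": [{"name": "date", "type": "STRING"}]},
            "destinationTable": {
                "projectId": "firespotter.com:firespotter",
                "datasetId": "telephony_logs",
                "tableId": "table_name"
            }
        }
    }
}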
Why the numeric project ID? BigQuery uses the project name and the associated numeric ID interchangeably, so all you're seeing is that we tend to convert project names to IDs on the way in. Just to confirm, if you visit the Google APIs Console and look up your project, do you see that same numeric ID in the url?
Why does the project ID get specified in multiple places? First, it seems that you specified the project ID as a top-level attribute in the job; that shouldn't be necessary. (I suspect that it simply overrides whatever project ID you specify in the job reference itself.) That leaves two locations -- once as part of the job reference and the other as part of the table reference. These actually signify two different things -- the one in the job specifies what project you're inserting a job into, i.e. who's paying for the job, and the one in the table specifies what project the resulting table lives in, i.e. who owns the resulting data. In general, these will be the same, but the API allows them to be distinct. (This could be useful if, for example, you built a service that needed to insert data into tables ultimately owned by customers.)
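To illustrate that last distinction, here is a minimal sketch of a job body (values are placeholders): the projectId inside jobReference identifies the project the job is billed to, while the one inside destinationTable identifies the project that owns the resulting table.

job_body = {
    "jobReference": {
        # Project the job is inserted into, i.e. who pays for the job.
        "projectId": "firespotter.com:firespotter"
    },
    "configuration": {
        "load": {
            "destinationTable": {
                # Project that owns the resulting table; usually the same project.
                "projectId": "firespotter.com:firespotter",
                "datasetId": "telephony_logs",
                "tableId": "table_name"
            }
        }
    }
}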
I am trying to loop through all of our subscriptions and get Policy Exemptions, but only the ones that we have created. The loop appears fine, but the -Match check appears to bring back some Exemptions that don't meet the -Match criteria.
$allSubscriptions = Get-AzSubscription
$baseFolder = "C:\source\PowerShell Exemptions Dump\"

# loop subscriptions
foreach ($sub in $allSubscriptions) {
    $subName = $sub.Name

    # Get Exemptions at Sub level
    Set-AzContext -Subscription $subName

    # Write to File
    $exemptionsIn = Get-AzPolicyExemption | ConvertTo-Json
    $fileName = $baseFolder + $subName + ".json"

    $exemptionsOut = ''
    foreach ($ex in $exemptionsIn | ConvertFrom-Json) {
        if ($ex.Properties.PolicyAssignmentId -Match "abc") {
            $exemptionsOut += $ex | ConvertTo-Json
        }
    }

    if ($exemptionsOut -ne '') {
        $exemptionsOut | Out-File -FilePath $fileName
        $exemptionsOut = ''
    }
}
It does work to a certain extent, i.e. if a Subscription has a 0% match in everything it brings back, then it doesn't create a file. But it appears that if it finds one match, it then saves Exemptions to the file that don't match.
Here is some example Json that was saved to one of the files:
[
  {
    "Properties": {
      "PolicyAssignmentId": "/providers/Microsoft.Management/managementGroups/abc-mg/providers/Microsoft.Authorization/policyAssignments/abc-mg",
      "PolicyDefinitionReferenceIds": "",
      "ExemptionCategory": "Waiver",
      "DisplayName": "abc - abc-mg Policy Assignment",
      "Description": "AIB Testing",
      "ExpiresOn": "\/Date(1662134400000)\/",
      "Metadata": ""
    },
    "SystemData": null,
    "Name": "456",
    "ResourceId": "/subscriptions/123/providers/Microsoft.Authorization/policyExemptions/789",
    "ResourceName": "456",
    "ResourceGroupName": null,
    "ResourceType": "Microsoft.Authorization/policyExemptions",
    "SubscriptionId": "123"
  },
  {
    "Properties": {
      "PolicyAssignmentId": "/providers/Microsoft.Management/managementGroups/root-mg/providers/Microsoft.Authorization/policyAssignments/111",
      "PolicyDefinitionReferenceIds": "installEndpointProtection",
      "ExemptionCategory": "Waiver",
      "DisplayName": "root-mg - Azure Security Benchmark",
      "Description": "currently use sophos and not defender",
      "ExpiresOn": null,
      "Metadata": ""
    },
    "SystemData": null,
    "Name": "345",
    "ResourceId": "/providers/Microsoft.Management/managementGroups/root-mg/providers/Microsoft.Authorization/policyExemptions/345",
    "ResourceName": "345",
    "ResourceGroupName": null,
    "ResourceType": "Microsoft.Authorization/policyExemptions",
    "SubscriptionId": null
  }
]
Finally, I don't appear to get all Exemptions back in this loop, i.e. some are set at Resource Group or Resource level. Do I need to drill further beyond Set-AzContext?
After reproducing the same code on my end, I was able to see the expected results. However, make sure you are checking the right file and the location to which you are sending your data.
Finally, I don't appear to get all Exemptions back in this loop, i.e. some are set at Resource Group or Resource level.
This might be due to the scope that you are looking at. After setting the scope to the required level, I was able to get the expected results. Below is the code that worked for me.
$Resource = Get-AzResource -ResourceGroupName <YOUR_RESOURCEGROUP_NAME>
for ($I = 0; $I -lt $Resource.ResourceId.Count; $I++)
{
    $a = Get-AzPolicyExemption -Scope $Resource.ResourceId[$I]
    for ($J = 0; $J -lt $a.Count; $J++)
    {
        If ($a.ResourceId[$J] -Match $Resource.ResourceId[$I])
        {
            $exemptionsIn = Get-AzPolicyExemption -Scope $Resource.ResourceId[$I] | ConvertTo-Json
            $fileName = "sample2" + ".json"
            $exemptionsOut = ''
            foreach ($ex in $exemptionsIn | ConvertFrom-Json) {
                if ($ex.Properties.PolicyAssignmentId -Match "Swetha*") {
                    $exemptionsOut += $ex | ConvertTo-Json
                }
            }
            if ($exemptionsOut -ne '') {
                $exemptionsOut | Out-File -FilePath $fileName
                $exemptionsOut = ''
            }
        }
    }
}
I have a few policy exemptions in my subscription, but the above script gave me the results at Resource level which -Match with "Swetha".
Is it possible to store SQL queries in a JSON array of objects? Because when I have something like this:
[
  {
    "id": "1",
    "query": "SELECT ID FROM table"
  },
  {
    "id": "2",
    "query": "SELECT ID FROM table"
  },
  {
    "id": "3",
    "query": "SELECT USER FROM table"
  }
]
the JSON file in VS Code is fine, no errors. It gets nasty when I want to store complex queries with joins etc.
For example, this query, even if I format it correctly, will generate a formatting error in the JSON file
(just an example, I know it is not valid):
SELECT user, id, , count(price) as numrev
FROM price
where id = 1 and user = 0
group by user, id, price
The error says that it can't be stored in a string.
It is fairly easy to do, but requires one extra step.
Simply convert/encode your raw SQL queries to base64 text.
Decode the text before you execute the queries in your code.
If the JSON file is created automatically by a program, almost all programming languages provide base64 encode/decode functions as part of their core library; if not, you can download a compatible package/library to achieve this automation.
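For example, if the generator script happens to be written in Python, a minimal encoding sketch could look like this (the file name and queries are placeholders):

import base64
import json

# Placeholder queries; in practice these come from wherever your raw SQL lives.
queries = [
    {"id": "1", "query": "SELECT ID FROM table"},
    {"id": "3", "query": "SELECT USER FROM table"},
]

# Encode each query to base64 so that newlines and quotes never break the JSON.
encoded = [
    {"id": q["id"], "query": base64.b64encode(q["query"].encode("utf-8")).decode("ascii")}
    for q in queries
]

with open("queries.json", "w") as f:
    json.dump(encoded, f, indent=2)

The resulting file can then be consumed as in the JavaScript example below, which decodes each query with atob() before use.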
var queries = [{
    "id": "1",
    "query": "U0VMRUNUIElEIEZST00gdGFibGU="
  },
  {
    "id": "2",
    "query": "U0VMRUNUIElEIEZST00gdGFibGU="
  },
  {
    "id": "3",
    "query": "U0VMRUNUIFVTRVIgRlJPTSB0YWJsZQ=="
  },
  {
    "id": "4",
    "query": "U0VMRUNUIHVzZXIsIGlkLCAsIGNvdW50KHByaWNlKSBhcyBudW1yZXYKICBGUk9NIHByaWNlCiAgd2hlcmUgaWQgPSAxIGFuZCB1c2VyID0gMCAKICBncm91cCBieSB1c2VyLCBpZCwgcHJpY2U="
  }
];

for (var i = 0; i < queries.length; i++) {
  console.log("id = " + queries[i].id + ", query = " + atob(queries[i].query));
}
When you parse your JSON array, make sure to decode the text before you execute the SQL queries.
Let me know if this one helped you. ☺
FYI, refer to http://www.utilities-online.info/base64/
From a form submission I receive two objects: the original values and the dirty values. I'd like to figure out how to create a diff to send to the server using the following rules:
id field of the root object should always be included
all changed primitive values should be included
all nested changes should be included as well.
if a nested value other than id changed, it should include id as well.
Original values:
{
  "id": 10,
  "name": "tkvw",
  "locale": "nl",
  "address": {
    "id": 2,
    "street": "Somewhere",
    "zipcode": "8965"
  },
  "subscriptions": [8, 9, 10],
  "category": {
    "id": 6
  }
}
Example expected diff objects:
1) User changes field name to "Foo"
{
  "id": 10,
  "name": "foo"
}
2) User changes field street on address node and category
{
  "id": 10,
  "address": {
    "id": 2,
    "street": "Changed"
  },
  "category": {
    "id": 5
  }
}
I do understand the basics of functional programming, but I just need a hint in the right direction (some meta code maybe).
Take a look at JSON Patch (RFC 6902); JSON Patch is a format for describing changes to a JSON document. For example:
[
{ "op": "replace", "path": "/baz", "value": "boo" },
{ "op": "add", "path": "/hello", "value": ["world"] },
{ "op": "remove", "path": "/foo"}
]
You generate a patch by comparing two JS objects/arrays, and then you can apply the patch to the original object (on the server side for example) to reflect changes.
You can create a patch using the fast-json-patch lib.
const obj1 = {"id":10,"name":"tkvw","locale":"nl","address":{"id":2,"street":"Somewhere","zipcode":"8965"},"subscriptions":[8,9,10],"category":{"id":6}};
const obj2 = {"id":10,"name":"cats","locale":"nl","address":{"id":2,"street":"Somewhere","zipcode":"8965"},"subscriptions":[8,9,10,11],"category":{"id":7}};
const delta = jsonpatch.compare(obj1, obj2);
console.log('delta:\n', delta);
const doc = jsonpatch.applyPatch(obj1, delta).newDocument;
console.log('patched obj1:\n', doc);
<script src="https://cdnjs.cloudflare.com/ajax/libs/fast-json-patch/2.0.6/fast-json-patch.min.js"></script>
I am trying to do BPM and SoftLayer integration using a Java REST client. From my initial analysis (as well as help from Stack Overflow), I found:
Step 1) we need to get the getItemPrices list to have all IDs for the next request.
https://username:api_key@api.softlayer.com/rest/v3/SoftLayer_Product_Package/2/getItemPrices?objectMask=mask[id,item[keyName,description],pricingLocationGroup[locations[id, name, longName]]]
and then do the verify and place order POST calls using the respective APIs.
I am stuck on Step 1), as filtering here seems to be a bit tricky. I am getting a JSON response of over 20000 lines.
I want to show similar data (just like the SL Performance Storage UI) on my custom BPM UI (one drop-down to select the type of storage, a 2nd to show location, a 3rd to show size and a 4th for IOPS), where the user can select the items and place a request.
Here I found that SL uses something similar to this for populating the drop-downs:
https://control.softlayer.com/sales/productpackage/getregions?_dc=1456386930027&categoryCode=performance_storage_iscsi&packageId=222&page=1&start=0&limit=25
Can't we have an implementation where we use control.softlayer.com just like SL does, instead of api.softlayer.com? In that case we could use similar logic to display the data in the UI.
Thanks
Anupam
Here are the steps for performance storage using the API. For endurance storage the steps are similar; you just need to review the value for categoryCode and modify it if needed.
You can get the locations using this method:
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getRegions
You just need to know the package of the storage, e.g.
GET https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/222/getRegions
Then, you can get the storage sizes. For that you can use the SoftLayer_Product_Package::getItems or SoftLayer_Product_Package::getItemPrices methods and a filter, e.g.
GET https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/222/getItemPrices?objectFilter={"itemPrices": {"categories": {"categoryCode": {"operation": "performance_storage_space"}},"locationGroupId": { "operation": "is null"}}}
Note: We are filtering the data to get the prices whose category code is "performance_storage_space", and we want the standard prices (locationGroupId = null).
Then, you can get the IOPS. You can use the same approach as above, but there is a dependency between the IOPS and the storage space, e.g.
GET https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/222/getItemPrices?objectFilter={"itemPrices": { "attributes": { "value": { "operation": 20 } }, "categories": { "categoryCode": { "operation": "performance_storage_iops" } }, "locationGroupId": { "operation": "is null" } } }
Note: In the example we assume that the selected storage space was "20". The prices for IOPS have a record called attributes; this record tells us the valid storage spaces for that IOPS value. Then we have other filters to get only the IOPS prices (categoryCode = performance_storage_iops), and we want only the standard prices (locationGroupId = null).
For selecting the storage type, I do not think there is a dedicated method; the only way I see is to call the SoftLayer_Product_Package::getAllObjects method and filter the data to get the packages for endurance, performance and portable storage.
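As a rough illustration of that last point, here is a sketch using the Python client. The name-based filtering below is only an assumption about how the storage packages are labelled, not a documented convention:

import SoftLayer

client = SoftLayer.Client()
packageService = client['SoftLayer_Product_Package']

# Fetch only id and name for every package, then filter client-side.
packages = packageService.getAllObjects(mask="mask[id,name]")
storagePackages = [p for p in packages if 'Storage' in p.get('name', '')]
for p in storagePackages:
    print("%s - %s" % (p['id'], p['name']))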
Just in case, here is an example using SoftLayer's Python client to place an order.
"""
Order a block storage (performance ISCSI).
Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/verifyOrder
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/placeOrder
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getItems
http://sldn.softlayer.com/reference/services/SoftLayer_Location
http://sldn.softlayer.com/reference/services/SoftLayer_Location/getDatacenters
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Storage_Iscsi_OS_Type
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Storage_Iscsi_OS_Type/getAllObjects
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Location
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order_Network_Storage_Enterprise
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Product_Item_Price
http://sldn.softlayer.com/blog/cmporter/Location-based-Pricing-and-You
http://sldn.softlayer.com/blog/bpotter/Going-Further-SoftLayer-API-Python-Client-Part-3
http://sldn.softlayer.com/article/Object-Filters
http://sldn.softlayer.com/article/Python
http://sldn.softlayer.com/article/Object-Masks
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
"""
import SoftLayer
import json
# Values "AMS01", "AMS03", "CHE01", "DAL05", "DAL06" "FRA02", "HKG02", "LON02", etc.
location = "AMS01"
# Values "20", "40", "80", "100", etc.
storageSize = "40"
# Values between "100" and "6000" by intervals of 100.
iops = "100"
# Values "Hyper-V", "Linux", "VMWare", "Windows 2008+", "Windows GPT", "Windows 2003", "Xen"
os = "Linux"
PACKAGE_ID = 222
client = SoftLayer.Client()
productOrderService = client['SoftLayer_Product_Order']
packageService = client['SoftLayer_Product_Package']
locationService = client['SoftLayer_Location']
osService = client['SoftLayer_Network_Storage_Iscsi_OS_Type']
objectFilterDatacenter = {"name": {"operation": location.lower()}}
objectFilterStorageNfs = {"items": {"categories": {"categoryCode": {"operation": "performance_storage_iscsi"}}}}
objectFilterOsType = {"name": {"operation": os}}
try:
    # Getting the datacenter.
    datacenter = locationService.getDatacenters(filter=objectFilterDatacenter)

    # Getting the performance storage NFS prices.
    itemsStorageNfs = packageService.getItems(id=PACKAGE_ID, filter=objectFilterStorageNfs)

    # Getting the storage space prices
    objectFilter = {
        "itemPrices": {
            "item": {
                "capacity": {
                    "operation": storageSize
                }
            },
            "categories": {
                "categoryCode": {
                    "operation": "performance_storage_space"
                }
            },
            "locationGroupId": {
                "operation": "is null"
            }
        }
    }
    pricesStorageSpace = packageService.getItemPrices(id=PACKAGE_ID, filter=objectFilter)

    # If the prices list is empty that means that the storage space value is invalid.
    if len(pricesStorageSpace) == 0:
        raise ValueError('The storage space value: ' + storageSize + ' GB, is not valid.')

    # Getting the IOPS prices
    objectFilter = {
        "itemPrices": {
            "item": {
                "capacity": {
                    "operation": iops
                }
            },
            "attributes": {
                "value": {
                    "operation": storageSize
                }
            },
            "categories": {
                "categoryCode": {
                    "operation": "performance_storage_iops"
                }
            },
            "locationGroupId": {
                "operation": "is null"
            }
        }
    }
    pricesIops = packageService.getItemPrices(id=PACKAGE_ID, filter=objectFilter)

    # If the prices list is empty that means that the IOPS value is invalid for the configured storage space.
    if len(pricesIops) == 0:
        raise ValueError('The IOPS value: ' + iops + ', is not valid for the storage space: ' + storageSize + ' GB.')

    # Getting the OS.
    os = osService.getAllObjects(filter=objectFilterOsType)

    # Building the order template.
    orderData = {
        "complexType": "SoftLayer_Container_Product_Order_Network_PerformanceStorage_Iscsi",
        "packageId": PACKAGE_ID,
        "location": datacenter[0]['id'],
        "quantity": 1,
        "prices": [
            {
                "id": itemsStorageNfs[0]['prices'][0]['id']
            },
            {
                "id": pricesStorageSpace[0]['id']
            },
            {
                "id": pricesIops[0]['id']
            }
        ],
        "osFormatType": os[0]
    }

    # verifyOrder() will check your order for errors. Replace this with a call to
    # placeOrder() when you're ready to order. Both calls return a receipt object
    # that you can use for your records.
    response = productOrderService.verifyOrder(orderData)
    print(json.dumps(response, sort_keys=True, indent=2, separators=(',', ': ')))

except SoftLayer.SoftLayerAPIError as e:
    print("Unable to place the order. faultCode=%s, faultString=%s" % (e.faultCode, e.faultString))
A question about real-time notifications.
Post: https://xxxiot.cumulocity.com/cep/realtime
Body:
[
  {
    "channel": "/meta/handshake",
    "version": "1.0",
    "mininumVersion": "1.0beta",
    "supportedConnectionTypes": ["long-polling", "callback-polling"],
    "advice": {"timeout": 120000, "interval": 30000}
  }
]
My Response:
[
  {
    "minimumVersion": "1.0",
    "supportedConnectionTypes": [
      "smartrest-long-polling",
      "long-polling"
    ],
    "successful": true,
    "channel": "/meta/handshake",
    "ext": {
      "ack": true
    },
    "clientId": "5o0ghvle7yy4ix41on423v6k3j87",
    "version": "1.0"
  }
]
After receiving the clientId, I ran the following command:
Post: https://xxxiot.cumulocity.com/cep/realtime
Body:
[
  {
    "channel": "/meta/subscribe",
    "clientId": "5o0ghvle7yy4ix41on423v6k3j87",
    "subscription": "/alarms/overHeatAlarms"
  }
]
Response:
[
  {
    "error": "403:denied_by_security_policy:create_denied",
    "subscription": "/alarms/overHeatAlarms",
    "successful": false,
    "channel": "/meta/subscribe"
  }
]
Where is the problem? I'm trying to subscribe to "overHeatAlarms"!
Could it be that it does not exist? Can I read the existing information?
Thanks,
Alim
Yes, your suspicion is correct. There are basically two options for you:
Subscribe to all alarms or alarms from a particular device: Use "/cep/realtime" and the channel "/alarms/*" or "/alarms/[device ID]" respectively.
Create a processing rule that filters out overheat alarms and subscribe to that rule: Use "/cep/notifications" and channel "/[module name]/[statement name]".
The module name is what you enter as name when you click "New module". The statement name is what you add to the statement, e.g.
#Name('overHeatAlarms')
select * from AlarmsCreated where [your condition for overheat alarms]
(If you don't put a name there, they will be named statement_1, statement_2, ....)
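For option 1, a minimal sketch of the subscribe message over /cep/realtime (URL, credentials and clientId below are placeholders; reuse the clientId returned by your handshake):

import requests

body = [{
    "channel": "/meta/subscribe",
    "clientId": "5o0ghvle7yy4ix41on423v6k3j87",  # clientId from the handshake response
    "subscription": "/alarms/*"                  # or "/alarms/<device ID>" for one device
}]
resp = requests.post("https://xxxiot.cumulocity.com/cep/realtime",
                     json=body,
                     auth=("tenant/user", "password"))  # placeholder credentials
print(resp.json())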
To get notifications from Java, have a look at an example of getting notifications for changes in devices. In the subscribe() method, you pass "*" or the device ID. To get the notification, pass an implementation of SubscriptionListener, in particular the onNotification method. You can modify "channelPrefix" to "/alarms/" or "/measurements/" to get other notifications.
Thanks, André.
I've tested the following code snippet. It works, but it is not the best solution :-)
MeasurementApi measurementApi = getMeasurementApi();
MeasurementFilter measurementFilter = new MeasurementFilter();

while (true) {
    Calendar cal = Calendar.getInstance();
    Date toDate = cal.getTime();
    cal.add(Calendar.SECOND, -25);
    Date fromDate = cal.getTime();

    measurementFilter.byDate(fromDate, toDate);
    measurementFilter.byFragmentType(TemperatureMeasurement.class);
    measurementFilter.bySource(new GId(DEVICE_SIMULATOR));

    MeasurementCollection mc = measurementApi.getMeasurementsByFilter(measurementFilter);
    MeasurementCollectionRepresentation measurements = mc.get();
    for (; measurements != null; measurements = mc.getNextPage(measurements)) {
        for (MeasurementRepresentation measurement : measurements.getMeasurements()) {
            TemperatureMeasurement temperatureSensor = measurement.get(TemperatureMeasurement.class);
            System.out.println(measurement.getSource().getId() + " "
                    + measurement.getTime() + " " + temperatureSensor.getTemperature());
        }
    }
}