Is there any way to access private datasets through an API call?

I'm running Mirth 3.6.1 with CKAN 2.8, and being a newbie to this I've run into an issue: is there a way to access resources in private datasets in CKAN through API requests? I can't seem to do it.
I have an organization with a public dataset, and I can go through Mirth via the API router to the correct Mirth channel and get the data from CKAN with an API request, as normal. But if I make the dataset private, it all falls apart, even though I use the correct API key. The key doesn't seem to make a difference: I get "success": true regardless of whether I include the API key or not (or whether it's even the correct key).
The API key included in the request is that of the sysadmin.
When I directly access the CKAN resource through a CKAN endpoint and the dataset is public, I get this response:
{
  "help": "https://URL/api/3/action/help_show?name=resource_search", (URL instead of real url)
  "success": true,
  "result": {
    "count": 1,
    "results": [
      {
        "mimetype": null,
        "cache_url": null,
        "state": "active",
        "hash": "REDACTED__", (sensitive data)
        "description": "",
        "format": "",
        "url": "https://URL/datastore/dump/0696c0a1-b249-4fd5-ba80-caf7046a650b", (URL instead of real url)
        "datastore_active": true,
        "created": "2019-03-19T00:30:04.313593",
        "cache_last_updated": null,
        "package_id": "11211598-34f8-4d67-ab34-b7fd590ae08d",
        "mimetype_inner": null,
        "last_modified": null,
        "position": 1,
        "revision_id": "17b85d36-4ec1-4645-b9b1-dcfe310a54e6",
        "size": null,
        "url_type": "datastore",
        "id": "0696c0a1-b249-4fd5-ba80-caf7046a650b",
        "resource_type": null,
        "name": "REDACTED" (sensitive data)
      }
    ]
  }
}
When the dataset is private, regardless of whether I include the API key or not (or whether it's even the real key), I get this response:
{
  "help": "https://URL/api/3/action/help_show?name=resource_search",
  "success": true,
  "result": {
    "count": 0,
    "results": []
  }
}
So, how can I do a resource_search for a resource in a private dataset?
Thanks in advance.

Yes, you can do that by passing include_private: True when searching for the dataset via package_search.
Please see the link below:
https://docs.ckan.org/en/2.8/api/index.html#ckan.logic.action.get.package_search
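For illustration (not part of the answer above): since include_private is documented on package_search, a minimal Python sketch with the requests library would search for the package and then read the resources it contains, instead of calling resource_search. The base URL, dataset name, and key below are placeholders.

import requests

CKAN_URL = "https://URL"        # your CKAN instance (placeholder, as in the question)
API_KEY = "SYSADMIN_API_KEY"    # must belong to a user allowed to see the dataset

# CKAN expects the API key in the Authorization header; without a key that is
# authorized for the dataset, private packages are silently filtered out and
# you get "success": true with an empty result, as described above.
resp = requests.get(
    f"{CKAN_URL}/api/3/action/package_search",
    params={"q": "name:my-private-dataset", "include_private": "true"},
    headers={"Authorization": API_KEY},
)
resp.raise_for_status()

# Each package carries its resources, so pick the one you need from there.
for pkg in resp.json()["result"]["results"]:
    for res in pkg.get("resources", []):
        print(res["id"], res["url"])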

Related

Ruckus SmartZone API

I am having issues when trying to create a Zone using the API.
I can create the zone with the basic info, but as soon as I want to add another property (specifically "location") I get an error.
This is the payload I use for the POST:
def id_prov = {
  "domainId": "$DomainId",
  "name": "$ZoneName",
  "login": {
    "apLoginName": "xxxxx",
    "apLoginPassword": "xxxxx"
  },
  "description": "$jira_summ",
  "version": "3.5.1.0.1010",
  "countryCode": "ZA"
  "location": "$CalledStationName_val",
}
The API creates everything fine until I either include the "location" property in the original POST or try a PUT or PATCH afterwards.
Result value:
{"message":["object instance has properties which are not allowed by the schema: [\"location\"]"],"errorCode":101,"errorType":"Bad HTTP request"}
Anyone come across this or have any ideas on how to get this working?
Thanks
A comma is required after "countryCode": "ZA". The POST payload should look like this:
def id_prov = {
  "domainId": "$DomainId",
  "name": "$ZoneName",
  "login": {
    "apLoginName": "xxxxx",
    "apLoginPassword": "xxxxx"
  },
  "description": "$jira_summ",
  "version": "3.5.1.0.1010",
  "countryCode": "ZA",
  "location": "$CalledStationName_val",
}
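Not part of the original answer, but one way to avoid this class of error altogether is to build the body as a data structure and let a serializer emit the JSON, so commas can never be missing (or trailing). A minimal Python sketch, with placeholder values standing in for the Mirth/Groovy variables:

import json

zone = {
    "domainId": "DOMAIN_ID",          # placeholders for $DomainId, $ZoneName, etc.
    "name": "ZONE_NAME",
    "login": {
        "apLoginName": "xxxxx",
        "apLoginPassword": "xxxxx",
    },
    "description": "JIRA_SUMMARY",
    "version": "3.5.1.0.1010",
    "countryCode": "ZA",
    "location": "CALLED_STATION_NAME",
}

# json.dumps always produces strictly valid JSON (correct commas, no trailing
# comma), which is what the SmartZone schema validation expects.
payload = json.dumps(zone)
print(payload)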

Creating an event through the ST Developer Portal's API Console

I'm trying to create an event using the API console and keep getting errors. Any ideas why?
I've been using different versions of the example value:
{
  "name": "string",
  "description": "string",
  "status": "string",
  "event_id": "string",
  "start_epoch": 0,
  "end_epoch": 0,
  "industry": "string",
  "archived": true,
  "deleted": true,
  "legacy_id": 0,
  "is_public": true
}
I get the following back. Any thoughts?
{
  "code": "BadRequestError",
  "message": "[\"Has time can't be blank\",\"true is not included in the list\"]"
}
You will need to fetch the user/team information first.
Once you have your OAuth token and have set it in the Authorization header, make a call to https://developer-portal.socialtables.com/api-console#!/Authentication/get_4_0_oauth_token
This will give you back the user and team object needed for the subsequent calls that create events.
Once you have the team_id, you can create events.
You can POST to /4.0/events
Swagger doc: https://developer-portal.socialtables.com/api-console#!/Events/post_4_0_events
Example POST payload:
{
  "name": "NAME",
  "description": "DESCRIPTION",
  "status": "new",
  "start_epoch": TIME_IN_MS,
  "end_epoch": TIME_IN_MS,
  "industry": "INDUSTRY_TYPE",
  "has_time": 1 // 0 = all day event, 1 = from/to a specific time in day
}
- This will return the event ID under data.event.id in the response from the above POST
- You can then link the user to:
https://home.socialtables.com/events/EVENT_ID
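Pulling those steps together (not from the original answer): a rough Python sketch of the flow using requests. The base URL and the token-info path are assumptions inferred from the API console links, and the exact Authorization header format may differ, so verify both against the Swagger doc.

import requests

BASE_URL = "https://api.socialtables.com"   # assumption; use the host shown in the Swagger doc
TOKEN = "YOUR_OAUTH_TOKEN"
HEADERS = {"Authorization": TOKEN}          # or "Bearer <token>", depending on the API

# 1. Fetch the user/team info tied to the token (path inferred from the
#    get_4_0_oauth_token console entry; the exact response shape isn't shown above).
who = requests.get(f"{BASE_URL}/4.0/oauth/token", headers=HEADERS)
print(who.json())

# 2. Create the event; has_time is required (0 = all-day, 1 = specific time),
#    which is what the "Has time can't be blank" error was complaining about.
event_body = {
    "name": "NAME",
    "description": "DESCRIPTION",
    "status": "new",
    "start_epoch": 1500000000000,   # time in ms
    "end_epoch": 1500003600000,     # time in ms
    "industry": "INDUSTRY_TYPE",
    "has_time": 1,
}
resp = requests.post(f"{BASE_URL}/4.0/events", headers=HEADERS, json=event_body)

# 3. The new event ID comes back under data.event.id.
event_id = resp.json()["data"]["event"]["id"]
print(f"https://home.socialtables.com/events/{event_id}")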

How do I create an event using the SocialTables API?

I'm trying to use the /4.0/legacyvm3/teams/{team}/events endpoint to create an event. I'm running into some trouble with spaces.
I used the /4.0/legacyvm3/teams/{team}/venues endpoint to get a list of venues. I chose one to include in the spaces section and posted this:
{
  "name": "Event via API Test 04",
  "category": "athletic event",
  "public": true,
  "attendee_management": true,
  "start_time": "2017-04-05T16:13:54.217Z",
  "end_time": "2017-04-05T16:13:54.217Z",
  "uses_metric": false,
  "venue_mapper_version": 0,
  "spaces": [
    {
      "venue_id": 128379,
      "name": "Snurrrggggg"
    }
  ]
}
The endpoint returns a 400 code and this error:
{
  "code": 400,
  "message": "Cannot read property 'toLowerCase' of undefined"
}
I tried including the wizard section, but each time it would return this error:
{
  "message": "Access Denied to this feature"
}
After some experimentation, this body succeeded:
{
  "name": "Event via API Test 03",
  "category": "athletic event",
  "public": true,
  "attendee_management": true,
  "start_time": "2017-04-05T16:13:54.217Z",
  "end_time": "2017-04-05T16:13:54.217Z",
  "uses_metric": false,
  "venue_mapper_version": 0,
  "spaces": [
    {
      "name": "Fake News Room"
    }
  ]
}
But the application itself would not display the diagram, and the newly created room did not show up in my list of venues. Perhaps it did not assign permissions to it?
In any case, I don't actually want to create a new venue/space. I want to pass in an existing venue/space. How do I do that?
The short answer is that to create a working diagram in 4.0, you will need to POST some data to the /4.0/diagrams endpoint.
The room you create doesn't map to the same concept as venues. When you create an event as you did, it creates a new space entity. The spaces endpoints can return information on those.
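The answer doesn't spell out what the /4.0/diagrams payload looks like, so the sketch below (not part of the answer) only shows the shape of the call; the body is left as a placeholder to be filled in from the API console, and presumably needs to reference the event/space the diagram belongs to.

import requests

BASE_URL = "https://api.socialtables.com"   # assumption; use the host from the API console
TOKEN = "YOUR_OAUTH_TOKEN"

diagram_body = {
    # Fill in from the /4.0/diagrams schema in the API console; the fields are
    # not documented in the answer above, so none are assumed here.
}
resp = requests.post(
    f"{BASE_URL}/4.0/diagrams",
    headers={"Authorization": TOKEN},
    json=diagram_body,
)
print(resp.status_code, resp.json())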

Dropbox API V2 list_file_members/batch empty results

I'm currently trying to work with the Dropbox list_file_members API endpoint, as it appears to me to be the only place to find out who owns a file (see the following example result, taken from the documentation page):
{
  "users": [
    {
      "access_type": {
        ".tag": "owner"
      },
      "user": {
        "account_id": "dbid:AAH4f99T0taONIb-OurWxbNQ6ywGRopQngc",
        "same_team": true,
        "team_member_id": "dbmid:abcd1234"
      },
      "permissions": [],
      "is_inherited": false
    }
  ],
  "groups": [...]
  ...
}
However, when I call the API on a single file, I get the following:
{
  "users": [],
  "groups": [
    {
      "access_type": {
        ".tag": "editor"
      },
      "permissions": [],
      "is_inherited": true,
      "group": {
        "group_name": "Everyone at TEAM_NAME_HERE",
        "group_id": "g:GROUP_ID_HERE",
        "member_count": 6,
        "group_management_type": {
          ".tag": "company_managed"
        },
        "group_type": {
          ".tag": "team"
        },
        "is_owner": false,
        "same_team": true
      }
    }
  ],
  "invitees": []
}
This result contains no owner information, so I'm assuming this is because everyone has the same access level?
The problem worsens when I try to call files in batches using the sharing/list_file_members/batch endpoint; I get the following result:
[
  {
    "file": "id:THIS_IS_MY_FILE_ID",
    "result": {
      ".tag": "result",
      "members": {
        "users": [],
        "groups": [],
        "invitees": []
      },
      "member_count": 0
    }
  }
]
Obviously this is even less helpful. The same happens whether I access the API via my own PHP code or via the API Explorer. Could anyone tell me where I'm going wrong, and why I get no results for users (and even groups) when calling in batches?
The /2/sharing/list_file_members endpoint is documented as:
Use to obtain the members who have been invited to a file, both inherited and uninherited members.
The /2/sharing/list_file_members/batch endpoint is documented as:
Get members of multiple files at once. The arguments to this route are more limited, and the limit on query result size per file is more strict. To customize the results more, use the individual file endpoint.
Inherited users are not included in the result, and permissions are not returned for this endpoint.
It sounds like the file for your example is in a team folder, and so the group listed for your non-batch example is the team group, i.e., an inherited group. The documentation indicates that this group isn't expected when using the batch endpoint.
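To illustrate the difference (not part of the original answer): a minimal Python sketch with requests against the non-batch endpoint, which does return inherited members such as the team group. The file ID and token are placeholders; include_inherited is the documented flag on this endpoint (it defaults to true and is shown explicitly here).

import requests

TOKEN = "YOUR_DROPBOX_ACCESS_TOKEN"   # placeholder

# /2/sharing/list_file_members includes inherited members (e.g. the team group
# on a team folder); the /batch variant deliberately omits them, which is why
# the batch result above comes back empty.
resp = requests.post(
    "https://api.dropboxapi.com/2/sharing/list_file_members",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={"file": "id:THIS_IS_MY_FILE_ID", "include_inherited": True},
)
resp.raise_for_status()
members = resp.json()

owners = [u for u in members["users"] if u["access_type"][".tag"] == "owner"]
print(owners or "No explicit owner entry; ownership may sit on the containing team folder")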

Error loading file stored in Google Cloud Storage to Big Query

I have been trying to create a job to load a compressed JSON file from Google Cloud Storage into a Google BigQuery table. I have read/write access in both Google Cloud Storage and Google BigQuery. Also, the uploaded file belongs to the same project as the BigQuery table.
The problem happens when I send a POST request to the resource behind this URL: https://www.googleapis.com/upload/bigquery/v2/projects/NUMERIC_ID/jobs. The body of the request is as follows:
{
  "kind": "bigquery#job",
  "projectId": NUMERIC_ID,
  "configuration": {
    "load": {
      "sourceUris": ["gs://bucket_name/document.json.gz"],
      "schema": {
        "fields": [
          {
            "name": "id",
            "type": "INTEGER"
          },
          {
            "name": "date",
            "type": "TIMESTAMP"
          },
          {
            "name": "user_agent",
            "type": "STRING"
          },
          {
            "name": "queried_key",
            "type": "STRING"
          },
          {
            "name": "user_country",
            "type": "STRING"
          },
          {
            "name": "duration",
            "type": "INTEGER"
          },
          {
            "name": "target",
            "type": "STRING"
          }
        ]
      },
      "destinationTable": {
        "datasetId": "DATASET_NAME",
        "projectId": NUMERIC_ID,
        "tableId": "TABLE_ID"
      }
    }
  }
}
However, the error doesn't make any sense to me; it is shown below:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
      }
    ],
    "code": 400,
    "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
  }
}
I know the problem lies neither in the project ID nor in the access token placed in the Authorization header, because I have successfully created an empty table before. I also set the Content-Type header to application/json, which I don't think is the issue here, because the body content is JSON-encoded.
Thanks in advance
Your HTTP request is malformed -- BigQuery doesn't recognize this as a load job at all.
You need to look into the POST request, and check the body you send.
You need to ensure that all of the above (which seems correct) is the body of the POST call. The above JSON should be on a single line, and if you are manually creating the multipart message, make sure there is an extra newline between the headers and the body of each MIME part.
If you are using some sort of library, make sure the body is not expected in some other form, like resource, content, or body. I've seen libraries that use these differently.
Try out the BigQuery API Explorer: https://developers.google.com/bigquery/docs/reference/v2/jobs/insert and ensure your request body matches the one it generates.
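One more thing worth checking (not part of the original answer): since the data already sits in Cloud Storage and is referenced via sourceUris, the job metadata can be POSTed as plain JSON to the regular jobs endpoint (without /upload/), which sidesteps multipart formatting entirely. A minimal Python sketch with requests; the project, dataset, table, and token are placeholders from the question, and sourceFormat is added here because the file is newline-delimited JSON (BigQuery defaults to CSV).

import json
import requests

PROJECT_ID = "NUMERIC_ID"             # placeholder, as in the question
ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"

job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://bucket_name/document.json.gz"],
            "sourceFormat": "NEWLINE_DELIMITED_JSON",
            "schema": {"fields": [
                {"name": "id", "type": "INTEGER"},
                {"name": "date", "type": "TIMESTAMP"},
                {"name": "user_agent", "type": "STRING"},
                {"name": "queried_key", "type": "STRING"},
                {"name": "user_country", "type": "STRING"},
                {"name": "duration", "type": "INTEGER"},
                {"name": "target", "type": "STRING"},
            ]},
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": "DATASET_NAME",
                "tableId": "TABLE_ID",
            },
        }
    }
}

# Plain jobs endpoint (no /upload/): the body is ordinary JSON, so the
# "there were 0 job-specific configuration objects" parsing error cannot be
# caused by multipart boundaries here.
resp = requests.post(
    f"https://www.googleapis.com/bigquery/v2/projects/{PROJECT_ID}/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
    data=json.dumps(job),
)
print(resp.status_code, resp.json())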