Cloudflare GraphQL Analytics API: zone does not have access to the path

When I tried running this query:
query ($zoneID: String!) {
  viewer {
    zones(filter: {zoneTag: $zoneID}) {
      httpRequestsAdaptiveGroups(filter: {date_gt: "2022-05-29"}, limit: 100) {
        count
        dimensions {
          requestSource
        }
        sum {
          visits
          edgeResponseBytes
        }
      }
    }
  }
}
it gave me this error:
{
  "data": null,
  "errors": [
    {
      "message": "zone '0ab45c20ea56c46d2db5999b19221234' does not have access to the path",
      "path": [
        "viewer",
        "zones",
        "0",
        "httpRequestsAdaptiveGroups"
      ],
      "extensions": {
        "code": "authz",
        "timestamp": "2022-06-29T06:14:55.82422442Z"
      }
    }
  ]
}
How do I get access to httpRequestsAdaptiveGroups? Do I have to upgrade the plan? The zone is currently on the free tier.
What I've tried so far is giving read permission on all zones, and the error still happens.
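For reference, a minimal sketch (Python) of how the same query can be sent to the GraphQL Analytics endpoint with an API token, independent of the dashboard; the zone ID and the CLOUDFLARE_API_TOKEN environment variable are placeholders, and the token is assumed to have Analytics Read access on the zone:

import os
import requests

# Minimal sketch: POST the query above to Cloudflare's GraphQL Analytics endpoint.
# Placeholders: CLOUDFLARE_API_TOKEN environment variable and the zone ID.
query = """
query ($zoneID: String!) {
  viewer {
    zones(filter: {zoneTag: $zoneID}) {
      httpRequestsAdaptiveGroups(filter: {date_gt: "2022-05-29"}, limit: 100) {
        count
        dimensions { requestSource }
        sum { visits edgeResponseBytes }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.cloudflare.com/client/v4/graphql",
    headers={"Authorization": f"Bearer {os.environ['CLOUDFLARE_API_TOKEN']}"},
    json={"query": query, "variables": {"zoneID": "0ab45c20ea56c46d2db5999b19221234"}},
)
print(resp.json())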

Related

PowerBI: Create a dataset from a PostgreSQL datasource using the REST API

I created a datasource behind a gateway using the REST API, and the datasource was created successfully. Now I want to add a table (create a dataset) from that datasource to use in a report. However, I am getting the error below from the API.
{
  "error": {
    "code": "FailedToDeserializeDatasetError",
    "pbi.error": {
      "code": "FailedToDeserializeDatasetError",
      "parameters": {},
      "details": [],
      "exceptionCulprit": 1
    }
  }
}
API request body:
{
  "datasources": [
    {
      "gatewayId": "gateway_id",
      "datasourceId": "datasource_id",
      "datasourceType": "PostgreSql",
      "connectionDetails": "{\"server\":\"server_address\",\"database\":\"database_name\"}",
      "credentialType": "Basic",
      "credentialDetails": {
        "privacyLevel": "None",
        "useEndUserOAuth2Credentials": false
      }
    }
  ],
  "defaultMode": "Push",
  "name": "API DS 1",
  "tables": [
    {
      "name": "currency_rates",
      "description": "DS Table 1 Demo API",
      "columns": [
        {
          "name": "id",
          "dataType": "Int64"
        },
        {
          "name": "traded_on",
          "dataType": "DateTime"
        },
        {
          "name": "currency_code",
          "dataType": "string"
        },
        {
          "name": "close",
          "dataType": "Double"
        }
      ]
    }
  ]
}
Not sure what is wrong here.
API Reference: https://learn.microsoft.com/en-us/rest/api/power-bi/push-datasets/datasets-post-dataset-in-group
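For reference, a minimal sketch of sending a create-dataset request to the documented Post Dataset In Group endpoint from Python; the bearer token and workspace (group) ID are placeholders, and the body is trimmed to the name, defaultMode, and tables fields, with the dataType values copied from the request above:

import requests

ACCESS_TOKEN = "eyJ..."            # placeholder Azure AD bearer token
GROUP_ID = "your-workspace-guid"   # placeholder workspace (group) id

# Body limited to name, defaultMode, and tables (name + columns). The dataType
# values are copied from the request above; check the linked reference for the
# exact supported names.
body = {
    "name": "API DS 1",
    "defaultMode": "Push",
    "tables": [
        {
            "name": "currency_rates",
            "columns": [
                {"name": "id", "dataType": "Int64"},
                {"name": "traded_on", "dataType": "DateTime"},
                {"name": "currency_code", "dataType": "string"},
                {"name": "close", "dataType": "Double"},
            ],
        }
    ],
}

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
print(resp.status_code, resp.text)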

"INVALID_CURSOR_ARGUMENTS" from Github graphql API

I am using the following query:
query myOrgRepos {
  organization(login: "COMPANY_NAME") {
    repositories(first: 100) {
      edges {
        node {
          name
          defaultBranchRef {
            target {
              ... on Commit {
                history(after: "2021-01-01T23:59:00Z", before: "2023-02-06T23:59:00Z", author: { emails: "USER_EMAIL" }) {
                  edges {
                    node {
                      oid
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
But even with accurate names for the organization and email, I am persistently getting the following error for every repo.
{
  "type": "INVALID_CURSOR_ARGUMENTS",
  "path": [
    "organization",
    "repositories",
    "edges",
    20,
    "node",
    "defaultBranchRef",
    "target",
    "history"
  ],
  "locations": [
    {
      "line": 10,
      "column": 29
    }
  ],
  "message": "`2021-01-01T23:59:00Z` does not appear to be a valid cursor."
},
If I remove the after field, it works just fine. However, I kind of need it. According to all the docs I have read, both after and before take the same timestamp. I can't tell where I am going wrong here.
I have tried:
narrowing the gap between before and after
returning only a single repository
removing after (it works fine without it)
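For reference, in GitHub's GraphQL schema the after and before arguments of history appear to be pagination cursors (opaque strings taken from pageInfo.endCursor), while date filtering is done with the since and until arguments. Below is a minimal Python sketch of paginating a single repository's history that way; the token, owner, repository name, and email are placeholders:

import os
import requests

# Placeholders: GITHUB_TOKEN environment variable, owner/repository names, USER_EMAIL.
# since/until take timestamps; after takes the cursor from the previous page's
# pageInfo.endCursor (None for the first page).
QUERY = """
query ($owner: String!, $name: String!, $cursor: String) {
  repository(owner: $owner, name: $name) {
    defaultBranchRef {
      target {
        ... on Commit {
          history(first: 100, after: $cursor,
                  since: "2021-01-01T23:59:00Z", until: "2023-02-06T23:59:00Z",
                  author: { emails: ["USER_EMAIL"] }) {
            pageInfo { endCursor hasNextPage }
            edges { node { oid } }
          }
        }
      }
    }
  }
}
"""

def fetch_commit_oids(owner, name):
    # Collect commit oids page by page, feeding pageInfo.endCursor back as `after`.
    oids, cursor = [], None
    while True:
        resp = requests.post(
            "https://api.github.com/graphql",
            headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
            json={"query": QUERY, "variables": {"owner": owner, "name": name, "cursor": cursor}},
        )
        history = resp.json()["data"]["repository"]["defaultBranchRef"]["target"]["history"]
        oids += [edge["node"]["oid"] for edge in history["edges"]]
        if not history["pageInfo"]["hasNextPage"]:
            return oids
        cursor = history["pageInfo"]["endCursor"]

print(len(fetch_commit_oids("COMPANY_NAME", "REPO_NAME")))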

How to prepare Google Natural Language Processing output (JSON) for BigQuery

I'm trying to query the output of a Natural Language Processing (NLP) call in BigQuery (BQ), but I'm struggling to get the output into the right format for BQ.
I understand that BQ takes JSON files (newline delimited), but I'm just not sure (a) whether the NLP output is newline-delimited JSON and (b) whether my schema is correct.
Here's the JSON output I'm working with:
{
  "entities": [
    {
      "name": "Rowling",
      "type": "PERSON",
      "metadata": {
        "wikipedia_url": "http://en.wikipedia.org/wiki/J._K._Rowling"
      },
      "salience": 0.65751493,
      "mentions": [
        {
          "text": {
            "content": " J.",
            "beginOffset": -1
          }
        },
        {
          "text": {
            "content": "K. Rowl",
            "beginOffset": -1
          }
        }
      ]
    },
    {
      "name": "LONDON",
      "type": "LOCATION",
      "metadata": {
        "wikipedia_url": "http://en.wikipedia.org/wiki/London"
      },
      "salience": 0.14284456,
      "mentions": [
        {
          "text": {
            "content": "\ufeffLON",
            "beginOffset": -1
          }
        }
      ]
    },
    {
      "name": "Harry Potter",
      "type": "WORK_OF_ART",
      "metadata": {
        "wikipedia_url": "http://en.wikipedia.org/wiki/Harry_Potter"
      },
      "salience": 0.0726779,
      "mentions": [
        {
          "text": {
            "content": "th Harry Pot",
            "beginOffset": -1
          }
        },
        {
          "text": {
            "content": "‘Harry Pot",
            "beginOffset": -1
          }
        }
      ]
    },
    {
      "name": "Deathly Hallows",
      "type": "WORK_OF_ART",
      "metadata": {
        "wikipedia_url": "http://en.wikipedia.org/wiki/Harry_Potter_and_the_Deathly_Hallows"
      },
      "salience": 0.022565609,
      "mentions": [
        {
          "text": {
            "content": "he Deathly Hall",
            "beginOffset": -1
          }
        }
      ]
    }
  ],
  "language": "en"
}
Is there a way to send the output directly to BigQuery via the command line in Google Cloud Shell?
Any information would be greatly appreciated!
Thanks
Glad you found my Harry Potter blog post! I'd recommend storing the NL API's JSON response as a string in BigQuery and then using a user-defined function to query it. You should be able to run the following (the table is publicly viewable) to get a count of how often each entity appears in the JSON you posted:
SELECT
  COUNT(*) AS entity_count, entity
FROM
  JS(
    (SELECT entities FROM [sara-bigquery:samples.hp_udf]),
    entities,
    "[{ name: 'entity', type: 'string'}]",
    "function(row, emit) {
      try {
        x = JSON.parse(row.entities);
        entities = x['entities'];
        entities.forEach(function(data) {
          emit({ entity: data.name });
        });
      } catch (e) {}
    }"
  )
GROUP BY entity
ORDER BY entity_count DESC
Regarding "send the output directly to BigQuery via the command line in Google Cloud Shell": look at this page and search for "bq load":
https://cloud.google.com/bigquery/bq-command-line-tool
It also has examples of the JSON schema format. See also:
Schema to load json data to google big query
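If you would rather load the entities as rows instead of keeping the raw JSON as a string, here is a minimal Python sketch of flattening the NL API response into newline-delimited JSON for bq load; the file, dataset, and table names are placeholders:

import json

# Flatten the NL API response (saved to a file) into newline-delimited JSON,
# one entity per line, which is the format `bq load` expects for JSON input.
with open("nlp_output.json") as src, open("entities.ndjson", "w") as dst:
    response = json.load(src)
    for entity in response["entities"]:
        row = {
            "name": entity["name"],
            "type": entity["type"],
            "salience": entity["salience"],
            "wikipedia_url": entity.get("metadata", {}).get("wikipedia_url"),
            "mention_count": len(entity.get("mentions", [])),
        }
        dst.write(json.dumps(row) + "\n")

# Then, from Cloud Shell (dataset and table are placeholders):
#   bq load --source_format=NEWLINE_DELIMITED_JSON \
#       mydataset.entities entities.ndjson \
#       name:STRING,type:STRING,salience:FLOAT,wikipedia_url:STRING,mention_count:INTEGER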

Max Response Limitation in OTA_AirLowFareSearchRQ

I'm working with the Sabre REST API. I have an issue with OTA_AirLowFareSearchRQ: I try to limit the number of responses using MaxResponses in the JSON structure, but it seems I'm doing something wrong, because the response still gives me 95 itineraries in the cert environment (https://api.cert.sabre.com/).
The JSON request that I use is:
{
  "OTA_AirLowFareSearchRQ": {
    "Target": "Production",
    "PrimaryLangID": "ES",
    "MaxResponses": "15",
    "POS": {
      "Source": [{
        "RequestorID": {
          "Type": "1",
          "ID": "1",
          "CompanyName": {}
        }
      }]
    },
    "OriginDestinationInformation": [{
      "RPH": "1",
      "DepartureDateTime": "2016-04-01T11:00:00",
      "OriginLocation": {
        "LocationCode": "BOG"
      },
      "DestinationLocation": {
        "LocationCode": "CTG"
      },
      "TPA_Extensions": {
        "SegmentType": {
          "Code": "O"
        }
      }
    }],
    "TravelPreferences": {
      "ValidInterlineTicket": true,
      "CabinPref": [{
        "Cabin": "Y",
        "PreferLevel": "Preferred"
      }],
      "TPA_Extensions": {
        "TripType": {
          "Value": "Return"
        },
        "LongConnectTime": {
          "Min": 780,
          "Max": 1200,
          "Enable": true
        },
        "ExcludeCallDirectCarriers": {
          "Enabled": true
        }
      }
    },
    "TravelerInfoSummary": {
      "SeatsRequested": [1],
      "AirTravelerAvail": [{
        "PassengerTypeQuantity": [{
          "Code": "ADT",
          "Quantity": 1
        }]
      }]
    },
    "TPA_Extensions": {
      "IntelliSellTransaction": {
        "RequestType": {
          "Name": "10ITINS"
        }
      }
    }
  }
}
MaxResponses seems to be something internal that is part of the schema but does not affect the response.
What you can modify is the IntelliSellTransaction. You used 10ITINS, but the values that will work are 50ITINS, 100ITINS, and 200ITINS.
EDIT2 (as Panagiotis Kanavos said):
RequestType values depend on the business agreement between your company and Sabre. You can't use 100 or 200 without modifying the agreement.
"TPA_Extensions": {
  "IntelliSellTransaction": {
    "RequestType": {
      "Name": "50ITINS"
    }
  }
}
EDIT1:
I have searched a bit more and found:
OTA_AirLowFareSearchRQ.TravelPreferences.TPA_Extensions.NumTrips
Required: false
Type: object
Description: This element allows a user to specify the number of itineraries returned.
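For illustration, a sketch of where that element would go in the request body above, treating the body as a Python dict; the exact shape of NumTrips, including the "Number" attribute used here, is an assumption and should be verified against the Sabre OTA_AirLowFareSearchRQ schema:

import json

# Load the request body shown above and add NumTrips at the documented path.
# Assumption: NumTrips carries a "Number" attribute; verify against the Sabre schema.
with open("bfm_request.json") as f:   # placeholder file containing the body above
    request = json.load(f)

request["OTA_AirLowFareSearchRQ"]["TravelPreferences"]["TPA_Extensions"]["NumTrips"] = {
    "Number": 10
}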

Is it possible to RECEIVE job applications through the LinkedIn API?

I see LinkedIn documentation on how to post jobs, apply for jobs, and search jobs. But I'm curious whether it's possible to post a job on LinkedIn normally and then receive an API notification when people apply to it. In other words, I want to integrate with the application rather than the job posting. Is this possible?
Yes, this is possible: you want to integrate the job applications you receive with your own Applicant Tracking System. Check the processing job applications developer documentation for more details.
Yes. It looks like the documentation has moved to Microsoft: https://learn.microsoft.com/en-us/linkedin/talent/apply-connect/receive-applications
Essentially, you create a webhook and register it with the job post. LinkedIn then POSTs the job application to that hook (a receiver sketch follows the sample payload below), with headers like:
  "Content-Type": "application/json",
  "X-LI-Signature": "d3756e445a8065c0f38c2182c502f8229800eb2c6a9f3b4a1fdf152af867e6fc",
  "Content-Length": "107",
  "Connection": "Keep-Alive",
  "Accept-Encoding": "gzip,deflate"
and a body like:
{
  "type": "JOB_APPLICATION_EXPORT",
  "externalJobId": "jobIdOnAtsPartner",
  "jobApplicationId": "urn:li:jobApplication:12345678",
  "jobApplicant": "urn:li:person:abc123",
  "appliedAt": 1602137400011,
  "applicantSkills": [
    {
      "skillUrn": "urn:li:skill:12345",
      "skillName": "Java",
      "jobMatched": true,
      "assessmentVerified": true
    }
  ],
  "questionResponses": {
    "resumeQuestionResponses": {
      "resumeQuestionAnswer": {
        "mediaUrl": "https://www.linkedin.com/ambry/?x-li-ambry-ep=AQFBV...",
        "mediaUrn": "urn:li:media:AgAAA..."
      }
    },
    "contactInformationQuestionResponses": {
      "firstNameAnswer": {
        "value": "First"
      },
      "lastNameAnswer": {
        "value": "Last"
      },
      "emailAnswer": {
        "value": "applicant@linkedin.com"
      }
    },
    "voluntarySelfIdentificationQuestionResponses": {
      "disabilityAnswer": "NO",
      "genderAnswer": "MALE",
      "raceAnswer": "ASIAN",
      "veteranStatusAnswer": "NOT_PROTECTED_VETERAN"
    },
    "educationQuestionResponses": {
      "educationExperienceQuestionSetResponses": [
        {
          "school": {
            "value": "UC Berkeley"
          }
        }
      ]
    },
    "workQuestionResponses": {
      "workExperienceQuestionSetResponses": [
        {
          "company": {
            "value": "LinkedIn"
          },
          "title": {
            "value": "Software Engineer"
          }
        }
      ]
    },
    "additionalQuestionResponses": {
      "customQuestionSetResponses": [
        {
          "customQuestionResponses": [
            {
              "questionIdentifier": "question1",
              "answer": {
                "multipleChoiceAnswerValue": {
                  "symbolicNames": [
                    "right"
                  ]
                }
              }
            }
          ]
        }
      ]
    }
  }
}
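As a rough illustration of the receiving side, here is a minimal Python (Flask) sketch of a webhook that accepts such a POST. The signature check assumes X-LI-Signature is an HMAC-SHA256 of the raw body keyed with the app's client secret; confirm the exact scheme in LinkedIn's documentation before relying on it:

import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
CLIENT_SECRET = os.environ["LINKEDIN_CLIENT_SECRET"]  # placeholder

@app.route("/linkedin/applications", methods=["POST"])
def receive_application():
    # Assumption: X-LI-Signature is an HMAC-SHA256 of the raw body with the
    # client secret; verify the exact scheme in LinkedIn's webhook docs.
    expected = hmac.new(CLIENT_SECRET.encode(), request.get_data(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request.headers.get("X-LI-Signature", "")):
        abort(401)

    payload = request.get_json()
    if payload.get("type") == "JOB_APPLICATION_EXPORT":
        # Hand the application off to the ATS, keyed by the job and applicant identifiers.
        store_application(payload["externalJobId"], payload["jobApplicant"], payload)
    return "", 200

def store_application(external_job_id, applicant_urn, payload):
    # Placeholder for the ATS integration.
    print(external_job_id, applicant_urn)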