The BigQuery documentation says that tables.insert creates a new, empty table (https://cloud.google.com/bigquery/docs/reference/v2/tables/insert).
But when I try to test that endpoint from that page, I get a 404 Table not found error.
Request:
POST https://www.googleapis.com/bigquery/v2/projects/myProject/datasets/temp/tables?key={YOUR_API_KEY}
{
"expirationTime": "1452627594",
"tableReference": {
"tableId": "myTable"
}
}
Response:
404 Not Found
{
"error": {
"errors": [
{
"domain": "global",
"reason": "notFound",
"message": "Not found: Table myProject.myTable"
}
],
"code": 404,
"message": "Not found: Table myProject:temp.myTable"
}
}
I'm sure there is a reasonable explanation for the 404 error, but I don't understand it: since I am trying to create the table, shouldn't it be fine that it doesn't exist yet?
Thank you
The info you provided in the request body is not enough for BigQuery to create a new table.
At a minimum, add a schema, something like the one below (just as an example):
{
"schema": {
"fields": [
{
"name": "page_event",
"mode": "repeated",
"type": "RECORD",
"fields": [
{
"name": "id",
"type": "STRING"
},
{
"name": "description",
"type": "STRING"
}
]
}
]
}
}
See the Tables resource for more on what you can supply in the request body.
Below is an example with the above dummy schema that should work:
POST https://www.googleapis.com/bigquery/v2/projects/myProject/datasets/temp/tables?key={YOUR_API_KEY}
{
"schema": {
"fields": [
{
"name": "page_event",
"mode": "repeated",
"type": "RECORD",
"fields": [
{
"name": "id",
"type": "STRING"
},
{
"name": "description",
"type": "STRING"
}
]
}
]
},
"expirationTime": "1452560523000",
"tableReference": {
"tableId": "myTable"
}
}
EDIT:
And the reason it didn't work with "expirationTime": "1452627594" in your question is that expirationTime is expressed in milliseconds since the epoch, so that value points to January 1970, which is in the past, and the table expires immediately.
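For reference, here is a rough sketch of the same call from Python with the requests library. Token acquisition is assumed to be handled elsewhere, and the placeholders (PROJECT, DATASET, TOKEN) are hypothetical; it computes expirationTime as a future epoch timestamp in milliseconds:

import time
import requests

# Hypothetical placeholders: supply your own project, dataset, and OAuth token.
PROJECT, DATASET = "myProject", "temp"
TOKEN = "..."  # an OAuth 2.0 access token with a BigQuery scope

url = ("https://www.googleapis.com/bigquery/v2"
       f"/projects/{PROJECT}/datasets/{DATASET}/tables")

body = {
    "tableReference": {"projectId": PROJECT, "datasetId": DATASET, "tableId": "myTable"},
    # expirationTime is milliseconds since the epoch; here, one hour from now.
    "expirationTime": str(int(time.time() * 1000) + 3600 * 1000),
    "schema": {
        "fields": [
            {"name": "id", "type": "STRING"},
            {"name": "description", "type": "STRING"},
        ]
    },
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())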
No matter what hotel or check-in/check-out dates I try, the GET /shopping/hotel-offers/by-hotel endpoint always returns no availability. GET /shopping/hotel-offers for a city always returns one or more hotels that have availability. I understand these may be cached results, but using any of those hotels with the GET /shopping/hotel-offers/by-hotel endpoint still returns no availability.
Example 1
GET https://test.api.amadeus.com/v2/shopping/hotel-offers?cityCode=NYC
(RESPONSE TRUNCATED FOR READABILITY)
{
"data": [
{
"type": "hotel-offers",
"hotel": {
"type": "hotel",
"hotelId": "BWNYC133",
"chainCode": "BW",
"dupeId": "700101379",
"name": "BEST WESTERN BOWERY HANBEE HTL"
...
"available": true,
"offers": [
{
"id": "15F1E33CA0571B94E27F2BA26CA4319C8A097B500D737AB68088E93AC813D2BC",
"rateCode": "SRS",
"rateFamilyEstimated": {
"code": "SRS",
"type": "C"
},
"boardType": "BREAKFAST",
"room": {
"type": "A1Q",
"typeEstimated": {
"category": "ACCESSIBLE_ROOM",
"beds": 1,
"bedType": "QUEEN"
},
}
...
],
...
}
Immediately followed by
GET https://test.api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=BWNYC133
RESPONSE
{
"errors": [
{
"status": 400,
"code": 3664,
"title": "NO ROOMS AVAILABLE AT REQUESTED PROPERTY"
}
]
}
The same is true no matter what hotel in any city I try. Am I doing something wrong? I've been playing with the endpoints for a few hours now and have only been able to get a successful response from the hotels by city endpoint.
Appreciate any help provided.
EDIT
My issues are continuing in production now. I am getting no availability from /shopping/hotel-offers/by-hotel or /shopping/hotel-offers/{offerId} endpoints.
GET https://api.amadeus.com/v2/shopping/hotel-offers?cityCode=NYC&hotelIds=XTNYC130,ONNYCMIM,DSNYC132&checkInDate=2020-05-01&checkOutDate=2020-05-03&roomQuantity=1&adults=2&radius=5&radiusUnit=KM&paymentPolicy=NONE&includeClosed=false&bestRateOnly=true&view=FULL&sort=NONE
(RESPONSE TRUNCATED FOR READABILITY)
{
"data": [
{
"type": "hotel-offers",
"hotel": {
"type": "hotel",
"hotelId": "XTNYC130",
"chainCode": "XT",
"dupeId": "700070576"
"name": "DUANE STREET HOTEL",
},
"available": true,
"offers": [
{
"id": "394DF124A254A86DD6DA5D1A3084B543DFA462740EDAE34023151D479266C4DE",
"rateCode": "GMT"
}
],
"self": "https://api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=XTNYC130&adults=2&checkInDate=2020-05-01&checkOutDate=2020-05-03&paymentPolicy=NONE&roomQuantity=1&view=FULL"
},
{
"type": "hotel-offers",
"hotel": {
"type": "hotel",
"hotelId": "ONNYCMIM",
"chainCode": "ON",
"dupeId": "700128992",
"name": "HOTEL MIMOSA"
},
"available": true,
"offers": [
{
"id": "547EA4B5F7F716DF083DFD19D857DAE0F1B6820E753D080F310737C5374AF857",
"rateCode": "BAR"
}
],
"self": "https://api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=ONNYCMIM&adults=2&checkInDate=2020-05-01&checkOutDate=2020-05-03&paymentPolicy=NONE&roomQuantity=1&view=FULL"
},
{
"type": "hotel-offers",
"hotel": {
"type": "hotel",
"hotelId": "DSNYC132",
"chainCode": "DS",
"dupeId": "700224946",
"name": "The Ludlow Hotel"
},
"available": true,
"offers": [
{
"id": "CC8CD3A64562527B6330E1A317584E78B68537B1E682115497037D28CB466FDE",
"rateCode": "RAC"
}
],
"self": "https://api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=DSNYC132&adults=2&checkInDate=2020-05-01&checkOutDate=2020-05-03&paymentPolicy=NONE&roomQuantity=1&view=FULL"
}
]
}
GET https://api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=XTNYC130&adults=2&checkInDate=2020-05-01&checkOutDate=2020-05-03&paymentPolicy=NONE&roomQuantity=1&view=FULL
RESPONSE
{
"errors": [
{
"status": 400,
"code": 3664,
"title": "NO ROOMS AVAILABLE AT REQUESTED PROPERTY"
}
]
}
GET https://api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=ONNYCMIM&adults=2&checkInDate=2020-05-01&checkOutDate=2020-05-03&paymentPolicy=NONE&roomQuantity=1&view=FULL
RESPONSE
{
"errors": [
{
"status": 400,
"code": 3664,
"title": "NO ROOMS AVAILABLE AT REQUESTED PROPERTY"
}
]
}
GET https://api.amadeus.com/v2/shopping/hotel-offers/by-hotel?hotelId=DSNYC132&adults=2&checkInDate=2020-05-01&checkOutDate=2020-05-03&paymentPolicy=NONE&roomQuantity=1&view=FULL
RESPONSE
{
"errors": [
{
"status": 400,
"code": 3664,
"title": "NO ROOMS AVAILABLE AT REQUESTED PROPERTY"
}
]
}
GET https://api.amadeus.com/v2/271FFDEF4E7FD5E1EEB10BFE59B0880B5F6AF4DCA73BA57E5489FDFE7E95AFCD
{
"errors": [
{
"status": 400,
"code": 477,
"title": "INVALID FORMAT"
}
]
}
We are experiencing temporary problems with some of our hotel providers' test systems, which is leading to slower-than-usual response times or timeouts, as in your example. We tried the same request and it works most of the time, so while we work with our providers to resolve this issue, you can retry the request a few times until you get a response.
Sorry for the inconvenience!
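If it helps while the providers are being fixed, below is a minimal retry sketch in Python; the token variable is a placeholder and the back-off numbers are arbitrary:

import time
import requests

TOKEN = "..."  # an Amadeus OAuth 2.0 access token, assumed already obtained
URL = "https://test.api.amadeus.com/v2/shopping/hotel-offers/by-hotel"

def get_offers_with_retry(hotel_id, attempts=5, delay=2.0):
    # Retry the by-hotel lookup a few times before giving up.
    for attempt in range(attempts):
        resp = requests.get(
            URL,
            params={"hotelId": hotel_id},
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        if resp.status_code == 200:
            return resp.json()
        time.sleep(delay * (attempt + 1))  # back off a little more each time
    resp.raise_for_status()  # surface the last error if all attempts failed

offers = get_offers_with_retry("BWNYC133")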
I was trying to load an Avro file with a nested record. One of the records has a union of schemas. When loaded into BigQuery, it created a very long name like com_mycompany_data_nestedClassname_value for each union element. That name is long. I'm wondering if there is a way to specify the name without the full package name prefixed.
For example, take the following Avro schema:
{
"type": "record",
"name": "EventRecording",
"namespace": "com.something.event",
"fields": [
{
"name": "eventName",
"type": "string"
},
{
"name": "eventTime",
"type": "long"
},
{
"name": "userId",
"type": "string"
},
{
"name": "eventDetail",
"type": [
{
"type": "record",
"name": "Network",
"namespace": "com.something.event",
"fields": [
{
"name": "hostName",
"type": "string"
},
{
"name": "ipAddress",
"type": "string"
}
]
},
{
"type": "record",
"name": "DiskIO",
"namespace": "com.something.event",
"fields": [
{
"name": "path",
"type": "string"
},
{
"name": "bytesRead",
"type": "long"
}
]
}
]
}
]
}
It came up with long field names. Is it possible to make a field name like eventDetail.com_something_event_Network_value into something like eventDetail.Network?
Avro loading is not as flexible as it should be in BigQuery (a basic example: it does not support loading a subset of the fields, i.e. a reader schema). Also, renaming columns is not supported in BigQuery today. The only option is to recreate the table with the proper names, i.e. create a new table from your existing one.
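As an illustration of the recreate-the-table workaround, here is a sketch using the BigQuery Python client. The project, dataset, and table names are made up, the generated column names are taken from your question (the exact names BigQuery generates may differ), and nested RECORD fields have to be rebuilt with STRUCT(... AS ...):

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

# Rebuild the table, aliasing the auto-generated union column names.
sql = """
CREATE TABLE `my_dataset.events_renamed` AS
SELECT
  eventName,
  eventTime,
  userId,
  STRUCT(
    eventDetail.com_something_event_Network_value AS Network,
    eventDetail.com_something_event_DiskIO_value AS DiskIO
  ) AS eventDetail
FROM `my_dataset.events`
"""
client.query(sql).result()  # block until the new table is created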
I'm trying to create a page rule using Cloudflare's API to forward http to https. Unfortunately, I don't think the documentation is 100% clear on how to do this. Here is the JSON object I'm currently passing to the POST body:
{
"targets": [
{
"target": "url",
"constraint": {
"operator": "matches",
"value": "http://exampletest.com/*"
}
}
],
"actions": [{
"id": "forwarding_url",
"value": "https://exampletest.com/$1"
}]
}
and here is the message I'm getting back:
{
"success": false,
"errors": [
{
"code": 1004,
"message": "Page Rule validation failed: See messages for details."
}
],
"messages": [
{
"code": 1,
"message": ".settings[0].url: This value should not be blank.",
"type": null
},
{
"code": 2,
"message": ".settings[0].statusCode: This value should not be blank.",
"type": null
}
],
"result": null
}
So it seems like I need a settings object somewhere, but however I try to add the settings, I get the same message. Does anyone know what I'm doing wrong here? Here is Cloudflare's documentation on the subject, in case I'm missing something:
https://api.cloudflare.com/#page-rules-for-a-zone-create-page-rule
Actually, just figured this one out. It looks like this is the correct format:
{
"targets": [
{
"target": "url",
"constraint": {
"operator": "matches",
"value": "http://exampletest.com/*"
}
}
],
"actions": [
{
"id": "forwarding_url",
"value": {
"url": "https://www.exampletest.com/$1",
"status_code": 301
}
}
]
}
You also have to set the parameter "status": "active" to actually activate the page rule; otherwise you're just creating an inactive rule.
Cloudflare marks this parameter as optional, but it is needed to activate the rule.
The updated version of this answer goes like this:
{
"targets": [
{
"target": "url",
"constraint": {
"operator": "matches",
"value": "http://exampletest.com/*"
}
}
],
"actions": [
{
"id": "forwarding_url",
"value": {
"url": "https://www.exampletest.com/$1",
"status_code": 301
}
}
],
"status": "active"
}
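Putting it all together, a minimal sketch of the call from Python; the zone ID and token are placeholders you would substitute with your own:

import requests

ZONE_ID = "your_zone_id"      # placeholder
API_TOKEN = "your_api_token"  # a Cloudflare API token with Page Rules edit permission

rule = {
    "targets": [
        {
            "target": "url",
            "constraint": {"operator": "matches", "value": "http://exampletest.com/*"},
        }
    ],
    "actions": [
        {
            "id": "forwarding_url",
            "value": {"url": "https://www.exampletest.com/$1", "status_code": 301},
        }
    ],
    "status": "active",
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/pagerules",
    json=rule,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
print(resp.json())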
I have completed a Node.js app using the LINE APIs. I have the following request object. How can I define an array of different objects, here the messages field, which contains a different object structure for each message type? I hope Swagger permits this very common scenario.
Request Body:
{
"to": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"messages":[
{
"type":"text",
"text":"Hello, world1"
},
{
"type": "audio",
"originalContentUrl": "https://example.com/original.m4a",
"duration": 240000
},
{
"type": "location",
"title": "my location",
"address": "〒150-0002 東京都渋谷区渋谷2丁目21−1",
"latitude": 35.65910807942215,
"longitude": 139.70372892916203
}
]
}
My Swagger definition for the messages array:
"Messages Object": {
"type": "array",
"items": {
"allOf": [
{
"$ref": "#/definitions/multicast Message Error Response"
},
{
"$ref": "#/definitions/multicast Message Error Response"
}
]
}
}
And this is the rendered messages array. It has only one entry; I want to include many different entries:
"messages": [
{
"code": 500,
"httpCode": 400,
"name": "string",
"message": "string"
}
]
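For what it's worth, heterogeneous arrays like this cannot be expressed in Swagger 2.0, which lacks oneOf; OpenAPI 3 can model them, with something along these lines (the message schema names here are invented, one per LINE message type):

"messages": {
  "type": "array",
  "items": {
    "oneOf": [
      { "$ref": "#/components/schemas/TextMessage" },
      { "$ref": "#/components/schemas/AudioMessage" },
      { "$ref": "#/components/schemas/LocationMessage" }
    ],
    "discriminator": { "propertyName": "type" }
  }
}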
I am trying to insert a job through an HTTP POST request, but I am getting an "Invalid path" error.
My request body is as follows:
{
"configuration": {
"load": {
"sourceUris": [
"gs://onianalytics/PersData.csv"
],
"schema": {
"fields": [
{
"name": "Name",
"type": "STRING"
},
{
"name": "Age",
"type": "INTEGER"
}
]
},
"destinationTable": {
"datasetId": "Test_Dataset",
"projectId": "lithe-anvil-404",
"tableId": "tb_test_Pers"
}
}
},
"jobReference": {
"jobId": "10",
"projectId": "lithe-anvil-404"
}
}
For the sourceUris parameter, I am passing "gs://onianalytics/PersData.csv", where onianalytics is my bucket name and PersData.csv is my CSV file (from which I want to upload data into Google BigQuery).
I am getting the response below:
"status": {
"state": "DONE",
"errorResult": {
"reason": "invalid",
"message": "Invalid path: gs://onianalytics/PersData.csv"
},
"errors": [
{
"reason": "invalid",
"message": "Invalid path: gs://onianalytics/PersData.csv"
}
]
},
"statistics": {
"creationTime": "1387276603674",
"startTime": "1387276603751",
"endTime": "1387276603751"
}
}
Can anyone explain why this error is occurring?
Is your bucket under the same projectId that has the BigQuery service activated and with which you requested tokens? If not, have you tried enabling read/write access for that project?
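If the bucket does belong to the right project, here is a sketch of the same load using the BigQuery Python client, with the project, dataset, table, and URI taken from your question (credentials are assumed to be configured, e.g. via application default credentials):

from google.cloud import bigquery

client = bigquery.Client(project="lithe-anvil-404")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    schema=[
        bigquery.SchemaField("Name", "STRING"),
        bigquery.SchemaField("Age", "INTEGER"),
    ],
)

load_job = client.load_table_from_uri(
    "gs://onianalytics/PersData.csv",
    "lithe-anvil-404.Test_Dataset.tb_test_Pers",
    job_config=job_config,
)
load_job.result()  # raises with the job error, e.g. the same "Invalid path"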