Octavia apply on Airbyte gives a JSON schema validation error - google-bigquery

I'm trying to create a new BigQuery destination on Airbyte with Octavia cli.
When launching:
octavia apply
I receive:
Error: {"message":"The provided configuration does not fulfill the specification. Errors: json schema validation failed when comparing the data to the json schema. \nErrors:
$.loading_method.method: must be a constant value Standard
Here is my conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
loading_method:
## -------- Pick one valid structure among the examples below: --------
# method: "Standard" # REQUIRED | string
## -------- Another valid structure for loading_method: --------
method: "GCS Staging" # REQUIRED | string}
credential:
## -------- Pick one valid structure among the examples below: --------
credential_type: "HMAC_KEY" # REQUIRED | string
hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
# keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15

It was an indentation issue on my side:
gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
These two keys should be one level higher (this wasn't clear in the commented template, hence the error, and other people may well make the same mistake).
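In short, gcs_bucket_name and gcs_bucket_path are properties of loading_method itself (siblings of method and credential), not of credential. A minimal sketch of the corrected nesting, with the other keys omitted:

loading_method:
  method: "GCS Staging"
  credential:
    credential_type: "HMAC_KEY"
    hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET}
    hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID}
  gcs_bucket_name: "airbyte-octavia-thibaut-gcs"
  gcs_bucket_path: "gcs"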
Here is the full final conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
loading_method:
## -------- Pick one valid structure among the examples below: --------
# method: "Standard" # REQUIRED | string
## -------- Another valid structure for loading_method: --------
method: "GCS Staging" # REQUIRED | string}
credential:
## -------- Pick one valid structure among the examples below: --------
credential_type: "HMAC_KEY" # REQUIRED | string
hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
# keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15

Related

How to use Snowflake identifier function to reference a stage object

I can describe a stage with the identifier:
desc stage identifier('db.schema.stage_name');
But I get an error when I try to use the stage with the at-symbol syntax.
I have tried these variations but no luck so far:
list @identifier('db.schema.stage_name');
list identifier('@db.schema.stage_name');
list identifier('db.schema.stage_name');
list identifier(@'db.schema.stage_name');
list identifier("@db.schema.stage_name");
The use of IDENTIFIER suggests a need to query/list the contents of a stage whose name is provided as a variable.
An alternative approach could be usage of directory tables:
Directory tables store a catalog of staged files in cloud storage. Roles with sufficient privileges can query a directory table to retrieve file URLs to access the staged files, as well as other metadata.
Enabling directory table on the stage:
CREATE OR REPLACE STAGE test DIRECTORY = (ENABLE = TRUE);
ALTER STAGE test REFRESH;
Listing content of the stage:
SET var = '@public.test';
SELECT * FROM DIRECTORY($var);
Output:
+---------------+------+---------------+-----+------+----------+
| RELATIVE_PATH | SIZE | LAST_MODIFIED | MD5 | ETAG | FILE_URL |
+---------------+------+---------------+-----+------+----------+
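Once the directory table is enabled and refreshed, the same pattern also lets you filter and sort on the metadata columns shown above. A sketch (the data_sync/ prefix is only an illustrative assumption):

SET var = '@public.test';
SELECT relative_path, size, last_modified, file_url
FROM DIRECTORY($var)
WHERE relative_path LIKE 'data_sync/%'
ORDER BY last_modified DESC;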

Postman Request body data from excel / csv file - forward slash

I get the request body data from an Excel file. I have already converted the Excel file to CSV format.
I have more or less found a solution, but it is not working 100%: the JSON body does not pick up the data correctly and shows forward slashes in the CSV data imported through the collection runner.
Request Body
{{jsonBody}}
Set global variable jsonBody.
Run the collection and select the CSV file as the data file; the request body is then sent with forward slashes.
After running the collection I get an incorrect body version with forward slashes; I need to remove the forward slashes coming from the CSV data.
I had a similar issue with Postman and realized my problem was more of a syntax issue.
Let's say your CSV file has the following columns:
userId | mid       | platform | type | ...etc
94J4J  | 209444894 | NORTH    | PT   | ...
324JE  | 934421903 | SOUTH    | MB   | ...
966RT  | 158739394 | EAST     | PT   | ...
This is how you want your JSON request body to look:
{
  "userId": "{{userId}}",
  "mids": [{
    "mid": "{{mid}}",
    "platform": "{{platform}}"
  }],
  "type": ["{{type}}"],
  .. etc
}
Make sure your column names match the variables {{variableName}}.
The data coming from CSV is already in a stringified format so you don't need to do anything in pre-request.
Example: let the CSV be
| jsonBody        |
| {"name":"user"} |
Now in the Postman request just use:
{{jsonBody}}
as {{column_name}} is treated as a data variable, which in your case is {{jsonBody}}.
If you want to use the JSON body as the value of another key, just reference the variable inside the outer object.
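For example (the outer key name "payload" is only illustrative, not from the original post); Postman substitutes the raw CSV cell text, so this expands to {"payload": {"name":"user"}}:

{
  "payload": {{jsonBody}}
}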

How to see the data (primary and backup) on each node in Apache Ignite?

I have 6 Ignite nodes and all are connected to form a cluster. I have also set the number of backup copies to 2. I have now sent 20 entries to the cluster to check the partitioning and the data (primary and backup). I can see the count using the cache -a -r command.
Is there a command or a way to see the actual data on each node, both the primary data and the backup copies?
You could use cache -scan -c=cacheName
Entries in cache: SQL_PUBLIC_PERSON
+=============================================================================================================================================+
| Key Class | Key | Value Class | Value |
+=============================================================================================================================================+
| java.lang.Integer | 1 | o.a.i.i.binary.BinaryObjectImpl | SQL_PUBLIC_PERSON_.. [hash=357088963, NAME=Name1] |
+---------------------------------------------------------------------------------------------------------------------------------------------+
Use help cache to see all cache-related commands.
See: https://apacheignite-tools.readme.io/docs/command-line-interface
You also have the option of turning on SQL: https://apacheignite-sql.readme.io/docs/schema-and-indexes
and: https://apacheignite-sql.readme.io/docs/getting-started
then use JDBC/SQL to see the entries in your cache.
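As a sketch of that last option: the cache shown above, SQL_PUBLIC_PERSON, is the default cache name Ignite gives a table created via SQL as PUBLIC.PERSON, so once SQL access is set up a JDBC client (thin driver URL jdbc:ignite:thin://<host>/) can query it with something like:

SELECT _KEY, NAME FROM PUBLIC.PERSON;

Note this lists the entries regardless of which node holds the primary or backup copy; per-node inspection still goes through the cache commands above.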

Read a file from a position in Robot Framework

How can I read a file from a specific byte position in Robot Framework?
Let's say I have a process running for a long time writing a long log file. I want to get the current file size, then I execute something that affects the behaviour of the process and I wait until some message appears in the log file. I want to read only the portion of the file starting from the previous file size.
I am new to Robot Framework. I think this is a very common scenario, but I haven't found how to do it.
There are no built-in keywords to do this, but writing one in python is pretty simple.
For example, create a file named "readmore.py" with the following:
from robot.libraries.BuiltIn import BuiltIn

class readmore(object):
    ROBOT_LIBRARY_SCOPE = "TEST SUITE"

    def __init__(self):
        self.fp = {}

    def read_more(self, path):
        # if we don't already know about this file,
        # set the file pointer to zero
        if path not in self.fp:
            BuiltIn().log("setting fp to zero", "DEBUG")
            self.fp[path] = 0

        # open the file, move the pointer to the stored
        # position, read the file, and reset the pointer
        with open(path) as f:
            BuiltIn().log("seeking to %s" % self.fp[path], "DEBUG")
            f.seek(self.fp[path])
            data = f.read()
            self.fp[path] = f.tell()
            BuiltIn().log("resetting fp to %s" % self.fp[path], "DEBUG")

        return data
You can then use it like this:
*** Settings ***
| Library | readmore.py
| Library | OperatingSystem
*** test cases ***
| Example of "tail-like" reading of a file
| | # read the current contents of the file
| | ${original}= | read more | /tmp/junk.txt
| | # do something to add more data to the file
| | Append to file | /tmp/junk.txt | this is new content\n
| | # read the new data
| | ${new}= | Read more | /tmp/junk.txt
| | Should be equal | ${new.strip()} | this is new content

BigQuery console api "Cannot start a job without a project id"

I can run SQL in the BigQuery browser tool, and I installed the bq tool on CentOS and registered it. I can now connect to BigQuery and show datasets or get table data with the head method, but when I run a query from the bq tool I get "BigQuery error in query operation: Cannot start a job without a project id." I searched on Google but found nothing helpful.
Has anyone managed to run a SELECT query via "This is BigQuery CLI v2.0.1"?
BigQuery> ls
   projectId     friendlyName
  ------------- --------------
   XXXX          API Project
BigQuery> show publicdata:samples.shakespeare
Table publicdata:samples.shakespeare
Last modified Schema Total Rows Total Bytes Expiration
----------------- ------------------------------------ ------------ ------------- ------------
02 May 02:47:25 |- word: string (required) 164656 6432064
|- word_count: integer (required)
|- corpus: string (required)
|- corpus_date: integer (required)
BigQuery> query "SELECT title FROM [publicdata:samples.wikipedia] LIMIT 10 "
BigQuery error in query operation: Cannot start a job without a project id.
In order to run a query, you need to provide a project id, which is the project that gets billed for the query (there is a free quota of 25GB/month, but we still need a project to attribute the usage to). You can specify a project either with the --project_id flag or by setting a default project by running gcloud config set project PROJECT_ID. See the docs for bq and especially the 'Working with projects' section here.
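For example, using the flag (PROJECT_ID is a placeholder for your own project ID):

bq --project_id=PROJECT_ID query "SELECT title FROM [publicdata:samples.wikipedia] LIMIT 10"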
Also it sounds like you may have an old version of bq. The most recent can be downloaded here: https://cloud.google.com/sdk/docs/