BigQuery console api "Cannot start a job without a project id" - project

I can run SQL in the BigQuery browser tool. I also installed the bq tool on CentOS and registered it, and I can now connect to BigQuery, list datasets, and fetch table data with the head command. But when I run a query from the bq tool I get "BigQuery error in query operation: Cannot start a job without a project id." I searched Google but found nothing helpful.
Has anyone managed to run a SELECT query via the CLI? This is BigQuery CLI v2.0.1.
BigQuery> ls
   projectId   friendlyName
  ----------- --------------
   XXXX        API Project
BigQuery> show publicdata:samples.shakespeare
Table publicdata:samples.shakespeare
   Last modified          Schema                         Total Rows   Total Bytes   Expiration
 ------------------ ------------------------------------ ------------ ------------- ------------
  02 May 02:47:25    |- word: string (required)           164656       6432064
                     |- word_count: integer (required)
                     |- corpus: string (required)
                     |- corpus_date: integer (required)
BigQuery> query "SELECT title FROM [publicdata:samples.wikipedia] LIMIT 10 "
BigQuery error in query operation: Cannot start a job without a project id.

In order to run a query, you need to provide a project id, which is the project that gets billed for the query (there is a free quota of 25GB/month, but we still need a project to attribute the usage to). You can specify a project either with the --project_id flag or by setting a default project by running gcloud config set project PROJECT_ID. See the docs for bq and especially the 'Working with projects' section here.
Also it sounds like you may have an old version of bq. The most recent can be downloaded here: https://cloud.google.com/sdk/docs/
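For illustration (a minimal sketch, using my-project-id as a placeholder for your own project id), either of the following should work:

# one-off: pass the billing project explicitly (the global flag goes before the command)
bq --project_id=my-project-id query "SELECT title FROM [publicdata:samples.wikipedia] LIMIT 10"

# or set a default project once, then query without the flag
gcloud config set project my-project-id
bq query "SELECT title FROM [publicdata:samples.wikipedia] LIMIT 10"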

Related

How to use Snowflake identifier function to reference a stage object

I can describe a stage with IDENTIFIER:
desc stage identifier('db.schema.stage_name');
But I get an error when I try to use the stage with the @ symbol syntax.
I have tried these variations, but no dice so far:
list @identifier('db.schema.stage_name');
list identifier('@db.schema.stage_name');
list identifier('db.schema.stage_name');
list identifier(@'db.schema.stage_name');
list identifier("@db.schema.stage_name");
The use of IDENTIFIER suggests the need to query or list the contents of a stage whose name is provided as a variable.
An alternative approach could be to use directory tables:
Directory tables store a catalog of staged files in cloud storage. Roles with sufficient privileges can query a directory table to retrieve file URLs to access the staged files, as well as other metadata.
Enabling directory table on the stage:
CREATE OR REPLACE STAGE test DIRECTORY = (ENABLE = TRUE);
ALTER STAGE test REFRESH;
Listing content of the stage:
SET var = '@public.test';
SELECT * FROM DIRECTORY($var);
Output:
+---------------+------+---------------+-----+------+----------+
| RELATIVE_PATH | SIZE | LAST_MODIFIED | MD5 | ETAG | FILE_URL |
+---------------+------+---------------+-----+------+----------+
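The listing above is empty because nothing has been staged yet. As a rough follow-up sketch (the local file path is hypothetical, and PUT has to be run from SnowSQL rather than the worksheet), uploading a file and refreshing the directory table should make rows appear:

PUT file:///tmp/data.csv @public.test;   -- upload a local file to the stage
ALTER STAGE test REFRESH;                -- re-sync the directory table with the stage contents
SELECT * FROM DIRECTORY($var);           -- now returns one row per staged file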

railflow cli command ignores the cucumber steps with a data table and only shows one line

We just started using TestRail with Railflow, and I am using the Railflow CLI to create test cases that are written in Cucumber/Gherkin style. Test results are converted into JSON files, and the Railflow CLI reads those JSON files and creates test cases in TestRail. Up to this point, everything works fine. However, I recently realized that test scenarios where I use a data table are not being transferred to my test case in TestRail. Has anyone had a similar issue, or can anyone suggest a solution?
Here is cucumber step:
Then I verify "abc" table column headers to be
| columnName |
| Subject |
| Report Data |
| Action |
| ER Type |
In TestRail, it only includes the step header, which is: Then I verify "abc" table column headers to be
Any suggestion is appreciated.
We are constantly improving Railflow and its report handling, so we are more than happy to add support for Cucumber data tables.
Please contact the support team via our website.
Update: This is now implemented and available in Railflow NPM CLI v2.1.12.

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on a CI/CD script to deploy a complex environment into another environment. There are multiple technologies involved, and I want to optimize this script because it takes too much time to fetch information about each environment.
For the OpenShift 3.6 part, I need to get the last successful deployment of each application in a specific project. I tried to find a quick way to do so, but so far the only solution I have found is this:
oc rollout history dc -n <Project_name>
This will give me the following output
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete".
In the above example, I would get this list :
<Application_name> : 2
<Application_name2> : 20
Then for each application and each revision I do :
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest revision that is complete, not building, and not failed.
This gives me the output with the information I need, namely the version of the EAR and the version of the configuration that were used to build the image for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I need?
Unless I am mistaken, it looks like there is no one-liner that can get this information about the currently running and accessible applications.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets a list of deployments, feeds it to awk to extract the name $1 and revision $2, then compiles your command to extract the details, finally sends it to standard input to execute. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
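If you do prefer the xargs route, a roughly equivalent sketch (untested, but assuming the same name/revision columns from oc get dc) would be:

# print "<name> --revision=<revision>" per line; xargs -L1 appends each line's tokens to the command
oc get dc -a --no-headers | awk '{print $1" --revision="$2}' | xargs -L1 oc rollout history dc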
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea, with expressions like .spec.template.spec.containers[0].env you can reach for specific variables, labels, etc. Unfortunately the jsonpath output is not available with oc rollout history.
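As a further sketch along the same lines (assuming the DeploymentConfig field .status.latestVersion reflects the most recent revision, which is not necessarily the most recent successful one), you can also pull each name and its latest revision number in a single call:

oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.latestVersion}{"\n"}{end}'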
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks

Error with BQ command line tool: Cannot start a job without a project id

I am having issues with the BQ command-line tool.
Specifically, when trying to query a dataset/table, whether one of the public datasets or my own, I get the error:
BigQuery error in query operation: Cannot start a job without a project id.
I have a project id set as default as per the attached screen shot.
Any help you could give would be appreciated. :-)
Thanks
BigQuery> ls
   projectId       friendlyName
  --------------- ----------------
   de********nts   De********ing
BigQuery> query 'select count(*) from publicdata:samples.shakespeare'
BigQuery error in query operation: Cannot start a job without a project id.
BigQuery> show publicdata:samples.shakespeare
Table publicdata:samples.shakespeare
   Last modified          Schema                         Total Rows   Total Bytes   Expiration
 ------------------ ------------------------------------ ------------ ------------- ------------
  22 Oct 07:27:07    |- word: string (required)           164656       6432064
                     |- word_count: integer (required)
                     |- corpus: string (required)
                     |- corpus_date: integer (required)
You should configure the gcloud command-line tool first:
gcloud config set project 'yourProjectId'
This project is the one that gets billed for the queries, even when you query public data.
Then you can run your query:
bq query 'select count(*) from publicdata:samples.shakespeare'
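Alternatively (a minimal sketch, with yourProjectId as a placeholder), you can pass the billing project on a single invocation instead of setting a default:

bq --project_id=yourProjectId query 'select count(*) from publicdata:samples.shakespeare'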

Using timestamp literals in a WHERE clause with bq tool

I had a look at the BigQuery command line tool documentation and I saw that you are able to use timestamp literals in a WHERE clause. The documentation shows the following example:
$ bq query "SELECT name, birthday FROM dataset.table WHERE birthday <= '1959-01-01 01:02:05'"
Waiting on job_6262ac3ea9f34a2e9382840ee11538ef ... (0s) Current status: DONE
+------+---------------------+
| name | birthday |
+------+---------------------+
| kim | 1958-06-24 12:18:35 |
+------+---------------------+
As dataset.table is not a public dataset, I built an example using the wikipedia dataset.
SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp
FROM publicdata:samples.wikipedia
HAVING human_timestamp>'2008-01-01 01:02:03' LIMIT 5
The example works in the BigQuery browser tool but not in the bq tool. Why? I tried escape characters and several combinations of single and double quotes without success. Is it a Windows issue? Here is a screenshot:
EDIT: This is BigQuery CLI 2.0.18
I know that "It works on my machine" isn't a satisfying answer, but I've tried this on my Mac and on a Windows machine, and it appears to work fine on both. Here is the output from my Windows machine for the same query you've specified:
C:\Users\Jordan Tigani>bq query "SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp FROM publicdata:samples.wikipedia HAVING human_timestamp>'2008-01-01 01:02:03' LIMIT 5"
Waiting on bqjob_r607b7a74_00000144b71ddb9b_1 ... (0s) Current status: DONE
Can you make sure that the quotes you're using aren't pasted smart quotes and there aren't any stray unicode characters that might confuse the parsing?
One other hint is to use the --apilog=- option, which tells BigQuery to print out all interaction with the server to stdout. You can then see exactly what is getting sent to the BigQuery backend, and verify that the quotes are as expected.
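For example (a minimal sketch; the query is just the one from the question, and the global flag goes before the command):

bq --apilog=- query "SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp FROM publicdata:samples.wikipedia HAVING human_timestamp>'2008-01-01 01:02:03' LIMIT 5"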
I found out that the problem is due to the greater-than operator > in the Windows command line. It does not have anything to do with the google-cloud-sdk, sorry.
It seems that you have to use the escape character to echo the sign on the command line: ^>
I found this in Google Groups (posted by Todd and Margo Chester), and the official reference is on the Microsoft site.