Using timestamp literals in a WHERE clause with bq tool - google-bigquery

I had a look at the BigQuery command line tool documentation and I saw that you are able to use timestamp literals in a WHERE clause. The documentation shows the following example:
$ bq query "SELECT name, birthday FROM dataset.table WHERE birthday <= '1959-01-01 01:02:05'"
Waiting on job_6262ac3ea9f34a2e9382840ee11538ef ... (0s) Current status: DONE
+------+---------------------+
| name | birthday            |
+------+---------------------+
| kim  | 1958-06-24 12:18:35 |
+------+---------------------+
Since dataset.table is not a public dataset, I built an example using the wikipedia dataset.
SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp
FROM publicdata:samples.wikipedia
HAVING human_timestamp>'2008-01-01 01:02:03' LIMIT 5
The example works in the BigQuery browser tool but not in the bq tool. Why? I tried using escape characters and several combinations of single and double quotes, without success. Is it a Windows issue? Here is a screenshot:
EDIT: This is BigQuery CLI 2.0.18

I know that "It works on my machine" isn't a satisfying answer, but I've tried this on my Mac and on a windows machine, and it appears to work fine on both. Here is the output from my windows machine for the same query you've specified:
C:\Users\Jordan Tigani>bq query "SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp FROM publicdata:samples.wikipedia HAVING human_timestamp>'2008-01-01 01:02:03' LIMIT 5"
Waiting on bqjob_r607b7a74_00000144b71ddb9b_1 ... (0s) Current status: DONE
Can you make sure that the quotes you're using aren't pasted-in smart quotes and that there aren't any stray Unicode characters that might confuse the parsing?
One other hint is to use the --apilog=- option, which tells BigQuery to print out all interaction with the server to stdout. You can then see exactly what is getting sent to the BigQuery backend, and verify that the quotes are as expected.
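For example, a quick way to see it in action (a sketch; --apilog is a global flag, so it goes before the query command):
$ bq --apilog=- query "SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp FROM publicdata:samples.wikipedia HAVING human_timestamp>'2008-01-01 01:02:03' LIMIT 5"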

I found out that the problem is due to the greater-than operator > in the Windows command line; it does not have anything to do with the google-cloud-sdk, sorry.
It seems that you have to use the caret character to escape the sign on the command line: ^>
I found the tip on Google Groups (by Todd and Margo Chester), and the official reference on the Microsoft site.
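For illustration, here is what the caret-escaped query from above might look like in cmd.exe (a sketch based on that finding; the caret matters only where cmd would otherwise treat > as output redirection):
C:\> bq query "SELECT title, timestamp, SEC_TO_TIMESTAMP(timestamp) AS human_timestamp FROM publicdata:samples.wikipedia HAVING human_timestamp^>'2008-01-01 01:02:03' LIMIT 5"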


How do I get Source Extractor to Analyze an Image?

I'm relatively inexperienced in coding, so right now I'm just familiarizing myself with the basics of how to use SE, which I'll need to use in the near future.
At the moment I'm trying to get it to analyze a FITS file on my computer (which is a Mac). I'm sure this is something obvious, but I haven't been able to get it to do that. Following the instructions in Chapters 6 and 7 of Source Extractor for Dummies (linked below), I input the following:
sex MedSpiral_20deg_Serl2_.45_.fits.fits -c configuration_file.txt
And got the following error message:
WARNING: configuration_file.txt not found, using internal defaults
----- SExtractor 2.19.5 started on 2020-02-05 at 17:10:59 with 1 thread
Setting catalog parameters
ERROR: can't read default.param
I then tried entering parameters manually:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c configuration_file.txt -DETECT_TYPE CCD -MAG_ZEROPOINT 2.5 -PIXEL_SCALE 0 -SATUR_LEVEL 1 -SEEING_FWHM 1
And got the same error message. I tried referencing default.sex directly:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c default.sex
And got the same error message again, with "default.sex not found" in place of "configuration_file.txt not found" (I checked that default.sex is on my computer; it is). The same thing happened when I tried to use default.param.
Here's the link to SE for Dummies (Chapter 6 begins on page 19):
http://astroa.physics.metu.edu.tr/MANUALS/sextractor/Guide2source_extractor.pdf
If you run the command "sex MedSpiral_20deg_Ser12_.45_fits.fits -c default.sex" from within the config folder (inside the sextractor folder), it will work.
However, I wonder how I can run the sextractor command from any folder on my computer?
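One hedged workaround, assuming SExtractor looks for its auxiliary files (default.param, default.conv, default.nnw) in the current working directory: pass absolute paths for all of the configuration files, so the call no longer cares which folder you run it from. The /path/to/sextractor/config paths below are placeholders for wherever your config folder actually lives:
sex MedSpiral_20deg_Ser12_.45_fits.fits \
    -c /path/to/sextractor/config/default.sex \
    -PARAMETERS_NAME /path/to/sextractor/config/default.param \
    -FILTER_NAME /path/to/sextractor/config/default.conv \
    -STARNNW_NAME /path/to/sextractor/config/default.nnw
Alternatively, copying those default.* files into whatever directory you run sex from should have the same effect.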

When trying to get the source of a function in Postgres using psql, what does the error "column p.proisagg does not exist" mean?

Background:
Using Postgres 11 on RDS; the interface is psql on a CentOS 7 box. The objective is to show the source of certain stored procs/functions so that I can work with them.
Problem description: When I attempt to list/show the source of a given stored function using the \df+ command, which I understand to be correct for this use based on the [official docs here](https://www.postgresql.org/docs/current/app-psql.html), an error is given as shown:
psql=> \df+ schema_foo.proc_bar;
ERROR: column p.proisagg does not exist
LINE 6: WHEN p.proisagg THEN 'agg'
I have no clue how to interpret this; the function in question does not contain the snippet of logic shown in the error, nor the column referenced there, p.proisagg. I have verified this by opening the function in vim with \ef.
My guess, based on several unrelated GitHub issues that mention this same error, is that it refers to some schema code internal to Postgres.
Summary: in short, I can view the source of the functions using \ef, so my work is not impaired from a practical standpoint; however, I wish to understand this error and why I'm encountering it with \df+.
I had the same issue (in phpPgAdmin) and ran these two commands to fix it:
sed -i "s/NOT pp.proisagg/pp.prokind='f'/g" /usr/share/phpPgAdmin/classes/database/Postgres.php
sed -i "s/NOT p.proisagg/p.prokind='f'/g" /usr/share/phpPgAdmin/classes/database/Postgres.php

Updating values in JSON file A using a reference in file B - the return

OK, I should feel ashamed of this, but I'm unable to understand how awk works...
A few days ago I posted this question, which asks how to replace fields in file A using file B as a reference (both files have matching IDs for reference).
But after accepting the answer as correct (thanks, Ed!), I'm struggling with how to do it using the following pattern:
File A
{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test", "null_test": "true"}
{"test_ref":32133321321,"test_id":12346,"test_name":"","test_comm":"test", "test_type": "alfa"}
{"test_ref":32132331321,"test_id":12347,"test_name":"","test_comm":"test", "test_val": 1923}
File B
{"test_id": 12345, "test_name": "Test values for null"}
{"test_id": 12346, "test_name": "alfa tests initiated"}
{"test_id": 12347, "test_name": "discard values"}
Expected result:
{"test_ref":32132112321,"test_id":12345,"test_name":"Test values for null","test_comm":"test", "null_test": "true"}
{"test_ref":32133321321,"test_id":12346,"test_name":"alfa tests initiated","test_comm":"test", "test_type": "alfa"}
{"test_ref":32132331321,"test_id":12347,"test_name":"discard values","test_comm":"test", "test_val": 1923}
I tried some alterations to the original solution, but without success. So, based on the question posted before, how could I achieve the same results with this new pattern?
PS: One important note: the lines in file A do not always have the same length.
Big thanks in advance.
EDIT:
After trying the solution posted by Wintermute, it seems it doesn't work with lines like this:
{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test", "null_test": "true","modifiers":[{"type":3,"value":31}{"type":4,"value":33}]}
The error received:
error: parse error: Expected separator between values at line xxx, column xxx
Parsing JSON with awk or sed is not a good idea for the same reasons that it's not a good idea to parse XML with them: sed works based on lines, and JSON is not line-based. awk works on vaguely tabular data, and JSON is not vaguely tabular. People don't expect their JSON tools to break when they insert newlines in benign places.
Instead, consider using a tool geared towards JSON processing, such as jq. In this particular case, you could use
jq -c -s 'group_by(.test_id) | map(.[0] + .[1]) | .[]' a.json b.json > c.json
Here jq slurps (-s) the input files into an array of JSON objects, groups these by test_id, merges them and unpacks the array. -c means compact output format, so each JSON object in the result ends up on a single line in the output.
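As for the EDIT above: that line fails because it is not valid JSON in the first place -- the two objects inside the modifiers array have no comma between them, which is exactly what jq's "Expected separator between values" message points at. With the comma added, the same pipeline handles it; a quick check on a trimmed-down line (hypothetical data):
$ echo '{"test_id":12345,"modifiers":[{"type":3,"value":31},{"type":4,"value":33}]}' | jq -c .
{"test_id":12345,"modifiers":[{"type":3,"value":31},{"type":4,"value":33}]}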

Special character in BigQuery command-line command

I have a BigQuery table with a column that has some values of \N (without the quotes). I want to write a query with a WHERE clause on that field.
This is my command "SELECT barcode FROM [mydataset1.mytab1] where barcode = '\N' and length(barcode) < 5"
The above command works perfectly on Windows and returns the records for which barcode is \N. But the same command returns an error on Linux. I think the special character needs to be written differently there.
I tried "SELECT barcode FROM [mydataset1.mytab1] where barcode = '/\N' and length(barcode) < 5" and this does not work either. Could you let me know how to modify the above query so that it works in a Linux environment?
I have attached the screenshots of the working and not working screens.
http://goo.gl/9p6cwD (Windows works)
http://goo.gl/DeAHij (Linux gives error)
Try using \\\ (three backslashes). For instance, this query works:
$ bq query "SELECT '\\\N';"

Powershell 4.0 - plink and table-like data

I am running PS 4.0 and the following command in interaction with a Veritas Netbackup master server on a Unix host via plink:
PS C:\batch> $testtest = c:\batch\plink blah#blersniggity -pw "blurble" "/usr/openv/netbackup/bin/admincmd/nbpemreq -due -date 01/17/2014" | Format-Table -property Status
As you can see, I attempted a "Format-Table" call at the end of this.
The resulting value of the variable ($testtest) is a string that is laid out exactly like the table in the Unix console, with Status, Job Code, Servername, Policy... all of that listed in order. But it populates the variable in PowerShell as just that: a vanilla string.
I want to use this in conjunction with a stored procedure on a SQL box, which would be TONS easier if I could format it into a table. How do I use PowerShell to tabulate it exactly as it is extracted from the Unix prompt via plink?
You'll need to parse it and create PS objects to be able to use the Format-* cmdlets. I do enough of this that I wrote this to help:
http://gallery.technet.microsoft.com/scriptcenter/New-PSObjectFromMatches-87d8ce87
You'll need to be able to isolate the data and write a regex to capture the bits you want.
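For instance, a minimal sketch of that approach (the column names, the regex, and the number of header lines to skip are all assumptions -- adjust them to nbpemreq's actual output):
# Capture the raw plink output as an array of strings, one per line.
$raw = c:\batch\plink blah#blersniggity -pw "blurble" "/usr/openv/netbackup/bin/admincmd/nbpemreq -due -date 01/17/2014"
# Skip the (assumed) header lines and turn each data row into an object.
$rows = $raw | Select-Object -Skip 2 | ForEach-Object {
    if ($_ -match '^\s*(?<Status>\S+)\s+(?<JobCode>\S+)\s+(?<Servername>\S+)\s+(?<Policy>\S+)') {
        [PSCustomObject]@{
            Status     = $Matches['Status']
            JobCode    = $Matches['JobCode']
            Servername = $Matches['Servername']
            Policy     = $Matches['Policy']
        }
    }
}
# Now the Format-* cmdlets (or Export-Csv, for the SQL side) work as expected.
$rows | Format-Table -Property Status, JobCode, Servername, Policy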