New Windows installation, CLI not picking up .bigqueryrc - google-bigquery

I had a Windows 10 machine with bq, gcloud, and gsutil all working great. I had to move to a new machine and am setting all of this back up. I cannot get bq to recognize the entries in my .bigqueryrc file, which was copied from my working machine. On my working machine I am not using the BIGQUERYRC environment variable, just HOME, i.e. ~/.bigqueryrc.
I say it does not seem to recognize the file because, if I introduce a random character into it, I get an error about my .bigqueryrc file.
Let me first show that bq is otherwise operating as expected:
C:\Users\boyer>bq ls --project_id=broad-tapestry-sbx-boyer-14
datasetId
---------------
Broad_DataGov
Broad_EDW
Broad_Kitchen
Broad_Lake
Broad_Tableau
Broad_Utils
SHELL> bq ls brings back nothing. The contents of my .bigqueryrc file are as follows:
credential_file = C:\Users\boyer\AppData\Roaming\gcloud\legacy_credentials\boyer#broadinstitute.org\singlestore_bq.json
project_id = broad-tapestry-sbx-boyer-14
[query]
--use_legacy_sql=false
Here is where my .bigqueryrc file is:
C:\Users\boyer>ls -ls ~/.bigqueryrc
1 -rwx------+ 1 CHARLES+boyer CHARLES+boyer 198 Dec 22 12:48 /cygdrive/c/Users/boyer/.bigqueryrc
So, I try this and nothing happens either:
C:\Users\boyer>bq ls --bigqueryrc=c:/users/boyer/.bigqueryrc
C:\Users\boyer>bq ls --bigqueryrc=c:\users\boyer\.bigqueryrc
C:\Users\boyer>
But here is the error from a change I purposely introduced, to check whether bq is actually looking at the file; it seems it is finding it:
C:\Users\boyer>bq ls --bigqueryrc=c:\users\boyer\.bigqueryrc
Unknown flag x found in bigqueryrc file in section global
The error above was introduced by adding a random 'x' character to the top of the file:
x
credential_file = C:\Users\boyer\AppData\Roaming\gcloud\legacy_credentials\boyer#broadinstitute.org\singlestore_bq.json
project_id = broad-tapestry-sbx-boyer-14
[query]
--use_legacy_sql=false
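One diagnostic worth trying (a hedged sketch, not a confirmed fix): on Windows, bq is a Python tool and typically resolves ~ via environment variables such as USERPROFILE or HOME, so it may be looking at a different home directory than the Cygwin ~ shown above. Comparing the two, and pointing --bigqueryrc at the file explicitly, narrows it down:
REM Check which directory the Windows-side tools treat as home (cmd.exe)
echo %USERPROFILE%
echo %HOME%
REM Point bq at the file explicitly, as attempted above
bq --bigqueryrc=%USERPROFILE%\.bigqueryrc ls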

Related

How do I get Source Extractor to Analyze an Image?

I'm relatively inexperienced in coding, so right now I'm just familiarizing myself with the basics of how to use SE, which I'll need to use in the near future.
At the moment I'm trying to get it to analyze a FITS file on my computer (which is a Mac). I'm sure this is something obvious, but I haven't been able to get it to do that. Following the instructions in Chapters 6 and 7 of Source Extractor for Dummies (linked below), I input the following:
sex MedSpiral_20deg_Serl2_.45_.fits.fits -c configuration_file.txt
And got the following error message:
WARNING: configuration_file.txt not found, using internal defaults
----- SExtractor 2.19.5 started on 2020-02-05 at 17:10:59 with 1 thread
Setting catalog parameters
ERROR: can't read default.param
I then tried entering parameters manually:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c configuration_file.txt -DETECT_TYPE CCD -MAG_ZEROPOINT 2.5 -PIXEL_SCALE 0 -SATUR_LEVEL 1 -SEEING_FWHM 1
And got the same error message. I tried referencing default.sex directly:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c default.sex
And got the same error message again, substituting "configuration_file.txt not found" with "default.sex not found" (I checked that default.sex is on my computer; it is). The same thing happened when I tried to use default.param.
Here's the link to SE for Dummies (Chapter 6 begins on page 19):
http://astroa.physics.metu.edu.tr/MANUALS/sextractor/Guide2source_extractor.pdf
If you run the command "sex MedSpiral_20deg_Ser12_.45_fits.fits -c default.sex" from within the config folder (inside the sextractor folder), it will work, because that is where default.sex and default.param live.
However, how can I run the sextractor command from any folder on my computer?
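A hedged sketch of two common workarounds (the /path/to/sextractor/config paths below are placeholders for wherever your config folder actually lives). SExtractor resolves default.sex, default.param, and default.conv relative to the current directory, so you can either pass absolute paths on the command line or copy the files next to your image first:
# Option 1: reference the configuration files by absolute path
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c /path/to/sextractor/config/default.sex \
    -PARAMETERS_NAME /path/to/sextractor/config/default.param \
    -FILTER_NAME /path/to/sextractor/config/default.conv
# Option 2: copy the defaults into whatever folder you are working in
cp /path/to/sextractor/config/default.sex .
cp /path/to/sextractor/config/default.param .
cp /path/to/sextractor/config/default.conv .
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c default.sex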

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on a CI/CD script to deploy a complex environment into another environment. We have multiple technologies involved, and I want to optimize this script because it is taking too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment of each application in a specific project. I have tried to find a quick way to do so, but so far I have only found this solution:
oc rollout history dc -n <Project_name>
This gives me the following output:
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete" (a rough sketch of this parsing follows the list below).
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
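That parsing step looks roughly like this (a hedged awk sketch that assumes exactly the column layout shown above):
oc rollout history dc -n <Project_name> | awk '
  /^deploymentconfigs/ { name = $2 }                 # remember which dc the rows belong to
  $2 == "Complete" && $1+0 > latest[name]+0 { latest[name] = $1 }
  END { for (n in latest) print n, ":", latest[n] }'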
Then, for each application and each revision, I do:
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest Complete revision (neither running nor failed).
This gives me the output with the information I need: the version of the EAR and the version of the configuration that was used to build the image for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I need?
Unless I am mistaken, it looks like there is no one-liner that returns this information for the currently running and accessible application.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets the list of deployments, feeds it to awk to extract the name ($1) and revision ($2), builds the rollout-history command for each, and finally sends the result to standard input to execute it. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
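For completeness, a hedged xargs variant of the same idea (assuming the same two columns from oc get dc):
oc get dc -a --no-headers | awk '{print $1, $2}' \
  | xargs -n2 sh -c 'oc rollout history dc "$0" --revision="$1"'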
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea, with expressions like .spec.template.spec.containers[0].env you can reach for specific variables, labels, etc. Unfortunately the jsonpath output is not available with oc rollout history.
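One more hedged example along the same lines: latestVersion is a standard DeploymentConfig status field, so the current revision number can be pulled per dc without any text parsing:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.latestVersion}{"\n"}{end}'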
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks

phpseclib: "CAT" command works randomly

I have a script that fetches data from a site.
Basically it is divided into two sections:
1. execute commands on a remote machine and save the output to a file
2. read the contents of that file
For some reason it only works some of the time. Section 1 always works (I checked the remote machine and found the files). The problem is related to cat.
I've added to my code the option to dump the results of the "cat" command to a file.
Sometimes it has info, sometimes it doesn't. However, the file is always created!
The nodes I'm querying are the same. The execution timeout of Section 1 on the remote server is 11-12 secs.
Thanks in advance.
$ssh->exec("rm toolkit/mybatch/$newfileid");
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/mybatch");
$ssh->setTimeout(15);
echo $ssh->exec('cat ' . escapeshellarg("toolkit/mybatch/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/mybatch/traffic.txt');
$traffic = $ssh->exec("cat toolkit/mybatch/traffic.txt");
$traffic = substr($traffic,21,-16);
$ssh->disconnect();
echo $traffic;
I've updated the code above. It worked several times, but after deleting the old files it only creates "traffic.txt", and sometimes that file has info in it, sometimes not.
Also, the file "traffic.txtescapeshellarg" is not created anymore, so I was forced to go back to my previous solution and read "traffic.txt".
So, the solution was very simple. All I needed to do was add a timeout.
The final code:
$ssh->exec("rm toolkit/traffic/$newfileid");
$ssh->setTimeout(0);
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/traffic");
sleep(8);
$ssh->exec('cat ' . escapeshellarg("toolkit/traffic/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/traffic/traffic.txt');
$traffic = $ssh->exec("cat toolkit/traffic/traffic.txt");
$traffic = substr($traffic,21,-83);
echo $traffic;

BigQuery bq command with asterisk (*) doesn't work in Compute Engine

I have a directory with a file named file1.txt
And I run the command:
bq query "SELECT * FROM [publicdata:samples.shakespeare] LIMIT 5"
On my local machine it works fine, but in Compute Engine I receive this error:
Waiting on bqjob_r2aaecf624e10b8c5_0000014d0537316e_1 ... (0s) Current status: DONE
BigQuery error in query operation: Error processing job 'my-project-id:bqjob_r2aaecf624e10b8c5_0000014d0537316e_1': Field 'file1.txt' not found.
If the directory is empty it works fine. I'm guessing the asterisk is being expanded into the file name(s) before the query runs, but I don't know why.
Apparently the bq command, which is located at /usr/bin/bq, is the following wrapper script:
#!/bin/sh
exec /usr/lib/google-cloud-sdk/bin/bq ${@}
Because ${@} is unquoted, the wrapper shell re-expands the asterisk against the files in the current directory.
As a current workaround I'm calling /usr/lib/google-cloud-sdk/bin/bq directly.
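For reference, a hedged sketch of what a safe wrapper would look like (editing /usr/bin/bq by hand may be undone by SDK updates, so calling the inner binary directly remains a reasonable workaround):
#!/bin/sh
# Quoting "$@" passes every original argument through verbatim, so the *
# inside the SQL string is no longer glob-expanded by this wrapper shell.
exec /usr/lib/google-cloud-sdk/bin/bq "$@"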

Find UID from EUID

I have an AIX 5.3 host that we log in to, and when needed we use the pbrun tool to become root. The question is: how do I find out from the command line which user I logged in as before becoming this privileged/root user? In other words, if I am not wrong, how do I find my UID from my current EUID? I tried whoami and who am i; both give root as output.
"who am i" is coming from utmp. If utmp shows you as root then your pbrun tool must be changing it from what it was when you first logged in.
You could do:
ps l $$
which prints out a line with the PID and PPID. Take the PPID and do that again:
ps l <PPID>
The UID column is your numeric user id. If the PPID shows as 1, then pbrun did an exec rather than a fork / exec (which implies that it is a function or alias within your shell). In that case, you could revert to "last", which will show who logged in to which tty at what time.
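The two ps steps can be combined into one hedged one-liner (assuming AIX ps accepts the standard -o and -p options):
# Print the numeric UID of this shell's parent process in one step
ps -o uid= -p $(ps -o ppid= -p $$)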
======
Another idea. You can get the terminal the program is executing on via ps. This is called the controlling terminal. You can also get it via the "tty" command:
tty
/dev/pts/18
Now, feed that to "last" but remove the leading /dev/ part and take the first hit:
last pts/18 | head -1
myname pts/18 myhost.mydomain.com Nov 14 10:22 still logged in.
That is the last person to log into that particular terminal. Will that work?
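Combined into a single hedged command (the same two steps as above, just piped together):
# Strip the leading /dev/ from tty's output and ask last for the newest entry
last $(tty | sed 's,^/dev/,,') | head -1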