How to get the current account name in Google Colab

For example, if my current Colab instance is signed in as
myColab@gmail.com
Is it possible to get the name "myColab" in IPython?
What is the command to do this?
I want to do this because I may run two Colab instances that both generate files, and I want to tag those files with the account name so I know which account generated which files.

By default, the backends are not authenticated for any particular user. But if you've gone through the gcloud auth flow, you can retrieve the email address like so:
https://colab.research.google.com/drive/1VVWs_pcjjz2vg0H2Ti6-12FzcCojRF6a
The key snippet is:
import requests

# Ask gcloud for an access token, then query Google's tokeninfo endpoint for the email.
gcloud_token = !gcloud auth print-access-token
gcloud_tokeninfo = requests.get('https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=' + gcloud_token[0]).json()
print(gcloud_tokeninfo['email'])

Or just run the following:
!gcloud config get-value account
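Either way, once the account string is available you can use it to tag generated files, which was the original goal. A minimal sketch (the file name and tagging scheme are my own illustration), assuming gcloud auth has already run in this runtime:
# Capture the account from gcloud and strip the domain, e.g. 'myColab' from 'myColab@gmail.com'.
account = !gcloud config get-value account
name = account[0].split('@')[0]

# Tag any generated files with the account name so two Colab instances can be told apart.
with open(f'results_{name}.txt', 'w') as f:
    f.write('output from this Colab instance\n')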

I was trying to get the current user logged in via the GDrive authentication. I do not wish to push another authentication prompt just to record the name; I believe the OP was attempting the same thing.
My theory was: log in with Google Drive, create a file, then read the file's creator. This does not function as intended; it only returns root:root. But it might spark an idea in someone who knows more about Python and Colab than me.
If you create the file in the MyDrive directory, GDrive can see the file and says the owner is Me; hovering over Me shows the email account. So GDrive knows who made it. I just cannot figure out how to make Colab output the real file owner, even for files already existing in MyDrive. I assume it is something to do with the mount being symbolic, but there HAS to be a way.
Code
print("test", file=open("owner.txt", "a"))
from pathlib import Path
path = Path("owner.txt")
owner = path.owner()
group = path.group()
print(f"{path.name} is owned by {owner}:{group}")
Output
owner.txt is owned by root:root
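One way that might get past root:root, as a hedged sketch (my own assumption, not tested in the thread): ask the Drive API for the file's owners directly, instead of asking the POSIX layer, which only ever sees the FUSE mount's root user. It does require one auth prompt, which the comment above hoped to avoid:
from google.colab import auth
from googleapiclient.discovery import build

# One-time auth prompt; afterwards the Drive API client picks up the credentials.
auth.authenticate_user()
drive = build('drive', 'v3')

# Look up the file by name and ask for its owners (assumes owner.txt was created in MyDrive).
resp = drive.files().list(
    q="name = 'owner.txt'",
    fields='files(name, owners(displayName, emailAddress))',
).execute()
for f in resp.get('files', []):
    for owner in f.get('owners', []):
        print(f['name'], 'is owned by', owner.get('emailAddress'))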

Is there a way of passing a short string to a Google Colaboratory webapp notebook instance?

I'm experimenting with Google Colab as a mobile tool for pulling data from a public repository, using the webapp running on my phone.
I'd like to use this when I'm on the go, but one of the inputs I need is the lat-long of my current location.
Ideally I'd like to take the url that would open the colab notebook by itself and add a query parameter like so:
http://colab.research.google.com/drive/<somenotebook>?arbitrarykey=arbitraryvalue
and then, in the notebook, use a method like
val = env.readVar('arbitrarykey')
to read it back, as it is easy for me to build and then open such a URL.
Is there any such functionality or method for doing this?
PS My fallback is to write a one-off Android app, read my current location, dump it on the clipboard, and then manually paste it into some code block in my notebook, assigning it to a variable. It works, but it's not as elegant.
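Absent a query-parameter mechanism, that fallback can at least be kept tidy inside the notebook. A minimal sketch (the prompt and the lat,long format are arbitrary choices of mine):
# Paste the location string copied from the phone's clipboard, e.g. '51.5074,-0.1278'.
raw = input('Paste lat,long: ')
lat, lon = (float(x) for x in raw.split(','))
print('Using location:', lat, lon)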

Potential bug in GCP regarding public access settings for a file

I was conversing with someone from GCS support, and they suggested that there may be a bug and that I post what's happening to the support group.
Situation
I'm trying to adapt this TensorFlow demo ...
https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization
... to something I can use with images stored on my GCP account, substituting one of my images to run through the process.
I have the bucket set for allUsers to have public access, with a Role of Storage Object Viewer.
However, the demo still isn't accepting my files stored in GCS.
For example, this file is being rejected:
https://storage.googleapis.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg
That file was downloaded from the examples in the demo and then uploaded to my GCS bucket, and that link was used in the demo. But it's not being accepted. I'm using the URL from the Copy URL link.
Re: publicly accessible data
I've been following the instructions on making data publicly accessible.
https://cloud.google.com/storage/docs/access-control/making-data-public#code-samples_1
I've performed all of those operations from the console, but it still doesn't indicate public access for the bucket in question. So I'm not sure what's going on there.
Please see the attached screen of my bucket permissions settings.
So I'm hoping you can clarify if those settings look good for those files being publicly accessible.
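(For reference, the code equivalent of that console step, loosely following the linked code samples; this is a sketch assuming the google-cloud-storage client is installed and authorized for the project.)
from google.cloud import storage

# Grant read access on the bucket's objects to allUsers, mirroring the
# console's 'allUsers -> Storage Object Viewer' setting described above.
client = storage.Client()
bucket = client.bucket("01_bucket-02")
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({"role": "roles/storage.objectViewer", "members": {"allUsers"}})
bucket.set_iam_policy(policy)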
Re: Accessing the data from the demo
I'm also following this related article on 'Accessing public data'
https://cloud.google.com/storage/docs/access-public-data#storage-download-public-object-python
There are 2 things I'm not clear on:
If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?
If I do need to add this to the code from the demo, can you tell me how I can find these 2 parts of the code:
a. source_blob_name = "storage-object-name"
b. destination_file_name = "local/path/to/file"
I know the path of the file above (01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpeg), but don't understand whether that's the storage-object-name or the local/path/to/file.
And if it's either one of those, then how do I find the other value?
And furthermore, to make a bucket public, why would I need to state an individual file? That's making me think that code isn't necessary.
Thank you for clarifying any issues or helping to resolve my confusion.
Doug
If I've set public access the way I have, do I still need code as in the example on the 'Access public data' article just above?
No, you don't need to. I actually did some testing, and I was able to pull images from GCS whether or not they were set to public.
As discussed in this thread, what's happening in your project is that the image you are trying to pull from GCS has a .jpeg extension but is not actually a .jpeg; the actual image is a .jpg, which keeps TensorFlow from loading it properly.
See this test following the demo you mentioned, using the image from your bucket. Note that I used .jpg as the image's extension.
content_urls = dict(
    test_public='https://storage.cloud.google.com/01_bucket-02/Green_Sea_Turtle_grazing_seagrass.jpg'
)
Also tested another image from your bucket and it was successfully loaded in TensorFlow.
Most likely the problem is that your turtle image ends in .jpeg while your libraries are looking for .jpg. The errors you're seeing would be much more helpful for figuring out the problem.
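On the question's remaining point about source_blob_name and destination_file_name: source_blob_name is the object's path inside the bucket (everything after the bucket name), and destination_file_name is any local path where you want the copy saved. A minimal sketch using the names from the post (the local path is an arbitrary choice of mine), downloading anonymously since the bucket is meant to be public:
from google.cloud import storage

# Anonymous client: no credentials needed for a publicly readable bucket.
client = storage.Client.create_anonymous_client()
bucket = client.bucket("01_bucket-02")

# source_blob_name: the object path within the bucket (no bucket name, no leading slash).
blob = bucket.blob("Green_Sea_Turtle_grazing_seagrass.jpeg")

# destination_file_name: wherever you want the local copy.
blob.download_to_filename("/tmp/Green_Sea_Turtle_grazing_seagrass.jpeg")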

How to get remote state file for my latest Terraform (Enterprise) run?

I want to get the latest run for my workspace and grab its Terraform state file. We are using Terraform Enterprise.
I made the request below and got the payload:
https://ptfe-dev.company.com.au/api/v2/organizations/organization-name/workspaces/workspace1
I get output with information such as the workspace ID, but that is not what I want.
From that output I took the workspace ID and ran the query below:
https://ptfe-dev.companyname.com.au/api/v2/organizations/organization-name/workspaces/workspace1/current-state-version
However, the above query returns:
Sorry, the page /api/v2/organizations/organization-name/workspaces/workspace1/states/sv-DKBZ2AFoV5mwY4kP could not be found.
This error could mean one of two things:
The resource doesn't exist
The resource exists, but you do not have permission to access it
If someone has linked you to this resource, ensure that they have given you proper permissions to access it.
I can, however, access the same resource (workspace1) state file via TFE UI.
Can anyone please advise me what I am doing wrong here?
In the payload from the first API call, you'll get the reference to the endpoint to get the state.
<tfe_host>/api/v2/workspaces/<workspace_id>/current-state-version
To include outputs, add the include=outputs parameter like this:
<tfe_host>/api/v2/workspaces/<workspace_id>/current-state-version?include=outputs
You only need the organization in the path to look up the workspace itself. When you fetch the state, you reference the workspace ID directly and don't need to include the organization.
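Putting the two calls together, a minimal sketch in Python (the host, organization, and workspace names are placeholders, and the environment variables are my own convention; hosted-state-download-url is the attribute the state-version payload carries):
import os

import requests

host = os.environ["TFE_HOST"]  # e.g. your ptfe-dev hostname
headers = {
    "Authorization": f"Bearer {os.environ['TFE_TOKEN']}",
    "Content-Type": "application/vnd.api+json",
}

# Step 1: look up the workspace; the organization is only needed here.
ws = requests.get(
    f"https://{host}/api/v2/organizations/organization-name/workspaces/workspace1",
    headers=headers,
).json()
workspace_id = ws["data"]["id"]

# Step 2: fetch the current state version by workspace ID alone.
sv = requests.get(
    f"https://{host}/api/v2/workspaces/{workspace_id}/current-state-version",
    headers=headers,
).json()

# Step 3: download the raw state file from the URL in the payload.
state_url = sv["data"]["attributes"]["hosted-state-download-url"]
state = requests.get(state_url, headers=headers).json()
print(state["terraform_version"])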

Passing secret variables to Google Colaboratory notebook

Is there a secure way for public (shared with someone) Google Colaboratory notebooks to import security sensitive variables like access tokens?
I have a notebook with code like this one:
import json

import requests

TOKEN = '7o6kti1TW7ebwXXG6ZAdVkS08MzDBLG00oXTCNTYEbB5A'
items = json.loads(
    requests.get('https://someservice.com/api/items?access_token={}'.format(TOKEN)).text
)
I want to share the notebook with other users so they are able to run and edit code cells, but I want to move the TOKEN variable definition to some hidden place. Is there a way to achieve that?
One option is to assign the token at invocation time using getpass.
Here's an example:
https://colab.research.google.com/drive/1bjBVx6pokBm_A1em-XdURQAmemlUAgYz
Here is the code inside Bob Smith's linked Colab:
from getpass import getpass

token = getpass('Enter token here')
print('token is', token)
Note to moderators: I tried to edit the original answer but there's too many queued edits. I also tried to add as a comment but the formatting put it on one line making it hard to read.
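Putting the two snippets together, a sketch of the original request with the token prompted at run time instead of hard-coded (the service URL is the one from the question):
import json
from getpass import getpass

import requests

# Prompt at run time so the token never appears in the shared notebook.
TOKEN = getpass('Enter token here')
items = json.loads(
    requests.get('https://someservice.com/api/items?access_token={}'.format(TOKEN)).text
)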

Hybris hMC login configuration

Forgive me if this is a complete newbie question. At work, they are trying to onboard me into using hybris (as I am trying myself). While the documentation on the hybris wiki site is not well organized, most of the information is there. I am, however, having some trouble finding how to change the default hMC login credentials.
When I rebuilt the server, it forced me to reinitialize the database, and that changed all of the logins. I managed to find the CMS login, but I am curious where the hMC admin login is stored, as it appears to have changed, and I need to find it. I know that hybris heavily leverages Spring, and I searched the .xml files for a password, but I am not finding what I need.
Any help would be greatly appreciated!
They exist in different locations in .impex files (import files). When you initialize the store for the first time, those .impex files get imported into your database.
An example of one location:
ext-template\yacceleratorinitialdata\resources\yacceleratorinitialdata\import\common\user-groups.impex
As there are multiple starter stores that come with hybris (accelerator, telcoaccelerator, and powershop b2b), I suggest you search as text for username or password in all files with extension .impex, then change the ones that belong to your store.
I did a quick search, not only in .impex files; the hMC admin/nimda default seems to come from:
/bin/platform/project.properties
# Login and password for the automatic logging into the hMC
hmc.default.login=admin
hmc.default.password=nimda
Hope that helped you.
Thanks
When searching the out-of-the-box impex files, search against ".impex" and ".csv", as many of the impex scripts are written as CSV as well. In rare cases, you may also find *.txt files catering to impex scripts.
You will not be able to find an impex file where the "admin" user's credentials are maintained. The password is "nimda" by default and may be changed via an impex file (see the sketch at the end of this answer) or simply in the hMC under Users.
/bin/platform/project.properties
Property files have no impact on the user credentials; the mentioned property file just pre-fills the login form on the JSP page with default values. It has nothing to do with the current/changed credentials.
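For reference, changing the admin password via impex usually looks something like the sketch below (an illustration, not a file shipped out of the box; the translator class is the standard hybris password translator, and the password value is a placeholder):
# Hypothetical impex snippet: set a new password for the admin user.
# The '*:' prefix selects plain-text encoding; replace myNewPassword.
UPDATE Employee;uid[unique=true];@password[translator=de.hybris.platform.impex.jalo.user.UserPasswordTranslator]
;admin;*:myNewPassword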