I currently have gsutil installed on a server to access my GCS buckets. I followed the instructions in the section 'How to convert gsutil to use OAuth 2.0' at https://cloud.google.com/storage/docs/gsutil_install
The intermediate steps in those instructions require copying and pasting a URL into a browser to generate a code, which you then have to enter back at the terminal. You also need to enter proxy server details (if any).
I am looking for ways to automate this setup and configuration process for gsutil.
Any ideas/references/suggestions/comments are welcome.
Thanks.
Can you say more about what you're trying to do? Are you looking to create distinct credentials for each of a set of users, or are you trying to set up gsutil running on multiple machines all as part of an application that authenticates as that application to Google Cloud Storage?
For the former, you need users to set up their own credentials. The web-based dialog for approving the creation of OAuth2 credentials was designed to make it unlikely that a customer could grant long-lasting credentials without being aware they were doing so (for security reasons).
For the latter you should use a service account (see https://cloud.google.com/storage/docs/authentication#service_accounts). You create those credentials once and then deploy them on your production machines along with gsutil - which is a valid security approach because all instances of those machines are authenticating on behalf of an application, not distinct users.
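For example, once a service account key exists, the setup can be made fully non-interactive by shipping a pre-built boto configuration file alongside gsutil. A minimal sketch (the file paths, project ID, and proxy host below are placeholders, not values from your environment):

    # ~/.boto (or a file pointed to by the BOTO_CONFIG environment variable),
    # deployed together with the service account's JSON key
    [Credentials]
    gs_service_key_file = /opt/myapp/service-account.json

    [Boto]
    # only needed if the server sits behind a proxy
    proxy = proxy.example.com
    proxy_port = 3128

    [GSUtil]
    default_project_id = my-project-id

If gsutil was installed as part of the Cloud SDK instead, running gcloud auth activate-service-account --key-file=/opt/myapp/service-account.json achieves the same thing without editing the boto file.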
I want to ask a conceptual question and get some advice about a possible system design.
The plan is basically to authenticate specific Gmail users so they can use my serverless backend application. I'm considering either forwarding users directly to my VPC, or authenticating them on my hosting provider's server and then forwarding them to the VPC (or directly to the Cloud Run service?).
I'd be really glad if someone experienced could walk me through the concepts and suggest design ideas for this.
As commented by @John Hanley, your question includes concepts that do not exist.
To have Cloud Run authenticate specific users of your serverless backend application, work through the following possible system designs:
1) First, identify the IAM roles associated with Cloud Run and the permissions contained in each role.
2) Design how to secure Cloud Run by limiting access to the service with Identity-Aware Proxy (IAP).
3) Design how to create a Serverless VPC Access connector, and how to use IAP for TCP forwarding within a VPC Service Controls perimeter.
4) Implement, step by step, how to use IAP to secure access without a Virtual Private Network (VPN). IAP simplifies implementing a zero-trust access model and takes less time to set up than a VPN, giving remote workers both on-premises and in cloud environments a single point of control for managing access to your apps.
The solution to what I had in mind could be accomplished with Identity-Aware Proxy.
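For illustration, a minimal sketch of the IAM side of this: removing public access from the Cloud Run service and granting roles/run.invoker only to the specific Google accounts that should be allowed to call it (the service name, region, and email address are placeholders):

    # require authentication: drop any public binding, then allow named users only
    gcloud run services remove-iam-policy-binding my-backend \
        --region=us-central1 \
        --member="allUsers" \
        --role="roles/run.invoker"

    gcloud run services add-iam-policy-binding my-backend \
        --region=us-central1 \
        --member="user:alice@gmail.com" \
        --role="roles/run.invoker"

Callers then have to present an identity token (for example, Authorization: Bearer $(gcloud auth print-identity-token) during testing), or you place IAP in front of the service as described in the steps above.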
I have some Cloud Foundry Node.js apps in IBM Cloud (Bluemix) and I'm facing some issues with temp folders. It would be easier if I could access the app folders directly (like my .tmp) to debug what is being saved there. I can only think of SSH, but I'd prefer a visual tool or a connected service. Any ideas?
You can use graphical tools to browse the files by utilizing scp or sftp. I use FileZilla to browse the files. The instructions for accessing the files are similar for all Cloud Foundry providers, including IBM Cloud:
1) Log in to IBM Cloud using the CLI: ibmcloud login
2) Set the org and space: ibmcloud target --cf
3) Obtain the GUID for the app: ibmcloud cf app YOURAPP --guid
4) Look up the ssh endpoint: ibmcloud cf curl /v2/info
5) Issue a one-time password for ssh access: ibmcloud cf ssh-code
With that, use a username like cf:<GUID from step 3>/0 (the 0 could be another number, depending on how many instances you have) and the one-time password to log in. The host is the one listed as app_ssh_endpoint, at the port shown there. You will likely need to prefix it with the protocol, e.g. sftp://.
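Put together, the sequence looks roughly like this (YOURAPP is a placeholder, and HOST:PORT is whatever app_ssh_endpoint showed):

    ibmcloud login
    ibmcloud target --cf
    ibmcloud cf app YOURAPP --guid     # note the GUID printed here
    ibmcloud cf curl /v2/info          # note the app_ssh_endpoint value, i.e. HOST:PORT
    ibmcloud cf ssh-code               # prints the one-time password

    # command-line check; in FileZilla use host sftp://HOST, port PORT,
    # user cf:GUID/0 and the one-time code as the password
    sftp -P PORT cf:GUID/0@HOST

The code from ssh-code is single-use, so request a fresh one each time you connect.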
I'm trying to secure an HDFS cluster on open source DC/OS, but it seems that's not an easy thing to do.
The problem I see with HDFS is that it uses the username of the current system user, so without any form of authentication anyone can create a user with a particular username and get superuser permissions on the cluster.
So I need some form of authentication. IP-based authentication would be fine (only clients with certain IPs may connect to HDFS), but I couldn't find an option to enable it.
Setting up Kerberos for HDFS is not an option, because running yet another service just to support another service (and so on) would only create tons of work.
If enabling any viable form of security is impossible, is there another HDFS-like service for DC/OS I could use? I need some HA storage to fetch config files and occasionally JARs from artifact URIs to run services. I also need a place to store Parquet files from Spark Streaming.
The version of DC/OS HDFS is 2.6.x.
Unfortunately it seems that Kerberos is the only real form of authentication in HDFS. Without this, HDFS will trust every user.
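For reference, in stock Hadoop that switch lives in core-site.xml; whether and how the DC/OS HDFS package exposes these options is a separate question, so treat this only as a sketch of what enabling Kerberos means on the HDFS side:

    <!-- core-site.xml: the default is "simple", i.e. HDFS trusts whatever
         username the client process reports -->
    <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
    </property>
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>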
I have a web application and I would like to give users the ability to SSH into the Linux server without having to copy all of the users' credentials from the PostgreSQL DB onto the server. Instead, I would like to use those credentials directly for SSH. I think this is possible using Linux Pluggable Authentication Modules (PAM), but I don't know where to start and would like some help.
You will need to set up pam_pgsql and nss_pgsql so that users from the database become first-class citizens (local users). They will then be able to SSH in just as easily as users defined in the passwd/shadow/group files.
Start by installing those packages and reading their configuration manuals. Remember: PAM is for authentication, NSS is for name-to-UID (and back) translations.
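As a sketch of where the pieces go (module options and exact file names come from the pam_pgsql/nss-pgsql documentation for your distribution, so treat the paths below as assumptions):

    # /etc/pam.d/sshd -- let sshd accept accounts validated by pam_pgsql
    auth     sufficient  pam_pgsql.so
    account  sufficient  pam_pgsql.so

    # /etc/nsswitch.conf -- make database users resolvable as local users
    passwd:  files pgsql
    group:   files pgsql
    shadow:  files pgsql

Both modules read their PostgreSQL connection settings and the SQL queries they run from their own configuration files (typically /etc/pam_pgsql.conf and /etc/nss-pgsql.conf).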
So I'm setting up a dedicated server using Debian 5 Lenny. I will be using some Atlassian Tools (JIRA, Confluence, Bamboo, and Fisheye). I want to use a local LDAP server to store information for the users that will be accessing these software titles, so that they can use one set of credentials to log in.
I also want webmail users to be configured using LDAP.
However, this is a small operation: three people. That's why all of the software, including the LDAP server, will be on the same machine.
That said, is it safe to store user credentials (including passwords) in LDAP without using Kerberos? I'm confused as to when Kerberos should be used.
Hypothetically, let's say I had two servers on a subnet. Server A receives requests from the outside world for the Atlassian tools, and communicates internally with the LDAP server on Server B. In that case, would I use Kerberos?
When do I use Kerberos? When do I not?
I am not setting up anything like Active Directory. No Samba either. Users do not need to log in to a domain (with access to files on the domain); they just need to log in to web apps. But if I were running LDAP on its own dedicated machine, then I might want Kerberos?
:confuzzled: :(
-Sam
The simplest possible answer is yes: it is possible to store user names, user IDs, and passwords without using Kerberos, and in fact directory services accessed via LDAP are an excellent tool for storing this sort of authentication and authorization information.
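For illustration, a user entry in such a directory might look like the following LDIF (the DIT layout and values are made up; the attributes come from the standard inetOrgPerson schema):

    dn: uid=sam,ou=people,dc=example,dc=com
    objectClass: inetOrgPerson
    uid: sam
    cn: Sam Example
    sn: Example
    mail: sam@example.com
    # store a hash, never the clear-text password; on OpenLDAP you can
    # generate one with slappasswd
    userPassword: {SSHA}...

The Atlassian tools and the webmail server then verify a login by binding to the directory with the user's DN and password, so the main safety requirement is to protect that bind with TLS (ldaps:// or STARTTLS) rather than sending passwords in the clear over the network.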
Update:
In my opinion, if you do choose an open source server, you will find OpenDS to be superior to OpenLDAP or Apache Directory.
Basically, if you have Kerberos, you do not need any directory server. If you aren't in a corporate environment and are looking for an identity-management store, you should definitely go for a directory server like OpenLDAP or Apache Directory. Kerberos requires a correctly set up DNS server and NTP server, which might be way too much for your situation. And even if you did set it up, those lazy morons at Atlassian still have not implemented Kerberos support in their products, so you couldn't use it with them anyway.
I just noticed that there are only three of you; maybe a simple database setup with MySQL would suffice instead of running a full-blown directory server?