I'm looking for guidance on how to securely use Azure Storage in a public-facing production environment.
My simplest scenario is multiple Windows 8 Store clients uploading images to Azure. The account key is stored in app.config.
Is it ok to distribute the account key as part of my mobile application?
Or should I have a backend service that creates shared access signatures for the container / blob?
Thanks in advance.
Sharing your account key in your mobile application is not desirable, because the clients get complete access to your account and can view or modify other data. Shared Access Signatures are useful in such cases, as you can delegate access to specific storage account resources: you grant access to a resource for a specified period of time, with a specified set of permissions. In your case, you want to grant only permission to write blob content. You can find more details about SAS and how to use it here - http://msdn.microsoft.com/en-us/library/windowsazure/ee395415.aspx
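For illustration, here is a minimal sketch of the backend-service approach using the current Python SDK (azure-storage-blob v12); the account, container, and blob names are placeholders, and your Windows Store client would simply PUT the image to the returned URL:

# Minimal sketch: a backend endpoint hands out a short-lived, write-only SAS
# so clients never see the account key. Account/container names are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

ACCOUNT_NAME = "myaccount"          # hypothetical
ACCOUNT_KEY = "<kept-server-side>"  # never shipped to clients
CONTAINER = "uploads"               # hypothetical

def make_upload_sas(blob_name: str) -> str:
    """Return a URL the client can PUT the image to within the next hour."""
    sas = generate_blob_sas(
        account_name=ACCOUNT_NAME,
        container_name=CONTAINER,
        blob_name=blob_name,
        account_key=ACCOUNT_KEY,
        permission=BlobSasPermissions(create=True, write=True),  # write-only
        expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    return f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}/{blob_name}?{sas}"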
Related
I want to make sure I understand how ImageFlow.NET server works with images stored on a private Azure Blob Storage container.
Currently, we access images directly from Azure Blob Storage, and we need to create a SAS token for images to be available in our frontend apps -- including mobile apps.
Our primary interest in ImageFlow.NET server is resizing images on demand. Would we still need to generate a SAS token for each image if we use ImageFlow.NET server to handle images for us?
For example, if we were to request a downsized version of image myimage.jpg, which is stored on Azure Blob Storage, do we still need to generate a SAS token or will ImageFlow server simply pull the image and send it to the requesting app without a SAS token?
In the default Azure plugin setup, Imageflow authenticates with Azure using the configured credentials to access protected blobs, but clients themselves do not need a SAS token. Imageflow's own access can be restricted via Azure and by configuring the list of allowed containers. Imageflow.NET Server also has an easy API if you need to change this behavior or hook up a different blob storage provider or design.
Often, you need to have authorization for client/browser access as well as for Imageflow getting to blob storage. You can use any of the existing ASP.NET systems and libraries for this as if you're protecting static files or pages, or you can use Imageflow's built-in signing system that is actually quite similar to SAS tokens.
You can configure Imageflow to require a signature be appended to URLs. There's a utility method for generating those.
Then it's on you to only give those URLs to users who are allowed to access them.
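Imageflow's helper method is the simplest route, but as a rough illustration of the general idea (this is not Imageflow's actual API, just a generic sketch of HMAC-signed URLs with an assumed shared secret):

# Generic illustration of request signing (not Imageflow's own helper):
# the URL generator and the server share a secret; clients only ever see
# finished, signed URLs.
import hashlib
import hmac

SECRET = b"shared-signing-secret"  # hypothetical

def sign_url(path_and_query: str) -> str:
    sig = hmac.new(SECRET, path_and_query.encode(), hashlib.sha256).hexdigest()
    sep = "&" if "?" in path_and_query else "?"
    return f"{path_and_query}{sep}signature={sig}"

def verify(path_and_query: str, signature: str) -> bool:
    expected = hmac.new(SECRET, path_and_query.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# e.g. sign_url("/images/myimage.jpg?width=400")
#   -> "/images/myimage.jpg?width=400&signature=ab12..."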
Essentially, Imageflow supports any client authentication/authorization system you want to add to the app.
If you need something customized between Imageflow and Azure, that's also easy to customize (In fact, there's a single file adapter in the example project that implements a different approach for cases where you don't want to limit which buckets Imageflow accesses).
We are building a small web-UI using React that will be served up by GCP App Engine (standard). The UI will display a carousel of images along with some image metadata to our client's employees when they click on a link inside of their internal GIS system. We are looking to authenticate these calls since the App Engine endpoint will be exposed publicly, and we are hoping to use a GCP service account private key that the client will use to create a time-limited JSON Web Token that gives temporary access to the GIS user when they open the web-UI. We are following this GCP documentation. In summary:
We create a new service-account with necessary IAM permissions in GCP along with a key
We share the private key with the client, which they then use to sign a JSON Web Token that is passed in the call to our endpoint when a user accesses our web-UI from their GIS system
Call is authenticated by GCP backend (ESP/OpenAPI)
Question: is this a recommended approach for an external system accessing GCP resources, or is there a better pattern more applicable to this type of situation (an external system accessing a GCP resource)?
I believe this is the recommended approach for your use case.
It matches the pattern described in the official documentation for authentication between services with Cloud Endpoints: the external caller signs a short-lived JWT with the service account's private key, and ESP validates the token against the service account before forwarding the request to your backend.
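As a sketch of what the caller's side of that flow looks like with the google-auth Python library (the key file path, audience, and endpoint URL are assumptions that must match your OpenAPI spec):

# Sketch: sign a short-lived JWT with the shared service-account key and
# send it as a Bearer token to the ESP-protected endpoint.
import json
import time

import google.auth.crypt
import google.auth.jwt
import requests

SA_KEYFILE = "service-account.json"                  # the key you shared with the client
AUDIENCE = "my-api.endpoints.my-project.cloud.goog"  # must match x-google-audiences

def make_jwt() -> str:
    sa_email = json.load(open(SA_KEYFILE))["client_email"]
    now = int(time.time())
    payload = {
        "iat": now,
        "exp": now + 3600,  # time-limited, as described above
        "iss": sa_email,
        "sub": sa_email,
        "aud": AUDIENCE,
    }
    signer = google.auth.crypt.RSASigner.from_service_account_file(SA_KEYFILE)
    return google.auth.jwt.encode(signer, payload).decode()

resp = requests.get(
    "https://my-api.endpoints.my-project.cloud.goog/images",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {make_jwt()}"},
)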
O.k. my fellow devops and coders. I have spent the last week trying to figure this out with Google (GCP) Cloud Storage objects. Here is my objective.
The solution needs to be lightweight as it will be used to download images inside a Docker image, hence the curl requirement.
The GCP bucket and object needs to be secure and not public.
I need a "long" lived ticket/key/client_ID.
I have tried the OAuth 2.0 setup that Google's documentation mentions, but every time I try to set up an OAuth 2.0 key I do not get the option for "offline" access. And to top it off, it requires you to put in the source URLs that will be making the auth request.
Also, Google Cloud Storage does not support the key= parameter like some of Google's other services. So here I have an API key for my project as well as an OAuth JSON file for my service account, and they are useless.
I can get a curl command to work with a temporary OAuth bearer token, but I need a long-term solution for this.
RUN curl -X GET \
-H "Authorization: Bearer ya29.GlsoB-ck37IIrXkvYVZLIr3u_oGB8e60UyUgiP74l4UZ4UkT2aki2TI1ZtROKs6GKB6ZMeYSZWRTjoHQSMA1R0Q9wW9ZSP003MsAnFSVx5FkRd9-XhCu4MIWYTHX" \
-o "/home/shmac/test.tar.gz" \
"https://www.googleapis.com/storage/v1/b/mybucket/o/my.tar.gz?alt=media"
A long-term key/ID/secret that will allow me to download a GCP bucket object from any location.
The solution needs to be lightweight as it will be used to download
images inside a docker image, hence the curl requirement.
This is a vague requirement. What is lightweight? No external libraries, everything written in assembly language, must fit in 1 KB, etc.
The GCP bucket and object needs to be secure and not public.
This is a normal requirement. With some exceptions (static file storage for websites, etc.) you want your buckets to be private.
I need a "long" lived ticket/key/client_ID.
My advice is to stop thinking "long-term keys". The trend in security is to implement short-term keys. In Google Cloud Storage, seven days is considered long-term. 3600 seconds (one hour) is the norm almost everywhere in Google Cloud.
For Google Cloud Storage you have several options. You did not specify the environment, so I will cover user credentials, service account credentials, presigned URLs, and IAM-based access.
User Credentials
You can authenticate with User Credentials (e.g. username@gmail.com) and save the Refresh Token. Then, when an Access Token is required, you can generate one from the Refresh Token. In my website article about learning the Go language, I wrote a program on Day #8 which implements Google OAuth, saves the necessary credentials, and creates Access Tokens and ID Tokens as needed with no further "login" required. The comments in the source code should help you understand how this is done. https://www.jhanley.com/google-cloud-and-go-my-journey-to-learn-a-new-language-in-30-days/#day_08
This is the choice if you need to use User Credentials. This technique is more complicated and requires protecting the secrets file, but will give you refreshable long-term tokens.
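A minimal sketch of that refresh-token pattern with the google-auth Python library (the client ID/secret and refresh token are placeholders for whatever you saved from the initial consent flow):

# Sketch: mint a fresh Access Token from a stored Refresh Token.
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials

creds = Credentials(
    token=None,
    refresh_token="stored-refresh-token",  # placeholder
    client_id="your-client-id",            # placeholder
    client_secret="your-client-secret",    # placeholder
    token_uri="https://oauth2.googleapis.com/token",
)
creds.refresh(Request())  # exchanges the refresh token for a new access token
print(creds.token)        # short-lived Access Token to use with curl / the API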
Service Account Credentials
Service Account JSON key files are the standard method for service-to-service authentication and authorization. Using these keys, Access Tokens valid for one hour are generated. When they expire new ones are created. The max time is 3600 seconds.
This is the choice if you are programmatically accessing Cloud Storage with programs under your control (the service account JSON file must be protected).
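A sketch of generating one of those one-hour Access Tokens from a service account key file with google-auth (the key file path and scope are assumptions):

# Sketch: turn a service-account JSON key into a one-hour Access Token,
# e.g. to plug into the curl command from the question.
from google.auth.transport.requests import Request
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/devstorage.read_only"],
)
creds.refresh(Request())
print(creds.token)  # valid for about 3600 seconds; regenerate when it expires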
Presigned-URLs
This is the standard method of providing access to private Google Cloud Storage objects. This method requires the URL and generates a signature with an expiration, so that objects can be accessed for a defined period of time. One of your requirements (which is unrealistic) is that you don't want to use source URLs. The max time is seven days.
This is the choice if you need to provide access to third-parties to access your Cloud Storage Objects.
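A sketch with the google-cloud-storage Python client (the key file is a placeholder; the bucket and object names are taken from your curl example):

# Sketch: generate a V4 signed URL, valid up to the seven-day maximum,
# that anyone (e.g. plain curl) can use to GET the private object.
from datetime import timedelta

from google.cloud import storage

client = storage.Client.from_service_account_json("service-account.json")  # placeholder
blob = client.bucket("mybucket").blob("my.tar.gz")

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(days=7),  # the maximum for signed URLs
    method="GET",
)
print(url)  # curl -o my.tar.gz "<url>"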
IAM Based Access
This method does not use Access Tokens; instead, it uses Identity Tokens. Permissions are assigned to Cloud Storage buckets and objects and not to the IAM member account. This method requires a solid understanding of how identities work in Google Cloud Storage and is the future direction for Google security - meaning that for many services, access will be controlled on a service/object basis and not via roles that grant wide access to an entire service in a project. I talk about this in my article on Identity Based Access Control.
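As a sketch of the bucket-level model (granting a role on one bucket rather than project-wide, using the google-cloud-storage client; the bucket name and member are placeholders):

# Sketch: attach a role directly to a single bucket instead of granting
# a project-wide role to the member. Names are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("mybucket")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:reader@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)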
Summary
You have not clearly defined what will be accessing Cloud Storage, how secrets are stored, if the secrets need to be protected from users (public URL access), etc. The choice depends on a number of factors.
If you read the latest articles on my website I discuss a number of advanced techniques on Identity Based Access Control. These features are starting to appear on a number of Google Services in the beta level commands. This includes Cloud Scheduler, Cloud Pub/Sub, Cloud Functions, Cloud Run, Cloud KMS and soon more. Cloud Storage supports Identity Based Access which requires no permissions at all - the identity is used to control access.
Are there step-by-step instructions anywhere on how to generate a "ticket" for an iCloud user given their username/password? I'd like to build a service that accesses iCloud data (server to server) without having to store the iCloud username or password.
My understanding is that you use the username/password to generate a Kerberos ticket from iCloud. That's based on the answer to How does Sunrise for iOS use iCloud credentials to access our calendar?. But I haven't found instructions online on how to do that.
Does anyone know how to do that? Thanks!
Let me start by pointing out that by default iCloud app storage is "sandboxed" in containers. A signed application can only access its own container without having the API key to authenticate to other application containers. You can make multiple applications share the same container, or use multiple containers in the same application if needed, but essentially you have to be the developer of all applications or have explicit permission to do this. Check out Incorporating iCloud into your app and Enabling CloudKit for more details.
Other (non-appstore) applications and services can authenticate to use an application's data via CloudKit Web Services:
Authenticating to iCloud (redirect based, so credentials still are never revealed and are known only by the user and iCloud server itself);
Further authenticating with your application API key;
The process is described in detail here, as already kindly pointed out by Adam Taylor.
All the above being said, if I understand correctly, you want to have access to all of the user's iCloud data. I think you won't be able to do so, for multiple reasons:
Data is protected by application key, so you need to have this to access a container in addition to the basic credentials;
I'm sure that Apple has a design policy to never ask for user credentials in plain text. Asking the user explicitly for credentials would be against that policy, and even if it turns out it is not, having the credentials won't help you much, because you have to enter/send them somewhere - and all iCloud authentication mechanisms are designed to ask for authentication only from the end user.
This is why I don't believe it is possible to just use user credentials and get access to all of their iCloud data. Now, my 2 cents on why Sunrise works:
As far as I understand, the Sunrise application works because the calendar data is designed to be shared via CalDAV, which works against a concrete URL, so you can import and link your calendar in various calendar client applications. The URL can be found out with a bit of investigation. CalDAV is kind of similar to IMAP and POP3 for mailbox access.
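To make that concrete, CalDAV access is just WebDAV requests against that URL. A rough sketch of principal discovery with Python's requests library (the iCloud endpoint and the use of an app-specific password are assumptions):

# Rough sketch: CalDAV principal discovery. Endpoint and credentials are
# assumptions; iCloud typically requires an app-specific password rather
# than the main account password.
import requests

BODY = """<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop><d:current-user-principal/></d:prop>
</d:propfind>"""

resp = requests.request(
    "PROPFIND",
    "https://caldav.icloud.com/",                 # assumed endpoint
    auth=("user@icloud.com", "app-specific-pw"),  # placeholders
    headers={"Depth": "0", "Content-Type": "application/xml"},
    data=BODY,
)
print(resp.status_code, resp.text[:200])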
Be so kind as to elaborate a bit more on what kind of data you're trying to extract (Apple application specific, developer application specific, documents, key-value pairs, or something else), and I or other users might be able to help you further.
I'm developing an application that manipulates data in Google Cloud Storage
buckets owned by the user. I would like to set it up so the user can arrange to
grant the application access to only one of his or her buckets, for the sake of
compartmentalization of damage if the app somehow runs amok (or it is
impersonated by a bad actor or whatever).
But I'm more than a bit confused by the documentation around GCS authorization.
The docs on OAuth 2.0 authentication show that there are only three
choices for scopes: read-only, read-write, and full-control. Does this
mean that what I want is impossible, and if I grant access to read/write one
bucket I'm granting access to read/write all of my buckets?
What is extra confusing to me is that I don't understand how this all plays in
with GCS's notion of projects. It seems like I have to create a project to get
a client ID for my app, and the N users also have to create N projects for
their buckets. But then it doesn't seem to matter -- the client ID from project
A can access the buckets from project B. What are project IDs actually for?
So my questions, in summary:
Can I have my installed app request an access token that is good for only a
single bucket?
If not, are there any other ways that developers and/or careful users
typically limit access?
If I can't do this, it means the access token has serious security
implications. But I don't want to have to ask the user to go generate a new one
every time they run the app. What is the typical story for caching the token?
What exactly are project IDs for? Are they relevant to authorization in any
way?
I apologize for the scatter-brained question; it reflects what appears to be
scatter-brained documentation to me. (Or at least documentation that isn't
geared toward the installed application use case.)
I had the same problem as you.
Go to: https://console.developers.google.com
Go to Credentials and create a new Client ID.
You have to delete the email* from the "Permissions" of your project.
And add it manually to the ACL of your bucket.
*= the email of the Service Account, xxxxxxxxxxxx-xxxxxxxxx@developer.gserviceaccount.com
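A sketch of that last ACL step with the google-cloud-storage Python client instead of the console (the bucket name is a placeholder; the email is the service account's):

# Sketch: grant the service account read access on the bucket ACL,
# mirroring the manual console step above. Names are placeholders.
from google.cloud import storage

client = storage.Client()
acl = client.bucket("users-bucket").acl
acl.reload()  # fetch the current ACL before modifying it
acl.user("xxxxxxxxxxxx-xxxxxxxxx@developer.gserviceaccount.com").grant_read()
acl.save()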
If you are building an app, it's server-to-server OAuth:
https://developers.google.com/accounts/docs/OAuth2ServiceAccount
"Can you be clearer about which project I create the client ID on (the developer's project that owns the installed application, or the user's project that own's the bucket)?"
the user's project that own's the bucket
It's the user taht own the bucket who grant access.
It turns out I'm using the wrong OAuth flow if I want to do this. Thanks to Euca
for the inspiration to figure this out.
At the time I asked the question, I was assuming there were multiple projects
involved in the Google Developers Console:
One project for me, the developer, that contained generated credentials for
an "installed application", with the client ID and (supposed) secret baked into
my source code.
One project for each of my users, owning and being billed for a bucket that
they were using the application to access.
Instead of using "installed application" credentials, what I did was switch to
"service account" credentials, generated by the user in the project that owns
their bucket. That allows them to create and download a JSON key file that they
can feed to my application, which then uses the JSON Web Tokens flow of OAuth
2.0 (aka "two-legged OAuth") to obtain authorization. The benefits of this are:
There is no longer a need for me to have my own project, which was a weird
wart in the process.
By default, the service account credentials allow my application to access
only the buckets owned by the project for which they were generated. If the
user has other projects with other buckets, the app cannot access them.
But, the service account has an "email address" just like any other user, and
can be added to the ACLs for any bucket regardless of project, granting
access to that bucket.
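A sketch of that flow with the google-cloud-storage Python client, consuming the user-supplied key file (file, bucket, and object names are placeholders):

# Sketch: the app reads the user-generated service-account key file and lets
# the client library handle the two-legged OAuth (JWT) flow behind the scenes.
from google.cloud import storage

client = storage.Client.from_service_account_json("user-supplied-key.json")
bucket = client.bucket("users-bucket")
bucket.blob("photo.jpg").download_to_filename("photo.jpg")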
About your answer.
Glad you solved your problem.
You can also reduce the access to only ONE bucket of the project, for example if you have several buckets and the application does not need access to all of them.
By default, the service account has FULL access (read, write and ACL) to all buckets. I usually limit it to the needed bucket.