I've got an AppVeyor project set up and working nicely. Now I want to upload artifacts to S3 for easy hosting. This seems fairly easy as outlined in the documentation. My question is: where do I put the secret key that has write permission? I don't want to push it to my public repo for obvious reasons. On Travis I could put it in an environment variable that was never logged. How would I go about this in AppVeyor?
I assume you need to store this in YAML. You can use secure variables. Or you can simply put your secrets in clear text into the S3 deployment configuration in the UI, then save and press "Export YAML", and you will get a YAML section with the secrets encrypted.
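As a rough sketch, the exported deployment section looks something like this (the bucket name and the encrypted strings below are placeholders, not real values):

deploy:
  provider: S3
  access_key_id:
    secure: ENCRYPTED_BY_APPVEYOR_UI
  secret_access_key:
    secure: ENCRYPTED_BY_APPVEYOR_UI
  bucket: my-artifact-bucket

# a secure environment variable works the same way:
environment:
  S3_SECRET:
    secure: ENCRYPTED_BY_APPVEYOR_UI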
I've gotten an Amplify project dropped in my lap where the backend environment is deleted (or was lost when the project was moved to another account).
I haven't worked with Amplify before, so I'm not sure how "automatic" everything is.
I noticed that the project has a folder called 'amplify-backup' which contains a bunch of JSON and GraphQL config files, so I assumed that I could use those somehow to restore the backend environment in AWS, but I can't seem to find any information on how to do so.
There's currently no backend environment in the AWS console and I don't really know which services the backend environment should contain.
Is it possible to restore the backend environment and all the services that the application needs, or do I need to figure out which services are needed myself?
If so, any pointers on how to find out which services are used?
If the project files still exist (amplify directory), you may be able to re-create the project with the existing resources.
One idea could be to clone the git repository from when the amplify project files were intact and run amplify init
OR
amplify-backup is generally generated automatically when running amplify commands. You could try renaming it to amplify and running amplify init.
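A rough sketch of that second option, assuming amplify-backup really is a copy of the original amplify directory:

mv amplify-backup amplify
amplify init      # choose the existing app/environment if prompted, or create a new one
amplify status    # inspect which categories/resources the project defines
amplify push      # re-create the backend resources in the account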
See more here for re-creating an amplify project on another account: https://docs.amplify.aws/cli/migration/cli-migrate-aws-account/
I'm new to Vue .env files. I looked everywhere for a straight answer until I got lost. According to the Vue.js documentation, a .env.local file is loaded in all cases but ignored by git, which is exactly what we want in order to hide secret API keys from the public. But it also says that prefixing a key name with VUE_APP_ in .env.local makes the key available in the publicly shipped code.
My question is: how do I securely hide an API key from the public and still be able to use it in production and in development without any security risks?
my .env.local file
VUE_APP_DEEPGRAM_KEY=some_API_key_that_is_secret
The above works if I log it to the console from my app, but if I remove VUE_APP_ it won't work. So is it safe to leave it like this?
Another thing: in Laravel we used to save API keys in .env, refer to them from a config file, and then call them in the app from config. Is Vue different? If not, how do I do the same here?
To answer my own question: the documentation is actually pretty clear, but I got a bit confused. The variable simply has to start with VUE_APP_ for it to work in Vue CLI.
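For reference, a small sketch of how the variable from the .env.local above is then read in application code; Vue CLI statically replaces process.env.VUE_APP_* at build time, which also means the value ends up in the shipped JavaScript bundle:

// anywhere in code compiled by Vue CLI (e.g. a component's script section)
const deepgramKey = process.env.VUE_APP_DEEPGRAM_KEY;
// variables without the VUE_APP_ prefix are never exposed to client code,
// which is why the value became undefined once the prefix was removed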
I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: if I or a colleague checks out the Serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?
And the answer to that is yes! Serverless keeps track of all of that for you within its files. Unless you tear the stack down (with serverless remove), no operation will create a new Lambda or API endpoint.
My team and I use this method: we commit all code to a git repo, one of us checks it out and deploys a function (or the entire thing), and it updates the existing set of functions properly. If you set up an environment file, that's really all you need to worry about, and I recommend leaving it out of git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers, which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they are already backed up for you.
So all you really need to have is the original deployment code in a git repo and have access to your keys. Everything else is already backed up for you.
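Put differently, the "state" is effectively just the service name, stage and region in serverless.yml; a sketch with placeholder names:

service: my-service          # the CloudFormation stack is named my-service-<stage>
provider:
  name: aws
  region: us-east-1
  stage: prod
# deploying the same service/stage/region from any machine updates the existing
# stack instead of creating a new one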
So, a common practice these days is to put connection strings and passwords in environment variables to avoid placing them in a file. This is all fine and dandy, but I'm not sure how to make this work when setting up a continuous deployment workflow with a configuration management tool such as Salt/Ansible or Chef/Puppet.
Specifically, I have the following questions in environments using the above mentioned configuration management tools:
Where do you store connection strings/passwords/keys separate from codebases?
Do you keep those items in a code-repo of some type (git, etc.)?
Do you use some structure built-in to your tool?
How do you keep those same items secure?
Do you track changes/back-up these items, and if so, how?
In Chef you can
store passwords or API tokens in either encrypted data bags or using chef-vault. They are then decrypted while Chef does the provisioning (encrypted data bags use a shared secret; chef-vault uses the existing PKI of the Chef client).
set environment variables when calling external software using the environment parameter of e.g. the execute resource (see the sketch after this list).
not sure what to write for the last points; I'd say you don't really manage the variables at all. This way you set them only for the command that needs them, not e.g. for the whole Chef run.
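A rough sketch combining the first two points, with hypothetical data bag, item and attribute names:

# load a secret from an encrypted data bag (the shared secret must be available on the node)
db_creds = Chef::EncryptedDataBagItem.load('credentials', 'database')

# pass the secret to an external command only for the duration of that command
execute 'run_db_migration' do
  command 'rake db:migrate'
  cwd '/srv/app'
  environment('DB_PASSWORD' => db_creds['password'])
end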
With puppet, the preferred way is probably to store the secrets in Hiera files, which are just plain YAML files. That means that all secrets are stored on the master, separate from the manifest files.
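A minimal sketch of what that can look like (key name and value are made up):

# hieradata/common.yaml on the Puppet master
profiles::database::password: 's3cr3t'
# in a manifest the value is then read with something like:
#   $db_password = lookup('profiles::database::password')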
TrueCrypt virtual encrypted disks are cross-platform and independent of tooling. Mount one read-write to change the secrets in the files it contains, unmount it, and then commit/push the encrypted disk image into version control. Mount it read-only for automation.
ansible-vault can be used to encrypt sensitive data files. A CI server like Jenkins, however, is not the safest place to store access credentials. If you add HashiCorp Vault and Ansible Tower/AWX, you can provide a secure solution for several teams.
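For the Ansible side, a couple of representative ansible-vault commands (the file paths and playbook name are placeholders):

ansible-vault encrypt group_vars/prod/secrets.yml     # encrypt an existing vars file in place
ansible-playbook site.yml --ask-vault-pass            # prompt for the vault password at run time
# or supply it from a file: ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt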
Current Situation
I have a project on GitHub that builds after every commit on Travis-CI. After each successful build Travis uploads the artifacts to an S3 bucket. Is there some way for me to easily let anyone access the files in the bucket? I know I could generate a read-only access key, but it'd be easier for the user to access the files through their web browser.
I have website hosting enabled with the root document of "." set.
However, I still get a 403 Forbidden when trying to go to the bucket's endpoint.
The Question
How can I let users easily browse and download artifacts stored on Amazon S3 from their web browser? Preferably without a third-party client.
I found this related question: Directory Listing in S3 Static Website
As it turns out, if you enable public read for the whole bucket, S3 can serve directory listings. Problem is they are in XML instead of HTML, so not very user-friendly.
There are three ways you could go for generating listings:
Generate index.html files for each directory on your own computer, upload them to S3, and update them whenever you add new files to a directory. Very low-tech. Since you're saying you're uploading build files straight from Travis, this may not be that practical, as it would require doing extra work there. (A small sketch of this approach is shown after this list.)
Use a client-side S3 browser tool.
s3-bucket-listing by Rufus Pollock
s3-file-list-page by Adam Pritchard
Use a server-side browser tool.
s3browser (PHP)
s3index (Scala). Going by the existence of a Procfile, it may be readily deployable to Heroku. Not sure, since I don't have any experience with Scala.
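For completeness, a minimal sketch of the first, low-tech option, assuming you keep a local copy of the files you upload (the directory name is hypothetical):

import os
import html

def write_indexes(root):
    # Walk a local copy of the bucket contents and drop a simple index.html
    # into every directory, linking to its subdirectories and files.
    for dirpath, dirnames, filenames in os.walk(root):
        entries = sorted(dirnames) + sorted(f for f in filenames if f != "index.html")
        rows = "\n".join(
            '<li><a href="{0}">{0}</a></li>'.format(html.escape(name)) for name in entries
        )
        with open(os.path.join(dirpath, "index.html"), "w") as fh:
            fh.write("<html><body><ul>\n" + rows + "\n</ul></body></html>")

write_indexes("./artifacts")  # local copy of what gets uploaded to the bucket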
Filestash is the perfect tool for that: log in to your bucket from https://www.filestash.app/s3-browser.html, create a shared link, and share it with the world.
Also Filestash is open source. (Disclaimer: I am the author)
I had the same problem and I fixed it by using the new "Make Public" context menu. Go to https://console.aws.amazon.com/s3/home, select the bucket, and then for each folder or file (or a multiple selection) right-click and choose "Make Public".
You can use a bucket policy to give anonymous users full read access to your objects. Depending on whether you need them to LIST or just perform a GET, you'll want to tweak this. (I.e. permissions for listing the contents of a bucket have the action set to "s3:ListBucket").
http://docs.aws.amazon.com/AmazonS3/latest/dev/AccessPolicyLanguage_UseCases_s3_a.html
Your policy will look something like the following. You can use the S3 console at http://aws.amazon.com/console to upload it.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::bucket/*"]
    }
  ]
}
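If you also need the LIST permission mentioned above (so anonymous users can enumerate the bucket contents), a second statement along these lines can be added; note that s3:ListBucket applies to the bucket ARN itself, without the /*:

{
  "Sid": "AddListPerm",
  "Effect": "Allow",
  "Principal": {
    "AWS": "*"
  },
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::bucket"]
}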
If you're truly opening up your objects to the world, you'll want to look into setting up CloudWatch billing alarms so you can shut off permissions to your objects if they become too popular.
https://github.com/jupierce/aws-s3-web-browser-file-listing is a solution I developed for this use case. It leverages AWS CloudFront and Lambda#Edge functions to dynamically render and deliver file listings to a client's browser.
To use it, a simple CloudFormation template will create an S3 bucket and have your file server interface up and running in just a few minutes.
There are many viable alternatives, as already suggested by other posters, but I believe this approach has a unique range of benefits:
Completely serverless and built for web-scale.
Open source and free to use (though, of course, you must pay AWS for resource utilization, such as S3 storage costs).
Simple / static client browser content:
No Ajax or third party libraries to worry about.
No browser compatibility worries.
All backing systems are native AWS components.
You never share account credentials or rely on 3rd party services.
The S3 bucket remains private - allowing you to only expose parts of the bucket.
A custom hostname / SSL certificate can be established for your file server interface.
Some or all of the hosted files can be protected behind Basic Auth username/password.
An AWS WAF web ACL can be configured to prevent abusive access to the service.