I would like to write files from a remote machine to Amazon S3. The machine I am working on restricts outbound connections unless they are explicitly allowed. I can have an IP whitelisted, but from my understanding S3 uses a pool of addresses and they are not fixed. I'm not sure what my options are. Anything helps.
Thank you
Option 1:
AWS actually publishes the ranges of IP addresses used by each of its services.
References:
1. https://ip-ranges.amazonaws.com/ip-ranges.json
2. https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
You can write a script to download these IP ranges and automate the process of updating your security group (or firewall whitelist) accordingly, for example:
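A minimal sketch of such a script in Python, using the requests library; the region filter is an assumption you would adjust:

# Sketch: fetch the published AWS IP ranges and print the S3 CIDR blocks
# for one region, so they can be fed into a security-group/firewall update.
import requests

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
REGION = "us-east-1"  # assumption: change to the region your bucket lives in

def s3_cidrs(region):
    data = requests.get(IP_RANGES_URL, timeout=10).json()
    return sorted(
        prefix["ip_prefix"]
        for prefix in data["prefixes"]
        if prefix["service"] == "S3" and prefix["region"] == region
    )

if __name__ == "__main__":
    for cidr in s3_cidrs(REGION):
        print(cidr)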
Option 2:
If the remote resource you are using (an EC2 instance) is also in AWS, then you can create a new IAM role that allows the required S3 operations and attach that role to your remote instance.
I have not verified this option with outbound-connection restrictions in place, but it could be a better option if it works.
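If the role route works, no credentials need to be stored on the instance; the SDK picks up temporary credentials from the instance profile automatically. A minimal boto3 sketch (the bucket and key names are placeholders):

# Sketch: upload a file from an EC2 instance that has an IAM role
# (instance profile) attached -- boto3 resolves the credentials automatically.
import boto3

s3 = boto3.client("s3")  # no access keys needed on the instance
s3.upload_file("/tmp/report.csv", "my-example-bucket", "reports/report.csv")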
Related
I'm trying to create an AWS DataBrew job that pulls data from an S3 folder into an AWS RDS SQL Server table, and I receive the following error:
"AWS Glue VPC interface endpoint validation failed for SubnetId: subnet-xxx9574. VPC: vpc-xxxdd2. Reason: Could not find an active AWS Glue VPC interface endpoint. Could not find an active NAT."
My DataBrew output is a "Data catalog RDS Tables" target, for which I have set up a crawler/connection, and everything is green.
I followed the path below, which helped solve a different issue, but this error is slightly different.
https://aws.amazon.com/premiumsupport/knowledge-center/glue-s3-endpoint-validation-failed/
I tried to create another endpoint of type 'Interface' instead of 'Gateway', but I'm not really sure that's the correct path.
Any guidance? I'm trying to avoid custom scripts, as this should be something AWS can handle in its interfaces... IMO.
Thanks!
I want to host my website on AWS S3.
When I create a code deployment (I followed this URL -> https://aws.amazon.com/getting-started/tutorials/deploy-code-vm/), it shows this error -> Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Error screenshot -> http://i.prntscr.com/oqr4AxEiThuy823jmMck7A.png
So please help me.
If you want to host your website on S3, you should upload your code to an S3 bucket and enable Static Website Hosting for that bucket. If you use CodeDeploy, it takes application code from either an S3 bucket or GitHub and deploys it onto EC2 instances.
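If you go the static-hosting route, enabling it on the bucket can be done from the console or scripted; a rough boto3 sketch (the bucket name is a placeholder, and the bucket also needs a policy or CloudFront distribution that makes the content readable):

# Sketch: enable static website hosting on an existing bucket with boto3.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="my-website-bucket",  # placeholder
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)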
I will assume that you want to use CodeDeploy to host your website on a group of EC2 instances. The error you have mentioned can occur if your EC2 instances do not have the correct permissions through an IAM role.
From the documentation:
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
If you are following along with the sample deployment from the CodeDeploy wizard, make sure you picked Create A Service Role at the stage where you are required to select a service role.
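If you would rather script that step, attaching the AWS-supplied policy to an existing service role might look roughly like the following boto3 sketch (the role name is a placeholder):

# Sketch: attach the managed AWSCodeDeployRole policy to a CodeDeploy service role.
import boto3

iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="CodeDeployServiceRole",  # placeholder: your service role's name
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole",
)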
Terraform 0.9.5.
I am in the process of putting together a group of modules that our infrastructure team and automation team will use to create resources in a standard fashion and in turn create stacks to provision different envs. All working well.
Like all teams using Terraform, shared state becomes a concern. I have configured Terraform to use an S3 backend that is versioned and encrypted, and added a lock via a DynamoDB table. Perfect. All works with local accounts... Okay, the problem...
We have multiple AWS accounts: 1 for IAM, 1 for billing, 1 for production, 1 for non-production, 1 for shared services, etc... you get where I am going. My problem is as follows.
I authenticate as a user in our IAM account and assume the required role. This has been working like a dream until I introduced Terraform backend configuration to utilise S3 for shared state. It looks like the backend config within Terraform requires default credentials to be set within ~/.aws/credentials, and that these have to belong to a user local to the account where the S3 bucket was created.
Is there a way to set up the backend configuration in such a way that it will use the creds and role configured within the provider? Is there a better way to configure shared state and locking? Any suggestions welcome :)
Update: Got this working. I created a new user within the account where the S3 bucket lives, then created a policy allowing that user only s3:DeleteObject, s3:GetObject, s3:PutObject, s3:ListBucket and dynamodb:* on the specific S3 bucket and DynamoDB table (a boto3 sketch of that policy follows the config below). I created a custom credentials file with a default profile holding the access and secret keys of that new user, and used a backend config similar to:
terraform {
  required_version = ">= 0.9.5"

  backend "s3" {
    bucket                  = "remote_state"
    key                     = "/NAME_OF_STACK/terraform.tfstate"
    region                  = "us-east-1"
    encrypt                 = "true"
    shared_credentials_file = "PATH_TO_CUSTOM_CREDENTIALS_FILE"
    lock_table              = "MY_LOCK_TABLE"
  }
}
It works, but there is an initial configuration that needs to happen within your profile to get it working. If anybody knows of a better setup or can identify problems with my backend config, please let me know.
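For reference, the narrow policy described in the update could be created with boto3 roughly as follows (the bucket, table, region, account ID and policy name are placeholders matching the config above):

# Sketch: create the minimal policy for the state-only user described above.
import json
import boto3

POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::remote_state",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::remote_state/*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:*"],
            # placeholder account ID and table name
            "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/MY_LOCK_TABLE",
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="terraform-remote-state",  # placeholder
    PolicyDocument=json.dumps(POLICY),
)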
Terraform expects backend configuration to be static and does not allow it to include interpolated variables (as is possible elsewhere in the config), because the backend must be initialized before any other work can be done.
Due to this, applying the same config multiple times using different AWS accounts can be tricky, but is possible in one of two ways.
The lowest-friction way is to create a single S3 bucket and DynamoDB table dedicated to state storage across all environments, and use S3 permissions and/or IAM policies to impose granular access controls.
Organizations adopting this strategy will sometimes create the S3 bucket in a separate "administrative" AWS account, and then grant restrictive access to the individual state objects in the bucket to the specific roles that will run Terraform in each of the other accounts.
This solution has the advantage that once it has been set up correctly in S3, Terraform can be used routinely without any unusual workflow: configure the single S3 bucket in the backend, and provide appropriate credentials via environment variables to allow them to vary. Once the backend is initialized, use workspaces (known as "state environments" prior to Terraform 0.10) to create a separate state for each of the target environments of a single configuration.
The disadvantage is the need to manage a more-complicated access configuration around S3, rather than simply relying on coarse access control with whole AWS accounts. It is also more challenging with DynamoDB in the mix, since the access controls on DynamoDB are not as flexible.
There is a more complete description of this option in the Terraform S3 backend documentation, under Multi-account AWS Architecture.
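As a rough illustration of the granular access this implies, the IAM policy for the role that runs Terraform in, say, the production account could be limited to that workspace's state objects. This sketch assumes the S3 backend's default "env:/" workspace key prefix; the bucket and workspace names are placeholders:

# Sketch: policy document limiting a production role to its own workspace state,
# assuming the S3 backend's default "env:/" workspace key prefix.
import json

PROD_STATE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::org-terraform-state",  # placeholder bucket
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::org-terraform-state/env:/prod/*",
        },
    ],
}

print(json.dumps(PROD_STATE_POLICY, indent=2))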
If a complex S3 configuration is undesirable, the complexity can instead be shifted into the Terraform workflow by using partial configuration. In this mode, only a subset of the backend settings are provided in config and additional settings are provided on the command line when running terraform init.
This allows options to vary between runs, but since it requires extra arguments to be provided most organizations adopting this approach will use a wrapper script to configure Terraform appropriately based on local conventions. This can be just a simple shell script that runs terraform init with suitable arguments.
This then allows you to vary, for example, the custom credentials file by providing it on the command line. In this case, state environments are not used, and instead switching between environments requires re-initializing the working directory against a new backend configuration.
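A wrapper along those lines could be as small as the following Python sketch, which passes per-environment settings to terraform init via -backend-config arguments (the environment names, credential paths and state keys are placeholders):

# Sketch: tiny wrapper that initializes the S3 backend with per-environment
# settings supplied as -backend-config arguments to "terraform init".
import subprocess
import sys

ENVIRONMENTS = {
    "prod": {
        "shared_credentials_file": "/secure/prod-credentials",
        "key": "prod/terraform.tfstate",
    },
    "nonprod": {
        "shared_credentials_file": "/secure/nonprod-credentials",
        "key": "nonprod/terraform.tfstate",
    },
}

def init(env_name):
    settings = ENVIRONMENTS[env_name]
    cmd = ["terraform", "init"]
    for name, value in settings.items():
        cmd.append("-backend-config={}={}".format(name, value))
    subprocess.check_call(cmd)

if __name__ == "__main__":
    init(sys.argv[1])  # e.g. ./tf-init.py prod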
The advantage of this solution is that it does not impose any particular restrictions on the use of S3 and DynamoDB, as long as the differences can be represented as CLI options.
The disadvantage is the need for unusual workflow or wrapper scripts to configure Terraform.
I have used the AWS Community AMI for configuring Spinnaker. I am able to get the lists of ELBs, AMIs and Security Groups while creating a Server Group, but I am not getting the instance types in the custom drop-down list. Any idea what could be going wrong?
Spinnaker Cluster Error
It looks like you do not have the correct IAM role assigned to the user whose access keys you are using for the Spinnaker integration with AWS.
Check whether that user has enough rights in AWS.
If not, create a role, assign AWS PowerUserAccess to your user, and then try the integration again.
Spinnaker needs at least AWS EC2 full access, as it directly accesses EC2 to spin up its server groups.
Instance types are cached in the browser's local storage. You can explicitly refresh the cache via the 'Refresh all caches' link:
If you show the network tab of your browser's console (prior to clicking 'Refresh all caches'), you should see a request to http://localhost:8084/instanceTypes.
If the response contains your instance types, you should be good to go.
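If you want to check that endpoint outside the browser, a quick Python sketch (this assumes Gate is reachable on localhost:8084 as in the URL above and that no authentication sits in front of it):

# Sketch: query Spinnaker's Gate API for the cached instance types.
import requests

resp = requests.get("http://localhost:8084/instanceTypes", timeout=10)
resp.raise_for_status()
instance_types = resp.json()
print("Gate returned {} instance type entries".format(len(instance_types)))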
Goal: I would like to keep sensitive data in S3 buckets and process it on EC2 instances located in the private cloud. I researched that it is possible to set up an S3 bucket policy by IP and by user (IAM) ARNs, so I consider the data in the S3 bucket to be 'on the safe side'. But I am worried about the following scenario:
1) there is a VPC
2) inside there is an EC2 instance
3) there is a user under a controlled (allowed) account with permissions to connect to and work with the EC2 instance and the buckets.
The buckets are defined and configured to work only with known (authorized) EC2 instances.
Security leak: the user uploads a malware application to the EC2 instance and, while processing data, executes it, and the malware transfers the data to other (unauthorized) buckets under a different AWS account. Disabling uploads to the EC2 instance is not an option in my case.
Question: is it possible to restrict access on the VPC firewall in such a way that access is allowed to some specific S3 buckets but denied to any other buckets? Assume the user might upload a malware application to the EC2 instance and use it to upload data to other buckets (under a third-party AWS account).
There is not really a solution for what you are asking, but then again, you seem to be attempting to solve the wrong problem (if I understand your question correctly).
If you have a situation where untrustworthy users are in a position where they are able to "connect and work with ec2 instance and buckets" and upload and execute application code inside your VPC, then all bets are off and the game is already over. Shutting down your application is the only fix available to you. Trying to limit the damage by preventing the malicious code from uploading sensitive data to other buckets in S3 should be the absolute least of your worries. There are so many other options available to a malicious user other than putting the data back into S3 but in a different bucket.
It's also possible that I am interpreting "connect and work with ec2 instance and buckets" more broadly than you intended, and all you mean is that users are able to upload data to your application. Well, okay... but your concern still seems to be focused on the wrong point.
I have applications where users can upload data. They can upload all the malware they want, but there's no way any code -- malicious or benign -- that happens to be contained in the data they upload will ever get executed. My systems will never confuse uploaded data with something to be executed or handle it in a way that this is even remotely possible. If your code will, then you again have a problem that can only be fixed by fixing your code -- not by restricting which buckets your instance can access.
Actually, I lied when I said there wasn't a solution. There is a solution, but it's fairly preposterous:
Set up a reverse web proxy, either in EC2 or somewhere outside, but of course make its configuration inaccessible to the malicious users. Configure this proxy to only allow access to the desired bucket. With Apache, for example, if the bucket were called "mybucket," that might look something like this:
ProxyPass /mybucket http://s3.amazonaws.com/mybucket
Additional configuration on the proxy would deny access to the proxy from anywhere other than your instance. Then, instead of allowing your instance to access the S3 endpoints directly, only allow outbound http toward the proxy (via the security group for the compromised instance). Requests for buckets other than yours will not make it through the proxy, which is now the only way "out." Problem solved. At least, the specific problem you were hoping to solve should be solvable by some variation of this approach.
Update to clarify:
To access the bucket called "mybucket" in the normal way, there are two methods:
http://s3.amazonaws.com/mybucket/object_key
http://mybucket.s3.amazonaws.com/object_key
With this configuration, you would block (not allow) all access to all S3 endpoints from your instances via your security group configuration, which would prevent accessing buckets with either method. You would, instead, allow access from your instances to the proxy.
If the proxy, for example, were at 172.31.31.31 then you would access buckets and their objects like this:
http://172.31.31.31/mybucket/object_key
The proxy, being configured to only permit certain patterns in the path to be forwarded -- and any others denied -- would be what controls whether a particular bucket is accessible or not.
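On the instance side, clients would then be pointed at the proxy instead of the S3 endpoint. With boto3 that could look roughly like this (the proxy address is the example one above, and path-style addressing is assumed so the bucket name stays in the URL path; depending on request signing, the proxy may also need to preserve or rewrite the Host header):

# Sketch: route S3 requests through the restrictive reverse proxy by
# overriding the endpoint and forcing path-style addressing.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://172.31.31.31",  # the example proxy address above
    config=Config(s3={"addressing_style": "path"}),
)
s3.download_file("mybucket", "object_key", "/tmp/object_key")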
Use VPC Endpoints. This allows you to restrict which S3 buckets your EC2 instances in a VPC can access. It also allows you to create a private connection between your VPC and the S3 service, so you don't have to allow wide open outbound internet access. There are sample IAM policies showing how to control access to buckets.
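A rough boto3 sketch of creating a Gateway endpoint for S3 whose policy only allows one bucket (the VPC ID, route table ID, region and bucket name are placeholders):

# Sketch: create an S3 Gateway VPC endpoint whose policy only allows one bucket.
import json
import boto3

ENDPOINT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*",
            ],
        }
    ],
}

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",           # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
    PolicyDocument=json.dumps(ENDPOINT_POLICY),
)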
There's an added bonus with VPC Endpoints for S3: certain major software repos, such as Amazon's yum repos and Ubuntu's apt-get repos, are hosted in S3, so you can also allow your EC2 instances to get their patches without giving them wide-open internet access. That's a big win.