I have a Lambda function which needs to access an EC2 instance over SSH, load files from it, and save them to S3. For that, I have put both the EC2 instance and the Lambda function in the default VPC, in the same subnet. Now the problem is that the function can connect to EC2 but not to S3.
It's been killing me since this morning: when I remove the VPC settings, the function uploads the files to S3, but then the connection to EC2 is lost.
I tried to add a NAT gateway to the default VPC (although I am not sure I did it correctly, because I am new to this), but it didn't change anything.
I am confused because my EC2 instance, which is in the same VPC and subnet, can access the internet, but the Lambda function is not able to access S3.
I am not sure how to proceed.
Please help!!!
A Lambda function inside a VPC is never assigned a public IP, so it will never have direct Internet access the way your EC2 instance does. To give it Internet access, you have to attach the Lambda function to a private subnet whose route table points to a NAT Gateway (and the NAT Gateway itself must sit in a public subnet). It sounds like you attempted this but configured it incorrectly.
If all the Lambda function needs to access is S3, then it is easier to set up a Gateway VPC Endpoint for S3 in your VPC; that gives the function a route to S3 without any NAT Gateway.
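For example, the gateway endpoint can be created with a single boto3 call. This is only a minimal sketch: the VPC ID and route table ID below are placeholders for your default VPC and its main route table, and the region is an assumption.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Placeholders: substitute your default VPC ID and its main route table ID.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

Once the route table entry exists, the Lambda function's S3 calls stay inside the VPC and no NAT Gateway is needed for them.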
I have a requirement to SFTP ".csv" files from a corporate on-premises Linux box to an S3 bucket.
The current setup is as follows:
The on-premises Linux box is NOT connected to the internet.
The corporate network is connected to AWS via Direct Connect.
There are several VPCs for different purposes. Only one VPC has an IGW and a public subnet (to accept requests coming from the public internet); the other VPCs have no IGW and no public subnets.
The corporate network and the AWS VPCs without an IGW are connected to each other through a Transit Gateway.
Can someone please advise whether I should use AWS Transfer or S3 VPC interface endpoints to transfer files to the S3 bucket from on-premises (the corporate network), and why?
I appreciate your valuable advice in advance.
You should create a server endpoint that can be accessed only within your VPC - see "Create a server endpoint that can be accessed only within your VPC" in the AWS Transfer Family documentation.
Note that this is a special endpoint for AWS Transfer. It is not an endpoint for Amazon S3.
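For reference, such a server can be created with boto3 roughly as below. This is only a sketch: the VPC, subnet and security group IDs are placeholders, and the region is an assumption.

import boto3

transfer = boto3.client("transfer", region_name="us-east-1")  # assumed region

response = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",  # internal endpoint, reachable over Direct Connect / Transit Gateway
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",              # placeholder
        "SubnetIds": ["subnet-0123456789abcdef0"],     # placeholder
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
    },
)
print(response["ServerId"])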
Alternatively, you could run an SFTP server on an Amazon EC2 instance, as long as the instance also has access to Amazon S3 to upload the files received.
Of course, I'd also recommend avoiding SFTP altogether and uploading directly to Amazon S3 if at all possible. SFTP adds complexity and expense that is best avoided.
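If you go the direct route, an S3 interface endpoint reachable over Direct Connect lets the on-premises box upload with plain boto3. A minimal sketch follows; the endpoint URL, region, bucket and file paths are placeholders you would replace with values from your own endpoint's details page.

import boto3

# Placeholder endpoint URL: use the DNS name shown for your S3 interface endpoint.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",  # assumed region
    endpoint_url="https://bucket.vpce-0123456789abcdef0-abcd1234.s3.us-east-1.vpce.amazonaws.com",
)
s3.upload_file("/data/export.csv", "my-target-bucket", "incoming/export.csv")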
Whether I try to create an AWS S3 File Gateway (EC2) in the Management Console or with Terraform, I get the same problem below...
If I launch the EC2 instance in a public subnet, the gateway is created. If I try to launch the gateway in a private subnet (with NAT, all ports open in and out), it won't work. I get...
HTTP ERROR 500
I am running a VPN and am able to ping the instance's private IP when I use the Management Console. I get the same error code from Terraform on a Cloud9 instance, which is also able to ping the instance.
Since I intend to share the S3 bucket over NFS, it's important that the instance reside in a private subnet. I'm new to the AWS S3 File Gateway; I have read over the documentation, but nothing clearly states how to do this or why a private subnet would behave differently, so if you have any pointers I could look into, I'd love to know!
For further reference (not really needed), my Terraform testing is mostly based on this GitHub repository:
https://github.com/davebuildscloud/terraform_file_gateway/tree/master/terraform
I was unable to get the AWS Console test to work, but I realised my Terraform test was poorly done - I was mistakenly skipping over a dependency that established the VPC peering connection to the Cloud9 instance. Once I fixed that, it worked. Still, I would love to know what would be required to get this to work through the Management Console too...
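In case it helps others: as far as I understand, gateway activation only needs HTTP (port 80) access to the appliance from whatever host performs the activation, which is why a public subnet "just works" in the Console. From a host that can reach the private IP (the VPN or the peered Cloud9 instance), a sketch along these lines should also activate it; the IP, names, region and timezone below are all placeholders/assumptions.

import boto3
import requests

gateway_ip = "10.0.2.15"  # placeholder: private IP of the gateway instance
region = "us-east-1"      # placeholder region

# Ask the gateway appliance for an activation key; this is the step that
# requires network reachability to the instance on port 80.
resp = requests.get(f"http://{gateway_ip}/?activationRegion={region}&no_redirect", timeout=10)
activation_key = resp.text.strip()

sgw = boto3.client("storagegateway", region_name=region)
sgw.activate_gateway(
    ActivationKey=activation_key,
    GatewayName="s3-file-gateway",  # placeholder name
    GatewayTimezone="GMT",          # assumed timezone value
    GatewayRegion=region,
    GatewayType="FILE_S3",
)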
My Airflow application is running on an AWS EC2 instance which also has an IAM role. Currently I am creating the Airflow S3 connection using a hardcoded access key and secret key, but I want my application to pick up the AWS credentials from the instance itself.
How can I achieve this?
We have a similar setup: our Airflow instance runs inside containers deployed on an EC2 machine. We set up the policies for S3 access on the EC2 machine's instance profile. You don't need to pick up any credentials on the EC2 machine, because the machine has an instance profile that should have all the permissions you need. On the Airflow side, we only use the aws_default connection; in the extra parameter we only set the default region, and there are no credentials at all.
Here is a detailed article about instance profiles: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
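As an illustration (not our exact DAG code), with the instance profile attached an S3Hook works without any stored keys, because boto3 falls back to the instance metadata credentials. The import path assumes the Amazon provider package, and the bucket and paths are placeholders.

from airflow.providers.amazon.aws.hooks.s3 import S3Hook

hook = S3Hook(aws_conn_id="aws_default")  # no credentials stored in the connection
hook.load_file(
    filename="/tmp/report.csv",       # placeholder local file
    key="reports/report.csv",         # placeholder S3 key
    bucket_name="my-airflow-bucket",  # placeholder bucket
    replace=True,
)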
The question is answered, but for future reference: it is possible to do this without relying on aws_default, just via environment variables. Here is an example that writes logs to S3 using an AWS connection, so it benefits from IAM:
AIRFLOW_CONN_AWS_LOG="aws://"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID=aws_log
AIRFLOW__CORE__REMOTE_LOGGING=true
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER="s3://path to bucket"
I have created a Memorystore instance with IP 10.190.50.3 (in us-east1).
I have a Shared VPC set up named my-gcp, and I authorised that network when creating the Memorystore instance.
In the Shared VPC, I have a service project dev with a Windows machine (10.190.5.7). When I try to connect to Memorystore from that Windows machine, I am not able to reach the Memorystore instance.
I have also allowed egress traffic to 10.190.50.3 from all instances in the my-gcp VPC. This VPC is set up in us-east4.
tracert and ping from the Windows machine to 10.190.50.3 are not working either.
The Memorystore instance is created in the host project of the VPC.
I found that the public documentation was updated recently:
1. The connecting client must be on the same network and in the same region (a different zone within the same region is also OK) as your Cloud Memorystore for Redis instance.
2. If you are using a Shared VPC network across multiple projects, you can connect to a Redis instance that is deployed on the Shared VPC network in the host project. Connecting to a Redis instance that is deployed on a Shared VPC network in a service project is not supported.
Also, here is the link on how to Connect to the Redis instance from a Compute Engine VM.
Unfortunately, accessing Memorystore from a service project is currently not supported with Shared VPC.
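To rule out client-side issues, the quickest test from a Compute Engine VM in the host project (on the authorized network and in us-east1) is a plain Redis ping, for example:

import redis  # pip install redis

# 10.190.50.3 is the Memorystore IP from the question.
r = redis.Redis(host="10.190.50.3", port=6379)
print(r.ping())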
Now we can deploy GCP's Memorystore in the shared network using a private services access connection. Refer to this link for more details.
Steps below (a Python sketch follows the list):
Verify or establish a private services access connection for the network in the host project that you use to create your Redis instance.
Make sure the Service Networking API is enabled for both the host project and the service project.
Follow the steps from Creating a Redis instance on a VPC network, but make the following modifications:
a. Complete the optional step for setting up a private services access connection.
b. Use the Authorized VPC Network dropdown to select the Shared VPC network from the host project. It is listed under Shared VPC Networks.
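Here is a rough Python sketch of step 3 using the google-cloud-redis client. The project ID, instance ID and sizing are placeholders; the parts that matter are the host-project Shared VPC network and the PRIVATE_SERVICE_ACCESS connect mode.

from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()

instance = redis_v1.Instance(
    tier=redis_v1.Instance.Tier.BASIC,  # placeholder tier
    memory_size_gb=1,                   # placeholder size
    # Shared VPC network owned by the host project
    authorized_network="projects/HOST_PROJECT_ID/global/networks/my-gcp",
    # use the private services access connection instead of direct peering
    connect_mode=redis_v1.Instance.ConnectMode.PRIVATE_SERVICE_ACCESS,
)

operation = client.create_instance(
    parent="projects/HOST_PROJECT_ID/locations/us-east1",
    instance_id="my-redis-instance",  # placeholder ID
    instance=instance,
)
print(operation.result())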
I have installed an app with Apache Tomcat on an AWS EC2 instance. I am able to access the Tomcat URL (which is server_name:8080/BOE/BI) from within the AWS instance on Win2016. I also installed IIS on the server.
What configuration do I need to do on the AWS EC2 instance to access the URL from outside the instance, for example from my personal PC?
I also tried disabling the firewalls; it did not help.
You need to look at security groups. They allow you to open ports on your EC2 instance to the outside world.
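For example, you can add an inbound rule for port 8080 to the security group attached to the instance. The group ID and source CIDR below are placeholders; prefer your own IP over 0.0.0.0/0.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the group attached to the instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "IpRanges": [{"CidrIp": "203.0.113.25/32", "Description": "my PC"}],
    }],
)

The same rule can of course be added from the Console under EC2 > Security Groups > Inbound rules.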