Using an AWS S3 File Gateway in EC2, why does it only work in a public subnet and not a private subnet?

Whether I try to create an AWS S3 File Gateway (EC2) in the Management Console or with Terraform, I get the same problem below...
If I launch the EC2 instance in a public subnet, the gateway is created. If I try to launch the gateway in a private subnet (with a NAT gateway, and all ports open inbound and outbound), it won't work. I get...
HTTP ERROR 500
I am running a VPN and can ping the instance's private IP when I use the Management Console. I get the same error code from Terraform on a Cloud9 instance, which can also ping the instance.
Since I intend to share the S3 bucket over NFS, it's important that the instance resides in a private subnet. I'm new to the AWS S3 File Gateway; I have read over the documentation, but nothing clearly states how to do this or why a private subnet would behave differently, so if you have any pointers I could look into, I'd love to know!
For further reference (not really needed), my Terraform testing is mostly based on this GitHub repository:
https://github.com/davebuildscloud/terraform_file_gateway/tree/master/terraform

I was unable to get the AWS console test to work, but I realised my Terraform test was poorly done: I was mistakenly skipping a dependency that established the VPC peering connection to the Cloud9 instance. Once I fixed that, it worked. Still, I would love to know what would be required to get this to work through the Management Console too...
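In case it helps anyone hitting the same HTTP 500 from the console: activation ultimately comes down to an HTTP request against port 80 of the gateway instance, so whatever is performing the activation (the browser session over the VPN, or the Cloud9 instance for Terraform) has to be able to reach that port on the private IP. The sketch below shows that request in Python; the IP and region are placeholders, and it assumes the standard activation-key endpoint that Storage Gateway exposes on port 80.

    import requests

    GATEWAY_IP = "10.0.2.15"         # placeholder: the gateway instance's private IP
    ACTIVATION_REGION = "us-east-1"  # placeholder: the region the gateway will register in

    # The gateway should answer with a redirect whose query string carries the
    # activation key; don't follow the redirect, just inspect it.
    resp = requests.get(
        f"http://{GATEWAY_IP}/?activationRegion={ACTIVATION_REGION}",
        allow_redirects=False,
        timeout=10,
    )
    print(resp.status_code)                  # anything other than a 302 suggests the endpoint is unreachable or unhealthy
    print(resp.headers.get("Location", ""))  # should contain activationKey=... on success

If this request fails from the machine that drives the activation, the console or Terraform will fail the same way, regardless of whether the subnet is public or private.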

Related

EF Core connect from Google Cloud Run to Google Cloud SQL

I have tried these:
Data Source=/cloudsql/*****:asia-southeast2:*****;Initial Catalog=*****;Integrated Security=False;User ID=sqlserver;Password=MyPassword0!;MultipleActiveResultSets=True
The /cloudsql/*****:asia-southeast2:***** part is my instance connection name, as described here.
I also tried the public IP, like this:
Data Source=***.***.***.***;Initial Catalog=*****;Integrated Security=False;User ID=sqlserver;Password=MyPassword0!;MultipleActiveResultSets=True
where the IP address is my SQL instance's public IP, but it is not working.
I have enabled the SQL instance connection from Cloud Run:
How can I fix the connection string using EF Core?
This is the error I got:
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related
or instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid)
You are trying to use Cloud SQL for SQL Server with Cloud Run. If you have a look at the documentation, this connection is not listed as supported.
In reality, the connection is possible, but the Cloud Run service opens a Unix socket to connect to the SQL Server instance, and there is no SQL Server client that supports Unix sockets, so you can't access it that way.
To solve your issue, I recommend following the Private IP section of that page. You can achieve the same configuration with the public IP (skip the serverless VPC connector, go to your Cloud SQL instance, and authorize the network 0.0.0.0/0 to access it), but because you need to open the authorized network that broadly, I don't recommend this option, for security reasons.
EDIT 1
My English isn't the best, so let me explain in more detail!
The best way is to follow the documentation: connect the Cloud SQL private IP to your VPC, use a serverless VPC connector with Cloud Run, and in your code use the private IP in your connection string to access the database.
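For example, assuming a private IP of 10.1.2.3 (a placeholder for your instance's actual private address) and keeping the rest of your settings, the connection string would look something like this:

    Data Source=10.1.2.3;Initial Catalog=*****;Integrated Security=False;User ID=sqlserver;Password=MyPassword0!;MultipleActiveResultSets=True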
But you can also use the public IP, which I don't recommend (see below why), at least in its naive form. In that case you simply use the public IP instead of the private IP in your code. Because you use the public IP, you no longer need the serverless VPC connector on your Cloud Run service (you reach the database over the internet instead of through the VPC).
However, because you go over the internet and Cloud Run is a multi-tenant shared service, you don't know your source IP. On Cloud SQL, you therefore need to allow any IP (0.0.0.0/0) in the authorized networks section to reach your database, which is not a very secure configuration.
Alternatively, you can create a more complex configuration on Cloud Run to use the Cloud SQL public IP securely (but it gets fairly involved). Let me dig into it.
I said previously that Cloud Run is a shared service and you don't control the source IP of outgoing calls (such as the connection to the database). That's true by default, but you can take control of it.
First, you need (again) a serverless VPC connector on your Cloud Run service, and you need to set its egress to ALL (route both public and private traffic through the serverless VPC connector).
Then create a Cloud NAT in your VPC and select, at least, your serverless VPC connector's subnet to be NATed when going to the internet.
Reserve a static public IP in your Cloud NAT configuration.
Now your Cloud Run service egresses through a static public IP. You can authorize only that IP in your Cloud SQL authorized networks, which improves security and prevents anybody else from accessing your Cloud SQL instance.
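As a sanity check, a small handler in the Cloud Run service can print the source IP the outside world sees; a minimal Python sketch, assuming the requests package is available in the container and that api.ipify.org is reachable:

    import requests

    # From inside the Cloud Run container: ask an external service which source IP
    # our outgoing traffic carries. With egress set to ALL and Cloud NAT in place,
    # this should be the static IP you reserved (and authorized on Cloud SQL).
    egress_ip = requests.get("https://api.ipify.org", timeout=10).text
    print("Egress IP seen by the internet:", egress_ip)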

Access AWS S3 from Lambda within Default VPC

I have a Lambda function which needs to access EC2 through SSH, load files, and save them to S3. For that I have put both the EC2 instance and the Lambda function in the default VPC and the same subnet. The problem is that the function can connect to EC2 but not to S3.
It's been killing me since morning: when I remove the VPC settings it uploads the files to S3, but then the connection to EC2 is lost.
I tried to add a NAT gateway to the default VPC (although I am not sure whether I did it correctly, because I am new to this), but it didn't do anything.
I am confused because my EC2 instance, which is in the same VPC and subnet, can access the internet, but the Lambda function is not able to access S3.
I am not sure how to proceed.
Please help!!!
The Lambda function will not get a public IP assigned to it from within a VPC, so it will never have direct Internet access like your EC2 instance has. You will have to move the Lambda function to a private subnet with a route to a NAT Gateway in order to give it Internet access. It sounds like you attempted this but configured it incorrectly.
If all the Lambda function needs to access is S3, then it is easier to set up a VPC Endpoint (AWS PrivateLink) in your VPC.
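A gateway-type VPC endpoint for S3 only takes a couple of API calls to set up; here is a minimal boto3 sketch, where the region, VPC ID, and route table ID are placeholders you would replace with your own values:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # A Gateway endpoint for S3 adds an S3 route to the given route tables, so
    # Lambda functions in subnets using those tables can reach S3 without a NAT.
    resp = ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",            # placeholder: your default VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder: route table used by the Lambda subnet
    )
    print(resp["VpcEndpoint"]["VpcEndpointId"])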

Where can I find a node's public IP on an AKS-made cluster?

I've been asked by Azure support to open the question here, though I think this is an AKS bug.
When deploying a cluster, each node's node.status.addresses should by design show an ExternalIP or Hostname address for the node, but in an AKS-made cluster the Hostname address contains the VM name instead. This makes it really hard to find node public IPs, which we need for various reasons.
Is there any standard or nonstandard way to get a node's public IP?
There is a public IP exposed for the Azure Kubernetes Service, but it does not belong directly to a node. The Kubernetes nodes are not exposed to the internet with a public IP.
The AKS nodes are created in a VNet on Azure and reach, or can be reached through, the Azure Load Balancer, which has a public IP. The VNet is a private network resource in Azure, and it comes in two networking types, Basic and Advanced. For more details, see Network concepts for applications in Azure Kubernetes Service (AKS).
AKS nodes are not exposed to the public internet and therefore will not have an exposed public IP.
With that said, I’ve been investigating an issue where nodes either lose or fail to ever get an internal IP. We (AKS) have implemented an initial fix, which restarts kubelet, and does seem to at least temporarily mitigate the lack of an internal IP. There are ongoing efforts upstream to find and fix the real root cause.
I don’t think I’ve come across the scenario of a node not having a hostname address though. I’m going to log a backlog item to investigate any clusters that appear to be experiencing this symptom. I can’t promise an immediate fix, but I am definitely going to look into this further early next week.
There is a preview of a feature enabling a public IP per node. Please see https://learn.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-in-a-node-pool
In common scenarios, an AKS cluster's nodes will be behind a Load Balancer, which in turn has a public IP. You can get the public IP by going to your AKS cluster -> Services and ingresses -> and looking for a Service of type LoadBalancer. That Service will have a public IP.
You can also configure the cluster so each node has a public IP; you can then see the details in the Node pools tab.
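If you prefer to query this from the Kubernetes API instead of the portal, here is a minimal Python sketch using the official kubernetes client; it assumes your kubeconfig already points at the AKS cluster:

    from kubernetes import client, config

    config.load_kube_config()  # assumes kubeconfig for the AKS cluster is already set up
    v1 = client.CoreV1Api()

    # Addresses reported for each node (InternalIP and Hostname; ExternalIP only
    # appears if the per-node public IP feature is enabled for the node pool).
    for node in v1.list_node().items:
        for addr in node.status.addresses:
            print(node.metadata.name, addr.type, addr.address)

    # Public IPs of LoadBalancer Services (the addresses clients actually hit).
    for svc in v1.list_service_for_all_namespaces().items:
        if svc.spec.type == "LoadBalancer" and svc.status.load_balancer.ingress:
            print(svc.metadata.name, svc.status.load_balancer.ingress[0].ip)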

Accessing memorystore in Shared VPC

I have created a Memorystore instance with IP 10.190.50.3 (and this is in us-east1).
I have a Shared VPC setup named my-gcp, and I authorised the same network when creating the Memorystore instance.
In the Shared VPC I have a service project dev with a Windows machine (10.190.5.7). When I try to connect to Memorystore from that Windows machine, I am not able to reach the Memorystore instance.
I have also enabled egress traffic to 10.190.50.3 from all instances of the my-gcp VPC. This VPC is set up in us-east4.
tracert and ping to 10.190.50.3 from the Windows machine are also not working.
The Memorystore instance is created in the host project of the VPC.
I found the public documentation updated recently:
1. The connecting client must be on the same network and in the same region (a different zone within the same region is also fine) as your Cloud Memorystore for Redis instance.
2. If you are using a Shared VPC network across multiple projects, you can connect to a Redis instance that is deployed on the Shared VPC network in the host project. Connecting to a Redis instance that is deployed on a Shared VPC network in a service project is not supported.
Also here is the link on how to Connect to the Redis instance from a Compute Engine VM.
Unfortunately, accessing Memorystore from a service project is currently not supported in Shared VPC.
Memorystore can now be deployed on the shared network using a private services access connection. Refer to this link for more details.
Steps below:
Verify or establish a private services access connection for the network in the host project that you use to create your Redis instance.
Make sure the Service Networking API is enabled for both the host project and the service project.
Follow the steps from Creating a Redis instance on a VPC network, but make the following modifications:
a. Complete the optional step for setting up a private services access connection.
b. Use the Authorized VPC Network dropdown to select the Shared VPC network from the host project. It is listed under Shared VPC Networks. (A quick connectivity check from the service-project VM is sketched after these steps.)
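Once the instance is reachable from the service project, a quick way to confirm connectivity from the VM is a Redis PING; a minimal Python sketch using the redis package, with the host IP taken from the question:

    import redis

    # Memorystore private IP from the question; 6379 is the default Redis port.
    r = redis.Redis(host="10.190.50.3", port=6379, socket_timeout=5)
    print(r.ping())  # True means the VM can reach the Memorystore instance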

How can I make Apache on an Amazon EC2 Linux box use the Elastic IP instead of the private IP?

I've migrated a website to Amazon EC2 that hooks into a service we use, which is installed on another server (not on Amazon). Access to that service's API is IP-restricted and done by sending XML data using *http_build_query* & *stream_context_create* in PHP.
If I want to connect to the service from a new server, I need to ask the vendor to add the new IP first. I did that by sending them the Elastic IP, but it doesn't work.
While trying to debug, I noticed that the output for $_SERVER['SERVER_ADDR'] is the private IP of the ec2 instance.
I assume that the server on the other side is receiving the same data, so it tries to authenticate the private IP.
I've asked the vendor to allow access from the private IP as well; it's not implemented yet, so I'm not sure whether that solves the problem, but as far as I understand their API, it will then try to pass data back to the IP it was contacted from, which shouldn't be possible because that server is outside the Amazon cloud.
I might be missing something really obvious here. I added a command to rc.local (running CentOS on my EC2 instance) that associates the Elastic IP with the server upon startup using ec2-associate-address, and this seemed to get a MySQL connection to another outside server working, but no luck with the above-mentioned API.
To rule out one thing: the API is accessed over HTTPS, with ports 80 and 443 (and a MySQL port) enabled in the security groups and tested. The domain and SSL are working fine.
Any hint is highly appreciated; I have searched a lot already but couldn't find anything useful so far.
It sounds like both IPs (private and elastic) are active in your VM. Check by running ifconfig -a. If that's what's happening then the IP that gets used for external traffic will depend on the remote address and your VM's routing table. It could even vary from one connection to the next.
If that's what's going on, then the quickest fix would be to ifconfig down the interface that has the private address; that should leave only the elastic address for all external connections. If that resolves the problem, you can script something that downs the private IP automatically after the Elastic IP has been made active, or, if the Elastic IP will be permanently assigned to this VM and you really don't need the private IP, you can permanently disassociate the private IP from this VM.
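Before taking interfaces down, it may be worth confirming which source IP the vendor actually sees for outbound traffic, and whether the Elastic IP is really associated with the instance. A minimal Python sketch, assuming boto3 credentials are configured and with the region as a placeholder:

    import boto3
    import requests

    # 1. The public IP the outside world sees for traffic leaving this instance.
    public_ip = requests.get("https://checkip.amazonaws.com", timeout=10).text.strip()
    print("Outbound public IP:", public_ip)

    # 2. The Elastic IP associations as EC2 records them.
    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
    for addr in ec2.describe_addresses()["Addresses"]:
        print(addr.get("PublicIp"), "associated with", addr.get("InstanceId"))

If the first line already prints the Elastic IP, the vendor should be seeing that address too, and the problem most likely lies elsewhere.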