I am trying to add a file to an S3 bucket using NiFi, with an IAM role configured for authentication. I am getting this error: PutS3Object Failed to put StandardFlowFileRecord to Amazon S3 due to Unable to execute HTTP request: Connect to sts.amazonaws.com:443 failed: connect timed out com.amazonaws.SdkClientException.
My NiFi instance is installed on an EC2 instance, and from that EC2 instance I am able to transfer files to the S3 bucket using the AWS CLI.
This is an issue with reaching the STS service: the "client" (the NiFi processor module) uses the public internet endpoint for STS unless you have set up a VPC endpoint for STS.
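If the NiFi host sits in a subnet without a working path to the public STS endpoint, one option is an STS interface VPC endpoint. Below is a minimal sketch using boto3; the region, VPC, subnet, and security group IDs are placeholders, not values from the question:

import boto3

# Placeholder IDs -- replace with your own VPC, subnet(s), and security group.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sts",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolves the regional STS name to the endpoint inside the VPC
)
print(response["VpcEndpoint"]["VpcEndpointId"])

Note that private DNS covers the regional name (sts.us-east-1.amazonaws.com here); if the SDK inside the processor is calling the global sts.amazonaws.com, you may also need to point it at the regional STS endpoint.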
Assuming everything is allowed and configured correctly at the firewall, in access policies, and in the SSL Context Service, the timeout can often be resolved by increasing the timeout values in the processor; the default settings are sometimes not sufficient.
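Before raising the timeouts, it can also be worth confirming from the NiFi host that the STS endpoint is reachable at all; a timeout in the quick sketch below points to a routing/egress problem rather than a too-small processor timeout. The hostnames are just the global endpoint and one sample regional endpoint:

import socket

for host in ("sts.amazonaws.com", "sts.us-east-1.amazonaws.com"):
    try:
        # Open and immediately close a TCP connection to port 443.
        socket.create_connection((host, 443), timeout=5).close()
        print(host, "reachable")
    except OSError as exc:
        print(host, "NOT reachable:", exc)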
Related
I have .NET Core code that writes a file to an AWS S3 bucket. This code is called from Hangfire as a job. It works fine on my machine, but when I deploy the solution to Cloud Foundry as a Docker container, the code throws a "connection refused" error at the line where it connects to AWS. The exception doesn't contain much information.
I did set up the proxy environment in the Dockerfile, and I can verify the proxy by SSHing into the container in Cloud Foundry. When I SSH in and run a curl command against the AWS S3 bucket, it returns an "Access Denied" error, which confirms the Docker container has no firewall issue reaching AWS.
I am not able to figure out why the Hangfire job can't connect to AWS. Any suggestions?
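Since the app itself is .NET, the snippet below is only a language-neutral sketch of the same diagnostic, run from inside the container: check that the job's process actually sees the proxy variables set in the Dockerfile, and that an HTTPS request to the S3 endpoint returns some HTTP status (even 403) rather than "connection refused". The endpoint URL is a generic example.

import os
import requests  # honours HTTP_PROXY / HTTPS_PROXY from the environment

# Are the proxy settings visible in *this* process, not just in the SSH session?
for var in ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY", "http_proxy", "https_proxy"):
    print(var, "=", os.environ.get(var))

# Any HTTP status here (even 403 Access Denied) means the network path works;
# "connection refused" would point back at proxy/egress configuration.
resp = requests.get("https://s3.amazonaws.com/", timeout=10)
print(resp.status_code)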
Whether I try to create an AWS S3 File Gateway (EC2) in the Management Console or with Terraform, I get the same problem below...
If I launch the EC2 instance in a public subnet, the gateway is created. If I try to launch the gateway in a private subnet (with NAT, and all ports open in and out), it won't work. I get...
HTTP ERROR 500
I am running a VPN and am able to ping the instance's private IP when I use the Management Console. I get the same error code in Terraform on a Cloud9 instance, which is also able to ping the instance.
Since I am intending to share the S3 bucket over NFS, it's important that the instance reside in a private subnet. I'm new to the AWS S3 File Gateway; I have read over the documentation, but nothing clearly states how to do this or why a private subnet would behave differently, so if you have any pointers I could look into, I'd love to know!
For further reference (not really needed), my Terraform testing is mostly based on this GitHub repository:
https://github.com/davebuildscloud/terraform_file_gateway/tree/master/terraform
I was unable to get the AWS Console test to work, but I realised my Terraform test was poorly done: I was mistakenly skipping over a dependency that established the VPC peering connection to the Cloud9 instance. Once I fixed that, it worked. Still, I would love to know what would be required to get this working through the Management Console too...
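For what it's worth, the reachability requirement becomes visible if you script the activation yourself: the activation key has to be fetched over HTTP from the machine performing the activation, which is why the Console/Terraform host needs a network path (VPN, peering, Cloud9, ...) to a gateway sitting in a private subnet. A rough sketch with requests and boto3; the IP, region, and gateway name are placeholders, and parsing the key out of the redirect is an assumption about how the activation endpoint responds:

import re
import boto3
import requests

gateway_ip = "10.0.1.25"   # placeholder private IP of the gateway instance
region = "us-east-1"       # placeholder region

# Fetch the activation key; this HTTP call is made from *your* machine to the gateway,
# so it fails if there is no route into the private subnet.
resp = requests.get(
    f"http://{gateway_ip}/?activationRegion={region}",
    allow_redirects=False,
    timeout=10,
)
activation_key = re.search(r"activationKey=([^&]+)", resp.headers["Location"]).group(1)

# Register the gateway with the Storage Gateway service.
boto3.client("storagegateway", region_name=region).activate_gateway(
    ActivationKey=activation_key,
    GatewayName="my-file-gateway",   # placeholder name
    GatewayTimezone="GMT",
    GatewayRegion=region,
    GatewayType="FILE_S3",
)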
I'm writing an ASP.NET Core app that can be deployed to either Azure or AWS. The Microsoft libraries for accessing Azure logging / configuration sources are well-behaved and fail silently if they're not in an appropriate environment. However, the AWS SDK blows up with the exception "Unable to get IAM security credentials from EC2 Instance Metadata Service" if the providers are configured outside of AWS. Are there some environment variables I can look at to determine whether my application is running in AWS, so I can skip those providers?
All EC2 instances (and therefore all AWS hosting methods) have access to an instance metadata HTTP service running on a link-local address at http://169.254.169.254/latest/meta-data/. The best approach I can come up with is to make a call to this service with a short timeout; if the call fails, the process is not hosted on an EC2 instance.
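For example, something along these lines (sketched in Python here rather than C#; the half-second timeout and the IMDSv2 token step are the only additions):

import requests

IMDS = "http://169.254.169.254"

def running_on_ec2(timeout=0.5):
    # Best-effort check: True only if the instance metadata service answers.
    try:
        # IMDSv2: request a short-lived session token first.
        token = requests.put(
            f"{IMDS}/latest/api/token",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
            timeout=timeout,
        ).text
        resp = requests.get(
            f"{IMDS}/latest/meta-data/instance-id",
            headers={"X-aws-ec2-metadata-token": token},
            timeout=timeout,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False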
My Airflow application is running on an AWS EC2 instance that also has an IAM role. Currently I am creating the Airflow S3 connection using a hardcoded access key and secret key, but I want my application to pick up the AWS credentials from the instance itself.
How can I achieve this?
We have a similar setup: our Airflow instance runs inside containers deployed on an EC2 machine. We set up the policies for S3 access on the EC2 machine's instance profile. You don't need to pick up credentials on the EC2 machine, because the instance profile should already grant all the permissions you need. On the Airflow side, we only use the aws_default connection; in its extra parameter we only set the default region, and there are no credentials.
Here is a detailed article about instance profiles: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
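To see that nothing else needs to be configured, you can run something like the sketch below on the instance itself: boto3 (which Airflow's AWS hooks use under the hood) walks its default credential chain and ends up at the instance profile. The bucket name is a placeholder:

import boto3

# No keys anywhere: on an EC2 instance with an instance profile, boto3 resolves
# temporary credentials from the instance metadata service automatically.
creds = boto3.Session().get_credentials()
print("credential source:", creds.method if creds else None)  # e.g. "iam-role"

# The same default chain lets S3 calls work without any stored secrets.
boto3.client("s3").list_objects_v2(Bucket="my-example-bucket", MaxKeys=1)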
The question is answered, but for future reference, it is possible to do this without relying on aws_default, just via environment variables. Here is an example that writes logs to S3 using an AWS connection in order to benefit from IAM:
AIRFLOW_CONN_AWS_LOG="aws://"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID=aws_log
AIRFLOW__CORE__REMOTE_LOGGING=true
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER="s3://path to bucket"
I'm trying to evaluate Spinnaker on AWS.
I followed the documentation to set up the VPC in AWS. I'm able to run Spinnaker at http://localhost:9000, but when I click on the new application in the Spinnaker UI, I see this error in the terminal:
channel 7: open failed: connect failed: Connection refused.
I'm unable to create a new application.
It is possible that port 8084 is blocked. The Spinnaker UI (Deck) needs to be able to communicate with the API (Gate) over port 8084.
In AWS, try opening port 8084 the same way you did for port 9000.
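If the instance's security group is under your control, the rule can be added in the console, with the CLI, or, as a sketch here, with boto3; the group ID and source CIDR are placeholders (ideally restrict the source to wherever you reach Deck from rather than opening it wide):

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",                 # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8084,
        "ToPort": 8084,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Deck -> Gate"}],
    }],
)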