I am running Logstash on Kubernetes and using the S3 input plugin to read logs from S3 and send them to Elasticsearch. The pod is going into CrashLoopBackOff with the error below. Can someone help me with this issue?
error:
[2020-05-06T05:20:53,995][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>42, "name"=>"[main]<s3", "current_call"=>"uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/protocol.rb:181:in `wait_readable'"}]
{"thread_id"=>41, "name"=>"[main]>worker7", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:262:in `block in start_workers'"}]}}
[2020-05-06T05:20:48,651][ERROR][org.logstash.execution.ShutdownWatcherExt]
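For context, the kind of Logstash pipeline being described looks roughly like this (the bucket, region, and Elasticsearch host below are placeholders, not the actual values from the setup above):

input {
  s3 {
    # Placeholder bucket/region/prefix to read log objects from
    bucket => "my-log-bucket"
    region => "us-east-1"
    prefix => "logs/"
  }
}
output {
  elasticsearch {
    # Placeholder Elasticsearch endpoint and index pattern
    hosts => ["http://elasticsearch:9200"]
    index => "s3-logs-%{+YYYY.MM.dd}"
  }
}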
I have an EKS cluster set up, and in a pod I'm downloading S3 bucket objects. I have added a service account with a role granting S3 full access and KMS, but I'm unable to download.
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
During handling of the above exception, another exception occurred:
Things I have tried:
Exec into the pod and run the Python code: python3 s3_downloads.py
In this script, configuring boto3 with an explicit access key and secret key works fine:
s3 = boto3.resource('s3', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
Making buckets public.
Even though I have attached the proper role to the service account, I'm unable to download. Am I missing any configuration? Any help would really be appreciated.
In my case, the issue was that I was missing the session creation. Here is my original code:
client = boto3.client('s3')
The fixed code:
session = boto3.Session()
s3 = session.client('s3')
I was executing this code on a pod running on an EKS cluster, and the missing line was preventing me from using the ServiceAccount role I had defined for this deployment.
There are more details about boto3 and session management in this post: https://ben11kehoe.medium.com/boto3-sessions-and-why-you-should-use-them-9b094eb5ca8e
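As a usage sketch (the bucket, key, and destination path are placeholders, not from the answer above), the session-based client then behaves like any other boto3 client and picks up the web-identity credentials mounted for the ServiceAccount:

import boto3

# Creating the Session explicitly ensures the credential chain (including the
# ServiceAccount's web-identity token) is resolved for this session.
session = boto3.Session()
s3 = session.client('s3')

# Placeholder bucket/key/destination for illustration only.
s3.download_file('my-bucket', 'path/to/object.txt', '/tmp/object.txt')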
I have the following situation:
The application uses S3 to store data in Amazon and is deployed as a pod in Kubernetes. Sometimes a developer messes up the S3 access credentials (e.g. user/password), and the application fails to connect to S3, but the pod still starts normally and replaces the previous pod version that worked fine (since all readiness and liveness probes pass). I thought of adding an S3 check to the readiness probe that executes a HeadBucketRequest on S3; if it succeeds, the application is able to connect to S3. The problem is that these requests cost money, and I really only need them when the pod starts.
Are there any best-practices related to this one?
If you (quote) "... really need them [the probes] only on start of the pod" then look into adding a startup probe.
In addition to their primary use case of handling pods that take a longer time to start, startup probes make it possible to verify a condition only at pod startup time.
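A minimal sketch of what that could look like, assuming a check-s3.sh script baked into the image that performs the HeadBucket call (the image name, script name, and thresholds are placeholders):

containers:
  - name: app
    image: IMAGE_NAME
    startupProbe:
      exec:
        # Placeholder script performing the one-time S3 HeadBucket check
        command: ["/bin/sh", "-c", "/check-s3.sh"]
      failureThreshold: 5
      periodSeconds: 10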
Readiness and liveness probes are for checking the health of the pod or container while it is running. Your scenario is quite unusual, but readiness and liveness probes won't work well for it, as they fire on an interval, which costs money.
In this case you could use a lifecycle hook instead:
containers:
  - image: IMAGE_NAME
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "script.sh"]
This will run the hook when the container starts; you can keep the shell script inside the pod or bake it into the image.
Inside the shell script you can write the logic: if the check returns a 200 response, move ahead and let the container start.
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
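A hedged sketch of what such a script.sh could contain, assuming the AWS CLI is available in the image and the bucket name is passed in a BUCKET environment variable (both assumptions, not stated in the answer above):

#!/bin/sh
# If the postStart hook exits non-zero, the kubelet kills the container,
# so an unreachable bucket prevents the container from staying up.
if aws s3api head-bucket --bucket "$BUCKET"; then
  echo "S3 bucket reachable, continuing startup"
else
  echo "S3 bucket not reachable" >&2
  exit 1
fi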
I am trying to upgrade my EKS cluster from 1.15 to 1.16 using the same CI pipeline that created the cluster, so the credentials should not be an issue. However, I am receiving an AccessDenied error. I am using the eksctl upgrade cluster command to upgrade the cluster.
info: cluster test-cluster exists, will upgrade it
[ℹ] eksctl version 0.33.0
[ℹ] using region us-east-1
[!] NOTE: cluster VPC (subnets, routing & NAT Gateway) configuration changes are not yet implemented
[ℹ] will upgrade cluster "test-cluster" control plane from current version "1.15" to "1.16"
Error: AccessDeniedException:
status code: 403, request id: 1a02b0fd-dca5-4e54-9950-da29cac2cea9
My eksctl version is 0.33.0.
I am not sure why the same CI pipeline that created the cluster is now throwing an Access Denied error when trying to upgrade it. Are there any permissions I need to add to the IAM policy for the user? I don't find anything in the prerequisites document, so please let me know what I am missing here.
I figured out that the error was due to a missing IAM permission.
I used --verbose 5 to diagnose this issue.
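For anyone hitting the same thing: --verbose 5 shows which API call was denied. As an illustration only (the exact action that was missing is an assumption, not stated above), the kind of statement to add to the user's IAM policy looks like:

{
  "Effect": "Allow",
  "Action": [
    "eks:UpdateClusterVersion",
    "eks:DescribeUpdate"
  ],
  "Resource": "*"
}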
I'm using a product called Localstack to mock Amazon S3 locally, which is serving as a streaming file sink for a Flink job.
In the run logs, I can see that Flink disregards Localstack and attempts to contact Amazon S3.
Received error response: org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Service Unavailable
Retrying Request: HEAD https://s3.amazonaws.com /testBucket/
In flink-conf.yaml, I've specified the following configuration properties:
s3.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
s3.buffer.dir: ./tmp
s3.endpoint: localhost:4566
s3.path.style.access: true
s3.access-key: ***
s3.secret-key: ***
Why might Flink disregard the s3.endpoint?
Your config is almost right; you will need to add http:// to the endpoint when using Localstack:
s3.endpoint: http://localhost:4566
and maybe try with extra dummy secrets as environment variables:
AWS_ACCESS_KEY_ID=foo
AWS_SECRET_ACCESS_KEY=bar
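Putting both suggestions together, the relevant part of flink-conf.yaml would look like this (foo and bar are just dummy placeholder credentials):

s3.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
s3.buffer.dir: ./tmp
s3.endpoint: http://localhost:4566
s3.path.style.access: true
s3.access-key: foo
s3.secret-key: bar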
I am using Jenkins, and in a post-build step I want to push artifacts to S3,
but I am getting the following error:
Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: E9EF9BE1E1D0C011), S3 Extended Request ID: wsyJXgV9If7Yk/GbgI486HrQ5RFZbvnQt/haOBJq3nZ6aLFbWEvKmnHE9ly+05eOab2qTPOQjZU=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1275)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:873)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:576)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:362)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:328)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:307)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3659)
at com.amazonaws.services.s3.AmazonS3Client.initiateMultipartUpload(AmazonS3Client.java:2651)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.initiateMultipartUpload(UploadCallable.java:350)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInParts(UploadCallable.java:178)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:121)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I tried with the latest Java 1.8 and the latest Java 1.7, but I keep getting this error. I also tried the S3 publisher plugin 0.8 and 0.10.1.
Project Config :
Plugin Config :
You're getting a 403 (forbidden) error, which indicates that you're either missing valid credentials for the bucket, or that the bucket's security settings, such as server-side encryption (SSE), are not being respected.
First, update to the latest version of the S3 publisher plugin - it's added support for SSE, and if your bucket needs it enabled, you can check the box for "Server side encryption" in your pipeline configuration.
Second, you'll need to modify the S3 profile in the Jenkins "Configure System" form. In your question, the highlighted field for your access key is empty, and that must be provided, along with the secret key component.
Once you've entered the configuration correctly and verified that bucket requirements are satisfied, you should be in the clear for pushing your objects to S3.
I had the same issue when I tried to push artifacts to an S3 bucket from Jenkins. I later found out that it threw errors because I was providing the wrong bucket in the Jenkins config.