Fluent Bit logs are creating over 1,000 streams, how to merge logs if same application but new pod name - amazon-cloudwatch

Currently in our EKS environment we use Fluent Bit to collect logs and push them to CloudWatch. We noticed that when a service is redeployed after new code changes, a new log stream is created; this has led to over 1,000 log streams. We are wondering if we can merge the log streams into one log stream per application.
E.g.:
Application A has pod A-72371723 running, and a log stream is created for it.
A developer pushes new code, Application A is redeployed with pod name A-44444, and another log stream is created. We want to merge those two log streams into just one, named after the application.
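One way to get a single stream per application (a sketch, not something stated in the question) is to key the stream name on stable Kubernetes metadata such as the app label rather than the pod name. This assumes the aws-for-fluent-bit cloudwatch output with a kubernetes filter enriching the records; the region, log group name, and label key are placeholders:

```
[FILTER]
    Name                kubernetes
    Match               kube.*
    Merge_Log           On

[OUTPUT]
    Name                cloudwatch
    Match               kube.*
    region              us-east-1
    log_group_name      /eks/my-cluster/applications
    # Key the stream on a stable label instead of the pod name, so every pod
    # of the same application writes to one stream. The exact variable path
    # depends on the metadata your kubernetes filter attaches and on the
    # plugin version in use.
    log_stream_name     $(kubernetes['labels']['app'])
    auto_create_group   true
```

All pods of the application then interleave in one stream, while the pod name stays inside each record (added by the kubernetes filter), so individual pods can still be distinguished by filtering on the record rather than the stream name.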

Related

AWS AppRunner start command running multiple times

I am trying to run an Apache server on AWS App Runner from my source code repository, with Corretto 11 as the runtime, using the start command below:
https://github.com/myanees284/apprunner-jmeter/blob/main/run_apacheee.sh
I can see that the commands in the above .sh get executed and the service gets deployed successfully and shows as running. However, after the deployment and health check, the commands are executed repeatedly.
Application log is here: https://gist.github.com/myanees284/db233e7e0d71eba4643f56c2e1bf87ec#file-application-logs2022-08-22t06_29_55-322z-2022-08-23t06_29_55-322z-json-L281
I am unable to understand why the code is executed multiple times when the service is already running.
After your start script exits, the container stops as well. That's why App Runner starts a new container afterwards, which runs your start command again.
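In other words, the start command has to stay in the foreground for the lifetime of the service. A minimal sketch of how the end of a script like run_apacheee.sh could look; the actual script contents are not shown here, so the server command is only a placeholder:

```sh
#!/bin/sh
# ... any one-time setup from run_apacheee.sh would go here ...

# The last command must block for as long as the service should run.
# "httpd -DFOREGROUND" is only a placeholder: replace it with whatever server
# your script starts, run in the foreground rather than as a daemon.
exec httpd -DFOREGROUND
```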

AWS EC2 Instance Profile for S3 permissions inconsistent

Background: I'm writing an automated deployment script to deploy a Ruby on Rails application to AWS on an EC2 instance, using S3 as the storage for ActiveStorage. My script creates an instance profile/role and attaches it to the EC2 instance on creation. My script uses the Ruby SDK for AWS.
Sometimes when my script runs, it works great (which tells me my configuration is correct). Sometimes it throws the following exception:
/home/ubuntu/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/aws-sigv4-1.2.1/lib/aws-sigv4/signer.rb:613:in `extract_credentials_provider': missing credentials, provide credentials with one of the following options: (Aws::Sigv4::Errors::MissingCredentialsError)
- :access_key_id and :secret_access_key
- :credentials
- :credentials_provider
I generally have success about 9 times out of 10 using a t3a.micro or t3.micro instance. I usually have a failure 9 times out of 10 using a t3a.nano or t3.nano instance.
It sure seems like there is something eventually consistent about these instance profiles, but I can't find anything in the documentation. What's going on, and what can I do to make this succeed consistently?
Thank you.
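If the intermittent failures really are caused by the freshly attached instance profile not being visible to the instance metadata service yet (as the question suspects), one workaround sketch with the Ruby SDK is to hand the client an instance-profile credentials provider with extra retries instead of relying on the first metadata lookup; the region and bucket name are placeholders:

```ruby
require "aws-sdk-s3"

# Ask the instance metadata service for the role credentials, retrying a few
# times in case the freshly attached instance profile is not visible yet.
creds = Aws::InstanceProfileCredentials.new(retries: 10, http_open_timeout: 2.5)

# Pass the provider explicitly instead of relying on the default credential
# chain resolving on the first try. Region and bucket are placeholders.
s3 = Aws::S3::Client.new(region: "us-east-1", credentials: creds)
s3.list_objects_v2(bucket: "my-activestorage-bucket")
```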

How does Ambari detect a service state

I'm adding a new custom service to Ambari.
I have successfully created the service and installed it in the Ambari web UI. After starting the master component of my new service, Ambari claims that the master is in stopped status; however, the master is running successfully on the intended node and I can use its API.
I wonder how Ambari checks a component status?
Does it use the status function that I have provided in the component definition? I don't see any calls to my status function in the Ambari logs.
Or does it use the PID file? My component does not have a PID file.
@TailofGodzilla (cool name btw), when I make custom services, I start with existing open-source examples and then finally create management packs. You can easily reverse engineer these, including the service status function.
I checked 3 of these services (Hue, ELK, NiFi) and all of them use a PID file, with entries for the status function and a status_params file.
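For reference, a minimal sketch of what such a PID-file-based status function can look like in an Ambari service script; the class, module, and PID-file names are placeholders, and the real examples live in the Hue/ELK/NiFi management packs mentioned above:

```python
# master.py - the component script referenced from the service's metainfo.xml
from resource_management.libraries.script.script import Script
from resource_management.libraries.functions.check_process_status import check_process_status


class MyMaster(Script):
    def status(self, env):
        import status_params
        env.set_params(status_params)
        # Raises ComponentIsNotRunning when the PID file is missing or the
        # process it points to is gone, which Ambari reports as "stopped".
        check_process_status(status_params.my_master_pid_file)


if __name__ == "__main__":
    MyMaster().execute()
```

If a component has no PID file, the status function has to verify liveness some other way (for example by probing the component's API) and only raise ComponentIsNotRunning when that check fails.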

AKS API Load testing error: Premature end of Content-Length delimited message body

While load testing, after some successful responses from the API, JMeter records errors:
'Premature end of Content-Length delimited message body'.
From the logs inside the code, the response seems to complete normally.
The app is deployed on AKS with nginx/1.15.10 ingress controllers. The app consists of 4 separate APIs (one master calling the 3 others). The APIs are built in Flask with Connexion and run in a WSGIContainer on a Tornado HTTPServer.
Another confusing factor is that the app is deployed as two instances on the same AKS cluster. One deployment does not return errors and the other does.
What could be causing the error?
I would suggest limiting your testing scope:
1) Target the application directly (bypassing the k8s svc and ingress controller). Ensure you target each app running on the two different nodes. Do you still see the issue?
2) Target the app service directly (bypassing the ingress controller). Ensure you target each app running on the two different nodes. Do you still see the issue?
3) Target the app using its ingress. Ensure you target each app running on the two different nodes. Do you still see the issue?
Based on those results, we should be able to pinpoint the source of your issue more precisely; one way to exercise each level is sketched below.
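A rough sketch of those three levels from a test machine, assuming the Flask APIs listen on port 5000; the pod, service, and host names are placeholders:

```sh
# 1) Straight to one pod, bypassing the Service and the ingress controller
#    (run the port-forward in a separate terminal; it blocks)
kubectl port-forward pod/my-api-pod-abc123 8080:5000
curl -v http://localhost:8080/health

# 2) Through the Service, still bypassing the ingress controller
kubectl port-forward svc/my-api-svc 8080:80
curl -v http://localhost:8080/health

# 3) Through the nginx ingress, the same path the JMeter test uses
curl -v https://my-api.example.com/health
```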

Android service needs to periodically access internet

I need to access the internet periodically (every 5 minutes or so) to update a database with a background service. I have tried the following using Android 8 or 9:
Use a boot receiver to start a JobService.
In onStartJob of the class that extends JobService, create a LocationListener and request a single location update.
Have onStartJob schedule the job again to run in 5-10 minutes.
Return true from onStartJob.
In the onLocationChanged of the LocationListener, write to a local file, and start a thread that makes a request to a PHP script to update the database.
Everything works fine while the underlying process is running. When the process dies, the service keeps periodically updating the local file, but the URLConnection.getResponseCode() now throws an exception - java.net.ConnectException: failed to connect to ...
Is there a way to get around this using the above approach? If not, how can I have a background service access the internet even after the underlying process dies?
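For reference, a minimal sketch of the scheduling step described above, with the job constrained to run only when a network is connected; the job id, the UpdateJobService class, and the 5-10 minute window are assumptions, and this on its own does not solve the dead-process case:

```java
import android.app.job.JobInfo;
import android.app.job.JobScheduler;
import android.content.ComponentName;
import android.content.Context;

public final class UpdateJobScheduler {
    private static final int JOB_ID = 42; // arbitrary placeholder id

    public static void scheduleNext(Context context) {
        JobInfo job = new JobInfo.Builder(JOB_ID,
                new ComponentName(context, UpdateJobService.class))
                // Only run when some network is connected, so the HTTP call
                // is not attempted while the device is offline.
                .setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY)
                // Ask for the next run roughly 5-10 minutes from now.
                .setMinimumLatency(5 * 60 * 1000L)
                .setOverrideDeadline(10 * 60 * 1000L)
                .build();

        JobScheduler scheduler =
                (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
        scheduler.schedule(job);
    }
}
```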