Azure Container Instance is immediately killed on Startup

I am trying to run an Azure container instance, but it appears to be getting killed the second I run it. This works fine in two other resource groups, but not in my production resource group, where I see the following:
In events, I see 'Successfully pulled image selenium/standalone-chrome:latest' with count 1, then 'Started container', and then 'Killing container' with count 31. The times for started and killed are the same.
In logs, it just says 'No logs available'.
The metrics for CPU and memory on the container never show any change from zero.
I looked at this article, but the proposed solution didn't work: Azure Container Group Instance. I have tried adding both an empty directory volume and 2 GB of RAM, as advised here: https://github.com/SeleniumHQ/docker-selenium, but nothing works.
This is the code I am using to create the container:
containerGroup = await azure.ContainerGroups.Define(containerName)
.WithRegion("West Europe")
.WithExistingResourceGroup(configuration.ContainerResourceGroup)
.WithLinux()
.WithPublicImageRegistryOnly()
.WithEmptyDirectoryVolume("devshm")
.DefineContainerInstance(containerName)
.WithImage("selenium/standalone-chrome")
.WithExternalTcpPorts(4444)
.WithVolumeMountSetting("devshm", "/dev/shm")
.WithMemorySizeInGB(2)
.Attach()
.WithDnsPrefix(configuration.AppServiceName + "container")
.WithRestartPolicy(ContainerGroupRestartPolicy.OnFailure)
.CreateAsync(cancellationToken);
How do I debug what is going wrong?
What is wrong with the container?

In case this helps someone: I renamed the "containerName" parameter in the above example from myinstance to myinstance1 and changed the region from West Europe to UK South. This fixed the issue. I can only think that Azure caches instances somehow to reduce start-up times, and the cached image I was using was poisoned somehow.

One issue could be the restart policy; have a look at the restart policy troubleshooting on Microsoft's ACI troubleshooting page. According to that page, under the "Container continually exits and restarts (no long-running process)" header:
Container groups default to a restart policy of Always, so containers
in the container group always restart after they run to completion.
You may need to change this to OnFailure or Never if you intend to run
task-based containers. If you specify OnFailure and still see
continual restarts, there might be an issue with the application or
script executed in your container.
In your case you may need to adjust the code as follows, using WithStartingCommandLine:
containerGroup = await azure.ContainerGroups.Define(containerName)
.WithRegion("West Europe")
.WithExistingResourceGroup(configuration.ContainerResourceGroup)
.WithLinux()
.WithPublicImageRegistryOnly()
.WithEmptyDirectoryVolume("devshm")
.DefineContainerInstance(containerName)
.WithImage("selenium/standalone-chrome")
.WithExternalTcpPorts(4444)
.WithVolumeMountSetting("devshm", "/dev/shm")
.WithMemorySizeInGB(2)
.WithStartingCommandLine("tail")
.WithStartingCommandLine("-f")
.WithStartingCommandLine("/dev/null")
.Attach()
.WithDnsPrefix(configuration.AppServiceName + "container")
.WithRestartPolicy(ContainerGroupRestartPolicy.OnFailure)
.CreateAsync(cancellationToken);

This link is helpful for this issue.
--command-line
linux => "tail -f /dev/null"
windows => "ping -t localhost"
# .yml
command: tail -f /dev/null
It will keep your Azure container instance running, so Azure then has a running process to connect to and analyze.
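On the original question of how to debug what is going wrong: the Azure CLI can pull the container group's events and logs directly, which is usually the quickest way to see why a container is being killed. A minimal sketch, assuming placeholder resource group and container group names:
# show the container's events (pull/start/kill) and current state
az container show --resource-group myResourceGroup --name myContainerGroup --query "containers[0].instanceView"
# fetch whatever the container wrote to stdout/stderr before it was killed
az container logs --resource-group myResourceGroup --name myContainerGroup
# attach to the container to stream output and events in real time
az container attach --resource-group myResourceGroup --name myContainerGroup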


Running AWS Log Agent from inside a Fargate container

I'm trying to run the AWS Logs Agent inside a Docker container running on AWS ECS Fargate.
This has been working fine under EC2 for several years. In the Fargate context, it does not seem to be able to resolve the task role being passed to it.
Permissions on the task role should be good... I've even tried giving it full CloudWatch permissions to eliminate that as a reason.
I've managed to hack the Python-based launcher script to add a --debug flag, which gave me this in the log:
Caught retryable HTTP exception while making metadata service request to
http://169.254.169.254/latest/meta-data/iam/security-credentials
It does not appear to be properly resolving the credentials that are passed into the task as the 'Task Role'.
I managed to find a hacky workaround that may illustrate what I believe to be a bug or inadequacy in the agent. I had to hack the launcher script using sed as follows:
sed -i "s|HTTPS_PROXY|AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI HTTPS_PROXY|"
/var/awslogs/bin/awslogs-agent-launcher.sh
This essentially de-references the ENV variable holding the URI for retrieving the task role and passes it to the agent's launcher.
It results in something like this:
/usr/bin/env -i AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/f4ca7e30-b73f-4919-ae14-567b1262b27b (etc...)
With this in place, I restart the log agent and it works as expected.
Note that you can do something similar to add a --debug flag to the launcher as well, which was very helpful in trying to figure out where it went astray (see the sketch below).
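A sketch of what such an edit could look like; the 'aws logs push' match pattern is an assumption about how the launcher script invokes the agent and may need adjusting to your version:
# hypothetical sed edit: append --debug to the agent invocation inside the launcher script
sed -i "s|aws logs push|aws logs push --debug|" /var/awslogs/bin/awslogs-agent-launcher.sh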

Hosts in Nagios are disappearing

This may belong on Server Fault, but I wanted to approach this community first. If this is not the right place, please move this thread or close it and I will open one in the correct place.
PROBLEM:
Hosts, along with their associated services, disappear and reappear upon refresh (F5 / Ctrl+F5 / etc).
STEPS TO REPRODUCE:
1. Log into Nagios
2. Click Service Detail
3. See a breakdown of services but you don't see the last one you added.
4. Refresh screen by using F5 / Ctrl+F5 / etc and it doesn't show up still
5. Refresh screen by using F5 / Ctrl+F5 / etc and it doesn't show up still
6. Refresh screen and it will show up.
(!) - Steps 4-6 vary
WHAT I'VE TRIED:
Restarting Nagios service (service Nagios restart)
Restarting HTTPD service (service httpd restart)
Restarting VPS
Refresh browser including "Clear Cache and Hard Reload"
Tried different browsers
Tried different computers
Tried different networks
SCREENSHOTS:
GOOD
https://i.imgur.com/KUW5C6E.png
BAD
https://i.imgur.com/rWFLEaf.png
POSSIBLE CAUSE:
The reason we're in this situation now is that we had an intern add this latest host and its associated service. He added it correctly, and I even checked his work. He did the normal preflight, but instead of issuing the restart command via SSH he issued the command from the web interface itself by going to "Process Info > Restart the Nagios process". It seems like it would work OK, but we've never restarted like this before, and that is the only reason I suspect it's the culprit of the issue we are seeing. Is there something different about this restart compared to the normal SSH restart?
EDIT: To add to all of this, we have updated a different file today, unrelated to this host or its services, and Nagios is not updating.
Thanks for helping!
Rich
EXTRA:
Here is a screenshot of the config file:
https://i.imgur.com/2UsYZcw.png
This can happen if you have multiple Nagios services running. There could be a secondary instance of the service running which hasn't picked up the new configuration files, as it technically hasn't been restarted. I've had this happen once or twice; you can check for it as sketched below.
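To confirm that duplicate Nagios processes are actually present, and that the new configuration parses cleanly, a quick check along these lines can help (the paths assume a standard source install; adjust to your layout):
# list any running Nagios processes; more than one master process suggests duplicates
ps aux | grep '[n]agios'
# verify the configuration before restarting
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg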
First, shut down Nagios
service nagios stop
Next, kill all remaining instances.
killall -9 nagios
Finally, start Nagios back up
service nagios start
That should fix your problem.

ERROR: The overall deployment failed because too many individual instances failed deployment

I'm trying to deploy using CircleCI -> S3 -> CodeDeploy -> EC2.
I was able to upload the deployment image to S3 from CircleCI, but I am unable to deploy it from S3 to the EC2 instance. Here's the error:
The overall deployment failed because too many individual instances
failed deployment, too few healthy instances are available for
deployment, or some instances in your deployment group are
experiencing problems. (Error code: HEALTH_CONSTRAINTS)
The error was provided by CodeDeploy. I can't figure out why or how.
I'd appreciate it if you could give some advice.
If you are running on Ubuntu there could be plenty of reasons; here is a checklist you can verify.
Check that the CodeDeploy agent is installed on your EC2 instance. Refer to this document to install the CodeDeploy agent:
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html
$ sudo service codedeploy-agent status
If you are running Ubuntu release 20.x and you get this error:
./install:22:in `block in method_missing': undefined method `path' for #<IO:> (NoMethodError)
try running the install file like this:
sudo ./install auto > /tmp/logfile
Check that your EC2 instance has a CodeDeploy role: create a CodeDeploy service role and assign it to the instance, see https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html.
If you assign the EC2 role after the instance has been launched, restart the server.
Check your appspec.yml file placement as per the top answer, and try to avoid any long timeouts in it.
Log into your instance and check the error log:
$ tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
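After fixing the role or appspec placement, you can restart the agent and retry the deployment straight from the S3 bundle; a sketch, with placeholder application, deployment group, bucket and key names:
sudo service codedeploy-agent restart
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --s3-location bucket=my-bucket,key=myapp.zip,bundleType=zip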
You should be able to figure out what caused the individual instances to fail by digging into the deployment instance details:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-view-instance-details.html
These should contain more detailed information about why your application was unable to be deployed.
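If you prefer the command line, the same per-instance details can be pulled with the AWS CLI; a sketch, with placeholder deployment and instance IDs:
# list the instances that took part in the deployment and their status
aws deploy list-deployment-instances --deployment-id d-EXAMPLE1X
# show the lifecycle events and any error message for one instance
aws deploy get-deployment-instance --deployment-id d-EXAMPLE1X --instance-id i-0123456789abcdef0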
This error is commonly due to problems in the configuration of the appspec.yml or appspec.json file (depending on the format you are using).
If you have any hooks, I recommend removing them and checking whether it works; then you can add the hooks back one by one so you can identify the error.
The appspec.yml file should be located at the root of your project:
│-- appspec.yml
│-- index.html
└-- scripts
│-- install_dependencies
│-- start_server
└-- stop_server
In the scripts folder you place the scripts that you want executed for each hook.
Here is an example of the appspec.yml file:
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
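For completeness, each hook entry just points at an ordinary executable script inside the scripts folder; a minimal hypothetical scripts/stop_server could look like this (httpd is only an example service):
#!/bin/bash
# Hypothetical example: stop the web server if it is running; adjust to your application
if pgrep httpd > /dev/null; then
  service httpd stop
fi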
I hope I can help you 😃👻🕺🏾
Make sure the CodeDeploy Host Agent Service is running in your target EC2 instance.
The error you are facing is a generic error message thrown on any event failure, which could be BeforeBlockTraffic, BlockTraffic, ApplicationStop, etc.
The first step in this case would be to check whether the CodeDeploy agent is running, especially if the first event (BeforeBlockTraffic) failed.
In the deployment's event details, the event failure message will tell you the exact error behind it.
From the failed deployments, I can see all lifecycle events were skipped. Instance i-0bcc36e73851297f2 is currently in the Stopped state, but I can see the IAM instance profile is missing. Your Amazon EC2 instances need permission to access the Amazon S3 buckets or GitHub repositories where the applications that will be deployed by AWS CodeDeploy are stored. To launch Amazon EC2 instances that are compatible with AWS CodeDeploy, you must create an additional IAM role, an instance profile [1].
For such failures, you can always begin with the general troubleshooting checklist for a failed deployment [2] and then look at the troubleshooting guides for deployment issues and instance issues [3].
[1] http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
[2] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-general.html
[3] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting.html
Check the status of the CodeDeploy agent. In my case, the agent wasn't up.
Please check the role given to the EC2 machine (where the agent is running). It should have S3 access as well. This resolved my issue.
"The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path 'appspec.yml'"
Place your appspec.yml file in your project's root folder to solve this error, so the agent can find it and run your before and after hook scripts.

RDO unable to boot VM with disk size specified

I have a packstack all-in-one setup on my RHEL 7.1 trial for the Juno release.
I am facing a problem launching VMs (for example cirros) with a disk size specified in the flavor. With a 0 GB disk size the VMs launch, but not with larger flavor sizes.
I also observe that when I do this, the openstack-nova-compute service goes down; nova-manage service list shows nova-compute as XXX, so I have to restart the service every time I try this scenario. The compute logs don't throw any error; it just gets stuck at "Creating image".
Is there any filesystem configuration I am missing? I am new to this, so please help.
PS: I run all commands with "root" user.
The problem was with ESXi. ESXi needs to be version 5.5 to support RHEL 7.x; since mine was 5.1, it only supported RHEL 6.x.
After upgrading ESXi 5.1 to 5.5 it worked fine.

How to prevent stdout.out in WebLogic from growing heavily in size (Windows)

I have deployed a system integrated with WebLogic, but I am facing a problem: WebLogic keeps growing the stdout.out file heavily (by GBs per week), which causes the system to load more and more slowly.
Is there any way to prevent it from growing so much, or to redirect it into a .log file?
Thanks a lot.
As David Herget says above, using the WebLogic Scripting Tool (WLST) to redirect StdOut and StdErr did not actually work for me either; I had to also do so through the web console (even though they appear to be set on the console) and restart the relevant jvms.
I can't reply to David's comment above due to being a newbie. [Edited since for clarity]
I'm not totally sure I fully understand your question.
Are you talking about the {server_name}.out file located in {Domain_Path}/servers/{server_name}/logs?
If so, I've never found any way to rotate those logs automatically, so I run a script each day to rotate them (basically copying the file to another name, zipping it, and echoing a NULL into the original file... erasing the older ones after), along the lines of the sketch below.
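A sketch of such a daily rotation script; the domain path and server name are placeholders based on the location mentioned above:
#!/bin/sh
# Hypothetical daily rotation of the WebLogic .out file
OUT=/path/to/DOMAIN_HOME/servers/myserver/logs/myserver.out
STAMP=$(date +%Y%m%d)
cp "$OUT" "$OUT.$STAMP" && gzip "$OUT.$STAMP"   # copy the file aside and compress it
: > "$OUT"                                      # truncate ("echo a NULL into") the original in place
find "$(dirname "$OUT")" -name "$(basename "$OUT").*.gz" -mtime +30 -delete   # erase older archives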
If you are talking about redirecting StdOut to the logs though, that can be done within the console for each server in the logging tab by checking "Redirect stdout logging enabled". Configuration to rotate those logs can also be done within that tab.
On that note, StdErr can also be redirected, but not from the console (in WL9). You have to set "RedirectStderrToServerLogEnabled" to true in the MBean tree via WLST (it's located at /Servers/{server_name}/Log/{server_name}); a sketch of that is shown below.
I know the question was asked a long time ago, but I hope it helps nonetheless.
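For reference, a minimal WLST sketch of that change (the admin URL, credentials, and the server name "myserver" are placeholders; attribute availability may vary slightly between WebLogic versions):
# connect to the admin server and flip the stderr/stdout redirection flags
connect('username', 'password', 't3://localhost:7001')
edit()
startEdit()
cd('/Servers/myserver/Log/myserver')
cmo.setRedirectStderrToServerLogEnabled(true)
cmo.setRedirectStdoutToServerLogEnabled(true)
save()
activate()
# restart the affected JVMs afterwards for the change to take effect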
WebLogic provides log file rotation based on size and time interval.
You can try rotating the log files based on size. You would need to configure the log rotation policy from the admin console. Please refer to the link below for further details:
http://docs.oracle.com/cd/E12840_01/wls/docs103/ConsoleHelp/taskhelp/logging/RotateLogFiles.html
If you want to rotate the log files on demand, you can use the WLST script below.
C:\>java weblogic.WLST
#connect WLST to an Administration Server
wls:/offline> connect('username','password')
#navigate to the ServerRuntime MBean hierarchy
wls:/mydomain/serverConfig> serverRuntime()
wls:/mydomain/serverRuntime>ls()
#navigate to the server LogRuntimeMBean
wls:/mydomain/serverRuntime> cd('LogRuntime/myserver')
wls:/mydomain/serverRuntime/LogRuntime/myserver> ls()
-r-- Name myserver
-r-- Type LogRuntime
-r-x forceLogRotation java.lang.Void :
#force the immediate rotation of the server log file
wls:/mydomain/serverRuntime/LogRuntime/myserver> cmo.forceLogRotation()
wls:/mydomain/serverRuntime/LogRuntime/myserver>
http://docs.oracle.com/cd/E12840_01/wls/docs103/logging/config_logs.html#wp1001654