I use JBoss Fuse 6.0.0 on Windows and start the container using bin/fuse.bat. I modified etc/users.properties to add the line admin=admin,admin.
At first the admin commands work as expected: admin:list shows all the containers, and admin:create creates child containers.
Then I followed the instructions at
https://access.redhat.com/site/documentation/en-US/JBoss_Fuse/6.0/html/Getting_Started/files/Deploy-Fabric-Create.html
and created a fabric using the command fabric:create --clean. After that the admin commands are gone! I get Command not found: admin:list, and I can no longer list the child containers created by admin:create. The fabric:container-list command only enumerates the containers created by the fabric:container-create-child command.
Has anyone experienced this problem before? Is it normal? How can I get the admin commands back?
This is expected: once you create a fabric, Fabric manages the containers, so you should use the fabric commands to create and manage them.
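For example, the Fabric counterparts of the admin commands look roughly like this (a sketch; the container name is a placeholder and options vary by Fuse version):
# create a child container under the root container
fabric:container-create-child root child1
# list all containers that the fabric manages
fabric:container-list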
I have a strange situation that I would like to share with you.
I recently started working with containers and want to have an Azure DevOps agent running in a container.
On my Windows 10 laptop, I can instantiate a Linux container and everything runs and executes well (using WSL).
On an Ubuntu VM running in Azure, the same container runs and executes well.
However, the same container in an Azure Container Instance fails, and for an unknown reason I get the following error:
Generating browser application bundles (phase: setup)...
/bin/sh: 1: wslpath: not found
01 12 2022 14:55:59.933:ERROR [config]: Error in config file!
Error: Command failed: wslpath -w "/usr/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin"
/bin/sh: 1: wslpath: not found
When I see wsl I think of the Windows Subsystem for Linux, but I am not sure. I really do not understand why ACI behaves like this and produces that error.
I always thought that a container should behave the same wherever it runs. Has anyone experienced this? Any ideas are welcome.
Regards
I'm building an application on AWS EMR using YARN (and Dask), Hadoop version 2.7.3-amzn-1. I'm trying to test various failure scenarios and I want to simulate a container failure. I can't seem to find an easy way to kill a YARN container - only the whole application. Is there a command-line utility for this?
[root@node1 lillcol]# yarn container -help
20/04/24 15:04:14 INFO client.AHSProxy: Connecting to Application History server at node1/127.0.0.1:10200
usage: container
 -help                                     Displays help for all commands.
 -list <Application Attempt ID>            List containers for application
                                           attempt.
 -signal <container ID [signal command]>   Signal the container. The
                                           available signal commands are
                                           [OUTPUT_THREAD_DUMP,
                                           GRACEFUL_SHUTDOWN,
                                           FORCEFUL_SHUTDOWN] Default
                                           command is OUTPUT_THREAD_DUMP.
 -status <Container ID>                    Prints the status of the
                                           container.
This can be achieved with the command yarn container -signal [container-ID] GRACEFUL_SHUTDOWN.
I've tried it and it works; I hope that is helpful.
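For example, you can walk from the application to one of its containers like this (a sketch; all of the IDs below are placeholders):
# find the application and its current attempt
yarn application -list
yarn applicationattempt -list application_1587712345678_0001
# list that attempt's containers, then signal one of them
yarn container -list appattempt_1587712345678_0001_000001
yarn container -signal container_1587712345678_0001_01_000002 GRACEFUL_SHUTDOWN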
YARN has no CLI or REST API that kills a container.
The simplest way to create a container failure is to log in to a NodeManager host and kill the process (which is a container) spawned by the NodeManager.
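On the NodeManager host that could look something like this (a sketch; the container ID and PID are placeholders):
# find the process that the NodeManager spawned for the container
ps aux | grep container_1587712345678_0001_01_000002
# kill it to simulate a container failure
kill -9 <pid from the output above>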
It seems this is exposed in the API starting from version 2.8.0:
https://hadoop.apache.org/docs/r2.8.0/api/org/apache/hadoop/yarn/client/api/YarnClient.html#signalToContainer(org.apache.hadoop.yarn.api.records.ContainerId,%20org.apache.hadoop.yarn.api.records.SignalContainerCommand)
Is it possible (and how) to specify a shell script somewhere that will be executed each time a new node is added to an Ambari cluster?
I'm using HDP with Ambari, and I would like to add some symbolic links when the setup of a new node is completed, but I want to automate that so that I (or someone else) don't forget it.
No functionality currently exists that will let you execute a script when a node is added to the cluster. What you're asking for is a custom hook. You would have to look through the Ambari source code and see whether you can define a custom hook for the stack. There are a few hooks provided in each stack; for examples, see: https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks
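For reference, the hooks in that HDP 2.0.6 stack are organized per lifecycle phase, roughly like this (a sketch of the layout; check the linked source tree for the exact contents):
stacks/HDP/2.0.6/hooks/
  after-INSTALL/scripts/hook.py
  before-ANY/scripts/hook.py
  before-INSTALL/scripts/hook.py
  before-RESTART/scripts/hook.py
  before-START/scripts/hook.py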
I am trying to run a Django server on AWS. My Django app depends on some mathematical Python libraries like numpy, scipy, sklearn, etc. However, there is an issue that forces me to do this after every deployment:
sudo nano /etc/httpd/conf.d/wsgi.conf
---------------------------------------
add this line in the file
WSGIApplicationGroup %{GLOBAL}
---------------------------------------
sudo /etc/init.d/httpd reload
Basically I need WSGIApplicationGroup %{GLOBAL} in my wsgi.conf file, otherwise I get a 504. I am using a custom AMI built on top of Amazon Linux 2014 and the EB CLI for deployment. However, whenever I deploy, wsgi.conf is reset and no longer contains the line I added previously, so I have to SSH into the EC2 instance and make the change manually. That adds overhead to every deployment, and it's also not feasible once we scale up (cloning or creating instances resets it as well). So is there a way to do this automatically after every deployment?
The content of wsgi.conf is fixed, so I can easily write a script to create it; the issue is how to trigger that script automatically.
PS: I am new to AWS.
You need to use the AWS Elastic Beanstalk feature called .ebextensions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
In your case you can't use the files or commands sections, because:
The commands are processed in alphabetical order by name, and they run
before the application and web server are set up and the application
version file is extracted.
You need to use the container_commands section:
They run after the application and web server have been set up and the
application version file has been extracted, but before the
application version is deployed.
Example .ebextensions/01wsgi.config (not tested :-))
container_commands:
  apache_reload:
    command: |
      echo "WSGIApplicationGroup %{GLOBAL}" >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload
Feel free to tweak my example as you want; for example, you can bundle your own wsgi.conf file with the application and then replace the original in the container_commands section.
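That alternative could look roughly like this as the command body of a container command (a sketch; it assumes you ship a complete wsgi.conf template at .ebextensions/wsgi.conf in your application source):
# overwrite the Beanstalk-generated config with the bundled template, then reload Apache
cp .ebextensions/wsgi.conf /etc/httpd/conf.d/wsgi.conf
/etc/init.d/httpd reload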
I have created a new custom cartridge, which I have packaged into an RPM using tito and installed using yum. The cartridge is copied by my spec file to the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up. I went digging through the Ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds within the /usr/libexec/openshift/cartridges directory. I have tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but I have had no success. Do I need to hardcode my cartridge name into some global variable somewhere? Basically, does anyone know how to get a custom cartridge to show up on the web page for creating a new application?
UPDATE: I ran into a slide deck that had one slide on how to install the cartridge. I still have had no success, but here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command:
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
whose output stated that it installed the cartridge.
cleared the cache with
bundle exec rake tmp:clear
restarted the OpenShift broker service
Also, just to make sure the cache was cleared, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift web page.
It works for me after cleaning the cache
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache
The MCollective service on the node server (if you have separate servers for the broker and node) must be restarted, e.g. with
service ruby193-mcollective restart
After that you should clear the caches on the broker server, e.g. with
/usr/sbin/oo-admin-broker-cache --console
Then you should have the new cartridge available.