Deploying Custom Cartridges on OpenShift Origin

I have created a new custom cartridge, which I have packaged into an RPM using tito and installed using yum. The cartridge is copied by my spec file into the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up.
I went digging in the Ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds within the /usr/libexec/openshift/cartridges directory. I tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but I have had no success.
Is there somewhere I need to hardcode my cartridge name to some global variable or something? Basically, does anyone know how to get a custom cartridge to show up on the web page for creating a new application?
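For reference, a minimal sketch of the cache-clearing attempt described above (paths as given in the question; the exact layout of the cache directory may differ on your installation):
rm -rf /var/www/openshift/broker/cache/*    # remove the broker's cached cartridge data
service openshift-broker restart            # restart the broker so it rebuilds the cache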
UPDATE: I ran into a slide deck that had one slide on how to install the cartridge. I still have not had any success, but here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command:
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
whose output stated that the cartridge was installed
cleared the cache with
bundle exec rake tmp:clear
restarted the OpenShift broker service
Also, just to make sure the cache was cleared out, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift web page.

It works for me after clearing the cache:
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service:
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache

The MCollective service on the node server (if you have separate servers for the broker and node) must be restarted, e.g. with
service ruby193-mcollective restart
After that you should clear the caches on the broker server, e.g. with
/usr/sbin/oo-admin-broker-cache --console
Then the new cartridges should be available.
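Putting the two answers together, a consolidated sketch of the full sequence (service names and paths as given above; which steps you actually need may depend on your setup):
# On the node server: restart MCollective so the node re-reports its cartridges
service ruby193-mcollective restart
# On the broker server: clear the broker caches and restart the broker
cd /var/www/openshift/broker
bundle exec rake tmp:clear
/usr/sbin/oo-admin-broker-cache --console
service openshift-broker restart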

Related

yarn usercache dir not resolved properly when running an example application

I am using Hadoop 3.2.0 and trying to run a simple application in a Docker container. I have made the required configuration changes in both yarn-site.xml and container-executor.cfg to choose LinuxContainerExecutor and the Docker runtime.
I am using the distributed shell example from a Hortonworks blog post: https://hortonworks.com/blog/trying-containerized-applications-apache-hadoop-yarn-3-1/
The problem is that when the application is submitted to YARN, it fails with a directory-creation issue and the error below:
2019-02-14 20:51:16,450 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=Application application_1550156488785_0002 failed 2 times due to AM Container for appattempt_1550156488785_0002_000002 exited with exitCode: -1000 Failing this attempt.
Diagnostics: [2019-02-14 20:51:16.282]Application application_1550156488785_0002 initialization failed (exitCode=20) with output:
main : command provided 0
main : user is myuser
main : requested yarn user is myuser
Failed to create directory /data/yarn/local/nmPrivate/container_1550156488785_0002_02_000001.tokens/usercache/myuser - Not a directory
I have configured yarn.nodemanager.local-dirs in yarn-site.xml, and I can see the same reflected in the YARN web UI at localhost:8088/conf:
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/yarn/local</value>
    <final>false</final>
    <source>yarn-site.xml</source>
</property>
I do not understand why it is trying to create the usercache dir inside the nmPrivate directory.
Note: I have verified myuser's permissions on the directories and have also tried clearing the directories manually, as suggested in a related post, but with no luck. I do not see any additional information about the container launch failure in any other logs.
How do I debug why the usercache dir is not resolved properly? Any help is really appreciated.
I realized that this was all because of the users the services were started as, and their permissions on the directories the services work with.
After making sure the required changes were done, I was able to run the examples and other applications seamlessly.
Thanks to the Hadoop user community for the direction. Adding the link here for more details:
http://mail-archives.apache.org/mod_mbox/hadoop-user/201902.mbox/browser
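As a concrete illustration of the kind of ownership checks the answer refers to (a sketch only; the yarn user and hadoop group are typical defaults for LinuxContainerExecutor deployments and may differ on your cluster):
# The NodeManager must own the local dirs it manages
ls -ld /data/yarn/local
chown -R yarn:hadoop /data/yarn/local
# With LinuxContainerExecutor, the container-executor binary must be
# root-owned, group-accessible by the NodeManager's group, and setuid
ls -l $HADOOP_HOME/bin/container-executor
chown root:hadoop $HADOOP_HOME/bin/container-executor
chmod 6050 $HADOOP_HOME/bin/container-executor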

Error creating an Apache Apollo broker

I downloaded and unzipped the Apache Apollo distribution as described on their site, then ran:
~/Developer/Web/MQTT/apache-apollo-1.7.1/bin/apollo create mybroker
I got the below output in the Terminal:
Creating apollo instance at: mybroker
ERROR: mybroker/etc/log4j.properties (No such file or directory)
That command is supposed to create the etc subdirectory, among others.
Any idea why this error is occurring?
Okay, I resolved it. I installed Apollo via Homebrew successfully, then cd'ed to /var/lib and ran the following command, this time with sudo:
sudo apollo create mybroker
It created the broker successfully. Then I ran the below command to run it, again with sudo:
sudo mybroker/bin/apollo-broker run
This started the broker, and I could log in via the web dashboard at http://127.0.0.1:61680/ too.
I use Ubuntu 16.04 and I encountered the same problem. I resolved it by using "sudo apollo create......"; it seems Apollo did not have the permissions to create the files under etc/.
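An alternative to running everything as root is to create the broker instance in a directory the current user owns (a sketch; the ~/apollo path is just an example, and this assumes apollo is on your PATH as in the Homebrew answer above):
mkdir -p ~/apollo && cd ~/apollo     # any user-writable directory works
apollo create mybroker               # no sudo needed here
mybroker/bin/apollo-broker run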

How to run scripts automatically after deployment in AWS using EB CLI?

I am trying to run a Django server on AWS. My Django app depends on some mathematical Python libraries like numpy, scipy, sklearn, etc. However, there is an issue that forces me to do this after every deployment:
sudo nano /etc/httpd/conf.d/wsgi.conf
---------------------------------------
add this line to the file
WSGIApplicationGroup %{GLOBAL}
---------------------------------------
sudo /etc/init.d/httpd reload
Basically I need "WSGIApplicationGroup %{GLOBAL}" in my wsgi.conf file, otherwise I get a 504. I am using a custom AMI built on top of Amazon Linux 2014, and I am using the EB CLI for deployment. However, on every deployment wsgi.conf is reset and no longer contains the line I added previously, so I have to SSH into the EC2 instance and redo this myself. This adds overhead to every deployment, and it is also not feasible once we scale up (cloning or creating instances also resets it). So is there a way for this to be done automatically after every deployment?
The content of wsgi.conf is fixed, so I can easily write a script to create it; the issue is how to trigger that script automatically.
PS: I am new to AWS.
You need to use the AWS Elastic Beanstalk feature called .ebextensions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
In your case you can't use the files or commands sections, because:
The commands are processed in alphabetical order by name, and they run
before the application and web server are set up and the application
version file is extracted.
You need to use the container_commands section:
They run after the application and web server have been set up and the
application version file has been extracted, but before the
application version is deployed.
Example .ebextensions/01wsgi.config (not tested :-)):
container_commands:
  apache_reload:
    command: |
      echo "WSGIApplicationGroup %{GLOBAL}" >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload
Feel free to tweak my example as you want; for example, you can keep a prepared wsgi.conf file somewhere and then replace the original with it in the container_commands section.
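One small hardening worth considering: since container_commands run on every deployment, you can guard the append so the line is never duplicated if the platform ever stops resetting wsgi.conf. The command itself would be a sketch like this (same file path as above):
grep -q 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf || \
  echo 'WSGIApplicationGroup %{GLOBAL}' >> /etc/httpd/conf.d/wsgi.conf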

Error activating gear: CLIENT_ERROR: Failed to execute: 'control deploy'

I am planning to deploy my Ruby on Rails 3 / MySQL application on OpenShift.
I created an OpenShift application by clicking the add application... button, entered the name of the application and the namespace, chose MySQL 5.1 as the database, left the GitHub SSH URL as it was, and then clicked create application.
Upon successful creation I got a git clone SSH URL for cloning this OpenShift application onto my local hard drive. I cloned it and replaced the OpenShift content with my existing Rails application source code.
When I tried to push this change to OpenShift I got the following error. Here is the gist that shows the error.
Why am I getting this "Error activating gear: CLIENT_ERROR: Failed to execute: 'control deploy'" error, and how do I fix it?
I was getting this when I added
spring.profiles.active=openshift
to my JAVA_OPTS_EXT environment variable. I changed this to
-Dspring.profiles.active=openshift
In my case this was an environment fix, and it could vary in different scenarios; it looks like this error appears after successful compilation, when the server is about to start.
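For illustration, one way to set that variable with the OpenShift client tools (a sketch; the app name myapp is hypothetical, and this assumes rhc is installed and configured):
rhc env set JAVA_OPTS_EXT="-Dspring.profiles.active=openshift" -a myapp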

How to display RoR app code version on Heroku?

For my Rails apps I normally deploy to production from a tagged version, and then display the tag in the user interface by assigning the output of git describe --always to a variable in config/application.rb.
Now I'm moving an app over to Heroku, and deployment to Heroku only happens from the master branch, so this trick won't work any more.
Are there any other ways to assign a version number to my code and display it in the UI when I've deployed to Heroku?
You can add a variable to the Heroku configuration by running this command locally whenever you push new changes to Heroku:
heroku config:add GIT_TAG=`git describe --always`
Then you can access this in your app's configuration:
version = ENV['GIT_TAG'] || `git describe --always`
When the app is running on Heroku, it will pick up the config variable (ENV['GIT_TAG']) and when it's running locally in development it will fall back to running git describe --always.
You will need to update the Heroku config variable each time you deploy, but I generally add this kind of thing to a deploy script or rake task (along with useful things like creating a new tag marking the deploy and running any new database migrations on Heroku).
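As a sketch of the kind of deploy script the answer describes (the commands are standard, but the exact steps are an assumption about your workflow):
# deploy.sh: tag, push, record the version, and migrate
git tag "deploy-$(date +%Y%m%d%H%M%S)"             # mark this deploy with a tag
git push heroku master
heroku config:add GIT_TAG=`git describe --always`  # expose the version to the app
heroku run rake db:migrate                         # run any new migrations on Heroku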
Doesn't git tag fit your needs?
And why wouldn't the old trick work anymore?
If you want to display it in the UI then a git SHA probably isn't particularly useful. You have two options: set a Heroku config variable containing a user-friendly version number, or set a version number in your code that you increment when you deploy from master. You could wrap the deploy in a rake task that increments the version number either in a file (which is then re-added to git and committed) or simply in a config variable.
Also, don't forget Heroku release management (http://blog.heroku.com/archives/2010/11/17/releases/), which you may also be able to use here to get a version number.