How to update OpenShift Cartridge with oo-admin-upgrade? - openshift-origin

I have an application running in OpenShift Origin. It has been running for some time and now I have an update for the cartridge it uses.
When I try to update the cartridge, the script fails.
[root@broker ~]# oo-admin-upgrade --upgrade-node node1 --login admin --app-name app1 --version 1.0 --upgrade-gear 52231466a6577a242f00015d
/usr/sbin/oo-admin-upgrade:76:in `rescue in upgrade_gear': Can only supply discovery data if direct_addressing is enabled (RuntimeError)
["/opt/rh/ruby193/root/usr/share/ruby/mcollective/rpc/client.rb:438:in `discover'", "/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-msg-broker-mcollective-1.13.0.1/lib/openshift/mcollective_application_container_proxy.rb:2173:in `rpc_exec'", "/usr/sbin/oo-admin-upgrade:49:in `block in upgrade_gear'", "/opt/rh/ruby193/root/usr/share/ruby/timeout.rb:69:in `timeout'", "/usr/sbin/oo-admin-upgrade:41:in `upgrade_gear'", "/usr/sbin/oo-admin-upgrade:611:in `<main>'"]
Output:
Migrating gear on node with: /usr/sbin/oo-admin-upgrade --login 'admin' --upgrade-gear '52231466a6577a242f00015d' --app-name 'app1' --version '1.0'
Upgrading on node...
from /usr/sbin/oo-admin-upgrade:24:in `upgrade_gear'
from /usr/sbin/oo-admin-upgrade:611:in `<main>'
Am I doing something wrong, or is it a bug in the script?

I believe you're probably one of the first people attempting to use oo-admin-upgrade in their Origin installation. This looks like the mcollective command sent to the node to upgrade the gear timed out. Please make sure mcollective is correctly configured by running 'mco ping' - you should see responses from all nodes in your cluster.
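As a quick sanity check (the client.cfg path below is an assumption based on the ruby193 SCL layout visible in the backtrace; adjust it to your installation):
# On the broker, every node in the cluster should reply
mco ping
# The "direct_addressing" error above also suggests checking the MCollective client config;
# it should contain: direct_addressing = 1
grep direct_addressing /opt/rh/ruby193/root/etc/mcollective/client.cfg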
That said, the upgrade-node option is not designed to be used by end-users. Please use:
oo-admin-upgrade --version 1.0
This should apply upgrades for all apps in your cluster.

Related

Convox CLI deploy gives 502 response

I'm suddenly having issues deploying my apps to AWS ECS using the Convox CLI. As of Friday, when I try, this is what happens:
$ convox deploy -a my-app -r test
Packaging source... OK
Uploading source... OK
Starting build... ERROR: response status 502
This happens regardless of rack, and other operations such as "env" and "logs" seem to work. I don't know how to go about troubleshooting this. Is there some switch I can use to get more debug info from the CLI? I am assuming the "502" is an HTTP error code, but I do not know where it is coming from. I've looked around in AWS but cannot seem to find any errors there (though I am not sure where to look).
Any help would be appreciated.
Had the same problem on a rack running a version from 2019. Solved this by updating the rack to version 20211019100155, as Brian suggested.
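For reference, a rough sketch of that update path using the convox CLI's rack subcommands and the rack name from the question (assuming current CLI behavior; the version number is the one from the answer above):
convox rack -r test                        # show the rack's current version
convox rack update 20211019100155 -r test  # update the rack to a newer release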

Cannot Log In to Vagrant boxes managed via Test Kitchen

I have a very boilerplate .kitchen.yml with the following:
---
driver:
  name: vagrant
platforms:
  - name: ubuntu-14.04
suites:
  - name: default
    run_list:
      - recipe[webserver::default]
When I run kitchen converge I get the following:
==> default: Setting hostname...
==> default: Replaced insecure vagrant key with less insecure key!
==> default: Machine not provisioned because `--no-provision` is specified.
Waiting for SSH service on 127.0.0.1:2222, retrying in 3 seconds
Waiting for SSH service on 127.0.0.1:2222, retrying in 3 seconds
Waiting for SSH service on 127.0.0.1:2222, retrying in 3 seconds
.....
......
After quite a bit of googling, I've read that Vagrant 1.7+ replaces the default ssh key with what they consider a less insecure key.
There's the config.ssh.insert_key = false option, but that won't work, for the following reasons:
1. Updated .kitchen.yml with insert_key = false. This does not work because the generated Vagrantfile has the boolean false rendered as a "false" string.
2. Tried using a global Vagrantfile. This did not work; it's as if the file isn't even read.
3. Tried to build my own box, but didn't succeed.
Has anyone managed to fix this, or does anyone have a workaround?
Apparently other fixes solved the original poster's problem, but not mine. Posting here in case it's useful for someone else...
After some hours of troubleshooting this problem, I finally noticed in the VirtualBox Manager that, under Settings/Network/Advanced, the "Cable Connected" checkbox was unchecked!
WTF, my virtual machine's virtual cable was not "connected"? (Big sigh)
I fixed this problem by adding this to my .kitchen.yml file:
driver:
  name: vagrant
  customize:
    cableconnected1: 'on'
I have no idea why the virtual machines were coming up with an unplugged cable. I do not think my workaround is the natural solution but it's all I've got and it works.
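For a VM that has already been created, the same fix can be applied by hand with VBoxManage (the VM name below is a placeholder; use whatever VBoxManage list vms reports):
VBoxManage list vms                                   # find the machine's name
VBoxManage modifyvm "<vm-name>" --cableconnected1 on  # powered-off VM: reconnect NIC 1's cable
VBoxManage controlvm "<vm-name>" setlinkstate1 on     # running VM: same effect without a restart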
UPDATE: This is no longer needed with newer versions of Vagrant and VBox, for all those finding this via Google now.
kitchen-vagrant maintainer here to let everybody know that the issue has to do entirely with the matrix of Vagrant, VirtualBox, and bento boxes in play.
To check versions:
VBoxManage --version
vagrant --version
vagrant box list | grep bento/
In short, there was a rough series of both Vagrant and VirtualBox releases that caused all sorts of havoc, so depending on which versions the bento boxes were built/tested against, you may or may not experience it.
At present, the following configuration is known to work and is what the latest bento boxes were tested against:
kitchen-vagrant 1.2.1
Vagrant 2.0.0
VirtualBox 5.1.28
bento boxes version 201708.22.0+
Users can look at the boxes on Vagrant Cloud and see what any given box was tested against, e.g. bento/14.04 version 201708.22.0. It's an ugly JSON blob at the moment, but very useful, as you can see which versions this one was built/tested against. Any box that is uploaded is run through a kitchen run to test it, not only for base functionality but also for shared folder support on most* platforms.
*most here means nearly everything except known problem distros and FreeBSD
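If stale local boxes are part of the problem, the boxes themselves can be refreshed with Vagrant's built-in commands (the box name is assumed from the ubuntu-14.04 platform used in the question):
vagrant box outdated --global                 # list local boxes that have newer versions available
vagrant box update --box bento/ubuntu-14.04   # pull the latest version of that box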
Note that this issue only happens with CentOS boxes, not with Ubuntu boxes.
The kitchen-vagrant driver is already fixed.
You can update it, or manually make the change:
https://github.com/test-kitchen/kitchen-vagrant/commit/3178e84b65d3da318f818a0891b0fcc4b747d559
Then this .kitchen.yml will work:
driver:
  name: vagrant
  ssh:
    insert_key: false
I downgraded Vagrant from 1.8.5 to 1.8.4 and it worked.
I had to run kitchen destroy blah to remove the instance created with 1.8.5. Then when I ran kitchen converge blah it worked.
I had the same issue; I just needed to update the kitchen-vagrant gem. You can do this by first checking which version you have installed:
$ gem list
...
kitchen-vagrant (0.20.0)
...
Then run gem update kitchen-vagrant and retry the kitchen verify command.
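For example (the instance name is assumed from the default suite and ubuntu-14.04 platform in the question; check yours with kitchen list):
gem update kitchen-vagrant
kitchen verify default-ubuntu-1404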
The configuration that worked for me was:
PS> vboxmanage --version
5.1.26r117224
PS> gem list | grep kitchen-vagrant
kitchen-vagrant (1.2.1)
PS> vagrant --version
Vagrant 1.9.6
With ChefDK 2.3.4.1.

GlassFish won't start from IntelliJ unless I run IntelliJ with sudo

Title says it all... just trying to get GlassFish up and running. This is the error I get:
Detected server admin port: 4848
[2015-04-06 07:37:56,138] Artifact java_web_app:war exploded: Server is not connected. Deploy is not available.
Detected server http port: 8080
Command start-domain failed.
JVM failed to start: com.sun.enterprise.admin.launcher.GFLauncherException: The server exited prematurely with exit code 1.
Before it died, it produced the following output:
This subcommand requires root privileges: bsexec
Surely there's a way around this? I don't really want to run IntelliJ with sudo every time.
Answer: GlassFish 4.1, IntelliJ IDEA 14.1
I have no idea (pun not intended) why GlassFish requires a root user account.
You need to execute something like this:
/Library/opt/payara-4.1.151/glassfish/bin/asadmin start-domain --verbose=true domain1
Go to Run -> Edit Configuration -> select your configuration (acme-payara-project) -> Start Up Configuration.
Edit the startup script and change it to add the --verbose=true parameter.
Is this a problem happening on Mac OS X 10.10.3?
If so, we were able to work around the problem by changing the content of the file /usr/libexec/StartupItemContext to
#!/bin/sh
unset LAUNCHD_SOCKET
$@
We've also reported this workaround on the corresponding glassfish-issue: https://java.net/jira/browse/GLASSFISH-21343
Note that this will only work for GlassFish 4.0. In 4.1 they changed the startup code, so this StartupItemContext file is no longer used.
If your GlassFish version is 4.1, the only known workaround at the moment is to start GlassFish with the --verbose=true param.
Solved this on OS X 10.10.4, IntelliJ 14.1.4 by adding -v to the startup script.
Changing the Startup command in the Run Configuration under the "StartUp/Connection" tab to the following worked for me:
.../glassfish-4.1/glassfish/bin/asadmin start-domain --verbose domain1

Deploying Custom Cartridges on Openshift Origin

I have created a new custom cartridge, which I have packaged into an rpm using tito and installed using yum. The cartridge is copied by my spec file into the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up. I went digging in the ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds within the /usr/libexec/openshift/cartridges directory. I have tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but I have had no success. Is there somewhere I need to hardcode my cart name to some global variable or something? Basically, does anyone know how to get a custom cart to show up on the web page for creating a new application?
UPDATE: I ran into a slide deck that had one slide on how to install the cartridge. I still have had no success, but here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
and the output stated that it installed the cartridge.
cleared cache with
bundle exec rake tmp:clear
restarted the openshift broker service
Also, just to make sure the cache was cleared out, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift web page.
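One thing that may help narrow this down is listing what the node's cartridge repository actually contains, assuming your version of the node tools supports a list action (the action name here is an assumption, mirroring the -a install call above):
oo-admin-cartridge -a list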
It works for me after clearing the cache
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache
The MCollective service on the node server (if you have separate servers for broker and node) must be restarted, e.g. with
service ruby193-mcollective restart
After that you should clear the caches on the broker server, e.g. with
/usr/sbin/oo-admin-broker-cache --console
Then the new cartridges should be available.
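As a final sanity check, the broker's REST API can be queried to see whether the new cartridge is advertised (the hostname is a placeholder; add credentials with -u if your broker requires authentication on this endpoint):
curl -k https://broker.example.com/broker/rest/cartridges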

Rhomobile rake redis aborted

I am working my way through the RhoMobile tutorial http://docs.rhomobile.com/rhoconnect/command-line#generate-an-application and I am at the point of entering
rake redis:install
I get the following error.
WARNING: using the built-in Timeout class which is known to have issues when used for opening connections. Install the SystemTimer gem if you want to make sure the Redis client will not hang.
See http://redis.io/ for information about redis.
Installing redis to C:\RhoStudio\redis-2.4.0;C:\dropbox\code\InstantRhodes\redis-1.2.6-windows.
rake aborted!
Zip end of central directory signature not found
Tasks: TOP => redis:install => redis:download
(See full trace by running task with --trace)
D:\Dropbox\code\rhodes-apps\storeserver>
I am working on a Windows machine, primarily using RhoStudio.
It ended up being an environment variables issue. Also, it seems the main support forum for Rhodes is the Google Group. The question was answered here:
https://groups.google.com/d/topic/rhomobile/b-Adx2FDMT8/discussion
If you are using RhoStudio on Windows, then Redis is automatically installed with RhoStudio,
so there is no need to install it again.