Error activating gear: CLIENT_ERROR: Failed to execute: 'control deploy' - ruby-on-rails-3

I am planning to deploy the Ruby on Rails 3 MySQL application I have developed on OpenShift.
I created an OpenShift application by clicking the Add Application... button,
entered the name of the application and the namespace, chose MySQL 5.1 as the database, left the GitHub SSH URL field as it was, and then clicked Create Application.
Upon successful creation I got a Git clone SSH URL for cloning this OpenShift application onto my local hard drive. I cloned it and replaced the generated OpenShift content with my existing Rails application source code.
When I try to push this change to OpenShift I get the following error. Here is the gist that shows the error.
Why am I getting Error activating gear: CLIENT_ERROR: Failed to execute: 'control deploy', and how do I fix it?

I was getting this when I added
spring.profiles.active=openshift
to my JAVA_OPTS_EXT in env. I changed this to
-Dspring.profiles.active=openshift
In my case this was an environment fix, and it could vary in different scenarios; it looks like we get this error after a successful compilation, when the server is about to start.
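For reference, on OpenShift Online v2 an environment variable like this can be set through the rhc client, roughly as below; the app name is a placeholder and the exact rhc env syntax may differ between client versions:
# set the extra JVM options picked up at startup (placeholder app name)
rhc env set JAVA_OPTS_EXT=-Dspring.profiles.active=openshift -a myapp
# restart so the gear picks up the new environment
rhc app restart -a myapp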

dbt deps command results in "Unable to connect to registry hub"

When running dbt deps, I get back this error message:
Running with dbt=0.17.0
Error sending message, disabling tracking
Encountered an error:
Unable to connect to registry hub
What's happening here, and how can I work around it?
First of all, it's worth understanding what's going on here. It looks like you're trying to install a package from the dbt hub site (hub.getdbt.com) — if you open up your packages.yml file, you'll find something like this:
packages:
  - package: package-owner/package-name
    version: 0.1.0
When you run dbt deps (at a high level):
dbt sends a request to hub.getdbt.com
From hub.getdbt.com, a request is sent to GitHub to download the package.
The package is copied into your project
This error occurs when dbt cannot connect to the hub site after retrying the network request several times. First off, we recommend you retry the dbt deps command; sometimes it's just a blip in connectivity that goes away on the second try.
If the error persists, there may be a few different reasons for it:
hub.getdbt.com might be unavailable. This happens but is relatively rare. You can navigate to hub.getdbt.com to check if this is the case. Also check the Netlify status page to see if there are any issues.
GitHub might be down — you can check this by going to the GitHub status page.
Finally, it may be that a firewall rule or antivirus software on your computer is rejecting the request. Talk to your IT team to find out if this is the case and whether that restriction can be removed.
We generally recommend using the hub syntax for packages; however, if you need to work around this, you can consider using the git syntax (docs) or installing the package from a local directory (docs), as sketched below.
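A minimal sketch of those two workarounds in packages.yml; the repository URL, revision, and local path are placeholders, not values from the question:
packages:
  # git syntax: install straight from a Git repository
  - git: "https://github.com/package-owner/package-name.git"
    revision: 0.1.0
  # local syntax: install from a directory on disk
  - local: ../path/to/local/package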

Running Apache ActiveMQ Artemis, unable to login to web console due to IOException

(Windows, JDK8, and ARTEMIS_HOME set.) I downloaded v2.5.0, created a broker, and ran it.
artemis.cmd create broker1, specified the login info, then cd broker1 and bin\artemis.cmd run
(I understand the instance is suggested not to live under the ARTEMIS_HOME dir.) The web console renders and I can access it via localhost:8161/console, but when I try to log in I get a Server Error on the web page, and the CLI shows:
[org.eclipse.jetty.server.HttpChannel] /console/auth/login/:java.lang.SecurityException: java.io.IOException: \login.config (No such file or directory)
The file broker1/etc/login.config does exist. I have tried running from various directories and explicitly stating the configuration.
cd broker1/bin, artemis.cmd run -- xml:artemis-service.xml
But I get the same issue. Why isn't this login.config being recognized?
I believe there's a bug in the artemis.profile.cmd. It's using this:
-Djava.security.auth.login.config=%ARTEMIS_ETC_INSTANCE%\login.config
But the %ARTEMIS_ETC_INSTANCE% variable is not defined. I believe it should be using %ARTEMIS_INSTANCE_ETC_URI% instead. Can you try this? If that fixes the issue then I'll open a JIRA and send a PR to get it fixed permanently.
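In other words, the change in broker1\etc\artemis.profile.cmd would make the property read roughly like this (a sketch of the single variable substitution; the rest of the generated JAVA_ARGS line stays as it is):
-Djava.security.auth.login.config=%ARTEMIS_INSTANCE_ETC_URI%\login.config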

Deploying Custom Cartridges on Openshift Origin

I have created a new custom cartridge, which I have packaged into an RPM using tito and installed using yum. The cartridge is copied by my spec file into the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up. I went digging in the Ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds within the /usr/libexec/openshift/cartridges directory. I have tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but I have had no success. Is there somewhere I need to hardcode my cartridge name to some global variable, or something similar? Basically, does anyone know how to get a custom cartridge to show up on the web page for creating a new application?
UPDATE: So I ran into a slide deck that had one slide on how to install the cartridge. However, I still have not had any success; here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
and the output stated that it installed the cartridge.
cleared cache with
bundle exec rake tmp:clear
restarted the openshift broker service
Also, just to make sure the cache was cleared out, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift web page.
It works for me after clearing the cache:
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service:
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache
The MCollective service on the node server (if you have separate servers for broker and node) must be restarted, e.g. with:
service ruby193-mcollective restart
After that you should clear the caches on the broker server, e.g. with:
/usr/sbin/oo-admin-broker-cache --console
Then you should have the new cartridges available. Putting the two answers together, the whole sequence might look like the sketch below.
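This assumes the service names used above (ruby193-mcollective on the node, openshift-broker on the broker) and the default broker path:
# on the node server: restart MCollective so the node re-advertises its cartridges
service ruby193-mcollective restart
# on the broker server: clear the broker application cache
cd /var/www/openshift/broker
bundle exec rake tmp:clear
/usr/sbin/oo-admin-broker-cache --console
# restart the broker so it rebuilds the cartridge list
service openshift-broker restart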

How to fix error 'another rcp application is running in this sandbox'

When I attempt to run an 'scm load' I receive this error:
another rcp application is running in this sandbox file locked at file
c:\workspaces\myworkspace
How can this error be fixed?
I've successfully used the scm load command before, so maybe I need to perform some 'tidying up' after I load a workspace, as this only occurs when I change workspaces?
This thread sums it up:
Two potential solutions:
Run lscm.bat instead of "scm.exe" to do the checkin
lscm will contact your RTC eclipse client to perform the checkin
Use a separate sandbox and repository workspace
Use scm.exe to load a repository workspace into a separate sandbox (e.g. c:\Workspaces\sandbox1)
Make changes to the files in that sandbox
Use scm.exe to check in those changes and deliver them (see the sketch below)
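A rough sketch of that second option, assuming a repository at https://rtc.example.com/ccm and a repository workspace named "my-workspace" (both placeholders); exact option names can vary between RTC CLI versions:
rem load the repository workspace into its own sandbox, separate from the Eclipse one
scm load -r https://rtc.example.com/ccm -d c:\Workspaces\sandbox1 "my-workspace"
rem edit files under c:\Workspaces\sandbox1, then check in and deliver from there
cd c:\Workspaces\sandbox1
scm checkin .
scm deliver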

New repository, production problem

I have a problem with deploying a project to the production server. We use Capistrano and Passenger. The problem is that we moved the project's repository on GitHub to another account. I changed the repository address in deploy.rb; however, during cap production deploy, after authenticating against the production server, Capistrano still looks for the old repository, which fails. I suspect the Git repository configured on the production server also needs to be changed, but I do not know how to do that.
servers: ["85.xxx.xxx.xxx"]
Password:
[85.xxx.xxx.xx] executing command
** [85.xxx.xxx.xx :: err] ERROR: repo/repo.git does not exist. Did you enter it correctly?
** [85.xxx.xxx.xx :: err] fatal: The remote end hung up unexpectedly
command finished in 4220ms
*** [deploy: update_code] rolling back
Try editing shared/cached-copy/.git/config and modify the git repo listed there. If you're using the remote_cache method, it keeps a local git repo and updates that on the remote machine. Repoint that to your new git repo and you should be good to go.
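For example, on the production server (the deploy path here is a placeholder for wherever your Capistrano deploy_to points):
# inside the cached copy that remote_cache keeps on the server
cd /var/www/myapp/shared/cached-copy
# repoint origin at the repository's new GitHub account (placeholder URL)
git remote set-url origin git@github.com:new-account/repo.git
# verify the change took effect
git remote -v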