RhoMobile: rake redis:install aborted

I am working my way through the RhoMobile tutorial (http://docs.rhomobile.com/rhoconnect/command-line#generate-an-application) and I am at the point of entering
rake redis:install
when I get the following error:
WARNING: using the built-in Timeout class which is known to have issues when used for opening connections. Install the SystemTimer gem if you want to make sure the Redis client will not hang.
See http://redis.io/ for information about redis.
Installing redis to C:\RhoStudio\redis-2.4.0;C:\dropbox\code\InstantRhodes\redis-1.2.6-windows.
rake aborted!
Zip end of central directory signature not found
Tasks: TOP => redis:install => redis:download
(See full trace by running task with --trace)
D:\Dropbox\code\rhodes-apps\storeserver>
I am working on a Windows machine, primarily using RhoStudio.

It ended up being an environment variables issue. Also, it seems the main support forum for Rhodes is the Google Group. The question is answered here:
https://groups.google.com/d/topic/rhomobile/b-Adx2FDMT8/discussion
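Note that the install path in the error output above contains two directories joined by a semicolon (C:\RhoStudio\redis-2.4.0;C:\dropbox\code\InstantRhodes\redis-1.2.6-windows), which suggests a redis-related environment variable holds both paths. A hedged way to inspect this from the Windows command prompt (the filter pattern is just an illustration):
rem list all environment variables and keep only lines mentioning redis
set | findstr /i redis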

If you are using RhoStudio on Windows, Redis is installed automatically with RhoStudio, so there is no need to install it again.

Related

How to tell a GitHub Action that the job has completed successfully?

I use a GitHub Action to deploy my website to my server. The last SSH command is npm run start. It eventually outputs ready - started server on http://localhost:4000 (since I use Next.js), but it seems GitHub doesn't know what that means and prints:
2021/01/09 14:24:14 Error: command timeout
err: Run Command Timeout!
Although the website is deployed successfully, GitHub shows that the Action failed to execute.
So how can I tell the GitHub Action that the job completed successfully?
You should find a way to start the application in a daemon process of its own, rather than as a process within the SSH session. Perhaps this tool (pm2) might solve your problem? This question and answer is somewhat related.
There are definitely other ways to start your app in a daemon process, or perhaps as a service, but this might be the most straightforward for you since it's a Node tool.
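For example, a minimal sketch of the pm2 approach (assuming pm2 is installed globally on the server; "my-website" is a placeholder process name):
# one-time install of the process manager
npm install -g pm2
# start the app as a daemon so the SSH session can exit immediately
pm2 start npm --name my-website -- run start
# persist the process list so it survives restarts
pm2 save
Because pm2 daemonizes the process, npm run start no longer blocks the SSH session until the timeout.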

redmine:migrate_from_trac - stack level too deep error

After a successful installation of Redmine, I am trying to migrate the data from Trac to Redmine and I am getting the following error. Is there any workaround to fix this?
user#user:~/redmine-2.3$ rake redmine:migrate_from_trac RAILS_ENV="production"
WARNING: a new project will be added to Redmine during this process.
Are you sure you want to continue ? [y/N] y
Trac directory []: /home/user/implementation
Trac database adapter (sqlite3, mysql2, postgresql) [sqlite3]:
Trac database encoding [UTF-8]:
Target project identifier []: implementation
Migrating components...
Migrating milestones...
Migrating custom fields...
Migrating tickets...
Migrating wiki...
Components: 178/183
Milestones: 39/39
Tickets: 2082/2082
Ticket files: 0/421
Custom values: 2812/2812
Wiki edits: 5/5
Wiki files: 0/0
rake aborted!
stack level too deep
Tasks: TOP => redmine:migrate_from_trac
(See full trace by running task with --trace)
This is a typical stack overflow error: it means a function is being called recursively in an infinite loop. It is caused by a bug in that script, likely triggered because your data is somehow corrupted and the script cannot cope with that.
Try calling the script with the --verbose flag, or check the log files for error messages. Try to narrow down the error in your data by running the script as a test with reduced input (e.g. without tickets).
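For example, rake itself hints at how to get the full backtrace, which usually shows which method is recursing (a sketch based on the task invocation above):
rake redmine:migrate_from_trac RAILS_ENV="production" --trace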

Deploying Custom Cartridges on Openshift Origin

I have created a new custom cartridge, which I have packaged into an RPM using tito and installed using yum. The cartridge is copied by my spec file to the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up.
I went digging in the Ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds within the /usr/libexec/openshift/cartridges directory. I have tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but I have had no success. Is there somewhere I need to hardcode my cart name to some global variable or something? Basically, does anyone know how to get a custom cart to show up on the webpage for creating a new application?
UPDATE: I ran into a slide deck that had one slide on how to install the cartridge. I have still had no success, but here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command:
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
the output of which stated that the cartridge was installed.
cleared the cache with
bundle exec rake tmp:clear
restarted the OpenShift broker service
Also, just to make sure the cache was cleared out, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift web page.
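For what it's worth, a hedged way to double-check which cartridges the broker believes are installed is the list action of the same admin tool (assuming your Origin build supports it):
oo-admin-cartridge -a list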
It works for me after clearing the cache
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache
The MCollective service on the node server (if you have separate servers for broker and node) must be restarted, e.g. with
service ruby193-mcollective restart
After that you should clear the caches on the broker server, e.g. with
/usr/sbin/oo-admin-broker-cache --console
Then you should have the new cartridges available.

Bundler::GemNotFound when compiling assets from cap deploy

I'm deploying to servers with Capistrano and running bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile as the last step. The problem is that when it gets to this point from cap deploy, I get the following error:
/usr/local/rvm/gems/ruby-1.9.3-p194/gems/bundler-1.1.4/lib/bundler/spec_set.rb:90:in `block in materialize': Could not find Platform-0.4.0 in any of the sources (Bundler::GemNotFound)
Platform-0.4.0 IS in fact on the server, and when I go onto the server and run this exact command, everything works fine.
A couple of facts about my server: it is using RVM, but that doesn't seem to be an issue with cap, as the stack trace above would suggest. The other fact of interest is that this server was first created with a custom script I wrote that downloads an archived version of the git repo and then manually runs what cap does on a deploy. The reason I'm doing this, if anyone asks, is for automation with AWS Auto Scaling. If I do a normal deploy:setup (not using my AWS script), deployments work fine. But the gem list is the same, and the site works all the same either way. It's just something with the cap deploy.
Any thoughts?
I figured out what I was doing wrong. In my custom AMI scripts, I was naming the initial release folder 'first' when it should be a timestamp, the way Capistrano normally names it. That screwed things up on subsequent deployments.
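For illustration, Capistrano names each release directory with a timestamp and points the current symlink at the newest one, so the releases tree should look roughly like this (the timestamps here are made up):
releases/20120601103000
releases/20120614091500
current -> releases/20120614091500
Any custom provisioning script should follow the same convention so Capistrano can sort and roll back releases correctly.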

Delayed Job failing in Production environment on Server

I am using the delayed_job gem for sending emails in my Rails app.
delayed_job was working well, but for the last 5 days it has not been working and throws the following error in the delayed_job.log file:
2011-10-09T01:53:04+0530: [Worker(delayed_job host:backupserver pid:23953)] Syck::DomainType#private_group_join_request failed with NoMethodError: undefined method private_group_join_request' for # - 11 failed attempts
2011-10-09T01:53:04+0530: [Worker(delayed_job host:backupserver pid:23953)] 1 jobs processed at 1.4503 j/s, 1 failed ...
2011-10-09T01:54:40+0530: [Worker(delayed_job host:backupserver pid:23953)] Syck::DomainType#contact_us_email failed with NoMethodError: undefined method contact_us_email for # - 11 failed attempts
2011-10-09T01:54:40+0530: [Worker(delayed_job host:backupserver pid:23953)] 1 jobs processed at 4.3384 j/s, 1 failed ...
The following is one example of how I am calling delayed_job to send an email:
UserMailer.delay(:run_at => 10.seconds.from_now).contact_us_email(self)
I am starting delayed_job with
RAILS_ENV=production script/delayed_job start
It works correctly in the development as well as the production environment on my local machine.
The environment I am using in the Rails app:
Rails 3.0.8
Ruby 1.9.2 in Linux(Ubuntu)
rake 0.9.2
delayed_job 2.1.4
This is the same as
Undefined Method Error when creating delayed_job workers with script/delay_job
but the solution is not working for me.
I figured it out. It was due to the "libyaml" package, which was not present on my local system but was installed on the server.
Is it possible that you didn't stop and start your delayed_job worker when you deployed some new code? If a worker that was running before the deploy is trying to run new methods, it will fail.
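For example, with the daemonized worker from the question, a redeploy usually means restarting the worker so it picks up the new method definitions (a sketch using the same script shown above):
RAILS_ENV=production script/delayed_job restart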
Is it possible that YAML (or Syck) running in the worker process doesn't know about the method in question? Take a look at:
https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
... the relevant part is:
One common cause of deserialization errors is that the YAML references
a class not known to the worker. If this is the case, you can add
# file: config/initializers/custom.rb
require 'my_custom_class'
which will force my_custom_class to be loaded when the worker starts.
I had to restart my Unicorn workers on the production server by hand, because for some reason cap deploy was not doing it for me.
So what I had to do was:
sudo /etc/init.d/unicorn_myapp stop
sudo /etc/init.d/unicorn_myapp start
But Unicorn wasn't able to start, so I had to
sudo rm /tmp/unicorn.my_app.sock
And
sudo /etc/init.d/unicorn_myapp start