Thinking Sphinx error when running rake thinking_sphinx:start - ruby-on-rails-3

I am using gem 'thinking-sphinx', '2.0.10' for search functionality. I am following the http://railscasts.com/episodes/120-thinking-sphinx tutorial for this.
script/plugin install git://github.com/freelancing-god/thinking-sphinx.git
rake thinking_sphinx:index
These two steps executed without any problem, but when I ran rake thinking_sphinx:start it gave the following error:
Failed to start searchd daemon. Check /home/user/newsvn/alumnicell/log/searchd.log.
I searched the net for this, but even after trying many solutions I am not able to fix it. While searching I also learned that there should be a sphinx.yml file in config, which is not present in my project.
How do I solve this error?

Solved it...
I just added a sphinx.yml file inside config and specified the port number for each environment as follows:
development:
  port: 9310
  morphology: stem_en
test:
  port: 9310
  morphology: stem_en
production:
  port: 9310
  morphology: stem_en
Then I changed the listen address for searchd in development.sphinx.conf as follows:
searchd
{
listen = 127.0.0.1:9310
}
Then on the console I ran:
rake thinking_sphinx:rebuild
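To verify the daemon actually came up on the new port, a quick check can help. This is a minimal sketch, assuming the development port 9310 from the sphinx.yml above:
require 'socket'
begin
  # searchd should now accept connections on the port set in sphinx.yml
  TCPSocket.new('127.0.0.1', 9310).close
  puts 'searchd is listening on 9310'
rescue Errno::ECONNREFUSED
  puts 'searchd is not running - check log/searchd.log'
end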

Maybe /home/user/newsvn/alumnicell/log/searchd.log holds the solution to the problem.
Just post it!

Related

Using services: mysql for codeception test in gitlab-ci fails with "Connection refused"

I have a CakePHP application with the codeception plugin for testing.
Locally I run it in a ddev docker environment and everything works fine.
Trying to run automated tests with gitlab-ci gives me the following error:
Running with gitlab-runner 11.1.0 (081978aa)
on shared runner 601c0f11
Using Docker executor with image kevinliteon/cakephp:php7 ...
Starting service mysql:latest ...
Pulling docker image mysql:latest ...
Using docker image sha256:6a834f03bd02bb88cdbe0e289b9cd6056f1d42fa94792c524b4fddc474dab628 for mysql:latest ...
Waiting for services to be up and running...
*** WARNING: Service runner-601c0f11-project-94-concurrent-0-mysql-0 probably didn't start properly.
Health check error:
service "runner-601c0f11-project-94-concurrent-0-mysql-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2018-10-04T12:12:18.904025613Z Initializing database
2018-10-04T12:12:18.925096235Z 2018-10-04T12:12:18.919745Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2018-10-04T12:12:18.925195518Z 2018-10-04T12:12:18.919970Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.12) initializing of server in progress as process 30
2018-10-04T12:12:50.330736417Z 2018-10-04T12:12:50.330487Z 5 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
*********
Pulling docker image kevinliteon/cakephp:php7 ...
Using docker image sha256:bd4a83b02647ad93a356b343d2ce5ae3a9a1177aea2cd76c61b009abc7df8990 for kevinliteon/cakephp:php7 ...
Running on runner-601c0f11-project-94-concurrent-0 via d7f4a5e71b47...
Fetching changes...
Removing vendor/
HEAD is now at 92cb022 test
Checking out 92cb0223 as deployment...
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
$ vendor/bin/codecept run Unit
Codeception PHP Testing Framework v2.3.9
Powered by PHPUnit 6.5.13 by Sebastian Bergmann and contributors.
In Db.php line 308:
Db: SQLSTATE[HY000] [2002] Connection refused while creating PDO connection
My gitlab-ci.yml (in part):
services:
  - mysql:latest

variables:
  MYSQL_ROOT_PASSWORD: mysql123456789
  MYSQL_DATABASE: test_db
  MYSQL_USER: db
  MYSQL_PASSWORD: db

build:
  ...

codecept:Unit:
  stage: test
  script:
    - vendor/bin/codecept run Unit
In my codeception.yml I configured the Db module:
modules:
  config:
    Db:
      dsn: 'mysql:host=mysql;dbname=test_db'
      user: 'db'
      password: 'db'
      cleanup: true # reload dump between tests
      populate: true # load dump before all tests
      reconnect: true
I also tried using the root user, without success.
The problem is that I cannot connect to the DB for whatever reason... Maybe the warnings while initializing the service container have something to do with it, but I could not figure out how to fix them, or whether they are the problem at all.
I really tried a lot of things without any success! My code basically follows the documentation of gitlab-ci and codeception, so it should work.
Has anybody implemented this scenario successfully, or does anyone know what I'm doing wrong?
Thanks for any help!
To answer how I solved it:
The first thing was that I had to add the environment variable db_dsn like this:
export db_dsn="mysql://user:paswd@host/db"
Then I still got the health-check error. The only way I found to set it up successfully was to use another docker image for the db service. I chose mariadb:latest, and then it worked for me.
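For reference, a minimal sketch of how the two changes might fit together in .gitlab-ci.yml. Assumptions: the mariadb image accepts the same MYSQL_* variables, GitLab's default service alias for mariadb:latest is mariadb, and the credentials are the ones from the question:
services:
  - mariadb:latest

variables:
  MYSQL_ROOT_PASSWORD: mysql123456789
  MYSQL_DATABASE: test_db
  MYSQL_USER: db
  MYSQL_PASSWORD: db

codecept:Unit:
  stage: test
  script:
    # point codeception's Db module at the service container
    - export db_dsn="mysql://db:db@mariadb/test_db"
    - vendor/bin/codecept run Unit
If you go this route, the host in codeception.yml's dsn would also need to change from mysql to mariadb to match the service alias.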

Trouble with pysphere - ansible

I am trying to deploy a VM via Ansible on my ESXi host.
I am using the following role for this:
- vsphere_guest:
    vcenter_hostname: emea-esx-s18t.****.net
    username: ****
    password: ****
    guest: newvm001
    state: powered_off
    vm_extra_config:
      vcpu.hotadd: yes
      mem.hotadd: yes
      notes: This is a test VM
    vm_disk:
      disk1:
        size_gb: 10
        type: thin
        datastore: ****
    vm_nic:
      nic1:
        type: vmxnet3
        network: VM Network
        network_type: standard
    vm_hardware:
      memory_mb: 4096
      num_cpus: 4
      osid: windows7Server64Guest
      scsi: paravirtual
    esxi:
      datacenter: MyDatacenter
      hostname: esx-s18t.****.net
When I execute this role via a playbook, I get the following message:
root@ansible1:~/ansible# ansible-playbook -i Inventory vmware_deploy.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [172.20.22.5]
TASK [vmware : vsphere_guest] **************************************************
fatal: [172.20.22.5]: FAILED! => {"changed": false, "failed": true, "msg": "pysphere module required"}
PLAY RECAP *********************************************************************
172.20.22.5 : ok=1 changed=0 unreachable=0 failed=1
So it seems the "pysphere" module is missing. I've already checked that with the command:
root@ansible1:~/ansible# pip install pysphere
Requirement already satisfied (use --upgrade to upgrade): pysphere in /usr/local/lib/python2.7/dist-packages/pysphere-0.1.7-py2.7.egg
Then I did the upgrade and got the following message back:
root@ansible1:~/ansible# pip install pysphere --upgrade
Requirement already up-to-date: pysphere in /usr/local/lib/python2.7/dist-packages/pysphere-0.1.7-py2.7.egg
So it seems it is already installed and up to date; why do I get this error message then?
How can I fix it so that my goddamn role finally works?
Jesus, Ansible makes me crazy...
I hope you guys can help me, thanks in advance!
kind regards,
kgierman
EDIT:
So I've written a new playbook with the old stuff; the new playbook looks like this (I've added your localhost and connection: local suggestions):
---
- hosts: localhost
  connection: local
  tasks:
    vsphere_guest:
      vcenter_hostname: emea-esx-s18t.****.net
      username: ****
      password: ****
      guest: newvm001
      state: powered_off
      vm_extra_config:
        vcpu.hotadd: yes
        mem.hotadd: yes
        notes: This is a test VM
      vm_disk:
        disk1:
          size_gb: 10
          type: thin
          datastore: ****
      vm_nic:
        nic1:
          type: vmxnet3
          network: VM Network
          network_type: standard
      vm_hardware:
        memory_mb: 4096
        num_cpus: 4
        osid: windows7Server64Guest
        scsi: paravirtual
      esxi:
        datacenter: MyDatacenter
        hostname: esx-s18t.****.net
So when I execute this playbook I get the following error:
root@ansible1:~/ansible# ansible-playbook vmware2.yml
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/root/ansible/vmware2.yml': line 7, column 19, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
vcenter_hostname: emea-esx-s18t.sddc-hwl-family.net
username: root
^ here
the struggle is real -.-
You should generally execute provisioning modules such as vsphere_guest on your local Ansible machine.
I suspect that 172.20.22.5 is actually your ESX host, and Ansible tries to execute the module from there, where pysphere is surely absent.
Use:
- hosts: localhost
  tasks:
    - vsphere_guest:
        ...
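If the rest of the play has to target the ESX inventory host, another option is to delegate just this task to the control machine. A sketch using the question's own values (delegate_to is a standard per-task Ansible keyword):
- hosts: all
  tasks:
    - vsphere_guest:
        vcenter_hostname: emea-esx-s18t.****.net
        username: ****
        password: ****
        guest: newvm001
        state: powered_off
      # run only this task on the control machine, where pysphere is installed
      delegate_to: localhost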
Ran into this issue once again on macOS / OSX...
It seems to be related to PYTHONPATH.
I have this in my .profile:
export PYTHONPATH="/usr/local/lib/python2.7/site-packages"
[ ... further down ... ]
export PYTHONPATH="/usr/local/Cellar/ansible/2.1.2.0/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/2.2.1.0/libexec/vendor/lib/python2.7/site-packages:$PYTHONPATH"
The first line with PYTHONPATH is where pysphere and other system modules reside.
Also take note of the specific version of Ansible!
Anyway, this seems to resolve the issue.
Source: https://github.com/debops/debops-tools/issues/159#issuecomment-236536195

Sidekiq not processing queue

What possible reasons can prevent Sidekiq from processing jobs in the queue? The queue is full, yet the log file sidekiq.log indicates no activity at all. So the queue is full, the log is empty, and Sidekiq does not seem to process items; there seems to be no worker processing jobs. Restarting Redis or flushing it with FLUSHALL or FLUSHDB has no effect. Sidekiq has been started with
bundle exec sidekiq -L log/sidekiq.log
and produces the following log file:
2013-05-30..Booting Sidekiq 2.12.0 using redis://localhost:6379/0 with options {}
2013-05-30..Running in ruby 1.9.3p374 (2013-01-15 revision 38858) [i686-linux]
2013-05-30..See LICENSE and the LGPL-3.0 for licensing details.
2013-05-30..Starting processing, hit Ctrl-C to stop
How can you find out what went wrong? Are there any hidden log files?
The reason in our case: Sidekiq may be looking at the wrong queue. By default Sidekiq uses a queue named "default". We used two different queue names and defined them in config/sidekiq.yml:
# configuration file for Sidekiq
:queues:
  - queue_name_1
  - queue_name_2
The problem is that this config file is not automatically loaded in your development environment (unlike database.yml or thinking_sphinx.yml, for instance) by a plain bundle exec sidekiq command. So we wrote our jobs into two specific queues while Sidekiq was waiting for jobs in a third one (the default queue). You have to pass the path to the config file through the -C or --config option:
bundle exec sidekiq -C ./config/sidekiq.yml
or you can pass the queue names directly (no spaces are allowed after the comma):
bundle exec sidekiq -q queue_name_1,queue_name_2
To track the problem down it is also helpful to pass the -v or --verbose option at the command line, or to use :verbose: true in sidekiq.yml. Everything defined in a config file is of course useless if that file is not loaded, so first make sure you are using the right config file.
If you have a config/sidekiq.yml check that all the queues are defined there, check this sample file: https://github.com/mperham/sidekiq/blob/master/examples/config.yml
If you are passing queue names in the command line or Procfile, something similar to
bin/sidekiq -q queue1 -q queue2
bundle exec sidekiq -q queue1 -q queue2
check that all your queues are defined there.
In case you are not sure about the names of your queues, you can figure it out with the following script:
require "sidekiq/api"
stats = Sidekiq::Stats.new
stats.queues
# {"production_mailers"=>25, "production_default"=>1}
Then, you can do things with the queues:
queue = Sidekiq::Queue.new("production_mailers")
queue.count   # number of jobs waiting in this queue
queue.clear   # remove them all
It took me hours to find out that I had set config.active_job.queue_name_prefix = "xxxxx_#{Rails.env}". The queue names in the settings look the same, but Sidekiq looks for the queues with the prefix.
Wrong setting
app/jobs/my_job.rb
class MyJob < ApplicationJob
queue_as :default
end
config/sidekiq.yml
:queues:
  - default
Correct setting
app/jobs/my_job.rb
class MyJob < ApplicationJob
queue_as :default
end
config/sidekiq.yml
:queues:
  - xxxxx_development_default
  - xxxxx_production_default
My problem was that I had a configure_server but not a configure_client in my initialiser; you must have both:
Sidekiq.configure_server do |config|
config.redis = { url: ENV.fetch('SIDEKIQ_REDIS_URL', 'redis://127.0.0.1:6379/1') }
end
Sidekiq.configure_client do |config|
config.redis = { url: ENV.fetch('SIDEKIQ_REDIS_URL', 'redis://127.0.0.1:6379/1') }
end
In my case, Sidekiq was fine in development but stuck in staging. It was human error in the Capistrano deploy configuration: I set the path for sidekiq.yml incorrectly in the Capfile (shared instead of current).
It failed silently:
# Capfile
# WRONG:
set :sidekiq_config, -> { File.join(shared_path, 'config', 'sidekiq.yml') }
^^^^^^^^^^^
# RIGHT:
set :sidekiq_config, -> { File.join(current_path, 'config', 'sidekiq.yml') }
Flushing Redis worked for me.
WARNING: THIS WILL REMOVE ALL DATA IN YOUR REDIS DATABASE.
redis-cli flushall
I was banging my head against a brick wall on this for a while; my issue was that Sidekiq required a newer version of redis-server. Running bundle exec sidekiq in the foreground revealed the error. Once I updated to a newer version of redis-server it was fine.
I just had this issue. It turns out I had made a syntax error in my sidekiq.yml.
Spent at least two hours on this as well because queues and configuration and web UI were all fine ... the jobs were just not processed.
My issue was that the sidekiq server was not running in my docker-compose setup, even though it should have been started by the command section here:
sidekiq:
  depends_on:
    - 'proddb'
    - 'redis'
  build: rails-app
  command: bundle exec sidekiq --environment ${RAILS_ENV} -C config/sidekiq.yml  # <-- this should have started it
  volumes:
    - './rails-app:/project'
    - '/project/tmp' # don't mount tmp directory
  environment:
    - REDIS_URL_SIDEKIQ=${REDIS_URL_SIDEKIQ}
  networks:
    - backend
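One way to confirm whether the service actually ran its command is to check its logs. A sketch, assuming the service name sidekiq from the compose file above:
docker-compose up -d sidekiq
docker-compose logs sidekiq   # should show "Booting Sidekiq ..." if the command ran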
My problem was that I had not configured my initializers/sidekiq.rb properly, but even with the correct config Sidekiq was still not running enqueued jobs. I had to run spring stop on top of that and restart everything, and that solved my issue.
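In other words, something along these lines (a sketch; spring stop kills the Spring preloader so the corrected initializer is actually picked up on restart):
spring stop
bundle exec sidekiq -C config/sidekiq.yml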
I encountered a similar problem wherein the logs would show entries such as INFO Rails : queueing TestWorker (TestWorker). However, the jobs would never get processed, and none of the answers in this question solved the issue.
The tl;dr of my solution is that Sidekiq's Testing Client was getting unexpectedly triggered.
I eventually deduced that there is some "magic" going on under the surface that makes it difficult to determine exactly where/when/how the testing trigger was getting configured, based on the following anecdote...
Running bundle exec sidekiq -C config/sidekiq.yml -e development had the result that Sidekiq::Testing.fake? == true
However, running bundle exec sidekiq -C config/sidekiq.yml -e development_2 had the result that Sidekiq::Testing.fake? == false
^ The only difference between these 2 commands is that I renamed the development environment in sidekiq.yml to development_2, i.e. the same/equivalent environment was running with both commands (at least, presumably it would be the same environment if it wasn't for this inane "magic" under the hood).
I updated sidekiq.rb to explicitly toggle Sidekiq::Testing via the following:
sidekiq_testing_fake = false # set this using env var, etc.
if sidekiq_testing_fake
Sidekiq::Testing.fake!
elsif Sidekiq.constants.include?(:Testing)
Sidekiq::Testing.disable!
end
My issue was that I had both a standalone redis-server and Redis.app's redis-server running; I killed the standalone redis-server (and kept the Redis.app one).

Refinery error when pushing to Heroku

I have a Refinery app that works great locally.
I created a Bamboo stack on Heroku.
When I try to push, I see this:
Preparing app for Rails asset pipeline
Running: rake assets:precompile
rake aborted!
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Then I open it up in the browser:
"We're sorry, but something went wrong."
$ heroku logs
Rendered vendor/bundle/ruby/1.9.1/gems/refinerycms-authentication-2.0.2/app/views/refinery/users/new.html.erb within refinery/layouts/login (82.3ms)
2012-03-15T14:43:25+00:00 app[web.1]: Completed 500 Internal Server Error in 1269ms
full output is here
Any help is great, thanks!
+++
Update:
I updated the stack to Cedar and set the Ruby env to 1.9.3.
$ heroku config
DATABASE_URL => ..
GEM_PATH => vendor/bundle/ruby/1.9.1
LANG => en_US.UTF-8
PATH => bin:vendor/bundle/ruby/1.9.1/bin:/usr/local/bin:/usr/bin:/bin
RACK_ENV => production
RAILS_ENV => production
RUBY_VERSION => ruby-1.9.3-p0
SHARED_DATABASE_URL => ..
$ heroku info --app mimacohuoncedar
=== mimacohuoncedar
Addons: Basic Logging, Shared Database 5MB
Database Size: (empty)
Git URL: git@heroku.com:mimacohuoncedar.git
Owner: ..
Repo Size: 9M
Slug Size: 19M
Stack: cedar
Web URL: http://mimacohuoncedar.herokuapp.com/
$ heroku logs now shows this:
this-updated
Where do I go from here? Thanks
Don't know if you managed to fix this, but I ran into the same issue using the Cedar stack. I found this article on Heroku that seemed to do the trick for me. I ran the line in the terminal and it pushed first time.
I am seeing this same error, and the accepted answer did not solve it for me.
This blog, however, did the trick. The blog title refers to Rails 3.2, but I'm on 3.1 and was seeing this same error.
The blog recommends adding this line to application.rb:
config.assets.initialize_on_precompile = false
The meaning, as summarized from the article:
This option prevents the Rails environment from being loaded when the assets:precompile task is executed. Because Heroku precompiles assets before setting the database configuration, you need to set this option to false or your Rails application will try to connect to a nonexistent database.
I added the line and pushed; everything seems good now.
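For reference, a sketch of where the line sits in a Rails 3.1 app (the module name MyApp is a placeholder for your application's own):
# config/application.rb
require File.expand_path('../boot', __FILE__)
require 'rails/all'

module MyApp   # placeholder for your application's module name
  class Application < Rails::Application
    # don't load the full Rails environment (and thus the database)
    # when assets:precompile runs on Heroku
    config.assets.initialize_on_precompile = false
  end
end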
That output looks suspiciously like the Cedar stack and not Bamboo - give http://devcenter.heroku.com/articles/labs-user-env-compile a go. That should sort you out.

Running delayed_job under monit with Ubuntu

I'm struggling to get delayed_job working under Rails 3.0.9 (Ruby 1.9.2). The only way I have succeeded so far is to type the command rake jobs:work manually.
But I want it to start automatically when the Rails application starts.
I have installed monit under Ubuntu and configured it to launch a file located in my app. This file looks like:
check process delayed_job with pidfile /home/me/myapp/tmp/pids/delayed_job.pid
start program = "/home/me/myapp/script/delayed_job start"
stop program = "/home/me/myapp/script/delayed_job stop"
And I added the environment setting in the delayed_job script file:
#!/usr/bin/env ruby
ENV['RAILS_ENV'] = "development"
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize
When I run the command "sudo monit start delayed_job" I get the following error:
/usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- bundler/setup (LoadError)
So I guess it is because sudo is using the wrong Ruby environment.
I then tried the solution from:
rvm monit delayed_job
by adding rvm -S to the start program / stop program lines.
But it still fails with the error: rvm command not found
My rvm directory is located in my home directory, /home/me/.rvm.
I tried the workarounds in (sudo changes PATH - why?) to change the PATH environment variable by adding
/usr/bin/env PATH=/home/me/.rvm/bin:$PATH
The command "sudo monit start delayed_job" then succeeded and the worker started.
But the issue is: when I launch sudo /etc/init.d/monit start and look at the syslog, I still get 'delayed_job' failed to start.
So I don't know how to investigate further, or how to get more verbose errors out of monit.
I finally succeeded in solving this issue.
I modified the monit file like this:
check process delayed_job with pidfile /home/me/myapp/tmp/pids/delayed_job.pid
start program = "/bin/su - me -c 'cd /home/me/myapp/; script/delayed_job start'"
stop program = "/bin/su - me -c 'cd /home/me/myapp/; script/delayed_job stop'"
I also downgraded the daemons gem, because there seem to be problems with the latest version; I'm now using daemons 1.0.10.
I also changed the permissions of the log file /home/me/myapp/log/delayed_job.log, because it seems it was created by root earlier and my user had no access to it (I had problems testing the command "script/delayed_job start" as the "me" user).
This is the only line that worked for me and read the ENV properly:
start program = "/usr/local/rvm/bin/rvm-shell -c 'cd /var/www/[APP]/current/; RAILS_ENV=production bundle exec bin/delayed_job start'"
Hope it helps!