Keyspace 'OpsCenter' does not exist - DataStax

When I start datastax-agent and look into /var/log/datastax-agent/agent.log, I see the following error message:
clojure.lang.ExceptionInfo: throw+: {:type :opsagent.cassandra/keyspaces-does-not-exist, :message "The OpsCenter storage keyspace, \"OpsCenter\", does not exist yet."} {:object {:type :opsagent.cassandra/keyspaces-does-not-exist, :message "The OpsCenter storage keyspace, \"OpsCenter\", does not exist yet."}, :environment {conn #<SessionManager com.datastax.driver.core.SessionManager#374c40ba>, ks-to-set "\"OpsCenter\"", current-ks nil, e #<InvalidQueryException com.datastax.driver.core.exceptions.InvalidQueryException: Keyspace 'OpsCenter' does not exist>}}
at opsagent.cassandra$set_ks.invoke(cassandra.clj:28)
at opsagent.cassandra$get_conn.invoke(cassandra.clj:33)
at opsagent.cassandra$scan_pdps.invoke(cassandra.clj:180)
at opsagent.cassandra$process_pdp_row$fn__2465.invoke(cassandra.clj:206)
at opsagent.cassandra$process_pdp_row.invoke(cassandra.clj:204)
at opsagent.cassandra$process_pdp_row.invoke(cassandra.clj:202)
at opsagent.cassandra$load_pdps_with_retry$fn__2471.invoke(cassandra.clj:218)
at opsagent.cassandra$load_pdps_with_retry.invoke(cassandra.clj:217)
at opsagent.cassandra$setup_cassandra.invoke(cassandra.clj:275)
at opsagent.opsagent$setup_cassandra.invoke(opsagent.clj:152)
at opsagent.opsagent$init_jmx.invoke(opsagent.clj:206)
at opsagent.opsagent$_main.doInvoke(opsagent.clj:271)
at clojure.lang.RestFn.applyTo(RestFn.java
How do I fix this?

You should restart OpsCenter; doing that fixed it for me.
SSH to the machine where OpsCenter is deployed, then:
kill -9 <opscenter's pid>
cd <your opscenter path>
cd bin
./opscenter
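Once opscenterd is back up it should recreate its storage keyspace, and you can confirm from cqlsh that the agent error will stop (replace the host with any node in your cluster; DESCRIBE KEYSPACES is plain cqlsh):
cqlsh <cassandra_host>
cqlsh> DESCRIBE KEYSPACES;
"OpsCenter" should now appear in the list.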

Related

command line: heroku pg:psql ... gets 'psql: error: SSL error: certificate verify failed'

heroku pg:psql suddenly stopped working:
π main ✗ ❯ heroku pg:psql postgresql-xyz --app xyz
--> Connecting to postgresql-xyz
psql: error: SSL error: certificate verify failed
FATAL: no pg_hba.conf entry for host "47.123.123.123", user "abc", database "xyz", SSL off
▸ psql exited with code 2
π main ❯ heroku -v
heroku/7.59.1 darwin-x64 node-v12.21.0
I notice "SSL off". How to turn in on via HEROKU cli? Or is it a setting in "Config Vars" at heroku.com ?
On macOS, this did the trick.
Postgres seems to install 'root.crt' in the ~/.postgresql folder. Somehow, referring to it as 'root.key' in the connection string works.
psql "sslmode=require sslrootcert=/Users/abc123/.postgresql/root.key user=abc password=xyz host=ec1.compute-1.amazonaws.com dbname=d123"

Ansible Tower 3.7.0 Copy Module Fails To Find or Access Directory

I have an issue with Ansible Tower 3.7.0 (ansible 2.9.7): when using the copy module I receive this error message:
TASK [Copy Installation Directory For CentOS 7] ********************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
fatal: [devmachine]: FAILED! => {"changed": false, "msg": "Could not find or access '/var/lib/awx/projects/xagt_install/Test_Directory' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
However, that directory path does exist:
[root@tower xagt_install]# pwd
/var/lib/awx/projects/xagt_install
[root@tower xagt_install]# ls -Alh
drwxr-xr-x. 2 awx awx 98 Jun 17 12:57 Test_Directory
Here is the task/play:
- name: Copy Installation Directory For CentOS 7
  copy:
    src: /var/lib/awx/projects/xagt_install/Test_Directory
    dest: /tmp/
    remote_src: no
  when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "7" and 'xagt' in ansible_facts.packages)
It appears the "Test_Directory" has the appropriate permissions. Anyone have an idea as to why this module is reporting it cannot "find or access" the directory?
Disabling Settings --> Jobs --> Enable Job Isolation fixed my issue, and the copy module now works.
I assume that if I left Job Isolation enabled, I would need to store directories in /tmp for the copy module to access them?
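If you would rather keep Job Isolation enabled, Tower 3.x also has a "Paths to Expose to Isolated Jobs" setting (AWX_PROOT_SHOW_PATHS) under Settings --> Jobs that whitelists extra directories for isolated jobs. A sketch, assuming it accepts a JSON-style list (verify the exact field name in your Tower version):
[
  "/var/lib/awx/projects/xagt_install"
]
With the project path whitelisted there, the src above should be reachable without copying anything into /tmp.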

chef-solo hangs at the end installing redis

chef-solo hangs at the end when installing redis, as if Chef is waiting for some event to occur. Here is the output from when I had to kill it with Ctrl+C:
[2013-05-14T15:55:27+00:00] ERROR: Running exception handlers
[2013-05-14T15:55:27+00:00] ERROR: Exception handlers complete
Chef Client failed. 8 resources updated
[2013-05-14T15:55:27+00:00] FATAL: Stacktrace dumped to /home/ubuntu/cache/chef-stacktrace.out
[2013-05-14T15:55:27+00:00] FATAL: Chef::Exceptions::MultipleFailures: Multiple failures occurred:
* SystemExit occurred in chef run: service[redis] (redis::default line 107) had an error: SystemExit: exit
* Chef::Exceptions::Exec occurred in delayed notification: service[redis] (redis::default line 83) had an error: Chef::Exceptions::Exec: /sbin/start redis returned 1, expected 0
I am new to Chef and unable to figure out why this is happening. Has anyone noticed this behaviour before?
Here is my recipe file:
package "build-essential" do
action :install
end
user node[:redis][:user] do
action :create
system true
shell "/bin/false"
end
directory node[:redis][:dir] do
owner node[:redis][:user]
group node[:redis][:user]
mode "0755"
action :create
end
directory node[:redis][:data_dir] do
owner node[:redis][:user]
group node[:redis][:user]
mode "0755"
action :create
end
directory node[:redis][:log_dir] do
owner node[:redis][:user]
group node[:redis][:user]
mode "0755"
action :create
end
remote_file "#{Chef::Config[:file_cache_path]}/redis-2.6.10.tar.gz" do
source "http://redis.googlecode.com/files/redis-2.6.10.tar.gz"
action :create_if_missing
end
# Adding 'make test' causes the install to freeze for some reason.
bash "compile_redis_source" do
cwd Chef::Config[:file_cache_path]
code <<-EOH
tar zxf redis-2.6.10.tar.gz
cd redis-2.6.10
make && sudo make install
# to give permissions to the executables that it copied to.
chown -R redis:redis /usr/local/bin
EOH
creates "/usr/local/bin/redis-server"
end
service "redis" do
provider Chef::Provider::Service::Upstart
subscribes :restart, resources(:bash => "compile_redis_source")
supports :restart => true, :start => true, :stop => true
end
template "redis.conf" do
path "#{node[:redis][:dir]}/redis.conf"
source "redis.conf.erb"
owner node[:redis][:user]
group node[:redis][:user]
mode "0644"
notifies :restart, resources(:service => "redis")
end
template "redis.upstart.conf" do
path "/etc/init/redis.conf"
source "redis.upstart.conf.erb"
owner node[:redis][:user]
group node[:redis][:user]
mode "0644"
notifies :restart, resources(:service => "redis")
end
service "redis" do
action [:enable, :start]
end
There are two service "redis" resource statements; is that a problem? How does Chef handle this case; does it merge them into a single resource when running?
I am using Upstart, and here is the redis.upstart.conf.erb file. I'm not sure if anything is wrong with it. Does the order of the statements matter in this file?
#!upstart
description "Redis Server"
emits redis-server
# run when the local FS becomes available
start on local-filesystems
stop on shutdown
setuid redis
setgid redis
expect fork
# Respawn unless redis dies 10 times in 5 seconds
#respawn
#respawn limit 10 5
# start a default instance
instance $NAME
env NAME=redis
#instance $NAME
# run redis as the correct user
#setuid redis
#setgid redis
# run redis with the correct config file for this instance
exec /usr/local/bin/redis-server /etc/redis/redis.conf
respawn
#respawn limit 10 5
I think Dmytro was on the right path, but not exactly.
I see that you are using Upstart as the service provider in Chef. Please check your Upstart config for redis-server for any expect statement. If you have an expect fork or expect daemon statement in there, it means that when starting redis-server, Upstart will wait for the Redis process to fork once or twice respectively. If you have daemonize no in redis.conf, the Redis process will never fork, and therefore Upstart just hangs at the execution of the init script.
Your Redis is not failing to start; it simply runs in the foreground.
I had a similar problem with one of the Redis cookbooks I was using. Its redis.conf.erb file had the configuration option
daemonize no
Some other cookbooks make this option configurable via an attribute, so your fix depends on the cookbook you are using. Either edit your redis.conf.erb file or find where that attribute is set and change it to yes.
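Concretely, for the recipe above the two files that have to agree are redis.conf.erb and redis.upstart.conf.erb. A minimal sketch of either fix (change one or the other, not both):
# in redis.conf.erb -- let Redis fork into the background, so the fork Upstart waits for actually happens
daemonize yes
# ...or keep "daemonize no" and remove this line from redis.upstart.conf.erb,
# so Upstart tracks the foreground process instead:
# expect fork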

How do I set up queue_classic background jobs on EC2 using the rubber gem

How do I set up queue_classic background jobs on EC2 using the rubber gem?
I tried using foreman export, but I'm not sure where to run it (app or web role?).
My failed attempt followed http://blog.sosedoff.com/2011/07/24/foreman-capistrano-for-rails-3-applications/.
Should I be creating a new instance to run these jobs (or a new role)?
Thanks for the help!
Figured it out.
First, create a Procfile with the queue_classic rake task (see http://blog.daviddollar.org/2011/05/06/introducing-foreman.html).
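For reference, the relevant Procfile entry looks roughly like this (the process name "worker" is arbitrary; qc:work is assumed to be queue_classic's standard worker rake task):
worker: bundle exec rake qc:work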
Then I added foreman to my host (make sure you have a Procfile for that environment, e.g. Procfile.production):
# Foreman tasks
namespace :foreman do
  desc 'Export the Procfile to Ubuntu upstart scripts'
  task :export, :roles => :queue do
    run "cd #{release_path} && bundle exec foreman export upstart /etc/init -f ./Procfile.#{Rubber.env} -a #{application} -u #{user} -l #{release_path}/log/foreman"
  end

  desc "Start the application services"
  task :start, :roles => :queue do
    rsudo "start #{application}"
  end

  desc "Stop the application services"
  task :stop, :roles => :queue do
    rsudo "stop #{application}"
  end

  desc "Restart the application services"
  task :restart, :roles => :queue do
    rsudo "stop #{application}; start #{application}"
    #run "sudo start #{application} || sudo restart #{application}"
  end
end

after "deploy:update", "foreman:export"    # Export foreman scripts
before "deploy:restart", "foreman:restart" # Restart application scripts
after "deploy:stop", "foreman:stop"        # Stop application scripts

Heroku: MongoHQ: connection problem

I recently forked https://github.com/fortuity/rails3-mongoid-omniauth and tried to get it to run on heroku.com. This is an application which shows how to use MongoDB (via MongoHQ) on Heroku, as well as OAuth authentication. My forked code snapshot is at https://github.com/jgodse/rails3-mongoid-omniauth/tree/8cb490e660ab1d2d1df0f68312584563f0fd223a
After I tweaked mongoid.yml to include the URI parameter and remove the other parameters for the production (i.e. Heroku) environment, and then started the application on heroku.com, I got the following log (from heroku logs):
2011-05-11T19:00:36+00:00 heroku[web.1]: Starting process with command: thin -p 41913 -e production -R /home/heroku_rack/heroku.ru start
2011-05-11T19:00:42+00:00 app[web.1]: /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.0/lib/mongo/connection.rb:494:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.0/lib/mongo/connection.rb:632:in `setup'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.0/lib/mongo/connection.rb:101:in `initialize'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.0/lib/mongo/connection.rb:152:in `new'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.0/lib/mongo/connection.rb:152:in `from_uri'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.1/lib/mongoid/config/database.rb:86:in `master'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.1/lib/mongoid/config/database.rb:19:in `configure'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.1/lib/mongoid/config.rb:319:in `configure_databases'
2011-05-11T19:00:42+00:00 app[web.1]: from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.1/lib/mongoid/config.rb:114:in `from_hash'
2011-05-11T19:00:42+00:00 app[web.1]: from (eval):2:in `from_hash'
2011-05-11T19:00:42+00:00 heroku[web.1]: Process exited
2011-05-11T12:00:43-07:00 heroku[web.1]: State changed from starting to crashed
My heroku environment looks like this (with some key information xxxx'ed out):
$ heroku info
=== jgodse-omniauth-mongoid
Web URL: http://jgodse-omniauth-mongoid.heroku.com/
Git Repo: git@heroku.com:jgodse-omniauth-mongoid.git
Dynos: 1
Workers: 0
Repo size: 5M
Slug size: 5M
Stack: bamboo-mri-1.9.2
Data size: (empty)
Addons: Basic Logging, MongoHQ MongoHQ Free, Shared Database 5MB
Owner: xxxxxxxx
Jay@JAY-PC ~/rapps/rails3-mongoid-omniauth (master)
$ heroku config --long
BUNDLE_WITHOUT => development:test
DATABASE_URL => postgres://xxxxxxxxxxxx
LANG => en_US.UTF-8
MONGOHQ_URL => mongodb://heroku:xxxxxxxxxxxxxxxxxxxxxxxxxxxx.mongohq.com:27098/app527030
RACK_ENV => production
SHARED_DATABASE_URL => postgres://xxxxxxx xxxxxx
The Heroku log says that it is still trying to connect to localhost:27017 even though I removed the localhost references from mongoid.yml. Is there anything else I must do to force it to connect to my MONGOHQ_URL?
In the file mongoid.yml, I was supposed to use "MONGOHQ_URL", but I used "MONGHQ_URL". The code therefore behaved as it was supposed to and defaulted to localhost.
When I started using "MONGOHQ_URL" in mongoid.yml, everything worked fine.