Install Apache from Puppetlabs on Vagrant

I'm new to this, and I think I'm just missing one piece needed to grasp what the problem really is.
I understand that I can create my own Puppet modules that will install certain packages on a Vagrant instance. There are also ready-made ones, like this Apache module. I've run
vagrant ssh and installed it using puppet module install puppetlabs/apache. It now resides under /etc/puppet/modules/apache. But Apache itself is not installed.
So, how do I install Apache?
In my Vagrantfile I have
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "puppet/manifests"
  puppet.module_path = "puppet/modules"
  puppet.manifest_file = "init.pp"
  puppet.options = "--verbose --debug"
end
Plus, in the main Vagrant directory, I have this under puppet/modules/apache/manifests/init.pp:
class apache2::install {
  package { 'apache2':
    ensure => present,
  }
}
and yet, after vagrant provision or vagrant reload, Apache is not installed; as far as I can tell, the installation process doesn't even start.
Here is the log from vagrant provision, whose messages look completely cryptic to me.
[default] Running provisioner: puppet...
Running Puppet with init.pp...
debug: Creating default schedules
debug: Puppet::Type::User::ProviderDirectoryservice: file /usr/bin/dscl does not exist
debug: Puppet::Type::User::ProviderUser_role_add: file rolemod does not exist
debug: Puppet::Type::User::ProviderLdap: true value when expecting false
debug: Puppet::Type::User::ProviderPw: file pw does not exist
debug: Failed to load library 'ldap' for feature 'ldap'
debug: /File[/var/lib/puppet/ssl/certificate_requests]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/state/graphs]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/ssl/public_keys]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/facts]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/private_keys]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/client_yaml]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/last_run_report.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/client_data]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/state/state.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/ssl/certs]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/lib]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl/private]: Autorequiring File[/var/lib/puppet/ssl]
debug: /File[/var/lib/puppet/state/last_run_summary.yaml]: Autorequiring File[/var/lib/puppet/state]
debug: /File[/var/lib/puppet/state]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/clientbucket]: Autorequiring File[/var/lib/puppet]
debug: /File[/var/lib/puppet/ssl]: Autorequiring File[/var/lib/puppet]
debug: Finishing transaction 70208664910260
debug: Loaded state in 0.00 seconds
debug: Loaded state in 0.00 seconds
info: Applying configuration version '1389652562'
debug: /Schedule[daily]: Skipping device resources because running on a host
debug: /Schedule[monthly]: Skipping device resources because running on a host
debug: /Schedule[hourly]: Skipping device resources because running on a host
debug: /Schedule[never]: Skipping device resources because running on a host
debug: /Schedule[weekly]: Skipping device resources because running on a host
debug: /Schedule[puppet]: Skipping device resources because running on a host
debug: Finishing transaction 70208665347780
debug: Storing state
debug: Stored state in 0.00 seconds
notice: Finished catalog run in 0.05 seconds
debug: Finishing transaction 70208665012580
debug: Received report to process from localhost.localdomain
debug: Processing report from localhost.localdomain with processor Puppet::Reports::Store

You've told Vagrant to look for manifests in puppet/manifests (relative to your Vagrant directory) and to configure the machine from whatever is in init.pp in that same directory. That is, Vagrant will apply whatever is in puppet/manifests/init.pp. It will not look at puppet/modules/apache/manifests/init.pp (at least, not at first).
Put something like the following in puppet/manifests/init.pp and it should install properly.
class {'apache':}
In addition to the apache module, make sure you have all of its dependencies (the stdlib and concat modules from Puppet Labs in this case) installed in your puppet/modules directory, too.
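For example, one way to fetch the module and its dependencies into that directory is to run something like the following from the project root on the host (a sketch; it assumes the puppet CLI is available on the host, and installing puppetlabs-apache should normally pull in stdlib and concat on its own):
puppet module install puppetlabs-apache --modulepath puppet/modules
puppet module install puppetlabs-stdlib --modulepath puppet/modules
puppet module install puppetlabs-concat --modulepath puppet/modules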

I went with a solution I don't fully like, as it overwrites any settings not provisioned by puppet. But hey, it gets the job done smoothly.
My Vagrantfile:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "base"
  #config.vm.box_url = "http://domain.com/path/to/above.box"
  config.vm.network :forwarded_port, guest: 80, host: 5656, auto_correct: true
  config.vm.provision :shell do |shell|
    shell.inline = "mkdir -p /etc/puppet/modules;
      puppet module install puppetlabs-concat --force --modulepath '/vagrant/puppet/modules'
      puppet module install puppetlabs-stdlib --force --modulepath '/vagrant/puppet/modules'
      puppet module install puppetlabs-apache --force --modulepath '/vagrant/puppet/modules'"
  end
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.module_path = "puppet/modules"
    puppet.manifest_file = "init.pp"
    puppet.options = "--verbose --debug"
  end
end
Note: the script may actually also work without --force --modulepath '/vagrant/puppet/modules'.
My puppet/manifests/init.pp
node default {
  class { 'apache': }
}
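With those files in place, re-running the provisioners should install the modules and then apply the apache class; roughly (run from the host in the project directory):
vagrant reload --provision
# or, if the VM is already running:
vagrant provision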
Thanks to https://stackoverflow.com/a/21105703/2066118 for pointing me in the right direction.

I'm not sure about Vagrant, but in plain Puppet you need to declare what should be installed on a node in the '/etc/puppet/manifests/site.pp' file.
So it will be like this.
node 'hosts.fqdn' {
  include apache2
}
For more information: http://docs.puppetlabs.com/puppet/2.7/reference/lang_node_definitions.html

Related

Phoenix with exq: How do I execute mix test without redis running

I use exq in my Phoenix 1.4.16 application to run some background jobs.
One of them can be as simple as this:
defmodule PeopleJob do
  def perform(request) do
    IO.puts("Hello from PeopleJob:\n#{inspect(request)}")
  end
end
It runs perfectly with Redis in the dev environment.
The problem is that when I push the code to a CI server that has no Redis, all the tests fail.
The test config is like this.
In config/test.exs:
config :exq, queue_adapter: Exq.Adapters.Queue.Mock
In test/test_helper.exs:
Exq.Mock.start_link(mode: :inline)
When I run "mix test" on a machine without redis running, it fails like this:
** (Mix) Could not start application exq: Exq.start(:normal, []) returned an error: shutdown: failed to start child: Exq.Manager.Server
** (EXIT) an exception was raised:
** (RuntimeError)
====================================================================================================
ERROR! Could not connect to Redis!
Configuration passed in: [host: "127.0.0.1", port: 6379, database: 0, password: nil]
Error: :error
Reason: {:badmatch, {:error, %Redix.ConnectionError{reason: :closed}}}
Make sure Redis is running, and your configuration matches Redis settings.
====================================================================================================
(exq) lib/exq/manager/server.ex:393: Exq.Manager.Server.check_redis_connection/1
(exq) lib/exq/manager/server.ex:173: Exq.Manager.Server.init/1
(stdlib) gen_server.erl:374: :gen_server.init_it/2
(stdlib) gen_server.erl:342: :gen_server.init_it/6
(stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Actually I tried all 3 modes (:redis, :fake, and :inline), but all of them fail to start mix test.
Question: can I run "mix test" on a machine that has no Redis?
The reason is that our company doesn't want to install Redis on the Travis CI machine.
I expected that using Exq's mock in the test environment would allow the tests to run without Redis, but that is not the case.
I figured it out.
In config/test.exs:
config :exq, queue_adapter: Exq.Adapters.Queue.Mock
config :exq, start_on_application: false
In test/test_helper.exs:
Exq.Mock.start_link(mode: :inline)
Adding config :exq, start_on_application: false to config/test.exs solved this problem.
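With that in place, a test can exercise the job synchronously through the mock adapter. A minimal sketch (the test module and arguments are made up for illustration, and it assumes the mock's :inline mode runs perform/1 immediately and mirrors the usual {:ok, jid} return):
defmodule PeopleJobTest do
  use ExUnit.Case, async: true

  test "PeopleJob runs inline without Redis" do
    # With queue_adapter set to Exq.Adapters.Queue.Mock and Exq.Mock started
    # in :inline mode, enqueueing executes PeopleJob.perform/1 synchronously,
    # so no Redis connection is attempted.
    assert {:ok, _jid} = Exq.enqueue(Exq, "default", PeopleJob, [%{"name" => "test"}])
  end
end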

Making Dockerized Flask server concurrent

I have a Flask server that I'm running on AWS Fargate. My task has 2 vCPUs and 8 GB of memory. My server is only able to respond to one request at a time. If I make 2 API requests at the same time, each of which takes 7 seconds, the first request takes 7 seconds to return and the second takes 14 seconds.
This is my Dockerfile (using this repo):
FROM tiangolo/uwsgi-nginx-flask:python3.7
COPY ./requirements.txt requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
RUN python3 -m spacy download en
RUN apt-get update
RUN apt-get install wkhtmltopdf -y
RUN apt-get install poppler-utils -y
RUN apt-get install xvfb -y
COPY ./ /app
I have the following config file:
[uwsgi]
module = main
callable = app
enable-threads = true
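For reference, the startup log below shows that this file is merged with the base image's /etc/uwsgi/uwsgi.ini, so I believe explicit worker/thread counts could also go here. A sketch (the numbers are illustrative, not what I'm currently running):
[uwsgi]
module = main
callable = app
enable-threads = true
; illustrative values only
processes = 4
threads = 2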
These are my logs when I start the server:
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2019-10-05 06:29:53,438 CRIT Supervisor running as root (no user in config file)
2019-10-05 06:29:53,438 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2019-10-05 06:29:53,446 INFO RPC interface 'supervisor' initialized
2019-10-05 06:29:53,446 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2019-10-05 06:29:53,446 INFO supervisord started with pid 1
2019-10-05 06:29:54,448 INFO spawned: 'nginx' with pid 9
2019-10-05 06:29:54,450 INFO spawned: 'uwsgi' with pid 10
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
;uWSGI instance configuration
[uwsgi]
cheaper = 2
processes = 16
ini = /app/uwsgi.ini
module = main
callable = app
enable-threads = true
ini = /etc/uwsgi/uwsgi.ini
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
hook-master-start = unix_signal:15 gracefully_kill_them_all
need-app = true
die-on-term = true
show-config = true
;end of configuration
*** Starting uWSGI 2.0.18 (64bit) on [Sat Oct 5 06:29:54 2019] ***
compiled with version: 6.3.0 20170516 on 09 August 2019 03:11:53
os: Linux-4.14.138-114.102.amzn2.x86_64 #1 SMP Thu Aug 15 15:29:58 UTC 2019
nodename: ip-10-0-1-217.ec2.internal
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.7.4 (default, Jul 13 2019, 14:20:24) [GCC 6.3.0 20170516]
Python main interpreter initialized at 0x55e1e2b181a0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
*** Operational MODE: preforking ***
2019-10-05 06:29:55,483 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-10-05 06:29:55,484 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

Filebeat not starting TCP server (input)

So I have configured Filebeat to accept input via TCP. This is my filebeat.yml file:
filebeat.inputs:
- type: tcp
  host: ["localhost:9000"]
  max_message_size: 20MiB
For some reason Filebeat does not start the TCP server at port 9000. I have verified this using Wireshark, which shows nothing at port 9000.
This is the output of running filebeat -e -d "*" in the terminal:
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:468 Home path: [/usr/local/Cellar/filebeat/6.2.4] Config path: [/usr/local/etc/filebeat] Data path: [/usr/local/var/lib/filebeat] Logs path: [/usr/local/var/log/filebeat]
2019-08-14T09:12:40.745-0600 DEBUG [beat] instance/beat.go:495 Beat metadata path: /usr/local/var/lib/filebeat/meta.json
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:475 Beat UUID: 764da0fd-ea93-4777-b1ea-63149be0d6b6
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.4
2019-08-14T09:12:40.745-0600 DEBUG [beat] instance/beat.go:230 Initializing output plugins
2019-08-14T09:12:40.745-0600 DEBUG [processors] processors/processor.go:49 Processors:
2019-08-14T09:12:40.745-0600 INFO pipeline/module.go:76 Beat name: Ad-MBP.domain
2019-08-14T09:12:40.745-0600 ERROR fileset/modules.go:95 Not loading modules. Module directory not found: /usr/local/Cellar/filebeat/6.2.4/module
2019-08-14T09:12:40.745-0600 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2019-08-14T09:12:40.745-0600 INFO instance/beat.go:301 filebeat start running.
2019-08-14T09:12:40.745-0600 DEBUG [registrar] registrar/registrar.go:90 Registry file set to: /usr/local/var/lib/filebeat/registry
2019-08-14T09:12:40.746-0600 INFO registrar/registrar.go:110 Loading registrar data from /usr/local/var/lib/filebeat/registry
2019-08-14T09:12:40.746-0600 INFO registrar/registrar.go:121 States Loaded from registrar: 0
2019-08-14T09:12:40.746-0600 WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-08-14T09:12:40.746-0600 INFO crawler/crawler.go:48 Loading Prospectors: 1
2019-08-14T09:12:40.746-0600 DEBUG [registrar] registrar/registrar.go:152 Starting Registrar
2019-08-14T09:12:40.746-0600 DEBUG [cfgfile] cfgfile/reload.go:95 Checking module configs from: /usr/local/etc/filebeat/modules.d/*.yml
2019-08-14T09:12:40.746-0600 DEBUG [cfgfile] cfgfile/reload.go:109 Number of module configs found: 0
2019-08-14T09:12:40.746-0600 INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 0
2019-08-14T09:12:40.746-0600 INFO cfgfile/reload.go:127 Config reloader started
2019-08-14T09:12:40.748-0600 DEBUG [cfgfile] cfgfile/reload.go:151 Scan for new config files
2019-08-14T09:12:40.748-0600 DEBUG [cfgfile] cfgfile/reload.go:170 Number of module configs found: 0
2019-08-14T09:12:40.748-0600 INFO cfgfile/reload.go:219 Loading of config files completed.
I am not sure what I am doing wrong.
I believe filebeat inputs are only available from Filebeat 6.3+; anything older used filebeat prospectors.
Here is the 6.3 TCP input documentation; nothing is available for 6.2 or older, since those versions use prospectors:
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-input-tcp.html
Your logs show that you are on Filebeat version 6.2.4; could you try your configuration with 6.3+?
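For reference, on 6.3+ the input would be declared roughly like this (a sketch based on the 6.3 docs, where host is a single string rather than a list):
filebeat.inputs:
- type: tcp
  host: "localhost:9000"
  max_message_size: 20MiB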

SaltStack Tomcat Deployment: 'tomcat.war_deployed' error

I'm new to SaltStack and I'm working on a build to deploy Tomcat and Tomcat WAR files to Ubuntu 16.04 systems. I hadn't run into any issues until my first attempt at deploying a WAR file with tomcat.war_deployed. If anyone with more experience with SaltStack could provide me any feedback, I'd greatly appreciate it.
/srv/pillar/top.sls
base:
  '*':
    - tomcat-manager
/srv/pillar/tomcat-manager.sls
tomcat-manager:
  user: 'myuser'
  passwd: 'mypassword'
Output of salt '*' pillar.test
tomcat-manager:
    ----------
    passwd:
        mypassword
    user:
        myuser
mystate.sls
# Install tomcat8 packages.
install_tomcat:
  pkg.installed:
    - pkgs:
      - tomcat8
      - tomcat8-admin

# Install postgresql packages.
install_postgresql:
  pkg.installed:
    - name: postgresql-9.5

# Start tomcat service.
start_service_tomcat:
  service.running:
    - name: tomcat8
    - enable: True
    - require:
      - pkg: install_tomcat
    - watch:
      - file: sync tomcat-users.xml

# Tomcat deploy war files.
deploy_war:
  tomcat.war_deployed:
    - name: /mywar
    - war: salt://files/tomcat/war/mywar.war
    - require:
      - service: start_service_tomcat

# Start postgresql service.
start_service_postgresql:
  service.running:
    - name: postgresql
    - enable: True
    - require:
      - pkg: install_postgresql
    - watch:
      - file: sync pg_hba.conf
      - file: sync postgresql.conf
Output of salt '*' state.sls mystate
----------
          ID: deploy_war
    Function: tomcat.war_deployed
        Name: /mywar
      Result: False
     Comment: Failed to create HTTP request
     Started: 15:54:02.314254
    Duration: 1980.229 ms
     Changes:
[...]
Failed: 1
-------------
Total states run: 12
Total run time: 2.671 s
ERROR: Minions returned with non-zero exit code
Updates
myminion:8080/manager is accessible on my minion(s).
I haven't been able to find out whether SaltStack officially supports Tomcat 8, so I tested this with Tomcat 7 and it gives me the same issue.
When I run salt '*' tomcat.version on the minions:
myminion:
    Apache Tomcat/7.0.68 (Ubuntu)
myminion2:
    Apache Tomcat/8.0.32 (Ubuntu)
Output of salt '*' tomcat.status:
myminion:
    False
myminion1:
    False
Output of salt '*' tomcat.serverinfo:
myminion:
    ----------
    error:
        Failed to create HTTP request
myminion1:
    ----------
    error:
        Failed to create HTTP request
I haven't had any luck with search engines for "Failed to create HTTP request" yet.
Output of sudo salt-call -l debug tomcat.serverinfo:
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[DEBUG ] Using cached minion ID from /etc/salt/minion_id: myminion
[DEBUG ] Configuration file path: /etc/salt/minion
[WARNING ] Insecure logging configuration detected! Sensitive data may be logged.
[DEBUG ] Reading configuration from /etc/salt/minion
[DEBUG ] Including configuration from '/etc/salt/minion.d/_schedule.conf'
[DEBUG ] Reading configuration from /etc/salt/minion.d/_schedule.conf
[WARNING ] Unable to find IPv6 record for "myminion" causing a 10 second timeout when rendering grains. Set the dns or /etc/hosts for IPv6 to clear this.
[DEBUG ] Please install 'virt-what' to improve results of the 'virtual' grain.
[DEBUG ] Connecting to master. Attempt 1 of 1
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')
[DEBUG ] Generated random reconnect delay between '1000ms' and '11000ms' (7330)
[DEBUG ] Setting zmq_reconnect_ivl to '7330ms'
[DEBUG ] Setting zmq_reconnect_ivl_max to '11000ms'
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506', 'clear')
[DEBUG ] Decrypting the current master AES key
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] SaltEvent PUB socket URI: /var/run/salt/minion/minion_event_bbef5074cf_pub.ipc
[DEBUG ] SaltEvent PULL socket URI: /var/run/salt/minion/minion_event_bbef5074cf_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/minion/minion_event_bbef5074cf_pull.ipc
[DEBUG ] Sending event: tag = salt/auth/creds; data = {'_stamp': '2017-05-04T18:14:04.328838', 'creds': {'publish_port': 4505, 'aes': '######/#####/###############################################=', 'master_uri': 'tcp://mymaster:4506'}, 'key': ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')}
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] Determining pillar cache
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')
[DEBUG ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG ] LazyLoaded jinja.render
[DEBUG ] LazyLoaded yaml.render
[DEBUG ] LazyLoaded tomcat.serverinfo
[DEBUG ] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506', 'aes')
[DEBUG ] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'myminion', 'tcp://mymaster:4506')
[DEBUG ] LazyLoaded nested.output
local:
----------
error:
Failed to create HTTP request
Output of salt-call test.versions:
[WARNING ] Unable to find IPv6 record for "dt-rhettvm-01" causing a 10 second timeout when rendering grains. Set the dns or /etc/hosts for IPv6 to clear this.
local:
Salt Version:
Salt: 2016.11.4
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.4.2
docker-py: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.3
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.12 (default, Nov 19 2016, 06:48:10)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.2.0
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: Ubuntu 16.04 xenial
machine: x86_64
release: 4.4.0-62-generic
system: Linux
version: Ubuntu 16.04 xenia
What worked for me was restarting the Tomcat instance on the minion.
The Tomcat instance could still have been running with the old user parameters.
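For example, bouncing the service from the master and re-checking the manager API might look like this (a sketch; the tomcat8 service name comes from the state above and would differ for Tomcat 7):
salt '*' service.restart tomcat8
salt '*' tomcat.status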

Adding cluster: “Error creating cluster: Call to /cluster-configs timed out.”

(Continuing the discussion with the same title on the DataStax forum.) I was able to reproduce the issue where OpsCenter is unable to connect to a 2.0.1 cluster using "Use existing cluster", failing with the message "Error creating cluster: Call to /cluster-configs timed out." It is related to having "rpc_server_type: hsha" in cassandra.yaml.
I reproduced it as follows:
(1) Installed ubuntu 12.04 (x86-64 architecture) in qemu. Updated it to the latest version of all packages. Configured it with a static ip address (192.168.77.3). Qemu networking was set up so that the host machine and the qemu virtual machine can communicate.
(2) Downloaded Sun jre-7u45-linux-x64.tar.gz and installed it. Installed libjna-java. This was all done as per the DataStax installing-on-Debian/Ubuntu docs.
(3) Installed DataStax Cassandra 2.0.1 using the Debian package, as described in the DataStax docs.
(4) Made the following changes to cassandra.yaml:
seeds: "192.168.77.3"
listen_address: 192.168.77.3
rpc_address: 192.168.77.3
rpc_server_type: hsha
NB: To see the failure, it is essential to use hsha.
(5) Stopped the cassandra instance (Debian automatically starts it when installed). Note that the init script doesn't work for stopping cassandra (this is a new problem with cassandra 2.0), so I had to kill the process by hand. This is a trap: you may think you restarted cassandra and that it has taken your configuration changes into account, only it hasn't because you are still running the old instance.
(6) Cleared out instance data: sudo rm -fr /var/lib/cassandra/*
(7) Started a new cassandra instance. Checked that nodetool could connect to it from both the virtual machine (i.e. running locally) and from the host machine.
(8) Tried to add the cluster from opscenter-free running on the host (i.e. not running on the virtual machine). opscenter version 3.2.2, ubuntu 13.10. As no cluster had been added yet, I got the "Welcome to Datastax opscenter" dialog, with "Create New Cluster" or "Use Existing cluster". Chose "Use Existing Cluster". Added the ip address (192.168.77.3) of the qemu virtual machine instance. Clicked "Save cluster". This failed with "Error creating cluster: Call to /cluster-configs timed out".
The opscenter log:
2013-10-28 11:59:04+0100 [] INFO: Log opened.
2013-10-28 11:59:04+0100 [] INFO: twistd 10.2.0 (/usr/bin/python2.7 2.7.5) starting up.
2013-10-28 11:59:04+0100 [] INFO: reactor class: twisted.internet.selectreactor.SelectReactor.
2013-10-28 11:59:04+0100 [] INFO: set uid/gid 0/0
2013-10-28 11:59:04+0100 [] INFO: Logging level set to 'info'
2013-10-28 11:59:04+0100 [] INFO: OpsCenter version: 3.2.2
2013-10-28 11:59:04+0100 [] INFO: Compatible agent version: 3.2.2
2013-10-28 11:59:04+0100 [] INFO: No clusters are configured yet, checking to see if a config migration is needed
2013-10-28 11:59:04+0100 [] INFO: Main config does not appear to include a cluster configuration, skipping migration
2013-10-28 11:59:04+0100 [] INFO: No clusters are configured
2013-10-28 11:59:04+0100 [] INFO: HTTP BASIC authentication disabled
2013-10-28 11:59:04+0100 [] INFO: Starting webserver with ssl disabled.
2013-10-28 11:59:04+0100 [] INFO: SSL agent communication enabled
2013-10-28 11:59:04+0100 [] INFO: opscenterd.WebServer.OpsCenterdWebServer starting on 8888
2013-10-28 11:59:04+0100 [] INFO: Starting factory <opscenterd.WebServer.OpsCenterdWebServer instance at 0x2f2a6c8>
2013-10-28 11:59:04+0100 [] INFO: morbid.morbid.StompFactory starting on 61619
2013-10-28 11:59:04+0100 [] INFO: Starting factory <morbid.morbid.StompFactory instance at 0x3062320>
2013-10-28 11:59:04+0100 [] INFO: Configuring agent communication with ssl support enabled.
2013-10-28 11:59:04+0100 [] INFO: morbid.morbid.StompFactory starting on 61620
2013-10-28 11:59:04+0100 [] INFO: OS Version: Linux version 3.11.0-12-generic (buildd#allspice) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu7) ) #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013
2013-10-28 11:59:04+0100 [] INFO: CPU Info: ['2401.000', '1200.000', '1200.000', '2401.000', '1200.000', '1200.000', '1200.000', '2401.000']
2013-10-28 11:59:04+0100 [] INFO: Mem Info: 15979MB
2013-10-28 11:59:04+0100 [] INFO: Package Manager: Unknown
2013-10-28 12:03:02+0100 [] INFO: Starting factory <opscenterd.ThriftService.NoReconnectCassandraClientFactory instance at 0x31cd7e8>
2013-10-28 12:03:02+0100 [] INFO: Adding new cluster 'Test_Cluster': {u'jmx': {u'username': u'', u'password': u'', u'port': u'7199'}, 'kerberos_client_principals': {}, 'kerberos': {}, u'agents': {}, 'kerberos_hostnames': {}, 'kerberos_services': {}, u'cassandra': {u'username': u'', u'seed_hosts': u'192.168.77.3', u'api_port': u'9160', u'password': u''}}
2013-10-28 12:03:02+0100 [] INFO: Starting new cluster services for Test_Cluster
2013-10-28 12:03:02+0100 [Test_Cluster] INFO: Starting services for cluster Test_Cluster
2013-10-28 12:03:02+0100 [] INFO: Metric caching enabled with 50 points and 1000 metrics cached
2013-10-28 12:03:02+0100 [] INFO: Starting PushService
2013-10-28 12:03:02+0100 [Test_Cluster] INFO: Starting CassandraCluster service
2013-10-28 12:03:02+0100 [Test_Cluster] INFO: agent_config items: {'cassandra_log_location': '/var/log/cassandra/system.log', 'thrift_port': 9160, 'thrift_ssl_truststore': None, 'rollups300_ttl': 2419200, 'rollups86400_ttl': -1, 'jmx_port': 7199, 'metrics_ignored_solr_cores': '', 'api_port': '61621', 'metrics_enabled': 1, 'thrift_ssl_truststore_type': 'JKS', 'kerberos_use_ticket_cache': True, 'kerberos_renew_tgt': True, 'rollups60_ttl': 604800, 'cassandra_install_location': '', 'rollups7200_ttl': 31536000, 'kerberos_debug': False, 'storage_keyspace': 'OpsCenter', 'ec2_metadata_api_host': '169.254.169.254', 'provisioning': 0, 'kerberos_use_keytab': True, 'metrics_ignored_column_families': '', 'thrift_ssl_truststore_password': None, 'metrics_ignored_keyspaces': 'system, system_traces, system_auth, dse_auth, OpsCenter'}
2013-10-28 12:03:02+0100 [] INFO: Stopping factory <opscenterd.ThriftService.NoReconnectCassandraClientFactory instance at 0x31cd7e8>
This is due to a bug in Cassandra unfortunately:
https://issues.apache.org/jira/browse/CASSANDRA-6373
The workaround at the moment is to use the sync thrift server. If a workaround is implemented in OpsCenter, I will update my response.
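In practice that means reverting the step (4) change in cassandra.yaml until the Cassandra bug is fixed, roughly like this (then restart the node):
rpc_server_type: sync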