I am running collectd 5.4.2.788.gf87af5a; I have also tried 5.4.1.
I am seeing the following in the logs:
May 8 00:50:01 ip_172_1_1_1 collectd[19559]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status 2 (ENOENT). Most likely this means you didn't load any write plugins.
I have write_http writing to localhost:9103 and netcat listening on that port:
nc -l 9103
My collectd.conf:
LoadPlugin write_http
<Plugin write_http>
<URL "http://127.0.0.1:9103/collectd-post">
Format "JSON"
StoreRates false
</URL>
</Plugin>
The message goes away if I enable rrdtool, but regardless of whether rrdtool is enabled, nothing is printed by netcat, so write_http isn't sending any data to that socket.
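(For reference, when write_http is delivering data, netcat should print an HTTP POST whose body is collectd's JSON format, something like the following; the request below is illustrative, with placeholder plugin and field values.)
POST /collectd-post HTTP/1.1
Content-Type: application/json

[{"values":[42],"dstypes":["gauge"],"dsnames":["value"],"time":1431043801,"interval":10,"host":"ip_172_1_1_1","plugin":"my_plugin","type":"gauge"}]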
UPDATE 1 - 2015.05.08
write_http is shipping stats from the cpu plugin but not from my own Python plugin, even though the Python plugin's values do reach rrdtool. Any ideas?
UPDATE 2 - 2015.05.08
Once I verified that write_http was working, just not with my Python plugin, I found the culprit here: https://github.com/collectd/collectd/issues/716. Using the metadata workaround resolved the issue.
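For anyone hitting the same thing, here is a minimal sketch of the workaround from that issue (the plugin name my_plugin and the gauge value are placeholders):
import collectd

def read(data=None):
    vl = collectd.Values(type='gauge', plugin='my_plugin')
    # Workaround from issue #716: dispatch with an explicit, non-empty
    # meta dict so write_http receives initialized metadata.
    vl.dispatch(values=[42], meta={'0': True})

collectd.register_read(read)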
I've installed a fresh version of DSE 6.8 for dev purposes. After installing a cluster with one node (Cassandra + Solr), I want to enable Graph, but the job keeps failing with this error:
Graph is enabled and should have native-transport-address set to 0.0.0.0. name="node1" ssh-management-address="IP" rack="rack1"
I changed cassandra.yaml from:
native_transport_address: IP
to:
native_transport_address: localhost
The job keeps failing. Any ideas?
As the error says, you need to set native_transport_address to 0.0.0.0 in the node definition dialog, and native_transport_broadcast_address to the node's actual IP address.
This change should be made in the LCM UI as described in the documentation, followed by a reconfigure or reinstall. You shouldn't change cassandra.yaml directly; it's generated by LCM.
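For reference, after the reconfigure the generated cassandra.yaml should end up with settings like these (the broadcast address below is a hypothetical placeholder for the node's real IP):
native_transport_address: 0.0.0.0
native_transport_broadcast_address: 10.0.0.5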
Is it possible for queues to act like topics in RabbitMQ with the AMQP 1.0 plugin?
In this documentation (19th slide) I saw queues acting like topics, with non-destructive links. I can't tell whether this is possible, and if so, where and how to configure RabbitMQ to get this behaviour.
The information provided in the documentation is enough to add plugins.
In case you are looking to add the AMQP 1.0 plugin to the existing RabbitMQ Docker image:
Dockerfile:
FROM rabbitmq:3-management
# Write the plugin list where RabbitMQ reads it at startup
RUN echo '[rabbitmq_management,rabbitmq_management_visualiser,rabbitmq_amqp1_0].' > /etc/rabbitmq/enabled_plugins
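To build and run it (the image and container names below are arbitrary):
docker build -t rabbitmq-amqp10 .
docker run -d --name rabbitmq-amqp10 -p 5672:5672 -p 15672:15672 rabbitmq-amqp10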
Hope this helps.
You can activate the AMQP 1.0 plugin using the rabbitmq-plugins enable rabbitmq_amqp1_0 command.
Create a config file named "rabbitmq.conf" under $RABBITMQ_HOME/etc/rabbitmq/ and add the following content to it:
amqp1_0.default_user = guest
amqp1_0.default_vhost = /
amqp1_0.protocol_strict_mode = false
I followed these steps to configure my RabbitMQ to work with AMQP 1.0.
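After restarting the broker, you can confirm the plugin is active with a quick check against the local node:
rabbitmq-plugins list -e rabbitmq_amqp1_0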
Documentation for reference :
https://github.com/rabbitmq/rabbitmq-amqp1.0
https://www.rabbitmq.com/configure.html#config-file
I hope this helps.
I'm trying to use Celery. I installed RabbitMQ using the command from the Celery tutorial:
sudo apt-get install rabbitmq-server
Everything worked well while I wrote my code in one file and ran it to test functionality. But when I added my code to Django views and made concurrent requests to them, I got this kind of exception:
File "/home/kinmanz/PycharmProjects/GitFace/myvenv/lib/python3.5/site-packages/amqp/connection.py", line 464, in drain_events
return self.blocking_read(timeout)
File "/home/kinmanz/PycharmProjects/GitFace/myvenv/lib/python3.5/site-packages/amqp/connection.py", line 468, in blocking_read
frame = self.transport.read_frame()
File "/home/kinmanz/PycharmProjects/GitFace/myvenv/lib/python3.5/site-packages/amqp/transport.py", line 251, in read_frame
'Received {0:#04x} while expecting 0xce'.format(ch))
amqp.exceptions.UnexpectedFrame: Received 0x00 while expecting 0xce
I think the problem may be the concurrency of the requests, and that I should somehow make the queue concurrency-safe.
I use Python 3.5, Celery 4.0.0, RabbitMQ 3.5.7
The problem is actually in amqplib; see the answer below.
For anyone who has the same problem, I will list the possible solutions I have managed to find. If you know a better solution, please add your answer or comment on mine.
If you are using Python 2.x, see this issue: https://github.com/celery/celery/issues/922
The problem is actually in amqplib; if you switch to librabbitmq, everything should work. It's quite easy to do, see:
Framing Errors in Celery 3.0.1
But if you are using Python 3.x you can't solve the problem that way, because there is no Python 3-compatible librabbitmq available (see this issue: https://github.com/celery/celery/issues/2066). In that case you can change your result backend to Redis, for example:
1) Install redis server:
$ sudo aptitude install redis-server
2) Change your app configuration
app = Celery('tasks', backend='redis://localhost', broker='pyamqp://')
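A minimal self-contained sketch with this configuration (the module and task names are arbitrary):
# tasks.py -- assumes Redis and RabbitMQ are running locally
from celery import Celery

app = Celery('tasks', backend='redis://localhost', broker='pyamqp://')

@app.task
def add(x, y):
    return x + y
Start a worker with celery -A tasks worker; calling add.delay(2, 2).get() should then return 4, with the result going through Redis instead of the AMQP backend.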
Some useful links about installing Redis: Setting up an asynchronous task queue for django using celery redis and Celery-redis quick guide
Also, for Python 3 you can try running the Celery worker under Python 2.7 while your app runs on Python 3; in that case, don't forget to install librabbitmq instead of amqplib. (This approach seems inconvenient.)
I am new to POX and I don't know how to run its components. Currently I'm stuck with host_tracker.py, taken from https://github.com/CPqD/RouteFlow/blob/master/pox/pox/host_tracker/host_tracker.py
I've tried something like this:
./debug-pox.py host_tracker
and got this output:
POX 0.3.0 (dart) / Copyright 2011-2014 James McCauley, et al.
DEBUG:core:POX 0.3.0 (dart) going up...
DEBUG:core:Running on CPython (2.7.6/Mar 22 2014 22:59:56)
DEBUG:core:Platform is Linux-3.13.0-53-generic-x86_64-with-Ubuntu-14.04-trusty
DEBUG:core:host_tracker still waiting for: openflow
WARNING:core:Still waiting on 1 component(s)
INFO:core:POX 0.3.0 (dart) is up.
Not sure what it means :( Kindly tell me how to run components in pox.
Thanks :)
Assuming you have Mininet up and running, you should use host_tracker along with the openflow.discovery module. In addition, you should load an example controller (stock component) included in your POX version.
First, start a sample Mininet:
sudo mn --controller remote
Then run POX like this:
python pox.py forwarding.l2_pairs host_tracker openflow.discovery
When everything is up and running, go to the terminal where you launched Mininet and issue a
pingall
and monitor the terminal in which you ran POX to observe host_tracker's output.
forwarding.l2_pairs is a sample controller (stock component) that handles the network and flow modifications; host_tracker is the host-tracking module, and openflow.discovery is POX's discovery module.
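If you want to consume the tracker's output from your own component, here is a minimal sketch assuming host_tracker's standard HostEvent API (the logging and wiring are illustrative); save it as a module and add its name to the pox.py command line above:
from pox.core import core

log = core.getLogger()

def launch():
    def start():
        # host_tracker raises a HostEvent when a host joins, moves, or times out
        core.host_tracker.addListenerByName(
            "HostEvent", lambda event: log.info("host event: %s", event.entry))
    # Defer registration until the host_tracker component is up
    core.call_when_ready(start, "host_tracker")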
To find more stock components go to https://openflow.stanford.edu/display/ONL/POX+Wiki#POXWiki-StockComponents
To read more about host_tracker https://openflow.stanford.edu/display/ONL/POX+Wiki#POXWiki-host_tracker
hbase dependency: /usr/local/hadoop/hbase-0.98.14/lib/hbase-common-0.98.14-hadoop2.jar
KYLIN_JVM_SETTINGS is -Xms1024M -Xmx4096M -XX:MaxPermSize=128M
KYLIN_DEBUG_SETTINGS is not set, will not enable remote debuging
KYLIN_LD_LIBRARY_SETTINGS is not set, lzo compression at MR and hbase might not work
A new Kylin instance is started by sreeharsha, stop it using "kylin.sh stop"
Please visit http://:7070/kylin to play with the cubes! (Useranme: ADMIN, Password: KYLIN)
You can check the log at ./bin/../tomcat/logs/kylin.log
sreeharsha#localhost:/usr/local/hadoop/kylin-1.0$
Got it. I just changed the property export HBASE_MANAGES_ZK=true in hbase-env.sh and restarted all the Hadoop daemons; it's working fine now.
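For reference, the relevant line in conf/hbase-env.sh:
# Let HBase manage its own ZooKeeper instance
export HBASE_MANAGES_ZK=true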