Connection problem with Clickhouse and RabbitMQ - rabbitmq

I am a newbie to ClickHouse and RabbitMQ. I am trying to load data from RabbitMQ into ClickHouse with the script below, but it doesn't work.
CREATE TABLE Station (
    Station varchar(2000)
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = '<IP>:5672',
    rabbitmq_exchange_name = 'Clickhouse',
    rabbitmq_exchange_type = 'direct',
    rabbitmq_routing_key_list = 'Station',
    rabbitmq_format = 'CSV',
    rabbitmq_num_consumers = 1;
The following error message is returned:
SQL Error [115]: ClickHouse exception, code: 115, host: <IP>, port: 8123; Code: 115, e.displayText() = DB::Exception: Unknown setting rabbitmq_username: for storage RabbitMQ (version 21.4.3.21 (official build))
Any suggestion for setting the rabbitmq_username?

The RabbitMQ credentials should be defined in a server config file:
Open the existing custom config file rabbitmq.xml, or create a new one:
sudo nano /etc/clickhouse-server/config.d/rabbitmq.xml
Add this configuration and save it:
<yandex>
    <rabbitmq>
        <username>your_rabbitmq_username</username>
        <password>your_rabbitmq_password</password>
    </rabbitmq>
</yandex>
Restart the ClickHouse service:
sudo service clickhouse-server restart
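After the restart, the CREATE TABLE statement above should succeed. As a quick end-to-end check, here is a minimal sketch using the pika Python client that publishes one CSV row to the exchange and routing key from the question; the host, credentials, and payload are placeholders, and the 'Clickhouse' exchange is assumed to already exist (created either by the table engine or manually):
import pika

# Placeholder host and credentials; use your own values.
credentials = pika.PlainCredentials('your_rabbitmq_username', 'your_rabbitmq_password')
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='<IP>', port=5672, credentials=credentials))
channel = connection.channel()

# Exchange and routing key must match the table's SETTINGS.
channel.basic_publish(exchange='Clickhouse',
                      routing_key='Station',
                      body='"Central Station"')  # one CSV row for the Station column
connection.close()
If the row then shows up in SELECT * FROM Station, the credentials and the exchange wiring are correct.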

Related

MongooseIM mod_event_pusher RabbitMQ

I am trying to understand the MongooseIM configuration file (not easy, in my opinion). I have spent two days trying to configure mod_event_pusher with RabbitMQ, but it is not working.
This is my config:
[auth]
methods = ["http"]
password.format = "plain"
sasl_mechanisms = ["plain"]
[auth.http]
[outgoing_pools.http.auth.connection]
host = "https://---------------"
[outgoing_pools.rabbit.event_pusher.connection]
amqp_host = "---------damqp.com"
amqp_port = 1883
amqp_username = "---------"
amqp_password = "eld_8NZ_________DY8x"
But when I execute ./bin/mongooseimctl live I get an error like:
Could not read the TOML configuration file
If someone has an example, it would be great.
The provided configuration file is missing the general section. This section is mandatory because it contains the list of hosts that the server handles and the default_server_domain; see the documentation. A minimal sketch is shown below.
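Such a section could look like this (example.com is a placeholder; use your own XMPP domain):
[general]
  hosts = ["example.com"]
  default_server_domain = "example.com"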

Connect App Engine to Google cloud SQL fails

I'm following this guide
I'm filling the config like this:
val datasourceConfig = HikariConfig().apply {
    jdbcUrl = "jdbc:mysql:///$DB_NAME"
    username = DB_PASS
    password = DB_USER
    mapOf(
        "cloudSqlInstance" to CLOUD_SQL_CONNECTION_NAME,
        "socketFactory" to "com.google.cloud.sql.mysql.SocketFactory",
        "ipTypes" to "PUBLIC,PRIVATE",
    ).forEach {
        addDataSourceProperty(
            it.key,
            it.value
        )
    }
}
output of the gcloud sql instances describe project-name:
backendType: SECOND_GEN
connectionName: project-name:europe-west1:project-name-db
databaseVersion: MYSQL_5_7
failoverReplica:
  available: true
gceZone: europe-west1-d
instanceType: CLOUD_SQL_INSTANCE
ipAddresses:
- ipAddress: *.*.*.*
  type: PRIMARY
kind: sql#instance
name: project-name-db
project: project-name
region: europe-west1
from which I'm filling my env variables:
DB_NAME=project-name-db
CLOUD_SQL_CONNECTION_NAME=project-name:europe-west1:project-name-db
On the deployed app, the line val dataSource = HikariDataSource(datasourceConfig) crashes with the following exception:
com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Cannot connect to MySQL server on localhost:3,306.
Make sure that there is a MySQL server running on the machine/port you are trying to connect to and that the machine this software is running on is able to connect to this host/port (i.e. not firewalled). Also make sure that the server has not been started with the --skip-networking flag.
Update: I've tried adding google between the second and third slashes ("jdbc:mysql://google/$DB_NAME"), according to this answer; now I get:
Cannot connect to MySQL server on google:3,306.
I was missing the following dependency:
implementation("com.google.cloud.sql:mysql-socket-factory-connector-j-8:1.2.2")
More info here.
Also, DB_NAME is not the instance name from the gcloud sql instances describe output, but the name of a database that has to be created in Console -> Project -> SQL -> Databases.
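For example, assuming a database named my_database was created in the Cloud SQL console (a hypothetical name), the environment variables would be:
DB_NAME=my_database
CLOUD_SQL_CONNECTION_NAME=project-name:europe-west1:project-name-db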

Airflow Adaptive Server connection failed

I want to connect Airflow to Microsoft SQL Server. I configured my connection under the 'Connections' tab in the 'Admin' menu, as described in the following link:
http://airflow.apache.org/howto/manage-connections.html
But when I run my DAG, the task related to SQL Server immediately fails with the following error:
[2019-03-28 16:16:07,439] {models.py:1788} ERROR - (18456, "Login failed for user 'XXXX'.DB-Lib error message 20018, severity 14:\nGeneral SQL Server error: Check messages from the SQL Server\nDB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (***.***.***.28:1433)\n")
My DAG code for the Microsoft SQL connection is the following:
sql_command = """
select * from [sys].[tables]
"""
t3 = MsSqlOperator(task_id = 'run_test_proc',
                   mssql_conn_id = 'FIConnection',
                   sql = sql_command,
                   dag = dag)
I verified the IP address, port number, and similar configuration details by establishing a connection through the pymssql library from my local computer. The test code is the following:
import pandas as pd
import pymssql

with pymssql.connect(server="***.***.***.28:1433",
                     user="XXXX",
                     password="XXXXXX") as conn:
    df = pd.read_sql("SELECT * FROM [sys].[tables]", conn)
    print(df)
Could you please share whether you have experienced this issue?
By the way, I am running Ubuntu 16.04 LTS in VirtualBox.
I had the same problem because freetds-dev was missing on Linux:
apt-get install freetds-dev

Celery tasks from different applications in different log files

I'm trying to configure Celery on my FreeBSD server, and I'm running into some issues with the log files.
My configuration:
FreeBSD server
2 Django applications: app1 and app2
Celery daemonized, with Redis as the broker
Each application has its own Celery tasks
My Celery config file:
In /etc/default/celeryd_app1 I have:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2
Django settings file with Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both settings files use the same Redis port.
My issue:
When I execute a Celery task for app1, I find logs from this task in the app2 log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config file? Do I have to set a different Redis port? If yes, how can I do that?
Thank you very much
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just use a different database for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
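A minimal sketch of how the two Django settings files could differ (splitting the result backend the same way is an extra assumption, not something the error above strictly requires):
# app1 settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

# app2 settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
With separate databases, each worker only sees the tasks of its own application, so the "Received unregistered task" error should disappear.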

Need help identifying the host name

I'm very new to RabbitMQ. I installed rabbitmq-server on one EC2 instance and want to create a consumer on another EC2 instance.
But I'm getting this error:
socket.gaierror: [Errno -2] Name or service not known
That's the node status:
ubuntu#ip-10-147-xxx-xxx:~$ sudo rabbitmq-server restart
ERROR: node with name "rabbit" already running on "ip-10-147-xxx-xxx"
DIAGNOSTICS
===========
nodes in question: ['rabbit#ip-10-147-xxx-xxx']
hosts, their running nodes and ports:
- ip-10-147-xxx-xxx: [{rabbit,46074},{rabbitmqprelaunch4603,51638}]
current node details:
- node name: 'rabbitmqprelaunch4603#ip-10-147-xxx-xxx'
- home dir: /var/lib/rabbitmq
- cookie hash: Gsnt2qHd7wWDEOAOFby=
And that's the consumer code:
import pika
cred = pika.PlainCredentials('guest', 'guest')
conn_params = pika.ConnectionParameters('10-147-xxx-xxx', credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)
conn_broker = pika.BlockingConnection(conn_params)
channel = conn_broker.channel()
channel.exchange_declare(exchange='hello-exchange', type='direct', passive=False, durable=True, auto_delete=False)
channel.queue_declare(queue='hello-queue')
channel.queue_bind(queue='hello-queue', exchange='hello-exchange', routing_key='hola')
def msg_consumer(channel, method, header, body):
    channel.basic_ack(delivery_tag=method.delivery_tag)
    if body == 'quit':
        channel.basic_cancel(consumer_tag='hello-consumer')
        channel.stop_consuming()
    else:
        print body
    return
channel.basic_consume(msg_consumer, queue='hello-queue', consumer_tag='hello-consumer')
channel.start_consuming()
You should check that the security group allows traffic on the RabbitMQ port. The socket.gaierror means the host name '10-147-xxx-xxx' cannot be resolved; use the broker's actual IP address or a resolvable host name. Also, if you are not using RabbitMQ's default port (5672), it should be specified in your connection parameters.
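For example, a minimal sketch of the corrected connection setup, assuming the broker listens on the default port and is reachable at the instance's private IP (10.147.xxx.xxx is a placeholder derived from the question):
import pika

cred = pika.PlainCredentials('guest', 'guest')
# Use a resolvable host name or IP address and pass the port explicitly.
conn_params = pika.ConnectionParameters(host='10.147.xxx.xxx', port=5672, credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)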