DataStax Cassandra cpp_driver hangs when connecting to node

I set up ScyllaDB on my Debian 9.6 machine. When I run cqlsh I can connect to it and create tables, run queries, etc.
Now I tried to write a simple program in C++ using the DataStax driver, and it can't connect: it always blocks when it tries to connect.
The scylla package I installed is:
scylla | 3.0.11-0.20191126.3c91bad0d-1~stretch
cpp_driver is the current master from GitHub: https://github.com/datastax/cpp-driver
Now I tried to run the examples/simple project included with the driver, so I assume it should work, but it shows the same problem. I don't get any errors; it just blocks:
CassCluster* cluster = cass_cluster_new();
CassSession* session = cass_session_new();
const char* hosts = "127.0.0.1";
cass_cluster_set_contact_points(cluster, hosts);
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
CassFuture* connect_future = cass_session_connect(session, cluster);
// here it blocks now forever...
CassError er = cass_future_error_code(connect_future);
I also tried to run it on Ubuntu 16.04, but it shows the same problem. Since connecting with cqlsh works, I don't think it is a configuration problem, but rather something with the cpp_driver.
I also traced the TCP connection, and I can see that the cpp_driver talks to the server; the exchange looks similar to the cqlsh conversation.
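For what it's worth, here is a minimal sketch of the same connect attempt with a bounded wait and explicit error reporting, assuming the standard cassandra.h API. It does not fix the underlying hang, but it surfaces a timeout instead of blocking forever:

#include <cassandra.h>
#include <cstdio>

int main() {
  CassCluster* cluster = cass_cluster_new();
  CassSession* session = cass_session_new();

  cass_cluster_set_contact_points(cluster, "127.0.0.1");
  cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);

  // Bound the connection attempt itself (milliseconds); the default is 5000.
  cass_cluster_set_connect_timeout(cluster, 5000);

  CassFuture* connect_future = cass_session_connect(session, cluster);

  // Wait at most 10 seconds for the future (the argument is in microseconds).
  if (cass_future_wait_timed(connect_future, 10ULL * 1000 * 1000)) {
    CassError rc = cass_future_error_code(connect_future);
    if (rc == CASS_OK) {
      std::printf("Connected\n");
    } else {
      const char* message;
      size_t message_length;
      cass_future_error_message(connect_future, &message, &message_length);
      std::fprintf(stderr, "Connect failed: %s (%.*s)\n",
                   cass_error_desc(rc), (int)message_length, message);
    }
  } else {
    std::fprintf(stderr, "Connect still pending after 10 s, giving up\n");
  }

  cass_future_free(connect_future);
  cass_session_free(session);
  cass_cluster_free(cluster);
  return 0;
}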

I finally found the solution for this issue. We were using cpp_driver 2.15.1, which apparently changed its event handling according to the release notes. When I downgraded to 2.15.0 the problem was gone and the connection could be established successfully.
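For completeness, a rough sketch of pinning the older release when building the driver from source, assuming the repository tags releases as plain version numbers (e.g. 2.15.0) and the usual CMake out-of-source build:

git clone https://github.com/datastax/cpp-driver.git
cd cpp-driver
git checkout 2.15.0        # the release that worked for us
mkdir build && cd build
cmake .. && make
sudo make install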

Related

Web HUE is not getting loaded though HUE is working on port 8000

I have installed Hue on a Linux machine, which is an instance on Azure. I have made all the required changes in Ambari and the hue.ini conf file. When I run the supervisor job, it runs fine:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
hue 83933 sshuser 3u IPv4 15707246 0t0 TCP *:8000 (LISTEN)
But when I try to access the web Hue, no page loads; it shows "refused to connect".
I tried deleting caches and setting it up again.
I am using Hue version 4.7 and I don't find any issues in the error.log file. Yet I don't see any data in the access.log file. Could you please help me?
Do you have
http_host=0.0.0.0
in the hue.ini?
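If it helps, here is a minimal sketch of that fragment in context, assuming the stock hue.ini layout (the [desktop] section and port 8000 are the defaults; adjust to your install):

[desktop]
  # Bind to all interfaces so the UI is reachable from outside the host
  http_host=0.0.0.0
  http_port=8000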
#Ruthikajawar here is a working hue.ini for Ambari:
https://github.com/steven-dfheinz/HDP3-Hue-Service/blob/Hue.4.6.0/configuration/live.hue.ini
I have noticed that sometimes, after initial install, it takes 1 or 2 restarts to get the WEBUI to work. I have also noticed sometimes, after a restart, it takes quite a few moments before the WEBUI starts to respond.
Give it some time after the restart and check the WEBUI. If you still are not getting it to answer, check /var/log/hue/error.log, as it should be very specific about the errors causing the WEBUI to fail on startup.

Connect via ssh to CF - Error

I need to debug my application; we are using version 2.65 (Diego).
I used the following wiki:
http://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html
When running cf ssh myapp via the CLI, nothing happens. What should I do in order
1. to see the container FS?
2. to be able to debug it?
The application was deployed successfully to CF.
I'm using a Node.js app.
All other commands are working well.
When I run the command cf ssh myapp I get this error after two minutes:
FAILED
Error opening SSH connection: dial tcp 52.23.201.1:2277: getsockopt: operation timed out
It sounds like a platform issue with the non-standard SSH port.
You can find more manual SSH-access steps and troubleshooting tips at https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html
If you believe it is an instance issue, you can download a copy of the droplet/filesystem using the API; more at https://apidocs.cloudfoundry.org/213/apps/downloads_the_staged_droplet_for_an_app.html
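Before treating it purely as a platform issue, it may also be worth confirming that SSH is actually enabled for the app and the space. A quick check with the cf CLI (v6 commands; my-space is a placeholder for your space name) could look like:

cf ssh-enabled myapp            # is SSH switched on for the app?
cf space-ssh-allowed my-space   # is SSH allowed in the space?
cf enable-ssh myapp             # enable it if needed
cf allow-space-ssh my-space
CF_TRACE=true cf ssh myapp      # retry with verbose output to see where it stalls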

Cannot POST api/system/sessions of Graylog2 on CentOS 7

I have a working installation of Graylog 2.1 on Debian 8, but I had to install Graylog on CentOS 7 because my datacenter uses this distribution, and I want to have the same environment to avoid problems when I need to request changes in production.
I followed the Graylog guideline for CentOS 7 available at http://docs.graylog.org/en/2.1/pages/installation/os/centos.html and installed Graylog 2.1.2. MongoDB, Elasticsearch and Graylog are running and answer local requests via the terminal. However, the web interface is not available. The login page is presented, but when I try to connect using the admin user, I receive this answer:
Error - the server returned: 404 - cannot POST http://mydomain:9000/api/system/sessions (404)
Below are the lines that I changed in Graylog's server.conf (I replaced the real IP address here):
rest_listen_url = http://4.8.15.16:9000/api/
rest_transport_uri = http://4.8.15.16:9000
web_listen_uri = http://4.8.15.16:9000/
I have searched for references about this failure and created a graylog-settings.json file based on a suggestion from the Graylog GitHub issues, with this content:
"custom_attributes": {
"graylog-server": {
"rest_transport_url": false
}
}
But even after restarting the server, the problem continues. The Graylog log only shows INFO records, so it seems to me that the requests are not reaching the server. I would like to know if this is due to network configuration or can be solved by adjusting Graylog.
Your rest_transport_uri looks odd in comparison with rest_listen_uri. Make sure that you actually need to set rest_transport_uri at all and that it is the correct setting.
I don't know where you found information about graylog-settings.json, but that file is only being used in the official Omnibus package (i.e. the OVA and AMIs).
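As a concrete sketch of the first suggestion, assuming the intent is the standard Graylog 2.1 option names and that the transport URI should carry the same /api/ base path as the listen URI (IP as in the question), the settings would line up like this:

rest_listen_uri = http://4.8.15.16:9000/api/
rest_transport_uri = http://4.8.15.16:9000/api/
web_listen_uri = http://4.8.15.16:9000/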

RDO unable to boot VM with disk size specified

I have a packstack-allinone setup on my RHEL 7.1 trial for the Juno release.
I am facing a problem when launching a VM (for example cirros) with a disk size specified in the flavor. If the flavor has a 0 GB disk the VM launches, but not for larger flavor sizes.
I also observe that when I do this, the openstack-nova-compute service goes down (nova-manage service list shows nova-compute as XXX), so I have to restart the service every time I try this scenario. The compute logs don't throw any error; they just get stuck at "Creating image".
Is there any filesystem configuration I am missing? I am new to this, so please help.
PS: I run all commands as the "root" user.
The problem was with ESXi. ESXi needs to be version 5.5 to support RHEL 7.x; since mine was 5.1, it only supported RHEL 6.x.
After upgrading ESXi 5.1 to 5.5 it worked fine.

elasticsearch-mesos not getting listed under frameworks of mesosUI

I am trying to run elasticsearch-mesos on Mesos. My machine is running Ubuntu 14.04. I have a running Mesos cluster installed with the Mesosphere packages by following these instructions. When I run the test frameworks they get listed under frameworks in the Mesos UI, but elasticsearch-mesos is not getting listed there. I want to run elasticsearch-mesos on top of Mesos. I followed the instructions given here. When I run ./elasticsearch-mesos I get a message in the terminal:
I0108 17:24:01.898540 23861 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I tried running ./elasticsearch-mesos on both the Mesos masters and slaves.
The last few lines of the terminal output are given below:
2015-01-08 17:24:01,881:23844(0x7f175bfff700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f1762a3e6a0 sessionId=0 sessionPasswd=<null> context=0x7f1710002530 flags=0
I0108 17:24:01.881392 23858 sched.cpp:137] Version: 0.21.1
2015-01-08 17:24:01,881:23844(0x7f172b7fe700):ZOO_INFO@check_events@1703: initiated connection to server [127.0.0.1:2181]
2015-01-08 17:24:01,897:23844(0x7f172b7fe700):ZOO_INFO@check_events@1750: session establishment complete on server [127.0.0.1:2181], sessionId=0x14ac7c469270006, negotiated timeout=10000
I0108 17:24:01.898455 23861 group.cpp:313] Group process (group(1)@127.0.1.1:38668) connected to ZooKeeper
I0108 17:24:01.898509 23861 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0108 17:24:01.898540 23861 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
According to the README at https://github.com/mesosphere/elasticsearch-mesos,
you may need to modify mesos.master.url to point to the same ZK url that the Mesos master is using (maybe not localhost). If you're using a single-master Mesos cluster, you can skip the ZK url and point this parameter directly to the Mesos master.
Please also note that the elasticsearch framework is a bit outdated, so use it with caution.
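For illustration only, the parameter mentioned above would be set in the framework's configuration roughly as follows; the file name (elasticsearch.yml) and the host names are assumptions, not taken from the question:

# Point the framework at the same ZooKeeper ensemble the Mesos master uses
mesos.master.url: "zk://zk-host:2181/mesos"
# ...or, on a single-master cluster, point directly at the master
# mesos.master.url: "mesos-master-host:5050"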