DataStax OpsCenter shows empty storage capacity

I'm using OpsCenter Free Edition 4.0.2 with Cassandra 2.0.3 on CentOS 6.
The storage capacity widget on the landing page is showing no data at all.
The storage-capacity call is returning:
{"free_gb": 0, "used_gb": 0, "reporting_nodes": 0}
However, the cluster ring view shows the size correctly.
Is there a way to fix this?

The reporting_nodes: 0 in the response means that no nodes currently have an agent actively reporting to OpsCenter.
This can be fixed by installing the OpsCenter agent on each of your machines.
In the OpsCenter UI, look for the banner showing how many agents are connected and click 'Fix'.
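A minimal sketch of the agent setup on CentOS 6, assuming the DataStax yum repository is already configured and <opscenter-ip> is a placeholder for your OpsCenter host:
# On each Cassandra node, install the agent
yum install datastax-agent
# Point the agent at the OpsCenter server
echo "stomp_interface: <opscenter-ip>" >> /var/lib/datastax-agent/conf/address.yaml
# Start the agent so the node begins reporting
service datastax-agent start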

Related

Infinispan console shows only one node for clustered servers in the cache nodes view

We are working with Infinispan 9.4.8 in domain mode, with a cluster of two host servers running two nodes.
In the cluster view statistics we can see that both nodes get hits, but when we look at the cache nodes view for a distributed cache, only one node appears in the nodes view.
In the Infinispan 8 console we used to see both nodes in the cache nodes view, but after upgrading to version 9 that is no longer the case.
Could you please advise whether this is a bug in the 9.4.8 console or whether something is missing in the configuration?
This is a bug which has just been fixed and will be included in the upcoming 9.4.18.Final release. The issue is tracked by ISPN-11265.
In the future please utilise the Infinispan JIRA directly if you suspect a bug.

Install the "[AMD/ATI] Tonga XT GL [FirePro S7150]" graphics card on a VM (CentOS 6.9) running on XenServer 7.4

I've just started using XenServer, doing some experiments for my company. I installed XenServer 7.4 on a box and created a CentOS 6.9 VM, using XenCenter.
I got to the point where I can run the virtual operating system, but when I try to use the "Advanced Micro Devices, Inc. [AMD/ATI] Tonga XT GL [FirePro S7150]" graphics card with the command:
xe vgpu-create vm-uuid=xxx-xxx-xxx-xxx gpu-group-uuid=xxx-xxx-xxx-xxx
I receive the following error message:
The use of this feature is restricted.
I have also tried, using the graphical interface (XenCenter) against a licensed XenServer, to enable the AMD card via Tools -> Install Update: I downloaded and selected mxgpu-1.0.5.amd.iso to enable the graphics card, but I cannot complete the process, as I receive the error message:
The attempt to create a VDI failed
I am running out of options. CentOS is running, but I cannot use the machine's AMD graphics card. Can you help?
Could you try running the VM with its virtual disk stored on the Local Storage repository of the host that holds the card, with that host removed from any pools? This is the default configuration, but I thought I'd mention these tips in case the box is somehow mixed into a heterogeneous pool. If the machine is part of a pool, make sure you are not passing another host's video adapter through to the VM.
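As a starting point for checking that, a rough sketch using the xe CLI (the UUID below is a placeholder; these commands only read state):
# List physical GPUs and the hosts they are installed in
xe pgpu-list
# List GPU groups and note the uuid of the FirePro group
xe gpu-group-list
# Confirm which host the VM is resident on (or has affinity to)
xe vm-list uuid=<vm-uuid> params=name-label,resident-on,affinity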

Repeated IBM Bluemix Node-RED app crashing; status 1

My Node-RED application in IBM Bluemix is repeatedly crashing, once an hour, with no real error message other than "exited with status: 1."
How can I troubleshoot this issue?
Is there someone from IBM Bluemix support who monitors this and could take a look?
I looked at my logs and there's nothing in there that really says what's going on.
Edit per requests:
The regular "OUT/ERR" log is scrolling so fast with HTTPD entries that I can't copy and paste from it. Filtering to the "ERR" channel, the only thing I see is below. I believe this error occurs during deploy, when the application restarts.
[App/0] ERR js-bson: Failed to load c++ bson extension, using pure JS version
My Node-RED application gathers data from Wink, LIFX, and other IoT services and compiles it into a Freeboard dashboard.
I caught the crash in a screenshot here (not enough reputation to post images, so it will only show as a link).
The zlib error was fixed in the 0.13.2 Node-RED release (shipped 19/02/16).
If you re-stage your application it should pick up the new version of Node-RED.
You can re-stage the application using the cf command-line tool:
cf restage <app name>
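To confirm the restage picked up the new version, one option (assuming a standard Node-RED boilerplate app, which prints its version at startup) is to check the recent logs:
# Look for the Node-RED version banner after the restage completes
cf logs <app name> --recent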

RDO unable to boot VM with disk size specified

I have a packstack all-in-one setup on my RHEL 7.1 trial, for the Juno release.
I am facing a problem launching a VM (for example, CirrOS) with a disk size specified in the flavor. With a 0 GB disk size the VM launches, but not with larger flavor sizes.
I also observe that when I do this, the openstack-nova-compute service goes down: nova-manage service list shows nova-compute as XXX, and I have to restart the service every time I try this scenario. The compute logs don't show any error; they just get stuck at "Creating image".
Is there some filesystem configuration I am missing? I am new to this, so please help.
PS: I run all commands as the "root" user.
The problem was with ESXi. ESXi needs to be version 5.5 to support RHEL 7.x; since mine was 5.1, it only supported RHEL 6.x.
After upgrading ESXi from 5.1 to 5.5 it worked fine.
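For anyone debugging the same symptom, a hedged check from inside the RHEL VM (assuming the default libvirt/KVM compute driver that packstack configures) is to verify that hardware virtualization extensions are exposed to the guest, and to fall back to plain QEMU emulation if they are not:
# 0 means no VT-x/AMD-V inside the guest, i.e. no nested virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
# Workaround: in /etc/nova/nova.conf, under the [libvirt] section, set:
#   virt_type = qemu
# then restart the compute service
systemctl restart openstack-nova-compute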

Amazon EC2 || RHEL || Connection refused on port 22 after reboot

I am aware that this question has been asked many times in forums, and I have tried all the solutions mentioned in them, but no luck.
Actually, I suspect that the last time I was replacing /etc/sysconfig/iptables with my own iptables rules, I mistakenly replaced /etc/init.d/iptables and restarted the machine. As expected, it didn't start. I then detached the EBS volume from this instance, attached it to a new RHEL instance, and fixed the mess by copying back /etc/init.d/iptables from a backup (I used to take backups before replacements :) ), and the same for /etc/sysconfig/iptables.
I had also put some custom startup scripts in the /etc/init.d folder so that our application starts on instance reboot. I removed those too, to make sure none of my scripts was causing this. But the system still does not let me connect via SSH. The AWS console shows 2/2 checks passing, but I am not able to connect on port 22.
Here are the last few lines of the system log, which indicate that something goes wrong during or after iptables startup, but don't show what. :(
blkfront: xvde1: barriers disabled
Changing capacity of (202, 65) to 62914560 sectors
xvde1: detected capacity change from 0 to 32212254720
EXT4-fs (xvde1): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvde1
dracut: Loading SELinux policy
type=1404 audit(1398404320.826:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295
type=1403 audit(1398404321.795:3): policy loaded auid=4294967295 ses=4294967295
dracut:
dracut: Switching root
udev: starting version 147
Initialising Xen virtual ethernet driver.
microcode: CPU0 sig=0x306e4, pf=0x1, revision=0x415
platform microcode: firmware: requesting intel-ucode/06-3e-04
Microcode Update Driver: v2.00 <tigran#aivazian.fsnet.co.uk>, Peter Oruba
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
ip6_tables: (C) 2000-2006 Netfilter Core Team
nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
ip_tables: (C) 2000-2006 Netfilter Core Team
Can anyone help me in identifying what is going wrong here?
Got it fixed.
Actually, it was not a problem with iptables. It was the known bug in RHEL 6.4 on EC2 that puts wrong entries in the sshd_config file. Although I had checked this file for wrong entries in my first attempt to resolve the issue, somehow they were being created again, maybe because every time I start a new machine I use either my own AMI or a new RHEL 6.4 AMI. In both cases the AMI is still registered as 6.4, even though the OS on the disk has been updated to 6.5; that may be why the wrong entries kept appearing in sshd_config. I have now fixed the wrong entries in this file again, created a new AMI using RHEL 6.5, attached the EBS volume from the instance created with my RHEL 6.4 AMI, and it works fine.
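For anyone repeating this recovery, a rough sketch of the rescue-volume procedure described above (the device name is an example and depends on how the volume is attached):
# On the rescue instance, after attaching the broken root volume as /dev/xvdf:
mkdir -p /mnt/rescue
mount /dev/xvdf1 /mnt/rescue
# Inspect sshd_config for the bad entries that block logins
grep -nE 'PermitRootLogin|AllowUsers|PasswordAuthentication' /mnt/rescue/etc/ssh/sshd_config
# After fixing the file, unmount and re-attach the volume to the original instance
umount /mnt/rescue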