I just installed Apache Accumulo. It initialized and ran successfully, but after a restart, when I run the start-all.sh command it gets stuck on "waiting for Accumulo to be initialized". What's wrong here?
If you've restarted your computer, be sure that you have also restarted Hadoop (HDFS) and ZooKeeper, and verify that they are running correctly. Both must be running for Accumulo to start.
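One quick check (assuming a typical single-node setup with the JDK installed) is to run:
jps
and confirm it lists at least NameNode, DataNode and QuorumPeerMain before starting Accumulo.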
It sounds like you might be running this locally on a single machine. If that's the case, also verify your Hadoop HDFS settings and make sure it isn't writing its data to /tmp, which can get wiped out between restarts.
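As a rough sketch (the property name is standard, the path is just an illustration), you can point Hadoop's working directory somewhere persistent in core-site.xml:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/lib/hadoop/tmp</value>
</property>
After changing it, move or re-initialize the existing HDFS data accordingly.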
Related
I've been testing syslog-ng in a dev environment for several weeks now. It has since been moved to production, but I'm getting weird behavior. I've taken the exact same syslog-ng.conf that was on dev (it listens on udp:514 and writes everything to a file on a separate disk) and have it running on production. I only seem to get data written to my destination when I run syslog-ng -Fevd in the foreground. Does anyone have any ideas? I've tried restarting the service with no luck at all.
This particular syslog-ng instance is gathering logs from all ESXi and vCenter servers in the production environment, which are then forwarded to Splunk from there (Splunk's recommended solution for VMware logs).
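For reference, the relevant part of the config is roughly the following (a sketch only; the statement names and file path are illustrative, not the exact production file):
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_vmware { file("/logs/vmware/esxi.log"); };
log { source(s_net); destination(d_vmware); };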
So I continued to pore over the man page. I compared the command the service runs against the options on the man page, and the service was using -F (foreground). So I just ran sudo syslog-ng --process-mode safe-background (which is supposed to be the default behavior of syslog-ng) and I'm now getting all of my logs in my destination.
TLDR; RTFM.
I have a Kubernetes cluster and I am getting cgroup out-of-memory kills. I have resources declared in the YAML, but I have no idea which apache2 needs more memory. The kill gives me a process ID, but how do I tell which pod is being killed?
Thank you.
It is what it is. Your Apache process is using more memory than you are allowing in your pod/container definition.
Reasons why it could need more memory:
You have an increase in traffic and sessions being handled.
Apache is forking more processes within the container and hitting the memory limit.
Apache is not reaping some lingering sessions because of a config issue.
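For reference, the limit being hit is whatever you set under resources in the container spec of your pod YAML, something along these lines (values are illustrative):
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"
When the container's cgroup exceeds the memory limit, the kernel OOM killer terminates a process inside it.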
If you are running Docker for containers (which most people do), you can SSH into the node in your cluster and run:
docker ps -a
You should see the Exited container where your Apache process was running. Then you can run:
docker logs <container-id>
and you might get details on what Apache was doing before it was killed. If you only see minimal info, I recommend increasing the verbosity of your Apache logs.
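In Apache that is usually the LogLevel directive in the main configuration (the exact file depends on your image), for example:
LogLevel debug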
Hope it helps.
I am trying to start ZooKeeper on a remote virtual machine. I use it for my project regularly and have never had problems starting ZooKeeper, but lately, when I try to start the server, I get an error.
When I run ./zkServer.sh start, it says the ZooKeeper server started.
When I check the status using ./zkServer.sh status, it shows "Error contacting service. It is probably not running."
I am working with 5 virtual machines in total. All of these machines were fine initially. I started having problems with machine 1, but recently I have had the same problem on all of my virtual machines. Can someone tell me what the issue is and suggest a way to resolve it?
Most probably the ZooKeeper server exited.
If you are running it on a Linux box, check with standard Linux commands, for example:
ps -ef | grep -i zookeeper
jps
etc.
Also, try running it in the foreground:
zkServer.sh start-foreground
In my case it was a $PATH issue.
You can find out what the issue is by running ZooKeeper in the foreground:
zkServer.sh start-foreground
I encountered the same problem, too. In my case, the ZooKeeper server configuration was not the same on each node, so ZooKeeper could not form a quorum and the affected nodes could not join the cluster.
Please make sure the server definitions are the same on every node.
For example, on all nodes the server definitions must be the same, as below:
server.0=ip0:2888:3888
server.1=ip1:2888:3888
server.2=ip2:2888:3888
server.3=ip3:2888:3888
server.4=ip4:2888:3888
In my case, the clientPort value was somehow missing on one of the boxes, so the console reported an invalid config path. Running zkServer.sh start-foreground helped me investigate and find the root cause.
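For reference, a minimal zoo.cfg that every node would carry looks roughly like this (the dataDir path and timings are illustrative):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.0=ip0:2888:3888
server.1=ip1:2888:3888
server.2=ip2:2888:3888
server.3=ip3:2888:3888
server.4=ip4:2888:3888
Each node also needs a myid file inside dataDir containing its own server number (0 through 4 here).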
I'm trying to revert a virtual machine to the previous snapshot every day or night.
Unfortunately, I haven't found any way to do this the way I want it.
Here are some things I tried that didn't fit:
- snapshot.action=autoRevert --> the VM has to HALT; REBOOT doesn't behave the same way, and I don't want to power on my VM manually.
- snapshot.action=autoRevert on a running snapshot. I tried this, thinking it might work and resolve the first issue, but when I HALT my VM, the snapshot is reverted and the VM is left in a suspended state...
- PowerCLI script: I don't want to have a Windows machine running just for this little thing.
- Non-persistent disk: same as the first issue: the VM needs to HALT, not REBOOT.
How can I simply do this? I thought I could just do one of those things and add a cron job on my Linux VM to reboot it every night.
In the past I've set up scripts that revert VMs to specific snapshots via the SSH server on my ESXi host. Once sshd is enabled, you can remotely run vim-cmd over SSH. This was on ESXi 4.x, but I assume the same can be done in newer versions.
The catch was that I had to enable the so-called "Tech Support Mode" to get sshd running, as documented in the VMware KB: kb.vmware.com/kb/1017910
The procedure I used was to first look up the ID of the VM in question by running:
vim-cmd vmsvc/getallvms
Then, you can view your VM's snapshot tree by passing its ID to this command (this example uses the VM with ID 80):
vim-cmd vmsvc/get.snapshotinfo 80
Finally, you can use an SSH client to remotely revert the VM to an arbitrary snapshot by passing the VM and snapshot IDs to 'snapshot.revert':
ssh root@YOUR_VMWARE_HOST vim-cmd vmsvc/snapshot.revert VM_ID 0 SNAPSHOT_ID
One other thing to note is that you can set up public key authentication between the ESXi server and the machine running your scripts so that the latter won't have to use a password.
The only annoyance with that approach was that I didn't immediately see a way to preserve the authorized_keys file on the ESXi server between reboots - if the ESXi server has to be rebooted, you'll have to rebuild its authorized_keys file before public key auth will work again.
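To tie this back to a nightly schedule, the revert can then be driven by a cron entry on the Linux VM along these lines (a sketch; VM_ID, SNAPSHOT_ID and the host name are placeholders, and the power.on step assumes the revert leaves the VM powered off):
0 3 * * * ssh root@YOUR_VMWARE_HOST "vim-cmd vmsvc/snapshot.revert VM_ID 0 SNAPSHOT_ID && vim-cmd vmsvc/power.on VM_ID"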
My main OS is Windows 7 64-bit. I used VMware Player to create two CentOS 5.6 VMs, with the network connection bridged. I installed HBase on both CentOS systems, one as the master and the other as the slave. When I enter the HBase shell and run status 'details', I get errors.
The error from the master is:
zookeeper.ZKConfig: no valid quorum servers found in zoo.cfg ERROR:
org.apache.hadoop.hbase.ZooKeeperConnectionException: An error is
preventing HBase from connecting to ZooKeeper
And the error from the slave is:
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is
able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is
the default). Consider inspecting your ZK server logs for that error
and then make sure you are reusing HBaseConfiguration as often as you
can. See HTable's javadoc for more information.
Please give me some suggestions.
Thanks a lot
Check if these lines are in your .bashrc; if not, add them and restart all HBase services (do not forget to run them manually as well). That did it for me with a pseudo-distributed installation. My problem (and maybe yours as well) was that HBase wasn't detecting its configuration.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_CONF_DIR=/etc/hbase/conf
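If the quorum error persists, also check hbase-site.xml under that config directory; the master's message usually means HBase cannot find its ZooKeeper quorum, which is set roughly like this (the host name is illustrative):
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master-hostname</value>
</property>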
I see this very often on my machine. I don't have a failsafe cure, but I end up running stop-all.sh and deleting every place that Hadoop and DFS (it's a DFS failure) store their temp files. It seems to happen after my computer goes to sleep while DFS is running.
I am going to experiment with single-user mode to avoid this. I don't need distribution while developing.