LVM thinpool shrink - resize

I want to shrink a thin pool but can't manage to do it.
Is there a procedure for that?
Here is some information:
hyper $ sudo lvs
LV     VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
mailer lxc Vwi-aotz--  64,00m pool        32,81
nginx  lxc Vwi-aotz-- 512,00m pool        26,66
pool   lxc twi-aotz-- 180,00g               2,29  11,45
vpn    lxc Vwi-aotz--  64,00m pool        33,40
[...]
hyper dev $ sudo vgs
VG  #PV #LV #SN Attr   VSize   VFree
lxc   1  15   0 wz--n- 180,00g     0
hyper $ sudo lvreduce -L100G /dev/lxc/pool
Thin pool volumes lxc/pool_tdata cannot be reduced in size yet.
Is it possible to reduce the size of a thin pool?
If needed, I can copy all the data elsewhere and restore it afterwards, but how do I do that properly?
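For reference, since lvreduce refuses to shrink a thin pool, a rough sketch of the copy-out / recreate / copy-back route could look like the following. Everything here is an assumption based on the lvs output above (volume names and sizes, ext4 filesystems on the thin volumes, the /mnt and /backup paths); adapt it to the real setup and stop the containers first.
# 1. Back up the data of each thin volume (mailer shown as the example)
$ sudo mount /dev/lxc/mailer /mnt/mailer
$ sudo rsync -aHAX /mnt/mailer/ /backup/mailer/
$ sudo umount /mnt/mailer
# 2. Remove the thin volumes and the pool, then recreate the pool smaller
$ sudo lvremove lxc/mailer lxc/nginx lxc/vpn lxc/pool
$ sudo lvcreate --type thin-pool -L 100G -n pool lxc
# 3. Recreate each thin volume, make a filesystem and restore the data
$ sudo lvcreate -V 64M --thinpool pool -n mailer lxc
$ sudo mkfs.ext4 /dev/lxc/mailer
$ sudo mount /dev/lxc/mailer /mnt/mailer
$ sudo rsync -aHAX /backup/mailer/ /mnt/mailer/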


RabbitMQ MQTT broker uses a lot of memory

I am evaluating RabbitMQ as an MQTT broker and am currently running benchmark tests to check performance. Using the benchmark tool https://github.com/takanorig/mqtt-bench, I tried publishing 1-byte messages from 10000 clients. RabbitMQ's memory consumption for these numbers is 2 GB, and it is the same for 10000 subscriptions as well. Here are the consumption details provided by rabbitmq-diagnostics memory_breakdown:
connection_other: 1.1373 gb (55.89%)
other_proc: 0.3519 gb (17.29%)
allocated_unused: 0.1351 gb (6.64%)
other_system: 0.0706 gb (3.47%)
quorum_ets: 0.0675 gb (3.32%)
plugins: 0.0555 gb (2.73%)
binary: 0.0482 gb (2.37%)
mgmt_db: 0.035 gb (1.72%)
This means the broker is using roughly 200 KB per connection, which seems high to me: we need to scale our system to 1 million connections in the future, and at that rate we would need around 200 GB just for RabbitMQ.
I have tried playing with some settings in my conf file and Docker command:
mqtt.allow_anonymous=false
ssl_options.cacertfile=/certs/ca_certificate.pem
ssl_options.certfile=/certs/server_certificate.pem
ssl_options.keyfile=/certs/server_key.pem
ssl_options.verify=verify_peer
ssl_options.fail_if_no_peer_cert=false
mqtt.listeners.ssl.default=8883
mqtt.listeners.tcp.default=1883
web_mqtt.ws_path = /mqtt
web_mqtt.tcp.port = 15675
collect_statistics_interval = 240000
management.rates_mode = none
mqtt.tcp_listen_options.sndbuf = 1000
mqtt.tcp_listen_options.recbuf = 2000
mqtt.tcp_listen_options.buffer = 1500
Below is the docker command, where I've also tried to reduce the tcp_rmem and tcp_wmem sizes:
docker run -d --rm -p 8883:8883 -p 1883:1883 -p 15675:15675 -p 15672:15672 -v /home/ubuntu/certs:/certs --sysctl net.core.somaxconn=32768 --sysctl net.ipv4.tcp_max_syn_backlog=4096 --sysctl net.ipv4.tcp_rmem='1024 4096 500000' --sysctl net.ipv4.tcp_wmem='1024 4096 500000' -e RABBITMQ_VM_MEMORY_HIGH_WATERMARK=0.9 -e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+P 2000000" -t probusdev/hes-rabbitmq:latest
Are there any other settings I can try to reduce the memory consumption?
Update: I ran the same benchmark test against the EMQ broker and it used only 400 MB for the same numbers. So is RabbitMQ's MQTT support more memory-hungry than EMQ?

Cannot connect to Google Compute Engine instance via SSH in browser

I cannot connect to GCE via SSH. It shows "Connection Failed" and says it is unable to connect to the VM on port 22.
The serial console output shows:
Jul 8 10:09:26 Instance sshd[10103]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Jul 8 10:09:27 Instance sshd[10103]: User username from 0.0.0.0 not allowed because not listed in AllowUsers
Jul 8 10:09:27 Instance sshd[10103]: input_userauth_request: invalid user username [preauth]
Jul 8 10:09:27 Instance sshd[10103]: Connection closed by 0.0.0.0 [preauth]
Yesterday it was working fine, but today it shows this error. I am new to GCE. Any suggestions?
UPDATE
I'd like to post this update to mention that in June 2016 a new feature was released that lets you enable interactive access to the serial console, so you can more easily troubleshoot instances that are not booting properly or are otherwise inaccessible. See Interacting with the Serial Console for more information.
-----------------------------------------------------------------------------------
It looks like you've added AllowUsers to your /etc/ssh/sshd_config configuration file.
To resolve this issue, you'll need to attach the boot disk of your VM instance to a healthy instance as a second disk, mount it, edit the configuration file and fix the issue.
Here are the steps you can take to resolve the issue:
First of all, take a snapshot of your instance's disk, so that if loss or corruption happens you can recover the disk.
In the Developers Console, click on your instance. Uncheck Delete boot disk when instance is deleted and then delete the instance. The boot disk will remain under "Disks", and now you can attach it to another instance. You can also do this step using the gcloud command:
$ gcloud compute instances delete NAME --keep-disks all
Now attach the disk to a healthy instance as an additional disk. You can do this through the Developers Console or using the gcloud command:
$ gcloud compute instances attach-disk EXAMPLE-INSTANCE --disk DISK --zone ZONE
SSH into your healthy instance.
Determine where the secondary disk lives:
$ ls -l /dev/disk/by-id/google-*
Mount the disk:
$ sudo mkdir /mnt/tmp
$ sudo mount /dev/disk/by-id/google-persistent-disk-1-part1 /mnt/tmp
where google-persistent-disk-1 is the name of the disk.
Edit the sshd_config configuration file, remove the AllowUsers line, and save it:
$ sudo nano /mnt/tmp/etc/ssh/sshd_config
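Alternatively, if the AllowUsers directive sits on a line of its own, a one-liner along these lines should remove it (just a sketch; double-check the file afterwards):
$ sudo sed -i '/^AllowUsers/d' /mnt/tmp/etc/ssh/sshd_config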
Now unmount the disk:
$ sudo umount /mnt/tmp
Detach it from the VM instance. This can be done through the Developers Console or using the command below:
$ gcloud compute instances detach-disk EXAMPLE-INSTANCE --disk DISK
Now create a new instance using your fixed boot disk.
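With gcloud, that last step could look roughly like this (EXAMPLE-INSTANCE, DISK and ZONE are placeholders, as above):
$ gcloud compute instances create EXAMPLE-INSTANCE --disk name=DISK,boot=yes --zone ZONE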

Process Core dumps are not created after crash

I have configured the system to create process core dumps.
Below are my settings.
/etc/sysctl.conf
kernel.core_uses_pid = 1
kernel.core_pattern = /var/core/core.%e.%p.%h.%t
fs.suid_dumpable = 2
/etc/security/limits.conf
* soft core unlimited
root soft core unlimited
Here are the steps I am following to generate process core dumps.
1) I restarted the mysql service and ran "kill -s SEGV <mysql_pid>"; a core dump file then appeared in /var/core.
2) Then I started the mysql service with "/etc/init.d/mysql start" or "service mysql start". Now if I run "kill -s SEGV <mysql_pid>", no core dump file is created.
3) To get a crash file again I have to restart the mysql service; only then does "kill -s SEGV <mysql_pid>" produce a core dump file.
Can anyone please help me resolve this?
First of all, you can verify that core dumps are disabled for the MySQL process by running:
# cat /proc/`pidof -s mysqld`/limits|egrep '(Limit|core)'
Limit                Soft Limit   Hard Limit   Units
Max core file size   0            unlimited    bytes
The "soft" limit is the one to look for, zero in this case means core dumps are disabled.
Limits set in /etc/security/limits.conf by default only apply to programs started interactively. You may have to include 'ulimit -c unlimited' in the mysqld startup script to enable coredumps permanently.
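As a sketch, that addition could look like the following; the exact location depends on your distribution's init script (on systemd-based systems the equivalent would be LimitCORE=infinity in the service unit instead):
# Near the top of /etc/init.d/mysql, or in a file the script sources
# (for example /etc/default/mysql on Debian-style systems), before mysqld starts:
ulimit -c unlimited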
If you're lucky, then you can enable coredumps for your current shell and restart the daemon using its init.d script:
# ulimit -c unlimited
# /etc/init.d/mysql restart
* Stopping MySQL database server mysqld [ OK ]
* Starting MySQL database server mysqld [ OK ]
* Checking for tables which need an upgrade, are corrupt
or were not closed cleanly.
# cat /proc/`pidof -s mysqld`/limits|egrep '(Limit|core)'
Limit                Soft Limit   Hard Limit   Units
Max core file size   unlimited    unlimited    bytes
As you can see, this works for MySQL on my system.
Please note that this won't work for applications like Apache, which call ulimit internally to disable core dumps, nor for init.d scripts that use upstart.
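For an upstart-managed service the core limit is set in the job definition instead; a minimal sketch, assuming the job file is /etc/init/mysql.conf:
# /etc/init/mysql.conf -- add this stanza to the upstart job
limit core unlimited unlimited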

Setting up redis with docker

I have set up a basic Redis image based on the following instructions: http://docs.docker.io/en/latest/examples/running_redis_service/
In my snapshot I have also edited the redis.conf file to set requirepass.
My server runs fine and I am able to access it remotely using redis-cli; however, the authentication isn't working. I suspect the config file isn't being used, but when I try starting the container with:
docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server /etc/redis/redis.conf
the container immediately crashes.
The default Redis config is set to run as a daemon. You can't run a daemon within a Docker container; otherwise lxc will lose track of it and destroy the namespace.
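So a minimal sketch of a /etc/redis/redis.conf suitable for this setup might be (foobared is just a placeholder password):
# Keep redis-server in the foreground so Docker can track the process
daemonize no
# Require clients to authenticate; use your own password
requirepass foobared
With daemonize set to no, the docker run command from the question should keep the container alive instead of exiting immediately.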
I just tried doing this within the container:
$>redis-server - << EOF
requirepass foobared
EOF
Now I can connect to it, and unauthenticated commands get 'ERR operation not permitted'. When I connect with redis-cli -a foobared, it works fine.

Can only connect to local tmux session over ssh

So, I have a tmux session running on my local machine, but I can only connect to it (or see information about it) if I ssh back to myself first:
% tmux ls
failed to connect to server: Connection refused
% ssh localhost -t tmux ls
Password:
0: 2 windows (created Mon Nov 26 12:47:44 2012) [208x52] (attached)
Connection to localhost closed.
This isn't the worst hoop to have to jump through, but why is it happening, and how can I fix it?
For its client/server communication, tmux uses a named socket (in a UID-based subdirectory) under the directory specified by the TMPDIR environment variable. If this environment variable is not set (or it is empty), then tmux uses the directory defined by _PATH_TMP from paths.h; this is often /tmp.
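For example, you can check which socket directory a given shell would use (and which servers already live there) with something like:
$ ls -l "${TMPDIR:-/tmp}/tmux-$(id -u)/"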
Note: The following uses of “session” refer to login sessions, not tmux sessions.
My guess is that your ssh sessions share a common TMPDIR value (possibly not having one at all), while your “normal” sessions use a different TMPDIR value. Since the TMPDIR values are different in your different sessions, a client in one session type cannot directly “see” a server started in the other session type (e.g. the client tries using /var/folders/random/directories/tmux-500/default, but the server is listening at /tmp/tmux-500/default).
To fix the problem you can simply adjust your TMPDIR to match whatever it normally is in your ssh sessions:
TMPDIR=$(/usr/bin/ssh localhost -t 'echo $TMPDIR') && export TMPDIR
You can determine the path your client is trying to use like this:
tmux -L temp start\; info | grep path
This will create a short-lived server using a socket named temp instead of default, and show you the path of the socket it is using.
Tmux manages its sockets under /tmp/tmux-USERID, and each of these sockets has a name attached to it.
For example:
$ pwd
/tmp/tmux-2037
$ ls
default foo
$ tmux -L foo ls
0: 1 windows (created Tue Dec 4 13:36:10 2012) [172x52]
$ tmux -L default ls
0: 1 windows (created Tue Nov 20 16:21:14 2012) [188x47]
$ tmux ls
0: 1 windows (created Tue Nov 20 16:21:14 2012) [188x47]
Take a look at what you've got under /tmp/tmux-USERID and try attaching to some of those sockets by name to see whether that's contributing to your problem (running tmux ls is the same as running tmux -L default ls).
If all else fails, it may be worth shutting that tmux server down completely (close all windows and exit fully) and then running rm /tmp/tmux-500/default to see whether there's something stateful about your current problem.