Redis Booksleeve GetConfig Concern

I'm using Redis MSOpenTech 2.6 with Booksleeve 1.3.38. Whenever I execute
Dictionary<string, string> config = conn.Server.GetConfig("save").Result;
I get the following:
save 900 0 300 0 60 0
which I know is incorrect, since I can read the .conf file and it's set to the standard
900 1 300 10 60 10000
I've tried running the command with admin privileges and without, and it's always the same. Is there something I'm missing, or is this a bug with the MSOpenTech version of Redis?
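One way to narrow this down is to run the equivalent command through redis-cli and compare the output, since Server.GetConfig presumably just wraps CONFIG GET:
redis-cli CONFIG GET save
If redis-cli also reports 900 0 300 0 60 0, the server really is running with those values (for example because it was started without pointing at that .conf file) rather than Booksleeve misreading them.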

ZFS: Unable to expand pool after increasing disk size in vmware

I have a CentOS 7 VM with ZFS on Linux installed.
The VM has a disk /dev/sdb, that I've added to a pool named 'backup', and in this pool created a dataset.
Now, I wanted to increase the size of the disk in VMware, and then expand the size of the pool, but I'm not getting this to work.
I've tried 'zpool online -e backup sdb', but nothing changes.
I've tried running 'partprobe /dev/sdb' before and after the line above, but nothing changes.
I've tried rebooting + the above, nothing changes.
I've tried 'parted /dev/sdb', resizing the partition (it suggests the actual new size of the volume), and then all of the above. But nothing changes.
I've tried 'zpool export backup' + 'zpool import backup' in various combinations with all of the above. No luck
And also: 'lsblk' and 'df -h' report the old/wrong size of /dev/sdb, even if parted seems to understand that it has been increased.
PS: autoexpand=on
What to do?
I faced a similar issue today and had to try a lot before finding the solution.
When I tried the known zpool-based solutions (setting autoexpand to on and re-running partprobe), the system would not auto-expand (even after a restart).
Finally, I was able to solve it using parted, without getting into zpool at all.
We need to be careful here, since selecting the wrong partition can cause data loss.
Here is what worked for me, and it should apply to your situation.
Step 1: Find which partition backs the pool you are trying to expand. In my case it is partition 5, as seen below (the unallocated space sits right after this partition). Use parted -l:
parted -l
Output
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 69.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 540MB 538MB fat32 EFI System Partition boot, esp
3 540MB 2009MB 1469MB swap
4 2009MB 3592MB 1583MB zfs
5 3592MB 32.2GB 28.6GB zfs
Step 2: Explicitly instruct parted to expand partition number 5 to 100% of the available space. Note that '5' is not static; you need to use the number of the partition you wish to expand, so double-check it. Use parted /dev/XXX resizepart YY 100%:
parted /dev/sda resizepart 5 100%
After this, I was able to use the entire space in the VM.
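Applied to the setup in the question (the whole disk /dev/sdb backing the 'backup' pool), the equivalent would presumably be to grow the ZFS partition on /dev/sdb with parted as above and only then re-run the steps already tried so ZFS notices the new size, for example:
partprobe /dev/sdb
zpool online -e backup sdb
With autoexpand=on the pool should then grow; the key point is that these steps only help after the partition itself has been resized.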
For reference:
lsblk before:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 29.5G 0 part
lsblk after:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 61.7G 0 part

How to monitor the process in a container?

I am currently looking into the LXC container API. I am trying to figure out how the operating system can know which container a currently running process belongs to. That way, the OS can allocate resources to processes according to their container.
I am assuming your query is: given a PID, how do you find the container in which that process is running?
I will try to answer it based on my recent reading on Linux containers. Each container can be configured to start with its own user and group id mappings.
From https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html:
lxc.id_map
Four values must be provided. First a character, either 'u' or 'g', to specify whether user or group ids are being mapped. Next is the first userid as seen in the user namespace of the container. Next is the userid as seen on the host. Finally, a range indicating the number of consecutive ids to map.
So, you would add something like this to the config file (e.g. ~/.config/lxc/default.conf):
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
The above basically means that uids/gids 0 through 65535 in the container are mapped to 100000 through 165535 on the host. So, a uid of 0 (root) in the container will be seen as 100000 on the host.
For example, inside the container it will look something like this:
root@unpriv_cont:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 02:18 ? 00:00:00 /sbin/init
root 157 1 0 02:18 ? 00:00:00 upstart-udev-bridge --daemon
But on host the same processes will look like this:
ps -ef | grep 100000
100000 2204 2077 0 Dec12 ? 00:00:00 /sbin/init
100000 3170 2204 0 Dec12 ? 00:00:00 upstart-udev-bridge --daemon
100000 1762 2204 0 Dec12 ? 00:00:00 /lib/systemd/systemd-udevd --daemon
Thus, you can find the container of a process by looking for its UID and relating it to the mapping defined in that container's config.
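As a rough Python sketch of that lookup (the 100000 offset and 65536 range come from the example id_map above; the function name is made up for illustration):
# Translate a host-side uid back to the container-side uid,
# assuming the mapping "u 0 100000 65536" shown above.
ID_MAP_HOST_START = 100000   # host uid where the container's range begins
ID_MAP_RANGE = 65536         # number of consecutive ids mapped

def container_uid(host_uid):
    # Returns the uid as seen inside the container,
    # or None if the host uid is outside the mapped range.
    if ID_MAP_HOST_START <= host_uid < ID_MAP_HOST_START + ID_MAP_RANGE:
        return host_uid - ID_MAP_HOST_START
    return None

print(container_uid(100000))  # 0, i.e. root inside the container
print(container_uid(1000))    # None, a regular host uid outside the mapping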

APC OpCode - apc.filters - Excluding a directory from cache

I have recently implemented APC on my KnownHost SSD VPS and it is working very well (WordPress sites).
One thing I would like to do is exclude a directory (or directories) from the cache.
I have read the documentation on apc.filters, but it is unclear to me whether whole directories or only file types can be excluded. I have also searched the web extensively and have not found a working example of excluding a directory.
I have tried numerous variations for apc.filters, and have yet to find one that will exclude my directory.
So if my directory is located on this path in the server:
/home/my_user/public_html/my_directory
What would the correct value be for apc.filters to exclude the "my_directory" sub-directory?
Moved from the question
UPDATE: I found the answer (with help from KnownHost)
The correct syntax to exclude one directory is:
apc.filters = "-my_directory/.*";
Multiple directories are:
apc.filters = "-my_directory/.*,-my_directory2/.*";
I hope this can help someone out there, as I could not get it right, or find any information on it.
Thanks
In the interest of complete information, here are my APC runtime settings:
apc.cache_by_default 1
apc.canonicalize 1
apc.coredump_unmap 0
apc.enable_cli 0
apc.enabled 1
apc.file_md5 0
apc.file_update_protection 2
apc.filters
apc.gc_ttl 3600
apc.include_once_override 0
apc.lazy_classes 0
apc.lazy_functions 0
apc.max_file_size 2M
apc.mmap_file_mask /tmp/apc.XXXXXX
apc.num_files_hint 3000
apc.preload_path
apc.report_autofilter 0
apc.rfc1867 0
apc.rfc1867_freq 0
apc.rfc1867_name APC_UPLOAD_PROGRESS
apc.rfc1867_prefix upload_
apc.rfc1867_ttl 3600
apc.serializer default
apc.shm_segments 1
apc.shm_size 512M
apc.slam_defense 0
apc.stat 1
apc.stat_ctime 0
apc.ttl 7200
apc.use_request_time 1
apc.user_entries_hint 4096
apc.user_ttl 7200
apc.write_lock 1
Comments from other users suggest that it may also be necessary to set apc.cache_by_default=0
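Putting the pieces together, a sketch of the relevant php.ini/apc.ini lines might look like this (the paths are illustrative, and whether a pattern matches depends on how files are included, so treat it as a starting point):
apc.enabled = 1
apc.cache_by_default = 1
apc.filters = "-my_directory/.*,-my_directory2/.*"
With apc.cache_by_default = 1 the '-' patterns exclude matching files from the cache; if you instead set apc.cache_by_default = 0 as the comments suggest, APC caches nothing by default and you would use '+' prefixed patterns in apc.filters to opt paths back in.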

Connection reset by peer error in MongoDB on bulk insert

I am trying to insert 500 documents by doing a bulk insert in pymongo and I get this error:
File "/usr/lib64/python2.6/site-packages/pymongo/collection.py", line 306, in insert
continue_on_error, self.__uuid_subtype), safe)
File "/usr/lib64/python2.6/site-packages/pymongo/connection.py", line 748, in _send_message
raise AutoReconnect(str(e))
pymongo.errors.AutoReconnect: [Errno 104] Connection reset by peer
I looked around and found that this happens because the size of the inserted documents exceeds 16 MB, so according to that the size of the 500 documents should be over 16 MB. So I checked the size of the 500 documents (Python dictionaries) like this:
size = 0
for dict in dicts:
    size += dict.__sizeof__()
print size
This gives me 502920, which is about 500 KB, way less than 16 MB. Then why do I get this error?
I know I am calculating the size of Python dictionaries, not BSON documents, and MongoDB takes in BSON documents, but that can't turn 500 KB into 16+ MB. Moreover, I don't know how to convert a Python dict into a BSON document.
My MongoDB version is 2.0.6 and pymongo version is 2.2.1
EDIT
I can do a bulk insert with 150 documents and that's fine, but above 150 documents this error appears.
This Bulk Inserts bug has been resolved, but you may need to update your pymongo version:
pip install --upgrade pymongo
The error occurs due to the fact that the bulk inserted documents have an overall size of greater than 16 MB.
My method of calculating the size of the dictionaries was wrong.
When I manually inspected each key of the dictionary, I found that one key had a value of about 300 KB, so that did make the overall size of the documents in the bulk insert more than 16 MB: (500 * 300+ KB) > 16 MB. But I still don't know how to calculate the size of a dictionary without manually inspecting it. Can someone please suggest?
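One way to measure what actually goes over the wire is to encode each dictionary with the bson module that ships with PyMongo (BSON.encode is available in the 2.x series used here) and sum the encoded sizes; 'dicts' is the same list as in the snippet above:
from bson import BSON

total = 0
for doc in dicts:                   # the documents being bulk inserted
    total += len(BSON.encode(doc))  # size of the actual BSON bytes
print total                         # overall payload size in bytes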
Just had the same error and got around it by creating my own small bulks like this:
region_list = []
region_counter = 0
write_buffer = 1000
# loop through regions
for region in source_db.region.find({}, region_column):
    region_counter += 1  # up _counter
    region_list.append(region)
    # save bulk if we're at the write buffer
    if region_counter == write_buffer:
        result = user_db.region.insert(region_list)
        region_list = []
        region_counter = 0
# if there is a rest, also save that
if region_counter > 0:
    result = user_db.region.insert(region_list)
Hope this helps
NB: small update, from pymongo 2.6 on, PyMongo will auto-split lists based on the max transfer size: "The insert() method automatically splits large batches of documents into multiple insert messages based on max_message_size"
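For newer drivers: in PyMongo 3.x and later, insert() is deprecated in favour of insert_many(), which splits batches the same way, e.g.:
result = user_db.region.insert_many(region_list)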

redis config question?

I am using Redis for caching, but recently I ran into a problem with the amount of memory used: I had to restart my server since all RAM had been consumed.
It's not the biggest machine, but how should I configure Redis to avoid the same problem again?
free -m
total used free shared buffers cached
Mem: 240 222 17 0 6 38
-/+ buffers/cache: 177 62
Swap: 255 46 209
I have changed the following settings:
timeout 60
databases 1
save 300 1
save 60 100
maxmemory 104857600
top
top - 14:15:28 up 1:19, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 49 total, 1 running, 48 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 245956k total, 228420k used, 17536k free, 6916k buffers
Swap: 262136k total, 47628k used, 214508k free, 39540k cached
You can use the "maxmemory" directive in the config file: when this amount of memory is exceeded, Redis will start removing keys that already have an expire set (the keys that would expire soonest are the first to be removed).
Unlike memcached, Redis is meant to be a database, so it won't automatically remove old values to make room for new ones.
You have to explicitly set an expire time for each key/value, and even then you could run out of memory if you create key/values faster than they expire.
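As a concrete sketch, the relevant redis.conf lines could look like this (maxmemory is the value already set in the question; maxmemory-policy is an addition available from Redis 2.2 on that lets you choose the eviction behaviour):
maxmemory 104857600
maxmemory-policy volatile-lru
volatile-lru evicts least-recently-used keys among those that have an expire set, which matches the behaviour described above; allkeys-lru would let Redis evict any key, memcached-style.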
Use Redis virtual memory in Redis 2.0:
http://antirez.com/post/redis-virtual-memory-story.html