TensorFlow won't release system memory after force quit

I use tf.train.string_input_producer() and tf.train.shuffle_batch() to read data.
Since I don't have a good stopping criterion, I manually terminate the training program with Ctrl-C.
I find that every time I terminate the program this way, a huge amount of memory is left unreleased (checked with the "free -g" command).
And when I use the top command, no program is shown to be using that memory.
I have to reboot my machine after a number of training runs to release that memory.
Is it because I didn't explicitly close those queues? I think that memory must be holding my input data; otherwise it wouldn't be so large (50 GB).
Thanks.

Related

Is it possible to set a baseline memory usage in valgrind for leak detection?

Is there a way to tell valgrind from inside my code when to start and when to stop checking for memory leaks?
I am using a legacy testing framework which must link with my testing program in order to run. The framework has memory leaks in it - valgrind shows about 50KB of memory that has not been released, but is reachable via heuristic. This is annoying, because I must keep this number in mind to see how much memory is leaked from my code. It would be a lot more convenient if I could tell valgrind to start collecting memory stats when my first test begins, and stop collecting when the last test is over. Is there an API for it?
Valgrind memcheck lets you do a "differential" leak search. A differential leak search reports the delta between the previous leak search and the current situation.
You can do such a differential leak search using monitor commands with vgdb, either from the shell or from gdb. See https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands.
You can also use the client request VALGRIND_DO_CHANGED_LEAK_CHECK from your program, see https://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.clientreqs.
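For illustration, here is a minimal sketch of the client-request approach; framework_init() and run_all_my_tests() are placeholder names for whatever your legacy framework and test runner actually provide:

    /* Sketch only: bracket the tests with memcheck client requests so that
     * only leaks created between the two requests are reported. The macros
     * compile to no-ops when the program runs outside Valgrind. */
    #include <valgrind/memcheck.h>

    /* Placeholder names for the legacy framework and your test runner. */
    extern void framework_init(void);
    extern void run_all_my_tests(void);

    int main(void)
    {
        framework_init();               /* the framework's ~50KB of reachable memory is allocated here */

        VALGRIND_DO_LEAK_CHECK;         /* take a baseline before the first test */
        run_all_my_tests();
        VALGRIND_DO_CHANGED_LEAK_CHECK; /* report only the delta since the baseline */

        return 0;
    }

Run the program under valgrind as usual; the second report is the one that reflects your own tests.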

Redis-server using all RAM at startup

I'm using Redis and noticed that it crashes with the following error:
MISCONF Redis is configured to save RDB snapshots
I tried the solution suggested in this post
but everything seems to be OK in terms of permissions and space.
The htop command tells me that Redis is consuming 70% of RAM. I tried to stop/restart Redis in order to flush it, but at startup the amount of RAM used by Redis grew dramatically and stopped at around 66%. I'm pretty sure that at that moment no process was using any Redis instance!
What is happening there?
The growing RAM usage is expected behaviour of Redis at the first data load after a restart, when it reads back the data it wrote to disk (the snapshot process). Redis tends to allocate as much memory as it can unless you set the "maxmemory" option in your conf file.
It allocates memory but does not release it immediately. Sometimes this takes hours; I have seen such cases.
A well-known fact about Redis is that it can allocate memory up to twice the size of the dataset it keeps.
I suggest you wait a couple of hours without restarting (Redis can keep working during this time, serving get/set operations etc.) and keep watching the memory.
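If you do decide to cap it, the same maxmemory setting from redis.conf can also be applied to a running instance with CONFIG SET. A hypothetical sketch using the hiredis C client (the 4gb limit and the eviction policy are example values only):

    /* Sketch: apply a memory cap to a running Redis instance via CONFIG SET,
     * the runtime equivalent of setting "maxmemory" in redis.conf. Requires hiredis. */
    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void)
    {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == NULL || c->err) {
            fprintf(stderr, "connection error\n");
            return 1;
        }

        /* Example values: cap at 4 GB and evict least-recently-used keys. */
        redisReply *r1 = redisCommand(c, "CONFIG SET maxmemory 4gb");
        redisReply *r2 = redisCommand(c, "CONFIG SET maxmemory-policy allkeys-lru");

        if (r1) { printf("maxmemory: %s\n", r1->str); freeReplyObject(r1); }
        if (r2) { printf("policy:    %s\n", r2->str); freeReplyObject(r2); }

        redisFree(c);
        return 0;
    }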
Please check this too, from the Redis documentation:
Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example, if you fill an instance with 5GB worth of data and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example, often most of the removed keys were allocated in the same pages as the other keys that still exist.
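To see this allocator behaviour in isolation, outside Redis, a small Linux-only sketch like the following allocates many small blocks, frees half of them, and prints the RSS; because the freed chunks share pages with live ones, the RSS typically stays close to its peak:

    /* Sketch: demonstrates that free() does not necessarily lower the RSS,
     * because freed chunks often share pages with chunks that are still live.
     * Linux-only (reads /proc/self/status). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBLOCKS 1000000
    #define BLOCKSZ 512

    static void print_rss(const char *label)
    {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (!f)
            return;
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmRSS:", 6) == 0)
                printf("%-20s %s", label, line);
        fclose(f);
    }

    int main(void)
    {
        static char *blocks[NBLOCKS];

        print_rss("at start:");
        for (int i = 0; i < NBLOCKS; i++) {
            blocks[i] = malloc(BLOCKSZ);
            if (!blocks[i])
                return 1;
            memset(blocks[i], 1, BLOCKSZ);   /* touch the pages so they count in RSS */
        }
        print_rss("after allocating:");

        /* Free every other block: ~half the data is gone, but the pages stay. */
        for (int i = 0; i < NBLOCKS; i += 2)
            free(blocks[i]);
        print_rss("after freeing half:");

        return 0;
    }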

How do I mitigate CUDA's very long initialization delay?

Initializing CUDA in a newly-created process can take quite some time, as long as half a second or more on many of today's server-grade machines. As @RobertCrovella explains, CUDA initialization usually includes establishment of a Unified Memory model, which involves harmonizing the device and host memory maps. This can take quite a long time on machines with a lot of memory, and there might be other factors contributing to the long delay.
This effect becomes quite annoying when you want to run a sequence of CUDA-utilizing processes which do not use complicated virtual memory mappings: each of them has to sit through its own long wait, despite the fact that, "essentially", it could just re-use whatever initializations CUDA made the last time (perhaps with a bit of cleanup code).
Now, obviously, if you somehow rewrote the code for all those processes to execute within a single process, that would save you those long initialization costs. But isn't there a simpler approach? What about:
Passing the same state information / CUDA context between processes?
Telling CUDA to ignore most host memory altogether?
Making the Unified Memory harmonization more lazy than it is now, so that it only happens to the extent that it's actually necessary?
Starting CUDA with Unified Memory disabled?
Keeping some daemon process on the side and latching on to its already-initialized CUDA state?
What you are asking about already exists. It is called MPS (Multi-Process Service), and it basically keeps a single GPU context alive at all times with a daemon process that emulates the driver API. The initial target application is MPI, but it does basically what you envisage.
Read more here:
https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf
http://on-demand.gputechconf.com/gtc/2015/presentation/S5584-Priyanka-Sah.pdf
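To see how much of the delay MPS (or anything else) actually saves you, it helps to measure the context-initialization cost directly; a rough sketch, timing the common cudaFree(0) idiom that forces context creation:

    /* Sketch: measure the one-time CUDA context initialization cost by timing
     * the first runtime API call. cudaFree(0) is a common no-op that forces
     * context creation. Build with nvcc; run with and without MPS to compare. */
    #include <stdio.h>
    #include <time.h>
    #include <cuda_runtime.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        double t0 = now_sec();
        cudaFree(0);                       /* first call pays the initialization cost */
        double t1 = now_sec();
        cudaFree(0);                       /* later calls reuse the existing context */
        double t2 = now_sec();

        printf("first  cudaFree(0): %.3f s\n", t1 - t0);
        printf("second cudaFree(0): %.6f s\n", t2 - t1);
        return 0;
    }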

Kernel - Scheduler: what happens when switching between processes

Context:
I don't really understand how the kernel saves the state of running code when it exceeds its time slice.
I can't really visualize what actually happens.
Question:
1) Where is the currently running code (and its stack?) stored?
2) When the kernel "sees" the code again, will it just follow an offset and keep going as if nothing had happened?
It is not clear to me.
Thanks
The current instruction pointer and stack pointer are stored in task_struct->ip and task_struct->sp (for x86), and the new process's task_struct->ip and task_struct->sp are loaded back into the ip and sp registers when switch_to() is called in the Linux kernel.
The kernel's switch_to() does many things, like re-setting up the EIP, stack, FPU state, segment descriptors and debug registers, while switching to the new process.
Then the kernel's switch_mm() switches the virtual memory mappings from the last process to the new process.
It depends on the OS but as a general rule there is a block of storage which holds information about each process (usually called the Process Control Block or PCB). This information includes a pointer to the current line of code that is being executed and the contents of registers etc, so the process can start again where it stopped last time.
This block of information is owned by the OS itself not the process so it lives beyond the suspension of the process.
The program code itself is not stored in the PCB - it simply exists in memory or on disk. It can even be shared between processes, for example several processes may be running the same program, each at a different point in the code at any given time and each with their own set of 'variables' or data unique to that process's run of the program. All the OS needs is the variables and the line number or pointer to know where a particular process was in the code when it was suspended, and it can start from that point again.
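To make the idea concrete, here is a purely illustrative toy version of such a block; the real Linux counterpart is task_struct (together with thread_struct) and holds far more state:

    /* Toy sketch of a process control block. Illustrative only: real kernels
     * keep far more state (credentials, open files, signal state, ...). */
    #include <stdint.h>

    struct pcb {
        int      pid;          /* process identifier */
        int      state;        /* runnable, sleeping, stopped, ... */
        uint64_t ip;           /* saved instruction pointer: where to resume */
        uint64_t sp;           /* saved stack pointer */
        uint64_t regs[16];     /* saved general-purpose registers */
        void    *mm;           /* description of the address space (page tables) */
    };

    /* On a switch, the scheduler saves the outgoing process's CPU state into
     * its pcb and restores the incoming process's pcb into the CPU registers. */
    void context_switch(struct pcb *prev, struct pcb *next);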
It is worth noting that any RAM the process was using may or may not still be there when it restarts. In general an OS will try to leave recently used or frequently used RAM chunks (or 'pages') in memory if possible. If it needs to free up space, however, it may swap a 'page' out to disk, but disk access is much, much slower, hence the desire to avoid swapping out memory which is likely to be used again if possible.
In the worst case an OS may find that it swaps out a process and then, very soon, the new process needs to use some memory which has to be retrieved from disk. It is suspended while this happens, as the retrieval takes a long time in CPU terms. It may then happen that the next process also very soon finds itself in the same situation. The OS is now spending a lot of its time swapping processes and memory in and out and much less of its time doing real work - this is commonly called 'thrashing'.

Which takes longer: switching between user & kernel modes, or switching between two processes?

Which takes longer:
switching between user & kernel modes, or switching between two processes?
Please explain the reason too.
EDIT: I do know that whenever there is a context switch, it takes some time for the dispatcher to save the status of the previous process in its PCB, and then reload the next process from its corresponding PCB. And for switching between user and kernel modes, I know that the mode bit has to be changed. Is that all, or is there more to it?
Switching between processes (given you actually switch them, rather than run them in parallel) takes longer, by an order of oh-my-god.
Trapping from userspace to kernelspace used to be done with a processor interrupt. Around 2005 (I don't remember the kernel version), after a discussion on the mailing list where someone found that trapping was slower (in absolute terms!) on a high-end Xeon processor than on an earlier Pentium II or III (again, from memory), they implemented it with the newer CPU instruction sysenter (which had actually existed since the Pentium Pro, I think). This is done in the Virtual Dynamic Shared Object (vDSO) page in each process (cat /proc/pid/maps to find it), IIRC.
So, nowadays, a kernel trap is basically just a couple of CPU instructions, hence rather few cycles, compared to tens or hundreds of thousands when using an interrupt (which is really slow on modern CPUs).
A context switch between processes is heavy. It means storing all processor state (registers, etc.) to RAM (at a magic memory location in the user process space, actually; guess where!), in practice dirtying all cached memory in the CPU, and reading back the process state for the new process. The new process will (likely) have nothing still in the CPU cache from the last time it ran, so each memory read will be a cache miss and will need to be fetched from RAM. This is rather slow.
When I was at university, I "invented" (well, I did come up with the idea, knowing that there is plenty of die area in a CPU, but not enough cooling if it's constantly powered) a cache that was infinite in size although unpowered when unused (i.e. only used on context switches), and implemented this in Simics. I implemented support for this magic cache, which I called CARD (Context-switch Active, Run-time Drowsy), in Linux and benchmarked it rather heavily. I found that it could speed up a Linux machine with lots of heavy processes sharing the same core by about 5%. This was at relatively short (low-latency) process time slices, though.
Anyway. A context switch is still pretty heavy, while a kernel trap is basically free.
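To put a rough number on the "basically free" side, you can time a cheap system call in a loop; a quick sketch (exact numbers vary a lot with the CPU and with speculative-execution mitigations):

    /* Sketch: estimate the cost of one user->kernel->user round trip by timing
     * a cheap system call. syscall(SYS_getpid) forces a real kernel entry,
     * bypassing any caching in the C library. Linux-only. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const long iters = 1000000;
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);           /* one kernel trap per iteration */
        clock_gettime(CLOCK_MONOTONIC, &b);

        double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        printf("about %.0f ns per system call\n", ns / iters);
        return 0;
    }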
Answer to the question of at which memory location in user-space this is stored, for each process:
At address zero. Yep, the null pointer! You can't read from this entire page from user-space anyway :) This was back in 2005, but it's probably the same now unless the CPU state information has grown larger than a page size, in which case they might have changed the implementation.