Can a new process with non-zero resource requirements be added while Banker's Algorithm is running without introducing deadlock?

I know that the Banker's Algorithm requires the maximum resource needs of every process to be declared up front to ensure that the system does not go into deadlock. But in real systems neither the resources available nor the processes' demands for resources are fixed over time.
So what happens when a new process is added while the Banker's Algorithm is running? Could it result in deadlock?
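To make the scenario concrete, here is a rough Java sketch (the array layout and names such as canAdmit are mine, just for illustration; they are not from any particular textbook or system) of re-running the standard Banker's safety check at the moment a new process asks to join. The newcomer holds nothing yet, so its need is its full declared maximum claim, and it is admitted only if the state that includes it is still safe; otherwise admission is simply deferred rather than risking an unsafe state.

import java.util.Arrays;

// Sketch only: the standard Banker's safety check, re-run when a new process arrives.
public class BankerAdmission {

    // need[i][j] = (max claim of process i for resource j) - (what it already holds)
    static boolean isSafe(int[] available, int[][] need, int[][] allocation) {
        int n = need.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && fits(need[i], work)) {
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j]; // i can finish and free its resources
                    finished[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false; // some process can never be satisfied -> unsafe
        return true;
    }

    static boolean fits(int[] need, int[] work) {
        for (int j = 0; j < work.length; j++) if (need[j] > work[j]) return false;
        return true;
    }

    // Admit a process that holds nothing yet and has declared maximum claim newMax.
    static boolean canAdmit(int[] available, int[][] need, int[][] allocation, int[] newMax) {
        int n = need.length, m = available.length;
        int[][] need2 = Arrays.copyOf(need, n + 1);
        int[][] alloc2 = Arrays.copyOf(allocation, n + 1);
        need2[n] = Arrays.copyOf(newMax, m); // its need is its full maximum claim
        alloc2[n] = new int[m];              // it currently holds nothing
        return isSafe(available, need2, alloc2);
    }
}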


Can a Deadlock occur with CPU as a resource?

I am in my fourth year of Software Engineering and we are covering the topic of deadlocks.
The usual generalization is that a deadlock occurs when two processes, A and B, use two resources, X and Y, and each waits for the other to release its resource before releasing its own.
My question is: given that the CPU is a resource in itself, is there a scenario where a deadlock could involve the CPU as a resource?
My first thought is that you would need a system where a process cannot be forced off the CPU by timer interrupts (it could just be an FCFS algorithm). You would also need there to be no waiting queues for resources, because getting into a queue would mean releasing the CPU. But then I also ask: can there be deadlocks when there are queues?
A CPU scheduler can be implemented in any number of ways; you could build one that uses an FCFS algorithm and lets processes decide when to relinquish control of the CPU. But that kind of implementation is neither practical nor reliable: the CPU is the single most important resource an operating system has, and allowing a process to take control of it in such a way that it may never be preempted effectively makes that process the owner of the system, which contradicts the basic idea that the operating system should always be in control of the system.
As far as contemporary operating systems (Linux, Windows, etc.) are concerned, this will never happen, because they do not allow such situations.
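For a contrived illustration in code (the Java setup below is just my own sketch, not anything a real OS does): a single-threaded executor stands in for a non-preemptive CPU, task A occupies it while spin-waiting for a flag that only task B can set, and B can never run because A never gives the CPU up. The result is a hang in which the CPU itself is the contended resource.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: the single worker thread plays the role of a non-preemptive CPU.
// Run it to observe the hang; it intentionally never terminates.
public class CpuDeadlockSketch {
    public static void main(String[] args) {
        AtomicBoolean flag = new AtomicBoolean(false);
        ExecutorService cpu = Executors.newSingleThreadExecutor(); // one "core", no preemption

        cpu.submit(() -> {                // task A: takes the CPU and never relinquishes it
            while (!flag.get()) { }       // spin-waits for B, which can never run
        });
        cpu.submit(() -> flag.set(true)); // task B: would unblock A, but never gets the CPU
    }
}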

Visual Basic .NET limit a process to a single thread

How do you restrict a Visual Basic program (or, better, only a single portion of it) to a single thread? For example:
[Hypothetical thread limiting command here]
my code and such
[End thread limitation]
Is there any way to do this? Sorry if I am being ambiguous.
Unless you are explicitly starting other threads, or are using a third-party library that uses threads, your Visual Basic (or any other .NET language) code will not run on multiple threads. You may see some debugging messages about threads starting and shutting down, but those come from the garbage collector running in the background, not from your application.
Web projects are an exception to this, as each request will be handled on a separate thread.
UPDATE
What you are seeing is most likely a combination of multiple effects.
The most important is something called Hyper-Threading (HT). Despite its name, it has nothing to do with threads at the application level. Most likely the CPU in your computer has only 2 physical cores that support Hyper-Threading. HT makes each physical core show up as 2 logical cores in the operating system, and the CPU decides on its own which of those logical cores to process instructions on. This isn't true multi-threading, and you will never have to worry about any effects from HT in your code.
Another cause for what you are seeing is processor affinity (PA), or more properly, the lack of it. The operating system chooses which logical CPUs to execute your code on, and it also has the option of moving that execution around to different CPUs to try to optimize its workload. It can do this whenever it wants and as often as it wants; sometimes it happens very rapidly. Depending on your OS you can "pin" a program to a specific core (or set of cores) to stop this from happening, but again, it should not affect your program's execution in any way.
As for the rise in CPU temperature you are seeing across all cores, it can be explained pretty easily. The physical cores of your CPU sit very close together under a single heat sink, so as one core heats up it warms the cores next to it.

operating system - context switches

I have been confused about context switches between processes, given a round-robin scheduler with a certain time slice (which, in a basic sense, is what both Unix and Windows use).
So, suppose we have 200 processes running on a single-core machine. If the scheduler uses even a 1 ms time slice, each process would only get its share every 200 ms, which is probably not the case (imagine a Java high-frequency app; I would not assume it gets scheduled every 200 ms to serve requests). Having said that, what am I missing in the picture?
Furthermore, Java and other languages allow putting the running thread to sleep for, e.g., 100 ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
So, suppose we have 200 processes running on a single-core machine. If the scheduler uses even a 1 ms time slice, each process would only get its share every 200 ms, which is probably not the case (imagine a Java high-frequency app; I would not assume it gets scheduled every 200 ms to serve requests). Having said that, what am I missing in the picture?
No, you aren't missing anything, and the same applies in non-preemptive systems. Processes with preemptive rights (meaning higher priority than other processes) can easily displace less important processes, to the point where a high-priority process might run, say, ten times as often as the lowest-priority one (the actual ratio depends entirely on the situation and the implementation), even to the point of starving the lowest-priority process.
As for processes of similar priority, it comes down to the round-robin algorithm you mentioned, though which process is picked first is again a matter of implementation. Windows and traditional Unix schedulers both use round-robin-style scheduling, but the Linux task scheduler is the Completely Fair Scheduler (CFS).
Furthermore, Java and other languages allow putting the running thread to sleep for, e.g., 100 ms. Am I correct in saying that this does not cause a context switch, and if so, how is this achieved?
Programming languages and libraries implement "sleep" functionality with the aid of the kernel. Without kernel-level support, they'd have to busy-wait, spinning in a tight loop, until the requested sleep duration elapsed. This would wastefully consume the processor.
As for threads that are put to sleep (Thread.sleep(long millis)), most systems generally do the following (a sketch follows these steps):
Suspend execution of the process and mark it as not runnable.
Set a timer for the given wait time. Systems provide hardware timers that let the kernel register to receive an interrupt at a given point in the future.
When the timer hits, mark the process as runnable.
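Here is a rough Java sketch of that contrast (my own illustration; the scheduled executor merely mimics the suspend / set-a-timer / wake-up pattern in user code, it is not how Thread.sleep is actually implemented under the hood):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch only: busy-waiting burns the CPU for the whole interval, while the
// timer-based version blocks the thread (not runnable) until the timer fires.
public class SleepSketch {

    static void busyWaitSleep(long millis) {
        long end = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(millis);
        while (System.nanoTime() < end) { } // spins at 100% CPU until the deadline passes
    }

    static void timerSleep(long millis) throws InterruptedException {
        CountDownLatch wakeUp = new CountDownLatch(1);
        ScheduledThreadPoolExecutor timer = new ScheduledThreadPoolExecutor(1);
        timer.schedule(wakeUp::countDown, millis, TimeUnit.MILLISECONDS); // "set a timer"
        wakeUp.await();   // suspend: this thread is not runnable until the timer fires
        timer.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        busyWaitSleep(100); // ~100 ms of wasted CPU
        timerSleep(100);    // ~100 ms with the thread blocked, consuming no CPU
    }
}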
I hope you are aware of threading models like one-to-one, many-to-one, and many-to-many, so I won't go into much detail here; this is just a reference for you.
It might appear as if this adds overhead and complexity, but that is how threads (user threads created in the JVM) are handled, and how they map onto kernel threads depends on the threading models I mentioned above. Check this Quora question and its answers, and in particular go through the top answer, given by Robert Love.
For further reading, I'd suggest the Scheduling Algorithms article on OSDev.org and the book Operating System Concepts by Silberschatz, Galvin, and Gagne.

High CPU with ImageResizer DiskCache plugin

We are noticing occasional periods of high CPU on a web server that happens to use ImageResizer. Here are the surprising results of a trace performed with NewRelic's thread profiler during such a spike:
It would appear that the cleanup routine associated with ImageResizer's DiskCache plugin is responsible for a significant percentage of the high CPU consumption associated with this application. We have autoClean on, but otherwise we're configured to use the defaults, which I understand are optimal for most typical situations:
<diskCache autoClean="true" />
Armed with this information, is there anything I can do to relieve the CPU spikes? I'm open to disabling autoClean and setting up a simple nightly cleanup routine, but my understanding is that this plugin is built to be smart about how it uses resources. Has anyone experienced this and had any luck simply changing the default configuration?
This is an ASP.NET MVC application running on Windows Server 2008 R2 with ImageResizer.Plugins.DiskCache 3.4.3.
Sampling, or why the profiling is unhelpful
New Relic's thread profiler uses a technique called sampling - it does not instrument the calls - and therefore cannot know if CPU usage is actually occurring.
Looking at the provided screenshot, we can see that the backtrace of the cleanup thread (there is only ever one) is frequently found at the WaitHandle.WaitAny and WaitHandle.WaitOne calls. These methods are low-level synchronization constructs that do not spin or consume CPU resources, but rather efficiently return CPU time back to other threads, and resume on a signal.
Correct profilers should be able to detect idle or waiting threads and eliminate them from their statistical analysis. Because New Relic's profiler failed to do that, there is no useful way to interpret the data it's giving you.
If you have more than 7,000 files in /imagecache, here is one way to improve performance
By default, in V3, DiskCache uses 32 subfolders with 400 items per folder (1000 hard limit). Due to imperfect hash distribution, this means that you may start seeing cleanup occur at as few as 7,000 images, and you will start thrashing the disk at ~12,000 active cache files.
This is explained in the DiskCache documentation - see subfolders section.
I would suggest setting subfolders="8192" if you have a larger volume of images. A higher subfolder count increases overhead slightly, but also increases scalability.
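For example, keeping autoClean enabled and only raising the subfolder count would look something like this (everything else left at its defaults):
<diskCache autoClean="true" subfolders="8192" />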

Is there a way to make threads run slow?

This question is probably the opposite of what every developer wants their system to do.
I am creating software that looks in a directory for specific files, reads them in, and does certain things with them. This can create a high CPU load. I use GCD to create threads, which are put into an NSOperationQueue. What I was wondering is: is it possible to make this operation not take such a huge CPU load? I want to run things much more slowly, as speed is not an issue, but it is very important that the app plays nicely in the background.
In short: can I make an NSOperationQueue, or threads in general, run slowly without using things like sleep?
The app traverses a directory structure, finds all images, and creates thumbnails. Just traversing the directories makes the CPU load quite high.
Process priority: nice / renice.
See:
https://superuser.com/questions/42817/is-there-any-way-to-set-the-priority-of-a-process-in-mac-os-x#42819
but you can also do it programmatically.
Your threads are being CPU-intensive. This leads to two questions:
Do they need to be so CPU-intensive? What are they doing that's CPU-intensive? Profile the code. Are you using (say) a quadratic algorithm when you could be using a linear one?
Playing nicely with other processes on the box. If there's nothing else on the box then you /want/ to use all of the available CPU resources: otherwise you're just wasting time. However, if there are other things running then you want to defer to them (within reason), which means giving your process a lower priority (i.e. a /higher/ nice value) than other processes. Processes have a nice value of 0 by default, so just make it bigger (say +10). You have to be root to give a process negative niceness.
The Operation Queues section of the Concurrency Programming Guide describes the process for changing the priority of an NSOperation:
Changing the Underlying Thread Priority
In OS X v10.6 and later, it is possible to configure the execution priority of an operation’s underlying thread. Thread policies in the system are themselves managed by the kernel, but in general higher-priority threads are given more opportunities to run than lower-priority threads. In an operation object, you specify the thread priority as a floating-point value in the range 0.0 to 1.0, with 0.0 being the lowest priority and 1.0 being the highest priority. If you do not specify an explicit thread priority, the operation runs with the default thread priority of 0.5.
To set an operation’s thread priority, you must call the setThreadPriority: method of your operation object before adding it to a queue (or executing it manually). When it comes time to execute the operation, the default start method uses the value you specified to modify the priority of the current thread. This new priority remains in effect for the duration of your operation’s main method only. All other code (including your operation’s completion block) is run with the default thread priority. If you create a concurrent operation, and therefore override the start method, you must configure the thread priority yourself.
Having said that, I'm not sure how much of a performance difference you'll really see from adjusting the thread priority. For more dramatic performance changes, you may have to use timers, sleep/suspend the thread, etc.
If you're scanning the file system looking for changed files, you might want to refer to the File System Events Programming Guide for guidance on lightweight techniques for responding to file system changes.