I used MATLAB to write a simulation engine for product flows in a production environment. I derived all of my classes from handle and used these handles (quite excessively, I guess) to link between e.g. products, work systems, orders, etc.
Now, to run multiple instances of my model, I create a simulation object that contains all other objects and their relations, run the model and free the simulation variable.
Creating and running the model takes ~50 seconds (this includes the generation of all objects and their relations, and of course the calculation over the course of the simulation run). Freeing the variable before the next run currently takes ~3-4 minutes!
I tried clear, delete and plain overwriting of the old simulation object, without noticing significant differences in performance.
Is there a way to improve the performance without rewriting the code?
It is hard to say anything particular about your code without seeing it, or at least some high level design.
Some short advice before optimizing the OO aspects:
Are you sure that the bottleneck is in the object creation and destruction? Verify it with the profiler.
If the OO is indeed the bottleneck, here are some guesses:
You have used circular references. MATLAB does not use a garbage collector, but rather a smart reference-counting mechanism, which can be quite slow in this case. Change the references between the objects to be tree-like instead (see the first sketch below).
You have created an enormous number of objects. MATLAB has a significant per-object overhead, much more than traditional languages (C++, Java). Redesign the system to use a smaller number of objects.
Do you happen to use cell arrays to store other handle objects from within a handle object? This can cause serious slowdowns prior to MATLAB R2011a. See http://www.mathworks.com/support/solutions/en/data/1-6VVMS0/index.html?product=ML
A workaround is to use a temporary local variable to manipulate the cell array, then assign this temp variable back to your handle object's property, as in the second sketch below. I saw a ~100x improvement in performance after doing this in one case.
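To illustrate the first guess, here is a hedged sketch of circular versus tree-like references (the class and property names are invented, and each classdef lives in its own file):

    % Circular: Order keeps a back-reference to the WorkSystem that owns it,
    % so the two handle objects reference each other and every teardown has
    % to untangle the cycle.
    classdef Order < handle
        properties
            workSystem   % back-reference up the tree: circular
        end
    end

    % Tree-like: drop the back-reference and keep plain data only; pass the
    % parent in explicitly where it is needed, e.g. process(order, workSystem).
    classdef OrderFlat < handle
        properties
            id   % no reference back up the tree
        end
    end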
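And for the cell-array guess, a minimal sketch of the temp-variable workaround (obj, items, n, and newItem are hypothetical names):

    % Slow before R2011a: every iteration writes through the handle object.
    %   for k = 1:n, obj.items{end+1} = newItem(k); end

    % Workaround: grow a local temp copy, then assign back once.
    tmp = obj.items;
    for k = 1:n
        tmp{end+1} = newItem(k); %#ok<AGROW>
    end
    obj.items = tmp;   % single write back to the handle object's property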
Some Vulkan objects (e.g. vkPipelines, vkCommandBuffers) can be created/allocated in arrays (using size + pointer parameters). At a glance, this appears to be done to make it easier to code common usage patterns. But in some cases (e.g. when creating a C++ RAII wrapper), it's nicer to create them one at a time. It is, of course, simple to achieve this.
However, I'm wondering whether there are any significant downsides to doing this?
(I guess this may vary depending on the actual object type being created - but I didn't think it'd be a good idea to ask the same question for each object)
Assume that, in both cases, objects are likely to be created in a first-created-last-destroyed manner, and that - while the objects are individually created and destroyed - this will likely happen in a loop.
Also note:
vkCommandBuffers are also deallocated in arrays.
vkPipelines are destroyed individually.
Are there any reasons I should modify my RAII wrapper to allow for array-based creation/destruction? For example, will it save memory (significantly)? Will single-creation reduce performance?
Remember that vkPipeline creation does not require external synchronization. That means that the implementation is going to handle its own mutexes and so forth internally. As such, it makes sense to avoid locking those internal mutexes whenever possible.
Also, pipeline creation is slow. So being able to batch it up and execute it on another thread is very useful.
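To illustrate the batching, a hedged sketch (error handling elided; the device and the filled create-info structs are assumed to already exist):

    #include <vulkan/vulkan.h>
    #include <vector>

    // One call creates all pipelines; the implementation can lock its
    // internal mutexes once and may build the pipelines on worker threads.
    std::vector<VkPipeline> createPipelines(
            VkDevice device,
            const std::vector<VkGraphicsPipelineCreateInfo>& createInfos) {
        std::vector<VkPipeline> pipelines(createInfos.size());
        vkCreateGraphicsPipelines(device, VK_NULL_HANDLE,
                                  static_cast<uint32_t>(createInfos.size()),
                                  createInfos.data(), nullptr, pipelines.data());
        return pipelines;
    }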
Command buffer creation doesn't have either of these concerns. So there, you should feel free to allocate whatever CBs you need. However, multiple creation will never harm performance, and it may help it. So there's no reason to avoid it.
Vulkan is an API designed around modern graphics hardware. If you know you want to create a certain number of objects up front, you should use the batch functions if they exist, as the driver may be able to optimize creation/allocation, resulting in potentially better performance.
There may (or may not) be better performance, depending on the driver and the type of your workload. But there is clearly potential for better performance.
If you create one or ten command buffers in your application, then it does not matter.
In most cases the difference will be less than 5%. So if you do not care about that (e.g. your application already runs at 500 FPS), then it does not matter.
Then again, C++ is a versatile language. I think this is a non-problem. You would simply have a static member function, or a factory class, that constructs/initializes N objects (there's probably a pattern name for that).
The destruction may be trickier. You can again have a static member function that destroys N objects, but it would not be called automatically, and it is annoying to have null/husk objects around. The destructor would also still be called on objects holding VK_NULL_HANDLE. There is also the problem that a pool reset or destruction would invalidate all the command buffer C++ objects at once, so there's probably no way to do it cleanly/simply.
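A sketch of that static-member idea for command buffers (the CommandBuffer class and its layout are hypothetical, not from the question's code; error handling elided):

    #include <vulkan/vulkan.h>
    #include <vector>

    class CommandBuffer {
    public:
        // Batch-allocate n command buffers in one API call, then wrap
        // each raw handle in its own RAII object.
        static std::vector<CommandBuffer> allocate(VkDevice device,
                                                   VkCommandPool pool,
                                                   uint32_t n) {
            std::vector<VkCommandBuffer> raw(n);
            VkCommandBufferAllocateInfo info{};
            info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
            info.commandPool = pool;
            info.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
            info.commandBufferCount = n;
            vkAllocateCommandBuffers(device, &info, raw.data());

            std::vector<CommandBuffer> wrapped;
            wrapped.reserve(n);
            for (VkCommandBuffer h : raw)
                wrapped.push_back(CommandBuffer(device, pool, h));
            return wrapped;
        }

        CommandBuffer(CommandBuffer&& o) noexcept
            : device_(o.device_), pool_(o.pool_), handle_(o.handle_) {
            o.handle_ = VK_NULL_HANDLE;   // the moved-from "husk" holds a null handle
        }

        ~CommandBuffer() {
            if (handle_ != VK_NULL_HANDLE)   // destructor tolerates husks
                vkFreeCommandBuffers(device_, pool_, 1, &handle_);
        }

    private:
        CommandBuffer(VkDevice d, VkCommandPool p, VkCommandBuffer h)
            : device_(d), pool_(p), handle_(h) {}

        VkDevice device_;
        VkCommandPool pool_;
        VkCommandBuffer handle_;
    };

Note that a pool reset would still invalidate every wrapper at once, which is exactly the cleanliness problem described above.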
TLDR summary: (a) Should I include (lengthy) method code in classes which may spawn multiple objects at runtime, (b) does doing so cause memory usage bloat, (c) if so should I "outsource" the code to a class that is loaded only once and have the class methods call that, or alternatively (d) does the code get loaded only once with the object definition anyway and I'm worrying about nothing?
I don't know whether there's a good answer to this but if there is I haven't found it yet by searching in the usual places.
In my VB.Net (2010, if it matters) WinForms project I have about a dozen classes in an object model. Some of these are pretty simple and do little more than act as data-storage repositories. The ones further up the object model, however, have an increasing number of methods. There can be a significant number of higher-level objects in use, though the exact number is runtime-dependent, so I can't be more precise than that.
As I was writing the method code for one of the top level ones I noticed that it was starting to get quite lengthy.
Memory optimisation is something of a lost art given how much memory the average PC has these days, but I don't want to make my application a resource hog. So my questions, for anyone who knows .Net way better than I do (of which there will be many), are:
Is the code loaded into memory with each instance of the class that's created?
Alternatively is it loaded only once with the definition of the class, and all derived objects just refer to that definition? (I'm not really sure how that could be possible given that, for example, event handlers can be assigned dynamically, but no harm asking.)
If the answer to the first one is yes, would it be more efficient to write the code in a "utility" object which is loaded only once and called from the real class' methods?
Any thoughts appreciated.
Go with whichever will give you the easier codebase to maintain (shorter methods, etc.). That is the more important cost with anything of increasing complexity.
Memory optimization is only a problem if it's a problem. A dozen classes is really nothing; when you have hundreds of instances of hundreds of classes, then it may become a problem.
The short answer, it doesn't matter. Your data is stored in memory but your code is loaded only once.
EDIT: I guess I need a longer answer.
If you have 10 instances of a class, the variables that are part of each instance all take up their own memory space. So if you have 10 properties and variables per class, that means you have 100(ish) items in memory. As for your code, it was loaded just once with your assembly. If you create 10 instances of your class, your code is not in memory 10 times.
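A tiny sketch to make that concrete (the Widget class is invented):

    ' Each Widget instance gets its own copy of the fields below, but the
    ' compiled body of Describe() exists once per loaded assembly, no
    ' matter how many instances you create.
    Public Class Widget
        Public Name As String      ' per-instance storage
        Public Size As Integer     ' per-instance storage

        Public Function Describe() As String
            Return Name & ": " & Size.ToString()
        End Function
    End Class

So moving the method bodies into a separate "utility" class would not save memory; the bodies already exist only once.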
I'm just learning to program in Scala.
I have some experience in functional programming, as I have in object oriented programming.
My question is kind of simple, yet tricky:
Which structures should be used in Scala? Should we stick only to immutables, e.g. modifying a list by iterating through it and sticking a new one together, or go for mutables? What is your opinion on that, and what are the performance and memory-related aspects?
I tend to program in a functional style, but it often takes an insane amount of effort to do things that are easily done with mutables. Is which to use situation-dependent?
Prefer immutable to mutable state. Use mutable state only where it is absolutely necessary. Some notable reasons include:
Performance. The standard libraries make wide use of vars and while loops, even though this is not idiomatic Scala. This should not be emulated, however, except for cases where profiling has shown that making the code more imperative brings a significant performance gain (see the sketch after this list).
I/O. Interacting with the outside world is inherently state-dependent, and thus must be dealt with in a mutable manner.
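For a feel of the performance point, a hedged sketch of the imperative style the standard library uses in hot paths (not something to emulate without profiling):

    // Idiomatic: an immutable fold.
    def sumIdiomatic(xs: Array[Int]): Int = xs.foldLeft(0)(_ + _)

    // Imperative: var + while, trading clarity for speed.
    def sumImperative(xs: Array[Int]): Int = {
      var i = 0
      var acc = 0
      while (i < xs.length) {
        acc += xs(i)
        i += 1
      }
      acc
    }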
This is no different than the recommended coding style found in all major languages, imperative or functional. For example, in Java it is preferable to use data objects with only private final fields. Code written in an immutable (and functional) way is inherently easier to understand because when one sees a val, they know it will never change, reducing the possible number of states any particular object or function can be in.
In many cases, it also allows automatic parallel execution; for example, collection classes in Scala all have a par method, which returns a parallel collection that automatically runs calls to functions like map or reduce in parallel.
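For instance, a minimal sketch (assuming a pre-2.13 Scala standard library, where .par is built in):

    val xs = (1 to 1000000).toVector

    // Sequential map: runs on one thread.
    val squares = xs.map(x => x * x)

    // Parallel map: .par returns a parallel collection, and the same
    // call is then distributed across a thread pool automatically.
    val parSquares = xs.par.map(x => x * x)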
(I thought this must be a duplicate but couldn't easily find an earlier similar one, so I venture to answer...)
There is no general answer to this question. The rule of thumb suggested by the creators of Scala is to start with immutable vals and structures and stick to them as long as it makes sense. You can almost always create a workable solution to your problem this way. But if not, of course be pragmatic and use mutability.
Once you have a solution, you can tweak it, test it, measure its performance, and so on. If you find that it is, for example, too slow or overly complex, identify the critical part, understand what makes it problematic, and, if needed, reimplement it using mutable variables, ideally keeping the mutability isolated from the rest of the program. Note, though, that in many cases a better solution can be found within the immutable realm as well, so try looking there first. Especially for a beginner like myself, it still happens regularly that the best solution I can come up with looks contorted and complex with no apparent way to improve it, until I see a simple and elegant solution to the same problem in a few lines of code, written by an experienced Scala developer who commands more of the power of the language and its libraries.
I usually obey the following rules (a short sketch follows the list):
Never use static mutable vars
Keep all user defined data types (typically case classes) immutable unless they are very expensive to copy. This will simplify a lot of the application logic.
If a data structure/collection is inherently mutable (i.e. it's designed to change over time), using a mutable data structure/collection might be appropriate. An example might be a large game world that is updated when players move. Remember to (almost) never share these data structures between threads though.
It's fine to use mutable local vars in methods
Use immutable collections for function results. These can be strictly or lazily evaluated, depending on what gives the best performance in the context where they're used. Be careful if you use a lazily evaluated result which depends on a mutable collection, though.
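A small sketch of rules 2 and 4 (the names are illustrative):

    // Rule 2: an immutable case class; "updates" return a modified copy.
    case class Player(name: String, score: Int) {
      def addPoints(p: Int): Player = copy(score = score + p)
    }

    // Rule 4: a mutable local var inside a method is fine, because the
    // mutation never escapes the method's scope.
    def totalScore(players: Seq[Player]): Int = {
      var sum = 0
      for (p <- players) sum += p.score
      sum
    }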
Why should someone ever use the non-mutable equivalents of the NSMutable data structures in Objective-C? Is it for situations when you need a constant object that should not be modified? Does using non-mutable classes improve performance in any way? Any other situations?
The two main reasons off the top of my head:
An object returning a property can be certain nobody will alter it if it's immutable. The object can therefore return the original instead of making copies all the time. So it's a memory and performance benefit.
When writing your own immutable objects, it's very easy to be thread safe. That naturally flows into being able to write multi-threaded functional-style code which is reasonably efficient and error free.
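A hedged sketch of the first point (the Roster class and its ivar are invented for illustration):

    #import <Foundation/Foundation.h>

    @interface Roster : NSObject
    - (NSArray *)names;   // callers receive an immutable array
    @end

    @implementation Roster {
        NSArray *_names;  // immutable backing store
    }

    - (NSArray *)names {
        // NSArray cannot be mutated, so it is safe to hand back the
        // original; no defensive copy is needed on each access.
        return _names;
    }
    @end

If the backing store were an NSMutableArray instead, the getter would have to return [_names copy] on every call to keep callers from seeing later mutations.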
You also tend to see arguments that the inherent preservation of the original value is useful, especially in terms of semantics and design patterns.
Immutable classes don't tend to be much more efficient in and of themselves with one exception — if you take an immutable copy of a mutable array, for example, then it's clear exactly how much storage is needed and exactly that much can be allocated. Because memory allocation costs time, mutable collections tend to keep some spare storage around because they can't predict how they're going to grow.
const is not directly related to non-mutable objects; I'm more familiar with the latter, so that's what I'll talk about.
A non-mutable object is like a reservation. Imagine that you work at a busy restaurant that only works on a reservation basis: all guests must make a reservation. When someone calls and makes a reservation for eight people at six, you know that you'll be expecting eight people at six. Of course, this keeps things predictable. You know to set out one table that can seat eight people (it wouldn't make sense to use more than one table, especially at a busy restaurant). You notify the kitchen and tell them to expect eight orders a few minutes after six (okay, maybe you won't, but you might as well). In this way, everything runs smoothly and there are no delays. When the party of eight arrives promptly at six (because everyone in this world is perfectly punctual), you lead them right over to their seats, they order, and they enjoy their meal. No problems whatsoever.
A problem arises if the reservation never specifies the number of people or the time. Imagine someone calls and tells you to expect a group of people for dinner. In this case, you have no information. A group could be a couple on a date, a four-person family, or two dozen people for a corporate function. They might arrive late because they were at a movie, really early because they have a young child, or at different times because it was impossible to coordinate everyone. In this case, you would have to scramble to find seating for everyone, and the kitchen might suddenly be swamped with a large number of orders. Or you could have blocked off too many seats, and the kitchen might find itself with nothing to do. In either case, whether you over-estimate or under-estimate, there are delays and lost potential. Anything could happen.
In this metaphor, the restaurant is the runtime system and the reservations are the objects. In the first scenario, you have a non-mutable object, like an NSArray. The system knows how much data it'll hold, how many elements there are, and, by runtime, what type they are. The system knows that the size won't change, so it can lay out memory snugly around that array without leaving any precautionary padding. Everything runs smoothly because everything is known.
By contrast, nothing is known with an NSMutableArray. The user might add more elements, so the system has to scramble to find more RAM rather than using those same clock cycles to crunch some operation; the user might replace an element in the middle with a larger one, forcing all the later elements to be shifted, which involves copying every element after it. In certain cases, it could involve copying all the elements of the array or string or whatever to a new location, a (potentially) expensive operation. This can impose a significant performance overhead, especially when you use a lot of them. In Java, for example, concatenating strings involves copying the entire existing string to a new memory location and leaving the garbage collector to deal with the old string.
Another compelling reason is that immutability makes it a bit harder to change the data. Users (of the class) have to explicitly make a mutable copy, which helps to ensure that they know what they're doing. This advantage is particularly notable with multiple threads: you don't want to pass a mutable object to something that's running on a background thread, because the foreground thread (or any other) could then be modifying the object while the background thread is reading it, leading to very interesting results.
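As a hedged sketch of that last point (processItems: is a hypothetical method):

    // Hand the background thread an immutable snapshot rather than the
    // live mutable array, so later edits on this thread cannot race it.
    NSArray *snapshot = [mutableItems copy];
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        [self processItems:snapshot];
    });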
When working on the early stages of a console-based Python remake of the classic game 'Snake', someone submitted a patch to spawn food at random locations. The code defined a Food class which worked fine, but the logic behind it seemed a little weird.
I think we should delete the food once it's been consumed, then create another one. However, this person simply moves the food to a new random location once it's been consumed. While the latter seems illogical to me, it seems to do the exact same thing, maybe even more efficiently.
My question is: would it be better to use the former logic, or the latter, or am I simply nit-picking over nothing?
This all started at: https://bugs.launchpad.net/snakes-game/+bug/628180
Either is fine - within certain common-sense boundaries.
The latter approach will save re-allocating the object, so recycling it in this way will be more efficient - the gain is likely to be irrelevant in your particular example though unless heap fragmentation is a concern (e.g. on an embedded app with very limited RAM).
The danger with recycling is that the object may retain some vestige of its former state, so may not behave in the same manner as a new object would - in your case the logic is simple, so there is little danger, but with more complex objects this could become significant.
So in general I'd suggest the "create a new object" approach (it follows the principle of "least surprise", and will be less likely to confuse other programmers who come to work on the code) unless there are performance implications (e.g. on an embedded application like a phone where you have very limited resources and don't want a fragmented heap), in which case the "re-use an existing object" may be a smart solution.
I believe both solutions are fine. Relocating the food to another location is probably less error-prone in memory-management terms, but thanks to garbage collection you shouldn't care about that too much.
I'd argue that, while instantiating a new food object is more logical and closer to the real-life model, relocating is more efficient.
The main issue as far as OOP is concerned isn't so much whether the food re-instantiates vs. relocates, but rather that this behavior remain transparent outside of the object. The game engine should be telling the object "you've been eaten" and such, but how the object handles that internally shouldn't be known to the game engine. If, internally, the object maintains a singleton of "food" and the "consume" method simply re-forms the food object with new values, that's fine. That's all internal to the implementation of "food" and just shouldn't be known outside of that class.
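A minimal Python sketch of that encapsulation (the names and grid size are illustrative, not taken from the actual patch):

    import random

    class Food:
        """The engine only ever calls consume(); whether the object is
        recycled or replaced is an internal detail of this class."""

        def __init__(self, width, height):
            self._width, self._height = width, height
            self._relocate()

        def consume(self):
            # Internal choice: recycle this object by moving it. Swapping
            # this for creating a fresh Food would be invisible to the
            # engine as long as the interface stays the same.
            self._relocate()

        def _relocate(self):
            self.x = random.randrange(self._width)
            self.y = random.randrange(self._height)

    # Engine side: report the event; don't manage the food's internals.
    food = Food(80, 24)
    food.consume()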