Factors affecting the performance, safety and security of object-oriented programs?

How does object-oriented programming improve the performance, safety and security of programs, and what factors affect their performance and security?

OOP doesn't improve performance per se. In fact, a long-standing criticism is that OOP increases overall resource consumption; it has always been a trade-off between performance/optimization and productivity/maintainability.
In terms of security, while procedural programs can certainly be secure, the advent of OOP and the increased reusability that comes with it has spread good, reusable coding practices. At the end of the day, a program that can be seamlessly maintained, is built on top of reusable patterns and is developed with good security practices provides an easier foundation for detecting and fixing security holes.
In summary, OOP doesn't provide any direct advantage in the areas you're asking about, but it does provide a solid foundation for writing better code in most business cases. That's all.

Related

How do CRUD-oriented data abstractions in interfaces introduce operational and semantic coupling?

I am going through the book "Patterns for API Design: Simplifying Integration with Loosely Coupled Message Exchanges" and came across this paragraph, which I am having a hard time understanding:
Some software engineering and object-oriented analysis and design
(OOAD) methods balance processing and structural aspects in their
steps, artifacts, and techniques; some put a strong emphasis on either
computing or data. Domain-driven design (DDD) [Evans 2003; Vernon
2013], for instance, is an example of a balanced approach.
Entity-relationship diagrams focus on data structure and relationships
rather than behavior. If a data-centric modeling and API endpoint
identification approach is chosen, there is a risk that many CRUD
(create, read, update, delete) APIs operating on data are exposed,
which can have a negative impact on data quality because every
authorized client may manipulate the provider-side data rather
arbitrarily. CRUD-oriented data abstractions in interfaces introduce
operational and semantic coupling.
There is actually a lot I do not understand in this, but in particular I am having difficulty with this part:
CRUD-oriented data abstractions in interfaces introduce
operational and semantic coupling.
I know what CRUD is, but what do "data abstractions" mean in this context? How do they relate to the endpoint?
What are operational and semantic coupling?
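To make the quoted terms concrete, here is a minimal, hypothetical sketch (the types and method names are mine, not from the book): a CRUD-shaped interface exposes raw data that any client may mutate, while a task-oriented interface names the business intent and keeps the rules on the provider side.

    #include <string>

    struct Order { std::string id; std::string status; double amount_paid; };

    // CRUD-oriented data abstraction: clients read and write the raw Order.
    // Every authorized client can mutate provider-side data rather arbitrarily
    // and must itself know the valid status transitions.
    struct OrderCrudApi {
        virtual ~OrderCrudApi() = default;
        virtual Order read(const std::string& id) = 0;
        virtual void update(const Order& order) = 0;
    };

    // Task-oriented alternative: the operation names the intent, the provider
    // owns the rules, and the data shape can evolve independently of clients.
    struct OrderTaskApi {
        virtual ~OrderTaskApi() = default;
        virtual void cancelOrder(const std::string& id) = 0;
        virtual void recordPayment(const std::string& id, double amount) = 0;
    };

With the CRUD variant, every client is tied to the exact sequence of read/update calls needed to perform a task (operational coupling) and to the layout and meaning of the Order data itself (semantic coupling); the task-oriented variant hides both behind the operation name.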

When should we prefer hard coupling over loose coupling?

With all the hype about loose coupling (with reason) and all the information I've read, it's easy to get obsessed, to the point of becoming religious about it. I've been wondering: when is hard coupling favorable over loose coupling? When is an object a candidate for hard coupling? I think that, depending on the situation, it could reduce some complexity.
Some examples would be appreciated.
Advantages of hard/tight coupling
There are a few scenarios where tight coupling might leave you better off. They typically have to do with speed of development and the complexity of the program.
Very Small Projects/Proof of Concepts
In the short term, loose coupling increases development time. Theoretically, every Service/Client pair needs an extra interface created for it as well, whereas with hard coupling you wouldn't even worry about dependencies. In my experience, many projects in school were small enough that tightly coupled classes were fine, especially when you would turn in the assignment and never have to touch it again.
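As a rough illustration of that extra ceremony (the class and interface names are made up for the example), compare a tightly coupled class that just uses a concrete collaborator with a loosely coupled version that needs an interface plus injection:

    #include <iostream>
    #include <memory>
    #include <string>

    // Tightly coupled: the printer depends on one concrete logger directly.
    // Minimal ceremony, but you can't swap the logger without editing this class.
    struct ConsoleLogger {
        void log(const std::string& msg) { std::cout << msg << '\n'; }
    };

    struct TightReportPrinter {
        ConsoleLogger logger;                      // fixed dependency
        void print() { logger.log("report"); }
    };

    // Loosely coupled: the same feature now needs an interface, an
    // implementation and constructor injection: more flexible, more parts.
    struct ILogger {
        virtual ~ILogger() = default;
        virtual void log(const std::string& msg) = 0;
    };

    struct LooseConsoleLogger : ILogger {
        void log(const std::string& msg) override { std::cout << msg << '\n'; }
    };

    struct LooseReportPrinter {
        explicit LooseReportPrinter(std::unique_ptr<ILogger> l) : logger(std::move(l)) {}
        void print() { logger->log("report"); }
        std::unique_ptr<ILogger> logger;
    };

For a throwaway assignment the first version is all you need; the second only pays off once you actually have a reason to substitute the dependency.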
Keeping things simple
Loosely coupled systems often contain more components, which adds complexity to a program. For bigger programs the complexity is definitely worth it, because expanding or modifying the program will be far easier and cause fewer bugs. However, for small programs, or programs that are 99% unlikely to change, the extra time spent programming to abstractions and removing dependencies may simply not be worth it. Analysis paralysis is also more likely when designing a program this way.
Concluding Remarks
As a whole, I would never suggest that "tightly coupled" classes are better. However, in the interest of development time and simplicity, "tighter coupled" classes can sometimes be justified.

Does OO programming conflict with low-latency applications?

I just wondered whether OO programming can cause problems for those who design low-latency applications. Do those who write low-latency code use OO programming? It would be interesting to know whether the extensibility of OO programming trades against the speed of the code.
For instance, I have read that virtual functions in C++ are a big 'no-no' for low-latency programmers.
The vast majority of code is not CPU bound. CPUs are ridiculously fast, and spend most of their time waiting passively for IO to catch up. So in most sane cases I would say "no".
It is possible that you have an application that does everything in memory, has perfect loop structures, etc., and in this mythical beast you might eke out a tiny bit more performance by forsaking virtual functions. But that is a pretty extreme scenario.
It will basically depend on the context. Generally this is simply not an issue; Java and C# are themselves pretty well optimised (in the desktop/server versions at least) and are not massively slow.
No. Compared to other costs like memory locality and cache coherence, algorithm design, and the language base (interpreter, VM, native), a few virtual functions are a quite trivial cost. OO principles like encapsulation are wholly compile-time constructs, and abstraction can be achieved for free using generic programming. The exceptions are if you go totally overboard with virtual functions (which is sometimes unavoidable in languages that simply don't have generic programming) or if you call one in an incredibly tight loop; in the latter case you likely won't find a cheaper alternative anyway.
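As a rough sketch of that last point (the Shape/Circle names are illustrative, not from the question): runtime polymorphism pays an indirect call per invocation, while the generic version fixes the type at compile time so the call can be inlined.

    #include <vector>

    // Runtime polymorphism: each area() call is an indirect (virtual) call.
    struct Shape {
        virtual ~Shape() = default;
        virtual float area() const = 0;
    };

    struct Circle final : Shape {
        float r;
        explicit Circle(float radius) : r(radius) {}
        float area() const override { return 3.14159265f * r * r; }
    };

    float total_area_virtual(const std::vector<const Shape*>& shapes) {
        float sum = 0.0f;
        for (const Shape* s : shapes) sum += s->area();   // vtable dispatch
        return sum;
    }

    // Generic programming: the concrete type is known at compile time, so the
    // call can be inlined and the loop is a candidate for vectorisation.
    template <typename ShapeT>
    float total_area_generic(const std::vector<ShapeT>& shapes) {
        float sum = 0.0f;
        for (const ShapeT& s : shapes) sum += s.area();
        return sum;
    }

In most programs the difference between the two is lost in the noise next to cache misses, which is exactly the answer's point.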
Whoever told you that programmers who are concerned with performance in C++ do not use virtual functions was wrong.

Resources for evidence-based development practices

I am interested in studies and papers detailing trials that explore the evidence for different development practices in object-oriented languages. I am particularly keen on studies that measure productivity or consider the influence of modern IDEs. Can you recommend any good resources for this? Has much work been done in this area of late?
For better or worse, empirically-driven productivity metrics are synonymous with Agile these days.
One resource that looks interesting is (shockingly) the Agile Alliance research paper list:
http://www.agilealliance.org/index.php/download_file/view/18/
It appears as though this is an ongoing research area.

Basic skills to work as an optimiser in the gaming industry [closed]

I'm curious about a certain job title, that of "senior developer with a specialty in optimisation". It's not the actual title, but that's essentially what it would be. What would this mean in the gaming industry in terms of knowledge and skills? I would assume basic stuff like:
B-trees
Path finding
Algorithmic analysis
Memory management
Threading (and related topics like thread safety, atomicity, etc)
But this is only me conjecturing. What would be the real-life (and academic) basic knowledge required for such a job?
I interviewed for such a position a few years ago at one of the Big North American game studios.
The job required a lot of deep-pipeline assembly programming, arithmetic optimization tricks (think Duff's Device, branchless ifs), compile-time computation, SWAR, template metaprogramming, and computation of many values at once in parallel in very large registers (I forget the name for that)... You'll need to be solid on operating system fundamentals, low-level system operations, linear algebra, and C++, especially templates. You'll also become very familiar with the peculiar architecture of the PlayStation 3, and probably be involved in developing libraries for that environment that the company's game teams will build on top of.
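For a flavour of the "branchless ifs" mentioned above, here is a minimal sketch (the function names are made up): replacing a data-dependent branch with bit arithmetic so a deep pipeline never has to predict it.

    #include <cstdint>

    // With a branch: cheap when predictable, costly when the predictor misses.
    int32_t clamp_to_zero_branchy(int32_t x) {
        return (x < 0) ? 0 : x;
    }

    // Branchless: turn the sign bit into a mask and AND the value with it.
    // (Arithmetic right shift of a negative value is implementation-defined
    // before C++20, but behaves this way on the two's-complement targets
    // game consoles actually use.)
    int32_t clamp_to_zero_branchless(int32_t x) {
        int32_t mask = ~(x >> 31);   // 0x00000000 if x < 0, 0xFFFFFFFF otherwise
        return x & mask;
    }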
Generally I concur with Ether's post; this will typically be more about low-level optimisation than algorithmic stuff. Knowing good algorithms comes in handy, but there are many cases in games where you prefer the O(N) solution over the O(log N) solution because the former is far friendlier to the cache and requires less memory management. So you need a more holistic knowledge.
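A hedged illustration of that trade-off (the container choice and the Entity struct are invented for the example): a linear scan over a contiguous array versus a tree lookup that is asymptotically better but chases pointers to separately allocated nodes.

    #include <cstdint>
    #include <map>
    #include <vector>

    struct Entity { uint32_t id; float x, y; };

    // O(N), but one contiguous, prefetch-friendly pass over a flat array.
    const Entity* find_linear(const std::vector<Entity>& entities, uint32_t id) {
        for (const Entity& e : entities)
            if (e.id == id) return &e;
        return nullptr;
    }

    // O(log N), but every comparison follows a pointer to a separately
    // allocated node, which is unfriendly to the cache.
    const Entity* find_tree(const std::map<uint32_t, Entity>& entities, uint32_t id) {
        auto it = entities.find(id);
        return it != entities.end() ? &it->second : nullptr;
    }

For the entity counts typical of a per-frame lookup, the flat scan often wins in wall-clock time; measuring with a profiler, as the list below suggests, is the only way to know for sure.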
Perhaps on a more general level, the job may want to know if you can do some or all of the following:
use a CPU profiler (eg. VTune, CodeAnalyst) in both sampling and call graph mode;
use graphical profilers (eg. Microsoft Pix, NVPerfHud);
write your own profiling/timer code and generate useful output with it;
rewrite functions to remove dynamic memory allocations;
reorganise and reduce data to be more cache-friendly;
reorganise data to make it more SIMD-friendly (a sketch of both reorganisations follows this list);
edit graphics shaders to use fewer and cheaper instructions;
...and more, I'm sure.
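Here is a minimal sketch of those two reorganisation items (the particle fields are invented for the example): converting an array-of-structs into a struct-of-arrays so a hot loop only streams through the fields it actually touches, which also makes it easy for the compiler to vectorise.

    #include <cstddef>
    #include <vector>

    // Array-of-structs: updating positions also drags velocities and lifetimes
    // through the cache, wasting bandwidth.
    struct ParticleAoS { float x, y, z; float vx, vy, vz; float lifetime; };

    // Struct-of-arrays: each field is contiguous, so the position update below
    // reads exactly the data it needs and maps cleanly onto SIMD lanes.
    struct ParticlesSoA {
        std::vector<float> x, y, z;
        std::vector<float> vx, vy, vz;
        std::vector<float> lifetime;
    };

    void integrate(ParticlesSoA& p, float dt) {
        for (std::size_t i = 0; i < p.x.size(); ++i) {
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
            p.z[i] += p.vz[i] * dt;
        }
    }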
This is a lot like my job actually. Real-life knowledge that would be practical for this:
Experience in using profilers of all kinds to locate bottlenecks.
Experience and skill in determining the reason those bottlenecks exist.
Good understanding of CPU caches, virtual memory, and common bottlenecks such as load-hit-store penalties, L2 misses, floating point code, etc.
Good understanding of multithreading and both lockless and locking solutions.
Good understanding of HLSL and graphics programming, including linear algebra.
Good understanding of SIMD techniques and the specific SIMD interfaces on relevant hardware (paired singles, VMX, SSE/MMX); a short SSE sketch follows this list.
Good understanding of the assembly language used on relevant hardware. If writing assembly, a good understanding of instruction pairing, branch prediction, delay slots (if applicable), and any and all applicable stalls on the target platform.
Good understanding of the compilation and linking process, binary formats used on the target hardware, and tools to manipulate all of the above (including available compiler flags and optimizations).
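As a small example of the SSE item above (the function name, and the assumption that n is a multiple of 4, are mine), here is the classic four-lanes-at-a-time array addition written with SSE intrinsics:

    #include <cstddef>
    #include <xmmintrin.h>   // SSE intrinsics

    // Assumes n is a multiple of 4 to keep the sketch short; a real version
    // would finish the remainder with a scalar tail loop.
    void add_arrays_sse(const float* a, const float* b, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);   // load 4 floats (unaligned-safe)
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));   // 4 adds in one op
        }
    }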
Every once in a while people ask how to become good at low-level optimization. There are a few good sources of info, mostly proprietary, but I think it generally comes down to experience.
This is one of those "if you got it you know it" type of things. It's hard to list out specifics, and some studios will have different criteria than others.
To put it simply, the 'Senior Developer' part means you've been around the block; you have multiple years of experience in which you've excelled and have shipped games. You should have a working knowledge of a wide range of topics, with things such as memory management high up the list.
"Specialty in Optimization" essentially means that you know how to make a game run faster. You've already spent a significant amount of time successfully optimizing games which have shipped. You should have a wide knowledge of algorithms, 3d rendering (a lot of time is spent rendering), cpu intrinsics, memory management, and others. You should also typically have an in depth knowledge of the hardware you'd be working on (optimizing PS3 can be substantially different than optimizing for PC).
This is at best a starting point for understanding. The key is having significant real world experience in the topic; at a senior level it should preferably be from working on titles that have shipped.