What is the relationship between code coupling, cohesion and fragility?

I am trying to understand code coupling, cohesion and fragility, and I wanted to check my current understanding. So far I have come up with the following conclusions:
low coupling = high cohesion = low fragility
and vice-versa:
high coupling = low cohesion = high fragility
When I use "=" I mean that achieving any one of the three also achieves the other two. Is this true, or are there any exceptions? An example with facts would be most useful.
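To make my question concrete, here is a minimal, made-up sketch (the class names and rules are invented) of the kind of relationship I mean: a class with low cohesion tends to be tightly coupled and fragile, and splitting it seems to improve all three at once.

    # Low cohesion: one class mixes storage, formatting and tax rules,
    # and is tightly wired to its own concrete storage, so a change in
    # any one concern can break the others (fragility).
    class OrderManager:
        def __init__(self):
            self.storage = []                      # hard-wired storage

        def save(self, order):
            self.storage.append(order)

        def render_invoice(self, order):
            return f"Invoice total: {order['total']:.2f}"

        def compute_tax(self, order):
            return order["total"] * 0.2            # tax rule baked in

    # Higher cohesion, lower coupling: each class has one reason to change,
    # and the storage dependency is injected instead of constructed.
    class OrderRepository:
        def __init__(self, storage):               # anything with .append()
            self.storage = storage

        def save(self, order):
            self.storage.append(order)

    class TaxCalculator:
        def __init__(self, rate=0.2):
            self.rate = rate

        def compute(self, order):
            return order["total"] * self.rate

    repo = OrderRepository(storage=[])
    tax = TaxCalculator()
    order = {"total": 100.0}
    repo.save(order)
    print(tax.compute(order))                      # 20.0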

Related

How do CRUD-oriented data abstractions in interfaces introduce operational and semantic coupling?

I am going through the book "Patterns for API Design: Simplifying Integration with Loosely Coupled Message Exchanges" and came across this paragraph, which I am having a hard time understanding:
Some software engineering and object-oriented analysis and design (OOAD) methods balance processing and structural aspects in their steps, artifacts, and techniques; some put a strong emphasis on either computing or data. Domain-driven design (DDD) [Evans 2003; Vernon 2013], for instance, is an example of a balanced approach. Entity-relationship diagrams focus on data structure and relationships rather than behavior. If a data-centric modeling and API endpoint identification approach is chosen, there is a risk that many CRUD (create, read, update, delete) APIs operating on data are exposed, which can have a negative impact on data quality because every authorized client may manipulate the provider-side data rather arbitrarily. CRUD-oriented data abstractions in interfaces introduce operational and semantic coupling.
There is actually a lot I do not understand here, but in particular I am having difficulty with this part:
CRUD-oriented data abstractions in interfaces introduce operational and semantic coupling.
I know what CRUD is, but what do "data abstractions" mean in this context? How do they relate to the endpoint?
What are operational and semantic coupling?
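My current reading (which may well be wrong) is that a "data abstraction in an interface" means an endpoint whose operations mirror the stored data structure rather than a business task. Here is a made-up sketch of the contrast I think the book is describing; the names and rules are invented:

    # CRUD-style endpoint: the interface exposes the data structure itself.
    # Clients must know the field layout (semantic coupling) and must call
    # read/update in the right order and enforce business rules themselves
    # (operational coupling).
    class AccountCrudApi:
        def __init__(self, store):
            self.store = store                     # account_id -> dict

        def read(self, account_id):
            return self.store[account_id]

        def update(self, account_id, new_state):
            self.store[account_id] = new_state     # any state a client sends

    # Intent-based endpoint: the interface exposes business operations,
    # so the provider keeps control of its own invariants.
    class AccountApi:
        def __init__(self, store):
            self.store = store

        def withdraw(self, account_id, amount):
            account = self.store[account_id]
            if account["balance"] < amount:
                raise ValueError("insufficient funds")
            account["balance"] -= amount

    store = {"a1": {"balance": 100.0}}
    AccountApi(store).withdraw("a1", 30.0)
    print(store["a1"]["balance"])                  # 70.0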

Is there a standard way to break down huge optimization models to ensure the submodules are working correctly?

Apologies, as this may be a general question about optimization:
For truly large-scale optimization models, it feels as if the model becomes quite complex and cumbersome before it is even testable. For small-scale optimization problems, up to even 10-20 constraints, it is feasible to code the entire program and just stare at it and debug.
However, for large-scale models with potentially tens to hundreds of constraint equations, it feels as if there should be a way to test subsections of the optimization model before putting the entire thing together.
Imagine you are writing an optimization model for a rocket that needs to land on the moon: the model tells the rocket how to fly itself and land safely. There might be one piece of the model that dictates the gravitational effects of the earth and moon and their orbits, which would influence how the rocket should position itself; another module that dictates how the thrusters should fire in order to maneuver the rocket into the correct positions; and perhaps a final module that dictates how to optimally use the various fuel sources.
Is there a good practice to ensure that one small section (e.g., the gravitational module) works well independently of the whole model, then to iteratively test the rocket thruster piece, then the optimal fuel use, and so on? Once you put all the pieces together and the model doesn't solve (perhaps due to missing constraints or variables), it quickly becomes a nightmare to debug.
What are the best practices, if any, for iteratively building and testing large-scale optimization models?
I regularly work on models with millions of variables and equations ("tens to hundreds of constraint equations" is considered small-scale). Luckily, they all have fewer than, say, 50 blocks of similar equations (indexed equations). Obviously, just eyeballing solutions is impossible, so we add a lot of checks (also on the data, which can contain errors). For debugging, it is a very good idea to have a very small data set around. Finally, it helps to have good tools, such as modeling systems with type/domain checking, automatic differentiation, etc.
Often we cannot really check equations in isolation because we are dealing with simultaneous equations. The model only makes sense when all equations are present. So "iterative building and testing" is usually not possible for me. Sometimes we keep small stylized models around for documentation and education.
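To make the "lots of checks plus a very small data set" advice concrete, here is a minimal sketch using scipy.optimize.linprog; the model, data, and function names are invented purely for illustration and stand in for a toy instance of a much larger model:

    import numpy as np
    from scipy.optimize import linprog

    def check_data(cost, demand):
        # Data checks catch input errors before they turn into
        # infeasible or nonsensical models.
        assert np.all(cost > 0), "fuel costs must be positive"
        assert demand >= 0, "demand cannot be negative"

    def solve_fuel_model(cost, demand):
        check_data(cost, demand)
        # minimize cost @ x   subject to   sum(x) >= demand,   x >= 0
        A_ub = [[-1.0] * len(cost)]          # -sum(x) <= -demand
        b_ub = [-demand]
        res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * len(cost))
        assert res.success, f"model failed or infeasible: {res.message}"
        # Sanity check on the solution itself.
        assert res.x.sum() >= demand - 1e-6
        return res.x

    # A very small data set kept around purely for debugging.
    x = solve_fuel_model(cost=np.array([2.0, 3.0]), demand=5.0)
    print(x)   # expect [5. 0.]: all demand met by the cheaper fuel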

When should we prefer hard coupling over loose coupling?

With all the hype about loose coupling (with reason) and all the information I've read, it's easy to get obsessed, to the point of becoming religious about it. I've been wondering: when is hard coupling preferable to loose coupling? When is an object a candidate for hard coupling? I think that, depending on the situation, it could reduce some complexity.
Some examples would be appreciated.
Advantages of hard/tight coupling
There are a few scenarios where tight coupling might leave you better off. It typically has to do with speed of development and the complexity of the program.
Very Small Projects/Proof of Concepts
In the short term, loose coupling increases development time. Theoretically, for every Service/Client pair an extra interface needs to be created as well, whereas hard coupling wouldn't even worry about dependencies. From my experience, many projects in school were small enough that tightly coupled classes were okay, especially in cases where you would turn in the assignment and never have to touch it again.
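To illustrate the "extra abstraction per Service/Client pair" overhead, a made-up sketch (the names are invented; in a throwaway assignment the first version is usually fine):

    from abc import ABC, abstractmethod

    # Tightly coupled: the client builds the concrete service itself.
    # Quick to write; fine for an assignment you never touch again.
    class SmtpMailer:
        def send(self, to, body):
            print(f"SMTP -> {to}: {body}")

    class SignupTight:
        def register(self, email):
            SmtpMailer().send(email, "welcome")   # direct dependency

    # Loosely coupled: one extra abstraction per Service/Client pair,
    # plus wiring at the call site; more code, but swappable and testable.
    class Mailer(ABC):
        @abstractmethod
        def send(self, to, body): ...

    class SmtpMailerLoose(Mailer):
        def send(self, to, body):
            print(f"SMTP -> {to}: {body}")

    class SignupLoose:
        def __init__(self, mailer: Mailer):
            self.mailer = mailer                  # injected dependency

        def register(self, email):
            self.mailer.send(email, "welcome")

    SignupTight().register("a@example.com")
    SignupLoose(SmtpMailerLoose()).register("b@example.com")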
Keeping things simple
Often loosely coupled systems contain more components, which adds more complexity to a program. For bigger programs, the complexity is definitely worth it because expanding or modifying the program will be exponentially easier and cause fewer bugs. However, for small programs, or programs that are 99% unlikely to change, the extra time spent trying to program to abstractions and remove any dependencies just may not be worth it, and analysis paralysis becomes more likely when designing the program.
Concluding Remarks
As a whole, I would never suggest "tightly coupled" classes are better. However, in the interest of time and complexity, sometimes more tightly coupled classes can be justified.

Factors affecting the performance, safety and security of object-oriented programs?

How does object-oriented programming improve the performance, safety and security of programs, and what factors affect their performance and security?
OOP doesn't improve performance per se. In fact, it has long been a criticism that OOP increases overall resource consumption; it was a trade-off between performance/optimization and productivity/maintainability.
In terms of security, while procedural programs can be secure, the advent of OOP and the increased reusability that comes with it have spread good, reusable coding practices. At the end of the day, a program that can be seamlessly maintained, is built on top of reusable patterns, and is developed with good security practices should provide an easier foundation for detecting and fixing security holes.
In summary, OOP doesn't provide any direct advantage in the areas you're asking about, but it provides a solid foundation for writing better code in most business cases. That's all.

Definitions of Phenotype and Genotype

Can someone help me understand the definitions of phenotype and genotype in relation to evolutionary algorithms?
Am I right in thinking that the genotype is a representation of the solution, and the phenotype is the solution itself?
Thanks
Summary: For simple systems, yes, you are completely right. As you get into more complex systems, things get messier.
That is probably all most people reading this question need to know. However, for those who care, there are some weird subtleties:
People who study evolutionary computation use the words "genotype" and "phenotype" frustratingly inconsistently. The only rule that holds true across all systems is that the genotype is a lower-level (i.e. less abstracted) encoding than the phenotype. A consequence of this rule is that there can generally be multiple genotypes that map to the same phenotype, but not the other way around. In some systems, there are really only the two levels of abstraction that you mention: the representation of a solution and the solution itself. In these cases, you are entirely correct that the former is the genotype and the latter is the phenotype.
This holds true for:
Simple genetic algorithms where the solution is encoded as a bitstring (see the sketch after this list).
Simple evolutionary strategies problems, where a real-valued vector is evolved and the numbers are plugged directly into the function being optimized.
A variety of other systems where there is a direct mapping between solution encodings and solutions.
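For the simple-bitstring case above, a minimal sketch of the two levels (the 16-bit encoding and toy objective are arbitrary, just for illustration):

    import random

    # Genotype: a low-level encoding (here, 16 bits).
    # Phenotype: the decoded solution the fitness function actually sees
    # (here, a real number in [0, 1]).  This particular mapping happens to
    # be one-to-one; many encodings map several genotypes to one phenotype.
    def random_genotype(length=16):
        return [random.randint(0, 1) for _ in range(length)]

    def decode(genotype):
        as_int = int("".join(map(str, genotype)), 2)
        return as_int / (2 ** len(genotype) - 1)   # phenotype in [0, 1]

    def fitness(phenotype):
        return -(phenotype - 0.5) ** 2             # toy objective: prefer 0.5

    g = random_genotype()
    p = decode(g)
    print(g, p, fitness(p))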
But as we get to more complex algorithms, this starts to break down. Consider a simple genetic program, in which we are evolving a mathematical expression tree. The number that the tree evaluates to depends on the input that it receives. So, while the genotype is clear (it's the series of nodes in the tree), the phenotype can only be defined with respect to specific inputs. That isn't really a big problem - we just select a set of inputs and define phenotype based on the set of corresponding outputs. But it gets worse.
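Before it gets worse, here is a tiny, made-up illustration of that input-dependence (the expression and inputs are arbitrary):

    import operator

    # Genotype: the expression tree itself, here a nested tuple for x + x*x.
    genotype = ("add", "x", ("mul", "x", "x"))

    OPS = {"add": operator.add, "mul": operator.mul}

    def evaluate(node, x):
        if node == "x":
            return x
        op, left, right = node
        return OPS[op](evaluate(left, x), evaluate(right, x))

    # Phenotype: only defined relative to a chosen set of inputs --
    # here, the tree's outputs on inputs 0..3.
    inputs = [0, 1, 2, 3]
    phenotype = [evaluate(genotype, x) for x in inputs]
    print(phenotype)   # [0, 2, 6, 12]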
As we continue to look at more complex algorithms, we reach cases where there are no longer just two levels of abstraction. Evolutionary algorithms are often used to evolve simple "brains" for autonomous agents. For instance, say we are evolving a neural network with NEAT. NEAT very clearly defines what the genotype is: a series of rules for constructing the neural network. And this makes sense - that is the lowest-level encoding of an individual in this system. Stanley, the creator of NEAT, goes on to define the phenotype as the neural network encoded by the genotype. Fair enough - that is indeed a more abstract representation. However, there are others who study evolved brain models that classify the neural network as the genotype and the behavior as the phenotype. That is also completely reasonable - the behavior is perhaps even a better phenotype, because it's the thing selection is actually based on.
Finally, we arrive at the systems with the least definable genotypes and phenotypes: open-ended artificial life systems. The goal of these systems is basically to create a rich world that will foster interesting evolutionary dynamics. Usually the genotype in these systems is fairly easy to define - it's the lowest level at which members of the population are defined. Perhaps it's a ring of assembly code, as in Avida, or a neural network, or some set of rules as in geb. Intuitively, the phenotype should capture something about what a member of the population does over its lifetime. But each member of the population does a lot of different things. So ultimately, in these systems, phenotypes tend to be defined differently based on what is being studied in a given experiment. While this may seem questionable at first, it is essentially how phenotypes are discussed in evolutionary biology as well. At some point, a system is complex enough that you just need to focus on the part you care about.