Modelica fluid mixing simulation

I am a newbie to Modelica.
I am looking for people with experience using (Open)Modelica and its Fluid and Media libraries.
My goal is to simulate the pressure of a volume as a function of the N2 and H2 injection flows and a controlled outlet valve, as well as the time-varying mass fraction (due to the very large volume compared to the injection/outlet flow capacity).
Thanks for any feedback.
I have a more detailed explanation of my project that I can share in private.
Up to now I have only tried to create a new medium for HNx (H2/N2) from the Media library using the ideal-gas MixtureGasNasa.
My main problem is that the Modelica unit for mass flow is kg/s, while our industrial practice is Nm3/h (likewise for the H2 concentration: molar fraction rather than mass fraction).
I found the moleToMassFractions function in the library but do not see how to use it properly to adapt the interface to our usual units (for display of measurements, setpoints, and also curves).

The equations in Modelica.Fluid are all formulated with masses, so you will have to do the conversion to and from chemical units (normal flow and molar fractions) "outside" the Modelica.Fluid components.
I have provided an example where you can vary the normal flow and molar fractions of a mixture of N2/H2. The diagram, selected results, and code are provided below. All the conversion blocks to the left can, of course, be written in code and/or wrapped nicely in a submodel.
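The conversions themselves are only a few formulas. Here is a minimal Python sketch of what the blocks on the left compute, assuming ideal-gas behavior and normal conditions of 101325 Pa / 273.15 K (molar masses from standard tables; function names are mine, chosen to echo the library's `moleToMassFractions`):

```python
# Sketch of the "chemical units" <-> "mass units" conversions.
# Assumptions: ideal gas at normal conditions (p = 101325 Pa, T = 273.15 K).

R = 8.314462618          # J/(mol*K), universal gas constant
P_N = 101325.0           # Pa, normal pressure
T_N = 273.15             # K, normal temperature
MM = {"N2": 28.0134e-3, "H2": 2.01588e-3}   # kg/mol, molar masses

def mole_to_mass_fractions(x):
    """Convert mole fractions {name: x_i} to mass fractions X_i = x_i*M_i / sum(x_j*M_j)."""
    m = {k: x[k] * MM[k] for k in x}
    total = sum(m.values())
    return {k: v / total for k, v in m.items()}

def normal_density(x):
    """Ideal-gas mixture density at normal conditions, kg/m^3."""
    M_mix = sum(x[k] * MM[k] for k in x)    # mean molar mass, kg/mol
    return P_N * M_mix / (R * T_N)

def nm3_per_hour_to_kg_per_s(V_flow, x):
    """Convert a normal volume flow (Nm3/h) to a mass flow (kg/s)."""
    return V_flow * normal_density(x) / 3600.0

x = {"N2": 0.5, "H2": 0.5}                  # 50/50 molar mixture
print(mole_to_mass_fractions(x))            # mass fractions, N2-dominated
print(nm3_per_hour_to_kg_per_s(10.0, x))    # kg/s corresponding to 10 Nm3/h
```

This is exactly what the `density`/`toMassFlowRate` blocks and the `moleToMassFractions` call do in the model below, just written out explicitly.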
Diagram
Selected results
Medium model
package MixtureGas "Mixture gas"
  extends Modelica.Media.IdealGases.Common.MixtureGasNasa(
    mediumName="MixtureGas",
    data={Modelica.Media.IdealGases.Common.SingleGasesData.N2,
          Modelica.Media.IdealGases.Common.SingleGasesData.H2},
    fluidConstants={Modelica.Media.IdealGases.Common.FluidData.N2,
                    Modelica.Media.IdealGases.Common.FluidData.H2},
    substanceNames={"N2","H2"},
    reference_X={0.4,0.6});
end MixtureGas;
Simulation Model
model Mixing "Molar fractions and normal flow as independent inputs"
extends Modelica.Icons.Example;
package Medium = MixtureGas;
parameter Medium.AbsolutePressure p_normal=Medium.p_default "Normal pressure";
parameter Medium.Temperature T_normal=273.15 "Normal temperature";
Medium.Density rho_normal=Medium.density_pTX(
p_normal,
T_normal,
X) "Normal density of mixture";
// Conversion from mole to mass fractions
Real X[Medium.nX]=Medium.moleToMassFractions(moleFractions=moleFractions.y,
MMX=Medium.data.MM) "Mass fraction vector {N2, H2}";
Modelica.Fluid.Sources.MassFlowSource_T source(
use_m_flow_in=true,
use_X_in=true,
redeclare package Medium = Medium,
nPorts=1)
annotation (Placement(transformation(extent={{-60,-10},{-40,10}})));
Modelica.Fluid.Sources.Boundary_pT boundary1(nPorts=1, redeclare package
Medium = Medium)
annotation (Placement(transformation(extent={{140,-10},{120,10}})));
Modelica.Fluid.Vessels.ClosedVolume volume(
use_portsData=false,
V=0.1,
redeclare package Medium = Medium,
nPorts=2) annotation (Placement(transformation(
extent={{-10,-10},{10,10}},
rotation=0,
origin={-10,20})));
inner Modelica.Fluid.System system(energyDynamics=Modelica.Fluid.Types.Dynamics.FixedInitial)
annotation (Placement(transformation(extent={{-140,80},{-120,100}})));
Modelica.Fluid.Sensors.MassFractionsTwoPort massFraction_H2(redeclare package
Medium = Medium, substanceName="H2")
annotation (Placement(transformation(extent={{10,-10},{30,10}})));
Modelica.Fluid.Sensors.MassFractionsTwoPort massFraction_N2(redeclare package
Medium = Medium, substanceName="N2")
annotation (Placement(transformation(extent={{50,-10},{70,10}})));
Modelica.Blocks.Sources.Ramp Nm3PerHour(
height=10,
duration=10,
startTime=10)
annotation (Placement(transformation(extent={{-140,-2},{-120,18}})));
Modelica.Blocks.Sources.Constant const(k=1)
annotation (Placement(transformation(extent={{70,20},{90,40}})));
Modelica.Fluid.Valves.ValveLinear valve(
dp_nominal=100000,
m_flow_nominal=0.03,
redeclare package Medium = Medium)
annotation (Placement(transformation(extent={{90,-10},{110,10}})));
Modelica.Blocks.Math.Product toMassFlowRate
annotation (Placement(transformation(extent={{-100,4},{-80,24}})));
Modelica.Blocks.Sources.RealExpression density(y=rho_normal/3600)
annotation (Placement(transformation(extent={{-140,30},{-120,50}})));
Modelica.Blocks.Sources.Ramp molarFractionH2(duration=60, startTime=60)
annotation (Placement(transformation(extent={{-140,-80},{-120,-60}})));
Modelica.Blocks.Sources.Constant one(k=1)
annotation (Placement(transformation(extent={{-140,-50},{-120,-30}})));
Modelica.Blocks.Math.Feedback molarFractionN2 "fractions must sum to one"
annotation (Placement(transformation(extent={{-110,-50},{-90,-30}})));
Modelica.Blocks.Routing.Multiplex2 moleFractions
annotation (Placement(transformation(extent={{-70,-56},{-50,-36}})));
Modelica.Blocks.Sources.RealExpression massFractions[Medium.nX](y=X)
annotation (Placement(transformation(extent={{-100,0},{-80,-20}})));
equation
connect(massFraction_H2.port_b, massFraction_N2.port_a)
annotation (Line(points={{30,0},{50,0}}, color={0,127,255}));
connect(massFraction_N2.port_b, valve.port_a)
annotation (Line(points={{70,0},{90,0}}, color={0,127,255}));
connect(valve.port_b, boundary1.ports[1])
annotation (Line(points={{110,0},{120,0}}, color={0,127,255}));
connect(const.y, valve.opening)
annotation (Line(points={{91,30},{100,30},{100,8}}, color={0,0,127}));
connect(source.ports[1], volume.ports[1])
annotation (Line(points={{-40,0},{-11,0},{-11,10}}, color={0,127,255}));
connect(massFraction_H2.port_a, volume.ports[2])
annotation (Line(points={{10,0},{-9,0},{-9,10}}, color={0,127,255}));
connect(Nm3PerHour.y, toMassFlowRate.u2)
annotation (Line(points={{-119,8},{-102,8}}, color={0,0,127}));
connect(toMassFlowRate.y, source.m_flow_in) annotation (Line(points={{-79,14},
{-70,14},{-70,8},{-60,8}}, color={0,0,127}));
connect(density.y, toMassFlowRate.u1) annotation (Line(points={{-119,40},{-108,
40},{-108,20},{-102,20}}, color={0,0,127}));
connect(one.y, molarFractionN2.u1)
annotation (Line(points={{-119,-40},{-108,-40}}, color={0,0,127}));
connect(molarFractionH2.y, molarFractionN2.u2) annotation (Line(points={{-119,
-70},{-100,-70},{-100,-48}}, color={0,0,127}));
connect(molarFractionN2.y, moleFractions.u1[1])
annotation (Line(points={{-91,-40},{-72,-40}}, color={0,0,127}));
connect(molarFractionH2.y, moleFractions.u2[1]) annotation (Line(points={{-119,
-70},{-86,-70},{-86,-52},{-72,-52}}, color={0,0,127}));
connect(massFractions.y, source.X_in) annotation (Line(points={{-79,-10},{-70,
-10},{-70,-4},{-62,-4}}, color={0,0,127}));
annotation (Diagram(coordinateSystem(extent={{-140,-100},{140,100}}),
graphics={Line(
points={{-46,-46},{-34,-46},{-34,-24},{-114,-24},{-114,-10},{-104,-10}},
color={255,0,0},
arrow={Arrow.None,Arrow.Filled},
pattern=LinePattern.Dash)}), experiment(StopTime=500));
end Mixing;
Edit after comment
It is also possible to separate N2/H2 into two separate sources. A code example and diagram are given below.
model Mixing_separateSources
extends Modelica.Icons.Example;
package Medium = MixtureGas;
parameter Medium.AbsolutePressure p_normal=Medium.p_default "Normal pressure";
parameter Medium.Temperature T_normal=273.15 "Normal temperature";
parameter Medium.Density rho_normal_N2=Medium.density_pTX(
p_normal,
T_normal,
{1,0}) "Normal density of N2";
parameter Medium.Density rho_normal_H2=Medium.density_pTX(
p_normal,
T_normal,
{0,1}) "Normal density of H2";
Modelica.Fluid.Sources.MassFlowSource_T N2(
use_m_flow_in=true,
X={1,0},
redeclare package Medium = Medium,
nPorts=1)
annotation (Placement(transformation(extent={{-60,-10},{-40,10}})));
Modelica.Fluid.Sources.Boundary_pT boundary1(nPorts=1, redeclare package
Medium = Medium)
annotation (Placement(transformation(extent={{140,-10},{120,10}})));
Modelica.Fluid.Sources.MassFlowSource_T H2(
use_m_flow_in=true,
X={0,1},
redeclare package Medium = Medium,
nPorts=1)
annotation (Placement(transformation(extent={{-60,-50},{-40,-30}})));
inner Modelica.Fluid.System system(energyDynamics=Modelica.Fluid.Types.Dynamics.FixedInitial)
annotation (Placement(transformation(extent={{-140,40},{-120,60}})));
Modelica.Fluid.Sensors.MassFractionsTwoPort massFraction_H2(redeclare package
Medium = Medium, substanceName="H2")
annotation (Placement(transformation(extent={{10,-10},{30,10}})));
Modelica.Fluid.Sensors.MassFractionsTwoPort massFraction_N2(redeclare package
Medium = Medium, substanceName="N2")
annotation (Placement(transformation(extent={{50,-10},{70,10}})));
Modelica.Blocks.Math.Gain toMassFlowRate_N2(k=rho_normal_N2/3600)
annotation (Placement(transformation(extent={{-100,-2},{-80,18}})));
Modelica.Blocks.Math.Gain toMassFlowRate_H2(k=rho_normal_H2/3600)
annotation (Placement(transformation(extent={{-100,-42},{-80,-22}})));
Modelica.Blocks.Sources.Ramp N2_Nm3PerHour(
height=10,
duration=10,
startTime=10)
annotation (Placement(transformation(extent={{-140,-2},{-120,18}})));
Modelica.Blocks.Sources.Ramp H2_Nm3PerHour(
height=-10,
duration=10,
offset=10,
startTime=10)
annotation (Placement(transformation(extent={{-140,-42},{-120,-22}})));
Modelica.Blocks.Sources.Constant const(k=1)
annotation (Placement(transformation(extent={{70,20},{90,40}})));
Modelica.Fluid.Valves.ValveLinear valve(
dp_nominal=100000,
m_flow_nominal=0.03,
redeclare package Medium = Medium)
annotation (Placement(transformation(extent={{90,-10},{110,10}})));
Modelica.Fluid.Vessels.ClosedVolume volume(
use_portsData=false,
V=0.1,
redeclare package Medium = Medium,
nPorts=3) annotation (Placement(transformation(
extent={{-10,-10},{10,10}},
rotation=0,
origin={-10,20})));
equation
connect(massFraction_H2.port_b, massFraction_N2.port_a)
annotation (Line(points={{30,0},{50,0}}, color={0,127,255}));
connect(toMassFlowRate_N2.y, N2.m_flow_in)
annotation (Line(points={{-79,8},{-60,8}}, color={0,0,127}));
connect(toMassFlowRate_H2.y, H2.m_flow_in)
annotation (Line(points={{-79,-32},{-60,-32}}, color={0,0,127}));
connect(H2_Nm3PerHour.y, toMassFlowRate_H2.u)
annotation (Line(points={{-119,-32},{-102,-32}}, color={0,0,127}));
connect(N2_Nm3PerHour.y, toMassFlowRate_N2.u)
annotation (Line(points={{-119,8},{-102,8}}, color={0,0,127}));
connect(massFraction_N2.port_b, valve.port_a)
annotation (Line(points={{70,0},{90,0}}, color={0,127,255}));
connect(valve.port_b, boundary1.ports[1])
annotation (Line(points={{110,0},{120,0}}, color={0,127,255}));
connect(const.y, valve.opening)
annotation (Line(points={{91,30},{100,30},{100,8}}, color={0,0,127}));
connect(N2.ports[1], volume.ports[1]) annotation (Line(points={{-40,0},{-11.3333,
0},{-11.3333,10}}, color={0,127,255}));
connect(H2.ports[1], volume.ports[2]) annotation (Line(points={{-40,-40},{-10,
-40},{-10,10}}, color={0,127,255}));
connect(massFraction_H2.port_a, volume.ports[3]) annotation (Line(points={{10,
0},{-8.66667,0},{-8.66667,10}}, color={0,127,255}));
annotation (Diagram(coordinateSystem(extent={{-140,-100},{140,100}})),
experiment(StopTime=500, __Dymola_Algorithm="Dassl"));
end Mixing_separateSources;

Related

Is it possible to pass a value from a meta annotation?

Let's say I have an annotation like this:
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@EnumSource(value = MyEnum::class, mode = EnumSource.Mode.EXCLUDE)
annotation class TestEachValue
Is it possible to pass a value from my annotation class to one of the annotations on it? Something like:
@Target(AnnotationTarget.FUNCTION)
@Retention(AnnotationRetention.RUNTIME)
@EnumSource(value = MyEnum::class, mode = EnumSource.Mode.EXCLUDE, names = excludes_from_below)
annotation class TestEachValue(val excludes: Array<String>)
I would be willing to wrap the value in an annotation if that helps. Or maybe Kotlin has some magic comparable to the inline keyword? Any advice on how this can be done nicely would be greatly appreciated.

How to filter data class properties by kotlin annotation?

Implementation of the annotation:
@Target(AnnotationTarget.PROPERTY)
@Retention(AnnotationRetention.RUNTIME)
annotation class Returnable
Dummy data class:
data class DataClass(
    val property: String,
    @Returnable
    val annotatedProperty: String
)
Java-style reflection filtering doesn't work:
this::class.memberProperties
.filter{ it.annotations.map { ann -> ann.annotationClass }.contains(Returnable::class)}
Kotlin annotations aren't the same as Java annotations, so working with Kotlin reflection requires a slightly different approach than classic Java. Here is a way of filtering the properties of a Kotlin data class by a Kotlin annotation (requires import kotlin.reflect.full.findAnnotation):
DataClass("false", "true")::class.members.filter {
    it.findAnnotation<Returnable>() != null
}

Runtime annotations on a field in a Kotlin class are not generated correctly

The Kotlin compiler removes a Java runtime annotation placed on a field. The annotation is shown below.
@Target({ElementType.ANNOTATION_TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.TYPE, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@com.fasterxml.jackson.annotation.JacksonAnnotation
public @interface JsonDeserialize
I declared it on a field, as seen below.
@JsonSerialize(using = IDEncryptJsonSerializer::class)
@JsonDeserialize(using = IDDecryptJsonDeserializer::class)
@Column(name = "sku_id", nullable = false)
open var skuId: Long = 0L
The annotation doesn't work. Then I took a first look at the class file, as seen below.
@field:javax.persistence.Column public open var skuId: kotlin.Long
The JsonDeserialize and JsonSerialize annotations are dropped.
The two annotations work well in Java.
My Kotlin version is 1.1.4.
How can I fix the problem?
Finally, I found the cause.
If I declare a property in the class constructor, some annotations on it may not be compiled correctly; they can be lost due to a Kotlin compiler bug.
When I moved the property into the class body, everything worked well.

What is legitimate way to get annotations of a pure Kotlin property via reflection, are they always missing?

I'm trying to get annotations from Kotlin data class
package some.meaningless.package.name
import kotlin.reflect.full.memberProperties
annotation class MyAnnotation()
@MyAnnotation
data class TestDto(@MyAnnotation val answer: Int = 42)
fun main(args: Array<String>) {
TestDto::class.memberProperties.forEach { p -> println(p.annotations) }
println(TestDto::class.annotations)
}
I need to process class annotation to make a custom name serialization of GSON however no matter how I declare annotation class it never gets detected
The program always outputs
[]
[@some.meaningless.package.name.MyAnnotation()]
which means only class level annotations are present
OK,
it seems that the culprit was that Kotlin annotations have the default @Target(AnnotationTarget.CLASS), which is not stressed enough in the documentation.
After I added @Target to the annotation class it now works properly:
@Target(AnnotationTarget.CLASS, AnnotationTarget.PROPERTY)
annotation class MyAnnotation()
Now it prints out
[@some.meaningless.package.name.MyAnnotation()]
[@some.meaningless.package.name.MyAnnotation()]
As a side effect, it forces the compiler to check that the annotation is applied as required; in the current version of Kotlin, if an explicit @Target is not present, only class-level annotations are kept and no validity checks are performed.
As the Kotlin reference says:
If you don't specify a use-site target, the target is chosen according to the @Target annotation of the annotation being used. If there are multiple applicable targets, the first applicable target from the following is used: param > property > field.
To make the annotation apply to the property, you should use a use-site target, for example:
@MyAnnotation
data class TestDto(@property:MyAnnotation val answer: Int = 42)
However, annotations with the property target in Kotlin are not visible to Java, so you should double the annotation, for example:
@MyAnnotation // v--- used for property    v--- used for params in Java
data class TestDto(@property:MyAnnotation @MyAnnotation val answer: Int = 42)

What is an example of the Liskov Substitution Principle?

I have heard that the Liskov Substitution Principle (LSP) is a fundamental principle of object oriented design. What is it and what are some examples of its use?
A great example illustrating LSP (given by Uncle Bob in a podcast I heard recently) was how sometimes something that sounds right in natural language doesn't quite work in code.
In mathematics, a Square is a Rectangle. Indeed it is a specialization of a rectangle. The "is a" makes you want to model this with inheritance. However if in code you made Square derive from Rectangle, then a Square should be usable anywhere you expect a Rectangle. This makes for some strange behavior.
Imagine you had SetWidth and SetHeight methods on your Rectangle base class; this seems perfectly logical. However, if your Rectangle reference pointed to a Square, then SetWidth and SetHeight don't make sense, because setting one would change the other to match it. In this case Square fails the Liskov Substitution Test with Rectangle, and the abstraction of having Square inherit from Rectangle is a bad one.
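The strange behavior described above can be sketched in a few lines of Python (a hypothetical minimal Rectangle/Square pair, not code from any of the answers below):

```python
class Rectangle:
    def __init__(self, w, h):
        self.width, self.height = w, h
    def set_width(self, w):
        self.width = w
    def set_height(self, h):
        self.height = h

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)
    def set_width(self, w):          # keeps the square invariant...
        self.width = self.height = w
    def set_height(self, h):         # ...but silently changes both sides
        self.width = self.height = h

def client_code(r: Rectangle):
    """Written against Rectangle: expects width and height to be independent."""
    r.set_width(4)
    r.set_height(5)
    return r.width * r.height        # a Rectangle client expects 4 * 5 = 20

print(client_code(Rectangle(2, 3)))  # 20, as expected
print(client_code(Square(2)))        # 25 -- Square is not substitutable
```

The client never does anything wrong; it is the subtype that breaks the base class's implicit contract.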
Y'all should check out the other priceless SOLID Principles Explained With Motivational Posters.
The Liskov Substitution Principle (LSP) is a concept in Object Oriented Programming that states:
Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
At its heart LSP is about interfaces and contracts as well as how to decide when to extend a class vs. use another strategy such as composition to achieve your goal.
The most effective way I have seen to illustrate this point was in Head First OOA&D. They present a scenario where you are a developer on a project to build a framework for strategy games.
They present a class that represents a board that looks like this:
All of the methods take X and Y coordinates as parameters to locate the tile position in the two-dimensional array of Tiles. This will allow a game developer to manage units in the board during the course of the game.
The book goes on to change the requirements to say that the game framework must also support 3D game boards to accommodate games that have flight. So a ThreeDBoard class is introduced that extends Board.
At first glance this seems like a good decision. Board provides both the Height and Width properties and ThreeDBoard provides the Z axis.
Where it breaks down is when you look at all the other members inherited from Board. The methods for AddUnit, GetTile, GetUnits and so on, all take both X and Y parameters in the Board class but the ThreeDBoard needs a Z parameter as well.
So you must implement those methods again with a Z parameter. The Z parameter has no context to the Board class and the inherited methods from the Board class lose their meaning. A unit of code attempting to use the ThreeDBoard class as its base class Board would be very out of luck.
Maybe we should find another approach. Instead of extending Board, ThreeDBoard should be composed of Board objects. One Board object per unit of the Z axis.
This allows us to use good object oriented principles like encapsulation and reuse and doesn’t violate LSP.
Substitutability is a principle in object-oriented programming stating that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S.
Let's do a simple example in Java:
Bad example
public class Bird{
public void fly(){}
}
public class Duck extends Bird{}
The duck can fly because it is a bird, but what about this:
public class Ostrich extends Bird{}
An ostrich is a bird, but it can't fly. The Ostrich class is a subtype of Bird, but it shouldn't be able to use the fly method; that means we are breaking the LSP.
Good example
public class Bird{}
public class FlyingBirds extends Bird{
public void fly(){}
}
public class Duck extends FlyingBirds{}
public class Ostrich extends Bird{}
LSP concerns invariants.
The classic example is given by the following pseudo-code declaration (implementations omitted):
class Rectangle {
int getHeight()
void setHeight(int value) {
postcondition: width didn’t change
}
int getWidth()
void setWidth(int value) {
postcondition: height didn’t change
}
}
class Square extends Rectangle { }
Now we have a problem although the interface matches. The reason is that we have violated invariants stemming from the mathematical definition of squares and rectangles. The way getters and setters work, a Rectangle should satisfy the following invariant:
void invariant(Rectangle r) {
r.setHeight(200)
r.setWidth(100)
assert(r.getHeight() == 200 and r.getWidth() == 100)
}
However, this invariant (as well as the explicit postconditions) must be violated by a correct implementation of Square, therefore it is not a valid substitute of Rectangle.
Robert Martin has an excellent paper on the Liskov Substitution Principle. It discusses subtle and not-so-subtle ways in which the principle may be violated.
Some relevant parts of the paper (note that the second example is heavily condensed):
A Simple Example of a Violation of LSP
One of the most glaring violations of this principle is the use of C++
Run-Time Type Information (RTTI) to select a function based upon the
type of an object. i.e.:
void DrawShape(const Shape& s)
{
if (typeid(s) == typeid(Square))
DrawSquare(static_cast<Square&>(s));
else if (typeid(s) == typeid(Circle))
DrawCircle(static_cast<Circle&>(s));
}
Clearly the DrawShape function is badly formed. It must know about
every possible derivative of the Shape class, and it must be changed
whenever new derivatives of Shape are created. Indeed, many view the structure of this function as anathema to Object Oriented Design.
Square and Rectangle, a More Subtle Violation.
However, there are other, far more subtle, ways of violating the LSP.
Consider an application which uses the Rectangle class as described
below:
class Rectangle
{
public:
void SetWidth(double w) {itsWidth=w;}
void SetHeight(double h) {itsHeight=h;}
double GetHeight() const {return itsHeight;}
double GetWidth() const {return itsWidth;}
private:
double itsWidth;
double itsHeight;
};
[...] Imagine that one day the users demand the ability to manipulate
squares in addition to rectangles. [...]
Clearly, a square is a rectangle for all normal intents and purposes.
Since the ISA relationship holds, it is logical to model the Square
class as being derived from Rectangle. [...]
Square will inherit the SetWidth and SetHeight functions. These
functions are utterly inappropriate for a Square, since the width and
height of a square are identical. This should be a significant clue
that there is a problem with the design. However, there is a way to
sidestep the problem. We could override SetWidth and SetHeight [...]
But consider the following function:
void f(Rectangle& r)
{
r.SetWidth(32); // calls Rectangle::SetWidth
}
If we pass a reference to a Square object into this function, the
Square object will be corrupted because the height won’t be changed.
This is a clear violation of LSP. The function does not work for
derivatives of its arguments.
[...]
I see rectangles and squares in every answer, and how to violate the LSP.
I'd like to show how the LSP can be conformed to with a real-world example:
<?php
interface Database
{
public function selectQuery(string $sql): array;
}
class SQLiteDatabase implements Database
{
public function selectQuery(string $sql): array
{
// sqlite specific code
return $result;
}
}
class MySQLDatabase implements Database
{
public function selectQuery(string $sql): array
{
// mysql specific code
return $result;
}
}
This design conforms to the LSP because the behaviour remains unchanged regardless of the implementation we choose to use.
And yes, you can violate LSP in this configuration with one simple change:
<?php
interface Database
{
public function selectQuery(string $sql): array;
}
class SQLiteDatabase implements Database
{
public function selectQuery(string $sql): array
{
// sqlite specific code
return $result;
}
}
class MySQLDatabase implements Database
{
public function selectQuery(string $sql): array
{
// mysql specific code
return ['result' => $result]; // This violates LSP !
}
}
Now the subtypes cannot be used the same way since they don't produce the same result anymore.
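The same shape-of-result violation, sketched in Python rather than PHP (hypothetical classes; the query strings and row values are placeholders):

```python
class Database:
    def select_query(self, sql: str) -> list:
        raise NotImplementedError

class SQLiteDatabase(Database):
    def select_query(self, sql: str) -> list:
        return ["row1", "row2"]               # stand-in for real query results

class MySQLDatabase(Database):
    def select_query(self, sql: str) -> list:
        return {"result": ["row1", "row2"]}   # wraps the result: violates LSP

def first_row(db: Database):
    """Written against the Database interface: expects a plain list of rows."""
    rows = db.select_query("SELECT ...")
    return rows[0]

print(first_row(SQLiteDatabase()))   # "row1"
# first_row(MySQLDatabase())         # raises KeyError: the dict breaks list indexing
```

The caller is correct with respect to the interface; only the misbehaving subtype makes it fail.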
There is a checklist to determine whether or not you are violating Liskov.
If you violate one of the following items, you violate Liskov.
If you don't violate any, you can't conclude anything.
Checklist:
No new exceptions should be thrown in derived class: If your base class threw ArgumentNullException then your sub classes were only allowed to throw exceptions of type ArgumentNullException or any exceptions derived from ArgumentNullException. Throwing IndexOutOfRangeException is a violation of Liskov.
Pre-conditions cannot be strengthened: Assume your base class works with a member int. Now your sub-type requires that int to be positive. This is strengthened pre-conditions, and now any code that worked perfectly fine before with negative ints is broken.
Post-conditions cannot be weakened: Assume your base class required all connections to the database should be closed before the method returned. In your sub-class you overrode that method and left the connection open for further reuse. You have weakened the post-conditions of that method.
Invariants must be preserved: The most difficult and painful constraint to fulfill. Invariants are sometimes hidden in the base class and the only way to reveal them is to read the code of the base class. Basically you have to be sure when you override a method anything unchangeable must remain unchanged after your overridden method is executed. The best thing I can think of is to enforce these invariant constraints in the base class but that would not be easy.
History constraint: When overriding a method you are not allowed to modify an unmodifiable property of the base class. Take a look at this code: Name is defined to be unmodifiable (private set), but SubType introduces a new method that allows modifying it (through reflection):
public class SuperType
{
public string Name { get; private set; }
public SuperType(string name, int age)
{
Name = name;
Age = age;
}
}
public class SubType : SuperType
{
public void ChangeName(string newName)
{
base.GetType().GetProperty("Name").SetValue(this, newName);
}
}
There are two other items: contravariance of method arguments and covariance of return types. But they are not possible in C# (I'm a C# developer), so I don't care about them.
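The precondition and postcondition items from the checklist above can be sketched in Python (hypothetical classes; the contracts are stated in comments rather than enforced by the language):

```python
class Base:
    def store(self, value: int) -> None:
        """Contract: accepts ANY int (precondition); guarantees the value
        is saved in self.saved afterwards (postcondition)."""
        self.saved = value

class StrengthensPrecondition(Base):
    def store(self, value: int) -> None:
        if value < 0:                                  # stricter than Base:
            raise ValueError("only positive values")   # LSP violation
        self.saved = value

class WeakensPostcondition(Base):
    def store(self, value: int) -> None:
        pass   # forgets to save: callers relying on Base's guarantee break

def client(obj: Base) -> int:
    obj.store(-1)        # a valid call per Base's contract
    return obj.saved     # relies on Base's postcondition

print(client(Base()))                  # -1
# client(StrengthensPrecondition())    # raises ValueError
# client(WeakensPostcondition())       # raises AttributeError: no 'saved'
```

Both subtypes break code that was correct against the base class's contract, in opposite directions.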
LSP is necessary where some code thinks it is calling the methods of a type T, and may unknowingly call the methods of a type S, where S extends T (i.e. S inherits, derives from, or is a subtype of, the supertype T).
For example, this occurs where a function with an input parameter of type T, is called (i.e. invoked) with an argument value of type S. Or, where an identifier of type T, is assigned a value of type S.
val id : T = new S() // id thinks it's a T, but is a S
LSP requires the expectations (i.e. invariants) for methods of type T (e.g. Rectangle), not be violated when the methods of type S (e.g. Square) are called instead.
val rect : Rectangle = new Square(5) // thinks it's a Rectangle, but is a Square
val rect2 : Rectangle = rect.setWidth(10) // height is 10, LSP violation
Even a type with immutable fields still has invariants, e.g. the immutable Rectangle setters expect dimensions to be independently modified, but the immutable Square setters violate this expectation.
class Rectangle( val width : Int, val height : Int )
{
def setWidth( w : Int ) = new Rectangle(w, height)
def setHeight( h : Int ) = new Rectangle(width, h)
}
class Square( val side : Int ) extends Rectangle(side, side)
{
override def setWidth( s : Int ) = new Square(s)
override def setHeight( s : Int ) = new Square(s)
}
LSP requires that each method of the subtype S must have contravariant input parameter(s) and a covariant output.
Contravariant means the variance is contrary to the direction of the inheritance, i.e. the type Si, of each input parameter of each method of the subtype S, must be the same or a supertype of the type Ti of the corresponding input parameter of the corresponding method of the supertype T.
Covariance means the variance is in the same direction of the inheritance, i.e. the type So, of the output of each method of the subtype S, must be the same or a subtype of the type To of the corresponding output of the corresponding method of the supertype T.
This is because if the caller thinks it has a type T, thinks it is calling a method of T, then it supplies argument(s) of type Ti and assigns the output to the type To. When it is actually calling the corresponding method of S, then each Ti input argument is assigned to a Si input parameter, and the So output is assigned to the type To. Thus if Si were not contravariant w.r.t. to Ti, then a subtype Xi—which would not be a subtype of Si—could be assigned to Ti.
Additionally, for languages (e.g. Scala or Ceylon) which have definition-site variance annotations on type polymorphism parameters (i.e. generics), the co- or contra- direction of the variance annotation for each type parameter of the type T must be opposite or same direction respectively to every input parameter or output (of every method of T) that has the type of the type parameter.
Additionally, for each input parameter or output that has a function type, the variance direction required is reversed. This rule is applied recursively.
Subtyping is appropriate where the invariants can be enumerated.
There is much ongoing research on how to model invariants, so that they are enforced by the compiler.
Typestate (see page 3) declares and enforces state invariants orthogonal to type. Alternatively, invariants can be enforced by converting assertions to types. For example, to assert that a file is open before closing it, then File.open() could return an OpenFile type, which contains a close() method that is not available in File. A tic-tac-toe API can be another example of employing typing to enforce invariants at compile-time. The type system may even be Turing-complete, e.g. Scala. Dependently-typed languages and theorem provers formalize the models of higher-order typing.
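The File/OpenFile idea from the paragraph above can be sketched as follows (hypothetical classes of my own; in Python it is a type checker like mypy, not the runtime, that would reject `File.close()`, simply because File has no such method):

```python
class OpenFile:
    """Returned only by File.open(); close() exists here and nowhere else."""
    def __init__(self, name: str):
        self.name = name
        self.closed = False
    def read(self) -> str:
        return f"contents of {self.name}"
    def close(self) -> None:
        self.closed = True

class File:
    """An unopened file. 'Close before open' is unrepresentable:
    there is no close() method on this type."""
    def __init__(self, name: str):
        self.name = name
    def open(self) -> "OpenFile":
        return OpenFile(self.name)

f = File("data.txt").open()   # the only way to obtain something closable
print(f.read())
f.close()
```

The invariant "a file must be open before it is closed" is encoded in the types rather than checked at runtime.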
Because of the need for semantics to abstract over extension, I expect that employing typing to model invariants, i.e. unified higher-order denotational semantics, is superior to the Typestate. ‘Extension’ means the unbounded, permuted composition of uncoordinated, modular development. Because it seems to me to be the antithesis of unification and thus degrees-of-freedom, to have two mutually-dependent models (e.g. types and Typestate) for expressing the shared semantics, which can't be unified with each other for extensible composition. For example, Expression Problem-like extension was unified in the subtyping, function overloading, and parametric typing domains.
My theoretical position is that for knowledge to exist (see the section "Centralization is blind and unfit"), there will never be a general model that can enforce 100% coverage of all possible invariants in a Turing-complete computer language. For knowledge to exist, unexpected possibilities must exist, i.e. disorder and entropy must always be increasing. This is the entropic force. To prove all possible computations of a potential extension is to compute a priori all possible extensions.
This is why the Halting Theorem exists, i.e. it is undecidable whether every possible program in a Turing-complete programming language terminates. It can be proven that some specific program terminates (one for which all possibilities have been defined and computed). But it is impossible to prove that all possible extensions of that program terminate, unless the possibilities for extension of that program are not Turing-complete (e.g. via dependent typing). Since the fundamental requirement for Turing-completeness is unbounded recursion, it is intuitive to understand how Gödel's incompleteness theorems and Russell's paradox apply to extension.
An interpretation of these theorems incorporates them in a generalized conceptual understanding of the entropic force:
Gödel's incompleteness theorems: any formal theory, in which all arithmetic truths can be proved, is inconsistent.
Russell's paradox: every membership rule for a set that can contain a set, either enumerates the specific type of each member or contains itself. Thus sets either cannot be extended or they are unbounded recursion. For example, the set of everything that is not a teapot, includes itself, which includes itself, which includes itself, etc…. Thus a rule is inconsistent if it (may contain a set and) does not enumerate the specific types (i.e. allows all unspecified types) and does not allow unbounded extension. This is the set of sets that are not members of themselves. This inability to be both consistent and completely enumerated over all possible extension, is Gödel's incompleteness theorems.
Liskov Substitution Principle: generally it is an undecidable problem whether any set is the subset of another, i.e. inheritance is generally undecidable.
Linsky Referencing: it is undecidable what the computation of something is, when it is described or perceived, i.e. perception (reality) has no absolute point of reference.
Coase's theorem: there is no external reference point, thus any barrier to unbounded external possibilities will fail.
Second law of thermodynamics: the entire universe (a closed system, i.e. everything) trends to maximum disorder, i.e. maximum independent possibilities.
Long story short: let's leave rectangles rectangles and squares squares. As a practical example, when extending a parent class, you have to either PRESERVE the exact parent API or EXTEND it.
Let's say you have a base ItemsRepository.
class ItemsRepository
{
/**
* @return int Returns the number of deleted rows
*/
public function delete()
{
// perform a delete query
$numberOfDeletedRows = 10;
return $numberOfDeletedRows;
}
}
And a sub class extending it:
class BadlyExtendedItemsRepository extends ItemsRepository
{
/**
* @return void Was supposed to return an int like the parent, but does not; breaks LSP
*/
public function delete()
{
// perform a delete query
$numberOfDeletedRows = 10;
// we broke the behaviour of the parent class
return;
}
}
Then you could have a Client working with the Base ItemsRepository API and relying on it.
/**
* Class ItemsService is a client for public ItemsRepository "API" (the public delete method).
*
* Technically, I am able to pass a sub-class of ItemsRepository into the constructor,
* but if the sub-class won't abide by the base class API, the client will get broken.
*/
class ItemsService
{
/**
* @var ItemsRepository
*/
private $itemsRepository;
/**
* @param ItemsRepository $itemsRepository
*/
public function __construct(ItemsRepository $itemsRepository)
{
$this->itemsRepository = $itemsRepository;
}
/**
* !!! Notice how this is supposed to return an int. My clients expect it, based on the
* ItemsRepository API in the constructor !!!
*
* @return int
*/
public function delete()
{
return $this->itemsRepository->delete();
}
}
The LSP is broken when substituting parent class with a sub class breaks the API's contract.
class ItemsController
{
/**
* Valid delete action when using the base class.
*/
public function validDeleteAction()
{
$itemsService = new ItemsService(new ItemsRepository());
$numberOfDeletedItems = $itemsService->delete();
// $numberOfDeletedItems is an INT :)
}
/**
* Invalid delete action when using a subclass.
*/
public function brokenDeleteAction()
{
$itemsService = new ItemsService(new BadlyExtendedItemsRepository());
$numberOfDeletedItems = $itemsService->delete();
// $numberOfDeletedItems is a NULL :(
}
}
You can learn more about writing maintainable software in my course: https://www.udemy.com/enterprise-php/
Let’s illustrate in Java:
class TransportationDevice
{
String name;
String getName() { ... }
void setName(String n) { ... }
double speed;
double getSpeed() { ... }
void setSpeed(double d) { ... }
Engine engine;
Engine getEngine() { ... }
void setEngine(Engine e) { ... }
void startEngine() { ... }
}
class Car extends TransportationDevice
{
@Override
void startEngine() { ... }
}
There is no problem here, right? A car is definitely a transportation device, and here we can see that it overrides the startEngine() method of its superclass.
Let’s add another transportation device:
class Bicycle extends TransportationDevice
{
@Override
void startEngine() { /* problem! */ }
}
Things aren't going as planned now! Yes, a bicycle is a transportation device; however, it does not have an engine, and hence the method startEngine() cannot be implemented.
These are the kinds of problems that violation of Liskov Substitution
Principle leads to, and they can most usually be recognized by a
method that does nothing, or even can’t be implemented.
The solution to these problems is a correct inheritance hierarchy, and in our case we would solve the problem by differentiating classes of transportation devices with and without engines. Even though a bicycle is a transportation device, it doesn’t have an engine. In this example our definition of transportation device is wrong. It should not have an engine.
We can refactor our TransportationDevice class as follows:
class TransportationDevice
{
String name;
String getName() { ... }
void setName(String n) { ... }
double speed;
double getSpeed() { ... }
void setSpeed(double d) { ... }
}
Now we can extend TransportationDevice for non-motorized devices.
class DevicesWithoutEngines extends TransportationDevice
{
void startMoving() { ... }
}
And extend TransportationDevice for motorized devices. Here it is more appropriate to add the Engine object.
class DevicesWithEngines extends TransportationDevice
{
Engine engine;
Engine getEngine() { ... }
void setEngine(Engine e) { ... }
void startEngine() { ... }
}
Thus our Car class becomes more specialized, while adhering to the Liskov Substitution Principle.
class Car extends DevicesWithEngines
{
@Override
void startEngine() { ... }
}
And our Bicycle class is also in compliance with the Liskov Substitution Principle.
class Bicycle extends DevicesWithoutEngines
{
@Override
void startMoving() { ... }
}
The LSP is a rule about the contract of the classes: if a base class satisfies a contract, then by the LSP derived classes must also satisfy that contract.
In Pseudo-python
class Base:
def Foo(self, arg):
# *... do stuff*
class Derived(Base):
def Foo(self, arg):
# *... do stuff*
Derived satisfies the LSP if every time you call Foo on a Derived object, it gives exactly the same results as calling Foo on a Base object, as long as arg is the same.
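A runnable version of that check might look like this (the classes and probe values are invented for illustration):

```python
class Base:
    def foo(self, arg):
        return arg * 2

class GoodDerived(Base):
    def foo(self, arg):
        return arg + arg          # same observable result for numbers

class BadDerived(Base):
    def foo(self, arg):
        return arg * 3            # different result: breaks substitutability

def behaves_like_base(obj, probes):
    # Compare the object's behavior against a Base instance on sample inputs.
    base = Base()
    return all(obj.foo(x) == base.foo(x) for x in probes)

print(behaves_like_base(GoodDerived(), [0, 1, 5]))   # True
print(behaves_like_base(BadDerived(), [0, 1, 5]))    # False
```

Probing with sample inputs can only ever refute substitutability, not prove it, but it catches the common violations.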
I guess everyone kind of covered what LSP is technically: You basically want to be able to abstract away from subtype details and use supertypes safely.
So Liskov has 3 underlying rules:
Signature Rule: There should be a valid implementation of every operation of the supertype in the subtype syntactically. Something a compiler will be able to check for you. There is a little rule about throwing fewer exceptions and being at least as accessible as the supertype methods.
Methods Rule: The implementation of those operations is semantically sound.
Weaker Preconditions: The subtype functions should take at least what the supertype took as input, if not more.
Stronger Postconditions: They should produce a subset of the output the supertype methods produced.
Properties Rule: This goes beyond individual function calls.
Invariants: Things that are always true must remain true. E.g. a Set's size is never negative.
Evolutionary Properties: Usually something to do with immutability or the kinds of states the object can be in. For example, maybe the object only grows and never shrinks, so the subtype methods shouldn't make it shrink.
All these properties need to be preserved and the extra subtype functionality shouldn't violate supertype properties.
If these three things are taken care of, you have abstracted away from the underlying details and you are writing loosely coupled code.
Source: Program Development in Java - Barbara Liskov
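The precondition/postcondition parts of the Methods Rule can be sketched like this (Account, PremiumAccount, and the numbers are invented): the subtype weakens the precondition and strengthens the postcondition, so any caller written against the base contract keeps working.

```python
class Account:
    def withdraw(self, amount):
        # precondition: 0 < amount <= 100
        assert 0 < amount <= 100
        fee = min(amount * 0.05, 5.0)
        # postcondition: 0 <= fee <= 5
        return fee

class PremiumAccount(Account):
    def withdraw(self, amount):
        # weaker precondition: accepts everything the parent did, and more
        assert 0 < amount <= 1000
        fee = min(amount * 0.01, 2.0)
        # stronger postcondition: 0 <= fee <= 2, a subset of the parent's range
        return fee
```

A client that respects Account's contract (amounts up to 100, fees up to 5) can be handed a PremiumAccount and never notice the difference.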
An important example of the use of LSP is in software testing.
If I have a class A that is an LSP-compliant subclass of B, then I can reuse the test suite of B to test A.
To fully test subclass A, I probably need to add a few more test cases, but at the minimum I can reuse all of superclass B's test cases.
A way to realize this is by building what McGregor calls a "Parallel hierarchy for testing": my ATest class will inherit from BTest. Some form of injection is then needed to ensure the test case works with objects of type A rather than of type B (a simple template method pattern will do).
Note that reusing the super-test suite for all subclass implementations is in fact a way to test that these subclass implementations are LSP-compliant. Thus, one can also argue that one should run the superclass test suite in the context of any subclass.
See also the answer to the Stackoverflow question "Can I implement a series of reusable tests to test an interface's implementation?"
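That parallel hierarchy can be sketched with Python's unittest (A, B, and the test names are invented): BTest runs its cases against a factory method, and ATest overrides only the factory, so every BTest case is reused against A.

```python
import unittest

class B:
    def size(self):
        return 0

class A(B):          # claims to be an LSP-compliant subtype of B
    pass

class BTest(unittest.TestCase):
    def make(self):
        # the injection point: a simple template method
        return B()

    def test_size_never_negative(self):
        self.assertGreaterEqual(self.make().size(), 0)

class ATest(BTest):
    def make(self):
        # only the factory changes; all of BTest's cases now run against A
        return A()
```

Running `python -m unittest` executes test_size_never_negative twice: once via BTest against B, and once via ATest against A, which is exactly the "run the superclass test suite in the context of the subclass" idea.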
Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
When I first read about LSP, I assumed that this was meant in a very strict sense, essentially equating it to interface implementation and type-safe casting. Which would mean that LSP is either ensured or not by the language itself. For example, in this strict sense, ThreeDBoard is certainly substitutable for Board, as far as the compiler is concerned.
After reading up more on the concept though I found that LSP is generally interpreted more broadly than that.
In short, what it means for client code to "know" that the object behind the pointer is of a derived type rather than the pointer type is not restricted to type-safety. Adherence to LSP is also testable through probing the objects actual behavior. That is, examining the impact of an object's state and method arguments on the results of the method calls, or the types of exceptions thrown from the object.
Going back to the example again: in theory the Board methods can be made to work just fine on ThreeDBoard. In practice, however, it will be very difficult to prevent differences in behavior that clients may not handle properly, without hobbling the functionality that ThreeDBoard is intended to add.
With this knowledge in hand, evaluating LSP adherence can be a great tool in determining when composition is the more appropriate mechanism for extending existing functionality, rather than inheritance.
The Liskov Substitution Principle
The overridden method shouldn’t remain empty
The overridden method shouldn’t throw an error
Base class or interface behavior should not have to be modified (reworked) because of derived class behaviors.
The LSP in simple terms states that objects of the same superclass should be able to be swapped with each other without breaking anything.
For example, if we have a Cat and a Dog class derived from an Animal class, any functions using the Animal class should be able to use Cat or Dog and behave normally.
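A minimal sketch of that idea (the speak() method and its messages are invented for illustration):

```python
class Animal:
    def speak(self):
        return "..."

class Cat(Animal):
    def speak(self):
        return "meow"

class Dog(Animal):
    def speak(self):
        return "woof"

def describe(animal):
    # Written against Animal; works unchanged for any well-behaved subtype.
    return f"The animal says {animal.speak()}"

print(describe(Cat()))   # The animal says meow
print(describe(Dog()))   # The animal says woof
```

describe() never needs to know which concrete subclass it received; that is the swap-without-breaking-anything property in action.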
This formulation of the LSP is way too strong:
If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T.
Which basically means that S is another, completely encapsulated implementation of the exact same thing as T. And I could be bold and decide that performance is part of the behavior of P...
So, basically, any use of late binding violates the LSP. It's the whole point of OO to obtain a different behavior when we substitute an object of one kind for one of another kind!
The formulation cited by wikipedia is better since the property depends on the context and does not necessarily include the whole behavior of the program.
In a very simple sentence, we can say:
The child class must not violate its base class's characteristics. It must be compatible with them. We can say this is the same as subtyping.
Liskov's Substitution Principle (LSP)
Whenever we design a program module, we create some class
hierarchies. Then we extend some classes, creating some derived
classes.
We must make sure that the new derived classes just extend without
replacing the functionality of the old classes. Otherwise, the new classes
can produce undesired effects when they are used in existing program
modules.
Liskov's Substitution Principle states that if a program module is
using a Base class, then the reference to the Base class can be
replaced with a Derived class without affecting the functionality of
the program module.
Example:
Below is the classic example for which the Liskov Substitution Principle is violated. In the example, two classes are used: Rectangle and Square. Let's assume that the Rectangle object is used somewhere in the application. We extend the application and add the Square class. The Square class is returned by a factory pattern, based on some conditions, and we don't know exactly what type of object will be returned. But we know it's a Rectangle. We get the rectangle object, set the width to 5 and the height to 10, and get the area. For a rectangle with width 5 and height 10, the area should be 50. Instead, the result will be 100.
// Violation of Liskov's Substitution Principle
class Rectangle {
protected int m_width;
protected int m_height;
public void setWidth(int width) {
m_width = width;
}
public void setHeight(int height) {
m_height = height;
}
public int getWidth() {
return m_width;
}
public int getHeight() {
return m_height;
}
public int getArea() {
return m_width * m_height;
}
}
class Square extends Rectangle {
public void setWidth(int width) {
m_width = width;
m_height = width;
}
public void setHeight(int height) {
m_width = height;
m_height = height;
}
}
class LspTest {
private static Rectangle getNewRectangle() {
// it can be an object returned by some factory ...
return new Square();
}
public static void main(String args[]) {
Rectangle r = LspTest.getNewRectangle();
r.setWidth(5);
r.setHeight(10);
// the user knows that r is a rectangle.
// He assumes that he's able to set the width and height as for the base
// class
System.out.println(r.getArea());
// now he's surprised to see that the area is 100 instead of 50.
}
}
Conclusion:
This principle is just an extension of the Open Close Principle and it
means that we must make sure that new derived classes are extending
the base classes without changing their behavior.
See also: Open Close Principle
Some similar concepts for better structure: Convention over configuration
This principle was introduced by Barbara Liskov in 1987 and extends the Open-Closed Principle by focusing on the behavior of a superclass and its subtypes.
Its importance becomes obvious when we consider the consequences of violating it. Consider an application that uses the following class.
public class Rectangle
{
private double width;
private double height;
public double Width
{
get
{
return width;
}
set
{
width = value;
}
}
public double Height
{
get
{
return height;
}
set
{
height = value;
}
}
}
Imagine that one day, the client demands the ability to manipulate squares in addition to rectangles. Since a square is a rectangle, the square class should be derived from the Rectangle class.
public class Square : Rectangle
{
}
However, by doing that we will encounter two problems:
A square does not need both height and width variables inherited from the rectangle and this could create a significant waste in memory if we have to create hundreds of thousands of square objects.
The width and height setter properties inherited from the rectangle are inappropriate for a square since the width and height of a square are identical.
In order to set both height and width to the same value, we can create two new properties as follows:
public class Square : Rectangle
{
public double SetWidth
{
set
{
base.Width = value;
base.Height = value;
}
}
public double SetHeight
{
set
{
base.Height = value;
base.Width = value;
}
}
}
Now, when someone will set the width of a square object, its height will change accordingly and vice-versa.
Square s = new Square();
s.SetWidth = 1; // Sets width and height to 1.
s.SetHeight = 2; // Sets width and height to 2.
Let's move forward and consider this other function:
public void A(Rectangle r)
{
r.Width = 32; // calls the Rectangle.Width setter
}
If we pass a reference to a square object into this function, we would violate the LSP because the function does not work for derivatives of its argument. The properties Width and Height aren't polymorphic because they aren't declared virtual in Rectangle (the square object will be corrupted because the height won't be changed).
However, by declaring the setter properties to be virtual we will face another violation, the OCP. In fact, the creation of a derived class square is causing changes to the base class rectangle.
Some addendum: I wonder why nobody wrote about the invariants, preconditions and postconditions of the base class that must be obeyed by the derived classes.
For a derived class D to be completely substitutable for the base class B, class D must obey certain conditions:
Invariants of the base class must be preserved by the derived class
Pre-conditions of the base class must not be strengthened by the derived class
Post-conditions of the base class must not be weakened by the derived class.
So the derived class must be aware of the above three conditions imposed by the base class. Hence, the rules of subtyping are pre-decided, which means the 'IS A' relationship shall be obeyed only when certain rules are obeyed by the subtype. These rules, in the form of invariants, preconditions and postconditions, should be decided by a formal 'design contract'.
Further discussions on this available at my blog: Liskov Substitution principle
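A sketch of what breaking condition 2 looks like (Repository and StrictRepository are invented classes): the subtype strengthens the precondition, so a caller that was valid against the base class breaks.

```python
class Repository:
    def find(self, key):
        # base precondition: key is any non-empty string
        assert isinstance(key, str) and key
        return f"row:{key}"

class StrictRepository(Repository):
    def find(self, key):
        # strengthened precondition (LSP violation):
        # rejects lowercase keys that the base class accepted
        assert key.isupper()
        return super().find(key)

def client(repo):
    # perfectly valid against Repository's contract...
    return repo.find("abc")

print(client(Repository()))       # row:abc
# client(StrictRepository())      # AssertionError: precondition strengthened
```

The client did nothing wrong; the subtype changed the deal, which is exactly what the three conditions above forbid.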
It states that if C is a subtype of E then E can be replaced with objects of type C without changing or breaking the behavior of the program. In simple words, derived classes should be substitutable for their parent classes. For example, if a Farmer’s son is Farmer then he can work in place of his father but if a Farmer’s son is a cricketer then he can’t work in place of his father.
Violation Example:
public class Plane{
public void startEngine(){}
}
public class FighterJet extends Plane{}
public class PaperPlane extends Plane{}
In the given example, the FighterJet and PaperPlane classes both extend the Plane class, which contains the startEngine() method. It's clear that a FighterJet can start its engine but a PaperPlane can't, so it's breaking LSP.
The PaperPlane class, although extending the Plane class, should be substitutable in place of it, but it is not an eligible entity to replace a Plane instance, because a paper plane can't start an engine it doesn't have. So a good example would be:
Respected Example:
public class Plane{
}
public class RealPlane extends Plane{
public void startEngine(){}
}
public class FighterJet extends RealPlane{}
public class PaperPlane extends Plane{}
A square is a rectangle where the width equals the height. If the square sets two different sizes for the width and height, it violates the square invariant. This is commonly worked around by introducing side effects. But suppose the rectangle had a setSize(height, width) with the precondition 0 < height and 0 < width. The derived subtype method requires height == width: a stronger precondition, and that violates LSP. This shows that though a square is a rectangle, it is not a valid subtype, because the precondition is strengthened. The work-around (in general a bad thing) causes a side effect, and this weakens the postcondition (which also violates LSP): setWidth on the base has the postcondition 0 < width with the height unchanged, while the derived class weakens it by also forcing height == width.
Therefore a resizable square is not a resizable rectangle.
The big picture :
What is the Liskov Substitution Principle about? It's about what is (and what is not) a subtype of a given type.
Why is it so important? Because there is a difference between a subtype and a subclass.
Example
Unlike the other answers, I won't start with a Liskov Substitution Principle (LSP) violation, but with a LSP compliance. I use Java but it would be almost the same in every OOP language.
Circle and ColoredCircle
Geometrical examples seem pretty popular here.
class Circle {
private int radius;
public Circle(int radius) {
if (radius < 0) {
throw new RuntimeException("Radius should be >= 0");
}
this.radius = radius;
}
public int getRadius() {
return this.radius;
}
}
The radius is not allowed to be negative. Here's a subclass:
class ColoredCircle extends Circle {
private Color color; // defined elsewhere
public ColoredCircle(int radius, Color color) {
super(radius);
this.color = color;
}
public Color getColor() {
return this.color;
}
}
This subclass is a subtype of Circle, according to the LSP.
The LSP states that:
If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T. (Barbara Liskov, "Data Abstraction and Hierarchy", SIGPLAN Notices, 23,5 (May, 1988))
Here, for each ColoredCircle instance o1, consider the Circle instance having the same radius o2. For every program using Circle objects, if you replace o2 by o1, the behavior of any program using Circle will remain the same after the substitution. (Note that this is theoretical : you will exhaust the memory faster using ColoredCircle instances than using Circle instances, but that's not relevant here.)
How do we find the o2 corresponding to o1? We just strip the color attribute and keep the radius attribute. I call the transformation o1 -> o2 a projection from the ColoredCircle space onto the Circle space.
Counter Example
Let's create another example to illustrate the violation of the LSP.
Circle and Square
Imagine this subclass of the previous Circle class:
class Square extends Circle {
private int sideSize;
public Square(int sideSize) {
super(0);
this.sideSize = sideSize;
}
@Override
public int getRadius() {
return -1; // I'm a square, I don't care
}
public int getSideSize() {
return this.sideSize;
}
}
The violation of the LSP
Now, look at this program:
public class Liskov {
public static void program(Circle c) {
System.out.println("The radius is "+c.getRadius());
}
We test the program with a Circle object and with a Square object.
public static void main(String [] args){
Liskov.program(new Circle(2)); // prints "The radius is 2"
Liskov.program(new Square(2)); // prints "The radius is -1"
}
}
What happened? Intuitively, although Square is a subclass of Circle, Square is not a subtype of Circle, because no regular Circle instance would ever have a radius of -1.
Formally, this is a violation of Liskov Substitution Principle.
We have a program defined in terms of Circle, and there is no Circle object that can replace new Square(2) (or any Square instance, by the way) in this program and leave the behavior unchanged: remember that the radius of any Circle is never negative.
Subclass and subtype
Now we know why a subclass is not always a subtype. When a subclass is not a subtype, i.e. when there is an LSP violation, the behavior of some programs (at least one) won't be the expected behavior. This is very frustrating and is usually interpreted as a bug.
In an ideal world, the compiler or interpreter would be able to check if a given subclass is a real subtype, but we are not in an ideal world.
Static typing
If there is some static typing, you are bound by the superclass signature at compile time. Square.getRadius() can't return a String or a List.
If there is no static typing, you'll get an error at runtime if the type of one argument is wrong (unless the typing is weak) or the number of arguments is inconsistent (unless the language is very permissive).
Note about static typing: there is a mechanism of covariance of the return type (a method of S can return a subclass of the return type of the same method of T) and contravariance of the parameter types (a method of S can accept a superclass of the parameter type of the same method of T). That is a specific case of the preconditions and postconditions explained below.
Design by contract
There's more. Some languages (I think of Eiffel) provide a mechanism to enforce the compliance with the LSP.
Leaving aside the determination of the projection o2 of the initial object o1, we can expect the same behavior of any program when o1 is substituted for o2 if, for any argument x and any method f:
if o2.f(x) is a valid call, then o1.f(x) should also be a valid call (1).
the result (return value, display on console, etc.) of o1.f(x) should be equal to the result of o2.f(x), or at least equally valid (2).
o1.f(x) should leave o1 in an internal state, and o2.f(x) should leave o2 in an internal state, such that subsequent function calls will ensure that (1), (2) and (3) still hold (3).
(Note that (3) is given for free if the function f is pure. That's why we like to have immutable objects.)
These conditions are about the semantics (what to expect) of the class, not only the syntax of the class. Also, these conditions are very strong. But they can be approximated by assertions in design by contract programming. These assertions are a way to ensure that the semantic of the type is upheld. Breaking the contract leads to runtime errors.
The precondition defines what is a valid call. When subclassing a class, the precondition may only be weakened (S.f accepts more than T.f) (a).
The postcondition defines what is a valid result. When subclassing a class, the postcondition may only be strengthened (S.f provides more than T.f) (b).
The invariant defines what is a valid internal state. When subclassing a class, the invariant must remain the same (c).
We see that, roughly, (a) ensures (1) and (b) ensures (2), but (c) is weaker than (3). Moreover, assertions are sometimes difficult to express.
Think of a class Counter having a unique method Counter.counter() that returns the next integer. How do you write a postcondition for that? Think of a class Random having a method Random.gaussian() that returns a float between 0.0 and 1.0. How do you write a postcondition to check that the distribution is Gaussian? It may be possible, but the cost would be so high that we rely on tests rather than on postconditions.
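Such contract assertions can be approximated in plain Python with a small decorator (a sketch, not a full design-by-contract framework; Stack and the lambdas are invented for illustration):

```python
def contract(pre=None, post=None):
    # Wrap a method with precondition/postcondition assertions.
    def wrap(method):
        def checked(self, *args):
            if pre:
                assert pre(self, *args), "precondition violated"
            result = method(self, *args)
            if post:
                assert post(self, result), "postcondition violated"
            return result
        return checked
    return wrap

class Stack:
    def __init__(self):
        self.items = []

    @contract(pre=lambda self: len(self.items) > 0,
              post=lambda self, result: len(self.items) >= 0)
    def pop(self):
        return self.items.pop()

s = Stack()
s.items.append(42)
print(s.pop())   # 42
# s.pop()        # AssertionError: precondition violated (empty stack)
```

Breaking the contract surfaces as a runtime error at the call site, which is exactly the design-by-contract behavior described above.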
Conclusion
Unfortunately, a subclass is not always a subtype. This can lead to an unexpected behavior -- a bug.
OOP languages provide mechanisms to avoid this situation: at the syntactic level first, and at the semantic level too, depending on the programming language (part of the semantics can be encoded in the text of the program using assertions). But it's up to you to ensure that a subclass is a subtype.
Remember when you began to learn OOP ? "If the relation is IS-A, then use inheritance". That's true the other way: if you use inheritance, be sure that the relation is IS-A.
The LSP defines, at a higher level than assertions, what is a subtype. Assertions are a valuable tool to ensure that the LSP is upheld.
Would implementing ThreeDBoard in terms of an array of Board be that useful?
Perhaps you may want to treat slices of ThreeDBoard in various planes as a Board. In that case you may want to abstract out an interface (or abstract class) for Board to allow for multiple implementations.
In terms of external interface, you might want to factor out a Board interface for both TwoDBoard and ThreeDBoard (although none of the above methods fit).
The clearest explanation for LSP I found so far has been "The Liskov Substitution Principle says that the object of a derived class should be able to replace an object of the base class without bringing any errors in the system or modifying the behavior of the base class" from here. The article gives code example for violating LSP and fixing it.
Let's say we use a rectangle in our code
r = new Rectangle();
// ...
r.setDimensions(1,2);
r.fill(colors.red());
canvas.draw(r);
In our geometry class we learned that a square is a special type of rectangle because its width is the same length as its height. Let's make a Square class as well based on this info:
class Square extends Rectangle {
setDimensions(width, height){
assert(width == height);
super.setDimensions(width, height);
}
}
If we replace the Rectangle with Square in our first code, then it will break:
r = new Square();
// ...
r.setDimensions(1,2); // assertion width == height failed
r.fill(colors.red());
canvas.draw(r);
This is because the Square has a new precondition we did not have in the Rectangle class: width == height. According to the LSP, Rectangle instances should be substitutable with Rectangle subclass instances. Square instances pass the type check for Rectangle instances, and yet they will cause unexpected errors in your code.
This was an example for the "preconditions cannot be strengthened in a subtype" part in the wiki article. So to sum up, violating LSP will probably cause errors in your code at some point.
LSP says that ''Objects should be replaceable by their subtypes''.
On the other hand, this principle points to
Child classes should never break the parent class's type definitions.
and the following example helps to have a better understanding of LSP.
Without LSP:
public interface CustomerLayout{
public void render();
}
public class FreeCustomer implements CustomerLayout {
...
@Override
public void render(){
//code
}
}
public class PremiumCustomer implements CustomerLayout{
...
@Override
public void render(){
if(!hasSeenAd)
return; //it isn't rendered in this case
//code
}
}
public void renderView(CustomerLayout layout){
layout.render();
}
Fixing by LSP:
public interface CustomerLayout{
public void render();
}
public class FreeCustomer implements CustomerLayout {
...
@Override
public void render(){
//code
}
}
public class PremiumCustomer implements CustomerLayout{
...
@Override
public void render(){
if(!hasSeenAd)
showAd();//it has a specific behavior based on its requirement
//code
}
}
public void renderView(CustomerLayout layout){
layout.render();
}
I encourage you to read the article: Violating Liskov Substitution Principle (LSP).
You can find there an explanation what is the Liskov Substitution Principle, general clues helping you to guess if you have already violated it and an example of approach that will help you to make your class hierarchy be more safe.
LISKOV SUBSTITUTION PRINCIPLE (from Mark Seemann's book) states that we should be able to replace one implementation of an interface with another without breaking either the client or the implementation. It's this principle that enables us to address requirements that occur in the future, even if we can't foresee them today.
If we unplug the computer from the wall (Implementation), neither the wall outlet (Interface) nor the computer (Client) breaks down (in fact, if it’s a laptop computer, it can even run on its batteries for a period of time). With software, however, a client often expects a service to be available. If the service was removed, we get a NullReferenceException. To deal with this type of situation, we can create an implementation of an interface that does “nothing.” This is a design pattern known as Null Object,[4] and it corresponds roughly to unplugging the computer from the wall. Because we’re using loose coupling, we can replace a real implementation with something that does nothing without causing trouble.
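A sketch of the Null Object idea (Logger, ConsoleLogger, and the client are invented for illustration): because the "does nothing" implementation satisfies the same interface, the client never has to check for a missing service.

```python
class Logger:
    def log(self, message):
        raise NotImplementedError

class ConsoleLogger(Logger):
    def log(self, message):
        print(message)

class NullLogger(Logger):
    def log(self, message):
        pass   # deliberately does nothing: the "unplugged" service

class Client:
    def __init__(self, logger):
        self.logger = logger   # never None, so no None checks are needed

    def work(self):
        self.logger.log("working")
        return "done"

Client(ConsoleLogger()).work()   # prints "working"
Client(NullLogger()).work()      # logs nothing, still works
```

Swapping ConsoleLogger for NullLogger changes nothing for the client, which is the LSP-friendly loose coupling the passage describes.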