How to easily access widely different subsets of fields of related objects/DB tables? - sql

Imagine we have a number of related objects (equivalently DB tables), for example:
public class Person {
private String name;
private Date birthday;
private int height;
private Job job;
private House house;
..
}
public class Job {
private String company;
private int salary;
..
}
public class House {
private Address address;
private int age;
private int numRooms;
..
}
public class Address {
private String town;
private String street;
..
}
What is the best way to design a system for easily defining and accessing widely varying subsets of data on these objects/tables? Design patterns, with pros and cons, are very welcome. I'm using Java, but this is a more general problem.
For example, I want to easily say:
I'd like some object with (Person.name, Person.height, Job.company, Address.street)
I'd like some object with (Job.company, House.numRooms, Address.town)
Etc.
Other assumptions:
We can assume that we're always getting a known structure of objects on the input, e.g. a Person with its Job, House, and Address.
The resulting object doesn't necessarily need to know the names of the fields it was constructed from; i.e., for a subset defined as (Person.name, Person.height, Job.company, Address.street) it can simply be an array of Objects: {"Joe Doe", 180, "ACompany Inc.", "Main Street"}.
The object/table hierarchy is complex, so there are hundreds of data fields.
There may be hundreds of subsets that need to be defined.
A minority of the fields to obtain may be computed from actual fields, e.g. I may want to get a person's age, computed as (now().getYear() - Person.birthday.getYear()).
Here are some options I see:
A SQL view for each subset.
Minuses:
They will be almost the same for similar subsets. That is acceptable for the field lists, but not great for the join logic, which could ideally be refactored out to a common place.
Less testable than a solution in code.
Using a DTO assembler, e.g. http://www.genericdtoassembler.org/
This could be used to flatten the complex structure of input objects into a single DTO.
Minuses:
I'm not sure how I'd then proceed to easily define subsets of fields on this DTO. Perhaps if I could somehow set the ones irrelevant to the current subset to null? Not sure how.
Not sure if I can do computed fields easily in this way.
A custom mapper I came up with.
Relevant code:
// The enum has a value for each field in the Person objects hierarchy
// that we may be interested in.
public enum DataField {
PERSON_NAME(new PersonNameExtractor()),
..
PERSON_AGE(new PersonAgeExtractor()),
..
COMPANY(new CompanyExtractor()),
..
}
// This is the container for field-value pairs from a given instance of
// the object hierarchy.
public class Vector {
private Map<DataField, Object> fields;
..
}
// Extractors know how to get the value for a given DataField
// from the object hierarchy. There's one extractor per each field.
public interface Extractor<T> {
public T extract(Person person);
}
public class PersonNameExtractor implements Extractor<String> {
public String extract(Person person) {
return person.getName();
}
}
public class PersonAgeExtractor implements Extractor<Integer> {
public Integer extract(Person person) {
return now().getYear() - person.getBirthday().getYear();
}
}
public class CompanyExtractor implements Extractor<String> {
public String extract(Person person) {
return person.getJob().getCompany();
}
}
// Building the Vector using all the fields from the DataField enum
// and the extractors.
public class FullVectorBuilder {
public Vector buildVector(Person person) {
Vector vector = new Vector();
for (DataField field : DataField.values()) {
vector.addField(field, field.getExtractor().extract(person));
}
return vector;
}
}
// Definition of a subset of fields on the Vector.
public interface Selector {
public List<DataField> getFields();
}
public class SampleSubsetSelector implements Selector {
private List<DataField> fields = ImmutableList.of(PERSON_NAME, COMPANY);
...
}
// Finally, a builder for the subset Vector, choosing only
// fields pointed to by the selector.
public class SubsetVectorBuilder {
public Vector buildSubsetVector(Vector fullVector, Selector selector) {
Vector subsetVector = new Vector();
for (DataField field : selector.getFields()) {
subsetVector.addField(field, fullVector.getValue(field));
}
return subsetVector;
}
}
Minuses:
Need to create a tiny Extractor class for each of hundreds of data fields.
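One way I can think of to mitigate this, assuming Java 8 is available, is to pass a function into each enum constant instead of writing a dedicated class per field. A rough sketch (the getters are the ones implied by the fields above, and now() is the same helper as in the extractors):
    import java.util.function.Function;

    // Each enum constant carries its extraction logic as a lambda or method
    // reference, so there is no separate Extractor class per field.
    public enum DataField {
        PERSON_NAME(Person::getName),
        PERSON_AGE(p -> now().getYear() - p.getBirthday().getYear()),
        COMPANY(p -> p.getJob().getCompany());

        private final Function<Person, Object> extractor;

        DataField(Function<Person, Object> extractor) {
            this.extractor = extractor;
        }

        public Object extractFrom(Person person) {
            return extractor.apply(person);
        }
    }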
This is a custom solution I came up with; it seems to work and I like it, but I feel this problem must have been encountered and solved before, likely in a better way. Has it?
Edit
A fourth option: each object knows how to turn itself into a Map of fields, keyed on an enum of all fields.
E.g.
public enum DataField {
PERSON_NAME,
..
PERSON_AGE,
..
COMPANY,
..
}
public class Person {
private String name;
private Date birthday;
private int height;
private Job job;
private House house;
..
public Map<DataField, Object> toMap() {
return ImmutableMap.<DataField, Object>builder()
.put(DataField.PERSON_NAME, name)
.put(DataField.BIRTHDAY, birthday)
.put(DataField.HEIGHT, height)
.put(DataField.PERSON_AGE, now().getYear() - birthday.getYear())
.build();
}
}
Then, I could build a Vector combining all the Maps, and select subsets from it like in 3.
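A rough sketch of that combining step (the map-taking Vector constructor and the getters are assumptions on top of the code above):
    import java.util.EnumMap;
    import java.util.Map;

    // Builds the full Vector by merging the per-object maps.
    public class MapCombiningVectorBuilder {
        public Vector buildFullVector(Person person) {
            Map<DataField, Object> combined = new EnumMap<>(DataField.class);
            combined.putAll(person.toMap());
            combined.putAll(person.getJob().toMap());
            combined.putAll(person.getHouse().toMap());
            combined.putAll(person.getHouse().getAddress().toMap());
            return new Vector(combined); // hypothetical constructor taking the merged map
        }
    }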
Minuses:
Enum name clashes, e.g. if Job has an Address and House has an Address, then I want to be able to specify a subset taking the street names of both. But how would I then define the toMap() method in the Address class?
No obvious place to put code that computes fields requiring data from more than one object, e.g. the physical distance from the House's Address to the company's Address.
Many thanks!

Rather than in-memory object mapping in the application, I would favor processing the data in the database for better performance. Views, or more elaborate OLAP/data-warehouse tooling, could do the trick. If the calculated fields remain basic, as in "age = now - birth", I see nothing wrong with having that logic in the DB.
On the code side, given the large number of DTOs you have to deal with, you could use classless dynamic objects (available in some JVM languages) or JSON objects. The idea is that when a data structure changes, you only need to modify the DB and the UI, saving you the cost of changing a whole bunch of classes in between.
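For instance, a subset could be assembled as an ad-hoc JSON object rather than a dedicated DTO class. A minimal sketch, assuming the Jackson library and the usual getters on the question's classes:
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.databind.node.ObjectNode;

    public class SubsetJsonBuilder {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        // One ad-hoc JSON object per subset; adding or dropping a field
        // touches only this method and the UI that consumes the JSON.
        public static ObjectNode personSummary(Person person) {
            ObjectNode node = MAPPER.createObjectNode();
            node.put("name", person.getName());
            node.put("height", person.getHeight());
            node.put("company", person.getJob().getCompany());
            node.put("street", person.getHouse().getAddress().getStreet());
            return node;
        }
    }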

Related

OOP creating and copying an object that depends on one value

I am sorry but I didn't know what to call this post (if you have a better title please tell me in a comment).
Say for instance you have the following Object whose purpose is to create chart series of the data specified in the Constructor:
/**
* Helper to generate chart series
*/
public class ChartHelper
{
public System.Windows.Forms.DataVisualization.Charting.Chart ChartType { get; set; }
public String TimeType { get; set; }
private readonly List<IObject> _datalist;
private readonly TimeType _timeType;
private readonly DateTime _stopDate;
private readonly DateTime _startDate;
public ChartHelper(List<IObject> dataList, TimeType timeType, DateTime startDate, DateTime stopDate)
{
_startDate = startDate;
_stopDate = stopDate;
_datalist = dataList;
_timeType = timeType;
}
public System.Windows.Forms.DataVisualization.Charting.Chart GetChart()
{
CreateSeries(_startDate);
return ChartType;
}
private void CreateSeries(DateTime seriesTime)
{
//Do something
}
//More internal private methods
}
Now say for instance you have a program that creates 10 different Charts but only the value of the List<IObject> dataList changes.
Then you could do one of two things:
Create 10 different ChartHelper Objects
Use the same Object and change the dataList value
This is, of course, just an example of how the problem can present itself during development (I've met this problem several times).
My question is: is there a design pattern that helps you solve this issue? Or is there a best-practice method that would be useful in these situations? It is important for me to learn these methods as I wish to improve my own skills.
If only the data is different then I would recommend using the same class and creating 10 different objects from it.
If, however, the implementation of CreateSeries differs depending on the type of data, then this would be a candidate for the Strategy pattern. In that case you would extract the creation of the series behind an interface and provide implementations for the different kinds of series. You could then also have a factory that picks the correct strategy depending on the data and composes a chart (helper).
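A rough sketch of that strategy-plus-factory idea (all names below are invented for illustration, and it is sketched in Java, though the shape is the same in C#):
    import java.util.List;

    // Placeholder data type standing in for the real chart data.
    class DataPoint { double x, y; }

    // The series-creation behaviour is extracted behind an interface.
    interface SeriesStrategy {
        void createSeries(List<DataPoint> data);
    }

    class LineSeriesStrategy implements SeriesStrategy {
        public void createSeries(List<DataPoint> data) { /* build a line series */ }
    }

    class ScatterSeriesStrategy implements SeriesStrategy {
        public void createSeries(List<DataPoint> data) { /* build a scatter series */ }
    }

    class SeriesStrategyFactory {
        // Picks the appropriate strategy based on the data; the chart helper
        // then just delegates to whatever strategy it is given.
        static SeriesStrategy strategyFor(List<DataPoint> data) {
            return data.size() > 100 ? new LineSeriesStrategy() : new ScatterSeriesStrategy();
        }
    }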

Pattern name/convention -> Class that merges different attributes from other classes

I wanted to know if there is a known pattern or convention for the following scenario:
I have two classes: MAT (name:String, address:String) & MATversion(type:String, version:int)
Now I have a DataGrid (DataTable) which will take a generic List of objects for the column mapping and data filling.
The columns should be name, type, version (which are distributed across MAT and MATversion).
So I create a class to make this work. This class will merge the needed properties from each class (MAT, MATversion).
-> MAT_MATversion (name:String, type:String, version:int).
Does a naming convention exist for a class like MAT_MATversion? Is there any pattern that mirrors this?
Thanks!
Is there any specific reason why the merged result has to be its own class?
I'm assuming every MAT object has a single MATversion.
You can add a couple of custom properties that return the type and version of the underlying MATversion object.
In C# this would result in something like this:
public class MAT
{
    public String name { get; set; }
    public String address { get; set; }
    public MATversion myVersion;

    public String type
    {
        get { return myVersion.type; }
        set { myVersion.type = value; }
    }

    public int version
    {
        get { return myVersion.version; }
        set { myVersion.version = value; }
    }
}
I'm aware that this doesn't answer the question about design patterns, but I couldn't ask/suggest another approach in a comment since I don't have that right yet.

Fluent NHibernate, Custom Types and Id mapping

I have an object in C# that I want to use as a primary key in a database that auto-increments when new objects are added. The object is basically a wrapper around a ulong value that uses some bits of the value for additional hints. I want to store it as a 'pure' ulong value in the database, but I would like to get an automatic conversion when the value is loaded from / saved to the DB, i.e. apply the 'hint' bits to the value based on the table it comes from.
I went on a journey of implementing my own IUserType object based on a number of examples I found online (tons of help on this forum).
I have an ObjectId class that acts as an object ID:
class ObjectIdType: IUserType
{
private static readonly NHibernate.SqlTypes.SqlType[] SQL_TYPES = { NHibernateUtil.UInt64.SqlType };
public NHibernate.SqlTypes.SqlType[] SqlTypes
{
get { return SQL_TYPES; }
}
public Type ReturnedType
{
get { return typeof(ObjectId); }
}
...
}
I have a mapping class that looks like this:
public class ObjectTableMap : ClassMap<ObjectTable> // entity type name assumed here
{
    public ObjectTableMap()
    {
        Id(x => x.Id)
            .Column("instance_id")
            .CustomType<ObjectIdType>()
            .GeneratedBy.Native();
    }
}
At this point I get an exception at configuration time saying that the Id can only be an integer. I guess that makes sense, but I was half expecting that, with the custom type implemented, the native ulong database type would take over and work.
I've tried to go down the path of creating a custom generator, but it's still a bit beyond my skill level, so I am stumbling through it.
My question is, is it possible for me to accomplish what I am trying to do with the mapping?
I think it is not possible, because your mapping uses the native generator for the Id. This can only be used for integral types (and GUIDs). You can try to use assigned Ids with your custom type, so you are responsible for assigning the values to your Id property.
There is another alternative: why not set your information bits at the class level, instead of depending on the table? Your entities represent the tables, so you should have the same information in your entity classes. Example:
class Entity
{
protected virtual ulong InternalId { get; set; } // Mapped as Id
public virtual ulong Id // This property is not mapped
{
get
{
var retVal = InternalId;
// Flip your hint bits here based on class information
return retVal;
}
}
}
You could also turn InternalId into a public property and make the setter protected.

What is the real significance (use) of polymorphism

I am new to OOP. Though I understand what polymorphism is, I can't see the real use of it. I could just have functions with different names. Why should I try to implement polymorphism in my application?
Classic answer: imagine a base class Shape. It exposes a GetArea method. Imagine a Square class, a Rectangle class, and a Circle class. Instead of creating separate GetSquareArea, GetRectangleArea and GetCircleArea methods, you implement just one method in each of the derived classes. You don't have to know which exact subclass of Shape you use; you just call GetArea and you get your result, independent of which concrete type it is.
Have a look at this code:
#include <iostream>
using namespace std;
class Shape
{
public:
virtual float GetArea() = 0;
};
class Rectangle : public Shape
{
public:
Rectangle(float a) { this->a = a; }
float GetArea() { return a * a; }
private:
float a;
};
class Circle : public Shape
{
public:
Circle(float r) { this->r = r; }
float GetArea() { return 3.14f * r * r; }
private:
float r;
};
int main()
{
Shape *a = new Circle(1.0f);
Shape *b = new Rectangle(1.0f);
cout << a->GetArea() << endl;
cout << b->GetArea() << endl;
}
An important thing to notice here is - you don't have to know the exact type of the class you're using, just the base type, and you will get the right result. This is very useful in more complex systems as well.
Have fun learning!
Have you ever added two integers with +, and then later added an integer to a floating-point number with +?
Have you ever logged x.toString() to help you debug something?
I think you probably already appreciate polymorphism, just without knowing the name.
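A tiny made-up Java example of that: the same x.toString() call picks the right implementation for whatever x happens to be at run time.
    // Money overrides toString(); String and Integer already do.
    class Money {
        private final long cents;
        Money(long cents) { this.cents = cents; }
        @Override public String toString() { return (cents / 100.0) + " USD"; }
    }

    public class ToStringDemo {
        public static void main(String[] args) {
            Object[] things = { new Money(1250), "hello", 42 };
            for (Object x : things) {
                System.out.println(x.toString()); // prints "12.5 USD", "hello", "42"
            }
        }
    }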
In a strictly typed language, polymorphism is important in order to have a list/collection/array of objects of different types. This is because lists/arrays are themselves typed to contain only objects of the correct type.
Imagine for example we have the following:
// the following is pseudocode M'kay:
class apple;
class banana;
class kitchenKnife;
apple foo;
banana bar;
kitchenKnife bat;
apple *shoppingList = [foo, bar, bat]; // this is illegal because bar and bat are
// not of type apple.
To solve this:
class groceries;
class apple inherits groceries;
class banana inherits groceries;
class kitchenKnife inherits groceries;
apple foo;
banana bar;
kitchenKnife bat;
groceries *shoppingList = [foo, bar, bat]; // this is OK
It also makes processing the list of items more straightforward. Say, for example, all groceries implement the method price(); processing the list is then easy:
int total = 0;
foreach (item in shoppingList) {
total += item.price();
}
These two features are the core of what polymorphism does.
The advantage of polymorphism is that client code doesn't need to care about the actual implementation of a method.
Take a look at the following example.
Here CarBuilder doesn't know anything about ProduceCar(). Once it is given a list of cars (CarsToProduceList), it will produce all the necessary cars accordingly.
class CarBase
{
public virtual void ProduceCar()
{
Console.WriteLine("don't know how to produce");
}
}
class CarToyota : CarBase
{
public override void ProduceCar()
{
Console.WriteLine("Producing Toyota Car ");
}
}
class CarBmw : CarBase
{
public override void ProduceCar()
{
Console.WriteLine("Producing Bmw Car");
}
}
class CarUnknown : CarBase { }
class CarBuilder
{
public List<CarBase> CarsToProduceList { get; set; }
public void ProduceCars()
{
if (null != CarsToProduceList)
{
foreach (CarBase car in CarsToProduceList)
{
car.ProduceCar();// doesn't know how to produce
}
}
}
}
class Program
{
static void Main(string[] args)
{
CarBuilder carbuilder = new CarBuilder();
carbuilder.CarsToProduceList = new List<CarBase>() { new CarBmw(), new CarToyota(), new CarUnknown() };
carbuilder.ProduceCars();
}
}
Polymorphism is the foundation of object-oriented programming. It means that one object can behave as another object. So how can one object behave as another? It is possible through the following:
Inheritance
Overriding/Implementing parent Class behavior
Runtime Object binding
One of its main advantages is the ability to switch implementations. Let's say you are coding an application that needs to talk to a database, and you define a class that does the database operations for you; it is expected to support certain operations such as Add, Delete, Modify. You know that the database can be implemented in many ways: it could be talking to the file system or to an RDBMS server such as MySQL. So you, as the programmer, would define an interface that you can code against, such as...
public interface DBOperation {
public void addEmployee(Employee newEmployee);
public void modifyEmployee(int id, Employee newInfo);
public void deleteEmployee(int id);
}
Now you may have multiple implementations; let's say we have one for an RDBMS and another for the direct file system.
public class DBOperation_RDBMS implements DBOperation {
// implements DBOperation above stating that you intend to implement all
// methods in DBOperation
public void addEmployee(Employee newEmployee) {
// here I would get JDBC (Java's Interface to RDBMS) handle
// add an entry into database table.
}
public void modifyEmployee(int id, Employee newInfo) {
// here I use JDBC handle to modify employee, and id to index to employee
}
public void deleteEmployee(int id) {
// here I would use JDBC handle to delete an entry
}
}
Let's have the file-system database implementation:
public class DBOperation_FileSystem implements DBOperation {
public void addEmployee(Employee newEmployee) {
// here I would Create a file and add a Employee record in to it
}
public void modifyEmployee(int id, Employee newInfo) {
// here I would open file, search for record and change values
}
public void deleteEmployee(int id) {
// here I search entry by id, and delete the record
}
}
Let's see how main can switch between the two:
public class Main {
public static void main(String[] args) throws Exception {
Employee emp = new Employee();
... set employee information
DBOperation dboper = null;
// declare your db operation object; note there is no instance
// associated with it
if(args[0].equals("use_rdbms")) {
dboper = new DBOperation_RDBMS();
// here, conditionally, i.e. when the first argument to the program is
// use_rdbms, we instantiate the RDBMS implementation and associate it
// with the variable dboper, which is declared as DBOperation.
// this is where runtime binding of polymorphism kicks in
// JVM is allowing this assignment because DBOperation_RDBMS
// has a "is a" relationship with DBOperation.
} else if(args[0].equals("use_fs")) {
dboper = new DBOperation_FileSystem();
// similarly here conditionally we assign a different instance.
} else {
throw new RuntimeException("Dont know which implemnation to use");
}
dboper.addEmployee(emp);
// now dboper is referring to one of the implementations,
// based on the if conditions above
// by this point the JVM knows which implementation the dboper
// variable is associated with, and it will call the appropriate method
}
}
You can use the polymorphism concept in many places. One practical example: let's say you are writing an image decoder and you need to support a whole bunch of image formats such as JPG, TIF, PNG, etc. Your application would define an interface and work against it directly, and you would have runtime binding of the various implementations for JPG, TIF, PNG, and so on.
One other important use: if you are using Java, most of the time you would work against the List interface, so that you can use an ArrayList today or some other implementation as your application grows or its needs change.
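A small illustration of that point (plain Java, hypothetical names): the method only sees the List interface, so either implementation can be passed in.
    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    public class ListDemo {
        // Works for any List implementation, present or future.
        static void printUpperCase(List<String> names) {
            for (String n : names) {
                System.out.println(n.toUpperCase());
            }
        }

        public static void main(String[] args) {
            List<String> a = new ArrayList<>();
            a.add("alice");
            List<String> b = new LinkedList<>();
            b.add("bob");
            printUpperCase(a); // backed by ArrayList
            printUpperCase(b); // backed by LinkedList, no change to printUpperCase
        }
    }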
Polymorphism allows you to write code that uses objects. You can then later create new classes that your existing code can use with no modification.
For example, suppose you have a function Lib2Groc(vehicle) that directs a vehicle from the library to the grocery store. It needs to tell vehicles to turn left, so it can call TurnLeft() on the vehicle object among other things. Then if someone later invents a new vehicle, like a hovercraft, it can be used by Lib2Groc with no modification.
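A minimal sketch of that example (TurnLeft comes from the paragraph above; the other names and methods are invented):
    // The routing code only knows the Vehicle interface, so a later
    // Hovercraft class works with lib2Groc without modifying it.
    interface Vehicle {
        void turnLeft();
        void driveStraight(double miles);
    }

    class Car implements Vehicle {
        public void turnLeft() { System.out.println("Car turns left"); }
        public void driveStraight(double miles) { System.out.println("Car drives " + miles + " miles"); }
    }

    class Hovercraft implements Vehicle {
        public void turnLeft() { System.out.println("Hovercraft banks left"); }
        public void driveStraight(double miles) { System.out.println("Hovercraft glides " + miles + " miles"); }
    }

    class Router {
        static void lib2Groc(Vehicle v) {
            v.driveStraight(0.5);
            v.turnLeft();
            v.driveStraight(1.2);
        }
    }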
I guess sometimes objects are bound dynamically. You are not sure whether the object will be a triangle, a square, etc., as in the classic shape polymorphism example.
So, to leave all such concerns behind, we just call the method on the base type and trust that the override of the actual runtime class will be invoked.
You wouldn't care whether it's a square, triangle or rectangle; you just care about the area. Hence which getArea method is called depends on the dynamic type of the object passed in.
One of the most significant benefits you get from polymorphic operations is the ability to extend.
You can keep using the same operations, without changing existing interfaces and implementations, just because you need to add something new.
All we want from polymorphism is to simplify our design decisions and make our design more extensible and elegant.
You should also pay attention to the Open/Closed Principle (http://en.wikipedia.org/wiki/Open/closed_principle) and to SOLID (http://en.wikipedia.org/wiki/Solid_%28Object_Oriented_Design%29), which can help you understand the key OO principles.
P.S. I think you are talking about "dynamic polymorphism" (http://en.wikipedia.org/wiki/Dynamic_polymorphism), because there is also such a thing as "static polymorphism" (http://en.wikipedia.org/wiki/Template_metaprogramming#Static_polymorphism).
You don't need polymorphism.
Until you do.
Then it's friggen awesome.
Simple answer that you'll deal with lots of times:
Somebody needs to go through a collection of stuff. Let's say they ask for a collection of type MySpecializedCollectionOfAwesome. But you've been dealing with your instances of Awesome as a List<Awesome>. So now you're going to have to create an instance of MSCOA and fill it with every instance of Awesome you have in your List<Awesome>. Big pain in the butt, right?
Well, if they asked for an IEnumerable<Awesome>, you could hand them one of MANY collections of Awesome. You could hand them an array (Awesome[]) or a List (List<Awesome>) or an observable collection of Awesome or ANYTHING ELSE you keep your Awesome in that implements IEnumerable<T>.
The power of polymorphism lets you be type safe, yet be flexible enough that you can use an instance many many different ways without creating tons of code that specifically handles this type or that type.
Tabbed applications
A good application of this, to me, is generic buttons (shared by all tabs) within a tabbed application. Even the browser we are using implements polymorphism: it doesn't know at compile time (in the code, in other words) which tab we will be using. That is always determined at run time (right now, while we are using the browser).

The Object-Oriented way to separate the model from its representation

Suppose we have an object that represents the configuration of a piece of hardware. For the sake of argument, a temperature controller (TempController). It contains one property, the setpoint temperature.
I need to save this configuration to a file for use in some other device. The file format (FormatA) is set in stone. I don't want the TempController object to know about the file format... it's just not relevant to that object. So I make another object, "FormatAExporter", that transforms the TempController into the desired output.
A year later we make a new temperature controller, let's call it "AdvancedTempController", that not only has a setpoint but also has rate control, meaning one or two more properties. A new file format is also invented to store those properties... let's call it FormatB.
Both file formats are capable of representing both devices ( assume AdvancedTempController has reasonable defaults if it lacks settings ).
So here is the problem: Without using 'isa' or some other "cheating" way to figure out what type of object I have, how can FormatBExporter handle both cases?
My first instinct is to have a method in each temperature controller that can provide a custom exporter for that class, e.g., TempController.getExporter() and AdvancedTempController.getExporter(). This doesn't support multiple file formats well.
The only other approach that springs to mind is to have a method in each temperature controller that returns a list of properties and their values, and then the formatter can decide how to output those. It'd work, but that seems convoluted.
UPDATE: Upon further work, that latter approach doesn't really work well. If all your types are simple it might, but if your properties are Objects then you end up just pushing the problem down a level... you are forced to return a pair of String,Object values, and the exporter will have to know what the Objects actually are to make use of them. So it just pushes the problem to another level.
Are there any suggestions for how I might keep this flexible?
What you can do is let the TempControllers be responsible for persisting themselves using a generic archiver.
class TempController
{
    private Temperature _setPoint;
    public Temperature SetPoint { get; set; }

    public void ImportFrom(Archive archive)
    {
        SetPoint = (Temperature)archive.Read("SetPoint");
    }
    public void ExportTo(Archive archive)
    {
        archive.Write("SetPoint", SetPoint);
    }
}
class AdvancedTempController
{
    private Temperature _setPoint;
    private Rate _rateControl;
    public Temperature SetPoint { get; set; }
    public Rate RateControl { get; set; }

    public void ImportFrom(Archive archive)
    {
        SetPoint = (Temperature)archive.Read("SetPoint");
        RateControl = (Rate)archive.ReadWithDefault("RateControl", Rate.Zero);
    }
    public void ExportTo(Archive archive)
    {
        archive.Write("SetPoint", SetPoint);
        archive.Write("RateControl", RateControl);
    }
}
By keeping it this way, the controllers do not care how the actual values are stored but you are still keeping the internals of the object well encapsulated.
Now you can define an abstract Archive class that all archive classes can implement.
abstract class Archive
{
    public abstract object Read(string key);
    public abstract object ReadWithDefault(string key, object defaultValue);
    public abstract void Write(string key, object value);
}
The FormatA archiver can do it one way, and the FormatB archiver can do it another.
class FormatAArchive : Archive
{
    public override object Read(string key)
    {
        // read stuff
        return null; // placeholder
    }
    public override object ReadWithDefault(string key, object defaultValue)
    {
        // if the store contains the key, read stuff
        // else return the default value
        return defaultValue; // placeholder
    }
    public override void Write(string key, object value)
    {
        // write stuff
    }
}
class FormatBArchive : Archive
{
    public override object Read(string key)
    {
        // read stuff
        return null; // placeholder
    }
    public override object ReadWithDefault(string key, object defaultValue)
    {
        // if the store contains the key, read stuff
        // else return the default value
        return defaultValue; // placeholder
    }
    public override void Write(string key, object value)
    {
        // write stuff
    }
}
You can add in another Controller type and pass it whatever formatter. You can also create another formatter and pass it to whichever controller.
In C# or other languages that support this you can do this:
class TempController {
public int SetPoint;
}
class AdvancedTempController : TempController {
public int Rate;
}
class FormatAExporter {
void Export(TempController tc) {
Write(tc.SetPoint);
}
}
class FormatBExporter {
void Export(TempController tc) {
if (tc is AdvancedTempController) {
Write((tc as AdvancedTempController).Rate);
}
Write(tc.SetPoint);
}
}
I'd have the "temp controller", through a getState method, return a map (e.g. in Python a dict, in Javascript an object, in C++ a std::map or std::hashmap, etc, etc) of its properties and current values -- what's convoluted about it?! Could hardly be simpler, it's totally extensible, and totally decoupled from the use it's put to (displaying, serializing, &c).
Well, a lot of that depends on the file formats you're talking about.
If they're based on key/value combinations (including nested ones, like xml), then having some kind of intermediate memory object that's loosely typed that can be thrown at the appropriate file format writer is a good way to do it.
If not, then you've got a scenario where you've got four combinations of objects and file formats, with custom logic for each scenario. In that case, it may not be possible to have a single representation for each file format that can deal with either controller. In other words, if you can't generalize the file format writer, you can't generalize it.
I don't really like the idea of the controllers having exporters - I'm just not a fan of objects knowing about storage mechanisms and whatnot (they may know about the concept of storage, and have a specific instance given to them via some DI mechanism). But I think you're in agreement with that, and for pretty much the same reasons.
If FormatBExporter takes an AdvancedTempController, then you can make a bridge class that makes TempController conform to AdvancedTempController. You may need to add some sort of getFormat() function to AdvancedTempController though.
For example:
FormatBExporter exporterB;
TempController tempController;
AdvancedTempController bridged = TempToAdvancedTempBridge(tempController);
exporterB.export(bridged);
There is also the option of using a key-to-value mapping scheme. FormatAExporter exports/imports a value for the key "setpoint". FormatBExporter exports/imports values for the keys "setpoint" and "ratecontrol". This way, the old FormatAExporter can still read the new file format (it just ignores "ratecontrol") and FormatBExporter can read the old file format (if "ratecontrol" is missing, it uses a default).
In the OO model, the object's methods, taken as a collective, are the controller. It's more useful to separate your program into the M and the V, and not so much the C, if you're programming using OO.
I guess this is where the Factory Method pattern would apply.