This is a generic question; I don't know if it belongs to Programming or StackOverflow.
I'm writing a little simulation. Without going very deep into its details, consider that many kinds of entities are involved. They correspond to objects, since I'm using an OOP language.
- There are Guys that inhabit the simulated world
- There are Maps
- A Map has many Lots, which are pieces of land with certain characteristics
- There are Tribes (Guys belong to Tribes)
- There is a generic class called Position used to locate elements
- There are Bots in control of Tribes that move Guys around
- There is a World that represents the simulated world
- and so on.
If the simulated world were laid down as a database, the objects would be tables with lots of references between them, but in memory I have to use a different strategy. So, for example, a Tribe has an array of Guys as a property, the World has an array of Bots, of Tribes, and of Maps. A Map has a Dictionary whose keys are Positions and whose values are Lots. A Guy has a Position that is where he stands.
The way I lay down such connections is pretty much arbitrary. For example, I could have an array of Guys in the World, or an array of Guys per Lot (the Guys standing on that piece of land), or an array of Guys per Bot (the Guys controlled by that Bot).
Doing so, I also have to pass around a lot of objects. For example, a Bot must have information about the Map and the opponent Guys to decide how to move its own Guys.
As said, in a database I'd have a Guys table connected to the Lots table (indicating each Guy's position) and to the Tribes table (indicating which Tribe he belongs to), and so it would also be easy to query "all the Guys in Position [1, 5]", "all the Guys of Tribe 123", "all the Guys controlled by Bot B standing on Lot b34 and not belonging to Tribe 456", and so on.
I've worked with APIs where, to get the simplest information, you had to make an instance of CustomerContextCollection and pass it to CustomerQueryFactory to get back a CustomerInPlaceQuery to... When people criticize OOP and cite verbose abstractions that soon start to look ridiculous, that's what I mean. I want to avoid such things and avoid having to rely on deep abstractions and (anti-pattern) abstract contexts.
The question is: what is the preferred, clean way to manage entities and collections of entities that are deeply linked in multiple ways?
It depends on your definition of "clean". In my case, I define clean as: I can implement desired behavior in an obvious, efficient manner.
Building OOP software is not a data modeling exercise. I'd suggest stepping back a little. What does each one of those objects actually do? What methods are you going to implement?
Just because "guys are in a lot" doesn't mean that the lot object needs a collection of guys; it only needs one if there are operations on a lot that affect all the guys in it. And even then, it doesn't necessarily need a collection of guys - it needs a way to get the guys in the lot. This may be an internally stored collection, but it could also be a simple method that calls back into the world to find guys matching a criteria. The implementation of that lookup should be transparent to anyone.
From the tenor of your questions, it seems like you're thinking of this from a "how do I generate reports" perspective. Step back and think of the behaviors you're trying to implement first.
Another thing I find extremely valuable is to differentiate between Entities and Values. Entities are objects where identity matters - you may have two guys, both named "Chris", but they are two different objects and remain distinct despite having the same "key". Values, on the other hand, act like ints. From your list above, Position sounds a lot like a value - Position(0,0) is Position(0,0) regardless of which chunk of memory (identity) those bits are stored in. The distinction has a big effect on how you compare and store values vs. entities. For example, your Guy objects (entities) would store their Position as a simple member variable.
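As a rough illustration of that distinction (class and member names are mine, not a prescription):

    #include <string>
    #include <utility>

    struct Position {                 // value: compared by content
        int x = 0, y = 0;
        bool operator==(const Position& o) const { return x == o.x && y == o.y; }
    };

    class Guy {                       // entity: identity matters
    public:
        Guy(std::string name, Position where)
            : name_(std::move(name)), where_(where) {}

        const std::string& name() const { return name_; }
        Position position() const { return where_; }   // stored as a plain member
        void moveTo(Position p) { where_ = p; }

    private:
        std::string name_;
        Position where_;
    };

    // Two Position objects with equal coordinates are interchangeable;
    // two Guy objects named "Chris" are still two different guys.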
I've found a great reference for how to think about such things is Eric Evans's book "Domain-Driven Design". He's focused on business systems, but the discussions are, I've found, very valuable for how you think about building OO systems in general.
I would say that no 'true' answer exists to your core question -- the best way to manage collections of entities that are linked in multiple ways. It really depends on the kind of application (simulation). Here are some thoughts:
Is execution time important?
If this is the case, there is really no way around analyzing how your simulator will iterate over (query) the objects from the pool: sketch out the basic simulation loop and check which kinds of events require iterating over which kinds of model entities (I assume you are developing a discrete-event simulation?). Then you should organize the data structures in a way that optimizes the most frequent/time-consuming events (as opposed to "laying down the connections arbitrarily"). Additionally, you may want to use special data structures (such as k-d trees) to organize entities with properties that you need to query often (e.g., position data). For some typical problems, e.g. collision detection, there is also a whole range of approaches to solve them efficiently (so look for suitable libraries/frameworks, e.g. for multi-agent simulation).
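For instance, if "who stands at position P?" turns out to be the hot query, a secondary index keyed on position is one simple option. This is a hedged sketch; the names and the hash function are made up, and a grid or k-d tree could replace the hash map:

    #include <cstddef>
    #include <functional>
    #include <unordered_map>
    #include <vector>

    struct Position {
        int x = 0, y = 0;
        bool operator==(const Position& o) const { return x == o.x && y == o.y; }
    };

    struct PositionHash {
        std::size_t operator()(const Position& p) const {
            return std::hash<long long>()((static_cast<long long>(p.x) << 32) ^
                                          static_cast<unsigned int>(p.y));
        }
    };

    struct Guy { Position position; };

    class GuyIndex {
    public:
        void add(Guy* g) { byPosition_[g->position].push_back(g); }

        // Average O(1) lookup instead of scanning every guy in the world.
        const std::vector<Guy*>& at(Position p) const {
            static const std::vector<Guy*> kEmpty;
            auto it = byPosition_.find(p);
            return it == byPosition_.end() ? kEmpty : it->second;
        }

    private:
        std::unordered_map<Position, std::vector<Guy*>, PositionHash> byPosition_;
    };

The trade-off is that the index has to be updated whenever a guy moves, which is exactly the kind of cost/benefit decision the access-pattern analysis above is meant to inform.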
How flexible do you want to make it?
If you really want to make it super-flexible and really don't want to decide on the hierarchy of the model entities, why not just use an in-memory database? As you already said, databases are easily applicable to your problem (and you can easily save the model state, which may also be useful).
How clean is clean enough?
If you want to be absolutely sure that the rest of your simulator is not affected by the design choices you make regarding your model representation, hide it behind an interface (say, ModelWorld) that defines methods for all the types of queries your simulator may invoke (this is orthogonal to the second point and may help with the first point, i.e. figuring out what kind of access pattern your simulator exhibits). This allows you to change implementations easily, without affecting any other parts of the simulator code.
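A bare-bones sketch of such an interface might look like this (the ModelWorld name comes from the answer above; the query methods are hypothetical placeholders for whatever access patterns the simulator turns out to need):

    #include <vector>

    struct Position { int x = 0, y = 0; };
    class Guy;    // simulation entities, defined elsewhere
    class Tribe;

    class ModelWorld {
    public:
        virtual ~ModelWorld() = default;

        // Mirror the queries the simulation loop actually performs;
        // extend this list as new access patterns emerge.
        virtual std::vector<Guy*> guysAt(Position p) const = 0;
        virtual std::vector<Guy*> guysOfTribe(const Tribe& t) const = 0;
        virtual void moveGuy(Guy& g, Position to) = 0;
    };

    // One implementation might keep flat vectors, another a spatial index,
    // a third an in-memory database; the rest of the simulator is unaffected.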
Related
I'm working on my first JSON-RPC/JSON-REST API. One of the conveniences of JSON is that it can easily represent structured data (a user may have multiple email addresses, multiple addresses, etc.).
For example, the Facebook Graph API nicely represents the kind of thing that's handy to return as JSON objects:
https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-ash3/851559_339008529558010_1864655268_n.png
However, in implementing an API such as this with a relational database, we end up shattering structured objects into very many tables (at least one for each list in the JSON object), and un-shattering them when responding to requests. So:
- it requires a lot of modelling (separate models for the JSON objects and the SQL tables);
- inconsistencies creep in between the models: e.g. user_id (in SQL) vs. userID (in JSON);
- marshalling data between one model and the other is very time-consuming (tedious, error-prone and pointless boilerplate).
What design-patterns exist to help in this situation?
I'm not sure you are looking for design patterns. I would look for tools that handle this better.
I assume that you want to be able to query these objects, and not just store them in TEXT fields. Many databases support XML fairly well, so I would convert the JSON to XML (with a library) and then store that in the database.
You may also want to consider a JSON document based database. That will definitely get you where you want to go.
If you don't need to be able to query these, or only need to query a very small subset of fields, just store the objects as text, and extract those query-able fields into actual columns. This way you don't need to touch the majority of the data, but you can still query the few fields you care about. (Plus you can index them for speedier lookup.)
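A rough sketch of that hybrid layout, written as SQLite-flavored SQL held in C++ string constants (the table and column names are invented for illustration):

    // The full JSON object is stored verbatim; only the fields you need to
    // filter or join on are pulled out into real, indexable columns.
    constexpr const char* kCreateUsers = R"sql(
        CREATE TABLE users (
            user_id   INTEGER PRIMARY KEY,
            email     TEXT,               -- extracted, query-able
            country   TEXT,               -- extracted, query-able
            json_body TEXT NOT NULL       -- the whole JSON document
        );
    )sql";

    constexpr const char* kIndexEmail =
        "CREATE INDEX idx_users_email ON users(email);";

    // Queries touch only the extracted columns; the JSON blob is returned
    // as-is to the API layer, so there is nothing to un-shatter.
    constexpr const char* kFindByEmail =
        "SELECT json_body FROM users WHERE email = ?;";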
I have always chosen to implement this functionality in a facade pattern. Since the point of the facade is to simplify (abstract) an underlying complexity as a boundary between two or more systems, it seemed like the perfect place to handle this.
I realize however that this does not quite answer the question. I am talking about the container for the marshalling while the question is about how to better manage the contents (the code that does the job).
My approach here is somewhat old-fashioned, but since this is an old question maybe that's okay. I employ (as much as possible) stored procedures in the DB. This promotes better encapsulation than one typically finds with a code layer outside of the DB that has to "know about" the DB structure. What inevitably happens in the latter case is that more than one system will be written to do this (one large company I worked at had at least 6 competing ESBs) and there will be conflicts. Also, stored procedure scripting usually benefits from some sort of IDE that helps maintain contextual awareness of the DB structure.
So this approach - even though it is not a pattern per se - makes managing the ORM a lot easier.
I've recently been confronted with the need to design a database. Since this is my first time, I thought I'd better ask for some advice to make sure I'm building on solid foundations.
Goal
I'd like to store objects (POD structures best thought of as multi-maps) in an SQL database for storage and querying. The objects' contents as well as their 'structure' are continuously modified. The database will be accessed intensively through both queries and updates.
Use Case
First, each object should have a unique identifier.
Second, different types of objects exist. For example, ObjectA is an instance of ClassA. ClassA can have attributes A1, A2, A3, etc. As a result, ObjectA can (but isn't required to; NULL is allowed) have values for these attributes. However, each of these attributes may have more than one value, i.e. ObjectA.A1="foo" and ObjectA.A1="bar" are both possible. The number of attributes of ClassA can change. For simplicity's sake, attributes can only be added, not removed.
Third, attributes are not specific to one class, i.e. objects of ClassB can also have attributes A1, A2, etc. Thus ObjectB.A1="foo" is also possible. I'm not sure whether this changes anything, but I have a feeling it might in a design where each attribute corresponds to a table.
Finally, the following pseudo-queries and actions must be supported:
Get all the objects of type ClassA with attribute A1 equal to "bar".
Get all the attributes of ObjectB.
Add an attribute A4 to objects of type ClassA.
Add an object of type ClassC which has attributes A1="foobar", A2="bar".
Limitations
First, I want to avoid serializing the data, so multiple values in a single column are out of the question. The database should be normalized and the data structures should be atomic. The database will be queried very frequently, so I cannot afford to waste time trying to implement a complex query mechanism; I would end up re-inventing the wheel (probably a square one as well).
Second, I cannot use any prior knowledge of an object's internal structure, as this only becomes available at run-time. For example, in the use case above, the attributes are not known beforehand. So while I have thought of a design where each attribute is a table, I cannot figure out how to get all the attributes of an object in such a set-up.
Environment
I'm using SQLite 3.7, C++.
Question
What would be an appropriate, flexible database design that meets the requirements of the described problem?
Any help, pointers or tips leading to useful insights or a solid design are very welcome.
Thanks!
ps: I have only basic theoretical familiarity and limited practical experience with relational databases, certainly no prior professional experience. I have been reading up on the subject the past week and have grasped some of the concepts which I think will be relevant to my case (normalization, foreign keys, etc), but I'm still going through my book at this moment.
If this is your first time out, and your project is as significant as it seems, you might want to invest the time and effort to learn the fundamentals from the ground up. C. J. Date and many other authors have books and online tutorials that can take you through the fundamentals. They are excellent works.
There are some fields within IT that are dominated by almost complete adhocracy. Not so database design. To begin with, E. F. Codd laid the groundwork on a very solid mathematical basis some 42 years ago, and the basic model has held up very well over time. There has been progress, but almost no backtracking. And very little change for the sake of change.
SQL has likewise enjoyed a lot of stability over its long lifespan.
Next, trial and error in database design can be enormously costly. There are dozens of cases where unfortunate choices made by newbies have ended up costing millions in data investments that didn't pan out.
Trial and error has its place. Tips and tricks have their place. Answers on SO have their place. But so does formal learning.
Here's a theoretical/pedantic question: imagine properties where each one could be owned by multiple owners. Furthermore, from one iteration of ownership to the next, two neighboring owners could decide to partly combine ownership. For example:
territory 1, t=0: a,b,c,d
territory 2, t=0: e,f,g,h
territory 1, t=1: a,b,g,h
territory 2, t=1: g,h
That is to say, c and d no longer own property; and g and h became fat cats, so to speak.
I'm currently representing this data structure as a tree where each child could have multiple parents. My goal is to cram this into the Composite design pattern; but I'm having issues getting a conceptual footing on how the client might go back and update previous ownership without mucking up the whole structure.
My question is twofold.
Easy: What is a convenient name for this data structure such that I can google it myself?
Hard: What am I doing wrong? When I code I try to keep the mantra, "Keep it simple, Stupid," in my head, and I feel I am breaking this credo.
My question is twofold. Easy: What is a convenient name for this data structure such that I can google it myself?
What you have here is not a tree; it is a graph. A multimap will help you here, but any adjacency list or adjacency matrix will give you a good start.
Here is a video on adjacency matrix and list: Youtube on adjacency matrix and list
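As a quick sketch of the multimap idea applied to your example (the types and names are mine; one multimap per iteration keeps the many-to-many ownership and its history):

    #include <iostream>
    #include <map>
    #include <vector>

    using Owner = char;       // 'a', 'b', ... as in the example
    using Territory = int;    // 1, 2, ...

    // One edge list per iteration t: territory -> each of its owners.
    using Ownership = std::multimap<Territory, Owner>;

    int main() {
        std::vector<Ownership> history(2);

        // t = 0
        for (Owner o : {'a', 'b', 'c', 'd'}) history[0].insert({1, o});
        for (Owner o : {'e', 'f', 'g', 'h'}) history[0].insert({2, o});

        // t = 1: c and d drop out; g and h now also own territory 1
        for (Owner o : {'a', 'b', 'g', 'h'}) history[1].insert({1, o});
        for (Owner o : {'g', 'h'})           history[1].insert({2, o});

        // "Who owned territory 1 at t = 1?"
        auto range = history[1].equal_range(1);
        for (auto it = range.first; it != range.second; ++it)
            std::cout << it->second << ' ';          // prints: a b g h
        std::cout << '\n';
        return 0;
    }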
Hard: What am I doing wrong?
This is really hard to tell. Perhaps you did not model the relationship in a proper way. It is not that hard, given a good data structure to start with. And, as you asked for design patterns (though you probably found this out yourself), the Composite pattern will let you model such a setting with ease.
You have a many-to-many relationship between your owners and your territories (properties). I'm not sure what language you're working in, but this sort of thing can be easily represented and tracked in a relational database. (You'd probably want a table for each entity, and the relationship would probably require a third "junction" table. If it's necessary to be able to query "back in time", this could have some sort of "time index" column as well.)
If you are working in an object-oriented language, you might create two classes, Territory and Owner, where the Territory class has a property/member/field which is a collection of references/pointers to Owners and the Owner class has a similar collection of Territories. (One of these two collections may need to contain "weak" references depending on the language.)
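A minimal sketch of that two-class version (the class names follow the answer; the member names and the choice of shared_ptr versus raw pointers are assumptions about how you might hold the references):

    #include <memory>
    #include <string>
    #include <vector>

    class Owner;

    class Territory {
    public:
        explicit Territory(std::string name) : name(std::move(name)) {}
        std::string name;
        std::vector<std::shared_ptr<Owner>> owners;   // strong references
    };

    class Owner {
    public:
        explicit Owner(std::string name) : name(std::move(name)) {}
        std::string name;
        std::vector<Territory*> territories;          // non-owning back-references
    };

    // Link both directions so either side can be traversed.
    inline void link(const std::shared_ptr<Owner>& o, Territory& t) {
        t.owners.push_back(o);
        o->territories.push_back(&t);
    }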
In this case, some difficulty may arise if you want to be able to go back and look at the network state at some particular point earlier in time. (If this is what you need, say so and I (or someone else) can post a solution that works for that.)
I'm not sure what level of simplicity you are striving for, but in neither of these cases is updating the ownership relationships really that "hard". Maybe if you posted some code it might be easier to give you more concrete advice.
Hard to tell without more information regarding the business rules, though I do have plenty of experience designing graphs where each node could potentially have numerous parents.
A common structure is the Directed Acyclic Graph (DAG). The essential rule here is that no path through the graph can cycle back onto itself. For example, take the path "A/B/C/B": this is not valid, as B appears twice.
Valid:- "A/B/C", "D/E/C", node C has two parents E and B.
Invalid:- "A/B/C/B", node B repeats in the same path causing a cycle.
I am trying to really get a good idea how to think in OOP terms, so I have a semi-hypothetical scenario in my mind and I was looking for some thoughts.
If I wanted to design a simulation for different types of people interacting with each other, each of whom could acquire different proficiency levels in different "skills", what would be an optimal way to do this?
It's really the "skills" thing that I was a bit caught up on. My requirements are as follows:
- Each person either "has" a skill or does not
- If people have skills, they also have a "proficiency level" associated with the skill
- I need a way to find and pick out every person that has certain skills at all, or at a certain level
- The design needs to be extensible (i.e., I need to be able to add more "skills" later)
I considered the following options:
1. Have a giant enum for every single skill I include, and have the person class contain an "int Skills[TOTAL_NUM_SKILLS]" member. The array would have zeros for "unacquired" skills, and 1 to (max) for the proficiency levels of "acquired" skills.
2. Have the same giant enumeration, and have the person class contain a map keyed on skills (from the enum), with numbers as values, so that only the acquired skills are added to the map, each associated with a proficiency number.
3. Create a concrete class for every single skill, have each inherit from an abstract base class (ISkill, say), and have the person class hold a map of ISkills. (Options 2 and 3 are sketched in code after this list.)
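Here is a hedged sketch of options 2 and 3 (all names except ISkill::DoAction are invented for illustration):

    #include <map>
    #include <memory>
    #include <string>
    #include <utility>

    // Option 2: an enum of skills plus a map of acquired skills -> proficiency.
    enum class Skill { Archery, Smithing, Diplomacy /* , ... */ };

    class PersonV2 {
    public:
        void learn(Skill s, int level) { skills_[s] = level; }
        bool has(Skill s) const { return skills_.count(s) != 0; }
        int  level(Skill s) const {
            auto it = skills_.find(s);
            return it == skills_.end() ? 0 : it->second;   // 0 = unacquired
        }
    private:
        std::map<Skill, int> skills_;   // only acquired skills are stored
    };

    // Option 3: one class per skill behind an abstract interface, leaving
    // room for per-skill behaviour later (the ISkill::DoAction idea).
    class ISkill {
    public:
        virtual ~ISkill() = default;
        virtual std::string name() const = 0;
        virtual void DoAction() = 0;
    };

    class Archery : public ISkill {
    public:
        std::string name() const override { return "Archery"; }
        void DoAction() override { /* fire an arrow, etc. */ }
    };

    class PersonV3 {
    public:
        void learn(std::unique_ptr<ISkill> s, int level) {
            const std::string key = s->name();              // read the name before moving s
            skills_[key] = std::make_pair(std::move(s), level);
        }
        bool has(const std::string& skillName) const {
            return skills_.count(skillName) != 0;
        }
    private:
        std::map<std::string, std::pair<std::unique_ptr<ISkill>, int>> skills_;
    };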
Really, option 1 seems like the straightforward no-nonsense way to do it. Please criticize; is there some reason this is not acceptable? Is there a more object oriented way to do this?
I know that option 3 doesn't make much sense right now, but if I decided to extend this later so that skills are more than just things with a proficiency associated with them (i.e., actually associate new actions with the skills, such as ISkill::DoAction), does it make sense as an option?
Sorry for the broad question, I just want to see if this line of thought makes sense, or if I'm barking up the wrong tree altogether.
The problem with option 1 is future compatibility. Say you were shipping this framework to customers. Now, the customer has built this array of Skill values, which is length TOTAL_NUM_SKILLS, for each person. But this fails as soon as you try to add another skill, and especially as you try to reorder skills.
What if the customer is using an RPC framework in which a client and server pass Person objects over the wire? Now, unless the customer upgrades the client and server at the exact same time, the RPC calls break, since now the client and server expect arrays of different lengths. This can be particularly tricky because the customer may own only the client, or only the server, and be unable to upgrade both at once.
But it gets worse. Say the client has written out a Person object to disk in some file. If they decided to serialize a person as a simple list of numbers, then a new skill will cause the deserialization code to fail. Worse, if you reorder skills in your enum, the deserialization code may work just fine but give a wrong answer.
I like option 3 exactly for the reason you named: later you can add more functionality, and do so safely (well, except for the fact that every public change is a breaking change if your customers exercised certain edge cases in the language).
If you want to add skills often without changing the overall program structure, I'd consider some kind of external data file that you can change without recompiling your code. Think about how you'd want to do it in a really large project. The person who chooses the skills might be a designer with no programming ability. He could edit the skills in an XML file, but not in C++ code.
If you defined the skills in XML, it would naturally extend to store more data with each skill. Your players could be serialized as XML files too.
When you set up a player's skills at runtime, you could build a hash table keyed on the skill name from the XML file. If it's more common to enumerate a player's skills than to query whether a player has a certain skill, you could just use a vector of strings.
Of course, this solution will use more memory and run slower than your enum solution. But it will probably be good enough unless you're dealing with millions of players in your program.
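A hedged sketch of the runtime half of this idea. The XML parsing itself is left to whatever library you pick; here it is assumed to have already produced (skill name, level) pairs for one player:

    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using SkillName = std::string;

    class Player {
    public:
        void setSkill(const SkillName& name, int level) { skills_[name] = level; }
        bool hasSkill(const SkillName& name) const { return skills_.count(name) != 0; }
        int  skillLevel(const SkillName& name) const {
            auto it = skills_.find(name);
            return it == skills_.end() ? 0 : it->second;   // 0 = not acquired
        }
    private:
        std::unordered_map<SkillName, int> skills_;   // keyed on the data-file name
    };

    // Hypothetical loader: 'parsed' stands in for whatever the XML parser
    // returned for one <player> element. Adding a new skill needs no recompile.
    Player loadPlayer(const std::vector<std::pair<SkillName, int>>& parsed) {
        Player p;
        for (const auto& entry : parsed) p.setSkill(entry.first, entry.second);
        return p;
    }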
The "party model" is a "pattern" for relational database design. At least part of it involves finding commonality between many entities, such as Customer, Employee, Partner, etc., and factoring that into some more "abstract" database tables.
I'd like to find out your thoughts on the following:
What are the core principles and motivating forces behind the party model?
What does it prescribe you do to your data model? (My bit above is pretty high level and quite possibly incorrect in some ways. I've been on a project that used it, but I was working with a separate team focused on other issues).
What has your experience led you to feel about it? Did you use it, and if so, would you do so again? What were the pros and cons?
Did the party model limit your choice of ORMs? For example, did you have to eliminate certain ORMs because they didn't allow for enough of an "abstraction layer" between your domain objects and your physical data model?
I'm sure every response won't address every one of those questions ... but anything touching on one or more of them is going to help me make some decisions I'm facing.
Thanks.
What are the core principles and motivating forces behind the party model?
To the extent that I've used it, it's mostly about code reuse and flexibility. We've used it before in the guest / user / admin model and it certainly proves its value when you need to move a user from one group to another. Extend this to having organizations and companies represented with users under them, and it's really providing a form of abstraction that isn't particularly inherent in SQL.
What does it prescribe you do to your data model? (My bit above is pretty high level and quite possibly incorrect in some ways. I've been on a project that used it, but I was working with a separate team focused on other issues).
You're pretty correct in your bit above, though it needs some more detail. You can imagine a situation where an entity in the database (call it a Party) contracts out to another Party, which may in turn subcontract work out. A party might be an Employee, a Contractor, or a Company, all subclasses of Party. From my understanding, you would have a Party table and then more specific tables for each subclass, which could then be further subclassed (Party -> Person -> Contractor).
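A rough sketch of that table-per-subtype chain, written as generic SQL kept in a C++ string constant (all names are illustrative, not taken from any particular party-model reference):

    // Party -> Person -> Contractor, plus a contract table that can relate
    // any two parties regardless of their concrete subtype.
    constexpr const char* kPartySchema = R"sql(
        CREATE TABLE party (
            party_id   INTEGER PRIMARY KEY,
            party_type TEXT NOT NULL            -- 'PERSON', 'COMPANY', ...
        );

        CREATE TABLE person (
            party_id INTEGER PRIMARY KEY REFERENCES party(party_id),
            name     TEXT NOT NULL
        );

        CREATE TABLE contractor (
            party_id    INTEGER PRIMARY KEY REFERENCES person(party_id),
            hourly_rate NUMERIC
        );

        CREATE TABLE contract (
            contract_id INTEGER PRIMARY KEY,
            client_id   INTEGER NOT NULL REFERENCES party(party_id),
            supplier_id INTEGER NOT NULL REFERENCES party(party_id)
        );
    )sql";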
What has your experience led you to feel about it? Did you use it, and if so, would you do so again? What were the pros and cons?
It has its benefits if you need the flexibility to add new types to your system and to create relationships between types that you didn't expect at the beginning and didn't architect in (users moving to a new level, companies hiring other companies, etc.). It also gives you the benefit of running a single query and retrieving data for multiple types of parties (Companies, Employees, Contractors). On the flip side, you're adding additional layers of abstraction to get to the data you actually need and are increasing load (or at least the number of joins) on the database when you're querying for a specific type. If your abstraction goes too far, you'll likely need to run multiple queries to retrieve the data, as the complexity would start to become detrimental to readability and database load.
Did the party model limit your choice of ORMs? For example, did you have to eliminate certain ORMs because they didn't allow for enough of an "abstraction layer" between your domain objects and your physical data model?
This is an area that I'm admittedly a bit weak in, but I've found that using views and mirroring the abstraction in the application layer haven't made this too much of a problem. The real problem for me has always been "where does piece of data X live?" when I want to read the data source directly (it's not always intuitive for new developers on the system either).
The idea behind the party model (aka entity schema) is to define a database that leverages some of the scalability benefits of schema-free databases. The party model does that by defining its entities as party-type records, as opposed to one table per entity. The result is an extremely normalized database with very few tables and very little knowledge about the semantic meaning of the data it stores. All that knowledge is pushed into the data access code. Database upgrades using the party model are minimal to none, since the schema never changes. It's essentially a glorified key-value pair data model with some fancy names and a couple of extra attributes.
Pros:
Kick-ass horizontal scalability. Once your 5-6 tables are defined in your entity model, you can go to the beach and sip margaritas. You can scale this database out virtually as much as you want with minimal effort.
The database supports any data structure you throw at it. You can also change data structures and party/entities definitions on the fly without affecting your application. This is very very powerful.
You can model any arbitrary data entity by adding records, not changing the schema. Meaning you can say goodbye to schema migration scripts.
This is programmers’ paradise, since the code they write will define the actual entities they use in code, and there are no mappings from Objects to Tables or anything like that. You can think of the Party table as the base object of your framework of choice (System.Object for .NET)
Cons:
Party/Entity models never play well with ORMs, so forget about using EF or NHibernate to get semantically meaningful entities out of your entity database.
Lots of joins. Performance tuning challenges. This 'con' is relative to the practices you use to define your entities, but it is safe to say that you'll be doing a lot more of those mind-bending queries that will give you nightmares at night.
Harder to consume. Developers and DB pros unfamiliar with your business will have a harder time getting used to the entities exposed by these models. Since everything is abstract, there is no diagram or visualization you can build on top of your database to explain what is stored to someone else.
Heavy data access models or business rules engines will be needed. Basically you have to do the work of understanding what the heck you want out of your database at some point, and your database model is not going to help you this time around.
If you are considering a party or entity schema in a relational database, you should probably take a look at other solutions, like a NoSQL data store, BigTable, or KV stores. There are some great products out there with massive deployments and traction, such as MongoDB, DynamoDB, and Cassandra, that pioneered this movement.
This is a vast topic; I would recommend reading The Data Model Resource Book, Volume 3: Universal Patterns for Data Modeling by Len Silverston and Paul Agnew.
I've just received my copy and it's pretty good - it provides an overview of many approaches to data modeling, including hybrid contextual role patterns and so on, with detailed pros and cons for every approach.
There is a plethora of ways to model party relationships and roles, all with their benefits and disadvantages. The accepted answer covers just one instance of a 'party model'.
For instance, in many approaches, notions like "Employee", "Project Manager" etc. are roles that a party can play within a certain context. I will try to give you a better breakdown once I get home.
When I was part of a team implementing these ideas in the early 1980's, it did not limit our choice of ORM's because those hadn't been invented yet.
I'd fall back on those ideas any time, as that particular project was one of the most convincing proofs-of-concept I have ever seen of a "revolutionary" idea (which it certainly was at the time).
It forces nothing on you. And it doesn't stop you from anything (from any mistake, I mean). The one defining your own information model is you.
All parties have lots of properties in common: the fact that they have a name and such (we called those "signaletics"), the fact that they have principal/primary locations called "addresses", the fact that they all are involved, in some sense, in the business's contracts.
To put it simply, from my understanding: party modeling gives you flexibility, but needs more effort (like T-SQL joins and so on) to implement.
I also want to point out that using party modeling (specialization/generalization) gives you the ability to have FK relations to other tables. For example: think of different types of users (admin, user, ...) generalized into a User table; then you can have a UserID in your Authorization table.
I'm not sure, but the party model sounds like a particular case of the generalization-specialization pattern. A search on "generalization specialization relational modeling" finds some interesting articles.