NHibernate - Map a single-row table

I have an existing NHibernate web application and I'm about to add a configuration table that will contain all system-wide configuration options. This table will always contain one and only one row. Each column will contain one configuration property. I plan on having a domain object that will have a matching property for each column in the table. The users will be able to modify the values for each property in an admin screen. I plan on populating the table with one row during installation, setting initial values for each configuration option. My questions are as follows:
1) I only want the system to update the existing row, and want to block any deletes or inserts on the table. I can, of course, enforce this by not creating application-tier functions that do deletes or inserts, but I wondered if NHibernate had some built-in mapping or configuration options to help. I'd prefer not to have to do this at the database level, since we are writing a database-agnostic application and, so far, have not had to write any database-platform-specific code or scripts.
2) Would the mapping be different for this class than my other "normal" classes?

Answer to 1) NHibernate does not have any configuration that will let you block only inserts and deletes. You can work around it, e.g. write your own IPreDeleteEventListener and IPreInsertEventListener implementations and veto deletes and inserts when the entity is your configuration entity.
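A minimal sketch of what such listeners might look like (untested; SystemConfiguration is a stand-in for your configuration entity, and returning true from a pre-event listener vetoes the operation):

    using NHibernate.Event;

    // Stand-in for the single-row configuration entity described above.
    public class SystemConfiguration
    {
        public virtual int Id { get; set; }
        // ... one property per configuration column ...
    }

    // Vetoes inserts and deletes of the configuration entity while letting
    // every other entity through. Note that the one-time insert done during
    // installation would have to bypass NHibernate (e.g. a plain SQL script).
    public class BlockConfigurationWritesListener : IPreInsertEventListener, IPreDeleteEventListener
    {
        public bool OnPreInsert(PreInsertEvent @event)
        {
            return @event.Entity is SystemConfiguration; // true = veto the insert
        }

        public bool OnPreDelete(PreDeleteEvent @event)
        {
            return @event.Entity is SystemConfiguration; // true = veto the delete
        }
    }

    // Registered on the Configuration before building the session factory:
    // var listener = new BlockConfigurationWritesListener();
    // cfg.EventListeners.PreInsertEventListeners = new IPreInsertEventListener[] { listener };
    // cfg.EventListeners.PreDeleteEventListeners = new IPreDeleteEventListener[] { listener };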
However, I would advise you to enforce this via the application, i.e. the configuration repository should only expose an "Update" function and nothing more.
Answer to 2) I am assuming that this table does not have a primary key (as it only ever holds one row). As far as I'm aware, NHibernate cannot work with entities that do not have primary keys. You may have to add a primary key just to get it to work with NHibernate.
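If you do add a key, one simple option is to give the row a fixed, application-assigned id (always 1). A rough mapping-by-code sketch (NHibernate 3.2+; property names here are invented, and an equivalent hbm.xml mapping with an "assigned" generator works the same way):

    using NHibernate.Mapping.ByCode;
    using NHibernate.Mapping.ByCode.Conformist;

    public class SystemConfiguration
    {
        public virtual int Id { get; set; }
        public virtual int MaxLoginAttempts { get; set; }
        public virtual string SupportEmail { get; set; }
    }

    public class SystemConfigurationMap : ClassMapping<SystemConfiguration>
    {
        public SystemConfigurationMap()
        {
            Table("SystemConfiguration");
            // Assigned generator: the application supplies the key (always 1),
            // so NHibernate never tries to generate a new one.
            Id(x => x.Id, m => m.Generator(Generators.Assigned));
            Property(x => x.MaxLoginAttempts);
            Property(x => x.SupportEmail);
        }
    }

    // Loading the single row is then simply:
    // var config = session.Get<SystemConfiguration>(1);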

Related

Is it a good idea to manually create columns in existing AspNetUser table?

I'm using Identity 3 and I want to add some columns to the AspNetUser table.
I've tried doing it with EF Core Code First and it works well, but I need to do it DB First.
I was wondering: if I create the columns manually in the database and then create the ApplicationUser class with corresponding properties, will it work?
Yup, that should work; I've done it before.
However, as time went on I ended up having to add so many that it got messy.
So eventually I refactored those extra columns into their own related tables:
e.g. User_AdditionalDetails
This was a massive pain, as I had live users and had to write scripts to migrate everyone's data, etc.
This way you only need to add a single FK pointing at the related table that holds all the extra info.
It also neatens the code, and gives you the benefit of being able to load different sets of user properties only when they are needed.
If it's an application-scope property of the user, like 'Region', which determines the behaviour of core functionality of your app, then I'd say add it straight onto the main ApplicationUser class.
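For illustration, a rough sketch of both shapes described above (property and table names are invented, and the namespace of the IdentityUser base class varies between Identity versions):

    using System.ComponentModel.DataAnnotations.Schema;
    using Microsoft.AspNetCore.Identity;

    // Columns added straight onto the Identity user. The property names must
    // match the columns you created manually in the AspNetUsers table.
    public class ApplicationUser : IdentityUser
    {
        // Application-scope property that core functionality depends on.
        public string Region { get; set; }

        // Rarely-needed details moved to their own table, linked by a single FK,
        // so they are only loaded when you ask for them.
        public UserAdditionalDetails AdditionalDetails { get; set; }
    }

    [Table("User_AdditionalDetails")]
    public class UserAdditionalDetails
    {
        public int Id { get; set; }
        public string ApplicationUserId { get; set; }   // FK back to AspNetUsers
        public string ShippingAddress { get; set; }
        public string AlternatePhone { get; set; }
    }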

Using a single SQL table for multiple domain tables

In short: I have a client who wishes to be able to add domain tables without adding SQL tables.
I am working with an application in which data is organized and made available through a PostgreSQL catalogue. What I mean by catalogue is that the database holds the path to the actual data file(s) as well as some metadata.
Adding a new table means that the (Java class of the) client application has to be updated. This is a costly process for the client, who wants us to find a way to let him add new kinds of data to the catalogue without having to change the schema.
I don't have many more specifics about the db itself and its configuration, as I'm usually mostly a client of the said db.
My idea to solve this was to have a generic table with the most often used columns (like date, comment, etc.) and a column containing a domain key. The domain key would be used by the client application to request the kind of generic data it needs (and would have no meaning whatsoever to the db provider). Adding metadata could be done with a companion file within the catalogue, and further filtering would have to be done on the client side.
Question: as I am by no means an SQL expert, I would like to know if this is an acceptable solution, and what limitations I could be facing? I'm thinking of performance, data volume, etc. Or maybe a different approach is advisable?
Regarding expected volume, for a single domain data type it could be around 30 new entries per day.

Fetch an entity's read-only collection from a separate database

I'm building a new NHibernate 3.3 application that must connect to a legacy system in order to look up some information about my users. There's a separate, read-only database that holds course enrollments that I'd like to use to populate a collection on my Student entity. These would be components in NHibernate-speak, consisting of a department code and course and section numbers, like "MTH101 sec. 2".
The external database has a surrogate key, the student number, which corresponds to a property in my User entity, but it's not the primary key of a Student.
These databases are on separate servers, and I can't change the legacy database.
Do I have a hope of mapping the enrollments collection as NHibernate components?
Two Options
When you have multiple databases or multiple database servers that you're trying to link together in a single domain model using NHibernate, you basically have two options.
Leverage the database server's capabilities (linked servers, etc.) to join the data so that NHibernate only has to worry about connecting to one database. In your NHibernate mappings, you fully specify the table attribute so that the database server knows to query against the other database server. For your "surrogate key, ... not the primary key", you could map this using <many-to-one property-ref="...">.
Use multiple NHibernate session factories, one for each database. You would be responsible for coordinating what gets loaded from which database. You configure each session factory for just the tables that exist in that database and with the appropriate connection string. Then, to load the data, you execute two queries, one against one database, and another against the other database.
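As a rough illustration of option #2 (class, file, and property names here are all invented), each factory is configured against its own database, and the application code is what ties the two result sets together:

    using System.Collections.Generic;
    using System.Linq;
    using NHibernate;
    using NHibernate.Cfg;
    using NHibernate.Linq;

    // Stand-in for the read-only enrollment data held in the legacy database.
    public class Enrollment
    {
        public virtual int Id { get; set; }
        public virtual int StudentNumber { get; set; }
        public virtual string Department { get; set; }
        public virtual int CourseNumber { get; set; }
        public virtual int SectionNumber { get; set; }
    }

    public static class SessionFactories
    {
        // One factory per database: each configuration maps only the classes
        // that live in that database and uses that database's connection string.
        public static readonly ISessionFactory Main =
            new Configuration().Configure("main.cfg.xml").BuildSessionFactory();

        public static readonly ISessionFactory Legacy =
            new Configuration().Configure("legacy.cfg.xml").BuildSessionFactory();
    }

    public class EnrollmentLookup
    {
        // The application, not NHibernate, joins the student (from the main DB)
        // to the enrollments (from the legacy DB) via the student number.
        public IList<Enrollment> LoadEnrollments(int studentNumber)
        {
            using (var session = SessionFactories.Legacy.OpenSession())
            {
                return session.Query<Enrollment>()
                              .Where(e => e.StudentNumber == studentNumber)
                              .ToList();
            }
        }
    }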
Which one?
Which is the right choice? It depends...
Available features
If your database server doesn't have any features to support #1, or if there are other things preventing you from using those features, then you obviously have to use #2.
Cross-DB where Clauses
#1 gives you more flexibility when writing queries - you could specify where clauses that span both databases if you needed to, though you need to be careful that the query you write doesn't require database A to fetch tons of data from database B. With method #2 you execute a second query to get what you need from database B, which forces you to be more conscious about exactly what data you have to fetch from each database to get the job done.
Unenforced relationship
There won't be any foreign keys enforcing the relationship because the data lives in two different databases. NHibernate (very reasonably) assumes that database relationships are enforced by foreign keys. Since there's a chance these two databases could be out of sync, #1 will require you to resort to things like not-found="ignore", which has performance implications.
Complexity of Deployment
Inter-database relationships make deploying to various environments (DEV, QA, PROD) difficult. You can't just deploy the application and database, and make sure the application's connection strings are pointing at the correct databases; instead you also have to make sure that any references inside the databases to other databases are pointing to the correct places.
Given all of the above factors, I usually lean towards option #2, but there are some situations where #1 is just so much more convenient.

Custom NHibernate entity persister for modifying generated SQL statements

What I need is to populate the entity from a DB view (non-insertable) and direct all entity updates to an updatable DB table.
Mapping the entity to the table and writing custom load SQL against the view is not an option, since in some cases NHibernate still tries to select from the table name (when joining this entity, for example).
Mapping the entity to the view and writing custom data-modification queries is not an option, since I cannot write a cross-database SQL insert statement (because of the part that selects the last inserted identity value).
The only idea I have come up with so far is to modify the generated SQL statements on the fly. I managed to do it with a custom interceptor, but I don't think it's a good idea (since I intercept every single query, even for other entities). However, I think it should be possible to change only the needed queries using a custom IEntityPersister. I created one based on SingleTableEntityPersister and specified it in <class persister="…">, but NHibernate doesn't even want to instantiate it.
Are there any examples of writing custom entity persisters for NHibernate?
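For what it's worth, a rough sketch of the interceptor approach mentioned above (table/view names are placeholders, and as noted it has the drawback of seeing every statement for every entity):

    using NHibernate;
    using NHibernate.SqlCommand;

    // Rewrites generated SQL so that SELECTs hit the non-insertable view while
    // INSERT/UPDATE/DELETE keep hitting the updatable table the entity is mapped to.
    public class ViewRedirectingInterceptor : EmptyInterceptor
    {
        public override SqlString OnPrepareStatement(SqlString sql)
        {
            var text = sql.ToString();
            if (text.StartsWith("SELECT", System.StringComparison.OrdinalIgnoreCase))
            {
                // Point reads at the view instead of the table.
                return sql.Replace("my_table", "my_view");
            }
            return sql;
        }
    }

    // Attached per session, or globally via cfg.SetInterceptor(...):
    // using (var session = sessionFactory.OpenSession(new ViewRedirectingInterceptor())) { ... }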

Ideas for Combining a Thousand Databases into One Database

We have a SQL server that has a database for each client, and we have hundreds of clients. So imagine the following: database001, database002, database003, ..., database999. We want to combine all of these databases into one database.
Our thoughts are to add a siteId column, 001, 002, 003, ..., 999.
We are exploring options to make this transition as smoothly as possible. And we would LOVE to hear any ideas you have. It's proving to be a VERY challenging problem.
I've heard of a technique that would create a view that would match and then filter.
Any ideas guys?
Create a client database id for each of the client databases. You will use this id to keep the data logically separated. This is the "site id" concept, but you can use a derived key (identity field) instead of manually creating these numbers. Create a table that has database name and id, with any other metadata you need.
The next step would be to create an SSIS package that gets the ID for the database in question and adds it to the tables that need their data logically separated. You can then run that same package over each database, looking up the ID for the database in question.
After you have a unique id for each client's data and have imported the data, you will have to alter your apps to fit the new schema (actually before, or you are pretty much screwed).
If you want to do this in steps, you can create views or functions in the different "databases" so the old client can still hit the client's data, even though it has been moved. This step may not be necessary if you deploy with some downtime.
The method I propose is fairly flexible and can be applied to one client at a time, depending on your client application deployment methodology.
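Not SSIS itself, but a rough ADO.NET sketch of the per-database tagging step described above, just to make the idea concrete (all table and column names are invented):

    using System.Data.SqlClient;

    public static class SiteIdTagger
    {
        // For one client database: add the SiteId column if it is missing and
        // stamp every row with that client's id before the data is imported
        // into the consolidated database.
        public static void TagClientDatabase(string clientConnString, int siteId)
        {
            var statements = new[]
            {
                "IF COL_LENGTH('dbo.Orders', 'SiteId') IS NULL " +
                    "ALTER TABLE dbo.Orders ADD SiteId int NULL",
                "UPDATE dbo.Orders SET SiteId = @siteId"
            };

            using (var conn = new SqlConnection(clientConnString))
            {
                conn.Open();
                foreach (var sql in statements)
                {
                    using (var cmd = new SqlCommand(sql, conn))
                    {
                        cmd.Parameters.AddWithValue("@siteId", siteId);
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }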
Why do you want to do that?
You can read about Multi-Tenant Data Architecture and also listen to SO #19 (around 40-50 min) about this design.
The "site-id" solution is what's done.
Another possibility that may not work out as well (but is still appealing) is multiple schemas within a single database. You can pull common tables into a "common" schema, and leave the customer-specific stuff in a customer-specific schema. In some database products, however, each schema is -- effectively -- a separate database. In other products (Oracle and DB2, for example) you can easily write queries that work across multiple schemas.
Also note that -- as an optimization -- you may not need to add siteId column to EVERY table.
Sometimes you have a "contains" relationship. It's a master-detail FK, often defined with a cascade delete so that detail cannot exist without the parent. In this case, the children don't need siteId because they don't have an independent existence.
Your first step will be to determine if these databases even have the same structure. Even if you think they do, you need to compare them to make sure they do. Chances are there will be some that are customized or missed an upgrade cycle or two.
Now, depending on the number of clients and the number of records per client, your tables may get huge. Are you sure this will not create a performance problem? At any rate, you may need to take a fresh look at indexing. You may need a much more powerful set of servers and may also need to partition by client anyway for performance.
Next, yes each table will need a site id of some sort. Further, depending on your design, you may have primary keys that are now no longer unique. You may need to redefine all primary keys to include the siteid. Always index this field when you add it.
Now all your queries, stored procs, views, and UDFs will need to be rewritten to ensure that the siteid is part of them. Pay particular attention to any dynamic SQL. Otherwise you could be showing client A's information to client B. Clients don't tend to like that. We brought a client from a separate database into the main application one time (when they decided they no longer wanted to pay for a separate server). The developer missed just one place where client_id had to be added. Unfortunately, that sent emails to every client concerning this client's proprietary information, and to make matters worse, it was a nightly process that ran in the middle of the night, so it wasn't known about until the next day. (The developer was very lucky not to get fired.) The point is: be very, very careful when you do this and test, test, test, and test some more. Make sure to test all the automated behind-the-scenes stuff as well as the UI stuff.
What I was explaining in Florence towards the end of last year applies if you have to keep the database names and the logical layer of the database the same for the application. In that case you'd do the following:
Collapse all the data into consolidated tables in one master, consolidated database (hereafter referred to as the consolidated DB).
Those tables would have to have an identifier like SiteID.
Create the new databases with the existing names.
Create views with the old table names which use row-level security to query the tables in the consolidated DB, but using the SiteID to filter.
Set up the databases for cross-database ownership chaining so that the service accounts can't "accidentally" query the base tables in the consolidated DB. Access must happen through the views or through stored procedures and other constructs that will enforce row-level security. Now, if it's the same service account for all sites, you can avoid the cross DB ownership chaining and assign the rights on the objects in the consolidated DB.
Rewrite the stored procedures to either handle the change (since they now refer to the views and don't know to hit the base tables and include SiteID), or use INSTEAD OF triggers on the views to intercept update requests and put the appropriate site-specific information into the base tables.
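A rough sketch of the filtered-view idea from steps 3 and 4, with invented names (ConsolidatedDB, dbo.Orders, SiteID), just to show the shape of the per-client views:

    using System.Data.SqlClient;

    public static class ClientViewBuilder
    {
        // Run once inside each shell database that keeps its old client name:
        // create a view with the old table name that filters the consolidated
        // table down to that client's rows.
        public static void CreateOrderView(string shellDbConnString, int siteId)
        {
            // siteId is an int, so concatenating it is safe here; DDL cannot
            // take parameters anyway.
            var sql =
                "CREATE VIEW dbo.Orders AS " +
                "SELECT o.OrderId, o.OrderDate, o.Total " +   // everything except SiteID
                "FROM ConsolidatedDB.dbo.Orders AS o " +
                "WHERE o.SiteID = " + siteId + ";";

            using (var conn = new SqlConnection(shellDbConnString))
            {
                conn.Open();
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }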
If the data is large you could look at using a partitioned view. This would simplify your access code, as all you'd have to maintain is the view; however, if the data is not large, just add a column to identify the customer.
Depending on what the data is and your security requirements, the threat of cross-contamination may be a show stopper.
Assuming you have considered this and deem it "safe enough", you may need/want to create VIEWs or impose some other access control to prevent customers from seeing each other's data.
IIRC a product called "Trusted Oracle" had the ability to partition data based on such a key (about the time Oracle 7 or 8 was out). The idea was that any given query would automagically have "and sourceKey = #userSecurityKey" (or some such) appended. The feature may have been rolled into later versions of the popular commercial product.
To expand on Gregory's answer, you can also make a parent SSIS package that calls the package doing the actual moving from within a Foreach Loop container.
The parent package queries a config table and puts the result in an object variable. The Foreach Loop then uses this recordset to pass variables to the child package, such as your database name and any other details the package might need.
Your table could list all of your client databases and have a flag to mark when you are ready to move them. This way you are not sitting around running the SSIS package on 32,767 databases. I'm hooked on the Foreach Loop in SSIS.