NHibernate - Generate Domain from Database

I know it's possible to generate the database tables from the domain model, but is there any way of doing it the other way around? I have a totally awful database (the worst I have ever seen). It's sharded (16 shards!), split across multiple Postgres databases (all on the same server), with foreign key relations like urn:dbtable:guid.
It's proving a major pain in the ass to migrate using SSIS, so I want to use NHibernate: read the data into objects, then write them out to a SQL Server database in blissful data-architectural harmony.
Is there any way to scan the current DB, using NHibernate or another tool, and build a domain model and mappings?
Thanks!

NHibernate Mapping Generator
- A simple utility to generate NHibernate mapping files and corresponding domain classes from existing DB tables
It's free.
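Once the generator has produced the classes and mappings for each source database, the migration itself can be a plain read-then-save loop over two session factories. A minimal sketch, assuming a generated Customer entity and two hypothetical config files (postgres.cfg.xml and sqlserver.cfg.xml) pointing at the source and target databases; with 16 shards you would repeat the "source" half once per shard configuration:

```csharp
// A minimal sketch, not the generator's output: the Customer entity and the
// two .cfg.xml file names are hypothetical.
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Linq;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class Migrator
{
    public static void Main()
    {
        ISessionFactory source = new Configuration()
            .Configure("postgres.cfg.xml")      // a legacy shard
            .BuildSessionFactory();
        ISessionFactory target = new Configuration()
            .Configure("sqlserver.cfg.xml")     // the new, harmonious schema
            .BuildSessionFactory();

        using (ISession read = source.OpenSession())
        using (ISession write = target.OpenSession())
        using (ITransaction tx = write.BeginTransaction())
        {
            foreach (Customer c in read.Query<Customer>())
            {
                // Clean up / remap each object here before persisting it.
                write.Save(c);
            }
            tx.Commit();
        }
    }
}
```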

Related

How can I make two databases connect to each other?

What are some ways that these two databases can INSERT INTO each other, run commands against each other, or display the data in each other's tables?
Not directly, but there are ways...
What you're asking about is called "cross database queries".
Each database in PostgreSQL has its own system tables and its own ways of keeping itself organized. Queries that span two databases would break this isolation, even for databases hosted by the same database server.
But there are ways to achieve what you want.
Single database, multiple schemas
Instead of two databases, you can run one database with two schemas. This keeps the tables, views, etc. separated and easier to maintain, and allows queries between the two. It also allows security and data isolation for users who are only allowed to access one of the schemas.
You're actually already using a schema in PostgreSQL called "public"; adding more schemas simply extends this.
See the documentation.
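Once both data sets live in one database as separate schemas, a single query can join across them just by schema-qualifying the table names. A minimal sketch using the Npgsql ADO.NET provider; the sales and billing schemas and their tables are made-up examples:

```csharp
// A minimal sketch, assuming Npgsql and two hypothetical schemas ("sales"
// and "billing") inside the same PostgreSQL database.
using System;
using Npgsql;

public static class CrossSchemaQuery
{
    public static void Main()
    {
        using (var conn = new NpgsqlConnection(
            "Host=localhost;Database=appdb;Username=app;Password=secret"))
        {
            conn.Open();

            // Schema-qualified names are all it takes; no linked servers needed.
            const string sql = @"
                SELECT o.id, c.name
                FROM   sales.orders      AS o
                JOIN   billing.customers AS c ON c.id = o.customer_id";

            using (var cmd = new NpgsqlCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)}");
            }
        }
    }
}
```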
Foreign Data Wrappers
Foreign Data Wrappers (FDW) allow you to "link" a schema (or just individual tables, if you prefer) from another database. See the documentation for CREATE SERVER, and this blog post, which seems like a pretty clear treatment of the subject.
Note that Foreign Data Wrappers also let you link to databases other than PostgreSQL, e.g. Oracle, SQL Server, MySQL, and lots more. See here.
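As a rough sketch of what the postgres_fdw route involves: the server name legacy_srv, database legacydb, and credentials below are all hypothetical, and the same statements can just as easily be run from psql instead of from code:

```csharp
// A rough sketch: server name, database name, and credentials are made up.
// Requires the postgres_fdw extension and sufficient privileges.
using Npgsql;

public static class FdwSetup
{
    public static void Main()
    {
        var statements = new[]
        {
            "CREATE EXTENSION IF NOT EXISTS postgres_fdw",

            // Point at the other database (here: same host, different database).
            "CREATE SERVER legacy_srv FOREIGN DATA WRAPPER postgres_fdw " +
            "OPTIONS (host 'localhost', dbname 'legacydb')",

            "CREATE USER MAPPING FOR CURRENT_USER SERVER legacy_srv " +
            "OPTIONS (user 'app', password 'secret')",

            // Link every table of the remote public schema into a local schema.
            "CREATE SCHEMA IF NOT EXISTS legacy",
            "IMPORT FOREIGN SCHEMA public FROM SERVER legacy_srv INTO legacy"
        };

        using (var conn = new NpgsqlConnection(
            "Host=localhost;Database=appdb;Username=postgres;Password=secret"))
        {
            conn.Open();
            foreach (var sql in statements)
                using (var cmd = new NpgsqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
        }
    }
}
```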

Best practices for refactoring a database while preserving existing data?

I have been working on a very data-intensive application that has around 250 tables. Recently some design changes have been required. Some of them involve adding new tables and linking them up with existing tables via foreign keys, in a 1-N manner for parent-child relationships (in the ORM).
Take this example: the current design allows for one rental Vehicle per Contract; the new design requires multiple Vehicles on the same Contract, each with multiple rates.
So the data that lives in one table now needs to be split across 2 additional tables.
I have completed the changes to the schema but I can't deploy those changes to the test environment until I find a way to convert the existing data and put it in the new design format.
My current process:
1. Add 3 new tables: nContract, nContractedAsset, nContractRate.
2. Copy the information from the Contract table into the 3 new tables, preserving the primary key values from Contract in nContract.
3. Copy foreign key references, indexes, and rights from the Contract table to nContract.
4. Drop the Contract table.
5. Rename nContract to Contract, and so on.
The only issue I have is that I am not comfortable doing step 2 in SQL. I want to use the power of the ORM and .NET for more intelligent and complex tasks, in scenarios more complex than this example.
Is there a way I can write the data migration for step 2 using ADO.NET or an ORM?
What are the best practices or processes for this? Am I doing something wrong?
I ended up using FluentMigrator (https://github.com/schambers/fluentmigrator).
It allowed me to do Entity Framework-style migrations (see: Ruby on Rails ActiveRecord migrations).
Most of the DDL can be written in .NET in a fluent format. It supports Up and Down migrations wrapped in transactions, and it even supports full SQL scripts for data migration (a sketch follows below).
The best thing about it is that all your migration scripts can be put in source control and even tested.
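As a hedged sketch of what steps 1-2 could look like as a single FluentMigrator migration: the column names below are invented, and the real INSERT ... SELECT statements would depend on the actual Contract schema:

```csharp
// A sketch only: real column definitions belong to your actual Contract schema.
using FluentMigrator;

[Migration(201301150001)]
public class SplitContractIntoThreeTables : Migration
{
    public override void Up()
    {
        // Step 1: the three new tables, keeping Contract's PK values in nContract.
        Create.Table("nContract")
            .WithColumn("ContractId").AsInt32().PrimaryKey()    // same PK as Contract
            .WithColumn("CustomerId").AsInt32();

        Create.Table("nContractedAsset")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("ContractId").AsInt32().ForeignKey("nContract", "ContractId")
            .WithColumn("VehicleId").AsInt32();

        Create.Table("nContractRate")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("ContractedAssetId").AsInt32().ForeignKey("nContractedAsset", "Id")
            .WithColumn("Rate").AsDecimal();

        // Step 2: the data migration, run inside the same transaction as the DDL.
        Execute.Sql(@"INSERT INTO nContract (ContractId, CustomerId)
                      SELECT ContractId, CustomerId FROM Contract");
    }

    public override void Down()
    {
        Delete.Table("nContractRate");
        Delete.Table("nContractedAsset");
        Delete.Table("nContract");
    }
}
```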

How to avoid manually writing/managing SQL

My team and I are rapidly developing a web app backed by an Oracle DB. We use Maven's Flyway plugin to manage our DB creation and population from SQL INSERT scripts. Typically we add 3-4 tables per sprint and/or modify existing table structures.
We model the schema in an external tool that generates the schema, including the constraints; we run this first, followed by the SQL INSERTs, to ensure the integrity of all the data.
We spend too much time managing the changes to the SQL to cover the new tables. By this I mean adding the extra column data to the existing SQL INSERT statements, not to mention manually creating the new SQL INSERT data, particularly when it references a foreign key.
Surely there is another way, maybe maintaining the raw data in Excel and passing it through a parser into the DB. Does anyone have any ideas?
10 tables so far and up to 1,000 SQL statements; the DB is not live, so we tear it down on every build.
Thanks
Edit: the inserted data is static reference data the platform depends on to function (menus, etc.).
The architecture is Tomcat, JSF, Spring, JPA, Oracle
Please store your raw data in tables in the database. Hey, why on earth would you want to use Excel for this? You have Oracle Database: the best tool for the job!
Load your unpolished data using SQL*Loader or external tables into regular tables in the database.
From there you have SQL, the most powerful RDBMS tool for manipulating your data.
NEVER do slow-by-slow (row-by-row) inserts; that is what your 1,000 SQL statements are. Please do CTAS (CREATE TABLE AS SELECT) instead.
Add/enable the constraints AFTER you have loaded all the data.
create table t as select * from raw_data;
or
insert into t (x,y,z) select x,y,z from raw_data;
Using CTAS this way gives you direct-path inserts (an INSERT ... SELECT can do the same with the /*+ APPEND */ hint), which bypass the buffer cache and most of the conventional per-row overhead. This can even be done in parallel to get your data into the database superfast!
Do all of your data manipulation in SQL or PL/SQL, not in the application.
Please invest time learning the Oracle Database. It is full of features for you to use!
Don't just use it as a data dump (a place where you store your data). Create packages as interfaces to your application: your API to the database.
Don't just throw around thousands of statements compiled into your application. It will get messy.
Build your business logic in PL/SQL inside the database; use your application for presentation.
Best of luck!
Alternatively, you also have the option of implementing a Java migration. It could read whatever input data you have (Excel, CSV, ...) and do the proper inserts.

Fetch an entity's read-only collection from a separate database

I'm building a new NHibernate 3.3 application that must connect to a legacy system in order to look up some information about my users. There's a separate, read-only database that holds course enrollments that I'd like to use to populate a collection on my Student entity. These would be components in NHibernate-speak, consisting of a department code plus course and section numbers, like "MTH101 sec. 2".
The external database has a surrogate key, the student number, which corresponds to a property in my User entity, but it's not the primary key of a Student.
These databases are on separate servers. I can't change the legacy database.
Do I have a hope of mapping the enrollments collection as NHibernate components?
Two Options
When you have multiple databases or multiple database servers that you're trying to link together in a single domain model using NHibernate, you basically have two options.
1. Leverage the database server's capabilities (linked servers, etc.) to join the data so that NHibernate only has to worry about connecting to one database. In your NHibernate mappings, you fully specify the table attribute so that the database server knows to query against the other database server. For your "surrogate key, ... not the primary key" scenario, you could map this using <many-to-one property-ref="...">.
2. Use multiple NHibernate session factories, one for each database. You would be responsible for coordinating what gets loaded from which database. You configure each session factory for just the tables that exist in that database, with the appropriate connection string. Then, to load the data, you execute two queries: one against one database, and another against the other (sketched below).
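A minimal sketch of option #2: the Student and Enrollment classes, the StudentNumber property, and the two pre-built session factories are assumptions about your model, with Enrollment mapped as a plain read-only entity in the legacy factory:

```csharp
// A sketch only: Student, Enrollment, and StudentNumber are assumptions about
// the model; NHibernate itself doesn't prescribe any of these names.
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Student
{
    public virtual int Id { get; set; }
    public virtual int StudentNumber { get; set; }
}

public class Enrollment
{
    public virtual int Id { get; set; }
    public virtual int StudentNumber { get; set; }
    public virtual string Department { get; set; }
    public virtual int CourseNumber { get; set; }
    public virtual int SectionNumber { get; set; }
}

public class EnrollmentLookup
{
    private readonly ISessionFactory _main;    // the application's database
    private readonly ISessionFactory _legacy;  // the read-only enrollment database

    public EnrollmentLookup(ISessionFactory main, ISessionFactory legacy)
    {
        _main = main;
        _legacy = legacy;
    }

    public IList<Enrollment> EnrollmentsFor(int studentId)
    {
        using (ISession main = _main.OpenSession())
        using (ISession legacy = _legacy.OpenSession())
        {
            Student student = main.Get<Student>(studentId);

            // The second query, against the other database, is keyed on the
            // shared student number rather than on Student's primary key.
            return legacy.Query<Enrollment>()
                         .Where(e => e.StudentNumber == student.StudentNumber)
                         .ToList();
        }
    }
}
```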
Which one?
Which is the right choice? It depends...
Available features
If your database server doesn't have any features to support #1, or if there are other things preventing you from using those features, then you obviously have to use #2.
Cross-DB WHERE clauses
#1 gives you more flexibility when writing queries - you could specify where clauses that span both databases if you needed to, though you need to be careful that the query you write doesn't require database A to fetch tons of data from database B. With method #2 you execute a second query to get what you need from database B, which forces you to be more conscious about exactly what data you have to fetch from each database to get the job done.
Unenforced relationship
There won't be any foreign keys enforcing the relationship because the data lives in two different databases. NHibernate (very reasonably) assumes that database relationships are enforced by foreign keys. Since there's a chance these two databases could be out of sync, #1 will require you to resort to things like not-found="ignore", which has performance implications.
Complexity of Deployment
Inter-database relationships make deploying to various environments (DEV, QA, PROD) difficult. You can't just deploy the application and database, and make sure the application's connection strings are pointing at the correct databases; instead you also have to make sure that any references inside the databases to other databases are pointing to the correct places.
Given all of the above factors, I usually lean towards option #2, but there are some situations where #1 is just so much more convenient.

Share data between 4 websites

I'm planning a web project containing 4 websites built in MVC3. As the database server I'm going to use MS SQL Server.
Each of these websites will have somewhere around 40 tables, but some of the tables are shared between the websites:
Contact, Cities, Postalcodes, Countries...
How should I handle this? Should I put all the tables of each website into one common database (so that the databases of websites 1, 2, 3, and 4 are together in one database)? Or should I create a separate database containing only the shared data?
But then I think I'll run into problems with data consistency, because I believe there is no way to point from one database to another (linking, for example, the city table in database 1 to the building table in database 2).
Any ideas?
Thanks a lot!
What I like about splitting it out into separate databases is that if each web site has its own database, and one of those web sites gets extremely popular, it is very easy to just move its database to a different, more powerful database server. Not much has to change except that (a) you need to reference the central "control" data remotely (or replicate/mirror/etc.), and (b) you point that web site at the different database server.

Another benefit is that if two web sites have the same types of tables (e.g. Patients), you don't have to have tables like Patients_WebSite1 and Patients_WebSite2, with different stored procedures that are identical except for table names (or ugly dynamic SQL procedures that paste the table name in). Separated out, you can have the exact same schema and the exact same codebase without having to combine everyone's data into a single table.
If you mix the data within a single database, data consistency is easier and the whole setup is slightly simpler, but splitting it out later when you grow is a lot tougher. If you split it out into different databases, then no, you won't be able to enforce referential integrity using standard DRI (foreign keys). You can accomplish it in other ways if it is important (triggers, validation before insert/update, etc.).