Core Data relationships vs. server-based foreign keys (SQL)

I have a complex iPad app that I'm moving to Core Data. I receive data from a server whose tables use foreign keys to represent the relationships between tables (entities).
As I rewrite the app to use Core Data, should I maintain the foreign key structure and write my own accessors, convert the keys to Core Data relationships, or use both? Using both seems like double the work: I already have the data linking the tables, which I may need to keep for data I send back to the server, yet Core Data will create its own keys for relationships. That duplicates information and could get out of sync.
I could:
1. Keep the existing attributes that represent relationships between tables and write my own fetches as needed.
2. Build an object graph as I receive the data from the server and use Core Data relationships.
3. Use a hybrid: foreign key attributes in some places and relationships in others, depending on need.
Is there a typical approach for Core Data applications that receive most of their data from a server?

If you are going to use Core Data instead of SQLite, then convert to Core Data relationships. Remember, Core Data is not just a relational database: it persists an object graph. Thus, the way you lay out your data structures may be quite different.
Typically you may have more denormalized data in a Core Data application, but really you should remap your data the way you want it used in your application; then you will know the real answer. However, I don't think I'd keep the foreign keys. I'd use relationships, because that is what fits Core Data best.

Related

Best way to keep track of users and records in a .NET Core web application

I'm trying to build an Inventory web application with .NET Core. In this app I want to keep track of every create and update operation, so almost every model in my application has CreatedBy and ModifiedBy fields, and each of those fields has a one-to-many relationship with the UserId field in the Users model.
So there are a lot of foreign keys in my models and a lot of navigation properties in my Users model. It works, but it looks kind of messy, especially in the Users model, which got me thinking that maybe there is something wrong with my approach. I've thought of some other ways, but I'm just learning the ropes and can't really predict the downsides of those approaches, so I need help.
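Roughly what the models look like right now (simplified; the entity and property names here are just placeholders, not my real ones):

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;

// One of many tracked models; every one of them repeats this pattern.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }

    public string CreatedById { get; set; }        // FK to the Users table
    public ApplicationUser CreatedBy { get; set; }

    public string ModifiedById { get; set; }       // FK to the Users table
    public ApplicationUser ModifiedBy { get; set; }
}

public class ApplicationUser
{
    public string Id { get; set; }

    // Two navigation collections per tracked model, which is what gets messy.
    [InverseProperty(nameof(Product.CreatedBy))]
    public ICollection<Product> ProductsCreated { get; set; }

    [InverseProperty(nameof(Product.ModifiedBy))]
    public ICollection<Product> ProductsModified { get; set; }
}
```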
So what's the best way to deal with this kind of situation in a web application?
Should I keep defining foreign keys?
Should I store UserId as string in those columns?
Should I create another table which holds records for every create / update operation?
Is there a better way out there?
After some research I decided to go with SQL Server's temporal tables directly. You only have to add a couple of lines to your DbContext's OnModelCreating method to set it up, and it looks like it's working very well for my needs.
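For reference, here is roughly what that setup looks like; this is a minimal sketch assuming EF Core 6.0 or later with the SQL Server provider, and the Product entity is just a placeholder:

```csharp
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class InventoryContext : DbContext
{
    public DbSet<Product> Products => Set<Product>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Marks the table as a SQL Server temporal table; the migration adds
        // the period columns and the history table automatically.
        modelBuilder.Entity<Product>()
            .ToTable("Products", tb => tb.IsTemporal());
    }
}
```

After that, every update keeps the previous row version in the history table, and you can query it with the temporal operators (e.g. TemporalAsOf) if you ever need to show an audit trail.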

Database design & 3rd party integrations

We're building an application where eCommerce owners can connect their stores from different platforms (e.g. Shopify, Magento, WooCommerce). We do this in order to import data from those platforms.
So we have a Stores table. It holds data that is common to all platforms, plus some data that is specific to each platform.
I'm not sure what to do here. Should we create separate tables to hold the platform-specific information, or should we add columns to the Stores table that will simply be empty for stores from the other platforms?
What would be the pros and cons of each, given that with the separate-tables approach we would need to create new tables for every new platform we integrate with?
You haven't said which specific RDBMS you're using, but with PostgreSQL you have the option of foreign data wrappers. These let you federate data from other sources and APIs into your application database and read and write foreign tables just like you do the internal tables (assuming the external APIs allow you to modify data). With this approach, you just need to make sure that your stores are properly associated with their respective entries in the foreign tables. Developing FDWs is relatively easy with Multicorn.
If that's not an option: using columns is efficient to query since the information is right there in your store record. However, it could get unwieldy depending on how much of it there is, and if you could have a tenant with multiple presences on one of those external platforms -- weirder things have happened -- you're in for some trouble. And the relational form makes adding and changing support for the external platforms easier since you don't have to lock the entire tenants table to add or remove columns.
The simpler approach may be all you need to start out with, but it'd probably be smart to plan for tables in the end.
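If you do go the table route, the shape is roughly one detail table per platform keyed by the store, sketched here as EF-style entities (all names are illustrative, not from your schema):

```csharp
public class Store
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Platform { get; set; }           // "Shopify", "Magento", "WooCommerce", ...

    // At most one of these is populated for a given store.
    public ShopifyStoreDetails Shopify { get; set; }
    public MagentoStoreDetails Magento { get; set; }
}

// One table per platform, holding only that platform's fields.
public class ShopifyStoreDetails
{
    public int StoreId { get; set; }               // PK, and FK back to Store
    public string ShopDomain { get; set; }
    public string AccessToken { get; set; }
}

public class MagentoStoreDetails
{
    public int StoreId { get; set; }
    public string BaseUrl { get; set; }
    public string ApiKey { get; set; }
}
```

Adding support for a new platform then means adding a new detail table rather than widening (and locking) the Stores table.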

Refactoring a database while preserving existing data: best practice?

I have been working on a very data-intensive application that has around 250 tables. Recently some design changes have been required. Some of them involve adding new tables and linking them to existing tables with foreign keys in a 1-N manner for parent-child relationships (in the ORM).
Take this example: the current design allows one rental Vehicle per Contract, while the new design requires multiple Vehicles on the same Contract, each with multiple rates.
So the data that lived in one table now needs to be split across two additional tables.
I have completed the changes to the schema, but I can't deploy them to the test environment until I find a way to convert the existing data into the new design.
My current process:
1. Add 3 new tables: nContract, nContractedAsset, nContractRate.
2. Copy the information from the Contract table into the 3 new tables, preserving the primary key values in nContract so they match the Contract table.
3. Copy the foreign key references, indexes, and rights from the Contract table to nContract.
4. Drop the Contract table.
5. Rename nContract to Contract, and so on.
The only issue is that I am not comfortable doing step 2 in SQL. I want to use the power of the ORM and .NET to do more intelligent and complex work in scenarios more involved than this example.
Is there a way I can write the data migration for step 2 using ADO.NET or an ORM?
What are the best practices or processes for this? Am I doing something wrong?
I ended up using FluentMigrator (https://github.com/schambers/fluentmigrator).
It allowed me to do Entity Framework-style migrations (compare Ruby on Rails ActiveRecord migrations).
Most of the DDL can be written in .NET in a fluent format. It supports Up and Down migrations wrapped in transactions, and it even supports full SQL scripts for data migration.
The best thing about it is that all your migration scripts can be put in source control and even tested.
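A rough sketch of what step 2 can look like as a FluentMigrator migration. The table and column names follow the Contract/Vehicle example above, but the details (version number, columns, the exact INSERT) are illustrative only, and nContract is assumed to have been created in an earlier step:

```csharp
using FluentMigrator;

[Migration(20130101120000)]
public class SplitContractIntoAssetAndRate : Migration
{
    public override void Up()
    {
        Create.Table("nContractedAsset")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("ContractId").AsInt32().ForeignKey("nContract", "Id")
            .WithColumn("VehicleId").AsInt32();

        Create.Table("nContractRate")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("ContractedAssetId").AsInt32().ForeignKey("nContractedAsset", "Id")
            .WithColumn("Rate").AsDecimal();

        // Plain SQL (or a custom .NET step) is still available for the data copy.
        Execute.Sql(@"INSERT INTO nContractedAsset (ContractId, VehicleId)
                      SELECT Id, VehicleId FROM Contract");
    }

    public override void Down()
    {
        Delete.Table("nContractRate");
        Delete.Table("nContractedAsset");
    }
}
```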

Totally unstructured data

We currently have a solution where we increasingly need to store unstructured data of various kinds. For example, clients can define their own workflows, specifying what kind of data should be captured (of various types, some simple, some complex). This data then needs to be stored and displayed in a web application, with a bit of functionality to modify it.
Until now the workflows were defined internally, so an MS SQL database was designed to cater for those specific workflows and their data. Now that clients can define workflows themselves, we need to relax the structure of our database. At first I thought a key-value table in MS SQL might be a good idea, but then I lose the type information of the data being captured and have to deserialize everything in the website (ASP.NET MVC). I am also considering something like RavenDB, but I'm not sure whether it would be a good fit.
So my question is: what would be the best way to store this unstructured data, bearing in mind that users must also be able to search, edit, and display it?
How about combining two types of databases? Use a NoSQL database for your unstructured data, and use the relational MS SQL database to save references to that data for each workflow so you can retrieve it later.
The data type will always be a problem and you will always have to deserialize it. Searching can be handled by taking the string representation of each value in the workflow and combining them into a searchable field in your MS SQL row.
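A rough sketch of that idea, assuming the RavenDB 4.x client (which you mentioned considering) for the unstructured documents and a plain MS SQL table for the per-workflow reference; all type, property, and table names here are illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using Raven.Client.Documents;

// The unstructured part: stored as a document in RavenDB.
public class WorkflowSubmission
{
    public string Id { get; set; }                 // assigned by RavenDB on Store()
    public int WorkflowId { get; set; }
    public Dictionary<string, object> Fields { get; set; } = new Dictionary<string, object>();
}

// The MS SQL row: keeps the reference to the document plus a flattened,
// searchable copy of the captured values.
public class WorkflowSubmissionRef
{
    public int Id { get; set; }
    public int WorkflowId { get; set; }
    public string DocumentId { get; set; }
    public string SearchText { get; set; }
}

public static class SubmissionStorage
{
    public static WorkflowSubmissionRef Save(IDocumentStore store, WorkflowSubmission submission)
    {
        using (var session = store.OpenSession())
        {
            session.Store(submission);             // document goes to RavenDB
            session.SaveChanges();
        }

        // Persist this row in MS SQL (EF, ADO.NET, ...) next to the workflow record.
        return new WorkflowSubmissionRef
        {
            WorkflowId = submission.WorkflowId,
            DocumentId = submission.Id,
            SearchText = string.Join(" ", submission.Fields.Values.Select(v => v == null ? "" : v.ToString()))
        };
    }
}
```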

Fetch an entity's read-only collection from a separate database

I'm building a new NHibernate 3.3 application that must connect to a legacy system in order to look up some information about my users. There's a separate, read-only database that holds course enrollments, which I'd like to use to populate a collection on my Student entity. These would be components in NHibernate-speak, consisting of a department code plus course and section numbers, like "MTH101 sec. 2".
The external database has a surrogate key, the student number, which corresponds to a property in my User entity, but it's not the primary key of a Student.
These databases are on separate servers, and I can't change the legacy database.
Do I have a hope of mapping the enrollments collection as NHibernate components?
Two Options
When you have multiple databases or multiple database servers that you're trying to link together in a single domain model using NHibernate, you basically have two options.
Leverage the database server's capabilities (linked servers, etc.) to join the data so that NHibernate only has to worry about connecting to one database. In your NHibernate mappings, you fully specify the table attribute so that the database server knows to query against the other database server. For your "surrogate key, ... not the primary key", you could map this using <many-to-one property-ref="...">.
Use multiple NHibernate session factories, one for each database. You would be responsible for coordinating what gets loaded from which database. You configure each session factory for just the tables that exist in that database and with the appropriate connection string. Then, to load the data, you execute two queries, one against one database, and another against the other database.
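A minimal sketch of option 2, with the caveat that the config file names and the Student/Enrollment classes and properties here are assumptions for illustration, not your actual mappings:

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Linq;

public class Student
{
    public virtual int Id { get; set; }
    public virtual string StudentNumber { get; set; }
    public virtual IList<Enrollment> Enrollments { get; set; }
}

public class Enrollment
{
    public virtual int Id { get; set; }
    public virtual string StudentNumber { get; set; }
    public virtual string Department { get; set; }
    public virtual int CourseNumber { get; set; }
    public virtual int Section { get; set; }
}

public static class EnrollmentLoader
{
    // One session factory per database; build these once at startup.
    private static readonly ISessionFactory MainFactory =
        new Configuration().Configure("hibernate.main.cfg.xml").BuildSessionFactory();
    private static readonly ISessionFactory LegacyFactory =
        new Configuration().Configure("hibernate.legacy.cfg.xml").BuildSessionFactory();

    public static Student LoadWithEnrollments(int studentId)
    {
        using (var mainSession = MainFactory.OpenSession())
        using (var legacySession = LegacyFactory.OpenSession())
        {
            var student = mainSession.Get<Student>(studentId);

            // Second query, against the legacy database, keyed by the student number.
            student.Enrollments = legacySession.Query<Enrollment>()
                .Where(e => e.StudentNumber == student.StudentNumber)
                .ToList();

            return student;
        }
    }
}
```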
Which one?
Which is the right choice? It depends...
Available features
If your database server doesn't have any features to support #1, or if there are other things preventing you from using those features, then you obviously have to use #2.
Cross-DB where Clauses
#1 gives you more flexibility when writing queries - you could specify where clauses that span both databases if you needed to, though you need to be careful that the query you write doesn't require database A to fetch tons of data from database B. With method #2 you execute a second query to get what you need from database B, which forces you to be more conscious about exactly what data you have to fetch from each database to get the job done.
Unenforced relationship
There won't be any foreign keys enforcing the relationship because the data lives in two different databases. NHibernate (very reasonably) assumes that database relationships are enforced by foreign keys. Since there's a chance these two databases could be out of sync, #1 will require you to resort to things like not-found="ignore", which has performance implications.
Complexity of Deployment
Inter-database relationships make deploying to various environments (DEV, QA, PROD) difficult. You can't just deploy the application and database, and make sure the application's connection strings are pointing at the correct databases; instead you also have to make sure that any references inside the databases to other databases are pointing to the correct places.
Given all of the above factors, I usually lean towards option #2, but there are some situations where #1 is just so much more convenient.