I have an existing database with 10 or so tables and thousands of rows. I’m tiring of SQL and would like to add an ORM — probably either Objection or Sequelize.
Is there a good way to implement either ORM on an existing database?
It's not particularly difficult to connect Sequelize to an existing database. You just need to configure the connection and set up models.
If your previously created tables include some attributes that don't match Sequelize's conventions out of the box, you may need to write some extra code in your models. Again, this is fairly painless. See Working With Legacy Tables for additional information.
I've researched this topic and I'm relatively sure that in most cases the answer is "No", but I would like some second opinions specific to my case.
We're currently working on a multi-user web app where each user will basically have their own copy of the portal/app within the web app. It's not performance I'm worried about, but security.
I'm considering partitioning the data into per-user tables with a prefix (userid_table1, userid_table2, ...) to make it more manageable and to ensure the team doesn't overlook a security check during development, since we can easily add a validation that queries only run against tables matching userid_*.
Would you still recommend against this method?
I'm considering partitioning the data into per-user tables with a prefix (userid_table1, userid_table2, ...) to make it more manageable and to ensure the team doesn't overlook a security check during development, since we can easily add a validation that queries only run against tables matching userid_*.
More manageable? That sounds like a joke. Your database will end up with a zillion different tables. Any operation that you want to do across all users will be a nightmare:
Declaring foreign key constraints.
Defining a new index on the tables.
Adding a new column.
Restructuring the tables.
And so on. And so on.
Your users may be limited to a single table. But the application developer and DBA need to deal with all of them. I cringe thinking about trying to figure out where performance bottlenecks are in such a system.
I should add that databases are optimized for big tables, not lots of tables, so multiple tables are typically less efficient. And they are even less efficient when you think about all the half-filled pages in all those tables.
The same entities should not be spread among multiple tables, unless you have a really, really good reason. This is not a really good reason. One simple solution is to prevent users from having access to the base tables. Just give them access to views or user-defined table functions -- and have all of these filter on user ids.
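A minimal sketch of that approach, assuming SQL Server; the table, view, role, and mapping names are all made up here, and other engines have equivalent mechanisms:

-- Base table that application users never query directly
CREATE TABLE dbo.Orders (
    OrderId int IDENTITY PRIMARY KEY,
    UserId  int NOT NULL,
    Amount  decimal(10,2) NOT NULL
);

-- Hypothetical mapping from database logins to application user ids
CREATE TABLE dbo.UserMap (
    LoginName sysname PRIMARY KEY,
    UserId    int NOT NULL
);
GO

-- Every user queries the view; it only returns their own rows
CREATE VIEW dbo.MyOrders AS
SELECT o.OrderId, o.Amount
FROM dbo.Orders AS o
JOIN dbo.UserMap AS m ON m.UserId = o.UserId
WHERE m.LoginName = SUSER_SNAME();
GO

-- Grant access to the view only, never to the base table
CREATE ROLE AppUsers;
GRANT SELECT ON dbo.MyOrders TO AppUsers;
DENY  SELECT ON dbo.Orders   TO AppUsers;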
There are some edge cases where you do want separate tables for each user. Typically, each user would have a very complex set of tables (think B2B application) and, in fact, they might have their own database. There may also be legal requirements to separate data. In these cases, though, the "separateness" would typically be at the database level, not the table level.
I am thinking about and exploring options for designing the database for my new application. In general, I will have registered users and info about them. They will be able to do some things in the app, and that data will be in the same DB as the user data (so I can have shared FKs and such).
But then I plan to have a second database that will be logically independent of the first database, except that it will share userID as an FK.
I don't know whether I should even put that second set of logic in an extra DB or keep everything in the same database. I plan to have a subdomain in my app for the second part (it is like an app within the app), but what if I discover they should share more data? Will that cross-querying hurt my performance? And is that actually the way to go; is there a real reason to separate the databases?
As soon as you have two databases you have potential complexity. You have not given any particular reason why you need two databases. So keep it simple until you have a reason.
An example of what folks do: have a "current" database, small, holding just the data needed right now. That might be where orders are taken and fulfilled. Once the data is no longer current, say some days or weeks after the order is filled, move the data to a "historic" database. There, marketing and management folks can look at overall trends in the history without affecting performance of the "current" database, whose performance might be critical to keeping your customers happy.
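As a rough sketch, assuming both databases live on the same SQL Server instance (the database, table, and column names are all assumptions):

-- Nightly job: copy filled orders older than 30 days to the history database,
-- then remove them from the current one, in a single transaction.
-- (A production job would also guard against rows changing between the two statements.)
BEGIN TRANSACTION;

INSERT INTO HistoryDB.dbo.Orders (OrderId, CustomerId, FilledAt, Amount)
SELECT OrderId, CustomerId, FilledAt, Amount
FROM   CurrentDB.dbo.Orders
WHERE  FilledAt < DATEADD(DAY, -30, GETDATE());

DELETE FROM CurrentDB.dbo.Orders
WHERE  FilledAt < DATEADD(DAY, -30, GETDATE());

COMMIT TRANSACTION;

Once the two databases sit on different servers, the same move needs a distributed transaction or a reconciled batch process, which leads straight into the complexity point below.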
As an example of complexity: any time you have two databases you need to consider consistency between them, and this is much harder to ensure than it might appear. Databases do offer two-phase commit capabilities, or you can devise batch processes, but there are always subtleties that are hard to catch.
I would just keep it all in one database. Unless you have dozens of tables, there should be no real performance problems, imho. It will, however, make your life much easier, only having to work with one database connection and not having to worry about merging information from two queries.
I also agree that unless the volume of your data is going to be huge (judging by the question, that doesn't seem to be the case here), you can use a single database to store your data without performance issues.
For "visual" separation of data structure, you can always create tables in two schemas of single database.
I've worked in several SQL environments. In one environment, the different tables holding business data were split across several different SQL databases, all on the same server.
In another environment, almost all the tables are kept on one single SQL database.
I'm creating a new project that is closely related to another project, and I've been wondering if I should put the new tables in the same SQL database or a new SQL database.
This all runs on MS SQL Server.
What factors do I need to consider as I make this decision?
It's tough from your question to tell what your actual requirements are, or what data you would consider storing in different databases. But in addition to Gordon's points, I can offer a couple of additional reasons why you might want to use separate databases for data belonging to different customers/users (this answer assumes that one possible separation of data, whether by database or schema, would be by customer):
As I mentioned in a comment, some customers will demand that their data be stored separately, and you may need to agree to that in writing before you see a penny or are able to secure their business. So you may as well be prepared for that inevitability.
Keeping each customer in their own database makes it very easy to move them if they outgrow your current server. At my previous job we designed the system this way, and it saved our bacon later - we were able to move customers completely to a different server with what essentially amounted to a metadata operation. During a maintenance window, we backed up their database, set the original offline, restored the backup to the new server, and updated a config table that told all the apps where to find that database. This is much more flexible than trying to extract all of their data from a database shared by others...
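In broad strokes, the move looked something like this (the server, database, file paths, logical file names, and the config table are all invented here for illustration):

-- On the original server: final backup, then take the database offline
BACKUP DATABASE CustomerA TO DISK = N'\\backupshare\CustomerA.bak' WITH INIT;
ALTER DATABASE CustomerA SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- On the new server: restore, relocating the data and log files
RESTORE DATABASE CustomerA
FROM DISK = N'\\backupshare\CustomerA.bak'
WITH MOVE 'CustomerA'     TO N'D:\Data\CustomerA.mdf',
     MOVE 'CustomerA_log' TO N'L:\Logs\CustomerA_log.ldf';

-- In the shared metadata database: repoint the apps (hypothetical config table)
UPDATE dbo.CustomerDirectory
SET    ServerName = N'NEWSERVER'
WHERE  CustomerName = N'CustomerA';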
Separate databases also allow you to handle maintenance differently. One customer needs point-in-time restore and another doesn't? Perfect, you can just use a different recovery model on separate databases. Much easier than separating by filegroups and trying to implement some filegroup-level backup solution, and much more efficient than simply keeping one big database in full recovery.
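For instance, switching recovery models per database is a one-liner (the database names and backup path are made up):

-- Customer that needs point-in-time restore: full recovery plus log backups
-- (after an initial full backup has been taken)
ALTER DATABASE CustomerA SET RECOVERY FULL;
BACKUP LOG CustomerA TO DISK = N'\\backupshare\CustomerA_log.trn';

-- Customer that doesn't: simple recovery, full/differential backups only
ALTER DATABASE CustomerB SET RECOVERY SIMPLE;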
This isn't free, of course; it's about trade-offs. Multiple databases scare some people away, but having managed such a system for 13 years, I can tell you that managing 100 or 500 databases that are largely identical is not that much more complicated than managing 500 schemas in one massive database (in fact, I would say it is less so in a lot of respects).
A database is the unit of backup and recovery, so that should be the first consideration when designing database structures. If different sets of data have different backup and recovery requirements, they are very good candidates for separate databases.
That is only half the problem, though. In most environments, backup/recovery is pretty much the same for all databases. It becomes a question of application design. In other words, the situation becomes quite subjective.
In the environment that I'm working in right now, here are some criteria for splitting data into different databases:
(1) Publishing tables to a wide audience. We "publish" data in tables and put these into a database, separate from other tables used for building them or for special purposes. Admittedly, SQL Server claims that schemas are the unit of security. However, databases seem to do a good job in the real world (see the sketch after this list).
(2) Strict security requirements. Some data is so sensitive that lawyers have to approve who can see it. This goes into its own database, with its own access controls.
(3) Separation of data tables (which users can see) and tables that describe the production system.
(4) Separation of tables used for general querying by a skilled group of analysts (the published tables) versus tables used for specific reports/applications.
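On point (1), a sketch of how that publishing split is typically enforced, with made-up role and database names:

-- In the published database: analysts get read-only access to everything published
USE PublishedDB;
GO
CREATE ROLE Analysts;
GRANT SELECT ON SCHEMA::dbo TO Analysts;
-- The staging/build tables live in a separate database where these logins
-- have no user mapped, so there is nothing extra to lock down.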
Finally, I would add this. If some of the data is being updated continuously throughout the day and other data is used for reporting, I would tend to put them in different databases. This helps separate them in the case of problems.
I have a bit of an architecture problem here. Say I have two tables, Teacher and Student, each on a separate server. Since these tables share a lot of data and functionality, I would like to use this inheritance scheme and create a People table; however, I would need to keep the Teacher table and the People records relating to Teacher on one server, and the Student table and the People records relating to Student on another server. This was a requirement from the lead developer, since we have too many (and I mean too many) records for Teacher and Student, and a single database containing all of the People would collapse. Moreover, the clients NEED to have them on separate servers (sigh*).
I would really like to implement the inheritance scheme, since a lot of the functionality could be shared among the databases. Is there any possible way to do this? Any other architecture that may suit this type of problem? Am I just crazy?
--- EDIT ---
Ok, I don't really have Teachers and Students per se; I just used those names to simplify my explanation. Truth is, there are about 9 sub-tables that would inherit from the super table, all of them on separate servers for separate applications, and no, I don't have that type of database, but we have pretty low-end servers for the number of transactions we have ;). You're right, my statements are a bit exaggerated and I apologize for that; it was just to make you guys answer faster (sorry :P). The different servers are more of a business restriction than anything else (although the lead developer DID say that a common database to store the SuperTable would collapse under its own weight - his words, not mine :S). Our clients don't like their information mixed with other clients' information, so we must have their information on different servers - pretty stupid, but the decision-makers have spoken :(.
Under what assumption did you determine that you have too much data? I'm pretty sure you could list every teacher and student in the world, and not cause SQL Server any grief.
This seems like an arbitrary decision that is going to have significant impact on the complexity of any solution you design.
Take a look here - I'm sure you don't measure your database in anything close to the scale represented on this page, and many of these DBs are running on SQL Server.
I don't know for sure if this is possible with SQL Server specifically, but it smells like something that could be solved with clustering and tablespace partitioning.
What I wonder about is whether this is really a good requirement; it introduces a lot of technical complexity based on a pretty simple assertion that there's just too much data. Have you attempted to verify this? A simple test would be to create a simple schema and populate it with dummy data for the number of rows you expect in production. It would probably be in your best interest to perform this test before you go too far down the road to implement this 'requirement'.
By the way, the type of schema you linked to is an example of the class table inheritance pattern.
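For reference, a minimal single-server sketch of that pattern (the column names are illustrative):

-- Base table: one row per person
CREATE TABLE dbo.People (
    PersonId  int PRIMARY KEY,
    FirstName nvarchar(100) NOT NULL,
    LastName  nvarchar(100) NOT NULL
);

-- Subtype tables share the base table's key
CREATE TABLE dbo.Teachers (
    PersonId int PRIMARY KEY REFERENCES dbo.People (PersonId),
    HireDate date NOT NULL
);

CREATE TABLE dbo.Students (
    PersonId       int PRIMARY KEY REFERENCES dbo.People (PersonId),
    EnrollmentYear int NOT NULL
);

Splitting the subtype tables across servers is exactly where this hurts: the foreign keys back to the base table can no longer be declared.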
It would be possible for you to implement a domain model for this project where the common attributes of Teacher and Student are described by a Person interface or base class which the common operations are written against. If you plan to use stored procedures extensively, this might not be a useful option, but it's something to consider.
I think Paul is correct - perhaps look at your hardware infrastructure rather than your DB schema.
Using clustering, proper indexing, and possibly a data archive scheme should solve any performance problems. The inheritance scheme seems to be the best data model.
It is possible to split the data over multiple servers and keep the scheme, but I think you'd definitely have more performance problems than if you looked at clustering/proper indexing. By setting up linked servers you can do cross-server queries.
e.g., a Students query:
SELECT *
FROM SERVER_A.People.dbo.Persons P
INNER JOIN SERVER_B.People.dbo.Students S
ON P.PersonID = S.PersonID
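For the cross-server join above to work, the remote server has to be registered as a linked server first; the setup looks roughly like this (the provider and host name are assumptions):

-- Run once on the server executing the query
EXEC sp_addlinkedserver
     @server     = N'SERVER_B',
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'serverb.example.local';

-- Map the current login to the remote server using its own credentials
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'SERVER_B',
     @useself    = N'TRUE';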
--EDIT-- As Paul said, you could perform your database separation in your abstraction layer.
E.g. have your Student class extend your Person class. In your Person class constructor, have it connect to Server A to populate whichever fields are available. In your student class constructor, have it connect to Server B (the Person attributes will already be populated by the Person constructor).
I'm with Aaron here (sup Aaron). Move the tables into a single database. SQL Server can easily handle billions of rows per table as long as your tables are indexed correctly (I've done it on SQL 2000 six or seven years ago, so modern versions and modern hardware are no problem). There probably haven't been enough students in all of time at every school in the world to overload SQL Server, much less at a single school.
In this case, your best practice would be to put the tables in the same database, on the same server, and index them for better performance.
Too many records cause 'database collapse'? What kind of pot is that lead developer smoking? Potent stuff!
I would recommend you guys study partitioned tables first. Making an application distributed (which is really what the two-server approach implies) is much, much harder than you think, and it does not provide scalability.
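A minimal sketch of a partitioned table in SQL Server (the boundary values, names, and filegroup placement are all illustrative; note that table partitioning is an Enterprise-only feature in older versions):

-- Partition function and scheme: ranges of StudentID mapped to filegroups
CREATE PARTITION FUNCTION pfStudentId (int)
    AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);

CREATE PARTITION SCHEME psStudentId
    AS PARTITION pfStudentId ALL TO ([PRIMARY]);

-- One logical table, physically split across partitions
CREATE TABLE dbo.Students (
    StudentID int NOT NULL,
    FirstName nvarchar(100) NOT NULL,
    LastName  nvarchar(100) NOT NULL,
    CONSTRAINT PK_Students PRIMARY KEY (StudentID)
) ON psStudentId (StudentID);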
Yep, I'd have to agree with the others here: a single database on a single server is just fine. It is far easier and cheaper to scale up your current hardware to support the workload than it will be to scale out to federated servers. I only know of one place that does federated servers, and their workload is phenomenal.
Link the servers and create a view:
SELECT
FirstName
,LastName
....
FROM server.database.owner.Teachers
UNION
SELECT
FirstName
,LastName
....
FROM server.database.owner.Students
What kind of client are you using? If you're using a Java client, and are using ORM, you may want to look into Hibernate Shards.
Besides all the good answers here pointing out that the assumptions behind the question are highly questionable: if I needed to do this seriously (and if I took the assumptions as true), I would look at what Oracle has to offer, because it is in this type of scenario that it shows a benefit (I say this from experience).
But on the core question, assuming that the assumptions you outline are true, I would not try to have a combined table. If teachers and students can't be in the same database, it is unlikely that their identifying information can, and if the amount of data is overwhelming, then putting it all in one table is worse.
What I suspect is that if the underlying assumptions are true, it is because a lot of contention is anticipated on the tables (many connections and heavy activity), causing a lot of locks. In that case, adding a Person table will make things worse.
All that being said, if you still really wanted to do it, you can reference one database from another in queries via linked servers.
But if the real issue is the number of connections, contention, and deadlocks around the tables, such a solution would make things worse.
EDIT: In response to those who question what advantage Oracle would bring to such a situation: one would be in the federated database area, where it is much more mature. Another is with tables that have a high amount of contention; it makes copies of the data in certain situations, and in general its concurrency model is more sophisticated. For example, in scenarios where tables are read by longer-running queries, causing a lot of potential read locks, Oracle helps you keep transactional integrity without having to lock on read. In MS-SQL, you have to resort to dirty reads.
MS-SQL is a fine database, but it has its limits (raw volume of data, without any particular requirements about read/write volume, is not really one of them, though, which makes the question strange). And given the stiff competition, the non-Enterprise version of Oracle is really close enough in price to be worth a look. It could end up costing you a lot later.
Of course, if you have already purchased an MS-SQL license, the cost factor is larger for Oracle, so the benefits have to be more obvious.