How to link N database tables with one master table? - sql

Let's assume that I have N tables for N bookstores. I have to keep data about books in a separate table for each bookstore, because each table has a different schema (the number and types of columns differ); however, there is a set of columns common to all the bookstore tables.
Now I want to create one "MasterTable" with only a few columns:
MasterTable

| id | title | isbn |
|----|-------|------|
| 1  | abc   | 123  |

MasterToBookstores

| m_id | tb_id | p_id |
|------|-------|------|
| 1    | 1     | 2    |
| 1    | 2     | 1    |

BookStore_Foo

| p_id | title | isbn | date | size |
|------|-------|------|------|------|
| 1    | xyz   | 456  | 1998 | 3KB  |
| 2    | abc   | 123  | 2003 | 4KB  |

BookStore_Bar

| p_id | title | isbn | publisher | Format |
|------|-------|------|-----------|--------|
| 1    | abc   | 123  | H&K       | PDF    |
| 2    | mnh   | 986  | Amazon    | MOBI   |
My questions: is it right to keep data this way? What are the best practices for this and similar cases? Can I give each particular bookstore table an alias with a number, which would help me manage the whole set of tables?
Is there a better way of doing this?

I think you are confusing the concepts of "store" and "book".
From your comments and the example data, it appears the problem is in having different sets of attributes for books, not stores. If so, you'll need a structure similar to this:
The symbol denotes inheritance.¹ The BOOK is the "base class" and BOOK1/BOOK2/BOOK3 are various "subclasses".² This is a common strategy when entities share a set of attributes or relationships.³ For a fuller explanation of this concept, please search for "Subtype Relationships" in the ERwin Methods Guide.
Unfortunately, inheritance is not directly supported by current relational databases, so you'll need to transform this hierarchy into plain tables. There are generally three strategies for doing so, as described in these posts:
Interpreting ER diagram
Parent and Child tables - ensuring children are complete
Supertype-subtype database design
NOTE: The structure above allows various book types to be mixed inside the same bookstore. Let me know if that's not desirable (i.e. you need exactly one type of books in any given bookstore)...
¹ A.k.a. category, subclassing, subtyping, generalization hierarchy, etc.
² I.e. types of books, depending on which attributes they require.
³ In this case, books of all types are in a many-to-many relationship with stores.
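As a rough sketch of one of those strategies (one table per subtype, sometimes called class-table inheritance): the base table holds the common columns, each subtype table holds the extra ones, and a junction table carries the many-to-many link to stores. SQLite is used here for a self-contained illustration; the table and column names (book_print, book_ebook, etc.) are invented, loosely based on the question's examples.

```python
# Class-table inheritance sketch: common columns in a base table,
# subtype-specific columns in extension tables keyed by the same id.
# Names are hypothetical, modeled on the question's sample data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
-- Base "class": columns common to every book.
CREATE TABLE book (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    isbn  TEXT NOT NULL
);

-- Subtype tables: at most one row per book, keyed by the same id.
CREATE TABLE book_print (
    book_id  INTEGER PRIMARY KEY REFERENCES book(id),
    pub_year INTEGER,
    size     TEXT
);

CREATE TABLE book_ebook (
    book_id   INTEGER PRIMARY KEY REFERENCES book(id),
    publisher TEXT,
    format    TEXT
);

-- Stores, plus the many-to-many link mentioned in the NOTE above.
CREATE TABLE bookstore (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE bookstore_book (
    bookstore_id INTEGER REFERENCES bookstore(id),
    book_id      INTEGER REFERENCES book(id),
    PRIMARY KEY (bookstore_id, book_id)
);

INSERT INTO book VALUES (1, 'abc', '123'), (2, 'xyz', '456');
INSERT INTO book_print VALUES (2, 1998, '3KB');
INSERT INTO book_ebook VALUES (1, 'H&K', 'PDF');
INSERT INTO bookstore VALUES (1, 'Foo'), (2, 'Bar');
INSERT INTO bookstore_book VALUES (1, 2), (2, 1);
""")

# Querying a subtype joins the base table to its extension table.
row = conn.execute("""
    SELECT b.title, b.isbn, e.publisher, e.format
    FROM book b JOIN book_ebook e ON e.book_id = b.id
""").fetchone()
print(row)  # ('abc', '123', 'H&K', 'PDF')
```

This keeps one shared id space for all books, so the master/junction tables need only one foreign key regardless of subtype.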

If all the other tables share at least two common columns, you could have a base table for all books and add more tables for the rest of the data, using the id from the base table.
UPDATE:
If you use Entity Framework to connect to your DB, I suggest you try this:
Create your entities model something like this:
Then let Entity Framework generate the database (Update Database from Model) for you. Note that this uses inheritance (in the model, not in the database).
Let me know if you have questions.

Suggested data model:
1. Have a master database, which stores the master data.
2. The dimension tables in the master database are transactionally replicated to your distributed bookstore databases.
3. You can choose to use an updatable subscriber; merge replication is also a good choice.
4. Each distributed bookstore database still works independently; however, master data is merged back either by merge replication or by an updatable subscriber.
5. If you want to ensure master-data integrity, you can use read-only subscribers and use transactional replication to distribute master data into the distributed databases; but in this design you need stored procedures in the master database to register your dimension data. Make sure there is no double-hop issue.

I would suggest having two tables:
bookStores:
id, name, someMoreColumns
books:
id, bookStore_id, title, isbn, date, publisher, format, size, someMoreColumns
It's easy to see the relationship here: a bookStore has many books.
Note that I'm putting all the columns from all of your BookStore tables into just one table, even if some rows won't have a value for some columns.
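A minimal sketch of this two-table design (SQLite is used for illustration; the column list follows the answer's schema, with NULLs where a store's books lack an attribute):

```python
# Two-table design: every book from every store lives in one "books"
# table; columns that don't apply to a given store stay NULL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE bookStores (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE books (
    id           INTEGER PRIMARY KEY,
    bookStore_id INTEGER NOT NULL REFERENCES bookStores(id),
    title TEXT, isbn TEXT, date INTEGER,
    publisher TEXT, format TEXT, size TEXT
);
INSERT INTO bookStores VALUES (1, 'Foo'), (2, 'Bar');
-- Foo's rows leave publisher/format NULL; Bar's rows leave date/size NULL.
INSERT INTO books (bookStore_id, title, isbn, date, size)
    VALUES (1, 'xyz', '456', 1998, '3KB');
INSERT INTO books (bookStore_id, title, isbn, publisher, format)
    VALUES (2, 'abc', '123', 'H&K', 'PDF');
""")

# One query now covers every store:
count = conn.execute("SELECT COUNT(*) FROM books").fetchone()[0]
print(count)  # 2
```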
Why I prefer this way:
1) Across all the data from the BookStore tables, only a few columns will never have a value in the books table (for example, size and format, if you don't have an e-book version). The other columns can be filled in someday (you could set a date for your e-books, but you don't have this column in your BookStore_Bar table, which seems to refer to the e-books). This way you can keep much more detailed info on all your books if you ever want to update it.
2) If you have a bunch of BookStore tables, let's say 12, you will not be able to handle your data easily. That is, if you want to run some query over all your books (which means over all your tables), you will have at least three ways:
First: run the query manually against each of the 12 tables and then merge the results;
Second: write a query with 12 joins, or put 12 tables in your FROM clause, to query all your data;
Third: depend on some script, stored procedure, or software to do the first or second way for you;
I like to be able to work with my data as easily as possible, with no dependence on some other script or software, unless I really need it.
3) As for MySQL (because I know much more about MySQL), you can use partitions on your books table. It is a higher level of data management, in which you can distribute the data from your table across several files on disk instead of just one, as a table is generally allocated. It is very useful when handling a large amount of data in the same table, and it speeds up queries based on your data-distribution plan. Let's see an example:
Let's say you already have 12 distinct bookstores, but under my database model. Each row in your books table is associated with one of the 12 bookstores. If you partition your data on bookStore_id, it will be almost the same as having 12 tables, because you can create a partition for each bookStore_id, and each partition will then handle only the related data (the rows that match that bookStore_id).
Let's say you want to query the books table for bookStore_id IN (1, 4, 9). If your query really needs just these three partitions to give you the desired output, the others will not be queried, and it will be as fast as querying each separate table.
You can drop a partition and the others will not be affected. You can add new partitions to handle new bookstores. You can subpartition a partition. You can merge two partitions. In a nutshell, you can turn your single books table into an easy-to-handle, multi-storage table.
Side Effects:
1) I don't know everything about table partitioning, so it's good to refer to the documentation to learn the important points for creating and managing partitions.
2) Take care of your data with regular backups (dumps), as you will probably have a very populated books table.
I hope it helps you!


How does BigQuery search through a cluster / partition?

My colleague asked if it was possible to reverse the order of the data in a cluster. So it would look something like the following.
| Normal cluster | Reversed cluster |
|---|---|
| 1 | 2 |
| 1 | 1 |
| 2 | 1 |
I said I remembered reading that the data is searched like a binary tree, so it doesn't really matter whether it's reversed or not. But now I can't find anything that mentions how it actually searches through the cluster.
How does BigQuery actually search for a specific value in clusters / partitions?
When you create a clustered table in BigQuery, the data is automatically organized based on the contents of one or more columns in the table's schema. The columns you specify are used to colocate related data. When you cluster a table using multiple columns, the order of the columns is important, as it determines the sort order of the data.
When you create a partitioned table, data is stored in physical blocks, each of which holds one partition of data. A partitioned table maintains these properties across all operations that modify the data. You can typically split large tables into many smaller partitions by ingestion time, by a TIMESTAMP/DATE column, or by an INTEGER column.

Normalization of multiple similar tables

I'm quite new to all this tech stuff, so excuse me in advance for any mistakes.
My question is about data normalization. I'm using pgAdmin 4 for this task.
I have multiple tables, one for each year, each containing multiple columns. I wish to normalize these data in order to make further inquiries. The data is in this form:
Table 1

| id | name1 | code1 | code2 | year |
|----|-------|-------|-------|------|
| 1  | Peter | 111   | 222   | 2007 |

Table 2

| id | name1 | code1 | code2 | year |
|----|-------|-------|-------|------|
| 2  | Peter | 111   | 223   | 2008 |
So my tables are similar but with some different data each year.
I have broken it down so I have multiple tables containing only one column of information:
name1_table
| id | name1 |
And I have done this for every column. Now I need to link it all together - am I heading in the right direction, or have I gone off in a bad one?
What is the next step, and if possible, what is the code I need to use?
The easiest way to combine two tables with identical schemas is to create a new third table with the same schema and copy all the records into it.
Something like this:
INSERT INTO Table3 SELECT * FROM Table1;
INSERT INTO Table3 SELECT * FROM Table2;
Or if you simply need a combined query result you can use UNION:
SELECT * FROM Table1
UNION
SELECT * FROM Table2;
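Both approaches can be tried end-to-end with SQLite (one thing worth knowing: UNION removes duplicate rows across the two results, while UNION ALL keeps them):

```python
# Demonstrates INSERT INTO ... SELECT to merge two identically-shaped
# tables into a third, and UNION to get a combined query result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (id INTEGER, name1 TEXT, code1 INTEGER, code2 INTEGER, year INTEGER);
CREATE TABLE Table2 (id INTEGER, name1 TEXT, code1 INTEGER, code2 INTEGER, year INTEGER);
CREATE TABLE Table3 (id INTEGER, name1 TEXT, code1 INTEGER, code2 INTEGER, year INTEGER);
INSERT INTO Table1 VALUES (1, 'Peter', 111, 222, 2007);
INSERT INTO Table2 VALUES (2, 'Peter', 111, 223, 2008);

-- Copy both tables' rows into the combined table.
INSERT INTO Table3 SELECT * FROM Table1;
INSERT INTO Table3 SELECT * FROM Table2;
""")

# A combined query result without a new table:
rows = conn.execute(
    "SELECT * FROM Table1 UNION SELECT * FROM Table2 ORDER BY year"
).fetchall()
print(rows)

merged = conn.execute("SELECT COUNT(*) FROM Table3").fetchone()[0]
print(merged)  # 2
```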
You are not headed in the right direction. The best approach is simply to store all the data in one table and to use indexes and/or partitions to access particular rows.
Sometimes this is not possible, notably because the tables have different formats. Possible solutions:
Break the existing tables into similarity sets based on columns, and create one table for each similarity set.
Create a table based on the most recent definition of the table, NULLing out columns that don't exist in historical tables.
Use a facility such as JSON for columns that have changed over time.
Use a facility such as inheritance for columns that have changed over time.

Is it good practice to have two SQL tables with bijective row correspondence?

I have a table of tasks,
id | name
----+-------------
1 | brush teeth
2 | do laundry
and a table of states.
taskid | state
--------+-------------
1 | completed
2 | uncompleted
There is a bijective correspondence between the tables, i.e.
each row in the task table corresponds to exactly one row in the state table.
Another way of implementing this would be to place a state row in the task table.
id | name | state
----+-------------+-------------
1 | brush teeth | completed
2 | do laundry | uncompleted
The main reason why I have selected to use two tables instead of this one, is because updating the state will then cause a change in the task id.
I have other tables referencing the task(id) column, and do not want to have to update all those other tables too when altering a task's state.
I have two questions about this.
Is it good practice to have two tables in bijective row-row correspondence?
Is there a way I can ensure a constraint that there is exactly one row in the state table corresponding to each row in the task table?
The system I am using is postgresql.
You can ensure the 1-1 correspondence by making the id in each table a primary key and also a foreign key that references the id in the other table. This is allowed, and it guarantees the 1-1 relationship.
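One wrinkle worth noting with this mutual-foreign-key arrangement: since neither row can exist strictly before the other, at least one of the constraints must be declared DEFERRABLE INITIALLY DEFERRED so the check runs at COMMIT rather than per statement; this applies in PostgreSQL as well. The sketch below uses SQLite, which supports the same syntax, for a self-contained demonstration:

```python
# Mutual PK/FK between tasks and states: each id must exist in both
# tables. Deferred checking lets us insert the pair inside one
# transaction; a task without a state is rejected at COMMIT.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE tasks (
    id INTEGER PRIMARY KEY
        REFERENCES states(taskid) DEFERRABLE INITIALLY DEFERRED,
    name TEXT NOT NULL
);
CREATE TABLE states (
    taskid INTEGER PRIMARY KEY
        REFERENCES tasks(id) DEFERRABLE INITIALLY DEFERRED,
    state TEXT NOT NULL
);
""")

# Both rows go in within one transaction; checks happen at COMMIT.
with conn:
    conn.execute("INSERT INTO tasks VALUES (1, 'brush teeth')")
    conn.execute("INSERT INTO states VALUES (1, 'completed')")

# A task with no corresponding state row fails at commit time.
try:
    with conn:
        conn.execute("INSERT INTO tasks VALUES (2, 'do laundry')")
except sqlite3.IntegrityError:
    conn.rollback()
    print("rejected: task 2 has no state row")
```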
Sometimes, you want such tables, but one table has fewer rows than the other. This occurs when there is a subsetting relationship, and you don't want the additional columns on all rows.
Another purpose is to store separate columns in different places. When I learned about databases, this approach was called vertical partitioning. Nowadays, columnar databases are relatively common; these take the notion to the extreme -- a separate "store" for each column (although the "store" is not exactly a "table").
Why would you do this? Here are some reasons:
You have infrequently used columns that you do not want to load for every query on the more frequent columns.
You have frequently updated columns and you do not want to lock the rest of the columns.
You have too many columns to store in one row.
You have different security requirements on different columns.
Postgres does offer other mechanisms that you might find relevant. In particular, table inheritance might be useful in your situation.
All that said, you would not normally design a database like this. There are good reasons for doing so, but it is more typical to put all columns related to an entity in the same table.

SQL Server 2012 Query to extract subsets of data

I'm trying to 2nf some data:
Refid | Reason
------|---------
1 | Admission
1 | Advice and Support
1 | Behaviour
As you can see, one person might have multiple reasons, so I need another table with the following format:
Refid | Reason1 | Reason2 | Reason3 | ETC...
------|-----------|--------------------|-----------
1 | Admission | Advice and Support | Behaviour
But I don't know how to write a query to extract the data and write it into a new table like this. The reasons don't have dates or other criteria that would cause any reason to be in any special order; all reasons are assigned at the time of referral.
Thanks for your help. SQL Server 2012.
You are modelling a many-to-many relationship.
You need 3 tables:
- One for reasons (say ReasonID and Reason)
- One for each entity identified by RefID (say RefID and ReferenceOtherData)
- A junction (or intersection) table with the keys (RefID, ReasonID)
This way:
- Multiple reasons can apply to one Ref entity
- Multiple Refs can have the same reason
- You turn repeated columns into rows
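The three-table layout could be sketched as follows. SQLite is used for a self-contained illustration (the same DDL works in SQL Server with minor type changes), and names like RefReasons are hypothetical:

```python
# Many-to-many: Refs and Reasons linked through a junction table,
# instead of Reason1/Reason2/Reason3... repeated columns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE Reasons (
    ReasonID INTEGER PRIMARY KEY,
    Reason   TEXT NOT NULL UNIQUE
);
CREATE TABLE Refs (
    RefID              INTEGER PRIMARY KEY,
    ReferenceOtherData TEXT
);
CREATE TABLE RefReasons (
    RefID    INTEGER REFERENCES Refs(RefID),
    ReasonID INTEGER REFERENCES Reasons(ReasonID),
    PRIMARY KEY (RefID, ReasonID)
);
INSERT INTO Reasons (Reason)
    VALUES ('Admission'), ('Advice and Support'), ('Behaviour');
INSERT INTO Refs (RefID) VALUES (1);
INSERT INTO RefReasons SELECT 1, ReasonID FROM Reasons;
""")

# All of one person's reasons come back as rows, not columns:
reasons = [r[0] for r in conn.execute("""
    SELECT rs.Reason
    FROM RefReasons rr
    JOIN Reasons rs ON rs.ReasonID = rr.ReasonID
    WHERE rr.RefID = 1
    ORDER BY rs.Reason
""")]
print(reasons)
```

If a pivoted report is ever needed for display, it can be produced from this layout with a query (e.g. PIVOT in SQL Server), without baking the repeated columns into the schema.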

What should a relationships table look like - Need confirmation of my technique

Let's say I have 3 models:
User
Page
Comments
I asked a question based on if I should have each model keep track of its relationships: SQL relationships and best practices
An example of this would be a "Pages" table that states who its author was... The problem seemed to be that if 2 users were the authors of one page, you'd have to add a new specific table called PageRelationshipsWithUsers that might reference the PageID and the UserID that created it, with a separate row for the co-author.
Understandably this sounds a bit naff. I would end up with a heck of a load of relation tables, and most likely they could be replaced with just one multi-purpose relationships table... So I decided to come up with a relationships table like the following:
Relationships Table New
RelationshipID | ItemID | LinkID | ItemType | LinkType | Status
-----------------------------------------------------------------------------
1 | 23(PageID) | 7(UserID) | ("Page") | ("User") | TRUE
2 | 22(CommentID) | 7(UserID) | ("Comment") | ("User") | TRUE
3 | 22(CommentID) | 23(PageID) | ("Comment") | ("Page") | TRUE
however, I would very much appreciate some input as to how good of an idea laying out my relationships table like this is.
Any thoughts?
The answer was told to me by a work colleague:
Imagine the above relationships table for the model "Book".
A User can rent a book, so the relation is User -> Book...
But what if he can buy a book too: User -> Book...
Oops, we need a new relationship... and considering this relationships table was supposed to be the one-size-fits-all, we now have a requirement to add a new separate table... whoops.
So the answer is NO NO NO, don't, it's naughty. Keep your relationship tables separate and specific.
Your suggestion for a relationship table is not optimal for several reasons:
It's difficult to write queries that join tables through the relationship table, as you will need filters on the ItemType and LinkType columns, which is not intuitive when writing queries.
If a need arises to add new entities in the future, that use different datatypes for their primary keys, you cannot easily store ID's of various datatypes in your ItemID and LinkID columns.
You cannot create explicit foreign keys in your database, to enforce referential integrity, which is possibly the best reason to avoid the design you suggest.
Query performance might suffer.
When normalizing a database, you should not be afraid to have many tables. Just make sure to use a naming convention that makes sense and is self-documenting. For example, you could name the relation table between authors and pages "PageAuthors", instead of "Pages".
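The foreign-key point (the third bullet above) is easy to demonstrate. In the sketch below, the generic Relationships table cannot declare foreign keys on ItemID/LinkID because their target table depends on ItemType/LinkType, so a dangling reference slips through; the specific PageAuthors table rejects it. SQLite for illustration; names follow the question's example and the Status column is omitted for brevity:

```python
# Generic polymorphic table vs. specific relation table: only the
# specific one can enforce referential integrity with foreign keys.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE Users (UserID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Pages (PageID INTEGER PRIMARY KEY, title TEXT);

-- Generic table: ItemID/LinkID cannot be foreign keys, because which
-- table they point at depends on ItemType/LinkType.
CREATE TABLE Relationships (
    RelationshipID INTEGER PRIMARY KEY,
    ItemID INTEGER, LinkID INTEGER,
    ItemType TEXT, LinkType TEXT
);

-- Specific table: both columns are enforceable foreign keys.
CREATE TABLE PageAuthors (
    PageID INTEGER REFERENCES Pages(PageID),
    UserID INTEGER REFERENCES Users(UserID),
    PRIMARY KEY (PageID, UserID)
);
INSERT INTO Users VALUES (7, 'alice');
INSERT INTO Pages VALUES (23, 'home');
""")

# The generic table happily accepts a dangling PageID of 999...
conn.execute("INSERT INTO Relationships VALUES (1, 999, 7, 'Page', 'User')")

# ...while the specific table rejects it.
try:
    conn.execute("INSERT INTO PageAuthors VALUES (999, 7)")
except sqlite3.IntegrityError:
    print("rejected by foreign key")
```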