For example, a user has [image1.jpg, image2.jpg, image3.jpg].
The user could reorder them to [image2.jpg, image1.jpg, image3.jpg], add to the end, or delete from any position.
I can think of 2 methods to store them:
Just store them as an Array type in the database. When the user adds/deletes/reorders photos, overwrite the entire array in the database.
Store many photo rows with a position column, each belonging to one user. On insert, add at the last position + 1. On delete, shift the positions after the deleted position back by 1.
What is the recommended design?
I think the most natural design in SQL would be a separate table:
create table userImages (
    userImageId serial,
    userId int references users(userId),
    image varchar(255),
    position int
);
As you have noticed, if you want positions to be gapless and ordered, then you need to update all the rows.
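For example, a hedged sketch of the delete-and-shift against the table above (:uid and :pos are placeholders; run both statements in one transaction):

delete from userImages
where userId = :uid and position = :pos;

update userImages
set position = position - 1
where userId = :uid and position > :pos;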
This has several features/advantages:
You can put the logic into a trigger or stored procedure so it lives inside the database (see the trigger sketch after this list).
You can store additional information about the images, such as the date they were added, or implement soft deletes.
The database can prevent duplicate images.
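A minimal sketch of such a trigger, assuming PostgreSQL to match the serial column above (the function and trigger names are illustrative):

create function close_position_gap() returns trigger as $$
begin
    -- shift everything after the deleted position back by one
    update userImages
    set position = position - 1
    where userId = old.userId and position > old.position;
    return old;
end;
$$ language plpgsql;

create trigger userImages_close_gap
after delete on userImages
for each row execute function close_position_gap();  -- on PostgreSQL < 11, use "execute procedure"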
The alternative would be to store these as an array within a row in users. To maintain the ordering, you would basically need to do this in the application: read the array, do deletes, inserts, and reorders, and then save the row again.
This has several features/disadvantages, such as:
The application has to be responsible for the column, instead of the database.
There is no place to put additional information about images.
I am generally biased toward the first approach, but there are some situations where the second is quite reasonable.
What I think is: if photos are ordered by some attribute (e.g. last edit time), you can just SELECT ... ORDER BY that attribute.
If photos are ordered by user requirement, we have to use the 'position' column.
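For example (the addedAt column is an assumption, not part of the table above):

-- ordering by an attribute needs no position column:
select image from userImages where userId = :uid order by addedAt;
-- a user-defined order needs one:
select image from userImages where userId = :uid order by position;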
While creating a tkinter application to store book information, I realized that simply deleting a row of information from the SQL database does not update the indexes. It's kind of hard to explain, but here is a picture of what I mean:
link to picture. (still young on this account, so pictures can't be embedded, sorry for the inconvenience)
As you can see, the first column represents the index, and index 3 is missing because I deleted it. Is there a way such that, upon deleting a row, anything below it just shifts up to cover the empty spot?
Your use of the word "index" must be based on the application language, not the database language. In databases, indexes are additional data structures that speed certain operations on tables.
You are referring to an "id" column, presumably one that is defined automatically as identity, auto_increment, serial, or whatever the underlying database uses.
A very important point is that deleting a row from a table does not affect the other rows in the table (unless you have gone through the work of writing triggers to make that happen). It just deletes the row.
The second, more important point is that you do not want to change the "identity" of rows -- and that is what the column you are calling an "index" is doing. It identifies the row. It not only identifies the row today, but it identifies the same row tomorrow. And, if it existed, yesterday. That is, you don't want to change the identity.
This is even more important when you have foreign key relationships -- that is, other tables that refer to this row. Those relationships could get all messed up if the ids start changing.
SQL does offer a simple way to get a number with no gaps:
select row_number() over (order by "index") as seqnum
from t;
From what I've searched, there are 2 ways to insert an already-present record into an ICollection list:
group.Users.Add(db.Users.FirstOrDefault(x => x.Id == 1));
var to_add = new User { Id = 1 }; db.Users.Attach(to_add); group.Users.Add(to_add);
The problem with both of the above approaches is that they make a db call every time we want to add a record, while we already know the user's Id and the group's Id, and that's all that's needed to create the relationship.
Imagine a long list to be added; both of the above methods would make multiple calls to the db.
So you have Groups and Users. Every Group has zero or more Users; every User has zero or more Groups. A traditional many-to-many relationship.
Normally one would add a User to a Group, or a Group to a User. However, you don't have a Group, nor a User; you only have a GroupId and a UserId. And because of the large number of insertions, you don't want to fetch the Users and the Groups for which you want to create relations.
The problem is, if you could add the GroupId-UserId combination directly to your junction table, how would you know that you weren't adding a Group-User relation that already exists? If you wouldn't care, you'd end up with the relation twice. This would lead to problems: would you want them to be shown twice if you asked for the Users of a Group? Which one should be removed if the relation ends, or should they all be removed?
If you really want to allow double relations, then you'd need to implement a custom junction table, as described here. The extra field would be the number of relations.
This would not help you with your large batch, because you would still need to fetch the field from the custom junction table to increment the NrOfRelations value.
On the other hand, if you don't want double relations, you'd have to check whether the value already exists, and you didn't want to fetch data before inserting.
Usually the number of additions to a database is far less than the number of queries. If you have a large batch of data to be inserted, then it is usually only during the initialization phase of the database. I wouldn't bother optimizing initialization too much.
Consider remembering already-fetched Groups and Users in a dictionary, preventing them from being fetched twice. However, if your list is really huge, this is not a practical solution.
If you really need this functionality for a prolonged period of time consider creating a Stored Procedure that checks if the GroupId / UserId already exists in the junction table, and if not, add it.
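A minimal sketch of such a procedure, assuming SQL Server and a junction table named GroupUsers (the table and procedure names are assumptions):

CREATE PROCEDURE AddUserToGroup
    @GroupId INT,
    @UserId INT
AS
BEGIN
    -- insert the relation only if it does not exist yet
    INSERT INTO GroupUsers (GroupId, UserId)
    SELECT @GroupId, @UserId
    WHERE NOT EXISTS (
        SELECT 1 FROM GroupUsers
        WHERE GroupId = @GroupId AND UserId = @UserId
    );
END;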
See here for SQL code on how to do an add-or-update:
Entity Framework call stored procedure
I have finished all my changes to a database table in SQL Server Management Studio 2012, but now I have a large gap between some values due to editing. Is there a way to keep my data but re-assign all the IDs from 1 up to my last value?
I would like this cleaned up, as I populate dropdownlists with these values and then interact with my database under the assumption that my dropdownlist index and the table's ID match up, which is not the case right now.
My current DB has a large gap from 7 to 28. I would like to shift everything from 28 and up back down to 8, 9, 10, 11, etc., so that my database has no gaps from 1 onward.
If the solution is tricky please give me some steps as I am new to SQL.
Yes, there are any number of ways to "close the gaps" in an auto generated sequence. You say you're new to SQL so I'll assume you're also new to relational concepts. Here is my advice to you: don't do it.
The ID field is a surrogate key. There are several aspects of surrogates one must be mindful of when using them, but the one I want to impress upon you is,
-- A surrogate key is used to make the row unique. Other than the guarantee that
-- the value is unique, no other assumptions may be made concerning the value.
-- In particular, no meaning may be derived from the value as to the contents of
-- the row or the row's relationship to any other row.
You have designed your app with a built-in assumption of the value of the key field (that they will be consecutive). Already it is causing you problems. Do you really want to go through this every time you make changes to the table? And suppose a future feature requires you to filter out some of the choices according to an option the user has selected? Or enable the user to specify the order of the items? Not going to be easy. So what is the solution?
You can create an additional (non-visible) field in the dropdown list that contains the key value. When the user makes a selection, use that index to get the key value of the selection and then go out to the database and get whatever additional data you need. This will work if you populate the list from the entire table or just select a few according to some as yet unknown filtering criteria or change the order in any way.
Voilà. You never have this problem again, no matter how often you add and remove rows in the table.
However, on the off chance that you are as stubborn as me (not likely!) or just refuse to listen to the melodious voice of reason and experience, then try this:
Create a new table exactly like the old table, including auto incrementing PK.
Populate the new table using a Select from the old table. You can specify any order you want.
Drop the old table.
Rename the new table to the old table name.
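A hedged T-SQL sketch of those four steps (table and column names are illustrative):

-- 1. create a new table with a fresh identity column
CREATE TABLE MyTable_New (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Name VARCHAR(100) NOT NULL
);
-- 2. copy the data in the order you want; ids are reassigned 1, 2, 3, ...
INSERT INTO MyTable_New (Name)
SELECT Name FROM MyTable ORDER BY ID;
-- 3. and 4. swap the tables
DROP TABLE MyTable;
EXEC sp_rename 'MyTable_New', 'MyTable';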
You will have to drop and redefine any FKs from other tables. But this entire process can be placed in a script, because if you do this once, you'll probably do it again.
Now all the values are consecutive. Until you edit the table again...
You should refactor the code for your dropdown list and not the PK of the table.
If you do not agree, you can do one of the following:
Insert another column holding the dropdown's "order of appearance", make a unique index on it and fill this by hand (or programmatically).
Replacing the SERIAL with an INT would also work; make a unique index on the column and fill it by hand (or programmatically).
Remove the large ids and reseed your serial; the code depends on your DBMS (see the sketch below).
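For instance, a hedged sketch assuming the highest remaining id is 7 (table and sequence names are assumptions):

-- SQL Server: the next inserted row gets id 8
DBCC CHECKIDENT ('MyTable', RESEED, 7);
-- PostgreSQL equivalent:
ALTER SEQUENCE mytable_id_seq RESTART WITH 8;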
This happens to me all the time. If you don't have any foreign key constraints then it should be an easy fix.
Remember a DELETE statement will remove the record but keep the identity seed the same. (If I remove id # 5 and #5 was the last record inserted then SQL server still stores the identity seed value at "6").
TRUNCATING the table will reset the identity seed back to its original value.
SET IDENTITY_INSERT [TABLE] ON can also be used to insert the correct data in the correct order if truncating cannot happen.
SELECT *
INTO #tempTable
FROM [TableTryingToFix];

TRUNCATE TABLE [TableTryingToFix];  -- resets the identity seed

INSERT INTO [TableTryingToFix] (COL1, COL2, COL3, ETC)
SELECT COL1, COL2, COL3, ETC
FROM #tempTable
ORDER BY oldTableID;
I am trying to design a sqlite database that will store notes. Each of these notes will have common fields like title, due date, details, priority, and completed.
In addition though, I would like to add data for more specialized notes like price for shopping list items and author/publisher data for books.
I also want to have a few general purpose fields that users can fill with whatever text data they want.
How can I design my database table in this case?
I could just have a field for each piece of data for every note, but that would waste a lot of fields and I'd like to have other options and suggestions.
There are several standard approaches you could use for solving this situation.
You could create separate tables for each kind of note, copying over the common columns in each case. This would be easy, but it would make it difficult to query over all notes.
You could create one large table with many columns and some kind of type field which would let you know which type of note it is (and therefore which subset of columns to use):
CREATE TABLE NOTE (
    ID int PRIMARY KEY,
    NOTE_TYPE int,
    DUEDATE datetime,
    -- ...more common fields...
    price NUMBER NULL,
    author VARCHAR(100) NULL
    -- ...more specific fields...
);
You could break your tables up into an inheritance relationship, something like this:
CREATE TABLE NOTE (
    ID int PRIMARY KEY,
    NOTE_TYPE int,
    DUEDATE datetime
    -- ...more common fields...
);
CREATE TABLE SHOPPINGLISTITEM (
    ID int PRIMARY KEY,
    NOTE_ID int REFERENCES NOTE(ID),
    price NUMBER
    -- ...more shopping list item fields...
);
Option 1 would be easy to implement but would involve lots of mostly redundant table definitions.
Option 2 would be easy to create and easy to write queries on but would be space inefficient
And option 3 would be more space efficient and less redundant but would possibly have slower queries because of all the foreign keys.
This is the typical set of trade-offs for modeling these kinds of relationships in SQL; any of these solutions could be appropriate for your use case, depending on your performance requirements.
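For example, a hedged query against option 3, assuming the tables as defined above:

-- all shopping-list notes with their common fields and price
SELECT n.ID, n.DUEDATE, s.price
FROM NOTE n
JOIN SHOPPINGLISTITEM s ON s.NOTE_ID = n.ID;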
You could create something like a custom_field table. It gets pretty messy once you start to normalize.
So you have your note table with its common fields.
Now add:
dynamic_note_field
id  label
1   publisher
2   color
3   size
dynamic_note_field_data
id  dynamic_note_field_id  value
1   1                      Penguin
2   1                      Marvel
3   2                      Red
Finally, you can relate instances of your data with the fields they use through:
note_dynamic_note_field_data
note_id  dynamic_note_field_data_id
1        1
1        3
2        2
So now we've said: note_id 1 has two additional fields. The first one has a value "Penguin" and represents a publisher. The second one has a value of "Red" and represents a color.
So what's the point of normalizing it this far?
You're not wasting space adding fields to every item (you relate a note with its additional dynamic fields via the m2m table).
You're not storing redundant labels (you may continue to store redundant data, however, as the same publisher is likely to appear many times; this aspect is extremely subjective). If you want rich data about your publishers, you typically want to take the step of turning them into their own entity rather than an ad-hoc string. Be careful when making this leap, because it adds an extra level of hairiness to the db. Evaluate the use case accordingly.
The dynamic_note_field acts as your data definition. If you're interested in answering a question such as "what are the additional fields I've created" this lets you do it easily without searching all of your dynamic_note_field_data. Eventually, you might add extra info to this table such as a type field. I like to create this separation off the bat, but that might be a violation of the YAGNI principle in your case.
Disadvantages:
It's not too bad to search for all notes that have a publisher, where that publisher is "Penguin".
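A hedged sketch of that search, using the tables above:

-- notes whose "publisher" field is 'Penguin'
SELECT m.note_id
FROM note_dynamic_note_field_data m
JOIN dynamic_note_field_data d ON d.id = m.dynamic_note_field_data_id
JOIN dynamic_note_field f ON f.id = d.dynamic_note_field_id
WHERE f.label = 'publisher' AND d.value = 'Penguin';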
What's tricky is something like "find any note with a value of 'Penguin' in any field". You don't know up front which field you're searching. At that point you're better off with a separate search index that's generated alongside your normalized db data, which acts as the point of truth. Again, the nice thing about normalization is that you maintain the data in a very lossless, non-destructive state.
For data that you want to store but that does not have to be searchable, another option is to serialize it to/from JSON and store it in a TEXT column. This gives you arbitrary structure, but you cannot readily query against those values.
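A minimal sketch, assuming SQLite (the column names are assumptions):

CREATE TABLE note (
    id INTEGER PRIMARY KEY,
    title TEXT,
    extra TEXT  -- JSON blob, e.g. '{"publisher":"Penguin","color":"Red"}'
);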
Yet another option is to dump SQLite and go with an object database. I seem to recall there are one or two working for Android. I have not tried any of these, however.
Just create a small table which contains the common fields of all your notes.
Then create a table for each class of special note you have, containing all the extra fields plus a reference to your first table.
For each note you enter, you create a row in your main table (which contains the common fields) and a row in your extra table that contains the extra fields and a reference to the row in your main table.
Then you will just have to do a join in your query.
With this solution:
1) You have a safe design (you can't access fields that are not part of your note).
2) Your db will be optimized.
I am using Microsoft SQL Server and I have a master-detail scenario where I need to store the order of details. So in the Detail table I have ID, MasterID, Position, and some other columns. There is also a unique index on MasterID and Position. It works OK except in one case: when I have some existing details and I change their order, for example when I swap a detail at position 3 with a detail at position 2. When I save the detail at position 2 (which in the database has Position equal to 3), SQL Server protests because of the index's uniqueness constraint.
How to solve this problem in a reasonable way?
This is a classic problem and the answer is simple: if you want to move item 3 to position 2, you must first change the sort column of 2 to a temporary number (e.g. 99). So it goes like this:
Move 2 to 99
Move 3 to 2
Move 99 to 3
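A hedged T-SQL sketch of that swap for one master row (@MasterID is a placeholder; 99 stands for any value outside the normal range):

UPDATE Detail SET Position = 99 WHERE MasterID = @MasterID AND Position = 2;
UPDATE Detail SET Position = 2  WHERE MasterID = @MasterID AND Position = 3;
UPDATE Detail SET Position = 3  WHERE MasterID = @MasterID AND Position = 99;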
You must be careful, though, that your temporary value is never used in normal processing and that you respect multiple threads if applicable.
Update: BTW - one way to deal with the "multiple users may be changing the order" issue is to do what I do: give each user a numerical ID and then add this to the temporary number (my staff ID is actually the Unique Identity field ID from the staff table used to gate logins). So, for example, if your positions will never be negative, you might use -1000 - UserID as your temporary value. Trust me on one thing, though: you do not want to just assume that you'll never have a collision. If you think that and one does occur, it'll be extremely hard to debug!
Update: GUZ points out that his users may have reordered an entire set of line items and submitted them as a batch - it isn't just a switch of two records. You can approach this in one of two ways, then.
First, you could change the existing sort fields of the entire set to a new set of non-colliding values (e.g. -100 - (staffID * maxSetSize) + existingOrderVal) and then go record-by-record and change each record to the new order value.
Or you could essentially treat it like a bubble sort on an array where the orderVal value is the equivalent of your array index. Either this makes perfect sense to you (and is obvious) or you should stick with solution 1 (which is easier in any event).
You could just remove the unique constraint (but keep a non-unique index) on the order column, and ensure uniqueness in your code if necessary.