I am using Django and Django REST Framework for the backend of my Student Management System. The database is going to be PostgreSQL. I want to give a unique student number to each student who gets admission (let's call it STD_ID). I have searched around, and all the answers suggest getting the highest value of the STD_ID column from the database and saving the new record with that value incremented by one. That works fine, but there is a problem.
The issue: let's say the last student entered has STD_ID 100, so the next one will be 101, all good. But what happens when I delete the student with STD_ID 100? Now the highest STD_ID is 99 and the next student will get 100 again. It won't create a conflict in the table, because the database does not remember ever having a student 100, but the physical records (student reports and all that) issued to the previous STD_ID 100 student will still exist, and the new student will also be issued reports and such under the same STD_ID.
So I guess you see the issue here: there are now two students with the same ID. One is no longer in the database, but he used to be, and one day that could cause problems.
What I want is a way to assign STD_ID without this issue. One way I can think of is having a second table which stores the last student ID issued, so that the next student is saved with a unique ID regardless of whether a previous student has been deleted or not.
But I am not sure how this system would handle multiple users. Let's say two users try to save a student record at the same time: they will each get a new unique number, but then one of the requests to save the record fails for some reason, and now there is a gap in the STD_ID sequence.
So, how is this handled in the industry? A gap in STD_ID does not seem like a big issue, but assume we are dealing with invoice numbers and a number is skipped in the sequence: that would be a mess to sort out in the internal records, and auditors would get unnecessarily suspicious.
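A minimal sketch of the kind of counter I am imagining, assuming PostgreSQL and made-up names (std_id_seq, student), would be something like this; as far as I understand, a native sequence hands out each number only once even under concurrency, but a rolled-back insert still consumes its number:

    -- A sketch only: "std_id_seq" and "student" are illustrative names.
    CREATE SEQUENCE std_id_seq START WITH 1;

    -- nextval() hands out each number exactly once, even with concurrent inserts,
    -- and a deleted student's number is never reissued.
    INSERT INTO student (std_id, name)
    VALUES (nextval('std_id_seq'), 'New Student');

    -- Caveat: if this INSERT fails or is rolled back, the number it consumed is
    -- not returned to the sequence, so gaps are still possible.

Is something like that the accepted approach, or is there a standard way to get gapless numbers?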
Any guidance will be much appreciated.
PS. I am still learning all this on my own, and this is, I think, my first question on Stack Overflow, so I apologise for any blunders or mistakes. Any advice on asking questions, and any suggestions, will be highly appreciated. Thank you.
Related
I have a problem that I'm looking for a solution to. If I find one, it could fix two issues I'm facing in a database tool I'm creating for my school's admin office.
The gist of it is: I'm looking for a way in Access 365 (or Access 2019), whether through a form, query, VBA, etc., to prevent duplicate data. Now, of course I know that you can set a field in your table to not allow duplicates. But here is the thing: I wouldn't mind a duplicate in that column in certain cases.
So, let me explain the issue/problem by giving an example.
Imagine that you have two tables. One table is filled with the name of a school bus and the yearly price that the parents have to pay for it. Another table has all the stops that the school buses make.
Now, here comes the problem. I would love to have a system where I can let the user decide the order of those stops.
What I'm currently doing is letting the user fill in a number. In another column, I have a calculated field that concatenates the primary key of the school bus table, a ".", and the order number.
So, that would look like this for the first school bus and the first five stops.
1.1, 1.2, 1.3, 1.4, 1.5
And I have a no-duplicates rule on that column, since that way I can make sure it doesn't block the number 1 for the next bus.
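From what I've read, a multi-field unique index might enforce this directly, without the calculated "1.1"-style column. A sketch in Access/Jet DDL, where BusStops, BusID and StopOrder are made-up names for my stops table and its columns:

    CREATE UNIQUE INDEX idx_bus_stop_order
    ON BusStops (BusID, StopOrder);

That would let the same order number repeat across buses but not within one bus. Would something like that be the right direction?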
Something that might work as well, if that's possible in Access, is that for each school bus you create, you add a column to the stops table to check for duplicates. But how do you then make sure to filter which stops belong to which school bus?
I hope my issue is a bit clear. If it's not, feel free to ask.
I have a similar problem in another part of the tool, with a book fair system.
There is an order table where the order number, the student number, and the creation and pickup dates are stored. It also has a field to lock the form against further edits, for the bookkeeping folks. I also have an order-line form where all the books actually bought are stored. Now, I would love it if there were a way to stop users from adding the same book twice to an order in the order-line table, since I already have an amount column for that. So, in a way, it's almost similar to the problem described above.
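If that kind of index works, I guess the book fair case would be the same idea: a unique index over the order and the book, with OrderLines, OrderID and BookID again being made-up names for my tables and columns:

    CREATE UNIQUE INDEX idx_orderline_book
    ON OrderLines (OrderID, BookID);

That way Access would reject the second row, and the user could raise the amount on the existing line instead.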
I have a database with millions of client contacts. However, a lot of them are duplicates, and may I ask some hero here to advise how to identify those duplicates using Oracle SQL, PL/SQL or Excel?
Following is the data structure:
Client_Header
id integer (Primary Key)
Client_First_Name (varchar2)
Client_Last_Name (varchar2)
Client_Date_Of_Birth (timestamp)
Client_Address
Client_Id (Foreign Key ref Client_header)
Address_Line1 (varchar2)
Address_Line2 (varchar2)
Address_Line3 (varchar2)
Suburb (varchar2)
State (varchar2)
Country (varchar2)
My challenge is that, other than Client_Date_Of_Birth and the key fields, all fields are free text only.
For example, we have a client like the following:
Surname : Jones
First name : David
Client_Date_Of_Birth: 10/05/1975
Address: Unit 10 Floor 1, 20 Railway Parade, St Peter, NSW 2044
However, as those fields are free text, I have a lot of data issues, and the following link (JPEG file only) illustrates some of them:
Sample of data issues
Note:
Other than those issues, sometimes the first name or the last name of the client (but not both) may be missing too.
Sometimes multiple problems can be found within the same record.
Also, sometimes the address may simply be the name of a school, shopping center, etc.
The system does not store any other id that can uniquely identify the client.
I understand it is close to impossible to catch all duplicate records where the client address is a school or shopping center. However, for the other cases, is there any way to identify most of the duplicates?
Thank you for your help!
Not a pretty sight, and I'm afraid I don't have good news for you.
This is a common problem in databases, especially if the data entry personnel are insufficiently trained. One of the main objectives in data entry training is to make the problem well understood and show ways to avoid it. Something to keep in mind in the future.
Unfortunately, there isn't any "magic wand" that will clean your data for you. I'm sorry, but you have before you one of the most tedious tasks in database maintenance. You're going to have to basically remove the duplicates by hand, and the job requires more of an editor than a database administrator.
If you have millions of records, of which perhaps a million are actually duplicates, I would estimate that it will take an expert working full time for at least two years -- and probably longer -- to clean up your problem: to do it in two years would require fixing 2000 records a day, with time off on weekends and two weeks of vacation.
In the end, the only sure way to remove all the duplicates is to compare all of them and remove them one at a time. But there are plenty of tricks you can use to get rid of blocks of them at once. Here are a few that I can think of with your data sample:
Change "Dave" to "David" in both first and last name fields. (Make sure that nobody actually has the last name "Dave.")
Change all instances of "Jones David" to "David Jones." (Make sure that there are no people named "Jones David".)
Change "1/F" to "Floor 1."
The idea is to focus on some of the fields, and in those fields get all of the duplicates to be exact duplicates. Once you have that done, you delete all the records with the target values in the fields, except the one with the primary key of the record that you want to keep (if your table isn't keyed, you'll have to find another way to do it, such as selecting the top record into a new table).
This technique speeds things up for records with a large number of duplicates. Where you have only a few duplicates, it's quicker to just identify them one by one. One way to do this quickly is to go into edit mode on a table, work with a particular field (for example, the postal code field in this case), and put a unique value in that field when you want to mark it for deletion (in this case, perhaps a single zero). Then you can periodically delete all the records with that value in the field.
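As a rough illustration of the "make them exact duplicates, then collapse them" step, an Oracle query along these lines surfaces groups that match on normalised name and date of birth, and the row you would probably keep. The column names come from your question; the TRIM/UPPER normalisation is only a placeholder for whatever cleanup you settle on:

    SELECT UPPER(TRIM(client_first_name))  AS norm_first_name,
           UPPER(TRIM(client_last_name))   AS norm_last_name,
           client_date_of_birth,
           COUNT(*)                        AS dup_count,
           MIN(id)                         AS id_to_keep
      FROM client_header
     GROUP BY UPPER(TRIM(client_first_name)),
              UPPER(TRIM(client_last_name)),
              client_date_of_birth
    HAVING COUNT(*) > 1;

Review each group by hand before deleting anything; a query like this only tells you where to look.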
You'll also need to sort the data in multiple ways to find the duplicates, which it appears you already know.
As for your notes, don't try to identify all the ways that the data is messed up. Once you identify one record as a duplicate of another, you don't care what's wrong with it, you just have to get rid of it. If you have two records and each contains data that you want to keep that the other one is missing, then you'll have to consolidate them and delete one of them. And then go on to the next, and the next, and the next...
Some years ago I had a similar task, and it took me about a year to clean the data.
What I did in short:
Send the addresses to api.addressdoctor.com for validation and to split them into separate fields (it is also possible with maps.googleapis.com).
Use first-name and last-name match lists to check the names (we used namepedia.org). A lot depends on the quality of these lists; they should be based on the country of birth or of the first address. From the results we derived a probability of what kind of name it is (first name / last name / company).
With this improved data you should create some normalized and fuzzy attributes: normalized fields derived from the names and address, e.g. uppercased and reduced to alphanumeric characters only.
At the end I would change the data model a little to improve the data quality by design. I recommend adding pre-title, post-title, middle-name and post-name fields. You should also add the split address fields such as street, street number, zip, location, longitude, latitude, etc.
I would also change the relation between Client_Header and Client_Address by giving the address an extra Address_Id primary key, but this depends on the requirements. And at the end I would add some constraints to prevent duplicate entries.
After all that, the deduplication itself is not hard. Just group all the normalized or fuzzy data together and create a DENSE_RANK (I group by person, household, ...); see the sketch after this list. Make a ranking over the attributes (I used data quality, data fill rate and transaction history for a score value). Finally it is your choice whether you simply delete the duplicates and copy the corresponding data to the surviving client, or virtually connect the data via Client_Id in an extra field.
For insert and update processes you should create PL/SQL functions that check whether a fuzzy last name (or first name) plus a fuzzy address already exists. Split the name and address fields, check them with the address APIs, and match them against the name reference lists. If it is a single-record data entry, show the best matches to the user and let him decide.
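For the grouping and ranking step, a rough Oracle sketch could look like this; the normalised columns (norm_last_name, norm_first_name, norm_zip, norm_street), the quality_score and the client_dedup view are assumed to exist after the cleanup steps above:

    SELECT id,
           DENSE_RANK() OVER (
               ORDER BY norm_last_name, norm_first_name, norm_zip, norm_street
           ) AS person_group,          -- same number = same (fuzzy) person
           ROW_NUMBER() OVER (
               PARTITION BY norm_last_name, norm_first_name, norm_zip, norm_street
               ORDER BY quality_score DESC, id
           ) AS rank_in_group          -- 1 = survivor, >1 = duplicate candidate
      FROM client_dedup;

How you define the score and the grouping columns will matter far more than the query itself.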
I have the following tables:
Student(Student_ID(PK), FName, LName,...)
Subject_details(Subject_code(PK), Subject_Name)
Subjects_Enrolled(Student_ID,Subject_Code)
My question is:
Can I have a table like the one below?
Marks_Details(Student_ID,Subject_Code,Marks_Obtained)
Does that table break any database rule or anything like that? My requirement is to have a table that maps the marks obtained by a student in a particular subject, and this is what I've come up with. Is it correct, or is there another approach to do the same thing?
Please let me know the reason if you're going to downvote.
Thanks.
That data model looks fine. You're simply setting up a many-to-many relationship with some additional information (marks) contained in each record.
One suggestion would be to rename the Subject_Details table to Subject. I think this wording makes the relationship clearer.
Another suggestion would be to rename Subjects_Enrolled to Enrollment and just add the Marks_Obtained column to this table. This would eliminate the need for Marks_Details, since the two tables would basically contain the same information. Why store and maintain this data twice? The idea would be to insert a record into Enrollment when a student enrolls in a course and then to update the Marks_Obtained column at a later date when the course is completed.
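A minimal sketch of what that combined Enrollment table could look like (generic SQL types; only the names from your question are taken as given, and it assumes you also take the Subject rename):

    CREATE TABLE Enrollment (
        Student_ID     INT         NOT NULL REFERENCES Student (Student_ID),
        Subject_Code   VARCHAR(10) NOT NULL REFERENCES Subject (Subject_Code),
        Marks_Obtained INT,                        -- NULL until the course is graded
        PRIMARY KEY (Student_ID, Subject_Code)     -- one row per student per subject
    );

The composite primary key also prevents a student being enrolled in the same subject twice.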
Your idea doesn't "break any rule of database". I'd actually say it's pretty much the standard way of storing this data.
I would recommend giving the Marks_Details table a separate primary key, and maybe a date field. If a student wants to retake the subject, do you want the new data to override the old, or do you want to keep both? It's up to you really.
I am working on a VB.NET (VS 2010, Win XP Pro 2 SP3) Employee Management project. I need to keep track of employee leave attendance and also each piece of equipment assigned to an employee. How can I achieve this using SQLite?
It would be very useful if you could provide examples, as I am completely new to the field of SQL and VB.NET.
I think this can be done with two tables, where one has the primary key while the other has a foreign key, but I am not sure. Also, how many tables will I need for storing data from the Leave and Equipment forms?
I went through other questions but I was unable to figure out a solution for my problem.
(Sorry, I cannot provide images, as this site prevents me from posting images without 10 rep.)
Most problems are only as complex, or as simple, as you make them. Out of habit, nearly all tables end up with a unique ID field. There are exceptions, which I will call "link" tables, e.g. ones that provide connection details between two data tables.
Now, in your scenario:
You would need a "holiday" table, where each row contains the employee's unique ID and either a start/finish date (e.g. if they take half a day, it needs to be visible) or just a year and a value (e.g. in 2011 I booked 2 lots of 35 hours and 1 lot of 4 hours, i.e. I've taken two weeks and half a day).
For the equipment, you would need a data table, since an item can only go to one employee. It depends on whether you're going to use this for booking or not: if it's just like a library (e.g. I currently have a loaner laptop), then you can just have an employee field in the equipment table. If you need a booking system, then you would require link tables and something more complex.
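A rough sketch of the simple (library-style) case in SQLite, with all table and column names being illustrative only:

    CREATE TABLE Employee (
        EmployeeId  INTEGER PRIMARY KEY,
        Name        TEXT NOT NULL
    );

    -- One row per block of leave taken; half days show up naturally as short ranges.
    CREATE TABLE Leave (
        LeaveId     INTEGER PRIMARY KEY,
        EmployeeId  INTEGER NOT NULL REFERENCES Employee (EmployeeId),
        StartDate   TEXT NOT NULL,    -- ISO-8601 strings, e.g. '2011-07-01 09:00'
        EndDate     TEXT NOT NULL
    );

    -- Library-style assignment: the employee field says who currently holds the item.
    CREATE TABLE Equipment (
        EquipmentId INTEGER PRIMARY KEY,
        Description TEXT NOT NULL,
        EmployeeId  INTEGER REFERENCES Employee (EmployeeId)   -- NULL = unassigned
    );

(Remember to turn on PRAGMA foreign_keys = ON; if you want SQLite to actually enforce the REFERENCES clauses.)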
The best way to work out your tables is to try to group your data, then write the items on pieces of paper and see how you, as a human, would do it. After a while you end up able to do it in your head.
I am working on a project where there are several types of users (students and teachers). Currently to store the user's information, two tables are used. The users table stores the information that all users have in common. The teachers table stores information that only teachers have with a foreign key relating it to the users table.
users table
id
name
email
34 other fields
teachers table
id
user_id
subject
17 other fields
In the rest of the database, there are no references to teachers.id. All other tables who need to relate to a user use users.id. Since a user will only have one corresponding entry in the teachers table, should I just move the fields from the teachers table into the users table and leave them blank for users who aren't teachers?
e.g.
users
id
name
email
subject
51 other fields
Is this too many fields for one table? Will this impede performance?
I think this design is fine, assuming that most of the time you only need the user data, and that you know when you need to show the teacher-specific fields.
In addition, you get only teachers just by doing a JOIN, which might come in handy.
Tomorrow you might have another kind of user who is not a teacher, and you'll be glad of the separation.
Edited to add: yes, this is an inheritance pattern, but since he didn't say what language he was using I didn't want to muddy the waters...
In the rest of the database, there are no references to teachers.id. All other tables who need to relate to a user use users.id.
I would expect relating to the teacher_id for classes/sections...
Since a user will only have one corresponding entry in the teachers table, should I just move the fields from the teachers table into the users table and leave them blank for users who aren't teachers?
Are you modelling a system for a high school, or post-secondary? Reason I ask is because in post-secondary, a user can be both a teacher and a student... in numerous subjects.
I would think it fine, provided neither you nor anyone else succumbs to the temptation to reuse 'empty' columns for other purposes.
By this I mean: in your new table there will be columns that are only populated for teachers. Someone may decide that there is another value they need to store for non-teachers and use one of the teacher columns to hold it, because after all it'll never be needed for this non-teacher, and that way we don't need to change the table; pretty soon your code fills up with checks on row types to find out what each column holds.
I've seen this done on several systems (for instance, when loaning a library book: if the loan is a long loan the due date holds the date the book is expected back, but if it's a short loan the due date holds the time it's expected back, and woe betide anyone who doesn't somehow know that).
It's not too many fields for one table (although without any details it does seem kind of suspicious). And worrying about performance at this stage is premature.
You're probably dealing with very few rows and a very small amount of data. Your concerns should be 1) getting the job done, 2) designing it correctly, 3) performance, in that order.
It's really not that big of a deal (at this stage/scale).
I would not stuff all the fields into one table. The student-to-teacher ratio is high, so for 100 teachers there may be 10,000 students with NULLs in those 17 fields.
Usually, a model would look close to this:
In your case, there are no student-specific fields, so you can omit the Student table, and the model would look like this:
Note that for inheritance modeling, the Teacher table has UserID as its key, the same as the User table; contrast that with your example, which has its own Id for the Teacher table and then a separate user_id.
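In DDL terms, the shared-key pattern would look roughly like this (generic SQL; the "other fields" are left as comments, and the sizes are placeholders):

    CREATE TABLE users (
        id    INT PRIMARY KEY,
        name  VARCHAR(100),
        email VARCHAR(255)
        -- ... the other common fields
    );

    CREATE TABLE teachers (
        user_id INT PRIMARY KEY REFERENCES users (id),  -- same value as users.id
        subject VARCHAR(100)
        -- ... the other teacher-only fields
    );

Because teachers.user_id is both the primary key and a foreign key, each user can have at most one teacher row, and a join on it gives you only the teachers.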
It won't really hurt performance, but the other programmers might hurt you if you don't redesign it :) (a 55-field table??)