Handling CustomField inserts - which option is efficient and easier to maintain? (SQL)

We have the following table structure:

User table:

UserId  CompanyId  FullName  Email
1       1          Alex      alex@alex.com
2       1          Sam       sam@sam.com
3       2          Rohit     rohit@rohit.com

CustomField table:

CustomFieldId  CompanyId  Name         Type
1              1          DOB          Datetime
2              1          CompanySize  Number
3              2          LandingPage  Text

CustomFieldValue table:

UserId  CustomFieldId  DatetimeValue  NumberValue  TextValue
1       1              01-01-2020
1       2                             10
1       3                                          Home
2       1
2       2                             20
2       3                                          Product
Please consider the following facts:

- There are millions of users in a particular CompanyId.
- When displaying a particular user in the UI, we need to show all the custom fields that an end customer can fill in.

How should we handle the CustomFieldValue table in this case? We are considering the following options.

Option 1: When a new CustomField row is created for a particular CompanyId, have an AFTER INSERT trigger create all corresponding rows in the CustomFieldValue table for all users.

- This would have an up-front cost of creating one row per user for each new custom field in the CustomFieldValue table. (It may also lock up the table, so users of the application would have to wait until all the inserts are done.)
- The same issue applies to deleting all CustomFieldValue rows when a CustomField row is deleted from a company.
- But it is easier for UI and backend developers, as they don't need to worry about a CustomFieldValue entry being missing for a custom field that has been created for a company.

Option 2: Don't create CustomFieldValue rows when a CustomField is added to the company. Create the CustomFieldValue row only when a user fills in the relevant input field in the UI.

- This has negligible insert cost, and users would not have to wait for bulk inserts or deletes in the CustomFieldValue table to complete.
- The downside is that developers must make sure the relevant custom fields are displayed in the frontend even though no corresponding records exist yet in the CustomFieldValue table.
- On each custom-field update by the end user, the developers would have to first check whether a corresponding CustomFieldValue row exists; if so, store the updated value, and if not, create the row.
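The check-then-insert step of option 2 can be collapsed into a single statement. A minimal sketch, assuming SQL Server 2008 or later (for MERGE) and a unique key on (UserId, CustomFieldId); @UserId, @CustomFieldId and @TextValue are hypothetical parameters supplied by the backend:

```sql
-- Upsert one custom field value for one user in a single statement.
MERGE CustomFieldValue AS target
USING (SELECT @UserId AS UserId, @CustomFieldId AS CustomFieldId) AS source
   ON target.UserId = source.UserId
  AND target.CustomFieldId = source.CustomFieldId
WHEN MATCHED THEN
    UPDATE SET TextValue = @TextValue
WHEN NOT MATCHED THEN
    INSERT (UserId, CustomFieldId, TextValue)
    VALUES (source.UserId, source.CustomFieldId, @TextValue);
```

The same pattern works for DatetimeValue and NumberValue; pick the column that matches the field's Type.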
Kindly suggest a solution which is efficient and easier to maintain.

Related

How do I check for an existing record and avoid redundant data?

I have two tables, situated on different servers and belonging to different schemas.
Table 1: SALARY_DETAIL_REPORT_[monthYearSuffix] - an exact replica of Table 2.
Table 2: XXMPCD_SALARY_DETAIL_TABLE - contains employees' salary-related data. Each employee has multiple pay codes, like HOUSE RENT ALLOW, DEARNESS ALLOW, NPS-Company Contri(Earning), BASIC PAY, GRADE PAY, NET EARNING, GROSS EARNING, GROSS DEDUCTION.
So an employee code is repeated multiple times, and hence we are not able to maintain a primary key or unique index.
Suppose 1000 records are pushed into TABLE 2 which I need to copy exactly into my TABLE 1. This is handled by my Spring service class: a GUI is available in which we simply click a migrate button, and a service running in the background fetches the data from TABLE 2 and inserts it into TABLE 1.
There are two columns in TABLE 2:
1. PICK_DATE
2. IS_DATA_PICKED
If both of these are NULL, it simply means the data has not been migrated into our TABLE 1. We update these columns after a successful migration as an acknowledgement, so that the next time the data will not be available for migration.
PROBLEM
Now suppose we migrated the 1000 records from TABLE 2 to TABLE 1.
What I did: I went back to SQL Developer, chose 3 random records, and set their PICK_DATE & IS_DATA_PICKED to NULL.
Then I migrated again, and those 3 records were inserted a second time.
That means 3 duplicated records: the 1000 records become 1003.
Now what I want to check:
If the same data already exists, the record should be updated, not inserted - in other words, overwritten.
TABLE 1: SALARY_DETAIL_REPORT_092018
SALARY_REPORT_ID
EMP_NAME
EMP_CODE
PAY_CODE
PAY_CODE_NAME
AMOUNT
PAY_MODE
PAY_CODE_DESC
YYYYMM
REMARK
EMP_ID
PRAN_NUMBER
PF_NUMBER
PRAN_NO
ATTOFF_EMPCODE
REFERENCE_ID
TABLE 2: XXMPCD_SALARY_DETAIL_TABLE
EMP_NAME
EMP_CODE
PAY_CODE
AMOUNT
PAY_MODE
PAY_CODE_NAME
YYYYMM
REMARK
PUSH_DATE
PICK_DATE
IS_DATA_PICKED
ERROR_MESG
REFERENCE_ID
PRAYAS_ERP_ORG_ID
ERP_ORG_ID
PF_NUM
PRAN_NO
VERIFIED_BY
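The overwrite behaviour asked for above can be pushed into the database with a MERGE. A sketch, assuming an Oracle back-end (the question mentions SQL Developer) and assuming REFERENCE_ID identifies the same logical record in both tables; if the real business key is something like (EMP_CODE, PAY_CODE, YYYYMM), use those columns in the ON clause instead:

```sql
MERGE INTO SALARY_DETAIL_REPORT_092018 t
USING XXMPCD_SALARY_DETAIL_TABLE s
   ON (t.REFERENCE_ID = s.REFERENCE_ID)
WHEN MATCHED THEN UPDATE SET
     t.EMP_NAME = s.EMP_NAME,
     t.AMOUNT   = s.AMOUNT,
     t.REMARK   = s.REMARK  -- ...and so on for the remaining shared columns
WHEN NOT MATCHED THEN INSERT
     (EMP_NAME, EMP_CODE, PAY_CODE, PAY_CODE_NAME, AMOUNT,
      PAY_MODE, YYYYMM, REMARK, REFERENCE_ID)
     VALUES
     (s.EMP_NAME, s.EMP_CODE, s.PAY_CODE, s.PAY_CODE_NAME, s.AMOUNT,
      s.PAY_MODE, s.YYYYMM, s.REMARK, s.REFERENCE_ID);
```

Since the tables live on different servers, the USING clause would in practice read through a database link (e.g. XXMPCD_SALARY_DETAIL_TABLE@some_db_link, a hypothetical link name), or the Spring service can stage the fetched rows locally and then run the MERGE.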

Automatic ID assignment

I'm a student and I'm having problems using automatic increment, because when I delete a row it keeps incrementing. To explain: I want to increment the id automatically, so:

id  name     age
1   michael  18
2   katy     17
3   jack     20

Now I delete row 3, and when I click the "new" button it goes to id 4 instead of id 3.
I've tried Rows.Count and refreshing the textbox, but nothing works.
Some additional info:

ds = dataset
maxrows = ds.Tables("virtualtable").Rows.Count
idcliTextBox.Text = maxrows

How do I make it set the id to the real last row?
This is the correct behavior, and it is not a problem. Auto-increment columns in a database are normally never reset to fill the holes left by deleted records.
The auto-increment column is usually used as the primary key to uniquely identify a single record in your table.
Suppose that your table represents students, where the ID value is used as a foreign key in another table, examresults, in which you store your students' exam results. Your student Katy (2) has two records in the examresults table, for her grades in math and geography.
If you delete the record with ID=2 from students along with its related records in examresults, then changing Jack's record from 3 to 2 means you would also have to change all of Jack's related examresults records. That is very impractical, and useless if you think about it.
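The relationship in that example can be sketched as a schema (table and column names are illustrative, not from the original post):

```sql
CREATE TABLE students (
    id   INT PRIMARY KEY,   -- auto-increment in practice; values are never reused
    name VARCHAR(50),
    age  INT
);

CREATE TABLE examresults (
    id         INT PRIMARY KEY,
    student_id INT REFERENCES students(id),  -- foreign key to students
    subject    VARCHAR(50),
    grade      INT
);

-- Renumbering students.id after a delete would detach every examresults
-- row that still stores the old number, which is why databases don't do it.
```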

SQL identity for a composite primary key

First of all, sorry for my English. I've tried to find an answer to this question, but I don't really know how to express myself.
If it's a duplicate, please close it and let me know the answer.
I have a table to store the data for each item in a customer's shopping cart. The structure is:

Table - tmpShoppingCartItem
tmpShoppingCart_ID  int  FK from tmpShoppingCart
ID                  int  PK, Identity(1,1)
StoreSKU_ID         int  FK from StoreSKU
Quantity            int
Enabled             bit

The ID column is an identity with seed = 1, but when I start inserting data it looks like:
tmpShoppingCart_ID  ID  ....
1                   1
1                   2
1                   3
1                   4

Up to here it's OK, but when a new shopping cart starts, it looks like:

tmpShoppingCart_ID  ID  ....
2                   5
2                   6
3                   7
4                   8

The ID column just keeps counting from where it left off.
I want to know if (and how) I can reset the counter when tmpShoppingCart_ID changes, like:

tmpShoppingCart_ID  ID  ....
1                   1
1                   2
1                   3
1                   4
2                   1
2                   2
3                   1
3                   2
3                   3
3                   4
3                   5
4                   1

Thanks for your time.
An identity column used as a primary key should be sequential and never repeat within a table. First, you would need to make (tmpShoppingCart_ID, ID) the composite primary key to prevent duplication. Second, you would not be able to use the IDENTITY property; instead you would need a counter in your application for each row inserted for a given tmpShoppingCart_ID.
In my opinion, keep ID as an identity column like it currently is, and add a second column called LineID that you increment per record.
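Alternatively, the per-cart line number doesn't have to be stored at all - it can be computed when the cart is read. A sketch using a window function (available from SQL Server 2005; names taken from the question):

```sql
SELECT tmpShoppingCart_ID,
       ROW_NUMBER() OVER (PARTITION BY tmpShoppingCart_ID
                          ORDER BY ID) AS LineID,  -- restarts at 1 per cart
       StoreSKU_ID,
       Quantity,
       Enabled
FROM   tmpShoppingCartItem;
```

This keeps ID globally unique (so foreign keys keep working) while presenting each cart's items numbered 1, 2, 3, ...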
You cannot do that with an auto-incrementing field. If you want it, you will have to write a trigger that populates the field from a process you write, and that process must handle multi-row inserts and race conditions. Why do you want to do this? Are you planning to let customers reorder the shopping cart IDs? They never need to see the ID, so it should not matter whether the first one is 1 or 7.
I do want to point out that while DBCC CHECKIDENT ([Table_Name], RESEED, 0) might technically work, it requires the user to be in the sysadmin, db_owner, or db_ddladmin role. None of these roles should ever be assigned to the login that an application uses for data entry.

Locking records in back-end using relationship in frontend

I am trying to figure out how to protect records in an Access back-end with a relationship in the Access front-end.
I have the following tables:

tblSit (linked from back-end)
tblSitID (AutoNumber)  ProductID  LocationID
1                      1          2
2                      5          1
3                      8          3

temp_tblToMove (table in front-end)
temp_tblToMoveID (AutoNumber)  tblSitID
1                              1
2                              3

What I want to do is move a product from one location to another. The idea is: I select the record in tblSit that stores the location for each product, then insert that ID into the temp_tblToMove local table. A form then deletes the selected records from tblSit and re-inserts them with a changed LocationID.
I want record locking, so that if two users try to move the same product, they get an error when trying to delete the record from tblSit.
If temp_tblToMove were in the back-end, the relationship would prevent the record from being deleted. But I'd like to keep temp_tblToMove in the front-end, and there the relationship doesn't offer "Enforce Referential Integrity".
Thanks for the help.
PS: sorry if I didn't do a good job of explaining what I want.
Any reason you can't just update the existing records?

UPDATE tblSit SET LocationID = NewLocationID WHERE tblSitID = WhicheverID;
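If the concern is two users moving the same product at once, the old location can be included in the WHERE clause, so the second update affects zero rows instead of silently overwriting (a sketch; NewLocationID and OldLocationID are hypothetical parameters):

```sql
UPDATE tblSit
SET    LocationID = NewLocationID
WHERE  tblSitID   = WhicheverID
  AND  LocationID = OldLocationID;  -- 0 rows affected if someone already moved it
```

Check the number of affected rows afterwards: if it is 0, another user got there first, and you can show an error instead of deleting and re-inserting.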

How to merge the data of two identical databases into one?

Two customers are going to merge. They both use my application, each with their own database. In a few weeks they merge (they become one organisation), so they want all the data in one database.
The two database structures are identical; the problem is the data. For example, here are the tables Locations and Persons (just two tables out of 50):

Database 1:

Locations:
Id  Name        Adress  etc....
1   Location 1
2   Location 2

Persons:
Id  LocationId  Name   etc...
1   1           Alex
2   1           Peter
3   2           Lisa

Database 2:

Locations:
Id  Name        Adress  etc....
1   Location A
2   Location B

Persons:
Id  LocationId  Name    etc...
1   1           Mark
2   2           Ashley
3   1           Ben

We see that a person is related to a location (column LocationId). Note that I have more tables referring to the Locations and Persons tables.
Each database contains its own locations and persons, but the Ids can be the same. So when I import everything into DB2, the locations of DB1 should be inserted into DB2 with Ids 3 and 4, the persons from DB1 should get new Ids 4, 5 and 6, and the LocationId values in the Persons table have to be remapped to the new location Ids 3 and 4.
My solution is to write a query that handles everything, but I don't know where to begin.
What is the best way (in a query) to renumber the Id fields, cascading the change to the child tables? The databases do not contain referential integrity; foreign keys are NOT defined, and creating foreign keys with cascading is not an option.
I'm using SQL Server 2005.
You say that both customers are using your application, so I assume it's some kind of "shrink-wrap" software that is used by more customers than just these two, correct?
If so, adding special columns to the tables, or anything like that, will probably cause pain in the future: either you have to maintain a special version for these two customers that can deal with the additional columns, or you have to introduce the columns into your main codebase, which means all your other customers get them as well.
I can think of an easier way to do this without changing any of your tables or adding any columns.
For this to work, you need to find the largest ID that exists anywhere in both databases together (no matter in which table or in which database it is). This may require some copy & paste to produce a lot of queries that look like this:
select max(id) as maxlocationid from locations
select max(id) as maxpersonid from persons
-- and so on... (one query for each table)
When you have found the largest ID after running the queries in both databases, take a number larger than that ID and add it to all IDs in all tables of the second database. It's very important that this number is larger than the largest ID that already exists in both databases!
It's a bit difficult to explain, so here's an example. Let's say that the largest ID in any table in either database is 8000. Then you run some SQL that adds 10000 to every ID in every table of the second database:
update Locations set Id = Id + 10000
update Persons set Id = Id + 10000, LocationId = LocationId + 10000
-- and so on, for each table
The queries are relatively simple, but this is the bulk of the work, because you have to build a query like this manually for each table in the database, with the correct names of all the ID columns.
After running the query on the second database, the example data from your question will look like this:
Database 1: (exactly like before)

Locations:
Id  Name        Adress  etc....
1   Location 1
2   Location 2

Persons:
Id  LocationId  Name   etc...
1   1           Alex
2   1           Peter
3   2           Lisa

Database 2:

Locations:
Id     Name        Adress  etc....
10001  Location A
10002  Location B

Persons:
Id     LocationId  Name    etc...
10001  10001       Mark
10002  10002       Ashley
10003  10001       Ben
And that's it! Now you can import the data from one database into the other, without getting any primary key violations at all.
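With 50 tables, writing those UPDATE statements by hand is tedious. On SQL Server 2005 they can be generated from the metadata instead; this sketch assumes the (hypothetical) convention that every key column's name ends in "Id":

```sql
-- Produces one UPDATE statement per ID-like integer column.
-- Review the generated statements, then run them against database 2.
SELECT 'UPDATE [' + TABLE_NAME + '] SET [' + COLUMN_NAME + '] = ['
       + COLUMN_NAME + '] + 10000;'
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  COLUMN_NAME LIKE '%Id'
  AND  DATA_TYPE IN ('int', 'bigint')
ORDER  BY TABLE_NAME, COLUMN_NAME;
```

A similar generated query taking MAX() of each of those columns can automate finding the largest existing ID as well.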
If this were my problem, I would probably add some columns to the tables in the database I was going to keep, and use them to store the PK values from the other database. Then I would insert the records from the other tables; for the ones with foreign keys, I would use a known placeholder value. Finally, I would update the foreign keys as required and drop the columns I added.