ASP.NET: best practice for inserting data into normalized SQL tables

Hi, I have a very common situation: inserting a client order into my SQL database. I have created order header and order detail tables in the DB, and I am trying to insert a single header record and multiple detail lines corresponding to that header record (the PK/FK constraint is on the headerID).
Currently, I insert my header record, then query the DB for the last created headerID, and use that ID to insert my detail lines, for example by looping through a grid.
I know this is a clumsy way to insert records into normalized tables, and there is an extra SQL call that seems unnecessary. Would anyone know of a better solution to this problem?

I found that using Entity Framework solves this problem, without any extra effort to look up the last inserted headerID and then insert it into the detail table. If the relationships are well defined in the Entity Framework model, the framework takes care of this process. For example:
using (var rep = new Repo())
{
    // build the header and its detail lines in memory first
    Header hd = new Header();
    hd.name = "some name";

    Detail dt = new Detail();
    dt.itemname = "some item name";

    // attaching the detail to the header is what lets EF resolve the FK
    hd.Details.Add(dt);
    rep.Headers.Add(hd);
    rep.SaveChanges();
}
So as long as all the operations are done before rep.SaveChanges(), the master/detail data will be inserted correctly, with EF filling in the headerID for you.
If, however, EF hits an integrity error or any other error, no data is inserted at all; the whole save runs as a single transaction.
So this is a nice, clean way to insert master/detail data into a SQL database.
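If you are not using EF, you can get the same single-round-trip, all-or-nothing behaviour in plain T-SQL by sending both inserts in one batch and capturing the new key with SCOPE_IDENTITY(). A minimal sketch, with hypothetical OrderHeader/OrderDetail tables and an identity headerID column (the names are illustrative, not from the question):
BEGIN TRANSACTION;
-- insert the header and capture its identity value in the same batch
INSERT INTO OrderHeader (name) VALUES (@name);
DECLARE @headerID int = SCOPE_IDENTITY();
-- insert each detail line using the captured headerID
INSERT INTO OrderDetail (headerID, itemname) VALUES (@headerID, @item1);
INSERT INTO OrderDetail (headerID, itemname) VALUES (@headerID, @item2);
COMMIT TRANSACTION;
Sent as one parameterized command from ASP.NET, this avoids the extra round trip and ensures the header and details commit or fail together.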

How to insert customer and order info at same time with matching keys

[Screenshots: MyForm1, MyCode (with error), MyForm2]
Let's say I have two tables for simplicity (the real database has more tables, as the pictures imply); both primary keys are auto-increment and do not allow nulls, and the foreign key allows nulls. I need to understand a fundamental concept:
One table is called customers, with the following columns:
customer_id (pk)
name
phone
The second table is called orders:
order_id (pk)
customer_id (fk)
product
quantity
There is an Order form for a new customer, where you have to enter the customer info (name, phone) and the order info (product, quantity). I made text boxes for these on the form and a button.
How do I insert this info into the correct tables while ensuring the keys match after it's done? I want the form to add the customer info and tie orders to that customer.
I have tried table adapters, watched YouTube, written SQL queries, and even typed the data manually into the Server Explorer. I can retrieve data easily, but inserting confuses me because of the keys and auto-increment. I think I am missing a fundamental concept; tutorials almost never seem to show inserting data into multiple tables. Do I need a trigger or SCOPE_IDENTITY in my table adapter query?
Your dataset represents your database tables; you fill the dataset's tables with data and then call Update in the right order (you have to insert into Orders and Products before you can insert into OrderItems).
If you're doing this in a modern version of VS, your dataset will also generate a TableAdapterManager: a device that knows about all the table adapters in the set and the order they should be used in. Doing the inserts is then as simple as calling tableAdapterManager.UpdateAll(ds), where tableAdapterManager is an instance of the TableAdapterManager class.
There is a huge amount of information needed to really answer your question, and I can't see your code, so I can't tell how you've arranged things thus far. In essence, either your Orders and Products tables need to already have data in them (downloaded from the db) or rows need to be inserted as part of the order creation. Logically, I would assume the products are known. I also don't know whether you're using VB or C#; I'll write in VB, as a C# programmer is probably less likely to complain about having to mentally translate the VB than vice versa:
Dim prodToAdd = ds.Products.FindByProductId(1234) 'typed datasets generate FindBy<pk> methods; gets the product row with id 1234

'create an order
Dim order = ds.Orders.NewOrdersRow()
order.Address = "Whatever" 'set properties of the order
ds.Orders.AddOrdersRow(order) 'put the new row in the local datatable; nothing is saved to the db yet

'put a product on the order
ds.OrderItems.AddOrderItemsRow(prodToAdd, order, ...) 'directly add an OrderItems row linking the Product and the Order

'save to db
tableAdapterManager.UpdateAll(ds)
If you don't have a manager, you must insert the rows into the db in the right order yourself:
ordersTableAdapter.Update(ds.Orders) 'yes, the method is called Update, but it runs an INSERT sql for rows in an Added rowstate
orderItemsTableAdapter.Update(ds.OrderItems)

Creating 'custom' tables in PostgreSQL

I've hit a bit of a roadblock in a current project I'm working on. I don't have a lot of web developers in my office, and as a matter of fact the only other web dev just went on vacation. Anyway, I was wondering if anyone could help me with structuring two of my Postgres tables.
The user needs to be able to create custom data tables, one for each specific program (a parent record). The form I've set up for these tables allows you to add or remove inputs based on how many fields you need, and then specify the name, data_type, etc.
My initial idea was to create a new table in the DB each time a user created one of these custom tables. The other web dev, who has built something similar, said it would be better to create a fields table that stores each custom field's information, and then a data table that stores every cell of data, tied to a field id.
I understand having the fields table so that I can retrieve just the field information and build my front-end tables and edit forms dynamically, but I'm a little confused about how to get the data into the table. I'm used to having an array of objects, with each object representing an entire row. With this method it stores each cell of data rather than a row of data, and I don't know the best way to select and organize it on the back end.
Data for these tables will be imported from CSV files formatted to the custom table structure. I got a suggestion on Reddit to use JSON to store each row's data, but I'm wondering how I'd be able to do sorting and filtering with that. My current table structure is listed below, from before I got the JSON suggestion. I'm guessing if I went that route I would remove the fieldId column and instead use the field name as the JSON key, storing that field's data with it.
fields
id -- name -- program_id -- type -- required -- position -- createdAt -- updatedAt
data
id -- fieldId -- data -- createdAt -- updatedAt
So I guess my question is: does this sound like the right way to structure these tables for my needs, and if so, can I still perform sorting and filtering on it?
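For what it's worth: if you do go the JSON route in Postgres, a jsonb column can still be sorted and filtered using the ->> operator and expression indexes. A rough sketch, assuming a hypothetical rows table with a program_id column and a row_data jsonb column (one JSON object per imported CSV row):
-- fetch one program's rows, filtering and sorting on JSON keys
SELECT id, row_data
FROM rows
WHERE program_id = 42
  AND row_data->>'status' = 'active'      -- filter on a text field
ORDER BY (row_data->>'price')::numeric;   -- ->> returns text, so cast to sort numerically

-- an expression index keeps that filter fast
CREATE INDEX rows_status_idx ON rows ((row_data->>'status'));
The cast is needed because ->> always returns text; numeric and date fields must be cast before comparing or sorting.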

Best practice for many to many data mapping

I am looking for the best practice for many-to-many mapping in a database.
For example, I have two tables, A and B, to be mapped. I create a third table, AB, to store the mapping.
On the UI I have several A records that can each be mapped (or not) to multiple B records. I see two solutions for now:
1 - On every update, for every record from A, delete all of its mapped data and insert the new mapping.
Advantage: I store only mapped data.
Disadvantage: I need to run delete and insert statements every time.
2 - Add a new bit column named isMapped to the AB table, and store a row for every record from A against every record from B. Saving a mapping then uses only an update statement.
Advantage: No need to delete and insert every time.
Disadvantage: Need to store unnecessary records.
Can you offer me the best solution?
Thanks
Between the two options you have listed I would go with option 1; isMapped is not meaningful, and if two records are not mapped the row should not exist in the first place.
You still have one more option, though: diff the new mapping against AB (a sketch follows below):
DELETE FROM AB where not in the new map
INSERT INTO AB FROM (new map) where NOT already in AB
If there are a lot of mappings I would delete and insert from the new mapping like this; otherwise I would just delete everything and then insert, as you are suggesting.
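A minimal T-SQL sketch of that diff for one A record (@a_id), assuming AB(a_id, b_id) and a hypothetical staging table NewMap(b_id) holding the desired B ids:
-- remove mappings for this record that are no longer wanted
DELETE FROM AB
WHERE AB.a_id = @a_id
  AND NOT EXISTS (SELECT 1 FROM NewMap nm WHERE nm.b_id = AB.b_id);

-- add mappings that do not exist yet
INSERT INTO AB (a_id, b_id)
SELECT @a_id, nm.b_id
FROM NewMap nm
WHERE NOT EXISTS (SELECT 1 FROM AB
                  WHERE AB.a_id = @a_id AND AB.b_id = nm.b_id);
Only the rows that actually changed are touched, which matters once the mapping table gets large.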
I'd say any time you see the second bullet point in your #2 scenario,
"Need to store unnecessary records",
that's your red flag not to use that scenario.
Your data is modeled correctly in scenario 1, i.e. mappings exist in the mapping table when there is a mapping between records in A and B, and do not exist when there is not.
Also, in many databases the underlying mechanics of an update statement amount to a delete followed by an insert, so you are not really saving the database any work by issuing one over the other.
Lastly, speaking of saving the database work: don't try to do it at this stage. This is what databases are designed for. :)
Implementing your data model correctly, as you do in scenario 1, is the best optimization you can make.
Once you have the basic normalized structure in place and some test data, you can start testing performance and refactoring if necessary: adding indexes, changing data structures, etc.

SQL Server Database - Hidden Fields?

I'm implementing CRUD in my Silverlight application; however, I don't want to implement the Delete functionality in the traditional way. Instead, I'd like to mark the data as hidden inside the database.
Does anyone know of a way of doing this with a SQL Server database?
Help greatly appreciated.
You can add another column, deleted, to the table, holding the value 0 or 1, and display only those records with deleted = 0.
ALTER TABLE TheTable ADD deleted BIT NOT NULL DEFAULT 0
You can also create a view which returns only the undeleted rows.
CREATE VIEW undeleted AS SELECT * FROM TheTable WHERE deleted = 0
And your delete command would look like this:
UPDATE TheTable SET deleted = 1 WHERE id = ...
Extending Lukasz's idea, a datetime column is useful too:
NULL = current
Value = when soft-deleted
This adds simple versioning that a bit column cannot, which may work better.
In most situations I would rather archive the deleted rows to an archive table with a delete trigger. That way I can also capture who deleted each row, and the deleted rows don't impact the performance of my main table. You can then create a view that unions both tables together when you want to include the deleted ones (sketched below).
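A rough sketch of that approach, assuming a hypothetical TheTable(id, name); the table and column names are illustrative only:
-- archive table mirrors the live table plus audit columns
CREATE TABLE TheTableArchive (
    id INT NOT NULL,
    name NVARCHAR(100) NOT NULL,
    deleted_at DATETIME NOT NULL,
    deleted_by SYSNAME NOT NULL
);
GO
CREATE TRIGGER trg_TheTable_Archive ON TheTable
FOR DELETE
AS
BEGIN
    -- 'deleted' is the trigger pseudo-table holding the rows being removed
    INSERT INTO TheTableArchive (id, name, deleted_at, deleted_by)
    SELECT d.id, d.name, GETDATE(), SUSER_SNAME()
    FROM deleted d;
END
GO
-- union view for when you want live and deleted rows together
CREATE VIEW TheTableAll AS
SELECT id, name FROM TheTable
UNION ALL
SELECT id, name FROM TheTableArchive;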
You could do as Lukasz Lysik suggests, and have a field that serves as a flag for "deleted" rows, filtering them out when you don't want them showing up. I've used that in a number of applications.
An alternative suggestion would be to add an extra status assignment, if a status code already exists. For example, in a class attendance app we use internally, an attendance record could be "Imported", "Registered", "Completed", "Incomplete", etc.* We added a "Deleted" option for times when there are unintentional duplicates. That way we have a record, and we're not just throwing a new column at the problem.
*That is the display name for a numeric code used behind the scenes. Just clarifying. :)
Solution with triggers
If you are comfortable with DB triggers, then you might consider the following (a sketch follows the list):
add DeletedAt and DeletedBy columns to your tables
create a view for each table (e.g. for the Customer table, a CustomerView view) that filters out rows whose DeletedAt is not null (gbn's idea with the date column)
perform all your CRUD operations as usual, but against CustomerView rather than the Customer table
add an INSTEAD OF DELETE trigger that marks the row as deleted instead of physically deleting it
you may want to go a bit further and ensure that all FK references to the row are also "logically" deleted, in order to keep logical referential integrity
If you choose this pattern, I would probably name the tables differently, like TCustomer, and the views just Customer, for clarity of client code.
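A minimal sketch of the pattern, assuming a hypothetical TCustomer(CustomerId, Name) table; all names here are illustrative:
-- the base table carries the soft-delete columns
CREATE TABLE TCustomer (
    CustomerId INT IDENTITY PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL,
    DeletedAt DATETIME NULL,
    DeletedBy SYSNAME NULL
);
GO
-- client code talks to the view, which hides deleted rows
CREATE VIEW Customer AS
SELECT CustomerId, Name FROM TCustomer WHERE DeletedAt IS NULL;
GO
-- deleting through the view just stamps the base row
CREATE TRIGGER trg_Customer_Delete ON Customer
INSTEAD OF DELETE
AS
BEGIN
    UPDATE t
    SET DeletedAt = GETDATE(), DeletedBy = SUSER_SNAME()
    FROM TCustomer t
    JOIN deleted d ON d.CustomerId = t.CustomerId;
END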
Be careful with this kind of implementation, because soft deletes break referential integrity; you have to enforce integrity in your entities with custom logic.

Is there any way to fake an ID column in NHibernate?

Say I'm mapping a simple object to a table that contains duplicate records, and I want to allow those duplicates in my code. I don't need to update/insert/delete on this table, only display the records.
Is there a way that I can put a fake (generated) ID column in my mapping file to trick NHibernate into thinking the rows are unique? Creating a composite key won't work because there could be duplicates across all of the columns.
If this isn't possible, what is the best way to get around this issue?
Thanks!
Edit: the query approach seemed to be the way to go.
The NHibernate mapping makes the assumption that you're going to want to save changes, hence the requirement for an ID of some kind.
If you're allowed to modify the table, you could add an identity column (SQL Server naming - your database may differ) to autogenerate unique Ids - existing code should be unaffected.
If you're allowed to add to the database, but not to the table, you could try defining a view that includes a synthetic (calculated) RowNumber column, and use that as the data source to load from. Depending on your database vendor (and the product's handling of views and indexes), this may face some performance issues.
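Such a view might look like this in SQL Server; DupTable and its columns are hypothetical stand-ins for the real table:
CREATE VIEW DupTableWithId AS
SELECT ROW_NUMBER() OVER (ORDER BY SomeColumn) AS RowId, -- synthetic key
       SomeColumn, OtherColumn
FROM DupTable;
Note that RowId is only stable within a single query (ties in the ORDER BY can swap between executions), which is acceptable for display-only mapping.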
The other alternative, which I've not tried, would be to map your class to a SQL query instead of a table. IIRC, NHibernate supports named SQL queries in the mapping file, and you can use one of those as the "data source" instead of a table or view.
If your data is read-only, one simple way we found was to wrap the query in a view, build the entity off the view, and add a NEWID() column. The result is something like:
SELECT NEWID() AS ID, * FROM YourTable
ID then becomes your unique primary key. As stated above, this is only useful for read-only views, as the ID has no relevance after the query runs.