VB.NET LINQ to SQL Delete All Records

I am having problems deleting all records in a table with VB.NET. I am using this code to delete all records in the Contacts table:
For Each contact In database.Contacts
database.Contacts.DeleteOnSubmit(contact)
Next
But I get this error
Can't perform Create, Update or Delete operations on 'Table(Contact)' because it has no primary key.
Anyone have any suggestions?

You should probably have a primary key on your table. This will make working with your table much easier. If you don't have a primary key, try finding a suitable candidate key to set as the primary key. If you have no suitable columns, consider adding an auto-incrementing surrogate key (called an identity in SQL Server). If you already have a primary key, make sure your LINQ to SQL classes are up to date.
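For example, a minimal T-SQL sketch of adding an identity surrogate key, assuming SQL Server and that the table really is called Contacts (the column and constraint names here are made up):
ALTER TABLE dbo.Contacts
    ADD ContactID INT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Contacts PRIMARY KEY;
After that, refresh or re-add the Contacts entity in the .dbml designer so the generated class picks up the new key.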
However, if you just want to delete all rows, you may find the DeleteOnSubmit loop too slow. An alternative is to execute SQL directly using DataContext.ExecuteCommand:
database.ExecuteCommand("DELETE FROM Contacts")
This doesn't require that the table has a primary key. Note that this will irretrievably delete all rows in your table, so be careful. Even faster is the TRUNCATE command, but note that this requires greater privileges:
database.ExecuteCommand("TRUNCATE TABLE Contacts");
Again, be careful with this command. It will delete all rows from your table.

Related

SQL Server - Updating a table that has foreign keys, using DELETE/INSERT instead of UPDATE

I have a main table with many associated tables linked to it using an "id" foreign key.
I need to update a row in this main table.
Instead of updating all the fields of the row one by one, it would be easier for me to simply delete the whole row and recreate it with the new values (keeping the original primary key!).
Is there a way, inside a transaction, to delete such a row despite its foreign key constraints, given that the row is recreated with the same primary key before the transaction is actually committed?
I tried it, and it doesn't seem to work...
Is there something I can do to achieve that other than dropping the constraints before my DELETE operation? Some kind of lock?
No.
Without dropping/disabling the constraint, SQL Server will enforce the relationship and prevent you from deleting the referenced row.
It is possible to disable the constraint, but when you re-enable it you incur the overhead of SQL Server verifying EVERY reference to that key before it considers the relationship trusted again.
You are much better off taking the time to develop a separate update/upsert routine than incurring that additional processing overhead every time you need to change a record.
You could alter the foreign key to use a CASCADE DELETE, but that has its own overhead and baggage.
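To illustrate the disable/re-enable cycle described above, here is a rough T-SQL sketch (the table and constraint names are hypothetical):
-- Turn the foreign key off; nothing is checked while it is disabled
ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT FK_ChildTable_MainTable;
-- ... delete and re-insert the parent row here ...
-- Turn it back on; WITH CHECK forces SQL Server to re-verify every referencing row
-- before the constraint is trusted again, which is the overhead mentioned above
ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT FK_ChildTable_MainTable;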

SQL Server: How to allow duplicate records on small table

I have a small table "ImgViews" that only contains two columns, an ID column called "imgID" + a count column called "viewed", both set up as int.
The idea is to use this table only as a counter so that I can track how often an image with a certain ID is viewed / clicked.
The table has no primary or foreign keys and no relationships.
However, when I enter some data for testing and try entering the same imgID multiple times it always appears greyed out and with a red error icon.
Usually this makes sense, since you don't want duplicate records, but because the purpose here is different, duplicates are exactly what I need.
Can someone tell me how I can achieve this or work around it? What would be a common way to do this?
Many thanks in advance, Tim.
To address your requirement to store non-unique values, simply remove primary keys, unique constraints, and unique indexes. I expect you may still want a non-unique clustered index on imgID to improve performance of aggregate queries that would otherwise have to scan and sort the entire table. I also suggest you store an insert timestamp, not to provide uniqueness, but to facilitate purging data by date, should the need arise in the future.
There must be a unique index or a unique/primary key constraint on that table causing this; make sure none exists. Alternatively, SSMS may simply not know how to identify the row that was just inserted because it has no key.
It is generally not best practice to have a table without a (logical) primary key. In your case, I'd make the image id the primary key and increment the counter. The MERGE statement is well suited to performing an insert or an update in a single step (see the sketch below); alternatives exist.
If you don't like that, create a surrogate primary key (an identity column set as the primary key).
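A rough MERGE-based upsert for the counter suggestion above, assuming imgID becomes the primary key and @imgID holds the image being counted:
MERGE dbo.ImgViews AS target
USING (SELECT @imgID AS imgID) AS source
    ON target.imgID = source.imgID
WHEN MATCHED THEN
    UPDATE SET viewed = target.viewed + 1
WHEN NOT MATCHED THEN
    INSERT (imgID, viewed) VALUES (source.imgID, 1);
(Under concurrent callers you would normally add a HOLDLOCK/SERIALIZABLE hint or otherwise handle the race between the two branches.)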
At the moment you have no way of addressing a specific row. That makes the table a little unwieldy.
If you allow multiple rows to be absolutely identical, how would you update or delete just one of them?
How would you expect the database to "know" which row you were referring to?
At the very least, add a separate identity column (preferably as the clustered index, too).
As a side note: it's weird that you "like to avoid unneeded data" but at the same time insert duplicates over and over again instead of simply adding up the click count per single image...
Use SQL statements, not the GUI, if the table has no primary key or unique constraint.

SQL Trigger: On update of primary key, how to determine which "deleted" record corresponds to which "inserted" record?

Assume that I know that updating a primary key is bad.
There are other questions which imply that the inserted and deleted table records match by position (the first row of one matches the first row of the other). Is this a fact or a coincidence?
Is there anything that could join the two tables together when the primary key changes on an update?
There is no correspondence between row positions in the inserted and deleted virtual tables.
And no, you can't match the rows by position.
Some options:
rely on another unique key that is unchanging (for that update) to link the rows
limit updates to single-row actions
use a stored procedure with the OUTPUT clause to capture before and after keys (see the sketch after this list)
an INSTEAD OF trigger with an OUTPUT clause (to be honest, not sure if you can do this)
disallow primary key updates (added after comment)
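Here is a rough sketch of the OUTPUT-clause option, with hypothetical table, column, and parameter names; the key change is done from the calling procedure rather than relying on the trigger:
-- @oldKey and @newKey would be parameters of the stored procedure
DECLARE @KeyChanges TABLE (OldKey int, NewKey int);
UPDATE dbo.MyTable
SET MyKey = @newKey
OUTPUT deleted.MyKey, inserted.MyKey
INTO @KeyChanges (OldKey, NewKey)
WHERE MyKey = @oldKey;
-- @KeyChanges now pairs each old key value with its replacement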
Each table is allowed to have one identity column. Identity columns are not updateable; they are assigned a value when the records are inserted (or when the column is added), and they can never change. If the primary key is updateable, it must not be an identity column. So, either the table has another column which is an identity column, or you can add one to it. There is no rule that says the identity column has to be the primary key. Then, in the trigger, rows in inserted and deleted that have the same identity value are the same row, and you can support updating the primary key on multiple rows at a time.
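A minimal sketch of that trigger, assuming the table has an identity column named RowId alongside an updatable key column MyKey (all names hypothetical):
CREATE TRIGGER trg_MyTable_Update
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- RowId never changes, so it pairs each "before" image with its "after" image
    SELECT d.MyKey AS OldKey, i.MyKey AS NewKey
    FROM deleted AS d
    INNER JOIN inserted AS i ON i.RowId = d.RowId;
    -- In practice you would insert these pairs into a log table rather than select them
END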
Yes -- create an "old_primary_key" field in the table you're updating, and populate it first.
There is nothing you can do to match up the inserted and deleted pseudo-table record keys -- even if you store their data in a log table somewhere.
I guess alternatively, you could create a separate log table that tracked changes to primary keys (old and new). This might be more useful than adding a field to the table you're updating as I suggested right at first, as it would allow you to track more than one change for a given record. Just depends on your situation, I guess.
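A quick sketch of the log-table idea, with made-up names, populated before the key is changed:
CREATE TABLE dbo.KeyChangeLog
(
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    OldKey    int NOT NULL,
    NewKey    int NOT NULL,
    ChangedAt datetime2 NOT NULL DEFAULT (SYSUTCDATETIME())
);
-- Record the change first, then perform it
INSERT INTO dbo.KeyChangeLog (OldKey, NewKey) VALUES (@oldKey, @newKey);
UPDATE dbo.MyTable SET MyKey = @newKey WHERE MyKey = @oldKey;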
But that said -- before you do anything, please go find a chalk board and write this 100 times:
I know that updating a primary key is bad.
I know that updating a primary key is bad.
I know that updating a primary key is bad.
I know that updating a primary key is bad.
I know that updating a primary key is bad.
...
:-) (just kidding)

Appending Rows into an SQLite Database Where Primary Key May Already Exist

I’m trying to merge a few pairs of SQLite3 databases that have the same tables (and schemas). Some of the tables are pretty simple and just have rows of plain data, but some of the tables have primary keys. Some of the keys are unique like a URL (eg url LONGVARCHAR PRIMARY KEY), and some of them are just simple integer indexes, but NOT set to auto-increment (eg id INTEGER PRIMARY KEY).
I’ve found several topics on merging databases (and I had already manually merged one pair of non-primary-key databases without effort), but am concerned about the ones with keys which may already exist in both.
My question is: what happens if a row is inserted into a database where a row with the same key already exists? It should overwrite the row that has that key, right? I was hoping that it would append the rows to the table and update the key, but that only works if the key has a numeric component set to auto-increment, correct?
Can anyone confirm my suppositions—and if possible, offer a suggestion on the easiest way to append such rows?
Thanks a lot.
You should have no problems if you set the primary key in the destination table to auto-increment.
Then, when you run your bulk insert command (or whatever you are using to insert values into your new table), you simply do not supply input for your primary key field and there will NEVER be a duplicate.
Columns: ID, Name
Just don't provide the ID field, i.e.
INSERT INTO tableName (Name) VALUES ('Synetech')
The insert will simply add the row with the next available ID in the table.
Good Luck!
If you try to INSERT a duplicate primary key, it will give you an error and not allow the insert. SQLite also supports the REPLACE INTO syntax, which overwrites the existing row when the primary key collides.
If you want to append on duplicates, you will have to check whether a field with that key already exists, and if so then change the key to some new value. The correct way to do this likely depends on your application. For integer keys you could just take the max+1, but for the url keys it's not clear what the correct behavior should be.
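To make that concrete, here is a rough SQLite sketch; the table and column names (items, id, name) and the attached file name are placeholders for whatever your schema actually uses:
ATTACH DATABASE 'other.db' AS src;
-- Plain INSERT fails on a duplicate key; REPLACE INTO replaces the existing row
REPLACE INTO items (id, name)
SELECT id, name FROM src.items;
-- Alternatively, append the incoming rows under fresh integer keys by adding an offset
-- chosen to be larger than MAX(id) in the destination table (10000 here is made up)
INSERT INTO items (id, name)
SELECT id + 10000, name FROM src.items;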

How do I rename primary key values in Oracle?

Our application uses an Oracle 10g database where several primary keys are exposed to the end user. Product codes and such. Unfortunately it's too late to do anything about this, as there are tons of reports and custom scripts out there that we do not have control over. We can't redefine the primary keys or mess up the database structure.
Now some customer want to change some of the primary key values. What they initially wanted to call P23A1 should now be called CAT23MOD1 (not a real example, but you get my meaning.)
Is there an easy way to do this? I would prefer a script of some sort, that could be parametrized to fit other tables and keys, but external tools would be acceptable if no other way exists.
The problem is presumably with the foreign keys that reference the PK. You must define the foreign keys as "deferrable initially immediate", as described in this Tom Kyte article: http://www.oracle.com/technology/oramag/oracle/03-nov/o63asktom.html
That lets you ...
Defer the constraints
Modify the parent value
Modify the child values
Commit the change
Simple.
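A rough sketch of that sequence in Oracle SQL, using made-up table, column, and constraint names:
-- One-time DDL: recreate the foreign key as deferrable
ALTER TABLE child_table DROP CONSTRAINT fk_child_parent;
ALTER TABLE child_table ADD CONSTRAINT fk_child_parent
    FOREIGN KEY (product_code) REFERENCES parent_table (product_code)
    DEFERRABLE INITIALLY IMMEDIATE;
-- Then, per rename:
SET CONSTRAINT fk_child_parent DEFERRED;
UPDATE parent_table SET product_code = 'CAT23MOD1' WHERE product_code = 'P23A1';
UPDATE child_table  SET product_code = 'CAT23MOD1' WHERE product_code = 'P23A1';
COMMIT;  -- the constraint is checked here, once both sides agree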
Oops. A little googling makes it appear that, inexplicably, Oracle does not implement ON UPDATE CASCADE, only ON DELETE CASCADE. To find workarounds google ORACLE ON UPDATE CASCADE. Here's a link on Creating A Cascade Update Set of Tables in Oracle.
Original answer:
If I understand correctly, you want to change the values of data in primary key columns, not the actual constraint names of the keys themselves.
If this is true it can most easily be accomplished redefining ALL the foreign keys that reference the affected primary key constraint as ON UPDATE CASCADE. This means that when you make a change to the primary key value, the engine will automatically update all related values in foreign key tables.
Be aware that if this results in a lot of changes it could be prohibitively expensive in a production system.
If you have to do this on a live system with no DDL changes to the tables involved, then I think your only option is to (for each value of the PK that needs to be changed):
Insert into the parent table a copy of the row with the PK value replaced
For each child table, update the FK value to the new PK value
Delete the parent table row with the old PK value
If you have a list of parent tables and the PK values to be renamed, it shouldn't be too hard to write a procedure that does this - the information in USER_CONSTRAINTS can be used to get the FK-related tables for a given parent table.
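A sketch of that procedure's body for a single rename, again with made-up table and column names (the parent's non-key columns are listed explicitly):
-- 1. Insert a copy of the parent row under the new key
INSERT INTO parent_table (product_code, description, price)
SELECT 'CAT23MOD1', description, price
FROM parent_table
WHERE product_code = 'P23A1';
-- 2. Repoint every child row at the new key
UPDATE child_table SET product_code = 'CAT23MOD1' WHERE product_code = 'P23A1';
-- 3. Remove the old parent row
DELETE FROM parent_table WHERE product_code = 'P23A1';
-- The child tables for a given parent can be listed from the data dictionary:
SELECT c.table_name, c.constraint_name
FROM user_constraints c
JOIN user_constraints p ON p.constraint_name = c.r_constraint_name
WHERE c.constraint_type = 'R'
  AND p.table_name = 'PARENT_TABLE';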