Multiple filters in Rows.Find() - vb.net

I have code that soft-deletes records by pulling them from the database and then updating them with a Delete flag set to "Y". The problem is that previously deleted items still come up in my search.
This is what I am using to get the table rows -
Datatableadapter.getData().Rows.Find(ID.Text)
This searches on the table's primary key automatically. Now I want to add the delete flag to the search criteria as well. Please suggest what to do.

Find is meant for searching by the entity key or a composite key. Use Where to search by additional criteria. You can either put both conditions in your Where, or use Find so you (ideally) leverage the clustered index, and then a Where to enforce your business rule that the element has not been deleted.
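Alternatively, you can push the flag check into the TableAdapter's query itself so deleted rows never reach the client. A minimal sketch of such a query, assuming a table named Records with columns ID and DeleteFlag (both names are assumptions):

SELECT *
FROM Records
WHERE ID = @ID            -- the primary-key lookup that Rows.Find() performs
  AND DeleteFlag <> 'Y'   -- exclude soft-deleted rows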

Postgres "archive" a row of a table

I have a few tables in postgres that refer to each other. I want to set up a mechanism to "archive" rows in one of my tables. That is, I want to still hold onto the data and be able to read from it, but I don't want to be able to edit that row anymore or edit the foreign keys in other tables to reference this now "archived" row.
Is this something that can be achieved? Essentially, I want the rest of the database to act like this row's primary key is no longer there, the same way that if you try to set an invalid foreign key, postgres will throw an error that that key was not found in the referenced table.
Thanks
EDIT:
I don't want to actually archive any of the data. I say "archive" because I can't think of a better way to describe it. Essentially, I just want to be able to change a bool value in a row of the table, and that signals to Postgres to no longer allow any changes to that row or allow that row's id to be used as a foreign key in any other table. The only thing anyone should be able to do is change that bool back to true and then interact with the row normally.
Add a flag column, either a bool or an enum if there are multiple states. Postgres won't check this for you; you have to add a WHERE clause to every applicable query.
This is error-prone. You can make it safer by defining a view that already has the WHERE clause, and doing all queries on that view. Rename the table to something like "table_all" and let the view take over the table's name; then all existing queries will Just Work.
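A minimal sketch of that rename-plus-view approach, assuming a table named articles with a boolean archived flag (both names are assumptions):

ALTER TABLE articles RENAME TO articles_all;

CREATE VIEW articles AS
SELECT *
FROM articles_all
WHERE NOT archived;

Simple views like this are automatically updatable in Postgres, so existing writes against the old table name keep working as well.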

Detect which field is causing duplicates

I am designing a table and during testing it was found that one of the fields causes duplicate rows (which it shouldn't).
As a precaution, I would like to rule out possible duplicates in any other field. How would I go about checking which one of my columns causes duplicate PKs?
Intuitive method:
select
    pk_field,
    count(*) as row_count,
    count(distinct other_field1) as distinct_values
from
    my_table
group by
    pk_field
having
    count(distinct other_field1) > 1;
I want to make sure that running this query rules out 100% that there are duplicates caused by other_field1 (i.e., that there is only one value of other_field1 for each value of the PK).
Extra bonus: is there a query that would show me directly which fields cause duplicate rows without having to make one query per field in the table?
Thanks a bunch!
EDIT: for clarity, the PK will not be enforced, and the table is actually a view in a third-party system.
From my point of view, the primary key should be enforced, and there should be a unique index on (pk_field, other_field1). Additionally, other_field1 should be NOT NULL (so that you wouldn't have "duplicates" with the same pk_field but an empty other_field1); see the sketch below.
Doing so, the database would handle the problem itself.
If you want to do it yourself, well, what can you do? A view? A third-party system? How much control do you have over the whole process? If all you CAN do is find "duplicates" after the fact, that's kind of too late.
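For reference, a minimal sketch of those constraints, using the names from the question (SQL Server-style syntax; my_table and the column type are assumptions):

alter table my_table
    add constraint pk_my_table primary key (pk_field);

alter table my_table
    alter column other_field1 varchar(100) not null;  -- assumed type; disallows empty values

create unique index ux_pk_other
    on my_table (pk_field, other_field1);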

Saving change history

Background:
I am trying to solve one simple problem. I have a database with two tables: one stores texts (something like articles), and the other stores the category each text belongs to. Users can make changes to a text, and I need to record who made the changes and when; when saving changes, the user also writes a comment on the changes, which I save as well.
What I have done so far:
I added another table that stores everything related to a change: who made it and when, a comment on the change, and the ID of the text the change applies to.
What is the problem:
Deleting a text also needs to be recorded in the history, but since the history records have a foreign key constraint referencing the text, I have to delete the entire history associated with a text in order not to get an error.
What else I have tried:
I tried adding a "Deleted" attribute to the text table: the row is not physically deleted, the "Deleted" = 1 flag is simply set, and this way I can keep the history and even record the moment of deletion. But there is another problem: the text table has a "Name" attribute that must be unique, and if records are never physically deleted, then when I try to insert a new record with a "Name" value that already exists, I get a uniqueness error, even though the old record with that name is considered deleted.
Question:
What approaches can solve this problem, so that the history of changes is kept in another table even after records are deleted from the main table, while the uniqueness of some attributes of the main table is preserved and data integrity is maintained?
I would be grateful for any options and hints.
A good practice is to use a unique identifier such as a UUID as the primary key for your primary record (i.e., your text record). That way, you can safely soft-delete the primary record, and any associated metadata can be kept without fear of collisions in the future.
If you need to enforce uniqueness of certain attributes (such as the Name you mentioned), you can create a secondary index (a non-clustered index in SQL Server terminology) on that column. When performing the soft delete, set Name to NULL and record the old value in some other column. In SQL Server (since 2008), in order to allow multiple NULL values in a unique index you need to create what is called a filtered index, where you explicitly say you want to ignore NULL values.
In other words, your schema would consist of something like this:
a UUID as primary key for the text record
change metadata would have a foreign key relation to text record via the UUID
a Name column with a non-clustered UNIQUE index
a DeletedName column that will store the Name when record is deleted
a Deleted bit column that can be NULL for non-deleted records and set to 1 for deleted
When you do a soft delete, you would execute an atomic transaction (sketched below) that would:
set the DeletedName = Name
set Name = NULL (so as not to break the UNIQUE index)
mark record as deleted by setting Deleted = 1
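A minimal T-SQL sketch of this setup; the table and column names (dbo.Texts, @TextId) are assumptions based on the description above:

CREATE UNIQUE NONCLUSTERED INDEX ux_Texts_Name
ON dbo.Texts (Name)
WHERE Name IS NOT NULL;  -- soft-deleted rows are excluded from the uniqueness check

BEGIN TRANSACTION;
UPDATE dbo.Texts
SET DeletedName = Name,  -- preserve the old name
    Name = NULL,         -- free the name for reuse
    Deleted = 1          -- mark the record as soft-deleted
WHERE Id = @TextId;
COMMIT;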
There are other ways too but this one seems to be easily achievable based on what you already have.
In my opinion, you can do it in one of two ways:
Create audit tables corresponding to the main tables, each including an action field, and fill them using DELETE/INSERT/UPDATE triggers on the main tables:
ArticlesTable(Id,Name) -> AuditArticlesTable(Id,Name,Action,User,ModifiedDate)
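A hypothetical delete trigger for such an audit table, following the names in the example above:

CREATE TRIGGER trg_ArticlesTable_Delete
ON ArticlesTable
AFTER DELETE
AS
BEGIN
    -- copy the removed rows into the audit table, stamped with who and when
    INSERT INTO AuditArticlesTable (Id, Name, Action, [User], ModifiedDate)
    SELECT d.Id, d.Name, 'DELETE', SUSER_SNAME(), GETDATE()
    FROM deleted AS d;
END;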
You can use a filtered unique index (https://learn.microsoft.com/en-us/sql/relational-databases/indexes/create-filtered-indexes?view=sql-server-ver15) on the "Name" field to solve the issue of inserting a name that already exists on a record flagged as deleted.

SQL Server CHECK constraint - querying other rows

I want to enforce a business rule on my database table to ensure that a row can't be inserted if the table already contains rows meeting a certain criteria.
I was wanting to use a CHECK constraint, but I suspect this may have to be done via a trigger.
Is there a way to do this via a CHECK constraint? OR is there another way to do this at the database level without using a trigger?
Depending on your specific criteria (which you haven't shared yet), you may be able to use a unique filtered index.
This is normally faster than functions or other workarounds.
General format would be:
CREATE UNIQUE NONCLUSTERED INDEX ix_IndexName ON MyTable (FieldstoIndex)
WHERE <filter to only include certain rows>
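To make that concrete, a hypothetical rule such as "at most one active row per customer" could be enforced like this (table and column names are made up):

CREATE UNIQUE NONCLUSTERED INDEX ix_OneActiveRowPerCustomer
ON dbo.Orders (CustomerId)
WHERE IsActive = 1;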

SQL: Best way to perform Update record operation to any given MySQL table

I'm programming an app to perform CRUD against any given table in any given MySQL database.
I'm having trouble figuring out the best way to handle the Update operation.
I was thinking: 1) find the primary key of the table, and 2) update the record by matching the primary key field between the two records (the incoming one and the one already present in the MySQL table).
I know that while a primary key on every table is strongly recommended, it is still optional, so as I said, I'm not sure if there's a better approach, since my method would not work on a table without a primary key.
Thanks in advance.
The answer I found that I believe is valid is the following: for the Update action, send two records to the server, the non-updated one and the updated one.
The server side then includes every field of the non-updated record in the WHERE clause of the UPDATE query, with LIMIT 1 (to avoid problems with duplicate records).
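For example, the resulting query for a hypothetical people table with columns name and age (all names and values are made up) would look like this:

UPDATE people
SET name = 'Alice B.', age = 31      -- values from the updated record
WHERE name = 'Alice' AND age = 30    -- every field of the non-updated record
LIMIT 1;                             -- touch at most one row even if duplicates exist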