Deleting values using SQLite while doing an INNER JOIN - sql

I am trying to delete all voters from a voters table where they are not registered as a democrat or republican AND only voted once. I have a database with three tables, congress_members, voters, and votes and have to JOIN votes with voters in order to delete the right data.
This code finds the data I want to delete:
SELECT voters.*
FROM voters JOIN votes ON voters.id = votes.voter_id
WHERE party = 'green' OR party = 'na' OR party = 'independent'
GROUP BY votes.voter_id
HAVING COUNT(*) = 1;
But I am unable to delete these rows, because I get an error every time I try to use a DELETE with a JOIN.

You can phrase this as a delete with a where clause:
delete from voters
where party not in ('democrat', 'republican') and
      id in (select voter_id from votes group by voter_id having count(*) = 1);
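If you want to preview the rows before deleting them, the equivalent SELECT (same assumption that party lives on voters and that votes carries voter_id) is:
SELECT *
FROM voters
WHERE party NOT IN ('democrat', 'republican')
  AND id IN (SELECT voter_id FROM votes GROUP BY voter_id HAVING COUNT(*) = 1);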

You are getting the error because the join queries your database and builds a temporary result set holding the newly queried data. DELETE statements remove rows that are stored in an actual table in your database, not rows sitting in that temporary, in-memory result.
The delete statement syntax is "DELETE FROM table WHERE conditions". The table value will need to be one of the three tables in your database, and your target is voters. As of right now, you have half of your delete statement complete.
The WHERE clause needs to evaluate to a boolean value for each row. SQL provides EXISTS (), which can be used to delete this data. Essentially, you place the SELECT statement from your post inside EXISTS (). For each row of the table you are deleting from, the subquery checks whether a matching row exists; if there is a match, EXISTS evaluates to true for that row and the row is deleted.
DELETE FROM voters
WHERE (party = 'green' OR party = 'na' OR party = 'independent')
AND EXISTS (
SELECT 1 FROM votes WHERE votes.voter_id = voters.id
HAVING COUNT(*) = 1
)

Related

Find a single row and update it with nested queries

Good evening everyone, I'm trying to do an update on a Table but I can't really make it work
The feature needed is:
-Watch a field on a form, it contains the number of people that need to sit at the restaurant table.
-Find the first free table that has enough seats, set it as busy and assign a random waiter
Any idea?
More DB info:
Table "Waiters" is composed of ID (Autonumber) and Name (Short Text). It has 2 names atm.
Table "Tables" is composed of ID (Autonumber), Seats (Number), Busy (y/n), Waiter (short text). All tables have a fixed number of seats, and currently have no Waiter and are not busy.
SOLUTION:
In the end I used "First" for the assignment and it works perfectly, as follows:
UPDATE Tables SET Tables.Waiter = DLookUp("FirstName","TopWtr")
WHERE ID IN (SELECT FIRST (ID)
FROM Tables
WHERE Seats >= Val(Forms!Room!Text12) AND Waiter Is Null);
TOP wasn't working because it was returning multiple records (every table with the same number of seats), and I couldn't make it work with DISTINCT. This works, probably because the table is already ordered by seats.
Thanks to June7 for the input
You cannot SET a field value to the result of a SELECT subquery - a SELECT returns a dataset, not a single value. You can return a single value with a domain aggregate function.
Build a query object named TopWtr:
SELECT Top 1 ID FROM Waiters ORDER BY Rnd(ID);
Then use DLookup to pull that value. The Busy field seems redundant, because if a table has a waiter assigned, that would indicate it is busy.
UPDATE Tables SET Tables.Waiter = DLookUp("ID","TopWtr"), Tables.Busy = True
WHERE ID IN (SELECT TOP 1 ID FROM Tables
WHERE Seats >= Val(Forms!Room!Testo17) AND Waiter Is Null
ORDER BY Seats)
An INNER JOIN may be preferable to the WHERE clause:
UPDATE Tables INNER JOIN (SELECT TOP 1 ID FROM Tables
WHERE Seats >= Val(Forms!Room!Testo17) AND Waiter Is Null
ORDER BY Seats) AS T1
ON Tables.ID = T1.ID
SET Tables.Waiter = DLookUp("ID","TopWtr"), Tables.Busy = True

SQL Server: trigger firing every time

For my school project I need to add a trigger to my SQL Server database. I decided a 'no double usernames' trigger on my Users table would be relevant.
The problem is, that this trigger is firing every time I execute an INSERT query. I can't figure out why this is happening every time. I even tried different ways of writing my trigger.
The trigger I have now:
CREATE TRIGGER [Trigger_NoDuplicates]
ON [dbo].[Users]
FOR INSERT
AS
BEGIN
SET NOCOUNT ON
IF(EXISTS(SELECT Username FROM Users
WHERE Username = (SELECT Username FROM inserted)))
BEGIN;
RAISERROR('This username already exists!',15, 0)
ROLLBACK
END
END
Thanks in advance!
A trigger always fires every time; do you mean "raises an error every time"?
You currently have the following (expanded to multiple lines to make it clearer)...
IF (
EXISTS (
SELECT Username
FROM users
WHERE Username = (SELECT Username FROM inserted)
)
)
The key point here is the name of the table inserted. Past tense. It's already happened.
Anything in the inserted table has already been inserted into the target table.
So, what you need to check is that the username is in the target table more than once already.
However, it is possible to insert more than one record into a table at once. This means that Username = (SELECT Username FROM inserted) will cause its own error: you can't compare a single value to a set of values, and inserted can contain more than one row, and therefore more than one username.
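For example, a single statement like this puts two rows into inserted at once, which is exactly when the scalar comparison breaks (column list trimmed down for illustration):
INSERT INTO Users (Username) VALUES ('alice'), ('bob');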
This is how I would approach your trigger...
IF EXISTS (
SELECT
users.Username
FROM
users
INNER JOIN
inserted
ON inserted.Username = users.Username
GROUP BY
users.Username
HAVING
COUNT(*) > 1
)
This takes the (already inserted into) users table, and picks out all the records that match a username with any record in the inserted table.
Then it GROUPs them by the username field.
Then it filters the results to only include groups with more than 1 record.
These groups (usernames) have duplicate entries and should cause your trigger to raise an error.
An alternative is a bit more similar to your approach, but many people won't recognise it, so I generally wouldn't recommend it...
IF EXISTS (
SELECT
users.Username
FROM
users
WHERE
users.Username = ANY (SELECT username FROM inserted)
GROUP BY
users.Username
HAVING
COUNT(*) > 1
)
The ANY keyword is very rarely used, but it does what it sounds like: it allows a single value to be compared against a set of values.
Finally, if your table has an IDENTITY column, you can avoid the GROUP BY by explicitly stating you don't want to compare a row to itself...
IF EXISTS (
SELECT
users.Username
FROM
users
INNER JOIN
inserted
ON inserted.Username = users.Username
AND inserted.id <> users.id
)
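Putting one of those checks back into your trigger, a complete version might look like this (a sketch using the first check, with the same RAISERROR/ROLLBACK handling as in your post):
CREATE TRIGGER [Trigger_NoDuplicates]
ON [dbo].[Users]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Any username appearing more than once after the insert is a duplicate
    IF EXISTS (
        SELECT users.Username
        FROM users
        INNER JOIN inserted ON inserted.Username = users.Username
        GROUP BY users.Username
        HAVING COUNT(*) > 1
    )
    BEGIN
        RAISERROR('This username already exists!', 15, 0);
        ROLLBACK;
    END
END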

SQL-Oracle: Updating multiple table rows based on values contained in the same table

I have one table named: ORDERS
This table contains OrderNumbers; several rows belong to the same person and carry the same address lines for that person.
However, sometimes the data is inconsistent.
For example, looking at the table screenshot (Orders table with bad data to fix):
you can see that orderNumber 1 has a name associated with it and address lines 1-2-3-4; sometimes those differ by a few characters or are even null.
My goal is to update all 3 of those rows with one set of data that is already there, so that all 3 rows end up identical.
To make it clearer, the expected result should look like this:
(screenshot of the expected result)
I am currently using a MERGE statement to avoid a CURSOR (for loop),
but I am having problems making it work.
Here is the SQL:
MERGE INTO ORDERS O USING
(SELECT
INNER.ORDERNUMBER,
INNER.NAME,
INNER.LINE1,
INNER.LINE2,
INNER.LINE3,
INNER.LINE4
FROM ORDERS INNER
) TEMP
ON( O.ORDERNUMBER = TEMP.ORDERNUMBER )
WHEN MATCHED THEN
UPDATE
SET
O.NAME = TEMP.NAME,
O.LINE1 = TEMP.LINE1,
O.LINE2 = TEMP.LINE2,
O.LINE3 = TEMP.LINE3,
O.LINE4 = TEMP.LINE4;
The biggest issue I am facing is picking a single row out of the 3 at random (it does not matter which row I pick to use for the update),
as long as I make the records exactly the same for an order number.
I also tried ROWNUM = 1, but in a multi-row update it only picks one row and would update maybe thousands of lines with the same address and name regardless of which order number they belong to.
OrderNumber is the join column to use.
Kind regards
A simple correlated subquery in an update statement should work:
update orders t1
set (t1.name, t1.line1, t1.line2, t1.line3, t1.line4) =
(select t2.name, t2.line1, t2.line2, t2.line3, t2.line4
from orders t2
where t2.OrderNumber = t1.OrderNumber
and rownum < 2)
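If you prefer a deterministic pick instead of whichever row Oracle returns first, one option (a sketch assuming the same ORDERS columns) is to aggregate per column so every run produces the same values:
update orders t1
set (t1.name, t1.line1, t1.line2, t1.line3, t1.line4) =
    (select max(t2.name), max(t2.line1), max(t2.line2), max(t2.line3), max(t2.line4)
     from orders t2
     where t2.OrderNumber = t1.OrderNumber)
Note that the aggregates are taken column by column, so the final values may be combined from different source rows for the same order; if they must all come from exactly one existing row, stay with the ROWNUM version above.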

How to import values into a column from another table in Access

I created an Access Database and I wish to import a subset of data from a master table into a smaller table using SQL Queries. Basically, I want the smaller (Customer) table to reflect any changes made in the bigger (Total) table.
I tried the following code:
UPDATE Customer SET Brand =
(SELECT Brand FROM Total WHERE Chance = -1)
WHERE EXISTS (SELECT Brand FROM Total WHERE Chance = -1);
(Chance is a binary column.)
But I get the error "Operation must use an updateable query", even though my file is not read-only.
Is there another Query that I can use to perform the same task?
An update statement in an MS Access database should look like this:
UPDATE Customer AS C
INNER JOIN Total AS T ON T.PK = C.FK
SET C.Brand = T.Brand
WHERE T.Chance=-1;
Where:
PK = Primary Key
FK = Foreign Key
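For instance, if both tables carried a CustomerID column linking them (a made-up name purely for illustration), the statement would read:
UPDATE Customer AS C
INNER JOIN Total AS T ON T.CustomerID = C.CustomerID
SET C.Brand = T.Brand
WHERE T.Chance = -1;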

Delete duplicates with no primary key

Here we want to delete rows with a duplicated column value (Product), which will then be used as a primary key.
The column is of type nvarchar, and we don't want to have 2 rows for one product.
The database is a large one, with thousands of rows we need to remove.
When querying for the duplicates, we want to keep the first item and remove the second one as the duplicate.
There is no primary key yet; we want to add one after this activity of removing duplicates.
Then the Product column could be our primary key.
The database is SQL Server CE.
I tried several methods, and mostly I get errors similar to:
There was an error parsing the query. [ Token line number = 2,Token line offset = 1,Token in error = FROM ]
A method which I tried :
DELETE FROM TblProducts
FROM TblProducts w
INNER JOIN (
SELECT Product
FROM TblProducts
GROUP BY Product
HAVING COUNT(*) > 1
)Dup ON w.Product = Dup.Product
The way I'd prefer, and am trying to learn and adjust my code toward, is something like this
(it's not correct yet):
SELECT Product, COUNT(*) TotalCount
FROM TblProducts
GROUP BY Product
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC
--
;WITH cte -- these are the lines I have the most doubts about
AS (SELECT ROW_NUMBER() OVER (PARTITION BY Product
ORDER BY ( SELECT 0)) RN
FROM TblProducts)
DELETE FROM cte
WHERE RN > 1
If you have two DIFFERENT records with the same Product column, then you can SELECT the unwanted records with some criterion, e.g.
CREATE TABLE victims AS
SELECT MAX(entryDate) AS date, Product, COUNT(*) AS dups FROM ProductsTable WHERE ...
GROUP BY Product HAVING dups > 1;
Then you can do a DELETE JOIN between ProductsTable and victims.
Or you can select Product only, and then do a DELETE using some other JOIN condition, for example an invalid CustomerId, or EntryDate NULL, or anything else. This works if you know that there is one and only one valid copy of each Product, and all the others are recognizable by their invalid data.
Suppose you instead have IDENTICAL records (or you have both identical and non-identical, or you may have several dupes for some product and you don't know which). You run exactly the same query. Then, you run a SELECT query on ProductsTable and SELECT DISTINCT all products matching the product codes to be deduped, grouping by Product, and choosing a suitable aggregate function for all fields (if identical, any aggregate should do. Otherwise I usually try for MAX or MIN). This will "save" exactly one row for each product.
At that point you run the DELETE JOIN and kill all the duplicated products. Then, simply reimport the saved and deduped subset into the main table.
Of course, between the DELETE JOIN and the INSERT SELECT, the DB will be in an unstable state, with every product that had at least one duplicate simply missing.
Another way which should work in MySQL:
-- Create an empty table
CREATE TABLE deduped AS SELECT * FROM ProductsTable WHERE false;
CREATE UNIQUE INDEX deduped_ndx ON deduped(Product);
-- DROP duplicate rows, Joe the Butcher's way
INSERT IGNORE INTO deduped SELECT * FROM ProductsTable;
ALTER TABLE ProductsTable RENAME TO ProductsBackup;
ALTER TABLE deduped RENAME TO ProductsTable;
-- TODO: Copy all indexes from ProductsTable on deduped.
NOTE: the way above DOES NOT WORK if you want to distinguish "good records" and "invalid duplicates". It only works if you have redundant DUPLICATE records, or if you do not care which row you keep and which you throw away!
EDIT:
You say that "duplicates" have invalid fields. In that case you can modify the above with a sorting trick:
SELECT * FROM ProductsTable ORDER BY Product, FieldWhichShouldNotBeNULL IS NULL;
Then, if you have only one row for a product, all well and good: it will get selected. If you have more, the one for which FieldWhichShouldNotBeNULL IS NULL is FALSE (i.e. the one where FieldWhichShouldNotBeNULL is actually not null, as it should be) will sort first and be inserted. All the others will bounce silently, thanks to the IGNORE clause, against the uniqueness of Product. Not a really pretty way to do it (and check I didn't mix true with false in my clause!), but it ought to work.
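Plugged into the INSERT IGNORE step from the previous block (same deduped table and unique index; FieldWhichShouldNotBeNULL stands for your real column), the sorting trick would look something like:
-- Rows with the field populated sort first, so they win the race for the unique index
INSERT IGNORE INTO deduped
SELECT * FROM ProductsTable
ORDER BY Product, FieldWhichShouldNotBeNULL IS NULL;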
EDIT
actually more of a new answer
This is a simple table to illustrate the problem
CREATE TABLE ProductTable ( Product varchar(10), Description varchar(10) );
INSERT INTO ProductTable VALUES ( 'CBPD10', 'C-Beam Prj' );
INSERT INTO ProductTable VALUES ( 'CBPD11', 'C Proj Mk2' );
INSERT INTO ProductTable VALUES ( 'CBPD12', 'C Proj Mk3' );
There is no index yet, and no primary key. We could still declare Product to be primary key.
But something bad happens. Two new records get in, and both have NULL description.
Yet, the second one is a valid product since we knew nothing of CBPD14 before now, and therefore we do NOT want to lose this record completely. We do want to get rid of the spurious CBPD10 though.
INSERT INTO ProductTable VALUES ( 'CBPD10', NULL );
INSERT INTO ProductTable VALUES ( 'CBPD14', NULL );
A rude DELETE FROM ProductTable WHERE Description IS NULL is out of the question, it would kill CBPD14 which isn't a duplicate.
So we do it like this. First get the list of duplicates:
SELECT Product, COUNT(*) AS Dups FROM ProductTable GROUP BY Product HAVING Dups > 1;
We assume that: "There is at least one good record for every set of bad records".
We check this assumption by positing the opposite and querying for it. If all is copacetic we expect this query to return nothing.
SELECT Dups.Product FROM ProductTable
RIGHT JOIN ( SELECT Product, COUNT(*) AS Dups FROM ProductTable GROUP BY Product HAVING Dups > 1 ) AS Dups
ON (ProductTable.Product = Dups.Product
AND ProductTable.Description IS NOT NULL)
WHERE ProductTable.Description IS NULL;
To further verify, I insert two records that represent this mode of failure; now I do expect the query above to return the new code.
INSERT INTO ProductTable VALUES ( "AC5", NULL ), ( "AC5", NULL );
Now the "check" query indeed returns,
AC5
So, the generation of Dups looks good.
I proceed now to delete all duplicate records that are not valid. If there are duplicate, valid records, they will stay duplicate unless some condition may be found, distinguishing among them one "good" record and declaring all others "invalid" (maybe repeating the procedure with a different field than Description).
But ay, there's a rub. Currently, you cannot delete from a table and select from the same table in a subquery ( http://dev.mysql.com/doc/refman/5.0/en/delete.html ). So a little workaround is needed:
CREATE TEMPORARY TABLE Dups AS
SELECT Product, COUNT(*) AS Duplicates
FROM ProductTable GROUP BY Product HAVING Duplicates > 1;
DELETE ProductTable FROM ProductTable JOIN Dups USING (Product)
WHERE Description IS NULL;
Now this will delete all invalid records, provided that they appear in the Dups table.
Therefore our CBPD14 record will be left untouched, because it does not appear there. The "good" record for CBPD10 will be left untouched because it's not true that its Description is NULL. All the others - poof.
Let me state again: if a product has no valid records and yet is duplicated, then all copies of that record will be killed - there will be no survivors.
To avoid this, one may first SELECT (using the query above, the check "which should return nothing") the rows representing this mode of failure into another TEMPORARY TABLE, then INSERT them back into the main table after the deletion (using a transaction might be in order).
Create a new table by scripting the old one out and renaming it. Also script all objects (indexes, etc.) from the old table to the new one. Insert the keepers into the new table. If your database is in the bulk-logged or simple recovery model, this operation will be minimally logged. Drop the old table and then rename the new one to the old name.
The advantage of this over a delete will be that the insert can be minimally logged. Deletes do double work because not only does the data get deleted, but the delete has to be written to the transaction log. For big tables, minimally logged inserts will be much faster than deletes.
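A rough sketch of that swap in regular SQL Server syntax (TblProducts_new and the Description column are made-up names; SQL Server CE may not support SELECT INTO, in which case create the new table explicitly and INSERT into it):
-- Keep one row per Product in a brand-new table
SELECT Product, MAX(Description) AS Description
INTO TblProducts_new
FROM TblProducts
GROUP BY Product;

-- Recreate indexes and constraints on TblProducts_new here, then swap the tables
DROP TABLE TblProducts;
EXEC sp_rename 'TblProducts_new', 'TblProducts';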
If it's not that big and you have some downtime, and you have SQL Server Management Studio, you can put an identity field on the table using the GUI. Now you have a situation like your CTE, except the rows themselves are truly distinguishable. So now you can do the following:
SELECT MIN(lhs.MyTempIDField)
FROM
table_a lhs
join table_a rhs
on lhs.field1 = rhs.field1
and lhs.field2 = rhs.field2 [etc]
WHERE
lhs.MyTempIDField <> rhs.MyTempIDField
GROUP BY
lhs.field1, lhs.field2 [etc]
This gives you all the 'good' duplicates. Now you can wrap this query with a DELETE FROM query.
DELETE FROM lhs
FROM table_a lhs
join table_a rhs
on lhs.field1 = rhs.field1
and lhs.field2 = rhs.field2 [etc]
WHERE
lhs.MyTempIDField <> rhs.MyTempIDField
and lhs.MyTempIDField not in (
SELECT MIN(lhs.MyTempIDField)
FROM
table_a lhs
join table_a rhs
on lhs.field1 = rhs.field1
and lhs.field2 = rhs.field2 [etc]
WHERE
lhs.MyTempIDField <> rhs.MyTempIDField
GROUP BY
lhs.field1, lhs.field2 etc
)
Try this:
DELETE FROM TblProducts
WHERE Product IN
(
SELECT Product
FROM TblProducts
GROUP BY Product
HAVING COUNT(*) > 1)
This suffers from the defect that it deletes ALL the records with a duplicated Product. What you probably want to do is delete all but one of each group of records with a given Product. It might be worthwhile to copy all the duplicates to a separate table first, and then somehow remove duplicates from that table, then apply the above, and then copy remaining products back to the original table.
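A sketch of that last idea (DupCopies is a made-up helper table that you would create beforehand with the same columns; I'm assuming a single extra Description column, so adjust for your real schema):
-- 1. Save one deduplicated row per duplicated Product
INSERT INTO DupCopies (Product, Description)
SELECT Product, MAX(Description)
FROM TblProducts
GROUP BY Product
HAVING COUNT(*) > 1;

-- 2. Delete ALL rows for those products (the query from above)
DELETE FROM TblProducts
WHERE Product IN (SELECT Product FROM DupCopies);

-- 3. Put the single kept row per product back
INSERT INTO TblProducts (Product, Description)
SELECT Product, Description FROM DupCopies;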