What is the best way to modify the most recently added row, without using a temporary table?
E.g. the table structure is
id | text | date
My current approach would be an insert using the PostgreSQL-specific RETURNING id clause, so that I can update the table afterwards with
update myTable set date='2013-11-11' where id = lastRow
However, I have the feeling that PostgreSQL is not simply using the last row but is iterating through millions of entries until "id = lastRow" is found. How can I directly access the last added row?
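For reference, a minimal sketch of that approach (the column values are made up; id is assumed to be an indexed primary key):
-- Insert and get the generated id back in one statement (PostgreSQL).
INSERT INTO myTable (text, date)
VALUES ('some text', '2013-11-10')
RETURNING id;
-- Because id is the primary key, this lookup is an index search,
-- not a scan over millions of rows.
UPDATE myTable SET date = '2013-11-11' WHERE id = 42; -- 42 = the returned id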
update myTable date='2013-11-11' where id IN(
SELECT max(id) FROM myTable
)
Just to add to mvb13's answer (since I don't have enough points to comment directly yet): there is one word missing, SET. Hopefully this will save someone some time working out the correct syntax.
update myTable set date='2013-11-11' where id IN(
SELECT max(id) FROM myTable
);
I have a table that looks like the table below:
Every time the user loans a book, a new record is inserted.
The data in this table is derived from another table which has no dates.
I need to update this table based on the records in the other table, meaning I only need to update what changes.
Example: let's say the user returns the book Starship Troopers and the book's return indicator is set to Yes.
How do I update just that column?
What I have tried:
I tried using the MERGE statement, but it works only with unique rows of data, meaning you get an error if the same ID appears more than once.
I also tried using a basic UPDATE statement with a JOIN, but that's not going well.
I am asking because I have run out of ideas.
Thanks for reading
If you need to update BooksReturn in the target table based on the same column in the source table:
UPDATE t
SET t.booksreturn = s.booksreturn
FROM target t JOIN source s
ON t.userid = s.userid
AND t.booksloaned = s.booksloaned
Here is a SQLFiddle demo.
You can do this with simple UPDATE and INSERT statements.
Say there are two tables, A and B.
From B you want to insert data into A if it does not exist there; otherwise you want to update that data.
First, insert the candidate rows into a temp table:
SELECT *
INTO #MYTEMP
FROM B
WHERE BOOKSLOANED NOT IN (SELECT BOOKSLOANED FROM A)
Second, insert those new rows into A:
INSERT INTO A
SELECT *
FROM #MYTEMP
And at last, write one simple UPDATE statement that updates all the data of A from B: any rows that changed are refreshed, and unchanged rows stay as they are.
You can also update from the #MYTEMP table.
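For what it's worth, a sketch of that final update, borrowing the UserID/BooksLoaned/BooksReturn column names from the earlier answer (treat them as assumptions):
UPDATE a
SET a.BooksReturn = b.BooksReturn
FROM A a
JOIN B b
ON a.UserID = b.UserID
AND a.BooksLoaned = b.BooksLoaned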
I have this table:
Table1:
id text
1 lala
And I want to take the first row and copy it, but with the id changed from 1 to 2.
Can you help me with this problem?
A SQL table has no concept of a "first" row. You can, however, select a row based on its characteristics. So the following would work:
insert into Table1(id, text)
select 2, text
from Table1
where id = 1;
As another note, when creating the table, you can have the id column be auto-incremented. The syntax varies from database to database. If id were auto-incremented, then you could just do:
insert into Table1(text)
select text
from Table1
where id = 1;
And you would be confident that the new row would have a unique id.
Kate - Gordon's answer is technically correct. However, I would like to know more about why you want to do this.
If your intent is to have the field increment with the insertion of each new row, manually setting the id column value isn't a great idea: it becomes very easy for two rows to conflict by attempting to use the same id at the same time.
I would recommend using an IDENTITY field for this in MS SQL Server (use an AUTO_INCREMENT field in MySQL). You could then do the insert as follows:
INSERT INTO Table1 (text)
SELECT text
FROM Table1
WHERE id = 1
SQL Server would automatically assign a new, unique value to the id field.
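For illustration, a minimal definition of such a table (the column length is an arbitrary assumption):
-- SQL Server: IDENTITY(1,1) starts at 1 and increments by 1 on each insert.
CREATE TABLE Table1 (
id INT IDENTITY(1,1) PRIMARY KEY,
text VARCHAR(100)
);
-- MySQL equivalent: id INT AUTO_INCREMENT PRIMARY KEY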
One attribute in a table of mine became corrupted after a certain point. I want to clear the pat_coun attribute on every record whose ID begins with 11 (a number, not text). So I don't want to get rid of any of the records in the database, just clear out the attribute pat_coun if its ID begins with 11.
DELETE pat_coun from myTable
WHERE id %11
Just want to make sure this is right before I go deleting stuff. Thanks.
To clear out an attribute, do NOT use the DELETE function! That deletes a row from your table!
You need to use UPDATE instead:
UPDATE myTable
SET pat_coun = NULL
WHERE id LIKE '11%'
If you want to delete a record (a row) you can use
DELETE FROM myTable
WHERE condition
If you just want to "clear" a particular column you should use
UPDATE myTable
SET pat_coun = 0 -- or NULL, or whatever you please
WHERE condition
For the condition, IMHO you should convert your number to a string and check it like this:
WHERE CONVERT(VARCHAR(20), pat_coun) LIKE '11%'
Try this:
update myTable
set pat_coun = null
where id like '11%'
I've got some duplicate records in a table because, as it turns out, Netezza does not support constraint checks on primary keys. That being said, I have some records where the information is exactly the same, and I want to delete just ONE of them. I've tried doing
delete from table_name where test_id=2025 limit 1
and also
delete from table_name where test_id=2025 rowsetlimit 1
However neither option works. I get an error saying
found 'limit'. Expecting a keyword
Is there any way to limit the records deleted by this query? I know I could just delete the record and reinsert it but that is a little tedious since I will have to do this multiple times.
Please note that this is not SQL Server or MySQL. This is for Netezza.
If it doesn't support either "DELETE TOP 1" or the "LIMIT" keyword, you may end up having to do one of the following:
1) Add some sort of auto-incrementing column (like an ID), making each row unique. I don't know if you can do that in Netezza after the table has been created, though.
2) Programmatically read the entire table with some programming language, eliminate the duplicates programmatically, then delete all the rows and insert them again. This might not be possible if they are referenced by other tables, in which case you might have to temporarily remove the constraint.
I hope that helps. Please let us know.
And for future reference; this is why I personally always create an auto-incrementing ID field, even if I don't think I'll ever use it. :)
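If rebuilding the table is acceptable, a third route is a window function, which Netezza supports: number the copies with ROW_NUMBER() and keep only the first. This is an untested sketch, and it assumes rows sharing a test_id are exact duplicates; the column list is a placeholder:
-- Rebuild the table keeping one copy per test_id.
CREATE TABLE table_name_dedup AS
SELECT test_id, col1, col2 -- list the real columns here, not the rn helper
FROM (
SELECT t.*, ROW_NUMBER() OVER (PARTITION BY test_id ORDER BY test_id) AS rn
FROM table_name t
) x
WHERE rn = 1;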
The below query works for deleting duplicates from a table.
DELETE FROM YOURTABLE
WHERE COLNAME1 = 'XYZ'
AND (COLNAME1, ROWID) NOT IN
(
SELECT COLNAME1, MAX(ROWID)
FROM YOURTABLE
WHERE COLNAME1 = 'XYZ'
GROUP BY COLNAME1
)
If the records are identical then you could do something like:
CREATE TABLE DUPES AS
SELECT col1, col2, col3, ..., coln FROM source_table WHERE test_id = 2025
GROUP BY 1, 2, 3, ..., n
DELETE FROM source_table WHERE test_id = 2025
INSERT INTO source_table SELECT * FROM DUPES
DROP TABLE DUPES
You could even create a sub-query to select all the test_ids HAVING COUNT(*) > 1 to automatically find the dupes in steps 1 and 3.
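A sketch of that sub-query (same table name as above):
SELECT test_id
FROM source_table
GROUP BY test_id
HAVING COUNT(*) > 1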
-- remove duplicates from the <<TableName>> table
delete from <<TableName>>
where rowid not in
(
select min(rowid) from <<TableName>>
group by col1, col2, col3
);
The GROUP BY 1, 2, 3, ..., n will eliminate the dupes on the insert into the temp table.
Is the use of rowid allowed in Netezza? As far as my knowledge is concerned, I don't think this query will execute in Netezza...
This is similar to this question, but it seems like some of the answers there aren't quite compatible with MySQL (or I'm not doing it right), and I'm having a heck of a time figuring out the changes I need. Apparently my SQL is rustier than I thought it was. I'm also looking to change a column value rather than delete, but I think at least that part is simple...
I have a table like:
rowid SERIAL
fingerprint TEXT
duplicate BOOLEAN
contents TEXT
created_date DATETIME
I want to set duplicate=true for all but the first (by created_date) of each group by fingerprint. It's easy to mark all of the rows with duplicate fingerprints as dupes. The part I'm getting stuck on is keeping the first.
One of the apps that populates the table does bulk loads of data, with multiple workers loading data from different sources, and the workers' data isn't necessarily partitioned by date, so it's a pain to try to mark these all as they come in (the first one inserted isn't necessarily the first one by date). Also, I already have a bunch of data in there I'll need to clean up either way. So I'd rather just have a relatively efficient query I can run after a bulk load to clean up than try to build it into that app.
Thanks!
MySQL needs to be explicitly told if the data you are grouping by is larger than 1024 bytes (see this link for details). So if your data in the fingerprint column is larger than 1024 bytes, you should set the max_sort_length variable (see this link for the allowed values, and this link for how to set it) to a larger number so that the GROUP BY won't silently use only part of your data for grouping.
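For example, to raise it for the current session (4096 is an arbitrary value; use one at least as large as your longest fingerprint):
SET SESSION max_sort_length = 4096;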
Once you're certain that MySQL will group your data properly, the following query will set the duplicate flag so that the first fingerprint record has duplicate set to FALSE/0 and any subsequent fingerprint records have duplicate set to TRUE/1:
UPDATE mytable m1
INNER JOIN (SELECT fingerprint
, MIN(rowid) AS minrow
FROM mytable m2
GROUP BY fingerprint) m3
ON m1.fingerprint = m3.fingerprint
SET m1.duplicate = m3.minrow != m1.rowid;
Please keep in mind that this solution does not take NULLs into account, and if it is possible for the fingerprint field to be NULL, you would need additional logic to handle that case.
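One possible way to handle that (my assumption, untested) is MySQL's NULL-safe equality operator <=>, which treats two NULLs as equal, in the join condition:
ON m1.fingerprint <=> m3.fingerprint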
How about a two-step approach, assuming you can go offline during a data load:
Mark every item as duplicate.
Select the earliest row from each group, and clear the duplicate flag.
Not elegant, but gets the job done.
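A minimal sketch of those two steps, reusing the mytable name from the answer above and assuming created_date is unique within each fingerprint group:
-- Step 1: mark every row as a duplicate.
UPDATE mytable SET duplicate = TRUE;
-- Step 2: clear the flag on the earliest row of each fingerprint group.
UPDATE mytable m1
JOIN (SELECT fingerprint, MIN(created_date) AS first_date
FROM mytable
GROUP BY fingerprint) m2
ON m1.fingerprint = m2.fingerprint
AND m1.created_date = m2.first_date
SET m1.duplicate = FALSE;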
Here's a funny way to do it:
SET #rowid := 0;
UPDATE mytable
SET duplicate = (rowid = #rowid),
rowid = (#rowid:=rowid)
ORDER BY rowid, created_date;
First set a user variable to the empty string, assuming this matches no fingerprint in your table.
Then use the MySQL UPDATE...ORDER BY feature to ensure that the rows are updated in order by fingerprint, then by created_date.
For each row, if the current fingerprint is not equal to the user variable @fingerprint, set duplicate to 0 (false). This will be true only on the first row encountered with a given fingerprint.
Then a dummy assignment sets fingerprint to its own value, setting @fingerprint to that value as a side effect.
As the UPDATE reaches the next row, if it is a duplicate of the previous row, its fingerprint will be equal to the user variable @fingerprint, and therefore duplicate will be set to 1 (true).
Edit: Now I have tested this, and I corrected a mistake in the line that sets duplicate.
Here's another way to do it, using MySQL's multi-table UPDATE syntax:
UPDATE mytable m1
JOIN mytable m2 ON (m1.fingerprint = m2.fingerprint AND m1.created_date < m2.created_date)
SET m2.duplicate = 1;
I don't know the MySQL syntax, but in T-SQL you just do:
UPDATE t1
SET duplicate = 1
FROM MyTable t1
WHERE rowid != (
SELECT TOP 1 rowid FROM MyTable t2
WHERE t2.fingerprint = t1.fingerprint ORDER BY created_date ASC
)
That may have some syntax errors, as I'm just typing off the cuff/not able to test it, but that's the gist of it.
MySQL version (not tested):
UPDATE MyTable t1
SET duplicate = 1
WHERE rowid != (
SELECT t2.rowid FROM MyTable t2
WHERE t2.fingerprint = t1.fingerprint
ORDER BY created_date ASC
LIMIT 1
)
Untested...
UPDATE TheAnonymousTable
SET duplicate = TRUE
WHERE rowid NOT IN
(SELECT T.rowid
FROM (SELECT MIN(created_date) AS created_date, fingerprint
FROM TheAnonymousTable
GROUP BY fingerprint
) AS M,
TheAnonymousTable AS T
WHERE M.created_date = T.created_date
AND M.fingerprint = T.fingerprint
);
The logic is that the innermost query returns the earliest created_date for each distinct fingerprint, as table alias M. The middle query determines the rowid value for each of those rows; it is a nuisance to have to do this (but necessary), and the code assumes that you won't get two records with the same fingerprint and timestamp. This gives you the rowid of the earliest record for each separate fingerprint. Then the outer query (the UPDATE) sets the duplicate flag on all rows whose rowid is not one of those earliest rows.
Some DBMS may be unhappy about doing (nested) sub-queries on the table being updated.
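In MySQL's case, the usual workaround for the resulting "You can't specify target table for update" error is to wrap the offending subquery in one more derived table, forcing it to be materialized (untested sketch):
UPDATE TheAnonymousTable
SET duplicate = TRUE
WHERE rowid NOT IN
(SELECT rowid FROM
(SELECT T.rowid
FROM (SELECT MIN(created_date) AS created_date, fingerprint
FROM TheAnonymousTable
GROUP BY fingerprint) AS M,
TheAnonymousTable AS T
WHERE M.created_date = T.created_date
AND M.fingerprint = T.fingerprint) AS keepers
);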