PostgreSQL: update sequence after importing data from CSV - sql

I have a database with 10 tables; I'm using a PostgreSQL database in pgAdmin 4 (Symfony back-end). I imported the data from 10 different CSV files, and in each data file I added an ID column and set its values myself, for the single fact that there are foreign keys: let's just say TABLE_1_ID is a foreign key in TABLE_2, TABLE_2_ID is a foreign key in TABLE_3, TABLE_3_ID is a foreign key in TABLE_4, and so on until the last table.
The data import from the CSV files worked, and I can display the data in my front-end.
Now that I'm trying to insert new data into any table via the platform's user interface, I'm getting a constraint error:
SQLSTATE[23505]: Unique violation: 7 ERROR: duplicate key value violates unique constraint "theme_pkey" DETAIL: Key (id)=(4) already exists.
Let's just say I'm trying to create a new row in TABLE_2: Doctrine ORM throws an error saying the id key already exists. It seems like my database didn't take into account the data that is already in it, as if the ID isn't being incremented from the existing rows.
I looked around and it seems this is common when you import your database from an outside source, but I can't find anything that gets me past this error message so I can move forward with my dev. Everything I found talks about updating a sequence, but nothing seemed to fit my problem.

After importing my data from CSV files into my PostgreSQL database using pgAdmin 4, the import succeeded: querying the data worked and each row had an id. In my mind everything was fine, until I tried to insert a new object into the database. It turned out my database was still starting the primary key at 1; it wasn't taking into account the imported data, even though the table clearly showed a row with an id of 1. I didn't fully understand what was going on, because I'm not an expert in SQL and the conventions and restrictions that apply to it; I knew just enough to get things working through an ORM.
After hours of research and reading documentation, I came to the conclusion that a thing called a sequence exists, and that each table has its own: for example, tableA and tableB have a tableA_id_seq and a tableB_id_seq.
All I had to do was get the max id from my table, check the next id the sequence would produce, and reset the sequence accordingly.
--- GETTING THE MAX ID VALUE FROM MY TARGETED TABLE
SELECT MAX(id) FROM my_table;
--- GETTING THE NEXT ID VALUE FROM THAT TABLE'S SEQUENCE
--- (note: nextval() also consumes the value it returns)
SELECT nextval('my_table_id_seq');
Let's just say the max id value in my table was 90, so in theory the next value is supposed to be 91, but when I ran the second query the next value was 1.
So I had to set the sequence's current value to my max value, so that the next generated id would be max + 1:
---- SETTING THE SEQUENCE SO THE NEXT VALUE FOLLOWS MY MAX VALUE
---- setval(seq, n) makes the next nextval() return n + 1, so pass MAX(id) itself
SELECT setval('my_table_id_seq', (SELECT MAX(id) FROM my_table));
Hope this helps the next person who runs into a similar issue.
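For a generic, reusable version, PostgreSQL can look the sequence up for you instead of you hard-coding the sequence name. A sketch, where `my_table` and `id` are placeholders for your table and key column:

```sql
-- Reset the sequence behind my_table.id to match the imported data.
-- pg_get_serial_sequence() returns the name of the sequence owned by
-- the column, so this works without knowing the '<table>_id_seq' name.
SELECT setval(
    pg_get_serial_sequence('my_table', 'id'),
    COALESCE(MAX(id), 1)
) FROM my_table;
```

If the table can be empty, the three-argument form `setval(seq, 1, false)` makes the next generated value 1 instead of 2.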

Related

Trying to copy a table from one database to another in SQL Server 2008 R2

I am trying to copy table data from a backup dummy database to our live SQL database (an accident happened in our program, Visma Business, where someone managed to overwrite 1300 customer names), but I am having a hard time figuring out the right code for this. I've looked around and yes, there are several similar problems, but I just can't get this to work even though I've tried different solutions.
Here is the simple code I used last time. In theory, all I need is the equivalent of MySQL's ON DUPLICATE KEY UPDATE, which would be MERGE on SQL Server? I just don't quite know what to write to get that MERGE to work.
INSERT [F0001].[dbo].[Actor]
SELECT * FROM [FDummy].[dbo].[Actor]
The error message i get with this is:
Violation of PRIMARY KEY constraint 'PK__Actor'. Cannot insert duplicate key in object 'dbo.Actor'.
What the error message says is simply "You can't insert a duplicate value into a column with a PRIMARY KEY constraint." If you already have all the information in your backup table, one option is TRUNCATE TABLE on the live table, which removes all rows while the table structure and its columns, constraints, indexes, and so on remain.
After that step you should follow this answer. Alternatively, I recommend a tool called Kettle, which is open source and easy to use for these kinds of data movements; it will save you a lot of work.
Here are the things that can cause this:
You have multiple rows in [FDummy].[dbo].[Actor] with the same value in the column that is going to be inserted into the primary key column of [F0001].[dbo].[Actor].
You have rows in [FDummy].[dbo].[Actor] with some value x in that column, and there is/are row(s) already in [F0001].[dbo].[Actor] with the same value x in the primary key column.
-- To check the first point; if this returns rows, you have a problem
SELECT ColumnGoingToBeMappedWithPK, COUNT(*)
FROM [FDummy].[dbo].[Actor]
GROUP BY ColumnGoingToBeMappedWithPK
HAVING COUNT(*) > 1;
-- To check the second point; if the count is greater than 0, you have a problem
SELECT COUNT(*)
FROM [FDummy].[dbo].[Actor] a
JOIN [F0001].[dbo].[Actor] b
  ON a.ColumnGoingToBeMappedWithPK = b.PrimaryKeyColumn;
The MERGE statement will possibly be the best option for you here, unless the primary key of the Actor table is reused after a previous record is deleted (i.e., it is not auto-incremented), in which case the record with id 13 in F0001.dbo.Actor may not be the same "actor" as the one in FDummy.dbo.Actor.
To use the statement with your code, it will look something like this:
begin transaction;

merge [F0001].[dbo].[Actor] as t      -- the destination
using [FDummy].[dbo].[Actor] as s     -- the source
on (t.[PRIMARYKEY] = s.[PRIMARYKEY])  -- replace with your primary key columns
when matched then
    update set t.columnname1 = s.columnname1,
               t.columnname2 = s.columnname2,
               t.columnname3 = s.columnname3
               -- repeat for all the columns you want to update
output $action, Inserted.*, Deleted.*;

rollback transaction; -- change to commit after testing
Further reading can be done at the sources below:
MERGE (Transact-SQL)
Inserting, Updating, and Deleting Data by Using MERGE
Using MERGE in SQL Server to insert, update and delete at the same time

MSSQL give new primary key in table 1 and update corresponding keys in table 2

I have a product and product_detail table pair that I need to copy data from, updating the primary keys. Essentially, I'm trying to copy last night's data from both tables, give the rows new primary keys so they don't clash with the current data, and insert them back into the tables with some of the information updated: the pk/fk, the update_date, and two fields flagged as something different.
I can't make changes to the tables, so I can't use ON UPDATE CASCADE. We have a file that does an end-of-day batch and inserts the data into the tables, and another file that is updated every time a transaction happens during the day. So my thought was to copy last night's data, update the keys so they don't clash, and then update that new set of data with whatever comes in from the intraday file. The way it is now, our users have to wait until the end of the day to see where we are; with intraday updates, they can see the balance as the day progresses.
I believe I will have to pull the rows out of the main table, get a new pk for each, update the other fields, and pass that new pk to the second table to replace the original fk it has, row by row.
Am I heading in the right direction?
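A set-based alternative to row-by-row processing: in SQL Server, the OUTPUT clause of a MERGE statement can reference the source table's columns, which lets you capture an old-key-to-new-key mapping in one pass and then re-key the detail rows. A sketch under simplified assumptions: the tables are product(id IDENTITY, name, update_date) and product_detail(product_id, qty), and last night's copies live in hypothetical snapshot tables; real column and table names will differ.

```sql
DECLARE @keymap TABLE (old_id int, new_id int);

-- Copy last night's product rows, letting IDENTITY assign new ids,
-- and record the old -> new id mapping via OUTPUT.
MERGE dbo.product AS t
USING (SELECT id, name FROM dbo.product_snapshot) AS s  -- hypothetical copy
ON 1 = 0                              -- never matches: every source row inserts
WHEN NOT MATCHED THEN
    INSERT (name, update_date) VALUES (s.name, GETDATE())
OUTPUT s.id, inserted.id INTO @keymap (old_id, new_id);

-- Re-key the copied detail rows with the new parent ids.
INSERT INTO dbo.product_detail (product_id, qty)
SELECT m.new_id, d.qty
FROM dbo.product_detail_snapshot d    -- hypothetical copy
JOIN @keymap m ON d.product_id = m.old_id;
```

The `ON 1 = 0` predicate forces every source row down the NOT MATCHED branch; a plain INSERT ... OUTPUT cannot see source columns, which is why the MERGE form is used here.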

Deleting record from sql table and update sql table

I am trying to delete a record in SQLite. I have four records, record1 through record4,
with id as the primary key,
so it auto-increments for each record I insert. Now when I delete record 3, the primary key values are not renumbered. What can I do to renumber the ids based on the records I delete?
I want the ids to be 1, 2, 3 after I delete record 3 from the database; right now they are 1, 2, 4. Is there an SQL query to change this? I tried this one:
DELETE FROM TABLE_NAME WHERE name = ?
Note: I am implementing this in Xcode.
I don't know why you want this, but I would recommend leaving these IDs as they are.
What is wrong with having IDs as 1,2,4?
Also, you can potentially break referential integrity if these ID values are used as foreign keys somewhere else.
Also, please refer to this page for a better understanding of how autoincrement fields work:
http://sqlite.org/autoinc.html
The point of auto-increment is always to create a new unique ID, not to fill the gaps created by deleting records.
EDIT
You can get this behaviour with a different table design: don't actually delete records, but mark them with a "del" flag instead.
For example, SELECT ... WHERE del = 0 finds all active records, and leaving off the WHERE returns all records, so the IDs remain unaffected. When looping through the result array you can skip deleted rows with "if del = 1 continue", so the array you work with stays in consecutive order.
It's very flexible. Depending on the SELECT you get:
all active records
all deleted records
all records
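A minimal sketch of that soft-delete design in SQLite (the table and column names are illustrative):

```sql
CREATE TABLE records (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    del  INTEGER NOT NULL DEFAULT 0   -- 0 = active, 1 = deleted
);

-- "Delete" record 3 without touching any ids
UPDATE records SET del = 1 WHERE id = 3;

SELECT * FROM records WHERE del = 0;  -- active records only
SELECT * FROM records WHERE del = 1;  -- deleted records
SELECT * FROM records;                -- everything, ids unaffected
```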

find duplicate record in BODI

I am loading a file data into a table using BODI (Business Objects Data Integrator).
Unfortunately the input file has a duplicate record which is giving unique constraint error while inserting in the table. Is there a way to find that duplicate record?
Import into a table with the constraint disabled or removed, or into a table with a similar structure but without the constraint, and then run the query:
SELECT uniquefield(s) FROM tablename GROUP BY uniquefield(s) HAVING COUNT(*) > 1;
Sometimes the error message shows the duplicate key, too.
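If you need to see the full duplicate rows rather than just the keys, a window function over the unconstrained table can isolate the extra copies. A sketch, assuming the file was loaded into a staging table named `stage` with key column `uniquefield` (both names are placeholders):

```sql
-- Rows beyond the first copy of each key value
SELECT *
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY uniquefield
                              ORDER BY uniquefield) AS rn
    FROM stage s
) numbered
WHERE rn > 1;
```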
You can use a Table Comparison transform and set the option "Input contains duplicate keys". This will load each record only once even though the source has duplicate records.

can I insert a copy of a row from table T into table T without listing its columns and without primary key error?

I want to do something like this:
INSERT INTO T SELECT * FROM T WHERE Column1 = 'MagicValue' -- (multiple rows may be affected)
The problem is that T has a primary key column, so this causes an error as if I were trying to set the primary key. And frankly, I don't want to set the primary key either; I want to create entirely new rows with new primary keys, with the rest of the fields copied over from the original rows.
This is supposed to be generic code applicable to various tables. So if there is no nice way of doing this, I will just write code to dynamically extract the column names, construct the list, etc. But maybe there is? Am I really the first person trying to duplicate rows in a database?
I'm assuming that by "primary key" you mean an identity or GUID column that auto-assigns or auto-increments.
Without some fairly fancy dynamic SQL, you can't do what you are after: if you want to insert everything but the identity field, you need to list the fields.
If you want to specify a value for that field instead, you need to list all the fields in both the SELECT and the INSERT, AND turn on IDENTITY_INSERT.
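The dynamic-SQL route can be sketched like this in SQL Server (assumes SQL Server 2017+ for STRING_AGG; the table name dbo.T and the WHERE filter are placeholders from the question):

```sql
-- Build a column list for dbo.T that excludes identity columns,
-- then duplicate the matching rows through dynamic SQL.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(name)), ', ')
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.T')
  AND is_identity = 0;

SET @sql = N'INSERT INTO dbo.T (' + @cols + N') '
         + N'SELECT ' + @cols + N' FROM dbo.T '
         + N'WHERE Column1 = ''MagicValue'';';

EXEC sp_executesql @sql;
```

On versions before 2017 the same list can be built with the FOR XML PATH trick instead of STRING_AGG.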
You don't gain anything from duplicating a row in a database (given that you aren't trying to set the primary key). It would be wiser, and would avoid problems, to have another column called "amount" or something similar.
Something like:
UPDATE T SET Amount = Amount + 1 WHERE Column1 = 'MagicValue'
or, if it can increase by more than 1 at a time:
Update T SET Amount = Amount * 2 WHERE Column1 = 'MagicValue'
I'm not sure exactly what you're trying to do, but if the above doesn't work for your case, I think your design requires a new table to insert into.
EDIT: Also, as mentioned in the comments under your question, a generic insert doesn't really make sense. For this to work, you need the same number of fields holding the same values, which suggests they should also have the same names (even if that isn't strictly required); it would basically be the same table structure twice.