I have a CSV file which consists of one column. I want to insert (not import) all the elements in that column into a database table. I know that if I wanted to insert a few elements, I could use the statement below to insert them individually.
INSERT INTO table (column_name)
VALUES (element1);
But is there a way to insert all the elements at once?
You can just comma-separate the value lists, as shown below. If you have a lot of values, Excel can help with the formatting.
INSERT INTO table
(column_name)
VALUES
(element1), (element2);
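For instance, with concrete values the batch might look like this (a minimal sketch; my_table and value are placeholder names, not from the question):

-- hypothetical table and column names, one quoted value per row
INSERT INTO my_table (value)
VALUES
('alpha'),
('beta'),
('gamma');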
I have a table in BigQuery with a very complex schema (up to 2,300 columns).
Among these columns I have RECORD-type fields, some of them in REPEATED mode.
The INSERT statement is generated by a processor in the code,
but when I test this statement in the BigQuery web UI I see an error.
After investigating the issue, I found that the array is not being inserted in the appropriate way.
INSERT INTO Table_X (RECORD_FIELD) VALUES (
...
STRUCT([STRUCT(X), STRUCT(Y)]) as property_z
...
Is this format correct for inserting REPEATED fields?
INSERT INTO TABLE_NAME (columns) VALUES (STRUCT([ STRUCT(...), STRUCT(...) ]), ...)
Repeated fields are arrays, so you want to insert them as arrays:
INSERT INTO TABLE_NAME (repeated_column)
VALUES (ARRAY[ STRUCT(...), STRUCT(...) ]);
Note that the array is a single column; you can include values for other columns in the INSERT as well.
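For illustration, a sketch of a complete statement, assuming Table_X has an INT64 column id and that property_z is a REPEATED RECORD with fields x and y (the types and field names are guesses, not the original schema):

-- hypothetical columns: id (INT64) and property_z (REPEATED RECORD with fields x, y)
INSERT INTO Table_X (id, property_z)
VALUES (1, [STRUCT(1 AS x, 'a' AS y), STRUCT(2 AS x, 'b' AS y)]);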
I have a list of about 22,000 ids that I would like to insert into one SQL table. The table contains only one column which will contain all of the 22,000 ids.
How can I populate the column with all of these values in one query? Thanks.
It depends (as usual) on where the values are.
If the values reside in a table, it is just insert into <yourTargetTable> select <yourColumns> from <yourSourceTable>.
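For example (the table and column names here are placeholders):

-- a minimal sketch, assuming the ids already sit in source_table.id
INSERT INTO target_table (id)
SELECT id FROM source_table;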
If you have the values in a file, one way could be to load it with BTEQ's .import command. See an example here: https://community.teradata.com/t5/Tools-Utilities/BTEQ-examples/td-p/2466
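A rough illustration of that BTEQ route (the file name, column length, and table name below are assumptions; check the linked examples for the exact syntax):

.IMPORT VARTEXT ',' FILE = ids.txt
.REPEAT *
USING (id_val VARCHAR(20))
INSERT INTO target_table (id) VALUES (:id_val);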
Other options: SQL-Assistant, TD-Studio, TPT, Easy Loader ...
Search for "teradata import data from text file" and you'll get a lot of answers.
I have a local db into which I'm trying to insert multiple rows of data, but I do not want duplicates. I do not have a second db that I'm trying to insert from; I have an SQL file. The structure of the db I'm inserting into is this:
(db) artists
(table) names -> ID | ArtistName | ArtistURL | Modified
I am trying to do this insertion:
INSERT names (ArtistName, Modified)
VALUES (name1, date),
(name2, date2),
...
(name40, date40)
The question is: using SQL, how can I insert this list of data while avoiding duplicates, by checking against a specific column?
Duplicate what? Duplicate name? Duplicate row? I'll assume no dup ArtistName.
Have UNIQUE(ArtistName) (or PRIMARY KEY) on the table.
Use INSERT IGNORE instead of INSERT.
(No LEFT JOIN, etc. is needed.)
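A sketch of that approach (the dates and the exact key definition below are assumptions):

ALTER TABLE names ADD UNIQUE (ArtistName);

INSERT IGNORE INTO names (ArtistName, Modified)
VALUES ('name1', '2015-01-01'),
       ('name2', '2015-01-02');

Rows whose ArtistName already exists are silently skipped instead of raising a duplicate-key error.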
I ended up following the advice of #Hart CO a little bit by inserting all my values into a completely new table. Then I used this SQL statement:
SELECT ArtistName
FROM testing_table
WHERE NOT EXISTS
(SELECT ArtistName FROM names WHERE
names.ArtistName = testing_table.ArtistName)
This gave me all the artist names that were in my data and not in the names table.
I then exported to an SQL file and adjusted the INSERT a little bit to insert into the names table with the corresponding data.
INSERT IGNORE INTO `names` (ArtistName) VALUES
*all my values from the exported data*
Here (ArtistName) could be any of the columns returned; for example,
(ArtistName, ArtistURL, Modified), as long as each row of values in the export has three values.
This is probably not the most efficient, but it worked for what I was trying to do.
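For reference, with the UNIQUE key on ArtistName suggested above, the export-and-adjust steps could probably be collapsed into a single statement (a sketch using the same table names):

INSERT IGNORE INTO names (ArtistName)
SELECT ArtistName FROM testing_table;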
I am trying to add an additional column and value to an existing insert query - both integers, and running into trouble.
Anything to look out for?
You don't give much to go on in your question:
I am trying to add an additional column and value to an existing insert query - both integers, and running into trouble.
Anything to look out for?
It is best practice to list all the columns you intend to include values for, so make sure you add the new columns to the column list as well as to the VALUES list:
INSERT INTO YourTable (col1, col2, ..., newCol1, newCol2)
VALUES (1, 2, ..., new1, new2)
Make sure you get the column names spelled correctly and that the table actually has those new columns in it.
Make sure the column name sequence is the same as your insert data sequence.
Example
INSERT INTO TABLENAME
(ColumnName1,ColumnName2) VALUES (1,'data')
Becomes
INSERT INTO TABLENAME
(ColumnName1,ColumnName2,ColumnNameNEW) VALUES (1,'data','newcolumndata')
Notice both the new column name and the new data are in the third position in the sequence.
I have a table with 3 columns, 2 of which are auto-incrementing. I have a text file with various content that I would like to insert into the 3rd column. In that text, values are separated as such:
"value1", "value2", "value3", "etc",
How would I go about inserting all those values into a specific column, while being able to use the above format for my initial content (or something similar that I could achieve with a "replace all"), hopefully with a single command in phpMyAdmin?
Let me know if my question is not clear!
Thanks in advance
Use a regular expression to convert each value into a full INSERT query on a separate line:
INSERT INTO mytable (column3) VALUES ('value1')
Something like this should do it:
Match: "\([^"]*\)",
Replace with: INSERT INTO mytable (column3) VALUES ('\1') \n
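Applied to the sample line above, that replacement would produce one statement per value, roughly:

INSERT INTO mytable (column3) VALUES ('value1')
INSERT INTO mytable (column3) VALUES ('value2')
INSERT INTO mytable (column3) VALUES ('value3')
INSERT INTO mytable (column3) VALUES ('etc')

If you also append a semicolon in the replacement string, the generated statements can be run as one batch in phpMyAdmin.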
First, I would, if at all possible, get that text file into the right format, which would be one column with each value on a separate line. Otherwise, you need to first write a function to split the data into some type of one-column temp table and then insert by joining to the table. If the text file is in a reasonable format instead of a comma-delimited mess, then you can directly insert it in a bulk operation in most databases.
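For the MySQL/phpMyAdmin case specifically, assuming the file has first been cleaned to one bare value per line, the bulk load might look roughly like this (values.txt is a placeholder file name, and LOCAL infile loading must be enabled on the server):

-- a sketch: load each line of values.txt into column3 of mytable
LOAD DATA LOCAL INFILE 'values.txt'
INTO TABLE mytable
LINES TERMINATED BY '\n'
(column3);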