I have a table with 3 columns, 2 of which are auto-incrementing. I have a text file with various content that I would like to insert into the 3rd column. In that text, values are separated as such:
"value1", "value2", "value3", "etc",
How would I go about inserting all those values into a specific column, while being able to use the above format for my initial content (or something similar that I could produce with a "replace all"), hopefully with a single command in phpMyAdmin?
Let me know if my question is not clear!
Thanks in advance
Use a regular expression to convert each value into a full Insert query on a separate row:
INSERT INTO mytable (column3) VALUES ('value1')
Something like this should do it:
Match: "\([^"]*\)",
Replace with: INSERT INTO mytable (column3) VALUES ('\1');\n
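For example, applied to the sample line from the question, that search-and-replace yields one runnable statement per value (mytable and column3 are placeholder names carried over from the pattern):
INSERT INTO mytable (column3) VALUES ('value1');
INSERT INTO mytable (column3) VALUES ('value2');
INSERT INTO mytable (column3) VALUES ('value3');
INSERT INTO mytable (column3) VALUES ('etc');
The whole result can then be pasted into phpMyAdmin's SQL tab and executed as a single batch.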
First, if at all possible, I would get that text file into the right format, which would be one column with each value on a separate line. Otherwise, you need to first write a function to split the data into some type of one-column temp table and then insert by joining to the target table. If the text file is in a reasonable format instead of a comma-delimited mess, then you can insert it directly in a bulk operation in most databases.
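For instance, once the file is reshaped to one value per line, the temp-table route could look like this in MySQL (a sketch only; staging, values.txt, mytable, and column3 are placeholder names, and LOAD DATA LOCAL must be enabled on the server):
CREATE TEMPORARY TABLE staging (val TEXT);
LOAD DATA LOCAL INFILE 'values.txt'
INTO TABLE staging
LINES TERMINATED BY '\n';
INSERT INTO mytable (column3)
SELECT val FROM staging;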
Imagine there is a table with 1000 columns.
I want to add a row with values for 20 columns and assume NULLs for the rest.
INSERT VALUES syntax can be used for that:
INSERT INTO `tbl` (
date,
p,
... # 18 more names
)
VALUES(
DATE('2020-02-01'),
'p3',
... # 18 more values
)
The problem is that it is hard to tell which value corresponds to which column, and if you need to change or comment out some value, you have to make edits in two places.
INSERT SELECT syntax also can be used:
INSERT INTO `tbl`
SELECT
DATE('2020-02-01') AS date,
'p3' AS p,
... # 18 more value AS column
... # 980 more NULL AS column
Then if I need to comment out some column just one line has to be commented out.
But obviously having to set 980 NULLs is an inconvenience.
What is the way to combine both approaches? To achieve something like:
INSERT INTO `tbl`
SELECT
DATE('2020-02-01') AS date,
'p3' AS p,
... # 18 more value AS column
The query above doesn't work; the error is: Inserted row has wrong column count; Has 20, expected 1000.
Your first version is really the only one you should ever use for SQL inserts. It ensures that every target column is explicitly mentioned, and it is unambiguous about where each literal in the VALUES clause goes. You can use the version that does not explicitly mention column names, and at first it might seem you are saving yourself some code. But realize that a column list is still being used: the list of all the table's columns, in whatever order the table definition gives them. Your code might work today, but any addition or removal of a column, or change of column order, can totally break your insert script. For this reason, most will strongly advocate for the first version.
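A quick sketch of that failure mode, using a hypothetical three-column table (demo and its columns are made-up names; dataset qualifiers omitted):
CREATE TABLE demo (a INT64, b INT64, c INT64);
-- Positional: works only while demo has exactly these three columns, in this order.
INSERT INTO demo VALUES (1, 2, 3);
-- Explicit: keeps working if a nullable column is added or the columns are reordered.
INSERT INTO demo (a, b, c) VALUES (1, 2, 3);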
You can try the following solution; it is a combination of the two approaches you highlighted in your question:
INSERT INTO `tbl` (date, p, ... # 18 more column names)
SELECT
DATE('2020-02-01') AS date,
'p3' AS p,
... # 18 more value AS column
A couple of things to consider here:
The other 980 columns must be nullable, i.e., they must be able to hold NULL values.
The 18 columns in the INSERT column list and in the SELECT must be in the same order, so the data is inserted into the correct columns.
To avoid confusion, use aliases in the SELECT that match the INSERT column names; that removes any ambiguity.
Hopefully it will work for you.
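To see the whole pattern without the elided 18 columns, here is a minimal sketch against a hypothetical five-column table (demo5, a, b, and c are made-up names; dataset qualifiers omitted):
CREATE TABLE demo5 (date DATE, p STRING, a INT64, b INT64, c INT64);
INSERT INTO demo5 (date, p)
SELECT
  DATE('2020-02-01') AS date,
  'p3' AS p;
-- a, b, and c are not listed, so they are set to NULL automatically.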
In BigQuery, the best way to do what you're describing is to first load to a staging table. I'll assume you can get the values you want to insert into JSON format with keys that correspond to the target column names.
values.json
{"date": "2020-01-01", "p": "p3", "column": "value", ... }
Then generate a schema file for the target table and save it locally
bq show --schema project:dataset.tbl > schema.json
Load the new data to the staging table using the target schema. This gives you "named" null values for each column present in the target schema but missing from your json, bypassing the need to write them out.
bq load --replace --source_format=NEWLINE_DELIMITED_JSON \
project:dataset.stg_tbl values.json schema.json
Now the insert select statement works every time
insert into `project.dataset.tbl`
select * from `project.dataset.stg_tbl`
Not a pure SQL solution, but I managed this by loading my staging table with data and then running something like:
from google.cloud import bigquery

client = bigquery.Client()

# Fetch both tables and map column name -> schema field for each.
table1 = client.get_table(f"{project_id}.{dataset_name}.table1")
table1_col_map = {field.name: field for field in table1.schema}
table2 = client.get_table(f"{project_id}.{dataset_name}.table2")
table2_col_map = {field.name: field for field in table2.schema}

# Merge the maps; table1's fields win for any column common to both.
combined_schema = {**table2_col_map, **table1_col_map}

# Widen table1 to the combined schema and push the update.
table1.schema = list(combined_schema.values())
client.update_table(table1, ["schema"])
Explanation:
This retrieves the schemas of both tables and converts each into a dictionary with the column name as key and the actual field info from the SDK as value. The two maps are then combined with dictionary unpacking (the order of unpacking determines which table's columns take precedence when a column is common to both). Finally, the combined schema is assigned back to table1 and used to update the table, adding the missing columns with NULLs.
I have a list of about 22,000 ids that I would like to insert into one SQL table. The table contains only one column which will contain all of the 22,000 ids.
How can I populate the column with all of these values in one query? Thanks.
It depends (as usual) on where you have the values.
If the values reside in a table it is just insert into <yourTargetTable> select <yourColumns> from <yourSourceTable>.
If you have the values in a file, one way could be to load it with bteq's .import command. See an example here: https://community.teradata.com/t5/Tools-Utilities/BTEQ-examples/td-p/2466
Other options: SQL-Assistant, TD-Studio, TPT, Easy Loader ...
Search for teradata import data from text file and you'll get a lot of answers.
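A minimal BTEQ sketch of that import, assuming the 22,000 ids sit one per line in ids.txt and that yourTargetTable and its id column are placeholder names:
.IMPORT VARTEXT ',' FILE = ids.txt
.REPEAT *
USING (id VARCHAR(20))
INSERT INTO yourTargetTable (id) VALUES (:id);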
I have a CSV file which consists of one column. I want to insert (not import) all the elements in that column into a database table. I know that if I wanted to insert only a few elements, I could use the statement below to insert them individually.
INSERT INTO table (column_name)
VALUES (element1);
But is there a method that I can insert all the elements at once?
You can just comma separate the values like below. I sometimes use Excel to do the formatting if you have a lot of values.
INSERT INTO table
(column_name)
VALUES
(element1), (element2)
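For example, a helper column in Excel with a formula like ="('"&A1&"')," (hypothetical; A1 holds the first element) wraps each value into a ready-made row. Paste the results under VALUES and change the last comma to a semicolon:
INSERT INTO table (column_name) VALUES
('element1'),
('element2'),
('element3');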
I have an INVOICE table.
I want to insert values without specifying column names.
Using SQL Server, I have tried this but it is not working. Please help:
INSERT INTO INVOICE
VALUES( 1,1,KEYBOARD,1,15,5,75)
As long as you have the right number of columns in your INSERT statement, and as long as all the values except KEYBOARD are of a numeric data type, and as long as you have suitable permissions, this should work:
INSERT INTO INVOICE VALUES( 1,1,'KEYBOARD',1,15,5,75);
SQL requires single quotes around text values.
But not using column names isn't a good practice. It's not unheard of for people to change the order of columns in a table. Changing the order of columns isn't a good practice, either, but some people insist on doing it anyway.
If somebody does that, and swaps the 5th and 7th columns in your table, your INSERT statement will still succeed (both those columns are numeric), but the INSERT will screw up your data.
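A sketch of that silent failure, with hypothetical column names since the question doesn't show the INVOICE definition:
-- Suppose the columns were originally (..., Qty, UnitPrice, Discount, Total)
-- and someone swaps the 5th and 7th columns. This still runs, but 15 now
-- lands in Total and 75 in UnitPrice:
INSERT INTO INVOICE VALUES (1, 1, 'KEYBOARD', 1, 15, 5, 75);
-- Naming the columns pins each value to its column, whatever the order:
INSERT INTO INVOICE (InvoiceID, ProductID, Item, Qty, UnitPrice, Discount, Total)
VALUES (1, 1, 'KEYBOARD', 1, 15, 5, 75);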
Why would you want to do this? Not specifying column names is bad coding practice.
In your case, though, KEYBOARD needs to be surrounded by single quotes:
INSERT INTO INVOICE VALUES( 1,1, 'KEYBOARD',1,15,5,75)
If you just don't want to type the column names, you can get them easily in SQL Server Management Studio. Open the Object Explorer, expand the database, and expand "Tables". Select the Invoice table; one of its nodes is "Columns".
Just click on the "Columns" node (no need to expand it) and drag it into a query window. It will insert the list of columns.
Yes, you can insert values directly into the table as follows:
insert into `schema`.`tablename` values(val1,val2,val3);
INSERT INTO INVOICE VALUES( 1,1,'KEYBOARD',1,15,5,75);
You forgot to include the single quotes around KEYBOARD; text values require single quotes in SQL.
I am trying to add an additional column and value to an existing insert query - both integers, and running into trouble.
Anything to look out for?
You don't give much to go on in your question, but it is best practice to list all the columns you intend to supply values for, so make sure you add the new columns to the column list as well as to the VALUES list:
insert into YourTable (col1, col2,..., newCol1, newCol2)
VALUES (1,2,...,new1, new2)
Make sure you get the column names spelled correctly and that the table actually has those new columns in it.
Make sure the column name sequence matches the sequence of your insert data.
Example
INSERT INTO TABLENAME
(ColumnName1,ColumnName2) VALUES (1,'data')
Becomes
INSERT INTO TABLENAME
(ColumnName1,ColumnName2,ColumnNameNEW) VALUES (1,'data','newcolumndata')
Notice both the new column name and the new data are in the third position in the sequence.