Insert same information to all stores - SQL

I have a lot of stores in my database, and I have some similar data that has to be in all of the stores. Here is my example:
INSERT INTO [dbo].[stores]
([identifiers],
[sales_price],
[discount],
[store])
VALUES ('9788276911',
99,
20,
'store121')
Is there any way I can insert this data into all stores and not only 'store121'? Just looking for an easy way out here, really :)

First, if you don't have your store names in a table, you should create a table and populate it with the names (copy/paste from your Excel).
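For example, a minimal sketch of such a lookup table (the table and column names are only assumptions chosen to match the query below; adjust the column length to your data):
CREATE TABLE StoreNames ([Store] VARCHAR(50));
INSERT INTO StoreNames ([Store])
VALUES ('store1'), ('store2'), ('store121');  -- paste your full list of store names here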
If we assume your names are in a table StoreNames, column Store, you can use a query like this to insert the same data into the stores table for all your stores:
INSERT INTO [dbo].[stores]
([identifiers],
[sales_price],
[discount],
[store])
SELECT '9788276911',
99,
20,
[store]
FROM StoreNames

update stores
set identifiers = '9788276911',
sales_price = 99,
discount = 20
This will update all records.
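If only some of the stores should be touched, the same statement can be restricted with a WHERE clause (a sketch; the store names below are placeholders):
update stores
set identifiers = '9788276911',
sales_price = 99,
discount = 20
where store in ('store1', 'store2', 'store3')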

Using the solution proposed by @Nenad Zivkovic, you can also enumerate your 15 stores in a row constructor instead of reading the stores table, if that makes it easier for you:
INSERT INTO [dbo].[stores]
([identifiers],
[sales_price],
[discount],
[store])
SELECT '9788276911',
99,
20,
[store]
FROM (values
('store1'),('store2'),('store3'),('store4')) x(store)


Snowflake: how can we loop over each row of a temp table and insert its values into another table, where each field with its value is a single row?

We are loading data into a fact table. The original temporary table on Snowflake has the row's keys plus one column per survey question, where the indicator_nbr fields are the questions asked within a survey.
We are using data modelling techniques to build our warehouse database, so the data will be added to a fact table with each field and its value as a single row: one row for indicator_1 and its value, then the same for indicators 2 and 3, and so on if there are other questions. Of course there is other metadata to be added, like load_dt and record_src, but they are not a problem.
The current script is doing the following:
Get the fields into an array => fields_array = ['indicator_1', 'indicator_2', 'indicator_3']
A loop will run over the array and add each field with its value for each row. So if we have 100 rows, we will run 300 inserts, one at a time:
for (var col_num = 0; col_num < fields_array.length; col_num = col_num + 1) {
    var COL_NAME = fields_array[col_num];
    var field_value_query = "INSERT INTO SAT_FIELD_VALUE SELECT md5(id), CURRENT_TIMESTAMP(), NULL, 'SRC', " + COL_NAME + ", md5(foreign_key_field) FROM " + TEMP_TABLE_NAME;
    snowflake.execute({sqlText: field_value_query});  // execute the generated INSERT (one statement per field)
}
As mentioned in the comment on this post showing the full script, it would be better to build a single INSERT statement by looping over the rows and concatenating the values, instead of issuing one INSERT per field.
There are two issues with the suggested solution:
There is a size limit on a query in Snowflake (it should be less than 1 MB);
if we are going to loop over each field and concatenate the values, we still have to run a SELECT against the temp table to get each column's value, so there is no real optimization; we might reduce the time a little, but not by much.
EDIT: A possible solution
I was thinking of running a SQL query that selects everything from the temp table, does the hashing and so on, and saves the result into an array after transposing it, but I have no idea how to do it.
Not sure if this is what you're looking for, but it seems as though you just want to do an unpivot:
Setup example scenario
create or replace transient table source_table
(
id number,
indicator_1 varchar,
indicator_2 number,
indicator_3 varchar
);
insert overwrite into source_table
values (1, 'Test', 2, 'DATA'),
(2, 'Prod', 3, 'DATA'),
(3, 'Test', 1, 'METADATA'),
(4, 'Test', 1, 'DATA')
;
create or replace transient table target_table
(
hash_key varchar,
md5 varchar
);
Run insert
insert into target_table
select
name_col as hash_key,
md5(id)
from (select
id,
indicator_1,
indicator_2::varchar as indicator_2,
indicator_3
from source_table) unpivot (val_col for name_col in (indicator_1, indicator_2, indicator_3))
;
This results in a target_table that looks like this:
+-----------+--------------------------------+
|HASH_KEY |MD5 |
+-----------+--------------------------------+
|INDICATOR_1|c4ca4238a0b923820dcc509a6f75849b|
|INDICATOR_2|c4ca4238a0b923820dcc509a6f75849b|
|INDICATOR_3|c4ca4238a0b923820dcc509a6f75849b|
|INDICATOR_1|c81e728d9d4c2f636f067f89cc14862c|
|INDICATOR_2|c81e728d9d4c2f636f067f89cc14862c|
|INDICATOR_3|c81e728d9d4c2f636f067f89cc14862c|
|INDICATOR_1|eccbc87e4b5ce2fe28308fd9f2a7baf3|
|INDICATOR_2|eccbc87e4b5ce2fe28308fd9f2a7baf3|
|INDICATOR_3|eccbc87e4b5ce2fe28308fd9f2a7baf3|
|INDICATOR_1|a87ff679a2f3e71d9181a67b7542122c|
|INDICATOR_2|a87ff679a2f3e71d9181a67b7542122c|
|INDICATOR_3|a87ff679a2f3e71d9181a67b7542122c|
+-----------+--------------------------------+
It is a great scenario to use INSERT ALL:
INSERT ALL
INTO dst_tab(hash_key, md5) VALUES (indicator_1, md5)
INTO dst_tab(hash_key, md5) VALUES (indicator_2, md5)
INTO dst_tab(hash_key, md5) VALUES (indicator_3, md5)
SELECT MD5(id) AS md5, indicator_1, indicator_2::STRING AS indicator_2, indicator_3
FROM src_tab;

How to insert direct values into a hive table?

I am new to Hive. I just wanted to know how I can insert data into a Hive table directly.
Create table t1 ( name string)
and I want to insert a value, e.g. name = 'John'.
But in all the documentation I have seen, there isn't any example that inserts data directly into the table. Either I need to create a file internally or externally, add the value 'John', and load this data into the table, or I can load data from another table.
My goal is to add data directly into the Hive table by providing the values in the statement. I have provided an Oracle example of the SQL query I want to achieve:
INSERT INTO t1 (name)
values ('John')
I want an equivalent statement to the above in Hive.
You can use Hive's table-generating functions, like explode() or stack().
Example
Assume the target table structure is (name STRING, age INT).
INSERT INTO TABLE target_table
SELECT STACK(
2,           -- number of records
'John', 80,  -- record 1
'Bill', 61   -- record 2
)
FROM dual    -- any table that already exists
LIMIT 2;     -- number of records! You have to add this line!
That will add 2 records to your target_table.
As of the latest version of Hive, INSERT INTO ... VALUES (...) is not supported. The enhancement to the insert/update/delete syntax is under development. Please look at Implement insert, update, and delete in Hive with full ACID support.
Inserting values into a table is now supported by Hive from version 0.14.
CREATE TABLE students (name VARCHAR(64), age INT, gpa DECIMAL(3, 2)) CLUSTERED BY (age) INTO 2 BUCKETS STORED AS ORC;
INSERT INTO TABLE students VALUES ('fred flintstone', 35, 1.28), ('barney rubble', 32, 2.32);
More can be found at https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-InsertingvaluesintotablesfromSQL
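So with Hive 0.14 or later, the statement from the question works essentially as written (assuming the t1 table defined above):
INSERT INTO TABLE t1 VALUES ('John');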

Insert into more tables based on the first inserted primary key

I have two tables. Products(id_product, name) and Images(id_image, id_product, image).
How can I INSERT a product and its images in a single query, inserting the newly inserted id_product into the corresponding id_product column of Images?
Product(1, 'Toy')
Image(1, 1, 'image.jpg')
Image(2, 1, 'image-2.jpg')
Image(3, 1, 'image-3.jpg')
Something like this. I need it to be in a single query.
You can use a stored procedure to handle the SQL statements.
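For example, a minimal sketch of such a procedure, assuming SQL Server and that id_product is an IDENTITY column (the procedure and parameter names are made up; in MySQL you would use LAST_INSERT_ID() instead of SCOPE_IDENTITY()):
CREATE PROCEDURE AddProductWithImages
    @name   VARCHAR(100),
    @image1 VARCHAR(255),
    @image2 VARCHAR(255),
    @image3 VARCHAR(255)
AS
BEGIN
    DECLARE @new_id INT;

    -- insert the product and capture the generated id_product
    INSERT INTO Products (name) VALUES (@name);
    SET @new_id = SCOPE_IDENTITY();

    -- reuse the captured id for all image rows
    INSERT INTO Images (id_product, image)
    VALUES (@new_id, @image1), (@new_id, @image2), (@new_id, @image3);
END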

Inserting row into SQL table with some predefined values, and some from another table

I am trying to insert a row into a table called "all_games" using SQLite. I have the values "game" and "money", where game is an integer and money is a string; however, the middle value "players" is the one I don't have, so I want to get it from another table that contains it.
This is the SQL Query that I am currently using:
INSERT INTO all_games (game, ... , money)
SELECT (12, players, \'100, 200\') FROM games WHERE id=2
Just to clarify, "12" and "100, 200" represent values that I already have, I just want to get players from another table.
Thanks for any help in advance!
I believe you just need to remove the second set of parentheses (and name the players column in the column list).
INSERT INTO all_games (game, players , money)
SELECT 12, players, '100, 200' FROM games WHERE id=2

SQL - Poor design choice?

Please take a look at the SQL below:
create table DatasetToID(
Dataset varchar(100),
ID INT,
Name varchar(100),
age varchar(100),
primary key (dataset,id)
)
INSERT INTO DatasetToID VALUES ('Sales', 1, 'Ian Ratkin','30')
INSERT INTO DatasetToID VALUES ('Finance', 1, 'Bert Edwards','56')
INSERT INTO DatasetToID VALUES ('Production', 1, 'Marie Edwards','56')
INSERT INTO DatasetToID VALUES ('Sales', 2, 'Karen Bromley','30')
INSERT INTO DatasetToID VALUES ('Finance', 2, 'Steven Tardy','56')
INSERT INTO DatasetToID VALUES ('Production', 2, 'Eric Bishop','56')
create table Deletion(
Dataset varchar(100),
ID INT,
decision bit,
date datetime
)
INSERT INTO Deletion VALUES ('Sales', 1, 1, '2013-01-01')
INSERT INTO Deletion VALUES ('Finance', 2, 1, '2013-01-01')
INSERT INTO Deletion VALUES ('Sales', 1, 0, '2013-02-02')
A live system I support is designed like this. Records are deleted from DatasetToID and Deletion at the end of each month if the most recent Deletion decision (bit) is true. In the case above, (Finance, 2) will be deleted but (Sales, 1) will not, because the most recent decision for (Sales, 1) is 0 (false).
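The end-of-month routine is roughly equivalent to the following (a sketch in SQL Server syntax, not the actual production code; a similar DELETE would then remove the matching Deletion rows):
WITH latest AS (
    SELECT Dataset, ID, decision,
           ROW_NUMBER() OVER (PARTITION BY Dataset, ID ORDER BY [date] DESC) AS rn
    FROM Deletion
)
DELETE d
FROM DatasetToID d
JOIN latest l ON l.Dataset = d.Dataset AND l.ID = d.ID
WHERE l.rn = 1 AND l.decision = 1;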
I believe this is quite a poor design. I believe that Dataset and ID should be in a separate table, i.e. not in DatasetToID. The original developer seemed to disagree before he left. I am wondering if I am wrong.
It's a denormalized design, which is common in some scenarios for this kind of work. In particular, a periodic routine like a monthly delete or archive should really not be influencing your schema design. If this is the only place that key pair appears together, then I would say your old dev was probably right. If these two columns appear together in other tables, however, you are probably right: there should be a master record for this pairing.