Oracle SQL: create column/table with a for loop

I want to create a table with a column that will hold integer values from 1 to 2000.
Then I want insert records into the table with each value (1,2,3,...,2000).
I tried using a for loop to do so, but I didn't succeed.
Can someone help me?
Thanks

Try:
CREATE TABLE mytable AS
SELECT level AS columnname FROM DUAL
CONNECT BY LEVEL <= 2000;
Demo: http://sqlfiddle.com/#!4/49c0d/1
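Since the question mentions a for loop, here is a minimal PL/SQL sketch of that approach as well; the table and column names are assumptions:
-- Create the table first, then fill it row by row inside a PL/SQL loop.
CREATE TABLE mytable (columnname NUMBER);

BEGIN
  FOR i IN 1 .. 2000 LOOP
    INSERT INTO mytable (columnname) VALUES (i);
  END LOOP;
  COMMIT;
END;
/
The CONNECT BY version above is a single statement and is usually faster, since it avoids 2000 individual inserts.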

Related

How can I save query results into a new column in the table with Oracle Live SQL?

This is the query:
SELECT ROUND(column_1,0) as new_column_to_save
FROM table_to_save_to
Thank you
If you want to insert it into a new table:
INSERT INTO NEWTableName ( colName)
SELECT ROUND(column_1,0) as new_column_to_save
FROM table_to_save_to
If you want to update a column within the same table:
UPDATE table_to_save_to
SET new_column_to_save = ROUND(column_1,0)
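If new_column_to_save does not exist yet, it has to be added before the UPDATE; a minimal sketch, assuming a NUMBER column:
-- Add the target column first (the data type is an assumption), then populate it as above.
ALTER TABLE table_to_save_to ADD (new_column_to_save NUMBER);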

BigQuery - Create a table/view from a temp table

We have a query in BigQuery like the one below:
CREATE temp table ttt as (
SELECT * FROM TABLE
);
EXECUTE IMMEDIATE (
---Dynamic query goes here---
);
The above query stores the results in a temporary table, as written in the query. How can these results be stored in an actual table/view so that they can be used for further data modelling?
Try to use:
EXECUTE IMMEDIATE (
  CONCAT('CREATE TABLE dataset.table AS ', ---Dynamic query goes here---)
);
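A sketch of how that concatenation could look with the dynamic part held in a scripting variable; the variable name, dataset, and table names are assumptions:
-- dynamic_sql stands in for whatever the dynamic query builds.
DECLARE dynamic_sql STRING DEFAULT 'SELECT 1 AS x';

EXECUTE IMMEDIATE CONCAT(
  'CREATE OR REPLACE TABLE mydataset.result_table AS ',
  dynamic_sql
);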
Could you use a persistent table in the first place? Or could you save the results after EXECUTE IMMEDIATE using the query below?
CREATE TABLE yourDataset.ttt as (
SELECT * FROM ttt
);
As I already suggested in your previous question, just add CREATE TABLE your_table AS or INSERT your_table, as in the example below. Whether to use CREATE (DDL) or INSERT (DML) depends on whether you need to create a new table or insert into an existing one.
EXECUTE IMMEDIATE """
CREATE TABLE dataset.table AS
SELECT ---Rest of your dynamic query goes here---
""";

How to create a stored procedure to copy data from a query to a temporary table?

I need to insert data into a temporary table from an existing table/query. The following produces the error detailed below.
CREATE TABLE SPTemporary
AS
BEGIN
SELECT * into #temppT
FROM SampleTable
END
Throws this error:
Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'begin'.
Correct your syntax; use a procedure instead of a table:
create procedure SPTemporary
as
begin
select * into #temppT
from SampleTable
end
However, if you only want a copy of the data, then a single SELECT ... INTO is enough:
select st.* into #temppT
from SampleTable st
One method is:
select st.*
into SPTemporary
from SampleTable st
One select can only put data into one table. It is unclear which one you really want, SPTemporary or #temppT. You can repeat the select if you really need the same data in two tables.
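For completeness, a sketch of repeating the select for both targets, using the names from the question:
-- Copy the rows into a permanent table...
SELECT st.* INTO SPTemporary FROM SampleTable st;

-- ...and into a session-local temp table as well, if both are really needed.
SELECT st.* INTO #temppT FROM SampleTable st;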
EDIT:
If you want a stored procedure, you could do:
create procedure SPTemporary
as begin
select *
into #temppT
from SampleTable
end;
This is rather nonsensical, because the temporary table is discarded when the stored procedure returns.
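If the goal is for callers to see the copied rows, the procedure would need to select them back out before it returns; a minimal sketch (CREATE OR ALTER requires SQL Server 2016 SP1 or later):
CREATE OR ALTER PROCEDURE SPTemporary
AS
BEGIN
    -- #temppT is dropped when the procedure ends, so return its rows to the caller.
    SELECT * INTO #temppT FROM SampleTable;
    SELECT * FROM #temppT;
END;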
I think the syntax is wrong; for SQL Server it should be like this:
select *
into SPTemporary
from SampleTable
I hope this helps.

postgresql: INSERT INTO ... (SELECT * ...)

I'm not sure if it's standard SQL:
INSERT INTO tblA
(SELECT id, time
FROM tblB
WHERE time > 1000)
What I'm looking for is: what if tblA and tblB are on different DB servers?
Does PostgreSQL provide any utility or functionality that helps to use an INSERT query with the PGresult struct?
I mean, SELECT id, time FROM tblB ... will return a PGresult* when using PQexec. Is it possible to use this struct in another PQexec to execute an INSERT command?
EDIT:
If that is not possible, then I would go for extracting the values from the PGresult* and creating a multi-row INSERT statement like:
INSERT INTO films (code, title, did, date_prod, kind) VALUES
('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
Is it possible to create a prepared statement out of this?
As Henrik wrote, you can use dblink to connect to a remote database and fetch the result. For example:
psql dbtest
CREATE TABLE tblB (id serial, time integer);
INSERT INTO tblB (time) VALUES (5000), (2000);
psql postgres
CREATE TABLE tblA (id serial, time integer);
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > 1000;
TABLE tblA;
id | time
----+------
1 | 5000
2 | 2000
(2 rows)
PostgreSQL has a record pseudo-type (only for a function's argument or result type), which allows you to query data from another (unknown) table.
Edit:
You can make it as prepared statement if you want and it works as well:
PREPARE migrate_data (integer) AS
INSERT INTO tblA
SELECT id, time
FROM dblink('dbname=dbtest', 'SELECT id, time FROM tblB')
AS t(id integer, time integer)
WHERE time > $1;
EXECUTE migrate_data(1000);
-- DEALLOCATE migrate_data;
Edit (yeah, another):
I just saw your revised question (closed as duplicate, or just very similar to this).
If my understanding is correct (postgres has tbla and dbtest has tblb and you want remote insert with local select, not remote select with local insert as above):
psql dbtest
SELECT dblink_exec
(
'dbname=postgres',
'INSERT INTO tbla
SELECT id, time
FROM dblink
(
''dbname=dbtest'',
''SELECT id, time FROM tblb''
)
AS t(id integer, time integer)
WHERE time > 1000;'
);
I don't like that nested dblink, but AFAIK I can't reference tblB in the dblink_exec body. Use LIMIT to select the top 20 rows, but I think you need to sort them with an ORDER BY clause first.
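A sketch of how that ordering and limit could be pushed into the remote query, so only the top 20 rows cross the link; the cutoff, sort column, and sort direction are assumptions:
-- Sort and limit on the remote side, then insert locally.
INSERT INTO tblA (id, time)
SELECT id, time
FROM dblink('dbname=dbtest',
            'SELECT id, time FROM tblB WHERE time > 1000 ORDER BY time DESC LIMIT 20')
     AS t(id integer, time integer);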
If you want to insert into specific columns:
INSERT INTO table (time)
(SELECT time FROM
dblink('dbname=dbtest', 'SELECT time FROM tblB') AS t(time integer)
WHERE time > 1000
);
This notation (first seen here) looks useful too:
insert into postagem (
resumopostagem,
textopostagem,
dtliberacaopostagem,
idmediaimgpostagem,
idcatolico,
idminisermao,
idtipopostagem
) select
resumominisermao,
textominisermao,
diaminisermao,
idmediaimgminisermao,
idcatolico,
idminisermao,
1
from
minisermao
You can use dblink to create a view that is resolved in another database. This database may be on another server.
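A minimal sketch of such a view; the view name, connection string, and column list are assumptions:
-- remote_tblb is a local view whose rows are fetched from the remote database on demand.
CREATE VIEW remote_tblb AS
SELECT *
FROM dblink('dbname=dbtest host=10.0.0.10 user=postgres',
            'SELECT id, time FROM tblb')
     AS t(id integer, time integer);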
insert into TABLENAMEA (A,B,C,D)
select A::integer,B,C,D from TABLENAMEB
If you are looking for PERFORMANCE, put the WHERE condition inside the dblink query.
Otherwise, it fetches all data from the foreign table and then applies the WHERE condition locally.
INSERT INTO tblA (id,time)
SELECT id, time FROM dblink('dbname=dbname port=5432 host=10.10.90.190 user=postgresuser password=pass123',
'select id, time from tblB where time>'''||1000||'''')
AS t1(id integer, time integer)
I am going to SELECT Database_One (10.0.0.10) data from Database_Two (10.0.0.20).
Connect to 10.0.0.20 and create the dblink extension:
CREATE EXTENSION dblink;
Test the connection for Database_One:
SELECT dblink_connect('host=10.0.0.10 user=postgres password=dummy dbname=DB_ONE');
Create foreign data wrapper and server for global authentication:
CREATE FOREIGN DATA WRAPPER postgres VALIDATOR postgresql_fdw_validator;
You can use this server object for cross database queries:
CREATE SERVER dbonepostgres FOREIGN DATA WRAPPER postgres OPTIONS (hostaddr '10.0.0.10', dbname 'DB_ONE');
Mapping of user and server:
CREATE USER MAPPING FOR postgres SERVER dbonepostgres OPTIONS (user 'postgres', password 'dummy');
Test dblink:
SELECT dblink_connect('dbonepostgres');
Import data from 10.0.0.10 into 10.0.0.20:
INSERT INTO tableA
SELECT
column1,
column2,
...
FROM dblink('dbonepostgres', 'SELECT column1, column2, ... from public.tableA')
AS data(column1 DATATYPE, column2 DATATYPE, ...);
Here's an alternate solution, without using dblink.
Suppose B represents the source database and A represents the target database:
Then,
Copy table from source DB to target DB:
pg_dump -t <source_table> <source_db> | psql <target_db>
Open psql prompt, connect to target_db, and use a simple insert:
psql
# \c <target_db>;
# INSERT INTO <target_table>(id, x, y) SELECT id, x, y FROM <source_table>;
At the end, delete the copy of source_table that you created in target_db:
# DROP TABLE <source_table>;

Insert into a table the result set from a stored procedure when the column count is not the same

I need something like this, which of course does not work:
insert into Table1
(
Id,
Value
)
select Id, value from
(
exec MySPReturning10Columns
)
I want to populate Table1 from the result set returned by MySPReturning10Columns. Here the SP returns 10 columns and the table has just 2 columns.
The following works as long as the table and the SP's result set have the same number of columns, but in my case they do not.
INSERT INTO TableWith2Columns
EXEC usp_MySPReturning2Columns;
Also, I want to avoid adding "." as a linked server just to make OPENQUERY and OPENROWSET work.
Is there a way to avoid defining the table structure of the temp table (all columns with data types and lengths)? Something like a CTE.
You could use a temporary table as a go-between (note that #TempTable must already exist with columns matching the procedure's result set):
insert into #TempTable exec MySP
insert into Table1 (id, value) select id, value from #TempTable
You could solve the problem in two steps by doing the insert from the stored procedure into a temporary table, then do the insert selecting just the columns you want from the temporary table.
Information on temporary tables: http://www.sqlteam.com/article/temporary-tables
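A fuller sketch of that two-step approach; the temp-table name #SpOutput and the ten column names/types are assumptions, since the procedure's actual result shape is not shown:
-- The temp table must exist with the same 10 columns the procedure returns.
CREATE TABLE #SpOutput
(
    Id INT,
    Value VARCHAR(100),
    Col3 INT, Col4 INT, Col5 INT, Col6 INT,
    Col7 INT, Col8 INT, Col9 INT, Col10 INT
);

INSERT INTO #SpOutput
EXEC MySPReturning10Columns;

INSERT INTO Table1 (Id, Value)
SELECT Id, Value FROM #SpOutput;

DROP TABLE #SpOutput;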
-- Well, declare a temp table or a table var, depending on the number of rows expected
-- from the SP. This table will be basically the result set of your SP.
DECLARE @spResult AS TABLE
(
ID INT,
VALUE FLOAT,
....
);
-- Get the result set of the SP into the table variable.
INSERT @spResult EXEC STORED_PROC;
-- Now you can query the SP's result set for ID and Value:
INSERT Table1 (ID, VALUE)
SELECT ID, VALUE FROM @spResult;
You don't need to create a temporary table; you can do it with a single query by creating a temporary view like this:
with tempView as EXEC MySPReturning10Columns insert into Table1 select id, value from tempView
The temporary view disappears as soon as the statement finishes execution.