Create temporary table with fixed values - sql

How do I create a temporary table in PostgreSQL that has one column "AC" and consists of these four-character values:
Zoom
Inci
Fend
In reality the table has more values; this should just serve as an example.

If you only need the temp table for one SQL query, then you can hard-code the data into a Common Table Expression as follows:
WITH temp_table AS
(
SELECT 'Zoom' AS AC UNION
SELECT 'Inci' UNION
SELECT 'Fend'
)
SELECT * FROM temp_table
see it work at http://sqlfiddle.com/#!15/f88ac/2
(that CTE syntax also works with MS SQL)
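If you do want an actual temporary table rather than a CTE, a minimal sketch for PostgreSQL (reusing the table and column names from the question) could look like this:
-- Sketch only: a real temp table instead of a CTE
CREATE TEMP TABLE temp_table (ac text);

INSERT INTO temp_table (ac) VALUES
('Zoom'),
('Inci'),
('Fend');

SELECT * FROM temp_table;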
HTH

Related

Storing a list of codes into a variable in Oracle SQL [duplicate]

This question already has answers here:
PL/SQL - Use "List" Variable in Where In Clause
How to load a large number of strings to match with oracle database?
Is there a way to store a set of codes in a variable in Oracle SQL?
I have these codes and I need to use them in different parts of my query,
but I don't want to repeat this list in many places in my SQL code.
'G31', 'G310', 'G311', 'G312', 'G318', 'G319', 'G239', 'G122', 'G710',
'B20', 'B22', 'B23', 'B24', 'G35', 'C811', 'G37', 'G375', 'K702', 'K741'
I would like to do something like this:
LIST <- ['G31', 'G310', 'G311', 'G312', 'G318', 'G319', 'G239', 'G122', 'G710',
'B20', 'B22', 'B23', 'B24', 'G35', 'C811', 'G37', 'G375', 'K702', 'K741']
SELECT * FROM TABLE_A where COLUMN IN [LIST];
SELECT * FROM TABLE_B where COLUMN IN [LIST];
A fancy approach is this:
WITH CODE_VALUES AS
( SELECT DISTINCT COLUMN_VALUE AS CODE_VALUE
FROM TABLE (sys.dbms_debug_vc2coll ('G31',
'G310',
'G311',
'G312',
'G318',
'G319',
'G239',
'G122',
'G710',
'B20',
'B22',
'B23',
'B24',
'G35',
'C811',
'G37',
'G375',
'K702',
'K741'))
)
SELECT *
FROM CODE_VALUES -- + the rest of your query
You could do the same thing with successive UNIONs against dual too:
WITH CODE_VALUES AS
( SELECT 'ABC' AS code_value FROM dual UNION
SELECT 'CDE' AS code_value FROM dual
)
If this is going to be used across multiple operational queries, it's probably best just to store the codes in a table.
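A minimal sketch of that permanent-table variant (the table name icd_codes and the column name code_column are placeholders, not from the original):
-- Sketch only: store the codes once in a regular table
CREATE TABLE icd_codes (code VARCHAR2(10) PRIMARY KEY);

INSERT INTO icd_codes VALUES ('G31');
INSERT INTO icd_codes VALUES ('G310');
-- ... repeat for the remaining codes

SELECT a.*
FROM table_a a
WHERE a.code_column IN (SELECT code FROM icd_codes);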
Create a global temporary table (GTT) once, add the desired values to it, and then use it in your queries with a join.
The benefit of a GTT is that you don't have to worry about data maintenance (delete/insert). Data added in one session/transaction is visible only in that session/transaction (depending on the type of GTT you have created).
CREATE GLOBAL TEMPORARY TABLE gtt
(col1 VARCHAR2(10))
ON COMMIT PRESERVE ROWS; -- session specific
INSERT INTO gtt
SELECT 'G31' FROM dual UNION ALL
SELECT 'G310' FROM dual UNION ALL
...
...
SELECT 'K741' FROM dual;
Now, you can use it anywhere in the same session as follows:
SELECT *
FROM TABLE_A a
JOIN gtt g ON a.COLUMN = g.col1;
SELECT *
FROM TABLE_B b
JOIN gtt g ON b.COLUMN = g.col1;

Redshift - Extract value matching a condition in Array

I have a Redshift table with a column containing a comma-separated list of tags (e.g. 'blah,cat_incident,mcr_close_ticket').
How can I extract the value starting with cat_ from this column, please? (There is only one per row, and it sits at a different position in the list for each row.)
I want to get those results:
cat_incident
cat_feature_missing
cat_duplicated_request
Thanks!
There is no easy way to extract multiple values from within one column in SQL (or at least not in the SQL used by Redshift).
You could write a User-Defined Function (UDF) that returns a string containing those values, separated by newlines. Whether this is acceptable depends on what you wish to do with the output (e.g. JOIN against it).
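A minimal sketch of such a UDF, written as a Redshift scalar Python UDF (the function name f_extract_cat_tags and the VARCHAR(500) sizes are assumptions, not from the original answer):
-- Sketch only: return the cat_ tags of one row, newline-separated
CREATE OR REPLACE FUNCTION f_extract_cat_tags(tags VARCHAR(500))
RETURNS VARCHAR(500)
STABLE
AS $$
    # keep each comma-separated tag that starts with 'cat_'
    if tags is None:
        return None
    matches = [t.strip() for t in tags.split(',') if t.strip().startswith('cat_')]
    return '\n'.join(matches) if matches else None
$$ LANGUAGE plpythonu;
-- Usage (table and column names are placeholders):
-- SELECT f_extract_cat_tags(tags) FROM your_table;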
Another option is to pre-process the data before it is loaded into Redshift, to put this information in a separate one-to-many table, with each value in its own row. It would then be trivial to return this information.
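For the pre-processing route, the target layout could look something like this (table and column names are hypothetical):
-- Sketch only: one tag per row, populated before/while loading into Redshift
CREATE TABLE request_tags (
    request_id BIGINT,       -- key of the original row
    tag        VARCHAR(100)  -- one tag per row, e.g. 'cat_incident'
);
-- Returning the cat_ values is then trivial:
-- SELECT request_id, tag FROM request_tags WHERE tag LIKE 'cat%';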
You can do this using a tally table (a table of numbers). This article explains how to create one: http://www.sqlservercentral.com/articles/T-SQL/62867/
Here is an example of how you would use it. In real life you should replace the temporary #tally table with a permanent one.
--create sample table with data
create table #a (tags varchar(500));
insert into #a
select 'blah,cat_incident,mcr_close_ticket'
union
select 'blah-blah,cat_feature_missing,cat_duplicated_request';
--create tally table
create table #tally(n int);
insert into #tally
select 1
union select 2
union select 3
union select 4
union select 5
;
--get tags
select * from
(
select TRIM(SPLIT_PART(a.tags, ',', t.n)) AS single_tag
from #tally t
inner join #a a ON t.n <= REGEXP_COUNT(a.tags, ',') + 1 and n<1000
) t2 -- the derived table needs an alias
where single_tag like 'cat%'
;
Thanks!
In the end I managed to do it with the following query:
SELECT SUBSTRING(SUBSTRING(tags, charindex('cat_', tags), len(tags)), 0, charindex(',', SUBSTRING(tags, charindex('cat_', tags), len(tags)))) tags
FROM table

Abbreviate a list in PostgreSQL

How can I abbreviate a list so that
WHERE id IN ('8893171511',
'8891227609',
'8884577292',
'886790275X',
.
.
.)
becomes
WHERE id IN (name of a group/list)
The list really would have to appear somewhere. From the point of view of your code being maintainable and reusable, you could represent the list in a CTE:
WITH id_list AS (
SELECT '8893171511' AS id UNION ALL
SELECT '8891227609' UNION ALL
SELECT '8884577292' UNION ALL
SELECT '886790275X'
)
SELECT *
FROM yourTable
WHERE id IN (SELECT id FROM id_list);
If you have a persistent need to do this, then maybe the CTE should become a bona fide table somewhere in your database.
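A minimal sketch of that bona fide table (reusing the id_list name and the yourTable placeholder from above):
-- Sketch only: promote the CTE to a permanent lookup table
CREATE TABLE id_list (id text PRIMARY KEY);

INSERT INTO id_list (id) VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X');

SELECT *
FROM yourTable
WHERE id IN (SELECT id FROM id_list);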
Edit: Using the Horse's suggestion, we can tidy up the CTE to the following:
WITH id_list (id) AS (
VALUES
('8893171511'),
('8891227609'),
('8884577292'),
('886790275X')
)
If the list is large, I would create a temporary table and store the list there.
That way you can ANALYZE the temporary table and get accurate estimates.
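A minimal sketch of that approach, assuming PostgreSQL and a hypothetical temp table named tmp_ids:
-- Sketch only: load the IDs into a temp table and ANALYZE it
CREATE TEMP TABLE tmp_ids (id text PRIMARY KEY);

INSERT INTO tmp_ids (id) VALUES
('8893171511'),
('8891227609');
-- ... load the rest, e.g. with COPY

ANALYZE tmp_ids;  -- gives the planner accurate row-count estimates

SELECT y.*
FROM yourTable y
JOIN tmp_ids t ON t.id = y.id;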
The temp table and CTE answers suggested will do.
Just wanted to bring another approach that works if you use pgAdmin for querying (not sure about Workbench) and are willing to represent your data in a "stringy" way.
set setting.my_ids = '8893171511,8891227609';
select current_setting('setting.my_ids');
drop table if exists t;
create table t ( x text);
insert into t select 'some value';
insert into t select '8891227609';
select *
from t
where x = any( string_to_array(current_setting('setting.my_ids'), ',')::text[]);

How to create temp table in postgresql with values and empty column

I am very new to PostgreSQL. I want to create a temp table containing some values and some empty columns. Here is my query, but it does not execute; it gives an error at the comma.
CREATE TEMP TABLE temp1
AS (
SELECT distinct region_name, country_name
from opens
where track_id=42, count int)
What did I do wrong?
How to create a temp table with some columns that has values using select query and other columns as empty?
Just select a NULL value:
CREATE TEMP TABLE temp1
AS
SELECT distinct region_name, country_name, null::integer as "count"
from opens
where track_id=42;
The cast to an integer (null::integer) is necessary; otherwise Postgres wouldn't know what data type to use for the additional column. If you want to supply a different value, you can of course use e.g. 42 as "count" instead.
Note that count is also the name of a built-in aggregate, so it's safer to put it in double quotes when using it as an identifier. It would however be better to find a different name.
There is also no need to put the SELECT statement of a CREATE TABLE AS SELECT in parentheses.
Your error comes from the part of your statement near the WHERE clause.
This should work :
CREATE TEMP TABLE temp1 AS
(SELECT distinct region_name,
country_name,
0 as count
FROM opens
WHERE track_id=42)
Try This.
CREATE TEMP TABLE temp1 AS
(SELECT distinct region_name,
country_name,
cast( '0' as integer) as count
FROM opens
WHERE track_id=42);

How to combine IN operator with LIKE condition (or best way to get comparable results)

I need to select rows where a field begins with one of several different prefixes:
select * from table
where field like 'ab%'
or field like 'cd%'
or field like 'ef%'
or...
What is the best way to do this using SQL in Oracle or SQL Server? I'm looking for something like the following statements (which are incorrect):
select * from table where field like in ('ab%', 'cd%', 'ef%', ...)
or
select * from table where field like in (select foo from bar)
EDIT:
I would like to see how this is done with either all the prefixes given in one SELECT statement, or all the prefixes stored in a helper table.
Length of the prefixes is not fixed.
Joining your prefix table with your actual table would work in both SQL Server & Oracle.
DECLARE #Table TABLE (field VARCHAR(32))
DECLARE #Prefixes TABLE (prefix VARCHAR(32))
INSERT INTO #Table VALUES ('ABC')
INSERT INTO #Table VALUES ('DEF')
INSERT INTO #Table VALUES ('ABDEF')
INSERT INTO #Table VALUES ('DEFAB')
INSERT INTO #Table VALUES ('EFABD')
INSERT INTO #Prefixes VALUES ('AB%')
INSERT INTO #Prefixes VALUES ('DE%')
SELECT t.*
FROM #Table t
INNER JOIN #Prefixes pf ON t.field LIKE pf.prefix
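In Oracle the same join works against a regular (or global temporary) table; a sketch, with hypothetical names prefixes and your_table:
CREATE TABLE prefixes (prefix VARCHAR2(32));

INSERT INTO prefixes VALUES ('AB%');
INSERT INTO prefixes VALUES ('DE%');

SELECT t.*
FROM your_table t
JOIN prefixes pf ON t.field LIKE pf.prefix;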
You can try a regular expression (Oracle):
SELECT * from table where REGEXP_LIKE ( field, '^(ab|cd|ef)' );
If your prefix is always two characters, could you not just use the SUBSTRING() function to get the first two characters of "field", and then see if it's in the list of prefixes?
select * from table
where SUBSTRING(field, 1, 2) IN (prefix1, prefix2, prefix3...)
That would be "best" in terms of simplicity, if not performance. Performance-wise, you could create an indexed virtual column that generates your prefix from "field", and then use the virtual column in your predicate.
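A sketch of that indexed virtual column in Oracle (the table name t and the two-character prefix length are assumptions):
-- Sketch only: virtual column plus index on the prefix
ALTER TABLE t ADD (field_prefix VARCHAR2(2)
    GENERATED ALWAYS AS (SUBSTR(field, 1, 2)) VIRTUAL);

CREATE INDEX t_field_prefix_ix ON t (field_prefix);

SELECT *
FROM t
WHERE field_prefix IN ('ab', 'cd', 'ef');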
Depending on the size of the dataset, the REGEXP solution may or may not be the right answer. If you're trying to get a small slice of a big dataset,
select * from table
where field like 'ab%'
or field like 'cd%'
or field like "ef%"
or...
may be rewritten behind the scenes as
select * from table
where field like 'ab%'
union all
select * from table
where field like 'cd%'
union all
select * from table
where field like 'ef%'
Doing three index scans instead of a full scan.
If you know you're only going after the first two characters, creating a function-based index could be a good solution as well. If you really really need to optimize this, use a global temporary table to store the values of interest, and perform a semi-join between them:
select * from data_table
where transform(field) in (select pre_transformed_field
from my_where_clause_table);
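A concrete sketch of that global-temporary-table semi-join, assuming Oracle, two-character prefixes, and a hypothetical GTT named prefix_gtt (transform() here is just SUBSTR of the first two characters):
-- Sketch only: GTT of prefixes, then a semi-join
CREATE GLOBAL TEMPORARY TABLE prefix_gtt (prefix VARCHAR2(2))
    ON COMMIT PRESERVE ROWS;

INSERT INTO prefix_gtt VALUES ('ab');
INSERT INTO prefix_gtt VALUES ('cd');
INSERT INTO prefix_gtt VALUES ('ef');

SELECT d.*
FROM data_table d
WHERE SUBSTR(d.field, 1, 2) IN (SELECT prefix FROM prefix_gtt);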
You can also try it like this; here tmp is an inline view (a derived table) populated with the required prefixes. It's a simple way, and it does the job.
select * from emp join
(select 'ab%' as Prefix
union
select 'cd%' as Prefix
union
select 'ef%' as Prefix) tmp
on emp.Name like tmp.Prefix