I have some select statements, one of them:
SELECT c_c.id,
c_c.full_name,
c_c.telephone,
c_c.fax,
cast(c_c.adress_juridical as varchar(256)) as adress_juridical,
cast(c_c.adress_naturale as varchar(256)) as adress_naturale,
c_c.director,
c_c.name,
c_c.b_id,
c_c.dt_start,
c_c.dt_end
FROM c_counteragents c_c
WHERE c_c.id = -1 AND c_c.b_id = -1;
How can I get the column names of this SELECT statement?
The column names of this SELECT must be returned without the alias c_c.
It's not as easy as taking a table name and looking up its column names: I don't know the table name in advance, because I retrieve this SELECT statement from another table, where it is stored as text.
I have a Redshift table with the following column
How can I extract the value starting with cat_ from this column, please? (There is only one per row, and it appears at a different position in the array.)
I want to get these results:
cat_incident
cat_feature_missing
cat_duplicated_request
Thanks!
There is no easy way to extract multiple values from within one column in SQL (or at least not in the SQL used by Redshift).
You could write a User-Defined Function (UDF) that returns a string containing those values, separated by newlines. Whether this is acceptable depends on what you wish to do with the output (e.g. JOIN against it).
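A minimal sketch of such a UDF in Redshift (a Python scalar UDF), assuming the tags live in a comma-separated VARCHAR(500) column; the function name f_extract_cat_tags and the table name your_table are made up for illustration:
CREATE OR REPLACE FUNCTION f_extract_cat_tags(tags VARCHAR(500))
RETURNS VARCHAR(500)
STABLE
AS $$
    # collect every comma-separated element that starts with 'cat_'
    parts = [p.strip() for p in (tags or '').split(',') if p.strip().startswith('cat_')]
    return '\n'.join(parts) if parts else None
$$ LANGUAGE plpythonu;
-- usage: SELECT f_extract_cat_tags(tags) FROM your_table;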
Another option is to pre-process the data before it is loaded into Redshift, to put this information in a separate one-to-many table, with each value in its own row. It would then be trivial to return this information.
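For instance, the pre-processed layout could be as simple as the following (record_tags and its columns are only illustrative names):
CREATE TABLE record_tags (
    record_id BIGINT,       -- key of the original row
    tag       VARCHAR(100)  -- one tag per row
);
-- with one tag per row, the lookup becomes trivial
SELECT record_id, tag
FROM record_tags
WHERE tag LIKE 'cat_%';  -- note: _ is a single-character wildcard in LIKE; add an ESCAPE clause if that matters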
You can do this using a tally table (a table with numbers). Check this link for information on how to create such a table: http://www.sqlservercentral.com/articles/T-SQL/62867/
Here is an example of how you would use it. In real life you should replace the temporary #tally table with a permanent one.
--create sample table with data
create table #a (tags varchar(500));
insert into #a
select 'blah,cat_incident,mcr_close_ticket'
union
select 'blah-blah,cat_feature_missing,cat_duplicated_request';
--create tally table
create table #tally(n int);
insert into #tally
select 1
union select 2
union select 3
union select 4
union select 5
;
--get tags
select * from
(
    select TRIM(SPLIT_PART(a.tags, ',', t.n)) AS single_tag
    from #tally t
    inner join #a a ON t.n <= REGEXP_COUNT(a.tags, ',') + 1 and t.n < 1000
) x  -- the derived table needs an alias
where single_tag like 'cat%'
;
Thanks!
In the end I managed to do it with the following query:
SELECT SUBSTRING(SUBSTRING(tags, charindex('cat_', tags), len(tags)), 0, charindex(',', SUBSTRING(tags, charindex('cat_', tags), len(tags)))) tags
FROM table
I am very new to PostgreSQL. I want to create a temp table containing some values and some empty columns. Here is my query, but it does not execute; it gives an error at the comma.
CREATE TEMP TABLE temp1
AS (
SELECT distinct region_name, country_name
from opens
where track_id=42, count int)
What did I do wrong?
How do I create a temp table where some columns get their values from a SELECT query and the other columns stay empty?
Just select a NULL value:
CREATE TEMP TABLE temp1
AS
SELECT distinct region_name, country_name, null::integer as "count"
from opens
where track_id=42;
The cast to an integer (null::integer) is necessary, otherwise Postgres wouldn't know what data type to use for the additional column. If you want to supply a different value you can of course use e.g. 42 as "count" instead.
Note that count is a reserved keyword, so you have to use double quotes if you want to use it as an identifier. It would however be better to find a different name.
There is also no need to put the SELECT statement of a CREATE TABLE AS SELECT between parentheses.
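For example, a variant without the parentheses and with a column name that is not a reserved word (open_count is just an illustrative name):
CREATE TEMP TABLE temp1 AS
SELECT DISTINCT region_name, country_name, null::integer AS open_count
FROM opens
WHERE track_id = 42;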
Your error comes from your statement near the WHERE clause.
This should work :
CREATE TEMP TABLE temp1 AS
(SELECT distinct region_name,
country_name,
0 as count
FROM opens
WHERE track_id=42)
Try This.
CREATE TEMP TABLE temp1 AS
(SELECT distinct region_name,
country_name,
cast( '0' as integer) as count
FROM opens
WHERE track_id=42);
What would the SQL statement look like to return the bottom result from the upper table?
The last letter of the key should be removed; it stands for the language. The EXP column should be split into 5 columns, one per language prefix, each holding the corresponding value.
I'm weak at writing more or less difficult SQL statements so any help would be appreciated!
The Microsoft Access equivalent of a PIVOT in SQL Server is known as a CROSSTAB. The following query will work for Microsoft Access 2010.
TRANSFORM First(table1.Exp) AS FirstOfEXP
SELECT Left([KEY],Len([KEY])-2) AS [XKEY]
FROM table1
GROUP BY Left([KEY],Len([KEY])-2)
PIVOT Right([KEY],1);
Access will throw a circular field reference error if you try to name the row heading KEY, since that is also the name of the original table field you are deriving it from. If you do not want XKEY as the field name, then you need to break the above query into two separate queries, as shown below:
qsel_table1:
SELECT Left([KEY],Len([KEY])-2) AS XKEY
, Right([KEY],1) AS [Language]
, Table1.Exp
FROM Table1
ORDER BY Left([KEY],Len([KEY])-2), Right([KEY],1);
qsel_table1_Crosstab:
TRANSFORM First(qsel_table1.Exp) AS FirstOfEXP
SELECT qsel_table1.XKEY AS [KEY]
FROM qsel_table1
GROUP BY qsel_table1.XKEY
PIVOT qsel_table1.Language;
In order to always output all language columns, regardless of whether there is a value or not, you need to split those values out into a separate table. That table will then supply the row and column values for the crosstab, and the original table will supply the value expression. Using the two-query solution above, we would instead need to do the following:
table2:
This is a new table with a BASE_KEY TEXT*255 column and a LANG TEXT*1 column. Together these two columns define the primary key. Populate this table with the following rows (a DDL sketch follows the list):
"AbstractItemNumberReportController.SelectPositionen", "D"
"AbstractItemNumberReportController.SelectPositionen", "E"
"AbstractItemNumberReportController.SelectPositionen", "F"
"AbstractItemNumberReportController.SelectPositionen", "I"
"AbstractItemNumberReportController.SelectPositionen", "X"
qsel_table1:
This query remains unchanged.
qsel_table1_crosstab:
The new table2 is added to this query with an outer join to qsel_table1 (which is based on the original table1). The outer join allows all rows from table2 to be returned regardless of whether there is a matching row in table1. Table2 now supplies the values for the row and column headings.
TRANSFORM First(qsel_table1.Exp) AS FirstOfEXP
SELECT Table2.Base_KEY AS [KEY]
FROM Table2 LEFT JOIN qsel_table1 ON (Table2.BASE_KEY = qsel_table1.XKEY)
AND (Table2.LANG = qsel_table1.Language)
GROUP BY Table2.Base_KEY
PIVOT Table2.LANG;
Try something like this:
select *
from
(
select left([key], len([key]) - 2) as [key], right([key], 1) as id, expression -- strip the language suffix from the key
from table1
) x
pivot
(
max(expression)
for id in ([D], [E], [F], [I], [X])
) p
Demo Fiddle
How can I write
select name,family from student where name="X"
without referring to its column names?
For example:
select "column1","column2" from student where "column1"="x"
or, for example:
select "1","2" from student where "1"="x"
"1" is column1
"2" is column2
I don't want to spell out the column names; I want to refer to them just by their number or something similar.
It's not that I'm too lazy to type select *; it's just that I don't know the column names, but I do know where the columns are. The column names change every time I read the file, but the data is the same and sits in the same columns in each file.
Although you cannot use field position specifiers in the SELECT statement, the SQL standard includes the INFORMATION_SCHEMA, where the dictionary of your tables is defined. This includes the COLUMNS view, in which all the fields of all tables are defined, and that view has a field called ORDINAL_POSITION which you can use to help with this problem.
If you query
SELECT ORDINAL_POSITION, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TABLE'
ORDER BY ORDINAL_POSITION
then you can obtain the column names for the ordinal positions you want. From there you can prepare a SQL statement.
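For instance, in SQL Server the statement could then be built dynamically from those ordinal positions; this is only a sketch, assuming the student table from the question and ordinal positions 1 and 2:
DECLARE @col1 sysname, @col2 sysname, @sql nvarchar(max);
SELECT @col1 = MAX(CASE WHEN ORDINAL_POSITION = 1 THEN COLUMN_NAME END),
       @col2 = MAX(CASE WHEN ORDINAL_POSITION = 2 THEN COLUMN_NAME END)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'student';
SET @sql = N'SELECT ' + QUOTENAME(@col1) + N', ' + QUOTENAME(@col2) +
           N' FROM student WHERE ' + QUOTENAME(@col1) + N' = @x';
EXEC sp_executesql @sql, N'@x nvarchar(100)', @x = N'X';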
You could use a temporary table (here a table variable) as a buffer:
DECLARE #TB TABLE(Column1 NVARCHAR(50),...)
INSERT #TB
SELECT * FROM student
Then use it:
SELECT Column1 FROM #TB WHERE Column1='aa'
If it's a string you can do this:
Select Column1 + '' From Table
If it's a number you can do this:
Select Column1 + 0 From Table
If it's a datetime you can do this:
Select dateadd(d, 0, Column1) From Table
And similarly for other data types.
No, you cannot use the ordinal (numeric) position in the SELECT clause; you can only do that in ORDER BY.
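For example, using the student table from the question, the ordinal is accepted in the ORDER BY clause only:
SELECT name, family FROM student WHERE name = 'X' ORDER BY 1;  -- 1 here means the first column of the select list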
However, you can make your own column alias:
Select Column1 as [1] From Table
You can use alias:
SELECT name AS [1], family AS [2] FROM student WHERE name="X"
It's just not possible. Unfortunately, they didn't think about table-valued functions, for which information_schema is not available, so good luck with that.
I need to select rows where a field begins with one of several different prefixes:
select * from table
where field like 'ab%'
or field like 'cd%'
or field like "ef%"
or...
What is the best way to do this using SQL in Oracle or SQL Server? I'm looking for something like the following statements (which are incorrect):
select * from table where field like in ('ab%', 'cd%', 'ef%', ...)
or
select * from table where field like in (select foo from bar)
EDIT:
I would like to see how this is done either by giving all the prefixes in one SELECT statement, or by having all the prefixes stored in a helper table.
The length of the prefixes is not fixed.
Joining your prefix table with your actual table would work in both SQL Server & Oracle.
DECLARE #Table TABLE (field VARCHAR(32))
DECLARE #Prefixes TABLE (prefix VARCHAR(32))
INSERT INTO #Table VALUES ('ABC')
INSERT INTO #Table VALUES ('DEF')
INSERT INTO #Table VALUES ('ABDEF')
INSERT INTO #Table VALUES ('DEFAB')
INSERT INTO #Table VALUES ('EFABD')
INSERT INTO #Prefixes VALUES ('AB%')
INSERT INTO #Prefixes VALUES ('DE%')
SELECT t.*
FROM #Table t
INNER JOIN #Prefixes pf ON t.field LIKE pf.prefix
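Since the question also mentions Oracle, a rough equivalent there might look like this (my_table is an assumed name standing in for the table variable above):
WITH prefixes AS (
    SELECT 'AB%' AS prefix FROM dual
    UNION ALL
    SELECT 'DE%' AS prefix FROM dual
)
SELECT t.*
FROM my_table t
JOIN prefixes pf ON t.field LIKE pf.prefix;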
You can try a regular expression:
SELECT * from table where REGEXP_LIKE ( field, '^(ab|cd|ef)' );
If your prefix is always two characters, could you not just use the SUBSTRING() function to get the first two characters of "field", and then see if it's in the list of prefixes?
select * from table
where SUBSTRING(field, 1, 2) IN (prefix1, prefix2, prefix3...)
That would be "best" in terms of simplicity, if not performance. Performance-wise, you could create an indexed virtual column that generates your prefix from "field", and then use the virtual column in your predicate.
Depending on the size of the dataset, the REGEXP solution may or may not be the right answer. If you're trying to get a small slice of a big dataset,
select * from table
where field like 'ab%'
or field like 'cd%'
or field like "ef%"
or...
may be rewritten behind the scenes as
select * from table
where field like 'ab%'
union all
select * from table
where field like 'cd%'
union all
select * from table
where field like 'ef%'
Doing three index scans instead of a full scan.
If you know you're only going after the first two characters, creating a function-based index could be a good solution as well. If you really really need to optimize this, use a global temporary table to store the values of interest, and perform a semi-join between them:
select * from data_table
where transform(field) in (select pre_transformed_field
from my_where_clause_table);
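A sketch of that helper table in Oracle, reusing my_where_clause_table from the snippet above and assuming a two-character prefix transform:
CREATE GLOBAL TEMPORARY TABLE my_where_clause_table (
    pre_transformed_field VARCHAR2(2)
) ON COMMIT PRESERVE ROWS;
INSERT INTO my_where_clause_table VALUES ('ab');
INSERT INTO my_where_clause_table VALUES ('cd');
INSERT INTO my_where_clause_table VALUES ('ef');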
You can also try it like this; here tmp is a derived table populated with the required prefixes. It's a simple way, and it does the job.
select * from emp join
(select 'ab%' as Prefix
union
select 'cd%' as Prefix
union
select 'ef%' as Prefix) tmp
on emp.Name like tmp.Prefix