I want to write a simple query to select a number of columns in PostgreSQL. However, I keep getting errors - I tried a few options but they did not work for me. At the moment I am getting the following error:
org.postgresql.util.PSQLException: ERROR: syntax error at or near
"column"
To get the columns with values, I try the following:
select * from weather_data where column like '%2010%'
Any ideas?
column is a reserved word. You cannot use it as an identifier unless you double-quote it, like: "column".
Doesn't mean you should, though. Just don't use reserved words as identifiers. Ever.
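For illustration, the original query becomes valid once the identifier is double-quoted (assuming the column really is named column and is of a text type):
select * from weather_data where "column" like '%2010%'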
To select a list of columns with 2010 in their name, you can use this function to build the SQL command dynamically from the system catalog table pg_attribute:
CREATE OR REPLACE FUNCTION f_build_select(_tbl regclass, _pattern text)
  RETURNS text AS
$func$
SELECT format('SELECT %s FROM %s'
            , string_agg(quote_ident(attname), ', ')
            , $1)
FROM   pg_attribute
WHERE  attrelid = $1
AND    attname LIKE ('%' || $2 || '%')
AND    NOT attisdropped  -- no dropped (dead) columns
AND    attnum > 0;       -- no system columns
$func$ LANGUAGE sql;
Call:
SELECT f_build_select('weather_data', '2010');
Returns something like:
SELECT foo2010, bar2010_id FROM weather_data;
You cannot make this fully dynamic, because the return type is unknown until we actually build the query.
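In psql 9.6 or later you can, however, feed the generated statement straight back to the server: the \gexec meta-command executes each result row as a new query (a sketch, reusing the call from above):
SELECT f_build_select('weather_data', '2010')\gexec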
This will get you the list of columns in a specific table (you can optionally add schema if needed):
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'yourtable'
AND column_name LIKE '%2010%'
You can then use that query to create a dynamic sql statement to return your results.
Attempts to use dynamic structures like this usually indicate that you should be using data formats like hstore, json, xml, etc., that are amenable to dynamic access.
You can get a dynamic column list by creating the SQL on the fly in your application. You can query the INFORMATION_SCHEMA to get information about the columns of a table and build the query.
It's possible to do this in PL/PgSQL and run the generated query with EXECUTE, but you'll find it somewhat difficult to work with the resulting RECORD, as you must fetch and decode composite tuples; you can't expand the result set into a normal column list. Observe:
craig=> CREATE OR REPLACE FUNCTION retrecset() returns setof record as $$
values (1,2,3,4), (10,11,12,13);
$$ language sql;
craig=> select retrecset();
retrecset
---------------
(1,2,3,4)
(10,11,12,13)
(2 rows)
craig=> select * from retrecset();
ERROR: a column definition list is required for functions returning "record"
craig=> select (r).* FROM (select retrecset()) AS x(r);
ERROR: record type has not been registered
About all you can do is get the raw record and decode it in the client. You can't index into it from SQL, you can't convert it to anything else, etc. Most client APIs don't provide facilities for parsing the text representations of anonymous records so you'll likely have to write this yourself.
So: you can return dynamic records from PL/PgSQL without knowing their result type, it's just not particularly useful and it is a pain to deal with on the client side. You really want to just use the client to generate queries in the first place.
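For completeness: supplying the column definition list the error message asks for does make the call work, but only because the result type is then spelled out manually at the call site, which is exactly what we don't know in the dynamic case:
craig=> select * from retrecset() as t(a int, b int, c int, d int);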
You can't search all columns like that. You have to specify a specific column.
For example,
select * from weather_data where weather_date like '%2010%'
or better yet if it is a date, specify a date range:
select * from weather_data where weather_date between '2010-01-01' and '2010-12-31'
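Note that if weather_date is an actual date column, the LIKE variant above only works with an explicit cast to text (a sketch):
select * from weather_data where weather_date::text like '%2010%'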
Found this here:
SELECT 'SELECT ' || array_to_string(ARRAY(
           SELECT 'o' || '.' || c.column_name
           FROM   information_schema.columns AS c
           WHERE  table_name = 'officepark'
           AND    c.column_name NOT IN ('officeparkid', 'contractor')
       ), ',') || ' FROM officepark As o' As sqlstmt
The result is a SQL SELECT statement that you then have to execute.
It fits my needs, since I pipe the result through the shell like this:
psql -U myUser -d myDB -t -c "SELECT...As sqlstmt" | psql -U myUser -d myDB
That gives me the formatted output, but it only works in the shell.
Hope this helps someone someday.
I am having an issue with my postgresql database. I added 5 Tables with a lot of data and a lot of columns. Now I noticed I added the columns with a mix of upper and lowercase letters, which makes it difficult to query them using sqlalchemy or pandas.read_sql_query, because I need double quotes to access them.
Is there a way to change all values in the column names to lowercase letters with a single command?
I'm new to SQL, any help is appreciated.
Use an anonymous code block with a FOR LOOP over the table columns:
DO $$
DECLARE
    row record;
BEGIN
    FOR row IN SELECT table_schema, table_name, column_name
               FROM information_schema.columns
               WHERE table_schema = 'public'
                 AND table_name = 'table1'
                 AND column_name <> lower(column_name) -- skip columns that are already lowercase (renaming a column to itself raises an error)
    LOOP
        EXECUTE format('ALTER TABLE %I.%I RENAME COLUMN %I TO %I',
                       row.table_schema, row.table_name, row.column_name, lower(row.column_name));
    END LOOP;
END $$;
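If you want to preview the renames before running them, the same catalog query can emit the DDL as plain text first (a sketch following the pattern above):
SELECT format('ALTER TABLE %I.%I RENAME COLUMN %I TO %I;',
              table_schema, table_name, column_name, lower(column_name))
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'table1'
  AND column_name <> lower(column_name);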
If you simply wish to ensure that the query returns lowercase values (without changing the stored entries), you can use:
select lower(my_column) from my_table;
On the other hand, if you wish to actually change the case in the table itself, you must use an UPDATE command:
UPDATE my_table SET my_column = LOWER(my_column);
Something like this should do the trick:
SELECT LOWER(my_column) FROM my_table;
I have one table with id, name and complex queries. Below is just a sample of that table:
ID  name       Query
1   advisor_1  "Select * from advisor"
2   student_1  "Select * from student where id = 12"
3   faculty_4  "Select * from student where id = 12"
I want to iterate over this table and save the result of each query into its own CSV file.
Is there any way I can do it through an anonymous block automatically?
I don't want to do this manually, as the table has lots of rows.
Can anyone please help?
Not being superuser means the export can't be done in a server-side DO block.
It could be done client-side in any programming language that can talk to the database, or assuming a psql-only environment, it's possible to generate a list of \copy statements with an SQL query.
As an example of the latter, assuming the unique output filenames are built from the ID column, something like this should work:
SELECT format('\copy (%s) TO ''file-%s.csv'' CSV', query, id)
FROM table_with_queries;
The result of this query should be put into a file in a format such that it can be directly included into psql, like this:
\pset format unaligned
\pset tuples_only on
-- \g with an argument treats it as an output file.
SELECT format('\copy (%s) TO ''file-%s.csv'' CSV', query, id)
FROM table_with_queries \g /tmp/commands.sql
\i /tmp/commands.sql
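With the sample table from the question, /tmp/commands.sql would then contain one \copy invocation per row, along these lines (file names derived from the id column as above):
\copy (Select * from advisor) TO 'file-1.csv' CSV
\copy (Select * from student where id = 12) TO 'file-2.csv' CSV
\copy (Select * from student where id = 12) TO 'file-3.csv' CSV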
As a sidenote, that process cannot be managed with the \gexec meta-command introduced in PG 9.6, because \copy itself is a meta-command. \gexec iterates only on SQL queries, not on meta-commands. Otherwise the whole thing could be done by a single \gexec invocation.
If your problem is the code itself, you may use an anonymous block like this:
DO $$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN SELECT id, query FROM table_name
    LOOP
        EXECUTE 'COPY (' || rec.query || ') TO ' || quote_literal('d:/csv' || rec.id || '.csv') || ' CSV';
    END LOOP;
END $$;
As for the permission problem: server-side COPY writes files as the database server's OS user, so you must use a location on the server that this user can write to (or request one from your vendor).
I have tables that contain the same type of data for every year, but the data gathered varies slightly in that they may not have the same fields.
d_abc_2016
d_def_2016
d_ghi_2016
d_jkl_2016
There are certain constants for each table: company_id, employee_id, salary.
However, each one might or might not have these fields that are used to calculate total incentives: bonus, commission, cash_incentives. There are a lot more, but I'm just using these as examples. All are numeric.
I should note at this point, users only have the ability to run SELECT statements.
What I would like to be able to do is this:
Give the user the ability to call in SELECT and specify their own fields in addition to the call
Pass the table name being used into the function, to use in conditional logic that determines how the query string should be constructed for the eventual total_incentives calculation; also pass the whole table row, so a ton of arguments don't have to be passed into the function
Basically this:
SELECT employee_id, salary, total_incentives(t, 'd_abc_2016')
FROM d_abc_2016 t;
So the function being called will calculate total_incentives, which is numeric, for that employee_id, and the query also shows their salary. But the user might choose to add other fields to look at.
For the function, because the fields used in the total_incentives function will vary from table to table, I need to create logic to construct the query string dynamically.
CREATE OR REPLACE FUNCTION total_incentives(ANYELEMENT, t text)
RETURNS numeric AS
$$
DECLARE
-- table name lower case in case user typed wrong
tbl varchar(255) := lower($2);
-- parse out the table code to use in conditional logic
tbl_code varchar(255) := split_part(tbl, '_', 2);
-- the starting point of the query string
base_calc varchar(255) := 'salary + ';
-- query string
query_string varchar(255);
-- have to declare this to put computation INTO
total_incentives_calc numeric;
BEGIN
IF tbl_code = 'abc' THEN
query_string := base_calc || 'bonus';
ELSIF tbl_code = 'def' THEN
query_string := base_calc || 'bonus + commission';
ELSIF tbl_code = 'ghi' THEN
-- etc...
END IF;
EXECUTE format('SELECT $1 FROM %I', tbl)
INTO total_incentives_calc
USING query_string;
RETURN total_incentives_calc;
END;
$$
LANGUAGE plpgsql;
This results in an:
ERROR: invalid input syntax for type numeric: "salary + bonus"
CONTEXT: PL/pgSQL function total_incentives(anyelement,text) line 16 at EXECUTE
Since it should be returning a set of numeric values, I changed it to the following:
CREATE OR REPLACE FUNCTION total_incentives(ANYELEMENT, t text)
RETURNS SETOF numeric AS
$$
...
RETURN;
Get the same error.
I figured, well, maybe it is a table it is trying to return:
CREATE OR REPLACE FUNCTION total_incentives(ANYELEMENT, t text)
RETURNS TABLE(tot_inc numeric) AS
$$
...
Get the same error.
Really, any variation produces that result. So really not sure how to get this to work.
Look at RETURN QUERY, RETURN NEXT, or RETURN QUERY EXECUTE.
https://www.postgresql.org/docs/9.6/static/plpgsql-control-structures.html
RETURN QUERY won't work because it takes a hard-coded query from what I can tell, which won't take in variables.
RETURN NEXT iterates through each record, which I don't think will be suitable for my needs and seems like it will be really slow... and it takes a hard-coded query from what I can tell.
RETURN QUERY EXECUTE sounds promising.
-- EXECUTE format('SELECT $1 FROM %I', tbl)
-- INTO total_incentives_calc
-- USING query_string;
RETURN QUERY
EXECUTE format('SELECT $1 FROM %I', tbl)
USING query_string;
And get:
ERROR: structure of query does not match function result type
DETAIL: Returned type character varying does not match expected type numeric in column 1.
CONTEXT: PL/pgSQL function total_incentives(anyelement,text) line 20 at RETURN QUERY
It should be returning numeric.
Lastly, I can get this to work, but it won't be DRY. I'd rather not make a bunch of separate functions for each table with duplicative code. Most of the working examples I have seen have the whole query in the function and are called like such:
SELECT total_incentives(d_abc_2016, 'd_abc_2016');
So any additional columns would have to be specified in the function as:
EXECUTE format('SELECT employee_id...)
Given that users will only be able to run SELECT queries, this really isn't an option. They need to specify any additional columns they want to see inside the query.
I've posted a similar question but was told it was unclear, so hopefully this lengthier version will more clearly explain what I am trying to do.
Column names and table names should not be passed as query parameters via the USING clause.
Probably lines:
RETURN QUERY
EXECUTE format('SELECT $1 FROM %I', tbl)
USING query_string;
should be:
RETURN QUERY
EXECUTE format('SELECT %s FROM %I', query_string, tbl);
This case is an example of why an overly strict DRY principle is sometimes problematic. If you write it directly, your code will be simpler, cleaner and probably shorter.
Dynamic SQL is one of the last solutions to reach for, not the first. Use dynamic SQL only when your code will be significantly shorter with it than without it.
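Put together, a corrected version of the function could look like this (a minimal sketch: the branch logic and table-naming scheme come from the question, it assumes the salary/bonus/commission columns are numeric, and it returns the calculation for every row of the given table):
CREATE OR REPLACE FUNCTION total_incentives(_tbl text)
RETURNS SETOF numeric AS
$$
DECLARE
    tbl          text := lower(_tbl);                     -- normalize the table name
    tbl_code     text := split_part(lower(_tbl), '_', 2); -- e.g. 'abc' from 'd_abc_2016'
    query_string text := 'salary + ';
BEGIN
    IF tbl_code = 'abc' THEN
        query_string := query_string || 'bonus';
    ELSIF tbl_code = 'def' THEN
        query_string := query_string || 'bonus + commission';
    END IF;

    -- interpolate the expression with %s and the identifier with %I
    RETURN QUERY
    EXECUTE format('SELECT %s FROM %I', query_string, tbl);
END;
$$ LANGUAGE plpgsql;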
I have a table which has almost a hundred fields. I want to get all the fields whose data type is DATE. Is it possible in Oracle to write a query that returns only the fields of a certain data type? Here is my pseudo query:
Select * from mytable
where colum_datatype is date
Similarly, I want to get all fields which are of VARCHAR2 type. Is it possible to do that?
I can find all the date fields manually and put them in the query but I just want to know is there another way to do it.
Thank you!
You can query one of the system tables/views to get the list of columns:
select column_name
from all_tab_cols
where owner = :owner and table_name = :table and data_type = 'DATE';
If you need a one-off solution, just aggregate these and plug them into a SQL query. You can construct the entire SQL query:
select 'SELECT ' || listagg(column_name, ', ') within group (order by column_id) ||
       ' FROM ' || :table
from all_tab_cols
where owner = :owner and table_name = :table and data_type = 'DATE';
You can also put the query into a string and use dynamic SQL (execute immediate) to run the query.
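A minimal PL/SQL sketch of that idea (MY_SCHEMA and MY_TABLE are hypothetical; since the generated query returns many rows, it is opened as a ref cursor rather than run with a plain EXECUTE IMMEDIATE ... INTO):
DECLARE
    l_sql VARCHAR2(4000);
    l_cur SYS_REFCURSOR;
BEGIN
    -- build "SELECT <date columns> FROM my_table" from the data dictionary
    SELECT 'SELECT ' || LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
           || ' FROM my_table'
    INTO   l_sql
    FROM   all_tab_cols
    WHERE  owner = 'MY_SCHEMA'
      AND  table_name = 'MY_TABLE'
      AND  data_type = 'DATE';

    OPEN l_cur FOR l_sql;  -- run the generated query
    -- fetch from l_cur here, or hand it back to the caller
END;
/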
Sorry, no such functionality exists in vanilla SQL. You may be able to simulate such functionality by creating a PL/SQL function that returns a cursor to a dynamically created SQL statement.
I am trying to update a bunch of columns in a DB for testing purposes of a feature. I have a table that is built with hibernate so all of the columns that are created for an embedded entity begin with the same name. I.e. contact_info_address_street1, contact_info_address_street2, etc.
I am trying to figure out if there is a way to do something to the affect of:
UPDATE table SET contact_info_address_* = null;
If not, I know I can do it the long way, just looking for a way to help myself out in the future if I need to do this all over again for a different set of columns.
You need dynamic SQL for this. So you must defend against possible SQL injection.
Basic query
The basic query to generate the DML command needed can look like this:
SELECT format('UPDATE tbl SET (%s) = (%s)'
            , string_agg(quote_ident(attname), ', ')
            , string_agg('NULL', ', ')
             )
FROM   pg_attribute
WHERE  attrelid = 'tbl'::regclass
AND    NOT attisdropped
AND    attnum > 0
AND    attname ~~ 'foo_%';
Returns:
UPDATE tbl SET (foo_a, foo_b, foo_c) = (NULL, NULL, NULL);
Make use of the "column-list syntax" of UPDATE to shorten the code and simplify the task.
I query the system catalogs instead of information schema because the latter, while being standardized and guaranteed to be portable across major versions, is also notoriously slow and sometimes unwieldy. There are pros and cons, see:
Get column names and data types of a query, table or view
quote_ident() for the column names prevents SQL-injection - also necessary for identifiers.
string_agg() requires 9.0+.
Full automation with PL/pgSQL function
CREATE OR REPLACE FUNCTION f_update_cols(_tbl regclass, _col_pattern text
, OUT row_ct int, OUT col_ct bigint)
LANGUAGE plpgsql AS
$func$
DECLARE
_sql text;
BEGIN
   SELECT INTO _sql, col_ct
          format('UPDATE %s SET (%s) = (%s)'  -- use the _tbl parameter, not a hard-coded name
               , _tbl
               , string_agg(quote_ident(attname), ', ')
               , string_agg('NULL', ', ')
                )
        , count(*)
   FROM   pg_attribute
   WHERE  attrelid = _tbl
   AND    NOT attisdropped            -- no dropped columns
   AND    attnum > 0                  -- no system columns
   AND    attname LIKE _col_pattern;  -- only columns matching pattern
-- RAISE NOTICE '%', _sql; -- output SQL for debugging
EXECUTE _sql;
GET DIAGNOSTICS row_ct = ROW_COUNT;
END
$func$;
COMMENT ON FUNCTION f_update_cols(regclass, text)
IS 'Updates all columns of table _tbl ($1)
that match _col_pattern ($2) in a LIKE expression.
Returns the count of columns (col_ct) and rows (row_ct) affected.';
Call:
SELECT * FROM f_update_cols('myschema.tbl', 'foo%');
To make the function more practical, it returns information as described in the comment. More about obtaining the result status in plpgsql in the manual.
I use the variable _sql to hold the query string, so I can collect the number of columns found (col_ct) in the same query.
The object identifier type regclass is the most efficient way to automatically avoid SQL injection (and sanitize non-standard names) for the table name, too. You can use schema-qualified table names to avoid ambiguities. I would advise to do so if you (can) have multiple schemas in your db! See:
Table name as a PostgreSQL function parameter
There's no handy shortcut, sorry. If you have to do this kind of thing a lot, you could create a function to dynamically execute SQL and achieve your goal.
CREATE OR REPLACE FUNCTION reset_cols() RETURNS boolean AS $$
BEGIN
    EXECUTE (SELECT 'UPDATE table SET '
        || array_to_string(array(
               SELECT column_name::text
               FROM information_schema.columns
               WHERE table_name = 'table'
                 AND column_name::text LIKE 'contact_info_address_%'
           ), ' = NULL,')
        || ' = NULL');
    RETURN true;
END; $$ LANGUAGE plpgsql;
-- run the function
SELECT reset_cols();
It's not very nice though. A better function would be one that accepts the tablename and column prefix as args. Which I'll leave as an exercise for the readers :)
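For reference, here is a sketch of that parameterized variant, borrowing the regclass trick from the earlier answer to keep the table name injection-safe (the table name and prefix in the example call are hypothetical):
CREATE OR REPLACE FUNCTION reset_cols(_tbl regclass, _prefix text)
RETURNS boolean AS $$
BEGIN
    EXECUTE (
        SELECT format('UPDATE %s SET %s', _tbl,
                      string_agg(quote_ident(attname) || ' = NULL', ', '))
        FROM pg_attribute
        WHERE attrelid = _tbl
          AND NOT attisdropped      -- no dropped columns
          AND attnum > 0            -- no system columns
          AND attname LIKE _prefix || '%'
    );  -- note: EXECUTE raises an error if no column matches the prefix
    RETURN true;
END; $$ LANGUAGE plpgsql;

-- e.g.: SELECT reset_cols('tbl', 'contact_info_address_');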