Trying to run sha256 function
CREATE EXTENSION pgcrypto;
CREATE OR REPLACE FUNCTION sha256(bytea) returns text AS $$
SELECT encode(digest($1, 'sha256'), 'hex')
$$ LANGUAGE SQL STRICT IMMUTABLE;
WITH
tab_email as (SELECT 'my#email.com'::text as email FROM tmp)
INSERT INTO users (email, password) VALUES ((SELECT email FROM tab_email), sha256('mypass'));
I got this error:
ERROR: function sha256(text) does not exist
It's because Postgres's built-in sha256 function takes a bytea argument:
citus=> \df+ sha256
List of functions
Schema | Name | Result data type | Argument data types | Type | Volatility | Parallel | Owner | Security | Access privileges | Language | Source code | Description
------------+--------+------------------+---------------------+------+------------+----------+----------+----------+-------------------+----------+--------------+--------------
pg_catalog | sha256 | bytea | bytea | func | immutable | safe | postgres | invoker | | internal | sha256_bytea | SHA-256 hash
(1 row)
So just cast to ::bytea first.
citus=> select encode(sha256('a'::bytea), 'hex');
encode
------------------------------------------------------------------
ca978112ca1bbdcafac231b39a23dc4da786eff8147c4e72b9807785afee48bb
(1 row)
I ended up using
(SELECT encode(digest(password, 'sha256'), 'hex') FROM tab_password)
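For reference, the full statement this resolves to would look roughly like the following (a sketch that assumes the users table and the tab_email / tab_password CTEs from the question; digest() comes from pgcrypto):
WITH
tab_email as (SELECT 'my#email.com'::text as email),
tab_password as (SELECT 'mypass'::text as password)
INSERT INTO users (email, password)
SELECT e.email, encode(digest(p.password, 'sha256'), 'hex')
FROM tab_email e
CROSS JOIN tab_password p;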
If you create this function in Redshift, you will be able to use your own f_sha256:
CREATE OR REPLACE FUNCTION f_sha256 (mes VARCHAR)
returns VARCHAR
STABLE AS $$
import hashlib
return hashlib.sha256(mes).hexdigest()
$$ language plpythonu;
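Once defined, the UDF can be called like any scalar function to get the hex digest, for example:
SELECT f_sha256('mypass');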
In order to create a more complex custom aggregate function, I first followed this amazing tutorial.
Here is the data I use:
create table entries(
id serial primary key,
amount float8 not null
);
select setseed(0);
insert into entries(amount)
select (2000 * random()) - 1000
from generate_series(1, 1000000);
So I have this table:
id | amount | running_total
---------+-----------------------+--------------------
1 | -462.016298435628 | -462.016298435628
2 | 162.440904416144 | -299.575394019485
3 | -820.292402990162 | -1119.86779700965
4 | -866.230697371066 | -1986.09849438071
5 | -495.30001822859 | -2481.3985126093
6 | 772.393747232854 | -1709.00476537645
7 | -323.866365477443 | -2032.87113085389
8 | -856.917716562748 | -2889.78884741664
9 | 285.323366522789 | -2604.46548089385
10 | -867.916810326278 | -3472.38229122013
-- snip --
And I would like the max of the running_total column.
(I know I can do it without a new aggregate function, as sketched below, but it's for the demonstration.)
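For comparison, here is what that simpler version could look like (a sketch using a plain window function; it is also how the running_total column shown above can be computed):
select max(running_total)
from (
    -- running total of amount in id order
    select sum(amount) over (order by id) as running_total
    from entries
) t;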
So I've made this aggregate function:
create or replace function grt_sfunc(agg_state point, el float8)
returns point
immutable
language plpgsql
as $$
declare
greatest_sum float8;
current_sum float8;
begin
current_sum := agg_state[0] + el;
greatest_sum := 40;
/*if agg_state[1] < current_sum then
greatest_sum := current_sum;
else
greatest_sum := agg_state[1];
end if;*/
return point(current_sum, greatest_sum);
/*return point(3.14159, 0);*/
end;
$$;
create or replace function grt_finalfunc(agg_state point)
returns float8
immutable
strict
language plpgsql
as $$
begin
return agg_state[0];
end;
$$;
create or replace aggregate greatest_running_total (float8)
(
sfunc = grt_sfunc,
stype = point,
finalfunc = grt_finalfunc
);
Normally it should work, but in the end, it gives me a null result:
select greatest_running_total(amount order by id asc)
from entries;
 greatest_running_total
-------------------------
 [NULL]
I tried changing the data type and checked the first two aggregate functions separately; they work well on their own. Could someone help me find a solution, please? :)
Thank you very much!
You need to set a non-NULL initcond for the aggregate. Presumably that would be (0,0), or maybe a very large negative number for each coordinate? Or manually check for agg_state being NULL.
Also, it seems like your grt_finalfunc should be returning subscript [1], not [0].
So, the solution was to add an initial condition. Indeed, without an initial condition, the initial aggregate state is NULL :D (thank you @jjanes and @The Impaler)
So I corrected my aggregate function:
create or replace aggregate greatest_running_total (float8)
(
sfunc = grt_sfunc,
stype = point,
finalfunc = grt_finalfunc,
initcond = '(0,0)'
);
And, indeed, the greatest sum is stored in the second coordinate of the point, so the final function has to return agg_state[1] rather than agg_state[0]... That was my second mistake.
Thank you very much!!
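For completeness, here is a sketch of the corrected state and final functions implied by the two answers (the commented-out comparison restored, and the final function returning the second coordinate, which holds the greatest sum); this version is inferred, not copied from the thread:
create or replace function grt_sfunc(agg_state point, el float8)
returns point
immutable
language plpgsql
as $$
declare
  current_sum float8;
  greatest_sum float8;
begin
  -- first coordinate: running total so far
  current_sum := agg_state[0] + el;
  -- second coordinate: greatest running total seen so far
  if agg_state[1] < current_sum then
    greatest_sum := current_sum;
  else
    greatest_sum := agg_state[1];
  end if;
  return point(current_sum, greatest_sum);
end;
$$;
create or replace function grt_finalfunc(agg_state point)
returns float8
immutable
strict
language plpgsql
as $$
begin
  -- return the greatest sum (second coordinate), not the running total
  return agg_state[1];
end;
$$;
Note that with initcond = '(0,0)' the reported maximum can never be below zero; if all running totals are negative, start the second coordinate at a very large negative value instead, as suggested above.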
I have this type:
CREATE TYPE public.user_type AS (
text_1 VARCHAR(512),
text_2 VARCHAR(1000),
jsonb_1 JSONB,
jsonb_2 JSONB
);
And I need to know how to format a literal value to set an array of this type for a unit test. I keep getting "malformed array literal" errors when I try to set it.
SQL Error [22P02]: ERROR: malformed array literal: "{{"(text 1,text 2,{"key_1":"value_1"},{"key_2":"value_2"})"}"
Detail: Unexpected array element.
Where: PL/pgSQL function inline_code_block line 4 during statement block local variable initialization
with this snippet:
DO $$
DECLARE
user_type public.user_type[] = '{{"(text 1,text 2,{"key_1":"value_1"},{"key_2":"value_2"})"}';
BEGIN
END $$;
How do I form a literal string to set this composite type?
For other unit tests I can declare composite types without JSONB elements and set them with literal values. For example this works:
For this type:
CREATE TYPE public.user_type_2 AS (
text_1 VARCHAR(512),
text_2 VARCHAR(1000)
);
This snippet will return multiple rows:
DO $$
DECLARE
user_type_2 public.user_type_2[] = '{{"(string_1a,string_1b)"},{"(string_2a,string_2b)"}}';
BEGIN
DROP TABLE IF EXISTS _results;
CREATE TEMPORARY TABLE _results AS
SELECT * FROM UNNEST(user_type_2) x (text_1, text_2);
END $$;
SELECT * FROM _results;
as I would expect:
+-----------+-----------+
| text_1 | text_2 |
+-----------+-----------+
| string_1a | string_1b |
| string_2a | string_2b |
+-----------+-----------+
You have to use several levels of escaping for that:
The array element has to be surrounded by double quotes ("), because it contains commas, { and }.
Each " inside the array element has to be escaped as \".
The JSON values in the composite type have to be surrounded by " (which is now \") because they contain commas.
The " characters of the JSON strings have to be doubled to escape them at that level, so they eventually become \"\".
Your assignment could look like this:
user_type public.user_type[] := '{"(text 1,text 2,\"{\"\"key_1\"\": \"\"value_1\"\"}\",\"{\"\"key_2\"\": \"\"value_2\"\"}\")"}';
Yuck.
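If the string-literal escaping gets too painful, an alternative (just a sketch, not from the original answer) is to build the array with the ROW() and ARRAY[] constructors and let PostgreSQL handle the quoting:
DO $$
DECLARE
    -- same content as the literal above, built without manual escaping
    user_type public.user_type[] := ARRAY[
        ROW('text 1', 'text 2',
            '{"key_1": "value_1"}'::jsonb,
            '{"key_2": "value_2"}'::jsonb)::public.user_type
    ];
BEGIN
    RAISE NOTICE '%', user_type;
END $$;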
I have PostgreSQL 9.6.8 running on Fedora 27 64bit. When I execute this query:
UPDATE tbl SET textsearchable_index_col =
setweight(to_tsvector('french', coalesce("col1",'')), 'D') ||
setweight(to_tsvector('french', coalesce("col2",'')), 'D');
I get this error:
ERROR: cache lookup failed for function 3625
********** Error **********
ERROR: cache lookup failed for function 3625
SQL state: XX000
but when I execute either:
UPDATE tbl SET textsearchable_index_col =
setweight(to_tsvector('french', coalesce("col1",'')), 'D');
or
UPDATE tbl SET textsearchable_index_col =
setweight(to_tsvector('french', coalesce("col2",'')), 'D');
I get:
Query returned successfully: 0 rows affected, 11 msec execution time.
My question is why does it work for either column individually but it does not work when together?
The PostgreSQL documentation shows that it should be possible to use both columns in the same query (at the end of section 12.3.1).
Edit: Here is what the system returns for Laurenz's queries. The first query returns:
oprname | oprleft | oprright | oprcode
---------+----------+----------+----------
|| | tsvector | tsvector | 3625
The second query returns an empty result set.
Your database is corrupted, and you are lacking the function tsvector_concat, which is the function behind the || operator.
This is how it should look on a healthy system:
SELECT oprname, oprleft::regtype, oprright::regtype, oprcode
FROM pg_operator
WHERE oid = 3633;
oprname | oprleft | oprright | oprcode
---------+----------+----------+-----------------
|| | tsvector | tsvector | tsvector_concat
(1 row)
SELECT proname, proargtypes::regtype[], prosrc
FROM pg_proc
WHERE oid = 3625;
proname | proargtypes | prosrc
-----------------+---------------------------+-----------------
tsvector_concat | [0:1]={tsvector,tsvector} | tsvector_concat
(1 row)
The second part is missing in your case.
You should restore from a backup.
Try to figure out how you got into this mess so that you can avoid it in the future.
So I have a lot of data like this:
pix11co;10.115.0.1
devapp087co;10.115.0.100
old_main-mgr;10.115.0.101
radius03co;10.115.0.110
And I want to delete the stuff after the ; so it just becomes
pix11co
devapp087co
old_main-mgr
radius03co
Since they're all different I can live with the semi-colon staying there.
I have the following query; it runs successfully but doesn't remove anything.
UPDATE dns$ SET [Name;] = REPLACE ([Name;], '%_;%__________%', '%_;');
What wildcards can I use to specify the characters after the ; ?
Can you use CHARINDEX? E.g.:
SELECT LEFT('pix11co;10.115.0.1', CHARINDEX(';', 'pix11co;10.115.0.1') - 1)
You can use the SUBSTRING() and CHARINDEX() functions:
CREATE TABLE MyStrings (
STR VARCHAR(MAX)
);
INSERT INTO MyStrings VALUES
('pix11co;10.115.0.1'),
('devapp087co;10.115.0.100'),
('old_main-mgr;10.115.0.101'),
('radius03co;10.115.0.110');
SELECT STR, SUBSTRING(STR, 1, CHARINDEX(';', STR) -1 ) AS Result
FROM MyStrings;
Results:
+---------------------------+--------------+
| STR | Result |
+---------------------------+--------------+
| pix11co;10.115.0.1 | pix11co |
| devapp087co;10.115.0.100 | devapp087co |
| old_main-mgr;10.115.0.101 | old_main-mgr |
| radius03co;10.115.0.110 | radius03co |
+---------------------------+--------------+
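Applied to the table from the question, the same idea would turn into something like this (a sketch; it assumes the dns$ / [Name;] names are usable as-is, and the WHERE clause skips rows without a semicolon, where CHARINDEX() returns 0 and LEFT() would fail):
UPDATE dns$
SET [Name;] = LEFT([Name;], CHARINDEX(';', [Name;]) - 1)
WHERE CHARINDEX(';', [Name;]) > 0;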
In my database I have a table "Datapoint" with the two columns "Id" (integer) and "Description" (character varying).
I then have a table "Logging" with the three columns "Id" (integer), "Dt" (timestamp without time zone) and "Value" (double precision).
I also have the following function:
CREATE OR REPLACE FUNCTION count_estimate(query text)
RETURNS integer AS
$BODY$
DECLARE
    rec record;
    rows integer;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
This function returns the estimated count of entries found by a SELECT query; e.g., SELECT count_estimate('SELECT * FROM "Logging" WHERE "Id" = 3') would return 2.
I would now like to combine a SELECT query on the table "Datapoint" with the return value of my function, so that my result looks like this:
ID | Description | EstimatedCount
1 | Datapoint 1 | 3
2 | Datapoint 2 | 4
3 | Datapoint 3 | 2
4 | Datapoint 4 | 1
My SELECT query should look something like this:
SELECT
"Datapoint"."Id",
"Datapoint"."Description",
(SELECT count_estimate ('SELECT * FROM "Logging" WHERE "Logging"."Id" = "Datapoint"."Id"')) AS "EstimatedCount"
FROM
"Datapoint"
So my problem is to write a functioning SELECT query for my purposes.
What about:
SELECT
"Datapoint"."Id",
"Datapoint"."Description",
count_estimate ('SELECT * FROM "Logging" WHERE "Logging"."Id" = "Datapoint"."Id"') AS "EstimatedCount"
FROM
"Datapoint"
You almost got it right, except that you need to supply the value of "Datapoint"."Id":
SELECT
"Datapoint"."Id",
"Datapoint"."Description",
count_estimate(
'SELECT * FROM "Logging" WHERE "Logging"."Id" = ' || "Datapoint"."Id"
) AS "EstimatedCount"
FROM "Datapoint";