Get row counts and distinct row counts for every table in a Snowflake database? - sql

I am trying to create a data quality dashboard showing, for every table in my Snowflake database, the row count, the distinct row count, and the number of duplicates. The table I want should look like this:
table_name | row_count | distinct_row_count | duplicates
-----------|-----------|--------------------|-----------
table_a    | 1,372     | 1,370              | 2
table_b    | 4,735     | 4,735              | 0
I've been able to get the table name and row count using information_schema.tables. I'm trying to figure out how to get distinct counts for all of these tables. The primary key column for every table is different. On some tables it will be a user_id, on others a session_id, etc.
I've looked through the Snowflake documentation for built-in functions that could help. I've explored the information/usage schemas, etc. I'm not sure if a stored procedure would help here (I haven't used a lot of those).
In Python or another language, I'd loop through every table and calculate what I need. Is there a way to do this in SQL?
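For reference, a minimal sketch of keeping that loop in SQL: generate one counting statement per table from information_schema and run each result. It assumes whole-row distinctness via HASH(*) is an acceptable stand-in for each table's key column, and my_db is a placeholder database name:

-- Generate one COUNT query per base table; run each generated statement
-- (manually or from a stored procedure) to fill the dashboard table.
SELECT 'SELECT ''' || table_name || ''' AS table_name, '
    || 'COUNT(*) AS row_count, '
    || 'COUNT(DISTINCT HASH(*)) AS distinct_row_count, '             -- HASH(*) hashes the whole row
    || 'COUNT(*) - COUNT(DISTINCT HASH(*)) AS duplicates '
    || 'FROM ' || table_catalog || '.' || table_schema || '.' || table_name AS count_sql
FROM my_db.information_schema.tables
WHERE table_type = 'BASE TABLE';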

create or replace TABLE DEMO_DB.PUBLIC.SNOWBALL (
    TABLE_NAME          VARCHAR(314),
    TOTAL_ROWS          NUMBER(18,0),
    TABLE_LAST_ALTERED  TIMESTAMP_LTZ(9),
    TABLE_CREATED       TIMESTAMP_LTZ(9),
    TABLE_BYTES         NUMBER(18,0),
    COL_NAME            ARRAY,
    COL_DATA_TYPE       ARRAY,
    COL_HLL             ARRAY,
    COL_NULL_CNT        ARRAY,
    COL_MIN             ARRAY,
    COL_MAX             ARRAY,
    COL_TOP             ARRAY,
    COL_AVG             ARRAY,
    COL_MODE            ARRAY,
    COL_STDDEV          ARRAY,
    COL_VAR_POP         ARRAY,
    COL_AVG_LENGTH      ARRAY,
    STATS_RUN_DATE_TIME TIMESTAMP_LTZ(9)
);

create or replace view SNOWBALL_COLUMNS as
select
    concat_ws('.', table_catalog, table_schema, table_name) as full_table_name,
    *
from (
    select * from demo_db.information_schema.columns
    union
    select * from snowflake_sample_data.information_schema.columns
    union
    select * from util_db.information_schema.columns
);

create or replace view SNOWBALL_TABLES as
select
    concat_ws('.', table_catalog, table_schema, table_name) as full_table_name,
    *
from (
    select * from demo_db.information_schema.tables
    union
    select * from snowflake_sample_data.information_schema.tables
    union
    select * from util_db.information_schema.tables
);
CREATE OR REPLACE PROCEDURE DEMO_DB.PUBLIC.SNOWBALL(
db_name STRING,
schema_name STRING,
snowball_table STRING,
max_age_days FLOAT,
limit FLOAT
)
RETURNS VARIANT
LANGUAGE JAVASCRIPT
COMMENT = 'Collects table and column stats.'
EXECUTE AS OWNER
AS
$$
var validLimit = Math.max(LIMIT, 0); // prevent SQL syntax error caused by negative numbers
var sqlGenerateInserts = `
WITH snowball_tables AS (
SELECT CONCAT_WS('.', table_catalog, table_schema, table_name) AS full_table_name, *
FROM IDENTIFIER(?) -- <<DB_NAME>>.INFORMATION_SCHEMA.TABLES
),
snowball_columns AS (
SELECT CONCAT_WS('.', table_catalog, table_schema, table_name) AS full_table_name, *
FROM IDENTIFIER(?) -- <<DB_NAME>>.INFORMATION_SCHEMA.COLUMNS
),
snowball AS (
SELECT table_name, MAX(stats_run_date_time) AS stats_run_date_time
FROM IDENTIFIER(?) -- <<SNOWBALL_TABLE>> table
GROUP BY table_name
)
SELECT full_table_name, aprox_row_count,
CONCAT (
'INSERT INTO IDENTIFIER(''', ?, ''') ', -- SNOWBALL table
'(table_name,total_rows,table_last_altered,table_created,table_bytes,col_name,',
'col_data_type,col_hll,col_avg_length,col_null_cnt,col_min,col_max,col_top,col_mode,col_avg,stats_run_date_time)',
'SELECT ''', full_table_name, ''' AS table_name, ',
table_stats_sql,
', ARRAY_CONSTRUCT( ', col_name, ') AS col_name',
', ARRAY_CONSTRUCT( ', col_data_type, ') AS col_data_type',
', ARRAY_CONSTRUCT( ', col_hll, ') AS col_hll',
', ARRAY_CONSTRUCT( ', col_avg_length, ') AS col_avg_length',
', ARRAY_CONSTRUCT( ', col_null_cnt, ') AS col_null_cnt',
', ARRAY_CONSTRUCT( ', col_min, ') AS col_min',
', ARRAY_CONSTRUCT( ', col_max, ') AS col_max',
', ARRAY_CONSTRUCT( ', col_top, ') AS col_top',
', ARRAY_CONSTRUCT( ', col_MODE, ') AS col_MODE',
', ARRAY_CONSTRUCT( ', col_AVG, ') AS col_AVG',
', CURRENT_TIMESTAMP() AS stats_run_date_time ',
' FROM ', quoted_table_name
) AS insert_sql
FROM (
SELECT
tbl.full_table_name,
tbl.row_count AS aprox_row_count,
CONCAT ( '"', col.table_catalog, '"."', col.table_schema, '"."', col.table_name, '"' ) AS quoted_table_name,
CONCAT (
'COUNT(1) AS total_rows,''',
IFNULL( tbl.last_altered::VARCHAR, 'NULL'), ''' AS table_last_altered,''',
IFNULL( tbl.created::VARCHAR, 'NULL'), ''' AS table_created,',
IFNULL( tbl.bytes::VARCHAR, 'NULL'), ' AS table_bytes' ) AS table_stats_sql,
LISTAGG (
CONCAT ('''', col.full_table_name, '.', col.column_name, '''' ), ', '
) AS col_name,
LISTAGG ( CONCAT('''', col.data_type, '''' ), ', ' ) AS col_data_type,
LISTAGG ( CONCAT( ' HLL(', '"', col.column_name, '"',') ' ), ', ' ) AS col_hll,
LISTAGG ( CONCAT( ' AVG(ZEROIFNULL(LENGTH(', '"', col.column_name, '"','))) ' ), ', ' ) AS col_avg_length,
LISTAGG ( CONCAT( ' SUM( IFF( ', '"', col.column_name, '"',' IS NULL, 1, 0) ) ' ), ', ') AS col_null_cnt,
LISTAGG ( IFF ( col.data_type = 'NUMBER', CONCAT ( ' MODE(', '"', col.column_name, '"', ') ' ), 'NULL' ), ', ' ) AS col_MODE,
LISTAGG ( IFF ( col.data_type = 'NUMBER', CONCAT ( ' MIN(', '"', col.column_name, '"', ') ' ), 'NULL' ), ', ' ) AS col_min,
LISTAGG ( IFF ( col.data_type = 'NUMBER', CONCAT ( ' MAX(', '"', col.column_name, '"', ') ' ), 'NULL' ), ', ' ) AS col_max,
LISTAGG ( IFF ( col.data_type = 'NUMBER', CONCAT ( ' AVG(', '"', col.column_name,'"',') ' ), 'NULL' ), ', ' ) AS col_AVG,
LISTAGG ( CONCAT ( ' APPROX_TOP_K(', '"', col.column_name, '"', ', 100, 10000)' ), ', ' ) AS col_top
FROM snowball_tables tbl JOIN snowball_columns col ON col.full_table_name = tbl.full_table_name
LEFT OUTER JOIN snowball sb ON sb.table_name = tbl.full_table_name
WHERE (tbl.table_catalog, tbl.table_schema) = (?, ?)
AND ( sb.table_name IS NULL OR sb.stats_run_date_time < TIMESTAMPADD(DAY, - FLOOR(?), CURRENT_TIMESTAMP()) )
--AND tbl.row_count > 0 -- NB: also excludes views (table_type = 'VIEW')
GROUP BY tbl.full_table_name, aprox_row_count, quoted_table_name, table_stats_sql, stats_run_date_time
ORDER BY stats_run_date_time NULLS FIRST )
LIMIT ` + validLimit;
var tablesAnalysed = [];
var currentSql;
try {
currentSql = sqlGenerateInserts;
var generateInserts = snowflake.createStatement( {
sqlText: currentSql,
binds: [
`"${DB_NAME}".information_schema.tables`,
`"${DB_NAME}".information_schema.columns`,
SNOWBALL_TABLE, SNOWBALL_TABLE,
DB_NAME, SCHEMA_NAME, MAX_AGE_DAYS, LIMIT
]
} );
var insertStatements = generateInserts.execute();
// loop over generated INSERT statements and execute them
while (insertStatements.next()) {
var tableName = insertStatements.getColumnValue('FULL_TABLE_NAME');
currentSql = insertStatements.getColumnValue('INSERT_SQL');
var insertStatement = snowflake.createStatement( {
sqlText: currentSql,
binds: [ SNOWBALL_TABLE ]
} );
var insertResult = insertStatement.execute();
tablesAnalysed.push(tableName);
}
return { result: "SUCCESS", analysedTables: tablesAnalysed };
}
catch (err) {
return {
error: err,
analysedTables: tablesAnalysed,
sql: currentSql
};
}
$$;
call DEMO_DB.PUBLIC.SNOWBALL(
'SNOWFLAKE_SAMPLE_DATA',
'TPCH_SF1',
'DEMO_DB.PUBLIC.SNOWBALL',
1, -- evals tables not analysed for x days -- first time you run this doesn't matter.
1000 -- limits # of tables analysed
);
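Once the procedure has run, the per-column arrays in SNOWBALL can be unpivoted back to one row per column; a sketch, assuming the SNOWBALL table defined above (COL_NAME entries are stored as 'db.schema.table.column' strings):

-- Latest stats, one row per (table, column): flatten COL_NAME and index
-- the parallel arrays at the same position.
SELECT s.table_name,
       s.total_rows,
       f.value::string                       AS column_name,
       GET(s.col_hll, f.index)::number       AS approx_distinct_values,
       GET(s.col_null_cnt, f.index)::number  AS null_count
FROM DEMO_DB.PUBLIC.SNOWBALL s,
     LATERAL FLATTEN(input => s.col_name) f
QUALIFY ROW_NUMBER() OVER (
            PARTITION BY s.table_name, f.index
            ORDER BY s.stats_run_date_time DESC) = 1;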

I've done somewhat of an overkill solution for this.
The SQL supplied basically does everything you've asked for, plus top 100 values, min, max, stddev, avg, and null % at the column level for every table in ALL databases.
Oh yes, and it works out ALL PK/FKs, returning not just the PK but the description instead.
Runs in seconds. All SQL is available from a post in the Snowflake community. Hit me up if you want the really smart stuff :-)
SQL here:
https://community.snowflake.com/s/group/0F90Z000000IOX5SAO/general-snowflake-community-help

Related

Group by on multiple subqueries

I'm new to Oracle SQL and am still learning. I'm trying to work out what GROUP BY I need to use.
The subquery works by itself:
SELECT TO_CHAR(CREATE_DATE_TIME, 'DD-MON-YYYY') "DTTM"
, CASE_NBR
, COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID) "REF_FIELD_1"
, COALESCE(PT.REF_FIELD_2, LPN.ASN_ID || LPN.ITEM_ID) "REF_FIELD_2"
FROM PIX_TRAN PT, LPN
WHERE ( ( PT.TRAN_TYPE = '300'
AND PT.TRAN_CODE = '01'
AND PT.ACTN_CODE = '20' )
OR ( PT.TRAN_TYPE = '300'
AND PT.TRAN_CODE = '04'
AND PT.ACTN_CODE = '21' ) )
AND SUBSTR(COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID), 1, INSTR(COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID), '_', 1)) != 'Return_'
AND PT.CASE_NBR = LPN.TC_LPN_ID (+)
AND PT.WHSE = 'DCV'
AND TRUNC(CREATE_DATE_TIME) = TRUNC(SYSDATE)
But when I try to add it as a subquery with a GROUP BY, I can't work out what the correct GROUP BY should be.
SELECT 'PO Lines/LPNs Putaway' AS "FACILITY_ACTIVITY"
, TRUNC DTTM AS "CREATED"
, COUNT(DISTINCT REF_FIELD_1 || REF_FIELD_2)|| '/'|| COUNT(DISTINCT CASE_NBR) "Total"
FROM (
SELECT TO_CHAR(CREATE_DATE_TIME, 'DD-MON-YYYY') "DTTM"
, CASE_NBR
, COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID) "REF_FIELD_1"
, COALESCE(PT.REF_FIELD_2, LPN.ASN_ID || LPN.ITEM_ID) "REF_FIELD_2"
FROM PIX_TRAN PT, LPN
WHERE ( ( PT.TRAN_TYPE = '300'
AND PT.TRAN_CODE = '01'
AND PT.ACTN_CODE = '20' )
OR ( PT.TRAN_TYPE = '300'
AND PT.TRAN_CODE = '04'
AND PT.ACTN_CODE = '21' ) )
AND SUBSTR(COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID), 1, INSTR(COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID), '_', 1)) != 'Return_'
AND PT.CASE_NBR = LPN.TC_LPN_ID (+)
AND PT.WHSE = 'DCV'
AND TRUNC(CREATE_DATE_TIME) = TRUNC(SYSDATE)
)
GROUP BY TRUNC(DTTM);
I've tried the following GROUP BY clauses:
GROUP BY TRUNC(DTTM)
ERROR - "FROM Keyword not found where expected"
GROUP BY TRUNC(TO_CHAR(CREATE_DATE_TIME, 'DD-MON-YYYY'))
with changing the select clause to
TRUNC(TO_CHAR(CREATE_DATE_TIME, 'DD-MON-YYYY')) AS "CREATED"
ERROR - "CREATE_DATE_TIME" invalid identifier
GROUP BY TRUNC(CREATE_DATE_TIME)
with changing the select clause to
TRUNC(CREATE_DATE_TIME) AS "CREATED"
ERROR - "CREATE_DATE_TIME" invalid identifier
Can someone please point out what I'm missing?
I formatted your queries, which makes it easy to see the issue.
Original
SELECT
to_char(create_date_time, 'DD-MON-YYYY') "DTTM",
case_nbr,
coalesce(pt.ref_field_1, lpn.tc_asn_id) "REF_FIELD_1",
coalesce(pt.ref_field_2, lpn.asn_id || lpn.item_id) "REF_FIELD_2"
FROM
pix_tran pt,
lpn
WHERE
( ( pt.tran_type = '300'
AND pt.tran_code = '01'
AND pt.actn_code = '20' )
OR ( pt.tran_type = '300'
AND pt.tran_code = '04'
AND pt.actn_code = '21' ) )
AND substr(coalesce(pt.ref_field_1, lpn.tc_asn_id),
1,
instr(coalesce(pt.ref_field_1, lpn.tc_asn_id),
'_',
1)) != 'Return_'
AND pt.case_nbr = lpn.tc_lpn_id (+)
AND pt.whse = 'DCV'
AND trunc(create_date_time) = trunc(sysdate)
Inline view
select 'PO Lines/LPNs Putaway' as "FACILITY_ACTIVITY",
trunc dttm AS "CREATED" , COUNT(DISTINCT REF_FIELD_1 || REF_FIELD_2)|| '/'|| COUNT(DISTINCT CASE_NBR) "Total" FROM
(
select to_char(
create_date_time,
'DD-MON-YYYY'
) "DTTM",
case_nbr,
coalesce(
pt.ref_field_1,
lpn.tc_asn_id
) "REF_FIELD_1",
coalesce(
pt.ref_field_2,
lpn.asn_id || lpn.item_id
) "REF_FIELD_2"
from pix_tran pt,
lpn
where ( ( pt.tran_type = '300'
and pt.tran_code = '01'
and pt.actn_code = '20' )
or ( pt.tran_type = '300'
and pt.tran_code = '04'
and pt.actn_code = '21' ) )
and substr(
coalesce(
pt.ref_field_1,
lpn.tc_asn_id
),
1,
instr(
coalesce(
pt.ref_field_1,
lpn.tc_asn_id
),
'_',
1
)
) != 'Return_'
and pt.case_nbr = lpn.tc_lpn_id (+)
and pt.whse = 'DCV'
and trunc(create_date_time) = trunc(sysdate)
)
group by trunc(dttm);
You are missing the brackets on your TRUNC, and since your DTTM is a string, using TRUNC on it at all is probably not appropriate.
I would move the TRUNC inside the subquery (it will reduce the datetime to a date) and then just GROUP BY DTTM.
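A sketch of that change (the inner query now returns TRUNC(CREATE_DATE_TIME) as a DATE and the outer query groups by it directly; the predicates are the same as in the original inner query):

SELECT 'PO Lines/LPNs Putaway' AS "FACILITY_ACTIVITY"
     , DTTM AS "CREATED"
     , COUNT(DISTINCT REF_FIELD_1 || REF_FIELD_2) || '/' || COUNT(DISTINCT CASE_NBR) "Total"
FROM (
    SELECT TRUNC(CREATE_DATE_TIME) "DTTM"   -- a DATE truncated to midnight, not a string
         , CASE_NBR
         , COALESCE(PT.REF_FIELD_1, LPN.TC_ASN_ID) "REF_FIELD_1"
         , COALESCE(PT.REF_FIELD_2, LPN.ASN_ID || LPN.ITEM_ID) "REF_FIELD_2"
    FROM PIX_TRAN PT, LPN
    WHERE ...                               -- same predicates as above
)
GROUP BY DTTM;

If CREATED still needs the 'DD-MON-YYYY' text, apply TO_CHAR in the outer SELECT rather than inside the subquery.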

Duplicates in SQL when using DISTINCT

Not sure why I keep getting duplicates with this query. It should be easy, but for some reason I just cannot figure it out.
This is my query:
SELECT DISTINCT
STUFF((SELECT
'; ' +
CASE
WHEN staff.lastname IS NOT NULL
THEN UPPER(REPLACE(RTRIM(staff.lastname), ' ', '') + ', ' + RTRIM(staff.firstname))
ELSE UPPER('Not Assigned')
END
FROM
ca_case_assign ca
JOIN
staff ON staff.username = ca.staffusername
JOIN
tbl_case c on ca.appid = c.col_caseid
WHERE
ca.clientusername = c.col_username
FOR XML PATH('')), 1, 1, '') [CaseManager]
and this is the result that I get:
LOCALSTAFF, THERESA; LOCALSTAFF, THERESA; O'MALLEY, ELLEN; STAFF, STATE; STAFF, STATE; STAFF, STATE; STAFF, STATE; STAFF, STATE; STAFF, BC; STAFF, BC; STAFF, BC; STAFF, BC; STAFF, BC; STAFF, BC; STAFF, BC;
Which is obviously incorrect.
Please help, thank you.
The inner query should have the DISTINCT instead: the FOR XML PATH('') subquery concatenates every joined row into one string, so an outer DISTINCT can't remove the repeats inside it.
SELECT
STUFF((SELECT DISTINCT
'; ' +
CASE
WHEN staff.lastname IS NOT NULL
THEN UPPER(REPLACE(RTRIM(staff.lastname), ' ', '') + ', ' + RTRIM(staff.firstname))
ELSE UPPER('Not Assigned')
END
FROM
ca_case_assign ca
JOIN
staff ON staff.username = ca.staffusername
JOIN
tbl_case c on ca.appid = c.col_caseid
WHERE
ca.clientusername = c.col_username
FOR XML PATH('')), 1, 1, '') [CaseManager]

How to remove and replace ")" space?

I have a problem: how do I remove the spaces between ')', '(' and '/' in SQL? I just want to remove the spaces, NOT the text. How do I do that?
For example:
Sek. 175 (1) (a)/(b) atau Sek. 187B (1) (a)/(b)
And I want the text to be like this:
Sek.175(1)(a)/(b) atau Sek.187B(1)(a)/(b)
This is my query:
SELECT distinct mhn.id_mohon,
'oleh sebab (' || ku.ruj_kanun || ')' ruj_kanun
FROM mohon mhn, kod_urusan ku, mohon_ruj_luar mrl, pguna pg,
kod_perintah kp
WHERE mhn.id_mohon = :p_id_mohon
AND mhn.kod_urusan = ku.kod(+)
AND mhn.id_mohon = mrl.id_mohon(+)
AND mrl.kod_perintah = kp.kod(+)
AND mhn.dimasuk = pg.id_pguna(+)
AND mhn.kod_urusan = 'PHKK'
Anyone know about this?
replace(
regexp_replace(
regexp_replace(
regexp_replace(
string,
'\s([a-zA-Z]+($|\W))', chr(0)||'\1'
),
'((^|\W)[a-zA-Z]+)\s', '\1'||chr(0)
),
'\s'),
chr(0), ' ')
fiddle
Definitely not the most effective, but this should work:
REPLACE(REPLACE(column, ' ', ''), 'atau', ' atau ')
Replace(') (', ')(')
etcetera.
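For this particular string, a narrower approach is to strip only the spaces that sit before an opening parenthesis and after 'Sek.'; a sketch, assuming Oracle's REGEXP_REPLACE and the sample value from the question:

SELECT REGEXP_REPLACE(
         REGEXP_REPLACE('Sek. 175 (1) (a)/(b) atau Sek. 187B (1) (a)/(b)',
                        '[[:space:]]+\(', '('),     -- drop spaces before "("
         'Sek\.[[:space:]]+', 'Sek.') AS cleaned    -- drop the space after "Sek."
FROM dual;
-- cleaned: Sek.175(1)(a)/(b) atau Sek.187B(1)(a)/(b)

Applied to the query in the question, the same expression would wrap ku.ruj_kanun instead of the literal.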

Multiple joins sql

I have a query which must contain 3 joins; however, I get this error:
Syntax error (Missing operator)
The SQL:
SELECT
Agents.[PF],
Agents.[User_ID],
Agents.[First_Name],
Agents.[Second_Name],
Agents.[Third_Name],
Agents.[Family_Name],
Agents.[Gender],
Agents.[Contract_Type],
Agents.Area,
Teams.Team_Name,
Agents.Hiring_Date,
Resignation_Pool.Resignation_Date,
Resignation_Pool.Effective_Date,
Replace(
IIf(Skills.Skill_Directory IS NULL, '', 'Directory, ')
+ IIf(Skills.Skill_TRC IS NULL, '', 'TRC, ')
+ IIf(Skills.Skill_Prepaid IS NULL, '', 'Prepaid, ')
+ IIf(Skills.Skill_Postpaid IS NULL, '', 'Postpaid, ')
+ IIf(Skills.Skill_KeyAccount IS NULL, '', 'KeyAccount, ')
+ IIf(Skills.Skill_Blackberry IS NULL, '', 'Blackberry, ')
+ IIf(Skills.Skill_Broadband IS NULL, '', 'Broadband, ')
+ IIf(Skills.Skill_Concierge IS NULL, '', 'Concierge, ')
+ IIf(Skills.Skill_ISP IS NULL, '', 'ISP, ')
+ IIf(Skills.Skill_Mada IS NULL, '', 'Mada, ')
+ IIf(Skills.Skill_CSCS IS NULL, '', 'CSCS, ')
+ '$', ', $', ''
) AS Skills
FROM Agents
LEFT JOIN Resignation_Pool
ON Agents.PF = Resignation_Pool.PF
LEFT JOIN Teams
ON Agents.Team = Teams.ID
LEFT JOIN Skills
ON Agents.PF = Skills.PF
WHERE Agents.Contract_Status = 'Active'
What is causing this error?
MS Access requires parentheses around JOIN syntax with multiple tables. You will need to use something similar to this:
FROM ((Agents
LEFT JOIN Resignation_Pool
ON Agents.PF = Resignation_Pool.PF)
LEFT JOIN Teams
ON Agents.Team = Teams.ID)
LEFT JOIN Skills
ON Agents.PF = Skills.PF

Oracle - SubQuery returning Multiple rows

I have the following table structure:
HSM
HSM_EXC_CODE Y VARCHAR2(60)
HSM_INSTR_CODE Y VARCHAR2(60)
HSM_ISIN Y VARCHAR2(60)
HSM_VWD_TICKERSYMBL Y VARCHAR2(80)
TENFORE_EXCHANGE_MAP
HS_MARKET Y VARCHAR2(40)
TF_EXCHANGE Y VARCHAR2(40)
TFV
TFE_ID Y NUMBER(22)
TFE_VSE_CODE Y VARCHAR2(1000)
Different TFE_IDs can have the same TFE_VSE_CODE! I think this is what I'm missing in the update query below.
VSD
VSD_ON Y VARCHAR2(160)
VSD_ISIN Y VARCHAR2(15)
The tables are connected like the following:
TENFORE_EXCHANGE_MAP.HS_MARKET = HSM.HSM_EXC_CODE
TENFORE_EXCHANGE_MAP.TF_EXCHANGE = TFV.TFE_ID
I'm trying to fill the hsm_isin and hsm_on fields. To reach that goal I'm trying to generate the names from hsm.hsm_exc_code . tfv.tfe_vse_code, but I'm doing it wrong, because I'm getting the error from the title. This is what I have tried:
UPDATE hsm
SET hsm_isin =
(SELECT distinct vsd.vsd_isin
FROM vsd, tfv, TENFORE_EXCHANGE_MAP
WHERE vsd.vsd_on = hsm.hsm_instr_code || '.' || tfv.tfe_vse_code
AND hsm.hsm_exc_code = TENFORE_EXCHANGE_MAP.HS_MARKET
AND TENFORE_EXCHANGE_MAP.TF_EXCHANGE = tfv.tfe_id)
,hsm.hsm_vwd_tickersymbl =
(SELECT distinct vsd.vsd_on
FROM vsd, tfv, TENFORE_EXCHANGE_MAP
WHERE vsd.vsd_on = hsm.hsm_instr_code || '.' || tfv.tfe_vse_code
AND hsm.hsm_exc_code = TENFORE_EXCHANGE_MAP.HS_MARKET
AND TENFORE_EXCHANGE_MAP.TF_EXCHANGE = tfv.tfe_id);
There must be more than one row per key in either the first or the second subquery.
Try something like:
SELECT hsm.hsm_instr_code,
count( distinct( vsd.vsd_on ) ) cnt1,
count( distinct( vsd.vsd_isin ) ) cnt2
FROM vsd, tfv, TENFORE_EXCHANGE_MAP, hsm
WHERE vsd.vsd_on = hsm.hsm_instr_code || '.' || tfv.tfe_vse_code
AND hsm.hsm_exc_code = TENFORE_EXCHANGE_MAP.HS_MARKET
AND TENFORE_EXCHANGE_MAP.TF_EXCHANGE = tfv.tfe_id
GROUP BY hsm.hsm_instr_code
HAVING count( distinct( vsd.vsd_on ) ) > 1 OR count( distinct( vsd.vsd_isin ) ) > 1
NOTE: Once you fix the multi-row problem, you can combine the two subqueries into one, like below:
UPDATE hsm SET ( hsm_isin, hsm.hsm_vwd_tickersymbl ) =
(SELECT distinct vsd.vsd_isin, vsd.vsd_on
FROM vsd, tfv, TENFORE_EXCHANGE_MAP
WHERE vsd.vsd_on = hsm.hsm_instr_code || '.' || tfv.tfe_vse_code
AND hsm.hsm_exc_code = TENFORE_EXCHANGE_MAP.HS_MARKET
AND TENFORE_EXCHANGE_MAP.TF_EXCHANGE = tfv.tfe_id);
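If the extra rows per key turn out to carry identical values, an aggregate will collapse them; a sketch of that variant (an assumption; only appropriate when the duplicates really are the same value, or when any one of them is acceptable):

UPDATE hsm SET ( hsm_isin, hsm.hsm_vwd_tickersymbl ) =
  (SELECT MAX(vsd.vsd_isin), MAX(vsd.vsd_on)   -- collapses duplicate rows to a single value per key
   FROM vsd, tfv, TENFORE_EXCHANGE_MAP
   WHERE vsd.vsd_on = hsm.hsm_instr_code || '.' || tfv.tfe_vse_code
   AND hsm.hsm_exc_code = TENFORE_EXCHANGE_MAP.HS_MARKET
   AND TENFORE_EXCHANGE_MAP.TF_EXCHANGE = tfv.tfe_id);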