Crosstab of rows - SQL - Oracle

I am trying to generate a matrix or crosstab from the rows below:
TBL_CURRENCY_PAIR
ID | ISO_1 | ISO_2
1 | EUR | USD
2 | JPY | USD
4 | GBP | USD
I'd like to obtain an Oracle view that contains something like the output below:
VIEW_PAIR
|PAIR|
USD.USD
GBP.USD
EUR.USD
JPY.USD
USD.GBP
GBP.GBP
EUR.GBP
JPY.GBP
USD.EUR
GBP.EUR
EUR.EUR
JPY.EUR
USD.JPY
GBP.JPY
EUR.JPY
JPY.JPY
I have tried using an inner join to get a recursive result, but without success...
Thanks in advance for your help, have a nice day.

Perhaps the following does what you want:
with c as (
      select iso_1 as iso from tbl_currency_pair
      union
      select iso_2 from tbl_currency_pair
     )
select c1.iso || '.' || c2.iso as pair
from c c1 cross join
     c c2;
This generates all unique combinations of the currencies in the pair table.
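Since the goal is an Oracle view, the same query can simply be wrapped in a view definition; a minimal sketch, reusing the VIEW_PAIR and PAIR names from the question:

CREATE OR REPLACE VIEW view_pair AS
WITH c AS (
    -- all distinct currencies, taken from both sides of the pair table
    SELECT iso_1 AS iso FROM tbl_currency_pair
    UNION
    SELECT iso_2 FROM tbl_currency_pair
)
SELECT c1.iso || '.' || c2.iso AS pair
FROM c c1
CROSS JOIN c c2;

SELECT pair FROM view_pair; then returns the 16 combinations listed above.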

Related

Conditional Join Big Query

I am a beginner with BigQuery and SQL in general. I have a query that looks like this:
SELECT
  base.*,
  IF(regexp_contains(rate_name, 'usd'), price * ft.usd,
     IF(regexp_contains(rate_name, 'gbp'), price * ft.gbp, price)) AS converted_price
FROM base_table base
JOIN finance_table ft
  ON base.date = ft.date
In short, I have a table with some data (base) and, depending on the currency of the price, I want to convert it using the rate stored in another table. The table with the rates (finance_table) has data only for 2021, but the base_table has data for dates before that.
What I want to do is use this query as is when the date exists in the finance_table, and otherwise use the rates from 2021-01-01 (the first date of finance_table).
What I tried is to join on this:
ON
IF( ft.date IS NOT NULL, base.date = ft.date, ft.date = '2021-01-01')
However, this doesn't give me any results when I query for a random date from 2020. I am sure that the condition is wrong, so any ideas?
P.S. Another thing that would suffice is using fixed numbers, e.g. if the date doesn't exist, multiply the price by 0.85 or 1.15, but this would probably make things more complicated.
EDIT:
Tables look like this:
BASE:
DATE | PRODUCT_NAME | PRICE | RATE_NAME
2020-01-01| APPLE | 0.5 | usd
2021-01-01| ORANGE | 0.4 | gbp
FINANCE_TABLE:
DATE | USD | GBP
2021-01-01| 0.844 | 1.443
2021-01-02| 0.846 | 1.423
The final result should look like this when I query for date = '2021-01-01':
DATE | PRODUCT_NAME| PRICE | RATE_NAME | CONVERTED_PRICE
2021-01-01 | ORANGE | 0.4 | gbp | 0.5772
The problem lies in the case where I query for dates that don't exist in the finance_table.
You can use two joins. A direct translation of your query is:
SELECT base.*,
       (CASE WHEN base.rate_name = 'usd'
             THEN base.price * COALESCE(ft.usd, ft1.usd)
             WHEN base.rate_name = 'gbp'
             THEN base.price * COALESCE(ft.gbp, ft1.gbp)
             ELSE base.price
        END) AS converted_price
FROM base_table base LEFT JOIN
     finance_table ft
     ON base.date = ft.date JOIN
     finance_table ft1
     ON ft1.date = DATE '2021-01-01';
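With the sample data above, a 2020 row then falls back to the 2021-01-01 rates (e.g. APPLE: 0.5 * 0.844 = 0.422). If you prefer a single join, one possible sketch is to clamp the lookup date instead; this assumes every date from 2021-01-01 onward actually exists in finance_table:

SELECT base.*,
       -- dates before 2021-01-01 are clamped to the first available rates
       CASE WHEN REGEXP_CONTAINS(base.rate_name, 'usd') THEN base.price * ft.usd
            WHEN REGEXP_CONTAINS(base.rate_name, 'gbp') THEN base.price * ft.gbp
            ELSE base.price
       END AS converted_price
FROM base_table AS base
JOIN finance_table AS ft
  ON ft.date = GREATEST(base.date, DATE '2021-01-01');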

SQL query to select from multiple tables and create third table

Can someone please help me with a SQL query? My apologies if a similar question has been asked before.
I am finding it difficult to adapt the other examples I have seen.
I have 2 tables and would like to create a third table.
Table - advisories
advID | productName
1 | 3.4/3.5/3.6
2 | 3.4
3 | 3.5/3.6
Table - customerA
hostname | version
A | 3.3
B | 3.5
C | 3.6
Final Table
hostname | advID
A | NULL
B | 1
B | 3
C | 1
C | 3
Does this look correct?
select advId, productNames, hostname, version
from advisories, customerA
where productNames like '%3.6%';
Thanks for the help.
I would suggest using the LIKE in the JOIN as follows:
SELECT C.HOSTNAME,
       A.advID
FROM customerA C LEFT JOIN advisories A
     ON CONCAT('/', A.productName, '/') LIKE CONCAT('%/', C.version, '/%')
ORDER BY C.HOSTNAME;
See SQL Fiddle. I have changed the || to the CONCAT function.
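The '/' delimiters on both sides are what prevent partial version matches; a quick illustrative check (the 3.45 value is hypothetical, not from the question's data):

SELECT CASE WHEN CONCAT('/', '3.4/3.5/3.6', '/') LIKE CONCAT('%/', '3.4', '/%')
            THEN 'match' ELSE 'no match' END AS full_version,    -- 'match'
       CASE WHEN CONCAT('/', '3.45/3.6', '/') LIKE CONCAT('%/', '3.4', '/%')
            THEN 'match' ELSE 'no match' END AS partial_version; -- 'no match'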

Select to join two tables with replace function/ORACLE SQL

I've got two tables:
Table_promo
Name | Code |
Promo1 | 123 |
Promo2 | 124 |
Promo3 | 125 |
And second table:
Table_invoice
Index | Promo | Price
1155 | 123+ | 1.25
2754 | 125K | 3.26
2378 | 124+ | 2.28
I need a select that will give me every index from table_invoice with the name of the promo from table_promo. The problem is that in table_invoice there are characters '+' or 'K' at the end of the promo number, so I can't simply compare promo codes between the two tables.
I've tried writing a select subquery like this:
(select name from table_promo where table_promo.code = to_number(replace(replace(table_invoice.promo, '+', ''), 'K', '')))
to replace every '+' and 'K' with an empty string ''.
It doesn't work; I get the error
ORA-01427: single-row subquery returns more than one row
I think the problem is with converting the data in table_invoice.promo and table_promo.code.
I've tried converting both to numbers, converting both to chars, and using a 'like' clause between them; nothing helps.
I am sure there is another way to strip these characters from table_invoice.promo in this select and compare the result to table_promo.code, but I can't find any info on the internet.
Just use concatenation:
select . . .
from table_promo p join
     table_invoice i
     on i.promo = p.code || '+';
I think you want the inverse to handle both + and K:
select . . .
from table_promo p join
     table_invoice i
     on i.promo like p.code || '_';
In the meantime, you should fix the data model. The connection to code should use an exact code. You can store the + and K information in a separate column.
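A possible sketch of that model fix (the promo_code and promo_suffix column names are hypothetical):

ALTER TABLE table_invoice ADD (promo_code NUMBER, promo_suffix VARCHAR2(1));

UPDATE table_invoice
   SET promo_code   = TO_NUMBER(REGEXP_SUBSTR(promo, '^\d+')),   -- leading digits only
       promo_suffix = REGEXP_SUBSTR(promo, '[^0-9]$');           -- trailing '+' or 'K', if any

After that the join is simply on i.promo_code = p.code.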
Alternatively, extract just the leading digits with a regular expression:
select *
from table_promo p
join table_invoice i
     on regexp_substr(i.promo, '^\d+') = p.code;

SQL Distinct Pair Groupings

I am interested in manipulating my data like so:
My Source Data:
From | To | Rate
----------------
EUR | AUD | 1.5895
EUR | BGN | 1.9558
EUR | GBP | 0.7347
EUR | USD | 1.1151
GBP | AUD | 2.1633
GBP | BGN | 2.6618
GBP | EUR | 1.3610
GBP | USD | 1.5176
USD | AUD | 1.4254
USD | BGN | 1.7539
USD | EUR | 0.8967
USD | GBP | 0.6589
In regards to "distinct pairs", I consider the following to be "duplicates".
EUR | USD matches USD | EUR
EUR | GBP matches GBP | EUR
GBP | USD matches USD | GBP
I want my source data to be filtered such that it removes any one of the above "duplicates", so that my final table is 3 records fewer than the original. I do not care which record from the "duplicates" is kept or removed, just so long as only one is selected.
I have tried many variations of Joins, Exists, Except, Distinct, Group By, logical comparisons (< >) and I feel like I am so close with any given approach... but it just does not seem to click.
My favorite effort has involved inner joining on EXCEPT:
SELECT a.[From], a.[To], a.[Rate]
FROM Table a
INNER JOIN
(
    SELECT DISTINCT [From], [To]
    FROM Table
    EXCEPT
    (
        SELECT [To] AS [From], [From] AS [To]
        FROM Table
    )
) b
ON a.[From] = b.[From] AND a.[To] = b.[To]
But alas, it removes all of the matched pairs.
I can suggest something very easy: if it doesn't matter which one of them you keep, you can pick only the one whose rate is bigger than 1 (or, conversely, the one that is smaller). Each pair should have one rate bigger than 1 and one smaller (which makes sense), so:
Select * from table where rate > 1
One way to remove the duplicates that doesn't depend on the rates:
select s.*
from source s
where from < to
union all
select s.*
from source s
where from > to and
      not exists (select 1 from source s2 where s.from = s2.to and s.to = s2.from);
Note: I did not put escape characters around from and to, although you would need them in your actual query.
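For reference, the same query with the identifiers escaped in the question's bracket style (assuming the table is called source, as above):

SELECT s.*
FROM source s
WHERE s.[From] < s.[To]
UNION ALL
SELECT s.*
FROM source s
WHERE s.[From] > s.[To]
  AND NOT EXISTS (SELECT 1 FROM source s2
                  WHERE s.[From] = s2.[To] AND s.[To] = s2.[From]);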
Just to make it complete, a DISTINCT ON solution:
SELECT DISTINCT ON (least(from, to), greatest(from, to)) *
FROM source AS s1
ORDER BY least(from, to), greatest(from, to);
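DISTINCT ON is PostgreSQL-specific; if the bracketed identifiers in the question indicate SQL Server, a roughly equivalent sketch keeps one arbitrary row per unordered pair with ROW_NUMBER (the CASE expressions stand in for LEAST/GREATEST):

SELECT [From], [To], Rate
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (
               PARTITION BY CASE WHEN [From] < [To] THEN [From] ELSE [To] END,
                            CASE WHEN [From] < [To] THEN [To] ELSE [From] END
               ORDER BY [From]
           ) AS rn
    FROM source s
) t
WHERE rn = 1;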

Combine two tables into a new one so that select rows from the other one are ignored

I have two tables that have identical columns. I would like to join these two tables together into a third one that contains all the rows from the first one and from the second one all the rows that have a date that doesn't exist in the first table for the same location.
Example:
transactions:
date |location_code| product_code | quantity
------------+------------------+--------------+----------
2013-01-20 | ABC | 123 | -20
2013-01-23 | ABC | 123 | -13.158
2013-02-04 | BCD | 234 | -4.063
transactions2:
date |location_code| product_code | quantity
------------+------------------+--------------+----------
2013-01-20 | BDE | 123 | -30
2013-01-23 | DCF | 123 | -2
2013-02-05 | UXJ | 234 | -6
Desired result:
date |location_code| product_code | quantity
------------+------------------+--------------+----------
2013-01-20 | ABC | 123 | -20
2013-01-23 | ABC | 123 | -13.158
2013-01-23 | DCF | 123 | -2
2013-02-04 | BCD | 234 | -4.063
2013-02-05 | UXJ | 234 | -6
How would I go about this? I tried for example this:
SELECT date, location_code, product_code, type, quantity, location_type, updated_at,
       period_start_date, period_end_date
INTO transactions_combined
FROM ( SELECT * FROM transactions_kitchen k
       UNION ALL
       SELECT *
       FROM transactions_admin h
       WHERE h.date NOT IN (SELECT k.date FROM k)
     ) AS t;
but that doesn't take into account that I'd like to include the rows that have the same date but a different location. I am using PostgreSQL 9.2.
UNION simply doesn't do what you describe. This query should:
CREATE TABLE transactions_combined AS
SELECT date, location_code, product_code, quantity
FROM   transactions_kitchen k
UNION  ALL
SELECT h.date, h.location_code, h.product_code, h.quantity
FROM   transactions_admin h
LEFT   JOIN transactions_kitchen k USING (location_code, date)
WHERE  k.location_code IS NULL;
LEFT JOIN / IS NULL to exclude rows from the second table for the same location and date. See:
Select rows which are not present in other table
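An equivalent anti-join spelling with NOT EXISTS, in case you find it more readable (same table and column names as in the query above):

SELECT h.date, h.location_code, h.product_code, h.quantity
FROM transactions_admin h
WHERE NOT EXISTS (
    SELECT 1
    FROM transactions_kitchen k
    WHERE k.location_code = h.location_code
      AND k.date = h.date
);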
Use CREATE TABLE AS instead of SELECT INTO. The manual:
CREATE TABLE AS is functionally similar to SELECT INTO. CREATE TABLE AS is the recommended syntax, since this form of SELECT INTO is not available in ECPG or PL/pgSQL, because they interpret the INTO clause differently. Furthermore, CREATE TABLE AS offers a superset of the functionality provided by SELECT INTO.
Or, if the target table already exists:
INSERT INTO transactions_combined (<list names of target column here!>)
SELECT ...
Aside: I would not use date as column name. It's a reserved word in every SQL standard and a function and data type name in Postgres.
Change UNION ALL to just UNION and it should return only unique rows from each table.