select from table with between - sql

Please advise. I have a table:
id|score_max|score_min| segment
--|---------|---------|--------
1 |264 | |girl
2 |263 | 250 |girl+
3 |249 | 240 |girl
4 | | 239 |girl
I need to obtain the segment depending on a score value, but score_max or score_min can be NULL.
For example, 260 is a value from another table:
select segment
from mytable
where score_max<260 and score_min>260
Desired output:
2 |263 | 250 |girl+
but if the value is 200, the query does not return the correct row.
How do I write the query correctly?

Sample data like this makes more sense:
id|score_max|score_min| segment
--|---------|---------|--------
1 | | 264 |girl
2 |263 | 250 |girl+
3 |249 | 240 |girl
4 |239 | |girl
you can get the result that you want like this:
select *
from tablename
where
(? >= score_min or score_min is null)
and
(? <= score_max or score_max is null)
Replace ? with the value that you search for.
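The predicate can be sanity-checked outside the database; here is a minimal sketch using SQLite from Python, loading the corrected sample data and binding the ? placeholders from Python:

```python
import sqlite3

# In-memory table following the corrected sample data (NULL bounds are open-ended).
conn = sqlite3.connect(":memory:")
conn.execute("""
    create table mytable (
        id integer, score_max integer, score_min integer, segment text
    )""")
conn.executemany(
    "insert into mytable values (?, ?, ?, ?)",
    [(1, None, 264, "girl"),
     (2, 263, 250, "girl+"),
     (3, 249, 240, "girl"),
     (4, 239, None, "girl")])

def lookup(score):
    # A NULL bound means "no bound on that side", hence the IS NULL branches.
    row = conn.execute("""
        select segment from mytable
        where (? >= score_min or score_min is null)
          and (? <= score_max or score_max is null)""",
        (score, score)).fetchone()
    return row[0] if row else None

print(lookup(260))  # girl+
print(lookup(200))  # girl  (row 4: no lower bound)
```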

How to rename all ltree column data that equals a specific string? postgresql

PostgreSQL 12.0
https://www.postgresql.org/docs/current/ltree.html
Given following Table:
Create Table IF NOT EXISTS Tree
(
path ltree PRIMARY KEY,
amount BIGINT DEFAULT 0 NOT NULL,
added_values double precision DEFAULT 0 NOT NULL
);
With following data:
path | amount | added_values
---------------+------------+-------------
Tree.Cash | 20 | 2000
Tree.Cash.Hans | 20 | 1200
Tree.Cash.Peter| 10 | 1000
Tree.Cash.Cash | 30 | 900
Tree.Cash.asd | 40 | 1600
I want to change all labels that equal 'Cash' with the new value 'Coin'.
Desired results:
path | amount | added_values
---------------+------------+-------------
Tree.Coin | 20 | 2000
Tree.Coin.Hans | 20 | 1200
Tree.Coin.Peter| 10 | 1000
Tree.Coin.Coin | 30 | 900
Tree.Coin.asd | 40 | 1600
Can someone help me out?
Edit: 'Cash' could appear on another label 'Tree.Cash.Cash' for example
You should be able to use replace(). Note that ltree labels are case-sensitive and a path has no leading or trailing dot, so cast to text and back:
update tree
set path = replace(path::text, 'Cash', 'Coin')::ltree
where path ~ '*.Cash.*';
This replaces every occurrence (including 'Tree.Cash.Cash'), assuming no label merely contains 'Cash' as a substring.
EDIT:
If you just want to replace the first occurrence, you can use regexp_replace() with word boundaries:
update tree
set path = regexp_replace(path::text, '\mCash\M', 'Coin')::ltree
where path ~ '*.Cash.*';
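Outside Postgres, the whole-label replacement being aimed for here can be sanity-checked with plain string handling; a Python sketch that treats an ltree path as a dot-separated string (rename_label is a hypothetical helper for illustration, not part of ltree):

```python
def rename_label(path, old, new):
    # Replace every label equal to `old` (whole-label match, not substring match).
    return ".".join(new if lbl == old else lbl for lbl in path.split("."))

paths = ["Tree.Cash", "Tree.Cash.Hans", "Tree.Cash.Cash", "Tree.Cashier"]
print([rename_label(p, "Cash", "Coin") for p in paths])
# ['Tree.Coin', 'Tree.Coin.Hans', 'Tree.Coin.Coin', 'Tree.Cashier']
```

Note that 'Tree.Cashier' is untouched, which is the behavior a naive substring replace() would get wrong.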

Query a table so that data in one column could be shown as different fields

I have a table that stores data of customer care . The table/view has the following structure.
userid calls_received calls_answered calls_rejected call_date
-----------------------------------------------------------------------
1030 134 100 34 28-05-2018
1012 140 120 20 28-05-2018
1045 120 80 40 28-05-2018
1030 99 39 50 28-04-2018
1045 50 30 20 28-04-2018
1045 200 100 100 28-05-2017
1030 160 90 70 28-04-2017
1045 50 30 20 28-04-2017
This is the sample data; it is stored on a per-day basis.
I have to create a report in a report-designer tool that takes a date as input. When the user selects a date, e.g. 28/05/2018, it is sent as the parameter ${call_date}. I have to query the view so that the result looks like the output below: if the user selects 28/05/2018, then the data for 28/04/2018 and 28/05/2017 should be displayed side by side, in the column order shown.
userid | cl_cur | ans_cur | rej_cur |success_percentage |diff_percent|position_last_month| cl_last_mon | ans_las_mon | rej_last_mon |percentage_lm|cl_last_year | ans_last_year | rej_last_year
1030 | 134 | 100 | 34 | 74.6 % | 14% | 2 | 99 | 39 | 50 | 39.3% | 160 | 90 | 70
1045 | 120 | 80 | 40 | 66.6% | 26.7% | 1 | 50 | 30 | 20 | 60% | 50 | 30 | 20
The objective of this query is to show the data of the selected day, the data of the same day in the previous month, and the same day in previous years side by side, so the user can compare. The result is ordered by the percentage (ans_cur/cl_cur) of the selected day, in descending order, shown under success_percentage.
The column position_last_month is the position of that particular employee in the previous month when ordered by descending percentage. In this example userid 1030 was in 2nd position last month and userid 1045 in 1st position. Similarly, I have to calculate this for the year.
There is also a field called diff_percent, which calculates the difference in percentage from the person who was in the same position last month. The same has to be done for last year. How can I achieve this result? Please help.
THIS ANSWERS THE ORIGINAL VERSION OF THE QUESTION.
One method is a join:
select t.userid,
t.calls_received as cl_cur, t.calls_answered as ans_cur, t.calls_rejected as rej_cur,
tm.calls_received as cl_last_mon, tm.calls_answered as ans_last_mon, tm.calls_rejected as rej_last_mon,
ty.calls_received as cl_last_year, ty.calls_answered as ans_last_year, ty.calls_rejected as rej_last_year
from t left join
t tm
on tm.userid = t.userid and
tm.call_date = dateadd(month, -1, t.call_date) left join
t ty
on ty.userid = t.userid and
ty.call_date = dateadd(year, -1, t.call_date)
where t.call_date = ${call_date};
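Note that dateadd() is SQL Server syntax. The self-join idea itself can be sanity-checked in SQLite, where the equivalent is date(..., '-1 month'); a minimal sketch with invented ISO-formatted sample dates (only calls_received shown, for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""create table t (
    userid int, calls_received int, calls_answered int,
    calls_rejected int, call_date text)""")
conn.executemany("insert into t values (?,?,?,?,?)", [
    (1030, 134, 100, 34, "2018-05-28"),   # selected day
    (1030,  99,  39, 50, "2018-04-28"),   # same day, previous month
    (1030, 160,  90, 70, "2017-05-28"),   # same day, previous year
])

# Each LEFT JOIN pulls in the matching row shifted by one month / one year.
rows = conn.execute("""
    select t.userid,
           t.calls_received  as cl_cur,
           tm.calls_received as cl_last_mon,
           ty.calls_received as cl_last_year
    from t
    left join t tm on tm.userid = t.userid
                  and tm.call_date = date(t.call_date, '-1 month')
    left join t ty on ty.userid = t.userid
                  and ty.call_date = date(t.call_date, '-1 year')
    where t.call_date = ?""", ("2018-05-28",)).fetchall()
print(rows)  # [(1030, 134, 99, 160)]
```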

How to determine value changes according to another columns in Oracle

Below table includes non-unique id, money value and dates/times.
id_1 value_1 value_time id_version Version_time
138 250 09-SEP-14 595 02-SEP-14
140 250 15-SEP-14 695 01-AUG-14
140 300 30-DEC-14 720 05-NOV-14
233 250 01-JUN-15 800 16-MAY-15
As you can see, id_1, id_version and the time columns can change, but value_1 may stay the same.
I know that when id_1 is the same across rows, value_1 can only change according to id_version. But there are too many id_version values in the table, and while I know the value changes according to id_version, I don't know its exact change time.
So first I have to determine which id_version and version_time cause the value change, grouped by id_1.
But again, id_1 is not unique, and the id may change while the value stays the same :)
editor: From OP's comment - Begin
Here is the desired result example i want to get the first and second row not the third and fourth row.
| 140 | 250 | 15-SEP-14 | 695 | 01-AUG-14 |
| 140 | 300 | 31-DEC-14 | 725 | 07-NOV-14 |
| 140 | 300 | 05-JAN-14 | 740 | 30-NOV-14 |
| 140 | 300 | 30-DEC-14 | 720 | 05-NOV-14 |
editor: From OP's comment - End
Thanks in advance; I really need help with this situation.
Based on the input given so far (and processing just the data in the linked to picture - rather than the one in the current example data), the following should help to get you started:
SELECT
TMin.id_1
, TMin.value_1
, TO_CHAR(TAll.value_time, 'DD-MON-RR') value_time
, TMin.id_version
, TO_CHAR(TMin.version_time, 'DD-MON-RR') version_time
FROM
(SELECT
id_1
, value_1
, MIN(id_version) id_version
, MIN(version_time) version_time
FROM T
GROUP BY id_1, value_1
ORDER BY id_1, value_1
) TMin
JOIN T TAll
ON TMin.id_1 = TAll.id_1
AND TMin.value_1 = TAll.value_1
AND TMin.id_version = TAll.id_version
AND TMin.version_time = TAll.version_time
ORDER BY TMin.id_1, TMin.value_1
;
Please comment, if and as this requires adjustment / further detail.
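The MIN-per-group-plus-join pattern above can be sanity-checked outside Oracle; a SQLite sketch using the rows from the question's desired-result example, with dates rewritten in ISO format so MIN compares them correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""create table t (
    id_1 int, value_1 int, value_time text, id_version int, version_time text)""")
conn.executemany("insert into t values (?,?,?,?,?)", [
    (140, 250, "2014-09-15", 695, "2014-08-01"),
    (140, 300, "2014-12-31", 725, "2014-11-07"),
    (140, 300, "2015-01-05", 740, "2014-11-30"),
    (140, 300, "2014-12-30", 720, "2014-11-05"),
])

# For each (id_1, value_1) pair, find the minimum version, then join back
# to recover the full row for that earliest version.
rows = conn.execute("""
    select tall.*
    from (select id_1, value_1,
                 min(id_version)   as id_version,
                 min(version_time) as version_time
          from t group by id_1, value_1) tmin
    join t tall
      on  tall.id_1 = tmin.id_1
      and tall.value_1 = tmin.value_1
      and tall.id_version = tmin.id_version
      and tall.version_time = tmin.version_time
    order by tall.id_1, tall.value_1""").fetchall()
for r in rows:
    print(r)
```

With this data MIN picks version 720 for value 300, not 725, which illustrates why the answer above asks for clarification on what "first change" means here.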

SQL: how to separate combined row into individual rows

I have a database table like this:
id | check_number | amount
1 | 1001]1002]1003 | 200]300]100
2 | 2001]2002 | 500]1000
3 | 3002]3004]3005]3007 | 100]300]600]200
I want to separate the records into something like this:
id | check_number | amount
1 | 1001 | 200
2 | 1002 | 300
3 | 1003 | 100
. | . | .
. | . | .
. | . | .
How do I do this just using SQL in Oracle and SQL Server?
Thanks,
Milo
In Oracle only, using the CONNECT BY LEVEL method, with several caveats:
select rownum, id,
substr(']'||check_number||']'
,instr(']'||check_number||']',']',1,level)+1
,instr(']'||check_number||']',']',1,level+1)
- instr(']'||check_number||']',']',1,level) - 1) C1VALUE,
substr(']'||amount||']'
,instr(']'||amount||']',']',1,level)+1
,instr(']'||amount||']',']',1,level+1)
- instr(']'||amount||']',']',1,level) - 1) C2VALUE
from table
connect by id = prior id and prior dbms_random.value is not null
and level <= length(check_number) - length(replace(check_number,']')) + 1
ROWNUM ID C1VALUE C2VALUE
1 1 1001 200
2 1 1002 300
3 1 1003 100
4 2 2001 500
5 2 2002 1000
6 3 3002 100
7 3 3004 300
8 3 3005 600
9 3 3007 200
Essentially we blow out the query using Oracle's hierarchical query features and then take only the substrings for the data in each "column" of data inside the check_number and amount columns.
Major caveat: the data to be transformed must have the same number of "data elements" in both columns, since we use the first column to count the number of items to be transformed.
I have tested this on 11gR2. YMMV depending on DBMS version as well. Note the need for the PRIOR operator, which prevents Oracle from going into an infinite connect-by loop.
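The substr/instr arithmetic carves both delimited strings into parallel lists, relying on the caveat above that they hold the same number of elements; the same invariant can be sketched in Python (split_parallel is a hypothetical helper for illustration):

```python
def split_parallel(check_number, amount, sep="]"):
    # Mirror of the Oracle substr/instr logic: both columns must hold
    # the same number of ]-separated elements, one output row per element.
    checks = check_number.split(sep)
    amounts = amount.split(sep)
    assert len(checks) == len(amounts), "parallel lists must match in length"
    return list(zip(checks, amounts))

print(split_parallel("1001]1002]1003", "200]300]100"))
# [('1001', '200'), ('1002', '300'), ('1003', '100')]
```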

SQL Group By On Output From User Defined Function

Is it possible, in Oracle, to group data on the output of a user-defined function? I get errors when I try to, and it is best illustrated by the example below:
I am trying to interrogate results in table structure similar to below:
id | data
1000 | {abc=123, def=234, ghi=111, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=234, ghi=222, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=434, ghi=333, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=434, ghi=444, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=123, def=634, ghi=555, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=634, ghi=666, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=434, ghi=777, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=434, ghi=888, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=234, ghi=999, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
1000 | {abc=923, def=234, ghi=000, jkl=456, mno=567, pqr=678, stu=789, vwx=890, yza=901}
There are other columns, just not shown. The id column can have different values, but in this example, does not. In the data column, only the fields abc, def, and ghi differ, all the others are the same. Again this is only illustrative for this data example.
I have written a function to extract the value assigned to fields in the data column, and it is used in the following query:
select id
,extract_data(data,abc) as abc
,extract_data(data,def) as def
from table
giving results:
id | abc | def
1000 | 123 | 234
1000 | 123 | 234
1000 | 123 | 434
1000 | 123 | 434
1000 | 123 | 634
1000 | 923 | 634
1000 | 923 | 434
1000 | 923 | 434
1000 | 923 | 234
1000 | 923 | 234
For reporting purposes, I would like to be able to display the amount of each type of record. There are 6 types in the above example, and ideally the output would be:
id | abc | def | count
1000 | 123 | 234 | 2
1000 | 123 | 434 | 2
1000 | 123 | 634 | 1
1000 | 923 | 634 | 1
1000 | 923 | 434 | 2
1000 | 923 | 234 | 2
I expected to achieve this by writing SQL like so (and I'm convinced I have done so in the past):
select id
,extract_data(data,abc) as abc
,extract_data(data,def) as def
,count(1)
from table
group by id
,abc
,def
This however, will not work. Oracle is giving me an error of:
ORA-00904: "ABC": invalid identifier
00904. 00000 - "%s: invalid identifier"
From my initial research on "the google", I have seen that I should perhaps be grouping on the column I am passing into my user defined function. This would be due to SQL requiring all columns not part of an aggregate function needing to be part of the group by clause.
This will work for some records; however, in my data example the field ghi in the data column is different for every record, making the data column unique and ruining the group by clause, as a count of 1 is given for each record.
I've used sybase and db2 in the past, and (setting myself up for a fall here...) I'm pretty sure in both that I was able to group by on the output of a user defined function.
I thought that there might be an issue with the naming of the columns and how they can be referenced by the group by? Referencing by column number hasn't worked.
I've tried various combinations of what I have, and can't get it to work, so I'd appreciate any insight you guys out there could give.
If you need any more information I'll edit as required or clarify in the comments.
Thanks,
GC.
You should be able to group by the functions themselves, not by the aliases
select id
,extract_data(data,abc) as abc
,extract_data(data,def) as def
,count(*)
from table
group by id
,extract_data(data,abc)
,extract_data(data,def)
Note that this does not generally involve executing the function multiple times. You can see that yourself with a simple function that increments a counter in a package every time it is called
SQL> ed
Wrote file afiedt.buf
1 create or replace package pkg_counter
2 as
3 g_cnt integer := 0;
4* end;
SQL> /
Package created.
SQL> create or replace function f1( p_arg in number )
2 return number
3 is
4 begin
5 pkg_counter.g_cnt := pkg_counter.g_cnt + 1;
6 return mod( p_arg, 2 );
7 end;
8 /
Function created.
There are 16 rows in the EMP table
SQL> select count(*) from emp;
COUNT(*)
----------
16
so when we execute a query that involves grouping by the function call, we hope to see the function executed only 16 times. And that is, in fact, what we see.
SQL> select deptno,
2 f1( empno ),
3 count(*)
4 from emp
5 group by deptno,
6 f1( empno );
DEPTNO F1(EMPNO) COUNT(*)
---------- ---------- ----------
1 1
30 0 4
20 1 1
10 0 2
30 1 2
20 0 4
10 1 1
0 1
8 rows selected.
SQL> begin
2 dbms_output.put_line( pkg_counter.g_cnt );
3 end;
4 /
16
PL/SQL procedure successfully completed.
Try this:
select id, abc, def, count(1)
from
(
select
id,
extract_data(data,abc) as abc,
extract_data(data,def) as def
from table
)
group by id, abc, def
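The inline-view approach can be sanity-checked in SQLite, using substr() as a stand-in for the user-defined extract_data() (positions hard-coded for this invented sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (id int, data text)")
conn.executemany("insert into t values (?, ?)", [
    (1000, "abc=123, def=234"),
    (1000, "abc=123, def=234"),
    (1000, "abc=123, def=434"),
])

# The inner query computes the derived columns; the outer query can then
# group by their aliases, sidestepping the ORA-00904 problem.
rows = conn.execute("""
    select id, abc, def, count(*)
    from (select id,
                 substr(data, 5, 3)  as abc,
                 substr(data, 14, 3) as def
          from t)
    group by id, abc, def
    order by def""").fetchall()
print(rows)  # [(1000, '123', '234', 2), (1000, '123', '434', 1)]
```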
Have you tried:
SELECT
id,
extract_data(data, abc) as abc,
extract_data(data, def) as def,
COUNT(1)
FROM
table
GROUP BY
id,
extract_data(data, abc),
extract_data(data, def)