I have two queries:
Queries simplified, excluding joins:
Query 1 : select ProductName,NumberofProducts (in inventory) from Table1.....;
Query 2 : select ProductName, NumberofProductssold from Table2......;
I would like to know how I can get an output like this:
ProductName NumberofProducts(in inventory) ProductName NumberofProductsSold
The relationships used for getting the outputs for each query are different.
I need the output this way for my SSRS report.
(I tried a UNION statement but it doesn't produce the output I want to see.)
Here is an example that does a union between two completely unrelated tables: the Student and the Products tables. It generates an output with 4 columns:
select
FirstName as Column1,
LastName as Column2,
email as Column3,
null as Column4
from
Student
union
select
ProductName as Column1,
QuantityPerUnit as Column2,
null as Column3,
UnitsInStock as Column4
from
Products
Obviously you'll tweak this for your own environment...
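Applied to the question's own tables, a hedged sketch might look like the following (Table1, Table2 and the column names are taken from the simplified queries above; your real FROM clauses and joins go where the dots were):
select
ProductName as Column1,
NumberofProducts as Column2,
null as Column3
from
Table1
union
select
ProductName as Column1,
null as Column2,
NumberofProductssold as Column3
from
Table2
Note that this stacks the two result sets into shared columns rather than placing them side by side, so it may still not be the exact layout the SSRS report needs.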
I think you are after something like this (using row_number() with a CTE and performing a FULL OUTER JOIN):
Fiddle example
;with t1 as (
select col1,col2, row_number() over (order by col1) rn
from table1
),
t2 as (
select col3,col4, row_number() over (order by col3) rn
from table2
)
select col1,col2,col3,col4
from t1 full outer join t2 on t1.rn = t2.rn
Tables and data:
create table table1 (col1 int, col2 int)
create table table2 (col3 int, col4 int)
insert into table1 values
(1,2),(3,4)
insert into table2 values
(10,11),(30,40),(50,60)
Results:
| COL1   | COL2   | COL3 | COL4 |
---------------------------------
| 1      | 2      | 10   | 11   |
| 3      | 4      | 30   | 40   |
| (null) | (null) | 50   | 60   |
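Applied to the original question, the same pattern pairs the nth inventory row with the nth sales row. This is only a sketch using the column names from the simplified queries (the inv/sold CTE names are arbitrary), and your own joins and filters would go inside each CTE:
;with inv as (
select ProductName, NumberofProducts, row_number() over (order by ProductName) rn
from Table1
),
sold as (
select ProductName, NumberofProductssold, row_number() over (order by ProductName) rn
from Table2
)
select inv.ProductName, inv.NumberofProducts, sold.ProductName, sold.NumberofProductssold
from inv full outer join sold on inv.rn = sold.rn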
How about,
select
col1,
col2,
null col3,
null col4
from Table1
union all
select
null col1,
null col2,
col4 col3,
col5 col4
from Table2;
The problem is that unless your tables are related you can't determine how to join them, so you'd have to arbitrarily join them, resulting in a cartesian product:
select Table1.col1, Table1.col2, Table2.col3, Table2.col4
from Table1
cross join Table2
If you had, for example, the following data:
col1 col2
a 1
b 2
col3 col4
y 98
z 99
You would end up with the following:
col1 col2 col3 col4
a 1 y 98
a 1 z 99
b 2 y 98
b 2 z 99
Is this what you're looking for? If not, and you have some means of relating the tables, then you'd need to include that in joining the two tables together, e.g.:
select Table1.col1, Table1.col2, Table2.col3, Table2.col4
from Table1
inner join Table2
on Table1.JoiningField = Table2.JoiningField
That would pull the data together according to how it is related, giving you your result.
If you mean that both ProductName fields are to have the same value, then:
SELECT a.ProductName,a.NumberofProducts,b.ProductName,b.NumberofProductsSold FROM Table1 a, Table2 b WHERE a.ProductName=b.ProductName;
Or, if you want the ProductName column to be displayed only once,
SELECT a.ProductName,a.NumberofProducts,b.NumberofProductsSold FROM Table1 a, Table2 b WHERE a.ProductName=b.ProductName;
Otherwise, if any row of Table1 can be associated with any row of Table2 (even though I really wonder why anyone would want to do that), you could give this a look.
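A minimal sketch of that, assuming the dangling "this" refers to a cross join and reusing the question's table and column names:
SELECT a.ProductName, a.NumberofProducts, b.ProductName, b.NumberofProductsSold
FROM Table1 a
CROSS JOIN Table2 b;
which pairs every row of Table1 with every row of Table2.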
Old question, but where others use a JOIN to combine unrelated queries into rows of one table, this is my solution to combine unrelated queries into one row, e.g.:
select
(select count(*) c from v$session where program = 'w3wp.exe') w3wp,
(select count(*) c from v$session) total,
sysdate
from dual;
which gives the following one-row output:
W3WP TOTAL SYSDATE
----- ----- -------------------
14 290 2020/02/18 10:45:07
(which tells me that our web server currently uses 14 Oracle sessions out of the total of 290 sessions; I log this output without headers in an sqlplus script that runs every so many minutes)
Load each query into a datatable:
http://www.dotnetcurry.com/ShowArticle.aspx?ID=143
Load both DataTables into the DataSet:
http://msdn.microsoft.com/en-us/library/aeskbwf7%28v=vs.80%29.aspx
This is what you can do, assuming that your ProductName columns have common values.
SELECT
Table1.ProductName,
Table1.NumberofProducts,
Table2.ProductName,
Table2.NumberofProductssold
FROM Table1
INNER JOIN Table2
ON Table1.ProductName= Table2.ProductName
Try this:
SELECT table1.ProductName, NumberofProducts, NumberofProductssold
FROM table1
JOIN table2
ON table1.ProductName = table2.ProductName
Try this:
// Get the records for the current month, last month and all time,
// and merge them into a single array.
$analyticsData  = $this->user->getMemberInfoCurrentMonth($userId);
$analyticsData1 = $this->user->getMemberInfoLastMonth($userId);
$analyticsData2 = $this->user->getMemberInfAllTime($userId);

foreach ($analyticsData2 as $arr) {
    foreach ($analyticsData1 as $arr1) {
        if ($arr->fullname == $arr1->fullname) {
            $arr->last_send_count = $arr1->last_send_count;
            break;
        } else {
            $arr->last_send_count = 0;
        }
    }
    foreach ($analyticsData as $arr2) {
        if ($arr->fullname == $arr2->fullname) {
            $arr->current_send_count = $arr2->current_send_count;
            break;
        } else {
            $arr->current_send_count = 0;
        }
    }
}

echo "<pre>";
print_r($analyticsData2);
die;
Given the following table structure:
table1
first | pkey <- column names
val1 | 1
table2
second | fkey <- column names
val2 | 1
val3 | 1
val4 | 1
I would like some output like the following:
first | second <- column names
val1 | val2, val3, val4
I have tried:
select table1.first, (select table2.second where table1.pkey=table2.fkey)
from table1 join on table2 where table1.pkey=table2.fkey;
Something about that looks wrong, and I get the following error:
more than one row returned by a subquery used as an expression
I also tried
select table1.first, (select table2.second where table1.pkey=table2.fkey)
from table1, table2;
Try the following using string_agg. Here is the DEMO
select
first,
string_agg(second, ',') as second
from table1 t1
join table2 t2
on t1.pkey = t2.fkey
group by
first
Output
*----------------------*
| first second |
*----------------------*
| val1 val2,val3,val4 |
*----------------------*
I would recommend aggregating the second column as an array rather than a string:
select t1.first, array_agg(t2.second)
from table1 t1 join
table2 t2
on t1.pkey = t2.fkey
group by first
You are almost there. Depending on the result you want to get, there are three options:
select
table1.first,
(select string_agg(table2.second, ', ') from table2 where table1.pkey=table2.fkey)
from table1;
to get a plain text value, or
select
table1.first,
(select array_agg(table2.second) from table2 where table1.pkey=table2.fkey),
array(select table2.second from table2 where table1.pkey=table2.fkey)
from table1;
to get an array of values (in two syntactic ways), or
select
table1.first,
(select json_agg(table2.second) from table2 where table1.pkey=table2.fkey)
from table1;
to get a JSON array.
Note that I moved table2 completely to the subquery.
When I posted this question, I remembered that I had inadvertently done what was asked a while ago, but I could not remember how. It turns out this might be an easier way to do it.
select table1.first, table2 as second
from table1 join table2 on table1.pkey=table2.fkey;
It seems that selecting the table itself in a join puts the whole returned row into a single column named after the table. You can then use AS to give that column the name you want.
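As a hedged illustration (assuming PostgreSQL, which the string_agg answers and the "more than one row returned by a subquery" error suggest), the whole-row value is rendered as a composite, and an individual field can be pulled back out of it with parentheses; just_second is only an illustrative alias:
select table1.first, table2 as second, (table2).second as just_second
from table1 join table2 on table1.pkey=table2.fkey;
--  first |  second  | just_second
-- -------+----------+-------------
--  val1  | (val2,1) | val2
--  val1  | (val3,1) | val3
--  val1  | (val4,1) | val4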
Please help me, I need to find an SQL solution for grouping data in a SQL Server database.
I'm pretty sure that it could be done in one SQL request but I can't see the trick.
Let's see the problem:
I have a two-column table (please see the example below). I just want to add a new column containing a number or a string which indicates the group.
BEFORE:
Col1 | Col2
-----+-----
A | B
B | C
D | E
F | G
G | H
I | I
J | U
AFTER TRANSFORMATION:
Col1 | Col2 | Group
-----+------+------
A | B | 1
B | C | 1
D | E | 2
F | G | 3
G | H | 3
I | I | 4
J | U | 5
In other words: A, B, C are in the same group; D and E too; F, G, H are in group 3; and so on.
Do you have any lookup table to get this group mapping?
Or, if you just have logic defined to decide a group, I would recommend adding a UDF which will return the group for the supplied values:
SELECT Col1, Col2, dbo.GetGroupID(Col1, Col2) AS [Group]
FROM Table
Your UDF will be something like the following:
CREATE FUNCTION GetGroupID
(
    -- Add the parameters for the function here
    @Col1 varchar(10),
    @Col2 varchar(10)
)
RETURNS int
AS
BEGIN
    DECLARE @groupID int
    IF (@Col1 = 'A' AND @Col2 = 'B') OR (@Col1 = 'B' AND @Col2 = 'C')
    BEGIN
        SET @groupID = 1
    END
    IF @Col1 = 'D' AND @Col2 = 'E'
    BEGIN
        SET @groupID = 2
    END
    -- You can write several conditions in the same manner.
    RETURN @groupID
END
However, in case you have this mapping defined somewhere in another table, let us know the structure of that table and we can then update the query to join with it instead of using the UDF.
Considering query performance: if the amount of data in your table is huge, it is recommended to put these mappings in a fixed table and join to that table in the query. Using a UDF may harm performance on large data sets.
There is absolutely no need for a UDF here. Regardless of whether you are looking to update the table with a new column or simply pull out the data with the grouping applied, you will be best off using a set-based solution, i.e. create and join to a table.
I am assuming here that you don't have messy data, such as a row with Col1 = 'A' and Col2 = 'F'.
If you are able to add new tables permanently you can use the following to create your lookup table:
create table Col1Groups(Col1 nvarchar(10), GroupNum int);
insert into Col1Groups(Col1,GroupNum) values ('A',1),('B',1),('C',1),('D',2),('E',2),('F',3),('G',3),('H',3);
and then join to it:
select t.Col1
,t.Col2
,g.GroupNum
from Table t
inner join Col1Groups g
on t.Col1 = g.Col1
If you can't, you can just create a derived table via a CTE:
with Col1Groups as
(
select Col1
,GroupNum
from (values('A',1),('B',1),('C',1),('D',2),('E',2),('F',3),('G',3),('H',3)) as x(Col1,GroupNum)
)
select t.Col1
,t.Col2
,g.GroupNum
from Table t
inner join Col1Groups g
on t.Col1 = g.Col1
You get the first rows per group with
select col1, col2 from mytable where col1 not in (select col2 from mytable) or col1 = col2;
We can give these rows numbers with
rank() over (order by col1) as grp
Now we must iterate through the rows to find the ones belonging to those first rows, then the ones belonging to these, and so on: a recursive query.
with cte(col1, col2, grp) as
(
select col1, col2, rank() over (order by col1) as grp
from mytable where col1 not in (select col2 from mytable) or col1 = col2
union all
select mytable.col1, mytable.col2, cte.grp
from cte
join mytable on mytable.col1 = cte.col2
where mytable.col1 <> mytable.col2
)
select * from cte
order by grp, col1;
Additional answer for a more flexible approach
Originally you asked for chains A|B -> B|C, F|G -> G|H etc., but in your comment to my other answer you introduced forks like A|B -> B|C, B|D and I've adjusted my answer.
If you want to go one step further and introduce net-like relations such as A|B -> B|C, D|C, we can no longer follow chains forward only (in the example, D belongs to the A group because, though A doesn't lead to D directly, it leads to C and D also leads to C). Here is a way to solve this:
Get all letters from the table (no matter whether in col1 or col2). Then for each of them find related letters (again no matter whether in col1 or col2). And for these again find related letters, and so on. That will give you complete groups, but with duplicates (as D is in the A group, A is in the D group also), which you can get rid of by simply taking the smallest (or greatest) group key per letter. Then join the groups to the table.
The query:
with cte(col, grp) as
(
select col, rownum as grp from
(select col1 as col from mytable union select col2 from mytable)
union all
select case when mytable.col1 = cte.col then mytable.col2 else mytable.col1 end, cte.grp
from cte
join mytable on cte.col in (mytable.col1, mytable.col2)
where mytable.col1 <> mytable.col2
)
cycle col set is_cycle to 'y' default 'n'
select mytable.col1, mytable.col2, x.grp
from mytable
join (select col, min(grp) as grp from cte group by col) x on x.col = mytable.col1
order by grp, col;
Say I have a table that looks something like this
COL1
1
1
1
2
2
3
3
3
3
4
5
5
6
6
7
7
There are some other columns, but they are unimportant for this question. If I want to return all but the first two values from 4, how would I do this with Derby?
Here is the expected output, to clear up what I'm wanting:
COL1
3
3
3
3
4
5
5
6
6
Thanks for the help, I'm not the best with SQL but I'm trying :)
Try this...
SELECT t.*
FROM mytab t
INNER JOIN (SELECT MIN(COL1) AS VAL2
FROM mytab
WHERE COL1 NOT IN (SELECT MIN(COL1) FROM mytab)) x
ON t.COL1 > x.VAL2
working example at
http://www.datagloop.com/?USERNAME=DATAGLOOP/SO_DERBY&ACTION=LOGIN
Here is a solution that uses a temporary table, which allows more flexibility and readability and also lets you tune the parameters better.
The ROWID part of the queries is your position reference.
CREATE TEMP TABLE mycol_order
(COL1 INTEGER NOT NULL,
TOTAL_COLS INTEGER NOT NULL,
PRIMARY KEY (COL1));
INSERT INTO mycol_order
(COL1,TOTAL_COLS)
SELECT DISTINCT t1.COL1,
t2.total
FROM mytab t1,
(SELECT COUNT(DISTINCT COL1) AS total FROM mytab) t2
ORDER BY 1;
SELECT t.*
FROM mytab t
INNER JOIN mycol_order co
ON co.col1 = t.col1
AND co.ROWID > 2
AND co.ROWID < co.total_cols;
Also updated the working example at http://www.datagloop.com/?USERNAME=DATAGLOOP/SO_DERBY&ACTION=LOGIN
I have 2 tables that are not related at all and I need to put them together - one column per table. When I try the cartesian join, I end up with every combination:
SELECT Field1, Field2 FROM Table1, Table2
Result:
Table1.Field1   Table2.Field2
-----------------------------
1               1
1               2
1               3
2               1
2               2
2               3
3               1
3               2
3               3
I need it to return side-by-side:
Table1.Field1   Table2.Field2
-----------------------------
1               1
2               2
3               3
Is this possible? Thanks in advance.
EDIT
Table1.Table1IDs
----------------
1
2
3
4
5
Table2.Table2IDs
----------------
6
7
8
9
10
Desired output (into a temp table/select statement)
Table1.Table1IDs   Table2.Table2IDs
-----------------------------------
1                  6
2                  7
3                  8
4                  9
5                  10
So that I can then do the insert I need into the actual table:
INSERT INTO dbo.MTMObjects
SELECT Table1IDs, Table2IDs
FROM [temp table or solution]
ANSWER
Bluefeet gave me the idea to use temp tables with an identity column that I can then join on. His is 'safer' because you aren't relying on SQL's good humor to sort both recordsets the same way, but this might help the next guy:
DECLARE @tmp_Table1 TABLE(ID int IDENTITY(1,1) NOT NULL, TableID1 int NOT NULL)
DECLARE @tmp_Table2 TABLE(ID int IDENTITY(1,1) NOT NULL, TableID2 int NOT NULL)
INSERT INTO @tmp_Table1
OUTPUT INSERTED.TableID1
SELECT * FROM Table1
INSERT INTO @tmp_Table2
OUTPUT INSERTED.TableID2
SELECT * FROM Table2
OUTPUT
SELECT tmp1.TableID1, tmp2.TableID2
FROM @tmp_Table1 tmp1 INNER JOIN @tmp_Table2 tmp2 ON tmp2.ID = tmp1.ID
CHEERS!
You can try something like this using row_number(). This will force a relationship between the two tables based on the row_number:
select t1.col1, t2.col2
from
(
select col1, row_number() over(order by col1) rn
from table1
) t1
inner join
(
select col2, row_number() over(order by col2) rn
from table2
) t2
on t1.rn = t2.rn
See SQL Fiddle with Demo
If I understood correctly, all you need is to add a filter in the WHERE clause:
SELECT Field1, Field2 FROM Table1, Table2 WHERE Table1.Field1=Table2.Field2
You should probably change your final solution to use an outer join instead of inner join, in case the tables don't have exactly the same number of rows.
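For instance, a sketch of that change applied to the final SELECT above (using the same table variables); a FULL OUTER JOIN keeps the unmatched rows from whichever table has more rows, with NULLs on the other side:
SELECT tmp1.TableID1, tmp2.TableID2
FROM @tmp_Table1 tmp1 FULL OUTER JOIN @tmp_Table2 tmp2 ON tmp2.ID = tmp1.ID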
If I have to search for some data I can use wildcards and use a simple query -
SELECT * FROM TABLE WHERE COL1 LIKE '%test_string%'
And, if I have to look through many values I can use -
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable)
But is it possible to use both together? That is, the query doesn't just perform a WHERE IN but also performs something similar to WHERE LIKE? A query that doesn't just look through a set of values but searches using wildcards through a set of values.
If this isn't clear I can give an example. Let me know. Thanks.
Example -
Let's consider -
AnotherTable -
id | Col
------|------
1 | one
2 | two
3 | three
Table -
Col | Col1
------|------
aa | one
bb | two
cc | three
dd | four
ee | one_two
bb | three_two
Now, if I use
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable)
This gives me -
Col | Col1
------|------
aa | one
bb | two
cc | three
But what if I need -
Col | Col1
------|------
aa | one
bb | two
cc | three
ee | one_two
bb | three_two
I guess this should help you understand what I mean by using WHERE IN and LIKE together
SELECT *
FROM TABLE A
INNER JOIN AnotherTable B on
A.COL1 = B.col
WHERE COL1 LIKE '%test_string%'
Based on the example code provided, give this a try. The final select statement presents the data as you have requested.
create table #AnotherTable
(
ID int IDENTITY(1,1) not null primary key,
Col varchar(100)
);
INSERT INTO #AnotherTable(col) values('one')
INSERT INTO #AnotherTable(col) values('two')
INSERT INTO #AnotherTable(col) values('three')
create table #Table
(
Col varchar(100),
Col1 varchar(100)
);
INSERT INTO #Table(Col,Col1) values('aa','one')
INSERT INTO #Table(Col,Col1) values('bb','two')
INSERT INTO #Table(Col,Col1) values('cc','three')
INSERT INTO #Table(Col,Col1) values('dd','four')
INSERT INTO #Table(Col,Col1) values('ee','one_two')
INSERT INTO #Table(Col,Col1) values('ff','three_two')
SELECT * FROM #AnotherTable
SELECT * FROM #Table
SELECT * FROM #Table WHERE COL1 IN(Select col from #AnotherTable)
SELECT distinct A.*
FROM #Table A
INNER JOIN #AnotherTable B on
A.col1 LIKE '%'+B.Col+'%'
DROP TABLE #Table
DROP TABLE #AnotherTable
Yes. Use the keyword AND:
SELECT * FROM TABLE WHERE COL1 IN (Select col from AnotherTable) AND COL1 LIKE '%test_string%'
But in this case, you are probably better off using JOIN syntax:
SELECT TABLE.* FROM TABLE JOIN AnotherTable on TABLE.COL1 = AnotherTable.col WHERE TABLE.COL1 LIKE '%test_string'
No, because each element in the LIKE clause needs the wildcard, and there's no way to do that with the IN clause.
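To illustrate (a sketch only, using the question's TABLE/AnotherTable placeholder names and SQL Server style string concatenation), neither of the direct forms gives "wildcards through a set of values", which is why the other answers fall back to a join:
-- IN only compares for equality, so 'one_two' is never matched:
SELECT * FROM [TABLE] WHERE COL1 IN (SELECT col FROM AnotherTable)
-- LIKE takes a single pattern, so this errors as soon as AnotherTable has more than one row:
SELECT * FROM [TABLE] WHERE COL1 LIKE (SELECT '%' + col + '%' FROM AnotherTable)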
The pattern matching operators are:
IN, against a list of values,
LIKE, against a pattern,
REGEXP/RLIKE against a regular expression (which includes both wildcards and alternatives, and is thus closest to "using wildcards through a set of values", e.g. (ab)+a|(ba)+b will match all strings aba...ba or bab...ab; a short sketch follows this list),
FIND_IN_SET to get the index of a string in a set (which is represented as a comma separated string),
SOUNDS LIKE to compare strings based on how they're pronounced and
MATCH ... AGAINST for full-text matching.
That's about it for string matching, though there are other string functions.
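A small sketch of the REGEXP idea (assuming MySQL, which the operators listed above imply; the alternation pattern is written by hand here, though it could also be assembled with GROUP_CONCAT from AnotherTable, and the backticks are only needed because TABLE is a reserved word):
SELECT *
FROM `TABLE`
WHERE Col1 REGEXP 'one|two|three';
-- matches one, two, three, one_two and three_two, i.e. the output asked for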
For the example, you could try joining on Table.Col1 LIKE CONCAT(AnotherTable.Col, '%'), though performance will probably be dreadful (assuming it works).
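Spelled out against the question's sample tables (a sketch only, again assuming MySQL's CONCAT; DISTINCT guards against a row matching more than one pattern):
SELECT DISTINCT t.*
FROM `TABLE` t
JOIN AnotherTable a
ON t.Col1 LIKE CONCAT(a.Col, '%');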
Try a cross join, so that you can compare every row in AnotherTable to every row in Table:
SELECT DISTINCT t.Col, t.Col1
FROM AnotherTable at
CROSS JOIN Table t
WHERE t.col1 LIKE ('%' + at.col + '%')
To make it safe, you'll need to escape wildcards in at.col. Try this answer for that.
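One way to do that (a sketch, assuming SQL Server as the '+' concatenation above implies, and assuming '!' is free to act as the escape character): escape the LIKE metacharacters in at.col and tell LIKE which escape character you used.
SELECT DISTINCT t.Col, t.Col1
FROM AnotherTable at
CROSS JOIN [Table] t
-- escape the escape character itself first, then %, _ and [
WHERE t.Col1 LIKE '%' + REPLACE(REPLACE(REPLACE(REPLACE(at.col, '!', '!!'), '%', '!%'), '_', '!_'), '[', '![') + '%' ESCAPE '!'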
If I understand the question correctly, you want the rows from "Table" where "Table.Col1" is IN "AnotherTable.Col", and you also want the rows where Col1 is LIKE '%some_string%'.
If so, you want something like:
SELECT
t.*
FROM
[Table] t
LEFT JOIN
[AnotherTable] at ON t.Col1 = at.Col
WHERE (at.Col IS NOT NULL
OR t.Col1 LIKE '%some_string%')
Something like this?
SELECT * FROM TABLE
WHERE
COL1 IN (Select col from AnotherTable)
AND COL1 LIKE '%test_string%'
Are you thinking about something like EXISTS?
SELECT * FROM TABLE t WHERE EXISTS (SELECT t2.col FROM AnotherTable t2 WHERE t.col1 LIKE '%' + t2.col + '%')