SQLite: Loop through rows and match, break when a distinct row is encountered

I want to compare two tables A and B row by row on the basis of a column name. As soon as I encounter a distinct row I want to break.
I want to do this using a query, something like this:
select case
    when ( compare row 1 of A and B
           if same, continue with row+1
           else break )
    if all same, then 1
    else 0
end as result
I am not sure how to loop through rows and break. Is it even possible in SQLite?
EDIT
Table looks like this (table A on the left, table B on the right):
id | name                     id | name
---+------                    ---+------
 1 | A                         1 | A    (same)
 2 | C                         2 | C    (same)
 3 | B                         3 | Z    (different, break)
 4 | K
Both tables have the same structure. I just want to compare the names row by row, to see whether there is any difference in order.
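One way to express this without a procedural loop (a sketch only; it assumes the two tables are called a and b and that the id column defines the row order) is to join on id and check whether any position differs:
-- returns 1 when every row of a has the same name at the same id in b, else 0
-- a row of a with no counterpart in b (like id 4 above) also counts as a difference
select case
         when exists (select 1
                      from a
                      left join b on b.id = a.id
                      where b.name is null or b.name <> a.name)
         then 0
         else 1
       end as result;
-- or, to see the first differing row instead of a flag:
select a.id, a.name as name_a, b.name as name_b
from a
left join b on b.id = a.id
where b.name is null or b.name <> a.name
order by a.id
limit 1;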

Related

Lookup row in another table that has the maximum value among all rows that are related to the current record

There are two tables, say A and B. I want to create a calculated column in A with the following data from B. For a given row i in A, I want the ID of that row in B that has the maximum value among all rows that are related to row i.
For example:
Table A:
ID
1
2
Table B:
ID | A_ID | Value
x | 1 | 100
y | 1 | 200
x | 2 | 400
y | 2 | 300
Desired result:
Table A:
ID | B_ID
1 | y
2 | x
I hope this is clear. A SQL statement like the following one would do the job.
update A set B_ID = (select B.ID from B where B.A_ID = A.ID order by B.Value desc limit 1)
The closest I got so far was with LOOKUPVALUE, but it gave me the value of the global MAX, instead of the MAX within the relevant window.
Here is a solution:
=SELECTCOLUMNS(
    FILTER(B,
        B[Value] = CALCULATE(MAX(B[Value])) &&
        A[ID] = B[A_ID]),
    "some name", B[ID])
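For the plain-SQL side of this per-group maximum lookup (a sketch, assuming the A and B tables described in the question), a window function gives the same B_ID per row of A as the UPDATE above:
select ID, B_ID
from (select A.ID,
             B.ID as B_ID,
             row_number() over (partition by A.ID order by B.Value desc) as rn
      from A
      join B on B.A_ID = A.ID) ranked
where rn = 1;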

SQL UPDATE issues

I'm currently working on an application where an UPDATE query is performed multiple times in a row on a single table and I've stumbled into a problem.
If an UPDATE query ends up swapping two values, e.g. updating 1 -> 2 and then 2 -> 1, the following happens:
original | after 1 -> 2 | after 2 -> 1 | what I want
    1    |      2       |      1       |      2
    2    |      2       |      1       |      1
    3    |      3       |      3       |      3
    4    |      4       |      4       |      4
There are no other columns which can be used to further differentiate the tuples consistently.
Is there a way to achieve 'what I want' without restructuring the table/database? One solution I could think of is to first delete all the rows and then insert the updated ones (this is satisfactory implementation-wise), but I'd like to know whether it's doable with an UPDATE query.
Either make this one update:
update mytable
set col = case when col = 1 then 2 else 1 end
where col in (1,2);
Or three updates (by using an "impossible" value, i.e. a value that is not used in the column):
update mytable set col = -1 where col = 1;
update mytable set col = 1 where col = 2;
update mytable set col = 2 where col = -1;
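The single-statement version swaps both values atomically on its own; the three-step variant briefly leaves the placeholder -1 in the table, so it is safest to run it inside a transaction (a sketch, reusing the mytable/col names from above):
begin;
update mytable set col = -1 where col = 1;
update mytable set col = 1  where col = 2;
update mytable set col = 2  where col = -1;
commit;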
You could also select the ids first into two separate lists or variables, and then update based on those previously selected ids.

SQL - Retrieve only one record of related records

I have a table which depicts shares of a particular type of record. Two records are created for each shared item, which results in something like this:
|--------------|------------|
| Shared From | Shared To |
|--------------|------------|
| Record 1 | Record 2 |
|--------------|------------|
| Record 2 | Record 1 |
|--------------|------------|
Is it possible to retrieve a single share record? Meaning that from the table above I get only one record (it doesn't make a difference which):
|--------------|------------|
| Shared From | Shared To |
|--------------|------------|
| Record 1 | Record 2 |
Using DISTINCT on both columns doesn't work since the combinations are different.
Use case expressions to return the smaller value in the first column and the larger value in the second column, then do SELECT DISTINCT to remove duplicates.
select distinct
       case when SharedFrom < SharedTo then SharedFrom else SharedTo end,
       case when SharedFrom > SharedTo then SharedFrom else SharedTo end
from tablename
Note: this may swap the columns within a pair (whenever SharedFrom > SharedTo), but each combination appears only once.
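In engines that provide LEAST() and GREATEST() (for example PostgreSQL or MySQL), the same normalization can be written more compactly (a sketch, using the same tablename and columns as above):
select distinct least(SharedFrom, SharedTo)    as SharedFrom,
                greatest(SharedFrom, SharedTo) as SharedTo
from tablename;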
If I understand you correctly, you want to get a single row from the table.
To get the top N rows of a table you can use TOP(N) (SQL Server syntax), for example:
SELECT TOP(1) share_column FROM shares_table

SQL: reverse groupby

Is there a built-in function in SQL to reverse the order in which GROUP BY works? I am grouping by a certain key, but I would like to have the last inserted record returned rather than the first inserted record.
Changing the order with ORDER BY does not affect this behaviour.
Thanks in advance!
EDIT:
this is the sample data:
id|value
-----
1 | A
2 | B
3 | B
4 | C
As the result I want:
1 | A
3 | B
4 | C
not
1 | A
2 | B
4 | C
When using GROUP BY, I don't get the result I want.
The question here is how you are identifying the last inserted row. Based on your example, it looks like it is based on id. If id is auto-generated or comes from a sequence, then you can definitely do this.
select max(id),value
from your_table
group by value
Ideally, a table design includes a date column which holds the time a particular record was inserted, so that it is easy to order by that.
Use Max() as your aggregate function for your id:
SELECT max(id), value FROM <table> GROUP BY value;
This will return:
1 | A
3 | B
4 | C
As for Eloquent, I've not used it, but I think it would look like this:
$myData = DB::table('yourtable')
    ->select('value', DB::raw('max(id) as maxid'))
    ->groupBy('value')
    ->get();
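If columns other than the id are needed from the last-inserted row, one common pattern is to join the grouped result back onto the table (a sketch, reusing the your_table name from the first answer):
select t.*
from your_table t
join (select value, max(id) as maxid
      from your_table
      group by value) latest on latest.maxid = t.id;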

PostgreSQL: Distribute rows evenly and according to frequency

I am having trouble with a complex ordering problem. I have the following example data:
table "categories"
id | frequency
1 | 0
2 | 4
3 | 0
table "entries"
id | category_id | type
1 | 1 | a
2 | 1 | a
3 | 1 | a
4 | 2 | b
5 | 2 | c
6 | 3 | d
I want to put the entries rows in an order so that category_id and type are distributed evenly.
More precisely, I want to order entries in a way that:
1. category_ids that refer to a category with frequency = 0 are distributed evenly, so that a row is followed by a different category_id whenever possible, e.g. category_ids of rows: 1, 2, 1, 3, 1, 2.
2. Rows with category_ids of categories with frequency <> 0 should be inserted starting roughly from the beginning, with a minimum of frequency rows between them (the gaps should vary). In my example these are the rows with category_id = 2. So the result could start with row id #1, then #4, then a minimum of 4 rows of other categories, then #5.
3. In the end result, rows with the same type should not be next to each other.
Example result:
id | category_id | type
1 | 1 | a
4 | 2 | b
2 | 1 | a
6 | 3 | d
.. some other row ..
.. some other row ..
.. some other row ..
5 | 2 | c
entries are like a stream of things the user gets (one at a time).
The whole ordering should give users some variation. It's just there so they are not shown similar entries all the time, so it doesn't have to be perfect.
The query also does not have to give the same result on each call; using random() is totally fine.
frequencies are there to give entries of certain categories a higher priority, so that they are not distributed across the whole range but are placed more towards the beginning of the result list. Even if there are a lot of these entries, they should not completely crowd out the frequency=0 entries at the beginning, though.
I'm not sure how to start this. I think I can use window functions and ntile() to distribute rows by category_id and type, but I have no idea how to insert the non-zero-frequency entries afterwards.
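A rough starting point for just the even-distribution part, ignoring the frequency rule (a sketch only, assuming the entries table exactly as shown above):
-- rank each row within its category in random order, then interleave the
-- categories by ordering on that rank: round 1 takes one row per category,
-- round 2 the next, and so on; the trailing random() shuffles each round
select e.*
from entries e
order by row_number() over (partition by e.category_id order by random()),
         random();
The frequency <> 0 rows would still have to be merged in separately, e.g. by computing a target position for them and ordering on that, which is the part that needs more work.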