Fetching row of a table and moving across column of another table - sql

Given a table like this with a single column,
Names
------
A
B
C
D
E
Take each record from the table, look at its previous and next values, and put them across a new table.
Print NULL if no previous or next value exists, with columns named prev_name, current_name, next:
prev_name | current_name | next
-------------------------------
NULL      | A            | B
A         | B            | C
B         | C            | D
C         | D            | E
D         | E            | NULL
I am learning SQL and googled to find something that might help solve this, but couldn't find anything.
Any help would be great!

See the query in this fiddle link. But NULLs are treated as empty strings and omitted in string concatenation.
Please use this query:
SELECT LAG(NAMES) OVER(ORDER BY NAMES) AS PREV_NM, NAMES,
LEAD(NAMES) OVER(ORDER BY NAMES) AS NEXT_NM
FROM SAMPLE
The fiddle link has been updated.
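For anyone who wants to try this locally, here is a minimal runnable sketch of the LAG/LEAD approach using Python's bundled sqlite3 module (window functions need SQLite 3.25 or newer). The table and column names follow the question; the COALESCE wrapper is my addition, only there to render the literal 'null' the question asks for.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (names TEXT)")
conn.executemany("INSERT INTO sample VALUES (?)",
                 [("A",), ("B",), ("C",), ("D",), ("E",)])

# LAG looks one row back, LEAD one row ahead, both in names order;
# COALESCE turns the missing edges into the string 'null'.
rows = conn.execute("""
    SELECT COALESCE(LAG(names)  OVER (ORDER BY names), 'null') AS prev_nm,
           names,
           COALESCE(LEAD(names) OVER (ORDER BY names), 'null') AS next_nm
    FROM sample
    ORDER BY names
""").fetchall()

for prev_nm, current, next_nm in rows:
    print(prev_nm, current, next_nm)
```

The same SELECT works unchanged on Oracle, SQL Server, and PostgreSQL, since LAG/LEAD are standard window functions.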

Related

split column into lines (explode) in dbt with sql

I am very new to dbt, and I am trying to explode a column value into rows using SQL in dbt.
I have a table sample_data:
| id | content_toSplit |
| -------- | --------------- |
| 1 | [a, b, c, d] |
| 2 | [a, v] |
| 3 | [m, n, a] |
My expected output is:
| id | output_column |
|-----|-------------|
|1 | a |
|1 | b |
|1 | c |
|1 | d |
|2 | a |
|2 | v |
I tried using unnest in a macro, and used the macro in my model:
Macro
{% macro split_column_to_row(column_name) %}
cross join unnest{{ column_name }} as output_column
{% endmacro %}
The Macro used in my model
select id, content_toSplit from {{ref('source')}}
{{split_column_to_row(content_toSplit)}}
But I am getting an "SQL compilation error: Object 'UNNEST' does not exist or not authorized." error. Also, I would like to get a position of each value.
I have also tried the below using SQL:
with unnest_column as (
select
id,
content_toSplit
from my_table, unnest(content_toSplit) as exploded_value
)
select *
from unnest_column
But getting "unexpected '('. syntax error line"
Perhaps your macro is interpreting unnest as a table name since it follows cross join, but that is just a guess without the actual generated SQL. Your SQL statement uses unnest improperly: it is used as an old-style (pre SQL-92) cross join. However, no join clause is necessary; just put unnest(content_toSplit) as exploded_value in the select column list (see demo):
select id
, unnest(content_toSplit) as exploded_value
from sample_data
where id < 3; -- needed to get your specific output
Caution: please be consistent. You initially state "I have a table sample_data", yet your SQL references my_table. This is a small question, but had it been larger, the inconsistency could keep you from getting an answer. Details are important.
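unnest is PostgreSQL-specific, which is likely why the warehouse rejected it. As a hedged aside, here is an analogous sketch using Python's bundled sqlite3 module, where the json_each table-valued function plays the role of unnest (this assumes the array column is stored as JSON text; table and column names follow the question). A bonus: json_each's key column gives the 0-based position of each value, which the question also asked about.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample_data (id INTEGER, content_toSplit TEXT)")
conn.executemany(
    "INSERT INTO sample_data VALUES (?, ?)",
    [(1, '["a","b","c","d"]'), (2, '["a","v"]'), (3, '["m","n","a"]')],
)

# json_each emits one row per array element; j.value is the element
# and j.key is its 0-based position within the array.
rows = conn.execute("""
    SELECT d.id, j.value AS output_column, j.key AS position
    FROM sample_data d, json_each(d.content_toSplit) j
    WHERE d.id < 3          -- filter matching the expected output
    ORDER BY d.id, j.key
""").fetchall()
```

On PostgreSQL the equivalent position-aware form would be `unnest(...) WITH ORDINALITY`; check your warehouse's docs for its flattening function (Snowflake, for example, uses LATERAL FLATTEN).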

SQL: reverse groupby : EDIT

Is there a built-in function in SQL to reverse the order in which GROUP BY works? I am trying to group by a certain key, but I would like the last inserted record returned, not the first inserted record.
Changing the order with ORDER BY does not affect this behaviour.
Thanks in advance!
EDIT:
this is the sample data:
id | value
----------
1 | A
2 | B
3 | B
4 | C
As the result I want:
1 | A
3 | B
4 | C
not:
1 | A
2 | B
4 | C
When using GROUP BY id, I don't get the result I want.
The question here is how you are identifying the last inserted row. Based on your example, it looks like it is based on id. If id is auto-generated or a sequence, then you can definitely do this:
select max(id),value
from your_table
group by value
Ideally, in a table design, people use a date column that holds the time a particular record was inserted, so it is easy to order by that.
Use Max() as your aggregate function for your id:
SELECT max(id), value FROM <table> GROUP BY value;
This will return:
1 | A
3 | B
4 | C
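A quick runnable check of that MAX(id) grouping, using Python's bundled sqlite3 module (the table name your_table follows the first answer; the ORDER BY is mine, added only to make the output order deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (id INTEGER, value TEXT)")
conn.executemany("INSERT INTO your_table VALUES (?, ?)",
                 [(1, "A"), (2, "B"), (3, "B"), (4, "C")])

# Last inserted row per value, assuming id grows with insertion order.
rows = conn.execute("""
    SELECT MAX(id), value
    FROM your_table
    GROUP BY value
    ORDER BY MAX(id)
""").fetchall()
```

This yields exactly the 1|A, 3|B, 4|C rows asked for.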
As for Eloquent, I've not used it, but I think it would look like:
$myData = DB::table('yourtable')
->select('value', DB::raw('max(id) as maxid'))
->groupBy('value')
->get();

SQL join two tables using value from one as column name for other

I'm a bit stumped on a query I need to write for work. I have the following two tables:
|=============== Patterns ===============|
| type      | bucket_id | description    |
|-----------|-----------|----------------|
| pattern a | 1         | Email          |
| pattern b | 2         | Phone          |
|=============== Results ================|
| id  | buc_1 | buc_2 |
|-----|-------|-------|
| 123 | pass  |       |
| 124 | pass  | fail  |
In the results table, I can see that entity 124 failed a validation check in buc_2. Looking at the patterns table, I can see bucket 2 belongs to pattern b (bucket_id corresponds to the column name in the results table), so entity 124 failed phone validation. But how do I write a query that joins these two tables on the value of one of the columns? Limitations to how this query is going to be called will most likely prevent me from using any cursors.
Some crude solutions:
SELECT "id", "description" FROM
Results JOIN Patterns
ON "buc_1" = 'fail' AND "bucket_id" = 1
union all
SELECT "id", "description" FROM
Results JOIN Patterns
ON "buc_2" = 'fail' AND "bucket_id" = 2
Or, with a very probably better execution plan:
SELECT "id", "description" FROM
Results JOIN Patterns
ON "buc_1" = 'fail' AND "bucket_id" = 1
OR "buc_2" = 'fail' AND "bucket_id" = 2;
This will report all failure descriptions for each id having a fail case in bucket 1 or 2.
See http://sqlfiddle.com/#!4/a3eae/8 for a live example
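For a local check of the UNION ALL version, here is a sketch using Python's bundled sqlite3 module (the Oracle-style quoted identifiers from the fiddle are dropped, since SQLite does not need them; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Patterns (type TEXT, bucket_id INTEGER, description TEXT);
    INSERT INTO Patterns VALUES ('pattern a', 1, 'Email'),
                                ('pattern b', 2, 'Phone');
    CREATE TABLE Results (id INTEGER, buc_1 TEXT, buc_2 TEXT);
    INSERT INTO Results VALUES (123, 'pass', NULL),
                               (124, 'pass', 'fail');
""")

# One SELECT per bucket column, each joined to its matching Patterns row;
# UNION ALL stitches the per-bucket failure reports together.
rows = conn.execute("""
    SELECT id, description FROM Results JOIN Patterns
      ON buc_1 = 'fail' AND bucket_id = 1
    UNION ALL
    SELECT id, description FROM Results JOIN Patterns
      ON buc_2 = 'fail' AND bucket_id = 2
""").fetchall()
```

Only entity 124 fails, and it comes back paired with the Phone description from bucket 2.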
That being said, the right solution would be probably to change your schema to something more manageable. Say by using an association table to store each failed test -- as you have in fact here a many to many relationship.
Another approach, if you are using Oracle ≥ 11g, would be to use the UNPIVOT operation. This translates columns to rows at query execution:
select * from Results
unpivot ("result" for "bucket_id" in ("buc_1" as 1, "buc_2" as 2))
join Patterns
using("bucket_id")
where "result" = 'fail';
Unfortunately, you still have to hard-code the various column names.
See http://sqlfiddle.com/#!4/a3eae/17
It looks to me that what you really want to know is the description (in your example, Phone) of a Patterns entry, given the condition that the bucket failed. Regardless of the specific example you have, you want a solution that fulfills that condition, not just your particular example.
I agree with the comment above. Your bucket entries should be tuples (rows) and not columns, and you should also share the ids on each table so you can actually join them. For example, consider adding a bucket_id column to index the bucket number, then just add ONE status column to store the state, like this:
|=============== Patterns ===============|
| type      | bucket_id | description    |
|-----------|-----------|----------------|
| pattern a | 1         | Email          |
| pattern b | 2         | Phone          |
|=============== Results ================|
| entity_id | bucket_id | status         |
|-----------|-----------|----------------|
| 123       | 1         | pass           |
| 124       | 1         | pass           |
| 123       | 2         |                |
| 124       | 2         | fail           |
1.-Use an INNER JOIN (http://www.w3schools.com/sql/sql_join_inner.asp) and the WHERE clause to filter only those buckets that failed.
2.-Would this example help?
SELECT Patterns.type, Patterns.description, Results.entity_id, Results.status
FROM Patterns
INNER JOIN Results
ON
Patterns.bucket_id = Results.bucket_id
WHERE
Results.status = 'fail'
Lastly, I would also add a primary key column to each table to make sure indexing is faster for each unique combination.
Thanks!

Remove semi-duplicate rows from a result set

I have a logging table (TABLE_B) that is updated via triggers from a main table (TABLE_A). The trigger fires whenever any field on TABLE_A is inserted/updated. We need to pull out a report that shows only a subset of updates on TABLE_B - i.e. the user is only interested in the fields:
ID
STAGE
STATUS
UPDATE_DATE
I need to remove sequential duplicates from the result set - i.e. suppose the following entries exist in TABLE_B:
+----+-----+------+-----------+
|ID |STAGE|STATUS|UPDATE_DATE|
+----+-----+------+-----------+
|4567|7 |9 |2012-12-25 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-24 |
+----+-----+------+-----------+
|4567|4 |3 |2012-12-23 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-22 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-21 |
+----+-----+------+-----------+
|4567|4 |3 |2012-12-20 |
+----+-----+------+-----------+
|4567|4 |2 |2012-12-19 |
+----+-----+------+-----------+
From the bottom, I need to extract rows 1, 2, 3, 5, 6, 7 - omitting row 4 only: the entries at rows 3 and 4 are duplicates (row 4 was triggered into TABLE_B by an update to some other field in TABLE_A, but its stage/status combination hasn't altered, therefore it can be ignored).
So, when I discover that the next row in a result set is a duplicate (and only the next row) of the current row, how can I either remove it from the result set, or neglect to select it in the first place? I'll be performing the operation using a stored proc - will a cursor be involved in this?
Sybase 12.5, though the syntax is very close to SQL Server.
Had a look at a similar question on Stack Overflow:
http://stackoverflow.com/questions/19774273/remove-duplicates-in-sql-result-set-of-one-table
I think this answers the question:
select id, status, stage, min(updated_date)
from TABLE_B
where id = <someValue>
group by id, status, stage
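A runnable sketch of that grouped query against the sample data, using Python's bundled sqlite3 module. One assumption worth checking: GROUP BY collapses every repeat of a stage/status pair into a single earliest-date row, not only the adjacent duplicates, so verify the row count against the six rows you expect.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE table_b (id INTEGER, stage INTEGER, status INTEGER, update_date TEXT)")
conn.executemany("INSERT INTO table_b VALUES (?, ?, ?, ?)", [
    (4567, 7, 9, "2012-12-25"),
    (4567, 4, 2, "2012-12-24"),
    (4567, 4, 3, "2012-12-23"),
    (4567, 4, 2, "2012-12-22"),
    (4567, 4, 2, "2012-12-21"),
    (4567, 4, 3, "2012-12-20"),
    (4567, 4, 2, "2012-12-19"),
])

# Earliest occurrence of each id/status/stage combination; note this keeps
# one row per distinct combination, collapsing non-adjacent repeats too.
rows = conn.execute("""
    SELECT id, status, stage, MIN(update_date)
    FROM table_b
    WHERE id = 4567
    GROUP BY id, status, stage
    ORDER BY MIN(update_date)
""").fetchall()
```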

Transforming a 2 column SQL table into 3 columns, column 3 lagged on 2

Here's my problem: I want to write a query (that goes into a larger query) that takes a table like this:
ID | DATE
A | 1
A | 2
A | 3
B | 1
B | 2
and so on, and transforms it into;
ID | DATE1 | DATE2
A | 1 | 2
A | 2 | 3
A | 3 | NOW
B | 1 | 2
B | 2 | NOW
Where the numbers are dates, and NOW() is always appended to the most recent date. Given free rein I would do this in Python, but unfortunately this goes into a larger query. We're using Sybase's SQL Anywhere 12, I think? I interact with the database using SQuirreL SQL.
I'm very stumped. I thought (SQL query to transform a list of numbers into 2 columns) would help, but I'm afraid I don't know enough to make it work. I was thinking of JOINing the table to itself, but I don't know how to SELECT for only the A-1-2 rows instead of the A-1-3 rows as well, for instance, or how to insert the NOW() value into it. Does anyone have any ideas?
I made an sqlfiddle.com example to outline a solution for your case. You were mentioning dates but using integers, so I chose to do an integer example; it can be modified. I wrote it in PostgreSQL, so the coalesce() function can be substituted with nvl() or similar. Also, the parameter '0' can be substituted with any value, including now(), but then you must change the data type of the "i" column in the table to be a date as well. Please let me know if you need further help on this.
select a.id, a.i, coalesce(min(b.i),'0') from
test a
left join test b on b.id=a.id and a.i<b.i
group by a.id,a.i
order by a.id, a.i
http://sqlfiddle.com/#!15/f1fba/6
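The fiddle's self-join can also be sketched with Python's bundled sqlite3 module. As in the answer's integer example, 0 stands in for now() as the placeholder on the most recent row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id TEXT, i INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?)",
                 [("A", 1), ("A", 2), ("A", 3), ("B", 1), ("B", 2)])

# For each (id, i), DATE2 is the smallest later value with the same id;
# COALESCE substitutes the placeholder 0 (stand-in for now()) when none exists.
rows = conn.execute("""
    SELECT a.id, a.i, COALESCE(MIN(b.i), 0)
    FROM test a
    LEFT JOIN test b ON b.id = a.id AND a.i < b.i
    GROUP BY a.id, a.i
    ORDER BY a.id, a.i
""").fetchall()
```

Each row pairs a date with its immediate successor, and the newest date per id gets the placeholder, matching the ID | DATE1 | DATE2 shape asked for.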