Edit to clarify my question
Database: Presto. I'm querying it through my company's own tool, but it's basically similar to MySQL and other engines.
The purpose: I have training classes, and I want to evaluate them by comparing a few metrics before and after the training day (say, W+1, W+2, etc. vs. pre-training). After a few sub-queries, I was able to produce the table below (each class has its own metric and is unique).
class | metric       | shopid | pre-training | w+1 | w+2 | w+3
A     | increase sth | 1122   | x            | x1  | x2  | x3
B     | decrease sth | 3322   | y            | y1  | y2  | y3   (etc.)
Now I want to compare the values of W+1, W+2, etc. to pre-training and draw a conclusion, for example: if x1 > x then 'good'; if x2 < x then 'bad'; and so on.
So I wrote a CASE WHEN statement:
CASE
    WHEN metric = 'increase sth' AND x1 > x THEN 'good'
    WHEN metric = 'increase sth' AND x1 < x THEN 'bad'
    WHEN metric = 'decrease sth' ....
END
to apply to columns w+1, w+2, etc. and get the desired result. But since I have to repeat the CASE statement for all 4 columns, it gets very lengthy and repetitive, so I was thinking of a LOOP: I'd only need to write the CASE statement once and it could be applied to all 4 columns without repetition.
I could have extracted the data and done this processing in Python, but I want to learn how to do it in SQL so that I don't have extra work to do after the query finishes.
Sorry, I'm very new to SQL (only about two months in, still working hard to enrich my knowledge). Hope you can help, and many thanks.
Your question is quite vague, but you can use a lateral join to split the rows apart and write the calculation only once. This puts the values in separate rows:
select t.*, diff.*
from t cross join lateral
     (select which, (v.metric - t.metric1) as diff
      from (values (2, t.metric2), (3, t.metric3)
           ) v(which, metric)
     ) diff;
You can also put the values back in one row.
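For example, here is a minimal sketch using conditional aggregation, assuming the lateral-join result was saved as diffs(class, which, diff) (all names here are illustrative, not from the question):

-- fold the per-"which" rows back into one row per class
select class,
       max(case when which = 2 then diff end) as diff_2,
       max(case when which = 3 then diff end) as diff_3
from diffs
group by class;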
That said, you don't seem to have a good foundation in working with relational databases. The problem starts with your initial structure, where you are storing values across columns rather than in separate rows.
SQL doesn't have "looping" because it is set-based, and being set-based is what lets the optimizer figure out the best way to run queries.
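For your concrete case, a minimal sketch of the same idea in Presto, using CROSS JOIN UNNEST to unpivot the week columns so the CASE is written only once (the table name results, the columns pre_training and w1..w3, and both metric labels are assumptions, not from your schema):

-- unpivot the week columns into rows, then apply one CASE to all of them
SELECT t.class,
       w.week,
       CASE
           WHEN t.metric = 'increase sth' AND w.val > t.pre_training THEN 'good'
           WHEN t.metric = 'increase sth' AND w.val < t.pre_training THEN 'bad'
           WHEN t.metric = 'decrease sth' AND w.val < t.pre_training THEN 'good'
           WHEN t.metric = 'decrease sth' AND w.val > t.pre_training THEN 'bad'
       END AS verdict
FROM results t
CROSS JOIN UNNEST(
    ARRAY['w+1', 'w+2', 'w+3'],
    ARRAY[t.w1, t.w2, t.w3]
) AS w(week, val);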
I am supposed to make some changes to an enormous CSV file based on a different file. I chose to do it in SQL, but after further consideration I am not sure how to proceed.
In the 1st table I have a list of contracts. Columns represent segments the contract belongs to and products that can be linked to the contract (example in the table below).
Here contract no. 1234 belongs to segments X1 and Y2. There is no product number 1 linked to it, but it has product number 2 linked to it. The product originally ends on the 1st of January 2030.
cont_n | date | segment_1 | segment_2 | .. | prod_1 | date_prod_1 | product_2 | date_product_2 | ..
1234   | 3011 | X1        | Y2        | .. |        |             | YES       | 01/01/2030     | ..
The 2nd file is a list of combinations of segments plus an indication of how the "date" columns should be adjusted. The example shows the following situation: if there is prod_2 linked to a contract which belongs to groups X1 and Y2, end prod_2 this year. I need this result to alter table no. 1.
prod_no | segment_1 | segment_2 | result
prod_2  | X1        | Y2        | end the product on anniversary
Ergo I need to get to the result:
cont_n | date | segment_1 | segment_2 | .. | prod_1 | date_prod_1 | product_2 | date_product_2 | ..
1234   | 3011 | X1        | Y2        | .. |        |             | YES       | 30/11/2019     | ..
In the original files I have around 600k rows and more than 300 columns (meaning around 100 different products) in table 1 and around 800 possible combinations of segments in table 2.
The algorithm I need to implement (very generally):
for x = 1 to 100
    IF product_x = YES THEN date_product_x = date + "search for result in table2"
Is there a reasonable way to change the "date_product_x" columns based on the 2nd table, or would it be better to find a different solution?
Thanks a lot!
I can only give you a general approach, because the information in your question is general. (For example, why does "end the product on anniversary" translate to "30/11/2019"? It's not explained in the question, so I assume you're going to be able to handle that part of the logic.)
You can approach this by using an UNPIVOT on Table 1 to get a structure like:
cont_n | segment1 | segment2 | product_number | product_date
You will UNPIVOT .. FOR date_product_1 through date_product_100. You'll either have to type out all 100 column names or use dynamic SQL to build the whole thing.
You'll do some string manipulation to grab the "x" portion of "date_product_x" and turn it into "prod_x". Then you can join to the second table on the two segment columns and the "prod_x" column, get the result column value, and apply whatever rules you're using to compute the new value for date_product_x.
Finally, you take that result, and PIVOT it back to the one-row-per-contract form, and JOIN it to your original table to UPDATE the date_product_x columns.
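To illustrate, here is a rough sketch of the unpivot-and-join step in SQL Server syntax, shortened to 3 product columns; the table names table1/table2 and the exact column list are assumptions, not taken from your files:

-- unpivot the date_product_x columns into one row per (contract, product),
-- then join to the rules table on segments + product number
SELECT u.cont_n,
       REPLACE(u.product_column, 'date_product_', 'prod_') AS prod_no,
       u.product_date,
       t2.result
FROM table1
UNPIVOT (
    product_date FOR product_column IN (date_product_1, date_product_2, date_product_3)
) AS u
JOIN table2 t2
  ON  t2.segment_1 = u.segment_1
  AND t2.segment_2 = u.segment_2
  AND t2.prod_no   = REPLACE(u.product_column, 'date_product_', 'prod_');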
I need to select x unique rows at random from a table with n rows, while excluding 1 specific row (x is small, 3 for example). This can be done in several queries if needed, and I can also compute anything in a programming language (Java). The one important thing is that it must be done faster than O(n) and consume O(x) memory, and indefinite looping (retrying) is also undesirable.
Probability of selection should be equal for all rows (except the one which is excluded, of course)
Example:
| id | some data |
|————|———————————|
| 1 | … |
| 2 | … |
| 3 | … |
| 4 | … |
| 5 | … |
The algorithm is run with arguments (x = 3, exclude_id = 4), so:
it should select 3 different random rows from rows with id in 1,2,3,5.
(end of example)
I tried the following approach:
get row count (= n);
get the position of the excluded row by something like select count(*) where id < excluded_id, assuming id is monotonically increasing;
pick the x positions from 0..n-1, obeying all the rules, using some "clever" algorithm; this is roughly O(x), in other words fast enough;
fetch these x rows one by one using a LIMIT index, 1 SQL clause (see the sketch after this list).
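Here is a minimal sketch of steps 2 and 4 in MySQL flavour (the table name rows_table is illustrative):

-- step 2: zero-based position of the excluded row, assuming id increases monotonically
SELECT COUNT(*) FROM rows_table WHERE id < 4;    -- exclude_id = 4

-- step 4: fetch the row at one chosen position (here 2); repeat once per chosen position
SELECT * FROM rows_table ORDER BY id LIMIT 2, 1;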
However, it turned out that it's possible for rows to change positions (I'm not sure why), so the auto-generated ids are not monotonically increasing. And in this case the second step (get the position of the excluded row) produces wrong result and the algorithm fails to do its job correctly.
How can this be improved?
Also, if this is vastly easier with a different SQL-like database system, it would be interesting, too. (the DB is on a server and I can install any software there as long as it's compatible with Ubuntu 14.04 LTS)
(I'm sorry for a bit of confusion.) The algorithm I used is actually correct if the id is monotonically increasing. I just forgot that the id was not itself auto-generated: it was taken from another table where it's auto-generated, and it was possible to add these rows in a different order.
So I added another id for this table, which is auto-generated, and used it for row selection, and now it works as it should.
Can data in Hive be transposed? As in, the rows become columns and the columns become rows? If there is no function straight up, is there a way to do it in a couple of steps?
I have a table like this:
| ID | Names | Proc1 | Proc2 | Proc3 |
| 1 | A1 | x | b | f |
| 2 | B1 | y | c | g |
| 3 | C1 | z | d | h |
| 4 | D1 | a | e | i |
I want it to be like this:
| A1 | B1 | C1 | D1 |
| x | y | z | a |
| b | c | d | e |
| f | g | h | i |
I have been looking up other related questions and they all mention using lateral views and explode, but is there a way to selectively choose columns for lateral(ly) view(ing) and explod(ing)?
Also, what might be the rough process to achieve what I would like to do? Please help me out. Thanks!
Edit: I have been reading this link: https://cwiki.apache.org/Hive/languagemanual-lateralview.html and it shows me half of what I want to achieve. The first example in the link is basically what I'd like, except that I don't want the rows to repeat and I want them as column names. Any ideas on how to get the data into a form such that doing an explode results in my desired output? Or the other way around: explode first, leading to another step that then produces my desired output table. Thanks again!
I don't know of a way to do this out of the box in Hive, sorry. You get close with explode etc., but I don't think it can get the job done.
Overall, conceptually, I think it's hard to do a transpose without knowing what the columns of the destination table are going to be in advance. This is true in particular for Hive, because the metadata about how many columns there are, their types, their names, etc. lives in a database - the metastore. And it's true in general, because not knowing the columns beforehand would require some sort of in-memory holding of data (ok, sure, with spills), and users may need to be careful about not overflowing the memory and such (just like dynamic partitioning in Hive).
In any case, long story short, if you know the columns of the destination table beforehand, life is good. There isn't a set command in hive per se, to the best of my knowledge, but you could use a bunch of if clauses and case statements (ugly I know, but that's how I have done the same in the past) in the select clause to transpose the data. Something along the lines of SQL - How to transpose?
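For instance, here is a minimal sketch of that case-statement approach for the table in the question, assuming the Names values (A1..D1) are known in advance (depending on your Hive version you may need to wrap the unions in a subquery):

-- one output row per proc column; each CASE picks the value for one destination column
SELECT MAX(CASE WHEN Names = 'A1' THEN Proc1 END) AS A1,
       MAX(CASE WHEN Names = 'B1' THEN Proc1 END) AS B1,
       MAX(CASE WHEN Names = 'C1' THEN Proc1 END) AS C1,
       MAX(CASE WHEN Names = 'D1' THEN Proc1 END) AS D1
FROM input_table
UNION ALL
SELECT MAX(CASE WHEN Names = 'A1' THEN Proc2 END),
       MAX(CASE WHEN Names = 'B1' THEN Proc2 END),
       MAX(CASE WHEN Names = 'C1' THEN Proc2 END),
       MAX(CASE WHEN Names = 'D1' THEN Proc2 END)
FROM input_table
UNION ALL
SELECT MAX(CASE WHEN Names = 'A1' THEN Proc3 END),
       MAX(CASE WHEN Names = 'B1' THEN Proc3 END),
       MAX(CASE WHEN Names = 'C1' THEN Proc3 END),
       MAX(CASE WHEN Names = 'D1' THEN Proc3 END)
FROM input_table;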
Do let me know how it goes!
As Mark pointed out, there's no easy way to do this in Hive since PIVOT isn't present in Hive, and you may also encounter issues when trying to use the case/when 'trick' since you have multiple values (proc1, proc2, proc3).
For testing purposes, you may try a different approach:
select v, o1, o2, o3 from (
    select k,
           v,
           LEAD(v, 3) OVER () as o1,
           LEAD(v, 6) OVER () as o2,
           LEAD(v, 9) OVER () as o3
    from (select transform(name, proc1, proc2, proc3) using 'python strm.py' AS (k, v)
          from input_table) q1
) q2 where k = 'A1';
where strm.py:
import sys

# emit each proc column of an input row as its own (name, value) line
for line in sys.stdin:
    line = line.strip()
    name, proc1, proc2, proc3 = line.split('\t')
    print '%s\t%s' % (name, proc1)
    print '%s\t%s' % (name, proc2)
    print '%s\t%s' % (name, proc3)
The trick here is to use a python script in the map phase which emits each column of a row as a distinct row. Then every third row (since we have 3 proc columns) forms a resulting row, which we get by peeking forward (lead).
However, while this query does the job, it has the drawback that as the input grows, peeking ahead to every 3rd element may lead to a performance hit. Anyway, you may evaluate it for testing purposes.
I have a large database I use for plotting and data examination. For simplicity, say it looks something like this:
| id | day | obs |
+----------+-----------+-----------+
| 1 | 500 | 4.5 |
| 2 | 500 | 4.4 |
| 3 | 500 | 4.7 |
| 4 | 500 | 4.8 |
| 5 | 600 | 5.1 |
| 6 | 600 | 5.2 |
...
This could be stock market data, where we have many points per day that are measured.
What I want to do is look at much longer trends, where the multiple points per day are unnecessarily resolved, and clog my plotting application. (I want to look at 30000 days, each has about 100 observations).
Is there a way to do something like SELECT ... LIMIT 1 PER "day"?
I suppose I could perform a few SELECT DISTINCT queries to find the correct IDs, but I'd rather do something simple if it's built in.
It doesn't matter if it's the first, last, or an average value per day. Just a single value. I just prefer whatever is fastest.
Also, I'd like to do this for Postgres, MySQL, and SQLite. My application is built to use all three, and I frequently switch between them.
Thanks!
Background: This is for a Ruby on Rails plotting application, so a trick with ActiveRecord will work too. https://github.com/ZachDischner/Rails-Plotter
You need to tag your question with the brand of RDBMS you're using. Rails developers frequently use MySQL, but the answer to your question depends on this.
For all brands except for MySQL, the correct and standard solution is to use windowing functions:
SELECT t.*
FROM (
    SELECT s.*, ROW_NUMBER() OVER (PARTITION BY day ORDER BY id) AS rn
    FROM stockmarketdata s
) AS t
WHERE t.rn = 1;
For MySQL, which doesn't support windowing functions yet, you can simulate them in a kind of clumsy way with session variables:
SELECT * FROM (SELECT @day := 0, @r := 0) AS _init,
(
    SELECT IF(day = @day, @r := @r + 1, @r := 1) AS RN,
           @day := day AS d,
           s.*
    FROM stockmarketdata s
    ORDER BY day, id
) AS t
WHERE t.RN = 1;
You left a lot of room for options with your statement:
It doesn't matter if it's the first, last, or an average value per day. Just a single value. I just prefer whatever is fastest.
So I'm going to leave the id out of it and first propose the average of obs for each group as the simplest and probably most practical option, though running stat functions may not be as fast as a limit:
MyModel.group(:day).average(:obs)
If you wanted the minimum:
MyModel.group(:day).minimum(:obs)
If you wanted the maximum:
MyModel.group(:day).maximum(:obs)
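For reference, a grouped calculation such as MyModel.group(:day).average(:obs) issues SQL roughly like this (the my_models table name is just the Rails naming convention assumed for this example):

SELECT AVG(obs) AS average_obs, day
FROM my_models
GROUP BY day;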
(Note: The following 2 examples are less efficient than just entering the SQL, but might be more portable.)
But you might want all three:
ActiveRecord::Base.connection.execute(MyModel.select('MIN(obs), AVG(obs), MAX(obs)').group(:day).to_sql).to_a
Or just the data without hashes:
ActiveRecord::Base.connection.exec_query(MyModel.select('MIN(obs), AVG(obs), MAX(obs)').group(:day).to_sql)
If you want median, see this question which is more DB specific, and there are other related posts about it if you search.
And for more, some DB's like postgres have variance(...), stddev(...), etc. built-in.
Finally, check out the query section in the Rails guide and ARel for more info on constructing queries. You can do a limit in an ActiveRecord relation via first or limit, for example, and in ARel, take lets you do a limit. Subqueries are possible too, as shown in answers to this question, and so is group by, etc. If you are sharing this project with others, try to limit the amount of non-portable SQL you use, unless you plan on adding support for other databases on your own and maintaining it.
Let's say I had a table full of records that I wanted to pull random records from. However, I want certain rows in that table to appear more often than others (and which ones vary by user). What's the best way to go about this, using SQL?
The only way I can think of is to create a temporary table, fill it with the rows I want to be more common, and then pad it with other randomly selected rows from the table. Is there a better way?
One way I can think of is to create another column in the table which is a rolling sum of your weights, then pull your records by generating a random number between 0 and the total of all your weights, and pulling the row with the lowest rolling sum value greater than the random number.
For example, if you had four rows with the following weights:
+---+--------+------------+
|row| weight | rollingsum |
+---+--------+------------+
| a | 3 | 3 |
| b | 3 | 6 |
| c | 4 | 10 |
| d | 1 | 11 |
+---+--------+------------+
Then, choose a random integer n with 0 <= n < 11, and return row a if 0 <= n < 3, b if 3 <= n < 6, and so on.
Here are some links on generating rolling sums:
http://dev.mysql.com/tech-resources/articles/rolling_sums_in_mysql.html
http://dev.mysql.com/tech-resources/articles/rolling_sums_in_mysql_followup.html
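And here is a minimal sketch of the lookup step, assuming a weighted_rows table shaped like the example above (with a row_id column standing in for "row") and an application-generated draw:

-- 5 stands in for the random draw n, uniform in [0, total_weight);
-- 5 falls in b's range (3 <= n < 6)
SELECT row_id
FROM weighted_rows
WHERE rollingsum > 5
ORDER BY rollingsum
LIMIT 1;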
I don't know that it can be done very easily with SQL alone. With T-SQL or similar, you could write a loop to duplicate rows, or you can use SQL to generate the instructions for the row duplication instead.
I don't know your probability model, but you could use an approach like this to achieve the latter. Given these table definitions:
RowSource
---------
RowID
UserRowProbability
------------------
UserId
RowId
FrequencyMultiplier
You could write a query like this (SQL Server specific):
SELECT TOP 100 rs.RowId, urp.FrequencyMultiplier
FROM RowSource rs
LEFT JOIN UserRowProbability urp ON rs.RowId = urp.RowId
ORDER BY ISNULL(urp.FrequencyMultiplier, 1) DESC, NEWID()
This would take care of selecting a random set of rows as well as how many should be repeated. Then, in your application logic, you could do the row duplication and shuffle the results.
Start with 3 tables: users, data, and user-data. user-data contains which rows should be preferred for each user.
Then create one view based on the data rows that are preferred by the user.
Create a second view that has the non-preferred data.
Create a third view which is a union of the first 2. The union should select more rows from the preferred data.
Then finally select random rows from the third view.
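A rough sketch of that idea (the view/table names, the 3x weighting, and the MySQL-style RAND() are all illustrative):

-- repeating the preferred rows in the union makes them 3x as likely to be drawn
CREATE VIEW weighted_data AS
    SELECT * FROM preferred_data
    UNION ALL
    SELECT * FROM preferred_data
    UNION ALL
    SELECT * FROM preferred_data
    UNION ALL
    SELECT * FROM other_data;

-- then sample from the combined view
SELECT * FROM weighted_data ORDER BY RAND() LIMIT 10;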