SQL Select - transform rows into columns

Quite basic, but I am stuck at the moment.
On an Informix database (no pivot option), I am searching for a dynamic way to transform the following table using SQL:
book      | info  | value
-----------------------------
Moby Dick | price | high
Moby Dick | stock | few
Hamlet    | price | low
Hamlet    | stock | many
Faust     | price | medium
Faust     | stock | normal
Resulting table:
book      | price  | stock
-----------------------------
Moby Dick | high   | few
Hamlet    | low    | many
Faust     | medium | normal
Thanks for your help!

You can aggregate based on a CASE expression, grouped by book. Try something like this:
SELECT book,
       MAX(CASE WHEN info = 'price' THEN value END) AS price,
       MAX(CASE WHEN info = 'stock' THEN value END) AS stock
FROM table1
GROUP BY book

Related

SQL - specific requirement to compare tables

I'm trying to merge 2 queries into 1 (cuts the number of daily queries in half): I have 2 tables, and I want to run a query against one table, then the same query against the other table, which has the same list, just fewer entries.
Basically it's a list of (let's call it, for obfuscation) people and hobbies. One table is ALL people & hobbies; the other, shorter list is people & hobbies that I've met. Everything in table 2 would be found in table 1. Table 1 includes entries (people I have yet to meet) not found in table 2.
The tables are synced up from elsewhere. What I'm looking to do is print a list of ALL people in the first column, then print the hobby ONLY for people that are on both lists. That way I can see the lists merged and track the rate at which the gap between the two lists is closing. I have tried a number of SQL combinations, but they either filter out the first table and match only items that are in both (i.e. just giving me table 2) or simply add table 2 to table 1.
Example of what I'm trying to do below:
+---------+----------+--+----------+---------+--+---------+----------+
| table1  |          |  | table2   |         |  | query   |          |
+---------+----------+--+----------+---------+--+---------+----------+
| name    | hobby    |  | activity | person  |  | name    | hobby    |
| bob     | fishing  |  | fishing  | bob     |  | bob     | fishing  |
| bill    | vidgames |  | hiking   | sarah   |  | bill    |          |
| sarah   | hiking   |  | planking | sabrina |  | sarah   | hiking   |
| mike    | cooking  |  |          |         |  | mike    |          |
| sabrina | planking |  |          |         |  | sabrina | planking |
+---------+----------+--+----------+---------+--+---------+----------+
Normally I'd just take the few days to learn SQL a bit better however I'm stretched pretty thin at work as it is!
I should mention that table 2 is flipped and the headings are all unique (I don't think this matters)!
I think you just want a left join:
select t1.name, t2.activity as hobby
from table1 t1 left join
     table2 t2
     on t1.name = t2.person;

Calculate Equation From Separate Tables Data

I'm working on my senior High School Project and am reaching out to the community for help! (As my teacher doesn't know the answer to my question).
I have a simple "Products" table as shown below:
I also have an "Orders" table, shown below:
Is there a way I can create a field in the "Orders" table named "Total Cost", and make it automatically calculate the total cost of all the products selected?
Firstly, I would advise against storing calculated values, and would also strongly advise against using calculated fields in tables. In general, calculations should be performed by queries.
I would also strongly advise against the use of multivalued fields, as your images appear to show.
In general, when following the rules of database normalisation, most sales databases are structured in a very similar manner, containing the following main tables (amongst others):
Products (aka Stock Items)
Customers
Order Header
Order Line (aka Order Detail)
A good example for you to learn from would be the classic Northwind sample database provided free of charge as a template for MS Access.
With the above structure, observe that each table serves a purpose with each record storing information pertaining to a single entity (whether it be a single product, single customer, single order, or single order line).
For example, you might have something like:
Products
Primary Key: Prd_ID
+--------+-----------+-----------+
| Prd_ID | Prd_Desc  | Prd_Price |
+--------+-----------+-----------+
| 1      | Americano | $8.00     |
| 2      | Mocha     | $6.00     |
| 3      | Latte     | $5.00     |
+--------+-----------+-----------+
Customers
Primary Key: Cus_ID
+--------+--------------+
| Cus_ID | Cus_Name     |
+--------+--------------+
| 1      | Joe Bloggs   |
| 2      | Robert Smith |
| 3      | Lee Mac      |
+--------+--------------+
Order Header
Primary Key: Ord_ID
Foreign Keys: Ord_Cust
+--------+----------+------------+
| Ord_ID | Ord_Cust | Ord_Date   |
+--------+----------+------------+
| 1      | 1        | 2020-02-16 |
| 2      | 1        | 2020-01-15 |
| 3      | 2        | 2020-02-15 |
+--------+----------+------------+
Order Line
Primary Key: Orl_Order + Orl_Line
Foreign Keys: Orl_Order, Orl_Prod
+-----------+----------+----------+---------+
| Orl_Order | Orl_Line | Orl_Prod | Orl_Qty |
+-----------+----------+----------+---------+
| 1         | 1        | 1        | 2       |
| 1         | 2        | 3        | 1       |
| 2         | 1        | 2        | 1       |
| 3         | 1        | 1        | 4       |
| 3         | 2        | 3        | 2       |
+-----------+----------+----------+---------+
You might also opt to store the product description & price on the order line records, so that these are retained at the point of sale, as the information in the Products table is likely to change over time.
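A total such as the "Total Cost" from the question would then be calculated on demand by a query over this structure rather than stored. A minimal sketch, assuming the example tables and columns above (Access-style bracketed names because of the spaces, and the join parentheses Access expects):
SELECT oh.Ord_ID, SUM(ol.Orl_Qty * p.Prd_Price) AS Total_Cost
FROM ([Order Header] AS oh
      INNER JOIN [Order Line] AS ol ON ol.Orl_Order = oh.Ord_ID)
      INNER JOIN Products AS p ON p.Prd_ID = ol.Orl_Prod
GROUP BY oh.Ord_ID;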

Query M:N contains

I am trying to filter a set of tables that includes an M:N junction table in Android Room (SQLite).
An image can have many subjects. I'd like to allow filtering by a subject, so that I get a row with complete image information (including all subjects). So if an image had (National Park, Yosemite) filtering for either would result in one row with both keywords. Unless I messed something up, a typical join will result in multiple rows such that matching Yosemite would get the right image, but you'd be lacking National Park. I came up with this:
SELECT *,
       (SELECT GROUP_CONCAT(name)
        FROM meta_subject_junction
        JOIN subject ON subject.id = meta_subject_junction.subjectId
        WHERE meta_subject_junction.metaId = meta.id) AS keywords,
       (SELECT documentUri
        FROM image_parent
        WHERE meta.parentId = image_parent.id) AS parentUri
FROM meta
Now this gets me the complete rows, but I think at this point I'd need to:
WHERE keywords LIKE '%Yosemite%'
and I think the LIKE is less than ideal, not to mention an imprecise match. Is there a better way to accomplish this? Thanks, this is bending my novice SQL brain.
Further details
meta
+----+----------+--+
| id | name     |  |
+----+----------+--+
| 1  | yosemite |  |
| 2  | bryce    |  |
| 3  | flowers  |  |
+----+----------+--+
subject
+----+---------------+--+
| id | name          |  |
+----+---------------+--+
| 1  | National Park |  |
| 2  | Yosemite      |  |
| 3  | Tulip         |  |
+----+---------------+--+
junction
+--------+-----------+
| metaId | subjectId |
+--------+-----------+
| 1      | 1         |
| 1      | 2         |
| 2      | 1         |
| 3      | 3         |
+--------+-----------+
Although I may have done something wrong, as far as I can tell Android Room doesn't like:
+----+-----------+---------------+
| id | name      | subject       |
+----+-----------+---------------+
| 1  | yosemite  | National Park |
| 1  | yosemite  | Yosemite      |
+----+-----------+---------------+
so I'm trying to reduce the rows:
+----+-----------+-------------------------+
| id | name      | subject                 |
+----+-----------+-------------------------+
| 1  | yosemite  | National Park, Yosemite |
+----+-----------+-------------------------+
which the above query does. However, I also want to filter by a subject, so that a National Park filter will yield:
+----+-----------+-------------------------+
| id | name      | subject                 |
+----+-----------+-------------------------+
| 1  | yosemite  | National Park, Yosemite |
| 2  | bryce     | National Park           |
+----+-----------+-------------------------+
I'd like something more precise and efficient than LIKE against the already-concatenated subject string. Most of my attempts end up with no results in Room (multi-row) or reduce the subject to only the filter keyword.
Update
Here's a test I've been using to compare the actual SQL results from a query to what Android Room ends up with:
http://sqlfiddle.com/#!7/0ac11/10/0
That join query is interpreted as four objects in Android Room, so I'm trying to reduce the rows, but retain the full subject results while filtering for any image containing the subject keyword.
If you want multiple keywords, then WHERE, GROUP BY and HAVING can be used:
select image_id
from image_subject
where subject_id in ('a', 'b', 'c') -- whatever
group by image_id
having count(distinct subject_id) = 3; -- same count as in `where`
This gets the result I need, though I'd love to hear a better option if this is particularly inefficient.
SELECT meta.*,
       (SELECT GROUP_CONCAT(name)
        FROM junction
        JOIN subject ON subject.id = junction.subjectId
        WHERE junction.metaId = meta.id) AS keywords,
       junction.subjectId
FROM meta
LEFT JOIN junction ON junction.metaId = meta.id
WHERE junction.subjectId IN (1, 2)
GROUP BY meta.id
+----+----------+------------------------+-----------+
| id | name     | keywords               | subjectId |
+----+----------+------------------------+-----------+
| 1  | yosemite | National Park,Yosemite | 2         |
| 2  | bryce    | National Park          | 1         |
+----+----------+------------------------+-----------+
http://sqlfiddle.com/#!7/86a76/13
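If the filter should require an image to match all of the listed subjects rather than any of them, the GROUP BY / HAVING idea from the first answer can be applied directly to the junction table; a rough sketch, assuming subjects 1 and 2 must both be present:
SELECT metaId
FROM junction
WHERE subjectId IN (1, 2)
GROUP BY metaId
HAVING COUNT(DISTINCT subjectId) = 2;
The resulting metaId set can then be used to restrict the GROUP_CONCAT query above.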

How should I create an SQL table with stock information so that I can add new stocks and new fields easily?

I want to create an SQL table where I can have any number of stocks (e.g. MSFT, GOOG, IBM) and any number of fields (e.g. Full Name, Sector, Country). But I want the flexibility to add new stocks and new fields as I go along. Say I want to add a new stock like AAPL, or I want a new boolean field for whether they pay dividends or not. I don't expect to store dynamic fields like CurrentStockPrice, but the information will have to change periodically (for instance, when a company changes its dividend policy). How do I design the table so that I don't have to change its structure?
I had one idea where I could have a new table for each stock, and a master table that has all the stocks, and a pointer to each individual stock's table. That way, I can freely add new stocks, and new fields easily. But I'm not very familiar with SQL, and would like an expert opinion on how it should be implemented.
The simple answer is that your requirements are not a good fit for SQL. The most important concern is not how to store the data, but how you will retrieve it - what kind of query will you need to run?
EAV allows you to store data whose schema you don't know in advance - but it has lots of drawbacks when querying. Even moderately complex queries (find all stocks where the dividend was paid between 1 and 12 Jan, in the tech sector, whose CEO is female) run into a lot of complexity.
Creating a new table for each type of record very quickly gets crazy too - imagine the query above if you have to search dozens or hundreds of type-specific tables.
The relational model works best when you know the schema of the information in advance.
If you don't know the schema, consider using a NoSQL solution, or use SQL Server's support for XML or JSON. Store the fixed data in rows & columns, and the variable data in XML or JSON. Performance for searching is pretty good, and it's much less convoluted as a solution.
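For example, a rough sketch of the JSON route in SQL Server 2016+; the table and column names here are made up for illustration, not taken from the question:
CREATE TABLE Stocks (
    StockId INT IDENTITY PRIMARY KEY,
    Symbol  VARCHAR(10)   NOT NULL,
    Name    VARCHAR(100)  NOT NULL,
    Extra   NVARCHAR(MAX) NULL   -- variable attributes stored as JSON
);

INSERT INTO Stocks (Symbol, Name, Extra)
VALUES ('MSFT', 'Microsoft', N'{"sector":"Tech","paysDividend":true}');

-- Filter on a JSON attribute without changing the table structure:
SELECT Symbol, Name
FROM Stocks
WHERE JSON_VALUE(Extra, '$.paysDividend') = 'true';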
Just to expand on my comment: the question itself begs for a couple of common schema anti-patterns. Some hybrid of EAV may actually be a good fit if you are willing to give up some flexibility and simplicity in your SQL and you aren't looking for fast queries.
EAV
EAV, or Entity-Attribute-Value, is a design where, in your case, you would have a master table of stocks with some common attributes, or maybe even ticker info with a datetime. Something like:
+---------+--------+--------------+
| stockid | symbol | name         |
+---------+--------+--------------+
| 1       | goog   | Google       |
| 2       | msft   | Microsoft    |
| 3       | gpro   | GoPro        |
| 4       | xom    | Exxon Mobile |
+---------+--------+--------------+
And a second table (the EAV table) to store ever-changing attributes:
+---------+-----------+------------+
| stockid | attribute | value      |
+---------+-----------+------------+
| 1       | country   | us         |
| 1       | favorite  | TRUE       |
| 1       | startyear | 2004       |
| 3       | favorite  |            |
| 3       | bobspick  | TRUE       |
| 4       | country   | us         |
| 3       | country   | us         |
| 2       | startyear | 1986       |
| 2       | employees | 18000      |
| 3       | marketcap | 1850000000 |
+---------+-----------+------------+
And perhaps a third table to store that minute-by-minute ticker info:
+---------+----------------+--------+
| stockid | datetime       | value  |
+---------+----------------+--------+
| 1       | 9/21/2016 8:15 | 771.41 |
| 1       | 9/21/2016 8:14 | 771.39 |
| 1       | 9/21/2016 8:12 | 771.37 |
| 1       | 9/21/2016 8:10 | 771.35 |
| 1       | 9/21/2016 8:08 | 771.33 |
| 1       | 9/21/2016 8:06 | 771.31 |
| 1       | 9/21/2016 8:04 | 771.29 |
| 2       | 9/21/2016 8:15 | 56.81  |
| 2       | 9/21/2016 8:14 | 56.82  |
| 2       | 9/21/2016 8:12 | 56.83  |
| 2       | 9/21/2016 8:10 | 56.84  |
+---------+----------------+--------+
Generally this is considered not great design since stitching data back together in a format like:
+-------------+-----------+---------+-----------+----------+--------------+
| stocksymbol | stockname | country | startyear | bobspick | currentvalue |
+-------------+-----------+---------+-----------+----------+--------------+
causes you to write a query that is not fun to look at:
SELECT
    stocks.symbol AS stocksymbol,
    stocks.name,
    country.value,
    bobspick.value,
    startyear.value,
    stockvalue.stockvalue
FROM stocks
LEFT OUTER JOIN (SELECT stockid, value FROM fieldsTable WHERE attribute = 'country') AS country
    ON stocks.stockid = country.stockid
LEFT OUTER JOIN (SELECT stockid, value FROM fieldsTable WHERE attribute = 'bobspick') AS bobspick
    ON stocks.stockid = bobspick.stockid
LEFT OUTER JOIN (SELECT stockid, value FROM fieldsTable WHERE attribute = 'startyear') AS startyear
    ON stocks.stockid = startyear.stockid
LEFT OUTER JOIN (SELECT MAX(value) AS stockvalue, stockid FROM ticketTable GROUP BY stockid) AS stockvalue
    ON stocks.stockid = stockvalue.stockid
WHERE symbol IN ('goog', 'msft')
You can see that every "field" in the EAV table gets its own subquery, which means we read that table from storage three times. We gain the flexibility on the front end over the database design, but we lose flexibility when querying.
Imagine a more traditional schema:
+---------+--------+--------------+---------+----------+----------+-----------+------------+-----------+
| stockid | symbol | name         | country | bobspick | favorite | startyear | marketcap  | employees |
+---------+--------+--------------+---------+----------+----------+-----------+------------+-----------+
| 1       | goog   | Google       | us      |          | TRUE     | 2004      |            |           |
| 2       | msft   | Microsoft    |         |          |          | 1986      |            | 18000     |
| 3       | gpro   | GoPro        | us      | TRUE     |          |           | 1850000000 |           |
| 4       | xom    | Exxon Mobile | us      |          |          |           |            |           |
|         |        |              |         |          |          |           |            |           |
+---------+--------+--------------+---------+----------+----------+-----------+------------+-----------+
and
+---------+----------------+--------+
| stockid | datetime       | value  |
+---------+----------------+--------+
| 1       | 9/21/2016 8:15 | 771.41 |
| 1       | 9/21/2016 8:14 | 771.39 |
| 1       | 9/21/2016 8:12 | 771.37 |
| 1       | 9/21/2016 8:10 | 771.35 |
| 1       | 9/21/2016 8:08 | 771.33 |
| 1       | 9/21/2016 8:06 | 771.31 |
| 1       | 9/21/2016 8:04 | 771.29 |
| 2       | 9/21/2016 8:15 | 56.81  |
| 2       | 9/21/2016 8:14 | 56.82  |
| 2       | 9/21/2016 8:12 | 56.83  |
| 2       | 9/21/2016 8:10 | 56.84  |
+---------+----------------+--------+
To get the same results:
SELECT
    stocks.symbol AS stocksymbol,
    stocks.name,
    stocks.country,
    stocks.bobspick,
    stocks.startyear,
    stockvalue.stockvalue
FROM stocks
LEFT OUTER JOIN (SELECT MAX(value) AS stockvalue, stockid FROM ticketTable GROUP BY stockid) AS stockvalue
    ON stocks.stockid = stockvalue.stockid
WHERE symbol IN ('goog', 'msft')
Now we have the flexibility in the query where we can quickly change out fields without monkeying around in subqueries, but we have to hassle our DBA every time we want to add a field.
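For example, adding one more attribute to the traditional wide table is a schema change rather than a plain insert; a trivial sketch with an assumed column name:
ALTER TABLE stocks ADD paysdividend BIT NULL;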
There is a further abstraction from EAV that is definitely something to avoid. I don't know if it has a name, but I call it "Database in a database". Here you have a table of tables, a table of fields, and a table of values. The entire schema is kept as records, as are the values that would be stored in that schema. Ultimate flexibility is gained, but the SQL you will write to get at your data will be nightmarish, and your query speeds will degrade at a fast rate as you add to your data/schema/data/schema mess.
As for your last idea of adding a new table for each stock: if the fields you are going to track for each stock are different (startyear, employees, and market cap for one stock and marketmax, country, address, yearsinbusiness in another) and you aren't planning on adding new stocks often, then it may be a good fit. I'm betting, though, that the attributes/fields you track on stock1 are also going to be tracked on stock2, and therefore suggest that you should have a single stock table with all those common attributes and maybe an EAV to track attributes that are particular to each stock, so you can have the flexibility you need.
In each of these schemas I would also suggest that you put your ticker data in its own table. Whether you are capturing ticker data by the minute, hour, day, week, or month, because it's datetime-level data, it deserves its own table. (Unless you are only going to track the most current value, then it becomes a field).
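If the ticker history does live in its own table, the current value per stock can be pulled by joining on the latest datetime; a sketch reusing the table and column names from the examples above:
SELECT t.stockid, t.value AS currentvalue
FROM ticketTable t
JOIN (SELECT stockid, MAX(datetime) AS lastdt
      FROM ticketTable
      GROUP BY stockid) latest
  ON latest.stockid = t.stockid
 AND latest.lastdt = t.datetime;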
If you want to add fields dynamically, but without actually altering the schema of the table, then you should use a vertical schema for the table and retrieve the data via a PIVOT statement.
In this manner you can add as many Field/Value pairs as you wish for each stock/customer pairing.
The basic table would have 5 columns perhaps:
ID (Identity); StockName; AttributeName; Value; Timestamp;
If you take a look at how SQL organizes its table schema in INFORMATION_SCHEMA.COLUMNS, it provides this very same vertical schema layout for you.
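A rough sketch of what the PIVOT retrieval over that vertical table could look like (SQL Server syntax; the StockAttributes table name and the attribute names in the IN list are placeholders):
SELECT StockName, [Sector], [Country], [PaysDividend]
FROM (SELECT StockName, AttributeName, Value
      FROM StockAttributes) AS src
PIVOT (MAX(Value) FOR AttributeName IN ([Sector], [Country], [PaysDividend])) AS pvt;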

SQL - Sort by values in a cartesian product

Let's say I have the following data:
Product
IdPk  | Name
-------------
guid1 | Printer
guid2 | Oil
guid3 | Etc.
guid4 | Etc..
ProductPart
PartIdPk | ItemId | PartName
----------------------------
guid100  | guid1  | Ink
guid101  | guid1  | Paper
guid102  | guid2  | Automobile Fuel
guid103  | guid2  | Cooking
ProductPartType
TypeIdPk | ItemId | PartId  | TypeName
---------------------------------------
guid200  | guid1  | guid100 | Cyan          < Types of ink
guid201  | guid1  | guid100 | Magenta
guid202  | guid1  | guid100 | Black
guid203  | guid1  | guid100 | Yellow
guid204  | guid1  | guid101 | Photocopier   < Types of paper
guid205  | guid1  | guid101 | Envelope
guid206  | guid1  | guid101 | Card
guid207  | guid2  | guid102 | Petrol        < Types of automobile fuel
guid208  | guid2  | guid102 | Diesel
guid209  | guid2  | guid103 | Olive         < Types of cooking oil
guid210  | guid2  | guid103 | Sunflower
So, each product has one or more parts, and each part has one or more types.
I want to select a Product, its Parts and its Types. Assume that I could have many thousands of entries, so typically I'd like to filter at the same time as selecting. These three tables will often lead to a cartesian product query, and given that scenario I need to run a query which is equivalent to "give me the top 2 products ordered by (type name where the part is Ink) then (type name where the part is Cooking)".
Does anyone have any ideas? Many thanks in advance
What you are looking for is just a basic join query on the tables:
select ppt.*, pp.PartName, p.Name
from ProductPartType ppt
join ProductPart pp on ppt.PartId = pp.PartIdPk
join Product p on pp.ItemId = p.IdPk
where <whatever>
Then you can do whatever queries you want. The "top" query is unclear, because there are no natural things to order on.
I do have a quibble with your data structure, because you have the product at both the product part and product part type levels. For consistency purposes, it is better to have this at only one level, probably product part.
Usually, a "cartesian product" join refers to a cross join, not an equi-join. For performance, you can add indexes to the tables. However, you probably won't need them.
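If indexes do turn out to be needed, something along these lines would support the joins above (the index names are just placeholders):
CREATE INDEX ix_productpart_item ON ProductPart (ItemId);
CREATE INDEX ix_ppt_part ON ProductPartType (PartId);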