Filtering Columns in PL/SQL
I have a table with tons and tons of columns and I'm trying to select only certain columns based on the data the columns contain. The table is part of an application I'm building in Oracle APEX and looks something like this:
|Row Header|Criteria 1|Criteria 2|Criteria 3|Criteria 4|Criteria 5|
|Category  |Type A    |Type B    |Type B    |Type A    |Type A    |
|ID        |2.3       |2.4       |2.5       |3.1       |3.2       |
|Part A    |Yes       |Yes       |Yes       |No        |Yes       |
|Part B    |Yes       |No        |Yes       |Yes       |Yes       |
|Part C    |No        |Yes       |Yes       |Yes       |No        |
It goes on like this for around 1000-ish criteria and 100-ish parts. I need to find a way to select all the columns that are of a specific type into their own table using SQL.
I'd like the result to look like this:
|Row Header|Criteria 4|Criteria 5|
|Category  |Type A    |Type A    |
|ID        |3.1       |3.2       |
|Part A    |No        |Yes       |
|Part B    |Yes       |Yes       |
|Part C    |Yes       |No        |
This way, only the columns that are in the "Type A" category and have an ID greater than 3 are showing.
I've looked into the GROUP BY and FILTER functionality that SQL has to offer, as well as PIVOT, and I don't believe these will help me, but I'd be happy to be proven wrong.
In a relational database, columns are meant to be discrete, non-repeating attributes of a thing; rows are meant to be multiple instances of that thing. Your table is reversed, using columns for what should be rows, and rows for what should be columns. Another factor is that Oracle limits you to 1000 columns, and you start suffering severe performance degradation once you exceed 255 columns. Tables simply weren't meant to have hundreds, let alone thousands, of columns. So the first step is to pivot your table like this:
Criteria_No, Cat, ID, PtA, PtB, PtC
---------------------------------------------
Row 1: Criteria 1, Type A, 2.3, Yes, Yes, No
Row 2: Criteria 2, Type B, 2.4, Yes, No, Yes
Row 3: Criteria 3, Type B, 2.5, Yes, Yes, Yes
. . . thousands more
But even then, you mentioned that you have 100s of "parts", so Parts A, B, C aren't the only three - the series continues. If so, it would be a violation of normal form to have such a repeating list in a single row. So you have one more step to fix your design: Break this into three tables.
CRITERIA
Criteria_No, Cat, ID
---------------------------------------------
Row 1: Criteria 1, Type A, 2.3
Row 2: Criteria 2, Type B, 2.4
Row 3: Criteria 3, Type B, 2.5
PARTS
Part, anything-else-about-part
-----------------
Part A, blah
Part B, blah
Part C, blah
. . .
And now the bridge table between them:
CRITERIA_PARTS
Criteria_No, Part
-----------------
1, Part A
1, Part B
1, Part C
2, Part A
2, Part B
. . . and so on
You should also place a foreign key on each of the bridge table columns to point to their respective parent tables to ensure data integrity.
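Spelled out as DDL, the normalized design might look like the following; a minimal sketch with assumed names and datatypes (the VARCHAR2 lengths are guesses), including the foreign keys on the bridge table:

```sql
-- Hypothetical DDL for the three-table design; adjust names,
-- datatypes, and lengths to your actual data.
CREATE TABLE criteria (
  criteria_no  VARCHAR2(30) PRIMARY KEY,
  cat          VARCHAR2(30),
  id           NUMBER
);

CREATE TABLE parts (
  part         VARCHAR2(30) PRIMARY KEY
);

CREATE TABLE criteria_parts (
  criteria_no  VARCHAR2(30) NOT NULL REFERENCES criteria (criteria_no),
  part         VARCHAR2(30) NOT NULL REFERENCES parts (part),
  CONSTRAINT criteria_parts_pk PRIMARY KEY (criteria_no, part)
);
```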
Now you query by joining the tables together in your SQL.
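For example, the original request (Type A criteria with an ID greater than 3, plus their parts) becomes an ordinary join; a sketch against the tables sketched above, with assumed column names:

```sql
-- Which parts belong to criteria of category 'Type A' with id > 3?
SELECT c.criteria_no, c.cat, c.id, cp.part
FROM   criteria c
JOIN   criteria_parts cp ON cp.criteria_no = c.criteria_no
WHERE  c.cat = 'Type A'
AND    c.id  > 3
ORDER  BY c.criteria_no, cp.part;
```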
Updated: you asked how to move data from your existing table into the new CRITERIA table. Use dynamic SQL like this:
BEGIN
  FOR i IN 1..1000 LOOP
    EXECUTE IMMEDIATE 'INSERT INTO criteria (criteria_no, cat, id) '
                   || 'SELECT criteria_' || i || ', category, id FROM oldtable';
  END LOOP;
  COMMIT;
END;
But of course, replace the 1000 with the real number of criteria_n columns.
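The bridge table could be loaded with a similar loop; a hedged sketch, assuming the old table has a row_header column holding 'Part A', 'Part B', ..., that the criteria_n columns hold 'Yes'/'No', and that CRITERIA.criteria_no holds the literal strings 'criteria_1', 'criteria_2', and so on (all of these are assumptions about a layout the question only partially shows):

```sql
-- Hypothetical follow-up migration: one CRITERIA_PARTS row per
-- (criterion, part) pair where the old table says 'Yes'.
BEGIN
  FOR i IN 1..1000 LOOP
    EXECUTE IMMEDIATE
         'INSERT INTO criteria_parts (criteria_no, part) '
      || 'SELECT ''criteria_' || i || ''', row_header FROM oldtable '
      || 'WHERE row_header LIKE ''Part %'' AND criteria_' || i || ' = ''Yes''';
  END LOOP;
  COMMIT;
END;
```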
Related
Sort SQL results and include missing keys
I have a Postgres table like this (greatly simplified):

id | object_id (foreign id) | key (text) | value (text)
1  | 1                      | A          | 0foo
2  | 1                      | B          | 1bar
3  | 1                      | C          | 2baz
4  | 1                      | D          | 3ham
5  | 2                      | C          | 4sam
6  | 3                      | F          | 5pam
… (billions of rows)

I select object_ids according to some query (not relevant here), and then sort them according to the value of a specified key.

def sort_query_result(query, sort_by, limit, offset):
    return query\
        .with_entities(Table.object_id)\
        .filter(Table.key == sort_by)\
        .order_by(desc(Table.value))\
        .limit(limit).offset(offset).subquery()

For example, assume a query matches object_ids 1 and 2 above. When sort_by=C, I want the result to be returned in the order [2, 1], because 4sam > 2baz.

This works well, but there's one big problem: object_ids that are returned by the query but do not have any row for the sort_by key are not returned at all. For example, for a query that matches object_ids 1 and 2, sort_query_results(query, sort_by='D') == [1]. The object_id 2 is dropped because it has no D, which is undesirable.

Instead, I'd like to return all object_ids from the query. Those without the sort key should be sorted at the end, in any order: sort_query_results(query, sort_by='D') == [1, 2]. What's the best way to achieve that?

Note: I do not have the freedom to change the DB schema or business logic, but I can change the query code. I use the SQLAlchemy ORM from Python, but could execute raw Postgres commands if necessary. Thank you.
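One common approach to this kind of problem (a sketch, not taken from the thread): outer-join the matched object_ids to the rows for the sort_by key, so ids with no such row survive with a NULL value, and push those NULLs to the end with NULLS LAST. The table name items and the derived table matched (standing in for the original query's ids) are assumptions:

```sql
-- Hypothetical raw-SQL sketch: keep every matched object_id and
-- sort the ones missing the sort key last, in any order.
SELECT m.object_id
FROM   matched m
LEFT JOIN items t
       ON  t.object_id = m.object_id
       AND t.key = 'D'                -- the sort_by key
ORDER  BY t.value DESC NULLS LAST
LIMIT  10 OFFSET 0;
```

In SQLAlchemy this would translate to an outerjoin() plus order_by(desc(...).nullslast()), but the exact ORM spelling depends on the models involved.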
Recursive self join over file data
I know there are many questions about recursive self joins, but they're mostly about a hierarchical data structure as follows:

ID | Value | Parent id

But I was wondering if there was a way to do this in a specific case I have where I don't necessarily have a parent id. My data will look like this when I initially load the file:

ID | Line
-------------------------
1  | 3,Formula,1,2,3,4,...
2  | *,record,abc,efg,hij,...
3  | ,,1,x,y,z,...
4  | ,,2,q,r,s,...
5  | 3,Formula,5,6,7,8,...
6  | *,record,lmn,opq,rst,...
7  | ,,1,t,u,v,...
8  | ,,2,l,m,n,...

Essentially, it's a CSV file where each row in the table is a line in the file. Lines 1 and 5 identify an object header, and lines 3, 4, 7, and 8 identify the rows belonging to the object. The object header lines can have only 40 attributes, which is why the object is broken up across multiple sections in the CSV file.

What I'd like to do is take the table, separate out the record # column, and join it with itself multiple times so it achieves something like this:

ID | Line
-------------------------
1  | 3,Formula,1,2,3,4,5,6,7,8,...
2  | *,record,abc,efg,hij,lmn,opq,rst
3  | ,,1,x,y,z,t,u,v,...
4  | ,,2,q,r,s,l,m,n,...

I know it's probably possible, I'm just not sure where to start. My initial idea was to create a view that separates out the first and second columns, and use the view as a way of joining in a repeated fashion on those two columns. However, I have some problems: I don't know how many sections will occur in the file for the same object, and the file can contain other objects as well, so joining on the first two columns would be problematic if you have something like:

ID | Line
-------------------------
1  | 3,Formula,1,2,3,4,...
2  | *,record,abc,efg,hij,...
3  | ,,1,x,y,z,...
4  | ,,2,q,r,s,...
5  | 3,Formula,5,6,7,8,...
6  | *,record,lmn,opq,rst,...
7  | ,,1,t,u,v,...
8  | ,,2,l,m,n,...
9  | ,4,Data,1,2,3,4,...
10 | *,record,lmn,opq,rst,...
11 | ,,1,t,u,v,...

In the above case, my plan could join rows from the Data object in row 9 with the first rows of the Formula object by matching the record value of 1.

UPDATE: I know this is somewhat confusing. I tried doing this with C# a while back, but I basically had to write a recursive descent parser for the specific file format, and it simply took too long because I had to get it into the database afterwards and it was too much for Entity Framework. It was taking hours just to convert one file, since these files are excessively large.

Either way, @Nolan Shang has the closest result to what I want. The only difference is this (sorry for the bad formatting):

+----+------------+--------------------------+----------------------------------+
| ID | header     | x                        | value                            |
+----+------------+--------------------------+----------------------------------+
| 1  | 3,Formula, | ,1,2,3,4,5,6,7,8         | 3,Formula,1,2,3,4,5,6,7,8        |
| 2  | ,,         | ,1,x,y,z,t,u,v           | ,1,x,y,z,t,u,v                   |
| 3  | ,,         | ,2,q,r,s,l,m,n           | ,2,q,r,s,l,m,n                   |
| 4  | *,record,  | ,abc,efg,hij,lmn,opq,rst | *,record,abc,efg,hij,lmn,opq,rst |
| 5  | ,4,        | ,Data,1,2,3,4            | ,4,Data,1,2,3,4                  |
| 6  | *,record,  | ,lmn,opq,rst             | ,lmn,opq,rst                     |
| 7  | ,,         | ,1,t,u,v                 | ,1,t,u,v                         |
+----+------------+--------------------------+----------------------------------+
I agree that it would be better to export this to a scripting language and do it there; this will be a lot of work in T-SQL.

You've intimated that there are other possible scenarios you haven't shown, so I obviously can't give a comprehensive solution. I'm guessing this isn't something you need to do quickly on a repeated basis, more of a one-time transformation, so performance isn't an issue.

One approach would be to do a LEFT JOIN to a hard-coded table of the possible identifying sub-strings like:

3,Formula,
*,record,
,,1,
,,2,
,4,Data,

It looks like it pretty much has to be human-selected and hard-coded, because I can't find a reliable pattern that can be used to SELECT only these sub-strings.

Then you SELECT from this artificially-created table (or derived table, or CTE), LEFT JOIN to your actual table with a LIKE to get all the rows that use each of these values as their starting substring, strip out the starting characters to get the rest of the string, and use the STUFF..FOR XML trick to build the desired Line.

How you get the ID column depends on what you want. For instance, in your second example, I don't know what ID you want for the ,4,Data,... line. Do you want 5 because that's the next number in the results, or do you want 9 because that's the ID of the first occurrence of that sub-string? Code accordingly: if you want 5, it's a ROW_NUMBER(); if you want 9, you can add an ID column to the artificial table you created at the start of this approach.

BTW, there's really nothing recursive about what you need done, so if you're still thinking in those terms, now would be a good time to stop. This is more of a "group concatenation" problem.
Here is a sample, though it differs a bit from what you need. It uses the value up to the second comma as the group header, so ,,1 and ,,2 will be treated as the same group; if you can use a parent id to indicate a group, that would be better.

DECLARE @testdata TABLE(ID int, Line varchar(8000))
INSERT INTO @testdata
SELECT 1,'3,Formula,1,2,3,4,...' UNION ALL
SELECT 2,'*,record,abc,efg,hij,...' UNION ALL
SELECT 3,',,1,x,y,z,...' UNION ALL
SELECT 4,',,2,q,r,s,...' UNION ALL
SELECT 5,'3,Formula,5,6,7,8,...' UNION ALL
SELECT 6,'*,record,lmn,opq,rst,...' UNION ALL
SELECT 7,',,1,t,u,v,...' UNION ALL
SELECT 8,',,2,l,m,n,...' UNION ALL
SELECT 9,',4,Data,1,2,3,4,...' UNION ALL
SELECT 10,'*,record,lmn,opq,rst,...' UNION ALL
SELECT 11,',,1,t,u,v,...'

;WITH t AS(
    SELECT *,
           REPLACE(SUBSTRING(t.Line, LEN(c.header)+1, LEN(t.Line)), ',...', '') AS data
    FROM @testdata AS t
    CROSS APPLY(VALUES(LEFT(t.Line, CHARINDEX(',', t.Line, CHARINDEX(',', t.Line)+1)))) c(header)
)
SELECT MIN(ID) AS ID,
       t.header,
       c.x,
       t.header + STUFF(c.x, 1, 1, '') AS value
FROM t
OUTER APPLY(SELECT ',' + tb.data
            FROM t AS tb
            WHERE tb.header = t.header
            FOR XML PATH('')) c(x)
GROUP BY t.header, c.x

+----+------------+------------------------------------------+-----------------------------------------------+
| ID | header     | x                                        | value                                         |
+----+------------+------------------------------------------+-----------------------------------------------+
| 1  | 3,Formula, | ,1,2,3,4,5,6,7,8                         | 3,Formula,1,2,3,4,5,6,7,8                     |
| 3  | ,,         | ,1,x,y,z,2,q,r,s,1,t,u,v,2,l,m,n,1,t,u,v | ,,1,x,y,z,2,q,r,s,1,t,u,v,2,l,m,n,1,t,u,v     |
| 2  | *,record,  | ,abc,efg,hij,lmn,opq,rst,lmn,opq,rst     | *,record,abc,efg,hij,lmn,opq,rst,lmn,opq,rst  |
| 9  | ,4,        | ,Data,1,2,3,4                            | ,4,Data,1,2,3,4                               |
+----+------------+------------------------------------------+-----------------------------------------------+
PostgreSQL: Distribute rows evenly and according to frequency
I have trouble with a complex ordering problem. I have the following example data:

table "categories"

id | frequency
1  | 0
2  | 4
3  | 0

table "entries"

id | category_id | type
1  | 1           | a
2  | 1           | a
3  | 1           | a
4  | 2           | b
5  | 2           | c
6  | 3           | d

I want to put the entries rows in an order so that category_id and type are distributed evenly. More precisely, I want to order entries in a way that:

- category_ids that refer to a category that has frequency=0 are distributed evenly, so that a row is followed by a different category_id whenever possible, e.g. category_ids of rows: 1,2,1,3,1,2.
- Rows with category_ids of categories with frequency<>0 should be inserted from roughly the beginning, with a minimum of frequency rows between them (the gaps should vary). In my example these are the rows with category_id=2. So the result could start with row id #1, then #4, then a minimum of 4 rows of other categories, then #5.
- In the end result, rows with the same type should not be next to each other.

Example result:

id | category_id | type
1  | 1           | a
4  | 2           | b
2  | 1           | a
6  | 3           | d
.. some other row ..
.. some other row ..
.. some other row ..
5  | 2           | c

Entries are like a stream of things the user gets (one at a time). The whole ordering should give users some variation; it's just there to not present them similar entries all the time, so it doesn't have to be perfect. The query also does not have to give the same result on each call; using random() is totally fine. Frequencies are there to give entries of certain categories a higher priority, so that they are not distributed across the whole range but are placed more at the beginning of the result list. Even if there are a lot of these entries, they should not completely crowd out the frequency=0 entries at the beginning, though.

I'm not sure how to start this. I think I can use window functions and ntile() to distribute rows by category_id and type, but I have no idea how to insert the non-0-frequency entries afterwards.
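For the "distribute categories evenly" part alone, one common trick (a sketch, not taken from the thread) is to order by each row's position within its own category, which round-robins the categories; Postgres allows window functions directly in ORDER BY:

```sql
-- Hypothetical sketch: interleave categories by ordering on each row's
-- rank within its category; random() shuffles rows inside a category.
-- This covers only the frequency=0 requirement, not the frequency gaps
-- or the same-type constraint.
SELECT e.*
FROM entries e
ORDER BY row_number() OVER (PARTITION BY e.category_id ORDER BY random()),
         random();
```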
Searching a "vertical" table in SQLite
Tables are usually laid out in a "horizontal" fashion:

+-----+---------+--------+
|recID|FirstName|LastName|
+-----+---------+--------+
|  1  |   Jim   | Jones  |
|  2  |  Adam   | Smith  |
+-----+---------+--------+

Here, however, is a table with the same data in a "vertical" layout:

+-----+-----+---------+-------+
|rowID|recID|Property |Value  |
+-----+-----+---------+-------+
|  1  |  1  |FirstName| Jim   |  \ these two rows constitute a single logical record
|  2  |  1  |LastName | Jones |  /
|  3  |  2  |FirstName| Adam  |  \ these two rows are another single logical record
|  4  |  2  |LastName | Smith |  /
+-----+-----+---------+-------+

Question: In SQLite, how can I search the vertical table efficiently and in such a way that recIDs are not duplicated in the result set? That is, if multiple matches are found with the same recID, only one (any one) is returned?

Example (incorrect):

SELECT rowID FROM items WHERE "Value" LIKE 'J%'

returns, of course, two rows with the same recID:

1 (Jim)
2 (Jones)

What is the optimal solution here? I can imagine storing intermediate results in a temp table, but I'm hoping for a more efficient way.

(I need to search through all properties, so the SELECT cannot be restricted with e.g. "Property" = 'FirstName'. The database is maintained by a third-party product; I suppose the design makes sense because the number of property fields is variable.)
To avoid duplicate rows in the result returned by a SELECT, use DISTINCT:

SELECT DISTINCT recID
FROM items
WHERE "Value" LIKE 'J%'

However, this works only for the values that are actually returned, and only for entire result rows.

In the general case, to return one result record for each group of table records, use GROUP BY to create such groups. For any column that does not appear in the GROUP BY clause, you then have to choose which rowID in the group to return; here we use MIN:

SELECT MIN(rowID)
FROM items
WHERE "Value" LIKE 'J%'
GROUP BY recID

To make this query more efficient, create an index on the recID column.
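That last suggestion is a one-liner; a sketch with an assumed index name:

```sql
-- Hypothetical index DDL for the GROUP BY above. A composite index on
-- (recID, "Value") could help further, since the query filters on
-- Value and groups by recID.
CREATE INDEX idx_items_recid ON items(recID);
```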
Increasing a +1 to the id without changing the content of a column
I have this random table with random contents:

id | name     | mission
1  | aaaa     | kitr
2  | bbbb     | etre
3  | ccccc    | qwqw
4  | dddd     | qwert
5  | eeee     | potentials
6  | ffffffff | toto

What I want is to add a new row with id=3, with a different name and a different mission, BUT the OLD id=3 row should become id=4 while keeping the name and mission it had when it was id=3, the OLD id=4 row should become id=5 with its own name and mission, and so on. It's like I want to insert a row between the existing rows: the rows below it keep their contents, but their ids increase by 1. Example below:

id | name     | mission
1  | aaaa     | kitr
2  | bbbb     | etre
3  | zzzzzz   | zzzzz
4  | ccccc    | qwqw
5  | dddd     | qwert
6  | eeee     | potentials
7  | ffffffff | toto

Why do I want to do this? I have a table that has 2 CLOBs. Inside those CLOBs there are different queries, e.g.:

id=1 has the creation of a table
id=2 has inserts for the columns
id=3 has the creation of another table
id=4 has functions

If you concatenate all of these ids into one text (or CLOB), they will create, then insert, then create, then add functions. That table is like a huge script.

Why am I doing this? The developers are building their application and want the SQL to run in a specific order. I have 6 developers, and I'm organizing the data modeling, the performance, and how the scripts are run. So the above table is there to organize the calling of the scripts that they want.
Simply put, don't do it. This case highlights why you should never use any business value, i.e. any real-world value, as a primary key. In your case, I recommend the primary key not be used for any other purpose.

I recommend you add an extra column, say sort_order (ORDER itself is a reserved word), and then change THAT column to re-order the rows. That way your primary key and all the other records will not need to be touched. This avoids the problem that your approach needs to update ALL the records below the current one, which is a really bad approach. Just imagine trying to undo that update ;)

Some more info here: https://stackoverflow.com/a/8777574/631619
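A sketch of that approach, with assumed names (sort_order for the new column) and values spaced out so later inserts don't force any renumbering:

```sql
-- Hypothetical sketch: keep ids stable, order by a separate column.
-- Gaps of 10 leave room to slot new scripts in between later.
ALTER TABLE random_table ADD (sort_order NUMBER);

UPDATE random_table SET sort_order = id * 10;

-- Slot the new script "between" ids 2 and 3 without touching any row:
INSERT INTO random_table (id, name, mission, sort_order)
VALUES (7, 'zzzzzz', 'zzzzz', 25);

SELECT * FROM random_table ORDER BY sort_order;
```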
If you do want to renumber anyway, a single UPDATE that shifts the tail of the table is enough:

UPDATE random_table
SET    id = id + 1
WHERE  id > 2;

Then insert the new value:

INSERT INTO random_table (id, name, mission)
VALUES (3, 'zzzzzz', 'zzzzz');