How to pivot rows to columns for this kind of scenario? - sql

I have a data source that returns data in the following format:
id | subId | code | name | col1        | col2         | col3
1  | 1     | abc  | xyz  | "Whatever"  | "WhateverA"  | "WhateverB"
1  | 1     | abc  | xyz  | "Whatever2" | "Whatever2A" | "Whatever2B"
I need to turn the col1 row values into column headers, with the corresponding col2 and col3 values as subcolumns under each col1 value, like so:
id | subId | code | name | "Whatever"  |             | "Whatever2"  |              |
   |       |      |      | col2        | col3        | col2         | col3         |
1  | 1     | abc  | xyz  | "WhateverA" | "WhateverB" | "Whatever2A" | "Whatever2B" |

There's nothing that I know of, out of the box, that would achieve this. Here's a high-level overview of a possible solution:
Fetch and cache the dataset on the server
Bind the "main" (top-level) data to a GridView
Using jQuery, handle tr/td click events captured inside the GridView
Make an AJAX call to the server -> fetch the sub data from the cached dataset -> return it -> append the fetched result to the GridView
Also, instead of making an AJAX call every single time a td is clicked, you can fetch all the data at once and store it in a variable or localStorage. I've done it this way before, and there are many resources on the web with step-by-step articles on how to achieve steps 1-4. Hope this gives you an idea.
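If the pivot can happen in SQL before the data is bound, conditional aggregation is one option. A minimal sketch, assuming SQL Server syntax, a known and fixed set of col1 values, and a hypothetical source table name; the two-level header is flattened into composite column names:

SELECT id, subId, code, name,
       MAX(CASE WHEN col1 = 'Whatever'  THEN col2 END) AS [Whatever_col2],
       MAX(CASE WHEN col1 = 'Whatever'  THEN col3 END) AS [Whatever_col3],
       MAX(CASE WHEN col1 = 'Whatever2' THEN col2 END) AS [Whatever2_col2],
       MAX(CASE WHEN col1 = 'Whatever2' THEN col3 END) AS [Whatever2_col3]
FROM   source_table   -- hypothetical stand-in for the actual data source
GROUP  BY id, subId, code, name;

If the col1 values are not known up front, the same statement can be built dynamically, but the two-row header itself would still have to be rendered by the UI layer.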

Related

How to update multiple rows of data using LINQ?

I am new to LINQ queries and EF Core, and I need to design an API that updates multiple rows of data. This is an example of my current database table in PostgreSQL:
| id | value |
| ---- | ----- |
| 1 | 123 |
| 2 | 456 |
I want to write an update API using a LINQ query that allows the user to update it via JSON into:
| id | value |
| ---- | ----- |
| 1 | abc |
| 2 | zxc |
What is the best approach to code this API?
My advice is to try the out-of-the-box functionality first: make the changes you want through the change tracker, and when you are happy, call SaveChanges(), which persists everything.
Then ask about the specific challenges you run into.
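For reference, since the entities here map to a plain PostgreSQL table, this is a sketch of the set-based UPDATE that the posted JSON boils down to (the table name items is a hypothetical stand-in; EF Core's SaveChanges() will issue equivalent per-row updates for the tracked entities):

UPDATE items AS i
SET    value = v.value
FROM   (VALUES (1, 'abc'), (2, 'zxc')) AS v(id, value)  -- the posted JSON pairs
WHERE  i.id = v.id;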

array clustering with unique identifier for file datasets

I have a dataset in S3 with a bigint array column, and I want to filter rows efficiently based on the array values. I know we can use a GIN index in a SQL table, but I need a solution that works on the S3 dataset. I am planning to assign a cluster id to each combination of elements in the array (their cardinality is not huge, max 2500) and store it as a new column that filters can later be applied to.
Example,
Table A
+------+------+-----------+
| Col1 | Col2 | Col3      |
+------+------+-----------+
| 1    | 101  | [123,234] |
| 2    | 102  | [123]     |
| 3    | 103  | [234,345] |
+------+------+-----------+
I am trying to add new column like,
Table B (column Col3 will be removed from actual schema)
+------+------+-----------+-----+
| Col1 | Col2 | Col3      | Cid |
+------+------+-----------+-----+
| 1    | 101  | [123,234] | 1   |
| 2    | 102  | [123]     | 2   |
| 3    | 103  | [234,345] | 3   |
+------+------+-----------+-----+
and there will be another table of mapping for col3 and Cid like,
Table C
+-----------+-----+
| Col3      | Cid |
+-----------+-----+
| [123,234] | 1   |
| [123]     | 2   |
| [234,345] | 3   |
+-----------+-----+
Table C gets a new entry whenever a new combination appears, and Table B is updated whenever an array element is added or removed. The goal is to be able to filter records from Table A efficiently based on values in the array column. A query like 123 = ANY(Col3) can then be served as Cid IN (1, 2), and an overlap query like Col3 && ARRAY[123, 345] can be served as Cid IN (1, 2, 3).
Is there a better way to solve this problem?
I am also thinking of creating the required combinations at runtime to limit their number. Is it a good idea to create only the minimum set of combinations?
In Postgres, you can create the table and use join to calculate the values:
create table array_dim as
select col3 as arr, row_number() over (order by min(col1)) as array_id
from a
group by col3;
You can then add the new column:
select a.*, ad.array_id
from a
join array_dim ad on a.col3 = ad.arr;
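Hypothetical usage, assuming the augmented table is named b and its cluster column cid (Table B above): resolve the matching cluster ids from the mapping table first, then filter on the cheap scalar column:

select b.*
from b
where b.cid in (
    select ad.array_id
    from array_dim ad
    where 123 = any(ad.arr)   -- all stored combinations containing 123
);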

Pivot SSRS Dataset

I have a dataset which looks like so
ID | PName | Node | Val   |
1  | Tag   | Name | XBA   |
2  | Tag   | Desc | Dec1  |
3  | Tag   | unit | Int   |
6  | Tag   | tids | 100   |
7  | Tag   | post | AAA   |
1  | Tag   | Name | XBB   |
2  | Tag   | Desc | Des9  |
3  | Tag   | unit | Float |
7  | Tag   | post | BBB   |
6  | Tag   | tids | 150   |
I would like the result in my report to be
Name | Desc | Unit  | Tids | Post |
XBA  | Dec1 | int   | 100  | AAA  |
XBB  | Des9 | Float | 150  | BBB  |
I have tried using an SSRS Matrix with
Row: PName
Data: Node
Value: Val
The results were simply one row with Name, the next row with Desc, the next with unit, and so on. It's not all in the same row, and the second record was missing, possibly because there is no grouping on the dataset.
What is a good way of achieving the expected results?
I would not recommend this for a production scenario, but if you need to knock out a report quickly you can try this. I would just not feel comfortable that the order of the records you get will always be what you expect.
You COULD insert the results of the stored procedure into a table (regular table, temp table, table variable... it doesn't really matter, as long as you can add an identity column). Assuming the rows always come out in the correct order (which is probably not a valid assumption 100% of the time), the identity column gives you a unique row number for each row. From there you can use a little arithmetic to "group" your values together and then pivot out what you want.
create table #temp (ID int, PName varchar(100), Node varchar(100), Val varchar(100))
insert into #temp exec (your stored proc)
alter table #temp add UniqueID int identity
Then use arithmetic on UniqueID to group the records together (integer division by 5, since there are five Node rows per record) and pivot, as sketched below.
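A sketch, assuming the stored procedure really does emit exactly five Node rows per record, in a stable order:

select max(case when Node = 'Name' then Val end) as [Name],
       max(case when Node = 'Desc' then Val end) as [Desc],
       max(case when Node = 'unit' then Val end) as [Unit],
       max(case when Node = 'tids' then Val end) as [Tids],
       max(case when Node = 'post' then Val end) as [Post]
from #temp
group by (UniqueID - 1) / 5   -- five rows per logical record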

postgres: Multiply column of table A with rows of table B

Fellow SOers,
Currently I am stuck on the following problem.
Say we have table "data" and table "factor"
"data":
---------------
| col1 | col2 |
---------------
| foo  | 2    |
| bar  | 3    |
---------------
and table "factor" (the number of rows is variable)
--------------
| name | val |
--------------
| f1   | 7   |
| f2   | 8   |
| f3   | 9   |
| ...  | ... |
--------------
and the following result should look like this:
-----------------------------
| col1 | f1 | f2 | f3 | ... |
-----------------------------
| foo  | 14 | 16 | 18 | ... |
| bar  | 21 | 24 | 27 | ... |
-----------------------------
So basically I want the column col2 multiplied by all the values of val in table "factor", AND the contents of column name should act as the table header/column name in the result.
We are using Postgres 9.3 (an upgrade to a higher version may be possible), so an extended search turned up multiple possible approaches: crosstab (though even with crosstab I was not able to figure this one out), a WITH CTE (preferred, but also no luck), and probably the correct use of array() and unnest().
Hence, any help on how to achieve this is appreciated (the less code, the better).
Thanks in advance!
This package seems to do what you want:
https://github.com/hnsl/colpivot
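If you want to stay with stock Postgres, crosstab() from the tablefunc extension can also produce this shape; the catch is that crosstab needs a fixed output column list, so the factor names (f1, f2, f3, ...) must be known up front, or the statement must be generated dynamically. A sketch, assuming exactly three factors:

create extension if not exists tablefunc;

select *
from crosstab(
  $$ select d.col1, f.name, d.col2 * f.val
     from data d cross join factor f
     order by 1, 2 $$
) as ct(col1 text, f1 int, f2 int, f3 int);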

Excel VBA to transpose set of rows if value exists in another column

I'm trying to find a way, via a VBA script, to transpose rows from column A into a new sheet, but only if there is a value in column B for those rows. I have a sheet with ~75K rows that I need to do this for. I tried creating pivot tables, which got the data into its current format, but I need the data to be in columns.
The tricky part of this is that in column A, I only need to look at the rows that are all numbers and not the other rows that have text.
I created a sample sheet to view, where the sample data is in the SOURCE tab and what I want the data to look like in the TRANSPOSED tab.
https://docs.google.com/spreadsheets/d/1ujbaouZFqiPw0DbO78PCnz25OY2ugF1HtUqMg_J7KeI/edit?usp=sharing
Any help would be appreciated.
UPDATE and Answer:
I modified my approach and went back to the original source data, which was not part of a pivot table, and was able to use a simple MATCH formula between the two data sources. So, my original data looked like this:
+----------------+---------+--------+--------------+
| Gtin | Brand | Name | TaxonomyText |
+----------------+---------+--------+--------------+
| 00030085075605 | brand 1 | name 1 | cat1 |
| 00041100015112 | brand 2 | name 2 | cat2 |
| 00041100015099 | brand 3 | name 3 | cat3 |
| 00030085075608 | brand 4 | name 4 | cat4 |
+----------------+---------+--------+--------------+
I had another sheet containing the data I needed to match to in this format:
+----------------+---------+
| Gtin | Brand |
+----------------+---------+
| 00030085075605 | brand 1 |
| 00041100015112 | brand 2 |
| 00041100015098 | brand 3 |
| 00030085075608 | brand 4 |
+----------------+---------+
I created a new column in my source sheet and used an IFERROR/MATCH formula:
=IFERROR(IF(MATCH(A14,data_to_match!$A:$A,0),"yes",),"no")
Then I copied this formula down for every row (about 75K rows), which very quickly added a yes or a no:
+----------------+---------+---------+--------+--------------+
| Gtin | matched | Brand | Name | TaxonomyText |
+----------------+---------+---------+--------+--------------+
| 00030085075605 | yes | brand 1 | name 1 | cat1 |
| 00041100015112 | yes | brand 2 | name 2 | cat2 |
| 00041100015098 | no | brand 3 | name 3 | cat3 |
| 00030085075608 | yes | brand 4 | name 4 | cat4 |
+----------------+---------+---------+--------+--------------+
The final step was to just filter for the yes values, and I had all the data I needed.
My mistake was going to a pivot table first, which put the data in a very funky format and forced me into a transpose that wasn't really necessary. Hopefully this can help others.