Row to string value in Pervasive View - pervasive-sql

I have a table as follows
+----------------+----------+--------+
| purchase_order | text_seq | text   |
+----------------+----------+--------+
| 1001           | 1        | screw  |
| 1001           | 2        | m5x10  |
| 1001           | 3        | socket |
| 1002           | 1        | washer |
| 1002           | 2        | m5x10  |
+----------------+----------+--------+
From a view I need to get the data as follows:
+----------------+-------------------------+
| Purchase_order | text                    |
+----------------+-------------------------+
| 1001           | screw,m5x10,socket head |
| 1002           | washer,m5               |
+----------------+-------------------------+

There's not an easy way to do what you want. If you are using a recent version of PSQL, you can create a stored procedure or function that builds the concatenated "text" value in the order you want (based on text_seq, I would guess) and then use that result in the overall query.
My personal preference would be to retrieve the rows from the database and then format them to my needs in the application.
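For illustration, a rough sketch of that function-plus-view approach. The table name po_text, the parameter type, and the VARCHAR(255) result size are assumptions, and the exact procedural syntax (cursor support inside functions, the + concatenation operator) varies between PSQL versions:

CREATE FUNCTION fn_po_text(IN :po INTEGER)
RETURNS VARCHAR(255)
AS
BEGIN
    DECLARE :result VARCHAR(255);
    DECLARE :line VARCHAR(80);
    -- Walk the text rows for one purchase order in text_seq order
    DECLARE textCur CURSOR FOR
        SELECT text FROM po_text
        WHERE purchase_order = :po
        ORDER BY text_seq;
    SET :result = '';
    OPEN textCur;
    FETCH NEXT FROM textCur INTO :line;
    WHILE (SQLSTATE = '00000') DO
        IF :result = '' THEN
            SET :result = :line;
        ELSE
            SET :result = :result + ',' + :line;
        END IF;
        FETCH NEXT FROM textCur INTO :line;
    END WHILE;
    CLOSE textCur;
    RETURN (:result);
END;

CREATE VIEW v_po_text (purchase_order, text) AS
    SELECT DISTINCT purchase_order, fn_po_text(purchase_order)
    FROM po_text;

The view then calls the function once per distinct purchase_order, which is why I'd rather do the formatting outside the database for anything large.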

Related

Teradata SQL Assistant - How can I pivot or transpose large tables with many columns and many rows?

I am using Teradata SQL Assistant Version TD 16.10.06.01 ...
I have seen a lot of people transpose data for smallish tables, but I am working with thousands of clients and need to break the columns up into line item values to compare orders and highlight differences between them. The problem is that it is all horizontally linked, and I need to transpose it to Id, Transaction id, Version and Line Item Value 1, Line Item Value 2..., then add another column comparing values to see if they changed.
example:
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| Id | First Name | Last Name | DOB        | transaction id | Make   | Location | Postcode | Year | Price       |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| 1  | John       | Smith     | 15/11/2001 | 1654654        | Audi   | NSW      | 2222     | 2019 | $ 10,000.00 |
| 2  | Mark       | White     | 11/02/2002 | 1661200        | BMW    | WA       | 8888     | 2016 | $ 8,999.00  |
| 3  | Bob        | Grey      | 10/05/2002 | 1667746        | Ford   | QLD      | 9999     | 2013 | $ 3,000.00  |
| 4  | Phil       | Faux      | 6/08/2002  | 1674292        | Holden | SA       | 1111     | 2000 | $ 5,800.00  |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
I am hoping to change the data to:
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| id | trans_id | Vers_ord | Item Val | Ln_Itm_Dscrptn | Org_Val  | Updt_Val | Amndd_Ord_chck | Lbl_Rnk | ... |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| 1  | 1654654  | 2        | 11169    | Make           | Audi BLK | Audi WHT | Yes            | 1       |     |
| 1  | 1654654  | 2        | 11189    | Location       | NSW      | WA       | Yes            | 2       |     |
| 1  | 1654654  | 2        | 23689    | Postcode       | 2222     | 6000     | Yes            | 3       |     |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
Recently, with smaller data, I created a table, added in the values, and then used a CASE statement (when value 1 then xyz ...) with a product join, and the data warehouse admins didn't mention anything out of order. But I only had a 16-row by 200-column table to transpose (Sum, Avg, Count, Median(function) x 4 subsets of clients), which was significantly smaller than the tables I currently need to compare.
I am worried my prior method will probably slow the data warehouse down, plus take me a significant amount of time to type out the SQL.
Is there a better way to transpose large tables?
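For reference, the CASE-statement-with-product-join approach described above amounts to manually unpivoting each column into its own row. A rough sketch of that pattern, with illustrative table and column names that are not from the original post (and a CAST because all branches of a CASE need a common type), looks something like this:

SELECT  o.Id,
        o.transaction_id,
        d.Ln_Itm_Dscrptn,
        CASE d.Ln_Itm_Dscrptn
            WHEN 'Make'     THEN o.Make
            WHEN 'Location' THEN o.Location
            WHEN 'Postcode' THEN CAST(o.Postcode AS VARCHAR(10))
        END AS Org_Val
FROM    orders o
CROSS JOIN (
        SELECT 'Make' AS Ln_Itm_Dscrptn
        UNION ALL SELECT 'Location'
        UNION ALL SELECT 'Postcode'
) d;

Every column to be unpivoted needs its own entry in both the CASE and the derived table, which is why it gets tedious at 200 columns.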

How to write multiset SELECT query in SQL Server

Consider the tables below:
T_WORK
-------------------
| Work_id | Cre_d |
-------------------
| 1       | 2016  |
| 2       | 2017  |
| 3       | 2018  |
-------------------
T_WORK_PARAM
-----------------------------------
| Work_id | Param_nm | Param_val  |
-----------------------------------
| 1       | Name     | John       |
| 1       | Place    | London     |
| 1       | Date     | 01-01-2018 |
| 2       | Name     | Trump      |
| 2       | Place    | Newyork    |
| 2       | Date     | 02-02-2018 |
-----------------------------------
I need an output in the below format:
-----------------------------------
|         | Name     | John       |
| 1       | Place    | London     |
|         | Date     | 01-01-2018 |
-----------------------------------
|         | Name     | Trump      |
| 2       | Place    | Newyork    |
|         | Date     | 02-02-2018 |
-----------------------------------
In Oracle, I can achieve this with this query:
SELECT
    T1.Work_id,
    CAST(MULTISET(SELECT Param_nm, Param_val
                  FROM   T_work_param T2
                  WHERE  T2.Work_id = T1.Work_id) AS type_param_tbl)
FROM
    T_work T1
where type_param_tbl is a table of (Param_nm varchar2(1000), Param_val varchar2(1000)).
How do I write a similar query in SQL Server?
If it is not possible in SQL Server, what is the best/usual way to return the desired output to the caller (a web service)?
The way to achieve the result you want is actually by pivoting on the Param_nm column. This will produce your result in a tabular format that you can parse/map in your application.
Please click here to see a SQLFiddle that shows what I'm talking about.
I hope this helps!
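For what it's worth, a minimal sketch of that pivot in T-SQL, using the table and column names from the question; hard-coding the Name/Place/Date list is an assumption, and any additional Param_nm values would need to be added to the IN list:

SELECT  Work_id, [Name], [Place], [Date]
FROM    (
        SELECT Work_id, Param_nm, Param_val
        FROM   T_WORK_PARAM
        ) AS src
PIVOT   (
        MAX(Param_val) FOR Param_nm IN ([Name], [Place], [Date])
        ) AS pvt;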
Unfortunately, you cannot do that in SQL Server.
You may consider just joining the tables for a somewhat similar result.
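A sketch of that join, using the tables from the question; it returns one row per parameter, which the caller (the web service) can then group by Work_id:

SELECT  w.Work_id,
        p.Param_nm,
        p.Param_val
FROM    T_WORK AS w
JOIN    T_WORK_PARAM AS p
        ON p.Work_id = w.Work_id
ORDER BY w.Work_id;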

Outer Join multiple tables keeping all rows in common columns

I'm quite new to SQL; I hope you can help.
I have several tables that all have 3 columns in common: ObjNo, Date (year-month), Product.
Each table has one other column that represents an economic value (sales, count, netsales, plan, ...).
I need to join all tables on the 3 common columns. The outcome must have one row for each existing combination of the 3 common columns. Not every combination exists in every table.
If I do full outer joins, I get ObjNo, Date, etc. for each table, but I only need them once.
How can I achieve this?
+--------------+-------+--------+---------+-----------+
| tblCount     |       |        |         |           |
+--------------+-------+--------+---------+-----------+
|              | ObjNo | Date   | Product | count     |
|              | 1     | 201601 | Snacks  | 22        |
|              | 2     | 201602 | Coffee  | 23        |
|              | 4     | 201605 | Tea     | 30        |
|              |       |        |         |           |
| tblSalesPlan |       |        |         |           |
|              | ObjNo | Date   | Product | salesplan |
|              | 1     | 201601 | Beer    | 2000      |
|              | 2     | 201602 | Sancks  | 2000      |
|              | 5     | 201605 | Tea     | 2000      |
|              |       |        |         |           |
|              |       |        |         |           |
| tblSales     |       |        |         |           |
|              | ObjNo | Date   | Product | Sales     |
|              | 1     | 201601 | Beer    | 1000      |
|              | 2     | 201602 | Coffee  | 2000      |
|              | 3     | 201603 | Tea     | 3000      |
+--------------+-------+--------+---------+-----------+
Thx
Devon
It sounds like you're using SELECT * FROM ..., which gives you every field from every table. You probably only want the common columns once, so you should be explicit about which fields you want to include in the results.
If you're not sure which table is going to have a record for each case (i.e. there is not guaranteed to be a record in any particular table), you can use the COALESCE function to get the first non-null value in each case.
SELECT COALESCE(tbl1.ObjNo, tbl2.ObjNo, tbl3.ObjNo) AS ObjNo, ...,
       tbl1.Sales, tbl2.Count, tbl3.Netsales
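A fuller sketch of that pattern with the three tables from the question; how names such as Date and count need to be quoted, and whether FULL OUTER JOIN is supported, depends on your database:

SELECT  COALESCE(c.ObjNo,   p.ObjNo,   s.ObjNo)   AS ObjNo,
        COALESCE(c."Date",  p."Date",  s."Date")  AS "Date",
        COALESCE(c.Product, p.Product, s.Product) AS Product,
        c."count",
        p.salesplan,
        s.Sales
FROM    tblCount AS c
FULL OUTER JOIN tblSalesPlan AS p
        ON  p.ObjNo   = c.ObjNo
        AND p."Date"  = c."Date"
        AND p.Product = c.Product
FULL OUTER JOIN tblSales AS s
        ON  s.ObjNo   = COALESCE(c.ObjNo,   p.ObjNo)
        AND s."Date"  = COALESCE(c."Date",  p."Date")
        AND s.Product = COALESCE(c.Product, p.Product);

The COALESCE in the second join condition is what keeps rows matched up once a key combination is missing from one of the earlier tables.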

Import and Export Wizard SQL Server 2012

Is there a feature in the Import/Export Wizard of SQL Server which allows me to do the following? If so, how can I go about it?
Before
+---------+-------------+----------+------+
| Name    | Description | Delivery | Cost |
+---------+-------------+----------+------+
| dfdsf   | dfgdfgdf    | 34       |      |
| sdfsdgf | dfgdfgdf    | 324      |      |
| dfg     | dfgdfgdf    | 23       |      |
| dfgfdg  | gdf         | 43       |      |
| dfgdfg  | gdf         | 23       |      |
| fdgfdg  | fgd         | 443      |      |
+---------+-------------+----------+------+
+---------+-------------+----------+------+
| Name    | Description | Delivery | Cost |
+---------+-------------+----------+------+
| dfdsf   | dfgdfgdf    | 34       |      |
| sdfsdgf | dfgdfgdf    | 324      |      |
| dfg     | dfgdfgdf    | 23       |      |
| dfgfdg  | gdf         | 43       |      |
| dfgdfg  | gdf         | 23       |      |
| fdgfdg  | fgd         | 443      |      |
|         |             |          | 324  |
|         |             |          | 324  |
|         |             |          | 234  |
|         |             |          | 234  |
|         |             |          | 23   |
|         |             |          | 423  |
+---------+-------------+----------+------+
So I have the table above.
I want to add the Cost column in from a flat file (a CSV source, made in Excel).
Can I do this in the wizard without removing the other data? I have tried multiple different options in the wizard, but I seem to keep removing the data, or the added data creates another set of rows, so it ends up looking like the bottom table.
I do have a primary key on this table, which I have set to increase automatically by 1. I also have foreign keys, though I had to remove the relationships temporarily to even get the data onto the table.
Sorry the formatting isn't clear: the top table represents my table, and the bottom table shows what happens when I import the data. What I want is the Cost column aligned with the old data.
Any help would be much appreciated.
If you want to update existing records in the table, you need to be able to identify a relationship between the rows in your table and the rows in your CSV file.
To update the rows in the existing table, import the CSV file into a separate staging table. Then run an UPDATE statement like the one below to update the rows in the existing table with the data from the new table.
-- Assumes the CSV has been imported into a separate table (dbo.CsvInput here)
-- and that both tables share a key column (ID) that matches the rows up.
UPDATE T
SET    T.Cost = S.Cost
FROM   dbo.MyTable  AS T
JOIN   dbo.CsvInput AS S
       ON T.ID = S.ID

Multiple Table Trigger Update/Delete/Insert

I'm having trouble creating a SQL Server trigger to do what I want. I don't have much experience with triggers.
Basically I have a table, let's call it cluster_metadata, with metadata that describes certain attributes about an object. Then I have a second table, let's call it activities_table, with user entered data that may pertain to certain objects in the cluster_metadata table.
The cluster_metadata table is user updatable; however, new rows are created and deleted using a stored procedure, and users can only update specific values.
The activities_table is completely user driven, and users can insert, modify, and delete rows.
I need a trigger that joins the data and updates the combined table on any modification of cluster_metadata or activities_table.
For simplicity I've trimmed down the number of columns, but the tables look something like this:
cluster_metadata:
+-----------------------------+
| Cluster  | Eligible | Group |
+-----------------------------+
| Cluster1 | True     | 1     |
| Cluster2 | True     | 1     |
| Cluster3 | True     | 2     |
| Cluster4 | False    | 2     |
| Cluster5 | True     | 3     |
| Cluster6 | True     | 4     |
+-----------------------------+
activities_table:
+------------------------------------------+
| Activity     | ID   | Group | Start Date |
+------------------------------------------+
| Patches      | 1000 | 1     | 02-01-2015 |
| Patches      | 1000 | 2     | 02-10-2015 |
| Patches      | 1000 | 3     | 02-20-2015 |
| SomeActivity | 1001 | 2     | 02-30-2015 |
+------------------------------------------+
The table that I need to create and keep updated would look something like this using the data from the above two tables:
+----------------------------------------------------------------+
| Cluster  | Eligible | Group | Activity     | ID   | Start Date |
+----------------------------------------------------------------+
| Cluster1 | True     | 1     | Patches      | 1000 | 02-01-2015 |
| Cluster2 | True     | 1     | Patches      | 1000 | 02-01-2015 |
| Cluster3 | True     | 2     | Patches      | 1000 | 02-10-2015 |
| Cluster3 | True     | 2     | SomeActivity | 1001 | 02-30-2015 |
| Cluster4 | True     | 2     | Patches      | 1000 | 02-10-2015 |
| Cluster4 | True     | 2     | SomeActivity | 1001 | 02-30-2015 |
| Cluster5 | True     | 3     | Patches      | 1000 | 02-20-2015 |
+----------------------------------------------------------------+
How would I create a trigger that would do this? I would just create a view, but there is some additional user input that I need to accept against this merged data.
Thanks!
Thanks for all your help. Basically, what I ended up doing is creating a joined view with the data from both cluster_metadata and activities_table. From there I scripted a stored procedure which takes the appropriate data and inserts it into a third table. The procedure also makes sure all the data is updated and matches the view on each execution. I then run the procedure each time a user inputs anything into either of the tables from the web UI. Not the best solution, but it's working.
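For anyone interested, a rough sketch of that view-plus-refresh-procedure approach in T-SQL. The join on Group, and the names v_cluster_activities, cluster_activities, and usp_refresh_cluster_activities, are illustrative and not from the original post:

CREATE VIEW dbo.v_cluster_activities AS
SELECT  c.Cluster,
        c.Eligible,
        c.[Group],
        a.Activity,
        a.ID,
        a.[Start Date]
FROM    dbo.cluster_metadata AS c
JOIN    dbo.activities_table AS a
        ON a.[Group] = c.[Group];
GO

CREATE PROCEDURE dbo.usp_refresh_cluster_activities
AS
BEGIN
    -- Rebuild the merged table from the view on every run so it
    -- always matches the current contents of both source tables.
    DELETE FROM dbo.cluster_activities;

    INSERT INTO dbo.cluster_activities
            (Cluster, Eligible, [Group], Activity, ID, [Start Date])
    SELECT  Cluster, Eligible, [Group], Activity, ID, [Start Date]
    FROM    dbo.v_cluster_activities;
END;
GO

The web UI then calls EXEC dbo.usp_refresh_cluster_activities after each user edit to either table.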
Thanks everyone!