Splitting text in SQL Server stored procedure

I'm working with a database, where one of the fields I extract is something like:
1-117 3-134 3-133
Each of these number sets represents a different set of data in another table. Taking 1-117 as an example, 1 = equipment ID, and 117 = equipment settings.
I have another table from which I need to extract data based on the previous field. It has two columns that split equipment ID and settings. Essentially, I need a way to go from the queried column 1-117 and run a query to extract data from another table where 1 and 117 are two separate corresponding columns.
So, is there any way to split this number to run this query?
Also, how would I split those three numbers (1-117 3-134 3-133) into three different query sets?
The tricky part here is that this column can have any number of sets here (such as 1-117 3-133 or 1-117 3-134 3-133 2-131).
I'm creating these queries in a stored procedure as part of a larger document to display the extracted data.
Thanks for any help.

Since you didn't provide the DB vendor, here are two posts that answer this question for SQL Server and Oracle, respectively...
T-SQL: Opposite to string concatenation - how to split string into multiple records
Splitting comma separated string in a PL/SQL stored proc
And if you're using some other DBMS, go search for "splitting text" plus your DBMS name. I can almost guarantee you're not the first one to ask, and there are answers for every DBMS flavor out there.
As you said the format is constant though, you could also do something simpler using a SUBSTRING function.
EDIT in response to OP comment...
Since you're using SQL Server, and you said that these values are always in a consistent format, you can do something as simple as using SUBSTRING to get each part of the value and assign them to T-SQL variables, where you can then use them to do whatever you want, like using them in the predicate of a query.
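For example, a minimal sketch of that idea (the variable and table/column names here are only illustrative, not taken from your schema):
Declare @Pair varchar(5)
Set @Pair = '1-117'
Declare @EquipmentID int, @Settings int
-- position 1 is the single digit before the dash, positions 3-5 are the settings
Set @EquipmentID = Convert(int, Substring(@Pair, 1, 1))
Set @Settings = Convert(int, Substring(@Pair, 3, 3))
Select *
from DestinationTable
where EquipmentID = @EquipmentID
and Settings = @Settings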

Assuming that what you said is true about the format always being #-### (exactly 1 digit, a dash, and 3 digits) this is fairly easy.
WITH EquipmentSettings AS (
SELECT
S.*,
Convert(int, Substring(S.AwfulMultivalue, V.Value * 6 - 5, 1)) EquipmentID,
Convert(int, Substring(S.AwfulMultivalue, V.Value * 6 - 3, 3)) Settings
FROM
SourceTable S
INNER JOIN master.dbo.spt_values V
ON V.Value BETWEEN 1 AND (Len(S.AwfulMultivalue) + 1) / 6 -- each #-### set plus its separator is 6 characters
WHERE
V.type = 'P'
)
SELECT
E.Whatever,
D.Whatever
FROM
EquipmentSettings E
INNER JOIN DestinationTable D
ON E.EquipmentID = D.EquipmentID
AND E.Settings = D.Settings
In SQL Server 2005+ this query will support 1365 values in the string.
If the length of the digits can vary, then it's a little harder. Let me know.

In case the number of sets does not exceed 4, you can use PARSENAME to retrieve the result:
Declare @Num varchar(20)
Set @Num = '1-117 3-134 3-133'
select parsename(replace(@Num, ' ', '.'), 3)
Result: 1-117
Now apply PARSENAME again on that result:
Select parsename(replace(parsename(replace(@Num, ' ', '.'), 3), '-', '.'), 1)
Result: 117
If there are more than 4 values, use a split function instead, as in the sketch below.
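If you are on SQL Server 2016 or later, a minimal sketch (variable names illustrative) combines STRING_SPLIT with PARSENAME and handles any number of sets:
Declare @Num varchar(100)
Set @Num = '1-117 3-134 3-133 2-131'
-- STRING_SPLIT returns one row per space-separated set; PARSENAME picks the parts around the dash
Select parsename(replace(s.value, '-', '.'), 2) as EquipmentID,
       parsename(replace(s.value, '-', '.'), 1) as Settings
from string_split(@Num, ' ') s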

How to rotate a two-column table?

This might be a novice question – I'm still learning. I'm on PostgreSQL 9.6 with the following query:
SELECT locales, count(locales) FROM (
SELECT lower((regexp_matches(locale, '([a-z]{2,3}(-[a-z]{2,3})?)', 'i'))[1])
AS locales FROM users)
AS _ GROUP BY locales
My query returns the following dynamic rows:
locales                          count
en                               10
fr                               7
de                               3
n additional locales (~300)...   n-count
I'm trying to rotate it so that locale values end up as columns with a single row, like this:
en    fr    de    n additional locales (~300)...
10    7     3     n-count
I'm having to do this to play nice with a time-series db/app.
I've tried using crosstab(), but all the examples show better defined tables with 3 or more columns.
I've looked at examples using join, but I can't figure out how to do it dynamically.
Base query
In Postgres 10 or later you could use the simpler and faster regexp_match() instead of regexp_matches(). (Since you only take the first match per row anyway.) But don't bother and use the even simpler substring() instead:
SELECT lower(substring(locale, '(?i)[a-z]{2,3}(?:-[a-z]{2,3})?')) AS locale
, count(*)::int AS ct
FROM users
WHERE locale ~* '[a-z]{2,3}' -- eliminate NULL, allow index support
GROUP BY 1
ORDER BY 2 DESC, 1
Simpler and faster than your original base query.
About those ordinal numbers in GROUP BY and ORDER BY:
Select first row in each GROUP BY group?
Subtle difference: regexp_matches() returns no row for no match, while substring() returns null. I added a WHERE clause to eliminate non-matches a priori - and allow index support if applicable, but I don't expect indexes to help here.
Note the prefixed (?i), that's a so-called "embedded option" to use case-insensitive matching.
Added a deterministic ORDER BY clause. You'd need that for a simple crosstab().
Aside: you might need _ in the pattern instead of - for locales like "en_US".
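For instance, a variant of the pattern that accepts either separator (just an illustration, adjust to your actual data):
SELECT lower(substring('en_US', '(?i)[a-z]{2,3}(?:[-_][a-z]{2,3})?'));  -- returns 'en_us'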
Pivot
Try as you might, SQL does not allow dynamic result columns in a single query. You need two round trips to the server. See:
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
You can use a dynamically generated crosstab() query. Basics:
PostgreSQL Crosstab Query
Dynamic query:
PostgreSQL convert columns to rows? Transpose?
But since you generate a single row of plain integer values, I suggest a simple approach:
SELECT 'SELECT ' || string_agg(ct || ' AS ' || quote_ident(locale), ', ')
FROM (
SELECT lower(substring(locale, '(?i)[a-z]{2,3}(?:-[a-z]{2,3})?')) AS locale
, count(*)::int AS ct
FROM users
WHERE locale ~* '[a-z]{2,3}'
GROUP BY 1
ORDER BY 2 DESC, 1
) t
Generates a query of the form:
SELECT 10 AS en, 7 AS fr, 3 AS de, 3 AS "de-at"
Execute it to produce your desired result.
In psql you can append \gexec to the generating query to feed the generated SQL string back to the server immediately. See:
My function returned a string. How to execute it?
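For instance, the same generating query from above, run in psql with \gexec in place of the terminating semicolon (just a sketch of the mechanics):
SELECT 'SELECT ' || string_agg(ct || ' AS ' || quote_ident(locale), ', ')
FROM (
SELECT lower(substring(locale, '(?i)[a-z]{2,3}(?:-[a-z]{2,3})?')) AS locale
     , count(*)::int AS ct
FROM users
WHERE locale ~* '[a-z]{2,3}'
GROUP BY 1
ORDER BY 2 DESC, 1
) t \gexec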

SQL Query for Search Page

I am working on a small project for an online databases course and I was wondering if you could help me out with a problem I am having.
I have a web page that is searching a movie database and retrieving specific columns using a movie initial input field, a number input field, and a code field. These will all be converted to strings and used as user input for the query.
Below is what I tried before:
select A.CD, A.INIT, A.NBR, A.STN, A.ST, A.CRET_ID, A.CMNT, A.DT
from MOVIE_ONE A
where A.INIT = :init
AND A.CD = :cd
AND A.NBR = :num
The way the page must search is in three different cases:
(initial and number)
(code)
(initial and number and code)
The cases have to be independent, so if certain fields are empty but a given case is fulfilled, the search still goes through. It also must be in one query. I am stuck on how to implement the cases.
The parameters in the query are taken from the Java parameters in the method found in an SQLJ file.
If you could possibly provide some aid on how I can go about this problem, I'd greatly appreciate it!
Consider wrapping the equality expressions in NVL (synonymous with COALESCE) so that if a parameter input is blank, the corresponding column is checked against itself. Also, be sure to kick the a-b-c table aliasing habit.
SELECT m.CD, m.INIT, m.NBR, m.STN, m.ST, m.CRET_ID, m.CMNT, m.DT
FROM MOVIE_ONE m
WHERE m.INIT = NVL(:init, m.INIT)
AND m.CD = NVL(:cd, m.CD)
AND m.NBR = COALESCE(:num, m.NBR)
To demonstrate, consider the DB2 fiddles below, where each case can be checked by adjusting the VALUES CTE parameters, all running on the same data.
Case 1
WITH
i(init) AS (VALUES('db2')),
c(cd) AS (VALUES(NULL)),
n(num) AS (VALUES(53)),
cte AS
...
Case 2
WITH
i(init) AS (VALUES(NULL)),
c(cd) AS (VALUES(2018)),
n(num) AS (VALUES(NULL)),
cte AS
...
Case 3
WITH
i(init) AS (VALUES('db2')),
c(cd) AS (VALUES(2018)),
n(num) AS (VALUES(53)),
cte AS
...
However, be aware that the fiddle runs different SQL due to the nature of the data (i.e., doubles and dates), but the query reflects the same concept, with NVL matching expressions on both sides.
SELECT *
FROM cte, i, c, n
WHERE cte.mytype = NVL(i.init, cte.mytype)
AND YEAR(CAST(cte.mydate AS date)) = NVL(c.cd, YEAR(CAST(cte.mydate AS date)))
AND ROUND(cte.mynum, 0) = NVL(n.num, ROUND(cte.mynum, 0));

How to use aggregate function to filter a dataset in ssrs 2008

I have a matrix in ssrs2008 like below:
GroupName Zone CompletedVolume
Cancer 1 7
Tunnel 1 10
Surgery 1 64
The CompletedVolume value comes from a specific expression <<expr>>, which is equal to: [Max(CVolume)]
This matrix is filled by a stored procedure that I am not supposed to change if possible. What I need to do is not show the data whose CompletedVolume is <= 50. I tried to go to the tablix properties and add a filter like [Max(Q9Volume)] >= 50, but when I try to run the report it says that aggregate functions cannot be used in dataset filters or data region filters. How can I fix this as easily as possible?
Note that adding a WHERE clause to the SQL query would not solve this issue, since there are many other tables that use the same SP and need the data where CompletedVolume <= 50. Any help would be appreciated.
EDIT: I am trying to get the max(Q9Volume) value in the SP, but something is happening that I have never seen before. The query is like:
Select r.* from (select * from results1 union select * from results2) r
left outer join procedures p on r.pid = p.id
The interesting thing is that there are some columns in the output that are not included in either the results1/results2 tables or the procedures table. For example, there is no column like Q9Volume in the tables (results1, results2 and procedures), yet when I run the query I see the columns in the output! How is that possible?
You can set the row's Hidden property to True when [Max(CVolume)] is less than or equal to 50.
Select the row and go to Row Visibility
Select Show or Hide based on an expression option and use this expression:
=IIF(
Max(Fields!Q9Volume.Value)<=50,
True,False
)
Note the maximum values for Cancer and Tunnel are 7 and 10 respectively, so those rows will be hidden if you apply the above expression.
Let me know if this helps.

SQL, comparing two values and returning values that don't match

I'm running a SQL statement that reads values from two different databases and returns records that do not match.
The two fields I'm trying to compare are:
NAME_TYPE, which is a number, e.g. 1 or 2
PartyType, which is a varchar, e.g. Person/Organisation
How might I compare this effectively?
Most of them have been easy to compare like NAM.NAME <> cl.ClientName.
But I'm finding this one a bit more difficult. I'm a new user to SQL, so any help would be great, thanks.
This may work:
WHERE NOT ((NAM.NAME_TYPE = 1 and cl.PartyType = 'Person') OR (NAM.NAME_TYPE = 2 and cl.PartyType = 'Organisation'))
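An alternative sketch (not required, just another way to write it) maps the numeric code to its label first and compares labels; note that, unlike the NOT (...) form above, rows where NAME_TYPE is neither 1 nor 2 would not be returned, because the CASE yields NULL:
WHERE cl.PartyType <> CASE NAM.NAME_TYPE
                           WHEN 1 THEN 'Person'
                           WHEN 2 THEN 'Organisation'
                      END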

How to make this selection?

I have a table with a column named LIST_CODE where the data looks like this: A101,A102,B101.
How can I find all the data where LIST_CODE equals A102 OR B101?
I tried the SQL below but it returns 0:
SELECT count(*) FROM details_user WHERE list_code IN ('A102','C101')
If you mean that list_code can contain 'A101, A102,B101' in one row, then:
select * from details_user
where list_code like '%A102%'
or list_code like '%B101%'
But this would be a bad database design.
if list_code can contain
A101
A102
B101
(one value per row)
then your code is correct, and it just means that you have no rows with A102 or C101.
As there's a limit on the query length, you may also want to consider constructing a regular expression, e.g. SELECT count(*) FROM details_user WHERE list_code RLIKE 'A102|B101'.
In any case, that's not a good schema design.
As others have said, when you have data that should be in rows, place it in rows. No amount of code is replacement for a good initial design.
That said, you can use the XML functions in SQL Server 2005 and above to transform data that is packed into a single row into something more manageable. You can do something like:
select du.ID, Code.value('.','nvarchar(100)') as Code
from (select ID,
cast(('<c>'+replace(list_Code, ',', '</c><c>')+'</c>') as xml) as xml
from details_user) du
cross apply du.xml.nodes('c') as T(Code)
This first transforms the string from an 'A101, A102' format into an XML-like format,
'<c>A101</c><c>A102</c>'.
Then the transformed value is fed into the nodes() function that separates the codes into a table, and from that table, the value of the code is extracted using the value() method.
You can play with this approach on this SQL Fiddle
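As a follow-up sketch (not part of the original answer), the count from the question could then be taken against the derived rows:
select count(distinct du.ID)
from (select ID,
cast(('<c>'+replace(list_code, ',', '</c><c>')+'</c>') as xml) as x
from details_user) du
cross apply du.x.nodes('c') as T(Code)
where ltrim(T.Code.value('.', 'nvarchar(100)')) in ('A102', 'C101')  -- ltrim copes with a space after a comma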
I'm afraid the IN operator only compares whole values; it doesn't match substrings within a varchar like you're suggesting.
So you'd have to use a regular OR statement or use a different definition of your value set.
SELECT count(*) FROM details_user WHERE list_code = 'A102' OR list_code ='C101'