Morning All,
I have an "Allowed systems" field in my record that can contain an array such as: 22, 18, 21
What I want to do is: if system 18 is accessing the database and pulling all the records that are allowed for system 18, how can I write that in SQL?
+----------+----------------+------------+
| AdvertID | AllowedSystems | AdvertName |
+----------+----------------+------------+
|    47    | 22, 18, 21, 3  | GeorgesAd  |
+----------+----------------+------------+
|    49    | 2, 7, 9, 6     | StevesAd   |
+----------+----------------+------------+
|    47    | 18, 12, 32, 8  | PetesAd    |
+----------+----------------+------------+
Any assistance would be greatly appreciated.
Have a great day..!
Paul Jacobs
This will test for the best performing cases first, in the hopes of getting some performance out of what is destined to be an inefficient query due to the denormalized structure.
The first two WHERE clauses can take advantage of an index on AllowedSystems, although this may be helpful in only a small number of cases, depending on what your data is like.
select AdvertID, AdvertName
from MyTable
where AllowedSystems = '18'
or AllowedSystems like '18,%'
or AllowedSystems like '%, 18'
or AllowedSystems like '%, 18,%'
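The question doesn't name an engine, so here is a quick sanity check of the four-condition approach using Python's sqlite3 as a stand-in, with the rows from the question's table:

```python
import sqlite3

# In-memory stand-in for the question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (AdvertID INT, AllowedSystems TEXT, AdvertName TEXT)")
conn.executemany(
    "INSERT INTO MyTable VALUES (?, ?, ?)",
    [(47, "22, 18, 21, 3", "GeorgesAd"),
     (49, "2, 7, 9, 6", "StevesAd"),
     (47, "18, 12, 32, 8", "PetesAd")],
)

# The four conditions cover: only entry, first entry, last entry, middle entry.
rows = conn.execute("""
    SELECT AdvertID, AdvertName
    FROM MyTable
    WHERE AllowedSystems = '18'
       OR AllowedSystems LIKE '18,%'
       OR AllowedSystems LIKE '%, 18'
       OR AllowedSystems LIKE '%, 18,%'
""").fetchall()
```

GeorgesAd and PetesAd match; StevesAd does not, and neither would a list containing only 118 or 181.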
Note: I add a space to the beginning of the searched column and a comma to the end to account for the cases where 18 is the first or last entry in the list. I then search for ' 18,' within the string.
select *
from YourTable
where charindex(' 18,', ' ' + AllowedSystems + ',') <> 0
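charindex is T-SQL; to try the same padding trick outside SQL Server, SQLite's instr behaves analogously. A sketch with the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (AdvertID INT, AllowedSystems TEXT, AdvertName TEXT)")
conn.executemany(
    "INSERT INTO MyTable VALUES (?, ?, ?)",
    [(47, "22, 18, 21, 3", "GeorgesAd"),
     (49, "2, 7, 9, 6", "StevesAd"),
     (47, "18, 12, 32, 8", "PetesAd")],
)

# Pad with a leading space and a trailing comma so ' 18,' matches 18
# in any position, but never 118 or 181.
rows = conn.execute("""
    SELECT AdvertName
    FROM MyTable
    WHERE instr(' ' || AllowedSystems || ',', ' 18,') <> 0
""").fetchall()
```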
First, this is a bad idea for a data structure. You should probably normalize by adding an AllowedSystems table with a record for each pair of system and Advert.
Second, you would want to use this:
SELECT AdvertID, AdvertName
FROM MyTable
WHERE AllowedSystems LIKE '%18%'
You're going to have issues once systems get multi-digit overlaps, since there is no way to tell whether the hit is for 18 or for 118 or 181.
EDIT:
Or you can use RedFilter's multiple WHERE conditions to avoid the above issue with 18, 118, or 181.
The poor performance way would be:
SELECT AdvertID, AdvertName
FROM YourTable
WHERE ', ' + AllowedSystems + ', ' LIKE '%, 18, %'
However, ideally you should have an extra table that has a compound key:
Table: AdvertAllowedSystem
AdvertID
SystemID
so that you can do a more performant, SARGable query:
SELECT a.AdvertID, a.AdvertName
FROM AdvertAllowedSystem s
INNER JOIN Advert a ON s.AdvertId = a.AdvertId
WHERE s.SystemId = 18
In addition, if you have the possibility to change the design or to create a materialized view, you could look at the hierarchyid data type, or at Rank and Unrank.
I have a data set like the following:
+-----------------+---------------------+
| job_code | job_title |
+-----------------+---------------------+
| finance_acct | Business Accountant |
| finance_manager | Business Manager |
| it_programmer | IT Programmer |
| it_manager | IT Manager |
+-----------------+---------------------+
etc.
I want to take all of the job titles that share the same first half of their job code and print them as a list. Like the following:
finance: Business Accountant, Business Manager
it: IT Programmer, IT Manager
How would I do so? I know how to use SUBSTR to pull the first part of the job code. Basically I can create the left column fine. I ran into a couple of problems though:
Using the GROUP BY command, I continually got the ORA-00979 error ("not a GROUP BY expression").
I can't figure out how to make the list comma-delimited. I used CONCAT, but even then you end up with a superfluous comma after the last entry for a given line. I've seen some things online about the STUFF command, but I know it's possible to do this without it.
Any tips? Thank you.
regexp_substr() comes to mind to extract the part you want from the job_code. The rest is just aggregation and listagg():
select regexp_substr(job_code, '[^_]+', 1, 1) as half_job_code,
listagg(job_title, ', ') within group (order by job_title) as job_titles
from t
group by regexp_substr(job_code, '[^_]+', 1, 1)
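regexp_substr and listagg are Oracle features. As a rough cross-check of the same shape, SQLite's substr/instr and group_concat behave analogously (note group_concat makes no ordering guarantee, unlike LISTAGG ... WITHIN GROUP):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (job_code TEXT, job_title TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    ("finance_acct", "Business Accountant"),
    ("finance_manager", "Business Manager"),
    ("it_programmer", "IT Programmer"),
    ("it_manager", "IT Manager"),
])

# substr/instr take everything before the first '_' (the regexp_substr
# stand-in); group_concat stands in for listagg.
rows = conn.execute("""
    SELECT substr(job_code, 1, instr(job_code, '_') - 1) AS half_job_code,
           group_concat(job_title, ', ') AS job_titles
    FROM t
    GROUP BY half_job_code
""").fetchall()
```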
Try the LISTAGG function. You can specify a delimiter and group it based on the data you require.
Assume that I have records like this:
+------+---------------+
|  id  | equivalent_id |
+------+---------------+
|  11  |      22       |
|  22  |      33       |
|  33  |      44       |
|  44  |      55       |
|  55  |      66       |
+------+---------------+
I want to write a query in Oracle to get the implicit relation between records. For example, if I pass 11 as an input to the query, it should return 22, 33, 44, 55, 66.
Because 11 -> 22 and 22 -> 33, we can conclude that 11 -> 33, and so on.
UPDATE
The numbers can be outside of the above range; for example, they can be anywhere from 1 to 99999999, and there is no mathematical relation between records.
I would go with a hierarchical query:
SELECT equivalent_id
FROM <nameyourtable>
START WITH ID=11
CONNECT BY PRIOR equivalent_id=id;
Don't know if I got this pseudo-code right, but you get the picture ...
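CONNECT BY is Oracle-specific; a recursive CTE expresses the same walk and is more portable (Oracle 11gR2+ supports it as well). A sketch using sqlite3 with the sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE equiv (id INT, equivalent_id INT)")
conn.executemany("INSERT INTO equiv VALUES (?, ?)",
                 [(11, 22), (22, 33), (33, 44), (44, 55), (55, 66)])

# Walk the chain starting from 11. UNION (not UNION ALL) de-duplicates,
# which also stops the recursion if the data ever contains a cycle.
rows = conn.execute("""
    WITH RECURSIVE chain(eid) AS (
        SELECT equivalent_id FROM equiv WHERE id = 11
        UNION
        SELECT e.equivalent_id
        FROM equiv e
        JOIN chain c ON e.id = c.eid
    )
    SELECT eid FROM chain
""").fetchall()
```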
I have a table named shoes:
Shoe_model   price    sizes
-----------------------------------------------
Adidas KD    $55      '8, 9, 10, 10.5, 11, 11.5'
Nike Tempo   $56      '8, 9, 11.5'
Adidas XL    $30.99   '9, 10, 11, 13'
How can I select a row for a specific size?
My attempt:
SELECT * FROM shoes WHERE sizes = '10';
Is there a way around this? Using LIKE would match both the 10 and 10.5 sizes, so I'm trying to match the exact size. Thanks
First, you should not be storing the sizes as a comma delimited string. You should have a separate row per size per shoe. That is the SQLish way to store things.
Sometimes, we are stuck with other people's really bad design decisions. In that case, you can do something like this:
SELECT *
FROM shoes
WHERE ', ' || sizes || ',' LIKE '%, 10,%';
The delimiters are important so "10" doesn't match "100".
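A quick check of the delimiter-padding idea using sqlite3. Note that it is the stored list, not the search term, that gets wrapped in delimiters, and that the data's separator is a comma plus space:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shoes (Shoe_model TEXT, price TEXT, sizes TEXT)")
conn.executemany("INSERT INTO shoes VALUES (?, ?, ?)", [
    ("Adidas KD", "$55", "8, 9, 10, 10.5, 11, 11.5"),
    ("Nike Tempo", "$56", "8, 9, 11.5"),
    ("Adidas XL", "$30.99", "9, 10, 11, 13"),
])

# Wrap the stored list in its own delimiters (', ' before, ',' after),
# then search for ', 10,' -- this cannot match '10.5' or '100'.
rows = conn.execute("""
    SELECT Shoe_model
    FROM shoes
    WHERE ', ' || sizes || ',' LIKE '%, 10,%'
""").fetchall()
```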
I have the following data from 2 tables, Notes (left) and scans (right):
Imagine the pickers and packers all vary; you can have JOHN, JANE, etc.
I need a query that outputs like so :
On a given date range :
Name - Picked (units) - Packed (units)
MASI - 15 - 21
JOHN - 21 - 32
etc.
I can't figure out how to even start this; any tips would be helpful. Thanks.
Without a "worker" take that lists each Picker/Packer individually, I think you'd need something like this...
SELECT
CASE WHEN action.name = 'Picker' THEN scans.Picker ELSE scans.Packer END AS worker,
SUM(CASE WHEN action.name = 'Picker' THEN notes.Units ELSE 0 END) AS PickedUnits,
SUM(CASE WHEN action.name = 'Packer' THEN notes.Units ELSE 0 END) AS PackedUnits
FROM
notes
INNER JOIN
scans
ON scans.PickNote = notes.Number
CROSS JOIN
(
SELECT 'Picker' AS name
UNION ALL SELECT 'Packer' AS name
)
AS action
GROUP BY
CASE WHEN action.name = 'Picker' THEN scans.Picker ELSE scans.Packer END
(This is actually just an algebraic re-arrangement of the answer that @RaphaëlAlthaus posted at the same time as me. Both work out the Picker values and the Packer values separately. If you have separate indexes on scans.Picker and scans.Packer then I would expect mine MAY be slowest. If you don't have those two indexes then I would expect mine to be fastest. I recommend creating the indexes and testing on a realistic data set.)
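The tables here come from screenshots that aren't reproduced, so the following sqlite3 sketch assumes notes(Number, Units) and scans(PickNote, Picker, Packer) with invented rows, purely to show the CROSS JOIN unpivot in action:

```python
import sqlite3

# Assumed schema and sample rows (the question's screenshots are not shown).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notes (Number TEXT, Units INT);
CREATE TABLE scans (PickNote TEXT, Picker TEXT, Packer TEXT);
INSERT INTO notes VALUES ('PK1', 15), ('PK2', 21);
INSERT INTO scans VALUES ('PK1', 'MASI', 'JOHN'), ('PK2', 'JOHN', 'MASI');
""")

# Each note/scan pair is duplicated once per role by the CROSS JOIN,
# then re-grouped by the worker that role points at.
rows = conn.execute("""
    SELECT CASE WHEN action.name = 'Picker' THEN scans.Picker
                ELSE scans.Packer END AS worker,
           SUM(CASE WHEN action.name = 'Picker' THEN notes.Units ELSE 0 END) AS PickedUnits,
           SUM(CASE WHEN action.name = 'Packer' THEN notes.Units ELSE 0 END) AS PackedUnits
    FROM notes
    JOIN scans ON scans.PickNote = notes.Number
    CROSS JOIN (SELECT 'Picker' AS name UNION ALL SELECT 'Packer') AS action
    GROUP BY worker
""").fetchall()
```

With these invented rows, MASI picked 15 and packed 21, while JOHN picked 21 and packed 15.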
EDIT
Actually, what I would recommend is a change to scans table completely; normalise it.
Your de-normalised set has one row per PickNote, with fields picker and packer.
A normalised set would have two rows per PickNote with fields role and worker.
id | PickNote | Role | Worker
---+----------+------+--------
01 | PK162675 | Pick | MASI
02 | PK162675 | Pack | MASI
03 | PK162676 | Pick | FRED
04 | PK162676 | Pack | JOHN
This allows you to create simple indexes and simple queries.
You may initially baulk at the extra unnecessary rows, but it will yield simpler queries, faster queries, better maintainability, increased flexibility, etc, etc.
In short, this normalisation may cost a little extra space, but it pays back dividends forever.
SELECT name, SUM(nbPicked) Picked, SUM(nbPacked) Packed
FROM
(SELECT s.Picker name, SUM(n.Units) nbPicked, 0 nbPacked
FROM Notes n
INNER JOIN scans s ON s.PickNote = n.Number
--WHERE s.ProcessedOn BETWEEN x AND y
GROUP BY s.Picker
UNION ALL
SELECT s.Packer name, 0 nbPicked, SUM(n.Units) nbPacked
FROM Notes n
INNER JOIN scans s ON s.PickNote = n.Number
--WHERE s.ProcessedOn BETWEEN x AND y
GROUP BY s.Packer) t
GROUP BY name;
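The UNION ALL approach can be sketched the same way, again assuming Picker/Packer live on scans (as the other answer's query reads them) with invented rows; note the derived table needs an alias, which SQL Server requires:

```python
import sqlite3

# Assumed schema and sample rows (the question's screenshots are not shown).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Notes (Number TEXT, Units INT);
CREATE TABLE scans (PickNote TEXT, Picker TEXT, Packer TEXT);
INSERT INTO Notes VALUES ('PK1', 15), ('PK2', 21);
INSERT INTO scans VALUES ('PK1', 'MASI', 'JOHN'), ('PK2', 'JOHN', 'MASI');
""")

# One branch per role, padded with a zero column, then summed per worker.
rows = conn.execute("""
    SELECT name, SUM(nbPicked) Picked, SUM(nbPacked) Packed
    FROM (SELECT s.Picker name, SUM(n.Units) nbPicked, 0 nbPacked
          FROM Notes n JOIN scans s ON s.PickNote = n.Number
          GROUP BY s.Picker
          UNION ALL
          SELECT s.Packer name, 0 nbPicked, SUM(n.Units) nbPacked
          FROM Notes n JOIN scans s ON s.PickNote = n.Number
          GROUP BY s.Packer) t
    GROUP BY name
""").fetchall()
```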
Basically, I'm dealing with a horribly set up table that I'd love to rebuild, but am not sure I can at this point.
So, the table is of addresses, and it has a ton of similar entries for the same address. But there are sometimes slight variations in the address (i.e., a room # is tacked on IN THE SAME COLUMN, ugh).
Like this:
id | place_name      | place_street
---+-----------------+-----------------------------
1  | Place Name One  | 1001 Mercury Blvd
2  | Place Name Two  | 2388 Jupiter Street
3  | Place Name One  | 1001 Mercury Blvd, Suite A
4  | Place Name, One | 1001 Mercury Boulevard
5  | Place Nam Two   | 2388 Jupiter Street, Rm 101
What I would like to do, if possible, is a query in SQL (this is MSSQL) like:
SELECT DISTINCT place_name, place_street WHERE [the first 4 letters of place_name are the same] && [the first 4 characters of place_street are the same]
to, I guess at this point, get:
Plac | 1001
Plac | 2388
Basically, then I can figure out what are the main addresses I have to break out into another table to normalize this, because the rest are just slight derivations.
I hope that makes sense.
I've done some research and I see people using regular expressions in SQL, but a lot of them seem to be using C scripts or something. Do I have to write regex functions and save them into the SQL Server before executing any regular expressions?
Any direction on whether I can just write them in SQL or if I have another step to go through would be great.
Or on how to approach this problem.
Thanks in advance!
Use the SQL function LEFT:
SELECT DISTINCT LEFT(place_name, 4)
I don't think you need regular expressions to get the results you describe. You just want to trim the columns and group by the results, which will effectively give you distinct values.
SELECT left(place_name, 4), left(place_street, 4), count(*)
FROM AddressTable
GROUP BY left(place_name, 4), left(place_street, 4)
The count(*) column isn't necessary, but it gives you some idea of which values might have the most (possibly) duplicate address rows in common.
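SQL Server's LEFT(s, n) is substr(s, 1, n) in SQLite, which makes the grouping easy to try out with sqlite3 and the question's rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AddressTable (id INT, place_name TEXT, place_street TEXT)")
conn.executemany("INSERT INTO AddressTable VALUES (?, ?, ?)", [
    (1, "Place Name One", "1001 Mercury Blvd"),
    (2, "Place Name Two", "2388 Jupiter Street"),
    (3, "Place Name One", "1001 Mercury Blvd, Suite A"),
    (4, "Place Name, One", "1001 Mercury Boulevard"),
    (5, "Place Nam Two", "2388 Jupiter Street, Rm 101"),
])

# substr(x, 1, 4) is SQLite's equivalent of SQL Server's LEFT(x, 4).
rows = conn.execute("""
    SELECT substr(place_name, 1, 4), substr(place_street, 1, 4), count(*)
    FROM AddressTable
    GROUP BY substr(place_name, 1, 4), substr(place_street, 1, 4)
""").fetchall()
```

All five names share the prefix "Plac", so only the street prefixes split the groups: three rows under "1001" and two under "2388".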
I would recommend you look into Fuzzy Search Operations in SQL Server. You can match the results much better than what you are trying to do. Just google sql server fuzzy search.
Assuming at least SQL Server 2005 for the CTE:
;with cteCommonAddresses as (
select left(place_name, 4) as LeftName, left(place_street,4) as LeftStreet
from Address
group by left(place_name, 4), left(place_street,4)
having count(*) > 1
)
select a.id, a.place_name, a.place_street
from cteCommonAddresses c
inner join Address a
on c.LeftName = left(a.place_name,4)
and c.LeftStreet = left(a.place_street,4)
order by a.place_name, a.place_street, a.id
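The CTE approach can be checked the same way with sqlite3 (substr in place of LEFT; a singleton row is added to the question's data to show that HAVING count(*) > 1 filters it out):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Address (id INT, place_name TEXT, place_street TEXT)")
conn.executemany("INSERT INTO Address VALUES (?, ?, ?)", [
    (1, "Place Name One", "1001 Mercury Blvd"),
    (2, "Place Name Two", "2388 Jupiter Street"),
    (3, "Place Name One", "1001 Mercury Blvd, Suite A"),
    (4, "Place Name, One", "1001 Mercury Boulevard"),
    (5, "Place Nam Two", "2388 Jupiter Street, Rm 101"),
    (6, "Somewhere Else", "42 Solo Lane"),  # no shared prefix: filtered out
])

# The CTE keeps only (name-prefix, street-prefix) pairs that occur more
# than once, then the join pulls back every row belonging to those pairs.
rows = conn.execute("""
    WITH cteCommonAddresses AS (
        SELECT substr(place_name, 1, 4) AS LeftName,
               substr(place_street, 1, 4) AS LeftStreet
        FROM Address
        GROUP BY substr(place_name, 1, 4), substr(place_street, 1, 4)
        HAVING count(*) > 1
    )
    SELECT a.id, a.place_name, a.place_street
    FROM cteCommonAddresses c
    JOIN Address a
      ON c.LeftName = substr(a.place_name, 1, 4)
     AND c.LeftStreet = substr(a.place_street, 1, 4)
    ORDER BY a.place_name, a.place_street, a.id
""").fetchall()
```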