Select from a list in a parameter value - sql

I am trying to add a dropdown of subject areas to a school report. The problem I am running into is that in my database the standards are grouped by grade and subject rather than by subject alone. So when I filter with gt.standardid in (@SubjectArea), the standard IDs for "Literacy" are (54, 61, 68, 75, 88, 235), one for each grade level, but I want all of them treated as a single "Literacy" selection. In my parameter @SubjectArea I have specific values I want to add for each subject area, so for the label "Literacy" I want it to select the StandardIds (54, 61, 68, 75, 88, 235). I am not sure how to accomplish this.
Select
CS.subjectArea
,CS.Name As Group_Name
,GT.Abbreviation
,GT.Name
,GT.standardID
From GradingTask as GT
inner join CurriculumStandard CS
on GT.Standardid = CS.standardid
where GT.ARCHIVED = 0
and GT.standardid in (@SubjectArea)
ORDER BY GT.seq

I would try a cascading parameter approach.
Have the first parameter (call it @SubjectList) be a pre-defined list of subject-area labels. The specific values are not important, but they will be used in the next step.
Ideally your IDs would already be in a table, but if not you can use something like this:
declare @SubjectIDs as table
(
    [SubjectName] nvarchar(50),
    [SubjectID] int
);
insert into @SubjectIDs
(
    [SubjectName],
    [SubjectID]
)
values
('Literacy', 54),
('Literacy', 61),
('Literacy', 68),
('Literacy', 75),
('Literacy', 88),
('Literacy', 235);
select
    SubjectID
from @SubjectIDs
where SubjectName in (@SubjectList);
Make this into a dataset; I'm going to call it DS_SubjectIDs.
Then create a new hidden or internal parameter called SubjectIDs and set it to get its available values from the DS_SubjectIDs dataset.
You can now use the SubjectIDs parameter in your final query.
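For completeness, here is a sketch of how the original query might then look; this assumes SubjectIDs is set up as a multi-value parameter so that it expands into the IN list:
Select
CS.subjectArea
,CS.Name As Group_Name
,GT.Abbreviation
,GT.Name
,GT.standardID
From GradingTask as GT
inner join CurriculumStandard CS
on GT.Standardid = CS.standardid
where GT.ARCHIVED = 0
and GT.standardid in (@SubjectIDs)   -- filled from DS_SubjectIDs via the hidden parameter
ORDER BY GT.seq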


How do I consolidate this table?

I have a table in PostgreSQL (the example table and data are further down) and I'm trying to work out how to "merge" or "consolidate" it so that each person ends up on a single row.
The multiple rows exist because each source row had a different ID or a different value in some other column, but I don't need that information anymore, so I can get rid of it without any consequences.
Is there any function or trick that might bring me the desired result?
What I have tried:
select "name"
, "array_agg" [1][1] as math_grade
, "array_agg" [2][2] as history_grade
, "array_agg" [3][3] as geography_grade
from (select "name"
, array_agg(array[math_grade,history_grade,geography_grade])
from temp1234
group by "name") as abc
Here is an example table:
create table temp1234 (id int
, name varchar(50)
, math_grade int
, history_grade int
, geography_grade int);
And example data:
insert into temp1234 values (1, 'John Smith', 3, null, null);
insert into temp1234 values (2, 'John Smith', null, 4, null);
insert into temp1234 values (3, 'John Smith', null, null, 3);
Best Regards
This will give you what you want, but I'm sure that with more data you will find this query does not cover everything you need. Please do provide more data for more detailed help.
select min(id), name, max(math_grade), max(history_grade), max(geography_grade)
from temp1234
group by name
Here is a demo
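For reference, here is the same query with column aliases, and the result it should produce against the sample data above:
select min(id) as id,
       name,
       max(math_grade) as math_grade,
       max(history_grade) as history_grade,
       max(geography_grade) as geography_grade
from temp1234
group by name;

id | name       | math_grade | history_grade | geography_grade
---+------------+------------+---------------+----------------
 1 | John Smith |          3 |             4 |               3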

Clustering/Similarity between text cells in an postgres aggregate

I've got a table that has a text column and some other identifying features. I want to be able to group by one of the features and find out whether the text values within each group are similar or not. I want to use this to determine whether there are multiple groups in my data or a single group (with some possible bad spelling), so that I can provide a rough "confidence" value showing whether the aggregate represents a single group or not.
CREATE TABLE data_test (
Id serial primary key,
Name VARCHAR(70) NOT NULL,
Job VARCHAR(100) NOT NULL);
INSERT INTO data_test
(Name, Job)
VALUES
('John', 'Astronaut'),
('John', 'Astronaut'),
('Ann', 'Sales'),
('Jon', 'Astronaut'),
('Jason', 'Sales'),
('Pranav', 'Sales'),
('Todd', 'Sales'),
('John', 'Astronaut');
I'd like to run a query that was something like:
select
Job,
count(Name),
Similarity_Agg(Name)
from data_test
group by Job;
and receive
Job       | count | Similarity
----------+-------+-----------
Sales     |     4 |        0.1
Astronaut |     4 |        0.9
Basically showing that Astronaut names are very similar (or, more likely in my data, all the rows are referring to a single astronaut) and the Sales names aren't (more people working in sales than in space). I see there is a Postgres Module that can handle comparing two strings but it doesn't seem to have any aggregate functions in it.
Any ideas?
One option is a self-join (similarity() here comes from the pg_trgm extension):
select
    d.job,
    count(distinct d.id) cnt,
    avg(similarity(d.name, d1.name)) avg_similarity
from data_test d
inner join data_test d1 on d1.job = d.job
group by d.job
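Note that the join above also pairs every row with itself, which pushes the average up with guaranteed 1.0 matches. A possible variation (a sketch, not matched against the exact figures in the question) is to exclude self-pairs:
select
    d.job,
    count(distinct d.id) cnt,
    avg(similarity(d.name, d1.name)) avg_similarity
from data_test d
inner join data_test d1
    on d1.job = d.job
   and d1.id <> d.id   -- compare each name only with the other rows in its group
group by d.job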

Search a text in a column

I have an ASP.NET page and a table that looks like this:
Create table Test(id int, Users Varchar(MAX));
Insert into Test select 1, 'admin;operator;user1';
Insert into Test select 2, 'superadmin';
Insert into Test select 3, 'superadmin;admin';
Any row of Test can include more than one user, so I separate them with a semicolon. When the client searches in the textbox, they will enter only admin, and I want to return only the rows that include admin.
I cannot use
select id, Users from Test where Users like '%admin%'
because in this case the query will also return the 2nd row, which contains only superadmin.
How can I get the correct result?
You shouldn't be storing delimited data; if at all possible, normalize your database.
As that isn't always possible, you can get what you want with:
where CONCAT(';', Users, ';') like '%;admin;%'
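Put together against the Test table from the question (the admin value would normally come from your textbox as a parameter; it is hard-coded here only for illustration):
select id, Users
from Test
where CONCAT(';', Users, ';') like '%;admin;%';
-- returns ids 1 and 3, but not id 2 ('superadmin')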
As I mentioned in the comment, you need to fix your design; what you're doing right now breaks one of the fundamental Normal Form rules. That means having one row per user instead:
CREATE TABLE Test (uid int IDENTITY,
id int,
Username nvarchar(128));
INSERT INTO Test (id,
Username)
VALUES (1, N'admin'),
(1, N'operator'),
(1, N'user1'),
(2, N'superadmin'),
(3, N'superadmin'),
(3, N'admin');
Then you can simply use a = operator:
SELECT *
FROM Test
WHERE Username = N'admin';
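If you need to move rows that are already stored in the delimited format into this shape, a sketch using STRING_SPLIT (SQL Server 2016+) would look like the following; OldTest is a stand-in name for wherever the semicolon-separated data currently lives:
INSERT INTO Test (id, Username)
SELECT s.id, ss.value
FROM OldTest s
CROSS APPLY STRING_SPLIT(s.Users, ';') ss;   -- one output row per semicolon-separated user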
I guess you can try the query below
SELECT * FROM Test WHERE Users LIKE '%Admin%' AND Users NOT LIKE 'superAdmin'

Storing operators with operands in table in SQL Server

I work at a company that sells many versions of a product to several different resellers, and each reseller adds parameters that change the resale price of the product.
For example, we sell a vehicle service contract where, for a certain vehicle, the reserve price of the contract is $36. The dealer marks up every reserve by 30% (to $47), adds a premium of $33 to the reserve price (now $80), and adds a set of fees (commissions, administrative costs, and the like) to bring the contract total to $235.
The reserve price is the same for every dealer on this program, but they all use different increases that are either flat or a percentage. There are of course dozens of parameters for each contract.
My question is this: can I store a table of parameters like "x*1.3" or "y+33" that are indexed to a unique ID, and then join or cross apply that table to one full of values like the reserve price mentioned above?
I looked at the SQL Server "table valued parameters," but I don't see from the MSDN examples if they apply to my case.
Thanks so much for your kind replies.
EDIT:
As I feared, my example seems to be a little too esoteric (my fault). So consider this:
Twinings recommends different temperatures for brewing various kinds of tea. Depending on your elevation, your boiling point might be different. So there must be a way to store a table of values like the one they publish (source: twinings.co.uk).
A user enters a ZIP code that has a corresponding elevation, and SQL Server calculates and returns the correct brew temperature for you. Is that any better an example?
Again, thanks to those who have already contributed.
I don't know if I like this solution, but it does seem to at least work. The only real way to iteratively construct totals is to use some form of "loop", and the most set-based way of doing that these days is with a recursive CTE:
-- each action either multiplies the running total or adds a flat amount
declare @actions table (ID int identity(1,1) not null, ApplicationOrder int not null,
    Multiply decimal(12,4), AddValue decimal(12,4))
insert into @actions (ApplicationOrder, Multiply, AddValue) values
(1, 1.3, null),   -- 30% markup
(2, null, 33),    -- $33 premium
(3, null, 155)    -- remaining fees
declare @todo table (ID int not null, Reserve decimal(12,4))
insert into @todo (ID, Reserve) values (1, 36)
;With Applied as (
    -- anchor row: start from the reserve price, before any action is applied
    select
        t.ID, Reserve as Computed, 0 as ApplicationOrder
    from
        @todo t
    union all
    -- recursive step: apply the next action (by ApplicationOrder) to the running total
    select a.ID,
        CONVERT(decimal(12,4),
            ((a.Computed * COALESCE(Multiply, 1)) + COALESCE(AddValue, 0))),
        act.ApplicationOrder
    from
        Applied a
        inner join
        @actions act
            on a.ApplicationOrder = act.ApplicationOrder - 1
), IdentifyFinal as (
    -- keep only the last step for each ID
    select
        *, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ApplicationOrder desc) as rn
    from Applied
)
select
    *
from
    IdentifyFinal
where
    rn = 1
Here I've got a simple, single set of actions to apply to each price (in @actions) and a set of prices to apply them to (in @todo). I then use the recursive CTE to apply each action in turn.
My result:
ID          Computed       ApplicationOrder  rn
----------- -------------- ----------------- ---
1           234.8000       3                 1
Which isn't far off your $235 :-)
I appreciate that you may have different actions to apply to each particular price, and so my @actions may instead, for you, be something that works out which rules to apply in each case. That may be one or more CTEs before mine that do that work, possibly using another ROW_NUMBER() expression to work out the correct ApplicationOrder values. You may also need more columns and join conditions in the CTE to satisfy this.
Note that I've modelled the actions so that each can apply a multiplication and/or an add at each stage. You may want to play around with that sort of idea (or e.g. add a "rounding" flag of some kind as well so that we might well end up with the $235 value).
Applied ends up containing the initial values and each intermediate value as well. The IdentifyFinal CTE gets us just the final results, but you may want to select from Applied instead just to see how it worked.
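To make that last point concrete, here is the running total after each action for the $36 reserve, worked out by hand from the @actions rows; selecting from Applied ordered by ApplicationOrder should show the same progression:

ApplicationOrder | Computed
-----------------+-----------
0                |   36.0000   (reserve price)
1                |   46.8000   (* 1.3)
2                |   79.8000   (+ 33)
3                |  234.8000   (+ 155)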
You can use a very simple structure to store costs:
DECLARE @costs TABLE (
    ID INT,
    Perc DECIMAL(18, 6),
    Flat DECIMAL(18, 6)
);
The Perc column represents a percentage of the base price. It is possible to store complex calculations in this structure, but it gets ugly. For example, if we have:
Base Price: $100
Flat Fee: $20
Tax: 11.5%
Processing Fee: 3%
Then it will be stored as:
INSERT INTO @costs VALUES
-- op example (the Perc = 1.0 row carries the base price itself into the total)
(1, 1.0, NULL),
(1, 0.3, NULL),
(1, NULL, 33.0),
(1, NULL, 155.0),
-- above example (base price, flat fee, tax, tax on fee, processing fee, and cross terms)
(2, 1.0, NULL),
(2, NULL, 20.0),
(2, 0.115, NULL),
(2, NULL, 20.0 * 0.115),
(2, 0.03, NULL),
(2, NULL, 20.0 * 0.03),
(2, 0.115 * 0.03, NULL),
(2, NULL, 20 * 0.115 * 0.03);
And queried as:
DECLARE @tests TABLE (
    ID INT,
    BasePrice DECIMAL(18, 2)
);
INSERT INTO @tests VALUES
(1, 36.0),
(2, 100.0);
SELECT t.ID, SUM(
    BasePrice * COALESCE(Perc, 0) +
    COALESCE(Flat, 0)
) AS TotalPrice
FROM @tests t
INNER JOIN @costs c ON t.ID = c.ID
GROUP BY t.ID
ID | TotalPrice
---+-------------
1 | 234.80000000
2 | 137.81400000
The other, better, solution is to use a structure such as follows:
DECLARE @costs TABLE (
    ID INT,
    CalcOrder INT,
    PercOfBase DECIMAL(18, 6),
    PercOfPrev DECIMAL(18, 6),
    FlatAmount DECIMAL(18, 6)
);
Where CalcOrder represents the order in which the calculation is done (e.g. tax before processing fee). PercOfBase and PercOfPrev specify whether the base price or the running total is multiplied. This allows you to handle situations where, for example, a commission is added on the base price but must not be included in tax, and vice-versa. This approach requires a recursive or iterative query.
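As a rough sketch of how the question's $36 example could be encoded in this structure (the exact arithmetic depends on the recursive or iterative query you pair with it, which isn't shown here), the 30% markup is a percentage of the running total, while the premium and fees are flat amounts:
INSERT INTO @costs (ID, CalcOrder, PercOfBase, PercOfPrev, FlatAmount) VALUES
(1, 1, NULL, 0.30, NULL),    -- add 30% of the running total (the $36 reserve at this point)
(1, 2, NULL, NULL, 33.0),    -- then the $33 premium
(1, 3, NULL, NULL, 155.0);   -- then the remaining fees, bringing the total to roughly $235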

How can I intersect two ActiveRecord::Relations on an arbitrary column?

If I have a people table with the following structure and records:
drop table if exists people;
create table people (id int, name varchar(255));
insert into people values (1, "Amy");
insert into people values (2, "Bob");
insert into people values (3, "Chris");
insert into people values (4, "Amy");
insert into people values (5, "Bob");
insert into people values (6, "Chris");
I'd like to find the intersection of people with ids (1, 2, 3) and (4, 5, 6) based on the name column.
In SQL, I'd do something like this:
select
group_concat(id),
group_concat(name)
from people
group by name;
Which returns this result set:
id | name
----|----------
1,4 | Amy,Amy
2,5 | Bob,Bob
3,6 | Chris,Chris
In Rails, I'm not sure how to solve this.
My closest so far is:
a = Model.where(id: [1, 2, 3])
b = Model.where(id: [4, 5, 6])
a_results = a.where(name: b.pluck(:name)).order(:name)
b_results = b.where(name: a.pluck(:name)).order(:name)
a_results.zip(b_results)
This seems to work, but I have the following reservations:
Performance - is this going to perform well in the database?
Lazy enumeration - does calling #zip break lazy enumeration of records?
Duplicates - what will happen if either set contains more than one record for a given name? What will happen if a set contains more than one of the same id?
Any thoughts or suggestions?
Thanks
You can use your normal SQL approach to get these arbitrary columns in Ruby like so:
@people = People.select("group_concat(id) as somecolumn1, group_concat(name) as somecolumn2").group(:name)
Each record in @people will now have somecolumn1 and somecolumn2 attributes.