Search for text in a column - SQL

I have an ASP.NET page and a table that looks like this:
Create table Test(id int, Users Varchar(MAX));
Insert into Test select 1, 'admin;operator;user1';
Insert into Test select 2, 'superadmin';
Insert into Test select 3, 'superadmin;admin';
Any row of Test can include more than one user, so I separate them with semicolons. When the client searches in the textbox, they will enter only: admin. I want to return only the rows which include admin.
I cannot use
select id, Users from Test where Users like '%admin%'
because in this case the query will return the 2nd and 3rd rows, which include superadmin.
How can I get the correct result?

You shouldn't be storing delimited data; if possible, normalize your database.
As this isn't always possible, you can get what you want with:
where CONCAT(';', Users, ';') like '%;admin;%'
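For completeness, here is how that predicate fits into a full query against the Test table from the question (a minimal sketch; CONCAT requires SQL Server 2012 or later, so on older versions concatenate with + instead):
select id, Users
from Test
where CONCAT(';', Users, ';') like '%;admin;%';
-- returns rows 1 and 3, but not the 'superadmin'-only row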

As I mentioned in the comment, you need to fix your design; what you're doing right now breaks one of the fundamental normal form rules. Instead, have one row per user:
CREATE TABLE Test (uid int IDENTITY,
                   id int,
                   Username nvarchar(128));
INSERT INTO Test (id, Username)
VALUES (1, N'admin'),
       (1, N'operator'),
       (1, N'user1'),
       (2, N'superadmin'),
       (3, N'superadmin'),
       (3, N'admin');
Then you can simply use the = operator:
SELECT *
FROM Test
WHERE Username = N'admin';

I guess you can try the query below:
SELECT * FROM #tmpTest where users like '%Admin%' AND Users not like 'superAdmin'

Related

Select from a list in a parameter value

I am trying to have a dropdown with subject areas for a school report. The problem I am running into is that in my database, the subjects are grouped by grade and subject instead of just subject. So when I look at gt.standardid in (#SubjectArea) for "Literacy", the standard ids for literacy are (54,61,68,75,88,235), one for each grade level, but I want it to show all of them as Literacy. In my parameter "#SubjectArea" I have specific values I want to add for each subject area, so for the label "Literacy" I want it to select the StandardIds (54,61,68,75,88,235). I am not sure how to accomplish this.
Select
CS.subjectArea
,CS.Name As Group_Name
,GT.Abbreviation
,GT.Name
,GT.standardID
From GradingTask as GT
inner join CurriculumStandard CS
on GT.Standardid = CS.standardid
where GT.ARCHIVED = 0
and GT.standardid in (#SubjectArea)
ORDER BY GT.seq
I would try a cascading parameter approach.
You can have the first parameter be a pre-defined list:
The specific values are not important, but will be used in the next step.
Ideally your IDs would be in a table already, but if not you can use something like this:
declare #SubjectIDs as table
(
[SubjectName] nvarchar(50),
[SubjectID] int
);
insert into #SubjectIDs
(
[SubjectName],
[SubjectID]
)
values
('Literacy', 54),
('Literacy', 61),
('Literacy', 68),
('Literacy', 75),
('Literacy', 88),
('Literacy', 235);
select
SubjectID
from #SubjectIDs
where SubjectName in (#SubjectList);
Make this into a data set. I'm going to call it DS_SubjectIDs.
Make a new hidden or internal parameter called SubjectIDs:
Set it to get its values from the DS_SubjectIDs query:
You can now use the parameter for SubjectIDs in your final query.
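For example, the final report query could reference the hidden parameter like this (a sketch based on the question's query; I'm assuming SubjectIDs is multi-valued and is referenced with the usual @ parameter syntax):
Select
CS.subjectArea
,CS.Name As Group_Name
,GT.Abbreviation
,GT.Name
,GT.standardID
From GradingTask as GT
inner join CurriculumStandard CS
on GT.Standardid = CS.standardid
where GT.ARCHIVED = 0
and GT.standardid in (@SubjectIDs) -- resolved to the ID list by the cascading parameter
ORDER BY GT.seq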

postgres insert into multiple tables after each other and return everything

Given postgres database with 3 tables:
users(user_id: uuid, ...)
urls(slug_id:int8 pkey, slug:text unique not null, long_url:text not null)
userlinks(user_id:fkey users.user_id, slug_id:fkey urls.slug_id)
pkey(user_id, slug_id)
The userlinks table exists as a cross reference to associate url slugs to one or more users.
When a new slug is created by a user, I'd like to INSERT into the urls table, take the slug_id that was created there, and INSERT into userlinks with the current user's ID and that slug_id.
Then, if possible, return both results as a table of records.
The current user's ID is accessible with auth.uid().
I'm doing this with a stored procedure in supabase
I've gotten this far but I'm stuck:
WITH urls_row as (
INSERT INTO urls(slug, long_url)
VALUES ('testslug2', 'testlong_url2')
RETURNING slug_id
)
INSERT INTO userlinks(user_id, slug_id)
VALUES (auth.uid(), urls_row.slug_id)
--RETURNING *
--RETURNING (urls_record, userlinks_record)
Try this:
WITH urls_row as (
INSERT INTO urls(slug, long_url)
VALUES ('testslug2', 'testlong_url2')
RETURNING slug_id
), userlink_row AS (
INSERT INTO userlinks(user_id, slug_id)
SELECT auth.uid(), urls_row.slug_id
FROM urls_row
RETURNING *
)
SELECT *
FROM urls_row AS ur
INNER JOIN userlink_row AS us
ON ur.slug_id = us.slug_id
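Since you mentioned running this as a stored procedure in Supabase, you could also wrap the statement in a SQL function and call it over RPC. The sketch below assumes the schema as described; the function name create_user_link and its parameter names are illustrative only:
create function create_user_link(p_slug text, p_long_url text)
returns table (slug_id int8, slug text, long_url text, user_id uuid)
language sql
as $$
  with urls_row as (
    insert into urls (slug, long_url)
    values (p_slug, p_long_url)
    returning slug_id, slug, long_url
  ), userlink_row as (
    insert into userlinks (user_id, slug_id)
    select auth.uid(), urls_row.slug_id
    from urls_row
    returning user_id, slug_id
  )
  -- join the two RETURNING sets so the caller gets one combined row
  select ur.slug_id, ur.slug, ur.long_url, us.user_id
  from urls_row ur
  join userlink_row us on ur.slug_id = us.slug_id;
$$;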

Join with json column?

I need to find rows in the users table by joining on a column in the queries table.
I wrote some SQL, but it takes 0.200s to run, while SELECT * FROM ... takes 0.080s.
How can I improve performance?
db-fiddle example
The tables are :
CREATE TABLE users (
id INT,
browser varchar
);
CREATE TABLE queries (
id INT,
settings jsonb
);
INSERT INTO users (id,browser) VALUES (1, 'yandex');
INSERT INTO users (id, browser) VALUES (2, 'google');
INSERT INTO users (id, browser) VALUES (3, 'google');
INSERT INTO queries (id, settings) VALUES (1, '{"browser":["Yandex", "TestBrowser"]}');
and the query :
select x2.id as user_id, x1.id as query_id
FROM (
SELECT id, json_array_elements_text((settings->>'browser')::JSON) browser
FROM queries) x1
JOIN users x2 ON lower(x1.browser::varchar) = lower(x2.browser::varchar)
group by 1,2;
json_array_elements_text((settings->>'browser')::JSON)
'->>' converts the result to text. Then you cast it back to JSON. Doing that on one row (if you only have one) is not really going to be a problem, but it is rather pointless.
You could instead do:
jsonb_array_elements_text(settings->'browser')
ON lower(x1.browser::varchar) = lower(x2.browser::varchar)
You can create an index that can be used for this:
create index on users (lower(browser));
It won't do much good on a table with 3 rows. But presumably you don't really have 3 rows.
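Putting both suggestions together, the rewritten query would look like this (a sketch against the fiddle schema):
select x2.id as user_id, x1.id as query_id
FROM (
  SELECT id, jsonb_array_elements_text(settings->'browser') browser
  FROM queries) x1
JOIN users x2 ON lower(x1.browser) = lower(x2.browser)
group by 1, 2;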

Insert a new record with a field containing the total record count of the table

I wonder if there is a function that would let me get the total record count of a table, to use when inserting a new record into that table.
I have this table:
CREATE TABLE "TEST"
( "NAME" VARCHAR2(20 BYTE),
"ID" NUMBER,
"FLAG" NUMBER
) ;
Insert into TEST (NAME,ID,FLAG) values ('Ahlahslfh',1,1);
Insert into TEST (NAME,ID, FLAG) values ('Buoiuop',2,1);
Insert into TEST (NAME,ID, FLAG) values ('UOIP',12,0);
My intention is to issue a statement that is equivalent to this:
INSERT INTO TEST( NAME, ID, FLAG )
VALUES( 'TST', 3,1 );
The statement I used below generated an error:
INSERT INTO TEST ( NAME, ID, FLAG )
VALUES ( 'TST', SELECT COUNT(*)+1 FROM TEST WHERE FLAG=1,1 );
Below is the final result I am expecting:
Is there a way around it? Of course, I could put this in a script, count the records into a variable, and insert that variable into the field. I just wonder if there is a more elegant solution that does this in one statement.
Thanks!
This is likely to be a very bad way to set an id. In general, I think you should use sequences/identity/auto_increment and not worry about gaps.
But, you can do what you want using parentheses -- these are needed for subqueries:
INSERT INTO TEST(NAME, ID, FLAG)
VALUES ('TST',
(SELECT COUNT(*)+1 FROM TEST WHERE FLAG = 1),
1
);
Or, alternatively:
INSERT INTO TEST(NAME, ID, FLAG)
SELECT 'TST', COUNT(*) + 1, 1
FROM TEST
WHERE FLAG = 1;
I must emphasize that this seems dangerous. It is quite possible that you will get duplicate ids. You should really let the database insert a new value and not worry about gaps.
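For reference, the safer approach would look something like this (a sketch assuming Oracle 12c or later, where identity columns are available; older versions would use a sequence instead):
CREATE TABLE "TEST"
( "NAME" VARCHAR2(20 BYTE),
  "ID" NUMBER GENERATED BY DEFAULT AS IDENTITY,
  "FLAG" NUMBER
);
-- ID is assigned automatically; no duplicates even under concurrent inserts
Insert into TEST (NAME, FLAG) values ('TST', 1);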

Inserting SCOPE_IDENTITY() into a junction table

Consider the following little script:
create table #test
(testId int identity
,testColumn varchar(50)
)
go
create table #testJunction
(testId int
,otherId int
)
insert into #test
select 'test data'
insert into #testJunction(testId,otherId)
select SCOPE_IDENTITY(),(select top 10 OtherId from OtherTable)
--The second query here signifies some business logic to resolve a many-to-many
--fails
This, however, will work:
insert into #test
select 'test data'
insert into #testJunction(otherId,testId)
select top 10 OtherId ,(select SCOPE_IDENTITY())
from OtherTable
--insert order of columns is switched in #testJunction
--SCOPE_IDENTITY() repeated for each OtherId
The second solution works and all is well. I know it doesn't matter, but for continuity's sake I like having the insert done in the order in which the columns are present in the database table. How can I achieve that? The following attempt gives a "subquery returned more than 1 value" error:
insert into #test
select 'test data'
insert into #testJunction(otherId,testId)
values ((select SCOPE_IDENTITY()),(select top 10 drugId from Drugs))
EDIT:
On a webpage a new row is entered into a table with a structure like
QuizId,StudentId,DateTaken
(QuizId is an identity column)
I have another table with Quiz Questions like
QuestionId,Question,CorrectAnswer
Any number of quizzes can have any number of questions, so in this example testJunction resolves that many-to-many. Ergo, I need SCOPE_IDENTITY() repeated for however many questions are on the quiz.
The version that fails
insert into #testJunction(testId,otherId)
select SCOPE_IDENTITY(),(select top 10 OtherId from OtherTable)
will insert one row, with scope_identity() in the first column and a set of 10 values in the second column. A column cannot hold a set, so that one fails.
The one that works
insert into #testJunction(otherId,testId)
select top 10 OtherId ,(select SCOPE_IDENTITY())
from OtherTable
will insert 10 rows from OtherTable with OtherId in the first column and the scalar value of scope_identity() in the second column.
If you need to switch the order of the columns, it would look like this instead:
insert into #testJunction(testId,otherId)
select top 10 SCOPE_IDENTITY(), OtherId
from OtherTable
You need the output clause. Look it up in BOL.
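For example (a sketch using the tables from this question; @NewIds is an illustrative name for the table variable that captures the generated identity):
declare @NewIds table (testId int);

insert into #test (testColumn)
output inserted.testId into @NewIds
select 'test data';

-- fan the captured id out across the ten OtherTable rows
insert into #testJunction (testId, otherId)
select n.testId, o.OtherId
from @NewIds n
cross join (select top 10 OtherId from OtherTable) o;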
Try this way:
Declare @Var int
insert into #test
select 'test data'
select @Var = scope_identity()
insert into #testJunction(testId, otherId)
select top 10 @Var, drugId from Drugs