How to design a task assignment system? - sql

I want to design a system similar to Stack Overflow's review feature. That is to say:
There are n tasks, which should be assigned to users (the number of users is unknown). At any one time a task should be assigned to at most one user; different users should not be assigned the same task.
For example, n = 8, and when a user enters the system he is assigned 3 tasks by default.
At 17:00, Tom enters the system and gets tasks 1, 2, 3.
At 17:01, Jim enters the system and gets tasks 4, 5, 6.
At 17:02, Jerry enters the system and gets tasks 7, 8.
At 17:03, Bob enters the system and gets no tasks.
At 17:05, Tom completes tasks 1 and 2, and leaves the system.
At 17:06, Bob enters the system again and gets task 3.
Suppose I use a database to store the task info.
My solution is that when tasks 1, 2, 3 are assigned to Tom, the 3 records are deleted from the DB and held in memory, so other users will not get them. When Tom leaves the system, his completed and uncompleted tasks are inserted into the DB again (with task status "completed" or "uncompleted").
The disadvantage is that holding records in memory is not 100% safe: if the system crashes, data may be lost.
Does somebody know how Stack Overflow designs its review feature, or can you share other solutions? I'm wondering whether SELECT ... FOR UPDATE is a good fit for this use case.

What you need to implement is a FIFO, i.e. a simple queue. In Oracle the best tool for such a thing (unless you want to implement an actual queue with AQ) is SELECT ... FOR UPDATE with the SKIP LOCKED clause. SKIP LOCKED lets us easily operate the queue with multiple users.
Here's a simple interface:
create or replace package task_mgmt is
  function get_next_task return tasks.id%type;
  procedure complete_task (p_id in tasks.id%type);
  procedure release_task (p_id in tasks.id%type);
end task_mgmt;
/
This is a bare-bones implementation:
create or replace package body task_mgmt is

  function get_next_task return tasks.id%type
  is
    return_value tasks.id%type;
    cursor c_tsk is
      select id
      from   tasks
      where  status = 'open'
      order  by date_created, id
      for    update skip locked;
  begin
    open c_tsk;
    fetch c_tsk into return_value;
    if c_tsk%found then
      -- flag the task as in progress; the row lock is held until the caller
      -- commits (complete_task) or rolls back (release_task)
      update tasks
      set    status = 'progress'
      ,      assigned = user
      where  current of c_tsk;
    end if;
    close c_tsk;
    return return_value;  -- null when no open task is available
  end get_next_task;

  procedure complete_task (p_id in tasks.id%type)
  is
  begin
    update tasks
    set    status = 'complete'
    ,      date_completed = sysdate
    where  id = p_id;
    commit;
  end complete_task;

  procedure release_task (p_id in tasks.id%type)
  is
  begin
    rollback;  -- undoes the 'progress' update and releases the lock
  end release_task;

end task_mgmt;
/
Updating the status when the user pops the queue creates a lock. Because of the SKIP LOCKED clause, the next user won't see that task. This is a lot cleaner than deleting and re-inserting records.
Here's some data:
create table tasks (
id number not null
, descr varchar2(30) not null
, date_created date default sysdate not null
, status varchar2(10) default 'open' not null
, assigned varchar2(30)
, date_completed date
, constraint task_pk primary key (id)
)
/
insert into tasks (id, descr, date_created) values (1000, 'Do something', date '2015-05-28')
/
insert into tasks (id, descr, date_created) values (1010, 'Look busy', date '2015-05-28')
/
insert into tasks (id, descr, date_created) values (1020, 'Get coffee', date '2015-06-12')
/
Let's pop! Here's Session one:
SQL> var tsk1 number;
SQL> exec :tsk1 := task_mgmt.get_next_task ;
PL/SQL procedure successfully completed.
SQL> print :tsk1
TSK1
----------
1000
SQL>
Meanwhile in Session two:
SQL> var tsk2 number;
SQL> exec :tsk2 := task_mgmt.get_next_task ;
PL/SQL procedure successfully completed.
SQL> print :tsk2
TSK2
----------
1010
SQL>
Back in Session one:
SQL> exec task_mgmt.complete_task (:tsk1);
PL/SQL procedure successfully completed.
SQL> exec :tsk1 := task_mgmt.get_next_task ;
PL/SQL procedure successfully completed.
SQL> print :tsk1
TSK1
----------
1020
SQL>
The main drawback of this approach is that it requires users to maintain stateful sessions while they work on the task. If that's not the case then you need an API in which get_next_task() is a discrete transaction, and you can forget about locking.
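One possible shape for such a stateless API (a sketch only, not part of the original answer; it assumes the claim should be persisted the moment it is made) is to commit the assignment inside the call itself:
create or replace function claim_next_task return tasks.id%type
is
    -- Sketch: claims the oldest open task in its own (discrete) transaction,
    -- so the assignment survives the session disconnecting afterwards.
    l_id tasks.id%type;
    cursor c_tsk is
        select id
          from tasks
         where status = 'open'
         order by date_created, id
           for update skip locked;
begin
    open c_tsk;
    fetch c_tsk into l_id;
    if c_tsk%found then
        update tasks
           set status = 'progress'
             , assigned = user
         where current of c_tsk;
    end if;
    close c_tsk;
    commit;        -- discrete transaction: the claim is persisted immediately
    return l_id;   -- null when no open task was available
end claim_next_task;
/
Releasing a task then has to be an explicit UPDATE back to status 'open' (plus a commit) rather than a rollback, and abandoned tasks need some sort of timeout sweep, since there is no session lock left to expire.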
Incidentally, it's probably better to let users grab a task rather than assign them through a logon trigger (or whatever you have in mind by "Tom enters the system and gets tasks 1, 2, 3"). Pulling tasks is how the SO Review queue works.
Also, just assign one task at a time. That way you can get efficient distribution of work. You want to avoid the situation where Tom has three tasks on his plate, one of which he isn't going to complete, and Bob has nothing to do. That is, unless you're Bob.


oracle concurrent select for update and insert

I have a table with two columns: k (primary key) and value.
I'd like to:
1. select for update by k; if k is not found, insert a new row with a default value.
2. with the returned value (from the existing or newly inserted row) do some processing.
3. update the row and commit.
Is it possible to make this "select for update and insert default value if not found"?
If we implement (1) as select / check if found / insert if not found, we have concurrency problems: two sessions could run the select concurrently on a non-existent key, both will try to insert, and one of them will fail.
In this case the desired behavior is to perform the select/insert atomically, so that one session performs it and the second stays locked until the first one commits, and then uses the value inserted by the first one.
We currently implement it by always doing an "insert ... if not exist.../commit" before the "select for update", but this means always attempting an insert even though it is rarely needed.
Is there any way to implement this in one SQL statement?
Thanks!!
See if k is available
SELECT * FROM table
WHERE k = value
FOR UPDATE
If no rows returned, then it doesn't exist. Insert it:
INSERT INTO table(k, col1, col2)
VALUES (value, val1, default)
select ... for update is the first step you should take; without it, you can't "reserve" that row for further processing (unless you're willing to lock the whole table in exclusive mode; if that "processing" takes no time, that could also be an option, especially if there are not many users who will be doing it).
If row exists, the rest is simple - process it, update it, commit.
But, if it doesn't exist, you'll have to insert a new row (just as you said), and here comes the problem of two (or more) users inserting the same value.
To avoid it, create a function which
returns a unique ID value for a new row
is an autonomous transaction
Why autonomous? Because the function performs DML (an update or insert), and a function called from SQL can't do that unless it is an autonomous transaction.
Users will have to use that function to get the next ID value. Here's an example: you'll need a table (my_id) which holds the last used ID (and every user who accesses it via the function will get a new value).
Table:
SQL> create table my_id (id number);
Table created.
Function:
SQL> create or replace function f_id
2 return number
3 is
4 pragma autonomous_transaction;
5 l_nextval number;
6 begin
7 select id + 1
8 into l_nextval
9 from my_id
10 for update of id;
11
12 update my_id set
13 id = l_nextval;
14
15 commit;
16 return (l_nextval);
17
18 exception
19 when no_data_found then
20 lock table my_id in exclusive mode;
21
22 insert into my_id (id)
23 values (1);
24
25 commit;
26 return(1);
27 end;
28 /
Function created.
Use it as
SQL> select f_id from dual;
F_ID
----------
1
SQL>
That's it ... the code you'll use will then be something like this:
SQL> create table test
2 (id number constraint pk_test primary key,
3 name varchar2(10),
4 datum date
5 );
Table created.
SQL> create or replace procedure p_test (par_id in number)
2 is
3 l_id test.id%type;
4 begin
5 select id
6 into l_id
7 from test
8 where id = par_id
9 for update;
10
11 update test set datum = sysdate where id = par_id;
12 exception
13 when no_data_found then
14 insert into test (id, name, datum)
15 values (f_id, 'Little', sysdate); --> function call is here
16 end;
17 /
Procedure created.
SQL> exec p_test (1);
PL/SQL procedure successfully completed.
SQL> select * from test;
ID NAME DATUM
---------- ---------- -------------------
1 Little 04.09.2021 20:49:21 --> row was inserted
SQL> exec p_test (1);
PL/SQL procedure successfully completed.
SQL> select * from test;
ID NAME DATUM
---------- ---------- -------------------
1 Little 04.09.2021 20:49:30 --> row was updated
SQL>
Use a sequence to generate a surrogate primary key instead of using a natural key. If you had a real natural key, then it would be extremely unlikely for two users to submit the same value at the same time.
There are several ways to automatically generate primary keys. I prefer to use sequence defaults, like this:
create sequence test_seq;
create table test1
(
k number default test_seq.nextval,
value varchar2(4000),
constraint test1_pk primary key(k)
);
If you can't switch to a surrogate key or a real natural key:
Change the "insert ... if not exist.../commit" to a simpler "insert ... if not exist", and perform all operations in a single transaction. Inserting the same primary key in different sessions, even uncommitted, is impossible in Oracle. Although the SELECT won't cause a block, the INSERT will. This behavior is an exception to Oracle's implementation of isolation, the "I" in "ACID", and in this case that behavior can work to your advantage.
If two sessions attempt to insert the same primary key at the same time, the second session will hang, and when the first session finally commits, the second session will fail with the exception "ORA-00001: unique constraint (X.Y) violated". Let that exception become your flag for knowing when a user has submitted a duplicate value. You can catch the exception in the application and have the user try again.
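A rough sketch of that insert-first pattern (not from the original answer; the table name kv_table and the default value 'default' are placeholders, only k as primary key and value come from the question):
create or replace procedure process_value (p_k in kv_table.k%type)
is
    -- Sketch only: insert first, then lock and process the row.
    l_value kv_table.value%type;
begin
    begin
        -- try to create the row inside the same transaction (no commit here)
        insert into kv_table (k, value) values (p_k, 'default');
    exception
        when dup_val_on_index then
            -- another session created the row; if its insert was uncommitted,
            -- ours blocked on the lock and raised ORA-00001 once it committed
            null;
    end;

    -- the row now exists: lock it and read the current value
    select value
      into l_value
      from kv_table
     where k = p_k
       for update;

    -- ... do the processing with l_value here ...

    update kv_table
       set value = l_value
     where k = p_k;

    commit;
end process_value;
/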

Does SELECT start a transaction in PL/SQL?

I was told that the following code won't help me check for duplicates, because the result might change between the SELECT and the INSERT statement.
PROCEDURE AddNew(Pname VARCHAR2, Pcountry VARCHAR2)
AS
  publisher_count NUMBER;
  already_exists BOOLEAN;
BEGIN
  SELECT COUNT(*) INTO publisher_count FROM Publishers WHERE name = Pname;
  already_exists := publisher_count > 0;
  IF already_exists THEN
    RAISE_APPLICATION_ERROR(-20014,'Publisher already exists!');
  END IF;
  INSERT INTO Publishers(id,name,country)
  VALUES (NewPublisherId(),Pname,Pcountry);
END;
This post claims that SELECT starts a transaction:
Why do I get an open transaction when just selecting from a database View?
This part of documentation suggests otherwise:
A transaction implicitly begins with any operation that obtains a TX lock:
When a statement that modifies data is issued
When a SELECT ... FOR UPDATE statement is issued
When a transaction is explicitly started with a SET TRANSACTION statement or the DBMS_TRANSACTION package
So? Does SELECT start a transaction or not?
The latter is true: https://docs.oracle.com/cloud/latest/db112/SQLRF/statements_10005.htm#SQLRF01705
A transaction implicitly begins with any operation that obtains a TX lock:
When a statement that modifies data is issued
When a SELECT ... FOR UPDATE statement is issued
When a transaction is explicitly started with a SET TRANSACTION statement or the DBMS_TRANSACTION package
But it really does not matter from the point of view of the main problem - checking whether the record already exists in the database. Even if the transaction is explicitly started using SET TRANSACTION ..., your code simply does not detect duplicate records!
Just do a simple test, manually simulating the procedure in two simultaneous sessions, and you will see:
CREATE TABLE Publishers(
id int,
name varchar2(100)
);
Let's say that in session #1 the procedure begins at 8:00:00.0000:
SQL> Set transaction name 'session 1';
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Let's say that in session #2 the same procedure begins at 8:00:00.0020, just after the insert was made in session #1, but still before session #1 commits:
SQL> Set transaction name 'session 2';
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
Transaction #2 does not see uncommitted changes made by session #1, so session #2 assumes that there is no record for John and also inserts it into the table:
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Now session #1 commits:
SQL> Commit;
Commit complete.
and a few milliseconds later session #2 commits too:
SQL> Commit;
Commit complete.
And the final result is a duplicated record, even though the transaction was explicitly started:
select * from publishers;
ID NAME
---------- ----------------------------------------------------------------------------------------------------
1 John
1 John
========== EDIT =================
You can avoid the duplicity by executing statement SET TRANSACTION ISOLATION LEVEL SERIALIZABLE in the beginning. – #Draex_
Many think that ISOLATION LEVEL SERIALIZABLE will solve the problem magically. Unfortunately, it will not help.
Let's see how it works on a simple example:
Session #1
SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Session #2
SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Session #1 again:
SQL> commit;
Commit complete.
SQL> select * from publishers;
ID NAME
---------- --------
1 John
and back to session #2
SQL> commit;
Commit complete.
SQL> select * from publishers;
ID NAME
---------- --------
1 John
1 John
As you can see, the magic of ISOLATION LEVEL SERIALIZABLE did not work.
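The answer demonstrates the problem but not the fix. A common fix (a sketch, not part of the original answer) is to put a unique constraint on the name and let the database do the duplicate check, translating ORA-00001 into the application error from the question:
-- Sketch only: assumes a unique constraint on Publishers.name is acceptable.
ALTER TABLE Publishers ADD CONSTRAINT publishers_name_uk UNIQUE (name);

CREATE OR REPLACE PROCEDURE AddNew(Pname VARCHAR2, Pcountry VARCHAR2)
AS
BEGIN
  INSERT INTO Publishers(id, name, country)
  VALUES (NewPublisherId(), Pname, Pcountry);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- a concurrent session inserted the same name first (this session may
    -- have waited on its lock until it committed); report it as before
    RAISE_APPLICATION_ERROR(-20014,'Publisher already exists!');
END;
/
With the constraint in place the race between the two sessions is resolved by the database: the second INSERT waits on the first and then fails, instead of silently creating a duplicate.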

what is the statement for "where" condition in pl/sql

I want to make a trigger that will be executed for each row before a delete on the table Clients. What I'm trying to do is, when someone tries to delete a row:
DELETE FROM Clients WHERE id=5 AND name = 'test';
I want to print something on the screen like:
"you deleted with the following conditions: id is 5, name is test"
or execute another query with the same conditions...
Any kind of help is much appreciated.
EDIT
Let's suppose the user typed the following query: DELETE FROM Clients WHERE name = 'test'
create or replace
TRIGGER DELETECLIENT
BEFORE DELETE ON CLIENTS
DECLARE
pragma autonomous_transaction;
name1 clients.name%Type;
BEGIN
select name into name1 from clients where name = %I want here the name in the user query %;
IF name1 != null THEN
DELETE FROM clients WHERE name = name1;
commit;
END IF;
END;
What I've tested:
I tried adding :old.name and :new.name, but this doesn't work when the row doesn't exist in my database.
PS: I'm doing this for educational purposes only; I know the script doesn't make sense. I just want to know how to achieve the following task: getting the parameters typed after WHERE in the query.
The correlation names and the pseudorecords they represent, i.e. :old and :new by default, only have any meaning for row-level triggers:
Note:
This topic applies only to triggers that fire at row level—that is, row-level simple DML triggers and compound DML triggers with row-level timing point sections.
You cannot refer to these in a statement-level trigger, and only :old would have any meaning in a delete trigger. On the other hand, statement-level triggers fire even if no data is affected, whereas a row-level trigger won't fire if, in your example, no data is deleted - because there are no matching rows in your table. (This applies to INSTEAD OF triggers on views as well, in case you're wondering whether that would be a workaround.)
So basically you can't do what you're attempting - conditionally deciding whether to delete from another table instead - from a trigger. Triggers are rarely the right answer. For your 'scenario where if it doesn't exist on my table i should try to find it on other site and delete it' you can try to delete a row locally, test whether it did anything, and then decide to delete from the remote table if it didn't. In a PL/SQL block that might be as simple as:
create procedure delete_client(p_name clients.name%type) as
begin
delete from clients where name = p_name;
if sql%rowcount = 0 then
delete from clients#site2 where name = p_name;
end if;
end delete_client;
/
And rather than deleting directly from the table, you make everyone call the procedure instead. Say you start off with some data:
select * from clients;
ID NAME
---------- ----------
1 Joe
2 Anna
select * from clients#site2;
ID NAME
---------- ----------
1 Joe
3 Max
Then calling the procedure for two names:
exec delete_client('Joe');
exec delete_client('Max');
... has removed local and remote records appropriately:
select * from clients;
ID NAME
---------- ----------
2 Anna
select * from clients#site2;
ID NAME
---------- ----------
1 Joe
Joe was only deleted from the local schema despite existing in both; Max didn't exist locally so was deleted from the remote schema.
It doesn't have to be a procedure; if you're deleting through JDBC etc. you can test the result of an execute() call to see how many rows were affected, which is all sql%rowcount is doing really, and the application code can decide whether to do the second delete.
But with a procedure (probably in a package) you can grant execute on that, and remove delete privileges from the users, so they can't bypass this check and do a straight delete from clients where ...
If you really want some 'display' output for testing purposes:
create procedure delete_client(p_name clients.name%type) as
begin
delete from clients where name = p_name;
if sql%rowcount > 0 then
dbms_output.put_line('Deleted ' || sql%rowcount
|| ' rows from local schema for "' || p_name || '"');
else
delete from clients#site2 where name = p_name;
if sql%rowcount > 0 then
dbms_output.put_line('Deleted ' || sql%rowcount
|| ' rows from remote schema for "' || p_name || '"');
else
dbms_output.put_line('No rows deleted on local or remote schema for "'
|| p_name || '"');
end if;
end if;
end delete_client;
/
set serveroutput on
exec delete_client('Joe');
anonymous block completed
Deleted 1 rows from local schema for "Joe"
exec delete_client('Max');
anonymous block completed
Deleted 1 rows from remote schema for "Max"
exec delete_client('Fred');
anonymous block completed
No rows deleted on local or remote schema for "Fred"
But you shouldn't assume anyone calling your procedure will have serveroutput on, or even be using a client capable of consuming dbms_output messages.

PL/SQL Triggers Library Infotainment System

I am trying to make a Library Infotainment System using PL/SQL. Before any of you speculate: yes, it is a homework assignment, but I've tried hard and am asking a question here only after trying hard enough.
Basically, I have few tables, two of which are:
Issue(Bookid, borrowerid, issuedate, returndate) and
Borrower(borrowerid, name, status).
The status in the Borrower table can be either 'student' or 'faculty'. I have to implement a restriction, using a trigger, that I can issue only 2 books per student at any point in time and 3 books per faculty member at any point in time.
I am totally new to PL/SQL. It might be easy, and I have an idea of how to do it. This is the best I could do. Please help me find design/compiler errors.
CREATE OR REPLACE TRIGGER trg_maxbooks
AFTER INSERT ON ISSUE
FOR EACH ROW
DECLARE
BORROWERCOUNT INTEGER;
SORF VARCHAR2(20);
BEGIN
SELECT COUNT(*) INTO BORROWERCOUNT
FROM ISSUE
WHERE BORROWER_ID = :NEW.BORROWER_ID;
SELECT STATUS INTO SORF
FROM BORROWER
WHERE BORROWER_ID = :NEW.BORROWER_ID;
IF ((BORROWERCOUNT=2 AND SORF='STUDENT')
OR (BORROWERCOUNT=3 AND SORF='FACULTY')) THEN
ROLLBACK TRANSACTION;
END IF;
END;
Try something like this:
CREATE OR REPLACE TRIGGER TRG_MAXBOOKS
BEFORE INSERT
ON ISSUE
FOR EACH ROW
BEGIN
IF ( :NEW.BORROWERCOUNT > 2
AND :NEW.SORF = 'STUDENT' )
OR ( :NEW.BORROWERCOUNT > 3
AND :NEW.SORF = 'FACULTY' )
THEN
RAISE_APPLICATION_ERROR (
-20001,
'Cannot issue beyond the limit, retry as per the limit' );
END IF;
END;
/
There should not be a commit or rollback inside a trigger. Raising the exception is logically equivalent to a ROLLBACK of the triggering statement.
This is so ugly I can't believe you're being asked to do something like this. Triggers are one of the worst ways to implement business logic. They will often fail utterly when confronted with more than one user. They are also hard to debug because they have hard-to-anticipate side effects.
In your example, for instance, what happens if two people insert at the same time? (Hint: they won't see each other's modification until they both commit; a nice way to generate corrupt data :)
Furthermore, as you are probably aware, you can't reference other rows of the triggering table inside a row-level trigger (this will raise a mutating table error).
That being said, in your case you could use an extra column in Borrower to record the number of books being borrowed. You'll have to make sure that the trigger correctly updates this value. This will also take care of the multi-user problem since as you know only one session can update a single row at the same time. So only one person could update a borrower's count at the same time.
This should help you with the insert trigger (you'll also need a delete trigger and, to be on the safe side, an update trigger in case someone updates Issue.borrowerid):
CREATE OR REPLACE TRIGGER issue_borrower_trg
AFTER INSERT ON issue
FOR EACH ROW
DECLARE
l_total_borrowed NUMBER;
l_status borrower.status%type;
BEGIN
SELECT nvl(total_borrowed, 0) + 1, status
INTO l_total_borrowed, l_status
FROM borrower
WHERE borrower_id = :new.borrower_id
FOR UPDATE;
-- business rule
IF l_status = 'student' and l_total_borrowed >= 3
/* OR faculty */ THEN
raise_application_error(-20001, 'limit reached!');
END IF;
UPDATE borrower
SET total_borrowed = l_total_borrowed
WHERE borrower_id = :new.borrower_id;
END;
Update: the above approach won't even work in your case, because you record the issue date/return date in the issue table, so the number of books borrowed is not constant over time. In that case I would go with a table-level (statement-level) AFTER-DML trigger: after each DML statement, verify that every row in the table satisfies your business rules (it won't scale nicely, though; for a solution that scales, see this post by Tom Kyte).
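A rough sketch of that statement-level check (not part of the original answer; it assumes the column names from the question and that a book still out has returndate IS NULL):
-- Statement-level trigger: it may query ISSUE freely (no mutating-table
-- problem), but it rescans the data after every DML statement on ISSUE.
create or replace trigger trg_issue_limits
after insert or update or delete on issue
declare
    l_violations number;
begin
    select count(*)
      into l_violations
      from (select i.borrowerid, b.status, count(*) as books_out
              from issue i
              join borrower b on b.borrowerid = i.borrowerid
             where i.returndate is null          -- book not yet returned
             group by i.borrowerid, b.status)
     where (status = 'student' and books_out > 2)
        or (status = 'faculty' and books_out > 3);

    if l_violations > 0 then
        raise_application_error(-20001, 'Borrowing limit exceeded');
    end if;
end;
/
As the answer warns, this does not scale, and without additional serialization two sessions with uncommitted inserts can still slip past the check at the same time.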

Dynamic Cursors

I use a cursor for the statement:
SELECT NAME FROM STUDENT WHERE ROLL = 1;
I used:
CURSOR C IS SELECT NAME FROM STUDENT WHERE ROLL = roll;
--roll is a variable I receive via a procedure, and the procedure works fine for the received parameter.
Upon executing this, I am able to retrieve all records with roll = 1.
Now, I need to retrieve the records of a group (possibly via a cursor), just like:
SELECT NAME FROM STUDENT WHERE ROLL IN (2, 4, 6);
But the values in the IN clause are known only at run time. How should I do this? That is, is there any way I could assign parameters to the WHERE clause of the cursor?
I tried using an array in the declaration of the cursor, but an error pops up saying something like: standard types cannot be used.
I used:
CURSOR C IS SELECT NAME FROM STUDENT WHERE ROLL IN (rolls);
--rolls is an array initialized with the required roll numbers.
First, I assume that the parameter to your procedure doesn't actually match the name of a column in the STUDENT table. If you actually coded the statement you posted, roll would be resolved as the name of the column, not the parameter or local variable so this statement would return every row in the STUDENT table where the ROLL column was NOT NULL.
CURSOR C
IS SELECT NAME
FROM STUDENT
WHERE ROLL = roll;
Second, while it is possible to use dynamic SQL as #Gaurav Soni suggests, doing so generates a bunch of non-sharable SQL statements. That's going to flood the shared pool, probably aging other statements out of cache, and use a lot of CPU hard-parsing the statement every time. Oracle is built on the premise that you are going to parse a SQL statement once, generally using bind variables, and then execute the statement many times with different values for the bind variables. Oracle can go through the process of parsing the query, generating the query plan, placing the query in the shared pool, etc. only once and then reuse all that when you execute the query again. If you generate a bunch of SQL statements that will never be used again because you're using dynamic SQL without bind variables, Oracle is going to end up spending a lot of time caching SQL statements that will never be executed again, pushing useful cached statements that will be used again out of the shared pool meaning that you're going to have to re-parse those queries the next time they're encountered.
Additionally, you've opened yourself up to SQL injection attacks. An attacker can exploit the procedure to read any data from any table or execute any function that the owner of the stored procedure has access to. That is going to be a major security hole even if your application isn't particularly security conscious.
You would be better off using a collection. That prevents SQL injection attacks and it generates a single sharable SQL statement so you don't have to do constant hard parses.
SQL> create type empno_tbl is table of number;
2 /
Type created.
SQL> create or replace procedure get_emps( p_empno_arr in empno_tbl )
2 is
3 begin
4 for e in (select *
5 from emp
6 where empno in (select column_value
7 from table( p_empno_arr )))
8 loop
9 dbms_output.put_line( e.ename );
10 end loop;
11 end;
12 /
Procedure created.
SQL> set serveroutput on;
SQL> begin
2 get_emps( empno_tbl( 7369,7499,7934 ));
3 end;
4 /
SMITH
ALLEN
MILLER
PL/SQL procedure successfully completed.
create or replace procedure dynamic_cur(p_empno VARCHAR2) IS
cur sys_refcursor;
v_ename emp.ename%type;
begin
open cur for 'select ename from emp where empno in (' || p_empno || ')';
loop
fetch cur into v_ename;
exit when cur%notfound;
dbms_output.put_line(v_ename);
end loop;
close cur;
end dynamic_cur;
Procedure created
Run the procedure dynamic_cur
declare
v_empno varchar2(200) := '7499,7521,7566';
begin
dynamic_cur(v_empno);
end;
Output
ALLEN
WARD
JONES
Note: As mentioned by XQbert, a dynamic cursor opens you up to SQL injection, but if you're not working on any critical requirement where security is involved, then you can use this.
Maybe you can pass rolls as a set of quoted comma-separated values,
e.g. '1', '2' etc.
If this value is passed into the procedure in a varchar input variable, then it can be used to get multiple rows as per the table match.
Hence the cursor
SELECT NAME FROM STUDENT WHERE ROLL IN (rolls);
will be evaluated as
SELECT NAME FROM STUDENT WHERE ROLL IN ('1','2');
Hope it helps
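Note that in static SQL a single VARCHAR2 bound into IN (rolls) is compared as one literal string rather than expanded into a list, so this suggestion needs either dynamic SQL (as in dynamic_cur above) or splitting the string inside the query. A rough sketch of the splitting approach (assuming numeric roll numbers; the procedure name is hypothetical):
-- Sketch only: splits a comma-separated string of roll numbers into rows so
-- one shareable, bind-friendly statement can serve any list.
-- regexp_count requires Oracle 11g or later.
create or replace procedure get_students (p_rolls in varchar2)
is
begin
    for s in (select name
                from student
               where roll in (select to_number(regexp_substr(p_rolls, '[^,]+', 1, level))
                                from dual
                             connect by level <= regexp_count(p_rolls, '[^,]+')))
    loop
        dbms_output.put_line(s.name);
    end loop;
end get_students;
/
Called, for example, as exec get_students('2,4,6');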