How to display the list of a user's books in PHP / SQL

I am a beginner in programming.
I want to create a site that displays the list of books a user has previously added. Here is the schema:
table book:
id, name, detail, urlimage
table users:
id, name, email, password
table add_book:
users_id INT, book_id INT, PRIMARY KEY (users_id, book_id)
I have three tables in my database: the book table, the users table and the add_book table; their structures are given above, and add_book.users_id and add_book.book_id are foreign keys. When a user adds a book to his list, an entry is created in the add_book table with the request ('INSERT INTO kal224_sory.add_book (users_id, book_id) VALUES (:users, :book)'); $sql->execute(['users' => $users_id, 'book' => $book_id]); and that works well. Here is where there is a problem: I am trying to send as JSON the fields of the book table that correspond to the book_id entries in add_book for a given user (add_book.users_id = $users_id). The code I tried is:

You need to SELECT the data from the add_book table that belongs to a particular users_id:
SELECT * FROM add_book WHERE users_id = 1 -- change '1' to whichever user's ID you want to view data for
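That only returns the rows of the link table, though. To get the book fields the question asks for, join add_book to book; a minimal sketch, assuming the column names shown in the question:
-- All books added by user 1 (bind your $users_id here as a parameter).
SELECT b.id, b.name, b.detail, b.urlimage
FROM add_book AS ab
JOIN book AS b ON b.id = ab.book_id
WHERE ab.users_id = 1;
In PHP, fetching these rows with fetchAll() and passing the result to json_encode() then produces the JSON payload the question describes.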

Creating an SQL FUNCTION that inserts data into two existing tables

I am trying to create a FUNCTION that will insert complete and relevant information into an existing table. I need the FUNCTION to check if certain entities exist in another table and, if not, insert the data. Example code below:
CREATE FUNCTION insert_payment (customer_uuid uuid, customer_name varchar(63), payment_uuid uuid, total_amount integer)
So let's say that I wanted to enter a payment into a table called Payments. I want the FUNCTION to check another existing table called Customers to see if customer_uuid and customer_name already exist within the table. If not, I would like the FUNCTION to insert the customer_uuid and customer_name information into Customers as well as enter the data from all four parameters into Payments.
This is my first question to ask on Stack Overflow so if greater clarification is needed please let me know. I am also a student and still learning how to communicate effectively when talking about coding so, again, if further clarification is needed I will try my best. Thank you!
You can make use of INSERT ... ON CONFLICT DO NOTHING to insert the customer and skip the insert if it already exists.
So with these table definitions:
create table customer (id uuid primary key, name text);
create table payment (id uuid primary key, customer_id uuid not null references customer, amount integer);
This function would do what you want:
CREATE FUNCTION insert_payment (customer_uuid uuid, customer_name varchar(63),
payment_uuid uuid, total_amount integer)
returns void
as
$$
insert into customer (id, name)
values (customer_uuid, customer_name)
on conflict do nothing;
insert into payment (id, customer_id, amount)
values (payment_uuid, customer_uuid, total_amount);
$$
language sql;
This is just as efficient as first checking the existence (select exists (...)) before inserting, because the INSERT statement will do that check anyway.
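Calling the function is then a single statement; for example, with made-up UUIDs:
-- Hypothetical call: inserts the customer if new, then records the payment.
SELECT insert_payment('11111111-1111-1111-1111-111111111111'::uuid, 'Alice',
                      '22222222-2222-2222-2222-222222222222'::uuid, 100);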
I could help you more if I knew your exact table field names, but here is some code that may help:
// set variables from the request
$customer_uuid = $_POST['customer_uuid'];
$customer_name = $_POST['customer_name'];
// query the customers table for an existing entry, using bound parameters
$duplicate_customer = $this->db->prepare(
    "SELECT customer_uuid, customer_name FROM customers
     WHERE customer_uuid = :uuid AND customer_name = :name"
);
$duplicate_customer->execute(['uuid' => $customer_uuid, 'name' => $customer_name]);
// insert only when the customer does not already exist
if ($duplicate_customer->rowCount() === 0) {
    // write your insert query for the customers table here
}

How to insert data into a relation table correctly in SQL?

I have some data in a general table called ImportH. The data has been imported from a CSV file. I have also created two tables, Media and Host (each one has its respective ID). These tables are related by a third table called HostMedia.
Each Host can have (or not) different types of Media (facebook, email, phone...).
I'll provide some images of the tables: (screenshots of Table ImportH, Table Host and Table Media not reproduced here).
How can I insert the data from the other tables into table HostMedia? This table looks like this:
create table HostMedia (
host_id int references Host (host_id),
id_media int references Media (id_verification),
primary key (host_id, id_media)
);
I have tried this:
insert into HostMedia (host_id, id_media)
select Host.host_id, Media.id_verification
from Host, Media;
But this produces the Cartesian product, assigning every host all of the rows in the Media table. What's the correct way?
The "media" column in your "ImportH" table looks almost like a valid JSON, so this might work:
INSERT INTO HostMedia (host_id, id_media)
SELECT i.host_id, m.id_verification
FROM (
    SELECT host_id,
           json_array_elements_text(replace(media, '''', '"')::json) AS media_name
    FROM ImportH
) AS i
JOIN Media AS m ON m.media = i.media_name;
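To see what the inner subquery does, here is the unnesting step in isolation, with an assumed media value of ['facebook', 'email']:
-- Turns the quasi-JSON text into one row per media name:
SELECT json_array_elements_text(replace('[''facebook'', ''email'']', '''', '"')::json);
-- returns two rows: facebook, email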
Notes: it would be easier if you
provided text data instead of screenshots
used logical column names

SQL - create entry in table A for every entry in table B

I have a Users table (model) with an id. Now I am creating a Settings table (model) which should store settings for every User from the Users table. How can I create an entry in the Settings table for every existing User?
Users columns:
id; somevalue; someothervalue
Settings columns:
id; user_id; message_1; message_2
I want to create an entry in the Settings table for every User. Right now every new User gets one via a before_create filter, but old users don't have an entry in the Settings table.
INSERT INTO TableSettings (list of column names ...)
SELECT [list of column names ...]
FROM TableUsers
Just make sure that the list of column names in TableSettings matches (in number of columns and data types) the list of columns selected from TableUsers.
To start with just the userID and message_1:
INSERT INTO TableSettings (userID, message_1)
SELECT id, 'A message with someotherValue: ' + someothervalue
FROM TableUsers
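If some users might already have a Settings row, a NOT EXISTS guard keeps the backfill from inserting duplicates; a sketch, assuming the column names above:
-- Backfill Settings only for users that have no row yet.
INSERT INTO TableSettings (userID, message_1)
SELECT u.id, 'A message with someotherValue: ' + u.someothervalue
FROM TableUsers AS u
WHERE NOT EXISTS (SELECT 1 FROM TableSettings AS s WHERE s.userID = u.id);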

How Do I Deep Copy a Set of Data, and Change FK References to Point to All the Copies?

Suppose I have Table A and Table B. Table B references Table A. I want to deep copy a set of rows in Table A and Table B. I want all of the new Table B rows to reference the new Table A rows.
Note that I'm not copying the rows into any other tables. The rows in table A will be copied into table A, and the rows in table B will be copied into table B.
How can I ensure that the foreign key references get readjusted as part of the copy?
To clarify, I'm trying to find a generic way to do this. The example I'm giving involves two tables, but in practice the dependency graph may be much more complicated. Even a generic way to dynamically generate SQL to do the work would be fine.
UPDATE:
People are asking why this is necessary, so I'll give some background. It may be way too much, but here goes:
I'm working with an old desktop application that's been moved to a client-server model. But, the application still uses a rudimentary in-house binary file format for storing data for its tables. A data file is just a header followed by a series of rows, each of which is just the binary serialized field values, the order of which is determined by a schema text file. The only thing good about it is that it's very fast. It's terrible in every other respect. I'm moving the application to SQL Server and trying not to degrade the performance too badly.
This is a kind of scheduling application; the data's not critical to anybody, and there's no audit tracking, etc. necessary. It's not a supermassive amount of data, and we don't necessarily need to keep very old data around if the database grows too large.
One feature that they are accustomed to is the ability to duplicate entire schedules in order to create "what-if" scenarios that they can muck with. Any user can do this as many times as they want, as often as they want. In the old database, the data files for each schedule are stored in their own data folder, identified by name. So, copying a schedule was as simple as copying the data folder and renaming it.
I must be able to do effectively the same thing with SQL Server or the migration will not work. Maybe you're thinking that I could copy only the data that actually gets changed in order to avoid redundancy; but that honestly sounds too complicated to be feasible.
To throw another wrench into the mix, there can be a hierarchy of schedule data folders. So, a data folder may contain a data folder, which may contain a data folder. And the copying can occur at any level.
In SQL Server, I'm implementing a nested set hierarchy to mimic this. I have a DATA_SET table like this:
CREATE TABLE dbo.DATA_SET
(
DATA_SET_ID UNIQUEIDENTIFIER PRIMARY KEY,
NAME NVARCHAR(128) NOT NULL,
LFT INT NOT NULL,
RGT INT NOT NULL
)
So, there's a tree structure of data sets. Each data set represents a schedule, and may contain child data sets. Every row in every table has a DATA_SET_ID FK reference, indicating which data set it belongs to. Whenever I copy a data set, I copy all the rows that belong to it (and to its descendant data sets) from each table back into the same table, but referencing the new data sets.
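For illustration, nested-set rows for a root schedule containing one child copy might look like this (names and values assumed):
INSERT INTO dbo.DATA_SET (DATA_SET_ID, NAME, LFT, RGT)
VALUES (NEWID(), 'Master schedule', 1, 4),
       (NEWID(), 'What-if copy',    2, 3);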
So, here's a simple concrete example:
CREATE TABLE FOO
(
FOO_ID BIGINT PRIMARY KEY,
DATA_SET_ID BIGINT FOREIGN KEY REFERENCES DATA_SET(DATA_SET_ID) NOT NULL
)
CREATE TABLE BAR
(
BAR_ID BIGINT PRIMARY KEY,
DATA_SET_ID BIGINT FOREIGN KEY REFERENCES DATA_SET(DATA_SET_ID) NOT NULL,
FOO_ID BIGINT FOREIGN KEY REFERENCES FOO(FOO_ID) NOT NULL
)
INSERT INTO FOO
SELECT 1, 1 UNION ALL
SELECT 2, 1 UNION ALL
SELECT 3, 1

INSERT INTO BAR
SELECT 1, 1, 1 UNION ALL
SELECT 2, 1, 2 UNION ALL
SELECT 3, 1, 3
So, let's say I copy data set 1 into a new data set of ID 2. After I copy, the tables will look like this:
FOO
FOO_ID, DATA_SET_ID
1 1
2 1
3 1
4 2
5 2
6 2
BAR
BAR_ID, DATA_SET_ID, FOO_ID
1 1 1
2 1 2
3 1 3
4 2 4
5 2 5
6 2 6
As you can see, the new BAR rows are referencing the new FOO rows. It's not the rewiring of the DATA_SET_ID's that I'm asking about. I'm asking about rewiring the foreign keys in general.
So, that was surely too much information, but there you go.
I'm sure there are a lot of concerns about performance with the idea of bulk copying the data like this. The tables are not going to be huge. I'm not expecting more than 1000 records in any table, and most of the tables will be much much smaller than that. Old data sets can be deleted outright with no repercussions.
Thanks,
Tedderz
Here is an example with three tables that can probably get you started.
DB schema
CREATE TABLE users
(user_id int auto_increment PRIMARY KEY,
user_name varchar(32));
CREATE TABLE agenda
(agenda_id int auto_increment PRIMARY KEY,
user_id int,
agenda_name varchar(7));
CREATE TABLE events
(event_id int auto_increment PRIMARY KEY,
agenda_id int,
event_name varchar(8));
A stored procedure to clone a user together with their agenda and events records:
DELIMITER $$
CREATE PROCEDURE clone_user(IN uid INT)
BEGIN
DECLARE last_user_id INT DEFAULT 0;
INSERT INTO users (user_name)
SELECT user_name
FROM users
WHERE user_id = uid;
SET last_user_id = LAST_INSERT_ID();
INSERT INTO agenda (user_id, agenda_name)
SELECT last_user_id, agenda_name
FROM agenda
WHERE user_id = uid;
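-- Map each old agenda row to its clone by row number (a1 numbers the old
-- rows, a2 the new ones), then copy the events across using that mapping.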
INSERT INTO events (agenda_id, event_name)
SELECT a3.agenda_id_new, e.event_name
FROM events e JOIN
(SELECT a1.agenda_id agenda_id_old,
a2.agenda_id agenda_id_new
FROM
(SELECT agenda_id, @n := @n + 1 n
FROM agenda, (SELECT @n := 0) n
WHERE user_id = uid
ORDER BY agenda_id) a1 JOIN
(SELECT agenda_id, @m := @m + 1 m
FROM agenda, (SELECT @m := 0) m
WHERE user_id = last_user_id
ORDER BY agenda_id) a2 ON a1.n = a2.m) a3
ON e.agenda_id = a3.agenda_id_old;
END$$
DELIMITER ;
To clone a user:
CALL clone_user(3);
I recently found myself needing to solve a similar problem; that is, I needed to copy a set of rows in a table (Table A) as well as all of the rows in related tables which have foreign keys pointing to Table A's primary key. I was using Postgres, so the exact queries may differ, but the overall approach is the same. The biggest benefit of this approach is that it can be used recursively to go infinitely deep.
TL;DR: the approach looks like this:
1) find all the related tables/columns of Table A
2) copy the necessary data into temporary tables
3) create a trigger and function to propagate primary key column updates to related foreign key columns in the temporary tables
4) update the primary key column in the temporary tables to the next value in the auto-increment sequence
5) re-insert the data back into the source tables, and drop the temporary tables/triggers/function
1) The first step is to query the information schema to find all of the tables and columns which are referencing Table A. In Postgres this might look like the following:
SELECT tc.table_name, kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY'
AND ccu.table_name='<Table A>'
AND ccu.column_name='<Primary Key>'
2) Next we need to copy the data from Table A, and any other tables which reference Table A - let's say there is one called Table B. To start this process, let's create a temporary table for each of these tables and populate it with the data that we need to copy. This might look like the following:
CREATE TEMP TABLE temp_table_a AS (
SELECT * FROM <Table A> WHERE ...
)
CREATE TEMP TABLE temp_table_b AS (
SELECT * FROM <Table B> WHERE <Foreign Key> IN (
SELECT <Primary Key> FROM temp_table_a
)
)
3) We can now define a function that will cascade primary key column updates out to related foreign key columns, and a trigger which will execute whenever the primary key column changes. For example:
CREATE OR REPLACE FUNCTION cascade_temp_table_a_pk()
RETURNS trigger AS
$$
BEGIN
UPDATE <Temp Table B> SET <Foreign Key> = NEW.<Primary Key>
WHERE <Foreign Key> = OLD.<Primary Key>;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_temp_table_a
AFTER UPDATE
ON <Temp Table A>
FOR EACH ROW
WHEN (OLD.<Primary Key> != NEW.<Primary Key>)
EXECUTE PROCEDURE cascade_temp_table_a_pk();
4) Now we just update the primary key column in <Temp Table A> to the next value of the source table's sequence (<Table A>). This will activate the trigger, and the updates will be cascaded out to the foreign key columns in <Temp Table B>. In Postgres you can do the following:
UPDATE <Temp Table A>
SET <Primary Key> = nextval(pg_get_serial_sequence('<Table A>', '<Primary Key>'))
5) Insert the data from the temporary tables back into the source tables, and then drop the temporary tables, trigger, and function.
INSERT INTO <Table A> (SELECT * FROM <Temp Table A>);
INSERT INTO <Table B> (SELECT * FROM <Temp Table B>);
DROP TRIGGER trigger_temp_table_a ON <Temp Table A>;
DROP FUNCTION cascade_temp_table_a_pk();
It is possible to take this general approach and turn it into a script which can be called recursively in order to go infinitely deep. I ended up doing just that using Python (our application was using Django, so I was able to use the Django ORM to make some of this easier).

Need some help in creating a query in SQL?

I have 6 tables:
Staff ( StaffID, Name )
Product ( ProductID, Name )
Faq ( FaqID, Question, Answer, ProductID* )
Customer (CustomerID, Name, Email)
Ticket ( TicketID, Problem, Status, Priority, LoggedTime, CustomerID* , ProductID* )
TicketUpdate ( TicketUpdateID, Message, UpdateTime, TicketID* , StaffID* )
Question to be answered:
Given a Product ID, remove the record for that Product. When a product is removed all associated FAQ can stay in the database but should have a null reference in the ProductID field. The deletion of a product should, however, also remove any associated tickets and their updates. For completeness deleted tickets and their updates should be copied to an audit table or a set of tables that maintain historical data on products, their tickets and updates. (Hint: you will need to define an additional table or set of tables to maintain this audit information and automatically copy any deleted tickets and ticket updates when a product is deleted). Your audit table/s should record the user which requested the deletion and the timestamp for the deletion operation.
I have created additional maintain_audit table:
CREATE TABLE maintain_audit(
TicketID INTEGER NOT NULL,
TicketUpdateID INTEGER NOT NULL,
Message VARCHAR(1000),
mdate TIMESTAMP NOT NULL,
muser VARCHAR(128),
PRIMARY KEY (TicketID, TicketUpdateID)
);
Additionally, I have created one function and one trigger:
CREATE OR REPLACE FUNCTION maintain_audit()
RETURNS TRIGGER AS $BODY$
BEGIN
INSERT INTO maintain_audit (TicketID, TicketUpdateID, Message, muser, mdate)
(SELECT Ticket.TicketID, TicketUpdate.TicketUpdateID, Message, user, now() FROM Ticket, TicketUpdate WHERE Ticket.TicketID = TicketUpdate.TicketID AND Ticket.ProductID = OLD.ProductID);
RETURN OLD;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER maintain_audit
BEFORE DELETE
ON Product
FOR EACH ROW
EXECUTE PROCEDURE maintain_audit();
DELETE FROM Product WHERE Product.ProductID = 30;
When I run this, all I get is this:
ERROR: null value in column "productid" violates not-null constraint
CONTEXT: SQL statement "UPDATE ONLY "public"."faq" SET "productid" = NULL WHERE $1 OPERATOR(pg_catalog.=) "productid""
Could you help me sort out this problem?
What you probably want is triggers. Not sure what RDBMS you are using, but that's where you should start. I started from zero and had triggers up and running in a somewhat similar situation within an hour.
In case you don't already know, triggers do something after a specific type of query happens on a table, such as an insert, update or delete. You can do any type of query.
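For instance, a skeletal Postgres delete trigger has two parts, a function and the trigger itself (the audit_log table and all names here are assumed):
-- The function runs once per deleted row and can execute any SQL you need.
CREATE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (message, logged_at) VALUES ('row deleted', now());
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER product_audit
AFTER DELETE ON Product
FOR EACH ROW EXECUTE PROCEDURE log_delete();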
Another tip I would give you is not to delete anything, since that could break data integrity. You could just add an "active" boolean field, set active to false, then filter those out in most of your system's queries. Alternatively, you could just move the associated records out to a Products_archive table that has the same structure. Easy to do with:
select * into destination from source where 1=0
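(Since 1=0 matches no rows, that statement creates an empty Products_archive with the same structure as the source table.) Moving a product into the archive later is then, as a sketch with an assumed ProductID:
INSERT INTO Products_archive SELECT * FROM Product WHERE ProductID = 30;
DELETE FROM Product WHERE ProductID = 30;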
Still, I would do the work you need done using triggers because they're so automatic.
1) Create a foreign key for Ticket.product_id, and one for TicketUpdate.ticket_id, with ON DELETE CASCADE. This will automatically delete all tickets and ticket updates when you delete the product.
2) Create an audit table for product deletions with product_id, user and timestamp. The audit tables for Ticket and TicketUpdate should mirror those tables exactly.
3) Create a BEFORE DELETE trigger on table Ticket which copies tickets to the audit table.
4) Do the same for TicketUpdate.
5) Create an AFTER DELETE trigger on Product to capture who requested the deletion in the product audit table.
6) In table Faq, make Product_id a foreign key with ON DELETE SET NULL. A sketch of the foreign keys is below.
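A minimal sketch of steps 1 and 6 in Postgres, assuming the table and column names from the question (the constraint names are made up):
-- Tickets and their updates disappear together with the product
-- (the BEFORE DELETE audit triggers still fire during the cascade).
ALTER TABLE Ticket
    ADD CONSTRAINT fk_ticket_product
    FOREIGN KEY (ProductID) REFERENCES Product (ProductID)
    ON DELETE CASCADE;
ALTER TABLE TicketUpdate
    ADD CONSTRAINT fk_ticketupdate_ticket
    FOREIGN KEY (TicketID) REFERENCES Ticket (TicketID)
    ON DELETE CASCADE;
-- FAQs survive but lose their product reference. ProductID must be nullable
-- for SET NULL to work; the NOT NULL constraint on faq.productid is exactly
-- what caused the error in the question.
ALTER TABLE Faq ALTER COLUMN ProductID DROP NOT NULL;
ALTER TABLE Faq
    ADD CONSTRAINT fk_faq_product
    FOREIGN KEY (ProductID) REFERENCES Product (ProductID)
    ON DELETE SET NULL;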