select, delete & update from multiple tables using a single query in mysql - sql

For example, I use the following command to find a record:
SELECT `users`.`mail`
FROM `users`
WHERE `users`.`uid` = %s
If a match is found, I would like to delete this record and, in the same query, update another table. Can I solve this with joins?

If a match is found, I would like to delete this record and, in the same query, update another table. Can I solve this with joins?
Not with a single SQL query, no.
But you could perform those actions, using separate SQL queries, within a single stored procedure. This would be faster than submitting three queries separately from your application, because there's no time/performance lost transferring data back and forth over the wire (to and from your application code).
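A minimal sketch of such a stored procedure in MySQL, assuming a hypothetical second table `audit` that should be updated when the user row is removed (adjust names to your schema):

```sql
DELIMITER //
CREATE PROCEDURE remove_user_mail(IN p_uid INT)
BEGIN
    DECLARE v_mail VARCHAR(255);
    -- Leave v_mail NULL if no row matches, instead of raising a warning
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_mail = NULL;

    -- Look the record up first
    SELECT `mail` INTO v_mail FROM `users` WHERE `uid` = p_uid;

    IF v_mail IS NOT NULL THEN
        -- Delete the matching row...
        DELETE FROM `users` WHERE `uid` = p_uid;
        -- ...and update the other (hypothetical) table in the same call
        UPDATE `audit` SET `removed_mail` = v_mail WHERE `uid` = p_uid;
    END IF;
END //
DELIMITER ;
```

The application then issues a single `CALL remove_user_mail(42);` instead of three separate round trips.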

No, there's no way to do three separate operations in one query.
Why do you need to do it in one?

If your goal is to improve performance, you can do it with a single CALL to the DB, using a stored procedure that runs the two queries inside.

I think the only reason you would want to select, then delete, is if you are doing it yourself and want to make sure you are deleting the right things. Something like
DELETE FROM users WHERE users.uid = %s
will return 0 if it deleted no rows, or the number of rows it deleted. You could just check the return value and decide what to update based on that...
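In MySQL the affected-row count can also be read back inside the same session with `ROW_COUNT()` (most client APIs expose it as "affected rows" too). A sketch, with a hypothetical `audit` table as the second table to update:

```sql
DELETE FROM `users` WHERE `uid` = 42;

-- ROW_COUNT() reports how many rows the last statement affected
SELECT ROW_COUNT() INTO @deleted;

-- Only touch the other table if something was actually removed
UPDATE `audit`
SET `status` = 'removed'
WHERE `uid` = 42
  AND @deleted > 0;
```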

Related

Can I prevent duplicate data in bigquery?

I'm playing with BigQuery; I created a table and inserted some data. I reinserted it and it created duplicates. I'm sure I'm missing something, but is there something I can do to ignore data that already exists in the table?
My use case is that I get a stream of data from various clients, and sometimes their data will include some data they have already sent (I have no control over what they submit).
Is there a way to prevent duplicates when certain conditions are met? The easy case is when the entire row is the same, but what about when only certain columns match?
It's difficult to answer your question without a clear idea of the table structure, but it sounds like you could be interested in the MERGE statement.
With this DML statement you can perform a mix of INSERT, UPDATE, and DELETE statements, hence do exactly what you are describing.
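A minimal sketch in BigQuery, assuming the incoming batch lands in a staging table first and that a column `id` identifies a logical row (the dataset, table, and column names are all hypothetical):

```sql
-- Insert only the staging rows whose id is not already in the target
MERGE `mydataset.target` AS t
USING `mydataset.staging` AS s
ON t.id = s.id
WHEN NOT MATCHED THEN
  INSERT (id, payload, received_at)
  VALUES (s.id, s.payload, s.received_at)
```

If "duplicate" means the whole row rather than a key, the `ON` clause can compare every relevant column instead of just `id`.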

SQL query design with configurable variables

I have a web app that has a large number of tables and variables that the user can select (or not select) at run time. Something like this:
In the DB:
Table A
Table B
Table C
At run time the user can select any number of variables to return. Something like this:
Result Display = A.field1, A.Field3, B.field19
There are up to 100+ total fields spread across 15+ tables that can be returned in a single result set.
We have a query that currently works by creating a temp table to select and aggregate the desired fields then selecting the desired variables from that table. However, this query takes quite some time to execute (30 seconds). I would like to try and find a more efficient way to return the desired results while still allowing the ability for the user to configure the variables to see. I know this can be done as I have seen it done in other areas. Any suggestions?
Instead of using a temporary table, use a view and recompile the view each time you run the query (or just use a subquery or CTE instead of a view). SQL Server might be able to optimize the view based on the fields being selected.
The best reason to use a temporary table would be when intra-day updates are not needed. Then you could create the "temporary" table at night and just select from that table.
The query optimization approach (whether through a view, CTE, or subquery) may not be sufficient. This is a rather hard problem to solve in general. Typically, though, there are probably themes of variables that come from particular subqueries. If so, you can write a stored procedure to generate dynamic SQL that just has the requisite joins for the variables chosen for a given run. Then use that SQL for fetching from the database.
And finally, perhaps there are other ways to optimize the query regardless of the fields being chosen. If you think that might be the case, then simplify the query for human consumption and ask another question.
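A sketch of the dynamic-SQL idea in SQL Server, with hypothetical table and column names: build only the joins that the selected fields actually require, then execute the assembled string.

```sql
-- Hypothetical example: the user selected A.field1, A.field3, B.field19
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT A.field1, A.field3, B.field19
      FROM TableA AS A';

-- Add the join to TableB only because a B.* field was requested;
-- TableC is never touched, so the optimizer never sees it
SET @sql += N' JOIN TableB AS B ON B.a_id = A.id';

EXEC sp_executesql @sql;
```

With 15+ tables, skipping the joins for unselected fields is usually where most of the time is won back.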

How do I get count(*) from table where cond1=$cond1 and cond2=$cond2 real-time

guys:
Assuming that I have a base table, which records tuples. If users want to get the count(*) satisfying some conditions, they can use SQL query like this:
SELECT count(*) FROM table where cond1=$cond1 AND cond2 = $cond2 AND...
Question 1: If the conditions stay the same, how can we get the real-time count? For some reason I cannot use count(*) directly to fulfill the task.
Question 2: If a new condition occurs, how do we extend the approach from question 1?
Although it's hard to imagine what exactly prevents you from using COUNT(*), one possible way of achieving your goal (if I understand your requirements correctly), assuming the number of possible combinations of parameters is limited, might be:
Creating a table that keeps counts for different sets of conditions
Creating trigger(s) on the base table that repopulate(s)/recalculate(s) count values on update, insert, delete operations
Reading counts from that table
Updating your triggers and the counts table whenever a new condition appears (question 2)
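The steps above can be sketched in MySQL as follows; the base table, counts table, and condition columns are hypothetical names:

```sql
-- Counts table keyed by the condition values
CREATE TABLE tuple_counts (
    cond1 VARCHAR(50) NOT NULL,
    cond2 VARCHAR(50) NOT NULL,
    cnt   BIGINT NOT NULL DEFAULT 0,
    PRIMARY KEY (cond1, cond2)
);

-- Keep it in sync on insert; delete/update need similar triggers
CREATE TRIGGER base_ai AFTER INSERT ON base_table
FOR EACH ROW
    INSERT INTO tuple_counts (cond1, cond2, cnt)
    VALUES (NEW.cond1, NEW.cond2, 1)
    ON DUPLICATE KEY UPDATE cnt = cnt + 1;
```

Reading the count then becomes a primary-key lookup: `SELECT cnt FROM tuple_counts WHERE cond1 = ? AND cond2 = ?;`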

Logging the results of a MERGE statement

I have 2 tables: a temporary table with raw data, in which rows may repeat (more than once), and the target table with the actual data (every row is unique).
I'm transferring rows using a cursor. Inside the cursor I use a MERGE statement. How can I print to the console, using DBMS_OUTPUT.PUT_LINE, which rows are updated and which are deleted?
According to the official documentation there is no such feature for this statement.
Is there any workaround?
I don't understand why you would want to do this. The output of dbms_output requires someone to be there to look at it. Not only that, it requires someone to look through all of the output; otherwise it's pointless. If there are more than, say, 20 rows then no one will be bothered to do so. If no one looks through all the output to verify, but you need to actually log it, then you are actively harming yourself by doing it this way.
If you really need to log which rows are updated or deleted there are a couple of options; both involve performance hits though.
You could switch to a BULK COLLECT, which enables you to create a cursor with the ROWID of the temporary table. You BULK COLLECT a JOIN of your two tables into this. Update / delete from the target table based on rowid and according to your business logic then you update the temporary table with a flag of some kind to indicate the operation performed.
You create a trigger on your target table which logs what's happening to another table.
In reality unless it is important that the number of updates / deletes is known then you should not do anything. Create your MERGE statement in a manner that ensures that it errors if anything goes wrong and use the error logging clause to log any errors that you receive. Those are more likely to be the things you should be paying attention to.
Previous posters have already said that this approach is suspicious, both because of the cursor/loop and because the output log requires manual review.
On SQL Server, there is an OUTPUT clause in the MERGE statement that allows you to insert a row in another table with the $action taken (insert,update,delete) and any columns from the inserted or deleted/overwritten data you want. This lets you summarize exactly as you asked.
The equivalent Oracle RETURNING clause may not work for MERGE but does for UPDATE and DELETE.
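A sketch of the SQL Server OUTPUT clause described above, logging each action into a hypothetical `merge_log` table (all table and column names are illustrative):

```sql
-- Log every action the MERGE takes into a separate audit table
MERGE target AS t
USING staging AS s
ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (s.id, s.val)
OUTPUT $action,            -- 'INSERT', 'UPDATE', or 'DELETE'
       inserted.id,        -- new/updated values (NULL on delete)
       deleted.id          -- old/overwritten values (NULL on insert)
INTO merge_log (action_taken, new_id, old_id);
```

Afterwards, `merge_log` can be queried or aggregated instead of reading console output row by row.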

What is the fastest way to compare 2 rows in SQL?

I have 2 different databases. When something changes in the big one (which I don't have access to), some rows get imported into a similar HUGE table in my database. I have a job checking for records in this table; if there are any, it executes a stored procedure, processes them, and deletes them from the table.
My concern is performance (there is a huge amount of data). I would like to know the fastest way to tell whether something has changed between, say, 2 imported rows with 100 columns each. There are no FKs, and none are needed. Chances are that even though I have records in my table, nothing has actually changed.
Also. Let's say there is actually changed something. Is it possible for example to check only for changes inside datetime columns?
Thanks
You can always use update triggers - these give you access to two logical tables, inserted and deleted. You can compare the values in these and base your action on the results.
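A sketch in SQL Server, restricted to the datetime-column case the question asks about; the table and column names are hypothetical:

```sql
-- Fire only on UPDATE; act only when the datetime column really changed
CREATE TRIGGER trg_detect_change ON big_table
AFTER UPDATE
AS
BEGIN
    -- inserted holds the new values, deleted the old ones
    INSERT INTO change_log (id, changed_at)
    SELECT i.id, SYSDATETIME()
    FROM inserted AS i
    JOIN deleted  AS d ON d.id = i.id
    WHERE i.modified_date <> d.modified_date;  -- skip no-op updates
END;
```

Comparing only the columns you care about in the WHERE clause keeps the trigger cheap even with 100-column rows.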