I have a number of queries that check whether a record exists in a table and add it if it does not. There are 2 queries involved in this process:
1) select id from table where <conditions>
If any ids come back I do nothing (or update the records with those ids as I want); if there are no ids I execute the second query:
2) insert into table(<columns>) values(<values>)
Of course <conditions>, <columns> and <values> are correct strings for their context.
What I want is to join these 2 queries into one:
insert into table(<columns>)
select <parameter list>
where not exists(select id from table where <conditions>)
So there will be only one query involved instead of 2.
The example of this query is:
insert into persons(name) select #Name where not exists (select id from persons where name = #Name)
The problem is that I use parameters for queries and when I execute this composite query I get an exception saying "A parameter is not allowed in this location. Ensure that the '#' sign is in a valid location or that parameters are valid at all in this SQL statement."
Is there a way to work around this parameter restriction so the query doesn't throw an exception?
I'm using C# and SqlCe-related classes.
What's the benefit of merging the queries into a composite? If the query is supported (and I actually doubt that the SQL Compact query parser supports this), the SQL engine is still going to have to essentially do both queries anyway. I find the two separate queries more readable.
As separate queries, I'm also willing to bet you could make this substantially faster than the composite query by using TableDirect. Take a look at the answer to this question and benchmark it in your app. It's likely 10x faster than your existing query pair because it forgoes the query parser altogether.
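For what it's worth, a minimal sketch of the two separate parameterized statements, run as two commands (the persons table and name column come from the question's example; @Name stands for the parameter):
-- 1) Check for an existing record
SELECT id FROM persons WHERE name = @Name
-- 2) Only if the SELECT above returned no rows
INSERT INTO persons (name) VALUES (@Name)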
Related
After our SQL Server was upgraded from 2012 to 2014, some stored procedures' performance has decreased.
We found that a query hint can be used to solve this problem; however, I wonder how we can add a query hint to SQL of the following structure:
INSERT INTO TABLE1 ....
SELECT ...
FROM TABLE2
--OPTION (QUERYTRACEON 9481)
If I simply add it at the end of the snippet (the commented line), does it affect the SELECT query, the INSERT query, or both? And what if I want to apply a query hint to only the SELECT or only the INSERT?
If you use this trace flag, you are instructing the query optimizer to use the 2012 cardinality estimator.
You can do this at the query level, as stated in the documentation:
If a trace flag affects any query execution plan in an unwanted way,
but improves some other query execution plan, you may want to enable a
corresponding trace flag for only a particular query.
If you use it as you are doing now, it will affect the whole INSERT INTO ... SELECT statement. If you only want one of the queries to be executed with the old cardinality estimator, you need to split your statement into two different queries (for example, select the data into a temp table and then insert from it into your table) so that the trace flag is used on only one of them.
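A minimal sketch of that split, using hypothetical column names (col1, col2) for the question's TABLE1/TABLE2; the hint is attached only to the statement that populates the temp table:
SELECT col1, col2
INTO #staging                      -- temp table holding the intermediate result
FROM TABLE2
OPTION (QUERYTRACEON 9481);        -- only this SELECT is compiled with the 2012 (version 70) CE

INSERT INTO TABLE1 (col1, col2)    -- this statement is compiled without the trace flag
SELECT col1, col2
FROM #staging;

DROP TABLE #staging;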
You can read more about this also on this StackOverflow post.
Trace flag 9481 forces the query optimizer to use version 70 (the SQL Server 2012 version) of the cardinality estimator when creating the query plan (msdn)
It certainly affects all the operators in your query, both the insert and the select, but what would you expect to be different in the insert operator?
The cardinality estimation affects join strategies, and the insert does not perform any scan/seek/join.
I was wondering if there is a reliable way to instrument SQL query statements such as
SELECT * from table where 1=1;
into a new statement like the following that stores the result relation in a temporary table.
SELECT * into temp result from table where 1=1;
For this simple statement, I can parse it and add an INTO clause before FROM. I was just wondering if there are libraries that can do this for complicated statements with WITH etc., so that the end result of a set of query statements is stored in a result table.
BTW, I am using PHP/JavaScript with PostgreSQL 9.3.
Thanks.
Clarification:
This question is not a duplicate of Creating temporary tables in SQL or Creating temporary tables in SQL, because those two questions are about SQL grammar/usage issues around creating temporary tables. This question is not about SQL usage but about program instrumentation and whether/how a sequence of SQL query statements can be analyzed and transformed to achieve certain effects. So it's not about how to manually write new statements from scratch, but rather about how to transform an existing set of statements.
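To illustrate the kind of transformation being asked about, a sketch in PostgreSQL: rather than splicing an INTO clause into the parsed statement, the original query text can be wrapped unchanged in CREATE TEMP TABLE ... AS, which also accepts WITH queries (some_table and result are hypothetical names):
-- Wrap the original statement verbatim; works for plain SELECTs and for WITH queries.
CREATE TEMP TABLE result AS
WITH recent AS (
    SELECT * FROM some_table WHERE 1 = 1
)
SELECT * FROM recent;

-- The stored relation can then be inspected like any table.
SELECT count(*) FROM result;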
I have a particular SQL file in which I copy all contents from one table in a database to another table in another database.
Traditional INSERT statements are used to perform this operation. However, the table has 8.5 million records and the operation fails. The queries succeed with a smaller database.
Also, when I run a SELECT * query on that particular table, SQL Server Express shows an out-of-memory exception.
In particular, there is one table that has this many records, and it is this table alone that I want to copy from the old DB to the new DB.
What are alternate ways to achieve this?
Is there any quick workaround by which we can avoid this exception and make the queries succeed?
Let me put it this way. Why would this operation fail when there are a lot of records?
I don't know if this counts as "traditional INSERT", but have you tried "INSERT INTO"?
http://www.w3schools.com/sql/sql_select_into.asp
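As a rough sketch of that suggestion, assuming both databases live on the same SQL Server instance and using placeholder names (OldDb, NewDb, dbo.BigTable, Id, Name); copying in smaller batches can also reduce log and memory pressure compared with one huge statement:
-- Straight copy in one statement:
INSERT INTO NewDb.dbo.BigTable (Id, Name)
SELECT Id, Name
FROM OldDb.dbo.BigTable;

-- Or in batches of 100,000 rows, assuming Id is a unique key and the target starts empty:
WHILE 1 = 1
BEGIN
    INSERT INTO NewDb.dbo.BigTable (Id, Name)
    SELECT TOP (100000) s.Id, s.Name
    FROM OldDb.dbo.BigTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM NewDb.dbo.BigTable AS t WHERE t.Id = s.Id)
    ORDER BY s.Id;

    IF @@ROWCOUNT = 0 BREAK;   -- stop once no new rows were copied
END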
How do I debug a complex query with multiple nested sub-queries in SQL Server 2005?
I'm debugging a stored procedure and trigger in Visual Studio 2005. I'd like to be able to see what the results of these sub-queries are, as I feel that this is where the bug is coming from. An example query (slightly redacted) is below:
UPDATE
foo
SET
DateUpdated = ( SELECT TOP 1 inserted.DateUpdated FROM inserted )
...
FROM
tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
WHERE
ProgramPhaseID = ( SELECT ...)
Visual Studio doesn't seem to offer a way for me to Watch the result of the sub query. Also, if I use a temporary table to store the results (temporary tables are used elsewhere also) I can't view the values stored in that table.
Is there any way that I can add a watch or in some other way view these sub-queries? I would love it if there were some way to "Step Into" the query itself, but I imagine that wouldn't be possible.
OK, first, I would be leery of using subqueries in a trigger. Triggers should be as fast as possible, so get rid of any correlated subqueries, which might run row by row instead of in a set-based fashion; rewrite them as joins. If you only want to update records based on what was in the inserted table, then join to it, and also join to the table you are updating. Exactly what are you trying to accomplish with this trigger? It might be easier to give advice if we understood the business rule you are trying to implement.
To debug a trigger, this is what I do.
I write a script to:
1) Do the actual insert to the table without the trigger on it.
2) Create a temp table named #inserted (and/or one named #deleted).
3) Populate that table as I would expect the inserted table in the trigger to be populated by the insert you do.
4) Add the trigger code (minus the create or alter trigger parts), substituting #inserted every time I reference inserted. (If you plan to run this multiple times until you are ready to use it in a trigger, throw it in an explicit transaction and roll back after checking your results.)
5) Add a query to check the table(s) you are changing with the trigger for the values you wanted to change.
Now if you need to add debug statements to see what is happening between steps, you can do so.
Run it, making changes, until you get the results you want.
Once you have the query working as you expect it to, it is easy to take the # signs off inserted and use it to create the body of the trigger.
A minimal sketch of this script structure is shown below.
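-- Sketch of such a debugging script (tblFoo and its columns are hypothetical; the real table,
-- columns and trigger body come from your own schema):

-- 1) The actual insert, with the trigger not yet on the table
INSERT INTO tblFoo (ID, DateUpdated) VALUES (1, GETDATE());

-- 2) and 3) Build #inserted the way the trigger's inserted table would look for that insert
SELECT ID, DateUpdated
INTO #inserted
FROM tblFoo
WHERE ID = 1;

-- 4) The trigger body, with "inserted" replaced by "#inserted", wrapped in a transaction
--    so the script can be rerun; add PRINT/SELECT debug statements between steps as needed
BEGIN TRANSACTION;

    -- ...trigger code goes here...

    -- 5) Check the affected table(s) for the values you expected to change
    SELECT DateUpdated FROM tblFoo WHERE ID = 1;

ROLLBACK TRANSACTION;

DROP TABLE #inserted;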
This is what I usually do in this type of scenario:
Print out the exact SQL generated by each subquery.
Then run each of them in Management Studio, as suggested above.
Check whether the different parts are giving you the data you expect.
I'm working with an Oracle 10g database, and I want to extract a group of records from one table, and then use that for pulling records out of a bunch of related tables.
If this were T-SQL, I'd do it something like this:
CREATE TABLE #PatientIDs (
pId int
)
INSERT INTO #PatientIDs
select distinct pId from appointments
SELECT * from Person WHERE Person.pId IN (select pId from #PatientIDs)
SELECT * from Allergies WHERE Allergies.pId IN (select pId from #PatientIDs)
DROP TABLE #PatientIDs
However, all the helpful pages I look at make this look like a lot more work than it could possibly be, so I think I must be missing something obvious.
(BTW, instead of running this as one script, I'll probably open a session in Oracle SQL Developer, create the temp table, and then run each query off it, exporting them to CSV as I go along. Will that work?)
Oracle has temporary tables, but they require explicit creation:
create global temporary table...
The data in a temporary table is private to the session that created it and can be session-specific or transaction-specific. If the data is not to be deleted until the session ends, you need to use ON COMMIT PRESERVE ROWS at the end of the CREATE statement. There's also no rollback or commit support for them...
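A minimal sketch for the question's example (patient_ids is a hypothetical name; pid, appointments and person come from the question):
-- Created once, as a schema object; the data is private to each session.
CREATE GLOBAL TEMPORARY TABLE patient_ids (
    pid NUMBER
) ON COMMIT PRESERVE ROWS;

INSERT INTO patient_ids
SELECT DISTINCT pid FROM appointments;

SELECT * FROM person WHERE person.pid IN (SELECT pid FROM patient_ids);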
I see no need for temp tables in the example you gave - it risks that updates made to the APPOINTMENTS table after the temp table was populated won't be reflected. Use IN/EXISTS/JOIN instead:
SELECT p.*
FROM PERSON p
WHERE EXISTS (SELECT NULL
FROM APPOINTMENTS a
WHERE a.id = p.personid)
SELECT p.*
FROM PERSON p
WHERE p.personid IN (SELECT a.id
FROM APPOINTMENTS a)
SELECT DISTINCT p.*
FROM PERSON p
JOIN APPOINTMENTS a ON a.id = p.personid
JOINing risks duplicates if more than one APPOINTMENT record is associated with a single PERSON record, which is why I added the DISTINCT.
Oracle doesn't have the facility to casually create temporary tables in the same way as SQL Server. You have to create the table explicitly in the database schema (create global temporary table). This also means that you need permissions that allow you to create tables, and the script must explicitly be deployed as a database change. The table is also visible in a global namespace.
This is a significant idiomatic difference between Oracle and SQL Server programming. Idiomatic T-SQL can make extensive use of temporary tables, and genuine requirements to write procedural T-SQL code are quite rare, substantially because of this facility.
Idiomatic PL/SQL is much quicker to drop out to procedural code, and you would probably be better off doing this than trying to fake temporary tables. Note that PL/SQL has performance-oriented constructs such as flow control for explicit parallel processing over cursors and nested result sets (cursor expressions); recent versions have a JIT compiler.
You have access to a range of tools to make procedural PL/SQL code run quickly, and this is arguably idiomatic PL/SQL programming. The underlying paradigm is somewhat different from T-SQL, and the approach to temporary tables is one of the major points where the system architecture and programming idioms differ.
While the exact problem has been solved, if you want to build up some useful skills in this area, I would take a look at PL/SQL collections, and particularly bulk SQL operations using PL/SQL collections (BULK COLLECT / bulk binds), the RETURNING clause, and defining collections using %ROWTYPE.
You can dramatically reduce the amount of PL/SQL code you write by understanding all of the above - although always remember that an all-SQL solution will almost always beat a PL/SQL one.
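A small sketch of what that looks like, reusing the question's PERSON and APPOINTMENTS tables and the pid column as assumed names:
DECLARE
    TYPE person_tab IS TABLE OF person%ROWTYPE;   -- collection typed from the table's row structure
    l_persons person_tab;
BEGIN
    SELECT p.*
    BULK COLLECT INTO l_persons                   -- fetch the whole result set in one round trip
    FROM person p
    WHERE p.pid IN (SELECT a.pid FROM appointments a);

    DBMS_OUTPUT.put_line('Fetched ' || l_persons.COUNT || ' rows');
END;
/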