I have just started on a project that involves building a web application replacement for an old Access application that was already backed by a SQL database.
The issue I have hit is that the Access application has a bunch of queries within it that use '*=' as a condition operator (i.e. "where field1 *= 'Something'"), which cause the application to crash when I run it.
I have tried to verify whether the operator is valid or whether the original developers handed over an intentionally broken version of the application.
Can anyone verify that '*=' (i.e. asterisk-equals) is or is not a valid operator in an Access query?
Don't worry; your original developers are not trying to pass you intentionally broken code. It's just that the code is very old.
*= (and its counterpart =*) is a non-standard SQL operator originally supported by Sybase SQL and inherited by Microsoft SQL Server in the mid-'90s. *= meant LEFT JOIN (and its counterpart =* meant RIGHT JOIN)1
(Microsoft SQL Server was originally a repackaged edition of Sybase SQL, licensed from Sybase, adapted and recompiled to run on Microsoft's brand-new Windows NT Operating System. The partnership was eventually discontinued and Microsoft rewrote the code from scratch without Sybase's involvement, and that's the product we still have today)
The operator *= was a way to express LEFT JOIN operations, before there was such a thing as a LEFT JOIN operator:
SELECT *
FROM a, b
WHERE a.id *= b.id
Is the same as:
SELECT *
FROM a
LEFT JOIN b
ON a.id = b.id
These operators were deprecated for more than a decade and have not been supported at all since SQL Server 2012. Using them in earlier versions of SQL Server is possible, but requires the whole database to run with a legacy-mode setting called compatibility_level set to 80 ("SQL Server 2000 mode").
Access has never supported these operators.2
You need to locate the code doing these outer joins and replace them with suitable LEFT JOIN (or RIGHT JOIN for =*) operations.
Finally, you should be aware that *= is not an exact mirror of LEFT JOIN. I don't remember all the details, but it goes something like this: if there are only two tables involved, or if there is a central table and all the LEFT JOINs go from the central table to immediate leaf tables, you can replace *= with LEFT JOIN in a straightforward fashion. However, if the outer joins chain more than two tables out, then *= behaves differently from the naive replacement and you need to research it more carefully. I might be wrong about the details. Be careful!
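One concrete divergence is worth sketching, because it bites the moment a WHERE clause filters on the inner table. This is only an illustration with made-up tables and a made-up status column:

-- Old syntax: the filter on b participates in the outer join,
-- so every row of a survives (with NULLs where b has no match):
SELECT a.id, b.status
FROM a, b
WHERE a.id *= b.id
AND b.status = 'ACTIVE'

-- Naive translation: the WHERE clause runs after the join and
-- discards the NULL-extended rows, silently becoming an INNER JOIN:
SELECT a.id, b.status
FROM a
LEFT JOIN b ON a.id = b.id
WHERE b.status = 'ACTIVE'

-- Faithful translation: move the filter into the ON clause:
SELECT a.id, b.status
FROM a
LEFT JOIN b ON a.id = b.id AND b.status = 'ACTIVE'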
1Compare this weird syntax for LEFT JOIN with this variant of an INNER JOIN, which is still perfectly acceptable today:
SELECT *
FROM a, b
WHERE a.id = b.id
Is exactly the same as:
SELECT *
FROM a
INNER JOIN b
ON a.id = b.id
2UPDATE: Upon re-reading your question, I realize you were talking about pass-through queries executed by SQL Server. In that case your choices depend on which version of SQL Server you are using.
If you are able to run the application against SQL Server 2008 R2 or earlier, you can temporarily switch compatibility_level to 80 to buy time to fix your queries.
More likely than not, you are having this problem precisely because you are trying to move the database to a version of SQL Server newer than 2008 R2, which doesn't support compatibility_level 80. When you loaded the database on such a version, the setting was automatically raised to the lowest level your version supports (which is higher than 80). In that case your only reasonable choice is to stay on SQL Server 2008 R2 for now (switching the database back to compatibility_level 80 if necessary) while you work on fixing the application's queries.
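For reference, the switch itself is a one-liner; YourDb is a placeholder for your database name, and as noted it only works up to SQL Server 2008 R2:

ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 80;
-- ...fix the queries, then restore the native level (e.g. 100 on 2008 R2):
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 100;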
Related
I have two databases. One, which I'll call a, is a read-only synchronized database that is part of an availability group; the other is a plain ol' database, which I'll call b, on the same server as the synchronized database.
I need to write views in b that read from a, but they perform very poorly in this environment.
For example, I might have:
create view dbo.v1 --> in b
as
select
t1.col_a,
t2.col_b
from
a.dbo.Table1 t1
left outer join
a.dbo.Table2 t2 on t1.t2_id = t2.Id
We can't add artifacts to database a...such as views, functions, and stuff...but we're free to read from it. OpenQuery wouldn't be available, since we're not talking about a linked server... a and b are on the same server.
Given the above, I can execute the select statement within the view from either database a or b.
For example:
use a;
select
t1.col_a,
t2.col_b
from
a.dbo.Table1 t1
left outer join
a.dbo.Table2 t2 on t1.t2_id = t2.Id
...vs...
use b;
select
t1.col_a,
t2.col_b
from
a.dbo.Table1 t1
left outer join
a.dbo.Table2 t2 on t1.t2_id = t2.Id
On super simple ones like this example, the difference in speed is not too bad...but as the queries gain complexity, the first one runs in nearly zero time, while the second one waits for-seeming-ever and finally runs...in minutes or worse.
Is there something misconfigured that I might look at? Is there some magic query hint that might help? The server is Windows Server 2012 R2 and the database version is SQL Server 2016 Enterprise.
Dan kindly uploaded my a and b .sqlplan files, from a genuine query that was too ugly to include here, to PasteThePlan (a.sqlplan and b.sqlplan). Thanks Dan!
Something I noticed in the b plan is that it suggested a missing index (with an impact of 91) in one of the tables...and the a plan (which runs fast) found no recommendations.
I uploaded your query Plans to PasteThePlan and added the links to your question.
The first glaring difference between the two plans is the cardinality estimation version:
Plan a:
<StmtSimple ... CardinalityEstimationModelVersion="70"
Plan b:
<StmtSimple ... CardinalityEstimationModelVersion="130"
SQL Server optimizes queries based on the cardinality estimator version of the context database. Because the CE version of the context database is different, you get different plans for the same query. The CE version is tied to the database compatibility level unless overridden by a database scoped configuration or query hint.
The legacy cardinality estimator (used in SQL Server 2012 and earlier versions) can provide better plans for some queries. Depending on your overall workload, you can choose to use the legacy CE with targeted query hints or for all queries in the database.
Query hint example:
use b;
select
t1.col_a,
t2.col_b
from
a.dbo.Table1 t1
left outer join
a.dbo.Table2 t2 on t1.t2_id = t2.Id
OPTION(USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
Database scoped configuration example:
use b;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
The query hint may be the better choice if this is the only problematic query. However, I generally suggest due diligence with query and index tuning first (note the missing index suggestions in the plan with the newer CE).
I'll add that it's not uncommon for newer CE versions to introduce performance regressions for some queries while benefiting others. Queries already in need of query/index tuning are the most susceptible to regression, in my experience. It's good practice to include workload performance testing and CE analysis as part of SQL Server upgrade plans.
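As a starting point for that analysis, something like this sketch shows each database's compatibility level and, run in the context of the database you care about, whether the legacy CE has been forced via a scoped configuration:

SELECT name, compatibility_level FROM sys.databases;

SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'LEGACY_CARDINALITY_ESTIMATION';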
I have a query that currently runs successfully on SQL Server Management Studio, gathering information from 2 databases on the SQL server where I run the query, and one database on a linked SQL Server. The query has this structure:
SELECT TOP 10
DB1.id_number as BLN,
DB1.field1,
DB2.field2,
DB3.field3
FROM
LinkedServer.database1.dbo.table1 DB1
INNER JOIN
database2.dbo.table2 DB2 ON DB1.id_number = DB2.id_number
INNER JOIN
database3.dbo.table3 DB3 ON DB3.id_number2 = DB1.id_number2
WHERE
DB1.id_number IS NOT NULL
AND DB1.field1 IS NOT NULL
How can I run this same query from a .Net application? I am looking for a solution that doesn't require saving a View on the database.
In whatever solution you propose, please detail connection strings and security issues.
Thank you.
You can run the query using SqlCommand, although doing it with an ORM may be a little tricky, if it can even be done.
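A minimal C# sketch, not a complete program: the server name is a placeholder, yourQueryText stands for the exact SELECT from your question, and I've assumed integrated security (which keeps passwords out of the connection string). SqlCommand passes the text through verbatim, so the three- and four-part names keep working, provided the login has rights on both local databases and the linked server.

using System.Data.SqlClient;
...
using (var conn = new SqlConnection(
    "Server=YourServer;Database=database2;Integrated Security=SSPI;"))
using (var cmd = new SqlCommand(yourQueryText, conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            var bln = reader["BLN"];
            // ...read field1, field2, field3 the same way
        }
    }
}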
Hello all you wonderfully helpful people,
What is the alternative to EnumParameters in SQL Server 2008? This MSDN article mentions that this method is going away, so what should be used instead?
http://msdn.microsoft.com/en-us/library/ms133474(SQL.90).aspx
Here is the error we receive when attempting to use this method:
Microsoft SQL-DMO (ODBC SQLState: 42000) error '800a1033'
[Microsoft][ODBC SQL Server Driver][SQL Server]The query uses non-ANSI outer join operators ("*=" or "=*"). To run this query without modification, please set the compatibility level for current database to 80, using the SET COMPATIBILITY_LEVEL option of ALTER DATABASE. It is strongly recommended to rewrite the query using ANSI outer join operators (LEFT OUTER JOIN, RIGHT OUTER JOIN). In the future versions of SQL Server, non-ANSI join operators will not be supported even in backward-compatibility modes.
Thanks!
Paul
Swap from DMO to SMO; the SMO object model exposes the stored procedure parameter collection.
http://msdn.microsoft.com/en-us/library/ms162209(SQL.90).aspx
Not only this method, but everything in DMO has been deprecated since 2005. Use SMO instead: StoredProcedure.Parameters.
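Roughly like this with SMO (a sketch; server, database, and procedure names are placeholders):

// References: Microsoft.SqlServer.Smo, Microsoft.SqlServer.ConnectionInfo
var server = new Microsoft.SqlServer.Management.Smo.Server("YourServer");
var proc = server.Databases["YourDatabase"].StoredProcedures["YourProc"];
foreach (Microsoft.SqlServer.Management.Smo.StoredProcedureParameter p
         in proc.Parameters)
{
    Console.WriteLine("{0} {1}", p.Name, p.DataType);
}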
The development environment's DB server is SQL Server 2005 (Developer Edition).
Is there any way to make sure my SQL queries will run on SQL Server 2000?
The database is set to compatibility level "SQL Server 2000 (80)", but some queries that run without problems on the development system cannot run on the test server (SQL Server 2000).
(The problem seems to be in subqueries.)
Compatibility levels are designed to work the opposite way - to allow an older version of T-SQL code to work without modifications on a newer version of SQL Server. The changes typically involve T-SQL syntax and reserved words, and it's possible to use SQL Server 2005 features such as INCLUDED columns in indexes on a database in Compatibility Level 80. However, you can't use 2005 T-SQL features such as CROSS APPLY.
Your best option is to develop/test all your code against a SQL Server 2000 instance. Note that you can use 2005's Management Studio to connect to the SQL Server 2000 instance, so you don't have to go backwards with regards to tools.
Problem solved:
In correlated subqueries you have to (in SQL Server 2000) explicitly qualify the outer table's field.
SQL2005:
SELECT * FROM Loans WHERE EXISTS (SELECT * FROM Collaterals WHERE COLLATERAL_LOAN=LOAN_NUMBER)
SQL2000:
SELECT * FROM Loans WHERE EXISTS (SELECT * FROM Collaterals WHERE COLLATERAL_LOAN=Loans.LOAN_NUMBER)
You should always explicitly qualify all fields; otherwise you will not get an error when you make a mistake and write
SELECT * FROM Loans WHERE EXISTS (SELECT * FROM Collaterals WHERE LOAN_NUMBER=Loans.LOAN_NUMBER)
If the Collaterals table doesn't have a LOAN_NUMBER column, the one from the Loans table is used instead.
Here's the situation: we have an Oracle database we need to connect to in order to pull some data. Since getting access to said Oracle database is a real pain (mainly a bureaucratic obstacle more than anything else), we're just planning on linking it to our SQL Server and using the link to access data as we need it.
For one of our applications, we're planning on making a view to get the data we need. Now the data we need is joined from two tables. If we do this, which would be preferable?
This (in pseudo-SQL if such a thing exists):
OPENQUERY(Oracle, 'SELECT [cols] FROM table1 INNER JOIN table2 ON [join condition]')
or this:
SELECT [cols] FROM OPENQUERY(Oracle, 'SELECT [cols1] FROM table1') t1
INNER JOIN OPENQUERY(Oracle, 'SELECT [cols2] FROM table2') t2 ON [join condition]
Is there any reason to prefer one over the other? One thing to keep in mind: we are under a limit on how long the query can run when accessing the Oracle server.
I'd go with your first option especially if your query contains a where clause to select a sub-set of the data in the tables.
It will require less work on both servers, assuming there are indices on the tables in the Oracle server that support the join operation.
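To make the point about the WHERE clause concrete, in the same pseudo-SQL as the question: keep the filter inside the OPENQUERY string, so Oracle evaluates it (and the join) before any rows cross the link, instead of SQL Server filtering afterwards:

SELECT *
FROM OPENQUERY(Oracle,
    'SELECT [cols]
     FROM table1
     INNER JOIN table2 ON [join condition]
     WHERE [filter on table1/table2]')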
If the inner join significantly reduces the total number of rows, then option 1 will result in much less network traffic (since you won't have all the rows from table1 going across the db link).
What hamishmcn said applies.
Also, SQL Server doesn't really know anything about the indexes, statistics, or cache kept by the Oracle server. Therefore, the Oracle server can probably do a much more efficient job with the join than SQL Server can.