Azure SQL Update Performance Indexed - sql

On Azure SQL Database:
UPDATE SomeLargeTable
SET [nonPKbutIndexedColumn] = newValue
WHERE [nonPKbutIndexedColumn] = value;
UPDATE SomeLargeTable
SET [nonPKbutIndexedColumn] = newValue
WHERE [PKcolumn] IN (SELECT [PKcolumn] FROM SomeLargeTable
WHERE [nonPKbutIndexedColumn] = value);
What about the performance of these queries? Other suggestions are also welcome...

The performance of any Data Manipulation Language (DML) command depends on many factors, such as the volume of data in the tables, how efficiently the schema is designed, and so on.
As long as your tables are properly indexed, both queries will run fine; there shouldn't be any performance issue. You can check the time taken by the query at the bottom of the Query Editor in Azure SQL Database.
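If you want to measure this yourself, a minimal sketch is to wrap the two statements in the time and I/O statistics switches and compare the output in the Messages tab of SSMS or another client that surfaces it (the table and column names below are the placeholders from the question):
-- Placeholders: SomeLargeTable, newValue and value come from the question above.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

UPDATE SomeLargeTable
SET [nonPKbutIndexedColumn] = newValue
WHERE [nonPKbutIndexedColumn] = value;

UPDATE SomeLargeTable
SET [nonPKbutIndexedColumn] = newValue
WHERE [PKcolumn] IN (SELECT [PKcolumn] FROM SomeLargeTable
                     WHERE [nonPKbutIndexedColumn] = value);

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- Compare elapsed/CPU time and logical reads reported for the two statements.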
Additionally, you can use Query Performance Insight in Azure SQL Database, which provides intelligent query analysis for single and pooled databases. It helps identify the top resource-consuming and long-running queries in your workload, so you can find the queries to optimize to improve overall workload performance and use resources efficiently.

Related

How can I find the SQL query for an execution plan?

Some program generates and sends queries to SQL Server (a high-load production system). I want to get the plan for one specific query against one specific table. I started Profiler with the "Showplan XML" event and set filters on TextData (like %MyTable%) and DatabaseName. It shows rows with XML in TextData that describe execution plans (for all queries against my table), but I know there are 5 different SQL queries for this table.
How can I match a specific query with its corresponding plan without using statistics?
Is there a reason this has to be done on the production environment? Most really bad execution plans (missing indexes causing table scans, etc.) will be obvious enough on a dev environment where you can use all the diagnostics you want.
Otherwise, querying the plan cache (as in the linked question someone else mentioned) will probably have the lowest impact, as it just reads system views rather than adding diagnostics to every query.
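For reference, a minimal sketch of that plan-cache query using the standard DMVs (MyTable is the placeholder table name from the question):
-- Match cached query text to its cached plan; 'MyTable' is a placeholder.
SELECT st.text AS query_text,
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%MyTable%';
Each row pairs one cached statement's text with its XML plan, so you can tell which of the 5 queries produced which plan.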

Resource Intensive Query

I am using ADO.NET Entities against a SQL Azure database. One of the queries is taking an extremely long time, most likely pulling data it doesn't need. Is there a way to match up the query in C# with the query execution in Azure?
Please enable Query Store on SQL Azure to identify the T-SQL equivalent of the LINQ query. Use this article for more details.
The command below enables Query Store:
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
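Once Query Store has collected some data, a sketch like the one below (the TOP (10) and the duration ordering are just one way to slice it) surfaces the captured T-SQL so you can match it back to the LINQ query:
-- Top queries by total duration, with the T-SQL text Query Store captured.
SELECT TOP (10)
       qt.query_sql_text,
       SUM(rs.avg_duration * rs.count_executions) AS total_duration_us
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_duration_us DESC;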
Hope this helps.

Performance issues with outer joins to view in Oracle 12c

Two of my clients have recently upgraded to Oracle 12c 12.1.0.2. Since the upgrade I am experiencing significant performance degradation on queries using views with outer joins. Below is an example of a simple query that runs in seconds on the old Oracle 11g 11.2.0.2 database but takes several minutes on the new 12c database. Even more perplexing, this query runs reasonably fast (but not as fast) on one of the 12c databases, but not at all on the other. The performance is so bad on the one 12c database that the reporting I've developed is unusable.
I've compared indexes and system parameters between the 11g and two 12c databases, and have not found any significant differences. There is a difference between the Execution Plans, however. On 11g the outer join is represented as VIEW PUSHED PREDICATE but on 12c it is represented as a HASH JOIN without the PUSHED PREDICATE.
When I add the hint /*+ NO_MERGE(pt) PUSH_PRED(pt) */ to the query on the 12c database, then the performance is within seconds.
Adding a hint to the SQL is not an option within our Crystal Reports (at least I don't believe so, and there are also several reports), so I am hoping we can figure out why performance is acceptable on one 12c database but not on the other.
My team and I are stumped at what to try next, and particularly why the response would be so different between the two 12c databases. We have researched several articles on performance degradation in 12c, but nothing appears particularly applicable to this specific issue. As an added note, queries using tables instead of views are returning results within an acceptable timeframe. Any insights or suggestions would be greatly appreciated.
Query:
select pi.*
, pt.*
from policyissuance_oasis pi
, policytransaction_oasis pt
where
pi.newTranKeyJoin = pt.polTranKeyJoin(+)
and pi.policyNumber = '1-H000133'
and pi.DateChars='08/10/2017 09:24:51' -- 2016 data
--and pi.DateChars = '09/26/2016 14:29:37' --2013 data
order by pi.followup_time
As krokodilko says, perform these:
explain plan for
select pi.*
, pt.*
from policyissuance_oasis pi
, policytransaction_oasis pt
where
pi.newTranKeyJoin = pt.polTranKeyJoin(+)
and pi.policyNumber = '1-H000133'
and pi.DateChars='08/10/2017 09:24:51' -- 2016 data
--and pi.DateChars = '09/26/2016 14:29:37' --2013 data
order by pi.followup_time;
select * from table(dbms_xplan.display());
and then you will probably see this at the bottom of the plan:
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
There, the dynamic sampling concept should be the center of concern for performance problems (level=2 is the default value; the range is 0-11).
In fact, Dynamic sampling (DS) was introduced to improve the optimizer's ability to generate good execution plans. This feature was enhanced and renamed Dynamic Statistics in Oracle Database 12c. The most common misconception is that DS can be used as a substitute for optimizer statistics, whereas the goal of DS is to augment optimizer statistics; it is used when regular statistics are not sufficient to get good quality cardinality estimates.
For serial SQL statements the dynamic sampling level is controlled by the optimizer_dynamic_sampling parameter, but note that from Oracle Database 12c Release 1 the existence of SQL plan directives can also initiate dynamic statistics gathering when a query is compiled. This is a feature of adaptive statistics and is controlled by the database parameter optimizer_adaptive_features (OAF) in Oracle Database 12c Release 1 and optimizer_adaptive_statistics (OAS) in Oracle Database 12c Release 2.
In other words, from Oracle Database 12c Release 1 (we also use 12.1.0.2 at the office) onwards, DS will be used if certain adaptive features are enabled by setting the relevant parameter to TRUE.
Serial statements are typically short-running, and any DS overhead at compile time can have a large impact on overall system performance (if statements are frequently hard parsed). For systems that match this profile, setting OAF=FALSE is recommended (alter session set optimizer_adaptive_features=FALSE; note that you should alter the session, not the system).
For Oracle Database 12c Release 2 onwards, using the default OAS=FALSE is recommended.
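To confirm what each 12c database is actually running with (assuming you can read v$parameter), something like this shows the relevant settings side by side:
-- Compare the optimizer parameters mentioned above on each database.
select name, value, isdefault
from   v$parameter
where  name in ('optimizer_dynamic_sampling',
                'optimizer_adaptive_features',     -- 12c Release 1
                'optimizer_adaptive_statistics');  -- 12c Release 2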
Parallel statements are generally more resource intensive, so it's often worth investing in additional overhead at compile time to potentially find a better SQL execution plan.
For serial-type SQL statements, you may try to manually set the value of optimizer_dynamic_sampling (assuming there are no relevant SQL plan directives). If we were to issue a similar style of query against a larger table that had the parallel attribute set, we could see dynamic sampling kicking in.
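For example, a session-level sketch (the level here is purely illustrative, not a recommendation for your system):
-- Raise the dynamic sampling level for this session only...
alter session set optimizer_dynamic_sampling = 4;
-- ...then re-run the EXPLAIN PLAN above and see whether the cardinality estimates change.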
When should you use dynamic sampling? DS is typically recommended when you know you are getting a bad execution plan due to complex predicates, but it shouldn't be enabled system-wide, as I mentioned before.
When is it not a good idea to use dynamic sampling?
If query compile times need to be as fast as possible, for example for unrepeated OLTP queries where you can't amortize the additional cost of compilation over many executions.
As a last word, for your case it could be beneficial to set the optimizer_adaptive_features parameter to FALSE for individual SQL statements and see the results.
We discovered the cause of the performance issue. The following 2 system parameters were changed at the system level by the DBAs for the main application that uses our client's Oracle server:
_optimizer_cost_based_transformation = OFF
_optimizer_reduce_groupby_key = FALSE
When we changed them at the session level to the following, the query above that joins the 2 views returns results in less than 2 seconds:
alter session set "_optimizer_cost_based_transformation"=LINEAR;
alter session set "_optimizer_reduce_groupby_key" = TRUE;
COMMIT;
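For completeness, a sketch of how to check the current values of those two underscore parameters; they are hidden, so they do not appear in v$parameter, and the usual x$ query below normally requires SYS access (a DBA may need to run it):
-- Hidden parameter values for the current instance (requires access to x$ views).
select a.ksppinm as name, b.ksppstvl as value
from   x$ksppi a, x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm in ('_optimizer_cost_based_transformation',
                     '_optimizer_reduce_groupby_key');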
Changing this parameter did not have an impact on performance:
alter session set optimizer_adaptive_features=FALSE;
COMMIT;
We also found that changing the following parameter improved performance even more for our more complex views:
alter session set optimizer_features_enable='11.2.0.3';
COMMIT;

Can I use with(index(xxx)) in my SQL with DB2

I'm used to being able to tell my SQL statement which index I'd like it to use in MSSQL, but it seems like that doesn't work the same way in DB2.
This statement works for me in MSSQL but not in DB2:
SELECT ACT.COMPANY,ACT.ACCT_UNIT,ACT.ACTIVITY,ACT.ACTIVITY_GRP,ACT.ACCT_CATEGORY,ACT.TRAN_AMOUNT,ACT.DESCRIPTION as ACT_DESCRIPTION,
AP.VENDOR,AP.INVOICE,AP.PO_NUMBER,AC.DESCRIPTION as AC_DESCRIPTION
FROM
ACTRANS ACT WITH (INDEX(ATNSET12)),
APDISTRIB AP WITH (INDEX(APDSET9)),
ACACTIVITY AC WITH (INDEX(ACVSET1))
WHERE
ACT.OBJ_ID = AP.ATN_OBJ_ID AND
ACT.ACTIVITY = AC.ACTIVITY AND
ACT.ACCT_CATEGORY != 'CAPEX'
Thank you!
Well, choosing an index or another way of accessing the data should be the task of the database system, not the user. Data distribution, database technology, and available resources like memory and disk may change, yet your query should still perform optimally because the database system figures out an optimal access plan.
If you still believe this should be influenced, then Db2 offers several features to do so: database configuration parameters or, better, session-specific environment settings; optimization profiles; and different ways of maintaining statistics, among others.
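If you really do need to steer the access path, a rough sketch of the optimization-profile route looks like the following; the schema and profile names are made up, the guideline XML is abbreviated, and the SYSTOOLS.OPT_PROFILE table must already exist, so treat this only as a starting point and check the Db2 documentation for the exact profile format:
-- Register a (hypothetical) profile asking for an index scan on ACTRANS via ATNSET12.
INSERT INTO SYSTOOLS.OPT_PROFILE (SCHEMA, NAME, PROFILE)
VALUES ('MYSCHEMA', 'ACTRANS_PROFILE', BLOB(
'<?xml version="1.0" encoding="UTF-8"?>
<OPTPROFILE>
  <STMTPROFILE ID="ACTRANS index guideline">
    <STMTKEY><![CDATA[SELECT ... FROM ACTRANS ACT, APDISTRIB AP, ACACTIVITY AC WHERE ...]]></STMTKEY>
    <OPTGUIDELINES><IXSCAN TABLE="ACT" INDEX="ATNSET12"/></OPTGUIDELINES>
  </STMTPROFILE>
</OPTPROFILE>'));

-- Activate the profile for the current session.
SET CURRENT OPTIMIZATION PROFILE = 'MYSCHEMA.ACTRANS_PROFILE';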

Impact of bulk insertion and bulk deletion to the MS SQL server 2008

Does anybody know what the impact on an MS SQL 2008 database is of executing INSERT and DELETE statements for around 100,000 records each run, repeated over a period of time?
I heard from my client that in MySQL, for its specific data types, after loading and clearing the database over a period of time the data becomes fragmented/corrupted. I wonder whether this also happens with MS SQL, and what the possible impact on the database would be.
Right now the statements we use to load and reload the data into all the tables in the database are simple INSERT and DELETE statements.
Please advise. Thank you in advance! :)
-Shen
The transaction log will likely grow due to all the inserts/deletes, and depending on the data being deleted/inserted and the table structure, there will likely be data fragmentation.
The data won't be 'corrupted' - if this is happening in MySQL, it sounds like a bug in that particular storage engine. Fragmentation shouldn't corrupt a database, but it does hamper performance.
You can combat this using a table rebuild, a table recreate or a reorganise.
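For example, a minimal sketch (dbo.YourTable is a placeholder name) for checking fragmentation and then reorganising or rebuilding:
-- Fragmentation for every index on one table (placeholder name).
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

ALTER INDEX ALL ON dbo.YourTable REORGANIZE;   -- for moderate fragmentation (roughly 5-30%)
-- or, for heavier fragmentation (roughly above 30%):
ALTER INDEX ALL ON dbo.YourTable REBUILD;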
There's plenty of good info regarding fragmentation online. A good article is here:
http://www.simple-talk.com/sql/database-administration/defragmenting-indexes-in-sql-server-2005-and-2008/