Teradata MaxParseTree: other than DBSControl, is there any other factor? - sql

A report is throwing this error
insufficient parser memory , during optimizer phase
I am aware of the DBSControl parameter and how it relates to this.
My questions are:
To the best of my knowledge the answer would be a nay, but I just wanted to check: is there any ODBC driver-related setting that can affect this error? We already know about the server-side DBSControl setting.
Another hopelessly hopeful hope: if you are not given Console privileges, is there any table in the Data Dictionary where the DBSControl settings would be stored (for FYI purposes)? I know there wasn't one up to V6 and V12, but I wondered whether the newer versions have got any wiser.
This is not about getting to know the error, so please don't explain what it means - I know what it means. My questions are specific to the ones above.

Related

Is there a solution to decrease manual conversion time for migrating an Oracle database to SQL Server?

I am planning to migrate Oracle 11g to MS SQL Server 2016,
so I performed a pre-migration assessment through SSMA.
I received the final conversion report from SSMA, but with numerous errors. The report states that it will require 1263.6 hours of manual conversion from Oracle to SQL Server.
Please help me: how can I resolve these errors with the minimum of manual conversion time?
Attached is a screenshot of the report.
Appreciate your Help
Thanks,
Amit
You must understand one important concept: migrating from Oracle to SQL Server, or vice versa, is only as easy as the level of complexity in your source database allows. In your case, you are using SSMA to make an assessment of your Oracle source database.
Try reading the rules SSMA applies when migrating from Oracle to get more details about the rules applied for each possible transformation.
There is no concrete answer to your question; there are a lot of different problems in the screenshot you provided. Most importantly, even though SSMA will make some conversions automatically (schemas, for example), you need to evaluate the impact on your application. I also saw problems in PL/SQL objects, which you will somehow need to convert to Transact-SQL - see the sketch below. Bottom line: you have a lot of manual work to do.
SSMA is already making that case in your situation by flagging that some of the source elements cannot be converted automatically, so manual intervention is needed.
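To give a feel for the kind of manual conversion SSMA flags, here is a deliberately trivial sketch; the function and table names are made up for illustration, not taken from your report. A PL/SQL function such as:

CREATE OR REPLACE FUNCTION get_order_count (p_cust_id IN NUMBER)
RETURN NUMBER IS
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM orders WHERE cust_id = p_cust_id;
  RETURN v_count;
END;

would typically have to be rewritten along these lines in Transact-SQL:

CREATE FUNCTION dbo.get_order_count (@p_cust_id INT)
RETURNS INT
AS
BEGIN
  DECLARE @v_count INT;
  SELECT @v_count = COUNT(*) FROM dbo.orders WHERE cust_id = @p_cust_id;
  RETURN @v_count;
END;

Multiply that by every package, trigger and Oracle-specific type in your report and the 1263.6-hour estimate starts to look plausible.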
As you are discovering, migrating from one DB server to another is not just a simple matter of relocating the data, especially when (as it appears) your application leans heavily on Oracle-specific technology. By all appearances - as noted repeatedly in this thread - you will have a lot of manual reconciliation and rewriting of application code to do. There is a hidden cost here: the time and effort required to do this work doesn't come for free. It will cost your company real $$$ to make this happen. You should be prepared to answer the following questions:
Is the cost of the time and effort to rewrite the application and complete the transformation going to be less than any cost savings realized by switching from Oracle to SQL Server?
Understanding that it may cost more in the short term to rewrite the application than continuing with the status quo, how long will it take to realize any cost savings at all?
On a technical level, given the number of Oracle technologies in play (custom types, stored procedures, etc.), can SQL Server even replicate the functionality required by the application that is currently provided by Oracle?
What is the driving force behind this migration, and does it really make sense if this level of effort is required?
If data migration is still required, is it easier to rebuild the application from scratch and just move the data than it would be to port the entire existing application?

DynamicRecord - what is it?

When I run the following query:
match (n) return distinct labels(n);
I am seeing the following error:
DynamicRecord[396379,used=false,(0),type=-1,data=byte[],start=true,next=-1] not in use
Other people have asked how to deal with this situation. I am asking a different set of questions: what is a DynamicRecord in Neo4j? And, what can be done to avoid this type of error?
What is a DynamicRecord?
The source for DynamicRecord is here. This is largely useless.
Anyhow, all I can gather is that:
It is a very low-level construct in the store kernel.
A multitude of tests use it in relation to consistency checking.
It appears to be a record that is dynamically created (meaning at run time, not stored on disk), and it can represent different types of data (property blocks, schema, etc.).
This is also largely useless. I know.
What can be done to avoid this type of error?
This seems to be a very generic error, but most online resources (GitHub issues / SO questions) seem to relate to DB upgrades. Some point to changes in constants used by DynamicRecord that yield data corruption after upgrades.
Based on that, I guess that the following steps could prevent such an error:
Backup your data.
Migrate your data properly when upgrading.
Do not use different versions of Neo4j against the same data.
You've guessed it - This is also rather useless, but I hope it is better than nothing.

Duplicated edges with the same #rid in OrientDB

I've discovered a strange behaviour when querying an Edge class using OrientDB (community-2.1-rc5). The database is returning the exact same edge with the exact same #rid and the exact same data, twice. My instinct says that this is a bug...
This is the query
SELECT FROM E WHERE #class='LIKES' AND (out IN [#12:0,#12:221]) AND in=#36:1913
And this is what OrientDB Studio returns:
http://s29.postimg.org/hwruv0zif/Captura.png
This makes no sense. If I go to the vertex and query for LIKES relationship it only returns one registry... Anyone faced a problem like this?
This is the database I'm using if it helps
https://www.dropbox.com/sh/pkm28cfer1pwpqb/AAAVGeL1eftOGR4o0todTiAha?dl=0
To get help with this bug, you should request to join the Google group; Stack Overflow is not the best place to get help with this kind of bug.
The problem is that you somehow duplicated your edge by mistake, and OrientDB lets you do it for some unknown reason.
Here is the bug discussion on the OrientDB Google group: https://groups.google.com/forum/#!topic/orient-database/cAR7yUjCZcI
In the discussion, Luca (the creator of OrientDB) says this:
"the problem is that without a transaction the creation of edge could
be dirty. OrientDB tries to fix dirty reference, so maybe that's the
reason why the next time the exception is raised. I've changed the
default behavior of all SQL commands against Graphs to be always
transactional"
Upgrading to the most recent version of OrientDB would be a good idea; maybe the bug has been fixed by now.
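If upgrading isn't immediately possible, a workaround consistent with Luca's explanation is to create edges inside an explicit transaction yourself. A minimal sketch as an OrientDB SQL batch, using the RIDs from the question (double-check the batch syntax against your OrientDB version):

BEGIN;
CREATE EDGE LIKES FROM #12:0 TO #36:1913;
COMMIT RETRY 10;

The RETRY clause is optional; the point is simply that the edge creation and the updates to the in/out vertices happen atomically.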

Is there a better way to debug SQL?

I have worked with SQL for several years now, primarily MySQL/phpMyAdmin, but also Oracle/iSQL*Plus and PL/SQL lately. I have programmed in PHP, Java, ActionScript and more. I realise SQL isn't an imperative programming language like the others, but why do the error messages seem so much less specific in SQL? In other environments I'm pointed straight to the root of the problem. More often than not, MySQL gives me errors like "error AROUND where u.id = ..." and prints the whole query. This is even more difficult with stored procedures, where debugging can be a complete nightmare.
Am I missing a magic tool/language/plugin/setting that gives better error reporting, or are we stuck with this? I want a debugger or language which gives me the same amount of control that Eclipse gives me when setting breakpoints and stepping through the code. Is this possible?
I think the answer lies in the fact that SQL is a set-based language with a few procedural things attached. Since the designers were thinking in set-based terms, they didn't think the ordinary kind of debugging other languages have was important. However, I think some of this is changing. You can set breakpoints in SQL Server 2008. I haven't really used it, as you must have SQL Server 2008 databases before it will work, and most of ours are still SQL Server 2000. But it is available and it does allow you to step through things. You are still going to have problems when your select statement is 150 lines long and it knows the syntax isn't right but can't point out exactly where, because it is all one command.
Personally, when I am writing a long procedural SP, I build in a test mode that shows me the results of what I do, the values of key variables at specific points I'm interested in, and print statements that let me know which steps have been completed, and then rolls the whole thing back when done. That way I can see what would have happened if it had run for real, without hurting any of the data in the database if I got it wrong. I find this very useful. It can vastly increase the size of your proc, though. I have a template with most of the structure I need set up in it, so it doesn't really take me too long to do. Especially since I never add an insert, update or delete to a proc without first testing the associated select to ensure I have the records I want.
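For illustration, the skeleton of that test mode looks roughly like this; the table, column and flag names are placeholders, not from any real procedure:

DECLARE @TestMode bit, @rc int;
SET @TestMode = 1;
BEGIN TRANSACTION;
UPDATE dbo.SomeTable SET SomeCol = 1 WHERE SomeKey = 42;
SET @rc = @@ROWCOUNT;
PRINT 'Step 1 complete, rows affected: ' + CAST(@rc AS varchar(10));
SELECT * FROM dbo.SomeTable WHERE SomeKey = 42; -- show what would have changed
IF @TestMode = 1
    ROLLBACK TRANSACTION;  -- test mode: undo everything
ELSE
    COMMIT TRANSACTION;

Everything runs for real inside the transaction, you get to inspect the results and the printed progress, and the rollback at the end puts the data back the way it was.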
I think the explanation is that "regular" languages have much smaller individual statements than SQL, so that single-statement granularity points to a much smaller part of the code in them than in SQL. A single SQL statement can be a page or more in length; in other languages it's usually a single line.
I don't think that makes it impossible for debuggers / IDEs to more precisely identify errors, but I suspect it makes it harder.
I agree with your complaint.
Building a good logging framework and overusing it in your sprocs is what works best for me.
Before and after every transaction or important piece of logic, I write out the sproc name, step timestamp and a rowcount (if relevant) to my log table. I find that when I have done this, I can usually narrow down the problem spot within a few minutes.
Add a debug parameter to the sproc (default to "N") and pass it through to any other sprocs that it calls so that you can easily turn logging on or off.
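A minimal sketch of that pattern; the log table and procedure names here are invented for the example:

CREATE TABLE dbo.ProcLog (
    LogId int IDENTITY(1,1) PRIMARY KEY,
    ProcName sysname,
    StepName varchar(100),
    RowsAffected int NULL,
    LoggedAt datetime DEFAULT GETDATE()
);

CREATE PROCEDURE dbo.usp_LoadSomething @Debug char(1) = 'N'
AS
BEGIN
    DECLARE @rc int;
    IF @Debug = 'Y'
        INSERT dbo.ProcLog (ProcName, StepName) VALUES ('usp_LoadSomething', 'start');
    UPDATE dbo.SomeTable SET SomeCol = 1 WHERE SomeKey = 42;
    SET @rc = @@ROWCOUNT;
    IF @Debug = 'Y'
        INSERT dbo.ProcLog (ProcName, StepName, RowsAffected) VALUES ('usp_LoadSomething', 'after update', @rc);
END;

Passing @Debug = 'Y' down to nested sprocs keeps the whole call chain visible in one log table.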
As for breakpoints and stepping through code, you can do this with MS SQL Server (in my opinion, it's easier on 2005+ than with 2000).
For the simple cases, early development debugging, the sometimes cryptic messages are usually good enough to get the error resolved -- syntax error, can't do X with Y. If I'm in a tough sproc, I'll revert to "printf debugging" on the sproc text because it's quick and easy. After a while with your database of choice, the simple issues become old hat and you just take them in stride.
However, once the code is released, the complexity of the issues is way too high. I consider myself lucky if I can reproduce them. Also, the places where the developer in me would want a debugger the DBA in me says "no way you're putting a debugger there."
I use the following tactic:
While writing the stored procedure, keep a @procStep variable, and each time a new logical step is executed,
SET @procStep = 'What the ... is happening here';
the rest is here
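A sketch of how that variable pays off when something actually breaks, assuming SQL Server 2005+ TRY/CATCH; the step names are placeholders:

DECLARE @procStep varchar(100), @err nvarchar(2048);
BEGIN TRY
    SET @procStep = 'loading staging data';
    -- ... first block of work ...
    SET @procStep = 'updating the target table';
    -- ... second block of work ...
END TRY
BEGIN CATCH
    SET @err = ERROR_MESSAGE();
    RAISERROR('Failed during step "%s": %s', 16, 1, @procStep, @err);
END CATCH;

The error you get back then tells you which logical step blew up instead of just quoting the whole statement.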

At some point in your career with SQL Server does parameter sniffing just jump out and attack?

Today again, I have a MAJOR issue with what appears to be parameter sniffing in SQL Server 2005.
I have a query comparing some results with known good results. I added a column to the results and the known good results, so that each month, I can load a new months results in both sides and compare only the current month. The new column is first in the clustered index, so new months will add to the end.
I add a criteria to my WHERE clause - this is code-generated, so it's a literal constant:
WHERE DATA_DT_ID = 20081231 -- Which is redundant because all DATA_DT_ID are 20081231 right now.
Performance goes to pot: from 7 seconds to compare about 1.5m rows, to 2 hours with nothing completing. And this is running the generated SQL right in SSMS - no SPs involved.
I've been using SQL Server for going on 12 years now and I have never had so many problems with parameter sniffing as I have had on this production server since October (build 9.00.3068.00). And in every case, it's not because it was run the first time with a different parameter or because the table changed. This is a new table and it has only ever been run with this parameter or with no WHERE clause at all.
And, no, I don't have DBA access, and they haven't given me enough rights to see the execution plans.
It's to the point where I'm not sure I'm going to be able to hand this system off to SQL Server users with only a couple of years' experience.
UPDATE Turns out that although statistics claim to be up to date, running UPDATE STATISTICS WITH FULLSCAN clears up the problem.
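For anyone hitting the same thing, the statement in question is simply this (the table name is whatever the comparison query reads from):

UPDATE STATISTICS dbo.ResultsTable WITH FULLSCAN;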
FINAL UPDATE Even with recreating the SP, using WITH RECOMPILE and UPDATE STATISTICS, it turned out the query had to be rewritten in a different way to use a NOT IN instead of a LEFT JOIN with NULL check.
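Roughly speaking, that meant going from a pattern like this (the table and column names below are stand-ins for the real ones, and assume the key column is not nullable):

SELECT r.*
FROM dbo.Results r
LEFT JOIN dbo.KnownGood k
    ON k.DATA_DT_ID = r.DATA_DT_ID AND k.KeyCol = r.KeyCol
WHERE r.DATA_DT_ID = 20081231
    AND k.KeyCol IS NULL;

to something like this:

SELECT r.*
FROM dbo.Results r
WHERE r.DATA_DT_ID = 20081231
    AND r.KeyCol NOT IN (SELECT k.KeyCol FROM dbo.KnownGood k WHERE k.DATA_DT_ID = 20081231);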
Not quite an answer, but I'll share my experience.
Parameter sniffing took a few years of SQL Server to come and bite me - it happened when I went back to developer DBA work after moving away to mostly prod DBA work. By then I understood more about the engine, how SQL works, and what was best left to the client, and I was a better SQL coder.
For example, dynamic SQL, CURSORs or just plain bad SQL code will probably never suffer parameter sniffing. But better set-based programming, avoiding dynamic SQL, or more elegant SQL more likely will.
I noticed it with complex search code (plenty of conditionals) and complex reports where parameter defaults affected the plan. Looking at how less experienced developers would write this code, it wouldn't suffer parameter sniffing.
In any event, I prefer parameter masking to WITH RECOMPILE. Updating stats or indexes forces a recompile anyway, so why recompile all the time? I've answered one of your other questions with a link that mentions parameters are sniffed during compilation, so I don't have much faith in WITH RECOMPILE either.
Parameter masking is an overhead, yes, but it allows the optimiser to evaluate the query case by case rather than blanket recompiling, especially with the statement-level recompilation of SQL Server 2005.
OPTIMIZE FOR UNKNOWN in SQL Server 2008 also appears to do exactly the same thing as masking; see the sketch below. My SQL Server MVP colleague and I spent some time investigating and came to this conclusion.
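For reference, "parameter masking" here just means copying the parameter into a local variable so the optimiser cannot sniff the caller's value; a minimal sketch with invented proc and table names:

CREATE PROCEDURE dbo.usp_GetOrders @CustId int
AS
BEGIN
    DECLARE @CustIdLocal int;
    SET @CustIdLocal = @CustId;  -- optimiser now uses average density, not the sniffed value
    SELECT * FROM dbo.Orders WHERE CustId = @CustIdLocal;
    -- On 2008+, adding OPTION (OPTIMIZE FOR UNKNOWN) to the SELECT has much the same effect.
END;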
I suspect your problem is caused by out-of-date statistics. Since you do not have DBA access to the server, I would encourage you to ask the DBA when the statistics were last updated. This can have a huge impact on performance. It also sounds like your tables are not indexed very well.
Basically, this does not "feel" like a parameter sniffing issue, but more like a database health issue.
This article describes how you can determine the last time statistics were updated:
Statistics Update Time
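In case that link goes stale, a quick way to check it yourself on 2005+ is something like this (substitute the table involved in the slow query):

SELECT name AS stats_name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.YourTable');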
I second the comment about checking the statistics - I have seen several instances where a query's performance fell off a cliff specifically because the statistics were out of date.
Specifically, if you have a date in your PK, and SQL Server thinks there are only 10 or 100 records after a specific date when in fact there are thousands, it may choose terribly inefficient query plans because it thinks the dataset is much smaller than it really is.
HTH,
Andrew
I had a production issue exactly like this. A tab in the application which called a stored proc would not show. I ran a trace for the specific proc and saw the call. The application times out in 30 secs and the proc would take close to 40 - 50 secs to complete (ran the proc exactly as called from the trace).
The next step was to figure out which statement was causing the scans I noticed in the execution of the procedure. So I scripted out the proc, removed the procedure syntax, declared the variables, and ran it in Query Analyzer. It RAN in 3 secs!!!
I'm writing this to let anyone out there looking for answers know that this can happen in SQL. It stems from the parameter sniffing issue. I was able to find this thread because I pinpointed the cause as a faulty cached query plan! I've read posts saying it happens to one specific user/value, but it can happen with any value, and once it starts it can be a continuous thing.
The solution for me was to script out the proc and run it again. Yeah, that simple. An ALTER works fine - no need to drop and re-create. This causes SQL to refresh the cached plan, and things were fine. I have not figured out how to disable this at the server level, and it is too cumbersome to clean up all the procs. Hope this helps.
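If re-running the ALTER by hand every time gets tedious, marking the procedure for recompilation does the same cache refresh without touching its text; the proc name below is obviously a placeholder:

EXEC sp_recompile 'dbo.YourProc';

The plan is then rebuilt the next time the proc is executed.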