SQL Server - insufficient memory (mscorlib) / 'the operation could not be completed' - sql

I have been working on building a new database. I began by building the structure within the database it is replacing and populating it as I created each set of tables. Once I had made additions I would drop what had been created and execute the code to build the structure again, plus a separate file to insert the data. I repeated this until the structure and content were complete, to ensure each stage was as I intended.
The insert file is approximately 30 MB with 500,000 lines of code (I appreciate this is not the best way to do this, but for various reasons I cannot use alternative options). The final insert completed and took approximately 30 minutes.
A new database was created for me; the structure executed successfully but the data would not insert. I received the first error message shown below. I have looked into this and it appears I need to use the sqlcmd utility to get around it, although I find this odd as it worked in the other database, which is on the same server and has the same autogrow settings.
However, when I attempted to save the file after this error I received the second error message seen below. When I selected OK it took me to my file directory, as it would if I had selected Save As. I tried saving in a variety of places but received the same error.
I attempted to copy the code into notepad to save my changes but the code would not copy to the clipboard. I accepted I would lose my changes and rebooted my system. If I reopen this file and attempt to save it I receive the second error message again.
Does anyone have an explanation for this behaviour?

Hm. This looks more like an issue with SSMS and not the SQL Server DB/engine.
If you've been doing this a few times, has Management Studio possibly run out of RAM?
Have you tried breaking the INSERT script into batches/smaller files?
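For what it's worth, a rough sketch of that batching idea, plus the sqlcmd route mentioned in the question. The table, columns, server and file names below are placeholders:

    -- Split the 500,000 inserts into smaller batches so neither SSMS nor the
    -- engine has to handle one enormous statement block at once.
    INSERT INTO dbo.MyTable (Col1, Col2) VALUES (1, 'a');
    INSERT INTO dbo.MyTable (Col1, Col2) VALUES (2, 'b');
    -- ... a few thousand rows ...
    GO  -- ends the batch; repeat this pattern through the rest of the file

    -- Or run the file without ever opening it in the SSMS editor:
    -- sqlcmd -S YourServer -d YourDatabase -i insert_data.sql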

Related

sqlplus hangs after being called from a batch file, without throwing up error message

Essentially, where I work we run a variety of reporting processes that follow the same basic structure...
A batch file calls an SQL script which executes a stored procedure. Another script extracts the data from Oracle and writes it to a CSV. Finally, an Excel macro runs to create the final output.
We have been encountering an issue recently where, if the procedure takes longer than approximately an hour to run, it will then hang indefinitely without moving on to the next line of the batch file. No error message is thrown up.
The most frustrating part is that certain procedures sometimes have the issue, and then the next day they do not.
Has anyone else ever encountered this issue? Or have any idea what could be causing this problem? I feel like it could be connection/firewall related, but it really is not my area of expertise!
You should instrument the batch file and use extended SQL tracing to reveal where ALL of your time is going. Nothing can escape proper instrumentation. You will find the source of the problem. What you do about it varies depending upon the particular problem (i.e., anti-pattern).
I see issues like this all the time. What I do is connect to the DB and see what is running by checking gv$session. The key is to identify what SQL the script is running, then see if there are any reasons for it to be "hung" (there are MANY possible reasons): for example, missing indexes; missing or out-of-date stats; workload on the instance; blocking locks; ...
If you have the SQL Tuning Advisor, you can run the SQL through there to get some ideas on solutions. An ADDM report may also provide some additional solutions.
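As a rough sketch of that gv$session check (standard v$ view columns; the username filter is a placeholder for the account the batch job connects as):

    -- Find the session running the report and see what it is waiting on.
    SELECT s.inst_id,
           s.sid,
           s.serial#,
           s.username,
           s.status,
           s.sql_id,
           s.event,            -- current wait event, e.g. a lock or I/O wait
           s.blocking_session, -- non-null if another session is blocking it
           q.sql_text
    FROM   gv$session s
           LEFT JOIN gv$sql q
                  ON q.sql_id  = s.sql_id
                 AND q.inst_id = s.inst_id
    WHERE  s.username = 'REPORT_USER'
    ORDER  BY s.last_call_et DESC;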

Breaking/opening files after failed jobs (sas7bdat.lck issue)

Good day,
Tl;dr:
a) Is it feasible to recover data from a .lck file?
b) If the .lck issue appears, can SAS be made to work around it?
We have automated mundane jobs running on SAS machines. Every now and then a job fails. This sometimes leaves a locked file behind (<filename>.sas7bdat.lck instead of <filename>.sas7bdat).
This issue prevents re-running the program, as SAS sees that there is already a file with the specified name and fails when it tries to access it. Message:
Attempt to rename temporary member of <dataset> failed.
Currently we handle them by manually deleting the file and adjusting the generation number.
The question is twofold: a) Is it feasible to recover data from a .lck file? b) If the .lck issue appears, can SAS be made to work around it? (Note that we have a lot of jobs, and adding checking code to all of them would be work intensive.)
The .sas7bdat.lck file is the one that SAS writes to as it's creating a data set. If the data step (or PROC) completes successfully, the original data set file is deleted and the .sas7bdat.lck file gets renamed to remove the .lck part. If any errors occur, the .lck file gets deleted and the original data set is left in place, unmodified. That's how SAS avoids overwriting existing data sets when errors occur.
Therefore, you should be able to just rename the file to remove the .lck, or maybe rename it to damaged.sas7bdat for example, and then try accessing the file. You can try a PROC DATASETS REPAIR (https://v8doc.sas.com/sashtml/proc/z0247721.htm) if you really need to get whatever data might be present.
The best solution will obviously be to correct whatever fault is causing your jobs to bomb out like this in the first place. No SAS program should ever leave .lck files lying about, even if it encounters errors - your jobs must actually be crashing the SAS environment itself, or perhaps they're being killed prematurely by another process. Simply accepting that this happens and trying to work around it is likely to just be storing up more problems for the future.

Timed out when accessing SQL Server 2008 R2

I'm developing a web project and for a start I need to create tables, procedures, views, etc.
At first (and in debug mode) the code runs fine and the first tables are created, but suddenly the transaction throws a "Timed out" error.
If I start running the code again (in debug mode), it doesn't make any changes.
In order to continue creating the rest, I have to build the code again (without having made any change to it). I publish again and it continues to create more until it stops again, and I repeat the same actions as before.
I haven't tested it on my live site yet to see what will happen, and the reason is simple... my ISP doesn't give me the option to create one more database (because of the contract I'm on).
Anyway, I need to know why this happens.
I have to say that I use some delays in my code, especially when it reads from an XML file. This file contains the structures of the tables, procedures, etc., which I read and execute through my code-behind in VB.NET.
After many attempts at finding a solution, I noticed that the problem starts with the reading of the XML file. I set some sleeps totalling 300 milliseconds around the processing of this file, and the code now runs OK, but slowly...
Thanks to all for their assistance.

SSIS Package Not Populating Any Results

I'm trying to load data from my database into an Excel file based on a standard template. The package is ready and it's running, throwing a couple of validation warnings stating that truncation may occur because my template has fields of a slightly smaller size than the DB columns I've matched them to.
However, no data is getting populated to my excel sheet.
No errors are reported, and when I click preview for my OLE DB source, it's showing me rows of results. None of these are getting populated into my excel sheet though.
You should first make sure that you have data coming through the pipeline. In the arrow connecting your Source task to Destination task (I'm assuming you don't have any steps between), double click and you'll open the Data Flow Path Editor. Click on Data Viewer, then Add and click OK. That will allow you to see what is moving through the pipeline.
Something to consider with Excel is that it prefers Unicode data types to non-Unicode. Chances are you have a database collation that is non-Unicode, so you might have to convert the values in a Data Conversion task.
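If you'd rather not add a Data Conversion task, a common alternative is to do the conversion in the OLE DB source query itself; a minimal sketch with placeholder table and column names:

    -- Cast non-Unicode (varchar) columns to NVARCHAR so they line up with
    -- the Unicode column types the Excel destination expects.
    SELECT CAST(CustomerName AS NVARCHAR(50)) AS CustomerName,
           CAST(City         AS NVARCHAR(30)) AS City,
           OrderTotal
    FROM   dbo.SourceTable;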
ALSO, you may need to force the package to execute in the 32-bit runtime. The VS application develops in a 32-bit environment, so the drivers you have visibility to are 32-bit. If there is no 64-bit equivalent, it will break when you try to run the package. Right-click on your project, click Properties, and under the Debug menu change the setting Run64BitRuntime to FALSE.
You don't provide much information. Add a Data Viewer between your source and your Excel destination to see if data is passing through. To do it, just double-click the data flow path, select Data Viewer and then add a grid.
Run your app. If you see data, provide more details so we can help you.
Couple of questions that may lead to an answer:
Have you checked that data is actually passed through the SSIS package at run time?
Have you double checked your mapping?
Try converting within the package so you don't have the truncation issue
If you add some more details about what you're running, I may be able to give a better answer.
EDIT: Considering what you wrote in your comment, I'd definitely try the third option. Let us know if this doesn't solve the problem.
Just as an assist for anyone else running into this - I had a similar issue and beat my head against the wall for a long time before I found out what was going on. My export WAS writing data to the file, but because I was using a template file as the destination, and that template file had previous data that had been deleted, the process was appending the data BELOW the previously used rows. So, I was writing out three lines of data, for example, but the data did not start until row 344!!!
The solution was to select the entire spreadsheet in my template file, and delete every bit of it so that I had a completely clean sheet to begin with. I then added my header lines to the clean sheet and saved it. Then I ran the data flow task and...ta-daa!!! Perfect export!
Hopefully this will help some poor soul who runs into this same issue in the future!

sql server batch database alters, batch database changes - best and safest way

We have a small development team of 5 developers working on a large enterprise level web based asp.net/c# system.
We do a lot of database updates which include stored procedure creations and alters as well as new table creation, column creation, record inserts, record updates and so on and so forth.
Today all of the developers place all change scripts in one large SQL change script file that gets run on our Test and Production environments. So this single file contains stored proc alters and record inserts, updates, etc. The file can end up being quite lengthy, as we may only do a Test or Production release every 1 to 2 months.
The problem that I am currently facing is this:
Once in a while there is a script error that may occur at any given location in this large "batch change script". Perhaps an insert fails, or an alter fails for a proc, for instance.
When this occurs, it is very difficult to tell what changes succeeded and what failed on the database.
Sometimes, even if one alter fails, the code will continue to execute throughout the script; other times it will stop execution and nothing further gets run.
So I end up manually checking procs and records today to see what actually worked and what actually did not and this is a bit painstaking.
I was hoping I could roll up this entire change script into one big transaction so that if any problem occurred I could just roll every change back, but that does not appear to be possible with batch scripts like this in sql server.
So then I tried backing up the databases before I ran the scripts, so that if an error occurred I could simply restore the DB, fix the problem and then re-run the fixed script. However, in order to restore a database I have to turn off our database mirroring, so this is also not totally ideal.
So my question is, what is the safest way to run batch scripts on a production database?
Is there some way that I can wrap the entire script in a transaction that I can roll back that I am not seeing?
Would it possibly be better for us to track and run separate script files, so that if one file fails we can just move it to a failed directory to be looked at and continue running all the other files?
Looking for advice and expertise.
Thank you for your time.
Matt
The batch script should be run on your QC database first so that any errors are picked up before production.
The QC database should be identical to production or as close as it can be to identical.
Each script should trap for errors and report the name of the script along with the location of the error using PRINT statements; then, if an error occurs when applying the changes to production, you at least have the name of the script and the location of the error within that script.
If your QC database is identical or very close, production errors should be very rare.
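A minimal sketch of that kind of per-script error trapping, using T-SQL TRY/CATCH and PRINT (the script and object names here are placeholders):

    BEGIN TRY
        -- placeholder change: one logical alteration per script
        ALTER TABLE dbo.Customer ADD MiddleName VARCHAR(50) NULL;
    END TRY
    BEGIN CATCH
        -- report which script broke and where, as suggested above
        PRINT 'Script 042_add_customer_middlename.sql failed';
        PRINT 'Error ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
            + ' at line ' + CAST(ERROR_LINE() AS VARCHAR(10))
            + ': ' + ERROR_MESSAGE();
    END CATCH;

If a statement fails, execution drops into the CATCH block instead of stopping unpredictably, so the output at least tells you which script and line to look at.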