SQL Server 2012 - generating scripts - Save to file = Not Run

I am creating scripts of a SQL Server 2012 database because I cannot back up the database to a local drive. I understand how to create the script, but at the end of the process the wizard seems to get stuck at the "Save to file" step, which stays at "Not Run".
The database is huge, but it appears that not much data is being written to the drive.

This appears to be a bug in SQL Server 11.0.6020 tools.
There is no trace in the event log or the server log; the script generator wizard just stops at the last step, which is writing the script to the destination (a file or a new script window - either remains in the status "Not Run" forever, with "Cancel" as the only possible user action).
Some experimenting showed that it indeed depends on script size.
I was not able to reproduce the problem on any lower or newer version of Microsoft SQL Server.
The solution is annoying: click through the wizard multiple times, first scripting only the database definition in parts:
datatypes, functions and tables
then only views, and
then only procedures
You can later concatenate the three resulting files if that is a requirement.
This approach will not help if any of the three parts alone is bigger than the (unknown) threshold. Finally, script the data, selecting only smaller sets of tables for each run.

As mentioned in one of the comments on the original post, this is still an issue with SQL Server Management Studio v17.9.1. To work around it, I was able to use the "Single file per object" option. I would definitely have preferred a single file, but at least it worked this way. There was still a bit of a delay between the time that the status for all database objects showed "Completed" and the time that the "Save to file" line item changed from "Not Run" to "Completed".

I'm using SQL Server Management Studio version 18.6. I wanted to export to a query in a new window, but the result was "Not Run", and a few seconds later it said "Error". What worked instead was saving the result to a script file, which completed successfully.

Related

Visual Studio Writes Partial Data to Excel File

I have a Visual Studio package that does not write all the data needed to a blank Excel file.
More specifically, the package goes through these steps:
Copies a template Excel file to overwrite a shell file.
Connects to a SQL DB.
Runs a SELECT statement.
Converts one column to Unicode.
Pastes to the shell file.
There are a few more steps afterward (like emailing the Excel file), but those work fine.
The issue comes up at step 4. When Visual Studio or SSIS runs the package, I pull about 1400 rows. When I just run the select statement in SQL Server Management Studio or as a connection in Excel, I pull about 2800 rows - and 2800 is the right number.
I've tried rebuilding the process from scratch (Excel files, connection files, etc.), but the rebuild produces the same result. It's like Visual Studio just doesn't like the select statement. I double-checked the mappings - all good. The data is pasting and being delivered fine, just not enough of it. No errors in Visual Studio either - it gives me that lovely (albeit confusing) check mark.
This was running as an automated package for about a year before this happened and I have no explanation. Seriously a headscratcher.
The only other clue I have is that when I pull the data manually with the select statement, there are no null values in a particular column, but when I run the package with that exact same select statement, the output contains a null in that column. It is almost as if the select statement in Visual Studio is pulling slightly different data than the manual pull, yet the statements are exactly the same, so I don't know why that would be.
Any ideas?
I've seen this issue before. The timeout on the connection was set to a low value, which caused it to pull only part of the data before the timeout hit and killed the connection. Make sure you are not swallowing any exceptions, and double-check your timeouts.
Thanks for replying folks!
In the end, I solved the issue by completely remaking the package. While trying the solutions above, I was using the same file but rebuilding the connections and queries from scratch; once I started from a new file, it ran without error.
I guess to all those new folks to Visual Studio - always consider remaking the file from nothing!

SQL Server database: amalgamate 90 database update scripts into a single script

I have an application that has been released for several years. I have updated the SQL Server database around 90 times and each time I have created an update SQL script, so when the user runs the new software version the database is updated.
I would like to amalgamate all the updates into a single SQL script to make deployment faster and simpler. Is there an easy way to do this? I presume I could simply grab the latest database after it has run through all 90 updates via SQL Server Management Studio?
EDIT
For clarification: the software I wrote automatically applies new database updates when the user downloads the latest version. This is done via C# / .NET; on startup it looks for embedded SQL scripts named in the format XX_update.sql and calls each script one by one, i.e. (a minimal example of one such script follows the list):
1_update.sql - this creates the tables and initial data etc. This was my initial release database.
2_update.sql - updates to the initial database such as adding a new SP or changing column datatype etc
3_update.sql
4_update.sql
...
90_update.sql (4 years and lots of program updates later!).
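For illustration, one of these incremental scripts might look something like this (the table, column and procedure names here are just placeholders, not from the actual application):

-- Hypothetical example of one incremental update script
ALTER TABLE dbo.Customer ADD EmailAddress nvarchar(255) NULL;
GO
CREATE PROCEDURE dbo.GetCustomerByEmail
    @EmailAddress nvarchar(255)
AS
    SELECT CustomerId, Name, EmailAddress
    FROM dbo.Customer
    WHERE EmailAddress = @EmailAddress;
GO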
Ideally, I would install my software and create a brand new database by running through all 90 update scripts, then take this database and convert it into a script that can replace all 90 scripts above.
This is too long for a comment.
There is no "easy" way to do this. It depends on what the individual updates are doing.
There is a process you can follow in the future, though. Whenever an update occurs, you should maintain scripts both for incremental updates and for complete updates. You might also want to periodically introduce major versions, and be able to upgrade to and from those.
In the meantime, you'll need to build the complete update by walking through the individual ones.
I use a similar system at work, and while I prefer to run the scripts separately, I have sometimes amalgamated several scripts without problems when they had to be deployed by another person.
In SQL Server the rule is that, as long as you separate the scripts with GO and use SQL Server Management Studio or another tool that processes the batch separator properly, there is no problem in amalgamating them: the result still looks like separate scripts to SQL Server, because instead of being sent as one big script it is sent in batches, using GO as the batch separator.
The only problem is that if you have an error in the middle of the script, the tool will continue sending the rest of the batches to the server (at least by default; maybe there is an option for changing this). That is why I prefer, when possible, using a tool to run them separately, just to err on the safer side, stop if there is a problem, and easily locate the problematic command.
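To make this concrete, a rough sketch of what an amalgamated file can look like - just the individual scripts pasted one after another with GO between them (the object names are made up):

-- contents of one update script
ALTER TABLE dbo.Orders ADD ShippedDate datetime NULL;
GO
-- contents of the next update script
CREATE VIEW dbo.OpenOrders AS
    SELECT OrderId, CustomerId
    FROM dbo.Orders
    WHERE ShippedDate IS NULL;
GO
-- If the combined file is run in SQLCMD mode, adding ":on error exit" at the top
-- makes execution stop at the first failing batch instead of continuing.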
Also, for new deployments your best option is what you say: use a database that is already updated instead of taking the original one and applying all the updates. And to avoid being in this situation again, you can keep an empty template database that is not used for any testing and update it whenever you update the rest of the databases.
Are you running your updates manually on the server? Instead, you can create a .bat file, a PowerShell script, or an exe. The update scripts in your folder can be numbered, and the script can just pick up the updates one by one and execute them over the database connection instead of you doing it.
If you have multiple script files and want to combine them into one, rename them as .txt, combine them, and rename the resulting file as .sql.
https://www.online-tech-tips.com/free-software-downloads/combine-text-files/
This is easy. Download SQL Server Data Tools and use that to create the deployment script. If there's no existing database, it will create all the objects; if targeting an older version of the database, it will perform a diff against the target database and create scripts that create the missing objects and alter the existing ones.
You can also generate scripts for the current version using SSMS, or use SSMS to take a backup and use a restore in an install.
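For reference, the backup/restore route can also be scripted instead of using the SSMS dialogs; roughly like this, where the database name, paths, and logical file names are placeholders:

BACKUP DATABASE MyAppDb
    TO DISK = N'C:\Backups\MyAppDb.bak'
    WITH INIT;
GO
-- On the target server:
RESTORE DATABASE MyAppDb
    FROM DISK = N'C:\Backups\MyAppDb.bak'
    WITH MOVE 'MyAppDb' TO N'D:\Data\MyAppDb.mdf',
         MOVE 'MyAppDb_log' TO N'D:\Data\MyAppDb_log.ldf',
         RECOVERY;
GO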

Some useful functions of MySQL Workbench for SQL Server Management Studio

Our project is moving from MySQL to MS SQL, and after a long time working with MySQL Workbench I really miss some features in SQL Server Management Studio (2014).
Do you know whether they exist in SSMS, or is there an alternative/replacement application for SSMS for working with the database?
Functions are listed below:
Generate an update data script that I can review and copy-paste, rather than updating data as soon as I move to another row when the table is opened for editing.
Some changes are still made in the database in our project, and sometimes it's easier to add some rows manually in 5 tables, get the script, test it, and run the script in the production environment. I don't want to write a script for each update, and I don't want to make a mistake when copying data to the production server using the edit-table option.
Review the update table script BEFORE the changes are made, not after (I am talking about Tools - Options - Designer - Auto generate change scripts).
Upload a file into a binary field using a file-selection dialog.
Again, I know about using the OPENROWSET function; I am just interested in how to do it the way I used to (a minimal OPENROWSET sketch follows this list).
The ability to view large text fields in a convenient way in SSMS. Right now I have to copy data from a field and paste it into Notepad (for example, an error message with a long trace log).
Save a few tabs with some useful scripts and open all of them when I open SSMS.
Is there any way to organize tabs to be able to work with 10+ tabs more effectively? Now only 6 of them can be shown on the screen (compare that to 15 tabs in MySQL Workbench).
Simple 'search field' (like Ctrl+F in Excel) to be able to search data in all fields displayed on the screen.
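For the binary-field item above, the OPENROWSET workaround I mentioned looks roughly like this (the table, column, and file path are made up):

-- Load a file from disk into a varbinary(max) column
UPDATE dbo.Attachments
SET FileData = (
    SELECT BulkColumn
    FROM OPENROWSET(BULK N'C:\temp\trace_log.txt', SINGLE_BLOB) AS f
)
WHERE AttachmentId = 1;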
I would appreciate any ideas.
Thank you.

Updating System variables in SSIS package

Similar to this post
I have an SSIS package with a Script Task that creates an Excel file on disk and populates it with data from a SQL stored procedure (using Microsoft.Office.Interop.Excel). This works great when testing and when running the deployed package manually through the SSIS Catalog, but when I schedule the task to run automatically through SQL Server Agent, the package fails at the Script Task step. I have the job running as a proxy account that is the same as the account I'm logged into the server with when testing (and the same as the account that works when running the packages manually).
My understanding is that even though the job is running using a proxy, any desktop interaction occurs within the profile context of the SQL Server Agent login. Since that profile isn't actively logged in, the interaction fails. Digging in more, there is a bool system variable in the package called "InteractiveMode" that is set to "False". I have a feeling that if I could switch that to "True", everything would be hunky-dory. Trouble is, that variable is only accessible to my Script Task as "ReadOnly"...
Is there any way to set the System::InteractiveMode variable in an SSIS package manually or programmatically at runtime? Please help! I'm having to run these scheduled jobs manually for now, which is a big pain.
Thanks.
I had this problem a few months ago, and it turned out that the execution options needed to be set to use the 32-bit runtime. If you're using SQL Server 2008 R2, you can open your job and double-click on the step; it's under the Execution Options tab.
If you continue to have errors, you may want to consider changing the package so that it uses a File System Task to create/rename the Excel document and then a Data Flow Task to move the data from your stored procedure to your Excel document. Depending on your data, you may need to add a Data Conversion step in between. Here's a good article on the topic: http://www.mssqltips.com/sqlservertip/3046/sql-server-integration-services-data-type-conversion-testing/
Edit:
I haven't used SQL Server 2012 yet, but according to MSDN, it looks like the option is under the Configuration tab. Here's their article: http://msdn.microsoft.com/en-us/library/gg471507(v=sql.110).aspx

Visual FoxPro SQL Server - Can't find the call to SQL Server in FoxPro

My DBAs are saying my FoxPro application or .DBC (database container) is hitting SQL Server, but searching all the code I can't find the SQL call (FMTONLY ON/OFF).
This is the SQL command being sent:
FMTONLY ON/OFF
Getting called 16260 times every few minutes?
Any ideas how to find this or what could be causing it, maybe my DBC file?
If you can't find it embedded in the .DBC but are not entirely sure it's NOT in there, you can use a VFP tool to dump its contents to a .prg file: GENDBC, which is in your installation folder at {VFP}\Tools\GenDBC\GenDBC.prg
Open your database, then run that program; it will cycle through all the tables, indexes, relations, connections, etc. and generate the code corresponding to everything in it. You could then look at the output .prg file and see if something in there might be triggering what you can't see otherwise.
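As background - this is an assumption about the likely cause rather than something stated in the question - SET FMTONLY is usually not written by application code at all: clients that prepare statements through the SQL Server ODBC/OLE DB drivers (which is how VFP remote views and SQL pass-through connect) often issue it automatically so the server returns only result-set metadata. You can see its effect with a throwaway query:

SET FMTONLY ON;
-- With FMTONLY ON the server returns column metadata only; no data rows come back.
SELECT name, create_date FROM sys.databases;
SET FMTONLY OFF;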