Changing connection managers from SQLNCLI11.1 to SQLOLEDB.1?

I've been porting some old SSIS packages from a legacy system into a new system. I was running some tests, only to see error output related to the ODBC connection, with code 0xC0202009.
The package's two connection managers are both built with SQLNCLI11.1 as the provider.
I believe I can fix the error if I switch that to SQLOLEDB.1. Is there a simple way to do that without having to rebuild the entire package from scratch? Is there an XML file somewhere in which I can just replace the old value with the new one?

The only way is to open the package (.dtsx) file with a text editor (Notepad, Notepad++) and search for this property and replace it manually (a .dtsx file is an XML file).
But replacing this property may cause other errors, since each provider has different properties, so take a backup of these packages before editing.
Take a look at this question; it may help you (check my answer and the others; it will give you an idea of how a .dtsx file can be read outside of Visual Studio):
Automate Version number Retrieval from .Dtsx files
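If you have several packages to patch, a small script can do the search and replace in bulk. A minimal sketch in C# - the folder path is a placeholder, and it assumes the provider shows up as Provider=SQLNCLI11.1 inside the connection managers' ConnectionString values:

    using System;
    using System.IO;

    class FixProvider
    {
        static void Main()
        {
            // Placeholder folder; point this at wherever the .dtsx files live.
            foreach (var file in Directory.GetFiles(@"C:\SSIS\Packages", "*.dtsx"))
            {
                // Keep a backup copy before touching anything.
                File.Copy(file, file + ".bak", true);

                // The provider sits inside the ConnectionString attribute,
                // e.g. Provider=SQLNCLI11.1;Data Source=...;Initial Catalog=...;
                var xml = File.ReadAllText(file);
                var patched = xml.Replace("Provider=SQLNCLI11.1", "Provider=SQLOLEDB.1");

                File.WriteAllText(file, patched);
                Console.WriteLine("Patched " + file);
            }
        }
    }

The .bak copies make it easy to diff or roll back if a provider-specific property turns out to be incompatible.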

Related

Better sql lite gives error for the code written in migration file

better-sqlite3 gives an error and stops executing the rest of the code written in the .sql migration file, due to which we might have to apply the changes manually.
For example:
You have table x in the db file.
You add a column through the migration .sql file which is not yet in table x.
Run the exe; the column gets added.
Close the exe.
Run the exe again.
Now you get an error that the column already exists, and execution of the rest of the .sql content stops.
I also found issues such as it not supporting 'IF NOT EXISTS', etc. May I know the reason, as I'm not very familiar with database things?
Here is what I have used for the database and its helper package:
https://www.npmjs.com/package/better-sqlite3
https://www.npmjs.com/package/better-sqlite3-helper
Thank you in advance for a helping hand.

Viewing SQL for SQL Server Integration Services (SSIS) Transformations

I am new to databases and SSIS. Can anyone please let me know if there is a way to view the SQL code generated by SSIS transformations?
I know that in BI reporting tools such as Business Objects, when we pull fields or columns into the reporting panel, we can view the corresponding SQL.
Similarly, in SSIS, is there any option to view the SQL for SSIS transformations?
Thanks in Advance
Raj
SSIS, unlike other tools, does not generate SQL per se. You can include your own SQL inside tasks and components, but I guess you are not interested in the SQL you write yourself, but rather in what SSIS is doing behind the scenes.
An SSIS package is essentially an XML-structured file with a collection of properties marking up the flow and process of its components. You can access this XML file by right-clicking on the package and selecting View Code:
The example above is an empty package, so it's a very small XML file. In a complex package this file can be very large, as you will see all the tasks, components, parameters, variables, etc., as well as your own SQL code and C#/VB scripts, if any.
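Since the package is just XML, it can also be read programmatically outside Visual Studio. A minimal sketch in C# with a placeholder path; the DTS namespace and attribute names below match the SSIS 2012+ package format and may differ slightly in older versions:

    using System;
    using System.Xml;

    class ListPackageObjects
    {
        static void Main()
        {
            var doc = new XmlDocument();
            doc.Load(@"C:\temp\Package.dtsx");   // placeholder path

            var ns = new XmlNamespaceManager(doc.NameTable);
            ns.AddNamespace("DTS", "www.microsoft.com/SqlServer/Dts");

            // Connection managers defined in the package
            foreach (XmlElement cm in doc.SelectNodes("//DTS:ConnectionManager", ns))
                Console.WriteLine("Connection manager: " + cm.GetAttribute("DTS:ObjectName"));

            // Tasks and containers in the control flow
            foreach (XmlElement ex in doc.SelectNodes("//DTS:Executable", ns))
                Console.WriteLine("Executable: " + ex.GetAttribute("DTS:ObjectName"));
        }
    }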
When the project is built, it generates an .ispac file, which is nothing more than a zip file containing the package(s) in the project, plus a manifest, a content-types file, and any other file required for the package to be deployed and executed.
You can see what is inside an .ispac by renaming it to .zip and opening it. In this example I've built the empty package above, renamed the .ispac to .zip, and opened it:
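If you would rather not rename the file, the same contents can be listed programmatically. A small sketch, assuming an .ispac at a placeholder path:

    using System;
    using System.IO.Compression;

    class InspectIspac
    {
        static void Main()
        {
            // Open the .ispac directly as a zip archive; no rename needed.
            using (var archive = ZipFile.OpenRead(@"C:\temp\MyProject.ispac"))
            {
                // Expect entries such as the .dtsx package(s), the project manifest,
                // [Content_Types].xml and any project parameter files.
                foreach (var entry in archive.Entries)
                    Console.WriteLine(entry.FullName);
            }
        }
    }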
In summary, unlike tools that are purely SQL generators, in SSIS there is not much generated code you can see; all you can see is its structure, as shown above.
Also, as mentioned by Marko Ivkovic in the comments, it might be possible to get some more info about what is happening at run time by using tools like SQL Profiler.

SQL Agent Job error when trying to run SSIS package

I have an SSIS package which runs fine when I run it in VS. I have changed the protection level to DontSaveSensitive. I have also given permission for any related files to be overwritten by the SQL Agent job account.
The error I am getting is:
Parsing XML with internal subset DTDs not allowed. Use CONVERT with style option 2 to enable limited internal subset DTD support.
So far I have got the impression that I need to change the code, but if the code were really the problem my package wouldn't run in the first place, which clearly isn't the case.
How can I resolve this?

VBA modules have been saved with errors

I have a database with tables that are linked to a different database on a network drive. (The other database is on a different machine, and a network drive on my machine is mapped to it.)
While I was running some VBA code, the connection to the network drive was broken and I got an error message.
When I tried accessing any local tables in my database, or when I tried closing Access, I got error messages.
I closed Access through Task Manager, and now when I open it I get the following message:
The VBA modules in this database appear to have been saved with errors. Access can recover the modules, but you should backup the database first...
I backed up and then clicked OK, but the modules have been entirely wiped out.
In the backup I cannot access the modules; I just get that message again.
Please help! Sadly I do not have a backup from before, and I need the modules. Is there a way to recover at least the modules, even as text files?
I tried importing the modules into a different database, but I get the same error message.
EDIT: When I try to recover I get the following message:
Cannot open database ''. It may not be a database your application recognizes, or your file may be corrupt.
What does this mean? It seems like Access is trying to access an empty string.
Oh... that one hurts! I've been in a similar situation, though not exactly the same way. The good news is that you can almost always recover from these situations. Try this: create a new blank database and import all tables, queries, forms, reports, macros, and modules into the new database...
If that's not possible, you may have to decompile the database. See: https://www.fmsinc.com/microsoftaccess/errors/Bad_DLL_Calling_Convention.asp
Do double-check the path. Most likely it is:
"C:\Program Files (x86)\Microsoft Office\root\Office16\MSACCESS.EXE"
and a space is missing:
"C:\Program Files (x86)\Microsoft Office\root\Office16\MSACCESS.EXE" /decompile
That's for A2016. It is Office15 for A2013.

What is the best way to manage "non-SQL Server" SQL objects within Visual Studio 2010?

Visual Studio has a Database Project for SQL Server. This has a number of advantages: it hosts configuration settings and database objects in one place, the .sql files are part of the regular .NET solution - visible in the Solution Explorer and editable in Visual Studio - and there is a mechanism for generating a deployment script. With each individual database object in its own file, tracking changes and source control is greatly simplified.
Has anyone had any success using Database Projects with "non-SQL Server" databases? We use Sybase, which uses T-SQL and is very similar to SQL Server, so I'm hopeful.
Or is there an alternative approach? I guess I could use a standard project (.csproj) and call a custom command-line application as part of the post-build step to convert the .sql files into a deployment script.
Any ideas would be welcome.
Thanks
OK, I'll answer my own question.
I added all of our SQL objects to their own .sql files within a Visual Studio .dbproj project. However, minor syntactic incompatibilities between the Sybase version of RAISERROR and the Microsoft version caused the validation code built into Visual Studio to become unhappy. The problem with the database project was that this actually caused a compilation error - which basically made it a show-stopper.
So I scrapped that idea and added the .sql files to a standard .csproj project file. I then implemented some custom code that would load all of the .sql files and aggregate them into a deployment script when invoked. I added a call to the custom code to the post-build step of the .csproj file so that whenever it was compiled it would output a deployment script - which works like a dream with our build server.
In order to get some of the benefits of the .dbproj, I looked into writing a full SQL parser, but was quickly discouraged by some of the posts on SO. Instead I did some rudimentary parsing with regex, which got me a few cool features without a lot of effort (a rough sketch follows the list below):
The code could detect dependencies between the various .sql files, and add them to the deployment script in the correct order to avoid sysdepends warnings.
Where there were no dependencies, objects were ordered based on object type (stored procedure, function, grant statement, etc.) and then by name, so that the resulting script was always ordered the same - which is very important if you need to diff two versions of the script.
The deployment script can figure out some of the required permissions, so I don't need to keep track of all of the GRANT statements.
Stored procedures that are in the database but not in the script can be dropped automatically - so I don't need to keep track of what state each database is in - we just run the script and everything is in the correct state.
We have a few stored procedures that our automated tests call that shouldn't be deployed. The code can detect these and include them in a Debug build and exclude them in a Release build.
The custom code also generates a diff script that determines what changes the deployment script will make to a database and prints them out. This allows the person who is running the script to get an idea of what it will do. For example, the diff script might tell them that no changes will be made - so they don't need to run the deployment script at all - which is kind of handy if it saves them logging in at 3am to take a database offline and take backups etc.
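A rough sketch of that post-build aggregation, with placeholder paths, an illustrative type list, and much simpler regexes than the real thing:

    using System;
    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    class BuildDeploymentScript
    {
        // Deployment order by object type; grants go last.
        static readonly string[] TypeOrder = { "FUNCTION", "VIEW", "PROCEDURE", "TRIGGER", "GRANT" };

        static int TypeRank(string sql)
        {
            for (int i = 0; i < TypeOrder.Length; i++)
            {
                string pattern = TypeOrder[i] == "GRANT"
                    ? @"^\s*GRANT\b"
                    : @"\bCREATE\s+" + TypeOrder[i] + @"\b";
                if (Regex.IsMatch(sql, pattern, RegexOptions.IgnoreCase | RegexOptions.Multiline))
                    return i;
            }
            return TypeOrder.Length;   // anything unrecognised goes at the end
        }

        static void Main()
        {
            // Placeholder locations for the .sql sources and the generated script.
            var files = Directory.GetFiles(@"C:\Source\SqlObjects", "*.sql", SearchOption.AllDirectories);

            var ordered = files
                .Select(f => new { Name = Path.GetFileName(f), Sql = File.ReadAllText(f) })
                .OrderBy(x => TypeRank(x.Sql))   // object type first...
                .ThenBy(x => x.Name);            // ...then name, so the output is always ordered the same

            using (var output = new StreamWriter(@"C:\Build\deploy.sql"))
            {
                foreach (var item in ordered)
                {
                    output.WriteLine("-- " + item.Name);
                    output.WriteLine(item.Sql);
                    output.WriteLine("GO");
                }
            }
        }
    }

The real code also resolves dependencies between files, generates the diff script, and handles the grant and Debug/Release filtering described above; the type-then-name ordering shown here is what keeps the generated script stable enough to diff.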
So the end result is that all of my SQL objects are in separate files making them easy to work with in Visual Studio and manage under source control. For the first time since I started this job, I can look at the history in source control and tell what files have been changed (before this we had one enormous .sql file with absolutely everything in it).