I have an on-premises SQL Server database that is the backend for our project management software, an Azure SQL table that contains limited data used for reporting with Power BI, and a linked server to connect the two. Both databases have a specific user/pass account just for this, which is stored in the linked server. Here's the problem:
When I run a SQL Server Agent job to update the Azure table from the on-prem table using the linked server, everything works fine.
When I manually run the same SQL UPDATE statement from an open window in SSMS, everything works fine.
When I use a workflow in the project management software to trigger a stored procedure that executes the same code (update Azure from the on-prem database), I get the following error:
The OLE DB provider "SQLNCLI11" for linked server "LinkedServerName" reported an error. One or more arguments were reported invalid by the provider.
The operation could not be performed because OLE DB provider "SQLNCLI11" for linked server "LinkedServerName" was unable to begin a distributed transaction.
OLE DB provider "SQLNCLI11" for linked server "LinkedServerName" returned message "The parameter is incorrect.". Error occurred in: STORED_PROCEDURE_NAME
Error occurred on line 23
There's nothing on line 23, and as I mentioned earlier, the same update statement works when I run it manually and when a SQL Server Agent job runs it. Why does it fail when the code is executed by the project management software? Anyone have experience with this?
This is the code to insert the data from on prem into Azure:
INSERT INTO [LinkedServerName].DatabaseName.SchemaName.TableName ([ProjectNumber],[CreateDate],[SyncDate])
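(The SELECT half of the statement was cut off above; the full pattern is presumably along these lines, where dbo.Projects stands in for the real on-prem source table:)
INSERT INTO [LinkedServerName].DatabaseName.SchemaName.TableName ([ProjectNumber],[CreateDate],[SyncDate])
SELECT p.ProjectNumber, p.CreateDate, GETDATE()  -- SyncDate stamped at copy time
FROM dbo.Projects AS p;  -- hypothetical source table name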
I'm not sure about this with Azure, but I had a similar issue with a remote server and had to disable promotion of distributed transactions. It might not be the best thing to do in a production environment, so read up carefully on the implications before doing it.
I'm only suggesting this to narrow down what the real issue is.
Change this setting and test.
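If you'd rather script it than click through the linked server's properties dialog, the same option can be set with T-SQL; a minimal sketch, using the linked server name from the question:
EXEC master.dbo.sp_serveroption
    @server   = N'LinkedServerName',
    @optname  = N'remote proc transaction promotion',
    @optvalue = N'false';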
I ended up taking a different strategy. We know that using a scheduled SQL Agent job to insert data into Azure works; it just wouldn't work in any script run by our software and the user it uses to access the on-prem database. So I created a stored procedure in the on-prem database that the software executes through a built-in workflow. The procedure saves the data to a staging table, then starts the SQL Agent job, which reads from the staging table and inserts the data into the Azure table.
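Roughly, the on-prem procedure looks like this (a sketch only; every table, procedure, and job name here is a placeholder):
CREATE PROCEDURE dbo.usp_QueueAzureSync
AS
BEGIN
    SET NOCOUNT ON;

    -- Stage the rows locally, where the software's SQL user needs only ordinary table permissions
    INSERT INTO dbo.AzureSyncStaging (ProjectNumber, CreateDate, SyncDate)
    SELECT p.ProjectNumber, p.CreateDate, GETDATE()
    FROM dbo.Projects AS p;

    -- Start the Agent job that pushes the staging table to Azure over the linked server
    EXEC msdb.dbo.sp_start_job @job_name = N'PushStagingToAzure';
END;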
Everything worked in the testing environment, but when I replicated all the scripts into production I got a permissions error. After a lot of research and testing adjustments to the user, I got it working by assigning the roles TargetServersRole and db_ddladmin to the user in the msdb database.
[SSMS screenshot: the user's role memberships in msdb]
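For reference, the equivalent T-SQL (the user name is a placeholder; on instances older than SQL Server 2012, use sp_addrolemember instead of ALTER ROLE):
USE msdb;
ALTER ROLE TargetServersRole ADD MEMBER [SoftwareUser];
ALTER ROLE db_ddladmin ADD MEMBER [SoftwareUser];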
Below are the two articles that led me to this conclusion:
Article 1
Article 2
We have a SQL Server 2014 database that is basically a set of views into a 3rd-party remote SQL Server instance that we connect to as a linked server. They are upgrading to 2016 and said that we will soon need a 2016 instance to connect to their system.
I set up a new instance on the same server as our SQL Server 2014 instance and created a new database there. In the 2014 instance we keep a snapshot of a few tables, which we refresh a few times each day. Pulling data from their system works fine with this setup, but we have a couple of spots where we update the 3rd-party database, and that stopped working after the change.
I either receive the error
'OLE DB provider "SQLNCLI11" for linked server "" returned message "The transaction manager has disabled its support for remote/network transactions.".'
or an error about there not being transactions. The second error only occurred while I was troubleshooting the issue yesterday.
I tried adjusting the DTC settings on our server, but that just gave me the second error instead, and it seemed to cause an issue with remote connections to our database...
This is all that is in the update that is breaking:
UPDATE [2016Instance].[DBName].dbo.EmpPers
SET eepAddressEMail = @CurEmpEmail
WHERE eepEEID = @CurEEID
Is there something else that needs to be set up for this to work? I'm considering reworking this to run from the SQL Server 2016 instance instead, but I thought I would ask here first.
I ended up getting this to work by moving our stored procedures to the SQL 2016 instance, setting up a link back to the SQL 2014 instance, and changing the stored procedures on the SQL 2014 instance to just execute the new versions on the SQL 2016 instance. This also doesn't require the DTC settings mentioned in the comments on the main question.
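The wrapper pattern looks roughly like this (a sketch; the parameter names come from the question, their types are guesses, and 'RPC Out' must be enabled on the linked server for the remote EXEC to work):
-- On the SQL 2014 instance, the old procedure just forwards to the 2016 instance
ALTER PROCEDURE dbo.usp_UpdateEmpEmail
    @CurEmpEmail NVARCHAR(255),
    @CurEEID     INT
AS
BEGIN
    EXEC [2016Instance].[DBName].dbo.usp_UpdateEmpEmail @CurEmpEmail, @CurEEID;
END;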
I created a database in SQL Server 2012 with the mdf and ldf pointing to an external hard drive attached to my machine. I created tables and stored procedures, populated tables, etc.
I removed the hard drive at the end of the day.
Today, when I attached the hard drive and tried to access the DB in Management Studio, I see the name of the database with (Recovery Pending).
What does this mean? I see the mdf and ldf files in the D drive.
What worked for me was to take the database offline, then back online - no RESTORE DATABASE was necessary in this case, so far as I can tell.
In SQL Server Management Studio:
right-click on the database
select Tasks / Take Offline ... breathe deeply, cross fingers...
right-click on the database again
select Tasks / Bring Online
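The T-SQL equivalent, if you prefer a query window (note that ROLLBACK IMMEDIATE kills any open connections, so use it with care):
ALTER DATABASE [YourDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [YourDb] SET ONLINE;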
When you removed the drive, you forcefully disconnected the database from the SQL Server service. SQL Server does not like that.
SQL Server is designed by default so that any database created is automatically kept open until either the computer shuts down, or the SQL Server service is stopped. Prior to removing the drive, you should have "Detached" the database, or stopped the SQL Server service.
You "may" be able to get the database running by executing the following command in a query window: RESTORE DATABASE [xxx] WITH RECOVERY;
You could, although I would not normally recommend this, alter the database to automatically close after there are no active connections.
To accomplish this, you would execute the following query:
ALTER DATABASE [xxx] SET AUTO_CLOSE ON WITH NO_WAIT;
Another way that works is to restart the Database Engine. If that is feasible and practical for this server, it may be faster when you have several databases on the external drive.
In SQL Server Management Studio:
Attach the external drive
right-click on the database engine: Server Name (SQL Server 12.0.2000 ... etc)
Select "Restart"
Answer Yes when asked if you want to proceed
Below worked for me:
Run SQL Management Studio as Administrator (right-click on the SQL Management Studio icon and select 'Run As')
Take database offline
Detach the database, using the Drop Connections option
Attach the database
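(If you'd rather script the detach/attach, it looks roughly like this; the database name and file paths are placeholders for wherever the mdf/ldf sit on the external drive:)
USE master;
EXEC sp_detach_db @dbname = N'YourDb';
CREATE DATABASE [YourDb]
    ON (FILENAME = N'D:\Data\YourDb.mdf'),
       (FILENAME = N'D:\Data\YourDb_log.ldf')
    FOR ATTACH;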
If you were using this database with a Web App running on IIS then you may need to restart the IIS Server
Hope this helps someone
If SQL Server knows that database recovery needs to be run but something is preventing it from starting, it marks the database as being in the 'Recovery Pending' state. This is different from the SUSPECT state, because it cannot be said that recovery is going to fail; it just hasn't started yet.
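You can confirm which databases are in this state with a quick query:
SELECT name, state_desc
FROM sys.databases
WHERE state_desc = N'RECOVERY_PENDING';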
Check this thread: How to fix Recovery Pending State in SQL Server Database?
Scenario: a system in a VM in Azure using MVC and a SQL database (not in the VM), working under normal conditions for 2 or 3 months. Suddenly, stored procedures called from my MVC web app or from SQL Server Management Studio time out. Queries like SELECT * FROM Table work perfectly.
EDIT: timeouts also happened while executing ALTER or CREATE PROCEDURE queries.
No proper solutions or explanations found.
Workaround: restore an old backup into a new SQL database and change the connection string to point at it. While the system is running on the backup, try to back up the database with issues (first close all connections to that DB, e.g. from Management Studio). It may take some time and some retries. After the backup is done, restore it into a new DB and change the connection string back. You will lose a few minutes of data and have some downtime, but you will have your system working again in Azure.
Any ideas about this issue in the Stored Procedures in Azure?
At first glance, this smells like a parameter sniffing issue; it is probably not related to Azure.
Check this thread for details on what the issue is, and how to resolve it: Parameter Sniffing (or Spoofing) in SQL Server
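A quick way to test that theory is to force a fresh plan on each execution; a sketch against a hypothetical procedure (OPTION (RECOMPILE) trades plan reuse for per-call compilation, so treat it as a diagnostic first and a fix second):
ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
AS
BEGIN
    SELECT o.OrderId, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
    OPTION (RECOMPILE);  -- compile a plan for the actual parameter value on every run
END;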
Hi, my company is deciding to switch its existing application to the Azure platform (only the SQL part), so we need to upload our DB from local to the cloud. For the migration I came across various tools:
1. Cerebrata's tools
2. SQL Azure Migration Wizard
3. Microsoft SQL Data Sync
4. Conventional scripting via Management Studio
But all the above tools turned out to have limited capabilities; I could not work flawlessly with any of them.
Cerebrata's tools: the main drawback was the fields for Application User Name and Application Key, which my admin hasn't shared. There is also manual mapping of fields between Azure and local.
SQL Azure Migration Wizard: generates scripts and executes them too, but with lots of errors. I was using version 2.1. It is also very slow; it seems to be a replica of SQL Server Management Studio.
SQL Data Sync: I found it cool since it's an MS product, but it has limitations too: it only connects to a local SQL Server using Windows Authentication, or you need to explicitly allow the required access. Even after allowing that, I got a SQL Azure Provisioning Error while syncing.
SQL Server Management Studio: this is the easiest way but requires a lot of manual work before the actual migration. What I did was generate a script of the entire DB (almost 101,123 lines for a single DB) and try to execute it on Azure. The very first time I hit a keyword mismatch error, so I removed everything after the primary key declaration such as WITH (PAD_INDEX = OFF ...) and ON [PRIMARY]; then I executed it, but still got errors on SET IDENTITY_INSERT ON. After a lot of hard work removing unwanted lines and waiting more than 2 hours for the script to complete remotely, I got nothing but errors, errors and errors.
So please suggest a good alternative to the tools above, or tell me if I'm missing something and can do more with them.
Thanks
Amit Ranjan
I've faced a similar problem recently, running through the options you've listed.
You might give the Red Gate beta for Azure a try (free for a few months). I found their tools to be quite good for SQL schema and data replication.
Never tried the Azure build myself, though (I migrated tables manually by the time I was told about the offer).
Could you please help me? I have an application in which I BULK INSERT the contents of a CSV file into a table through a stored procedure (SQL Server 2005). This works fine on a standalone system; however, when I use the same setup in a multi-tier architecture (web server, application server, and DB server) it throws error 4861.
The files are stored on the Web Server.
The translated error message corresponds to the standard 4861 text:
Error - 2147217900: 4861:
Cannot bulk load because the file
"\\Servername\c$\Folder1\Folder2\Folder3\file.csv"
could not be opened. Operating system error code 5 (Access is denied.).
Thanks
Regards,
Chandru
That's probably a security issue. If you're running the bulk insert from a SQL Server job, make sure the user account of the SQL Server Agent service has rights to open the file.
If you're running the query from a regular connection, SQL Server will impersonate you and then try to read the file. However, by default, SQL Server is not allowed to act as someone else. See this answer by Remus Rusanu for more details.
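For reference, the statement in question presumably looks something like this (the table name and WITH options are assumptions); whichever Windows account SQL Server ends up reading the file as, the Agent service account or your delegated identity, needs read permission on that share:
BULK INSERT dbo.ImportTarget
FROM '\\Servername\c$\Folder1\Folder2\Folder3\file.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2  -- assumes a header row; drop this if the file has none
);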