No transaction is active on SQL server 2000 - sql

I am dealing with some legacy code: a program written in VB6 that connects to a SQL Server 2000 instance.
When a transaction is begun, I get a "No transaction is active" error.
The problem is that this error occurs on only one of the clients. There are three PCs: one server and two clients. One client works just fine; the other doesn't.
The TCP protocol is enabled on the server, and I have uninstalled and reinstalled MSDTC on the server.
A connection is made, and other queries execute just fine. I am unsure what the problem might be.

If I had sufficient reputation, I would have added this as a comment instead of an answer, because it is not entirely clear to me what you mean by "when a transaction is begun", or what the actual error behind "No transaction is active" is.
There are a few settings that can be applied to any connection. They can be part of the connection configuration or set explicitly afterwards. One of them is "implicit transactions". On my machines it is always set to OFF, but maybe this setting differs between your machines.
To test that theory, you could add the line
set implicit_transactions off
or
set implicit_transactions on
as first line in the batch / stored procedure that throws the error.
If that solves the problem, you should fix the connection configuration of the troublesome machine and change the batch / stored procedure back to the original.
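To see which setting a session is actually running with, you can inspect @@OPTIONS; bit value 2 of that bitmask corresponds to IMPLICIT_TRANSACTIONS. A minimal check (this also works on SQL Server 2000), run from each client:

```sql
-- Check whether IMPLICIT_TRANSACTIONS is ON for the current session.
-- @@OPTIONS is a bitmask of session SET options; bit value 2 is
-- IMPLICIT_TRANSACTIONS.
SELECT CASE WHEN @@OPTIONS & 2 = 2
            THEN 'IMPLICIT_TRANSACTIONS is ON'
            ELSE 'IMPLICIT_TRANSACTIONS is OFF'
       END AS implicit_tx_setting;

-- Also worth checking right before the failing statement: if this is 0
-- where the code expects an open transaction, that would explain a
-- "no transaction is active" error.
SELECT @@TRANCOUNT AS open_transactions;
```

If the value differs between the working and failing machines, the connection configuration is the culprit.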

If these servers communicate with each other by name, the DNS information must resolve correctly on each machine (or be defined in the hosts file). That is worth checking.

Related

After doing SET NEW_BROKER, getting "target service name could not be found"

I made a mistake. After realizing that SB was causing crashes (and had been for a while), I had it patched. Now, I had millions of messages backed up in a queue, it was trying to catch up on messages from other machines, and disk space was running out. Even with 4 readers it was falling behind.
So I did the only thing I could think of at the time. Which was stupid of me.
ALTER DATABASE ... SET NEW_BROKER WITH ROLLBACK IMMEDIATE;
Now, I'm trying to clean up from that mistake. The first thing I tried to do is alter the ROUTEs on the sending servers so they matched. That doesn't seem to be working - now the sys.transmission_queue on the senders says "Target service name could not be found". And I'm stumped on that - I see the service on the receiver, and I don't believe I changed anything with it. I'm scripting out the CREATE ROUTE from the box via SSMS, then changing the broker instance with the results of service_broker_guid from sys.databases for the receiving database.
Looking at a profiler trace with broker, I'm seeing (on the receiving server) these messages:
Could not forward the message because forwarding is disabled in this SQL Server instance.
The message could not be delivered because it could not be classified.
Enable broker message classification trace to see the reason for the failure.
Next up is doing endpoint cleanup on the senders, pulling the conversation_handle from sys.transmission_queue and using that to end it.
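For that cleanup step, a hedged sketch of the usual pattern (run on the sending server against the affected database) is to walk sys.transmission_queue and end each stuck conversation with cleanup:

```sql
-- Sketch: end every conversation that still has messages stuck in
-- sys.transmission_queue. WITH CLEANUP drops the conversation endpoint
-- locally without notifying the other side, which is appropriate here
-- since the target broker was regenerated by SET NEW_BROKER.
DECLARE @handle UNIQUEIDENTIFIER;

DECLARE stuck CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT conversation_handle
    FROM sys.transmission_queue;

OPEN stuck;
FETCH NEXT FROM stuck INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @handle WITH CLEANUP;
    FETCH NEXT FROM stuck INTO @handle;
END
CLOSE stuck;
DEALLOCATE stuck;
```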
Update: okay, so I've cleaned out msdb.sys.transmission_queue, but I still have a 15 GB msdb, and it's got to be Service Broker (no tables using more than a few MB that I can see). Considering doing the NEW_BROKER there as well, since I've turned everything off. But that still seems like A Bad Idea.
The receiver is an R2 box, just patched to SP3.
At this point, I'm at a loss. Any help appreciated. Thanks in advance.
We ran into this issue where Service Broker suddenly and inexplicably stops delivering messages: they just accumulate in sys.transmission_queue, and a Profiler trace shows the error message "Could not forward the message because forwarding is disabled in this SQL Server instance."
Executing this command fixed it:
ALTER ENDPOINT ServiceBrokerEndpoint FOR SERVICE_BROKER (MESSAGE_FORWARDING = ENABLED)
..which is odd since we never disabled message forwarding and never had to explicitly enable it before.
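You can confirm the current forwarding state without a trace; the catalog view sys.service_broker_endpoints exposes it directly:

```sql
-- Check whether message forwarding is enabled on the broker endpoint.
SELECT name,
       state_desc,
       is_message_forwarding_enabled
FROM sys.service_broker_endpoints;
```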

Using transactions with EF4.1 and SQL 2012 - why is DTC required?

I've been doing a lot of reading on this one, and some of the documentation doesn't seem to match reality. Some of the potential causes would be appropriate here, but they relate only to SQL Server 2008 or earlier.
I define a transaction scope. I use a number of different EF contexts (in different method calls) within the transaction scope, however all but one of them are only for data reads. The final use of a Context is to create and add some new objects to the context, and then call
context.SaveChanges()
IIS is running on one server. The DB (Sql2012) is running on another server (WinServer 2012).
When I execute this code, I receive the error:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
Obviously, if I enable DTC on the IIS machine, this goes away. However why should I need to?
This:
http://msdn.microsoft.com/en-us/library/ms229978.aspx
states:
• At least one durable resource that does not support single-phase notifications is enlisted in the transaction.
• At least two durable resources that support single-phase notifications are enlisted in the transaction.
Which I understand is not the case here.
OK, I'm not entirely sure whether this should have been happening (according to the MS documentation), but I have figured out why, and the solution.
I'm using the ASPNet membership provider, and have two connection strings in my web.config. I thought the fact that they were pointing to the same DB was enough for them to be considered the same "durable resource".
However I found that the membership connection string also had:
Connection Timeout=60;App=EntityFramework
whereas the Entity Framework connection string didn't.
Making the two connection strings identical meant that the transaction was no longer escalated to MSDTC.
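For illustration, this is the shape of the fix in web.config (the names and server below are placeholders, and the EF metadata wrapper is omitted for brevity); the point the poster found is that both connection strings must match exactly so that the transaction is not escalated to MSDTC:

```xml
<!-- Hypothetical web.config fragment: both entries now carry the same
     Connection Timeout and App values, so using them inside one
     TransactionScope no longer escalates to MSDTC. -->
<connectionStrings>
  <add name="MembershipDb"
       connectionString="Data Source=DBSERVER;Initial Catalog=MyDb;Integrated Security=True;Connection Timeout=60;App=EntityFramework" />
  <add name="EntitiesDb"
       connectionString="Data Source=DBSERVER;Initial Catalog=MyDb;Integrated Security=True;Connection Timeout=60;App=EntityFramework" />
</connectionStrings>
```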

Run a SQL command in the event my connection is broken? (SQL Server)

Here's the sequence of events my hypothetical program makes...
Open a connection to server.
Run an UPDATE command.
Go off and do something that might take a significant amount of time.
Run another UPDATE that reverses the change in step 2.
Close connection.
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do, is just as the connection is opened, but before any changes have been made, tell the server that should this connection close for whatever reason, to run some SQL. That way, I can be sure that if something goes wrong, the closing update will run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
1. Take the SPID of the connection you opened and store it in some temp table, along with the text of the reversal UPDATE.
2. Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
3. If the connection dies, the background process can execute the stored revert command.
4. If the connection completes properly, the SPID can be removed from the temp table so that the background process no longer reverts it when the connection closes.
Comments or improvements welcome!
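The steps above can be sketched in T-SQL (the table and names are hypothetical; on SQL Server 2000 you would poll master.dbo.sysprocesses, on 2005+ you could use sys.dm_exec_sessions instead):

```sql
-- Hypothetical bookkeeping table: one row per connection to watch.
CREATE TABLE dbo.ConnectionWatch
(
    spid       INT            NOT NULL PRIMARY KEY,
    revert_sql NVARCHAR(4000) NOT NULL
);

-- Step 1: the client registers itself right after opening the connection.
INSERT INTO dbo.ConnectionWatch (spid, revert_sql)
VALUES (@@SPID, N'UPDATE dbo.Work SET InProgress = 0 WHERE Id = 42');

-- Steps 2-3: a background job runs this periodically; any watched SPID
-- that is no longer connected gets its revert command executed.
DECLARE @spid INT, @sql NVARCHAR(4000);
SELECT TOP 1 @spid = w.spid, @sql = w.revert_sql
FROM dbo.ConnectionWatch AS w
WHERE NOT EXISTS (SELECT * FROM master.dbo.sysprocesses AS p
                  WHERE p.spid = w.spid);
IF @spid IS NOT NULL
BEGIN
    EXEC (@sql);
    DELETE FROM dbo.ConnectionWatch WHERE spid = @spid;
END

-- Step 4: on a clean finish, the client deregisters itself.
DELETE FROM dbo.ConnectionWatch WHERE spid = @@SPID;
```

One caveat with this sketch: SPIDs are reused, so in practice you would also store the session's login_time and compare both values before reverting.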
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query, then close the connection, relying on connection pooling to mitigate the expense of opening lots of database connections.
If we are talking about a single SQL command whose rows should not change while it operates, that is a problem best handled by the transaction isolation level. For that, you might investigate the SNAPSHOT isolation level in SQL Server 2005+.
If we are talking about a series of queries that are part of a long-running transaction, that is more complicated. It can be handled by storing a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools to cancel a long-running transaction that might no longer be applicable.
Assuming it's even possible, this will only help you if the client machine explodes during the transaction. There is also a risk of false positives: the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.

SQL Server - Timed Out Exception

We are facing a SQL timeout issue, and I found that the error event ID is either 5586 or 3355 (unable to connect / network issue). I can also see a few other DB-related error event IDs (3351 and 3760, permission issues) reported at different times.
What could be the reason? Any help would be appreciated.
Can you elaborate a little? When is this happening? Can you reproduce the behavior or is it sporadic?
It appears SharePoint is involved. Is it possible there is high demand for a large file?
You should check for blocking/locking that might be preventing your query from completing. Also, if you have lots of computed/calculated columns (or just lots of data), your query may take a long time to run.
Finally, if you can't find something blocking your result or optimize your query, it's possible to increase the timeout duration (set it to "0" for no timeout). Do this in Enterprise Manager under the server or database settings.
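To check for blocking quickly, sp_who2 works on every version mentioned here; a non-empty BlkBy column shows which session is blocking yours:

```sql
-- List current sessions; a SPID in the BlkBy column means that session
-- is blocked by the session whose SPID is shown there.
EXEC sp_who2;

-- On SQL Server 2005 and later you can also query the DMVs directly:
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```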
Troubleshooting Kerberos Errors. It never fails.
Are some of your web apps running under either the Local Service or Network Service account? If so, and if your databases are not on the same machine (i.e. SharePoint is on machine A and SQL Server on machine B), authentication will fail for some tasks (e.g. timer-job-related actions) but not all. For instance, content databases still seem to be accessible (weird, I know, but I've seen it happen).

Stop Monitoring SQL Services for Registered Servers in SSMS

Question: Is it possible to stop SSMS from monitoring the service status of registered servers?
Details:
SSMS 2008 monitors the service status of every registered server. From what I have seen, it reaches out to every registered server every minute or so to check its status; in my case that is over 100 servers. This has raised issues with our security and network departments. The network team initially flagged it as suspicious traffic, because it appeared that an unknown utility was scanning the network for SQL Servers. The security team was concerned because the security event logs on each server are being filled up with my logon events.
I have looked all over for a setting but can't seem to find one. Am I missing it somewhere?
TIA,
Brian
I finally found an answer!!
While it does not seem possible (at least that I've found) to stop SSMS from checking the service status of registered servers, it is possible to change the interval at which it checks.
The short version is to create the following registry keys (DWORD):
(SQL Server 2008)
HKLM\Software\Microsoft\Microsoft SQL Server\100\Tools\Shell | PollingInterval = 600 (decimal)
(SQL Server 2005)
HKLM\Software\Microsoft\Microsoft SQL Server\90\Tools\Shell | PollingInterval = 600 (decimal)
With the value set to 600 (the value appears to be in seconds), SSMS polls roughly every ten minutes instead of every minute.
See this MS Connect Post for details.
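From an elevated command prompt, the 2008 key can be created like this (a sketch; back up the registry first, and substitute the `90` path for SQL Server 2005):

```
reg add "HKLM\Software\Microsoft\Microsoft SQL Server\100\Tools\Shell" /v PollingInterval /t REG_DWORD /d 600
```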
Since it doesn't appear that there's any way to stop these status checks by SSMS, can you focus on helping them to see their harmlessness?
Can the network group allow certain exceptions to this particular rule (pinging servers on port 1433) in their scanning software, which would allow you and your group to monitor SQL Server uptime? Even if you weren't using SSMS, this type of sweeping monitoring activity is pretty common, and you'll know the requests will only ever come from a handful of workstations.
I don't think these SQL status checks generate any more events in the security log than any other activity, so maybe they were just concerned because it was something they weren't expecting. Could the security group be convinced that these events aren't dangerous, again as long as they're coming from certain approved workstations?
If neither of these is an option (or even if it is), you could help mitigate the problem by not connecting to all your SQL servers at once. Maybe just connect to the ones you need at the time: it looks like loading the entire list actively connects to each of them, so connecting only to the ones you intend to use in that session might reduce the number of open network sessions.
I hope this helps - if it doesn't, or you've got some additional input that might help find a workaround, please post it!