I have finished a program using VB.NET 2008, SQL Server 2005, and LINQ to SQL. I want to run the program on two or more PCs, all accessing a single database.
I'm using this connection string:
db = New connectionString("server=192.168.1.3;database=DBNAME;user=DBUSER;password=DBPASS;integrated security=true")
The problem here is I get this message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
NB: The message above is translated from French to English.
This error usually occurs because of one of two issues:
1 - SQL Server is unreachable (due to TCP/IP problems or firewall problems, like Steve mentioned above)
2 - Badly written queries that take too long, so the SQL timeout expires; in this case, kindly see the link below
https://www.simple-talk.com/sql/performance/how-to-identify-slow-running-queries-with-sql-profiler/
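One more thing worth checking in this particular case (an observation about the connection string in the question, not a confirmed fix): it combines user/password with integrated security=true, and when integrated security is set, the SQL credentials are ignored and Windows authentication is attempted, which often fails against a remote server.

If you suspect issue 2 but don't want to set up Profiler, a quick alternative is to query the DMVs while the slow statement is running (a sketch; these views exist on SQL Server 2005 and later):

-- Currently executing requests, longest-running first
SELECT r.session_id, r.status, r.wait_type, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
ORDER BY r.total_elapsed_time DESC;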
I've been running into an issue in RStudio with a SQL connection.
We've had an on-prem SQL Server that's been upgraded over the years, and the colleague who set it up is no longer with the organization.
We also have an Azure server running SQL Server that was set up much more recently, before they departed.
We have a GUI program we're currently developing. One of its early steps is a SQL login for the user: the username variable (db_user) is declared and changes with their login, and the password is passed correctly via system variables defined in .Renviron, as described in RStudio's documentation.
Our initial connection string looks like this; this is the code that starts the connection, and where I believe the issue may first lie:
# credentials and server details come from environment variables defined in .Renviron
db_conn_onprem <- DBI::dbConnect(odbc::odbc(),
                                 Driver = "SQL Server",
                                 Server = Sys.getenv("server"),
                                 Database = Sys.getenv("database"),
                                 UID = Sys.getenv("db_user"),
                                 PWD = Sys.getenv("PWD"))
When the Azure connection succeeds, it connects as dbo#Azure\Azure, versus the on-prem connection's guest#Server\Server.
(I can't post in-line screenshots yet)
On-Prem Connection Screenshot: https://i.ibb.co/PmbGt5y/RStudio-SQL.png
Azure Connection Screenshot: https://i.ibb.co/WFY3FqZ/azure1.png
(variable names anonymized)
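For reference, a quick way to confirm which login and database user a session actually maps to on each server (a generic T-SQL check, not from the original post):

-- Server login and database user for the current session
SELECT SUSER_SNAME() AS login_name, USER_NAME() AS database_user;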
Now for the issue:
Whenever we attempt to run a series of queries, our on-prem server errors out with this:
Error: nanodbc/nanodbc.cpp:1655: 42000: [Microsoft][SQL Server][SQL Server]Cannot execute as the server principal because the principal "db_user" does not exist, this type of principal cannot be impersonated, or you do not have permission.
<SQL> 'EXECUTE AS LOGIN = 'db_user' SELECT name FROM master.sys.sysdatabases WHERE dbid > 4 AND HAS_DBACCESS(name) = 1 ORDER BY name ASC'
However, run the exact same procedure on the SQL Server in Azure, which has had relatively little extra configuration, and it succeeds.
Here's the SQL code we run:
EXECUTE AS LOGIN = 'db_user'
SELECT name
FROM master.sys.sysdatabases
WHERE dbid > 4
AND HAS_DBACCESS(name) = 1
ORDER BY name ASC
I feel like I've exhausted my resources on this. First I thought it was the initial R code or possibly the SQL drivers, but I don't believe that's the issue, since the driver pulls a list of names into the RStudio Connections pane, yet bounces back the error when attempting to complete the query.
Whenever I search for references to this error, I see
Cannot execute as the server principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
listed as the most commonly related error to the one I'm experiencing. I've tried a number of the suggested fixes (from blank DB ownerships to unrelated solutions), but I've mostly hit a wall here.
Any assistance would be greatly appreciated.
I feel this is something dbo-related since that's where the connection drops, but I have no clue where to go from here.
Yep.
This
EXECUTE AS LOGIN = 'db_user'
requires IMPERSONATE permission on that login, which is exactly what the error message is telling you. It's unclear why you want to impersonate that login instead of simply connecting as that login to begin with.
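If the impersonation is genuinely needed, here is a sketch of the check and the grant involved (run as an administrator; your_login is a placeholder for the login your app actually connects with):

-- Does the target login even exist as a server principal?
SELECT name, type_desc FROM sys.server_principals WHERE name = 'db_user';

-- Grant the connecting login permission to impersonate it
GRANT IMPERSONATE ON LOGIN::[db_user] TO [your_login];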
I am using odbc.eval on KDB to run a SQL stored procedure that generates tens of millions of rows of data. We have two different RDBs (SQL Server 2012 and SQL Server 2016) set up with the same data, allocated memory, etc. The KDB code works against one of them, but not against the other: for the newer server, KDB crashes midway through the query. The stored procedure seems to work fine in SQL Server Management Studio 2016, though it does take a long time to fully execute - around an hour or so. Could this be a timeout error? Any suggestions for running a SQL query with this large an amount of data on KDB without running into memory or timeout issues?
I have a bit of a funny situation. Our Azure SQL instance maxes out at 100% DTU for a certain query, and the query returns a timeout:
SqlException (0x80131904): Timeout expired. The timeout period
elapsed prior to completion of the operation or the server is not
responding. This failure occurred while attempting to connect to the
routing destination.
If I run exactly the same query (with the parameters hardcoded) in SQL Server Management Studio, it still takes the DTU up to 25%, but that's still far away from 100%. Nothing else runs on that server. There are a few other queries that run before/after, but if we run just them, nothing spikes.
Any ideas?
My analysis of the issue goes like this:
First, when DTUs are maxed out and a query fails because of that, you will not get a timeout. Below is the error message you would get:
Resource ID: %d. The %s limit for the database is %d and has been reached. For more information
You can test that by running multiple resource-intensive queries at once.
Secondly, when you get timeouts as indicated in your question, it is mostly because the query is waiting on resources, such as database IO or memory.
We faced similar timeouts; most of them were fixed by updating statistics and rebuilding indexes, and the rest we optimized.
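For reference, a sketch of both halves of that advice (dbo.YourTable is a placeholder; sys.dm_db_resource_stats is specific to Azure SQL Database):

-- Check recent resource usage (roughly the last hour, in 15-second intervals)
SELECT TOP (20) end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

-- The kind of maintenance that fixed most of our timeouts
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
ALTER INDEX ALL ON dbo.YourTable REBUILD;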
I am facing a SQL Server replication issue
(identity management in a pull merge replication, at the subscriber).
Replication situation:
The Distributor and the Publisher are on one server running Windows Server 2012 Std and SQL Server 2012 Std
One Subscriber PC running Windows 7 Professional and SQL Server 2012 Express Edition
Both are connected through the internet using a VPN
The Problem:
The Subscriber has an article (table) [DocumentItems] whose identity field [DocumentItemsID] is managed by replication and was assigned the following range:
([DocumentItemsID]>(280649) AND [DocumentItemsID]<=(290649)) OR ([DocumentItemsID]>(290649) AND [DocumentItemsID]<=(300649))
The server lost power several times.
Every time the Subscriber PC comes back up, the [DocumentItemsID] field picks an identity value outside its range, like 330035, when inserting new rows.
The issue happened 3 times.
I fixed the problem by a manual reseed:
DBCC CHECKIDENT('DocumentItems' , RESEED, xxxx)
Where xxxx is the MAX existing value for [DocumentItemsID] + 1
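In full, the reseed looks like this (a sketch using the names from above, with xxxx computed as described):

DECLARE @reseed INT;
-- The recipe from above: the MAX existing value for [DocumentItemsID] + 1
SELECT @reseed = MAX(DocumentItemsID) + 1 FROM DocumentItems;
DBCC CHECKIDENT('DocumentItems', RESEED, @reseed);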
As soon as the power goes out again, the same problem occurs.
Does anybody have any idea what is happening?
And why was the [DocumentItemsID] field assigned values out of its range?
Thanks
OK, I finally found out what was going on.
It is an issue that happens only in SQL Server 2012: when the SQL Server instance is restarted, the table's identity value jumps (an int column jumps by 1000, a bigint by 10000).
To stop this increment, register -t272 as a SQL Server startup parameter.
This solved the problem.
Thanks to the Code Project article by S. M. Ahasan Habib; I was totally in the dark before I read it.
For details on how to register the startup parameter, read the article. It shows how to reproduce the issue and provides two solutions.
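If you want to confirm the jump after a restart without changing anything, DBCC CHECKIDENT with NORESEED just reports the current values so you can compare them (table name taken from the question):

-- Reports the current identity value and current column value; changes nothing
DBCC CHECKIDENT('DocumentItems', NORESEED);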
I have a connection to a MS SQL Server 2012 database in classic ASP (VBScript). This is my connection string:
Provider=SQL Server Native Client 11.0;Server=localhost;Database=databank;Uid=myuser;Pwd=mypassword;
When I execute this SQL command:
UPDATE [info] SET [stamp]='2014-03-18 01:00:02',
[data]='12533 characters goes here',
[saved]='2014-03-18 01:00:00',
[confirmed]=0,[ip]=0,[mode]=3,[rebuild]=0,
[updated]=1,[findable]=0
WHERE [ID]=193246;
I get the following error:
Microsoft SQL Server Native Client 11.0
error '80040e31'
Query timeout expired
/functions.asp, line 476
The SQL query is pretty long; the data field is updated with 12533 characters. The ID column is indexed, so finding the post with ID 193246 should be fast.
When I execute the exact same SQL expression (copied and pasted) in SQL Server Management Studio, it completes successfully in no time. No problem whatsoever, so there isn't a problem with the SQL itself. I've even tried using an ADODB.Recordset object and updating via that (no self-written SQL), but I still get the same timeout error.
If I go to Tools > Options > Query Execution in Management Studio, I see that execution time-out is set to 0 (infinite). Under Tools > Options > Designers, I see that transaction time-out is set to 30 seconds, which should be plenty since the script and database are on the same computer ("localhost" is in the connection string).
What is going on here? Why can I execute the SQL in the Management Studio but not in my ASP code?
Edit: Tried setting the 30 sec timeout in the Designers tab to 600 sec just to make sure, but I still get the same error (it happens after 30 seconds of page loading, btw).
Here is the code that I use to execute the SQL on the ASP page:
Set Conn = Server.CreateObject("ADODB.Connection")
Conn.Open "Provider=SQL Server Native Client 11.0;" & _
    "Server=localhost;Database=databank;Uid=myuser;Pwd=mypassword;"
Conn.Execute "UPDATE [info] SET [stamp]='2014-03-18 01:00:02'," & _
    "[data]='12533 characters goes here',[saved]='2014-03-18 01:00:00'," & _
    "[confirmed]=0,[ip]=0,[mode]=3,[rebuild]=0,[updated]=1,[findable]=0 " & _
    "WHERE [ID]=193246;"
Edit 2: Using Conn.CommandTimeout = 0 to give the query infinite execution time does nothing; it just makes the query execute forever. I waited 25 minutes and it was still executing.
I then tried separating the SQL into two statements: the long data update in one, and the other updates in the other. It still wouldn't update the long data field; I just got a timeout.
I tried this with two additional connection strings:
Driver={SQL Server};Server=localhost;Database=databank;Uid=myuser;Pwd=mypassword;
Driver={SQL Server Native Client 11.0};Server=localhost;Database=databank;Uid=myuser;Pwd=mypassword;
Didn't work. I even tried changing the data to 12533 A's just to see if the actual data was causing the problem. Nope, same problem.
Then I found out something interesting: I tried to execute the short SQL first, before the long update of the data field. It ALSO got a query timeout exception...
But why? It has so little to update (the whole SQL statement is less than 200 characters). I will investigate further.
Edit 3: I thought it might have something to do with the login, but I didn't find anything that looked wrong. I even tried changing the connection string to use the sa account, but even that didn't work; still getting "Query timeout expired".
This is driving me mad. There is no solution, no workaround and worst of all no ideas!
Edit 4: Went to Tools > Options > Designers in the Management Studio and ticked off the "Prevent saving changes that require table re-creation". It did nothing.
Tried changing the "data" column's data type from "nvarchar(MAX)" to the inferior "ntext" type (I'm getting desperate). It didn't work.
Tried executing the smallest change on the post I could think of:
UPDATE [info] SET [confirmed]=0 WHERE [ID]=193246;
That would set a bit column to false. Didn't work. I tried executing the exact same query in the Management Studio and it worked flawlessly.
Throw me some ideas if you've got them, because I'm running out for real now.
Edit 5: Have now also tried the following connection string:
Provider=SQLOLEDB.1;Password=mypassword;Persist Security Info=True;User ID=myuser;Initial Catalog=databank;Data Source=localhost
Didn't work. I only tried setting confirmed to false, but still got a timeout.
Edit 6: Have now attempted to update a different post in the same table:
UPDATE [info] SET [confirmed]=0 WHERE [ID]=1;
It also gave the timeout error. So now we know it isn't post-specific.
I am able to update posts in other tables in the same "databank" database via ASP. I can also update tables in other databases on localhost.
Could there be something broken with the [info] table? I used the MS Access wizard to automatically move the data from Access to MS SQL Server 2012. It created columns of data type "ntext", and I manually changed those to "nvarchar(MAX)", since ntext is deprecated. Could something have broken? It did require re-creating the table when I changed the data type.
I have to get some sleep but I will be sure to check back tomorrow if anybody has responded to me. Please do, even if you only have something encouraging to say.
Edit 7: Quick edit before bed. I also tried defining the provider as "SQLNCLI11" in the connection string (using the DLL name instead of the actual provider name). It makes no difference; the connection is created just fine, but the timeout still happens.
Also I'm not using MS SQL Server 2012 Express (as far as I know, "Express" wasn't mentioned anywhere during installation). It's the full thing.
If it helps, here's the "Help" > "About..." info that is given by the Management Studio:
Microsoft SQL Server Management Studio: 11.0.2100.60
Microsoft Analysis Services Client Tools: 11.0.2100.60
Microsoft Data Access Components (MDAC): 6.3.9600.16384
Microsoft MSXML: 3.0 5.0 6.0
Microsoft Internet Explorer: 9.11.9600.16521
Microsoft .NET Framework: 4.0.30319.34011
Operating System: 6.3.9600
Edit 8 (also known as the "programmers never sleep" edit):
After trying some things, I eventually tried closing the database connection and reopening it right before executing the SQL statements. All of a sudden it worked. What the...?
My code was inside a subroutine, and it turns out that outside of it, the post I was trying to update was already open! So the reason for the timeout was that the post (or the whole table) was locked by the very same connection that was trying to update it. The connection (or CPU thread) was waiting for a lock that would never be released.
Hate it when it turns out to be so simple after trying so hard.
The post had been opened outside the subroutine by this simple code:
Set RecSet = Conn.Execute("SELECT etc")
I just added the following before calling the subroutine:
RecSet.Close
Set RecSet = Nothing
The reason this never crossed my mind is simply that this was allowed in MS Access, but now I have changed to MS SQL Server and it isn't so kind (or sloppy, rather). The RecSet created by Conn.Execute() had never locked a post in the database before, but now all of a sudden it did. Not too strange, since both the connection string and the actual database had changed.
I hope this post saves someone else some headache if you are migrating from MS Access to MS SQL Server. Though I can't imagine there are that many Access users left in the world nowadays.
Turns out that the post (or rather the whole table) was locked by the very same connection I was trying to update the post with.
I had an open recordset on the post, created by:
Set RecSet = Conn.Execute()
This type of recordset is supposed to be read-only, and when I was using MS Access as the database it did not lock anything. But apparently this type of recordset does lock something on MS SQL Server 2012, because when I added these lines of code before executing the UPDATE SQL statement...
RecSet.Close
Set RecSet = Nothing
...everything worked just fine.
So the bottom line is: be careful with open recordsets - even if they are read-only, they can lock your table against updates.
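If you ever need to confirm this kind of self-blocking while the page hangs, a generic check from SSMS (not part of the original post) is to ask SQL Server which sessions are blocked and by whom:

-- Sessions currently waiting on a lock held by another session
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;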