Server.ScriptTimeout = 0 Not working - sql

I have an ASP Classic script that has to do a lot of things (admin only) including running a stored proc that takes 4 mins to execute.
However, even though I have
Server.ScriptTimeout = 0
right at the top of the page, it seems to be ignored, on both 2003 and 2012 servers.
I have indexed the proc as much as possible, but it feeds a drop-down of commonly searched-for terms, so I have to clean out all the hacks, sex spam and so on.
What I don't understand is why Server.ScriptTimeout = 0 is being ignored.
The actual error is this:
Active Server Pages error 'ASP 0113'
Script timed out
/admin/restricted/fix.asp
The maximum amount of time for a script to execute was exceeded. You
can change this limit by specifying a new value for the property
Server.ScriptTimeout or by changing the value in the IIS
administration tools.
It never used to do this, and it should override the default of 30 seconds set in IIS.
I have ensured the command timeout of the proc is 0 (unlimited), and the error occurs AFTER control comes back from the proc.
Anybody got any ideas?

0 is an invalid value for Server.ScriptTimeout, for two reasons:
1. ASP pages have to have a timeout; they cannot run indefinitely.
2. ScriptTimeout can never be lower than the AspScriptTimeout set in IIS, according to this statement from the MSDN docs:
For example, if NumSeconds is set to 10, and the metabase setting
contains the default value of 90 seconds, scripts will time out after
90 seconds.
The best way to work around this is to set Server.ScriptTimeout to a really high value.
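A minimal sketch of that workaround in ASP Classic. The 900-second value is an arbitrary example of mine, not from the original answer; pick anything comfortably above your longest-running proc:

```vbscript
<%
' Illustrative value: 900 seconds (15 minutes) easily covers a 4-minute proc.
' Remember: this only takes effect if it is HIGHER than the AspScriptTimeout in IIS.
Server.ScriptTimeout = 900
%>
```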

Related

increase superset sqllab timeout

How do I increase the SQL Lab timeout in Apache Superset? I have tried increasing SUPERSET_TIMEOUT, but this does not work; the query still times out after 30 seconds, so I am not able to run a complex query.
See https://superset.incubator.apache.org/faq.html#why-are-my-queries-timing-out for possible reasons for queries to time out, and solutions.
You can modify the timeout parameter in config.py
(file path: lib/python3.7/site-packages/superset-0.30.1-py3.7.egg/superset):
# Timeout duration for SQL Lab synchronous queries
SQLLAB_TIMEOUT = 30
In my case (Superset 0.23.3 on HDP 3.1.0) adding this configuration option to superset_config.py increased the timeout:
SQLLAB_TIMEOUT=120
Also, keep in mind that SUPERSET_TIMEOUT should be greater than or equal to the number above; otherwise gunicorn will drop the request before it finishes.
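Putting the two answers together, a superset_config.py override might look like this (the values are illustrative, taken from the example above, not defaults):

```python
# superset_config.py -- illustrative values, not Superset defaults
SQLLAB_TIMEOUT = 120    # SQL Lab synchronous query timeout, in seconds
SUPERSET_TIMEOUT = 120  # keep >= SQLLAB_TIMEOUT so gunicorn doesn't kill the request first
```

Raising SQLLAB_TIMEOUT alone is not enough: if gunicorn's request timeout (SUPERSET_TIMEOUT) is lower, the worker is reaped before SQL Lab's own timeout ever fires.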

Automatic cache sizing for sequences in SQL Server 2012

I was trying to create a sequence in SQL Server 2012 using the T-SQL script below:
CREATE SEQUENCE ConteoCycling
    AS int
    START WITH 2
    MINVALUE 0
    MAXVALUE 4
    INCREMENT BY 1
GO
I gave it min (0) and max (4) range values so that it recycles itself once the given range is consumed. After it was created successfully, I saw the following message in the Messages tab of SQL Server Management Studio's output window. This message is my point of confusion:
cache size is greater than the number of available values; the cache
size has been automatically set to accommodate the remaining sequence
values.
So the message says that it was able to set the cache size automatically. But when I went into the sequence's properties window from Object Explorer, it doesn't show any specific cache size; instead, the Default size cache option is selected, as shown in the snapshot below:
My questions are:
If, based on the range I've selected, the SQL Server engine is somehow able to work out the cache size required to run through this sequence, why doesn't it set that specific size?
Why does it still show the Default Size option selected? Is the cache somehow dynamic in nature when it is set to Default Size? It would be great if someone could explain what the SQL Server engine does behind the scenes with the sequence's cache size during its creation and recycling process.
This message is my point of confusion :
cache size is greater than the number of available values; the cache
size has been automatically set to accommodate the remaining sequence values
The message (in italics) is not correct. See the Connect item raised for this; I am no longer able to repro the issue on SQL Server 2016 Developer Edition.
Relevant excerpts from the Connect item:
Thanks for reporting this. The behavior is overall correct. The message is meant as a hint that the cache size is greater than the number of remaining sequence values. In fact we are not storing a different value for the cache size than the one specified in the alter/create sequence statement.
The actual bug here is the message stating something else. We will fix this for a future release.
Posted by Jan [MSFT] on 4/27/2012 at 9:41 AM
Thanks again for reporting this issue. We have fixed the message and the fix will be available in a future update of SQL Server.
The message I get on SQL Server 2016 Developer Edition for your test scenario:
The sequence object 'ConteoCycling' cache size is greater than the number of available values.
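As an aside (my addition, not part of the original question): with MINVALUE 0 and MAXVALUE 4 the sequence has only five values, and it will only wrap around if CYCLE is declared; declaring a CACHE smaller than the number of available values also avoids the warning entirely. A sketch:

```sql
-- Illustrative variant of the sequence above (assumes wrapping is actually wanted)
CREATE SEQUENCE ConteoCycling
    AS int
    START WITH 2
    MINVALUE 0
    MAXVALUE 4
    INCREMENT BY 1
    CYCLE      -- without this, NEXT VALUE FOR raises an error once 4 is reached
    CACHE 3;   -- explicit cache below the 5 available values, so no warning is raised
GO
```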

Sync Framework between SQL Server 2008 and SQL Server CE

I'm working with a moderately sized database of about 60,000 records. I am building a mobile application which will be able to check out a single table into a compact .sdf for viewing and altering on the device, then let the user sync their changes back up to the main server and receive any new information.
I have set it up with the Sync Framework using a WCF Service Library. When setting up the connection for some reason the database won't let me check "Use SQL Server Change Tracking" and throws up the error:
"Unable to initialize the client database, because the schema for table 'Inventory' could not be retrieved by the GetSchema() method of DbServerSyncProvider. Make sure that you can establish a connection to the client database and that either the SelectIncrementalInsertsCommand property or the SelectIncrementalUpdatesCommand property of the SyncAdapter is specified correctly."
So I leave it unchecked and set it to use the already-created columns "AddDateTime" and "LastEditTime". That seems to work, and after a massive amount of tweaking I have it partially working. Changes on the device sync up perfectly with the database: updates, deletes, all get applied. However, changes on the server side never get downloaded. I've made sure everything is set up right with the bidirectional setup, so that shouldn't be the problem. And I let it sit overnight, so the database received ~500 new records; this morning it actually synced only the latest 24 of those 500 entries to the client. So that should be further proof that it's able to receive information from the server, but for all practical purposes it's not.
I've tried pretty much everything and I'm honestly getting close to losing it. If anyone has any ideas they can throw out I can chase after I would be most grateful.
I'm not sure if I just need to go back and figure out why I can't do it with the "SQL Server Change Tracking". Or if there is a simple explanation for why it's not actually syncing 99% of the changes on the server back to the client.
Also, the server database table schema can't be altered, as a lot of other services use it. But the compact database can be whatever the heck it needs to be to just store the table and sync properly in both directions.
Thank you!!
Quick Overview:
Using WCF and syncing without SQL Server Change Tracking (Fully enabled on server and database)
Syncing changes from client to server works perfectly
Syncing from server back to the client not so much: out of 500 new entries overnight, on a sync it downloaded 24.
EDIT: JuneT got me thinking about the times and their anchors. When I synced this morning it pulled 54 of about 300 newly added records. I went to this line (there are about 60 or so columns, so I removed them for readability; this is kind of a joke):
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > @sync_last_received_anchor AND [LastEditDate] <= @sync_new_received_anchor AND [AddDateTime] <= @sync_last_received_anchor)";
and replaced @sync_last_received_anchor with two DIFFERENT hard-coded times. Upon syncing, it now returns the rows trapped between those two. Taking out the middle condition gave me:
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > '2012-06-13 01:03:07.470' AND [AddDateTime] <= '2012-06-14 08:54:27.727')"; (NOTE: the second date is just the current time)
Though it returned a few hundred more rows than initially planned (I set the date gap for 600, and it returned just over 800), it does in fact sync the client up with the new server changes.
Can anyone explain why I can't use @sync_last_received_anchor, and what I should be looking for? I suppose I could always add a box that allows the user to select the date to begin updating from. Or maybe add some sort of XML file to store the sync date, updated any time a sync completes successfully?
Thanks!
EDIT:
Ran SQL Profiler on it: the date (@sync_last_received_anchor) is getting set to 8 hours ahead of whatever time it really is. I had no idea how or why it was doing this, but that would definitely explain the behavior.
It turns out the anchors are collected like this:
this.SelectNewAnchorCommand.CommandText = "SELECT @sync_new_received_anchor = GETUTCDATE()";
That UTC date is what was causing the 8-hour gap. To fix it, either change it to GETDATE(), or convert your columns to UTC time in the WHERE clause of the commands.
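A sketch of the two fixes described above. Identifiers follow the snippets in this question; treat the exact SQL as an assumption about your own schema:

```csharp
// Fix 1: take the anchor in server-local time, matching the local-time columns.
this.SelectNewAnchorCommand.CommandText =
    "SELECT @sync_new_received_anchor = GETDATE()";

// Fix 2 (alternative): keep GETUTCDATE() as the anchor, but shift the column
// values to UTC inside the incremental commands instead, e.g.:
// ... WHERE DATEADD(minute, DATEDIFF(minute, GETDATE(), GETUTCDATE()), [LastEditDate])
//         > @sync_last_received_anchor ...
```

Fix 1 is simpler but breaks if client and server sit in different time zones; Fix 2 keeps the anchors time-zone safe at the cost of uglier queries.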
After spending many hours with many cups of coffee, I figured out how to solve this error of mine. While I was running the code in the desktop testing area, everything seemed to work perfectly; however, the same code and web service on the target device gave this error repeatedly. Then, suddenly, the "dbo_" prefixes on the compact database table names started looking interesting, like they were trying to tell me something really important. So, I listened...
Configuration.SyncTables.Add("Products");
in ClientSyncAgent.cs should be changed to
Configuration.SyncTables.Add("dbo_Products");
[Exeunt]

ADO Connection Timeout problem

Using TADOConnection class to connect to SQL server 2005 db.
Having ConnectionTimeOut := 5; // seconds.
Trying to open the connection synchronously.
When the server is available and running, the connection timeout works fine. If the server is not available or the network connection is lost, then attempting to open a connection waits for more than 5 seconds (maybe 20 seconds).
Is there any property or method that needs to be set to influence this behavior?
No, it's enough to set the ConnectionTimeout property.
I've had the exact same problem (D2009, MSSQL2005), but TADOConnection.ConnectionTimeout works fine for me (by the way, the default value for this property is 15 seconds). Note that the timeout dispersion is quite wide, so one time you'll be timed out after 5 seconds and another after e.g. 10 seconds, but 20 seconds is really too much for a connection attempt.
Probably you have a problem with CommandTimeout instead (if you are trying to execute a query with an associated ADO dataset component). Remember that if you set TADOConnection.ConnectionTimeout := 5 and, in your dataset component, e.g. TADOQuery.CommandTimeout := 15, then when you try to execute a query you can get a timeout after 20 seconds in total.
If you really have a problem with query execution, not only connection attempt, this post may help you
ADO components CommandTimeout
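A minimal Delphi sketch of the point above about the two timeouts adding up (the component names are placeholders, not from the question):

```delphi
// Hypothetical components: ADOConnection1 and ADOQuery1.
ADOConnection1.ConnectionTimeout := 5;  // seconds to establish the connection
ADOQuery1.CommandTimeout := 15;         // seconds for the query itself
// Worst case, a failing connect-then-query can take up to 5 + 15 = 20 seconds.
```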
TADOConnection.ConnectionTimeout - timeout, in seconds, to connect to the data source
TADOConnection.CommandTimeout - timeout, in seconds, to execute a command
If you are getting a timeout error while trying to connect, increase the ConnectionTimeout property; if you are getting an error while executing a query, increase the CommandTimeout property.

How to reduce timeout period when sql 2005 database unavailable

I have some ASP (Classic) code that queries a SQL 2005 Express database. I'm currently handling the case where this DB goes down by trapping the error when someone tries to connect and can't; I capture the error and bypass subsequent queries to this database using a session variable.
My problem is that this first query takes about 20 seconds before it times out.
I'd like to reduce this timeout length, but I can't find which property, either in the code or in the database, is the right one to reduce.
I've tried the following in the code:
con.CommandTimeout = 5
con.CONNECTIONTIMEOUT = 5
Any suggestions please?
Thanks,
Andy
First off you should investigate why the DB is going down at all. We manage servers for hundreds of clients and have never run into a problem with the DB going down unless it was scheduled maintenance.
Besides that, you're already onto the right properties.
"Connect Timeout" is set in the connection string and controls how long the client waits to establish a connection to the database. It should be safe to lower this value in most cases--connections should never take long to establish.
"CommandTimeout" is a property on the IDbCommand implementation and controls how long the client waits for a particular query to return. You can lower this value if you know your queries will not take longer than the value you're setting.
Ended up using the "Connect Timeout" option within the ADODB.Connection connection string.
e.g.
Set con = Server.CreateObject( "ADODB.Connection" )
On Error Resume Next ' needed so a failed Open doesn't abort the script before the check below
con.Open "Provider=SQLOLEDB;Server=databaseserver;User ID=databaseuser;Password=databasepassword;Initial Catalog=databasename;Connect Timeout=5;"
If Err.Number = 0 And con.Errors.Count = 0 Then
    'connected to database successfully
Else
    'did not connect to database successfully within the timeout period specified (5 seconds in this example)
End If