I was trying to create a sequence in SQL Server 2012 using the T-SQL script below:
CREATE SEQUENCE ConteoCycling
    AS int
    START WITH 2
    MINVALUE 0
    MAXVALUE 4
    INCREMENT BY 1
    CYCLE;
GO
I gave it minimum (0) and maximum (4) range values so that it recycles itself once the given range is consumed. After successful creation, I saw the following message in the Messages tab of the output window in SQL Server Management Studio. This message is my point of confusion:
cache size is greater than the number of available values; the cache size has been automatically set to accommodate the remaining sequence values.
So the message says that it was able to set the cache size automatically. But when I went into the properties window of the sequence from Object Explorer, it doesn't show any specific cache size; instead, the Default size cache option is selected, as shown in the snapshot below:
My questions are:
Based on the range I've selected for my sequence, if the SQL Server engine is somehow able to work out the cache size required for running through this sequence, why is it not setting a specific size?
Why does it still show the Default Size option selected? Is the cache somehow dynamic in nature when it is set to the Default Size option? It would be great if someone could enlighten me on what the SQL Server engine does behind the scenes with regard to the sequence's cache size during its creation and recycling process.
This message is my point of confusion:
cache size is greater than the number of available values; the cache size has been automatically set to accommodate the remaining sequence values.
The message (quoted above) is not correct. See the Connect item raised for this; I am not able to repro the issue any more on SQL Server 2016 Developer edition.
Relevant excerpts from the Connect item:
Thanks for reporting this. The behavior is overall correct. The message is meant as a hint that the cache size is greater than the number of remaining sequence values. In fact we are not storing a different value for the cache size than the one specified in the alter/create sequence statement.
The actual bug here is the message stating something else. We will fix this for a future release.
Posted by Jan [MSFT] on 4/27/2012 at 9:41 AM
Thanks again for reporting this issue. We have fixed the message and the fix will be available in a future update of SQL Server.
The message I am getting on SQL Server 2016 Developer edition for your test scenario:
The sequence object 'ConteoCycling' cache size is greater than the number of available values.
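To see what the engine actually stored, you can query the sys.sequences catalog view; if I remember correctly, cache_size shows NULL there when the Default Size option is in effect, which would match what the properties window displays. A minimal sketch, assuming the sequence created above:

SELECT name, is_cached, cache_size, current_value
FROM sys.sequences
WHERE name = 'ConteoCycling';

-- An explicit size can also be set instead of the default:
ALTER SEQUENCE ConteoCycling CACHE 3;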
Related
I have a dataset that has 1 billion rows. The data is stored in Hive, and I put Impala as a layer between Hive and Superset. The queries that are run in Superset have a maximum row limit of 100,000, and I need to remove that limit. Furthermore, I need to make a visualization from what the queries return in SQL Lab, but it cannot be done because there is also a cache timeout limit. Therefore, if I increase the row limit in SQL Lab and the cache timeout for visualizations, then, I guess, there will be no problem.
I am trying my best to answer below. Please back up all config files before changing them.
For the SQL row limit issue:
modify the config.py file inside 'anaconda3/lib/python3.7/site-packages/superset' and set
DEFAULT_SQLLAB_LIMIT = 1000000000
QUERY_SEARCH_LIMIT = 1000000000
modify viz.py and set
filter_row_limit = 1000000000
For the timeout issue, increase the values of the parameters below.
For synchronous queries, change superset_config.py:
SUPERSET_WEBSERVER_TIMEOUT
SQLLAB_TIMEOUT
SUPERSET_TIMEOUT (this value should be >= SQLLAB_TIMEOUT)
For async queries:
SQLLAB_ASYNC_TIME_LIMIT_SEC
There must be a config parameter to change the max row limit in site-packages/superset: DEFAULT_SQLLAB_LIMIT to set the default, and SQL_MAX_ROW to set the max in SQL Lab. I guess we have to run superset_init again to make the change appear in Superset.
I've been able to solve the problem as follows:
modify config.py in site-packages/superset:
increase SQL_MAX_ROW from 100,000 to a value large enough for your queries, e.g. SQL_MAX_ROW = 1000000000.
I have an ASP Classic script that has to do a lot of things (admin only), including running a stored proc that takes 4 minutes to execute.
However, even though I have
Server.ScriptTimeout = 0
right at the top of the page, it seems to be ignored on both our 2003 and 2012 servers.
I have tried optimizing the proc with indexes as much as possible, but it's for a drop-down of commonly searched-for terms, so I have to clean out all the hacks, sex spam and so on.
What I don't understand is why the Server.ScriptTimeout = 0 is being ignored.
The actual error is this, and it comes just after control returns from the proc:
Active Server Pages error 'ASP 0113'
Script timed out
/admin/restricted/fix.asp
The maximum amount of time for a script to execute was exceeded. You can change this limit by specifying a new value for the property Server.ScriptTimeout or by changing the value in the IIS administration tools.
It never used to do this, and it should override the default of 30 seconds set in IIS. I have ensured the command timeout of the proc is 0 (unlimited), and the page errors AFTER coming back from the proc.
Anybody got any ideas?
0 is an invalid value for Server.ScriptTimeout, for two reasons:
ASP pages have to have a timeout. They cannot run indefinitely.
ScriptTimeout can never be lower than the AspScriptTimeout set in IIS, according to this statement from the MSDN docs:
For example, if NumSeconds is set to 10, and the metabase setting contains the default value of 90 seconds, scripts will time out after 90 seconds.
The best way to work around this is to set Server.ScriptTimeout to a really high value, for example Server.ScriptTimeout = 14400 (four hours), which comfortably covers a proc that runs for 4 minutes.
I am using an Oracle DB with SAS/CONNECT.
I recently (a week ago) implemented a change in my libname statement, in which I added the following (I don't know if it is related to the issue):
insertbuff=10000 updatebuff=10000 readbuff=10000
Starting yesterday, I have been having an Oracle issue where, after doing a
proc sql;
drop table oralib.mytable;
quit;
data oralib.mytable;
set work.mytable;
run;
I get the following error:
ERROR: ERROR: ERROR: ORACLE execute error: ORA-04031: unable to allocate 4160 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","modification ").
With the occurrence of the above ERROR, the error limit of 1 set by the ERRLIMIT= option has been reached. ROLLBACK has been issued (any rows processed after the last COMMIT are lost).
Total rows processed: 1001
Rows failed: 1
It seems to happen randomly on any table of any size. Sometimes it goes through; most of the time it doesn't. Is there a shared pool release I should do from SAS?
Thanks for your help!
The shared pool is a memory structure in Oracle that keeps the following:
data dictionary cache
SQL query and PL/SQL function result caches
storage for recently executed code in its parsed form
It is possible to flush the shared pool, but this is not a good idea and I would not recommend it. What you have to do is size the shared pool of the database properly. Note that the shared pool serves the entire Oracle instance; it is not sized on a per-user basis. So if there are other users of the database, they might be contributing to the problem. I doubt that any particular query is the cause; my guess is that the shared pool is simply undersized.
In case you have some DBA privileges granted for your user, you can check the current shared pool size by running the following query:
SELECT * FROM v$sgainfo;
You can increase the size of the shared pool with the following statement:
ALTER SYSTEM SET SHARED_POOL_SIZE = 200M;
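Before resizing, you can also check how much of the shared pool is actually free (again assuming you can read the V$ views); a quick diagnostic:

SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'shared pool' AND name = 'free memory';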
Nevertheless, the best solution is to turn to the DBA managing the database (if there is one).
I'm not a SAS guy, so I'll answer your question from the POV of an Oracle DBA, which is what I am.
ORA-04031 means you're running out of space in the shared pool. With a product like SAS, I'm sure they have a recommended minimum size for the shared pool. So you should check the SAS installation documentation, and confirm whether your database has a large enough shared pool size set. (Use show parameter shared_pool_size to see what size it's set to in your database.)
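If you're connected through SAS rather than SQL*Plus, the show parameter command isn't available; an equivalent query (assuming you can read V$PARAMETER) would be:

SELECT name, value
FROM v$parameter
WHERE name = 'shared_pool_size';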
Second, I'm not familiar with the changes you made, and I'm not sure if that would have an effect on the shared pool utilization. Perhaps check the SAS documentation on that one.
Third, it could be an Oracle bug. You should check My Oracle Support for your version of Oracle and search for ORA-04031 with the specific arguments you are seeing in your error message. If it's a known bug, there may be a patch already available.
If it's none of the above, you may need to open an SR with Oracle.
Hope that helps.
I'm working with a moderately sized database of about 60,000 records. I am building a mobile application that will be able to check out a single table into a compact .sdf file for viewing and altering on the device, then allow the user to sync their changes back up with the main server and receive any new information.
I have set it up with the Sync Framework using a WCF Service Library. When setting up the connection, for some reason the database won't let me check "Use SQL Server Change Tracking" and throws this error:
"'Unable to initialize the client database, because the schema for table 'Inventory' could >not be retrieved by the GetSchema() method of DbServerSyncProvider. Make sure that you can >establish a connection to the client database and that either the >SelectIncrementalInsertsCommand property or the SelectIncrementalUpdatesCommand property of >the SyncAdapter is specified correctly."
So I left it unchecked and set it to use the already-created columns "AddDateTime" and "LastEditTime", and it seemed to work; after a massive amount of tweaking I have it partially working. The changes on the device sync up perfectly with the database: updates, deletes, all get applied. However, changes on the server side never get downloaded. I've made sure everything is set up right with the bidirectional setup, so that shouldn't be the problem. And I let it sit overnight so the database received ~500 new records; this morning it actually synced only the latest 24 entries out of the 500 new ones. So that should be further proof that it's able to receive information from the server, but for all practical purposes, it's not.
I've tried pretty much everything and I'm honestly getting close to losing it. If anyone has any ideas they can throw out for me to chase after, I would be most grateful.
I'm not sure if I just need to go back and figure out why I can't use "SQL Server Change Tracking", or if there is a simple explanation for why it's not actually syncing 99% of the server-side changes back to the client.
Also, the server database table schema can't be altered, as a lot of other services use it. But the compact database can be whatever the heck it needs to be to just store the table and sync properly in both directions.
Thank you!!
Quick Overview:
Using WCF and syncing without SQL Server Change Tracking (Fully enabled on server and database)
Syncing changes from client to server works perfectly
Syncing from server back to the client, not so much: out of 500 new entries overnight, a sync downloaded 24.
EDIT: JuneT got me thinking about the times and the anchors. When I synced this morning, it pulled 54 of about 300 newly added records. I went to this line (there are 60 or so columns, so I removed them for readability; the real thing is kind of a joke):
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > @sync_last_received_anchor AND [LastEditDate] <= @sync_new_received_anchor AND [AddDateTime] <= @sync_last_received_anchor)";
And replaced @sync_last_received_anchor with two DIFFERENT times and took out the middle condition. Upon syncing, it now returns the rows trapped between those two times, giving me:
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > '2012-06-13 01:03:07.470' AND [AddDateTime] <= '2012-06-14 08:54:27.727')"; (NOTE: the second date is just the current time)
Though it returned a few hundred more rows than initially planned (I set the date gap to cover 600 rows; it returned just over 800), it does in fact sync the client up with the new server changes.
Can anyone explain why I can't use @sync_last_received_anchor and what I should be looking for? I suppose I could always add a box that allows the user to select the date to begin updating from? Or maybe add some sort of XML file to store the sync date, updated any time a sync completes successfully?
Thanks!
EDIT:
Ran SQL Profiler on it... the date (@sync_last_received_anchor) is getting set to 8 hours ahead of whatever time it really is. I have no idea how or why it's doing this, but that would definitely explain the behavior.
Turns out the anchors are collected like this:
this.SelectNewAnchorCommand.CommandText = "SELECT @sync_new_received_anchor = GETUTCDATE()";
That UTC date is what was causing the 8-hour gap. To fix it, either change it to GETDATE(), or convert your columns to UTC time in the WHERE clause of the commands.
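A minimal sketch of both options, reusing the hypothetical column names from the question (the DATEADD/DATEDIFF shift applies the server's current UTC offset, which is fine unless a DST change falls inside the window):

-- Option 1: have the anchor command return local time instead of UTC
SELECT @sync_new_received_anchor = GETDATE()

-- Option 2: keep GETUTCDATE() and convert the local-time column to UTC for comparison
SELECT [Value] FROM TABLE
WHERE DATEADD(minute, DATEDIFF(minute, GETDATE(), GETUTCDATE()), [LastEditDate]) > @sync_last_received_anchor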
After spending many hours and many cups of coffee, I've figured out how to solve this error of mine. While I was running the code in the desktop testing area, everything seemed to work perfectly; however, the same code and web service on the target device gave this error repeatedly. Then, suddenly, the "dbo_" prefixes on the compact database table names started looking interesting, like they were trying to tell me something really important. So, I listened...
Configuration.SyncTables.Add("Products);
in ClientSyncAgent.cs should be changed to
Configuration.SyncTables.Add("dbo_Products");
[Exeunt]
We have a merge replication setup on SQL Server that goes like this: one SQL Server at the office, another SQL Server traveling around the world. The publisher is the SQL Server at the office.
In about 1% of cases, two of our tables with a column of XML data type (not bound to a schema) are replicated with rows containing empty XML columns. (This has only happened when data is sent from the "traveling server" back home, but then again, data seems to be changed more often there.) We only see this in the production environment (WAN replication).
Things I have verified:
The row is replicated, as the last modification date on the row is refreshed but the xml column is empty. Of course it is not empty on the other SQL Server.
No conflicts are displayed in the replication conflicts UI.
It is not caused by the size of the data inside the XML Column as some are very small.
Usually, the problem occurs in batches (the XML column of 8-9 consecutive rows will be empty).
The problem occurs if a row was inserted OR updated. No pattern there.
The problem seems to occur when the connection is weaker, though this is pure speculation on my part. (We've seen this problem happen more often when the server was far away than when it was close by.)
Sorry if I have confused some things; I am not really a DBA, more of a dev with knowledge of SQL. But since the application using the database keeps getting blamed for the problems ("the XML column must not be empty!!"), I have taken it to heart to try to find the problem instead of just manually patching the data each time. (What's the use of replication if you have to do that?)
If anyone could help out with this problem, or at least suggest some ways to debug/investigate it, that would be greatly appreciated.
I did search a lot on Google and I found this: Hot Fix. But we do have the latest service pack, and the problem seems a bit different.
FYI: we have a replication setup locally here, but the problem never occurs. We will be trying a WAN simulator on it as well to see if that can help.
Thanks
Edit: a hotfix is now available for my issue: http://support.microsoft.com/kb/2591902
After logging this issue with Microsoft, we were able to reproduce the problem without a slow link (big thanks to the competent escalation engineer at Microsoft). The repro is a bit different from our scenario, but it highlights perfectly the timing issue we were getting.
Create 2 tables, one parent and one child, with a PK-FK relationship (a schema sketch follows these steps)
Insert 2 rows in the parent table
Set up replication; configure the Merge Agent to run ON DEMAND
Sync
Once all is replicated:
On the PUBLISHER: delete one row from the parent table
On the SUBSCRIBER: insert 2 rows of data that reference the parent id you deleted above
Insert 5 rows of data that reference the parent id that stays in the table
Sync; the Merge Agent will fail. Sync again; the Merge Agent will succeed
Result: missing XML data on the publisher in the 5 rows
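A minimal T-SQL sketch of the repro schema (table, column, and constraint names here are hypothetical, not from the actual case):

CREATE TABLE Parent (
    ParentId int NOT NULL PRIMARY KEY,
    Payload xml NULL -- untyped XML, like the affected columns
);

CREATE TABLE Child (
    ChildId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ParentId int NOT NULL
        CONSTRAINT FK_Child_Parent REFERENCES Parent (ParentId),
    Payload xml NULL
);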
It seems to be a bug in SQL Server 2005, 2008, and 2008 R2.
It will be addressed in a hotfix for 2008 and up (as SQL Server 2005 is no longer being updated).
Cheers.
You may want to start by slapping a band-aid on this perplexing situation to buy some time to fully investigate and fix it (or, more likely, get MS to fix it). SQL Data Compare is an excellent tool that might help.
Figured I'd put an update here, as this issue gave me a few gray hairs and I am somewhat closer to a solution now.
I finally had some time to work on this and managed to reproduce the issue in our test environment, using a WAN simulator to slow down the link and inject some random packet loss (to best simulate the production environment, where the server is overseas on a really bad line).
After doing some SQL tracing, and some verbose logging here are my conclusions:
When replicating a row with an XML column, the process is done in 2 steps. First, an insert of the full row is done, but with an empty string for the XML column. Right after, an update is done, this time with the XML column containing the data. Since the link is slow, in some situations a foreign key violation occurred.
In this scenario, Table2 depends on Table1. After finishing replicating Table1, and while replicating Table2 (the enumeration of inserts/updates takes time on a slow link), some entries were added to Table1 and Table2. Therefore some inserts on Table2 failed, because the Table1 entries were not in the database yet and were only going to be replicated in the next batch. The next time replication occurred, there were no more foreign key violations; however, when it retried the insert of the row that had previously failed in Table2 (the XML column row), the update part was missing (I could see that in SQL Profiler), and that is why the row ended up with an empty XML value when all was done.
Setting "Enforce for replication" to false on the foreign keys seems to address the problem, however I do still think that this whole process should work with the option set to true.
I logged a support call with Microsoft for this. I have sent the traces and logs to Microsoft and will see what they have to say.
I've read this article: http://msdn.microsoft.com/en-us/library/ms152529(v=SQL.90).aspx. But to me, setting this option to false is kind of a workaround, no?
What do you guys think?
PS: I hope this is clear; I tried to explain it as best I could. English is not my first language.