I'm working with a moderately sized database of about 60,000 records. I'm building a mobile application that checks out a single table into a compact .sdf file for viewing and altering on the device, then lets the user sync their changes back up to the main server and receive any new information.
I have set it up with the Sync Framework using a WCF Service Library. When setting up the connection, for some reason the database won't let me check "Use SQL Server Change Tracking" and throws up the error:
"'Unable to initialize the client database, because the schema for table 'Inventory' could >not be retrieved by the GetSchema() method of DbServerSyncProvider. Make sure that you can >establish a connection to the client database and that either the >SelectIncrementalInsertsCommand property or the SelectIncrementalUpdatesCommand property of >the SyncAdapter is specified correctly."
So I leave it unchecked and set it to use the already created columns "AddDateTime" and "LastEditTime". It seems to work okay, and after a massive amount of tweaking I have it partially working. The changes on the device sync up perfectly with the database; updates and deletes all get applied. However, changes on the server side never get downloaded to the client. I've made sure everything is set up correctly with the bidirectional setup, so that shouldn't be the problem. I let it sit overnight while the database received ~500 new records, and this morning it actually synced the latest 24 entries down to the client... out of 500 new. So the client clearly can receive information from the server, but for all practical purposes it doesn't.
I've tried pretty much everything and I'm honestly getting close to losing it. If anyone has any ideas they can throw out for me to chase, I would be most grateful.
I'm not sure if I just need to go back and figure out why I can't do it with "SQL Server Change Tracking", or if there is a simple explanation for why it's not actually syncing 99% of the changes on the server back to the client.
Also, the server database table schema can't be altered, as a lot of other services use it. But the compact database can be whatever the heck it needs to be to just store the table and sync properly in both directions.
Thank you!!
Quick Overview:
Using WCF and syncing without SQL Server Change Tracking (even though change tracking is fully enabled on the server and the database)
Syncing changes from client to server works perfectly
Syncing from server back to the client not so much: out of 500 new entries overnight, on a sync it downloaded 24.
EDIT: JuneT got me thinking about the times and the anchors. When I synced this morning it pulled 54 of about 300 newly added records. I went into the line below (there are about 60 or so columns, so I removed them for readability; the real thing is kind of a joke):
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > @sync_last_received_anchor AND [LastEditDate] <= @sync_new_received_anchor AND [AddDateTime] <= @sync_last_received_anchor)";
and replaced @sync_last_received_anchor with two DIFFERENT times. Upon syncing, it now returns the rows trapped between those two. I also took out the middle condition, giving me:
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > '2012-06-13 01:03:07.470' AND [AddDateTime] <= '2012-06-14 08:54:27.727')"; (NOTE: the second date is just the current time)
Though it returned a few hundred more rows than initially planned (I set the date gap to cover about 600, and it returned just over 800), it does in fact sync the client up with the new server changes.
Can anyone explain why I can't use @sync_last_received_anchor and what I should be looking for? I suppose I could always add a box that lets the user select the date to begin updating from. Or maybe add some sort of XML file to store the sync date, updated any time a sync completes successfully?
Thanks!
EDIT:
Ran the SQL Profiler on it... the date (@sync_last_received_anchor) is getting set to 8 hours ahead of whatever the time really is. I have no idea how or why it's doing this, but that would definitely explain the behavior.
Turns out the anchors are collected like this:
this.SelectNewAnchorCommand.CommandText = "SELECT @sync_new_received_anchor = GETUTCDATE()";
That UTC date is what was causing the 8-hour gap. To fix it, either change it to GETDATE(), or convert your columns to UTC time in the WHERE clause of the commands.
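For reference, here is a minimal sketch of both options (Inventory, [Value1], and [Value2] stand in for the real table and columns; the on-the-fly conversion uses the server's current UTC offset, so it ignores historical daylight-saving changes):

-- Option 1: hand out the new anchor in server local time instead of UTC.
SELECT @sync_new_received_anchor = GETDATE()

-- Option 2: keep the UTC anchor and convert the tracking columns to UTC in the WHERE clause.
SELECT [Value1], [Value2]
FROM   Inventory
WHERE  DATEADD(MINUTE, DATEDIFF(MINUTE, GETDATE(), GETUTCDATE()), [LastEditDate]) > @sync_last_received_anchor
  AND  DATEADD(MINUTE, DATEDIFF(MINUTE, GETDATE(), GETUTCDATE()), [LastEditDate]) <= @sync_new_received_anchor
  AND  DATEADD(MINUTE, DATEDIFF(MINUTE, GETDATE(), GETUTCDATE()), [AddDateTime]) <= @sync_last_received_anchor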
After spending many hours with many cups of coffee, I've figured out how to solve this error of mine. While I was running the code in the desktop testing area, everything seemed to work perfectly; however, the same code and web service on the target device gave this error repeatedly. Then, suddenly, the "dbo_" prefixes on the compact database table names started looking interesting, like they were trying to tell me something really important. So I listened...
Configuration.SyncTables.Add("Products");
in ClientSyncAgent.cs should be changed to
Configuration.SyncTables.Add("dbo_Products");
[Exeunt]
Related
I know there is a system function to get the host name - which returns the Server Name (Application server name in my case).
SELECT HOST_NAME()
Is there a similar function like HOST_TIME() to get the time of application server?
Or is there any workaround at database side to get the time of application server when a procedure is called?
This doesn't sound like a database question to me. You should have the application pass a timestamp along with whatever query is sent to the database server.
If you must define this at the database level, why not simply add the time zone difference? I'm not sure where you're located, but here in the US, if I had that issue I would simply use GETDATE() and then DATEADD to adjust to the other server's time zone. The only thing you would need to spend some time on is what to do when the time changes (daylight savings time, etc.). In my case, none of our users use our system at that hour, though that may be different in your case.
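A minimal sketch of that workaround (the 8-hour offset is only an example; substitute your actual offset, and note that a fixed offset will not follow daylight-saving changes on its own):

-- Server local time shifted by a fixed offset to approximate the application server's clock.
SELECT DATEADD(HOUR, -8, GETDATE()) AS ApplicationServerTime;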
Good luck, and let us know what happens.
I have a big database on one MSSQL server that contains data indexed by a web crawler.
Every day I want to update the SOLR search-engine index using the DataImportHandler, which is situated on another server and another network.
Solr's DataImportHandler uses a query to get data from SQL, for example this query:
SELECT * FROM DB.Table WHERE DateModified > Config.LastUpdateDate
The ImportHandler does 8 selects of this type. Each select will get around 1,000 rows from the database.
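For context, if you ever drop the Config.LastUpdateDate bookmark and use DataImportHandler's own delta mechanism, the query is typically driven by the handler's last-index-time variable; a rough sketch using the table and column from the example above (the variable is substituted by DataImportHandler from its config, usually data-config.xml):

-- Hypothetical deltaQuery; ${dataimporter.last_index_time} is filled in at run time.
SELECT * FROM DB.Table WHERE DateModified > '${dataimporter.last_index_time}'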
To connect to SQL Server I am using com.microsoft.sqlserver.jdbc.SQLServerDriver.
The parameters I can add for connection are:
responseBuffering="adaptive/all"
batchSize="integer"
So my question is:
What can go wrong while doing these queries every day (aside from network errors)?
I want to know how SQL Server behaves in this context.
Furthermore, I have to make a decision about how I will implement this import and how to handle errors, but first I need to know what errors can arise.
Thanks!
Later edit
My problem is that I don't know how these SQL queries can fail. When I call this importer every day, it does 10 queries against the database. If the 5th query fails I have two options:
roll back the entire transaction and do it again, or commit the data I got from the first 4 queries and somehow redo queries 5 to 10. But if these queries always fail because of some other problem, I need to think of another way to import this data.
Can these SQL queries over the internet fail because of timeouts or something like that?
The only problem I identified after working with this type of import is:
Network problems - if the network connection fails, SOLR rolls back any changes and the commit doesn't take place. In my program I identify this as an error and don't log the changes in the database.
Thanks @GuidEmpty for providing his comment and clearing this up for me.
There could be issues with permissions (not sure if you control these).
Might be a good idea to catch exceptions you can think of and include a catch all (Exception exp).
Then treat that catch-all as the worst case: roll back (where you can) and log the exception to look at later on.
You don't say what types you are selecting either; keep in mind that text/blob columns take a lot more space and could cause issues internally if you buffer any data, etc.
Though on a quick re-read, you don't need to roll back if you are only selecting.
I think you would be better off thinking about what you are hoping to achieve and whether knowing all possible problems will really help.
HTH
We have a merge replication setup on SQL Server that goes like this: one SQL Server at the office, another SQL Server traveling around the world. The publisher is the SQL Server at the office.
In about 1% of cases, two of our tables with a column of XML data type (not bound to a schema) are replicated with rows containing empty XML columns. (This only happens when data is sent from the "traveling server" back home, but then again, data seems to be changed more often there.) We only see this in the production environment (WAN replication).
Things I have verified:
The row is replicated, as the last modification date on the row is refreshed but the xml column is empty. Of course it is not empty on the other SQL Server.
No conflicts are displayed in the replication conflicts UI.
It is not caused by the size of the data inside the XML Column as some are very small.
Usually, the problem occurs in batches. (The XML column of 8-9 consecutive rows will be empty.)
The problem occurs if a row was inserted OR updated. No pattern there.
The problem seems to occur, but this is pure speculation on my part when the connection is weaker. ( We've seen this problem happen more often when the server was far away as compared to when it was close by. )
Sorry if I have confused some things; I am not really a DBA, more of a dev with knowledge of SQL. But since the application using the database keeps getting blamed for the problems (the XML column must not be empty!) I have taken it to heart to try and find the problem instead of just manually patching the data each time (what's the use of replication if you have to do that?).
If anyone could help out with this problem, or at least suggest some ways of debugging / investigating it, that would be greatly appreciated.
I did search a lot on Google and I did find this: Hot Fix. But we do have the latest service pack and the problem seems a bit different.
FYI: we have a replication setup locally here but the problem never occurs. We will be trying a WAN simulator on it as well to see if that helps.
Thanks
Edit: a hotfix is now available for my issue: http://support.microsoft.com/kb/2591902
After logging this issue with Microsoft, we were able to reproduce the problem without a slow link (big thanks to the competent escalation engineer at Microsoft). The repro is a bit different from our scenario, but highlights the timing issue we were getting perfectly.
Create 2 tables, one parent and one child, with a PK-FK relationship (a SQL sketch of this setup follows the steps below)
Insert 2 rows in the parent table
Set up replication – configure merge agent to run ON DEMAND
Sync
Once all is replicated:
On the PUBLISHER: delete one row from the parent table
On the SUBSCRIBER: Insert 2 rows of data that references the parentid you deleted above
Insert 5 rows of data that references the parentid that will stay in the table
Sync, Merge agent will fail, Sync again, Merge agent will succeed
Result: missing XML data on the publisher on those 5 rows.
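A minimal sketch of the schema behind those steps; all names are illustrative rather than taken from the Microsoft case, and merge replication will add its own rowguid column once the tables are published:

-- Parent/child pair with a PK-FK relationship, each carrying an XML column.
CREATE TABLE Parent (
    ParentId INT NOT NULL PRIMARY KEY,
    SomeXml  XML NULL
);
CREATE TABLE Child (
    ChildId  INT NOT NULL PRIMARY KEY,
    ParentId INT NOT NULL CONSTRAINT FK_Child_Parent REFERENCES Parent (ParentId),
    SomeXml  XML NULL
);
-- The two initial parent rows from step 2.
INSERT INTO Parent (ParentId) VALUES (1);
INSERT INTO Parent (ParentId) VALUES (2);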
It seems to be a bug in SQL Server 2005, 2008, and 2008 R2.
It will be addressed in a hotfix for 2008 and up (as SQL Server 2005 is no longer being altered).
Cheers.
You may want to start out by slapping a band-aid on this perplexing situation to buy some time to fully investigate and fix it (or, more likely, get MS to fix it). SQL Data Compare is an excellent tool that might help.
Figured I'd put an update here, as this issue gave me a few gray hairs and I am somewhat closer to a solution now.
I finally had some time to work on this and managed to reproduce the issue in our test environment, using a WAN simulator, slowing down the link, and injecting some random packet loss (to best simulate the production environment, where the server is overseas on a really bad line).
After doing some SQL tracing, and some verbose logging here are my conclusions:
When replicating a row with an XML column, the process is done in 2 steps. First an insert is done of the full row, but with an empty string for the XML column. Right after, an update is done, this time with the XML column containing data. Since the link is slow, in some situations a foreign key violation occurred.
In this scenario, Table2 depends on Table1. After finishing replicating Table1 and starting to replicate Table2 (an enumeration of inserts/updates, which takes time on a slow link), some entries were added to Table1 and Table2. Therefore some inserts on Table2 failed because the Table1 entries were not yet in the database and were only going to be replicated in the next batch. The next time replication occurred, there were no more foreign key violations; however, when it tried to insert the row that had previously failed in Table2 (the XML column row), the update part of it was missing (I could see that in the SQL Profiler), and that is why the row ended up, after all was done, with an empty XML column.
Setting "Enforce for replication" to false on the foreign keys seems to address the problem, however I do still think that this whole process should work with the option set to true.
I logged a support call with Microsoft for this. I have sent the traces and logs to Microsoft and will see what they have to say.
I've read this article: http://msdn.microsoft.com/en-us/library/ms152529(v=SQL.90).aspx. But for me, setting this option to false is kind of a workaround, no?
What do you guys think?
PS: hope this is clear; I tried to explain it the best I could. English is not my first language.
We have a legacy VB.NET application that has been working for years.
All of a sudden it stopped working yesterday and gives SQL Server timeouts.
Most parts of the application give the timeout error; one part, for example, is the code below:
command2 = New SqlCommand("select * from Acc order by AccDate,AccNo,AccSeq", SBSConnection2)
reader2 = command2.ExecuteReader()
If reader2.HasRows() Then
While reader2.Read()
If IndiAccNo <> reader2("AccNo") Then
CAccNo = CAccNo + 1
CAccSeq = 10001
IndiAccNo = reader2("AccNo")
Else
CAccSeq = CAccSeq + 1
End If
command3 = New SqlCommand("update Acc Set AccNo=@NewAccNo,AccSeq=@NewAccSeq where AccNo=@AccNo and AccSeq=@AccSeq", SBSConnection3)
command3.Parameters.Add("@AccNo", SqlDbType.Int).Value = reader2("AccNo")
command3.Parameters.Add("@AccSeq", SqlDbType.Int).Value = reader2("AccSeq")
command3.Parameters.Add("@NewAccNo", SqlDbType.Int).Value = CAccNo
command3.Parameters.Add("@NewAccSeq", SqlDbType.Int).Value = CAccSeq
command3.ExecuteNonQuery()
End While
End If
It was working, and now it times out in command3.ExecuteNonQuery().
Any ideas ?
~~~~~~~~~~~
Some information:
Nothing has changed on the network, and the app uses a local database.
The main issue is that it doesn't work anymore even in the development environment.
I'll state the obvious - something changed. It could be an upgrade that isn't having the desired effect - it could be a network component going south - it could be a flakey disk - it could be many things - but something in the access path has changed. What other problem indications are you seeing, including problems not directly related to this application? Where is the database stored (local disk, network storage box, written by angels on the head of a pin, other)? Has your system administrator "helped" or "improved" things somehow? The code has not worn out - something else has happened.
Is it possible that this query has been getting slower over time and has now just exceeded the default timeout?
How many records are in the Acc table, and are there indexes on AccNo and AccSeq?
Also what version of SQL are you using?
How long has it been since you updated statistics and rebuilt indexes? (A sketch of both follows these questions.)
How much has your data grown? Queries that work fine for small datasets can be bad for large ones.
Are you getting locking issues? [AMJ] Have you checked activity monitor to see if there are locks when the timeout occurs?
Have you run Profiler to grab the query that is timing out and then run it directly on the server? Is it faster then? It could also be network issues in moving the information from the database server to the application. That would at least tell you whether it's a SQL Server issue or a network issue.
And like Bob Jarvis said, what has recently changed on the server? Has something changed in the database structure itself? Has someone added a trigger?
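On the statistics and index housekeeping mentioned above, a minimal sketch against the Acc table (assuming SQL Server 2005 or later):

-- Rebuild every index on Acc, then refresh its column statistics.
ALTER INDEX ALL ON Acc REBUILD;
UPDATE STATISTICS Acc;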
I would suggest that there is a lock on one of the records that you are trying to update, or there are transactions that haven't been completed.
I know this is not part of your question, but after seeing your sample code I have to make this comment: is there any chance you could change your method of executing SQL against your database? It is bad on so many levels.
Perhaps you should set the CommandTimeout property to a higher value?
Doing so will allow your command to wait a little longer for the underlying database to respond. As I see it, perhaps you are not allowing enough time for your database engine to perform everything required before creating another command to perform your update.
Keep in mind that the SqlDataReader keeps the SELECT open while feeding the in-memory objects. Then, while still reading, your code runs updates against the same database, which your database engine can't service within the time your SqlCommand allows, so it times out.
Any chance of quotes being part of the strings you are passing to the queries?
Any chance of date-dependent queries where a special condition is no longer working?
Have you tested the obvious?
Have you run the "update Acc Set AccNo=@NewAccNo,AccSeq=@NewAccSeq where AccNo=@AccNo and AccSeq=@AccSeq" query directly in SQL Server Management Studio? (Please replace the parameters with some hard-coded values.)
Have you run the same test on another colleague's PC?
Can we make sure that the SqlConnection is working fine? It could be that the SQL login credentials have changed and the connection is timing out. It would probably be more helpful if you posted the error message here.
You can rewrite the update as a single query. This will run much faster than the original query.
UPDATE subquery
SET AccNo = NewAccNo, AccSeq = NewAccSeq
FROM
(SELECT AccNo, AccSeq,
DENSE_RANK() OVER (ORDER BY AccNo) NewAccNo,
ROW_NUMBER() OVER (PARTITION BY AccNo ORDER BY AccDate, AccSeq)
+ 10000 NewAccSeq
FROM Acc) subquery
After HLGEM's suggestions, I would check the data and make sure it is okay. In cases like this, 95% of the time it is the data.
Make sure the disk is defragmented. Yes, I know, but it does make a difference. Not with the built-in defragmenter, but with one that defrags and optimizes, like PerfectDisk.
This may be a bit of a long shot, but if your entire application has stopped working, have you run out of space for the transaction log in your database? Either it's been capped at an absolute size and that limit has been reached, or your disk is just full.
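A quick way to check, run against the instance (DBCC SQLPERF is a standard command; look at the "Log Space Used (%)" column for your database):

-- Shows log size and percentage used for every database on the instance.
DBCC SQLPERF(LOGSPACE);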
Maybe your tables now contain more data, and the SqlConnection.ConnectionTimeout value defined in your config file is too small; that value may no longer be enough for your queries to execute.
You can also try optimizing your queries and rebuilding indexes.
I have an Access application that uses the classic front-end/back-end approach. Yesterday, the back end got corrupted for a reason I don't know. So I opened the back end with Access 2003, and Access asked me if I wanted to repair the file; I said yes and it seemed to work.
I can open the database, see the tables' contents, and run most of the queries.
However, there is an Access query that doesn't work with a specific WHERE clause.
Example :
// This works in the original DB, but not in the compacted one :
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3 AND tbl2.f = 1;
// This works in both the original and the compacted one :
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3;
When I try to run the query, nothing happens. The Access process starts to use most of the CPU and the GUI stops responding. If I run the query from the query editor, I can use Ctrl+Break to stop the execution. I tried giving the query a lot of time and it didn't help.
I've checked the execution plan in showplan.out and it seems correct (at least it should not take forever to execute).
I tried compacting the DB again. I tried importing the tables into a new DB. I even tried importing the tables and their data into an .mdb file that was in a known good state (from a backup).
Anyone have an idea?
Sounds like an index was corrupted, and when that happens it's dropped during the compact. Check for a system table called MSysCompactErrors -- you'll have to show hidden objects and/or system objects in Tools | Options | View.
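If the compact did drop anything, a quick hedged check is to query that table directly (it only exists if compact errors were recorded):

SELECT * FROM MSysCompactErrors;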
Never compact a Jet MDB without making a backup beforehand. Because of that rule, the COMPACT ON CLOSE function is completely useless (it's not cancellable), so make sure it's turned off in all MDBs.
I don't know what type of meta data Access brings along when it imports a table from one database into another one. If the meta data is corrupted, importing the table to another database wouldn't necessarily resolve the problem. If practical, you might try creating the tables from scratch in a brand new database and then just exporting and importing (or copying and paste appending) the data into the new database.
I've never seen a table get corrupted like this in such a small database, although with Access anything is possible. Could there be something wrong with the data?
I'd try recreating the query fresh (new name, etc.), and see what happens.
You could even try copying it (even within the same DB or to a brand new one). If that works, the worst case scenario is you have to copy all the objects across to a new DB.
Is there an index on the field tbl2.f?
Also try going into that table in datasheet view, sort tbl2.f in ascending sequence and see if there is anything really strange in the first or last records.
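If tbl2.f turns out not to be indexed, a minimal sketch of adding one (run it as a DDL query in Access; the index name is illustrative):

CREATE INDEX idxTbl2F ON tbl2 (f);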
Do you have access to a SQL Server installation? You could use the Upsizing Wizard under the Tools -> Database Utilities menu to copy the data to SQL Server, and see if you get the same problem there.