We are experimenting with using RavenDB as a log target within our current NLog usage. We created a new RavenDB server and a database using Voron as the storage engine. Then it was just a matter of updating our NLog configuration, which we did.
For a while everything was great, but then we ran into a situation where we call Trace() within a database transaction. With RavenDB set up as the log target, this means the SaveChanges() call is made within the transaction as well.
Everything is great until we call Transaction.Commit(). RavenDB throws a 500 server error at that point, because DTC is not supported with the Voron engine (and is slated to be removed everywhere, from what I understand). I'm fine with this. I don't particularly care - writing to the log should not be part of the transaction anyway, because if something does go wrong, we don't want the related log entries rolled back with it.
I searched the documentation hoping to find a configuration option that I could set that would just tell RavenDB to ignore transactions, or at least ignore DTC, but I can't find anything. Is there any way I can get it to stop throwing these 500 server errors?
Set EnlistInDistributedTransactions = false on the document store, and it will work for you
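For anyone hitting the same thing, here is a minimal sketch of what that looks like with the older RavenDB .NET client; the URL and database name are placeholders for your own setup:

    using Raven.Client.Document;

    // Configure the store once at startup, e.g. where the NLog RavenDB target is wired up.
    var store = new DocumentStore
    {
        Url = "http://localhost:8080",   // placeholder: your RavenDB server URL
        DefaultDatabase = "Logs"         // placeholder: the logging database
    };

    // Stop the client from enlisting sessions in the ambient System.Transactions transaction,
    // so the SaveChanges() made by the log target no longer tries to use DTC.
    store.EnlistInDistributedTransactions = false;

    store.Initialize();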
I have two legacy servers in GCE, both of which have been flagged as using the deprecated metadata server endpoints. At the moment they hold hundreds of GBs of MySQL and MongoDB data between them, and risking an upgrade on these boxes that has an adverse effect is not an option.
We are currently in the process of migrating away from the data stored here, but for now, we need to keep them running.
Is anyone aware of any implications of either:
a) doing nothing, or
b) just setting the disable-legacy-endpoints metadata flag to true?
i.e. Will these instances stop working altogether if we leave them as they currently are?
After some more digging into what was actually using the metadata API in the first place, we found that the calls were coming from stackdriver_agent, which was installed a very long time ago while it was still free and was simply never removed.
Stopping this agent removes all of the legacy metadata calls made from these two servers.
If you are considering disabling the legacy endpoints with the disable-legacy-endpoints metadata flag, make sure to test it in a contained environment first, i.e. a new VM created from a snapshot of the affected instance, before applying it to production services.
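If it helps, this is roughly how the flag can be flipped (and reverted) on a test instance with gcloud; the instance and zone names are placeholders:

    # Clone the affected instance from a snapshot first, then flip the flag on the clone:
    gcloud compute instances add-metadata test-instance-from-snapshot \
        --zone europe-west1-b \
        --metadata disable-legacy-endpoints=true

    # Revert on the test instance if anything breaks:
    gcloud compute instances add-metadata test-instance-from-snapshot \
        --zone europe-west1-b \
        --metadata disable-legacy-endpoints=false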
For help identifying the instances making the calls, refer to this article
For help identifying the processes within the instances, refer to this article
Dear all,
I have scenarios where I have to perform updates on the intrinsic LightSwitch database and call some SQL stored procedures in the save pipeline, all in ONE TRANSACTION, such that if an error happens in the LS save pipeline, my stored procedure calls are rolled back.
The recommended way of doing this is to set up an ambient transaction in the SaveChanges_Executing event and to dispose of it in the SaveChanges_Executed and SaveChanges_ExecuteFailed events, as described in this article: http://www.codemag.com/Article/1103071 (sketched below).
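For reference, a rough sketch of that pattern; the partial-method names are taken from above, but the exact signatures and the data service class name (ApplicationDataService here) can differ by LightSwitch version, so treat this as an outline rather than drop-in code:

    using System;
    using System.Transactions;

    public partial class ApplicationDataService
    {
        // Ambient transaction spanning the LightSwitch save pipeline and the stored procedure calls.
        private TransactionScope _transaction;

        partial void SaveChanges_Executing()
        {
            // Everything executed between here and Executed/ExecuteFailed enlists in this scope.
            _transaction = new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted });
        }

        partial void SaveChanges_Executed()
        {
            // The save succeeded: commit the stored procedure work together with the LightSwitch changes.
            _transaction.Complete();
            _transaction.Dispose();
        }

        partial void SaveChanges_ExecuteFailed(Exception exception)
        {
            // The save failed: disposing without Complete() rolls everything back.
            _transaction.Dispose();
        }
    }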
But this has two fatal problems:
It does not work when I publish the app on Azure since distributed transactions are not supported there.
It also throws an error when I try to save changes to the ApplicationData source through the ServerApplicationContext: "The underlying provider failed on EnlistTransaction".
Has anyone found a cleaner way to handle transactions in LightSwitch that works both on Azure and through ServerApplicationContext?
Thanks a lot
Currently, distributed transactions using MSDTC do not work against SQL Azure. They will work just fine against SQL Server in a VM running in Azure, however. MSDTC is tied to running on a domain controller, generally, and that doesn't make sense in the public cloud context. An alternative DTC is likely needed, but this is not something that has been publicly announced as of today.
I don't think LightSwitch is the main issue here (though perhaps it has some additional issue beyond what I have described).
I hope that at least explains why it doesn't work today - I wish I had a better answer for you, but right now it is not possible. The "workarounds" being used are to build applications that can be resilient to the commits happening (or not) on each side and can recover from the failed cases.
I'm starting to assess our company using RavenDB for storing some stuff that doesn't really belong in a relational database (we're traditionally a SQL Server shop). I installed RavenDB locally on my machine, created a database, added a document. Nice!
Being a DBA, I decided to see how backups/restores work. I backed up my database, deleted it, then restored it from the backup. After refreshing my admin screen, I saw my database. I clicked on it, and got a message that the database doesn't exist.
After a couple of hours, I tried again. It still didn't exist. A full day later, I walk into work and try again, and this time the database works. I've had similar situations with updating documents: an update seems to take anywhere from one second to several hours to show up...
Is this normal for RavenDB? Am I completely misconfigured? I run SQL Server on my local machine and it's lightning fast, so I can't imagine updating a single document could take that long. As-is, I can't imagine recommending we use RavenDB for anything.
Are you querying through indexes or loading documents by ID? Documents should be updated immediately (ACID). If indexes are slow to update (check their status in RavenDB Studio), it could be a configuration problem, or something external such as anti-virus software could be causing them to update slowly.
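If it is the index path, a quick way to tell the two apart from the client side; the type and property names here are made up, and the API shown is the older RavenDB .NET client:

    using System.Linq;
    using Raven.Client;

    // "store" is an initialized IDocumentStore; LogEntry is a made-up document class.
    using (var session = store.OpenSession())
    {
        // Loading by ID is ACID and always returns the latest committed version of the document.
        var byId = session.Load<LogEntry>("LogEntries/1");

        // Queries go through an index, which is updated in the background and may briefly
        // return stale results; this customization waits for the index to catch up.
        var byQuery = session.Query<LogEntry>()
                             .Customize(x => x.WaitForNonStaleResults())
                             .Where(e => e.Level == "Error")
                             .ToList();
    }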
Apparently, at least for the document-update latency, query caching is enabled by default, so I was getting cached results.
Jeffery,
No, that isn't normal by a long shot. You should be able to see what changed immediately.
Note that certain AV products will interfere with the HTTP pipeline and can affect RavenDB's usage. The studio will also auto-update things only every 5 seconds (to reduce UI jitter), but that is about it.
Restoring a database (on the same machine) should take only as long as it takes to copy the files (a purely I/O-bound operation).
If the backup comes from another machine using a different version of Windows, we might need to run a check on the files, which can take a bit of time, but that doesn't sound like your scenario.
I'm synchronizing SQL Server 2008 with ~6 SQL Server 2008 Express clients (everything R2 I believe), using the SyncOrchestrator or specifically using http://code.msdn.microsoft.com/windowsdesktop/Database-SyncSQL-Server-e97d1208 as a base with slight modifications. To my knowledge this means all connections are peers or nodes.
I have 2 scopes. One is download only and the other is upload only. The download only scope is riddled with identity columns, primarily because I didn't know any better and still couldn't wrap my head around introducing Guids as the PK on the client side. It doesn't totally matter, as all clients should have exact replicas of about 8 or so tables, and these machines don't touch this data in any way, only read it.
The upload only scope uses Guids as fortunately I can control that portion of the database and there would be no way 10 clients all using the same identity seed could sync back to the server properly. Both scopes use the default provisioning with bulk inserts and the whole 9 yards so there shouldn't be anything I'm doing on the provisioning end to screw this up.
I initially set everything up without using PerformPostRestoreFixup, and the initial database was manually synchronized with insert statements from the host. This seemed fine, but no updates or deletes ever seemed to be applied. You can safely ignore this (it's only mentioned for historical accuracy and to prove my ineptness), as I then used VS2010 Database Projects to rebuild the database down to schema only and synchronized. I then used the steps outlined here (http://social.microsoft.com/Forums/br/syncdevdiscussions/thread/9ac6d1a1-1565-4b82-a8d8-3d4a9ff5d07b) (sync, backup, restore, call PerformPostRestoreFixup, sync on x clients), and on my dev box where I'm setting all this up I could see updates and deletes just fine. It's when I deploy this to the x clients that I'm not seeing a mirror of the database as I think I should.
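For anyone following along, the fixup step itself is just a call like the following; the connection string is a placeholder, and this assumes the SqlSyncStoreRestore helper from Sync Framework 2.1:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data.SqlServer;

    // Run once on the client right after restoring the backup and before the first sync,
    // so the restored copy gets its own sync/tracking metadata fixed up.
    using (var clientConn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=ClientDb;Integrated Security=True"))
    {
        var restore = new SqlSyncStoreRestore(clientConn);
        restore.PerformPostRestoreFixup();
    }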
The initial sync will complain and try to synchronize all records again; I believe this is expected. In the ApplyChangeFailed event on the client I set everything other than DbConflictType.ErrorsOccurred to ApplyAction.RetryWithForceWrite (handler sketched below). This may be a source of problems, as I initially thought this should be done to force the change down to the client. I want the server to always win in this scenario, but in the trace I always see the phrase "Local wins" during the bulk insert/update calls. It's possible I'm seeing the error before the re-apply happens, but it's awkward to look at.
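For clarity, the conflict handler in question looks roughly like this (the provider wiring around it is assumed; the event and enums are from Sync Framework's database providers):

    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    static void WireConflictHandling(SqlSyncProvider clientProvider)
    {
        clientProvider.ApplyChangeFailed += (sender, e) =>
        {
            // For every conflict except genuine errors, force the incoming (remote) change
            // to be re-applied so the server's row wins on the client.
            if (e.Conflict.Type != DbConflictType.ErrorsOccurred)
            {
                e.Action = ApplyAction.RetryWithForceWrite;
            }
        };
    }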
The only problem I seem to be having is with the download only scope. The initial client database is about a week old now, and if I use the PerformPostRestoreFixup steps I don't see any of the updates that were applied between then and now, as I think I should. It's as if SyncFx almost prefers a blank database on the client side to kick off the initial sync; then all the updates seem to apply just fine with no ApplyChangeFailed events firing.
If anyone has seen this before or has a clue where to go, I would greatly appreciate it. My brain is fried trying to figure out what's going on. My last-ditch effort will be to deploy blank databases to all the clients and have them start the sync. I've had no issues with this on the dev side, but I can only test one other client to know whether that will do anything different. Aside from that, I don't know what to do other than to keep doing manual syncs, which would defeat the purpose entirely. I thought PerformPostRestoreFixup would alleviate the issue entirely, but I seem to be having the same problems with or without it, or perhaps I'm not looking at what I need to be.
Thanks
I wanted to report and close the entry with my findings.
When I would deploy a previously configured client database, I'd often get ApplyChangeFailed events in the form of this log:
"[05:30:41 PM] - ApplyChange Failed: TableName: , Stage: ApplyingInserts, ConflictType: LocalInsertRemoteInsert, Action: RetryWithForceWrite"
This is what I expected, since it was trying to re-insert data that was already there. With RetryWithForceWrite this should have been turned into an update, but I found the data was not being updated with what was sent down.
Once I started each client with a completely blank database and provisioned locally, all of these errors went away. It's as if every client expects a unique ID that only it sets. I'm also using x64 builds rather than x86, which may or may not have a bearing on the results. I wish I could determine exactly what happened, but it seems that when in doubt, and whenever possible, starting from absolute zero and letting sync fill in the data is your safest option.
We're using Sitecore 6.5, and each time we start to publish items, users who are browsing the website get server 500 errors which turn out to be:
Transaction (Process ID ##) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
How can we set up SQL Server to give priority to a specific application? We cannot modify any queries or code, so it has to be done via SQL Server (or the connection string).
I've seen "deadlock victim" in transaction, how to change the priority? and looked at http://msdn.microsoft.com/en-us/library/ms186736(v=SQL.105).aspx but these seem to be per session, not globally.
I don't care if it's a fix/change to SiteCore or a SQL solution.
I don't think you can set the deadlock priority globally - it's a session-only setting. There aren't any connection string settings for it that I know of; the list of possible SqlConnection string settings can be found here.
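To illustrate why: DEADLOCK_PRIORITY only applies to the session that issues it, so it would have to be run on every connection the application opens, which is a code change rather than a server-side or connection-string setting. A sketch of what that per-session statement looks like (connection string is a placeholder):

    using System.Data.SqlClient;

    // DEADLOCK_PRIORITY is scoped to the session that sets it, so it has to be issued
    // on each connection after it is opened - there is no global or connection-string equivalent.
    using (var conn = new SqlConnection("Server=.;Database=Sitecore_Web;Integrated Security=True"))
    {
        conn.Open();
        using (var cmd = new SqlCommand("SET DEADLOCK_PRIORITY HIGH;", conn))
        {
            cmd.ExecuteNonQuery();
        }
        // ...the application's own queries would then run on this same connection...
    }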
It sounds to me like you're actually having a problem with the cache: every time you publish, it clears the cache, and thus you're getting deadlocks from all of these calls being made at the same time. I haven't seen this sort of thing happen with 6.5, so you might also want to look into your caching. It would help a lot to check your Sitecore logs and see whether this is happening when caches are being created. Either way, check the caching guide on the SDN and see if that helps.