I'm trying to understand the Cache dependencies example, but some of it is eluding me.
var cts = new CancellationTokenSource();
_cache.Set(CacheKeys.DependentCTS, cts);
using (var entry = _cache.CreateEntry(CacheKeys.Parent))
{
// expire this entry if the dependent entry expires.
entry.Value = DateTime.Now;
entry.RegisterPostEvictionCallback(DependentEvictionCallback, this);
_cache.Set(CacheKeys.Child,
DateTime.Now,
new CancellationChangeToken(cts.Token));
}
A CancellationTokenSource allows evicting multiple cache entries as a group. But what's the point of the using block in this example? I downloaded the sample project and replaced the using block with the following:
_cache.Set( CacheKeys.Parent, DateTime.Now, new CancellationChangeToken( cts.Token ) );
_cache.Set( CacheKeys.Child, DateTime.Now, new CancellationChangeToken( cts.Token ) );
While my two lines are simpler, they seem to have the same effect as the using block. The doc says:
With the using pattern in the code above, cache entries created inside the using block will inherit triggers and expiration settings.
What does "triggers and expiration settings" refer to here?
My application also needs dependent cache entries, but they must be added at separate times. Therefore I can't take the approach shown in the example's using block, since it requires both entries to be added at the same time, right? What functionality am I missing out on by instead taking the approach shown in my two lines of code above?
Finally, shouldn't Dispose be called on CancellationTokenSource after the Cancel call? The sample doesn't do that.
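To make the question concrete, here's a sketch of the eviction trigger I have in mind, using the sample's cache key (the Dispose call is the part the sample omits):
var cts = _cache.Get<CancellationTokenSource>(CacheKeys.DependentCTS);
cts.Cancel();   // evicts every entry tied to this token as a group
cts.Dispose();  // the sample never does this - shouldn't it?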
This is a bit of an edge case, and I would submit this as a bug in the repo if I could find it...
Consider the following LINQPad snippet:
void Main()
{
var memoryCache = new MemoryCache(new MemoryCacheOptions
{
SizeLimit = 1 // <-- Setting to 2 fixes the issue
});
Set(memoryCache);
memoryCache.Get("A").Dump(); // Yields 1
Set(memoryCache);
memoryCache.Get("A").Dump(); // Yields null
}
private void Set(MemoryCache memoryCache)
{
//memoryCache.Remove("A"); // <-- Also fixes the issue
memoryCache.Set("A", 1, new MemoryCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(1),
SlidingExpiration = TimeSpan.FromDays(1),
Size = 1
});
}
My question is, when using .Set(), is a new entry added, then the old removed, thus requiring extra space allocated in the cache?
This seems to relate to a bug I logged (which was rejected). You can see the source for this here, and from what I read the logic in your case (and mine) works like this:
Add item requested.
Size would be exceeded if the item were added (UpdateCacheSizeExceedsCapacity()), therefore the request is rejected (silently, which is what I objected to).
Also, because this condition was detected, OvercapacityCompaction() is kicked off, which will remove your item.
There looks to be a race condition, as the compaction work is queued to a background thread; perhaps occasionally your test finds the item still there?
To answer your specific question - no, it's not first added then the excess removed.
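Given that, the two fixes you already noted in your snippet's comments are the way around it. A minimal sketch of both, reusing your key and options:
// Fix 1: allow headroom so the size check passes while replacing an entry.
var memoryCache = new MemoryCache(new MemoryCacheOptions { SizeLimit = 2 });

// Fix 2: remove the old entry first so its size isn't counted against the limit.
memoryCache.Remove("A");
memoryCache.Set("A", 1, new MemoryCacheEntryOptions
{
    AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(1),
    Size = 1
});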
Edit:
Re the second Get() always returning null... I missed some extra handling of the existing entry, if found (it doesn't improve the outcome though). Prior to checking whether the size would be exceeded by adding the item, there is this:
if (_entries.TryGetValue(entry.Key, out CacheEntry priorEntry))
{
priorEntry.SetExpired(EvictionReason.Replaced);
}
I.e. if it finds your existing entry, it marks it as evicted. It then applies the UpdateCacheSizeExceedsCapacity() test without factoring in that it has just evicted the existing entry (which arguably it could).
Later on, still in Set()/SetEntry(), in the exceeds-capacity case, it does this:
if (priorEntry != null)
{
RemoveEntry(priorEntry);
}
...which immediately removes the previous entry. So whether and when OvercapacityCompaction() (or ScanForExpiredItems()) would have got to it doesn't matter; it's gone before Set()/SetEntry() returns.
(I've also updated the source link above to the current value; it doesn't change the logic.)
I am very new to NServiceBus, and in one of our projects we want to accomplish the following:
Whenever table data is modified in SQL Server, construct a message and insert it into a SQL Server Service Broker queue
Read the broker queue message using NServiceBus
Publish the message again as another event so that other subscribers can handle it.
It is point 2 that I do not have much of a clue about how to get done.
I have referred to the following posts, after which I was able to insert a message into the broker queue, but I was unable to integrate this with NServiceBus in our project: the NServiceBus libraries used there are an older version and many of the methods are deprecated, so using them with current versions is very troublesome - or perhaps I was doing it in an improper way.
http://www.nullreference.se/2010/12/06/using-nservicebus-and-servicebroker-net-part-2
https://github.com/jdaigle/servicebroker.net
Any help on the correct way of doing this would be invaluable.
Thanks.
I'm using the current version of NServiceBus (5), VS2013 and SQL Server 2008. I created a Database Change Listener using this tutorial, which uses SQL Server Service Broker and SqlDependency to monitor changes to a specific table. (NB: this may be deprecated in later versions of SQL Server.)
SqlDependency allows you to use a broad selection of the basic SQL functionality, although there are some restrictions that you need to be aware of. I modified the code from the tutorial slightly to provide better error information:
void NotifyOnChange(object sender, SqlNotificationEventArgs e)
{
// Check for any errors
if (#"Subscribe|Unknown".Contains(e.Type.ToString())) { throw _DisplayErrorDetails(e); }
var dependency = sender as SqlDependency;
if (dependency != null) dependency.OnChange -= NotifyOnChange;
if (OnChange != null) { OnChange(); }
}
private Exception _DisplayErrorDetails(SqlNotificationEventArgs e)
{
var message = "useful error info";
var messageInner = string.Format("Type:{0}, Source:{1}, Info:{2}", e.Type.ToString(), e.Source.ToString(), e.Info.ToString());
if (#"Subscribe".Contains(e.Type.ToString()) && #"Invalid".Contains(e.Info.ToString()))
messageInner += "\r\n\nThe subscriber says that the statement is invalid - check your SQL statement conforms to specified requirements (http://stackoverflow.com/questions/7588572/what-are-the-limitations-of-sqldependency/7588660#7588660).\n\n";
return new Exception(messageMain, new Exception(messageInner));
}
I also created a project with a "database first" Entity Framework data model to allow me to do something with the changed data.
[The relevant part of] My NServiceBus project comprises two "Run as Host" endpoints, one of which publishes event messages; the second endpoint handles the messages. The publisher implements IWantToRunWhenBusStartsAndStops, which instantiates the DBListener and passes it the SQL statement I want to run as my change monitor. The OnChange event is given an anonymous function that reads the changed data and publishes a message:
// using statements omitted for brevity
namespace Sample4.TestItemRequest
{
public partial class MyExampleSender : IWantToRunWhenBusStartsAndStops
{
// IBus is injected by NServiceBus via property injection; needed for Bus.Publish below.
public IBus Bus { get; set; }
private string NOTIFY_SQL = @"SELECT [id] FROM [dbo].[Test] WITH(NOLOCK) WHERE ISNULL([Status], 'N') = 'N'";
public void Start() { _StartListening(); }
public void Stop() { throw new NotImplementedException(); }
private void _StartListening()
{
var db = new Models.TestEntities();
// Instantiate a new DBListener with the specified connection string
var changeListener = new DatabaseChangeListener(ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString);
// Assign the code within the braces to the DBListener's onChange event
changeListener.OnChange += () =>
{
/* START OF EVENT HANDLING CODE */
//This uses LINQ against the EF data model to get the changed records
IEnumerable<Models.TestItems> _NewTestItems = DataAccessLibrary.GetInitialDataSet(db);
while (_NewTestItems.Count() > 0)
{
foreach (var qq in _NewTestItems)
{
// Do some processing, if required
var newTestItem = new NewTestStarted() { ... set properties from qq object ... };
Bus.Publish(newTestItem);
}
// Because there might be a number of new rows added, I grab them in small batches until finished.
// Probably better to use RX to do this, but this will do for proof of concept
_NewTestItems = DataAccessLibrary.GetNextDataChunk(db);
}
// Restart the listener - notifications are single-shot (see the note below).
changeListener.Start(NOTIFY_SQL);
/* END OF EVENT HANDLING CODE */
};
// Now everything has been set up.... start it running.
changeListener.Start(NOTIFY_SQL);
}
}
}
Important: firing the OnChange event causes the listener to stop monitoring; it is basically a single-shot notifier. After you have handled the event, the last thing to do is restart the DBListener. (You can see this in the line preceding the END OF EVENT HANDLING comment.)
You need to add a reference to System.Data and possibly System.Data.DataSetExtensions.
The project at the moment is still proof of concept, so I'm well aware that the above can be somewhat improved. Also bear in mind I had to strip out company specific code, so there may be bugs. Treat it as a template, rather than a working example.
I also don't know if this is the right place to put the code - that's partly why I'm on Stack Overflow today: to look for better examples of ServiceBus host code. Whatever the failings of my code, the solution works pretty effectively - so far - and meets your goals too.
Don't worry too much about the ServiceBroker side of things. Once you have set it up, per the tutorial, SQLDependency takes care of the details for you.
The ServiceBroker Transport is very old and not supported anymore, as far as I can remember.
A possible solution would be to "monitor" the interesting tables from the endpoint code using something like a SqlDependency (http://msdn.microsoft.com/en-us/library/62xk7953(v=vs.110).aspx) and then push messages into the relevant queues.
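For example, here's a minimal SqlDependency sketch of that approach (table and column names are illustrative, and the query must respect SqlDependency's restrictions):
using System.Data.SqlClient;

public class TableMonitor
{
    private readonly string _connectionString;

    public TableMonitor(string connectionString)
    {
        _connectionString = connectionString;
        // Requires Service Broker to be enabled on the database.
        SqlDependency.Start(_connectionString);
    }

    public void Subscribe()
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT [Id], [Status] FROM [dbo].[Test]", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Notifications are single-shot: resubscribe, then push a
                // message onto the relevant queue/bus from here.
                Subscribe();
            };
            connection.Open();
            // Executing the command registers the notification subscription.
            using (command.ExecuteReader()) { }
        }
    }
}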
Is there a SHA value that represents the current state of a git repository?
If git has such a SHA value, it would be updated each time the object database is updated or the references change.
In other words, the SHA value would represent the current version of the whole repository.
Here is my code for calculating a SHA for the current references. Is there a git-level API or interface for this?
private string CalcBranchesSha(bool includeTags = false)
{
var sb = new StringBuilder();
sb.Append(":HEAD");
if (_repository.Head.Tip != null)
sb.Append(_repository.Head.Tip.Sha);
sb.Append(';');
foreach (var branch in _repository.Branches.OrderBy(s => s.Name))
{
sb.Append(':');
sb.Append(branch.Name);
if (branch.Tip != null)
sb.Append(branch.Tip.Sha);
}
sb.Append(';');
if (includeTags)
{
foreach (var tag in _repository.Tags.OrderBy(s => s.Name))
{
sb.Append(':');
sb.Append(tag.Name);
if (tag.Target != null)
sb.Append(tag.Target.Sha);
}
}
return sb.ToString().CalcSha();
}
Is there a SHA value that represents the current state of a git repository?
In git parlance, the status of a repository usually refers to the paths that have differences between the working directory, the index and the current HEAD commit.
However, it doesn't look like you're after this. From what I understand, you're trying to calculate a checksum that represents the state of the current repository (ie. what your branches and tags point to).
Regarding the code, it may be improved in some ways to get a more precise checksum:
Account for all the references (besides refs/tags and refs/heads) in the repository (think refs/stash, refs/notes or the generated backup references in refs/original when one rewrites the history of a repository).
Consider disambiguating symbolic references from direct references. (ie: HEAD->master->08a4217 should lead to a different checksum than HEAD->08a4217)
Below is a modified version of the code which deals with those two points above by leveraging the Refs namespace:
private string CalculateRepositoryStateSha(IRepository repo)
{
var sb = new StringBuilder();
sb.Append(":HEAD");
sb.Append(repo.Refs.Head.TargetIdentifier);
sb.Append(';');
foreach (var reference in repo.Refs.OrderBy(r => r.CanonicalName))
{
sb.Append(':');
sb.Append(reference.CanonicalName);
sb.Append(reference.TargetIdentifier);
sb.Append(';');
}
return sb.ToString().CalcSha();
}
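Note that CalcSha() isn't part of LibGit2Sharp - it comes from your own code. For completeness, here's a minimal sketch of what such a string extension might look like, assuming SHA-1 over the UTF-8 bytes of the accumulated string:
using System.Security.Cryptography;
using System.Text;

public static class ShaExtensions
{
    // Hypothetical stand-in for the CalcSha() extension used above:
    // hashes the string with SHA-1 and returns the digest as lowercase hex.
    public static string CalcSha(this string input)
    {
        using (var sha1 = SHA1.Create())
        {
            var hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
            var sb = new StringBuilder(hash.Length * 2);
            foreach (var b in hash)
                sb.Append(b.ToString("x2"));
            return sb.ToString();
        }
    }
}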
Please keep in mind the following limits:
This doesn't consider changes to the index or the workdir (eg. the same checksum will be returned whether or not a file has been staged)
There are ways to create objects in the object database without modifying the references. Those kinds of changes won't be reflected by the code above. One possible hack-ish way to deal with this could be to also append to the StringBuilder the number of objects in the object database (ie. repo.ObjectDatabase.Count()), but that may hinder the overall performance, as this will enumerate all the objects each time the checksum is calculated.
Is there a git-level API or interface?
I don't know of any equivalent native function in git (although a similar result may be achieved through some scripting). There's nothing native in the libgit2 or LibGit2Sharp APIs either.
What is a good case/example for using ScheduledDisposable in Reactive Extensions (Rx)?
I like using CompositeDisposable and SerialDisposable, but when would you need ScheduledDisposable?
The logic of the Rx disposables is that code that performs some sort of set-up operation can return an IDisposable that anonymously contains the code that will do the associated clean-up at a later stage. If this pattern is used consistently, then you can compose together many disposables to perform a single clean-up operation without any specific knowledge of what is being cleaned up.
The problem is that if that clean-up code needs to run on a certain thread, then you need some way for a Dispose called on one thread to be marshalled to the required thread - and that's where ScheduledDisposable comes in.
The primary example is the SubscribeOn extension method which uses ScheduledDisposable to ensure that the "unsubscribe" (i.e. the Dispose) is run on the same IScheduler that the Subscribe was run on.
This is important for the FromEventPattern extension method, for example, that attaches to and detaches from event handlers which must happen on the UI thread.
Here's an example of where you might use ScheduledDisposable directly:
var frm = new SomeForm();
frm.Text = "Operation Started.";
var sd = new ScheduledDisposable(
new ControlScheduler(frm),
Disposable.Create(() =>
frm.Text = "Operation Completed."));
Scheduler.ThreadPool.Schedule(() =>
{
// Long-running task
Thread.Sleep(2000);
sd.Dispose();
});
A little contrived, but it should show a reasonable example of how you'd use ScheduledDisposable.
This question is a bit of a dupe, but I still don't understand the best way to handle flushing.
I am migrating an existing code base, which contains a lot of code like the following:
private void btnSave_Click()
{
SaveForm();
ReloadList();
}
private void SaveForm()
{
var foo = FooRepository.Get(_editingFooId);
foo.Name = txtName.Text;
FooRepository.Save(foo);
}
private void ReloadList()
{
fooRepeater.DataSource = FooRepository.LoadAll();
fooRepeater.DataBind();
}
Now that I am changing the FooRepository to NHibernate, what should I use for the FooRepository.Save method? Should the FooRepository always flush the session when the entity is saved?
I'm not sure if I understand your question, but here is what I think:
Think in "putting objects to the session" instead of "getting and storing data". NH will store all new and changed objects in the session without any special call to it.
Consider these scenarios:
Data change:
Get data from the database with any query. The entities are now in the NH session
Change entities by just changing property values
Commit the transaction. Changes are flushed and stored to the database.
Create a new object:
Call a constructor to create a new object
Store it to the database by calling "Save". It is in the session now.
You still can change the object after Save
Commit the changes. The latest state will be stored to the database.
If you work with detached entities, you also need Update or SaveOrUpdate to put detached entities to the session.
Of course you can configure NH to behave differently, but it works best if you follow this default behaviour. A sketch of the "data change" scenario above is shown below.
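Something like this - assuming an ISessionFactory is available, with Foo as the mapped entity from the question (sessionFactory and editingFooId are placeholders):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var foo = session.Get<Foo>(editingFooId); // the entity is now in the session
    foo.Name = txtName.Text;                  // just change property values - no special call needed
    tx.Commit();                              // changes are flushed and stored to the database here
}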
It doesn't matter whether or not you explicitly flush the session between modifying a Foo entity and loading all Foos from the repository. NHibernate is smart enough to auto-flush itself if you have made changes in the session that may affect the results of the query you are trying to run.
Ideally I try to use one session per "unit of work". This means one cohesive piece of work which may involve several smaller steps. If you feel that you do not have a seam in your architecture where you can achieve this, then managing the session inside the repository will also work. Just be aware that you are missing out on some of the power that NHibernate provides you.
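To illustrate the auto-flush point, a sketch (hypothetical Foo entity; session.Query<T>() comes from NHibernate.Linq, ToList() from System.Linq):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var foo = session.Get<Foo>(_editingFooId);
    foo.Name = "changed";                     // Foo is now dirty in the session
    var all = session.Query<Foo>().ToList();  // NHibernate auto-flushes before this query, so it sees the change
    tx.Commit();
}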
I'd vote up Stefan Moser's answer if I could - I'm still getting to grips with NH myself, but I think it's nice to be able to write code like this:
private void SaveForm()
{
using (var unitofwork = UnitOfWork.Start())
{
var foo = FooRepository.Get(_editingFooId);
var bar = BarRepository.Get(_barId);
foo.Name = txtName.Text;
bar.SomeOtherProperty = txtBlah.Text;
FooRepository.Save(foo);
BarRepository.Save(bar);
UnitOfWork.CommitChanges();
}
}
This way, either the whole action succeeds or it fails and rolls back, keeping flushing/transaction management outside of the repositories.