"Real" Object References in Distributed Cache? - distributed-cache

I'm personally committed to .net distributed caching solutions, but I think this question is interesting across all platforms.
Is there a distributed caching solution (or a generic strategy) that allows you to store objects in the cache while maintaining the integrity of the references between them?
To exemplify - suppose I have an object Foo foo that references an object Bar bar, and also an object Foo foo2 that references that same Bar bar. If I load foo into the cache, a copy of bar is stored along with it. If I also load foo2 into the cache, a separate copy of bar is stored along with that. If I change foo.bar in the cache, the change does not impact foo2.bar :(
Is there an existing distributed cache solution that will enable me to load foo, foo2 and bar into the cache while maintaining the foo.bar and foo2.bar references?

First and foremost
I do not know of any distributed system, and I do not pretend to build one. This post explains how you can simulate this behavior with .NET and C# using the IObjectReference interface with serializable objects.
Now, let's go on with the show
I do not know of such a distributed system, but you can somewhat easily achieve this with .NET using the IObjectReference interface. Your implementation of ISerializable.GetObjectData would need to call SerializationInfo.SetType to point out a proxy class that implements IObjectReference, and would be able (with help from data provided by your GetObjectData method) to get a reference to the real object that should be used.
Example code:
[Serializable]
internal sealed class SerializationProxy<TOwner, TKey> : ISerializable, IObjectReference {
private const string KeyName = "Key";
private const string InstantiatorName = "Instantiator";
private static readonly Type thisType = typeof(SerializationProxy<TOwner, TKey>);
private static readonly Type keyType = typeof(TKey);
private static readonly Type instantiatorType = typeof(Func<TKey, TOwner>);
private readonly Func<TKey, TOwner> _instantiator;
private readonly TKey _key;
private SerializationProxy() {
}
private SerializationProxy(SerializationInfo info, StreamingContext context) {
if (info == null) throw new ArgumentNullException("info");
_key = (TKey)info.GetValue(KeyName, keyType);
_instantiator = (Func<TKey, TOwner>)info.GetValue(InstantiatorName, instantiatorType);
}
void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) {
throw new NotSupportedException("This type should never be serialized.");
}
object IObjectReference.GetRealObject(StreamingContext context) {
return _instantiator(_key);
}
internal static void PrepareSerialization(SerializationInfo info, TKey key, Func<TKey, TOwner> instantiator) {
if (info == null) throw new ArgumentNullException("info");
if (instantiator == null) throw new ArgumentNullException("instantiator");
info.SetType(thisType);
info.AddValue(KeyName, key, keyType);
info.AddValue(InstantiatorName, instantiator, instantiatorType);
}
}
This code would be called with SerializationProxy.PrepareSerialization(info, myKey, key => LoadedInstances.GetById(key)) from your GetObjectData method, and your LoadedInstances.GetById should return the instance from a Dictionary<TKey, WeakReference> or load it from the cache/database if it isn't already loaded.
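To tie this back to the question, here is a minimal sketch of how a Bar that serializes itself through the proxy lets two deserialized Foo instances share the same Bar. The Foo and Bar implementations here are illustrative (they reuse the IDomainObject and LoadedInstances helpers from the edit below), not part of the proxy code above:
[Serializable]
public class Bar : IDomainObject, ISerializable {
    public Guid Id { get; private set; }
    public string Name { get; set; }
    public Bar() {
        Id = Guid.NewGuid();
        // Register in the weak-reference registry shown in the edit below.
        LoadedInstances<Bar>.Set(this);
    }
    public void GetObjectData(SerializationInfo info, StreamingContext context) {
        // Only a key and a factory are written; the proxy's GetRealObject
        // resolves them back to the single live Bar instance on deserialization.
        SerializationProxy<Bar, Guid>.PrepareSerialization(info, Id, GetById);
    }
    public static Bar GetById(Guid id) {
        return LoadedInstances<Bar>.Get(id); // or load from the cache/database
    }
}
[Serializable]
public class Foo {
    // Serialized as the proxy, deserialized as the shared Bar instance.
    public Bar Bar { get; set; }
}
With this in place, deserializing foo and foo2 yields Foo instances whose Bar properties reference the same object, so a change made through one is visible through the other.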
EDIT:
I've written some example code to show what I mean.
public static class Program {
public static void Main() {
// Create an item and serialize it.
// Pretend that the bytes are stored in some magical
// domain where everyone lives happily ever after.
var item = new Item { Name = "Bleh" };
var bytes = Serialize(item);
{
// Deserialize those bytes back into the cruel world.
var loadedItem1 = Deserialize<Item>(bytes);
var loadedItem2 = Deserialize<Item>(bytes);
// This should work since we've deserialized identical
// data twice.
Debug.Assert(loadedItem1.Id == loadedItem2.Id);
Debug.Assert(loadedItem1.Name == loadedItem2.Name);
// Notice that both variables refer to the same object.
Debug.Assert(ReferenceEquals(loadedItem1, loadedItem2));
loadedItem1.Name = "Bluh";
Debug.Assert(loadedItem1.Name == loadedItem2.Name);
}
{
// Deserialize those bytes back into the cruel world. (Once again.)
var loadedItem1 = Deserialize<Item>(bytes);
// Notice that we got the same item that we messed
// around with earlier.
Debug.Assert(loadedItem1.Name == "Bluh");
// Once again, force the peaceful object to hide its
// identity, and take on a fake name.
loadedItem1.Name = "Blargh";
var loadedItem2 = Deserialize<Item>(bytes);
Debug.Assert(loadedItem1.Name == loadedItem2.Name);
}
}
#region Serialization helpers
private static readonly IFormatter _formatter
= new BinaryFormatter();
public static byte[] Serialize(ISerializable item) {
using (var stream = new MemoryStream()) {
_formatter.Serialize(stream, item);
return stream.ToArray();
}
}
public static T Deserialize<T>(Byte[] bytes) {
using (var stream = new MemoryStream(bytes)) {
return (T)_formatter.Deserialize(stream);
}
}
#endregion
}
// Supercalifragilisticexpialidocious interface.
public interface IDomainObject {
Guid Id { get; }
}
// Holds all loaded instances using weak references, allowing
// the almighty garbage collector to grab our stuff at any time.
// I have no real data to lean on here, but I _presume_ that this
// won't be overly evil since we use weak references.
public static class LoadedInstances<T>
where T : class, IDomainObject {
private static readonly Dictionary<Guid, WeakReference> _items
= new Dictionary<Guid, WeakReference>();
public static void Set(T item) {
var itemId = item.Id;
if (_items.ContainsKey(itemId))
_items.Remove(itemId);
_items.Add(itemId, new WeakReference(item));
}
public static T Get(Guid id) {
if (_items.ContainsKey(id)) {
var itemRef = _items[id];
return (T)itemRef.Target;
}
return null;
}
}
[DebuggerDisplay("{Id} {Name}")]
[Serializable]
public class Item : IDomainObject, ISerializable {
public Guid Id { get; private set; }
public String Name { get; set; }
// This constructor can be avoided if you have a
// static Create method that creates and saves new items.
public Item() {
Id = Guid.NewGuid();
LoadedInstances<Item>.Set(this);
}
#region ISerializable Members
public void GetObjectData(SerializationInfo info, StreamingContext context) {
// We're asking SerializationProxy to call GetById(this.Id)
// when we're deserialized. Notice that we have no
// deserialization constructor. FxCop will hate us for that.
SerializationProxy<Item, Guid>.PrepareSerialization(info, Id, GetById);
}
#endregion
public static Item GetById(Guid id) {
var alreadyLoaded = LoadedInstances<Item>.Get(id);
if (alreadyLoaded != null)
return alreadyLoaded;
// TODO: Load from storage container (database, cache).
// TODO: The item we load should be passed to LoadedInstances<Item>.Set
return null;
}
}

Related

.NET Core 3.1 Complex Session Wrapper Not Working

I am trying to create a complex session wrapper in .NET Core 3.1. I ran into an issue where my variables are not being set. This is the way I set up the session wrapper class.
public class SessionWrapper : ISessionWrapper
{
private static IHttpContextAccessor context;
public SessionWrapper(IHttpContextAccessor _context)
{
context = _context;
}
public static Course Course
{
get
{
var key = context.HttpContext.Session.GetString("course");
if (key == null)
{
return default;
}
else
{
return JsonConvert.DeserializeObject<Course>(key);
}
}
set
{
if(value != null)
{
context.HttpContext.Session.SetString("course", JsonConvert.SerializeObject(value));
}
}
}
}
I configured my services to use session and the session wrapper.
services.AddDistributedMemoryCache();
services.AddSession();
services.AddHttpContextAccessor();
services.AddScoped<ISessionWrapper, SessionWrapper>();
I configured the pipeline to use session
app.UseSession();
In my controller, I am initializing course and setting the session wrapper. Then, I am setting the course id to 4. It's not complaining, but the course id is not being set. It's always null. I've been looking at this for so long and am getting frustrated. What am I missing here?
Course myCourse = new Course();
SessionWrapper.Course = myCourse;
SessionWrapper.Course.Id = "4";
I feel like your wrapper in itself isn't really the best approach to do this. Note that your getter deserializes a fresh Course from session on every access, so SessionWrapper.Course.Id = "4" only mutates a temporary copy that is never written back. A self-aware subclass of Course that has the 'know how' to store itself in Session seems more logical to me. That way you free your controller(s) from the responsibility for managing the persistence.
public abstract class Course
{
public abstract int Id { get; set; }
}
public class SessionCourse : Course
{
private int _id;
public override int Id
{
get => _id;
set { _id = value; UpdateSession(); }
}
// The GetCourse method is a factory for creating the SessionCourse objects
// and providing it with a ISession object so they can store themselves.
public static Course GetCourse(IServiceProvider services)
{
ISession session = services.GetRequiredService<IHttpContextAccessor>()?.HttpContext.Session;
SessionCourse course = session?.GetJson<SessionCourse>("Course") ?? new SessionCourse();
course.Session = session;
return course;
}
[JsonIgnore]
private ISession Session { get; set; }
private void UpdateSession() {
Session.SetJson("Course", this);
}
}
Now the trick is to satisfy requests for the Course object with the SessionCourse object that will store itself in session. You can do that by adding a scoped service with a lambda expression for the course object. The result is that requests for the Course service will return the SessionCourse object.
services.AddScoped<Course>(sp => SessionCourse.GetCourse(sp));
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
So the benefit of creating this kind of service is that it allows you to simplify the controllers where Course objects are used.
public class CourseController : Controller
{
    private Course course;
    public CourseController(Course courseService)
    {
        course = courseService;
    }
    public void SetCourseId()
    {
        course.Id = 4;
    }
}
SessionExtensions.cs defines extension methods for adding objects to the session.
public static class SessionExtensions {
public static void SetJson(this ISession session, string key, object value) {
session.SetString(key, JsonConvert.SerializeObject(value));
}
public static T GetJson<T>(this ISession session, string key) {
var sessionData = session.GetString(key);
return sessionData == null ? default(T) : JsonConvert.DeserializeObject<T>(sessionData);
}
}

How organize and test this code?

I have a conceptual doubt about how to organize and test code like the following, where a call to an auxiliary method is the first instruction of all the public methods of the class. My idea is to make the code clean and testable.
The code is an example that tries to illustrate this with a "cache" class. The class has an optional prefix that, if set, is applied to all keys in the cache.
import java.util.HashMap;
public class Cache {
private HashMap<String, Integer> inMemoryCache;
private String prefix;
public Cache() {
this.inMemoryCache = new HashMap<String, Integer>();
prefix = null;
}
public void setPrefix(String prefix) {
this.prefix = prefix;
}
public int getValue(String key) throws NullPointerException {
String prefixedKey = applyPrefixOrDefault(key);
return inMemoryCache.get(prefixedKey);
}
public void setValue(String key, int value) {
String prefixedKey = applyPrefixOrDefault(key);
inMemoryCache.put(prefixedKey, value);
}
public boolean isCached(String key) {
String prefixedKey = applyPrefixOrDefault(key);
return inMemoryCache.containsKey(prefixedKey);
}
private String applyPrefixOrDefault(String key) {
if (prefix == null) {
return key;
} else {
return prefix + key;
}
}
public static void main (String[] arg) {
Cache cache = new Cache();
cache.setPrefix("global:");
cache.setValue("id", 4);
int value = cache.getValue("id");
System.out.println(value);
}
}
This code poses two questions to me:
If I had many methods accessing the inner hash table, would it be right to separate the behavior of the cache into one class and the behavior of the prefix into another?
What would be the cleanest way to test this? Testing getValue, setValue and isCached is simple if we do not consider the prefix. With the prefix we need to test two things: the correct internal behavior of the cache, and also that all methods call applyPrefixOrDefault before accessing the data.
This is a common use case and I'm sure there must be some design pattern to organize this. Any idea?
In my opinion, what we're missing here is a constructor that lets us set the state of the cache. So I would add one as follows:
public Cache() {
this(null, new HashMap<String, Integer>());
}
public Cache(String prefix, Map<String, Integer> cache) {
this.prefix = prefix;
this.inMemoryCache = cache;
}
With this new constructor, you should be able to write test cases for every possible cache state. I would also change the visibility of the applyPrefixOrDefault method to protected or package-private so that test code can access it. For instance, to test the getValue method, I would write:
public class EmptyCacheTests {
private final Map<String, Integer> memory;
private final String prefix;
private final Cache cache;
public EmptyCacheTests() {
this.memory = new HashMap<String, Integer>();
this.prefix = "foo";
this.cache = new Cache(prefix, memory);
}
public void testGetValue() {
String key = this.cache.applyPrefixOrDefault("bar");
this.memory.put(key, 50);
int result = this.cache.getValue("bar");
assertEquals(50, result, "The value retrieved is wrong!");
}
}
The point here is to allow the test to set up the internal state of the cache, so that we can then test against many different states.

Implementing ICloneable with protobuf-net

Can you please explain why the following piece of code fails to work?
static void Main(string[] args)
{
var simpleObject = new SimpleObjectDTO { Id = 1, Name = "Jacob" };
const string format = "{2} object properties are: Id {0} Name {1}";
Console.WriteLine(format, simpleObject.Id, simpleObject.Name, "Original");
var clone = simpleObject.Clone() as SimpleObjectDTO;
// ReSharper disable PossibleNullReferenceException
Console.WriteLine(format, clone.Id, clone.Name, "Clone");
// ReSharper restore PossibleNullReferenceException
Console.ReadLine();
}
where
[ProtoContract]
public class SimpleObjectDTO : ICloneable
{
[ProtoMember(1)]
public int Id { get; set; }
[ProtoMember(2)]
public string Name { get; set; }
public object Clone()
{
using (var stream = new MemoryStream())
{
Serializer.Serialize(stream, this);
stream.Flush();
var clone = Serializer.Deserialize<SimpleObjectDTO>(stream);
return clone;
}
}
}
The code runs just fine, but the deserialized object has 0 and an empty string as its property values.
Upd.:
If I serialize into a binary file and then open it for reading, thus creating a new stream,
the code works. Is there any possibility of avoiding intermediate binary files and using only one stream for both serializing and deserializing?
The problem is that the stream's position needs to be reset to zero before deserializing.
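A sketch of the Clone method with the stream rewound before deserializing (the rest is unchanged from the question's code):
public object Clone()
{
    using (var stream = new MemoryStream())
    {
        Serializer.Serialize(stream, this);
        stream.Flush();
        // Rewind so Deserialize reads from the beginning of the buffer
        // instead of from the current position (the end).
        stream.Position = 0;
        return Serializer.Deserialize<SimpleObjectDTO>(stream);
    }
}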
As an alternative:
return Serializer.DeepClone(this);
Figured out the issue, forgot to reset the memory stream's position

Setting internal properties in composite WF4 Activities at design time

I want to create a composite Windows Workflow Activity (under .NET 4) that contains a predefined ReceiveAndSendReply Activity. Some of the properties are predefined, but others (particularly ServiceContractName) need to be set in the designer.
I could implement this as an Activity Template (the same way ReceiveAndSendReply is implemented), but would rather not. If I later change the template, I'd have to update all previously created workflows manually. A template would also permit other developers to change properties that should be fixed.
Is there a way to do this from a Xaml Activity? I have not found a way to assign an Argument value to a property of an embedded Activity. If not, what technique would you suggest?
I haven't done this using a composite XAML activity and am getting some errors when I try, but doing it through a NativeActivity is no problem. See the example code below.
public class MyReceiveAndSendReply : NativeActivity
{
private Receive _receive;
private SendReply _sendReply;
public string ServiceContractName { get; set; }
public string OperationName { get; set; }
protected override bool CanInduceIdle
{
get { return true; }
}
protected override void CacheMetadata(NativeActivityMetadata metadata)
{
_receive = _receive ?? new Receive();
_sendReply = _sendReply ?? new SendReply();
_receive.CanCreateInstance = true;
metadata.AddImplementationChild(_receive);
metadata.AddImplementationChild(_sendReply);
_receive.ServiceContractName = ServiceContractName;
_receive.OperationName = OperationName;
var args = new ReceiveParametersContent();
args.Parameters["firstName"] = new OutArgument<string>();
_receive.Content = args;
_sendReply.Request = _receive;
var results = new SendParametersContent();
results.Parameters["greeting"] = new InArgument<string>("Hello there");
_sendReply.Content = results;
base.CacheMetadata(metadata);
}
protected override void Execute(NativeActivityContext context)
{
context.ScheduleActivity(_receive, ReceiveCompleted);
}
private void ReceiveCompleted(NativeActivityContext context, ActivityInstance completedInstance)
{
context.ScheduleActivity(_sendReply);
}
}

Where to store data for current WCF call? Is ThreadStatic safe?

While my service executes, many classes will need to access User.Current (that is my own User class). Can I safely store _currentUser in a [ThreadStatic] variable? Does WCF reuse its threads? If that is the case, when will it clean-up the ThreadStatic data? If using ThreadStatic is not safe, where should I put that data? Is there a place inside OperationContext.Current where I can store that kind of data?
Edit 12/14/2009: I can assert that using a ThreadStatic variable is not safe. WCF threads are in a thread pool, and ThreadStatic variables are never reinitialized.
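A minimal illustration of the hazard (the holder class is an illustrative name, not from the original post): because pooled threads are reused, a value left in a [ThreadStatic] field by one operation can leak into a later, unrelated operation.
public static class CurrentUserHolder
{
    [ThreadStatic]
    private static User _currentUser;
    public static User Current
    {
        get { return _currentUser; }
        set { _currentUser = value; }
    }
}
// If operation A sets CurrentUserHolder.Current and never clears it,
// a later operation B served by the same pooled thread still sees
// operation A's user unless it overwrites the value first.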
There's a blog post which suggests implementing an IExtension<T>. You may also take a look at this discussion.
Here's a suggested implementation:
public class WcfOperationContext : IExtension<OperationContext>
{
private readonly IDictionary<string, object> items;
private WcfOperationContext()
{
items = new Dictionary<string, object>();
}
public IDictionary<string, object> Items
{
get { return items; }
}
public static WcfOperationContext Current
{
get
{
WcfOperationContext context = OperationContext.Current.Extensions.Find<WcfOperationContext>();
if (context == null)
{
context = new WcfOperationContext();
OperationContext.Current.Extensions.Add(context);
}
return context;
}
}
public void Attach(OperationContext owner) { }
public void Detach(OperationContext owner) { }
}
Which you could use like this:
WcfOperationContext.Current.Items["user"] = _currentUser;
var user = WcfOperationContext.Current.Items["user"] as MyUser;
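If you want to avoid the magic string and the cast at every call site, a thin typed accessor on top of it is straightforward (a sketch; CurrentUserContext is an illustrative name, not part of the original suggestion):
public static class CurrentUserContext
{
    private const string UserKey = "CurrentUser";
    public static MyUser User
    {
        get
        {
            object value;
            // Returns null when nothing has been stored for this operation.
            return WcfOperationContext.Current.Items.TryGetValue(UserKey, out value)
                ? (MyUser)value
                : null;
        }
        set { WcfOperationContext.Current.Items[UserKey] = value; }
    }
}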
An alternative solution, without adding an extra derived class:
OperationContext operationContext = OperationContext.Current;
operationContext.IncomingMessageProperties.Add("SessionKey", "ABCDEFG");
To get the value:
var sessionKey = operationContext.IncomingMessageProperties["SessionKey"];
That's it.
I found that we lose the data/current context when we make async calls with multiple thread switches. To handle such a scenario, you can try using CallContext. It's meant for .NET Remoting, but it should also work in this scenario.
Set the data in the CallContext:
DataObject data = new DataObject() { RequestId = "1234" };
CallContext.SetData("DataSet", data);
Retrieving shared data from the CallContext:
var data = CallContext.GetData("DataSet") as DataObject;
// Shared data object has to implement ILogicalThreadAffinative
public class DataObject : ILogicalThreadAffinative
{
public string Message { get; set; }
public string Status { get; set; }
}
Why ILogicalThreadAffinative?
When a remote method call is made to an object in another AppDomain, the CallContext class generates a LogicalCallContext that travels along with the call to the remote location.
Only objects that expose the ILogicalThreadAffinative interface and are stored in the CallContext are propagated outside the AppDomain.