Is there any way to initialize a C# settings model on startup from JSON settings stored in the database in .NET Core? - asp.net-core

I have created a web application that uses .NET Core 3.0 on the server side and Postgres as the backend. The database has a table called settings in which I store settings as JSON, using the JSON data type available in Postgres.
Now I want to initialize my settings model in the application from that JSON data. I want to initialize the model at application startup and use it throughout the application wherever needed, to avoid database round trips for fetching the settings on demand.
Is there any way to achieve this?
Following is the JSON data in the database:
{
    "CurrencyCode": "USD",
    "CurrencySymbol": "$"
}
Here is my C# model in the application:
public class SettingsModel
{
    public string CurrencySymbol { get; set; }
    public string CurrencyCode { get; set; }
}
I was thinking of achieving this in the following way, but I still don't have a clear idea of what to use or how:
Initializing the model using a singleton service
services.AddSingleton<SettingsModel, GetJsonSettingsFromDatabase()>();
GetJsonSettingsFromDatabase() will return the SettingsModel after deserializing the settings from the DB to a SettingsModel.
I also want a function that contains the logic for updating the SettingsModel, so that I can invoke it when the settings table in the database changes.

Initializing the model as a singleton service should use the factory delegate overload.
For example:
services.AddSingleton<SettingsModel>(sp => GetJsonSettingsFromDatabase());
assuming
GetJsonSettingsFromDatabase() returns the SettingsModel after deserializing the settings from the DB to a SettingsModel.
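As a rough sketch of what that method could look like (assuming Npgsql for data access and System.Text.Json for deserialization; the single-row settings table and its data column are illustrative guesses at the poster's schema):

using System.Text.Json;
using Npgsql;

public static SettingsModel GetJsonSettingsFromDatabase()
{
    // Connection string inlined for brevity; normally it would come from configuration.
    const string connectionString = "Host=localhost;Database=mydb;Username=me;Password=secret";

    using var conn = new NpgsqlConnection(connectionString);
    conn.Open();

    // Fetch the JSON document from the assumed single-row settings table.
    using var cmd = new NpgsqlCommand("SELECT data FROM settings LIMIT 1", conn);
    var json = (string)cmd.ExecuteScalar();

    // Deserialize it into the strongly typed settings model.
    return JsonSerializer.Deserialize<SettingsModel>(json);
}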
The same could have been done with an instance
SettingsModel settings = GetJsonSettingsFromDatabase();
services.AddSingleton(settings);
As for
I also want a function that contains the logic for updating the SettingsModel, so that I can invoke it when the settings table in the database changes.
Then do not register the model itself as a singleton. Consider caching the settings instance to avoid round trips, and load a fresh instance only when needed.
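One way to do that (a minimal sketch; the SettingsProvider name is illustrative, and it reuses the GetJsonSettingsFromDatabase method assumed above) is to register a small provider as the singleton instead, which caches the current model and exposes a reload hook:

public class SettingsProvider
{
    // Reference assignment is atomic, so readers always see a complete model.
    private SettingsModel _current;

    public SettingsProvider()
    {
        Reload(); // load once at startup
    }

    // The cached settings, served without a database round trip.
    public SettingsModel Current => _current;

    // Invoke this whenever the settings table changes in the database.
    public void Reload()
    {
        _current = GetJsonSettingsFromDatabase();
    }
}

Register it with services.AddSingleton<SettingsProvider>();, inject it wherever settings are needed, read provider.Current, and call provider.Reload() from whatever code path updates the settings table.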

Related

Using RIA Domain Services how to refresh client generated code after making changes to EF

I am working on an N-tier Silverlight 4.0 solution using WCF RIA Services, Domain Services and Entity Framework.
I have created the Entity Framework model and Domain Services in my DAL project.
Having ticked 'Enable Client Code Generation' whilst creating the Domain Service, everything works fine. The generated code is created in the client application, as I can see the .g.cs file.
I decided to add a stored procedure to the EF model by adding a function and creating a complex type.
Then I proceeded to add this code to my Domain Service (MyDomainService.cs) class as shown below:
public IQueryable GetJournalItemList()
{
    return this.ObjectContext.ExecuteFunction("GetJournalList", null).AsQueryable();
}
The problem I'm facing now is that when I build my solution I cannot see the new code in the client generated code class (.g.cs) in Silverlight client application. The proxy has no reference to the new GetJournalItemList which references the newly added stored procedure.
So here's my question: how to force a refresh of the client generated code so changes to the Domain Service class can be shown?
Thank You
You must return a strongly typed result, i.e.:
public IQueryable<JournalItemList> GetJournalItemList()
{
    return this.ObjectContext
        .ExecuteFunction<JournalItemList>("GetJournalList")
        .AsQueryable();
}
In addition, if this has not already been done, JournalItemList must have a key defined. You can do this using a MetadataType attribute on a custom partial class. When you generated the Domain Service, a .metadata.cs file would have been created; there should be examples in it you can use.
[MetadataType(typeof(JournalItemList.Metadata))]
public partial class JournalItemList
{
    public class Metadata
    {
        // Assuming that JournalItemList.JournalItemId exists
        [Key]
        public int JournalItemId { get; set; }
    }
}

Writing a Custom Hive Provider using objects as datasource

I'm trying to create a hive provider that can work against some objects.
An object may look something like this:
public class MyContent
{
    public System.Collections.Generic.List<ContentExample> Content { get; set; }
}
public class ContentExample
{
    public string Title { get; set; }
    public string Text { get; set; }
}
public class MyFiles
{
    public System.Collections.Generic.List<FileExample> Files { get; set; }
}
public class FileExample
{
    public System.IO.FileInfo File { get; set; }
}
I've downloaded and checked the two Hive providers from the Visual Studio Gallery (Umbraco 5 Hive Provider and Umbraco 5 Simple Hive Provider), but the lack of documentation is a bit disturbing. I also downloaded some of the other example hives, like the Wordpress hive provider, but that one is rather different from the ones in the Visual Studio Gallery.
The Idea
I'm used to working with stuff like ObjectDataSource; the example above could be complemented with full CRUD if required.
Now, I assume one Hive provider would be able to serve different parts of Umbraco with content (right?). Just set up a new Repository and go? I have no clue how to connect all the parts, or even how to get the data into the provider yet.
Any help in how I could bring all pieces together?
Thanks
The first step is to take a step back and evaluate your business requirements. Will you allow users to update the information with forms on the frontend? Do you need a tree editor for the content in the backoffice? Do you need to work with data outside of the built-in ORM?
If the answer to these is no, a hive provider is overkill. Evaluate solutions using either simple surface controllers or just a custom document type. Umbraco 5 is a full EAV/CR system, so unlike some CMS products, you'll be able to represent any RDBMS structure you can imagine.
ContentExample could be represented by a document type called 'Article', which has properties Title and Text. Just by defining this document type we're instantly given add and edit forms for our back office users in our content section. We can even restrict which nodes are able to have children of type 'Article', e.g. News.
In the same way, an upload control is a field type that allows you to attach files to your document.
So what's the point of a custom hive provider?
The goal of a custom hive provider is to unify CRUD actions for data access layers.
As a result, data can be stored in the baked-in NHibernate ORM, custom tables, RSS feeds, or even flat files, while still using a common interface to retrieve and update it. If this sounds like what you're aiming for, read on.
Going back to the business requirements: specifically, where do you want to actually store the data? Given that you have some fields and properties related to flat file storage, let's say that one TypedEntity (a model) is equivalent to one file, and write some pseudocode.
The first step is, as you say, to 'get the data into the repository'. This involves going back to that VS template and filling in the 'not implemented' methods with your logic for storing and retrieving data:
protected override void PerformAddOrUpdate(TypedEntity entity)
{
    // Step 1: serialize the typed entity to XML.
    // Step 2: write the file to disk, naming it after the hive id
    //         so that we can pull it back later.
}
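Stripped of the Hive base types, those two steps are plain .NET serialization and file I/O. A standalone sketch of that storage logic (the XmlFileStore name, folder layout, and file naming are illustrative assumptions, not Hive APIs):

using System.IO;
using System.Xml.Serialization;

public static class XmlFileStore
{
    // Persist any XML-serializable object under the given id.
    public static void Save<T>(string folder, string id, T item)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var stream = File.Create(Path.Combine(folder, id + ".xml")))
        {
            serializer.Serialize(stream, item);
        }
    }

    // Pull the object back later by the same id.
    public static T Load<T>(string folder, string id)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var stream = File.OpenRead(Path.Combine(folder, id + ".xml")))
        {
            return (T)serializer.Deserialize(stream);
        }
    }
}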
Once you've written the data access layer, or DAL, you can hook it up in the hive config, giving it a URL to match, e.g. rather than matching content:\\, yours might match on file-manager:\\.
We can allow our backoffice users to add new entities (indirectly, new files) by writing a custom tree, and we can display the results to our front-end users via macros.

Subtype of shared data contract

Following advice from people on the internet about service references, I have now gotten rid of them and split the service/data contracts into a common assembly accessible by both the server and the client. Overall this seems to work really well.
However I'm running into problems when trying to use custom objects, or rather custom subtypes, in the service. Initially I wanted to define only interfaces in the common assembly as the contract for the data. I quickly learned that this won't work though, because the client needs a concrete class to instantiate objects when receiving them from the service. So I used a simple class instead, basically like this:
// (defined in the common assembly)
public class TestObject
{
    public string Value { get; set; }
}
Then in the service contract (interface), I have a method that returns such an object.
Now if I simply create such an object in the service implementation and return it, it works just fine. However I want to define a subtype of it in the service (or the underlying business logic), that defines a few more things (for example methods for database access, or just some methods that work on the objects).
So for simplicity, the subtype looks like this:
// (defined on the server)
public class DbTestObject : TestObject
{
    public DbTestObject(string val)
    {
        Value = val;
    }
}
And in the service, instead of creating a TestObject, I create the subtype and return it:
public TestObject GetTestObject()
{
    return new DbTestObject("foobar");
}
If I run this now, and make the client call GetTestObject, then I immediately get a CommunicationException with the following error text: “The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:09:59.9380000'.”
I have already found out that the reason for this is that the client does not know how to deserialize the DbTestObject. One solution would be to decorate the base type with the KnownTypeAttribute to make it aware of the subtype. But that would require the subtype to be moved into the common assembly, which is of course something I want to avoid, as I want the logic separated from the client.
Is there a way to tell the client to only use the TestObject type for deserialization; or would the solution for this be to use data transfer objects anyway?
As @Sixto Saez has pointed out, inheritance and WCF don't tend to go together very well. The reason is that inheritance belongs very much to the OO world, not the message passing world.
Having said that, if you are in control of both ends of the service, KnownType permits you to escape the constraints of message passing and leverage the benefits of inheritance. To avoid taking the dependency you can utilise the ability of the KnownTypeAttribute to take a method name, rather than a type parameter. This allows you to dynamically specify the known types at run time.
E.g.
[KnownType("GetKnownTestObjects")]
[DataContract]
public class TestObject
{
    [DataMember]
    public string Value { get; set; }

    public static IEnumerable<Type> GetKnownTestObjects()
    {
        return Registry.GetKnown<TestObject>();
    }
}
Using this technique, you can effectively invert the dependency.
Registry is a simple class that allows other assemblies to register types at run time as being subtypes of the specified base class. This task can be performed when the application bootstraps itself and, if you wish, can be done by reflecting across the types in the assembly(ies) containing your subtypes.
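A minimal sketch of what such a Registry might look like (hypothetical; the answer doesn't prescribe an implementation):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public static class Registry
{
    // Maps a base type to the subtypes registered for it. Registration is
    // expected to happen during single-threaded application bootstrap.
    private static readonly ConcurrentDictionary<Type, List<Type>> Map =
        new ConcurrentDictionary<Type, List<Type>>();

    public static void Register<TBase>(Type subtype)
    {
        Map.GetOrAdd(typeof(TBase), _ => new List<Type>()).Add(subtype);
    }

    public static IEnumerable<Type> GetKnown<TBase>()
    {
        List<Type> types;
        return Map.TryGetValue(typeof(TBase), out types)
            ? types
            : Enumerable.Empty<Type>();
    }
}

The server would then call, for instance, Registry.Register<TestObject>(typeof(DbTestObject)); at startup, without the TestObject assembly ever referencing the subtype's assembly.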
This achieves your goal of allowing subtypes to be handled correctly without the TestObject assembly needing to take a reference on the subtype assembly(ies).
I have used this technique successfully in 'closed loop' applications where both the client and server are controlled. You should note that this technique is a little slower because calls to your GetKnownTestObjects method have to be made repeatedly at both ends while serialising/deserialising. However, if you're prepared to live with this slight downside it is a fairly clean way of providing generic web services using WCF. It also eliminates the need for all those 'KnownTypeAttributes' specifying actual types.

Mapping properties between two databases using Fluent NHibernate & S#arp Architecture

I have a scenario where I have a new application built using NHibernate and S#arp Architecture. This application is being integrated into a legacy application that does not use NHibernate. There will be separate databases for the legacy and new applications. I am able to successfully read objects from both databases using two S#arp NHibernate factory keys (as detailed here). This all works great. My problem arises when I want to map a property from a class in my new application to an object in the legacy database. Consider the following scenario:
public class NewUser
{
    public virtual LegacyUser LegacyUser { get; set; }
}
public class LegacyUser
{
    public virtual string UserName { get; set; }
}
I can read my NewUser objects, but when I attempt to reference the LegacyUser property I get an error:
System.Data.SqlClient.SqlException: Invalid object name 'NewSchema.LegacyUser'.
I suspect this is happening because the session that is retrieving the NewUser objects doesn’t know anything about the session that is required to read the old user objects. The session factory key is usually specified by the S#arp repository.
I've tried specifying the schema in the mapping, but this doesn't work either.
mapping.Schema("OldSchema");
Question: Is there any way to specify in the Fluent NHibernate mapping that when the NewUser object attempts to read the LegacyUser object it needs to use a different session factory? Or do I have to manually read the objects from the legacy database and insert them into my new classes?
Thanks in advance!

Objects returned from WCF service have no properties, only 'ExtensionData'

I am not new to WCF web services, but it has been a couple of years since the last time I used one. I am certain that the last time I used a WCF service you could determine the type of object returned from a service call when developing the code, e.g.:
MyService.Models.ServiceSideObjects.User user = myServiceClient.GetUser();
You were then free to use the 'user' object client-side. However, now it seems as if the WCF service will not return anything more than objects containing basic value types (string, int, etc.). So far I have remedied this by defining transfer objects which contain only these basic value types, and having the service map the complex 'User' object's properties to simple strings and ints in the transfer object.
This becomes a real pain when, for example, you have custom typed objects containing more complex objects, such as my Ticket object:
public class Ticket
{
    public Agent TicketAgent { get; set; }
    public Client Owner { get; set; }
    public PendingReason TicketPendingReason { get; set; }
}
Simply mapping this object graph to a single transfer class with a huge list of inter-related system-typed properties gives a very 'dirty' client-side business model. Am I wrong in thinking that I SHOULD be able to just receive my Ticket object from a service method call and deal with it client-side in the same state it was in server-side?
I realise this is probably a violation of some SOA principle or similar, but my desktop app currently consuming this service is the ONLY thing that will ever consume it. So I do not care whether many other clients would be able to manage the data types coming back from the service and therefore require some hugely normalised return object. I just want my service to get an object of type Ticket from its repository and return it to the client with all its properties intact. Currently all I get is an object with a single property, 'ExtensionData', which is unusable client-side.
Hope this makes sense, thank you for your time.
I might've missed a memo, but I think you need to decorate your model classes with DataContractAttribute and your properties with DataMemberAttribute, like so:
[DataContract( Namespace = "http://example.com" )]
public class Ticket
{
    [DataMember]
    public Agent TicketAgent { get; set; }
    [DataMember]
    public Client Owner { get; set; }
    [DataMember]
    public PendingReason TicketPendingReason { get; set; }
}
This is why you probably want to set up a DTO layer, to avoid polluting your model classes.
As for ExtensionData, it's used for forward-compatibility: http://msdn.microsoft.com/en-us/library/ms731083.aspx
I have marked Niklas's response as an answer as it has solved my issue.
While it seems you do not NEED to use [DataContract] and [DataMember] in some cases, omitting them could cause the issues I was experiencing. When simply transferring custom typed objects which, in themselves, only have simply typed properties, no attributes were needed. However, when I attempted to transfer a custom typed object which itself had collections/fields of more custom typed objects, the attributes were needed.
Thank you for your time.