Existing WCF service for XML transformation, need to integrate with MEF?

My application is a WCF service that does XML transformation. I now need to change it to integrate with MEF. What is the best way to implement MEF, or which type of architecture should I use, so that it takes the least effort and the fewest changes to the existing code?
EDIT
Explanation: I have four hotel XML transformations in the WCF service. At one end the XML has a fixed format; at the other end there is a different XML format for each new hotel, and work for another 20 hotels is coming. For this repetitive work I need a reusable and extendable architecture. I want to upgrade the existing architecture to use MEF with the future in mind, so I can handle the next 20 hotel XML transformations better.

How are you doing your Xml transformation? Is it through code, or XSLT?
If through code, I would define an IXmlTranslator that converts your xml into a common model:
public interface IXmlTranslator
{
XmlModel Translate(XElement element);
}
Where XmlModel is your common model:
public class XmlModel
{
// Properties
}
You'd need to know specifically which translator to use, so you'd need to pass in some sort of metadata; we'll define a name:
public interface INamedMetadata
{
string Name { get; }
}
So an example translator could look like:
[Export(typeof(IXmlTranslator))]
[ExportMetadata("Name", "Null")]
public class NullXmlTranslator : IXmlTranslator
{
public XmlModel Translate(XElement element)
{
return null;
}
}
MEF will take care of projecting your metadata into an instance of INamedMetadata. Next, create a service which consumes IXmlTranslators:
[Export]
public class XmlTranslatorService
{
private readonly IEnumerable<Lazy<IXmlTranslator, INamedMetadata>> _translators;
[ImportingConstructor]
public XmlTranslatorService(IEnumerable<Lazy<IXmlTranslator, INamedMetadata>> translators)
{
_translators = translators;
}
public XmlModel Translate(string name, XElement element)
{
var translator = GetTranslator(name);
if (translator == null)
throw new ArgumentException("No translator is available to translate the target xml.");
return translator.Translate(element);
}
private IXmlTranslator GetTranslator(string name)
{
var translator = _translators
.Where(t => t.Metadata.Name.Equals(name, StringComparison.InvariantCultureIgnoreCase))
.Select(t => t.Value)
.FirstOrDefault();
return translator;
}
}
I've made the enumerable of available translators part of the constructor arguments, as it defines dependencies that are required for the service to work. MEF will take care of injecting this enumerable at composition time.
What you need to do is either import an instance of the XmlTranslatorService into whatever class you want to use it from, or initialise an instance directly from your CompositionContainer, e.g.:
var service = container.GetExportedValue<XmlTranslatorService>();
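If you don't already have a composition container, a minimal setup could look roughly like this (this assumes the service and translators live in one assembly; a DirectoryCatalog or AggregateCatalog works just as well if they don't):
// Build the container once, e.g. at host start-up.
var catalog = new AssemblyCatalog(typeof(XmlTranslatorService).Assembly);
var container = new CompositionContainer(catalog);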
The only things remaining would be:
1. Creating specialised translators for each of the hotel types into the common XmlModel model class.
2. Serialisation of the XmlModel class into the target xml.
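For example, a hotel-specific translator might end up looking something like this (HotelAXmlTranslator and the element names are purely illustrative):
[Export(typeof(IXmlTranslator))]
[ExportMetadata("Name", "HotelA")]
public class HotelAXmlTranslator : IXmlTranslator
{
    public XmlModel Translate(XElement element)
    {
        // Map the hotel-specific XML onto the common model.
        return new XmlModel
        {
            // e.g. HotelName = (string)element.Element("Name"), ...
        };
    }
}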
Hope that points you in the right direction?

Related

What is the difference between Provider and Resolver

I often need a class/service that will give me some data by fetching it from a DB, transforming an existing data structure, or doing both internally, but I sometimes have difficulty naming them properly.
I am currently working with Sylius, and they are using classes/services with suffixes such as Checker, Applicator, Processor... I have a clear understanding of these names and their implications as to what and how they do things. But there are also the suffixes Provider and Resolver, and I have difficulty differentiating between them. I don't understand the exact differences in their naming.
What I observed is:
Provider: fetching data that are not yet available (internally fetching data from DB or external API)
Resolver: I already have a bunch of data (and I don't need any additional data) and I need to filter, transform or get some subset of it.
Is there some convention or design pattern behind the names Resolver and Provider? Am I somewhat right here? Or is there more nuance to this naming?
In my view, patterns do not depend on technology or language, so this article can be applied here:
Content Providers provide an interface, e.g. for publishing and consuming data
and:
Content Resolver resolves publishing and consuming of data to a specific Content Provider.
The Content Resolver includes the CRUD (create, read, update, delete) methods corresponding to the abstract methods (insert, query, update, delete) in the Content Provider class.
UPDATE
Provider is an abstraction that can be implemented by concrete providers. E.g., DataProvider is an abstraction, so we want concrete implementations such as SqlServerProvider, PostgreProvider, and OracleProvider.
Let me show an example via C#:
public interface IDataProvider
{
string GetById();
}
public class SqlServerProvider : IDataProvider
{
public string GetById()
{
return "Data retrieved with SqlServerProvider";
}
}
public class PostgreProvider : IDataProvider
{
public string GetById()
{
return "Data retrieved with PostgreProvider";
}
}
public class OracleProvider : IDataProvider
{
public string GetById()
{
return "Data retrieved with OracleProvider";
}
}
Then we need to resolve the above dependencies to use them. But how? We can create a DataResolver:
public enum DataProviderType
{
SqlServer, Postgre, Oracle
}
public class DataResolver
{
private Dictionary<DataProviderType, IDataProvider> _dataProviderByType =
new Dictionary<DataProviderType, IDataProvider>()
{
{ DataProviderType.SqlServer, new SqlServerProvider() },
{ DataProviderType.Postgre, new PostgreProvider() },
{ DataProviderType.Oracle, new OracleProvider() },
};
public IDataProvider Resolve(DataProviderType dataProviderType)
{
return _dataProviderByType[dataProviderType];
}
}
and then we can run the above code like this:
DataResolver dataResolver = new DataResolver();
string someValue = dataResolver.Resolve(DataProviderType.SqlServer).GetById();
Console.WriteLine(someValue); // OUTPUT: Data retrieved with SqlServerProvider
See more examples of code here

enum vs Interface design

I have a design problem where the requirement is something like this:
Write a generate function that takes a parameter ("TYPE").
Depending on the TYPE, I need to generate a String and return it. So TYPE effectively changes the way you generate the String.
I am deliberating between two design options :
Using enum: Create an enum holding the TYPES. Then provide a generate method that, depending on TYPE, does the processing and returns a string.
Using Interface: Create an interface with a function generate(). Create an implementation for each TYPE that implements generate().
Which do you feel is better and for what reasons.
Approach #2 follows the Open/Closed Principle of OOAD, i.e. you add a new interface implementation every time a new TYPE is added and you do not modify existing code, which is a very safe approach as it does not require retesting of old code. So your code will be open for extension but closed for modification. If you are going to add new TYPEs very frequently, Approach #2 makes sense.
IMO, in this case I would suggest Approach #1, as the business requirement is really simple, i.e. to generate a String based on the parameter TYPE. Using an interface would be over-engineering in my opinion (if TYPEs are not going to be added frequently).
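A minimal sketch of the enum-based approach (GenerationType and the returned strings are just placeholders):
public enum GenerationType
{
    Simple,
    Fancy
}

public static class StringGenerator
{
    public static string Generate(GenerationType type)
    {
        // The TYPE drives how the string is built.
        switch (type)
        {
            case GenerationType.Simple:
                return "simple-string";
            case GenerationType.Fancy:
                return "FANCY-STRING";
            default:
                throw new ArgumentOutOfRangeException(nameof(type));
        }
    }
}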
It would be good to use a design pattern for this problem to make your code more robust and reusable. I suggest the Strategy design pattern. It is an abstraction-based pattern that uses an interface.
Basic Example:
public interface IMyStrategy
{
string Generate(string someValue);
}
public class StrategyA : IMyStrategy
{
public string Generate(string someValue)
{
return "A: " + someValue; // strategy-specific implementation
}
}
public class StrategyB : IMyStrategy
{
public string Generate(string someValue)
{
return "B: " + someValue; // strategy-specific implementation
}
}
public class MyStrategyContext
{
private readonly IMyStrategy _myStrategy;
public MyStrategyContext(IMyStrategy myStrategy)
{
_myStrategy = myStrategy;
}
public string GenerateResult(string someValue)
{
return _myStrategy.Generate(someValue);
}
}
[Test]
public void GenerateValue()
{
var abc = new MyStrategyContext(new StrategyA());
abc.GenerateResult("hey print");
}

WCF, LINQ error: cannot implicitly convert type System.Linq.IOrderedQueryable<> to System.Collections.Generic.List<>

I am getting an error. I am using Entity Framework and WCF.
Error: cannot implicitly convert type System.Linq.IOrderedQueryable<xDataModel.Info> to System.Collections.Generic.List<xServiceLibrary.Info>
Below are my code:
WCF Service:
namespace xServiceLibrary
{
public List<Info> GetScenario()
{
xEntities db = new xEntities();
var query = from qinfo in db.Infoes
select qinfo;
//return query.Cast<Info>().ToList(); (not working)
//return query.toList(); (not working)
return query;
}
}
Interface:
namespace xServiceLibrary
{
[OperationContract]
List<Info> GetScenario();
}
Class:
namespace xServiceLibrary
{
[DataContract]
public class Info
{
[DataMember]
public int Scenario_Id;
[DataMember]
public string Scenario_Name { get; set; }
[DataMember]
public string Company_Name { get; set; }
}
}
Update (2):
I have two class library projects.
One has the xDataModel namespace, in which I have created the xmodel.edmx file.
The second has the xServiceLibrary namespace, where I am implementing the WCF service.
I have referenced xDataModel.dll in my xServiceLibrary project so that I can query my EF model.
I am not able to understand the concept. Any help would be appreciated.
The problem is that you have two different types named Info: DataModel.Info and ServiceLibrary.Info - because these are different types you cannot cast one into the other.
If there is no strong reason for both being there I would eliminate one of them. Otherwise as a workaround you could project DataModel.Info to ServiceLibrary.Info by copying the relevant properties one by one:
var results = (from qinfo in db.Infoes
select new ServiceLibrary.Info()
{
Scenario_Id = qinfo.Scenario_Id,
//and so on
}).ToList();
The problem is that you have two different classes, both called Info, both in scope at the time you run your query. This is a very very bad thing, especially if you thought they were the same class.
If DataModel.Info and ServiceLibrary.Info are the same class, you need to figure out why they are both in scope at the same time and fix that.
If they are different classes, you need to be explicit about which one you are trying to return. Assuming that your EF model includes a set of DataModel.Info objects, your options there are:
Return a List<DataModel.Info> which you can get by calling query.ToList()
Return a List<ServiceLibrary.Info> which you can get by copying the fields from your DataModel.Info objects:
var query = from qinfo in db.Infoes
select new ServiceLibrary.Info
{
Scenario_Id = qinfo.Scenario_Id,
Scenario_Name = qinfo.Scenario_Name,
Company_Name = qinfo.Company_Name
};
Return something else, such as a custom DTO object, similar to #2 but with only the specific fields you need (e.g. if ServiceLibrary.Info is a heavy object you don't want to pass around); see the sketch below.
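A minimal sketch of such a DTO (ScenarioSummary is just an illustrative name, and the projection assumes the same xEntities context as above):
[DataContract]
public class ScenarioSummary
{
    [DataMember]
    public int Scenario_Id { get; set; }

    [DataMember]
    public string Scenario_Name { get; set; }
}

// Project straight into the DTO and materialise the list for the service contract.
List<ScenarioSummary> result = (from qinfo in db.Infoes
                                select new ScenarioSummary
                                {
                                    Scenario_Id = qinfo.Scenario_Id,
                                    Scenario_Name = qinfo.Scenario_Name
                                }).ToList();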
In general, though, your problem is centered around the fact that the compiler is interpreting List<Info> as List<ServiceLibrary.Info> and you probably don't want it to.

Sending an Interface definition over the wire (WCF service)

I have a WCF service that loads Entity Framework objects (as well as some other structs and simple classes used to lighten the load) and sends them over to a client application.
I have changed 2 of the classes to implement an interface so that I can reference them in my application as a single object type. Much like this example:
Is it Possible to Force Properties Generated by Entity Framework to implement Interfaces?
However, the interface type is not added to my WCF service proxy client thingymebob as it is not directly referenced in the objects that are being sent back over the wire.
Therefore in my application that uses the service proxy classes, I can't cast or reference my interface..
Any ideas what I'm missing?
Here's some example code:
//ASSEMBLY/PROJECT 1 -- EF data model
namespace Model
{
public interface ISecurable
{
[DataMember]
long AccessMask { get; set; }
}
//partial class extending EF generated class
//there is also a class defined as "public partial class Company : ISecurable"
public partial class Chart : ISecurable
{
private long _AccessMask = 0;
public long AccessMask
{
get { return _AccessMask; }
set { _AccessMask = value; }
}
public void GetPermission(Guid userId)
{
ChartEntityModel model = new ChartEntityModel();
Task task = model.Task_GetMaskForObject(_ChartId, userId).FirstOrDefault();
_AccessMask = (task == null) ? 0 : task.AccessMask;
}
}
}
//ASSEMBLY/PROJECT 2 -- WCF web service
namespace ChartService
{
public Chart GetChart(Guid chartId, Guid userId)
{
Chart chart = LoadChartWithEF(chartId);
chart.GetPermission(userId); //load chart perms
return chart; //send it over the wire
}
}
Interfaces won't come across as separate entities in your WSDL - they will simply have their methods and properties added to the object that exposes them.
What you want to accomplish can be done with abstract classes. These will come across as distinct entities.
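A rough sketch of that approach, assuming your entities can be made to derive from a common base class (the names here are illustrative):
[DataContract]
[KnownType(typeof(Chart))]
[KnownType(typeof(Company))]
public abstract class SecurableBase
{
    [DataMember]
    public long AccessMask { get; set; }
}
Because SecurableBase is a concrete type in the data contract (rather than an interface), it shows up as a distinct type in the generated proxy, and the KnownType attributes let the DataContractSerializer round-trip the derived entities when the service exposes the base type.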
Good luck. Let us know how you decided to proceed.

Accessing more than one data provider in a data layer

I'm working on a business application which is being developed using the DDD philosophy. The database is accessed through NHibernate and the data layer is implemented using the DAO pattern.
The UML class diagram is shown below.
UML Class Diagram: http://img266.imageshack.us/my.php?image=classdiagramhk0.png
I don't know whether the design is good or not. What do you think?
But the question is not whether the design is good. The problem is that after starting up the application, an IDaoFactory is instantiated in the presentation layer and passed as a parameter to the presenter classes (which are designed using the MVC pattern), as below:
...
IDaoFactory daoFactory = new NHibernateDaoFactory(); //instantiation in main class
...
SamplePresenterClass s = new SamplePresenterClass(daoFactory);
...
Using just one data provider (which was just one database) was simple. But now we have to get data from XML too, and in the next phases of development we will have to connect to different web services and manipulate incoming and outgoing data.
The XML data is going to be fetched using a key which is an enum. We add a class named XMLLoader to the data layer and an interface ILoader to the domain. XMLLoader has a method whose signature is:
List<string> LoadData(LoaderEnum key)
If we instantiate ILoader with an XMLLoader in the presentation layer as below, we have to pass it to every object that is going to get some XML data from the data layer.
ILoader loader = new XMLLoader();
SamplePresenterClass s = new SamplePresenterClass(daoFactory, xmlLoader);
After implementing the web service access classes it becomes:
SamplePresenterClass s = new SamplePresenterClass(daoFactory, xmlLoader, sampleWebServiceConnector1, sampleWebServiceConnector2, ...);
The parameter list is going to grow over time. I think I can hold all the instances of the data access objects in one class and pass it to the presenters that need it (maybe the Singleton pattern can help too). In the domain layer there would be a class like this:
public class DataAccessHolder
{
private IDaoFactory daoFactory;
private ILoader loader;
...
public IDaoFactory DaoFactory
{
get { return daoFactory; }
set { daoFactory = value; }
}
...
}
In main class the instantiation can be made with this design as follows
DataAccessHolder dataAccessHolder = new DataAccessHolder();
dataAccessHolder.DaoFactory = new NHibernateDaoFactory();
dataAccessHolder.Loader = new XMLLoader();
...
SamplePresenterClass s = new SamplePresenterClass(dataAccessHolder);
What do you think about this design or can you suggest me a different one?
Thanks to all repliers.
IMO, it would be cleaner to use a "global" or static daoFactory and make it generic.
DaoFactory<SamplePresenterClass>.Create(); // or
DaoFactory<SamplePresenterClass>.Create(id); // etc
Then you can define DaoFactory<T> to accept only, say, IDaos:
interface IDao
{
IDaoProvider GetProvider();
}
interface IDaoProvider
{
IDao Create(IDao instance);
void Update(IDao instance);
void Delete(IDao instance);
}
Basically, instead of passing your DaoFactory to every constructor, you use a static generic DaoFactory. Its T must implement IDao. Then the DaoFactory class can look at T's provider at runtime:
static class DaoFactory<T> where T : IDao, new()
{
public static T Create()
{
T instance = new T();
IDaoProvider provider = instance.GetProvider();
return (T)provider.Create(instance);
}
}
Where IDaoProvider is a common interface that you would implement to load things using XML, NHibernate, web services, etc. depending on the class. (Each IDao object would know how to connect to its data provider.)
Overall, not a bad design though. Add a bit more OO and you will have a pretty slick design. For instance, each file behind the XML enum values could be implemented as an IDao:
class Cat : IDao
{
public IDaoProvider GetProvider()
{
return new XmlLoader(YourEnum.Cat);
}
// ...
}
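With that in place, a presenter no longer needs the factory (or the loaders) passed through its constructor; usage could look roughly like this (Cat stands in for whatever domain object you actually load):
// Resolves the provider (XML, NHibernate, web service, ...) behind the scenes.
Cat cat = DaoFactory<Cat>.Create();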