Consider the following test code structure:
class TestClass<T>
{
    public object TestObject;
}

class TestClass2<T> : TestClass<T>
{
    public int TestMethod()
    {
        return 0;
    }
}
When I add Instruction.Create(OpCodes.Ldfld, TestObjectField) to TestMethod, I get the following result:
ldfld class Object TestNamespace.TestClass`1::TestObject
This causes an issue in the secure plugin system of Dynamics CRM (more information can be found here). However, when I write TestObject = new object(); in TestMethod in C#, the compiler emits the following, which runs fine:
ldfld class Object class TestNamespace.TestClass`1<!T>::TestObject
This only happens when the field I am trying to access is declared in a generic base class. Is there a way I can produce the desired result from within Fody, please?
UPDATE:
I managed to partially solve the issue by using the following lines:
var testObjectFieldRef = testObjectFieldDefinition?.Resolve().GetGeneric();
var testClassTypeRef = testObjectFieldRef?.DeclaringType.Resolve().GetGeneric();
if (testClassTypeRef != null)
{
testObjectFieldRef.DeclaringType = testClassTypeRef;
}
It seems that by default the ModuleDefinition does not hand back a generic type/field reference, so it has to be built explicitly. However, the generic arguments are still not specified, so it remains an issue.
I managed to solve this issue by using the following code (reference):
var genericBaseType = (GenericInstanceType)testClassTypeRef.BaseType;
var genericArgs = genericBaseType.GenericArguments;
var fullBaseTypedName = genericBaseType.ElementType.MakeGenericInstanceType(genericArgs.ToArray()).FullName;
This produces ldfld class Object class TestNamespace.TestClass`1<!T>::TestObject as required. It also fills in the concrete type argument (in place of <!T>) when one is supplied by the derived class.
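For completeness, here is a condensed sketch of one way to get the generic field reference directly with Mono.Cecil; derivedType and testObjectField are illustrative names for the TypeDefinition of TestClass2`1 and the FieldDefinition of TestObject, and I have not verified this against every case:
// derivedType.BaseType is a GenericInstanceType (TestClass`1<!T>), or
// TestClass`1<SomeType> when the derived class closes the parameter.
var genericBase = (GenericInstanceType)derivedType.BaseType;

// Build a field reference whose DeclaringType is that generic instance
// rather than the open definition returned by Resolve().
var fieldRef = new FieldReference(
    testObjectField.Name,
    testObjectField.FieldType,
    genericBase);

// Emits: ldfld class Object class TestNamespace.TestClass`1<!T>::TestObject
var load = Instruction.Create(OpCodes.Ldfld, fieldRef);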
Given the following:
public class Parent
{
public ChildType childType;
}
public class ChildA : Parent { ... }
public class ChildB : Parent { ... }
public enum ChildType {
    ChildA,
    ChildB
}
public class Content {
public long contentId;
public string? name;
public ICollection<Parent>? contentCollection; // <--
...
}
I would like to use the Content class as part of an API. Is it possible to load both children into the collection just using the enum as a discriminator to determine which to cast to?
My understanding is that the child objects would need to be loaded from EF as their concrete child classes first, then upcast to the parent class before being added to the collection; otherwise they would be missing properties when cast back down to the child class. Is this correct? And how can the DbContext be configured to handle this when accessing the data through the Content class?
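To make the question concrete, the kind of discriminator configuration I have in mind looks roughly like this (AppDbContext is a placeholder name, it assumes childType is exposed as a property, and I have not verified this fits my case):
public class AppDbContext : DbContext
{
    public DbSet<Content> Contents { get; set; }
    public DbSet<Parent> Parents { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // All Parent rows share one table; the ChildType column tells EF which
        // CLR type to materialize, so ChildA/ChildB come back as their concrete
        // types inside an ICollection<Parent>.
        // (If Parent itself is instantiated, it needs its own discriminator value
        // or should be made abstract.)
        modelBuilder.Entity<Parent>()
            .HasDiscriminator(p => p.childType)
            .HasValue<ChildA>(ChildType.ChildA)
            .HasValue<ChildB>(ChildType.ChildB);
    }
}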
Apologies for all the questions, I have not done this before and cannot find an example online. I would like to know any thoughts, pointers or general info before proceeding. Please say if anything is unclear or more info is required.
Edit:
I was trying to map the child objects as their types from the DB, upcast to the parent type to be able to add multiple types to the one collection and then downcast when required for use. As far as I was aware, EF did not have the functionality to do this.
For anyone else who comes across this and needs assistance, I solved my issue by just using ADO.NET, which is what Entity Framework is built on top of. I was getting stuck trying to get this working with EF, but my belief is that it cannot be done with EF alone.
The formatting is off a little and I have renamed everything to match my original question, but here is the solution:
Write a stored procedure that retrieves the data in a shape similar to how the objects would be stored under a TPH pattern.
Call that stored procedure using SqlConnection/SqlCommand. (I added this to my context class to keep the DAL together, but I am unsure whether this is best practice.)
public async Task<Collection<Parent>> GetModelMapCollectionAsync(long id)
{
    Collection<Parent> parentCollection;
    using (SqlConnection connection = new SqlConnection(this.Database.GetConnectionString()))
    {
        using (SqlCommand sqlCommand = new SqlCommand("GetModelMapCollectionAsync", connection))
        {
            sqlCommand.CommandType = System.Data.CommandType.StoredProcedure;
            sqlCommand.Parameters.Add(new SqlParameter("@id", id));

            await connection.OpenAsync();

            using (SqlDataReader sqlDataReader = await sqlCommand.ExecuteReaderAsync())
            {
                MapCollectionResult(sqlDataReader, out parentCollection);
            }
        }
    }
    return parentCollection;
}
Using a NuGet package called Dapper, create a row parser for each type (the easiest option for readability/simplicity, in my opinion).
Use the discriminator column to determine which parser to apply to each row returned from the stored procedure. This creates the child object from the row, which allows it to be cast back down later.
Add each parsed object to the collection.
private void MapCollectionResult(SqlDataReader sqlDataReader, out Collection<Parent> parentCollection)
{
    parentCollection = new Collection<Parent>();

    // Dapper row parsers: one per concrete type that can appear in the result set.
    var parentParser = sqlDataReader.GetRowParser<Parent>(typeof(Parent));
    var paramClassParser = sqlDataReader.GetRowParser<ParamClass>(typeof(ParamClass));
    var childAParser = sqlDataReader.GetRowParser<ChildA>(typeof(ChildA));
    var childBParser = sqlDataReader.GetRowParser<ChildB>(typeof(ChildB));

    while (sqlDataReader.Read())
    {
        // The discriminator column decides which parser (and therefore which CLR type) to use.
        var type = (ChildType)sqlDataReader["ChildTypeId"];
        Parent parent;
        switch (type)
        {
            case ChildType.ChildA:
                parent = childAParser(sqlDataReader);
                break;
            case ChildType.ChildB:
                parent = childBParser(sqlDataReader);
                break;
            default:
                parent = parentParser(sqlDataReader);
                break;
        }

        parent.paramClass = paramClassParser(sqlDataReader);
        parentCollection.Add(parent);
    }
}
Hey guys, I'm very new to software development. I still have no idea when to use which lifetime, or what "service lifetime" even means. It may seem stupid, but please help me. I have an interface:
public interface IAccessInfo
{
public IEnumerable<AccessInfo> getResult();
}
What it is supposed to do is return the information about my turbines. Here is the implementation:
public class AcessInfoData : IAccessInfo
{
private DbContextClass db;
public AcessInfoData(DbContextClass context)
{
db = context;
}
public IEnumerable<AccessInfo> getResult()
{
var turbines = (from c in db.accessinf
where c.user_id == "i0004912"
select new AccessInfo
{
InfoType = c.type,
TurbineId = c.m_plc_id.ToString(),
TurbineIP = c.turbine_ip.ToString(),
TurbineIdSorting = c.turbine_id,
Blade = c.blade,
Certification = c.certification,
}).Distinct();
return turbines;
}
}
It gets an instance of my DB context and fetches the data. In my controller I use it like this:
public class AcessInfoController : ControllerBase
{
private IAccessInfo _acess;
public AcessInfoController(IAccessInfo access)
{
_acess = access;
}
[HttpGet]
public IActionResult Index()
{
var result = _acess.getResult();
return Ok(result);
}
}
Now in Startup I registered it:
services.AddScoped<IAccessInfo, AcessInfoData>();
It works, but if you ask me why I used Scoped and not Singleton or Transient, I have no idea. Really. Can anyone make it clear for me?
I will try to explain a little about the mentioned cases:
Scoped: for all needs of an object during the lifetime of one operation (such as a request from the client), a single instance of the object is created. In other words, only one instance of the object is handed out for every requirement during the lifetime of a request.
Singleton: only one instance of the object is ever created, and it is handed out for all requirements across the whole application, a bit like a static object.
Transient: the IoC container creates a new instance of the object whenever code needs one; that is, it creates an instance for each requirement anywhere in the program, at any time. If the program needs the object three times, it gets three independent instances.
Instance: in this case you construct the object yourself in the startup section, and each time the object is needed that single instance is provided to the program. (When registering it in Startup, you specify how the instance is created.)
I hope this clears up some of the ambiguity.
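In this particular case Scoped is also a sensible fit: the DbContextClass you inject is normally registered per request (that is what AddDbContext does by default), so a scoped AcessInfoData lives exactly as long as its context, while a singleton would hold on to one context for the whole application. Roughly, the registrations look like this (only one of them would actually be used for IAccessInfo; sharedInstance is a hypothetical object you built yourself):
services.AddTransient<IAccessInfo, AcessInfoData>();  // a new AcessInfoData every time one is injected
services.AddScoped<IAccessInfo, AcessInfoData>();     // one AcessInfoData per HTTP request
services.AddSingleton<IAccessInfo, AcessInfoData>();  // one AcessInfoData for the whole application lifetime
services.AddSingleton<IAccessInfo>(sharedInstance);   // "instance": the container always hands back the object you created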
I want to extend LINQ to SQL's DataContext class to implement the ORM. Currently my model looks like this:
public class Trial : DataContext
{
public Trial(string connectionString) : base(connectionString) { }
[Column(DbType = "System.Guid", IsPrimaryKey = true, IsDbGenerated = true, CanBeNull = false)]
public Guid TrialID { get; set; }
//...
}
However, when I try to instantiate a new Trial object to insert it into the database, I get an error complaining that Trial does not have a constructor that takes 0 arguments. When I try to create such a constructor, VS complains that DataContext does not have a constructor that takes 0 arguments.
Am I missing something here? How do I separate the data context from the model definition?
(First time using Linq!)
Thanks in advance,
Max.
Your data context, which represents the database view, should inherit from DataContext. It should expose Table<T> properties, where T are the entity (row) types you want to work with; the entity classes themselves should not inherit from DataContext. Try generating a model from the database using the designer or SqlMetal and take a closer look at the generated code to see what's going on.
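A rough illustration of that separation (type and table names are made up; the attributes are simplified from the question):
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;

// The entity is a plain class mapped to a table; it does not inherit from DataContext.
[Table(Name = "Trial")]
public class Trial
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true, CanBeNull = false)]
    public Guid TrialID { get; set; }
}

// The context represents the database and exposes one Table<T> per entity.
public class TrialDataContext : DataContext
{
    public TrialDataContext(string connectionString) : base(connectionString) { }

    public Table<Trial> Trials => GetTable<Trial>();
}
A new Trial can then be inserted through the context, e.g. context.Trials.InsertOnSubmit(new Trial()) followed by context.SubmitChanges().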
I am getting an error. I am using Entity Framework and WCF.
Error: cannot implicitly convert type 'System.Linq.IOrderedQueryable<xDataModel.Info>' to 'System.Collections.Generic.List<xServiceLibrary.Info>'
Below is my code:
WCF Service:
namespace xServiceLibrary
{
public List<Info> GetScenario()
{
xEntities db = new xEntities();
var query = from qinfo in db.Infoes
select qinfo;
//return query.Cast<Info>().ToList(); (not working)
//return query.ToList(); (not working)
return query;
}
}
Interface:
namespace xServiceLibrary
{
[OperationContract]
List<Info> GetScenario();
}
Class:
namespace xServiceLibrary
{
[DataContract]
public class Info
{
[DataMember]
public int Scenario_Id;
[DataMember]
public string Scenario_Name { get; set; }
[DataMember]
public string Company_Name { get; set; }
}
}
Update (2):
I have two class library projects.
One is the xDataModel namespace, in which I have created the xmodel.edmx file.
The second is the xServiceLibrary namespace, where I am implementing the WCF service.
I have referenced xDataModel.dll in my xServiceLibrary project so that I can query my EF model.
I am not able to understand the concept. Any help would be appreciated.
The problem is that you have two different types named Info: DataModel.Info and ServiceLibrary.Info - because these are different types you cannot cast one into the other.
If there is no strong reason for both being there I would eliminate one of them. Otherwise as a workaround you could project DataModel.Info to ServiceLibrary.Info by copying the relevant properties one by one:
var results = (from qinfo in db.Infoes
select new ServiceLibrary.Info()
{
Scenario_Id = qinfo.Scenario_Id,
//and so on
}).ToList();
The problem is that you have two different classes, both called Info, both in scope at the time you run your query. This is a very very bad thing, especially if you thought they were the same class.
If DataModel.Info and ServiceLibrary.Info are the same class, you need to figure out why they are both in scope at the same time and fix that.
If they are different classes, you need to be explicit about which one you are trying to return. Assuming that your EF model includes a set of DataModel.Info objects, your options there are:
Return a List<DataModel.Info> which you can get by calling query.ToList()
Return a List<ServiceLibrary.Info> which you can get by copying the fields from your DataModel.Info objects:
var query = from qinfo in db.Info
            select new ServiceLibrary.Info
            {
                Scenario_Id = qinfo.Scenario_Id,
                Scenario_Name = qinfo.Scenario_Name,
                Company_Name = qinfo.Company_Name
            };
Return something else, such as a custom DTO object, similar to #2 but with only the specific fields you need (e.g. if ServiceLibrary.Info is a heavy object you don't want to pass around).
In general, though, your problem is centered around the fact that the compiler is interpreting List<Info> as List<ServiceLibrary.Info> and you probably don't want it to.
I have looked at Dozer's FAQs and docs, including the SourceForge forum, but I didn't see any good tutorial or even a simple example of how to implement a custom BeanFactory.
Everyone says, "Just implement a BeanFactory". How exactly do you implement it?
I've Googled and all I see are just jars and sources of jars.
Here is one of my BeanFactories; I hope it helps to explain the common pattern:
public class LineBeanFactory implements BeanFactory {
@Override
public Object createBean(final Object source, final Class<?> sourceClass, final String targetBeanId) {
final LineDto dto = (LineDto) source;
return new Line(dto.getCode(), dto.getElectrified(), dto.getName());
}
}
And the corresponding XML mapping:
<mapping>
<class-a bean-factory="com.floyd.nav.web.ws.mapping.dozer.LineBeanFactory">com.floyd.nav.core.model.Line</class-a>
<class-b>com.floyd.nav.web.contract.dto.LineDto</class-b>
</mapping>
This way I declare that when a new instance of Line is needed, it should be created with my BeanFactory. Here is a unit test that explains it:
@Test
public void Line_is_created_with_three_arg_constructor_from_LineDto() {
final LineDto dto = createTransientLineDto();
final Line line = (Line) this.lineBeanFactory.createBean(dto, LineDto.class, null);
assertEquals(dto.getCode(), line.getCode());
assertEquals(dto.getElectrified(), line.isElectrified());
assertEquals(dto.getName(), line.getName());
}
So Object source is the source bean being mapped; Class sourceClass is the class of the source bean (I ignore it, because it will always be a LineDto instance); String targetBeanId is the ID of the destination bean (also ignored).
A custom bean factory is a class that has a method that creates a bean. There are two "flavours":
a) static create method
SomeBean x = SomeBeanFactory.createSomeBean();
b) instance create method
SomeBeanFactory sbf = new SomeBeanFactory();
SomeBean x = sbf.createSomeBean();
You would create a bean factory if creating and setting up your bean requires some tricky logic, for example when the initial values of certain properties depend on an external configuration file. A bean factory class allows you to centralize the "knowledge" of how to create such a tricky bean. Other classes just call the create method without worrying about how to create the bean correctly.
Here is an actual implementation. Obviously it does not make a lot of sense, since Dozer would do the same without the BeanFactory, but instead of just returning an object you could initialize it differently.
public class ComponentBeanFactory implements BeanFactory {
@Override
public Object createBean(Object source, Class<?> sourceClass,
String targetBeanId) {
return new ComponentDto();
}
}
Why do you need a BeanFactory anyway? Knowing that would help in understanding your question.