group XDocument by multiple nodes (dynamic) - asp.net-core

I have the data below coming in as a DataTable. If I convert it directly to XML I get a nice XDocument object.
The problem is that I need to group it by the first 4 columns so that the XML looks like the sample below. I only know of three nodes, 'Segment', 'Price' and 'Qty'; the rest of the columns in the DataTable can be dynamic, so I cannot use hardcoded names (except for the above 3).
<ROOT>
<ROW>
<Col1>CESLP</Col1>
<Col2>MRP</Col2>
<Col3>372</Col3>
<Date>20040101</Date>
<BID_INTERVALS>
<SEGMENT>1</SEGMENT>
<Price>10</Price>
<QTY>5</QTY>
</BID_INTERVALS>
<BID_INTERVALS>
<SEGMENT>2</SEGMENT>
<Price>15</Price>
<QTY>6</QTY>
</BID_INTERVALS>
</ROW>
<ROW>
<Col1>CESLP</Col1>
<Col2>MRP</Col2>
<Col3>372</Col3>
<Date>20040102</Date>
<BID_INTERVALS>
<SEGMENT>1</SEGMENT>
<Price>11</Price>
<QTY>5</QTY>
</BID_INTERVALS>
<BID_INTERVALS>
<SEGMENT>2</SEGMENT>
<Price>14.5</Price>
<QTY>6</QTY>
</BID_INTERVALS>
</ROW>
</ROOT>
Any solution? I've been stuck for quite some time; I tried XDocument GroupBy with Except, but it didn't work for me.
Edit 1:
I'm using the code below to group the records (using the solution from here):
dataTable.AsEnumerable()
.GroupBy(r => new NTuple<object>(from column in colNames select r[column]))
.Select(g => g.CopyToDataTable()).ToList();
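(NTuple<object> here is the composite-key helper from the linked solution and is not shown above; a minimal sketch of such a key type, assuming only that it needs structural equality so GroupBy can use it as a key, might look like this:)

// Hypothetical minimal NTuple-style composite key (assumed shape, not the linked implementation):
// it wraps the grouped column values and provides structural equality for use as a GroupBy key.
public class NTuple<T> : IEquatable<NTuple<T>>
{
    public NTuple(IEnumerable<T> values) => Values = values.ToArray();

    public T[] Values { get; }

    public bool Equals(NTuple<T> other) =>
        other != null && Values.SequenceEqual(other.Values);

    public override bool Equals(object obj) => Equals(obj as NTuple<T>);

    public override int GetHashCode() =>
        Values.Aggregate(17, (hash, v) => hash * 31 + (v?.GetHashCode() ?? 0));
}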

It's not entirely clear from the image in your question whether you initially have a DataTable or an XDocument. So let's assume you have an XDocument, and you would like to group the child rows of the root element by the values of their first four columns, with the remaining values collected under a sequence of elements named <BID_INTERVALS>.
This can be accomplished using the following extension method:
public static partial class XNodeExtensions
{
    // Copies root, grouping its child rows by the (name, value) pairs of the columns selected by
    // columnFilter; the remaining columns of each row are collected under repeated groupName elements.
    public static XElement CopyAndGroupChildrenByColumns(this XElement root, Func<XName, int, bool> columnFilter, XName groupName) =>
        new XElement(root.Name,
            root.Attributes(),
            root.Elements()
                .Select(row => (row, key: row.Elements()
                    .Where((e, i) => columnFilter(e.Name, i))
                    .Select(e => (e.Name, e.Value))
                    .ToHashSet()))
                .GroupByKeyAndSet(pair => pair.row.Name, pair => pair.key)
                .Select(g => new XElement(g.Key.Key,
                    g.Key.Set.Select(p => new XElement(p.Name, p.Value))
                        .Concat(g.Select(pair => new XElement(groupName,
                            pair.row.Elements().Where((e, i) => !columnFilter(e.Name, i))))))));

    // Groups by a composite key consisting of a scalar key plus a HashSet, comparing the sets by value.
    public static IEnumerable<IGrouping<(TKey Key, HashSet<TItem> Set), TSource>> GroupByKeyAndSet<TSource, TKey, TItem>(
        this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TSource, HashSet<TItem>> setSelector) =>
        source.GroupBy(i => (keySelector(i), setSelector(i)),
            new CombinedComparer<TKey, HashSet<TItem>>(null, HashSet<TItem>.CreateSetComparer()));
}
public class CombinedComparer<T1, T2> : IEqualityComparer<ValueTuple<T1, T2>>
{
    readonly IEqualityComparer<T1> comparer1;
    readonly IEqualityComparer<T2> comparer2;

    public CombinedComparer(IEqualityComparer<T1> comparer1, IEqualityComparer<T2> comparer2) =>
        (this.comparer1, this.comparer2) = (comparer1 ?? EqualityComparer<T1>.Default, comparer2 ?? EqualityComparer<T2>.Default);

    public bool Equals(ValueTuple<T1, T2> x, ValueTuple<T1, T2> y) =>
        comparer1.Equals(x.Item1, y.Item1) && comparer2.Equals(x.Item2, y.Item2);

    public int GetHashCode(ValueTuple<T1, T2> obj) =>
        HashCode.Combine(comparer1.GetHashCode(obj.Item1), comparer2.GetHashCode(obj.Item2));
}
Then, given some XDocument doc, you can do:
// Group by the first four columns with all remaining elements collected under a <BID_INTERVALS> sequence of elements:
XName groupName = doc.Root.Name.Namespace + "BID_INTERVALS";
var grouped = doc.Root.CopyAndGroupChildrenByColumns((n, i) => (i < 4), groupName);
var newDoc = new XDocument(grouped);
If, on the other hand, you have a DataTable dt rather than an XDocument, you can convert the table to an XDocument directly using the following extension method:
public static partial class XNodeExtensions
{
    public static XDocument ToXDocument(this DataTable dt, XmlWriteMode mode = XmlWriteMode.IgnoreSchema)
    {
        var doc = new XDocument();
        using (var writer = doc.CreateWriter())
            dt.WriteXml(writer, mode);
        return doc;
    }
}
And then do:
var doc = dt.ToXDocument(XmlWriteMode.IgnoreSchema);
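Putting the two pieces together, the DataTable case reduces to a conversion followed by the same grouping call shown above:

// Convert the DataTable to an XDocument, then group the rows by their first four columns.
var doc = dt.ToXDocument(XmlWriteMode.IgnoreSchema);
XName groupName = doc.Root.Name.Namespace + "BID_INTERVALS";
var newDoc = new XDocument(doc.Root.CopyAndGroupChildrenByColumns((n, i) => i < 4, groupName));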
Demo fiddle here.

Related

Get data from database with where clause using API

I have a table named SerialNumbers containing several columns.
I want to get the rows whose SN matches the scanned values, which are listed in an array (SNum).
Below is my code:
public class SNController : ApiController
{
    [HttpGet]
    public HttpResponseMessage AllSN()
    {
        using (SNDBContext dbContext = new SNDBContext())
        {
            string[] SNum = { "01070A2", "01070A3", "01070A4" };
            var SerialNum = dbContext.SNumbers.Where(x => x.SN == "01070A2")
                .Select(p => new { p.Name, p.Status })
                .ToList();
            return Request.CreateResponse(HttpStatusCode.OK, SerialNum);
        }
    }
}
When I hardcode this part, var SerialNum = dbContext.SNumbers.Where(x => x.SN == "01070A2"), it works.
How can I filter by the whole SNum array instead?
Use the Contains method in the Where clause:
var SerialNum = dbContext.SNumbers.Where(x => SNum.Contains(x.SN))
    .Select(p => new { p.Name, p.Status })
    .ToList();
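If the scanned values arrive with the request rather than being hardcoded, a minimal sketch could look like the following (the action name, parameter binding, and parameter name are assumptions, not from the question):

// Hypothetical Web API action: the scanned serial numbers are bound from the query string.
[HttpGet]
public HttpResponseMessage GetBySerialNumbers([FromUri] string[] SNum)
{
    using (var dbContext = new SNDBContext())
    {
        var serialNumbers = dbContext.SNumbers
            .Where(x => SNum.Contains(x.SN)) // translated by the LINQ provider to a SQL IN clause
            .Select(p => new { p.Name, p.Status })
            .ToList();
        return Request.CreateResponse(HttpStatusCode.OK, serialNumbers);
    }
}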

Ravendb LoadDocument showing NULL value

I have the following two collections in RavenDB. Please help me create an index for getting data from both collections.
public class Ticket
{
public string TicketID{get;set;}
public double Total{get;set;}
}
public class ImportTiming
{
public string Id{get;set;}
public DateTime ExtractTime{get;set;}
}
AND
public class ResultClass
{
public string TicketID{get;set;}
public double Total{get;set;}
public DateTime ExtractTime{get;set;}
}
TicketID (in Ticket) and Id (in ImportTiming) are the same. I am using LoadDocument for ExtractTime, but it is showing a NULL value.
Thanks in advance!
Finally I got the solution.
Below is the map/reduce index, in which I have used LoadDocument<> to pull data from the ImportTiming document.
public class IdxJoinBetweenCollections : AbstractIndexCreationTask<Ticket, ResultClass>
{
    public IdxJoinBetweenCollections()
    {
        Map = docs => from doc in docs
                      let TimeDoc = LoadDocument<ImportTiming>("ImportTiming/" + doc.TicketID)
                      select new
                      {
                          TicketID = doc.TicketID,
                          Total = doc.Total,
                          ExtractTime = TimeDoc.ExtractTime,
                      };
        Reduce = results => from res in results
                            group res by res.TicketID into g
                            select new
                            {
                                TicketID = g.Key,
                                Total = g.Select(x => x.Total).FirstOrDefault(),
                                ExtractTime = g.Select(x => x.ExtractTime).FirstOrDefault(),
                            };
    }
}
In LoadDocument<ImportTiming>("ImportTiming/" + doc.TicketID), I have used the collection name followed by the Id so that I get the whole document. If I don't use the collection name, it shows a NULL value.
Reference: http://ravendb.net/docs/2.0/client-api/querying/static-indexes/indexing-related-documents
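For completeness, a hedged sketch of how the index could then be queried from a session (the documentStore variable is an assumption; the Query<TResult, TIndexCreator>() overload is the standard RavenDB client API):

// Querying the map/reduce index: results come back already shaped as ResultClass.
using (var session = documentStore.OpenSession())
{
    var results = session
        .Query<ResultClass, IdxJoinBetweenCollections>()
        .ToList();
}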

How to get nested objects using ICriteria Projections

I have a data model like this:
class Hand {
public int id;
...
}
class Person {
public int id;
public string name;
public IList<Hand> hands;
...
}
To get the data from the database, I do this:
ICriteria criteria = databaseSession.CreateCriteria(typeof(Person));
ProjectionList projections = Projections.ProjectionList();
projections
.Add(Projections.Property("id").As("id"))
.Add(Projections.Property("name").As("name"))
.Add(Projections.Property("hands").As("hands"));
projections.Add(Projections.GroupProperty("id"));
projections.Add(Projections.Count("id"), "count");
criteria.SetProjection(projections);
criteria.SetResultTransformer(
NHibernate.Transform.Transformers.AliasToBean(typeof(PersonDTO)));
But NHibernate does not load the nested objects in the hands property. It just gives null.
Can anyone help me get the nested objects filled as well (for more than one level of depth)? Using projections instead of a query would be better for me.
Note: it should not be an issue with the mapping, because when I loaded the data without any projection, it worked well.
A possible solution is to project flat rows (person columns plus joined hand columns) and regroup them into DTOs in memory:
// Fetch flat rows via projections, then group them in memory into DTOs.
// (Property names such as "Name" and "hand.Foo" are illustrative; use the names from your mapping.)
var query = databaseSession.CreateCriteria(typeof(Person))
    .CreateAlias("hands", "hand")
    .SetProjection(Projections.ProjectionList()
        .Add(Projections.Property("Id"))
        .Add(Projections.Property("Name"))
        .Add(Projections.Property("hand.Id"))
        .Add(Projections.Property("hand.Foo")))
    .List<object[]>()
    .GroupBy(arr => (int)arr[0])
    .Select(g => new PersonDTO
    {
        Id = g.Key,
        Name = (string)g.First()[1],
        Hands = g.Select(arr => new Hand { Id = (int)arr[2], Foo = arr[3] }).ToList(),
    });
var results = query.ToList();
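For context, PersonDTO is not shown in the question; the projection above assumes a flat DTO roughly shaped like this (the property names are assumptions):

// Assumed shape of the DTO that the grouped projection is materialized into.
public class PersonDTO
{
    public int Id { get; set; }
    public string Name { get; set; }
    public IList<Hand> Hands { get; set; }
}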

Fluent nHibernate, Hi-Lo table with entity-per-row using a convention

Is there a way to specify a table to use for Hi-Lo values, with each entity having a per-row entry, via a convention (while still having NHibernate create the table structure for you)? I would like to replicate what Phil Haydon blogged about here, but without having to manually manage the table. As it stands, migrating his row-per-entity code to its own convention will work only if you've already created the appropriate entries for 'TableKey' in the table.
Alternatively, is this possible via the XML mappings?
And if all else fails, is the only other appropriate option to use a custom generator, a la this post?
Fabio Maulo talked about this in one of his mapping-by-code posts.
Mapping by code example:
mapper.BeforeMapClass += (mi, type, map) =>
    map.Id(idmap => idmap.Generator(Generators.HighLow,
        gmap => gmap.Params(new
        {
            table = "NextHighValues",
            column = "NextHigh",
            max_lo = 100,
            where = string.Format(
                "EntityName = '{0}'", type.Name.ToLowerInvariant())
        })));
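For context, a hedged sketch of the mapping-by-code scaffolding that the mapper variable implies (the assembly scanning and configuration variable shown here are assumptions):

// Hypothetical setup around the BeforeMapClass hook shown above.
var mapper = new ModelMapper();
mapper.BeforeMapClass += (mi, type, map) => { /* id generator hook from above */ };
mapper.AddMappings(Assembly.GetExecutingAssembly().GetExportedTypes());
var mapping = mapper.CompileMappingForAllExplicitlyAddedEntities();
configuration.AddMapping(mapping); // add the compiled HbmMapping to the NHibernate Configuration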
For FluentNHibernate, you could do something like:
public class PrimaryKeyConvention : IIdConvention
{
    public void Apply(IIdentityInstance instance)
    {
        var type = instance.EntityType.Name;
        instance.Column(type + "Id");
        instance.GeneratedBy.HiLo(type, "NextHigh", "100",
            x => x.AddParam("where", String.Format("EntityName = '{0}'", type)));
    }
}
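Such a convention would then be registered when building the session factory, roughly like this (the database configuration and entity name below are placeholders, not part of the answer):

// Hypothetical registration of the convention with Fluent NHibernate.
var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
    .Mappings(m => m.FluentMappings
        .AddFromAssemblyOf<MyEntity>()              // MyEntity stands in for one of your mapped entities
        .Conventions.Add<PrimaryKeyConvention>())
    .BuildSessionFactory();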
Also, Fabio explained how you could use IAuxiliaryDatabaseObject to create Hi-Lo script.
private static IAuxiliaryDatabaseObject CreateHighLowScript(
    IModelInspector inspector, IEnumerable<Type> entities)
{
    var script = new StringBuilder(3072);
    script.AppendLine("DELETE FROM NextHighValues;");
    script.AppendLine(
        "ALTER TABLE NextHighValues ADD EntityName VARCHAR(128) NOT NULL;");
    script.AppendLine(
        "CREATE NONCLUSTERED INDEX IdxNextHighValuesEntity ON NextHighValues "
        + "(EntityName ASC);");
    script.AppendLine("GO");
    foreach (var entity in entities.Where(x => inspector.IsRootEntity(x)))
    {
        script.AppendLine(string.Format(
            "INSERT INTO [NextHighValues] (EntityName, NextHigh) VALUES ('{0}',1);",
            entity.Name.ToLowerInvariant()));
    }
    return new SimpleAuxiliaryDatabaseObject(
        script.ToString(), null, new HashedSet<string> {
            typeof(MsSql2005Dialect).FullName, typeof(MsSql2008Dialect).FullName
        });
}
You would use it like this:
configuration.AddAuxiliaryDatabaseObject(CreateHighLowScript(
modelInspector, Assembly.GetExecutingAssembly().GetExportedTypes()));
For users of Fluent NHibernate, Anthony Dewhirst has posted a nice solution over here: http://www.anthonydewhirst.blogspot.co.uk/2012/02/fluent-nhibernate-solution-to-enable.html
Building off of Anthony Dewhirst's already excellent solution, I ended up with the following, which adds a couple of improvements:
- Adds acceptance criteria so that it doesn't try to handle non-integral Id types (e.g. Guid) and won't stomp on Id mappings which have a generator explicitly set
- Script generation takes the Dialect into consideration
public class HiLoIdGeneratorConvention : IIdConvention, IIdConventionAcceptance
{
    public const string EntityColumnName = "entity";
    public const string MaxLo = "500";

    public void Accept(IAcceptanceCriteria<IIdentityInspector> criteria)
    {
        criteria
            .Expect(x => x.Type == typeof(int) || x.Type == typeof(uint) || x.Type == typeof(long) || x.Type == typeof(ulong)) // HiLo only works with integral types
            .Expect(x => x.Generator.EntityType == null); // Specific generator has not been mapped
    }

    public void Apply(IIdentityInstance instance)
    {
        instance.GeneratedBy.HiLo(TableGenerator.DefaultTableName, TableGenerator.DefaultColumnName, MaxLo,
            builder => builder.AddParam(TableGenerator.Where, string.Format("{0} = '{1}'", EntityColumnName, instance.EntityType.FullName)));
    }

    public static void CreateHighLowScript(NHibernate.Cfg.Configuration config)
    {
        var dialect = Activator.CreateInstance(Type.GetType(config.GetProperty(NHibernate.Cfg.Environment.Dialect))) as Dialect;
        var script = new StringBuilder();
        script.AppendFormat("DELETE FROM {0};", TableGenerator.DefaultTableName);
        script.AppendLine();
        script.AppendFormat("ALTER TABLE {0} {1} {2} {3} NOT NULL;", TableGenerator.DefaultTableName, dialect.AddColumnString, EntityColumnName, dialect.GetTypeName(SqlTypeFactory.GetAnsiString(128)));
        script.AppendLine();
        script.AppendFormat("CREATE NONCLUSTERED INDEX IX_{0}_{1} ON {0} ({1} ASC);", TableGenerator.DefaultTableName, EntityColumnName);
        script.AppendLine();
        if (dialect.SupportsSqlBatches)
        {
            script.AppendLine("GO");
            script.AppendLine();
        }
        foreach (var entityName in config.ClassMappings.Select(m => m.EntityName).Distinct())
        {
            script.AppendFormat("INSERT INTO [{0}] ({1}, {2}) VALUES ('{3}',1);", TableGenerator.DefaultTableName, EntityColumnName, TableGenerator.DefaultColumnName, entityName);
            script.AppendLine();
        }
        if (dialect.SupportsSqlBatches)
        {
            script.AppendLine("GO");
            script.AppendLine();
        }
        config.AddAuxiliaryDatabaseObject(new SimpleAuxiliaryDatabaseObject(script.ToString(), null));
    }
}
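A hedged sketch of how this convention and its script generation could be wired up when configuring Fluent NHibernate (the entity and connection-string names are placeholders):

// Hypothetical wiring: register the convention and generate the hi-lo seed script via ExposeConfiguration.
var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
    .Mappings(m => m.FluentMappings
        .AddFromAssemblyOf<MyEntity>()
        .Conventions.Add<HiLoIdGeneratorConvention>())
    .ExposeConfiguration(HiLoIdGeneratorConvention.CreateHighLowScript)
    .BuildSessionFactory();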

RavenDB Index is not working when using SelectMany in Map Function

Based on this article from Ayende, I have created the following index definition:
public class ProductsSearch : AbstractIndexCreationTask<Product, ProductsSearch.Result>
{
    public class Result
    {
        public string Query { get; set; }
    }

    public ProductsSearch()
    {
        Map = products => from product in products
                          select new
                          {
                              Query = new object[]
                              {
                                  product.Title,
                                  product.Tags.Select(tag => tag.Name),
                                  product.Tags.SelectMany(tag => tag.Aliases, (tag, alias) => alias.Name)
                              }
                          };
        Index(x => x.Query, FieldIndexing.Analyzed);
    }
}
One difference is that I must use a SelectMany statement to get the aliases of a tag.
A tag can have many aliases (e.g. tag: mouse, alias: pointing device).
I have no idea why the SelectMany line breaks the index. If I remove it, the index works.
This should work:
Map = products => from product in products
                  from tag in product.Tags
                  from alias in tag.Aliases
                  select new
                  {
                      Query = new object[]
                      {
                          product.Title,
                          tag.Name,
                          alias.Name
                      }
                  };
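For completeness, a hedged sketch of querying the analyzed Query field (Search is the RavenDB LINQ extension; the documentStore variable and search term are assumptions, and the exact projection call back to Product may differ by client version):

// Hypothetical usage: full-text search over the combined Query field, then materialize the matching products.
using (var session = documentStore.OpenSession())
{
    var products = session
        .Query<ProductsSearch.Result, ProductsSearch>()
        .Search(x => x.Query, "mouse")
        .As<Product>()
        .ToList();
}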