How to filter Azure logs, or WCF Data Services filters for Dummies - wcf

I am looking at my Azure logs in the WADLogsTable and would like to filter the results, but I'm clueless as to how to do so. There is a textbox that says:
"Enter a WCF Data Services filter to limit the entities returned"
What is the syntax of a "WCF Data Services filter"? The following gives me an InvalidValueType error saying "The value specified is invalid.":
Timestamp gt '2011-04-20T00:00'
Am I even close? Is there a handy syntax reference somewhere?

This query should be in the format:
Timestamp gt datetime'2011-04-20T00:00:00'
Remembering to put that datetime in there is the important bit.
This trips me up every time, so I use the OData overview for reference.
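A few more examples in the same syntax may help (these column names are standard WADLogsTable attributes; the values are illustrative):

Timestamp gt datetime'2011-04-20T00:00:00'
Level le 3
RoleInstance eq 'WebRole1_IN_0'
Timestamp ge datetime'2011-04-20T00:00:00' and Timestamp lt datetime'2011-04-21T00:00:00'

Strings take single quotes, datetimes need the datetime prefix, and numbers are written bare.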

Adding to knightffhor's response, you can certainly write a query which filters by Timestamp, but this is not the recommended approach, because querying on the "Timestamp" attribute leads to a full table scan. Instead, query this table on the PartitionKey attribute. I'm copying my response from another thread here (Can I capture Performance Counters for an Azure Web/Worker Role remotely...?):
"One of the key thing here is to understand how to effectively query this table (and other diagnostics table). One of the things we would want from the diagnostics table is to fetch the data for a certain period of time. Our natural instinct would be to query this table on Timestamp attribute. However that's a BAD DESIGN choice because you know in an Azure table the data is indexed on PartitionKey and RowKey. Querying on any other attribute will result in full table scan which will create a problem when your table contains a lot of data.The good thing about these logs table is that PartitionKey value in a way represents the date/time when the data point was collected. Basically PartitionKey is created by using higher order bits of DateTime.Ticks (in UTC). So if you were to fetch the data for a certain date/time range, first you would need to calculate the Ticks for your range (in UTC) and then prepend a "0" in front of it and use those values in your query.
If you're querying using the REST API, you would use syntax like:
PartitionKey ge '0<from date/time ticks in UTC>' and PartitionKey le '0<to date/time in UTC>'."
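For example, a minimal C# sketch of that calculation (the variable names are mine; it assumes your range is already in UTC):

// Build the "0"-prefixed PartitionKey bounds for a UTC date/time range
DateTime fromUtc = new DateTime(2011, 4, 20, 0, 0, 0, DateTimeKind.Utc);
DateTime toUtc = fromUtc.AddHours(1);
string fromKey = "0" + fromUtc.Ticks;
string toKey = "0" + toUtc.Ticks;
// Usable in the portal filter box or a REST query string:
string filter = "PartitionKey ge '" + fromKey + "' and PartitionKey le '" + toKey + "'";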
I've written a blog post about how to write WCF queries against table storage which you may find useful: http://blog.cerebrata.com/specifying-filter-criteria-when-querying-azure-table-storage-using-rest-api/
Also if you're looking for a 3rd party tool for viewing and managing diagnostics data, may I suggest that you take a look at our product Azure Diagnostics Manager: /Products/AzureDiagnosticsManager. This tool is built specifically for surfacing and managing Windows Azure Diagnostics data.

The answer I accepted helped me immensely in directly querying the table through Visual Studio. Eventually, however, I needed a more robust solution. I used the tips I gained here to develop some classes in C# that let me use LINQ to query the tables. In case it is useful to others viewing this question, here is roughly how I now query my Azure logs.
Create a class that inherits from Microsoft.WindowsAzure.StorageClient.TableServiceEntity to represent all the data in the "WADLogsTable" table:
public class AzureDiagnosticEntry : TableServiceEntity
{
    public long EventTickCount { get; set; }
    public string DeploymentId { get; set; }
    public string Role { get; set; }
    public string RoleInstance { get; set; }
    public int EventId { get; set; }
    public int Level { get; set; }
    public int Pid { get; set; }
    public int Tid { get; set; }
    public string Message { get; set; }

    public DateTime EventDateTime
    {
        get
        {
            return new DateTime(EventTickCount, DateTimeKind.Utc);
        }
    }
}
Create a class that inherits from Microsoft.WindowsAzure.StorageClient.TableServiceContext and references the newly defined data object class:
public class AzureDiagnosticContext : TableServiceContext
{
    public AzureDiagnosticContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        this.ResolveType = s => typeof(AzureDiagnosticEntry);
    }

    public AzureDiagnosticContext(CloudStorageAccount storage)
        : this(storage.TableEndpoint.ToString(), storage.Credentials) { }

    // Helper property to get an IQueryable. Hard-codes "WADLogsTable" for this class.
    public IQueryable<AzureDiagnosticEntry> Logs
    {
        get
        {
            return CreateQuery<AzureDiagnosticEntry>("WADLogsTable");
        }
    }
}
I have a helper method that creates a CloudStorageAccount from configuration settings:
public CloudStorageAccount GetStorageAccount()
{
    CloudStorageAccount.SetConfigurationSettingPublisher(
        (name, setter) => setter(RoleEnvironment.GetConfigurationSettingValue(name)));
    string configKey = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
    return CloudStorageAccount.FromConfigurationSetting(configKey);
}
I create an AzureDiagnosticContext from the CloudStorageAccount and use that to query my logs:
public IEnumerable<AzureDiagnosticEntry> GetAzureLog(DateTime start, DateTime end)
{
    CloudStorageAccount storage = GetStorageAccount();
    AzureDiagnosticContext context = new AzureDiagnosticContext(storage);
    string startTicks = "0" + start.Ticks;
    string endTicks = "0" + end.Ticks;
    IQueryable<AzureDiagnosticEntry> query = context.Logs.Where(
        e => e.PartitionKey.CompareTo(startTicks) > 0 &&
             e.PartitionKey.CompareTo(endTicks) < 0);
    CloudTableQuery<AzureDiagnosticEntry> tableQuery = query.AsTableServiceQuery();
    IEnumerable<AzureDiagnosticEntry> results = tableQuery.Execute();
    return results;
}
This method takes advantage of the performance tip in Gaurav's answer to filter on PartitionKey rather than Timestamp.
If you wanted to filter the results by more than just date, you could filter the returned IEnumerable; but you'd probably get better performance by filtering the IQueryable instead. You could add a filter parameter to the method and apply it to the IQueryable with a second Where(). Note that the parameter needs to be an Expression<Func<...>> rather than a plain delegate, or the table service LINQ provider can't translate it. E.g.:
public IEnumerable<AzureDiagnosticEntry> GetAzureLog(
    DateTime start, DateTime end, Expression<Func<AzureDiagnosticEntry, bool>> filter)
{
    ...
    IQueryable<AzureDiagnosticEntry> query = context.Logs.Where(
            e => e.PartitionKey.CompareTo(startTicks) > 0 &&
                 e.PartitionKey.CompareTo(endTicks) < 0)
        .Where(filter); // an expression tree, so it can be translated into the query
    ...
}
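For example, to pull the last hour of entries at warning level or above (Level is the column declared on the entity above; in the standard WAD convention, lower values are more severe):

IEnumerable<AzureDiagnosticEntry> entries =
    GetAzureLog(DateTime.UtcNow.AddHours(-1), DateTime.UtcNow, e => e.Level <= 3);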
In the end, I actually further abstracted most of these classes into base classes in order to reuse the functionality for querying other tables, such as the one storing the Windows Event Log.

Related

Save complex object to session in ASP.NET Core 2.0

I am quite new to ASP.NET Core, so please help. I would like to avoid database round trips in my ASP.NET Core application. I have functionality to dynamically add columns in a datagrid. Column settings (visibility, enabled, width, caption) are stored in the DB.
So I would like to store a List<PersonColumns> on the server only for the current session, but I am not able to do this. I already use JsonConvert methods to serialize and deserialize objects to/from session. This works for List<Int32> or objects with simple properties, but not for complex objects with nested properties.
My object I want to store to session looks like this:
[Serializable]
public class PersonColumns
{
    public Int64 PersonId { get; set; }

    // Note: no access modifier, so this property is private;
    // JsonConvert only serializes public members by default.
    List<ViewPersonColumns> PersonCols { get; set; }

    public PersonColumns(Int64 personId)
    {
        this.PersonId = personId;
    }

    public void LoadPersonColumns(dbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(dbContext dbContext, string code)
    {
        PersonCols = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public static List<ViewPersonColumns> GetFormViewColumns(SatisDbContext dbContext, string code, Int64 formId, string viewName, Int64 personId)
    {
        var columns = ViewPersonColumns.GetPersonColumns(dbContext, code, personId);
        return columns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
I would also like to ask whether my approach of saving a list of 600 records to session is bad. Is it better to hit the DB and load the columns each time the user wants to display the grid?
Any advice appreciated
Thanks
EDIT: I have tested storing List<ViewPersonColumns> in session directly, and it is correctly saved. When I save an object where the List<ViewPersonColumns> is a property, only the built-in types are saved; the List property is null.
The object I want to save in session
[Serializable]
public class UserManagement
{
    public String PersonUserName { get; set; }
    public Int64 PersonId { get; set; }
    public List<ViewPersonColumns> PersonColumns { get; set; } //not saved to session??

    public UserManagement() { }

    public UserManagement(DbContext dbContext, string userName)
    {
        var person = dbContext.Person.Single(p => p.UserName == userName);
        PersonUserName = person.UserName;
        PersonId = person.Id;
    }

    /*public void PrepareUserData(DbContext dbContext)
    {
        LoadPersonColumns(dbContext);
    }*/

    public void LoadPersonColumns(DbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(DbContext dbContext, string code)
    {
        PersonColumns = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public List<ViewPersonColumns> GetFormViewColumns(Int64 formId, string viewName)
    {
        if (PersonColumns == null)
            return null;

        return PersonColumns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
Save columns to the session
UserManagement userManagement = new UserManagement(_context, user.UserName);
userManagement.LoadPersonColumns(_context);
HttpContext.Session.SetObject("ActualPersonContext", userManagement);
HttpContext.Session.SetObject("ActualPersonColumns", userManagement.PersonColumns);
Load columns from the session
//userManagement built-in types are set. PersonColumns is null - not correct
UserManagement userManagement = session.GetObject<UserManagement>("ActualPersonContext");
//The cols list is filled from the session with 600 records - correct
List<ViewPersonColumns> cols = session.GetObject<List<ViewPersonColumns>>("ActualPersonColumns");
Using a list for the columns is better than hitting the database each time.
You can't create and store sessions in .NET Core the way you could in .NET Framework 4.0.
Try it like this:
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    //services.AddDbContext<GeneralDBContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc().AddSessionStateTempDataProvider();
    services.AddSession();
    // Remember to also call app.UseSession() in Configure, or the session won't be available.
}
Common/SessionExtensions.cs
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace IMAPApplication.Common
{
    public static class SessionExtensions
    {
        public static T GetComplexData<T>(this ISession session, string key)
        {
            var data = session.GetString(key);
            if (data == null)
            {
                return default(T);
            }
            return JsonConvert.DeserializeObject<T>(data);
        }

        public static void SetComplexData(this ISession session, string key, object value)
        {
            session.SetString(key, JsonConvert.SerializeObject(value));
        }
    }
}
Usage
==> Create session
public IActionResult Login([FromBody]LoginViewModel model)
{
    LoggedUserVM user = GetUserDataById(model.userId);

    //Create Session with complex object
    HttpContext.Session.SetComplexData("loggerUser", user);

    // "result" here comes from login logic omitted in this snippet
    return Json(new { status = result.Status, message = result.Message });
}
==> Get session data
public IActionResult Index()
{
    //Get Session data
    LoggedUserVM loggedUser = HttpContext.Session.GetComplexData<LoggedUserVM>("loggerUser");
}
Hope this is helpful. Good luck.
This is an evergreen post, and even though Microsoft has recommended serialisation to store objects in session, it is not a correct solution unless your object is read-only. I have a blog explaining all the scenarios here, and I have even pointed out the issues in the GitHub repository of Asp.Net Core in issue id 18159.
A synopsis of the problems is here:
A. Serialisation isn't the same as the object. True, it helps in the distributed-server scenario, but it comes with a caveat that Microsoft has failed to highlight: it will work without unpredictable failures only when the object is meant to be read, not written back.
B. If you want a read-write object in the session, every time you change the object that was read from the session after deserialisation, it needs to be written back to the session again by serialising it. This alone can lead to multiple complexities, as you will need to either keep track of the changes or keep writing back to the session after each change to any property. In a single request to the server, you will have scenarios where the object is written back multiple times before the response is sent.
C. For a read-write object in the session, this fails even on a single server: user actions can trigger multiple rapid requests, and more often than not the system will find itself in a situation where the object is being serialised or deserialised by one thread while being edited and written back by another. The result is that threads end up overwriting each other's object state, and even locks won't help much, since the object is not one real object but a temporary object created by deserialisation.
D. There are issues with serialising complex objects: it is not just a performance hit, it may even fail in certain scenarios, especially if you have deeply nested objects that sometimes refer back to themselves.
A synopsis of the solution is here; the full implementation, along with code, is in the blog link:
First, implement this as a cache object: create one item in IMemoryCache for each unique session.
Keep the cache in sliding-expiration mode, so that each read revives the expiry time, keeping the object in the cache for as long as the session is active.
The second point alone is not enough: you will need to implement a heartbeat technique, triggering a call to the session every T-minus-1 minute or so from JavaScript. (We used to do this anyway to keep the session alive while the user is working in the browser, so it won't be any different.)
Additional Recommendations
A. Make an object called SessionManager, so that all your code related to session reads/writes sits in one place (a sketch follows below).
B. Do not keep a very high value for the session timeout; if you are implementing the heartbeat technique, even a 3-minute session timeout will be enough.
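To make recommendation A concrete, here is a minimal sketch of such a SessionManager built on IMemoryCache (the names, the per-session key scheme, and the 3-minute sliding window are my assumptions, not code from the blog; it requires services.AddMemoryCache() to be registered):

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Memory;

public class SessionManager
{
    private readonly IMemoryCache _cache; // one shared cache; entries are keyed per session

    public SessionManager(IMemoryCache cache)
    {
        _cache = cache;
    }

    public T Get<T>(ISession session, string key) where T : class
    {
        // Reading an entry renews its sliding expiration, so the heartbeat keeps it alive
        return _cache.Get<T>(session.Id + ":" + key);
    }

    public void Set<T>(ISession session, string key, T value) where T : class
    {
        _cache.Set(session.Id + ":" + key, value,
            new MemoryCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(3) });
    }
}

Because the cache holds the live object reference (no serialisation round-trip), reads and writes touch the same instance, which is exactly what the serialisation approach above cannot give you.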

NHibernate mapping at run time

I am developing a site that uses NHibernate. It works fine with static mapping, but the problem is that I need to apply the application to an existing database. So, is there any way for the mapping of classes to take place at run time, i.e. the user provides the table and column names for the mapping? Thanks
From your question, I take it that the POCO classes exist, but you don't know the table or column names at build time.
So, if you already had this class:
public class MyGenericClass
{
    public virtual long Id { get; set; }
    public virtual string Title { get; set; }
}
You could bind it to a table and columns at runtime:
string tableName;       // Set somewhere else by user input
string idColumnName;    // Set somewhere else by user input
string titleColumnName; // Set somewhere else by user input

var configuration = new NHibernate.Cfg.Configuration();
configuration.Configure();

var mapper = new NHibernate.Mapping.ByCode.ModelMapper();
mapper.Class<MyGenericClass>(classMapper =>
{
    classMapper.Table(tableName);
    classMapper.Id(
        myGenericClass => myGenericClass.Id,
        idMapper =>
        {
            idMapper.Column(idColumnName);
            idMapper.Generator(Generators.Identity);
        });
    classMapper.Property(c => c.Title,
        propertyMapper =>
        {
            propertyMapper.Column(titleColumnName);
        });
});

// Don't forget to register the runtime mapping with the configuration
configuration.AddMapping(mapper.CompileMappingForAllExplicitlyAddedEntities());

ISessionFactory sessionFactory = configuration.BuildSessionFactory();
ISession session = sessionFactory.OpenSession();

////////////////////////////////////////////////////////////////////
// Now we can run an SQL query over this newly specified table
//
IList<MyGenericClass> items = session.QueryOver<MyGenericClass>().List();
I don't think that is possible with NHibernate directly, but you could use a workaround: map NHibernate to a view instead of a table, and at runtime create or update that view to match the mapping the user specified.
For example, you define a mapping in NHibernate to a view named ViewMapped with two columns, Name and Mail.
On the other hand, the user has a table with three columns: Name, SecondName, EMail.
You can create the view at runtime with the following select:
(SELECT Name + ' ' + SecondName as Name, EMail as Mail FROM tableName) AS ViewMapped
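A rough C# sketch of doing that at runtime through NHibernate (sessionFactory and tableName are assumed to exist already; CREATE OR ALTER VIEW needs SQL Server 2016+, and any user-supplied names must be sanitised before being concatenated into DDL):

string ddl =
    "CREATE OR ALTER VIEW ViewMapped AS " +
    "SELECT Name + ' ' + SecondName AS Name, EMail AS Mail FROM " + tableName;

using (ISession session = sessionFactory.OpenSession())
{
    // NHibernate will execute raw DDL through a SQL query
    session.CreateSQLQuery(ddl).ExecuteUpdate();
}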
I hope that helps you, or at least leads you to a solution.

LightSwitch - bulk-loading all requests into one using a domain service

I need to group some data from a SQL Server database, and since LightSwitch doesn't support that out of the box, I use a Domain Service according to Eric Erhardt's guide.
However, my table contains several foreign keys, and of course I want the correct related data to be shown in the table (just following the guide will only make the key values show). I solved this by adding a Relationship to my newly created Entity like this:
And my Domain Service class looks like this:
public class AzureDbTestReportData : DomainService
{
    private CountryLawDataDataObjectContext context;

    public CountryLawDataDataObjectContext Context
    {
        get
        {
            if (this.context == null)
            {
                EntityConnectionStringBuilder builder = new EntityConnectionStringBuilder();
                builder.Metadata =
                    "res://*/CountryLawDataData.csdl|res://*/CountryLawDataData.ssdl|res://*/CountryLawDataData.msl";
                builder.Provider = "System.Data.SqlClient";
                builder.ProviderConnectionString =
                    WebConfigurationManager.ConnectionStrings["CountryLawDataData"].ConnectionString;
                this.context = new CountryLawDataDataObjectContext(builder.ConnectionString);
            }
            return this.context;
        }
    }

    /// <summary>
    /// Override the Count method in order for paging to work correctly
    /// </summary>
    protected override int Count<T>(IQueryable<T> query)
    {
        return query.Count();
    }

    [Query(IsDefault = true)]
    public IQueryable<RuleEntryTest> GetRuleEntryTest()
    {
        return this.Context.RuleEntries
            .Select(g =>
                new RuleEntryTest()
                {
                    Id = g.Id,
                    Country = g.Country,
                    BaseField = g.BaseField
                });
    }
}

public class RuleEntryTest
{
    [Key]
    public int Id { get; set; }
    public string Country { get; set; }
    public int BaseField { get; set; }
}
It works and all that: both the Country name and the BaseField load with autocomplete boxes as they should, but it takes a VERY long time. With two columns it takes 5-10 seconds to load one page, and I have 10 more columns I haven't implemented yet.
The reason it takes so long is that each piece of related data (each Country and BaseField) requires its own request. Loading a page looks like this in Fiddler:
This isn't acceptable at all; there should be a way to combine all those calls into one, just as happens when loading the same table without going through the Domain Service.
So, that was a lot of explaining; my question is: is there any way I can make all the related data load at once, or otherwise improve the performance? It should not take 10+ seconds to load a screen.
Thanks for any help or input!
My RIA Service queries are extremely fast compared to not using them, even when I'm doing aggregation. It might be the fact that you're using "virtual relationships" (which you can tell by the dotted lines between the tables) that you've created using your RuleEntryTest entity.
Why is your original RuleEntry entity not related to both Country and BaseUnit in LightSwitch BEFORE you start creating your RIA entity?
I haven't used Fiddler to see what's happening, but I'd try creating "real" relationships instead of "virtual" ones, and see if that helps your RIA entity's performance.

RavenDB - Can I create/query an index, from an application that doesn't share the CLR objects that were persisted?

This post goes some way to answering this question (I'll include the answer later), but I was hoping for some further details.
We have a number of applications that each need to access/manipulate data from Raven in their own way. Data is only written via the main web application. Other apps include batch-style tasks, reporting etc. In an attempt to keep each of these as de-coupled as possible, they are separate solutions.
That being the case, how can I, from the reporting application, create indexes over the existing data, using my locally defined types?
The answer from the linked question states
As long as the structure of the classes you are deserializing into partially matches the structure of the data, it shouldn't make a difference.
The RavenDB server doesn't care at all what classes you use in the client. You certainly could share a dll, or even share a portable dll if you are targeting a different platform. But you are correct that it is not necessary.
However, you should be aware of the Raven-Clr-Type metadata value. The RavenDB client sets this when storing the original document. It is consumed back by the client to assist with deserialization, but it is not fully enforced
It's the first part of that that I wanted clarification on. Do the object graphs for the docs on the server and types in my application have to match exactly? If the Click document on the server is
{
    "Visit": {
        "Version": "0",
        "Domain": "www.mydomain.com",
        "Page": "/index",
        "QueryString": "",
        "IPAddress": "127.0.0.1",
        "Guid": "10cb6886-cb5c-46f8-94ed-4b0d45a5e9ca",
        "MetaData": {
            "Version": "1",
            "CreatedDate": "2012-11-09T15:11:03.5669038Z",
            "UpdatedDate": "2012-11-09T15:11:03.5669038Z",
            "DeletedDate": null
        }
    },
    "ResultId": "Results/1",
    "ProductCode": "280",
    "MetaData": {
        "Version": "1",
        "CreatedDate": "2012-11-09T15:14:26.1332596Z",
        "UpdatedDate": "2012-11-09T15:14:26.1332596Z",
        "DeletedDate": null
    }
}
Is it possible (and if so, how?), to create a Map index from my application, which defines the Click class as follows?
class Click
{
    public Guid Guid { get; set; }
    public int ProductCode { get; set; }
    public DateTime CreatedDate { get; set; }
}
Or would my class have to look like this (where the custom types are defined as a subset of the properties on the document above, with matching property names)?
class Click
{
    public Visit Visit { get; set; }
    public int ProductCode { get; set; }
    public MetaData MetaData { get; set; }
}
UPDATE
Following on from the answer below, here's the code I managed to get working.
Index
public class Clicks_ByVisitGuidAndProductCode : AbstractIndexCreationTask
{
    public override IndexDefinition CreateIndexDefinition()
    {
        return new IndexDefinition
        {
            Map =
                "from click in docs.Clicks select new { Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate }",
            TransformResults =
                "results.Select(click => new { Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate })"
        };
    }
}
Query
var query = _documentSession.Query<ReportClick, Clicks_ByVisitGuidAndProductCode>()
    .Customize(x => x.WaitForNonStaleResultsAsOfNow())
    .Where(x => x.CreatedDate >= start.Date && x.CreatedDate < end.Date);
where Click is
public class Click
{
    public Guid Guid { get; set; }
    public int ProductCode { get; set; }
    public DateTime CreatedDate { get; set; }
}
Many thanks @MattJohnson.
If the shape is a partial match, it will fill in where it can, so your second example would work OK.
You can, however, create an index that projects the results you're showing in your first example. You would map by whatever you actually were going to filter or sort by, and then you would add a TransformResults section:
TransformResults = (database, clicks) =>
    from click in clicks
    select new
    {
        click.Visit.Guid,
        click.ProductCode,
        click.MetaData.CreatedDate
    };
When you query this index, the results will come out in the shape you specified in the transform. This is a feature called "Live Projections", which you can read more about here. (You won't need an .As() call; just use .Query<Click, YourIndex>() and it should work fine.)
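For instance, a query through that index might look like this (a sketch; the index and class names follow the examples in the question's UPDATE above):

var clicks = session.Query<Click, Clicks_ByVisitGuidAndProductCode>()
    .Where(c => c.CreatedDate >= start && c.CreatedDate < end)
    .ToList(); // results come back already shaped by the TransformResults projection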
Separately, what you are doing with MetaData is extraneous. Raven keeps metadata separate from the document. Read more on metadata here.
It looks like you have versioning concerns. If you are just keeping an audit trail, you should look at Raven's standard Versioning Bundle. If you have temporal effectivity concerns, consider using my new Temporal Versioning Bundle.

Does Fluent NHibernate lazy load an IList from Criteria?

I want to make a kind of "news feed" in my company's application.
In this scenario, user actions will generate "Activity" entries of different kinds, and other users will see them in their "news feed".
However, an "Activity" is not related to all users, and to determine the relation we have a complex piece of code.
Here is my Activity class
public class Activity : IActivity
{
    public virtual int Id { get; set; }
    public virtual ActivityType Type { get; set; }
    public virtual User User { get; set; }

    public virtual bool IsVisibleToUser(User userLook)
    {
        // Complex business calculation etc.
        return true;
    }
}
I want to get the latest 10 news items that are visible to the user. But since the Activity table will be quite large and performance is an issue, I want to follow best practice here.
What I am about to do is get the last 25 Activities and check whether that fills the list shown to the user. For example, if only 5 Activities are visible to the user, I will fetch another 25, and so on.
IList<Activity> resultList = session.CreateCriteria(typeof(Activity))
    .SetMaxResults(25)
    .AddOrder(Order.Desc("Id"))
    .List<Activity>();
I want to know: if I get the whole list ordered by Id and check one by one whether each is visible to the user, would NHibernate load only the objects I actually use, or not?
IList<Activity> resultList = session.CreateCriteria(typeof(Activity))
    .AddOrder(Order.Desc("Id"))
    .List<Activity>();

int count = 0;
foreach (Activity act in resultList)
{
    if (act.IsVisibleToUser(CurrentUser))
    {
        count++;
        // Do something with act
        if (count == 10)
            break;
    }
}
EDIT:
Here is the ActivityMap for the Activity model.
public class ActivityMap : ClassMap<Activity>
{
    public ActivityMap()
    {
        Id(x => x.Id);
        Map(x => x.Type).CustomType(typeof(Int32));
        References(x => x.User).Nullable();
    }
}
If your question is about what the generated SQL would look like, my guess would be:
SELECT
    this_.Id as Id0_0_,
    this_.ActivityType as ActivityType0_0_,
    -- Other fields
FROM dbo.Activity this_
WHERE
    -- condition
ORDER BY
    -- condition
Since you have mentioned that the Activity count is huge, you can make use of ICriteria's SetFirstResult and SetMaxResults.
SetFirstResult(int) indicates the index of the first item that you wish to obtain, and SetMaxResults(int) indicates the number of rows you wish to get (25 in your case).
The List() call would load all the matching records into memory at once.
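A sketch of the "fetch 25 at a time until 10 are visible" loop using those methods (the page size and names are illustrative, reusing the session and CurrentUser from the question):

List<Activity> visible = new List<Activity>();
const int pageSize = 25;
int pageIndex = 0;

while (visible.Count < 10)
{
    IList<Activity> page = session.CreateCriteria(typeof(Activity))
        .AddOrder(Order.Desc("Id"))
        .SetFirstResult(pageIndex * pageSize)
        .SetMaxResults(pageSize)
        .List<Activity>();

    foreach (Activity act in page)
    {
        if (act.IsVisibleToUser(CurrentUser) && visible.Count < 10)
            visible.Add(act);
    }

    if (page.Count < pageSize) // ran out of rows
        break;

    pageIndex++;
}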
[UPDATE] If you need the records to be returned one by one, make use of Enumerable():
If you expect your query to return a very large number of objects, but you don't expect to use them all, you might get better performance from the Enumerable() methods, which return a System.Collections.IEnumerable. The iterator will load objects on demand, using the identifiers returned by an initial SQL query (n+1 selects total).
Source - Link
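As a sketch, the streaming version would go through IQuery rather than ICriteria, since Enumerable() lives on IQuery (HQL shown; it assumes the same session and CurrentUser as above):

IEnumerable<Activity> stream = session
    .CreateQuery("from Activity a order by a.Id desc")
    .Enumerable<Activity>();

int found = 0;
foreach (Activity act in stream)
{
    // objects are hydrated on demand as we iterate
    if (act.IsVisibleToUser(CurrentUser) && ++found == 10)
        break;
}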
No, the List() method pulls everything into memory at once.