I want to publish Sitefinity MediaContent (Document, Image, and Video in the Sitefinity Library). Looking at the documentation, they give an example using WorkflowManager:
//Publish the DocumentLibraries item. The live version acquires new ID.
var bag = new Dictionary<string, string>();
bag.Add("ContentType", typeof(Document).FullName);
WorkflowManager.MessageWorkflow(masterDocumentId, typeof(Document), null, "Publish", false, bag);
The code snippet above is taken from their documentation.
The problem is that the underlying implementation of WorkflowManager uses HttpContext to check whether the current user has the required privilege. The snippet below is from decompiling Telerik.Sitefinity.dll:
public static string MessageWorkflow(System.Guid itemId, System.Type itemType, string providerName, string operationName, bool isCheckedOut, System.Collections.Generic.Dictionary<string, string> contextBag)
{
    ....
    dictionary.Add("operationName", operationName);
    dictionary.Add("itemId", itemId);
    dictionary.Add("providerName", providerName);
    dictionary.Add("isCheckedOut", isCheckedOut);
    dictionary.Add("contextBag", contextBag);
    dictionary.Add("culture", System.Globalization.CultureInfo.CurrentCulture.Name);
    dictionary.Add("httpContext", System.Web.HttpContext.Current);
    if (workflowDefinition != null)
    {
        dictionary.Add("workflowDefinitionId", workflowDefinition.Id);
    }
    contextBag.Add("userHostAddress", System.Web.HttpContext.Current.Request.UserHostAddress);
    ...
}
As shown above, it calls System.Web.HttpContext.Current at least twice, which is bad, especially if I have execution deferred using HostingEnvironment.QueueBackgroundWorkItem, or am using Quartz, which obviously runs outside HttpContext.Current.
My question is: is there another way to publish Sitefinity MediaContent in Elevated Mode that does not depend on HttpContext at all?
Currently I am using Sitefinity 9.2; as far as I am aware, the APIs above also exist in Sitefinity 7.3.
"As shown above it at least calls System.Web.HttpContext.Current twice which is bad." - this is most probably an issue with the decompiler - I am sure they are not calling it twice.
For Elevated Mode you can use this, but it will still require HttpContext
...
SystemManager.RunWithElevatedPrivilege(p =>
{
WorkflowManager.MessageWorkflow(id, itemType, null, "Publish", false, bag);
});
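If the publish has to run outside a request (QueueBackgroundWorkItem, Quartz), a workaround that is sometimes used - a hedged sketch, not an official Sitefinity API - is to assign a throwaway HttpContext.Current on the background thread before messaging the workflow. This assumes the code still runs inside the ASP.NET application domain; SimpleWorkerRequest and HttpContext are standard System.Web types:
// Sketch only: satisfy WorkflowManager's HttpContext.Current dependency
// on a background thread. Assumes we are still inside the ASP.NET appdomain.
HostingEnvironment.QueueBackgroundWorkItem(ct =>
{
    HttpContext.Current = new HttpContext(
        new SimpleWorkerRequest("dummy.aspx", string.Empty, new StringWriter()));
    try
    {
        SystemManager.RunWithElevatedPrivilege(p =>
        {
            var bag = new Dictionary<string, string>
            {
                { "ContentType", typeof(Document).FullName }
            };
            WorkflowManager.MessageWorkflow(masterDocumentId, typeof(Document), null, "Publish", false, bag);
        });
    }
    finally
    {
        HttpContext.Current = null; // don't leak the fake context
    }
});
Whether the workflow's privilege checks accept this synthetic context depends on the Sitefinity version, so treat it as a starting point rather than a guaranteed fix.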
In the app I'm working on, I'm using MediatR and its pipelines to handle database interaction, some minor business logic, validation, etc.
There are a few checks, for things like access control, that I can handle in the pipeline, since I'm using a context object as described here https://jimmybogard.com/sharing-context-in-mediatr-pipelines/ to go from ASP.NET identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check it. It seems to me the validation shouldn't have side effects, so I don't want to rely on that to populate the context object. But then that pushes a bunch of validation down into the Mediatr handlers as they check for object existence, and so on, leading to a lot of repeated code. I don't really want to query the database multiple times since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fail I throw a ValidationException, which is then caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
1. Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, as well as needing to decorate the requests somehow (or use reflection) so the pipeline can know to attempt to load these objects.
2. Have the queries in the validator, but use some sort of cache-aware repository so that when the same object is queried later, it's served from the cache and not the database. The handlers would also use this cache-aware repository (currently the handlers interact directly with the EF Core DbContext to query). This adds the issue of cache invalidation, which I'm going to have to handle at some point anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
3. Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one size fits all for problems like this, but I would like to know if there's additional options I'm missing, or any long term maintainability issues anyone with a similar pipeline has encountered if they went with one of these listed options.
We use MediatR's IRequestPreProcessor for fetching data that we need both in the RequestHandler and in FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
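For completeness, a sketch of the wiring this relies on (hedged: depending on your MediatR registration, assembly scanning may already pick up the binder; otherwise register the open generic explicitly, which works in containers that respect generic constraints, as Microsoft.Extensions.DependencyInjection does since .NET Core 3.0):
// MediatR's built-in RequestPreProcessorBehavior resolves all
// IRequestPreProcessor<TRequest> instances before the handler runs.
services.AddMediatR(typeof(Startup));
services.AddTransient(typeof(IRequestPreProcessor<>), typeof(ProductByIdBinder<>));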
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(
            IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
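For this to work, the validators have to run after the pre-processor has bound the product. A typical way to do that is a FluentValidation pipeline behavior (a hedged sketch, assuming MediatR's RequestPreProcessorBehavior is registered ahead of it; the signature matches the MediatR 7/8-era IPipelineBehavior contract):
// Runs validators once pre-processing has populated the request.
public class ValidationBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    private readonly IEnumerable<IValidator<TRequest>> validators;

    public ValidationBehavior(IEnumerable<IValidator<TRequest>> validators)
    {
        this.validators = validators;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
    {
        var failures = validators
            .Select(v => v.Validate(request))
            .SelectMany(result => result.Errors)
            .Where(failure => failure != null)
            .ToList();

        if (failures.Count != 0)
            throw new ValidationException(failures);

        return await next();
    }
}
If you prefer not to use exceptions as flow control, the same behavior can short-circuit and return a failure response instead of throwing.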
I see nothing wrong with handling business-logic validation in the handler layer. Moreover, I do not think it is right to throw exceptions for these failures; as you said, that is exceptions as flow control.
Introducing a cache seems like overkill for the use case too. The most reasonable option is the third, IMHO.
Instead of implementing an interface you can use the nifty OneOf library and have something like
// Note: type arguments in a using alias must be fully qualified
// (Success and NotFound come from OneOf.Types).
using HandlerResponse = OneOf.OneOf<OneOf.Types.Success, OneOf.Types.NotFound, ValidationResponse>;

public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        // _userRepository injected via constructor (omitted for brevity)
        Resource resource = await _userRepository
            .GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API Layer like
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(
        new Command(request.Id));

    return response.Match<IActionResult>(
        success => StatusCode(201), // 201 Created
        notFound => NotFound(),
        failed => new UnprocessableEntityResult(failed.ProblemDetails));
}
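Success and NotFound ship in OneOf.Types; ValidationResponse is your own carrier type, for example (an illustrative sketch):
// Minimal custom failure type carrying the ProblemDetails to return.
public class ValidationResponse
{
    public ValidationResponse(ProblemDetails problemDetails)
    {
        ProblemDetails = problemDetails;
    }

    public ProblemDetails ProblemDetails { get; }
}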
My goal
I want to create a new IdentityUser and show all the users already created, through the same Blazor page. This page has:
a form through which you will create an IdentityUser
a third-party grid component (DevExpress Blazor DxDataGrid) that shows all users via the UserManager.Users property. This component accepts an IQueryable as its data source.
Problem
When I create a new user through the form (1), I get the following concurrency error:
InvalidOperationException: A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread-safe.
I think the problem is related to the fact that CreateAsync(IdentityUser user) and UserManager.Users refer to the same DbContext.
The problem isn't related to the third-party component, because I can reproduce the same problem by replacing it with a simple list.
Steps to reproduce the problem
create a new Blazor server-side project with authentication
change Index.razor to the following code:
@page "/"

<h1>Hello, world!</h1>

number of users: @Users.Count()

<button @onclick="@(async () => await Add())">click me</button>

<ul>
    @foreach (var user in Users)
    {
        <li>@user.UserName</li>
    }
</ul>

@code {
    [Inject] UserManager<IdentityUser> UserManager { get; set; }

    IQueryable<IdentityUser> Users;

    protected override void OnInitialized()
    {
        Users = UserManager.Users;
    }

    public async Task Add()
    {
        await UserManager.CreateAsync(new IdentityUser { UserName = $"test_{Guid.NewGuid().ToString()}" });
    }
}
What I noticed
If I change Entity Framework provider from SqlServer to Sqlite then the error will never show.
System info
ASP.NET Core 3.1.0 Blazor Server-side
Entity Framework Core 3.1.0 based on SqlServer provider
What I have already seen
Blazor A second operation started on this context before a previous operation completed: the solution proposed doesn't work for me because even if I change my DbContext scope from Scoped to Transient, I am still using the same instance of UserManager, and it contains the same instance of DbContext
other people on StackOverflow suggest creating a new instance of DbContext per request. I don't like this solution because it goes against Dependency Injection principles. Anyway, I can't apply it here because the DbContext is wrapped inside UserManager
create a generator of DbContext: this solution is much like the previous one.
Using Entity Framework Core with Blazor
Why I want to use IQueryable
I want to pass an IQueryable as the data source for my third-party component because it can apply pagination and filtering directly to the query. Furthermore, IQueryable is sensitive to CUD operations.
UPDATE (08/19/2020)
Here you can find the documentation about how to use Blazor and EFCore together
UPDATE (07/22/2020)
The EF Core team introduced DbContextFactory in Entity Framework Core .NET 5 Preview 7:
[...] This decoupling is very useful for Blazor applications, where using IDbContextFactory is recommended, but may also be useful in other scenarios.
If you are interested you can read more at Announcing Entity Framework Core (EF Core) 5.0 Preview 7
UPDATE (07/06/2020)
Microsoft released a new interesting video about Blazor (both models) and Entity Framework Core. Please take a look at 19:20, they are talking about how to manage concurrency problem with EFCore
General solution
I asked Daniel Roth about this problem (BlazorDeskShow - 2:24:20) and it seems to be a Blazor server-side problem by design.
DbContext's default lifetime is set to Scoped. So if you have at least two components on the same page trying to execute an async query at the same time, you will encounter the exception:
InvalidOperationException: A second operation started on this context before a previous operation completed. Any instance members are not guaranteed to be thread-safe.
There are two workarounds for this problem:
(A) set DbContext's lifetime to Transient
services.AddDbContext<ApplicationDbContext>(opt =>
    opt.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")),
    ServiceLifetime.Transient);
(B) as Carl Franklin suggested (after my question): create a singleton service with a static method that returns a new instance of DbContext.
Anyway, each solution works because it creates a new instance of DbContext.
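A sketch of what (B) might look like (names and the connection-string handling are illustrative, not Carl Franklin's exact code):
// Singleton-friendly provider: builds its own options, hands out fresh contexts.
public static class DbContextProvider
{
    private static readonly DbContextOptions<ApplicationDbContext> Options =
        new DbContextOptionsBuilder<ApplicationDbContext>()
            .UseSqlServer("<connection string>")
            .Options;

    public static ApplicationDbContext CreateContext() => new ApplicationDbContext(Options);
}
Callers dispose each context themselves (e.g. with a using block), which is what makes this equivalent to the Transient registration.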
About my problem
My problem wasn't strictly related to DbContext but to UserManager<TUser>, which has a Scoped lifetime. Setting DbContext's lifetime to Transient didn't solve my problem, because ASP.NET Core creates a new instance of UserManager<TUser> when I open the session for the first time, and it lives until I close the session. That UserManager<TUser> is used by two components on the same page, so we have the same problem described before:
two components that share the same UserManager<TUser> instance, which contains a transient DbContext.
Currently, I solve this problem with another workaround:
I don't use UserManager<TUser> directly; instead, I create a new instance of it through IServiceProvider, and then it works. I am still looking for a way to change UserManager's lifetime instead of going through IServiceProvider.
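In case it helps, this is roughly what that workaround looks like (a sketch; CreateScope and GetRequiredService come from Microsoft.Extensions.DependencyInjection):
@code {
    [Inject] IServiceProvider ServiceProvider { get; set; }

    public async Task Add()
    {
        // Each scope gets its own UserManager and therefore its own DbContext.
        using (var scope = ServiceProvider.CreateScope())
        {
            var userManager = scope.ServiceProvider.GetRequiredService<UserManager<IdentityUser>>();
            await userManager.CreateAsync(new IdentityUser { UserName = $"test_{Guid.NewGuid()}" });
        }
    }
}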
Tip: pay attention to your services' lifetimes.
This is what I learned. I don't know whether it is all correct or not.
I downloaded your sample and was able to reproduce your problem. The problem occurs because Blazor will re-render the component as soon as you await in code called from an EventCallback (i.e. your Add method).
public async Task Add()
{
    await UserManager.CreateAsync(new IdentityUser { UserName = $"test_{Guid.NewGuid().ToString()}" });
}
If you add a System.Diagnostics.Debug.WriteLine to the start of Add and to the end of Add, and then also add one at the top of your Razor page and one at the bottom, you will see the following output when you click your button.
// First render
Start: BuildRenderTree
End: BuildRenderTree
// Button clicked
Start: Add
(This is where the await occurs)
Start: BuildRenderTree
Exception thrown
You can prevent this mid-method re-render like so:
private bool MayRender = true;

protected override bool ShouldRender() => MayRender;

public async Task Add()
{
    MayRender = false;
    try
    {
        await UserManager.CreateAsync(new IdentityUser { UserName = $"test_{Guid.NewGuid().ToString()}" });
    }
    finally
    {
        MayRender = true;
    }
}
This will prevent re-rendering while your method is running. Note that if you define Users as IdentityUser[] Users you will not see this problem, because the array is not set until after the await has completed and is not lazily evaluated, so there is no reentrancy.
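In other words, something like this avoids the reentrancy at the cost of losing the IQueryable (a sketch; ToArrayAsync is the EF Core extension from Microsoft.EntityFrameworkCore):
IdentityUser[] Users = Array.Empty<IdentityUser>();

protected override async Task OnInitializedAsync()
{
    // Materialized once; re-renders only enumerate the in-memory array.
    Users = await UserManager.Users.ToArrayAsync();
}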
I believe you want to use IQueryable<T> because you need to pass it to 3rd-party components. The problem is, different components can be rendered on different threads, so if you pass IQueryable<T> to other components then
they might render on different threads and cause the same problem, and
they most likely will have an await in the code that consumes the IQueryable<T>, so you'll have the same problem again.
Ideally, what you need is for the 3rd party component to have an event that asks you for data, giving you some kind of query definition (page number etc). I know Telerik Grid does this, as do others.
That way you can do the following
Acquire a lock
Run the query with the filter applied
Release the lock
Pass the results to the component
You cannot use lock() in async code, so you'd need to use something like SpinLock to lock a resource.
private SpinLock Lock = new SpinLock();

private async Task<WhatTelerikNeeds> ReadData(SomeFilterFromTelerik filter)
{
    bool gotLock = false;
    while (!gotLock) Lock.Enter(ref gotLock);
    try
    {
        // Materialize the query while we hold the lock
        var result = await ApplyFilter(MyDbContext.Users, filter).ToArrayAsync().ConfigureAwait(false);
        return new WhatTelerikNeeds(result);
    }
    finally
    {
        Lock.Exit();
    }
}
Perhaps not the best approach, but rewriting the async method as non-async fixes the problem:
public void Add()
{
    Task.Run(async () =>
        await UserManager.CreateAsync(new IdentityUser { UserName = $"test_{Guid.NewGuid().ToString()}" }))
        .Wait();
}
It ensures that the UI is updated only after the new user is created.
The whole code for Index.razor:
@page "/"
@inherits OwningComponentBase<UserManager<IdentityUser>>

<h1>Hello, world!</h1>

number of users: @Users.Count()

<button @onclick="@Add">click me. I work if you use Sqlite</button>

<ul>
    @foreach (var user in Users.ToList())
    {
        <li>@user.UserName</li>
    }
</ul>

@code {
    IQueryable<IdentityUser> Users;

    protected override void OnInitialized()
    {
        Users = Service.Users;
    }

    public void Add()
    {
        Task.Run(async () => await Service.CreateAsync(new IdentityUser { UserName = $"test_{Guid.NewGuid().ToString()}" })).Wait();
    }
}
I found your question while looking for answers about the same error message you had.
My concurrency issue appears to have been due to a change that triggered a re-rendering of the visual tree at the same time as (or due to the fact that) I was trying to call DbContext.SaveChangesAsync().
I solved this by overriding my component's ShouldRender() method like this:
protected override bool ShouldRender()
{
    if (_updatingDb)
    {
        return false;
    }
    else
    {
        return base.ShouldRender();
    }
}
I then wrapped my SaveChangesAsync() call in code that sets a private bool field _updatingDb appropriately:
try
{
    _updatingDb = true;
    await DbContext.SaveChangesAsync();
}
finally
{
    _updatingDb = false;
    StateHasChanged();
}
The call to StateHasChanged() may or may not be necessary, but I've included it just in case.
This fixed my issue, which was related to selectively rendering a bound input tag or just text depending on if the data field was being edited. Other readers may find that their concurrency issue is also related to something triggering a re-render. If so, this technique may be helpful.
Well, I have quite a similar scenario, and the way I 'solved' mine was to move everything from OnInitializedAsync() to OnAfterRenderAsync():
protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        // Your code from OnInitializedAsync()
        StateHasChanged();
    }
}
It seemed solved, but I couldn't prove why. My guess is that skipping the initialization lets the component finish building first, and then we can go further.
/****************************** Update ********************************/
I'm still facing the problem; it seems the solution I gave above was wrong. When I checked Blazor A second operation started on this context before a previous operation completed, my problem became clear: I'm actually dealing with a lot of component initializations that perform DbContext operations. As @dani_herrera mentioned, if you have more than one component executing Init at a time, the problem will probably appear.
I took his advice and changed my DbContext service to Transient, and I got away from the problem.
@Leonardo Lurci covered this conceptually. If you are not yet ready to move to the .NET 5.0 preview, I would recommend looking at the NuGet package 'EFCore.DbContextFactory'; the documentation is pretty neat. Essentially it emulates AddDbContextFactory. Of course, it creates a context per component.
So far, this is working fine for me without any problems...
I ensure single-threaded access by only interacting with my DbContext via new DbContext.Invoke/InvokeAsync methods, which use a SemaphoreSlim to ensure only a single operation is performed at a time.
I chose SemaphoreSlim because you can await it.
Instead of this
return Db.Users.FirstOrDefaultAsync(x => x.EmailAddress == emailAddress);
do this
return Db.InvokeAsync(() => ...the query above...);
// Add the following members to your DbContext
private SemaphoreSlim Semaphore { get; } = new SemaphoreSlim(1);

public TResult Invoke<TResult>(Func<TResult> action)
{
    Semaphore.Wait();
    try
    {
        return action();
    }
    finally
    {
        Semaphore.Release();
    }
}

public async Task<TResult> InvokeAsync<TResult>(Func<Task<TResult>> action)
{
    await Semaphore.WaitAsync();
    try
    {
        return await action();
    }
    finally
    {
        Semaphore.Release();
    }
}

public Task InvokeAsync(Func<Task> action) =>
    InvokeAsync<object>(async () =>
    {
        await action();
        return null;
    });

public void InvokeAsync(Action action) =>
    InvokeAsync(() =>
    {
        action();
        return Task.CompletedTask;
    });
@Leonardo Lurci has a great answer with multiple solutions to the problem. I will give my opinion on each solution and on which I think is the best one.
Making DbContext transient - it is a solution, but it is not optimized for these cases.
Carl Franklin's suggestion - the singleton service cannot control the lifetime of the context, and depends on the service requester to dispose the context after use.
In the Microsoft documentation, they talk about injecting a DbContext factory into a component that implements IDisposable, so the context is disposed when the component is destroyed. This is not a very good solution, because it comes with a lot of problems; for example, if you perform a context operation and leave the component before it finishes, the context is disposed and an exception is thrown.
Finally, the best solution so far is to inject the DbContext factory into the component, but whenever you need a context, create a new instance with a using statement like below:
// "Something" stands in for whichever entity set you are querying.
public async Task<List<Something>> GetSomething()
{
    using var context = DbFactory.CreateDbContext();
    return await context.Something.ToListAsync();
}
Since the DbContext factory is optimized for creating new context instances, there is no significant overhead, making it a better choice and better performing than a Transient context. It also disposes the context at the end of the method because of the using statement.
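For reference, a sketch of the wiring this relies on (EF Core 5's AddDbContextFactory, with the factory injected into the component; names follow the earlier snippets):
// Startup.ConfigureServices
services.AddDbContextFactory<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

// In the component
[Inject] IDbContextFactory<ApplicationDbContext> DbFactory { get; set; }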
Hope it was useful.
I am using MvcSiteMapProvider by Maarten Balliauw with Ninject DI in MVC4. Since this is a large-scale web app, enumerating over the records to generate the sitemap XML accounts for 70% of the page load time. For that reason, I went for using new sitemap files for each level-n dynamic node provider.
<mvcSiteMap xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-4.0" xsi:schemaLocation="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-4.0 MvcSiteMapSchema.xsd">
  <mvcSiteMapNode title="$resources:SiteMapLocalizations,HomeTitle" description="$resources:SiteMapLocalizations,HomeDescription" controller="Controller1" action="Home" changeFrequency="Always" updatePriority="Normal" metaRobotsValues="index follow noodp noydir">
    <mvcSiteMapNode title="$resources:SiteMapLocalizations,AboutTitle" controller="ConsumerWeb" action="Aboutus"/>
    <mvcSiteMapNode title="Sitemap" controller="Consumer1" action="SiteMap"/>
    <mvcSiteMapNode title=" " action="Action3" controller="Consumer2" dynamicNodeProvider="Comp.Controller.Utility.NinjectModules.PeopleBySpecDynamicNodeProvider, Comp.Controller.Utility" />
    <mvcSiteMapNode title="" siteMapFile="~/Mvc2.sitemap"/>
  </mvcSiteMapNode>
</mvcSiteMap>
But it doesn't seem to work. At localhost:XXXX/sitemap.xml, the child nodes from Mvc2.sitemap don't load.
siteMapFile is not a valid XML attribute in MvcSiteMapProvider (although you could use it as a custom attribute), so I am not sure what guide you are following to do this. But the bottom line is that there is no feature that loads "child sitemap files", and even if there were, it wouldn't help with your issue because all of the nodes are loaded into memory at once. Realistically, on an average server there is an upper limit of around 10,000 - 15,000 nodes.
The problem that you describe is a known issue. There are some tips available in issue #258 that may or may not help.
We are working on a new XML sitemap implementation that will allow you to connect the XML sitemap directly to your data source, which can be used to circumvent this problem (at least as far as the XML sitemap is concerned). This implementation is stream-based and has paging that can be tied directly to the data source, and it will seamlessly page over multiple tables, so it is very efficient. However, although there is a working prototype, it is still some time off from being made into a release.
If you need it sooner rather than later, you are welcome to grab the prototype from this branch.
You will need some code to wire it into your application (this is subject to change for the official release). I have created a demo project here.
Application_Start
var registrar = new MvcSiteMapProvider.Web.Routing.XmlSitemapFeedRouteRegistrar();
registrar.RegisterRoutes(RouteTable.Routes, "XmlSitemap2");
XmlSitemap2Controller
using MvcSiteMapProvider.IO;
using MvcSiteMapProvider.Web.Mvc;
using MvcSiteMapProvider.Xml.Sitemap.Configuration;
using System.Web.Mvc;

public class XmlSitemap2Controller : Controller
{
    private readonly IXmlSitemapFeedResultFactory xmlSitemapFeedResultFactory;

    public XmlSitemap2Controller()
    {
        var builder = new XmlSitemapFeedStrategyBuilder();
        var xmlSitemapFeedStrategy = builder
            .SetupXmlSitemapProviderScan(scan => scan.IncludeAssembly(this.GetType().Assembly))
            .AddNamedFeed("default", feed => feed.WithMaximumPageSize(5000).WithContent(content => content.Image().Video()))
            .Create();

        var outputCompressor = new HttpResponseStreamCompressor();
        this.xmlSitemapFeedResultFactory = new XmlSitemapFeedResultFactory(xmlSitemapFeedStrategy, outputCompressor);
    }

    public ActionResult Index(int page = 0, string feedName = "")
    {
        var name = string.IsNullOrEmpty(feedName) ? "default" : feedName;
        return this.xmlSitemapFeedResultFactory.Create(page, name);
    }
}
IXmlSitemapProvider
And you will need one or more IXmlSitemapProvider implementations. For convenience, there is a base class XmlSitemapProviderBase. These are similar to creating controllers in MVC.
using MvcSiteMapProvider.Xml.Sitemap;
using MvcSiteMapProvider.Xml.Sitemap.Specialized;
using System;
using System.Linq;

public class CategoriesXmlSitemapProvider : XmlSitemapProviderBase, IDisposable
{
    private EntityFramework.MyEntityContext db = new EntityFramework.MyEntityContext();

    // This is optional. Don't override it if you don't want to use last modified date.
    public override DateTime GetLastModifiedDate(string feedName, int skip, int take)
    {
        // Get the latest date in the specified page
        return db.Category.OrderBy(x => x.Id).Skip(skip).Take(take).Max(c => c.LastUpdated);
    }

    public override int GetTotalRecordCount(string feedName)
    {
        // Get the total record count for all pages
        return db.Category.Count();
    }

    public override void GetUrlEntries(IUrlEntryHelper helper)
    {
        // Do not call ToList() on the query. The idea is that we want to force
        // Entity Framework to use a DataReader rather than loading all of the data
        // at once into RAM.
        var categories = db.Category
            .OrderBy(x => x.Id)
            .Skip(helper.Skip)
            .Take(helper.Take);

        foreach (var category in categories)
        {
            var entry = helper.BuildUrlEntry(string.Format("~/Category/{0}", category.Id))
                .WithLastModifiedDate(category.LastUpdated)
                .WithChangeFrequency(MvcSiteMapProvider.ChangeFrequency.Daily)
                .AddContent(content => content.Image(string.Format("~/images/category-image-{0}.jpg", category.Id)).WithCaption(category.Name));

            helper.SendUrlEntry(entry);
        }
    }

    public void Dispose()
    {
        db.Dispose();
    }
}
Note that there is currently no IXmlSitemapProvider implementation that reads the nodes from the default (or any) SiteMap, but creating one is similar to what is shown above, except you would query the SiteMap for nodes instead of a database for records.
Alternatively, you could use a 3rd-party XML sitemap generator, although nearly all of them are set up in a non-scalable way for large sites, and most leave it up to you to handle the paging. If they aren't streaming the nodes, they will not realistically scale to more than a few thousand URLs.
The other detail you might need to take care of is to use the forcing a match technique to reduce the total number of nodes in the SiteMap. If you are using the Menu and/or SiteMap HTML helpers, you will need to leave all of your high-level nodes alone. But any node that does not appear in either is a good candidate for this. Realistically, nearly any data-driven site can be reduced to a few dozen nodes using this technique, but keep in mind every node that is forced to match multiple routes in the SiteMap means that individual URL entries will need to be added in the XML sitemap.
My Silverlight solution has 3 projects:
Silverlight part (client)
Web part (server)
Entity model (I maintain the edmx along with the metadata in a separate project)
The metadata file is a partial class with the relevant data annotation validations.
[MetadataTypeAttribute(typeof(User.UserMetadata))]
public partial class User
{
    [CustomValidation(typeof(UsernameValidator), "IsUsernameAvailable")]
    public string UserName { get; set; }
}
Now my question is where I need to put this UsernameValidator class.
If my metadata class and edmx were on the server side (Web), then I know I would need to create a .shared.cs class in my web project and add the proper static method.
My IsUsernameAvailable method in turn will call a domain service method as part of async validation:
[Invoke]
public bool IsUsernameAvailable(string username)
{
    return !Membership.FindUsersByName(username).Cast<MembershipUser>().Any();
}
If my metadata class were in the same project as my domain service, then I could call the domain service method from my UsernameValidator.Shared.cs class.
But here my entity models and metadata are in a separate library.
Any ideas will be appreciated.
Jeff wonderfully explained async validation here:
http://jeffhandley.com/archive/2010/05/26/asyncvalidation-again.aspx
but that works only when your model, metadata, and shared class are all on the server side.
There is a kind of hack to do this. It is not a clean way to do it, but this is how it would probably work.
Because the .shared.cs file only takes care of code generation, the compiler doesn't complain about certain compile errors inside the #if brackets of the code. So what you can do is create a Validator.Shared.cs in any project and just make sure it gets generated to the Silverlight side.
Add the following code, and don't forget the namespaces.
#if SILVERLIGHT
using WebProject.Web.Services;
using System.ServiceModel.DomainServices.Client;
#endif
#if SILVERLIGHT
UserContext context = new UserContext();
InvokeOperation<bool> availability = context.DoesUserExist(username);
//code ommited. use what logic you want, maybe Jeffs post.
#endif
The compiler ignores this code on the server side because it does not meet the condition of the #if statement. Meanwhile, on the Silverlight client side, it recompiles the shared validator, where it DOES meet the condition of the #if statement.
Like I said, this is NOT a clean way to do it, and you might have trouble with missing namespaces. You need to resolve them in the non-generated Validator.Shared.cs to finally get it working in Silverlight. If you do this right, you can have the validation in Silverlight with invoke operations, but not in your project with models and metadata like you would with Jeff's post.
Edit: I found a cleaner and better way.
You can create a partial class on the Silverlight client side and do the following:
public partial class User
{
    partial void OnUserNameChanging(string value)
    {
        // must be new to check for this validation rule
        if (EntityState == EntityState.New)
        {
            var ctx = new UserContext();
            ctx.IsValidUserName(value).Completed += (s, args) =>
            {
                InvokeOperation invop = (InvokeOperation)s;
                bool isValid = (bool)invop.Value;
                if (!isValid)
                {
                    ValidationResult error = new ValidationResult(
                        "Username already exists",
                        new string[] { "UserName" });
                    ValidationErrors.Add(error);
                }
            };
        }
    }
}
OnUserNameChanging is a partial method generated by WCF RIA Services, so it can easily be implemented in a partial class, and you can add out-of-band validation like this. This is a much cleaner way to do it, but the validation still only exists on the Silverlight client side.
Hope this helps
I have a route defined last in my ASP.NET MVC 2 app that maps old URLs that are no longer used to the appropriate new URLs to redirect to. This route returns the action and controller responsible for actually performing the redirect, and it also returns a URL to that controller action, which is the URL to redirect to. Since the route is responsible for generating the new URL to redirect to, it calls the router to get the appropriate URLs. This worked just fine with .NET 3.5, but when I upgraded to .NET 4, the GetVirtualPath method throws a System.Threading.LockRecursionException: "Recursive read lock acquisitions not allowed in this mode.". The following code resolves the problem but is pretty ugly:
public static string GetActionUrl(HttpContextBase context, string routeName, object routeValues)
{
    RequestContext requestContext = new RequestContext(context, new RouteData());
    VirtualPathData vp = null;
    try
    {
        vp = _Routes.GetVirtualPath(requestContext, routeName, new RouteValueDictionary(routeValues));
    }
    catch (System.Threading.LockRecursionException)
    {
        var tmpRoutes = new RouteCollection();
        Router.RegisterRoutes(tmpRoutes);
        vp = tmpRoutes.GetVirtualPath(requestContext, routeName, new RouteValueDictionary(routeValues));
    }

    if (vp == null)
        throw new Exception(String.Format("Could not find named route {0}", routeName));

    return vp.VirtualPath;
}
Does anybody know what changes in .Net 4 might have caused this error? Also, is calling a route from another route's GetRouteData method just bad practice and something I should not be doing at all?
As you figured out, it is not supported to call a route in the global route table from within another route in the global route table.
The global route table is a thread-safe collection that allows multiple readers or a single writer. Unfortunately, the code you have was never supported, even in .NET 3.5, though in some scenarios it may have coincidentally worked.
As a general note, routes should function independently of one another, so I'm not sure what your scenario is here.
v3.5 RouteCollection uses the following code:
private ReaderWriterLockSlim _rwLock;

public IDisposable GetReadLock()
{
    this._rwLock.EnterReadLock();
    return new ReadLockDisposable(this._rwLock);
}
v4.0 RouteCollection uses the following code:
private ReaderWriterLock _rwLock;

public IDisposable GetReadLock()
{
    this._rwLock.AcquireReaderLock(-1);
    return new ReadLockDisposable(this._rwLock);
}
GetRouteData(HttpContextBase httpContext) in both versions uses the following code:
public RouteData GetRouteData(HttpContextBase httpContext)
{
    ...
    using (this.GetReadLock())
    {
GetVirtualPath uses the same logic.
The ReaderWriterLock used in v4.0 does not allow recursive read locks by default, which is why the error is occurring.
Copy the routes to a new RouteCollection for the second query, or change the lock's mode by reflecting in.
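A slightly cleaner variant of the first option is to build the secondary RouteCollection once up front instead of waiting for the exception (a sketch built from the question's own Router.RegisterRoutes helper):
// A private mirror of the route table used only for URL generation,
// so GetVirtualPath never re-enters the global table's read lock.
private static readonly RouteCollection _urlGenerationRoutes = BuildUrlGenerationRoutes();

private static RouteCollection BuildUrlGenerationRoutes()
{
    var routes = new RouteCollection();
    Router.RegisterRoutes(routes); // same registrations as the global table
    return routes;
}

public static string GetActionUrl(HttpContextBase context, string routeName, object routeValues)
{
    var requestContext = new RequestContext(context, new RouteData());
    var vp = _urlGenerationRoutes.GetVirtualPath(requestContext, routeName, new RouteValueDictionary(routeValues));
    if (vp == null)
        throw new Exception(String.Format("Could not find named route {0}", routeName));
    return vp.VirtualPath;
}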