Broken WF4 workflow rehydration - WCF

Consider a WF4 project running in IIS, with a single workflow definition (xamlx) and a SqlInstanceStore for persistence.
Instead of hosting the xamlx directly, we host a WorkflowServiceHostFactory which spins up a dedicated WorkflowServiceHost on a separate endpoint for every customer.
This has been running fine for a while, until we required a new version of the workflow definition, so now on top of Flow.xamlx I have Flow1.xamlx.
Since all interactions with the workflow service are wrapped with business logic which is smart enough to identify the required version, this homebrew versioning works fine for newly started workflows (both on Flow.xamlx and Flow1.xamlx).
However, workflows started before this change fail to be reactivated (on a post, the service host throws an UnknownMessageReceived exception).
Since WF isn't overly verbose in telling you WHY it can't reactivate the workflow (wrong version, instance not found, lock, etc), we attached a SQL profiler to the database.
It turns out the 'WorkflowServiceType' the WorkflowServiceHost uses in its queries is different from the stored instances' WorkflowServiceType. Likely this is why it fails to detect the persisted instance.
Since I'm pretty sure I instantiate the same xamlx, I can't understand where this value is coming from. What parameters go into the calculation of this Guid, does the environment (site name) matter, and what can I do to reactivate the workflow?

In the end I decompiled System.Activities.DurableInstancing. The only setter for WorkflowHostType on SqlWorkflowInstanceStore was in ExtractWorkflowHostType:
private void ExtractWorkflowHostType(IDictionary<XName, InstanceValue> commandMetadata)
{
    InstanceValue instanceValue;
    if (commandMetadata.TryGetValue(WorkflowNamespace.WorkflowHostType, out instanceValue))
    {
        XName xName = instanceValue.Value as XName;
        if (xName == null)
        {
            throw FxTrace.Exception.AsError(new InstancePersistenceCommandException(SR.InvalidMetadataValue(WorkflowNamespace.WorkflowHostType, typeof(XName).Name)));
        }
        byte[] bytes = Encoding.Unicode.GetBytes(xName.ToString());
        base.Store.WorkflowHostType = new Guid(HashHelper.ComputeHash(bytes));
        this.fireRunnableInstancesEvent = true;
    }
}
I couldn't clearly disentangle the calling code path, so I had to find out at runtime by attaching WinDbg/SOS to IIS and breaking on HashHelper.ComputeHash.
I was able to retrieve the XName that goes into the hash calculation: its local name equals the service file name, and its namespace equals [sitename]/[path]/.
In the end the WorkflowHostType calculation comes down to:
var xName = XName.Get("Flow.xamlx.svc", "/examplesite/WorkflowService/1/");
var bytes = Encoding.Unicode.GetBytes(xName.ToString());
var WorkflowHostType = new Guid(HashHelper.ComputeHash(bytes));
Bottom line: apparently workflows can only be rehydrated when the service file name, site name and path are all identical (case-sensitively) to what they were when the workflow was started.
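To illustrate the case sensitivity concretely, here is a minimal sketch. HashHelper is internal to the framework, so MD5 below is purely a hypothetical stand-in for whatever 16-byte hash it computes; the point is only that two XNames differing in casing yield different WorkflowHostType Guids:

using System;
using System.Security.Cryptography;
using System.Text;
using System.Xml.Linq;

class WorkflowHostTypeSketch
{
    // Hypothetical stand-in for the internal HashHelper.ComputeHash.
    static Guid ComputeHostType(string serviceFile, string ns)
    {
        var xName = XName.Get(serviceFile, ns);
        byte[] bytes = Encoding.Unicode.GetBytes(xName.ToString());
        using (var md5 = MD5.Create())
            return new Guid(md5.ComputeHash(bytes)); // 16 bytes -> Guid
    }

    static void Main()
    {
        // Same service file, same site - the path differs only in casing.
        var original = ComputeHostType("Flow.xamlx.svc", "/examplesite/WorkflowService/1/");
        var recased = ComputeHostType("Flow.xamlx.svc", "/ExampleSite/WorkflowService/1/");
        Console.WriteLine(original == recased); // False: the persisted instance won't be found
    }
}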

Setting user credentials on aws instance using jclouds

I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failed errors when the client (Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client
responses = compute.runScriptOnNodesMatching(
    inGroup(groupName),                 // predicate used to select nodes
    exec(command),                      // what you actually intend to run
    overrideLoginCredentials(login)     // use my local user & ssh key
        .runAsRoot(false)               // don't attempt to run as root (sudo)
        .wrapInInitScript(false));
Some login information is injected into the instance with the following commands:
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', so I altered its code to
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is not currently the case, and it also breaks the functionality of running a script (via SSH), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions?
Note: while I am currently testing this on AWS, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described it.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, the AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that has been created with the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it will fall back to the catch block and return an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
Since the AdminAccess entity is immutable, the best and cleanest approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix-specific bits to accommodate the SSH setup on your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("api-key", "api-secret")
    .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
    .buildView(ComputeServiceContext.class);

Read SQL Server Broker messages and publish them using NServiceBus

I am very new to NServiceBus, and in one of our projects we want to accomplish the following:
1. Whenever table data is modified in SQL Server, construct a message and insert it in a SQL Server Service Broker queue
2. Read the broker queue message using NServiceBus
3. Publish the message again as another event so that other subscribers can handle it.
Now it is point 2 that I do not have much of a clue about how to get done.
I have referred to the following posts, after which I was able to get the message into the broker queue, but I was unable to integrate this with NServiceBus in our project, as the NServiceBus libraries used there are of an older version and many of the methods used are deprecated. So using them with current versions is getting very troublesome - or perhaps I was doing it in an improper way.
http://www.nullreference.se/2010/12/06/using-nservicebus-and-servicebroker-net-part-2
https://github.com/jdaigle/servicebroker.net
Any help on the correct way of doing this would be invaluable.
Thanks.
I'm using the current version of NServiceBus (5), VS2013 and SQL Server 2008. I created a Database Change Listener using this tutorial, which uses SQL Server Service Broker and SqlDependency to monitor the changes to a specific table. (NB: this may be deprecated in later versions of SQL Server.)
SqlDependency allows you to use a broad selection of the basic SQL functionality, although there are some restrictions that you need to be aware of. I modified the code from the tutorial slightly to provide better error information:
void NotifyOnChange(object sender, SqlNotificationEventArgs e)
{
    // Check for any errors
    if (@"Subscribe|Unknown".Contains(e.Type.ToString())) { throw _DisplayErrorDetails(e); }
    var dependency = sender as SqlDependency;
    if (dependency != null) dependency.OnChange -= NotifyOnChange;
    if (OnChange != null) { OnChange(); }
}
private Exception _DisplayErrorDetails(SqlNotificationEventArgs e)
{
    var messageMain = "useful error info";
    var messageInner = string.Format("Type:{0}, Source:{1}, Info:{2}", e.Type.ToString(), e.Source.ToString(), e.Info.ToString());
    if (@"Subscribe".Contains(e.Type.ToString()) && @"Invalid".Contains(e.Info.ToString()))
        messageInner += "\r\n\nThe subscriber says that the statement is invalid - check your SQL statement conforms to specified requirements (http://stackoverflow.com/questions/7588572/what-are-the-limitations-of-sqldependency/7588660#7588660).\n\n";
    return new Exception(messageMain, new Exception(messageInner));
}
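The DatabaseChangeListener itself comes from the tutorial and is not reproduced here. For orientation, a minimal sketch of the SqlDependency wiring such a listener relies on might look like the following (class and member names are illustrative, not the tutorial's exact code):

using System;
using System.Data.SqlClient;

// Illustrative sketch only - not the tutorial's actual class.
public class DatabaseChangeListenerSketch
{
    private readonly string _connectionString;
    public event Action OnChange;

    public DatabaseChangeListenerSketch(string connectionString)
    {
        _connectionString = connectionString;
        // Must be called once per AppDomain before any dependency is registered.
        SqlDependency.Start(_connectionString);
    }

    public void Start(string notifySql)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(notifySql, connection))
        {
            // Registering the dependency before execution asks Service Broker
            // to send a (one-shot) notification when the result set changes.
            var dependency = new SqlDependency(command);
            dependency.OnChange += (s, e) => { var h = OnChange; if (h != null) h(); };
            connection.Open();
            using (command.ExecuteReader()) { /* execute only to arm the subscription */ }
        }
    }
}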
I also created a project with a "database first" Entity Framework data model to allow me to do something with the changed data.
[The relevant part of] my NServiceBus project comprises two "Run as Host" endpoints: one publishes event messages, and the second handles them. The publisher implements IWantToRunWhenBusStartsAndStops, which instantiates the DBListener and passes it the SQL statement I want to run as my change monitor. The OnChange event is assigned an anonymous function that reads the changed data and publishes a message:
using statements
namespace Sample4.TestItemRequest
{
    public partial class MyExampleSender : IWantToRunWhenBusStartsAndStops
    {
        private string NOTIFY_SQL = @"SELECT [id] FROM [dbo].[Test] WITH(NOLOCK) WHERE ISNULL([Status], 'N') = 'N'";

        public void Start() { _StartListening(); }
        public void Stop() { throw new NotImplementedException(); }

        private void _StartListening()
        {
            var db = new Models.TestEntities();
            // Instantiate a new DBListener with the specified connection string
            var changeListener = new DatabaseChangeListener(ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString);
            // Assign the code within the braces to the DBListener's OnChange event
            changeListener.OnChange += () =>
            {
                /* START OF EVENT HANDLING CODE */
                // This uses LINQ against the EF data model to get the changed records
                IEnumerable<Models.TestItems> _NewTestItems = DataAccessLibrary.GetInitialDataSet(db);
                while (_NewTestItems.Count() > 0)
                {
                    foreach (var qq in _NewTestItems)
                    {
                        // Do some processing, if required
                        var newTestItem = new NewTestStarted() { ... set properties from qq object ... };
                        Bus.Publish(newTestItem);
                    }
                    // Because there might be a number of new rows added, I grab them in small batches until finished.
                    // Probably better to use RX to do this, but this will do for proof of concept
                    _NewTestItems = DataAccessLibrary.GetNextDataChunk(db);
                }
                changeListener.Start(string.Format(NOTIFY_SQL));
                /* END OF EVENT HANDLING CODE */
            };
            // Now everything has been set up.... start it running.
            changeListener.Start(string.Format(NOTIFY_SQL));
        }
    }
}
Important: the firing of the OnChange event causes the listener to stop monitoring; it is basically a single-shot notifier. After you have handled the event, the last thing to do is restart the DBListener. (You can see this in the line preceding the END OF EVENT HANDLING comment.)
You need to add a reference to System.Data and possibly System.Data.DataSetExtensions.
The project at the moment is still proof of concept, so I'm well aware that the above can be somewhat improved. Also bear in mind I had to strip out company specific code, so there may be bugs. Treat it as a template, rather than a working example.
I also don't know if this is the right place to put the code - that's partly why I'm on Stack Overflow today: to look for better examples of ServiceBus host code. Whatever the failings of my code, the solution works pretty effectively - so far - and meets your goals, too.
Don't worry too much about the ServiceBroker side of things. Once you have set it up, per the tutorial, SQLDependency takes care of the details for you.
The ServiceBroker Transport is very old and not supported anymore, as far as I can remember.
A possible solution would be to "monitor" the interesting tables from the endpoint code using something like a SqlDependency (http://msdn.microsoft.com/en-us/library/62xk7953(v=vs.110).aspx) and then push messages into the relevant queues.

JayData oData request with custom headers - ROUND 2

A few months back I was working on an OData WCF project and I had some problems with parsing custom headers for token auth (apiKey).
At that time, being quite a noob (still am!), I posted this SO question: JayData oData request with custom headers
Today I am working on a new project with the JayData OData server and client library, and this:
application.context.prepareRequest = function (r) {
    r[0].headers['apikey'] = '123456';
};
was working fine till I had to do a MERGE request. I found out that somehow the MERGE request was overriding my headers, so I investigated further.
At first it appears that in oDataProvider.js (~line 617), in the _saveRest method, the headers are not inherited:
request = {
    requestUri: this.providerConfiguration.oDataServiceHost + '/',
    headers: {
        MaxDataServiceVersion: this.providerConfiguration.maxDataServiceVersion
    }
};
but a few lines later we get:
this.context.prepareRequest.call(this, requestData);
which "should" call my own prepareRequest, but doesnt... Instead it still points to:
//Line 11302 jaydata.js
prepareRequest: function () { },
which of course does... nothing! Funnily enough, when you execute a simple GET, the same code, supposedly on the same context instance, works and points to my prepareRequest override.
I can assert with enough confidence that somehow the context between GET and MERGE is not the same instance. I can't see, however, any place where the context instance is reassigned.
Has anyone got a clue?
PS: this is NOT a CORS issue. My OPTIONS is passing fine and manually feeding the headers in oDataProvider works.
More
I followed the lead on different context instances and found something interesting: calling EntitySet.save() ends up calling the EntityContext constructor. See trace:
$data.Class.define.constructor (jaydata.js:10015)
EntityContext (VM110762:7)
Service (VM110840:8)
storeToken.factory (jaydata.js:14166)
$data.Class.define._getContextPromise (jaydata.js:13725)
$data.Class.define._getStoreContext (jaydata.js:13700)
$data.Class.define._getStoreEntitySet (jaydata.js:13756)
$data.Class.define.EntityInstanceSave (jaydata.js:13837)
$data.Entity.$data.Class.define.save (jaydata.js:9774)
(anonymous function) (app.js:162) // My save()
That explains why I get two different instances...
Hack
Replacing the prepareRequest function directly in the class definition works, but it's ugly!
For now I can cope with this:
$data.EntityContext.prototype.prepareRequest = function (r) {
    r[0].headers['apikey'] = '12345';
};
This works fine as long as you only need to talk to a single endpoint.
Final word based on my experience
As much as I like JayData, it is obvious that they created a monster and it's getting out of their hands (poor forum, no community, half-documented, ...).
I chose JD because I was lazy and wanted to keep working with my old WCF DataService. Switching to Web API seemed wrong or too much work for me.
Also, as a .NET dev, I liked the strong typing of my entities and the ability to work with a concrete model generated from the JD tools. In the end, however, it was adding confusion: every time my server-side model changed I had to fetch the new metadata and scaffold a new entityModel.
I ended up switching to Web API and migrated my data service layer to Breeze. And seriously, it's a breeze to work with!
The documentation is absolutely brilliant, and here on SO you can always count on Ward or Jay Traband to reply with a very high level of professionalism.
In the end, I realize this should probably be more of a wiki entry than a question...

Determine request Uri from WCF Data Services LINQ query for FirstOrDefault against Azure without executing it?

Problem
I would like to trace the Uri that will be generated by a LINQ query executed against a Microsoft.WindowsAzure.StorageClient.TableServiceContext object. TableServiceContext just extends System.Data.Services.Client.DataServiceContext with a couple of properties.
The issue I am having is that the query executes fine against our Azure Table Storage instance when we run the web role on a dev machine in debug mode (we are connecting to Azure storage in the cloud not using Dev Storage). I can get the resulting query Uri using Fiddler or just hovering over the statement in the debugger.
However, when we deploy the web role to Azure, the query fails against the exact same Azure Table Storage source with a ResourceNotFound DataServiceClientException. We have had ResourceNotFound errors before that dealt with the behavior of FirstOrDefault() on empty tables. That is not the problem here.
As one approach to the problem, I wanted to compare the query Uri that is being generated when the web role is deployed versus when it is running on a dev machine.
Question
Does anyone know a way to get the query Uri for the query that will be sent when the FirstOrDefault() method is called? I know that you can call ToString() on the IQueryable returned from the TableServiceContext, but my concern is that when FirstOrDefault() is called the Uri might be further optimized, and ToString() on the IQueryable might not be what is ultimately sent to the server.
If someone has another approach to the problem I am open to suggestions. It seems to be a general problem with LINQ when trying to determine what will happen when the expression tree is finally evaluated. I am open to suggestions here as well because my LINQ skills could use some improvement.
Sample Code
public void AddSomething(string ProjectID, string Username) {
    TableServiceContext context = new TableServiceContext();
    var qry = context.Somethings.Where(m => m.RowKey == Username
        && m.PartitionKey == ProjectID);
    System.Diagnostics.Trace.TraceInformation(qry.ToString());
    // ^ Here I would like to trace the Uri that will be generated
    // and sent to the server when the qry.FirstOrDefault() call below is executed.
    if (qry.FirstOrDefault() == null) {
        // ^ This statement generates an error when the web role is running
        // in the fabric
        ...
    }
}
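For reference, one way to observe the Uri that is actually sent (including the single-entity form that FirstOrDefault() can produce) is the SendingRequest event that TableServiceContext inherits from DataServiceContext; it fires with the concrete WebRequest just before it goes on the wire. A minimal sketch, hooked inside AddSomething before the query runs:

// Fires for every outgoing request on this context, whether it ends up
// as a $filter query or a single-entity lookup.
context.SendingRequest += (sender, e) =>
    System.Diagnostics.Trace.TraceInformation("Outgoing Uri: " + e.Request.RequestUri);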
Edit: Update and Answer
Steve provided the right answer. Our problem was exactly as described in this post, which describes an issue with PartitionKey/RowKey ordering in single-entity queries that was fixed with an update to the Azure OS. This explains the discrepancy between our dev machines and the web role deployed to Azure.
When I indicated we had dealt with the ResourceNotFound issue before in our existence checks, we had dealt with it in two ways in our code: one way was using exception handling to deal with the ResourceNotFound error; the other was to put the RowKey first in the LINQ query (as some MS people had indicated was appropriate).
It turns out we have several places where the RowKey was first instead of using the exception handling. We will address this by refactoring our code to target .NET 4 and using the IgnoreResourceNotFoundException = true property of the TableServiceContext.
Lesson learned (more than once): Don't depend on quirky undocumented behavior.
Aside
We were able to get the query Uris. They did turn out to be different (as the blog post indicated they would be). Here are the results:
Query Uri from Dev Fabric
https://ourproject.table.core.windows.net/Somethings()?$filter=(RowKey eq 'test19@gmail.com') and (PartitionKey eq '41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
Query Uri from Azure Fabric
https://ourproject.table.core.windows.net/Somethings(RowKey='test19@gmail.com',PartitionKey='41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
I can do one better... I think I know what the problem is. :)
See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/26/how-wcf-data-service-changes-in-os-1-4-affects-windows-azure-table-clients.aspx.
Specifically, it used to be the case (in previous Guest OS builds) that writing the query as you did, with the RowKey predicate before the PartitionKey predicate, resulted in a filter query, while the reverse (PartitionKey preceding RowKey) resulted in the kind of single-entity query that raises an exception if the result set is empty.
I think the right fix for you (as indicated in the above blog post) is to set the IgnoreResourceNotFoundException to true on your context.
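That property is a one-line change on the context (available when targeting .NET 4). A minimal sketch, reusing the names from the question's sample (which assumes its parameterless TableServiceContext wrapper):

TableServiceContext context = new TableServiceContext();
// With this set, an empty single-entity lookup surfaces as a null result
// from FirstOrDefault() instead of throwing ResourceNotFound.
context.IgnoreResourceNotFoundException = true;
var item = context.Somethings
    .Where(m => m.PartitionKey == ProjectID && m.RowKey == Username)
    .FirstOrDefault(); // null when the entity does not exist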

Executing untrusted code

I'm building a C# application which uses plug-ins. The application must guarantee to the user that plug-ins will not do whatever they want on the user's machine, and that they will have fewer privileges than the application itself (for example, the application can access its own log files, whereas plug-ins cannot).
I considered three alternatives.
Using System.AddIn. I tried this alternative first, because it seemed much more powerful, but I'm really disappointed by the need to modify the same code seven times in seven different projects each time I want to change something. Besides, there is a huge number of problems to solve even for a simple Hello World application.
Using System.Activator.CreateInstance(assemblyName, typeName). This is what I used in the preceding version of the application. I can't use it anymore, because it does not provide a way to restrict permissions.
Using System.Activator.CreateInstance(AppDomain domain, [...]). That's what I'm trying to implement now, but it seems that the only way to do that is to pass through an ObjectHandle, which requires serialization for every class used. However, plug-ins contain WPF UserControls, which are not serializable.
So is there a way to create plug-ins containing UserControls or other non-serializable objects and to execute those plug-ins with a custom PermissionSet?
One thing you could do is set the current AppDomain's policy level to a restricted permission set and add evidence markers to restrict based on strong name or location. The easiest would probably be to require that plugins live in a specific directory and give them a restrictive policy.
e.g.
public static void SetRestrictedLevel(Uri path)
{
    PolicyLevel appDomainLevel = PolicyLevel.CreateAppDomainLevel();
    // Create simple root policy, normally with FullTrust
    PolicyStatement fullPolicy = new PolicyStatement(appDomainLevel.GetNamedPermissionSet("FullTrust"));
    UnionCodeGroup policyRoot = new UnionCodeGroup(new AllMembershipCondition(), fullPolicy);
    // Build restricted permission set
    PermissionSet permSet = new PermissionSet(PermissionState.None);
    permSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
    PolicyStatement permissions = new PolicyStatement(permSet, PolicyStatementAttribute.Exclusive);
    policyRoot.AddChild(new UnionCodeGroup(new UrlMembershipCondition(path.ToString()), permissions));
    appDomainLevel.RootCodeGroup = policyRoot;
    AppDomain.CurrentDomain.SetAppDomainPolicy(appDomainLevel);
}
static void RunPlugin()
{
    try
    {
        SetRestrictedLevel(new Uri("file:///c:/plugins/*"));
        Assembly a = Assembly.LoadFrom("file:///c:/plugins/ClassLibrary.dll");
        Type t = a.GetType("ClassLibrary.TestClass");
        /* Will throw an exception */
        t.InvokeMember("DoSomething", BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Static,
            null, null, null);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}
Of course this isn't rigorously tested, and CAS policy is notoriously complex, so there is always a risk that this code might allow some things to bypass the policy. YMMV :)
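A note on the serialization obstacle raised in the question: ObjectHandle only forces serialization for objects crossing the AppDomain boundary by value. A bootstrapper deriving from MarshalByRefObject crosses by reference instead, so the plugin can be activated inside a sandboxed domain without any of its types being serializable (getting a live WPF UserControl across the boundary remains a separate problem). A minimal sketch using the .NET 4 sandboxing overload of AppDomain.CreateDomain; all paths and type names are hypothetical:

using System;
using System.Security;
using System.Security.Permissions;

// Must derive from MarshalByRefObject so it crosses the AppDomain
// boundary by reference - no serialization involved.
public class PluginHost : MarshalByRefObject
{
    public void LoadAndRun(string assemblyPath, string typeName)
    {
        // Executes inside the sandbox, under the restricted grant set.
        object plugin = Activator.CreateInstanceFrom(assemblyPath, typeName).Unwrap();
        // ... talk to the plugin through a shared interface rather than
        // passing non-serializable objects across the boundary ...
    }
}

public static class SandboxLoader
{
    public static void Run()
    {
        // Grant set: execution only.
        var permSet = new PermissionSet(PermissionState.None);
        permSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

        // The PluginHost assembly must be loadable from the sandbox's ApplicationBase.
        var setup = new AppDomainSetup { ApplicationBase = @"c:\plugins" };
        AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox", null, setup, permSet);

        // Returns a transparent proxy to the remote PluginHost, not a serialized copy.
        var host = (PluginHost)sandbox.CreateInstanceAndUnwrap(
            typeof(PluginHost).Assembly.FullName, typeof(PluginHost).FullName);
        host.LoadAndRun(@"c:\plugins\SomePlugin.dll", "SomePlugin.PluginClass");
    }
}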