I want to create a workflow that starts when a new project is created. The workflow should create approval tasks for a group of users. The number of users in the group may change, so I use a ReplicatorActivity and set its InitialChildData to this group. Inside the Replicator I have a createTaskActivity which creates a task for each user in the group.
I followed the Microsoft tutorial on MSDN and it works fine; my workflow diagram matches the tutorial's diagram.
In the Replicator's ChildInitialized method I set the properties for the approval tasks, which I pass to the TaskActivity:
private void replicateTasks_ChildInitialized(object sender, ReplicatorChildEventArgs e)
{
    spTaskActivity1.TaskTitle = "New Project Approve";
    spTaskActivity1.TaskDescription = "Approve the project";
    spTaskActivity1.TaskAssignedTo = e.InstanceData.ToString();
    spTaskActivity1.TaskDueDate = DateTime.Today.AddDays(7);
}
In the TaskActivity I set these properties when the task is created:
private void CreateApprovalTask_Invoking(object sender, EventArgs e)
{
    // Create the task
    TaskId = Guid.NewGuid();
    TaskProp.Title = TaskTitle;
    TaskProp.Description = TaskDescription;
    TaskProp.AssignedTo = TaskAssignedTo;
    TaskProp.StartDate = DateTime.Today;
    TaskProp.DueDate = TaskDueDate;
}
Everything works: tasks are created and all their properties are correct and non-empty.
Problems appeared when I added a projectSequence to my workflow and moved the ReplicatorActivity inside it, because I want the workflow to start when a project is created. Now the workflow does start on project creation, but the Replicator creates tasks with empty properties! The number of tasks is correct and equal to the number of users.
While debugging I can see that all the properties are null, even though the ChildInitialized method has executed.
What am I doing wrong?
Is the Replicator's ExecutionType set to Parallel? If so, I suggest creating a custom activity that carries all the properties and placing that inside the Replicator. It works for me, although I'm still looking for another approach that doesn't require building a custom activity.
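If you'd rather avoid the custom activity, a commonly suggested variant is to set the properties on the replicated child instance exposed by e.Activity instead of on the designer field, which all replicas share. This is a sketch only - CreateTaskActivity stands in for whatever type your task activity actually is:

private void replicateTasks_ChildInitialized(object sender, ReplicatorChildEventArgs e)
{
    // e.Activity is this replica's own copy of the child activity.
    // Assigning to the designer field (spTaskActivity1) only touches the
    // template, so with parallel execution the values can be lost by the
    // time each child runs.
    var taskActivity = (CreateTaskActivity)e.Activity; // placeholder type name

    taskActivity.TaskTitle = "New Project Approve";
    taskActivity.TaskDescription = "Approve the project";
    taskActivity.TaskAssignedTo = e.InstanceData.ToString();
    taskActivity.TaskDueDate = DateTime.Today.AddDays(7);
}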
I have a batch job in AX 2012 R2 that, essentially, iterates over a table and creates an instance of a class (one that extends RunBaseBatch) which gets added as a task.
I also have some post-processing items I need to do after all the tasks have completed.
So far, the following is working:
while select stagingTable where stagingTable.OperationNo == params.paramOperationNo()
{
    batchHeader = this.getCurrentBatchHeader();
    batchTask = OperationTask::construct();
    batchHeader.addRuntimeTask(batchTask, this.getCurrentBatchTask().RecId);
}
batchHeader.save();

postTask = PostProcessingTask::construct();
batchHeader.addRuntimeTask(postTask, this.getCurrentBatchTask().RecId);
batchHeader.addDependency(postTask, batchTask, BatchDependencyStatus::FinishedOrError);
batchHeader.save();
My thinking is that this adds a dependency so that the post-processing task doesn't start until we get Finished or Error on the last task added in the loop. What I get instead is the exception "The dependency could not be created because task '' does not exist."
I'm not sure what I'm missing; the tasks all get added and execute successfully, it just seems that the dependency doesn't want to work.
Several things. Where this code is called from matters: is the code already running in batch? Is it being called in doBatch() before or after the super() call? Etc.
You have a while-select; does it create multiple batch tasks? If it does, then you need to create a dependency on each batch task object (as the EDIT below does). That is one problem I see. If your while-select statement only selects one record and adds one task, then the problem is something else, but you shouldn't use a while-select to select one record.
Also, you call batchHeader.save(); twice. I'd probably remove the first call. I'd need to see what is instantiating your code.
Where you have this.getCurrentBatchTask().RecId, depending on whether your code is running in batch or not, try replacing it with BatchHeader::getCurrentBatchTask().RecId.
And where you have batchHeader = this.getCurrentBatchHeader();, replace it with batchHeader = BatchHeader::getCurrentBatchHeader();.
EDIT: Try this code (fix whatever is needed to make it compile):
BatchHeader   batchHeader = BatchHeader::getCurrentBatchHeader();
Set           set = new Set(Types::Class);
SetEnumerator se;
BatchTask     batchTask;
PostTask      postTask;

while select stagingTable where stagingTable.OperationNo == params.paramOperationNo()
{
    batchTask = OperationTask::construct();
    set.add(batchTask);
    batchHeader.addRuntimeTask(batchTask, BatchHeader::getCurrentBatchTask().RecId);
}

// Create the post-processing task
postTask = PostProcessingTask::construct();
batchHeader.addRuntimeTask(postTask, BatchHeader::getCurrentBatchTask().RecId);

// Create a dependency on every task added in the loop
se = set.getEnumerator();
while (se.moveNext())
{
    batchTask = se.current(); // task to make the post task dependent on
    batchHeader.addDependency(postTask, batchTask, BatchDependencyStatus::FinishedOrError);
}

batchHeader.save();
I am very new to NServiceBus, and in one of our projects we want to accomplish the following:
1. Whenever table data is modified in SQL Server, construct a message and insert it into a SQL Server Service Broker queue.
2. Read the broker queue message using NServiceBus.
3. Publish the message again as another event so that other subscribers can handle it.
Now it is point 2 that I don't have much of a clue how to get done.
I have referred to the following posts, after which I was able to put the message in the broker queue, but I was unable to integrate this with NServiceBus in our project: the NServiceBus libraries used there are of an older version and many of the methods used are deprecated, so using them with the current versions gets very troublesome - or perhaps I was going about it in an improper way.
http://www.nullreference.se/2010/12/06/using-nservicebus-and-servicebroker-net-part-2
https://github.com/jdaigle/servicebroker.net
Any help on the correct way of doing this would be invaluable.
Thanks.
I'm using the current version of NServiceBus (5), VS2013 and SQL Server 2008. I created a Database Change Listener using this tutorial, which uses SQL Server Service Broker and SqlDependency to monitor changes to a specific table. (NB: this may be deprecated in later versions of SQL Server.)
SqlDependency lets you use a broad selection of the basic SQL functionality, although there are some restrictions you need to be aware of. I modified the code from the tutorial slightly to provide better error information:
void NotifyOnChange(object sender, SqlNotificationEventArgs e)
{
    // Check for any errors
    if (@"Subscribe|Unknown".Contains(e.Type.ToString())) { throw _DisplayErrorDetails(e); }

    var dependency = sender as SqlDependency;
    if (dependency != null) dependency.OnChange -= NotifyOnChange;
    if (OnChange != null) { OnChange(); }
}

private Exception _DisplayErrorDetails(SqlNotificationEventArgs e)
{
    var message = "useful error info";
    var messageInner = string.Format("Type:{0}, Source:{1}, Info:{2}",
        e.Type.ToString(), e.Source.ToString(), e.Info.ToString());

    if (@"Subscribe".Contains(e.Type.ToString()) && @"Invalid".Contains(e.Info.ToString()))
        messageInner += "\r\n\nThe subscriber says that the statement is invalid - check that your SQL statement conforms to the specified requirements (http://stackoverflow.com/questions/7588572/what-are-the-limitations-of-sqldependency/7588660#7588660).\n\n";

    return new Exception(message, new Exception(messageInner));
}
I also created a project with a "database first" Entity Framework data model to let me do something with the changed data.
[The relevant part of] my NServiceBus project comprises two "Run as Host" endpoints: one publishes event messages, and the second handles them. The publisher implements IWantToRunWhenBusStartsAndStops, which instantiates the DBListener and passes it the SQL statement I want to run as my change monitor. The OnChange event is given an anonymous function that reads the changed data and publishes a message:
// using statements

namespace Sample4.TestItemRequest
{
    public partial class MyExampleSender : IWantToRunWhenBusStartsAndStops
    {
        public IBus Bus { get; set; } // property-injected by NServiceBus

        private string NOTIFY_SQL = @"SELECT [id] FROM [dbo].[Test] WITH(NOLOCK) WHERE ISNULL([Status], 'N') = 'N'";

        public void Start() { _StartListening(); }
        public void Stop() { throw new NotImplementedException(); }

        private void _StartListening()
        {
            var db = new Models.TestEntities();

            // Instantiate a new DBListener with the specified connection string
            var changeListener = new DatabaseChangeListener(ConfigurationManager.ConnectionStrings["TestConnection"].ConnectionString);

            // Assign the code within the braces to the DBListener's OnChange event
            changeListener.OnChange += () =>
            {
                /* START OF EVENT HANDLING CODE */

                // This uses LINQ against the EF data model to get the changed records
                IEnumerable<Models.TestItems> _NewTestItems = DataAccessLibrary.GetInitialDataSet(db);

                while (_NewTestItems.Count() > 0)
                {
                    foreach (var qq in _NewTestItems)
                    {
                        // Do some processing, if required
                        var newTestItem = new NewTestStarted() { ... set properties from qq object ... };
                        Bus.Publish(newTestItem);
                    }

                    // Because there might be a number of new rows added, I grab them in
                    // small batches until finished. Probably better to use Rx for this,
                    // but it will do for proof of concept.
                    _NewTestItems = DataAccessLibrary.GetNextDataChunk(db);
                }

                // Restart the listener - the notification is single-shot (see below)
                changeListener.Start(NOTIFY_SQL);

                /* END OF EVENT HANDLING CODE */
            };

            // Now everything has been set up... start it running.
            changeListener.Start(NOTIFY_SQL);
        }
    }
}
Important: when the OnChange event fires, the listener stops monitoring; it is basically a single-shot notifier. After you have handled the event, the last thing to do is restart the DBListener (you can see this in the line preceding the END OF EVENT HANDLING CODE comment).
You need to add a reference to System.Data and possibly System.Data.DataSetExtensions.
The project is still a proof of concept at the moment, so I'm well aware that the above can be improved. Also bear in mind that I had to strip out company-specific code, so there may be bugs. Treat it as a template rather than a working example.
I also don't know if this is the right place to put the code - that's partly why I'm on Stack Overflow today: to look for better examples of ServiceBus host code. Whatever the failings of my code, the solution works pretty effectively - so far - and meets your goals too.
Don't worry too much about the Service Broker side of things. Once you have set it up per the tutorial, SqlDependency takes care of the details for you.
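For reference, here is a rough sketch of what a DatabaseChangeListener-style class does internally, assuming the tutorial's general shape (the class and member names here are illustrative, not the tutorial's exact API):

using System;
using System.Data.SqlClient;

public class MinimalChangeListener
{
    private readonly string _connectionString;
    public event Action OnChange;

    public MinimalChangeListener(string connectionString)
    {
        _connectionString = connectionString;
        // Requires Service Broker to be enabled on the target database.
        SqlDependency.Start(_connectionString);
    }

    public void Start(string notifySql)
    {
        // Each call registers a one-shot notification subscription.
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(notifySql, conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, args) =>
            {
                var handler = OnChange;
                if (handler != null) handler();
            };
            conn.Open();
            // The command must actually execute for the subscription to register.
            cmd.ExecuteReader().Dispose();
        }
    }
}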
The ServiceBroker Transport is very old and not supported anymore, as far as I can remember.
A possible solution would be to "monitor" the interesting tables from the endpoint code using something like a SqlDependency (http://msdn.microsoft.com/en-us/library/62xk7953(v=vs.110).aspx) and then push messages into the relevant queues.
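For what it's worth, a minimal sketch of that suggestion, assuming an NServiceBus 5-style endpoint and reusing the DatabaseChangeListener idea from the answer above (TableRowChanged is an illustrative event type, not part of any library):

public class TableMonitor : IWantToRunWhenBusStartsAndStops
{
    private const string ConnectionString = "...";   // your connection string here
    private const string NotifySql =
        "SELECT [id] FROM [dbo].[Test] WHERE ISNULL([Status], 'N') = 'N'";

    public IBus Bus { get; set; } // property-injected by NServiceBus

    private DatabaseChangeListener _listener;

    public void Start()
    {
        _listener = new DatabaseChangeListener(ConnectionString);
        _listener.OnChange += () =>
        {
            // Publish an event for other subscribers to handle,
            // then re-arm the single-shot listener.
            Bus.Publish(new TableRowChanged());
            _listener.Start(NotifySql);
        };
        _listener.Start(NotifySql);
    }

    public void Stop() { }
}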
I noticed this in the debug environment, where I have to do many re-installs in order to test persistent data storage, initial settings, etc. It may not be relevant in production, but I mention it anyway to inform other developers.
Any files created by an app in its App Folder are not 'visible' to queries after a manual uninstall/reinstall (from the IDE, for instance). The same applies to the 'encoded DriveId' - it is no longer valid.
It is probably 'by design', but it effectively creates 'orphans' in the app folder until they are manually cleaned via 'drive.google.com > Manage Apps > [yourapp] > Options > Delete hidden app data'. It also creates a problem if an app relies on finding files by metadata, title, etc., since these seem to be gone. As I said, not a production problem, but it can create some frustration during development.
Can any friendly Googlers confirm this? Is there any other way to get to these files after a reinstall?
Try this approach:
Use requestSync() in onConnected() as:
@Override
public void onConnected(Bundle connectionHint) {
    super.onConnected(connectionHint);
    Drive.DriveApi.requestSync(getGoogleApiClient()).setResultCallback(syncCallback);
}
Then, in its callback, query the contents of the drive using:
final private ResultCallback<Status> syncCallback = new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (!status.isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        Query query = new Query.Builder()
                .addFilter(Filters.and(Filters.eq(SearchableField.TITLE, "title"),
                        Filters.eq(SearchableField.TRASHED, false)))
                .build();
        Drive.DriveApi.query(getGoogleApiClient(), query)
                .setResultCallback(metadataCallback);
    }
};
Then, in its callback, if found, retrieve the file using:
final private ResultCallback<DriveApi.MetadataBufferResult> metadataCallback =
        new ResultCallback<DriveApi.MetadataBufferResult>() {
    @SuppressLint("SetTextI18n")
    @Override
    public void onResult(@NonNull DriveApi.MetadataBufferResult result) {
        if (!result.getStatus().isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        MetadataBuffer mdb = result.getMetadataBuffer();
        for (Metadata md : mdb) {
            Date createdDate = md.getCreatedDate();
            DriveId driveId = md.getDriveId();
            readFromDrive(driveId); // read each matching file
        }
        mdb.release(); // release the buffer when done
    }
};
Job done!
Hope that helps!
It looks like Google Play services has a problem (https://stackoverflow.com/a/26541831/2228408).
For testing, you can work around it by clearing the Google Play services data (Settings > Apps > Google Play services > Manage Space > Clear all data).
Or, at this time, you need to implement it using the Drive SDK v2.
I think you are correct that it is by design.
By inspection I have concluded that until an app places data in the AppFolder, Drive does not sync it down to the device, however much you try to hassle it. It is therefore impossible to check for the existence of an AppFolder placed by another device or by a prior installation. I'd assume this is to try to ensure a consistent clean install.
I can see a couple of strategies to work around this:
1) Place dummy data in the AppFolder, then sync and re-check.
2) Accept that in the first instance there is the possibility of duplicates: since by definition you cannot access the existing file, you will create a new copy, and then use custom metadata to come up with a scheme to differentiate like-named files and choose which one you want to keep (essentially, implement your conflict-merge strategy across the two different files).
I've done the second: I keep an update number to compare data from different devices and decide which version I want, and hence whether to upload, download or leave alone. As my data is a SQLite DB, I also have some code that only syncs once updates have settled down. I deliberately consider people updating two devices at once foolish, so the results are consistent but undefined as to which device will win.
I have a little problem. I'm trying to add a timer job following this tutorial: http://dotnetfinder.wordpress.com/2010/07/24/creatingcustomsharepointtimerjob2010/
I got to the point where my timer job is enabled and launches every five minutes.
The problem is that it doesn't execute the whole Execute method.
public override void Execute(Guid contentDbId)
{
    // get a reference to the current site collection's content database
    SPWebApplication webApplication = this.Parent as SPWebApplication;
    SPContentDatabase contentDb = webApplication.ContentDatabases[contentDbId];

    // get a reference to the "ListTimerJob" list in the RootWeb of the first site collection in the content database
    SPList Listjob = contentDb.Sites[0].RootWeb.Lists["Liens"];

    // create a new list item, set the Title to the current day/time, and update the item
    SPListItem newList = Listjob.Items.Add();
    //newList["URL"] = "http://" + DateTime.Now.ToString() + ".fr";
    //newList.Update();
}
I attached the debugger to OWSTIMER.EXE.
If I try to set a breakpoint on the line SPList Listjob = ..., it's fine, but if I try to set a breakpoint on the next line (SPListItem newList = ...) then I get the following message:
"The following breakpoint cannot be set: ... The CLR was unable to set the breakpoint."
Does anyone have any idea how I can make this work?
You seem to be attaching to the correct service. See How to: Debug a Timer Job to double check your steps.
Also, as pointed out in this comment, you should restart the timer service when deploying a timer job:
mark - February 11, 2011 at 4:41 am:
There is a very important step that needs to be completed with any timer project: you have to recycle the SharePoint timer service between deployments. The best way to do this is to add
net stop SPTimerV4 - Pre Build
net start SPTimerV4 - Post Build
to your SharePoint project. If you do not do the above, you will be puzzled as to why your code seems not to be up to date. The reason is that the timer service caches the assembly with your class. This can cost you hours of troubleshooting in trying to identify why your code does not deploy.
Make sure your project configuration is in Debug mode (in Release mode the compiler setting for optimized code is enabled, which prevents some breakpoints from being set). Refer to this blog post.
What is a good case/example for using ScheduledDisposable in Rx?
I like using CompositeDisposable and SerialDisposable, but when would you need ScheduledDisposable?
The logic of the Rx disposables is that code which performs some sort of set-up operation can return an IDisposable that anonymously contains the code that will do the associated clean-up at a later stage. If this pattern is used consistently, you can compose many disposables together to perform a single clean-up operation without any specific knowledge of what is being cleaned up.
The problem is that if that clean-up code needs to run on a certain thread, you need some way for a Dispose called on one thread to be marshalled to the required thread - and that's where ScheduledDisposable comes in.
The primary example is the SubscribeOn extension method, which uses ScheduledDisposable to ensure that the "unsubscribe" (i.e. the Dispose) is run on the same IScheduler that the Subscribe was run on.
This is important for the FromEventPattern extension method, for example, which attaches to and detaches from event handlers - and that must happen on the UI thread.
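To make that concrete, here is a rough, simplified sketch of how a SubscribeOn-style operator could use ScheduledDisposable (the real Rx implementation is more involved; MySubscribeOn is just an illustrative name):

using System;
using System.Reactive.Concurrency;
using System.Reactive.Disposables;
using System.Reactive.Linq;

public static class SubscribeOnSketch
{
    public static IObservable<T> MySubscribeOn<T>(this IObservable<T> source, IScheduler scheduler)
    {
        return Observable.Create<T>(observer =>
        {
            var subscription = new SingleAssignmentDisposable();

            // Perform the subscription itself on the given scheduler...
            scheduler.Schedule(() => subscription.Disposable = source.Subscribe(observer));

            // ...and wrap it so that Dispose (the "unsubscribe") is also
            // marshalled onto that same scheduler.
            return new ScheduledDisposable(scheduler, subscription);
        });
    }
}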
Here's an example of where you might use ScheduledDisposable directly:
var frm = new SomeForm();
frm.Text = "Operation Started.";

var sd = new ScheduledDisposable(
    new ControlScheduler(frm),
    Disposable.Create(() =>
        frm.Text = "Operation Completed."));

Scheduler.ThreadPool.Schedule(() =>
{
    // Long-running task
    Thread.Sleep(2000);
    sd.Dispose();
});
A little contrived, but it should show a reasonable example of how you'd use ScheduledDisposable.