Testing Dexterity content creation in isolation

For a project, I have a complex master object that contains a number of subcomponents. Setup of these objects is controlled by a Constructor interface, which I bind to various lifecycle and workflow events, like so:
@grok.subscribe(schema.ICustomFolder, lifecycleevent.IObjectAddedEvent)
def setup_custom_folder(folder, event):
    interfaces.IConstructor(folder).setup()

@grok.subscribe(schema.ICustomFolder, lifecycleevent.IObjectModifiedEvent)
def update_custom_folder(folder, event):
    interfaces.IConstructor(folder).update()
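(For context, here is a rough sketch of what such a Constructor interface might look like, inferred from the setup()/update() calls in this question; the actual definition isn't shown here:)

from zope.interface import Interface

class IConstructor(Interface):
    """Adapter that builds and maintains a content object's subcomponents."""

    def setup():
        """Create the initial subcomponents."""

    def update():
        """Bring existing subcomponents up to date after a modification."""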
What I'd like to be able to do is test the Constructor methods without relying on the event handlers. I've tried doing this by creating objects directly to avoid the lifecycle events:
def test_custom_item_constructor(self):
    master = createContent('model.master_object',
        needed_attribute=2
    )
    folder = createContent('model.custom_folder',
        __parent__=master
    )
    self.assertEqual(0, len(folder))
    constructor = interfaces.IConstructor(folder)
    constructor.setup()
    self.assertEqual(2, len(folder))
The setup method creates a number of items inside the Custom_Folder instance, the number depending on the attribute provided on the master object. However, this test hangs, which I think is because neither object actually belongs to the site, so there is no acquisition of permissions. I can get that by changing createContent on the master object to createContentInContainer and adding it to the appropriate part of the test site, but that triggers all of the lifecycle events, which end up making the Constructor calls, which doesn't let me test them in isolation.
I've tried using mock objects for this, but that got messy when dealing with the content creation that is meant to occur during Constructor.setup().
What's the best way to approach this?

I'm not sure if this is the best way, but I managed to get the result I wanted by disabling the relevant event handlers first and then creating the content properly within the site:
def test_custom_item_constructor(self):
    zope.component.getGlobalSiteManager().unregisterHandler(
        adapters.master.constructor.setup_masterobject,
        required=[schema.IMasterObject, lifecycleevent.IObjectAddedEvent]
    )
    zope.component.getGlobalSiteManager().unregisterHandler(
        adapters.custom.constructor.setup_customfolder,
        required=[schema.ICustomFolder, lifecycleevent.IObjectAddedEvent]
    )
    master = createContentInContainer(self.portal, 'model.master_object',
        needed_attribute=2
    )
    folder = createContentInContainer(master, 'model.custom_folder',
        __parent__=master
    )
    self.assertEqual(0, len(folder))
    constructor = interfaces.IConstructor(folder)
    constructor.setup()
    self.assertEqual(2, len(folder))
This was enough to disengage the chain of events triggered by the addition of a new master object.
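Since unregistering mutates the global site manager, other tests in the same layer will no longer see those handlers. It may be worth re-registering them afterwards; a minimal sketch, assuming the same handler references as above (the test class name is a stand-in):

def tearDown(self):
    # Restore the handlers removed in the test so the rest of the
    # layer sees the normal event chain again.
    gsm = zope.component.getGlobalSiteManager()
    gsm.registerHandler(
        adapters.master.constructor.setup_masterobject,
        required=[schema.IMasterObject, lifecycleevent.IObjectAddedEvent]
    )
    gsm.registerHandler(
        adapters.custom.constructor.setup_customfolder,
        required=[schema.ICustomFolder, lifecycleevent.IObjectAddedEvent]
    )
    super(TestConstructor, self).tearDown()  # hypothetical test class name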

Related

Camunda: Set Assignee to all UserTasks of the process instance

I have a requirement where I need to set assignees on all the user tasks in a process instance as soon as the instance is created, based on the candidate group set on each user task.
I tried getting the user tasks using this:
Collection<UserTask> userTasks = execution.getBpmnModelInstance().getModelElementsByType(UserTask.class);
which is correct in a way, but I am not able to set the assignees. Also, it looks like this applies to the process definition itself and not the process instance.
Secondly, I tried getting them from the TaskQuery, which gives me only the next task and not all the user tasks inside the process.
Please help!
It does not work that way. A process flow can be simplified to "a token moves through the BPMN diagram": only the current position of the token is relevant. So naturally, the task list only gives you the current task, not what could happen afterwards; you cannot know that anyway, because what if there is a gateway that continues differently based on the task outcome? So drop playing with the BPMN meta model and focus on the runtime.
You have two choices to dynamically assign user tasks:
1.) In the modeler, instead of hard-assigning the task to "a-user", use an expression like ${taskAssignment.assignTask(task)}, where "taskAssignment" is a bean that provides a method returning the user as a String.
2.) Add a TaskListener on the "create" event of the task and set the assignee in the listener.
For option 2 you can use the Camunda Spring Boot events (or the (outdated) camunda-bpm-reactor extension) to register one central component rather than adding a listener to every task.

WCF Data Services: ChangeInterceptor not firing for Update

I have a WCF Data Service (OData) that serves as the data repository for a larger system. I'm trying to fire off specific methods based on operations on Entities in the repository.
Specifically, if someone changes a Message record, I want to hook into the pipeline. I'm using ChangeInterceptors for this.
They work for Add and Delete. However, nothing fires when an entity is updated. I am concerned that the DbContext cannot resolve the fact that the entity has changed, since the request is stateless.
This does not trigger the handler:
var whatever = from m in Messages
               where m.MessageKey == 3
               select m;
whatever.First().UpdatedDate = DateTime.Now;
this.SaveChanges();
Has anyone else faced this problem?
So, I was trying to use AttachTo() to handle the fact that my record was detached. This flat out didn't work, and led to runtime exceptions like the following:
This operation requires the entity be of an Entity Type, and has at least one key property.
Parameter name: entity
At any rate, just use the update method and the change will be intercepted (and actually applied):
var whatever = (from m in Messages
                where m.MessageKey == 1
                select m).Single();
whatever.UpdatedDate = DateTime.Now;
this.UpdateObject(whatever);
this.SaveChanges();

How to instantiate a BSP controller manually

Initially I tried:
DATA: cl_rest_bo_list TYPE REF TO zcl_rm_rest_bulk_orders.
CREATE OBJECT cl_rest_bo_list.
cl_rest_bo_list->request->if_http_request~set_method( 'GET' ).
cl_rest_bo_list->do_request( ).
This resulted in an abend when accessing request, which was not initialized.
Then I tried to instantiate the request and the response:
DATA: cl_rest_bo_list TYPE REF TO zcl_rm_rest_bulk_orders.
DATA: cl_request TYPE REF TO cl_http_request.
DATA: cl_response TYPE REF TO cl_http_response.
CREATE OBJECT cl_rest_bo_list.
CREATE OBJECT cl_request.
CREATE OBJECT cl_response.
cl_request->if_http_request~set_method( 'GET' ).
cl_rest_bo_list->request = cl_request.
cl_rest_bo_list->response = cl_response.
cl_rest_bo_list->do_request( ).
This, at least, does not abend, but set_method returns an error code here and does not actually set the method. Internally, set_method comes down to this kernel call:
system-call ict
  did
    ihttp_scid_set_request_method
  parameters
    m_c_msg       " > c handle
    method        " > method
    m_last_error. " < return code
Since Google does not know about ihttp_scid_set_request_method, I am pretty sure that I am doing this wrong. Maybe there is no provision to instantiate BSP controllers, though I am not sure what this means for ABAP Unit testing BSP controllers.
As a solution for now I have lifted all business logic into a separate method which gets called/tested without trouble. Still, if anybody knows how to instantiate CL_BSP_CONTROLLER2 classes, that would be great.
As far as I know, the BSP controller can only be instantiated from within the ICF processing, because it retrieves information about the call from the kernel. I'm not sure why you would want to write unit tests for the UI in the first place, unless you didn't separate the UI and the business logic, as your comment about "lifting" suggests...

Handling traversal in Zope2 product

I want to create a simple Zope2 product that implements a "virtual" folder where a part of the path is processed by my code. A URI of the form
/members/$id/view
e.g.
/members/isaacnewton/view
should be handled by code in the /members object, i.e. a method like members.view(id='isaacnewton').
The Zope TTW Python scripts have traverse_subpath, but I have no idea how to do this in my product code.
I have looked at the IPublishTraverse interface and its publishTraverse() method, but it seems very generic.
Is there an easier way?
The easiest way is still to use the __before_publishing_traverse__ hook on the members object:
from zExceptions import Redirect

def __before_publishing_traverse__(self, object, request):
    stack = request.TraversalRequestNameStack
    if len(stack) > 1 and stack[-2] == 'view':
        try:
            request.form['member_id'] = stack.pop(-1)
            if not validate(request.form['member_id']):
                raise ValueError
        except (IndexError, ValueError):
            # missing context or not an integer id; perhaps some URL hacking going on?
            raise Redirect(self.absolute_url())  # redirects to `/members`, adjust as needed
This method is called by the publisher before traversing further; the publisher has already located the members object, and the method is passed that object (object) and the request. On the request you'll find the traversal stack; in your example case it'll hold ['view', 'isaacnewton'], and this method moves 'isaacnewton' to the request under the key 'member_id' (after an optional validation).
When this method returns, the publisher uses the remaining stack to continue the traverse, so it'll now traverse to view, which should be a browser view that expects a member_id key in the request. It can then do its work:
from Products.Five.browser import BrowserView

class MemberView(BrowserView):
    def __call__(self):
        if 'member_id' in self.request.form:  # Huzzah, the traversal worked!
            member_id = self.request.form['member_id']
            # ... look up the member and render it here ...
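One loose end: the hook above calls a validate() helper that isn't defined in the answer. A minimal sketch, assuming ids should be simple alphanumeric strings (the original comment mentions integer ids, so adjust the rule to whatever your member ids actually look like):

def validate(member_id):
    # Hypothetical check; tighten or loosen to match your id scheme.
    return bool(member_id) and member_id.isalnum()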

Unit Of Work for non-trivial CRUD operations for multiple Repositories

I have seen the Unit of Work pattern implemented with something like the following code:
private HashSet<object> _newEntities = new HashSet<object>();
private HashSet<object> _updatedEntities = new HashSet<object>();
private HashSet<object> _deletedEntities = new HashSet<object>();
and then there are methods for adding entities to each of these HashSets.
On Commit, the UnitOfWork creates a Mapper instance for each entity and calls the Insert, Update, or Delete method on that (imaginary) Mapper.
The problem for me with this approach is that the names of the Insert, Update, and Delete methods are hard-coded, so it seems such a UnitOfWork is capable of only simple CRUD operations. But what if I need the following usage:
UnitOfWork uow = new UnitOfWork();
uow.Start();
ARepository arep = new ARepository();
BRepository brep = new BRepository();
arep.DoSomeNonSimpleUpdateHere();
brep.DoSomeNonSimpleDeleteHere();
uow.Commit();
Now the three-HashSet approach fails, because I could register the A and B entities only for Insert, Update, or Delete operations, but I need those custom operations.
So it seems that I cannot always stack the Repository operations and then perform them all with UnitOfWork.Commit();
How can I solve this problem? My first idea is that I could store references to the methods
arep.DoSomeNonSimpleUpdateHere();
brep.DoSomeNonSimpleDeleteHere();
in the UoW instance and execute them on uow.Commit(), but then I also have to store all the method parameters. That sounds complicated.
The other idea is to make the Repositories completely UoW-aware: in DoSomeNonSimpleUpdateHere I can detect that a UoW is running, and so instead of performing the operation I save its parameters and a 'pending' status in some stack on the Repository instance (obviously I cannot save everything in the UoW, because the UoW shouldn't depend on concrete Repository implementations). Then I register the involved Repository in the UoW instance. When the UoW calls Commit, it opens a transaction and calls something like Flush() on each pending Repository. But now every Repository method needs extra machinery for UoW detection and for postponing the operation until Commit().
So the short question is: what is the easiest way to register all the pending changes across multiple repositories in a UoW and then Commit() them all in a single transaction?
It would seem that even complicated updates can be broken down into a series of modifications to one or more DomainObjects. Calling DoSomeNonSimpleUpdateHere() may modify several different DomainObjects, which would trigger corresponding calls to UnitOfWork.registerDirty(DomainObject) for each object. In the sample code below, I have replaced the call to DoSomeNonSimpleUpdateHere with code that removes inactive users from the system.
UnitOfWork uow = GetSession().GetUnitOfWork();
uow.Start();
UserRepository repository = new UserRepository();
UserList users = repository.GetAllUsers();
foreach (User user in users)
{
    if (!user.IsActive())
        users.Remove( user );
}
uow.Commit();
If you are concerned about having to iterate over all users, here is an alternative approach that uses a Criteria object to limit the number of users pulled from the database.
UnitOfWork uow = GetSession().GetUnitOfWork();
uow.Start();
Repository repository = new UserRepository();
Criteria inactiveUsersCriteria = new Criteria();
inactiveUsersCriteria.equal( User.ACTIVATED, 0 );
UserList inactiveUsers = repository.GetMatching( inactiveUsersCriteria );
inactiveUsers.RemoveAll();
uow.Commit();
The UserList.Remove and UserList.RemoveAll methods will notify the UnitOfWork of each removed User. When UnitOfWork.Commit() is called, it will delete each User found in its _deletedEntities. This approach allows you to create arbitrarily complex code without having to write SQL queries for each special case. Using batched updates will be useful here, since the UnitOfWork will have to execute multiple delete statements instead of only one statement for all inactive users.
The fact that you have this problem suggests that you aren't using the Repository pattern as such, but something more like multiple table data gateways. Generally, a repository is for loading and saving an aggregate root. As such, when you save an entity, your persistence layer saves all the changes in that aggregate root entity instance's object graph.
If, in your code, you have roughly one "repository" per one table (or Entity), you're probably actually using a table data gateway or a data transfer object. In that case, you probably need to have a means of passing in a reference to the active transaction (or the Unit of Work) in each Save() method.
In his DDD book, Evans recommends leaving transaction control to the client of the repository, and I would agree: having the repository control its own transactions is not a good practice, though it may be harder to avoid if you're actually using a table data gateway pattern.
I finally found this one:
http://www.goeleven.com/Blog/82
The author solves the problem using three Lists for update/insert/delete, but he does not store entities there. Instead, repository delegates and their parameters are stored, and on Commit each registered delegate is called. With this approach I can register even complex repository methods and so avoid using a separate TableDataGateway.
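The idea is language-agnostic; as a minimal sketch (Python here, with illustrative names that are not from the linked post), Commit() simply replays the recorded delegates inside one transaction:

class UnitOfWork(object):
    """Queues deferred repository operations and replays them on commit."""

    def __init__(self, begin_transaction):
        # begin_transaction is a hypothetical factory returning a context
        # manager that commits on success and rolls back on error.
        self._begin_transaction = begin_transaction
        self._pending = []  # list of (delegate, args, kwargs) tuples

    def register(self, delegate, *args, **kwargs):
        # Store the delegate and its parameters rather than the entity,
        # so arbitrary repository methods can be queued, not just CRUD.
        self._pending.append((delegate, args, kwargs))

    def commit(self):
        with self._begin_transaction():
            for delegate, args, kwargs in self._pending:
                delegate(*args, **kwargs)
        del self._pending[:]

Usage would mirror the question's example: uow.register(arep.DoSomeNonSimpleUpdateHere), then uow.register(brep.DoSomeNonSimpleDeleteHere), then uow.commit().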