I have a simple business workflow with the following conditions:
Users need to change the workflow itself using a designer.
The workflow is a long-running workflow, so it will be serialized.
Is there a way to automate the task of versioning different workflow assemblies?
The versioning of different workflow assemblies is not a trivial task and has a lot of complications. Here you can find a series of posts that deal exactly with this.
You can rehost the WF designer in your own application to let the end users change workflows. Since you are hosting the designer, you control pretty much everything they can do; for example, you can prevent them from removing or disabling activities and only allow them to add specific new activities in predefined areas of the workflow.
The best approach is to save these workflows as XOML files and start them as such. This does mean you cannot add code to the workflow itself, but you are free to define your own workflow base class derived from SequentialWorkflowActivity (or StateMachineWorkflowActivity for state machines) and use that as the workflow base class. This allows you to add code and properties. For example, you can still add a CodeActivity, but you need to link it to code in the base class.
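To make that concrete, here is a minimal sketch of such a base class in WF 3.x; the class, property and handler names are made up for illustration:

```csharp
using System;
using System.Workflow.Activities;

// Illustrative base class for XOML-only workflows; all names here are made up.
// The designer-authored XOML declares this type as its base class, so handlers
// and properties live in code while the workflow structure stays editable.
public class OrderWorkflowBase : SequentialWorkflowActivity
{
    // A property the XOML workflow and its activities can bind to.
    public string CustomerId { get; set; }

    // Handler a CodeActivity in the XOML can wire to its ExecuteCode event.
    public void OnValidateOrder(object sender, EventArgs e)
    {
        if (string.IsNullOrEmpty(CustomerId))
            throw new InvalidOperationException("CustomerId must be set before validation.");
    }
}
```

The rehosted designer saves the workflow as XOML with this type as its base class, and the host can start it with WorkflowRuntime.CreateWorkflow(XmlReader.Create("MyWorkflow.xoml")).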
Workflow serialization, or dehydration as it is called, is used with running workflows to persist them to disk. This uses standard .NET binary serialization and can be a bit tricky due to the long-running nature of workflows, but it's no big deal once you know what to look for. See http://msmvps.com/blogs/theproblemsolver/archive/2008/09/10/versioning-long-running-workfows.aspx for the start of a series of blog posts.
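As a rough sketch of what the hosting side of that looks like (the connection string and timeouts are placeholders, not values from the linked posts):

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class WorkflowHost
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            // Placeholder connection string to the workflow persistence database.
            string conn = "Data Source=.;Initial Catalog=WorkflowPersistence;Integrated Security=SSPI;";

            // unloadOnIdle = true dehydrates idle workflows to the database, which is
            // where the binary serialization (and the versioning pain) comes in.
            runtime.AddService(new SqlWorkflowPersistenceService(
                conn,
                true,                        // unload on idle
                TimeSpan.FromMinutes(5),     // instance ownership duration
                TimeSpan.FromSeconds(10)));  // loading interval

            runtime.StartRuntime();
            // ... create and start workflow instances here ...
            runtime.StopRuntime();
        }
    }
}
```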
Not sure if you need it but there is also the capability to change already executing workflows. This uses the WorkflowChanges object. See here http://wiki.windowsworkflowfoundation.eu/default.aspx/WF/RuntimeModificationOfWorkflows.html for more details.
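For reference, applying a runtime change looks roughly like this; the DelayActivity being added is just an arbitrary example:

```csharp
using System;
using System.Workflow.Activities;
using System.Workflow.ComponentModel;
using System.Workflow.ComponentModel.Compiler;
using System.Workflow.Runtime;

static class DynamicUpdateExample
{
    // 'instance' is a running WorkflowInstance obtained from the WorkflowRuntime.
    public static void AppendDelay(WorkflowInstance instance)
    {
        // Take a modifiable clone of the running instance's workflow definition.
        WorkflowChanges changes = new WorkflowChanges(instance.GetWorkflowDefinition());

        // Example change: append a one-minute delay to the (sequential) workflow.
        DelayActivity delay = new DelayActivity("dynamicDelay");
        delay.TimeoutDuration = TimeSpan.FromMinutes(1);
        changes.TransientWorkflow.Activities.Add(delay);

        // Validate the modified definition, then apply it to the running instance.
        ValidationErrorCollection errors = changes.Validate();
        if (!errors.HasErrors)
            instance.ApplyWorkflowChanges(changes);
    }
}
```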
Here is another article on workflow versioning:
http://www.adefwebserver.com/DotNetNukeHELP/Workflow/VacationRequest3.htm
Basically you can version workflows that use assemblies if:
Any assembly used with workflows must be strong named.
If an assembly uses an interface, that interface must also be strong named and placed in a separate assembly.
An entry in the web.config can instruct ASP.NET where to find the proper assembly (see the example below).
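The exact web.config entry depends on the article's setup, but a common form for pointing the runtime at the right strong-named assembly version is a binding redirect like the following (the assembly name, public key token and version numbers are placeholders):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Placeholder identity: substitute your workflow assembly's name and public key token. -->
        <assemblyIdentity name="MyCompany.Workflows" publicKeyToken="abcdef1234567890" culture="neutral" />
        <!-- Redirect references to the old version to the new one; leave this out for
             instances that must keep running against the version they started with. -->
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```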
I thought these two were the same thing, since a library also has some defined building blocks and functions, just as an API does, and it is also responsible for interaction. If not, then please point out the difference.
A library is a collection of classes / methods you can use via referencing a compiled file. So your application is going to "include" those items and you'll need to take care of updates, deployment, etc.
An API is just an interface, so you can interact with other external applications without a direct relationship.
I am working on a Silverlight project and I am using MEF to download the XAP file of another Silverlight project and use its pages and functions in my main project.
I can do the same thing by referencing the DLL of that project in my main project.
So I want to know: what is the difference between using MEF to reuse components and simply adding a reference to the DLL of another project in the current project? We also add a reference to the project we import into our current project, so how is it different from the conventional form of component reuse?
Thanks,
First, we need to separate MEF and PRISM (since you used it in your tags).
MEF is primarily used to provide inversion of control (IoC). It makes it easy to manage the dependencies of your viewmodels and other classes, to separate concerns and improve testability (amongst other benefits).
PRISM, however, is primarily designed for the following scenario: you don't know at compile time what view goes into a specific container, and want ViewA for CustomerA, ViewB for CustomerB, and so on. PRISM helps you loosely couple your regions and views so that the application can decide at runtime which view will be displayed. Another scenario is that administrators get one view, other users another, etc. PRISM also has other features (like the event aggregator), but I'd say the former is the most important one.
Now, I'd say MEF is never a bad thing to use for a bigger project. But I'd only use PRISM if you really need the functionality it provides, since it can be very limiting. If you don't, simply add the references as you explained and let MEF know about those assemblies with the AssemblyCatalog.
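For illustration, a minimal MEF composition along those lines might look like this; the IMessageService contract and its implementation are invented just to show the wiring:

```csharp
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// Invented contract and implementation, just to show the wiring.
public interface IMessageService { string Hello(); }

[Export(typeof(IMessageService))]
public class DefaultMessageService : IMessageService
{
    public string Hello() { return "Hello from MEF"; }
}

public class Bootstrapper
{
    // MEF fills this in from whatever IMessageService export the catalogs contain.
    [Import]
    public IMessageService MessageService { get; set; }

    public static void Main()
    {
        // One AssemblyCatalog per referenced assembly; here just the executing assembly.
        AggregateCatalog catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()));

        CompositionContainer container = new CompositionContainer(catalog);

        Bootstrapper bootstrapper = new Bootstrapper();
        container.ComposeParts(bootstrapper);   // satisfies the [Import] above
        Console.WriteLine(bootstrapper.MessageService.Hello());
    }
}
```

In a Silverlight application you would typically let a DeploymentCatalog (for downloaded XAPs) or CompositionInitializer do this wiring instead of building the container by hand, but the idea is the same.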
So for MEF, I'd suggest you learn about Dependency Injection and IoC. I found this blog post by Martin Fowler quite good. As for PRISM, get familiar with what it does, and decide if you really need it.
Hope this helps.
Let me complement Lue's answer on the difference between MEF and referencing DLLs a bit:
The two things are orthogonal, meaning that if you reference a DLL directly you might still want to use MEF to detect the types in it, and vice versa you might grab a specific type directly (without MEF) from a DLL you loaded dynamically.
MEF basically finds types in DLLs according to certain criteria, and has some convenience features for automatically populating properties and collections with such types. It can be used to make a system more decoupled and thus more maintainable. For example, a video editing application may look for all types implementing a certain interface in all known DLLs to use as filters. Whether you include the filters directly as a DLL or let the user download them on demand, in both cases your application becomes slightly cleaner by using MEF, since there is no hard-coded list of filters anywhere. Still, MEF is especially useful in the presence of dynamic library loading.
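A sketch of that filter scenario (desktop-flavoured; the IVideoFilter contract and the plugins folder are invented for the example):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The contract the host defines; filter implementations can live in any DLL.
public interface IVideoFilter
{
    string Name { get; }
}

// One filter compiled into the application itself.
[Export(typeof(IVideoFilter))]
public class SepiaFilter : IVideoFilter
{
    public string Name { get { return "Sepia"; } }
}

public class FilterHost
{
    // MEF collects every IVideoFilter export it finds; there is no hard-coded list.
    [ImportMany]
    public IEnumerable<IVideoFilter> Filters { get; set; }

    public static void Main()
    {
        AggregateCatalog catalog = new AggregateCatalog(
            new AssemblyCatalog(typeof(FilterHost).Assembly),  // filters in this assembly
            new DirectoryCatalog("plugins"));                  // filters dropped into .\plugins (folder must exist)

        FilterHost host = new FilterHost();
        new CompositionContainer(catalog).ComposeParts(host);

        foreach (IVideoFilter filter in host.Filters)
            Console.WriteLine(filter.Name);
    }
}
```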
I intend to add a COM interface to an existing application (which, by the way, is written in C++ using Win32). I have some experience using COM objects, so I know the basic COM concepts of interfaces, etc., but this is the first time I'm actually implementing a component.
Ultimately I want to be able to use the COM interface to automate my application from scripts such as VB. I understand that there are two steps:
My application must act as an out-of-process server (i.e. I have to use MIDL and generate code for a proxy DLL and a stub DLL).
Once I have the server I can add automation capabilities by implementing the IDispatch interface.
Since the server-in-an-EXE thing with MIDL and what not is already a bit steep, I wanted to get a grasp on all that first before moving on to IDispatch.
I am reading the book "Inside COM" by Dale Rogerson and have completed the chapter on servers in EXEs (the following chapter will cover Automation).
The "Servers in EXEs" chapter provides example code that implements a server and a client. But it is necessary to start the server manually. This confuses me. Obviously, when my application (= server) is used by a client process, this extra manual step should not be necessary. Is there no mechanism to start the server automatically? Or is automation necessary to achieve that? At the moment, the prospect of having to start my server manually (once I even have one) makes me doubt I am moving in the right direction.
Hopefully someone with more knowledge of this can see what information I'm missing and point me in the right direction.
No, COM servers are not normally started by hand. Not sure why the book proposed it; possibly because it wanted to avoid talking about the registry keys you need so that COM can start the EXE automatically. It isn't otherwise very complicated: you register the Application coclass of your app, with the LocalServer32 key value giving the path to the EXE.
Starting the server by hand is however not completely uncommon, especially with an existing program. One design decision to make is whether you let the client code completely control your program, or whether your program already has an existing user interface but you also want to expose services to other code. In the latter case it makes sense to let the user start the app by hand, like she normally does.
When your application is registered as LocalServer32, it will be invoked with the commandline specified there if no running process has registered a factory object for your CLSID yet.
This way, you can get the best of both worlds -- if the application is running already, this instance can provide the server side, and if it isn't, it will be started.
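For example, the registration might look something like this in .reg form; the CLSID, ProgID and path are placeholders. COM appends -Embedding to the command line itself when it launches the EXE, and the running process still has to register its class factory with CoRegisterClassObject.

```
Windows Registry Editor Version 5.00

; Placeholder CLSID, ProgID and path -- substitute your own values.
[HKEY_CLASSES_ROOT\MyApp.Application]
@="MyApp Application"

[HKEY_CLASSES_ROOT\MyApp.Application\CLSID]
@="{11111111-2222-3333-4444-555555555555}"

[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}]
@="MyApp Application"

[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}\LocalServer32]
@="C:\\Program Files\\MyApp\\MyApp.exe"
```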
Automation is completely orthogonal to that -- your component becomes Automation compatible by implementing IDispatch.
I have a VB.NET 2010 solution that contains two projects: a class library and a Windows Forms application.
The class library is basically a model, used for database integration.
I currently have the connection string placed in the class library's project settings, but it does not seem to be listed anywhere in the application's config file. What's the best practice for retrieving the connection string in the class library? I don't want to use a singleton. Should it be stored in the application or in the class library?
It seems the class library would make more sense, since that is the project interacting with the database. It would probably be beneficial to encrypt the connection string and store it in a file or registry key, so that if the system is compromised the intruder still has to crack the key to view the connection string, while you keep the ability to change it without recompiling your app.
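One way to do that without managing your own key is the Windows DPAPI. This is just a sketch in C# (the VB.NET equivalent is a direct translation), and the file location is arbitrary:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;   // ProtectedData requires a reference to System.Security.dll
using System.Text;

public static class ConnectionStringStore
{
    // Arbitrary location for the encrypted value; a registry value would work the same way.
    private static readonly string StorePath =
        Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "connstring.bin");

    public static void Save(string connectionString)
    {
        byte[] plain = Encoding.UTF8.GetBytes(connectionString);
        // DPAPI does the key management for you. LocalMachine scope lets any process on
        // this machine decrypt; use CurrentUser to tie it to a specific account.
        byte[] encrypted = ProtectedData.Protect(plain, null, DataProtectionScope.LocalMachine);
        File.WriteAllBytes(StorePath, encrypted);
    }

    public static string Load()
    {
        byte[] encrypted = File.ReadAllBytes(StorePath);
        byte[] plain = ProtectedData.Unprotect(encrypted, null, DataProtectionScope.LocalMachine);
        return Encoding.UTF8.GetString(plain);
    }
}
```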
I still go with what I said in your earlier question - leave the settings/configurations out of the class library. Put them in the config file for the application(s) that use the class library.
What happens if the connection string changes? Since class libraries don't use config files, you'll most likely have to update the code, recompile, and redeploy the library. Not a big deal if it's one program on one machine, but what if it's multiple programs and/or multiple machines?
Granted, you'd still have to make a lot of changes in a multi-program/multi-system environment via the config file, but that's a lot simpler, IMO, than recompiling (and regression testing) a class library.
Another factor to consider is what if different applications want to use this same class library? What if you have different environments that have different connection strings? And so on.
In a nutshell, I would opt to leave configuration items to the application, not the supporting class libraries. From a reusability and scalability perspective, I feel that gives you the most bang for your buck.
If you only have one application and it's only ever going to use this one class library, and no one else will, you can probably leave the configuration settings in it - but using the phrase "We'll never change" or "It will always be like this" is a good way to get a lot of headaches down the road.
All of the above is, of course, in my opinion, and should not be taken as me speaking officially for any other programmer or corporation :)
Edited to add
You'll have to manually move the settings you need from the class library's config to the application's config. VS won't do it for you.
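For example, the application's config file would carry the connection string (the name "MainDb" is made up):

```xml
<!-- app.config of the Windows Forms application -->
<configuration>
  <connectionStrings>
    <add name="MainDb"
         connectionString="Data Source=.;Initial Catalog=MyAppDb;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

The class library can then read it with ConfigurationManager.ConnectionStrings("MainDb").ConnectionString (after adding a reference to System.Configuration); at runtime that lookup resolves against the executable's config file, not anything belonging to the library.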
And why do you keep bringing up the singleton design pattern? What potential benefit do you see from it? Or have other people been suggesting it to you?
My question is very simple, and I want a clear answer with a simple example.
What's the main difference between API, Toolkit, Framework, and Library?
I prefer the following:
An API is an abstract description of how to use an application. For example, an API may describe the function syntax (declarations) of a chat server, e.g. login, publish_message, subscribe_messages. It also describes any protocols for using the application, e.g. you must log in before sending or receiving messages, or clients are dropped after 2 minutes if they are not sending or receiving messages.
A library is an implementation of an API; it contains the compiled code that implements the functions and protocols (and maintains usage state).
A toolkit is a set of libraries (APIs) and services grouped together to provide the developer with a wider range of possible solutions. For example, the Globus Toolkit provides services (such as file transfer, job submission and scheduling) that a developer can install and start on their servers. It also provides APIs to build applications that use the deployed services in an integrated fashion. For example, the developer may build a program that uses the Job Submission API to communicate with the Job Submission Service.
A framework is a set of guidelines that prevents inappropriate use or development. The developer must construct their applications within the rules and boundaries of the framework. This is done by forcing the developer to extend the current framework to develop new software; by extending the framework, you force adherence to it.
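A tiny sketch of that inversion of control (all names invented): with a library you call the code, with a framework you extend it and it calls you.

```csharp
using System;

// Library style: your code calls the library.
public static class JsonLibrary
{
    public static string Serialize(object value) { return value == null ? "null" : "{ ... }"; }
}

// Framework style: you extend the framework and it calls your code.
public abstract class RequestHandler
{
    // The framework owns the control flow...
    public void Handle(string request)
    {
        Console.WriteLine("framework: received " + request);
        Process(request);                      // ...and calls back into your extension point.
        Console.WriteLine("framework: done");
    }

    // You fill this in, within the framework's rules.
    protected abstract void Process(string request);
}

public class MyHandler : RequestHandler
{
    protected override void Process(string request)
    {
        Console.WriteLine("my code: handling " + request);
    }
}

public static class Demo
{
    public static void Main()
    {
        Console.WriteLine(JsonLibrary.Serialize(new object()));  // library: you drive
        new MyHandler().Handle("GET /");                         // framework: it drives
    }
}
```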
I'm not saying these are completely correct, but it's worked OK for me so far!
This has always been my understanding; you will no doubt see differing opinions on the subject:
API (Application Programming Interface) - Allows you to use code in an already functional application in a stand-alone fashion.
Framework - Code that gives you base classes and interfaces for a certain task/application type, usually in the form of a design pattern (though not always).
Library - Related code that can be swapped in and out at will to accomplish tasks at a class level
Toolkit - Related code that can be used to accomplish tasks at a component level.
Those terms are sometimes mixed up or used interchangeably.
Similar posts, read:
What is the major difference between a framework and a toolkit?
Framework vs. Toolkit vs. Library
I prefer to think of a library as an alias for a module or namespace. A toolkit or an API is usually a set of libraries for a common task, although the term API is used more in procedural programming than in object-oriented programming.