BizTalk alternative for systems management automation?

I am working with a small hosting provider looking for a "motor" (engine) to be the central hub for automating their IT-related processes (and possibly other processes as well). An example of this could be a customer ordering a custom virtual server from their website. This order would need to pass through an approval chain (if it's a new customer) or go straight to deployment, where various servers would set it up through scripts. Basically, we're looking for something to be the "hub" where all these scripts are tied together and the various processes are described and executed.
I'm keeping half an eye on BizTalk Server for this, but I know it's a complex product. Does anyone have any tips on other products we should check out? Although this is a mixed (Linux and Windows) environment, the process system would run on Windows.
Best regards,
Trond

Sounds like the Windows Workflow Foundation "stuff" might be useful: http://msdn.microsoft.com/en-us/library/ms735967.aspx
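If you go the WF route, the programming model is fairly approachable: you describe each process as a workflow of activities and hand it to the runtime. Below is a minimal sketch (in F#, using WF4's System.Activities; the step names are purely illustrative) of building and running a two-step sequential workflow, which is the shape an approval-then-deployment chain would take:

    open System.Activities
    open System.Activities.Statements

    // Build a tiny sequential workflow; the two WriteLine activities stand
    // in for the "approval" and "deployment" stages (names illustrative).
    let buildWorkflow () =
        let wf = Sequence()
        wf.Activities.Add(WriteLine(Text = InArgument<string>("approval step")))
        wf.Activities.Add(WriteLine(Text = InArgument<string>("deployment step")))
        wf

    [<EntryPoint>]
    let main _ =
        // Runs the workflow synchronously on the calling thread.
        WorkflowInvoker.Invoke(buildWorkflow ()) |> ignore
        0

In a real hub you would swap the WriteLine steps for custom activities that invoke your provisioning scripts, and use WorkflowApplication instead of WorkflowInvoker for long-running, persistable instances.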

Related

Running the same web app on 2 or more physically separate servers?

I am not sure if I should be posting this question here or over at ServerFault, so apologies if it is in the wrong place.
I have a small web app that is starting to get some more business.
Currently I have a single dedicated LAMP server for this, and this has worked well - the single server is able to handle all of our traffic.
However... Recently I have been approached by some potential customers who are interested in using the app, but only if their data can be stored on a server in the same province as them (for legal reasons).
I could migrate the server, but I am reluctant to do this. I like where it is now.
So, I am wondering: what is involved in having multiple servers in physically separate datacentres, far apart, running the same web app? The data would not necessarily need to stay synced between the servers.
I have never done anything like this before, and am not sure how complicated a job it is. Any suggestions on how and where to start looking into this would be much appreciated.
Thanks (in advance) for your advice.
As long as each customer has their own set of data, you can just install another copy of the application in the other datacentre. It will require some structure in your source control and deployment process, but it works. This option gives you two separate databases.
If you have to have one common database for all customers (e.g. some kind of booking/reservation system for shared resources), then you're into a whole other level of complexity with database replication etc. It's doable, but it's hard.
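To make the first option concrete: the application should never hard-code where its database lives, so that the same codebase can be dropped into either datacentre unchanged. A rough sketch, shown in F#/.NET purely for illustration (the stack here is LAMP, but the principle carries over directly; "AppDb" is an illustrative name):

    // Each datacentre ships identical binaries; only the local config file
    // (and hence this connection string) differs between deployments.
    open System.Configuration   // requires a reference to System.Configuration.dll

    let connectionString =
        ConfigurationManager.ConnectionStrings.["AppDb"].ConnectionString

With that in place, standing up the second location is a copy of the code plus a different config file.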

How to perform a stress test against a SharePoint site using threads

I want to analyze the performance (and hence the weak points) of a SharePoint site by running a stress test. What needs to be done is to call some methods, exposed via a web service, that do the following things inside the SharePoint site:
- create a new group
- add content to the group
- add an attachment to the content
- delete the content
- delete the previously created group
What is required is to simulate a situation where 4,500 users try to do these operations concurrently (at the same time or, more realistically, within a short timespan, for example within 5 seconds).
We also want to record the execution time of each operation (each web method, for example "create new group"). I thought I could simulate these operations via a console application using threads and stopwatches. Has anyone encountered a similar problem who can point me to existing solutions or give hints on doing it "the right way"? For example, how can I ensure that all threads start at the same instant? Thanks in advance.
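On the specific sub-question of starting all threads at the same instant: the usual trick is to create all the threads first, have each one block on a shared signal, and then release the signal once. A minimal sketch of the console-app approach (callCreateGroup is a placeholder for the real web-service call):

    open System.Diagnostics
    open System.Threading

    // Every worker blocks on one signal, so they all start (near-)
    // simultaneously, and each one times its own call.
    let runConcurrently (workers: int) (callCreateGroup: unit -> unit) =
        let startSignal = new ManualResetEventSlim(false)
        let elapsedMs = Array.zeroCreate<int64> workers
        let threads =
            [ for i in 0 .. workers - 1 ->
                Thread(fun () ->
                    startSignal.Wait()                 // hold until released
                    let sw = Stopwatch.StartNew()
                    callCreateGroup ()
                    sw.Stop()
                    elapsedMs.[i] <- sw.ElapsedMilliseconds) ]
        threads |> List.iter (fun t -> t.Start())      // spin up, then wait
        startSignal.Set()                              // release all at once
        threads |> List.iter (fun t -> t.Join())
        elapsedMs

    // Example: 100 simulated users against a dummy 50 ms operation.
    let timings = runConcurrently 100 (fun () -> Thread.Sleep 50)
    printfn "max: %dms, avg: %.1fms" (Array.max timings) (timings |> Array.averageBy float)

Bear in mind that one machine cannot realistically drive 4,500 truly concurrent users; the answers below address that.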
I have been using Visual Studio Load Testing for two years, and I find it very powerful and easy to use. You can run integration tests, navigate a web site, simulate database load... in fact, everything. Because it is an MS application, it is also fully compatible with MS products like SharePoint: it's easier to call a WCF service from a unit test than from another technology (how else would you test netTcpBinding?). You can also use the Visual Studio Profiler to instrument your code (and see which lines of code are expensive, or even ADO.NET interactions), and you can extend the load testing through its many extensibility points.
One important thing is that VS load testing is "intrusive": it collects not only response times, request lengths and so on, but also all performance counters, database queries, etc. All these metrics are saved to a dedicated database (such as SQL Express) for reporting, and there is an add-in for Excel.
Just one important note (which applies to all load-testing solutions):
You can run load tests from a developer machine or even a single dedicated machine, but you usually can't generate enough traffic to really see how the application responds (one machine cannot simulate 500 concurrent users because of limited CPU/memory/network). In order to simulate a lot of users, you'll set up what is known as a load test rig.
A test rig is made up of a test controller machine and one or more test agent machines. The controller manages and coordinates the agent machines, and the agents generate load against the application. The test controller is also responsible for collecting performance monitor data from the servers under test and, optionally, from the test rig machines.
Here are some links:
MSDN
Dave's introduction
I'm not saying Visual Studio Load Testing is not a great tool, but there are tools, like Tsung and Eventlet (and many others), that can support many thousands of concurrent users.
Good luck.

Long-running agents in F#

I use agents in different ways; one way consists of 100 agents monitoring website changes and reporting back to a supervisor, which I can call to spawn new monitors or to listen to the merged changes.
This is only part of my program, and I am happy with it.
I would now like to spin this off so that it runs truly independently of my main program.
(I would like this independent spin-off to stay as much as possible inside the language, and to use the least amount of glue code possible.)
What strategies do I have here, and what would you recommend?
One option for executing long-running agents is to write a Windows Service that starts with the operating system (possibly even before login) and runs in the background. Your main application can then connect to the service and communicate with it.
Here is a basic example of F# Windows Service on MSDN.
Running the agent in a service is quite easy. The communication between the service and the main application is trickier, because they are two separate processes. The sample uses .NET Remoting, which has since been superseded by WCF, so that would be the thing to look at (especially if you want asynchronous communication). Alternatively, there are some F# projects that implement simple socket-based communication, which might be easier to use.
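To make the service option concrete, here is a minimal sketch (service and message names are illustrative) of wrapping a supervisor agent in a Windows Service; the WCF or socket channel back to the main application would be layered on top of this:

    open System.ServiceProcess

    // Hosts a supervisor agent for the lifetime of the service process.
    // Install with installutil.exe (or sc.exe); it then runs independently
    // of any logged-in user or main program.
    type MonitorService() as this =
        inherit ServiceBase()
        do this.ServiceName <- "FsMonitorSupervisor"

        let supervisor =
            MailboxProcessor.Start(fun inbox ->
                async {
                    while true do
                        let! (url: string) = inbox.Receive()
                        // ... spawn a monitor for the site, merge changes, etc.
                        ()
                })

        override x.OnStart(_args) =
            supervisor.Post "http://example.com"   // kick off initial monitors

        override x.OnStop() = ()                   // tear agents down here

    [<EntryPoint>]
    let main _ =
        ServiceBase.Run(MonitorService())
        0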

NServiceBus in a production setup

I just came across a business case where NServiceBus would fit really well.
What I can't find is any advice on how to set it up in a production environment. Is it just the choice of profile, or are there other things to consider?
The scenario is calling a web service on the other side of the planet, which is pretty slow, so I will need a queue of some sort anyway, since at times there will be 1,500-2,000 requests lined up.
What caveats should I expect when setting it up on a Windows Server 2008 Standard Edition box?
I doubt it's just plug-and-play when it comes to managing security on the server.
/J
NServiceBus sits on top of MSMQ. It is hosted in a generic host container, which comes out of the box and can run inside any managed process. It really is trivial to set up.
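For example, here is a minimal sketch of the configuration class the generic host (NServiceBus.Host.exe) scans for; the class name is conventional, and the profile (Lite/Integration/Production) is then picked on the host's command line:

    open NServiceBus

    // Marker interfaces from the classic NServiceBus host API: the host
    // discovers this class and configures the endpoint as a server
    // (MSMQ transport, transactional queues, etc.).
    type EndpointConfig() =
        interface IConfigureThisEndpoint
        interface AsA_Server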
I am not sure what specific caveats you are asking about. The only differences I can see in a production setup are things like disaster recovery and monitoring.
Hope this helps.

Pros/cons of a binary reference vs WCF

I am in the process of implementing an enhancement to an existing web application (A). The new solution will provide features (charts/images/data) to application A. The enhancement will be a new project and will generate new assemblies. I am trying to identify the most elegant way to read this information:
1) Add a binary reference and read the data directly. The new assemblies live with your application and the two are married together.
2) Make a WCF call and get the data. This will help to decouple the applications.
The new solution will require me to buy some expensive licences, so if I go with the second option I can limit the licence fee to a single server, or at most 2-3. My current application runs on a web farm of 8 servers.
Please share the pros and cons of both approaches.
Thanks.
If you decouple the two pieces sufficiently, you will also permit the use of clients running something other than .NET. Using the first option, you could only support .NET clients. This may turn out to be important, even if today you are absolutely certain that only .NET will ever be used - tomorrow, your company may be purchased by another which is a Java or PHP shop.
Even if you never need to support a non-.NET client, coupling to the assemblies will require you to maintain version compatibility between client and server. If this is not necessary, then use option #2.
The benefit of using WCF (the decoupled approach) is that you get the option of deploying it to another machine if it impacts the current one too much in terms of processing or storage.
The downside is that you'll likely pay some performance hit compared to linking directly.
I'm sure you can do some dynamic linking so you don't have to deploy to all 8 servers.
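To illustrate the licensing angle: with the WCF option, the only thing the eight web-farm nodes need is the service contract, so the licensed component can sit behind it on one or two hosts. A minimal F# sketch of what that boundary might look like (names are illustrative):

    open System.ServiceModel

    // The web farm references only this contract; the implementation (and
    // its licence) lives on the one or two servers hosting the service.
    [<ServiceContract>]
    type IChartService =
        [<OperationContract>]
        abstract GetChartPng : chartId: string -> byte[]

        [<OperationContract>]
        abstract GetChartData : chartId: string -> string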