Deploy multiple WCF services?

I have one large solution containing 27 WCF services and 3 shared projects (DAL, Models, and Core).
Let's say something critical in the DAL changes. Now I need to publish all 27 WCF services. I am currently doing this manually by right-clicking each of the 27 projects and choosing Publish. I have set up publish profiles on each of the services using "File System" as the publish method. As you can imagine, this is quite a pain.
I have also created different solution configurations and web.config transforms, which lets me publish to production and to the test box with different config contents. Going back to the original issue: when something low-level changes in the DAL and I need to re-publish 27 times, I actually re-publish 54 times, 27 to the test box and then 27 to the live box.
How can you publish multiple WCF services at once, and what are the best practices for something like this? I'm using VS2013 and TFS2013.
FYI - we are currently installing and reading about the new release management tools for VS/TFS 2013.
Thanks in advance.

The new Release Management tools for VS/TFS are the way forward. You can create one component for each web service in the Release Management tool and use the IIS-related activities inside each component.
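In the meantime, since the services already have "File System" publish profiles, those profiles can also be driven from the command line with MSBuild, so one script can publish all 27 projects in a single pass. A minimal sketch, assuming a hypothetical project and a publish profile named TestBox:

msbuild MyService.csproj /p:DeployOnBuild=true /p:PublishProfile=TestBox /p:Configuration=Release

Loop that over the service projects in a batch file or your CI build, once per target environment, and the 54 manual publishes become two script runs.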

Related

How to debug multiple SharePoint services on the same machine?

I need to debug a SharePoint WCF service that is deployed for SharePoint 2010. However, a colleague needs to debug another SharePoint service deployed on the same physical machine. If we debug at the same time, strange things happen with the Visual Studio debugger. For example, his debugger breaks at breakpoints I have set, or I see exceptions raised by his code. Mind you, we are debugging different services in different solutions. From the information I have gathered so far, this behaviour occurs because there is only one w3wp process per application pool, and both Visual Studio debugger instances attach to it.
So I figured I should try running my service in another application pool to get a different w3wp.exe to attach to. Here is what I tried, but I am not sure whether what I attempted makes any sense, so please clarify:
IIS Manager shows that there are two different SharePoint application pools (excluding the one for Central Administration) and a site for each. So I tried deploying my service using the other application pool by setting the deployment location to the URL of the other site. However, the virtual _vti_bin directory of the service still maps to the same physical directory ...\Web Server Extensions\14\ISAPI\. Deploying from Visual Studio works, but getting a ServiceReference does not. Trying to open <url>/_vti_bin/MyService.svc/MEX shows an error page telling me that there is already a binding instance associated with the URL. So I guess this is either not the way to do this, or it is simply not possible to "isolate" services in this way. I am very hesitant to just trial-and-error with IIS Manager or SharePoint Central Administration settings, because I feel I don't know enough to avoid screwing things up.
Could someone tell me how I can solve this?
The URL you specify when deploying in Visual Studio can be misleading. If you have a sandboxed solution, it gets deployed to this location. If you have a farm solution, it gets deployed centrally, and the URL is only used to figure out which application pool to recycle. If you have web-application-specific settings in the solution (e.g. SafeControls), these will be applied to the web application hosting the URL.
The _vti_bin is available to every site in the whole farm, as is _layouts. Since a service will be exposed through multiple URLs (one for each site), the SharePoint team has created custom factory classes to make this possible. Check out one of the built-in svc files and you will see that it uses a special factory class. Use this in your svc file to expose your service in all sites.
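For illustration, a sketch of such an svc file using the basic-HTTP factory that ships with SharePoint 2010 (the service type and assembly name are placeholders):

<%@ ServiceHost Language="C#" Debug="false"
    Service="MyNamespace.MyService, MyServiceAssembly"
    Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

With a factory like this, SharePoint generates the endpoints and base addresses for every site, so the same service resolves under each site's _vti_bin.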
As for the debugging, it's never a good idea to have multiple developers using the same machine. If you really want to do it, I suggest using two web applications with different application pools. That way each developer has their own process to attach to. If you use different accounts for the application pools, it is easier to find the correct one in the 'Attach to Process' dialog.

Out Of Browser Silverlight app with local offline database and WCF-RIA

I have the following scenario:
We are developing a Silverlight 4 app for our customers that will be used as an out-of-browser app. The app works offline, i.e. the app and database are on the user's local machine. The app uses WCF RIA Services to connect to the local database. The database will be SQL Server Express, SQL Server CE, or MySQL. We are using MVVM Light and MEF.
An external web server is only used for updating the app from time to time or adding new modules to it. To achieve this, we do something similar to what is shown in Jeremy Likness' blog (http://www.wintellect.com/CS/blogs/jlikness/archive/2010/05/25/silverlight-out-of-browser-dynamic-modules-in-offline-mode.aspx).
The reasons why we are doing this are complex, but to keep a long story short, it is mainly for compatibility with a later online version, and we don't want to use WPF. So we need to get this working with Silverlight and WCF RIA Services.
OK, that's the scenario; here's the question:
Do we need a local webserver in this scenario? The app is programmatically installed as out-of-browser, the database is local and connected via WCF-RIA.
If yes, which web server would be sufficient? It should be installed and configured by an initial setup that the customer runs; the customer should not have to do anything to configure the web server.
Any other ideas or comments on this scenario? Any other possible solutions for this?
Thanks for your help
Dirk
Silverlight wasn't meant to be used this way, I think. It would be like developing an app in Visual Studio and using Cassini to see the result: everything runs locally, but you still need a web server. Maybe more info here: http://www.infoq.com/news/2010/06/WPF-vs-Silverlight
I'm not able to provide a full answer to your problem, as we are currently facing the same one (WPF not being cross-platform, very specific hardware on some clients).
But I can share some of our thoughts on our kind of thick Silverlight client:
To keep deployment simple, we use a self-hosting process (installed as a background process).
We cannot use RIA, as the background process has to run on the Mono VM (but for an MS-only solution, see "Can WCF RIA Services be self hosted?"); a self-hosting sketch follows below.
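A minimal sketch of the self-hosting idea with plain WCF (not RIA); the service type and address are hypothetical:

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // Host the WCF service in our own background process instead of IIS.
        using (var host = new ServiceHost(typeof(MyDataService),
            new Uri("http://localhost:8732/MyDataService")))
        {
            host.Open();
            Console.WriteLine("Service is running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}

On .NET 4, the host adds default endpoints automatically, so this is enough for a local client to talk to the service.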
Architectural thoughts on standalone "Clients":
Depending on your requirements, implementing a server for each client that communicates with the "main" server by messages (NServiceBus) may be overkill. But if you want to use a client database while offline and Silverlight for the UI, you should consider an event-driven architecture.
There is a slideshow on combining event-driven architecture and CQRS with Silverlight. I would not use it as a blueprint, more as an inspiration:
http://www.slideshare.net/dennisdoomen/cqrs-and-event-sourcing-an-alternative-architecture-for-ddd

Creating a WCF client proxy with 3 solution windows open

My WCF service library, the console host for the service, and the client are all in separate Visual Studio solutions. Does this choice of organization pose a problem? I cannot seem to create the client proxy using the Add Service Reference and Discover features.
When I run the console-hosted WCF service, then switch to the Visual Studio solution for developing the client and invoke "Add Service Reference" and "Discover", it says "no services found in the solution". Do I have to develop the client code inside the same Visual Studio solution as the host code? That would seem unreasonable.
Having several projects for your WCF solution is a great idea - definitely stick with that!
But you cannot run the WCF host application from within Visual Studio and use the same Visual Studio instance to add the service reference at the same time.
So what you need to do is run the service host application from outside Visual Studio (find the output directory and double-click the EXE to spin up the host), and then you can add the client service reference inside Visual Studio.
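Alternatively, once the host is running you can skip the Discover button entirely and point proxy generation straight at the service's metadata endpoint, either by pasting the address into the Add Service Reference dialog or by using svcutil.exe. The address below is hypothetical, and the host must publish metadata (ServiceMetadataBehavior with HTTP GET enabled) for either approach to work:

svcutil.exe http://localhost:8000/MyService?wsdl /out:MyServiceProxy.cs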
In such cases I usually use a single solution file containing all projects across all subsystems + separate solution files for individual subsystems. This allows me to develop the system as a whole, and, at the same time, build individual subsystems separately. This way you can overcome any “editing-time experience” shortcomings, while preserving good separation and independence of subsystems.
Solutions are meant to have multiple projects in them. They are meant to be the level of organization that contains all of the projects you are working on at a time.
No, it's not unreasonable to put all of those related projects into a single solution.

WCF Automated Deployment

I am in the process of setting up some IIS hosted WCF projects for continuous integration and am stuck trying to find the best and simplest way to get deployment automated.
Right now, I have the build and deploy working with CC.NET, MSBUILD and a batch file that copies the necessary files to the deployment folder, but I think there must be a better way.
Ideally, I'd like something like web deployment projects, but for WCF.
I would settle for a nice PowerShell script to copy all the necessary files and exclude all the fluff.
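Something as simple as one robocopy line per service would probably do; a sketch with hypothetical paths (/MIR mirrors the target to the source, /XF excludes the listed file patterns):

robocopy C:\build\MyService \\testbox\wwwroot\MyService /MIR /XF *.pdb

But I suspect there is a cleaner approach than hand-maintaining a line like that for every service.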
Well, there isn't anything stopping you from using a web deployment project for hosting your WCF class library. The SVC file will be picked up by IIS and routed appropriately. We use a standard deployment project and a custom action to create the IIS vroot, so that we have finer control over the settings, but a standard web deployment project will do the job as well.
Unless you are running under IIS7, as far as IIS is concerned it's just standard content that has its own handler. When you get to Windows 2008 / Windows 7 Beta, things change a bit, as those versions have a very different handler model.
I've found this post to be really helpful: http://msdn.microsoft.com/en-us/library/bb332338.aspx
This depends very much on the technologies you are using. On a previous project we used TFS with Team Build. The result was that the WCF projects were built into a folder structure that matched their deployment structure. Additional tasks in the MSBUILD script triggered a deployment script (written in Perl, I think). This took care of all deployment tasks, from deleting old folders, creating the new ones, creating databases and populating them with reference data, to deploying the service and web sites, and finally running installation verification scripts and publishing the results to a web site.
On the other hand, if all you've got is a hammer, then hammer away.

Best methodology for developing C# long-running processor apps

I have several different C# worker applications that run various continuous tasks: sending emails from a queue, importing new orders from the website database into the orders database, making database backups and restores, running data processing for OLTP -> OLAP, and other related tasks. Previously I released these as Windows services, but currently I release them as regular console applications. They are all based on a common task-runner framework I created, and I am happy with that; however, I am not sure what the best way is to deploy these types of applications. I like the console version because it is quick and easy, and it is possible to quickly see program activity and output. The downside is that the worker computer has several console screens running and it gets messy. On the other hand, the service method seems to take too long to deploy, and I have to go through event logs to see messages. What are some experiences/comments on this?
I like the console app approach. I typically have things set up so I can pass a switch like -unattended that suppresses the console screen.
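A minimal sketch of that pattern; the -unattended switch and the work loop are placeholders:

using System;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        // -unattended suppresses console output so the same binary can
        // run quietly under a scheduler or a service wrapper.
        bool unattended = args.Contains("-unattended");

        while (ProcessNextQueueItem())   // stand-in for the real work loop
        {
            if (!unattended)
                Console.WriteLine("Processed an item at {0}", DateTime.Now);
        }
    }

    static bool ProcessNextQueueItem()
    {
        return false;   // stub so the sketch compiles
    }
}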
A Windows service would be a good choice: it runs in the background whether or not you close the current session, and you can configure it to start automatically after a Windows restart, e.g. when applying patch updates to the server. You can log important messages to the Event Viewer or to a database table.
For a thing like this, the standard way of doing it is with Windows services. You want the service to run under a network account so it won't require a logged-in user.
I worked on something a few years ago that had similar issues. Logically I needed a service, but sometimes I needed to see what was going on, and generally I wanted a history. So I developed a service which did the work; any time it wanted to log, it called its subscribers (implemented as an observer pattern).
The service registered its own data logger (writing to a database), and at run time the user could run a GUI which connected to the service using remoting to become a live listener!
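A rough sketch of that observer arrangement (the interface and service names are illustrative, not the original code):

using System;
using System.Collections.Generic;

// Any number of listeners can subscribe: a database logger by default,
// a live GUI over remoting when someone wants to watch.
public interface ILogObserver
{
    void OnMessage(string message);
}

public class WorkerService
{
    private readonly List<ILogObserver> observers = new List<ILogObserver>();

    public void Subscribe(ILogObserver observer)
    {
        observers.Add(observer);
    }

    private void Log(string message)
    {
        foreach (var observer in observers)
            observer.OnMessage(message);
    }

    public void DoWork()
    {
        Log("Import started at " + DateTime.Now);
        // ... the actual work ...
        Log("Import finished.");
    }
}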
I'm going to vote for Windows Services. It's going to get to be a real pain managing those console applications.
Windows service deployment is easy: after the initial install, you just stop the services and do an XCOPY. No need to run any complicated installers. It's only semi-complicated the first time, and even then it's just
installutil MyApp.exe
Configure the services to run under a domain account for the best security and easiest interop with other machines.
Use a combination of event logs (with Error, Warning, and Information) for important notifications, and just dump verbose logging to a text file.
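A small sketch of that split, assuming a hypothetical source name and log path (creating an event source requires admin rights, so it is usually done once at install time):

using System;
using System.Diagnostics;
using System.IO;

class Logging
{
    const string Source = "MyWorkerService";   // hypothetical source name

    static void Main()
    {
        // Register the event source once (requires elevation).
        if (!EventLog.SourceExists(Source))
            EventLog.CreateEventSource(Source, "Application");

        // Important notifications go to the event log...
        EventLog.WriteEntry(Source, "Order import failed: connection timeout.",
            EventLogEntryType.Error);

        // ...while verbose detail is just appended to a text file.
        File.AppendAllText(@"C:\Logs\worker.log",
            DateTime.Now + " processed batch" + Environment.NewLine);
    }
}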
Why not get the best of all worlds and use something like:
http://topshelf-project.com/
It will allow you to run your program as a command-line app or as a Windows service.
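This is essentially the canonical Topshelf setup; TaskRunner here is a placeholder for your own task-runner class:

using Topshelf;

public class TaskRunner
{
    public void Start() { /* kick off the worker loop */ }
    public void Stop()  { /* shut down cleanly */ }
}

class Program
{
    static void Main()
    {
        // Runs as a console app by default; "MyApp.exe install"
        // registers the same binary as a Windows service.
        HostFactory.Run(x =>
        {
            x.Service<TaskRunner>(s =>
            {
                s.ConstructUsing(name => new TaskRunner());
                s.WhenStarted(tr => tr.Start());
                s.WhenStopped(tr => tr.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("TaskRunner");
            x.SetDescription("Continuous background task runner");
        });
    }
}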
I'm not sure if this applies to your applications or not, but when I have console applications that are not dependent on user input, or the kind that just do their job and quit, I run them on a virtual server. This way I don't see a screen popping up while I'm working, and virtual servers are easy to create and restart.
We regularly use Windows services as the background processes. I don't like command-line apps, as you need to be logged into the server for them to run. Services run in the background all the time (assuming they're set to auto-start). They're also trivial to install with the sc.exe command-line tool that's built into Windows. I like it better than the bloat-ware that is installutil.exe. Of course installutil does more, but I don't need what it does; I just want to register my service.
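For reference, registering a service with sc.exe is a one-liner (the service name and path are placeholders; note that the spaces after binPath= and start= are required by sc's parser):

sc create MyWorkerService binPath= "C:\Services\MyWorker.exe" start= auto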
We've also created an infrastructure where we have a generic service EXE that loads DLLs based on an interface definition, so adding a new "service" is as simple as dropping in a new DLL and restarting the service host.
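A sketch of that plugin-style host; the ITask interface and scan logic are illustrative, not our actual implementation:

using System;
using System.IO;
using System.Linq;
using System.Reflection;

public interface ITask
{
    void Start();
}

public static class TaskLoader
{
    // Scan a folder for DLLs and start anything implementing ITask,
    // so adding a "service" is just dropping in a new assembly.
    public static void StartAll(string pluginDir)
    {
        foreach (var dll in Directory.GetFiles(pluginDir, "*.dll"))
        {
            var taskTypes = Assembly.LoadFrom(dll).GetTypes()
                .Where(t => typeof(ITask).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface);

            foreach (var type in taskTypes)
                ((ITask)Activator.CreateInstance(type)).Start();
        }
    }
}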
However, we have started to move away from services. The problem we have with them is that they lock the DLLs (for obvious reasons), so it's a pain to upgrade them: we need to stop, upgrade, and then restart. Not hard, but additional steps. Instead we're moving to special "pages" in our ASP.NET apps that run the actual background jobs we need done. There's still a service, but all it does is invoke the ASP.NET pages, so it doesn't lock any of our DLLs. Then we can replace the DLLs in the ASP.NET bin directory, and the normal ASP.NET rules for app-domain restart kick in.