I'm part of a distributed development team. We all work through terminal services, accessing a remote server where our applications are located.
We're working on a project in which a client application consumes a WCF service, which exposes all the business logic functionality.
In our development process, a developer is often asked to develop an entire use case from user interface to database access, including the service and the business logic.
In such cases the developer must be able to debug the server-side functions/methods she or he has built for a given use case. The problem is that the service must be running, and when another developer needs to debug his or her own work, an exception is thrown (I think it is 'AddressAlreadyInUseException', but I'm not sure) and the second developer cannot do any debugging against the service. This happens even though we (of course) have different Windows usernames and therefore work in different sessions.
It's still possible for the client app to keep working against the 'original' service instance, since we catch the exception at the service, but debugging is impossible. And if the first developer stops the WCF service, the app fails.
I would like to know whether you have any recommendation for us. Maybe there is some tool available (even a paid one) that could somehow isolate each developer's workspace on the server... or maybe we just need to change something in the way we work.
I would be very grateful for any kind of advice or clue.
Best regards,
Gonzalo
I would recommend that each developer have their own copy of the server services.
When we develop, each developer has a full environment on their machine. As things are completed, they are checked in to the version control system. When the other developers get the latest version, the new functionality spreads to the rest of the team.
If I understand your setup, all developers are working against the same server; in that case a programming error by one developer will stop all development.
Hey, the communication here goes over IP: if a service or process binds a listener on a port, no other service or process can bind that IP port a second time.
That is the reason the exception is thrown.
In Citrix you have the Virtual IP configuration, which can give each session its own address.
You can also consider placing a VM on the server that serves only one developer. This would also solve the problem.
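If Citrix Virtual IP or per-developer VMs are not an option, a lighter-weight workaround (assuming the service can be self-hosted during development and each developer can pick a free port) is to give every developer their own base address. A rough sketch in C#, with the contract and the port convention invented purely for illustration:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IBusinessService
{
    [OperationContract]
    string Ping();
}

public class BusinessService : IBusinessService
{
    public string Ping() { return "pong from " + Environment.UserName; }
}

class DevHost
{
    static void Main()
    {
        // Hypothetical convention: each developer sets DEV_SERVICE_PORT in their
        // own session, so two hosts never try to bind the same address.
        int port = int.Parse(Environment.GetEnvironmentVariable("DEV_SERVICE_PORT") ?? "8731");
        Uri baseAddress = new Uri("http://localhost:" + port + "/BusinessService");

        using (ServiceHost host = new ServiceHost(typeof(BusinessService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IBusinessService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Listening on " + baseAddress);
            Console.ReadLine(); // keep the host alive while debugging
        }
    }
}

Each developer's client config would then point at his or her own port, so two sessions never fight over the same endpoint.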
Advanced Attacks Detection in a Platform-as-a-Service (PaaS) Environment
In the first part of this project, I am supposed to monitor incoming packets on a web service, accept only HTTP and HTTPS (TCP) packets for later analysis, and drop the rest.
I was thinking of doing this in Java, because I think it is a very flexible and complete language and it is present in every PaaS environment. So my idea is to build a simple web page in JSP/JSF with a bean to handle this first step of the project.
This is where I need some guidance, because I have started considering libpcap Java wrappers like jNetPcap, Jpcap and Pcap4J, but none of them is able to drop packets.
Setting Java aside, I have also read about other libraries: libnet, libdnet and libcrafter.
libnet cannot handle the task.
libdnet has network firewall rule manipulation capabilities, but it is a very old library and I am not sure it can integrate with iptables.
libcrafter looks the best, because it is an actively maintained project and it allows the use of iptables rules in code.
And, of course, working directly with netfilter would be the ideal scenario.
But to work with libcrafter or netfilter while keeping my simple idea of a web service with a Java bean, I would have to write my own Java wrapper via JNI, which I assume is not a simple task.
Now, what is raising many doubts in my mind is the fact that this has to be done in a PaaS environment, and the PaaS providers do not all have the same restrictions. Some, like AWS and Microsoft Azure, are more flexible and let you choose and manage a VM with the OS distribution you want. Others, like OpenShift, BlueMix or Cloud Foundry, only let you define the programming language and the application server for a project, and that's it. So one might not have permission to install libraries or control the network and transport layers to manage the packets, since the whole OS administration is handled by the provider.
Considering only the main purpose of this project, which is managing the packet flow directed at a domain hosted in a PaaS environment, without the help of other servers such as TCP proxies, I desperately need someone to point me to a direction to start from. With that, I can dig as deep as needed to find a solution. Please help!
Thank you very much for your time and consideration.
I have a VB6/MySQL client-server desktop application which is distributed as a setup file.
It uses a DLL for all logical operations as well as database operations. The EXE and the DLL are installed on the server as well as on the client machines. When I say server, I only mean that the database resides on that machine; there is no other difference in the EXE or DLLs.
Since all the database operations are done in the DLL, performance is lower when connecting from a client machine. It is not possible now to move all the logic into the database.
Is it possible to store the DLL on the server machine only and have the client machines use that same DLL, so that the database connection is always made from the server itself?
Is converting the DLL to a Windows service a possible solution for this?
How can I convert it to a service?
And finally, if it is possible to make the DLL act as a service, what would the connection issues be?
You appear to be trying to rediscover n-tier application development.
The usual way this would be done using VB6 within a LAN would be to create an ActiveX EXE instead of a DLL so you can use DCOM. However DCOM isn't something you'd want to expose over the Internet.
For such cases it is more typical to use a commonly-open-port protocol such as HTTP or HTTPS. Almost everyone has firewall settings permitting outbound HTTP and HTTPS connections and most of the major Web servers undergo regular hardening to make them safer to expose to the Internet.
The classic way to do this with VB6 was to use IIS to host the Remote Data Service, which uses a form of Web Service "under the covers" where your program doesn't deal with the gory details. However this is a deprecated approach, and today configuring IIS and the RDS components can be a chore since they are locked down hard by default.
This leaves you with such things as the deprecated SOAP Toolkit or 3rd party tools such as those in the PocketSOAP suite... or you can roll your own.
Doing this from scratch can be a bit of work but is more flexible, allowing REST instead of SOAP which can have advantages in itself. You could use whatever Web server you choose that can work with VB6 (via CGI, etc.).
The hardest approach to justify might seem the simplest on the surface: create your own protocol over TCP and write a Windows Service. This can be the most flexible of all but it can be more work than other options and you are on your own as far as making it and keeping it secure. You'll probably also face firewall issues depending on where your clients are and what the local firewall policies are there.
When we could rely on DCOM the issues were relatively small aside from security configuration headaches. With the Internet in the picture it is an entirely different story.
This really isn't something you undertake casually. Even the assumption that your database is safe to expose to the Internet is naive and should be rethought.
I need to debug a SharePoint WCF service that is deployed for SharePoint 2010. However, a colleague needs to debug another SharePoint service deployed on the same physical machine. If we debug at the same time, strange things occur with the Visual Studio debugger. For example, his debugger would break at breakpoints I have set, or I am seeing exceptions raised by his code. Mind you, we are debugging different services in different solutions. From the information I have gathered so far, this behaviour occurs because there is only one w3wp process per application pool, and both Visual Studio debugger instances attach to it.
So I figured I should try running my service in another application pool to get a different w3wp.exe to attach to. Here is what I tried, but I am not sure if what I attempted makes any sense, please clarify:
IIS Manager shows that there are two different SharePoint application pools (excluding the one for Central Administration) and a site for each. So I tried deploying my service using the other application pool by setting the deployment location to the URL of the other site. However, the virtual _vti_bin directory of the service still maps to the same physical directory ...\Web Server Extensions\14\ISAPI\. Deploying from Visual Studio works, but getting a service reference does not. Trying to open <url>/_vti_bin/MyService.svc/MEX shows an error page telling me that there is already a binding instance associated with the URL. So I guess this is either not the way to do this, or it is simply not possible to "isolate" services in this way. I am very hesitant to just trial-and-error with IIS Manager or SharePoint Central Administration settings, because I feel I don't know enough to avoid screwing things up.
Could someone tell me how I can solve this?
The URL you specify when deploying in Visual Studio can be misleading. If you have a sandboxed solution, it gets deployed to this location. If you have a farm solution, it gets deployed centrally, and the URL is only used to figure out which application pool to recycle. If you have web-application-specific settings in the solution (i.e. SafeControls), these will be applied to the web application hosting the URL.
The _vti_bin directory is available to every site in the whole farm, as is _layouts. Since a service will be exposed through multiple URLs (one for each site), the SharePoint team has created custom factory classes to make this possible. Check out one of the built-in .svc files and you will see that it uses a special factory class. Use this in your .svc file to expose your service in all sites.
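For reference, the directive in such an .svc file looks roughly like the sketch below; the service type is a placeholder, and the exact factory class and assembly reference should be copied from one of the built-in .svc files under the ISAPI folder rather than trusted from memory:

<%@ ServiceHost Language="C#" Debug="true"
    Service="MyNamespace.MyService, $SharePoint.Project.AssemblyFullName$"
    Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

With a factory like that, the service is hosted relative to whichever site's _vti_bin it is requested through, instead of being tied to one fixed address.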
As for the debugging, it's never a good idea to have multiple developers using the same machine. If you really want to do it, I suggest using two web applications with different application pools. That way each developer has their own process to attach to. If you use different accounts for the application pool, it makes it easier to find the correct one in the 'attach process' dialog.
I have several different C# worker applications that run various continuous tasks: sending emails from a queue, importing new orders from the website database into the orders database, making database backups and restores, running data processing for OLTP -> OLAP, and other related tasks. I used to release these as Windows services, but currently I release them as regular console applications. They are all based on a common task runner framework I created, and I am happy with that; however, I am not sure what the best way to deploy these types of applications is. I like the console version because it is quick and easy, and it is possible to quickly see program activity and output. The downside is that the worker computer ends up with several console windows open and it gets messy. On the other hand, the service method seems to take too long to deploy, and I have to go through the event logs to see messages. What are some experiences/comments on this?
I like the console app approach. I typically have things set up so I can pass a switch like -unattended that suppresses the console screen.
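As a minimal sketch of what I mean (the -unattended switch and the logging destination are just illustrative):

using System;
using System.Linq;

static class Program
{
    static bool unattended;

    static void Main(string[] args)
    {
        // When -unattended is passed, nothing is written to the console;
        // output would instead go to a file or the event log.
        unattended = args.Contains("-unattended");

        Log("worker starting");
        // ... run the actual queue/import/backup work here ...
        Log("worker finished");
    }

    static void Log(string message)
    {
        if (!unattended)
            Console.WriteLine(DateTime.Now + "  " + message);
    }
}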
A Windows service would be a good choice: it keeps running in the background even if you close the current session, and you can configure it to start automatically after a Windows restart, for example when a patch update is performed on the server. You can log important messages to the Event Viewer or to a database table.
For a thing like this, the standard way of doing it is with Windows services. You want the service to run under a network account so it won't require a logged-in user.
I worked on something a few years ago that had similar issues. Logically I needed a service, but sometimes I needed to see what was going on, and generally I wanted a history. So I developed a service that did the work; any time it wanted to log, it notified its subscribers (implemented as an observer pattern).
The service registered its own data logger (writing to a database), and at run time the user could run a GUI which connected to the service using remoting to become a live listener!
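Stripped of the remoting plumbing, the core of that idea looks something like this (all names invented for the sketch):

using System;
using System.Collections.Generic;

// Anything that wants to see log output implements this.
public interface ILogListener
{
    void OnMessage(DateTime when, string message);
}

public class LogPublisher
{
    private readonly List<ILogListener> listeners = new List<ILogListener>();

    public void Subscribe(ILogListener listener) { listeners.Add(listener); }

    // The service calls this wherever it wants to log something.
    public void Publish(string message)
    {
        foreach (ILogListener l in listeners)
            l.OnMessage(DateTime.Now, message);
    }
}

// The service registers a database-backed listener at startup; a GUI that
// connects later (over remoting, WCF, etc.) simply subscribes as well.
public class ConsoleListener : ILogListener
{
    public void OnMessage(DateTime when, string message)
    {
        Console.WriteLine("{0:T}  {1}", when, message);
    }
}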
I'm going to vote for Windows Services. It's going to get to be a real pain managing those console applications.
Windows Service deployment is easy: after the initial install, you just turn them off and do an XCOPY. No need to run any complicated installers. It's only semi-complicated the first time, and even then it's just
installutil MyApp.exe
Configure the services to run under a domain account for the best security and easiest interop with other machines.
Use a combination of event logs (with Error, Warning, and Information) for important notifications, and just dump verbose logging to a text file.
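For completeness, the installutil step above only works if the exe contains an installer class; a minimal sketch (the service name and account choice are just examples):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

// installutil scans the exe for a class marked [RunInstaller(true)].
[RunInstaller(true)]
public class MyAppInstaller : Installer
{
    public MyAppInstaller()
    {
        ServiceProcessInstaller process = new ServiceProcessInstaller();
        process.Account = ServiceAccount.User; // prompts for the domain account at install time

        ServiceInstaller service = new ServiceInstaller();
        service.ServiceName = "MyApp";         // must match ServiceBase.ServiceName in the exe
        service.DisplayName = "MyApp worker";
        service.StartType = ServiceStartMode.Automatic;

        Installers.Add(process);
        Installers.Add(service);
    }
}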
Why not get the best of all worlds and use something like:
http://topshelf-project.com/
It will allow you to run your program as a command-line app or as a Windows service.
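A minimal sketch of what the Topshelf hosting code looks like, based on the project's quick-start pattern (the worker class and service names here are invented):

using System;
using Topshelf;

public class QueueWorker
{
    public void Start() { Console.WriteLine("worker started"); }
    public void Stop()  { Console.WriteLine("worker stopped"); }
}

class Program
{
    static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<QueueWorker>(s =>
            {
                s.ConstructUsing(name => new QueueWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("QueueWorker");   // names are just examples
            x.SetDisplayName("Queue Worker");
            x.SetDescription("Processes the email queue.");
        });
    }
}

Running the exe directly behaves like a console app; running it with the install and start arguments registers and starts it as a Windows service.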
I'm not sure if this applies to your applications or not, but when I have console applications that are not dependent on user input, or that just do their job and quit, I run them on a virtual server. That way I don't see a window popping up while I'm working, and virtual servers are easy to create and restart.
We regularly use Windows services as the background processes. I don't like command-line apps for this because you need to be logged into the server for them to run. Services run in the background all the time (assuming they're set to auto-start). They're also trivial to install with the sc.exe command-line tool that ships with Windows. I like it better than the bloat-ware that is installutil.exe. Of course installutil does more, but I don't need what it does. I just want to register my service.
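The registration itself is a one-liner per service; something along these lines (service name and path are made up):

sc create EmailQueueWorker binPath= "C:\Services\EmailQueueWorker.exe" start= auto
sc start EmailQueueWorker

Note the space after binPath= and start= ; sc.exe insists on it.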
We've also created an infrastructure where we have a generic service .exe that loads DLLs based on an interface definition, so adding a new "service" is as simple as dropping in a new DLL and restarting the service host.
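The plumbing for that kind of host is fairly small; a rough sketch (the interface and folder layout are assumptions, not our actual code):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// Hypothetical contract each worker DLL implements.
public interface IWorkerTask
{
    string Name { get; }
    void Run();
}

public static class TaskLoader
{
    // Loads every DLL in the folder and instantiates the IWorkerTask types it finds.
    public static IWorkerTask[] LoadAll(string folder)
    {
        return Directory.GetFiles(folder, "*.dll")
            .SelectMany(path => Assembly.LoadFrom(path).GetTypes())
            .Where(t => typeof(IWorkerTask).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface)
            .Select(t => (IWorkerTask)Activator.CreateInstance(t))
            .ToArray();
    }
}

Assembly.LoadFrom is also what keeps the files locked, which is exactly the upgrade pain described below.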
However, we have started to move away from services. The problem we have with them is that they lock the DLLs (for obvious reasons), so it's a pain to upgrade them. We need to stop, upgrade and then restart. Not hard, but additional steps. Instead we're moving to special "pages" in our ASP.NET apps that run the actual background jobs we need done. There's still a service, but all it does is invoke the ASP.NET pages, so it doesn't lock any of our DLLs. Then we can replace the DLLs in the ASP.NET bin directory, and the normal ASP.NET rules for app-domain restart kick in.
I've created a service for one of my apps. How do I create a system tray component in VB.NET that can be used to monitor the progress of the service? Is there a way to have this installed over TCP/IP on multiple client machines, such as our employees' machines?
We do exactly that here, with the server running a really basic HTTP server on a configurable port on a separate thread that returns status in an XML format (nothing else, just that) -- the client just uses a web request to get the XML, before parsing it and displaying it appropriately.
This approach also allows for future extensibility (detailed status, sending service control commands, adding an association to an XSLT file elsewhere for use with a normal web browser, etc.)
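As a sketch of the server half of that (the port, URL path and XML shape are all just examples), something like this running inside the service would do:

using System;
using System.Net;
using System.Text;
using System.Threading;

public class StatusServer
{
    private readonly HttpListener listener = new HttpListener();

    public StatusServer(int port) // the port would come from the service's config
    {
        listener.Prefixes.Add("http://+:" + port + "/status/");
    }

    public void Start()
    {
        listener.Start();
        new Thread(Loop) { IsBackground = true }.Start();
    }

    private void Loop()
    {
        while (listener.IsListening)
        {
            HttpListenerContext context = listener.GetContext();
            // Made-up payload; the real one would report queue depth, last run time, etc.
            string xml = "<status><state>Running</state><updated>" +
                         DateTime.UtcNow.ToString("o") + "</updated></status>";
            byte[] bytes = Encoding.UTF8.GetBytes(xml);
            context.Response.ContentType = "text/xml";
            context.Response.OutputStream.Write(bytes, 0, bytes.Length);
            context.Response.Close();
        }
    }
}

On the client side, a single new WebClient().DownloadString("http://server:port/status/") call gets the XML to parse and display.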
You could use WCF for this. Using WCF, your service would open up an endpoint which would expose status information to callers. You could then build a tray icon application that can be deployed to the employees' workstations. The tray icon application could periodically poll the WCF service that your Windows service is hosting and get status information. I know @Johan mentioned Remoting already, and this is a similar approach. I'd recommend WCF though, as the programming API is simpler, IMHO, and WCF will give you more flexibility with regard to network transports, etc.
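A stripped-down sketch of that shape (contract, binding and address are assumptions; error handling omitted):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IWorkerStatus
{
    [OperationContract]
    string GetStatus();   // a real version might return a data contract instead
}

public class WorkerStatus : IWorkerStatus
{
    private static readonly DateTime started = DateTime.UtcNow;
    public string GetStatus() { return "Running since " + started.ToString("u"); }
}

// Inside the Windows service's OnStart, roughly:
//   statusHost = new ServiceHost(typeof(WorkerStatus),
//       new Uri("net.tcp://localhost:9000/WorkerStatus"));   // port is an example
//   statusHost.AddServiceEndpoint(typeof(IWorkerStatus), new NetTcpBinding(), "");
//   statusHost.Open();
// The tray icon app then polls it with a ChannelFactory<IWorkerStatus> on a timer.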
I guess your question is not about how to actually do the tray bar part, but about how to communicate with the service to get the information you want to show in the monitor/tray bar program?
It can be done in many ways. The Windows API is one: using SendMessage/PostMessage/GetMessage is one way to let two running programs communicate with each other without having to store anything in files or databases first.
DDE is another way. If it needs to work over the network there is something called NetDDE, but I haven't done anything with NetDDE so I can't help there.
But about the API and DDE, feel free to ask more questions if you want some clarification.
I'll take the second question: Is there a way to remotely install software on client machines?
Yes. However it is very dependent on your environment. For example, if you have an Active Directory domain, you can use group policy to force installation of software on the client boxes.
If you don't like that or if you aren't on active directory, you can buy something like Altiris to push installs down.
Another option would be to use login scripts which would run a custom program to detect if your program is installed and take appropriate action. But then you are probably better off buying Altiris.
For the communication part, I have used Remoting before, and it works very well. With a little bit of configuration, you can even get it working against another machine.