I'm trying to publish some HL7 schemas (with quite a few includes) as WCF services using the "WCF Service Publishing Wizard". The wizard seemingly runs and completes just fine, creating a service that exposes the schemas I want. But when I try to browse the newly created service, I get "Server Application Unavailable"... I looked in the Event Viewer and noticed the error message "System.OutOfMemoryException". I tested once more while watching Task Manager, and I noticed that aspnet_wp.exe was consuming more than 1 GB of RAM before it was terminated (the application pool was probably recycled after reaching the maximum memory consumption allowed).
I was quite puzzled as to why this happened, so I decided to publish the same schema as an ASMX web service using the "Web Services Publishing Wizard" to see if it would make any difference. After running the wizard I tried to browse the service, and it worked just fine with no problems whatsoever. I looked at the generated WSDL definition, which was huge; all the referenced schemas were added as inline schemas, not as includes or imports.
This led me to believe that it could be an issue with the generation of the WSDL, given how many includes the published schema has, but I'm not at all sure yet whether this is the case...
Has anyone experienced similar problems trying to publish schemas as WCF services?
I welcome any suggestions that could point me in the right direction on this issue.
Thanks.
-M.Papas
This problem is definitely a memory issue with the WSDL generation tool. Publishing complex or even semi-complex schemas as Web Services or WCF Services usually ends in out-of-memory exceptions. I've run into this a few times doing an SAP IDoc demo, and it's just that the schema is too complex for the WSDL tool. Hope that helps.
I'm migrating a service-based integration platform from .NET Framework to .NET Core. The original versions of the integration platform have proven very successful, and compared to replacing it with an off-the-shelf integration solution, it has a far better ROI.
So after redeveloping the code, all tests have been working very well, and I have achieved higher levels of performance with a single IIS server than I could with two IIS servers running the original versions.
Except... if I go over ~3 messages/sec with multiple clients, I start seeing duplicate GUID key errors when trying to save instrumentation data to my DB. All of these errors are generated by the on-ramp service. The on-ramp places the message on a queue. The messages are then consumed by an off-ramp service and sent to the destination (for this load test the destination is a file folder).
Even though the off-ramp is also running on the same server as the on-ramp, we do not see any duplication errors generated by the off-ramp. I suspect this is because the queue creates a linear process, so only one instance of the off-ramp is running at any time, versus the on-ramp, which has up to 4 clients firing concurrent messages at its API.
Initially I thought the issue was caused by a static global variable class I had implemented crossing process boundaries. But I would expect the issue to be seen with the off-ramp as well, as the service architecture for both is virtually identical.
Summary of thoughts on issue:
If it were a pure coding issue, errors would also happen at low messaging rates.
The errors would also be seen on the off-ramp if the GUID duplication were down to chance.
The on- and off-ramps are both running on the same server, but duplication is only seen on the on-ramp, i.e. the on-ramp is not impacting the off-ramp and vice versa.
The duplication has to be due to memory shared between concurrently running on-ramp instances, a result of the multiple-client scenario.
To try to resolve the issue I removed the static global variable class, but I'm still seeing the duplication errors.
This issue was never observed in the original IIS implementation (after millions of messages processed). I suspect the issue is with process isolation in the IIS-hosted Kestrel .NET Core service host. From what I have read there is good isolation between different apps (based on IIS path) but not within the same app, i.e. within the same IIS app pool. This could explain why .NET Core does not support multiple apps running in the same IIS app pool.
If anyone has a good idea how I can achieve process isolation between instances of the same app running in the same IIS app pool, I would appreciate your thoughts/suggestions.
After running more tests I was able to resolve the issue. The problem was with the scope of the instrumentation variable. At low rates there was never a problem, but at high throughput, the same instrumentation object was being accessed by a second instance of the process.
The issue was difficult to track down due to the short-lived nature of the integration services.
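For anyone hitting the same thing, the fix boils down to giving each request its own instrumentation object instead of sharing one. A rough sketch of the difference using ASP.NET Core's built-in DI (the type name is illustrative, not my actual code):

using System;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical per-message instrumentation record (illustrative name).
public class InstrumentationContext
{
    public Guid MessageKey { get; } = Guid.NewGuid();
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // A singleton (or a static field) is shared by every concurrent
        // request, so two on-ramp calls can save the same MessageKey:
        // services.AddSingleton<InstrumentationContext>();

        // Scoped gives each request its own instance, which removes the
        // duplicate-key collisions under load:
        services.AddScoped<InstrumentationContext>();
    }
}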
Thanks to anyone who reviewed the question.
Martin
I cannot seem to find any combination of tutorials or information online to set me in the right direction, so I'm hoping the community can help me out!
I have some experience with WCF in the past (mostly simple/default HTTP implementations), but nothing at the level I am attempting with my current architecture. Unfortunately, 99% of the info I'm finding on WCF is a couple of years old, and most of it does not address Azure-specific details. Most books were published back in 2007 and do not address the newer IDE/tooling or WCF updates since that time. Needless to say, I have a few open questions and would love to get pointed in the right direction after exhausting Google, Stack Overflow, MSDN & YouTube!
In a nutshell:
I want to centralize all business logic behind a single WCF service on Azure (it will be load balanced on a Cloud Service).
I have a number of web clients that will be consuming this service.
All the clients are C#/.NET MVC projects that I control (I do not need or want the WCF endpoints to be publicly available).
I would prefer to whitelist access to the endpoints rather than implement authentication (for performance & simplicity).
Here are my questions and potential speed bumps:
Is WCF the right solution? Is there a newer, better technology I should be using?
If I use a Cloud Service for my WCF solution, is WebRole or WorkerRole my best option, and why? Is hosting the service as a Website an option? (It would save cost.)
In my research I've landed on the fact that the NetTCP binding is faster than the default HTTP bindings. But I can't find a simple example of how to set this up using VS 2013/.NET 4.5/an Azure Cloud Service. Is there a good tutorial for this? Also, I'm assuming named pipes are not an option for me?
Since all the consumers of the WCF service will be running on Azure Websites, is NetTCP still possible? How do I create service references? I'm assuming I just use the NetTCP endpoint address, but what about whitelisting for security within the Azure infrastructure?
How can my Azure Website clients connect over TCP within Azure the fastest? Affinity groups don't seem to be an option for Websites; should I abandon this and deploy all my clients as WebRoles so they can share an affinity group with my WCF service? Is Azure smart enough to know that the website is calling a machine within the same region and keep the connection within the region? How is this ensured?
I will have debug, stage and production environments for my WCF service. What is the best way to switch between the various endpoints in my Azure Website client(s)? I'd prefer to do it during startup in my Global.asax file using C#, rather than in my Web.config. I only intend to keep one setting in my Web.config for "Environment". Ideally I will have a switch statement in my startup file that determines which WCF environment endpoint to use for my service references, along the lines of the sketch below.
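Roughly what I have in mind (addresses and names are placeholders, not a working setup):

using System;
using System.Configuration;
using System.ServiceModel;

public static class ServiceEndpoints
{
    // Called from Application_Start: picks the WCF endpoint address
    // based on the single "Environment" appSetting in Web.config.
    public static EndpointAddress Resolve()
    {
        string environment = ConfigurationManager.AppSettings["Environment"];
        switch (environment)
        {
            case "Debug":
                return new EndpointAddress("net.tcp://localhost:808/MyService.svc");
            case "Stage":
                return new EndpointAddress("net.tcp://stage.example.com:808/MyService.svc");
            case "Production":
                return new EndpointAddress("net.tcp://prod.example.com:808/MyService.svc");
            default:
                throw new InvalidOperationException("Unknown environment: " + environment);
        }
    }
}

The generated client proxy would then be constructed with something like new MyServiceClient(new NetTcpBinding(), ServiceEndpoints.Resolve()).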
My apologies for the array of questions. I was thinking about breaking this out into multiple posts, but keeping them in the same context seemed to be the only way to ensure that I am communicating the scope of my inquiry.
Thank you.
I found a great series of videos on Microsoft Virtual Academy that answers all of my questions:
Azure & Services
The key videos in this series are 1, 2 & 7. Here is a direct link to each one:
Intro to WCF
WCF on Azure
Advanced Topics
I am working on a project in which I want to use a Windows Workflow 4 State Machine. The Visual Studio solution templates and most guidance seem to steer everything towards hosting as a service in IIS that is created dynamically from send and receive activities within the workflow.
However, I would prefer not to use the send and receive activities, and instead host the workflow in my own WCF service. That would allow me to use a Windows Service instead of IIS, use other bindings like TCP instead of HTTP, and create my own interface instead of exposing MEX. In addition, it would be portable to any other hosting arrangement, such as a WPF app or a console app.
This feels a lot more flexible to me. Somehow, having service operations as part of the workflow seems like pretty tight coupling of two things that aren't that related. Is there any downside to my approach? I'm new to WF so I might be missing something.
Depending on the kind of workflows you are running, you might need to write quite a bit of plumbing code that workflow services would otherwise provide for you.
Things to consider:
Are your workflows long lived?
Are you sending multiple messages to the same workflow?
Do your workflows need to survive a host restart?
Are you using Delay activities to respond to timeouts?
Do you need to be able to retry actions after error situations?
Lots of these things are taken care of automatically with a WF service and need your attention otherwise. It is certainly doable, I have done it in the past, but be aware of what you are losing.
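To give a feel for the plumbing involved, here is a minimal sketch of self-hosting with WorkflowApplication and wiring up persistence by hand (the activity type and connection string are placeholders):

using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Threading;

class Program
{
    static void Main()
    {
        // Placeholder: in practice this is your state machine activity.
        var app = new WorkflowApplication(new MyStateMachineActivity());

        // Persistence comes for free with workflow services, but must be
        // wired up explicitly when self-hosting.
        app.InstanceStore = new SqlWorkflowInstanceStore(
            "Server=.;Database=WFInstanceStore;Integrated Security=True");

        // Unload idle instances so long-lived workflows (e.g. ones using
        // Delay activities) survive a host restart.
        app.PersistableIdle = e => PersistableIdleAction.Unload;

        var done = new ManualResetEvent(false);
        app.Completed = e => done.Set();

        app.Run();
        done.WaitOne();
    }
}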
I have a WCF service that does some document conversions and returns the document to the caller. When developing locally, the service is hosted on the ASP.NET Development Server, a console application invokes the operation, and it executes within seconds.
When I host the service in IIS via a .svc file, two of the documents work correctly; the third one bombs out. It begins to construct the Word document using the Open XML SDK, but then just dies. I think this has something to do with IIS, but I cannot put my finger on it.
There are a total of three types of documents I generate. In a nutshell, this is how it works:
SQL 2005 DB/IBM DB2 -> WCF service written by another developer to expose the data. This service only has one endpoint, using basicHttpBinding.
My service invokes his service, gets the relevant data, uses the Open XML SDK to generate a Microsoft Word document, saves it on a server, and returns the path to the user.
The word documents are no bigger than 100KB.
I am also using basicHttpBinding although I have tried wsHttpBinding with the same results.
What is amazing is how fast it is locally, and even more so that two of the documents generate just fine; it's the third document type that refuses to work.
To the error message:
An error occurred while receiving the HTTP response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details.
I have spent the last 2 days trying to figure out what is going on. I have tried everything, including changing maxReceivedMessageSize, maxBufferSize, maxBufferPoolSize, etc. to large values. I even included:
<httpRuntime maxRequestLength="2097151" executionTimeout="120"/>
To see whether IIS was choking because of that.
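For reference, the equivalent quota changes in code look roughly like this (the values are illustrative, not a recommendation):

using System;
using System.ServiceModel;

class BindingSetup
{
    // Raises the transport quotas usually implicated in large-message
    // failures. In buffered transfer mode, MaxBufferSize must match
    // MaxReceivedMessageSize.
    static BasicHttpBinding CreateBinding()
    {
        return new BasicHttpBinding
        {
            MaxReceivedMessageSize = 2097151,
            MaxBufferSize = 2097151,
            MaxBufferPoolSize = 2097151,
            SendTimeout = TimeSpan.FromMinutes(2),
            ReceiveTimeout = TimeSpan.FromMinutes(2)
        };
    }
}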
Programmatically the service does nothing special; it just constructs the Word documents from the data using the Open XML SDK. And like I said, locally all 3 documents work when invoked via a console app running against the ASP.NET dev server, i.e. http://localhost:3332/myService.svc.
When I host it on IIS and I try to get a Windows Forms application to invoke it, I get the error.
I know you will ask for logs, so yes, I have logging enabled on my host.
And there is no error in the logs; I am logging everything.
Basically I invoke two service operations written by another developer.
MyOperation calls HisOperation1 and then HisOperation2; both of those calls give me complex types. I am going to look at his code tomorrow, because he is using LINQ to SQL and there may be some funny business going on there. He is using a variety of collections, etc., but the fact that I can generate the exact same document, let's call it "Document 3", within seconds when the service is hosted locally is what is most odd. Why would it run on scaled-down Cassini and blow up on IIS?
From the log it seems that, after calling HisOperation1 and HisOperation2, the service just goes into la-la land and dies. There is an application pool (w3wp.exe) error in the Windows Event Log.
Faulting application w3wp.exe, version 6.0.3790.1830, stamp 42435be1, faulting module kernel32.dll, version 5.2.3790.3311, stamp 49c5225e, debug? 0, fault address 0x00015dfa.
It's classified as a .NET 2.0 Runtime error.
Any help is appreciated, the lack of sleep is getting to me.
Help me Obi-Wan Kenobi, you're my only hope.
I had this message appearing:
An error occurred while receiving the HTTP response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details.
And the problem was that the object I was trying to transfer was not [Serializable]. The object I was trying to transfer was a DataTable.
I believe the Word documents you were trying to transfer are also not serializable, so that might be the problem.
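One way around that is to return the document as raw bytes inside an explicit data contract; a minimal sketch (type and member names are made up for illustration):

using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative data contract: byte[] and string serialize cleanly,
// unlike richer types such as DataTable or live document objects.
[DataContract]
public class DocumentResult
{
    [DataMember]
    public string FileName { get; set; }

    [DataMember]
    public byte[] Content { get; set; }
}

[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    DocumentResult Convert(string documentId);
}

The service would read the generated .docx into Content with File.ReadAllBytes before returning.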
Yes, we'd want logs, or at least some idea of what you're logging. I assume you have both message and transport logging on at the WCF level.
One thing to look at is permissions. When you run under Cassini, the web server runs as the currently logged-in user. This hides any SQL or CAS permission problems (as, let's be honest, your account is usually a local administrator). As soon as you publish to IIS you are running under the application pool user, which is, by default, a lot more limited.
Try turning on IIS debug dumps and following the steps in KB919789.
FYI, I changed IIS 6 to work in IIS 5.0 isolation mode and everything works. Odd.
I had the same error when using an IEnumerable<T> DataMember in my WCF service. It turned out that in some cases I was returning an IQueryable<T> as an IEnumerable<T>, so all I had to do was add .ToList<T>() to my LINQ statements.
I changed the IEnumerable<T> to IList<T> to prevent making the same mistake again.
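The pitfall in sketch form (the entity and context names are invented for the example): returning the query itself hands WCF a deferred IQueryable<T>, which only executes during serialization, after the data context has been disposed.

using System.Collections.Generic;
using System.Linq;

public class OrderService
{
    public IEnumerable<int> GetOrderIds()
    {
        using (var db = new MyDataContext()) // hypothetical LINQ to SQL context
        {
            IQueryable<int> query = db.Orders.Select(o => o.Id);

            // return query;          // deferred: WCF enumerates it during
            //                        // serialization, after db is disposed
            return query.ToList();    // materialize while the context is alive
        }
    }
}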
I am looking at using MSMQ as a solution for asynchronous execution in my upcoming project. I want to know the differences between using WCF and frameworks like MassTransit, or even a hand-written MSMQ client, to place/read tasks on/off MSMQ.
Basically the application will be several websites (internal on the LAN or external over the Internet) reading/writing data through a service layer (be it WCF or a normal web service). This service layer will then do one or both of the following: 1) write data to the database, and/or 2) trigger the background process by placing a message on the queue (it can obviously also retrieve data from the database). A little agent (a Windows service) on the other side of the queue will monitor the queue and execute based on the task command.
This architecture will be quite easy to scale (add more queues and agents) and easy to implement compared to RPC or distributed execution or whatever. The agent processing doesn't need to be real-time, and the agent and service layer are separate applications, except that they share the common domain objects, repositories, etc.
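By a hand-written MSMQ client I mean something along these lines, a rough sketch using System.Messaging (the queue path is a placeholder):

using System.Messaging;

class TaskQueue
{
    // Placeholder queue path.
    const string Path = @".\private$\taskqueue";

    // Service layer side: drop a task command on the queue.
    public static void Enqueue(string taskCommand)
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            queue.Send(taskCommand);
        }
    }

    // Agent (Windows service) side: block until a task arrives.
    public static string Dequeue()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            using (var message = queue.Receive())
            {
                return (string)message.Body;
            }
        }
    }
}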
What do you think? Architecture suggestions for the above requirements are welcomed. Thank you!
WCF adds an abstraction over MSMQ. In fact, once you define compatible contracts (operations must be one-way), you can swap MSMQ out in the config, transparently. (For instance, you could switch to a normal HTTP or NetTcp binding.)
You should evaluate the other WCF benefits, like security and so on, to see how those fit in with your needs. Again, they should be reasonably transparent of the fact you're using MSMQ underneath. For instance, adding SOAP security and so on should "just work", independent of using MSMQ.
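A minimal sketch of the kind of contract this implies (names are illustrative); the same contract can then be pointed at netMsmqBinding, basicHttpBinding, or netTcpBinding purely in config:

using System.ServiceModel;

[ServiceContract]
public interface ITaskService
{
    // MSMQ transports require one-way operations: there is no reply
    // channel, the message is simply queued for the agent to consume.
    [OperationContract(IsOneWay = true)]
    void SubmitTask(string taskCommand);
}

// In config, an MSMQ endpoint for this contract would look roughly like:
//   <endpoint address="net.msmq://localhost/private/taskqueue"
//             binding="netMsmqBinding" contract="ITaskService" />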
(Although, IIRC, you still need to log in to the desktop on each machine that uses MSMQ, with the service account that will use MSMQ, to generate the certificate in the machine's local profile. And then it doesn't work very well from IIS 6, since user profiles aren't loaded. A real pain in general, but nothing to do with WCF specifically.)
Apart from that:
Have you looked at SQL Server Service Broker? After using MSMQ + WCF and SSSB, I think that SSSB is vastly easier to configure and manage. SSSB works with T-SQL commands over any SQL client (I use it from Mono, on Linux, with transactions). It'll also give you transactional send/receive, even remotely (I think MSMQ 4 now allows this). It really takes a lot of the pain away from message queuing, and if you're using SQL Server already...
SSSB is often overlooked since SQL Server Management Studio doesn't have GUI designers for it all, but it isn't hard and is a great option. The one downside is that if you want local send capability (i.e., queuing messages when the network is down), you'll need to run a local SQL Express instance.
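For a flavor of what using SSSB from code looks like, here is a rough sketch of sending a message via plain ADO.NET (all the Service Broker object names are hypothetical and must be created in T-SQL beforehand; the message type is assumed to have VALIDATION = NONE):

using System.Data.SqlClient;

class BrokerSender
{
    public static void Send(string connectionString, string body)
    {
        // Opens a dialog and sends one message on it; hypothetical
        // service, contract and message type names throughout.
        const string sql = @"
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Demo/SenderService]
    TO SERVICE '//Demo/ReceiverService'
    ON CONTRACT [//Demo/TaskContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE [//Demo/TaskMessage] (@body);";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@body", body);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}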
Your architecture seems sound and reasonable. However, you should consider using the WCF netMsmqBinding transport over hand-coded MSMQ classes. WCF wraps this common functionality in a nice programming model. Also, I believe there are some improvements in the protocol WCF uses compared to basic System.Messaging.
Have a look at the value-add over plain MSMQ:
http://readthedocs.org/docs/masstransit/en/latest/overview/valueadd.html
In summary, you get a lot of messaging concepts clearly presented in the API with MassTransit, to an extent you wouldn't get if you hand-coded it or used WCF.