I am new to WCF. My task is to create and maintain sessions in WCF.
I have a requirement in my project: I need a WCF service that is session-enabled. More than one client will contact this service, and the service has to deliver the information each client asks for.
For example: the service will hold a DOM object (here DOM means a database object holding, say, Employee information). Each client will ask for different information from the DOM object, and the service has to deliver it. The service should not go to the database on every client call, so we need to implement session management in the WCF service.
It would be of great help if someone could provide ideas, suggestions, or sample code for implementing this.
Thanks...
First I'll point out that it is usually a very bad idea to use sessions with WCF. Having too many sessions open will consume a lot of resources (e.g. memory and database connections). You mentioned that you are also storing database objects in the session; this is also likely to end up hurting you, as most databases only allow a limited number of sessions.
All that said, if you really need to use sessions, there is some info for configuring it on MSDN.
You can configure your binding to use sessions as follows:
<wsHttpBinding>
  <binding name="wsHttpBinding">
    <reliableSession enabled="true" />
  </binding>
</wsHttpBinding>
You can then mark your ServiceContract with SessionMode=SessionMode.Required:
[ServiceContract(Namespace="http://Microsoft.ServiceModel.Samples",
SessionMode=SessionMode.Required)]
public interface IMyService
{
...
}
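If you also need the per-client caching described in the question, the usual companion to SessionMode.Required is a per-session service instance that loads the data once. Here is a rough sketch under that assumption; the EmployeeService name, the dictionary cache, and LoadFromDatabase are placeholders, not taken from the question:

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IEmployeeService
{
    [OperationContract]
    string GetEmployeeInfo(int employeeId);
}

// PerSession: one instance of this class per client session, so the cached
// data is loaded once per client instead of on every call.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class EmployeeService : IEmployeeService
{
    // Hypothetical cached "DOM" object, populated when the session starts.
    private readonly Dictionary<int, string> _employeeCache = LoadFromDatabase();

    public string GetEmployeeInfo(int employeeId)
    {
        string info;
        return _employeeCache.TryGetValue(employeeId, out info) ? info : null;
    }

    private static Dictionary<int, string> LoadFromDatabase()
    {
        // Placeholder for the real database access.
        return new Dictionary<int, string>();
    }
}

Bear in mind the caveat above: each open session pins one of these instances (and its cache) in server memory until the session ends.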
I am new to WCF and I am facing a concurrency-related issue in my WCF service (.NET Framework 4.0) hosted on IIS 7 / Windows Server 2008. I have tried everything I found while googling but still cannot fix the problem. I have created an inventory service which uses Entity Framework to fetch data from SQL Server tables such as ItemHeadMaster, ItemMaster, etc.
I referenced this WCF service in my custom user search control for searching purposes. The problem occurs when 2 concurrent users hit the search control placed on an ASP.NET page.
My code looks like this:
namespace HIS.STORESERVICES
{
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class StoreMasterData : IStoreMasterData
    {
        public string GetAllItemHead(string strHospitalId)
        {
            using (DAL.ItemHeadMaster objItemHeadMasterDAL = new DAL.ItemHeadMaster())
            {
                List<STORE.MODEL.ItemHeadMaster> objItemHeamMasterList =
                    objItemHeadMasterDAL.GetAllItemHead(strHospitalId);
                XmlSerializer Xml_Serializer = new XmlSerializer(objItemHeamMasterList.GetType());
                StringWriter Writer = new StringWriter();
                Xml_Serializer.Serialize(Writer, objItemHeamMasterList);
                return Writer.ToString();
            }
        }
    }
}
I did the following after googling:
Added this in config, but NO EFFECT:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>
Added this in config, but NO EFFECT; instead it gets even slower:
<behaviors>
  <serviceBehaviors>
    <behavior>
      <serviceMetadata httpGetEnabled="True" />
      <serviceThrottling maxConcurrentCalls="32"
                         maxConcurrentInstances="2147483647"
                         maxConcurrentSessions="20" />
    </behavior>
  </serviceBehaviors>
</behaviors>
Please help
Before WCF, to build a service for cross-process communication between processes on the same host, in the same LAN, or across the Internet, you had to hand-craft transport layers and data serialization for the target environments and specific protocols.
With WCF, you just focus on creating data models (DataContracts, decorated with attributes) and operation models (OperationContracts), and the .NET runtime will "create" most if not all of the needed transport layers and data serialization at run time, according to the configuration defined by you or by the system administrator in the target environment.
The defects in your code:
WCF typically uses DataContractSerializer, NOT XmlSerializer, to serialize things, and you don't need to call it explicitly, since the runtime does it for you.
For most applications, you don't need ServiceBehaviorAttribute explicitly. You should know WCF in depth before using such advanced configuration; it is not for beginners, and I have rarely needed it.
Your service operation should simply return the complex type rather than serialized text, as sketched below. In 99.9% of cases, if you have explicit serialization code in a WCF program, the whole design is very dirty if not entirely wrong.
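To illustrate that last point, a sketch of the contract returning the typed list directly; the ItemHeadMaster members shown here are assumptions, not taken from the actual model:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ItemHeadMaster
{
    [DataMember]
    public string ItemHeadId { get; set; }    // assumed member

    [DataMember]
    public string ItemHeadName { get; set; }  // assumed member
}

[ServiceContract]
public interface IStoreMasterData
{
    // Return the typed list; DataContractSerializer takes care of the wire format.
    [OperationContract]
    List<ItemHeadMaster> GetAllItemHead(string hospitalId);
}

The client then works with a strongly typed List<ItemHeadMaster> instead of parsing XML text.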
There are plenty of tutorials for creating Hello World WCF projects, and VS creates one for you when you start a new WCF application. After you are familiar with Hello World, have a look at http://www.codeproject.com/Articles/627240/WCF-for-the-Real-World-Not-Hello-World
BTW, WCF serialization is very fast, check http://webandlife.blogspot.com.au/2014/05/performances-of-deep-cloning-and.html
I have a use case in my ASP.NET MVC app where I need to save a collection of about 15k records (from a CSV file upload). I'm putting it through CSLA business objects in order to validate the uploaded data against business rules.
I'm making use of the WCF DataPortal. When Save is called, I get this error after about 30 to 45 seconds:
System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at [my dataportal host address]/WcfPortal.svc that could accept the message.
I have determined that if I break down the collection into smaller chunks, and call save on each of those chunks, the use case completes without a problem.
I have configured my service to use the max values as follows (recommended in Rocky's book), and increased the sendTimeout based on other guidance:
<binding name="wsHttpBinding_IWcfPortal" maxReceivedMessageSize="2147483647" sendTimeout="05:00:00">
<readerQuotas maxBytesPerRead="2147483647" maxArrayLength="2147483647" maxStringContentLength="2147483647" maxNameTableCharCount="2147483647" maxDepth="2147483647"/>
</binding>
Now I KNOW for a fact that my data does not exceed the 2147483647 size limit. Besides, if it did, I would expect to get a more meaningful error message indicating this (like I did when the size limits were at their defaults).
I have turned on WCF logging/tracing, which reveals nothing. This seems to be some communication-level error that gets hit before the WCF stack comes into the picture.
Can anyone advise why I would be getting this error when trying to save this large collection?
As WCF has changed over the years, other limits have been added that you may need to change. The latest info on WCF configuration for the data portal is available in two places:
The data portal FAQ page
The Using CSLA 4: Data Portal Configuration ebook
My ASP.NET MVC3 application uses Ninject to instantiate service instances through a wrapper. The controller's constructor has an IMyService parameter and the action methods call myService.SomeRoutine(). The service (WCF) is accessed over SSL with a wsHttpBinding.
I have a search routine that can return so many results that it exceeds the maximum I have configured in WCF (Maximum number of items that can be serialized or deserialized in an object graph). When this happens, the application pools for both the service and the client grow noticeably and remain bloated well past the end of the request.
I know that I can restrict the number of results or use DTOs to reduce the amount of data being transmitted. That said, I want to fix what appears to be a memory leak.
Using CLR Profiler, I see that the bulk of the heap is used by the following:
System.RunTime.IOThreadTimer.TimerManager
System.RunTime.IOThreadTimer.TimerGroup
System.RunTime.IOThreadTimer.TimerQueue
System.ServiceModel.Security.SecuritySessionServerSettings
System.ServiceModel.Channels.SecurityChannelListener
System.ServiceModel.Channels.HttpsChannelListener
System.ServiceModel.Channels.TextMessageEncoderFactory
System.ServiceModel.Channels.TextMessageEncoderFactory.TextMessageEncoder
System.Runtime.SynchronizedPool
System.Runtime.SynchronizedPool.Entry[]
...TextMessageEncoderFactory.TextMessageEncoder.TextBufferedMessageWriter
System.Runtime.SynchronizedPool.GlobalPool
System.ServiceModel.Channels.BufferManagerOutputStream
System.Byte[][]
System.Byte[] (92%)
In addition, if I modify the search routine to return an empty list (while the NHibernate stuff still goes on in the background - verified via logging), the application pool sizes remain unchanged. If the search routine returns significant results without an exception, the application pool sizes remain unchanged. I believe the leak occurs when the list of objects is serialized and results in an exception.
I upgraded to the newest Ninject and I used log4net to verify that the service client was closed or aborted depending on its state (and the state was never faulted). The only thing I found interesting was that the service wrapper was being finalized and not explicitly disposed.
I'm having difficulty troubleshooting this to find out why my application pools aren't releasing memory in this scenario. What else should I be looking at?
UPDATE: Here's the binding...
<wsHttpBinding>
<binding name="wsMyBinding" closeTimeout="00:01:00" openTimeout="00:01:00"
receiveTimeout="00:02:00" sendTimeout="00:02:00" bypassProxyOnLocal="false"
transactionFlow="false" hostNameComparisonMode="StrongWildcard"
maxBufferPoolSize="999999" maxReceivedMessageSize="99999999"
messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="false"
allowCookies="false">
<readerQuotas maxDepth="90" maxStringContentLength="99999"
maxArrayLength="99999999" maxBytesPerRead="99999"
maxNameTableCharCount="16384" />
<reliableSession enabled="false" />
<security mode="TransportWithMessageCredential">
<message clientCredentialType="UserName" />
</security>
</binding>
</wsHttpBinding>
UPDATE #2: Here is the Ninject binding, but more interesting is the error message. My wrapper wasn't setting MaxItemsInObjectGraph properly, so it used the default. Once I set this, the leak went away. It seems that the client and service keep the serialized/deserialized data in memory when the service sends the serialized data to the client and the client rejects it because it exceeds MaxItemsInObjectGraph.
Ninject Binding:
Bind<IMyService>().ToMethod(x =>
new ServiceWrapper<IMyService>("MyServiceEndpoint")
.Channel).InRequestScope();
Error Message:
The InnerException message was 'Maximum number of items that can be
serialized or deserialized in an object graph is '65536'
This doesn't actually fix the memory leak itself, so I am still curious as to what has been causing it, if anyone has any ideas.
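For reference, the usual place to raise this limit in config (rather than in a wrapper) is a dataContractSerializer endpoint behavior applied to both the client and the service; the behavior name below is made up for illustration:

<behaviors>
  <endpointBehaviors>
    <behavior name="largeGraphBehavior">
      <!-- Default maxItemsInObjectGraph is 65536; raise it on both ends -->
      <dataContractSerializer maxItemsInObjectGraph="2147483647" />
    </behavior>
  </endpointBehaviors>
</behaviors>

The endpoint then references it via behaviorConfiguration="largeGraphBehavior".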
How are you handling your proxy client creation and disposal?
I've found the most common cause of WCF-related memory leaks is mishandling WCF proxy clients.
I suggest at the very least wrapping your clients in a using block, something like this:
using (var client = new WhateverProxyClient())
{
// your code goes here
}
This ensures that the client is properly closed and disposed of, freeing memory.
This method is a bit controversial, but it should remove the possibility of leaking memory from client creation.
Take a look here for more on this topic.
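If the using block makes you nervous (Dispose can throw when the channel is already faulted and mask your original exception), the commonly recommended alternative is the close/abort pattern; a minimal sketch, with WhateverProxyClient standing in for your generated proxy:

var client = new WhateverProxyClient(); // hypothetical generated proxy
try
{
    // your calls go here
    client.Close();
}
catch (CommunicationException)
{
    client.Abort(); // clean up the faulted channel without throwing again
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}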
I am new to WCF. I am trying to implement WCF session management, but I am not clear about how to implement sessions in WCF.
This is my code:
<wsHttpBinding>
  <binding name="wsHttpBinding">
    <reliableSession enabled="true" />
  </binding>
</wsHttpBinding>
[ServiceContract(Namespace="http://Microsoft.ServiceModel.Samples",
SessionMode=SessionMode.Required)]
public interface IMyService
{
...
}
This is not working... the session is not maintained in my project.
Now I want to know whether I am missing anything, whether I need to add anything else on the client or server side, or whether this alone is enough to implement sessions in my project.
It would be of great help if someone could provide ideas, suggestions, or sample code for implementing this.
When you implement IMyService in a class and a client connects to your service, each client gets a new instance of your class.
There is a little example, that might help you:
http://www.devx.com/architect/Article/40665
How your service will behave depends not just on the SessionMode specified for the ServiceContract, but also on the InstanceContextMode under which your service implementation runs (controlled by the InstanceContextMode property of the ServiceBehavior). There is a helpful table here which tells you what to expect with the various combinations of these settings.
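For example, to actually keep per-client state across calls you would typically combine SessionMode.Required with InstanceContextMode.PerSession; a small illustrative sketch (ICounterService is made up, it is not your contract):

using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface ICounterService
{
    [OperationContract]
    int Increment();
}

// One instance per client session: _count survives across calls from the
// same client but is separate for every client.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class CounterService : ICounterService
{
    private int _count;

    public int Increment()
    {
        return ++_count;
    }
}

If repeated calls from the same proxy instance do not return 1, 2, 3, ..., then the binding or the instancing mode is not actually giving you a session.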
If this doesn't help solve your problem, please explain more specifically what behaviour you are expecting and what you are seeing.
We have an application where we wish to expose a large number of database entities and some business logic. Each entity will require the ability to Read, Add, and Update; at this point we do not expect to allow deletion.
The software we build is used in a wide range of businesses, some of which are multi-tenanted bureau operations; some of our clients also use this approach with separate databases for financial reasons.
We wish to minimize the number of endpoints that need to be maintained. At the moment there are only 3 tables exposed as WCF interfaces, each with 6 attached methods. This is manageable, but if an operation has 50 databases that suddenly becomes 150 endpoints; worse, if we have 50 tables exposed that becomes 2500 endpoints.
Does anyone have a suggestion on how we could design our system so that we still have a simple entity model such as Job.Add(var1) or IList jobs = Job.GetSelected("sql type read"), without all these endpoints?
WCF Data Services allows you to expose your data in a RESTful manner using the Open Data Protocol (OData). This was formerly called ADO.NET Data Services and before that Astoria. Any IQueryable collection can be exposed. The way shown in most of the examples is to use the Entity Framework; however, there are examples showing usage with NHibernate and other data access technologies. OData is a self-describing API based on AtomPub with some custom extensions. With a minimal amount of code you can expose your entire database in a well-defined format. That's the easy part.
In order to implement multi-tenancy, you can create query interceptors in the WCF Data Services application to implement that logic. The number of interceptors and the complexity of the code you write will depend upon your security model and requirements. Looking at something like T4 templates or CodeSmith to generate the interceptor methods based on your database schema may be a way to prevent lots of repetitive manual coding.
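As a rough sketch of what such an interceptor can look like (the MyEntities context, Job entity, TenantId column, and GetCurrentTenantId helper are all assumptions for illustration):

using System;
using System.Data.Services;
using System.Linq.Expressions;

public class JobDataService : DataService<MyEntities> // MyEntities: your EF context (assumed)
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose the Jobs set for read, insert and update, but not delete.
        config.SetEntitySetAccessRule("Jobs",
            EntitySetRights.AllRead | EntitySetRights.WriteAppend | EntitySetRights.WriteMerge);
    }

    // Applied to every query against Jobs, so a tenant only ever sees its own rows.
    [QueryInterceptor("Jobs")]
    public Expression<Func<Job, bool>> OnQueryJobs()
    {
        int tenantId = GetCurrentTenantId(); // assumed helper: resolve tenant from credentials
        return job => job.TenantId == tenantId;
    }

    private int GetCurrentTenantId()
    {
        // Placeholder: look up the caller's tenant from the request/credentials.
        return 0;
    }
}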
The link I provided has a lot of information and tutorials on WCF Data Services and would be a good place to start to see if it would meet your needs. I have been looking at WCF Data Services for a similar problem (multi-tenancy), and would love to hear how you eventually implement your solution.
It seems like you could pass the "identity" to every query and take that into account. This would mean that every record in your "Job" table would need a reference to the owning "identity", but that should not be much of a problem.
Just make sure that every query validates the "identity", and you should be OK.
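In practice that just means every data-access call filters on the caller's identity; a trivial sketch (table and column names are invented):

using System.Collections.Generic;
using System.Data.SqlClient;

public static class JobQueries
{
    public static List<string> GetJobsForIdentity(string connectionString, int identityId)
    {
        var jobs = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT JobName FROM Job WHERE OwnerIdentityId = @identityId", connection))
        {
            // Every query carries the caller's identity, so tenants never see each other's rows.
            command.Parameters.AddWithValue("@identityId", identityId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    jobs.Add(reader.GetString(0));
                }
            }
        }
        return jobs;
    }
}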
If I understand your question correctly, I think you still need unique endpoints, but you can have a single service behavior and binding that all your endpoints reference.
Create a default behavior:
<behaviors>
  <serviceBehaviors>
    <behavior name="MyService.DefaultBehavior">
      <serviceMetadata httpGetEnabled="true" />
      <serviceDebug includeExceptionDetailInFaults="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
Set your default binding:
<bindings>
  <wsHttpBinding>
    <binding name="DefaultBinding">
      <security mode="None">
        <transport clientCredentialType="None"/>
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
Have all services reference the default behavior and binding:
<service behaviorConfiguration="MyService.DefaultBehavior"
         name="MyService.Customer">
  <endpoint address="" binding="wsHttpBinding" bindingConfiguration="DefaultBinding"
            contract="MyService.ICustomer">
    <identity>
      <dns value="localhost" />
    </identity>
  </endpoint>
  <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>
Each time you add a service, it's a simple config entry.
With Apache you can use a fairly simple set of URL rewriting rules to map an arbitrary set of DB tables and their corresponding endpoints to a single endpoint with parameters.
For example, to map $ROOT/table_name/column_name to $ROOT/index.php?tn=table_name&cn=column_name, you could add a rule like this to $ROOT/.htaccess:
RewriteRule ^([a-zA-Z0-9_]+)/([a-zA-Z0-9_]+)/?$ index.php?tn=$1&cn=$2 [QSA,L]
Then you only need to maintain $ROOT/index.php (which of course can generate the appropriate HTTP status codes for nonexistent tables and/or columns).
Providing Multi-Tenancy, Without A Bazillion End Points
One way is to go with a REST-style WCF service that can use usernames/passwords to distinguish which client you are working with, and thus select internally which DB to connect to. WCF gives you the UriTemplate, which allows you to map parts of the URL to the parameters of your web methods:
HTTP GET Request: http://www.mysite.com/table1/(row Id)
HTTP PUT Request: http://www.mysite.com/table1/(row Id)/(field1)/(field2)
HTTP POST Request: http://www.mysite.com/table1/(row Id)/(field1)/(field2)
HTTP DELETE Request: http://www.mysite.com/table1/(row Id)
You can add other Uri Templates for more tasks as well, such as the following:
HTTP GET Request: http://www.mysite.com/table1/recentitems/(number of most recent items)
HTTP GET Request: http://www.mysite.com/table1/cancelPendingOrders/(user Id)
Who's Using My Service?
By requiring clients to supply a username and password, you can map that to a specific DB. And by using a UriTemplate of /{tableName}/{operation}/{params...} you can then use code in your web service to execute the DB procedures given the table, operation, and params.
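A rough sketch of what such a contract could look like (the operation names and the GenericRow type are invented; the actual dispatch to your database goes inside the implementation):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class GenericRow
{
    [DataMember]
    public Dictionary<string, string> Fields { get; set; } // column name -> value
}

[ServiceContract]
public interface IGenericDataService
{
    // GET http://www.mysite.com/{tableName}/{id}
    [OperationContract]
    [WebGet(UriTemplate = "{tableName}/{id}")]
    GenericRow GetRow(string tableName, string id);

    // POST http://www.mysite.com/{tableName}
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "{tableName}")]
    void AddRow(string tableName, GenericRow row);
}

One implementation of this contract can then serve every table and every tenant database, with the supplied credentials deciding which database to hit.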
Wrapping It Up
Your web configuration wouldn't even need to be altered much. The following article series is a great place to learn about REST-style WCF services, which I believe fits what you need: http://www.robbagby.com/rest/rest-in-wcf-blog-series-index/