My problem resembles this question but is not exactly the same: Change culture when deserializing WCF service
I have built a WCF (web) service running on Windows 7 (IIS 7.5) using VS 2008. One of the properties in my web service is typed as a System.Double. When building my own client for testing the service, Visual Studio requires me to write - for example - a number like 123.4 (using a dot as the decimal point). But when using WcfTestClient.exe to access the service and entering a number in the field (like 123.4, still using a dot) it gives me the message "123.4 is not a valid value for this type". I should mention I live in Sweden, and a comma is the culture-specific decimal point symbol here.
If I instead use a comma (,) as the decimal point when using WcfTestClient, that is accepted. The problem is that when I debug my webservice code I can see that the comma gets removed by the serializing process somehow and the number has been changed to 1234. Not good.
In my dev environment I have both the service and client running on the same machine. The webservice is running under the NetworkService account which uses the same locale.
My question is: how do I make sure in WCF that whatever number is supplied in this field/property, any comma it contains is NOT stripped away?
I thought this was handled automatically in the framework. I don't really care whether the number is stored with a comma or a dot, as long as the value stays the same.
I am using the DataContractSerializer and auto-implemented properties, like this: [DataMember] public double Price { get; set; }
I've also tried building a property that uses Convert.ToDouble(value, System.Globalization.CultureInfo.InvariantCulture) in the setter, with no visible change in the outcome in the WCF service.
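For reference, the setter experiment looked roughly like this (a hedged sketch; the class and property names are illustrative). Note that Convert.ToDouble on a value that is already a double simply returns it unchanged, which would explain why the InvariantCulture argument made no visible difference:

    using System;
    using System.Globalization;
    using System.Runtime.Serialization;

    [DataContract]
    public class Quote
    {
        private double price;

        [DataMember]
        public double Price
        {
            get { return price; }
            // By the time this setter runs, WCF has already deserialized
            // the wire value into a double, so there is nothing left for
            // the culture to influence; Convert.ToDouble(double, ...)
            // just returns the value as-is.
            set { price = Convert.ToDouble(value, CultureInfo.InvariantCulture); }
        }
    }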
Can you tell whether it is the WCF Test Client or your web service that is messing up the serialization? Maybe try enabling full message logging in WCF and check the incoming message body to see if it contains "1234", "123,4", or "123.4". Maybe (hopefully) it's just a bug in the WCF Test Client.
Link to MSDN page to set up message logging: http://msdn.microsoft.com/en-us/library/ms730064.aspx
And set:
logEntireMessage="true"
logMessagesAtServiceLevel="false"
logMessagesAtTransportLevel="true"
I think that should give you a service log that has the raw incoming message. You might have to turn off transport level security (SSL) if you have it enabled.
I had the same problem with System.Decimal.
You could see in the XML tab of WcfTestClient that the data sent was indeed without a decimal separator.
However, the bug seems to be related to the regional settings of the computer you use.
I changed my regional settings to use "." (dot) as the decimal separator and was able to test successfully. After that, the data sent (again visible in the XML tab) and the value observed when debugging my service both contained the correct separator.
This problem arises from the use of the data type Double. The Double value type represents a double-precision 64-bit number with values ranging from negative 1.79769313486232e308 to positive 1.79769313486232e308. It is intended to represent values that are extremely large (such as distances between planets or galaxies) or extremely small (such as the molecular mass of a substance in kilograms). See the documentation on Double for more.
In this case, when testing a WCF service from the WCF Test Client with a double parameter, the value must be a small number such as 9.9999999999, because otherwise the decimal point is removed.
In this example I entered 9.947814E+22 as the parameter value, and if we click the XML tab, we can see that the comma is not deleted.
Therefore, it is recommended to change the parameter's data type from Double to Decimal when working with high-precision decimal values.
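For illustration, the suggested change amounts to swapping the parameter type in the contract (a hedged sketch; the service and operation names are hypothetical):

    using System.ServiceModel;

    [ServiceContract]
    public interface IPriceService
    {
        // Before: [OperationContract] double GetTotal(double price);
        // After: Decimal is a base-10 type, so per the observation above
        // the decimal point survives the test tooling more reliably.
        [OperationContract]
        decimal GetTotal(decimal price);
    }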
I'm building a multi-layered Windows VS C# solution that has a WCF Service Library project with EF6.2 loaded, and an ADO.NET Data layer with EF6.2 also.
The EDMX model is built 'database first' from a set of tables on my MSSQL Server Express 2016 server on my laptop. My WCF service interface and code only have properties and methods for one of the tables at this point, and that table has also been built out in logic and data layer methods.

So, I'm testing that service now with the WCF Test Client, and I'm receiving some integer data correctly in my service's response from the data layer, but no string data. While testing my "GetMemberByID" method, it returns all String column results as a value of "(null)" and a type of "NullObject", but returns Integers with their actual value. The WCF SOAP response shows the returned String values as "", but the integers are returned like this: "7". There are over 50 data rows in my test database, which is used as the source for the EF6.2 EDMX build. My App.config files in the data and service layers reference the same (localdb)\V13.0 server and database.

Has anyone had this issue, and can you tell me what I'm missing? The MSSQL database was originally an (OleDb) MS Access database and I imported it into MSSQL Server. Thanks in advance.
It seems that there is something wrong with the serialization process. On my side, the string field can be returned properly. By default, the DataContractSerializer is used to serialize/deserialize the complex object data.
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/using-data-contracts
The most likely reason is that the nullable field is not decorated with the [DataMember] attribute. Please check whether the columns of the DataContract autogenerated on the client side carry the [DataMember] attribute.
http://sivakrishnakuchi.blogspot.com/2010/05/troubleshoot-wcf-service-returning.html
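As a hedged illustration of the point above (the type and property names are hypothetical), a member that lacks [DataMember] is silently skipped by the DataContractSerializer:

    using System.Runtime.Serialization;

    [DataContract]
    public class Member
    {
        [DataMember]
        public int MemberID { get; set; }

        // If [DataMember] were missing here, the DataContractSerializer
        // would skip FirstName entirely and it would show up as
        // (null)/NullObject in the WCF Test Client:
        [DataMember]
        public string FirstName { get; set; }
    }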
Feel free to let me know if the problem still exists.
Okay, I found my mistake. When translating the business domain objects back to the Service objects in the Service.cs code file, I was only translating the MemberID and the RowVersion, and no other columns. So the only things showing up in the WCF Test Client result were the MemberID and the RowVersion, which happen to be the only two non-strings in my entity. All the string types were null because I was not translating them back into the Service.

Thanks for taking a look at this, Abraham; you made me start looking closer, and thanks for the advice. The MS documentation was helpful, too. Once I did a full "step into" debug trace from the UI through to the Data Layer and back, I was able to see the data translation failure.

One more thing: before I could debug step by step all the way to the Data layer and back to the Service layer, I had to fix the "Underlying database did not open" issue that so many have had. I'm hosting my service through my local IIS and had to make a few changes to the IIS application concerning the user credentials. My App.config is set to use "Integrated Security=True", which is a "passthrough" credential in the IIS app pools. I had my IIS app set to "Specific User" but was not using a UserID/Password in my connection string. Once I changed my IIS app to "Passthrough", I was able to connect and to debug to the DAL and back.
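For illustration, the translation bug amounted to something like the following (a hedged sketch; all type, property, and method names are hypothetical):

    using System.Runtime.Serialization;

    // Hypothetical shapes of the EF entity and the service data contract:
    public class MemberEntity
    {
        public int MemberID { get; set; }
        public byte[] RowVersion { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    [DataContract]
    public class MemberDto
    {
        [DataMember] public int MemberID { get; set; }
        [DataMember] public byte[] RowVersion { get; set; }
        [DataMember] public string FirstName { get; set; }
        [DataMember] public string LastName { get; set; }
    }

    public static class MemberTranslator
    {
        public static MemberDto ToServiceObject(MemberEntity entity)
        {
            return new MemberDto
            {
                MemberID   = entity.MemberID,
                RowVersion = entity.RowVersion,
                // Lines like these were effectively missing in my code,
                // which is why every string came back null:
                FirstName  = entity.FirstName,
                LastName   = entity.LastName
            };
        }
    }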
I am using SOAPUI to call a WCF endpoint with a decimal value. Somewhere the value is getting converted to a zero.
I can call the same service, with the same parameters from a .NET application and the value is not getting altered. I can de-serialise and inspect the values being passed from my .NET app and SoapUI, and both de-serialised versions of the object are identical.
I have been able to capture the request in Fiddler after it has left SoapUI, and the decimal value is still intact, so I know it is getting converted downstream somewhere.
This post suggests that this can happen when the proxy is generated:
int properties are 0 when consuming WCF in .Net 2 - but the evidence is now pointing to this being a problem in the service, not the client.
Apologies, I can't share the WSDL or XML due to corporate privacy restrictions.
The resolution in my case was to change the order of my request parameters.
I was able to determine this by enabling WCF tracing, including message payloads, and then comparing the payloads from my .NET application against the payloads from SoapUI.
The payloads are massively different, but ignoring namespaces, correlation IDs, keys, and dates, I was able to determine that my problematic parameter was in a different position. Changing the order within the SoapUI XML request resolved the issue.
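One plausible mechanism, consistent with this fix (hedged; the names below are hypothetical): the DataContractSerializer expects members in a fixed sequence (alphabetical by default, or as declared via Order) and silently skips an element that arrives out of sequence, leaving the property at its default value, which is 0 for a decimal:

    using System.Runtime.Serialization;

    [DataContract]
    public class PaymentRequest
    {
        // With explicit Order values, the expected element sequence is
        // unambiguous; a client that sends <Amount> before <AccountId>
        // would see Amount deserialized as 0 on the service side.
        [DataMember(Order = 1)]
        public string AccountId { get; set; }

        [DataMember(Order = 2)]
        public decimal Amount { get; set; }
    }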
At the moment I have:
1) a WCF service set up to return a block of XML (specifically, the contents of a calendar from Exchange 2003).
2) a VB6 form with a command button on it, accessing the WCF service via an object built on the service moniker, including the content of the WSDL contract file.
This is working fine only when the string being passed across is of an acceptable size. When I attempt to return the whole XML generated on the WCF side, I encounter the following error:
"The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element."
When I add a binding configuration to the WCF app.config to increase the maxReceivedMessageSize, this has no effect, presumably because the VB6 client is blind to it (the generated WSDL contract doesn't even include the value in its XML).
Reading around, there is the suggestion that I need to make a similar config change on the client side. So I have created a VB6.EXE.CONFIG file and copied the binding configuration details to it. I have then extended the moniker to include this:
binding=WSHttpBinding_IExchange, bindingNamespace='Exchange', bindingConfiguration='ExchangeBinding'
I am however still receiving the same error message regarding the size quota.
When misspelling the binding or bindingNamespace elements of the moniker above I get an expected error, but when misspelling the bindingConfiguration element I get no error, as if this element is irrelevant anyway.
I seem to have the pieces but not the working solution at the moment. Any ideas anyone?
The obj object is declared to be of the interface proxy type. The moniker is then set to include only the address and the binding type. Since you're using just the default settings for the wsHttpBinding, you aren't required to supply a bindingConfiguration value. If you need to override any of the default settings for the binding, you can supply an application configuration file named file.exe.config and place it in the program directory of the client.
Personally, I would create a .NET COM-exposed library that you call from VB6. The .NET library could control all of the client binding, and VB6 would simply be talking to a DLL and passing strings around.
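A rough sketch of that wrapper idea (hedged; the contract, address, and all names below are hypothetical). The binding is configured entirely in code, so the VB6 client never needs a config file:

    using System;
    using System.Runtime.InteropServices;
    using System.ServiceModel;

    // Assumed service contract, named to match the moniker in the question:
    [ServiceContract(Namespace = "Exchange")]
    public interface IExchange
    {
        [OperationContract]
        string GetCalendarXml(string mailbox);
    }

    [ComVisible(true)]
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class ExchangeClientWrapper
    {
        public string GetCalendarXml(string mailbox)
        {
            var binding = new WSHttpBinding();
            // Raise the quotas that trigger the 65536-byte error:
            binding.MaxReceivedMessageSize = 5 * 1024 * 1024;
            binding.ReaderQuotas.MaxStringContentLength = 5 * 1024 * 1024;

            var factory = new ChannelFactory<IExchange>(
                binding, new EndpointAddress("http://server/Exchange.svc"));
            IExchange channel = factory.CreateChannel();
            try
            {
                return channel.GetCalendarXml(mailbox);
            }
            finally
            {
                ((IClientChannel)channel).Close();
                factory.Close();
            }
        }
    }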
I have a WCF web service that I am working on. I built it and was delighted to find that I could use complex types in it. I added some, then realized that they were still not usable as those types on the client end. This is an internal web service, so these types are known on both sides. Anyway, that's not the problem, as I took the complex types out, but I think doing so may have left some residual issues.
When I then changed my additions to all be base types (string, date, int, etc.) and added the web service to the client project, I got a "[enumtype] is already defined" error. It occurred in the Reference.cs file, so I opened it up. Sure enough, there were duplicate enums, plus a bunch of helper (serializing) functions. The duplicate enum was from code that had been in the service before I picked it up to work on; it had not caused an issue previously.
I opened up the Reference.cs for the previous (successful) service reference. It did not have the duplicates or helper functions, and I also noticed a difference between the entries that were in there. The Reference.cs that was failing to compile had this additional attribute in several places:
[System.ServiceModel.XmlSerializerFormatAttribute()]
I also see that my new failed code was using string[] and the old was using ArrayOfString. I did not intentionally change this, but must have somehow set something differently in the process.
Does anyone have a few clues?
Thanks!
Have you tried deleting the service reference from the project and re-adding it? You may have to manually remove some (or all) of the serviceModel contents from your config as well; if that is the only service reference, then definitely remove the serviceModel element contents.
Once it's all gone, re-add the service reference. If you're still having problems, then it may be that the service metadata is generating invalid WSDL, causing the duplicate enums.
UPDATE: Just to verify whether the WSDL is invalid, you could try creating the service proxy manually using the SvcUtil command-line utility. It generates your proxy code the way Visual Studio does and may give you more troubleshooting info.
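A hypothetical invocation might look like this (substitute your own metadata address and output names):

    svcutil.exe http://localhost/MyService.svc?wsdl /out:ServiceProxy.cs /config:output.config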
After a lot of experimentation this is what I found out:
Our web service up to this point was using the Request/Response classes for input and output. These were required in 1.0 and were a carryover from that. I attempted to create a simple entry point that sent in a string and returned a string. This compiled OK, but:
Although you can normally use regular types for input and output, you cannot if you are using Request/Response type exchanges for other entry points.
A mix of Request/Response and regular types will compile, but it will not import successfully (at least into Visual Studio 2008). What ends up being generated appears to be an attempt to create input and output classes for all of the functions, to translate them to their complex types, alongside the Request/Response types, which creates duplicate entries and will not compile.
This also means that you cannot send in a request object and return a string (which is how I found out that this was not allowed); it generated an error in the unit test, which started me down this path.
So if you have a request / response web service, all functions must follow that protocol.
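To illustrate the rule (a hedged sketch; all contract and type names are hypothetical):

    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderService
    {
        // Consistent with the Request/Response pattern: imports cleanly.
        [OperationContract]
        GetOrderResponse GetOrder(GetOrderRequest request);

        // Mixing in a "plain" signature like the following compiled,
        // but it broke the proxy import as described above:
        // [OperationContract]
        // string GetOrderName(int orderId);
    }

    [MessageContract]
    public class GetOrderRequest
    {
        [MessageBodyMember]
        public int OrderId { get; set; }
    }

    [MessageContract]
    public class GetOrderResponse
    {
        [MessageBodyMember]
        public string OrderName { get; set; }
    }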
I have an existing asp.net application that talks to load balanced wcf services (iis hosted, in app pool running under account configured as servicePrincipalName, etc.). The wcf services return a few custom faults, all defined with FaultContract(typeof(x), ProtectionLevel = ProtectionLevel.None) -- these services are not exposed to the public. The client uses the 'service reference' generated classes to access the services.
This has worked fine but now, with the latest code base, we are getting "The primary signature must be encrypted." exceptions on the client when the service returns one of these faults. The service code and configuration is unchanged (at least the legacy parts that generate the faults). The client side service reference generated code appears the most changed (it often gets removed and recreated).
The security configuration is unchanged for over a year. All the updates are pretty current. We've tested this in three environments and as soon as we deploy the new code base, the faults start generating exceptions. Seems like it has to be in the generated classes but they are generated by Visual Studio so it is very perplexing.
Does this sound familiar to anyone? Any suggestions?
Update: Removing the ProtectionLevel attribute and allowing it to default makes the problem 'go away', but I am curious why specifying None causes it to fail. Perhaps it conflicts with the default level of the operation contract or service contract, but those values have not changed in the past year so that doesn't explain why what had worked now doesn't.
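For clarity, the change was roughly the following (a hedged sketch; the contract and type names are hypothetical):

    using System.Net.Security;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract] public class MyFault { }
    [DataContract] public class MyRequest { }
    [DataContract] public class MyResult { }

    [ServiceContract]
    public interface ILegacyService
    {
        [OperationContract]
        // Before (explicitly opting the fault out of message protection):
        // [FaultContract(typeof(MyFault), ProtectionLevel = ProtectionLevel.None)]
        // After (the fault inherits the default, EncryptAndSign):
        [FaultContract(typeof(MyFault))]
        MyResult DoWork(MyRequest request);
    }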
Update: For what it is worth, this change in code gen happened between 2.0.50727.3053 and 2.0.50727.3082 (according to the runtime-version comment in the generated code).
I haven't experienced this problem myself, but my question is: why on earth do you specify ProtectionLevel=None in your fault contract? Any particular reason for that?
If not, I'd strongly recommend not specifying that at all - the default is ProtectionLevel=EncryptAndSign and that's usually your best bet all around. Try it, unless you have a very strong and explicit reason against it.
Marc