How to create a generic deserializer using Google Protocol Buffers in C#?

I am trying to use the Google Protocol Buffers library for serialization and deserialization in C#. I can deserialize using the following code: NotificationSet.Parser.ParseJson(json); and this works fine.
NotificationSet is a class auto-generated from a .proto file.
But as you can see, this is not generic. Instead of a specific type, I need to write the method in a generic way. Can you please advise on this?
Example:
public async Task<TResult> Deserialize<TResult, TValue>(TValue value)
{
    TResult.Parser.ParseJson(value.ToString());
}
The problem is that TResult is a generic type parameter, so I can't access a Parser property on it.

Found an answer.
Try this code to achieve a generic deserialization process using the Google Protocol Buffers library.
public async Task<TResult> Deserialize<TResult, TValue>(TValue value)
{
    try
    {
        // Find the generated message type by name in the executing assembly.
        System.Type type = typeof(TResult);
        var typ = Assembly.GetExecutingAssembly().GetTypes().First(t => t.Name == type.Name);
        // Every generated message exposes a static Descriptor property; its Parser can parse JSON.
        var descriptor = (MessageDescriptor)typ.GetProperty("Descriptor", BindingFlags.Public | BindingFlags.Static).GetValue(null, null);
        var response = descriptor.Parser.ParseJson(value.ToString());
        return await Task.FromResult((TResult)response);
    }
    catch (Exception)
    {
        throw; // rethrow as-is; "throw ex;" would reset the stack trace
    }
}
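Alternatively, if the target types are known at compile time, a reflection-free sketch is possible, assuming the generated classes come from the Google.Protobuf runtime (which makes every message implement IMessage&lt;T&gt; with a parameterless constructor):
using System.Threading.Tasks;
using Google.Protobuf;

public static class ProtoJson
{
    // Works for any generated protobuf message type: the IMessage<TResult>
    // constraint guarantees a MessageParser<TResult> can be built for it.
    public static Task<TResult> Deserialize<TResult>(string json)
        where TResult : IMessage<TResult>, new()
    {
        var parser = new MessageParser<TResult>(() => new TResult());
        return Task.FromResult(parser.ParseJson(json));
    }
}

// Usage (NotificationSet is the generated class from the question):
// NotificationSet set = await ProtoJson.Deserialize<NotificationSet>(json);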


ASP.NET Core 2.1 API POST body is null when called using HttpWebRequest, seems it can't be parsed as JSON

I'm facing a strange bug where the .NET Core 2.1 API seems to ignore a JSON body in certain cases.
I've looked at many other questions (e.g. this one, which itself references others), but couldn't resolve my problem.
I have something like the following API method:
[Route("api/v1/accounting")]
public class AccountingController
{
    [HttpPost("invoice/{invoiceId}/send")]
    public async Task<int?> SendInvoice(
        [FromRoute] int invoiceId, [FromBody] JObject body // <-- sometimes it's null
    )
    {
        // ...
    }
}
And the relevant configuration is:
public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services
        .AddMvcCore()
        .AddJsonOptions(options =>
        {
            options.SerializerSettings.Converters.Add(new TestJsonConverter());
        })
        .AddJsonFormatters()
        .AddApiExplorer();
    // ...
}
Where TestJsonConverter is a simple converter I created to test why things don't work as they should; it's as simple as this:
public class TestJsonConverter : JsonConverter
{
    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        var token = JToken.Load(reader);
        return token;
    }

    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanConvert(Type objectType)
    {
        return true;
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        throw new NotImplementedException("Unnecessary (would be necessary if used for serialization)");
    }
}
Calling the API method using Postman works, meaning the request goes through the JSON converter's CanConvert, CanRead, and ReadJson, and is then routed to SendInvoice with body containing the parsed JSON.
However, calling the API method using HttpWebRequest (from a .NET Framework 4 app, if that matters) only goes through CanConvert, then routes to SendInvoice with body being null.
The request body is just a simple json, something like:
{
    "customerId": 1234,
    "externalId": 5678
}
When I read the body directly, I get the expected value on both cases:
using (var reader = new StreamReader(context.Request.Body))
{
    var requestBody = await reader.ReadToEndAsync(); // works
    var parsed = JObject.Parse(requestBody);
}
I don't see any meaningful difference between the two kinds of requests (a side-by-side screenshot of Postman's request and the HttpWebRequest is omitted here).
To be sure, the Content-Type header is set to application/json. Also, FWIW, the HttpWebRequest body is set as follows:
using (var requestStream = httpWebRequest.GetRequestStream())
{
    JsonSerializer.Serialize(payload, requestStream);
}
And called with:
var response = (HttpWebResponse)request.GetResponse();
Question
Why is body null when the API is called with HttpWebRequest? Why are the JSON converter's read methods skipped in such cases?
The problem was in the underlying serialization code. This line:
JsonSerializer.Serialize(payload, requestStream);
Was implemented using the default UTF8 property:
public void Serialize<T>(T instance, Stream stream)
{
    using (var streamWriter = new StreamWriter(stream, Encoding.UTF8)) // <-- adds a BOM
    using (var jsonWriter = new JsonTextWriter(streamWriter))
    {
        jsonSerializer.Serialize(jsonWriter, instance); // Newtonsoft.Json's JsonSerializer
    }
}
The default UTF8 property adds a BOM character, as noted in the documentation:
It returns a UTF8Encoding object that provides a Unicode byte order mark (BOM). To instantiate a UTF8 encoding that doesn't provide a BOM, call any overload of the UTF8Encoding constructor.
It turns out that sending a BOM in JSON is not allowed per the spec:
Implementations MUST NOT add a byte order mark (U+FEFF) to the beginning of a networked-transmitted JSON text.
Hence .NET Core's [FromBody] internal deserialization failed.
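The fix, then, is to write the payload without a BOM. A minimal sketch (the same method as above, still assuming Newtonsoft.Json): construct the StreamWriter with a UTF8Encoding instance instead of the Encoding.UTF8 singleton:
public void Serialize<T>(T instance, Stream stream)
{
    // new UTF8Encoding(false) emits no BOM, unlike the Encoding.UTF8 singleton.
    using (var streamWriter = new StreamWriter(stream, new UTF8Encoding(false)))
    using (var jsonWriter = new JsonTextWriter(streamWriter))
    {
        jsonSerializer.Serialize(jsonWriter, instance);
    }
}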
Lastly, as for why the following did work (see demo here):
using (var reader = new StreamReader(context.Request.Body))
{
    var requestBody = await reader.ReadToEndAsync(); // works
    var parsed = JObject.Parse(requestBody);
}
I'm not very sure. Certainly, StreamReader also uses UTF8 property by default (see remarks here), so it shouldn't remove the BOM, and indeed it doesn't. Per a test I did (see it here), it seems that ReadToEnd is responsible for removing the BOM.
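A quick way to reproduce that observation (a self-contained sketch, not the author's original test):
using System;
using System.IO;
using System.Linq;
using System.Text;

class BomTest
{
    static void Main()
    {
        // Build a UTF-8 payload that starts with a BOM, as the faulty client did.
        byte[] payload = Encoding.UTF8.GetPreamble()
            .Concat(Encoding.UTF8.GetBytes("{\"customerId\":1234}"))
            .ToArray();

        using (var reader = new StreamReader(new MemoryStream(payload)))
        {
            string text = reader.ReadToEnd();
            // Prints 'True': the BOM is gone from the decoded string.
            Console.WriteLine(text[0] == '{');
        }
    }
}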
For elaboration:
StreamWriter and UTF-8 Byte Order Marks
The Curious Case of the JSON BOM

Lagom http status code / header returned as json

I have a sample where I make a client request to debug token request to the FB api, and return the result to the client.
Depending on whether the access token is valid, an appropriate header should be returned:
@Override
public ServerServiceCall<LoginUser, Pair<ResponseHeader, String>> login() {
    return this::loginUser;
}
public CompletionStage<Pair<ResponseHeader, String>> loginUser(LoginUser user) {
    ObjectMapper jsonMapper = new ObjectMapper();
    String responseString = null;
    DebugTokenResponse.DebugTokenResponseData response = null;
    ResponseHeader responseHeader = null;
    try {
        response = fbClient.verifyFacebookToken(user.getFbAccessToken(), config.underlying().getString("facebook.app_token"));
        responseString = jsonMapper.writeValueAsString(response);
    } catch (ExecutionException | InterruptedException | JsonProcessingException e) {
        LOG.error(e.getMessage());
    }
    if (response != null) {
        if (!response.isValid()) {
            responseHeader = ResponseHeader.NO_CONTENT.withStatus(401);
        } else {
            responseHeader = ResponseHeader.OK.withStatus(200);
        }
    }
    return completedFuture(Pair.create(responseHeader, responseString));
}
However, the result I get (screenshot omitted) isn't what I expected: instead of an HTTP 401 error status code and the JSON string as defined in the code, the response header info ends up serialized into the response body.
I don't see why I would need header info in the response body.
There is also a strange compilation error (screenshot omitted) that occurs when I try to return a HeaderServiceCall instead:
I'm not sure if this is a bug, also I am a bit unclear about the difference between a ServerServiceCall and HeaderServiceCall.
Could someone help?
The types for HeaderServiceCall are defined this way:
interface HeaderServiceCall<Request, Response>
and
CompletionStage<Pair<ResponseHeader, Response>> invokeWithHeaders(RequestHeader requestHeader, Request request)
What this means is that when you define a response type, the return value should be a CompletionStage of a Pair of the ResponseHeader with the response type.
In your code, the response type should be String, but you have defined it as Pair<ResponseHeader, String>, which means it expects the return value to be nested: CompletionStage<Pair<ResponseHeader,Pair<ResponseHeader, String>>>. Note the extra nested Pair<ResponseHeader, String>.
When used with HeaderServiceCall, which requires you to implement invokeWithHeaders, you get a compilation error, which indicates the mismatched types. This is the error in your screenshot above.
When you implement ServerServiceCall instead, your method is inferred to implement ServiceCall.invoke, which is defined as:
CompletionStage<Response> invoke()
In other words, the return type of the method does not expect the additional Pair<ResponseHeader, Response>, so your implementation compiles, but produces the incorrect result. The pair including the ResponseHeader is automatically serialized to JSON and returned to the client that way.
Correcting the code requires changing the method signature:
@Override
public HeaderServiceCall<LoginUser, String> login() {
    return this::loginUser;
}
You also need to change the loginUser method to accept the RequestHeader parameter, even if it isn't used, so that it matches the signature of invokeWithHeaders:
public CompletionStage<Pair<ResponseHeader, String>> loginUser(RequestHeader requestHeader, LoginUser user)
This should solve your problem, but it would be more typical for a Lagom service to use domain types directly and rely on the built-in JSON serialization support, rather than serializing directly in your service implementation. You also need to watch out for null values: you shouldn't return a null ResponseHeader under any circumstances.
@Override
public HeaderServiceCall<LoginUser, DebugTokenResponse.DebugTokenResponseData> login() {
    return this::loginUser;
}

public CompletionStage<Pair<ResponseHeader, DebugTokenResponse.DebugTokenResponseData>> loginUser(RequestHeader requestHeader, LoginUser user) {
    try {
        DebugTokenResponse.DebugTokenResponseData response = fbClient.verifyFacebookToken(
                user.getFbAccessToken(), config.underlying().getString("facebook.app_token"));
        ResponseHeader responseHeader;
        if (!response.isValid()) {
            responseHeader = ResponseHeader.NO_CONTENT.withStatus(401);
        } else {
            responseHeader = ResponseHeader.OK.withStatus(200);
        }
        return completedFuture(Pair.create(responseHeader, response));
    } catch (ExecutionException | InterruptedException e) {
        LOG.error(e.getMessage());
        throw new RuntimeException(e); // checked exceptions can't be rethrown here directly
    }
}
Finally, it appears that fbClient.verifyFacebookToken is a blocking method (it doesn't return until the call completes). Blocking should be avoided in a Lagom service call, as it has the potential to cause performance issues and instability. If this is code you control, it should be written to use a non-blocking style (that returns a CompletionStage). If not, you should use CompletableFuture.supplyAsync to wrap the call in a CompletionStage, and execute it in another thread pool.
I found this example on GitHub that you might be able to adapt: https://github.com/dmbuchta/empty-play-authentication/blob/0a01fd1bd2d8ef777c6afe5ba313eccc9eb8b878/app/services/login/impl/FacebookLoginService.java#L59-L74
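A minimal sketch of that wrapping (the executor and helper method names here are hypothetical, not from the question):
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

// Hypothetical: a dedicated pool for blocking calls, so Lagom's dispatcher
// threads are never tied up waiting on the Facebook API.
private final Executor blockingPool = Executors.newFixedThreadPool(4);

private CompletionStage<DebugTokenResponse.DebugTokenResponseData> verifyAsync(LoginUser user) {
    return CompletableFuture.supplyAsync(() -> {
        try {
            return fbClient.verifyFacebookToken(
                    user.getFbAccessToken(),
                    config.underlying().getString("facebook.app_token"));
        } catch (Exception e) {
            // Surface checked exceptions as a failed CompletionStage.
            throw new RuntimeException(e);
        }
    }, blockingPool);
}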

Store and retrieve string arrays in HBase

I've read this answer (How to store complex objects into hadoop Hbase?) regarding the storing of string arrays with HBase.
There it is said to use the ArrayWritable class to serialize the array. With WritableUtils.toByteArray(Writable ... writable) I get a byte[], which I can store in HBase.
When I now try to retrieve the rows again, I get a byte[] which I somehow have to transform back into an ArrayWritable.
But I can't find a way to do this. Maybe you know an answer, or am I doing something fundamentally wrong in serializing my String[]?
You may apply the following method to get back the ArrayWritable (taken from my earlier answer, see here).
public static <T extends Writable> T asWritable(byte[] bytes, Class<T> clazz)
        throws IOException {
    T result = null;
    DataInputStream dataIn = null;
    try {
        result = clazz.newInstance();
        ByteArrayInputStream in = new ByteArrayInputStream(bytes);
        dataIn = new DataInputStream(in);
        result.readFields(dataIn);
    } catch (InstantiationException e) {
        // should not happen
        assert false;
    } catch (IllegalAccessException e) {
        // should not happen
        assert false;
    } finally {
        IOUtils.closeQuietly(dataIn);
    }
    return result;
}
This method just deserializes the byte array to the correct object type, based on the provided class type token.
E.g., let's assume you have a custom ArrayWritable:
public class TextArrayWritable extends ArrayWritable {
    public TextArrayWritable() {
        super(Text.class);
    }
}
Now you issue a single HBase get:
...
Get get = new Get(row);
Result result = htable.get(get);

byte[] value = result.getValue(family, qualifier);
TextArrayWritable tawReturned = asWritable(value, TextArrayWritable.class);
Text[] texts = (Text[]) tawReturned.toArray();
for (Text t : texts) {
    System.out.print(t + " ");
}
...
Note: you may have already found the readCompressedStringArray() and writeCompressedStringArray() methods in WritableUtils, which seem suitable if you have your own String array-backed Writable class. Before using them, I'd warn you that they can cause a serious performance hit due to the overhead of gzip compression/decompression.
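For completeness, a sketch of such a String array-backed Writable using those helpers (the class name is hypothetical; the gzip caveat above applies):
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableUtils;

public class StringArrayWritable implements Writable {
    private String[] values;

    public StringArrayWritable() { }                            // required for readFields()
    public StringArrayWritable(String[] values) { this.values = values; }

    @Override
    public void write(DataOutput out) throws IOException {
        WritableUtils.writeCompressedStringArray(out, values);  // gzips each string
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        values = WritableUtils.readCompressedStringArray(in);
    }

    public String[] get() { return values; }
}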

FaultException.Detail coming back empty

I am trying to catch a given FaultException on a WCF client. I basically need to extract an inner description from the fault class so that I can package it in another exception for the upper layers to do whatever.
I've done this successfully a number of times; what makes it different this time is that the fault is declared as an array, as you can see from the service reference attribute declared on top of the method that throws the exception:
[System.ServiceModel.FaultContractAttribute(typeof(FaultClass[]), Action = "http://whatever/", Name = "whateverBusinessFault")]
This is my code:
try
{
    // call service here
}
catch (FaultException<FaultClass[]> ex)
{
    if (ex.Detail != null && ex.Detail.Length > 0)
    {
        throw new CustomException(ex.Detail[0].description);
    }
    else
    {
        throw;
    }
}
The problem is that Detail (which is an array) always comes back empty in the code, even though I can see the data (description field etc.) in the SOAP response in the WCF trace.
So the stuff I need is definitely coming back, but for some reason either it doesn't get deserialized or I can't get to it from code.
Any help appreciated!
UPDATE:
Trying with @Darin's suggestion but no luck; the string I am extracting from the XmlReader is "\r\n":
var sb = new StringBuilder();
using (XmlReader reader = fault.GetReaderAtDetailContents())
{
    while (reader.Read())
        sb.AppendLine(reader.ReadOuterXml());
}
var detail = sb.ToString();
Looks like the detail section is not coming up at all!
I found the solution on a UPS forum:
https://developerkitcommunity.ups.com/index.php/Special:AWCforum/st/id371
"The problem was the visual studio didn't quite map out the ErrorDetail objects right. The ErrorDetail node is called "ErrorDetail", but the type generated for it is "ErrorDetailType." I edited the reference.cs class generated for each service I was using and added a TypeName:"
It is difficult to say where the problem is, but I suspect the smoking gun is this Axis web service not generating a standard message. One way to work around this would be to parse the XML yourself:
try
{
    proxy.CallSomeMethod();
}
catch (FaultException ex)
{
    var fault = ex.CreateMessageFault();
    using (XmlReader reader = fault.GetReaderAtDetailContents())
    {
        // TODO: read the XML fault and extract the necessary information.
    }
}
It took me ages to figure out how to get the full details message from a FaultException as a string. I eventually figured it out and wrote this extension method:
public static string GetDetail(this FaultException faultException)
{
    if (faultException == null)
        throw new ArgumentNullException(nameof(faultException));

    MessageFault messageFault = faultException.CreateMessageFault();
    if (messageFault.HasDetail)
    {
        using (XmlDictionaryReader reader = messageFault.GetReaderAtDetailContents())
        {
            return reader.ReadContentAsString();
        }
    }
    return null;
}
Originally I was using reader.Value, but that only appeared to return the first line of a multi-line details message. reader.ReadContentAsString() appears to get the whole thing, new lines included, which is what I wanted.
I came up with the simplest test case I could. I hope it will help you.
Server side:
[ServiceContract]
public interface IService1
{
    [OperationContract]
    [FaultContract(typeof(FaultClass[]))]
    string Crash();
}

public class Service1 : IService1
{
    public string Crash()
    {
        var exception = new FaultException<FaultClass[]>(new FaultClass[] { new FaultClass { Data = "TEST" } }, new FaultReason("Boom"));
        throw exception;
    }
}

[DataContract]
public class FaultClass
{
    [DataMember]
    public string Data { get; set; }
}
Client side:
try
{
    using (var client = new Service1Client())
    {
        client.Crash();
    }
}
catch (FaultException<FaultClass[]> e)
{
    // Break here
}
I had a similar situation in trying to communicate data with faults (specifically a stack trace). See this question. I ended up solving it by creating my own serializable stack trace and including it in a derived FaultException class.
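A sketch of that approach (all names hypothetical): declare a fault detail class that carries the stack trace as a plain string, since Exception itself isn't data-contract serializable:
[DataContract]
public class ServiceFault
{
    [DataMember]
    public string Message { get; set; }

    [DataMember]
    public string StackTrace { get; set; } // plain-string stand-in for Exception.StackTrace
}

// Server side, on the operation: [FaultContract(typeof(ServiceFault))]
// then, when an exception occurs:
// throw new FaultException<ServiceFault>(
//     new ServiceFault { Message = ex.Message, StackTrace = ex.StackTrace },
//     new FaultReason(ex.Message));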

Wiring up WCF client side caching?

My application uses client-side enterprise caching; I would like to avoid writing code for each and every cacheable call and wondered if there is a solution such that WCF client-side calls can be cached, even for async calls.
Can this be done with WCF "behaviour" or some other means? Code examples?
I did this the other day with generic extension methods on the WCF service client (DataServiceClient). It uses Actions and Funcs to pass around the actual ServiceClient calls. The final client usage syntax is a little funky (if you don't like lambdas), but this method does FaultException/Abort wrapping AND caching:
public static class ProxyWrapper
{
    // start with a void wrapper, no parameters
    public static void Wrap(this DataServiceClient _svc, Action operation)
    {
        bool success = false;
        try
        {
            _svc.Open();
            operation.Invoke();
            _svc.Close();
            success = true;
        }
        finally
        {
            if (!success)
                _svc.Abort();
        }
    }

    // next, a void wrapper with one generic parameter
    public static void Wrap<T>(this DataServiceClient _svc, Action<T> operation, T p1)
    {
        bool success = false;
        try
        {
            _svc.Open();
            operation.Invoke(p1);
            _svc.Close();
            success = true;
        }
        finally
        {
            if (!success)
                _svc.Abort();
        }
    }

    // non-void wrappers also work, but take Func instead of Action
    public static TResult Wrap<T, TResult>(this DataServiceClient _svc, Func<T, TResult> operation, T p1)
    {
        TResult result = default(TResult);
        bool success = false;
        try
        {
            _svc.Open();
            result = operation.Invoke(p1);
            _svc.Close();
            success = true;
        }
        finally
        {
            if (!success)
                _svc.Abort();
        }
        return result;
    }
}
On the client side, we have to call them like this:
internal static DBUser GetUserData(User u)
{
    DataServiceClient _svc = new DataServiceClient();
    Func<int, DBUser> fun = (x) => _svc.GetUserById(x);
    return _svc.Wrap<int, DBUser>(fun, u.UserId);
}
See the plan here? Now that we have a generic set of wrappers for WCF calls, we can use the same idea to inject some caching. I went "low tech" here and just started throwing around strings for the cache key names... You could do something more elegant with reflection, no doubt.
public static TResult Cache<TResult>(this DataServiceClient _svc, string key, Func<TResult> operation)
{
    TResult result = (TResult)HttpRuntime.Cache.Get(key);
    if (result != null)
        return result;

    bool success = false;
    try
    {
        _svc.Open();
        result = operation.Invoke();
        _svc.Close();
        success = true;
    }
    finally
    {
        if (!success)
            _svc.Abort();
    }
    HttpRuntime.Cache.Insert(key, result);
    return result;
}

// uncaching is just as easy
public static void Uncache<T>(this DataServiceClient _svc, string key, Action<T> operation, T p1)
{
    bool success = false;
    try
    {
        _svc.Open();
        operation.Invoke(p1);
        _svc.Close();
        success = true;
    }
    finally
    {
        if (!success)
            _svc.Abort();
    }
    HttpRuntime.Cache.Remove(key);
}
Now just call Cache on your Reads and Uncache on your Create/Update/Deletes:
// note the parameterless lambda? this was the only tricky part.
public static IEnumerable<DBUser> GetAllDBUsers()
{
    DataServiceClient _svc = new DataServiceClient();
    Func<DBUser[]> fun = () => _svc.GetAllUsers();
    return _svc.Cache<DBUser[]>("AllUsers", fun);
}
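And for symmetry, a write path would invalidate the same key (UpdateUser is a hypothetical proxy method; the pattern is the point):
internal static void UpdateDBUser(DBUser user)
{
    DataServiceClient _svc = new DataServiceClient();
    Action<DBUser> fun = (u) => _svc.UpdateUser(u);
    // Runs the update through the same Open/Close/Abort wrapper,
    // then evicts the stale "AllUsers" entry from the cache.
    _svc.Uncache<DBUser>("AllUsers", fun, user);
}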
I like this method because I didn't have to recode anything server-side, just my WCF proxy calls (which were admittedly a little brittle and smelly, scattered about everywhere).
Substitute in your own WCF proxy conventions and standard caching procedures, and you're good to go. It's a lot of work to create all the generic wrapper templates at first, but I only went up to two parameters, and it helps all my caching operations share a single function signature (for now). Let me know if this works for you or if you have any improvements.
Unfortunately, I think you'll have to roll your own. I don't believe WCF has a client-side caching mechanism built in.
The answer to this question may also help.
Similar to the above solution, check out http://www.acorns.com.au/blog/?p=85 (PolicyInjection on WCF Services). You can specify the policy to match your service name.
If you want caching without having to explicitly implement it on each and every service call, consider the Caching Handler in the Policy Injection application block. You can mark your calls with an attribute, and the policy injection block will handle caching for you.
http://msdn.microsoft.com/en-us/library/cc511757.aspx