How to match the JSON response differences between Jackson 1 and Jackson 2 for a WebLogic 11G to 12C migration

I might be late to the party. We are now paying down tech debt by migrating our application from WebLogic 11G to 12C. As part of this we had to move from Jersey 1.19 to Jersey 2.26 (the version 12C supports), which in turn required moving internally from Jackson 1.9.2 (a guess) to Jackson 2.9.6. I've also migrated javaee-api from 6.0 to 7.0. It is an EJB application with a LOCAL implementation.
P.S.: The same request/response POJOs are used for both the JSON and SOAP services, annotated with @XmlRootElement.
We have now started seeing differences between the 11G and 12C responses, like the few below:
"true" -> true
"2" -> 2
{ "list1" : [ { "foo": "bar",..}, {"foo1": "bar1",..}] } -> [ {"type": "list1", "foo": "bar"..}, {"type": "list1", "foo1": "bar1",..}
Null-valued fields now appear in the responses after the migration, whereas they were omitted before
etc.
Is there any documentation listing the complete set of such changes that were introduced, along with their respective fixes? I couldn't find any such documentation on the internet.
Most of the response changes introduced in Jersey 2 seem logical. However, we have to match the 11G responses to avoid downstream impact. So we are googling the individual issues and fixing them one by one with SerializationFeature/DeserializationFeature settings, custom serializers, or JsonGenerator features, as suggested; a sketch of this approach follows below.
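For example, here is a minimal sketch of the kind of ObjectMapper configuration we have been accumulating. The class name is ours, and the exact set of features needed depends on which differences you have to undo:

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;

public class LegacyCompatibleMapper {
    public static ObjectMapper create() {
        ObjectMapper mapper = new ObjectMapper();
        // 11G omitted null-valued fields; Jackson 2 writes them by default
        mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
        // 11G emitted numbers as JSON strings ("2" instead of 2)
        mapper.configure(JsonGenerator.Feature.WRITE_NUMBERS_AS_STRINGS, true);
        // 11G emitted booleans as JSON strings ("true" instead of true)
        SimpleModule legacyCompat = new SimpleModule("legacy-compat");
        legacyCompat.addSerializer(Boolean.class, ToStringSerializer.instance);
        mapper.registerModule(legacyCompat);
        return mapper;
    }
}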
As the application is vast and we can't test each REST endpoint end to end (due to business SME unavailability), we have to handle all corner cases without missing any scenario. So if I had the documentation or a list of the changes, I could find the occurrences in the code and address them.

Related

Apache Beam Java 2.26.0: BigQueryIO 'No rows present in the request'

Since the Beam 2.26.0 update, we have run into errors in our Java SDK streaming data pipelines. We have been investigating the issue for quite some time now but are unable to track down the root cause. When downgrading to 2.25.0, the pipeline works as expected.
Our pipelines are responsible for ingestion, i.e., consume from Pub/Sub and ingest into BigQuery. Specifically, we use the PubSubIO source and the BigQueryIO sink (streaming mode). When running the pipeline, we encounter the following error:
{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "message" : "No rows present in the request.",
    "reason" : "invalid"
  } ],
  "message" : "No rows present in the request.",
  "status" : "INVALID_ARGUMENT"
}
Our initial guess was that the pipeline's logic was somehow bugged, causing the BigQueryIO sink to fail. After investigation, we concluded that the PCollection feeding the sink does contain correct data.
Earlier today I was looking in the changelog and noticed that the BigQueryIO sink received numerous updates. I was specifically worried about the following changes:
BigQuery’s DATETIME type now maps to Beam logical type org.apache.beam.sdk.schemas.logicaltypes.SqlTypes.DATETIME
Java BigQuery streaming inserts now have timeouts enabled by default. Pass --HTTPWriteTimeout=0 to revert to the old behavior
With respect to the first update, I made sure to remove all DATETIME values from the resulting TableRow objects. In this specific scenario, the error still stands.
For the second change, I'm unsure how to pass the --HTTPWriteTimeout=0 flag to the pipeline. How is this best achieved?
Any other suggestions as to the root cause of this issue?
Thanks in advance!
We have finally been able to fix this issue and rest assured it has been a hell of a ride. We basically debugged the entire BigQueryIO connector and came to the following conclusions:
The TableRow objects that are being forwarded to BigQuery used to contain enum values. Due to these not being serializable, an empty payload is forwarded to BigQuery. In my opinion, this error should be made more explicit (why was this suddenly changed anyway?).
The issue was solved by adding the @Value annotation (com.google.api.client.util.Value) to each enum entry.
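For illustration, a sketch of what the fix looks like; the enum and its constants are made up, not our actual model:

import com.google.api.client.util.Value;

// Each enum entry carries @Value so the Google HTTP client serializes it
// as a string; without the annotation our TableRow payload ended up empty.
public enum ProcessingStatus {
    @Value ACTIVE,
    @Value INACTIVE
}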
The same TableRow object also contained values of type byte[]. These values were injected into a BigQuery column of the BYTES type. While this previously worked without explicitly computing a base64 encoding, it now yielded errors.
The issue was solved by computing a base64 ourselves (this setup is also discussed in the following post).
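A minimal sketch of that fix, assuming a hypothetical BYTES column named "payload":

import java.util.Base64;
import com.google.api.services.bigquery.model.TableRow;

public class RowEncoding {
    static TableRow toRow(byte[] payloadBytes) {
        TableRow row = new TableRow();
        // Encode the byte[] ourselves instead of handing BigQuery the raw array
        return row.set("payload", Base64.getEncoder().encodeToString(payloadBytes));
    }
}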
--HTTPWriteTimeout is a pipeline option. You can set it the same way you set the runner, etc. (typically on the command line).
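For example, something along these lines; this is a sketch, and I'm assuming the flag is picked up via BigQueryOptions:

import org.apache.beam.sdk.io.gcp.bigquery.BigQueryOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PipelineMain {
    public static void main(String[] args) {
        // e.g. java -jar pipeline.jar --runner=DataflowRunner --HTTPWriteTimeout=0
        BigQueryOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(BigQueryOptions.class);
        // ... build and run the pipeline with these options
    }
}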

Creating a Sequelize Dialect for new Database

I'm pretty new to Sequelize. Though I've worked with Node previously, I did not use any ORM framework. At present I'm using a new SQL DB (which is not supported by Sequelize) and want to connect to it using Node.js and Sequelize (a popular ORM for Node.js) by prototyping the existing dialects.
The configuration is correct, as I've tried it without the ORM.
The problem is that after configuring the connection properties, sequelize.authenticate() doesn't throw any error but doesn't return a promise either:
/**
 * Test the connection by trying to authenticate
 *
 * @error 'Invalid credentials' if the authentication failed (even if the database did not respond at all...)
 * @return {Promise}
 */
authenticate(options) {
  return this.query('SELECT 1+1 AS result', _.assign({ raw: true, plain: true }, options)).return();
}
The return statement doesn't return anything. I've read this post on how to create a new dialect. Though it says creating a new dialect is not encouraged and an error is thrown if you try, I think there must be a way, because if dialects could be created for other SQL databases then there should be a way to do it for this one. This is an open-source project on GitHub. Has anyone worked on this previously? Any help is appreciated. Thanks in advance.
Only the five built-in dialects are supported, and an error will be thrown if you try to use NewSQL.
There is a lot of code in Sequelize that constructs queries based on the dialect, so even if you could get past the error (for example by forking the repo and changing it), the likelihood of everything working as you expect (or as documented) is low.
I suggest posting an issue on GitHub to bring that dialect to the project.

How to build Query Parameter using Spring Traverson

I have a Spring Data REST webservice with QueryDSL web support enabled, so I can query any of the fields directly like below:
http://localhost:9000/api/prod1007?cinfo1=0126486035
I was using Traverson to access this service, but Traverson is not generating the query parameter as above. Below is my code (I have tried both withTemplateParameters() and withParameters() at the Hop level).
Code:
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("cinfo1", "0127498374");

PagedResources<Tbpinstance> items = traverson
    .follow(Hop.rel("prod1007"))
    .withTemplateParameters(parameters)
    .toObject(resourceParameterizedTypeReference);
Any help is much appreciated. Thanks!
Traverson needs to know where to put those parameters. They could be path parameters, or they could be query parameters. Furthermore, Traverson navigates the service from the root, so the parameters might need to be inserted somewhere in the middle, and not in the final step only.
For these reasons the server needs to clearly tell how to use the parameters. Traverson needs a HATEOAS-"directory" for the service. When Traverson HTTP GETs the http://localhost:9000/api document, it needs to contain a link similar to this:
"_links" : {
  "product" : {
    "href" : "http://localhost:9000/api/prod1007{?cinfo1}",
    "templated" : true
  }
}
Now it knows that the cinfo1 parameter is a query parameter and will be able to put it into its place.
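With such a templated link in place, the client code from the question should work essentially as written; here is a sketch using the same variables as the question:

import java.util.HashMap;
import java.util.Map;
import org.springframework.hateoas.PagedResources;
import org.springframework.hateoas.client.Hop;

// Once the "prod1007" link advertises {?cinfo1}, Traverson knows cinfo1 is
// a query parameter and expands the template itself.
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("cinfo1", "0126486035");

PagedResources<Tbpinstance> items = traverson
    .follow(Hop.rel("prod1007"))
    .withTemplateParameters(parameters)
    .toObject(resourceParameterizedTypeReference);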
@ZeroOne, you are entirely correct; that is what the response from the server should look like. Currently spring-hateoas does not support responses that look like that (I expect it will in the future, as I have seen comments by Oliver Gierke indicating that spring-hateoas is going through a major upgrade).
As of the time of writing, to generate responses from the server as you describe, we have used the spring-hateoas-ext mentioned in https://github.com/spring-projects/spring-hateoas/issues/169. You can find the code at https://github.com/dschulten/hydra-java#affordancebuilder-for-rich-hyperlinks-from-v-0-2-0.
This is a 'drop-in replacement' for spring-hateoas' ControllerLinkBuilder.
Here is the Maven dependency we use (but check for the latest version).
<!-- Drop-in replacement for spring-hateoas' ControllerLinkBuilder -->
<dependency>
  <groupId>de.escalon.hypermedia</groupId>
  <artifactId>spring-hateoas-ext</artifactId>
  <version>0.3.0-beta6</version>
</dependency>
Here's the import we use in our ResourceAssemblers.
import static de.escalon.hypermedia.spring.AffordanceBuilder.*;
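For completeness, link building then looks just like it does with ControllerLinkBuilder. This is a hypothetical sketch; the controller, method, and resource variable are made up:

// Because the controller method takes the request parameter, the generated
// link is rendered as templated, e.g. http://localhost:9000/api/prod1007{?cinfo1}
resource.add(linkTo(methodOn(ProductController.class).getProduct(null))
    .withRel("prod1007"));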

Ember Data: serialize embedded records

Ember: 1.5.1
Ember Data: 1.0.0-beta.7.f87cba88
I have a need for asymmetrical (de)serialization for one relationship type: sideloaded records on deserializing and embedded on serializing.
I have asked for this in the standard way:
RailsEmberTest.PlanItemSerializer = DS.ActiveModelSerializer.extend(DS.EmbeddedRecordsMixin, {
  attrs: {
    completions: { serialize: 'records', deserialize: 'ids' } // embedded: 'always'
  }
});
However, it doesn't seem to work. Following the execution through, I find that at line 498 of Ember data, the serializer decides whether or not to embed a relationship:
embed = attrs && attrs[key] && attrs[key].embedded === 'always';
At this stage, the attrs hash is well-formed, with completions containing the attributes as above. However, this line results in embed being false, and consequently the record is not embedded.
Overriding the value of embed to true makes it all hunky-dory.
Any ideas why Ember data is ignoring the settings? I suspect that maybe in my version the only option is embedded, and I need to upgrade to a later version to take advantage of the asymmetrical settings for serialize and deserialize.
However, given the possible manifold changes I am fearful of upgrading!
I'd be very grateful for your advice.
Courtesy of the London Ember meetup, I now know that it was simply down to the version of Ember Data! I have now upgraded to the latest beta with no trouble.

Determine request Uri from WCF Data Services LINQ query for FirstOrDefault against Azure without executing it?

Problem
I would like to trace the Uri that will be generated by a LINQ query executed against a Microsoft.WindowsAzure.StorageClient.TableServiceContext object. TableServiceContext just extends System.Data.Services.Client.DataServiceContext with a couple of properties.
The issue I am having is that the query executes fine against our Azure Table Storage instance when we run the web role on a dev machine in debug mode (we are connecting to Azure storage in the cloud not using Dev Storage). I can get the resulting query Uri using Fiddler or just hovering over the statement in the debugger.
However, when we deploy the web role to Azure, the query fails against the exact same Azure Table Storage source with a ResourceNotFound DataServiceClientException. We have had ResourceNotFound errors before that dealt with the behavior of FirstOrDefault() on empty tables. This is not the problem here.
As one approach to the problem, I wanted to compare the query Uri that is being generated when the web role is deployed versus when it is running on a dev machine.
Question
Does anyone know a way to get the query Uri for the query that will be sent when the FirstOrDefault() method is called? I know that you can call ToString() on the IQueryable returned from the TableServiceContext, but my concern is that when FirstOrDefault() is called the Uri might be further optimized, and ToString() on the IQueryable might not be what is ultimately sent to the server.
If someone has another approach to the problem I am open to suggestions. It seems to be a general problem with LINQ when trying to determine what will happen when the expression tree is finally evaluated. I am open to suggestions here as well because my LINQ skills could use some improvement.
Sample Code
public void AddSomething(string ProjectID, string Username) {
    TableServiceContext context = new TableServiceContext();
    var qry = context.Somethings.Where(m => m.RowKey == Username
        && m.PartitionKey == ProjectID);
    System.Diagnostics.Trace.TraceInformation(qry.ToString());
    // ^ Here I would like to trace the Uri that will be generated
    // and sent to the server when the qry.FirstOrDefault() call below is executed.
    if (qry.FirstOrDefault() == null) {
        // ^ This statement generates an error when the web role is running
        // in the fabric
        ...
    }
}
Edit: Update and Answer
Steve provided the right answer. Our problem was exactly as described in this post, which describes an issue with PartitionKey/RowKey ordering in single-entity queries that was fixed with an update to the Azure OS. This explains the discrepancy between our dev machines and the web role deployed to Azure.
When I indicated we had dealt with the ResourceNotFound issue before in our existence checks, we had dealt with it in two ways in our code. One way was using exception handling to deal with the ResourceNotFound error; the other way was to put the RowKey first in the LINQ query (as some MS people had indicated was appropriate).
It turns out we have several places where the RowKey was first instead of using the exception handling. We will address this by refactoring our code to target .NET 4 and setting the IgnoreResourceNotFoundException property of the TableServiceContext to true.
Lesson learned (more than once): Don't depend on quirky undocumented behavior.
Aside
We were able to get the query Uris, and they did turn out to be different (as the blog post indicated they would be). Here are the results:
Query Uri from Dev Fabric
https://ourproject.table.core.windows.net/Somethings()?$filter=(RowKey eq 'test19@gmail.com') and (PartitionKey eq '41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
Query Uri from Azure Fabric
https://ourproject.table.core.windows.net/Somethings(RowKey='test19@gmail.com',PartitionKey='41e0c1ae-e74d-458e-8a93-d2972d9ea53c')
I can do one better... I think I know what the problem is. :)
See http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/26/how-wcf-data-service-changes-in-os-1-4-affects-windows-azure-table-clients.aspx.
Specifically, it used to be the case (in previous Guest OS builds) that writing the query as you did (with the RowKey predicate before the PartitionKey predicate) resulted in a filter query, while the reverse (PartitionKey preceding RowKey) resulted in the kind of query that raises an exception if the result set is empty.
I think the right fix for you (as indicated in the above blog post) is to set the IgnoreResourceNotFoundException to true on your context.