Mono Slice with data transformation in spring reactive cassandra

After reading the new Spring Data Cassandra documentation (here), I see that there is now Mono<Slice<T>> support for reactive Cassandra.
Which is great, because my team wanted to implement some sort of pagination on our reactive response, so we are happy to move from Flux<T> to Mono<Slice<T>>. But there is an issue: we do transformations on the data of our Flux with flux.map, and that doesn't seem possible with Slice without blocking.
For example, we have this functionality:
Flux<Location> resp = repository.searchLocations(searchFields).map(this::transformLocation);
Where transformLocation is a function that receives the database object and returns a more user-friendly object.
How would you achieve that with Mono<Slice<Location>>?
From what I have seen of Slice, you can get the data with getContent, but that returns a List; won't that be blocking?

You can use the list obtained from the getContent() method to create a Flux, as you were doing before.
Mono<Slice<Location>> sliceMono; // this is just another Mono on which you can operate
// flatMapMany turns the Mono into a Flux built from the slice's content, without blocking
Flux<Location> intermediateResp = sliceMono.flatMapMany(slice -> Flux.fromIterable(slice.getContent()));
// you can now transform this intermediateResp Flux as you were doing before, e.g. with .map(this::transformLocation)
Won't this serve your purpose or do you require something else?
(The code has been written without the help of an IDE, so use it to understand the approach.)

ReactiveRedisOperations not saving object in Redis

I am using ReactiveRedisOperations to save data objects in Redis, and this call returns a Mono as per the API.
I notice that if I don't do anything with this returned Mono, the code does not do anything.
Just trying to understand how this works.
I would like the code below to save every object to Redis in this loop, but it does not; please share what is missing here.
for (SomeObject obj : list) {
    reactiveRedisOperations.opsForHash().put(key, hashKey, obj).map(b -> obj);
}
On the other hand, if I return the Mono result from similar code via a REST service response, then it seems to save to Redis correctly; I'm not sure why that is. Thanks.
This is a quirk of reactive streams, not Lettuce.
Unlike a CompletableFuture, which begins execution when it's created, a stream won't begin executing (the command isn't sent) until a consumer has subscribed to it.
I believe this is to facilitate backpressure, so a slow consumer isn't flooded with data by the producer.
Some nice reading -> https://blog.knoldus.com/working-with-project-reactor-reactive-streams/
If you return a Mono to the underlying web framework, which will generally handle subscribing to it, the respective operation will trigger, resulting in side effects such as data being created in your Redis datastore.
Should you wish to have your operations executed, do the same: subscribe to the publisher (Mono or Flux) yourself, or return these data wrappers to a calling function that you know will handle this for you, as in the aforementioned example:
Flux.fromIterable(list)
    .flatMap(obj -> reactiveRedisOperations.opsForHash().put(key, hashKey, obj))
    .subscribe();

Ember change normalizeResponse based on queried model

I'm using a second datastore with my Ember app, so I can communicate with a separate external API. I have no control over this API.
With a DS.JSONSerializer I can add some missing properties like id:
normalizeResponse(store, primaryModelClass, payload, id, requestType) {
    if (requestType == 'query') {
        payload.forEach(function (el, index) {
            payload[index].id = index;
        });
    }
    return this._super(store, primaryModelClass, payload, id, requestType);
}
Now I can do different tricks for each requestType. But every response is parsed this way, and sometimes the response from one request needs to be parsed differently.
So what I am trying to do is change the normalizeResponse functionality for each request path (mapped to a fake model using pathForType in an adapter for this store). But the store argument is always the same (obviously), and the primaryModelClass argument is always "unknown mixin"; I'm not sure if that can be any help.
How can I find what model was requested? With this information I could do a switch() in normalizeResponse.
Is there a different way to achieve my goal that does not require me to make a separate adapter for every path/model?
There are over a dozen normalize functions available. Something should work for what I am trying to achieve.
I think this is a great example of a use case for not using Ember Data.
Assuming that you have models A, B, C that are all working great with Ember Data, leave those alone.
I'd create a separate service and make raw requests to that different endpoint. So you'd replace this.store.query('thing', {args}) with a separate service that uses ember-ajax (or ember-fetch or whatever); a sketch follows below. If you need to, you can use that service to hold the data (Ember Data is just a service anyway), or you can create models and push them into the store manually.
Without knowing more about your exact situation it's hard to give specific code/advice, but I'd just avoid this problem and write your own custom service.
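For illustration, a minimal sketch of such a service using the ember-ajax addon; the service name, method name, and endpoint URL here are all hypothetical:
// app/services/external-api.js
import Ember from 'ember';

export default Ember.Service.extend({
    ajax: Ember.inject.service(),

    // Raw request to the external API, bypassing Ember Data entirely
    fetchLinks(params) {
        return this.get('ajax').request('https://external.example.com/links', { data: params });
    }
});
A route or component can then inject externalApi and consume the returned promise directly, or normalize the results and push them into the store manually if you still want them in Ember Data.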
You can use primaryModelClass.modelName.
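For example, a minimal sketch of branching on it in the serializer (the model names here are hypothetical):
normalizeResponse(store, primaryModelClass, payload, id, requestType) {
    switch (primaryModelClass.modelName) {
        case 'fake-model-one':
            // parsing specific to the /fake-model-one path
            break;
        case 'fake-model-two':
            // parsing specific to the /fake-model-two path
            break;
    }
    return this._super(store, primaryModelClass, payload, id, requestType);
}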

MULE 3.7.0 C.E. - BufferInputStream payload turns into String

We are programming a MULE REST service which is divided into several layers.
The API layer (RAML-based) receives the inbound requests and prepares some flowVars so that the lower layers know how to proceed.
The second layer is also service-specific, so there's one flow for each service offered.
Finally, the third layer contains a single flow and is the one which, depending on the flowVars configured in the upper layer, carries out a call to the required third-party service using an HTTP Request component.
In this third layer, some audit records are written so that we know what we are sending and what we are receiving. Our audit component (a custom MULE connector) needs to write the content of the payload to our database, so message.getPayloadAsString() (or similar) is needed. If we use a plain getter (like message.getPayload()), only the data type is obtained and thus written into the database.
The problem lies right here. Every single payload received seems to be a BufferInputStream and, when calling message.getPayloadAsString(), an inner casting seems to affect the payload. Normally this wouldn't be a problem, except for one of the cases we have found: one of the services we invoke returns a PNG file, so message.getPayloadAsString() turns it into a String and breaks the image.
We've tried to clone the payload in order to keep one copy safe from the casting, but as an Object it doesn't implement the Cloneable interface; we've tried to make a copy of the payload in every other way we could think of, but only a new reference is generated; we've tried to serialize the payload to create a new copy from the serialized data, but the Object doesn't implement the Serializable interface either... Everything useless.
Any help, idea or piece of advice would be appreciated.
We finally managed to solve the problem by using message.getPayloadAsBytes(), whose return value is a brand new byte[] object. This method doesn't alter the payload within the message either. Using the byte array, we can create a String object to be written in our audit like this:
byte[] auditByteArray = message.getPayloadAsBytes();
String auditString = new String(auditByteArray);
Moreover, we ran a test consisting of establishing that byte array as the new payload in the message, and both JSON and PNG responses are handled correctly by the browser.

Relay modern caching example

I would like to enable caching in my React Native application. I am using GraphQL with Relay Modern. I found out that caching is not enabled by default in Relay Modern, but they have exposed RelayQueryResponseCache from relay-runtime, which we can add to the fetchQuery function in our API. I read the discussions here and here about it, but have not seen any example to get started. Can someone help me out with this?
EDIT:
OK, I came up with a solution. I think it misses a few things, but so far it serves our needs.
I have noticed that anything passed to QueryRenderer as cacheConfig ends up being passed into the fetchQuery function inside my environment.
So I created a component which loads the data by some relation and resolves it into the correct JSON structure requested by the query, then returns it into the state. I then extended the component containing my QueryRenderer with this 'cache loader'. Now, when componentWillMount() is called, I ask for the cached data; during this I set this.state.loading = true so I can handle the loading state. Reading from the DB is async.
I am using this class in other components as well; each one handles its own cached data and just passes it to QueryRenderer.
However, this adds extra logic to every component that supports this caching. Passing the cache resolver as cacheConfig and resolving the cached data immediately inside the environment would probably be much cleaner.
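For reference, a minimal sketch of that cleaner, environment-level approach, along the lines of the caching pattern discussed in the Relay documentation (the cache class is exported as QueryResponseCache from relay-runtime); the endpoint URL, cache size, and TTL are assumptions:
import { Environment, Network, RecordSource, Store, QueryResponseCache } from 'relay-runtime';

// Keep up to 250 query responses for one minute (both values are arbitrary)
const cache = new QueryResponseCache({ size: 250, ttl: 60 * 1000 });

function fetchQuery(operation, variables, cacheConfig) {
    const queryID = operation.text;
    const forceFetch = cacheConfig && cacheConfig.force;
    const fromCache = cache.get(queryID, variables);
    if (fromCache !== null && !forceFetch) {
        // Resolve from the cache and skip the network entirely
        return fromCache;
    }
    return fetch('/graphql', { // the endpoint is an assumption
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query: operation.text, variables }),
    })
        .then(response => response.json())
        .then(json => {
            cache.set(queryID, variables, json);
            return json;
        });
}

const environment = new Environment({
    network: Network.create(fetchQuery),
    store: new Store(new RecordSource()),
});
Passing {force: true} as cacheConfig to a QueryRenderer then bypasses the cache for that one query.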

Backbone.sync – Collection using ajax as well as Socket.IO/WebSockets

I have a Backbone application, which has a collection called Links. Links maps to a REST API URI of /api/links.
The API will give the user the latest links. However, I have a system in place that will add a job to the message queue when the user hits this API, requesting that the links in the database are updated.
When this job is finished, I would like to push the new links to the Backbone collection.
How should I do this? In my mind I have three options:
1. From the Backbone collection, long poll the API for new links
2. Set up WebSockets to send a "message" to the collection when the job is done, sending the new data with it
3. Scrap the REST API for my application and just use WebSockets for everything, as I am likely to have more realtime needs later down the line
WebSockets with the REST API
If I use WebSockets, I'm not sure of the best way to integrate this into my Backbone collection so that it works alongside the REST API.
At the moment my Backbone collection looks like this:
var Links = Backbone.Collection.extend({
    url: '/api/links'
});
I'm not sure how to enable the Backbone collection to handle AJAX and WebSockets. Do I continue to use the default Backbone.sync for the CRUD Ajax operations, and then deal with the single WebSocket connection manually? In my mind:
var Links = Backbone.Collection.extend({
    url: '/api/links',
    initialize: function () {
        var socket = io.connect('http://localhost');
        // bind the handler so `this` refers to the collection
        socket.on('newLinks', this.addLinks.bind(this));
    },
    addLinks: function (data) {
        // Prepend `data` to the collection
        this.add(data, { at: 0 });
    }
});
Questions
How should I implement my realtime needs, from the options above or any other ideas you have? Please provide examples of code to give some context.
No worries! Backbone.WS got you covered.
You can init a WebSocket connection like:
var ws = new Backbone.WS('ws://example.com/');
And bind a Model to it like:
var model = new Backbone.Model();
ws.bind(model);
Then this model will listen to message events of type ws:message, and you can call model.send(data) to send data via that connection.
Of course the same goes for Collections.
Backbone.WS also gives some tools for mapping a custom REST-like API to your Models/Collections.
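For instance, a short sketch of wiring a collection up to it, using only the API shown above (the handler body is illustrative):
var links = new Backbone.Collection();
ws.bind(links);

// Bound entities receive socket traffic as 'ws:message' events
links.on('ws:message', function (data) {
    // e.g. prepend freshly pushed links to the collection
    links.add(data, { at: 0 });
});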
My company has a fully Socket.IO-based solution using Backbone, primarily because we want our app to "update" the GUI when changes are made on another user's screen in real time.
In a nutshell, it's a can of worms. Socket.IO works well, but it also opens a lot of doors you may not be interested in seeing behind. Backbone events get quite out of whack because they are so tightly tied to the Ajax transactions; you're effectively overriding that default behavior. One of our bigger hiccups has been deletes, because our socket response isn't the model that changed but the entire collection, for example. Our solution goes a bit further than most, because transactions are via a DDL that is specifically set up to be universal across the many devices we need to communicate with, now and in the future.
If you do go the ioBind path, beware that you'll be using different methods for change events compared to your non-socket traffic (if you mix and match). That's the big drawback of that method: standard things like "change" become "update", for example, to avoid collisions. It can get really confusing during late-night debugging or when a new developer joins the team. For that reason, I prefer either going all sockets or not at all, rather than a combination. Sockets have been good so far, and scary fast.
We use a base function that does the heavy lifting, and have several others that extend this base to give us the transaction functionality we need.
This article gives a great starter for the method we used.