Custom serializer in Apache Storm 1.1.0 - serialization

I use a custom serializer in my Storm topology config like this:
config.put(Config.TOPOLOGY_FALL_BACK_ON_JAVA_SERIALIZATION, false);
config.registerSerialization(ObjectNode.class, ObjectNodeSerializer.class);
ObjectNodeSerializer.class is instantiated as expected during bolt preparation, but its serialize and deserialize methods are never called during topology execution.

By default, Storm does not serialize tuples passed to bolts within the same worker. If you only have a single worker process (running in local-cluster mode?) and want to test serialization, set the following config:
topology.testing.always.try.serialize: true
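Putting the two settings together, a minimal sketch of the topology configuration might look like the following. This assumes ObjectNodeSerializer is a Kryo Serializer for Jackson's ObjectNode, as in the question; the testing flag is also available as the Config.TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE constant:

```java
import org.apache.storm.Config;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class TopologyConfigSketch {

    public static Config buildConfig() {
        Config config = new Config();
        // Fail loudly instead of silently falling back to Java serialization.
        config.put(Config.TOPOLOGY_FALL_BACK_ON_JAVA_SERIALIZATION, false);
        // Register the custom Kryo serializer from the question.
        config.registerSerialization(ObjectNode.class, ObjectNodeSerializer.class);
        // Force serialization even within a single worker / LocalCluster,
        // so the custom serializer is exercised during testing.
        config.put(Config.TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE, true);
        return config;
    }
}
```

Remember to remove the testing flag for production runs; skipping serialization inside a worker is a deliberate performance optimization.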

Related

ThreadLocal in spring web flux

I am new to Spring WebFlux and am trying to use a ThreadLocal with it.
I have a requirement where I need to pass a header from one microservice to another via WebClient.
I do not want to carry the header manually from one service to the other and assign it on each request.
So I thought of using a ThreadLocal that I can set and then access in the WebClient call.
I am trying to find a sample application that shows ThreadLocal usage within Spring WebFlux.
You should not use ThreadLocal in a reactive environment. WebFlux (which is based on Reactor) is a non-blocking framework: it reuses threads, so the steps of one reactive pipeline can run on different threads, and multiple requests can share the same thread concurrently; while one request waits, another operation is picked up and executed. Imagine your request puts something into a ThreadLocal and then waits, for example on a database select. Another request can overwrite that value, and the next pipeline stage of the original request will see a value that belongs to the other request. ThreadLocal is suited to the thread-per-request model.
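The thread-reuse hazard described above can be reproduced with plain JDK threads, no Reactor required. In this sketch (class and method names are illustrative) a single-thread executor stands in for an event-loop thread that a reactive runtime reuses across requests:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalReuseDemo {
    static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    public static String simulate() throws Exception {
        // One shared thread, like an event loop that serves many requests.
        ExecutorService loop = Executors.newSingleThreadExecutor();
        try {
            // "Request A" stores its id, then yields (as if waiting on I/O).
            loop.submit(() -> REQUEST_ID.set("request-A")).get();
            // "Request B" runs on the SAME thread and overwrites the value.
            loop.submit(() -> REQUEST_ID.set("request-B")).get();
            // When "request A" resumes on that thread, it sees B's value.
            return loop.submit(REQUEST_ID::get).get();
        } finally {
            loop.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(simulate()); // prints "request-B", not "request-A"
    }
}
```

The resumed "request A" reads "request-B": exactly the cross-request leak the answer warns about.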
For WebFlux, you can use the Reactor Context instead. For example, put the value into the pipeline in a WebFilter; then you can retrieve it at any point of the reactive pipeline:
chain.filter(exchange).contextWrite(<your data>)
In the pipeline (in map/flatMap, etc.):
Mono.deferContextual(...)
The Reactor reference documentation covers the Context in detail.
Alternatively, you can lift the ThreadLocal's value on every operator using Reactor Hooks, but that is neither a clean nor a bulletproof solution.
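Putting the two snippets above together, a hedged sketch of the WebFilter approach might look like this (the class name, header name, and context key are illustrative, not from the question):

```java
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

public class HeaderContextFilter implements WebFilter {
    static final String HEADER_KEY = "headerValue"; // illustrative context key

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        String header = exchange.getRequest().getHeaders().getFirst("X-Custom-Header");
        // Write the header into the Reactor Context instead of a ThreadLocal;
        // the Context travels with the pipeline regardless of which thread runs it.
        return chain.filter(exchange)
                .contextWrite(ctx -> header == null ? ctx : ctx.put(HEADER_KEY, header));
    }
}
```

Later, at any point in the pipeline, the value can be read back with `Mono.deferContextual(ctx -> Mono.just(ctx.get(HeaderContextFilter.HEADER_KEY)))` and passed to the outgoing WebClient request.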

WebClient instrumentation in spring sleuth

I'm wondering whether Sleuth supports instrumentation of the reactive WebClient.
I didn't find it in the documentation:
Instruments common ingress and egress points from Spring applications (servlet filter, async endpoints, rest template, scheduled actions, message channels, Zuul filters, and Feign client).
My case:
I may use WebClient in either a WebFilter or in my REST resource to produce a Mono.
And I want:
A sub-span automatically created as a child of the root span
Trace info propagated via headers
If the instrumentation is not supported at the moment, am I supposed to manually get the span from the context and do it myself, like this:
OpenTracing instrumentation on reactive WebClient
Thanks
Leon
Even though this is an old question, this may help others:
WebClient instrumentation only works if the instance is created via Spring as a bean. Check the Spring Cloud Sleuth reference guide:
You have to register WebClient as a bean so that the tracing instrumentation gets applied. If you create a WebClient instance with a new keyword, the instrumentation does NOT work.
If you go to Sleuth's documentation for the Finchley release train and search for WebClient, you'll find it: https://cloud.spring.io/spring-cloud-static/Finchley.RC2/single/spring-cloud.html#__literal_webclient_literal . In other words, it is supported out of the box.
UPDATE:
New link - https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/integrations.html#sleuth-http-client-webclient-integration
Let me paste the contents:
3.2.2. WebClient
This feature is available for all tracer implementations.
We inject an ExchangeFilterFunction implementation that creates a span and, through on-success and on-error callbacks, takes care of closing client-side spans.
To block this feature, set spring.sleuth.web.client.enabled to false.
You have to register WebClient as a bean so that the tracing
instrumentation gets applied. If you create a WebClient instance with
a new keyword, the instrumentation does NOT work.
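For completeness, a minimal sketch of the bean registration the docs require might look like this (the configuration class, bean method, and base URL are illustrative). Injecting the Spring Boot auto-configured WebClient.Builder lets Sleuth attach its tracing ExchangeFilterFunction:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    // Built from the auto-configured builder and exposed as a bean, so the
    // tracing instrumentation is applied. A WebClient created with
    // WebClient.create(...) inside your own code would NOT be instrumented.
    @Bean
    public WebClient tracedWebClient(WebClient.Builder builder) {
        return builder.baseUrl("http://downstream-service").build();
    }
}
```

Any component that injects this WebClient bean then gets the child span and trace headers on outgoing requests automatically.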

How to share SignalR singleton between multiple instance of Simple Injector

I'm using Simple Injector for IoC and Rebus, a service bus, to dispatch events saved in multiple queues (topics).
Rebus needs to be configured with a new SimpleInjectorContainerAdapter for each queue:
var bus = Configure.With(new SimpleInjectorContainerAdapter(container))
In this configuration phase it is not possible to pass the same instance of the Simple Injector container, nor the same instance of the container adapter (the container raises an error about multiple registrations of IBus).
I'm also using SignalR as one of the event handlers, to dispatch events to the clients.
Following this SignalR configuration tutorial, I set up several hubs and corresponding event notifiers (one for each bounded context in the application).
Using the classical singleton pattern, as shown in the tutorial example, it is easy to pass the same instance of the notifier to the various container instances:
container.RegisterSingleton(Finishing.Notification.Notifier.Instance);
Now I would like to delegate the instance creation to the Simple Injector container (only one), so I started to follow this tutorial:
container.RegisterSingleton<Finishing.Notification.Notifier>();
container.Register(() => GlobalHost.ConnectionManager.GetHubContext<Finishing.Notification.NotificationHub>().Clients);
The issue is that, this way, I will have n instances of the notifier, one for each container instance (deeply regrettable).
I know that I can solve this using a master container as Abstract Factory, but I'm looking for a more specific solution.
Thanks.
Now I would like to delegate the instance creation to the Simple Injector containers
(...)
The issue is that this way I will have n instances of the notifier, one for each container instance.
So you would like each container to create the singleton instance, but it is a problem that each container holds its own instance of the singleton... isn't that a contradiction?

Using javaconfig to create regions in gemfire

Is it possible to use Java config (i.e., annotations) in Spring instead of XML to create client regions in Spring Data GemFire?
I also need to plug a CacheLoader and a CacheWriter into the created regions. How is that possible?
I also want to configure the client pool. How can I do that?
There is a good example of this in the spring.io guides. However, the GemFire APIs are factories, wrapped by Spring FactoryBeans in Spring Data GemFire, so I actually find XML more straightforward for configuring the cache and regions.
Regarding... "how can I create a client region in a distributed environment?"
In the same way the Spring IO guides demonstrate Regions defined in a peer cache on a GemFire Server, something similar to...
@Bean
public ClientRegionFactoryBean<Long, Customer> clientRegion(ClientCache clientCache) {
    ClientRegionFactoryBean<Long, Customer> clientRegion = new ClientRegionFactoryBean<>();
    clientRegion.setCache(clientCache);
    clientRegion.setName("Customers");
    // Or just PROXY if the client is not required to store data,
    // or perhaps another shortcut type.
    clientRegion.setShortcut(ClientRegionShortcut.CACHING_PROXY);
    ...
    return clientRegion;
}
Disclaimer, I did not test this code snippet, so it may need minor tweaking along with additional configuration as necessary by the application.
You will, of course, also need to define a ClientCache along with a Pool in your Spring config, or use the corresponding element abstraction on the client side.

Mule spring bean schedule run

We have defined Spring beans in mule-config.xml. Certain public methods in this bean class need to be executed periodically. We attempted to use Spring Quartz and the Spring task scheduler (adding beans in mule-config.xml), but the method is never triggered, so it does not execute on a schedule. Even using the @Scheduled annotation does not work. Is there any workaround for this? Is there a known issue with the Spring scheduler in Mule? Kindly help.
Thanks
If you want to use the @Scheduled annotation, take a look at this recent answer on the subject for a workaround.
Otherwise, Spring Quartz should work fine too. What have you tried? Share your config and specify the Mule version you're using, and I'll revise my answer accordingly.
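As a starting point for the annotation route, here is a minimal sketch in plain Spring (bean and method names are illustrative; Mule-specific wiring is not shown). A common cause of @Scheduled being silently ignored is that scheduling support was never switched on, either via @EnableScheduling in Java config or <task:annotation-driven/> in XML config such as mule-config.xml:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling // without this (or <task:annotation-driven/>), @Scheduled is ignored
class SchedulingConfig {
}

@Component
class PeriodicTask {

    // Runs 60 seconds after each previous run completes.
    @Scheduled(fixedDelay = 60000)
    public void runPeriodically() {
        // call the public bean method that needs periodic execution
    }
}
```

Also note that @Scheduled only works on beans managed by the Spring context; a class instantiated outside the container is never scanned for the annotation.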