My application uses Netty 4.1.6.Final, Lettuce 4.3.0, and Async Http Client 2.1.0.
Lettuce 4.3.0 also relies on Netty 4.1.6.Final.
Async Http Client 2.1.0 relies on Netty 4.1.4.Final.
Now I see that Lettuce creates its own thread pool, Async Http Client creates its own thread pool, and my application creates yet another thread pool for Netty.
Is it possible to share the same NioEventLoopGroup across all these components to reduce the number of threads?
Async Http Client
AsyncHttpClient httpClient = new DefaultAsyncHttpClient(
        new DefaultAsyncHttpClientConfig.Builder()
                .setEventLoopGroup(myEventGroup)
                .build());
Lettuce
Add a RedisEventLoopGroupProvider class:
public class RedisEventLoopGroupProvider extends DefaultEventLoopGroupProvider {

    public RedisEventLoopGroupProvider() {
        super(3);
    }

    @SuppressWarnings("unchecked")
    @Override
    public <T extends EventLoopGroup> T allocate(Class<T> type) {
        if (NioEventLoopGroup.class.equals(type)) {
            // Hand back the shared event loop group instead of allocating a new one.
            return (T) myEventGroup;
        }
        return super.allocate(type);
    }
}
Initialize it this way:
RedisClient redisClient = RedisClient.create(DefaultClientResources.builder()
        .eventLoopGroupProvider(new RedisEventLoopGroupProvider())
        .ioThreadPoolSize(1)
        .computationThreadPoolSize(3)
        .build(), Environment.REDIS_CONNECTION);
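For reference, a minimal sketch of the shared event loop group that both snippets refer to as myEventGroup; the thread count here is only an assumption and should be sized for your workload:
// One shared group (io.netty.channel.nio.NioEventLoopGroup) used by Netty, Lettuce,
// and Async Http Client; 4 threads is an example value, not a recommendation.
NioEventLoopGroup myEventGroup = new NioEventLoopGroup(4);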
Our team's project is a product that accepts requests and then forwards them to the machines that handle the business logic. We use Spring Boot's WebFlux as the framework and gRPC as the client to send requests to those machines.
The flow is as follows:
User <-> WebFlux <-> gRPC <-> The machine that actually handles the business
We were drawn to the asynchronous, non-blocking nature of WebFlux and gRPC. gRPC uses streaming.
The controller and service we wrote are roughly as follows:
@PostMapping("xxxx")
public Mono<String> mac() {
    final CompletableFuture<String> future = new CompletableFuture<>();
    final StreamObserver<MacMessage> request = gRPCService.getStub().mac(new StreamObserver<>() {
        String mac;

        @Override
        public void onNext(MacMessage value) {
            mac = Base64.encode(value.getMac().toByteArray());
        }

        @Override
        public void onError(Throwable t) {
            future.completeExceptionally(t);
        }

        @Override
        public void onCompleted() {
            future.complete(mac);
        }
    });
    request.onNext(MacMessage.newBuilder().setxxxx....build());
    request.onCompleted();
    return Mono.fromFuture(future);
}
In addition, when I tried the Mono API recently, I found another way of writing this that does not require a CompletableFuture and has a similar effect:
@PostMapping("xxxx")
public Mono<String> mac() {
    return Mono.create(monoSink -> {
        final StreamObserver<MacMessage> request = gRPCService.getStub().mac(new StreamObserver<>() {
            String mac;

            @Override
            public void onNext(MacMessage value) {
                mac = Base64.encode(value.getMac().toByteArray());
            }

            @Override
            public void onError(Throwable t) {
                monoSink.error(t);
            }

            @Override
            public void onCompleted() {
                monoSink.success(mac);
            }
        });
        request.onNext(MacMessage.newBuilder().setxxxx....build());
        request.onCompleted();
    });
}
I would like to know whether the approaches above meet the requirements of asynchronous, non-blocking code, and if not, how they should be written.
The reason for this doubt is that (taking the above code as an example) we found that if we only print the response value in gRPC's StreamObserver#onNext instead of returning it, the performance is much higher than with either of the two approaches above, which themselves perform similarly. This makes me wonder: can there really be such a big performance difference between printing the response value and returning it?
At the moment I have the following problem: the external RabbitMQ remote server first expects an async reply login message from me, and only then is a broadcast queue broadcastQueue.MYUSER_123 created by this remote server. I want to consume from it via the @RabbitListener annotation, as shown in the code example. But I get the error you can see below.
While debugging I noticed that a container starts this listener before I have executed the login in my code, so the connection against this broadcast queue is rejected. I found the post How to Lazy Load RabbitMQ queue using @Lazy in spring boot?, but the problem on my side is that the auto-start setting did not work and the container for the listener is started anyway. What am I doing wrong? Is it possible to create lazy queues?
Another point: is there an easy way to make a direct reply configuration for the RabbitMQ template where I can specify the reply address? According to the Spring AMQP code it seems you can only use a direct reply connection if you don't specify a reply address yourself, so I need to create a custom container.
Thank you for your help.
Kind Regards
Sven
@SpringBootTest
@Slf4j
class DemoAmqpApplicationTests {

    @RabbitListener(queues = "broadcastQueue.MYUSER_123", autoStartup = "false")
    public void handleMessage(Object obj) {
        log.info("{}", obj);
    }

    @Test
    void contextLoads() throws InterruptedException {
        Message<User> login = login();
    }
}
Configuration
@Configuration
@Slf4j
@EnableRabbit
public class RabbitMqConfiguration {

    @Bean
    public RabbitAdmin amqpAdmin(RabbitTemplate rabbitTemplate) {
        return new RabbitAdmin(rabbitTemplate);
    }

    @Bean
    public RabbitTemplate rabbitTemplate(@NonNull CachingConnectionFactory connectionFactory,
                                         @NonNull MessageConverter messageConverter,
                                         @NonNull Queue inquiryResponseQueue,
                                         @NonNull RabbitTemplateConfigurer configurer) {
        RabbitTemplate rabbitTemplate = new RabbitTemplate();
        configurer.configure(rabbitTemplate, connectionFactory);
        String username = connectionFactory.getUsername();
        configurePostReceiveProcessor(rabbitTemplate);
        rabbitTemplate.setMessageConverter(messageConverter);
        configurePrepareSendingProcessor(rabbitTemplate, username, inquiryResponseQueue.getName());
        configureReply(rabbitTemplate, inquiryResponseQueue);
        return rabbitTemplate;
    }

    private void configureReply(RabbitTemplate rabbitTemplate,
                                @NonNull Queue inquiryResponseQueue) {
        rabbitTemplate.setReplyAddress(inquiryResponseQueue.getName());
        rabbitTemplate.setDefaultReceiveQueue(inquiryResponseQueue.getName());
    }

    @Bean
    public SimpleMessageListenerContainer replyListenerContainer(
            @NonNull CachingConnectionFactory connectionFactory,
            @NonNull List<Queue> inquiryResponseQueue,
            @NonNull RabbitTemplate rabbitTemplate,
            @NonNull RabbitAdmin rabbitAdmin) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueues(inquiryResponseQueue.get(0));
        // RabbitTemplate implements MessageListener, so it can consume its own replies.
        container.setMessageListener(rabbitTemplate);
        return container;
    }

    @Bean
    public Queue privateInquiryResponseQueue(
            @NonNull CachingConnectionFactory connectionFactory) {
        // Non-durable, exclusive, auto-delete reply queue named after the user.
        return new Queue(privateInquiryResponseQueueName(connectionFactory.getUsername()),
                false,
                true,
                true);
    }
}
Error Log:
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.attemptPassiveDeclarations(BlockingQueueConsumer.java:721) ~[spring-rabbit-2.3.10.jar:2.3.10]
... 5 common frames omitted
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'broadcastQueue.MYUSER_123' in vhost 'app', class-id=50, method-id=10)
autoStartup works fine; something else must be starting the container. Add a breakpoint to see what is starting it. Are you calling start() on the application context? That will start the container.
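Once nothing else starts the container prematurely, a minimal sketch of starting it only after the login has completed could look like the following; the listener id, the injected registry field, and the afterLogin() hook are illustrative assumptions, not part of the original code:
@RabbitListener(id = "broadcastListener",
        queues = "broadcastQueue.MYUSER_123",
        autoStartup = "false")
public void handleMessage(Object obj) {
    log.info("{}", obj);
}

@Autowired
private RabbitListenerEndpointRegistry registry;

// Call this after the async login reply confirms that the broker created the queue.
public void afterLogin() {
    registry.getListenerContainer("broadcastListener").start();
}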
Direct reply-to is a special RabbitMQ mode that uses a pseudo queue for replies; for a named reply queue, you need a listener container.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#direct-reply-to
Despite my efforts, I did not find any tutorials on how to get live sensor data via Azure IoT Hub to a web client using SignalR, where the web backend is running on ASP.NET Core. I somehow got it running, so here's what I did, along with a couple of questions.
First, using Azure IoT Hub is really great since I'm using the device management, and securing the devices and connections is easy. The data goes into Stream Analytics and Azure Functions for further processing, but live data is also available on an Event Hub.
On the client side, we are using a Vue.js setup, connecting to an API running ASP.NET Core 2.1. SignalR uses the Azure SignalR Service. This is described in the docs, and my setup code looks something like:
app.UseFileServer();
app.UseAzureSignalR(routes =>
{
    routes.MapHub<NotificationHub>("/notification");
});
Then, in my hub, I can use groups for handling subscriptions, as multiple clients can subscribe to the same device:
public class NotificationHub : Hub
{
    public async Task Subscribe(string deviceId)
    {
        await Groups.AddToGroupAsync(Context.ConnectionId, deviceId);
    }

    public async Task UnSubscribe(string deviceId)
    {
        await Groups.RemoveFromGroupAsync(Context.ConnectionId, deviceId);
    }
}
The hub context is available for injection, so pseudo code for a controller wanting to use SignalR would be:
public class MyController : Controller
{
    private readonly IHubContext<NotificationHub> _hubContext;

    public MyController(IHubContext<NotificationHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public async Task<IActionResult> MyMethod()
    {
        ...
        await _hubContext.Clients.Group("a group").SendAsync("TheMessage", "The data");
        ...
    }
}
So far, everything seemed straightforward.
Then I wanted to implement a listener for the Event Hub and run it as a background task. It turns out that we only need one line of code to register a hosted service in our Startup.cs:
services.AddHostedService<IoTEventHubService>();
The IoTEventHubService would inherit from BackgroundService, which is found in Microsoft.Extensions.Hosting:
public class IoTEventHubService : BackgroundService
{
    private readonly IHubContext<NotificationHub> _hubContext;

    public IoTEventHubService(IHubContext<NotificationHub> hubContext)
    {
        _hubContext = hubContext;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // ...defining the names and connection strings...

        var eventProcessorHost = new EventProcessorHost(
            eventProcessorHostName,
            _eventHubName,
            PartitionReceiver.DefaultConsumerGroupName,
            _eventHubConnectionString,
            _storageConnectionString,
            leaseName);

        var options = new EventProcessorOptions
        {
            InitialOffsetProvider = (partitionId) =>
                EventPosition.FromEnqueuedTime(DateTime.UtcNow)
        };

        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorFactoryAsync(
            new IoTEventProcessorFactory(_hubContext), options);
    }
}
In order to pass a parameter, _hubContext, I had to use the factory method to register my event processor. My factory, IoTEventProcessorFactory, implements IEventProcessorFactory and passes the hub context on to an event processor in the CreateEventProcessor method:
public IEventProcessor CreateEventProcessor(PartitionContext context)
{
    return new IoTEventProcessor(_hubContext);
}
The IoTEventProcessor class implements IEventProcessor and processes the messages as follows:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // De-serializing into an IoTMessage class
        var data = JsonConvert.DeserializeObject<IoTMessage>(
            Encoding.UTF8.GetString(eventData.Body.Array));

        // I can get the device id as defined in the IoT Hub
        data.DeviceId = eventData.SystemProperties.GetValueOrDefault(
            "iothub-connection-device-id").ToString();

        // And also the time at which the message was enqueued
        data.EventEnqueuedUtcTime = DateTime.Parse(
            eventData.SystemProperties.GetValueOrDefault("iothub-enqueuedtime").ToString());

        // Using the hub context, we can send live data to subscribers
        _hubContext.Clients.Group(data.DeviceId).SendAsync("LiveDataMessage", data);
    }

    return context.CheckpointAsync();
}
This works, and the client is receiving live data, but is it implemented correctly? Should I dispose of the event processor host somewhere? In Application Insights, I see a large number (>1000/hour, even when no sensor data is coming in) of dependency "failures" for the IoT Hub connection, but the status is False. Does this just mean there wasn't any sensor data?
In RESTEasy 3.0.16.Final the PreProcessInterceptor interface is deprecated, so what is the proper replacement for this interface? JBoss EAP 7 uses RESTEasy 3.0.16.Final.
Old code:
@Provider
@ServerInterceptor
@SecurityPrecedence
public class AbcInterceptor implements PreProcessInterceptor
{
    public ServerResponse preProcess(final HttpRequest httpRequest, ResourceMethod resourceMethod)
            throws Failure, WebApplicationException {
        // auth logic
    }
}
New code:
@Provider
@ServerInterceptor
@SecurityPrecedence
public class AuthenticationInterceptor
{
    public ServerResponse preProcess(HttpRequest httpRequest, ResourceMethodInvoker method)
            throws Failure, WebApplicationException {
        // auth logic
    }
}
The org.jboss.resteasy.spi.interception.PreProcessInterceptor interface is replaced by the javax.ws.rs.container.ContainerRequestFilter interface in RESTEasy 3.x.
So you can use ContainerRequestFilter for the same purpose.
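As a rough illustration, an equivalent filter could look like the sketch below; the class name and the header check are assumptions, not taken from the original code:
import java.io.IOException;

import javax.annotation.Priority;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@Priority(Priorities.AUTHENTICATION)
public class AuthenticationFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        // auth logic, e.g. check a header and abort the request if it is missing
        String authHeader = requestContext.getHeaderString("Authorization");
        if (authHeader == null) {
            requestContext.abortWith(
                    Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }
}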
I want to call another web API from my backend on a specific request from a user. For example, I want to call the Google FCM send message API to send a message to a specific user on an event.
Does Retrofit have any method to achieve this? If not, how can I do that?
This website has some nice examples of using Spring's RestTemplate.
Here is a code example of how it can work to get a simple object:
private static void getEmployees()
{
    final String uri = "http://localhost:8080/springrestexample/employees.xml";
    RestTemplate restTemplate = new RestTemplate();
    String result = restTemplate.getForObject(uri, String.class);
    System.out.println(result);
}
Modern Spring 5+ answer using WebClient instead of RestTemplate.
Configure WebClient for a specific web-service or resource as a bean (additional properties can be configured).
@Bean
public WebClient localApiClient() {
    return WebClient.create("http://localhost:8080/api/v3");
}
Inject and use the bean from your service(s).
@Service
public class UserService {

    private static final Duration REQUEST_TIMEOUT = Duration.ofSeconds(3);

    private final WebClient localApiClient;

    @Autowired
    public UserService(WebClient localApiClient) {
        this.localApiClient = localApiClient;
    }

    public User getUser(long id) {
        return localApiClient
                .get()
                .uri("/users/" + id)
                .retrieve()
                .bodyToMono(User.class)
                .block(REQUEST_TIMEOUT);
    }
}
If, instead of a String, you are trying to get custom POJO object details as output by calling another API/URI, try this solution. I hope it will also make clear how to use RestTemplate.
In Spring Boot, we first need to create a bean for RestTemplate under a @Configuration annotated class. You can even write a separate class and annotate it with @Configuration, like below.
@Configuration
public class RestTemplateConfig {

    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder.build();
    }
}
Then, you have to define the RestTemplate with @Autowired or @Inject in your service/controller, wherever you are trying to use RestTemplate. Use the code below:
@Autowired
private RestTemplate restTemplate;
Now we will see how to call another API from my application using the RestTemplate created above. For this we can use several methods such as exchange(), getForEntity(), getForObject(), etc. Here I am placing example code using exchange(). I tried the other two as well, but ran into a problem converting the returned LinkedHashMap into the expected POJO object; the exchange() call below solved my problem.
ResponseEntity<List<POJO>> responseEntity = restTemplate.exchange(
        URL,
        HttpMethod.GET,
        null,
        new ParameterizedTypeReference<List<POJO>>() {
        });
List<POJO> pojoObjList = responseEntity.getBody();
Happy Coding :)
Create a bean for RestTemplate so that the RestTemplate object can be autowired.
@SpringBootApplication
public class ChatAppApplication {

    @Bean
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(ChatAppApplication.class, args);
    }
}
Consume the GET/POST API by using RestTemplate's exchange() method. Below is the POST API that is defined in the controller.
@RequestMapping(value = "/postdata", method = RequestMethod.POST)
public String postData() {
    return "{\n" +
            "  \"value\":\"4\",\n" +
            "  \"name\":\"David\"\n" +
            "}";
}

@RequestMapping(value = "/post")
public String getPostResponse() {
    HttpHeaders headers = new HttpHeaders();
    headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
    HttpEntity<String> entity = new HttpEntity<String>(headers);
    return restTemplate.exchange("http://localhost:8080/postdata", HttpMethod.POST, entity, String.class).getBody();
}
Refer to this tutorial [1].
[1] https://www.tutorialspoint.com/spring_boot/spring_boot_rest_template.htm
As has been mentioned in the various answers here, WebClient is now the recommended route.
You can start by configuring a WebClient builder:
@Bean
public WebClient.Builder getWebClientBuilder() {
    return WebClient.builder();
}
Then inject the bean and you can consume an API as follows:
@Autowired
private WebClient.Builder webClientBuilder;

Product product = webClientBuilder.build()
        .get()
        .uri("http://localhost:8080/api/products")
        .retrieve()
        .bodyToMono(Product.class)
        .block();
Does Retrofit have any method to achieve this? If not, how can I do that?
Yes.
Retrofit is a type-safe REST client for Android and Java. Retrofit turns your HTTP API into a Java interface.
For more information, refer to the following link:
https://howtodoinjava.com/retrofit2/retrofit2-beginner-tutorial
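For illustration, a minimal Retrofit 2 sketch of calling a send-message endpoint; the FcmApi interface, the FcmMessage/FcmResponse DTOs, and the Authorization header value are assumptions used only as an example, not something the answer above specifies:
import retrofit2.Call;
import retrofit2.Response;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.Body;
import retrofit2.http.Header;
import retrofit2.http.POST;

// Hypothetical API definition; FcmMessage and FcmResponse are placeholder DTOs.
public interface FcmApi {
    @POST("fcm/send")
    Call<FcmResponse> sendMessage(@Header("Authorization") String serverKey,
                                  @Body FcmMessage message);
}

// Usage: build the client once, then call it when the event fires.
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://fcm.googleapis.com/")
        .addConverterFactory(GsonConverterFactory.create())
        .build();

FcmApi fcmApi = retrofit.create(FcmApi.class);
FcmMessage message = new FcmMessage(); // populate with target token and payload
// execute() is synchronous and throws IOException; use enqueue() for an async call.
Response<FcmResponse> response = fcmApi.sendMessage("key=YOUR_SERVER_KEY", message).execute();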
In this case I needed my API to download files hosted on another server.
In my case, I didn't need to use an HTTP client to download the file at the external URL; I combined several answers and methods that had worked in previous code for files that were on my local server.
My code is:
@GetMapping(value = "/download/file/pdf/", produces = MediaType.APPLICATION_PDF_VALUE)
public ResponseEntity<Resource> downloadFilePdf() throws IOException {
    String url = "http://www.orimi.com/pdf-test.pdf";
    RestTemplate restTemplate = new RestTemplate();
    byte[] byteContent = restTemplate.getForObject(url, String.class).getBytes(StandardCharsets.ISO_8859_1);
    InputStream resourceInputStream = new ByteArrayInputStream(byteContent);
    return ResponseEntity.ok()
            .header("Content-disposition", "attachment; filename=" + "pdf-with-my-API_pdf-test.pdf")
            .contentType(MediaType.parseMediaType("application/pdf;"))
            .contentLength(byteContent.length)
            .body(new InputStreamResource(resourceInputStream));
}
and it works with both HTTP and HTTPS URLs!
Since the question explicitly tags spring-boot, it is worth noting that recent versions already ship a pre-configured WebClient.Builder instance, so you can inject it directly into your service constructor without needing to define a custom bean.
@Service
public class ClientService {

    private final WebClient webClient;

    public ClientService(WebClient.Builder webClientBuilder) {
        this.webClient = webClientBuilder
                .baseUrl("https://your.api.com")
                .build();
    }

    // Add all the API call methods you need, leveraging the webClient instance
}
https://docs.spring.io/spring-boot/docs/2.0.x/reference/html/boot-features-webclient.html
The simplest way I have found is to:
Create an annotated interface (or have it generated from something like OpenAPI)
Give that interface to Spring RestTemplate Client
The Spring RestTemplate Client will parse the annotations on the interface and give you a type-safe client, a proxy instance. Any invocation of its methods will be seamlessly translated into REST calls.
final MyApiInterface myClient = SpringRestTemplateClientBuilder
        .create(MyApiInterface.class)
        .setUrl(this.getMockUrl())
        .setRestTemplate(restTemplate)          // Optional
        .setHeader("header-name", "the value")  // Optional
        .setHeaders(HttpHeaders)                // Optional
        .build();
And a REST call is made by invoking its methods, like:
final ResponseEntity<MyDTO> response = myClient.getMyDto();