Changing the logging level at runtime in a Ktor application - Kotlin

Previously I worked with Spring Boot, where I could change the logging level for a specific class without restarting the app thanks to the Spring Boot Actuator (just an HTTP request to the loggers endpoint).
Now I'm working with Ktor and I would also like to change the logging level at runtime.
How can I do that?
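As far as I know, Ktor doesn't ship an actuator-style loggers endpoint, but if Logback is the SLF4J backend you can change a logger's level programmatically and expose that through a small route of your own. Below is a minimal sketch assuming Ktor 2.x and Logback; the /loggers/{name} route and the plain-text level payload are made-up conventions for illustration.

import ch.qos.logback.classic.Level
import ch.qos.logback.classic.Logger
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.request.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import org.slf4j.LoggerFactory

fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            // e.g. PUT /loggers/com.example.MyService with body "DEBUG"
            put("/loggers/{name}") {
                val name = call.parameters["name"]!!
                val level = Level.toLevel(call.receiveText().trim(), null)
                if (level == null) {
                    call.respond(HttpStatusCode.BadRequest, "Unknown level")
                } else {
                    // Logback loggers are mutable at runtime; casting the SLF4J
                    // logger to the Logback implementation exposes setLevel().
                    (LoggerFactory.getLogger(name) as Logger).level = level
                    call.respond(HttpStatusCode.NoContent)
                }
            }
        }
    }.start(wait = true)
}

A request such as curl -X PUT --data "DEBUG" http://localhost:8080/loggers/com.example.MyService would then take effect without a restart, much like the actuator's loggers endpoint.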

Related

Spring Cloud config with JDBC Backend Failure in Spring boot

I have a client-server based microservice architecture, i.e. a config-server (Spring Cloud Config) service that holds all the configurations and properties, which are fetched from a JDBC backend (PostgreSQL in my case) along with profiles.
I want to understand: in case my config-server service fails, is there a chance that the dependent services will also fail, since they will no longer be able to fetch the required properties from the config-server service? If yes, how can I mitigate this issue?
I can think of implementing a cache, but I would have to work out the steps required to do so. Can someone help me clarify the above scenario?
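For what it's worth, here is a rough sketch of the caching idea mentioned above: fetch the properties from the config server over HTTP at startup, keep a last-known-good copy on disk, and fall back to it when the server is unreachable. The URL and cache path are placeholders, and the snippet assumes the client only reads its configuration at startup.

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.file.Files
import java.nio.file.Path

fun loadConfigJson(
    configServerUrl: String = "http://config-server:8888/my-service/prod", // hypothetical
    cacheFile: Path = Path.of("config-cache.json")                         // hypothetical
): String = try {
    val request = HttpRequest.newBuilder(URI.create(configServerUrl)).GET().build()
    val body = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
    Files.writeString(cacheFile, body) // refresh the last-known-good copy
    body
} catch (e: Exception) {
    // Config server unreachable: fall back to the cached copy if one exists.
    if (Files.exists(cacheFile)) Files.readString(cacheFile)
    else throw IllegalStateException("Config server is down and no cached config is available", e)
}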

Ktor application with spring config server

I wanted to try some new stuff and decided to build my next micro-app with Ktor + Koin + Exposed. Everything looks really nice, but I found one problem that is threatening the whole idea.
The app needs DB access, and the connection details cannot be stored in the repository but should be kept encrypted on the config server. Every other micro-app uses Spring Boot and fetches its configs with the spring-cloud-config-client library, but I don't know if it's even possible to use that somehow from a Ktor app. Has anyone had the same problem and managed to solve it?
Cfg4j seems to be an alternative to Spring Cloud Config Server.
https://github.com/cfg4j/cfg4j
Other than that, there are a few articles and some questions on SO about integrating Spring Cloud Config Server with non-Spring-Boot projects.
Spring Cloud Config Client Without Spring Boot
Also, I am working on an external configuration parser for Kotlin and am considering implementing similar functionality.
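Since the Config Server also exposes configuration over a plain HTTP API (e.g. GET /{application}/{profile} returns JSON containing a propertySources array), another option for a non-Spring app such as a Ktor service is to call that endpoint directly and flatten the result yourself. A rough sketch in Kotlin, assuming Jackson (jackson-databind) is on the classpath and using placeholder names:

import com.fasterxml.jackson.databind.ObjectMapper
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun fetchConfig(baseUrl: String, application: String, profile: String): Map<String, String> {
    val request = HttpRequest.newBuilder(URI.create("$baseUrl/$application/$profile")).GET().build()
    val json = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()

    val properties = LinkedHashMap<String, String>()
    val propertySources = ObjectMapper().readTree(json).path("propertySources")
    for (source in propertySources) {
        source.path("source").fields().forEach { (key, value) ->
            // Property sources are ordered by precedence, so the first occurrence wins.
            properties.putIfAbsent(key, value.asText())
        }
    }
    return properties
}

// Usage with placeholder values:
// val config = fetchConfig("http://config-server:8888", "my-ktor-app", "prod")
// val dbUrl = config["db.url"]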

Unable to configure retry for Spring AMQP RabbitMQ (Spring Boot 2.0.2)

I'm working with a Spring Boot 2.0.2 application and I want to configure RabbitMQ to retry failed messages 3 times, with an interval between each retry.
Previously, on Spring Boot 1.5.1, I had successfully set this up in application.properties:
spring.rabbitmq.listener.retry.enabled=true
spring.rabbitmq.listener.retry.initial-interval=45000
spring.rabbitmq.listener.retry.max-attempts=3
spring.rabbitmq.listener.retry.multiplier=1.3
spring.rabbitmq.listener.retry.max-interval=80000
I've tried to do the same on Spring Boot 2.0.2, but it doesn't work. I've read that these properties changed in Spring Boot 2.0, but even after updating them, it still doesn't work:
spring.rabbitmq.listener.direct.retry.enabled=true
spring.rabbitmq.listener.direct.retry.initial-interval=45000
spring.rabbitmq.listener.direct.retry.max-attempts=3
spring.rabbitmq.listener.direct.retry.multiplier=1.3
spring.rabbitmq.listener.direct.retry.max-interval=80000
Am I missing something?
The default container type is simple.
Use spring.rabbitmq.listener.simple.retry.enabled=true unless you decide to use the direct container type instead.
See Choosing a Container.
The DMLC (DirectMessageListenerContainer) was added in Spring AMQP 2.0; the Boot properties were deprecated in a later 1.5.x release, switching to the ...simple... properties in preparation for Boot 2.0.
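With the default simple container type, the properties from the question would therefore become:
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.initial-interval=45000
spring.rabbitmq.listener.simple.retry.max-attempts=3
spring.rabbitmq.listener.simple.retry.multiplier=1.3
spring.rabbitmq.listener.simple.retry.max-interval=80000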

Can Lagom 1.4 forward websocket error messages?

I'm using streamed service calls in Lagom. Once I upgraded to 1.4, error messages from the server are not being propagated to the client over websockets. This works in tests using the Lagom testkit, but not when running a service using 'runAll' from sbt or in a live deployment.
Using 'runAll', all client calls that fail come back with "Peer closed connection with code 1011 'internal error'"
The issue here is fairly easy to diagnose. Lines 66-68 of akka-http 10.0.11 FrameOutHandler create the WebSocket closeFrame, throwing away the passed-in exception and returning "internal error", even though they have the exception message.
My problem is that although I can see the error, I can't see any easy way to fix it without patching akka-http. Is this something that should be supported in Lagom? It used to work in 1.3 when we used the netty client.
Are you testing with another Lagom client connecting directly to the port that the service listens to, or using a web browser or some other client connecting through port 9000?
If it's the latter, you might also need to change the service gateway implementation back to Netty as described in the documentation on Default gateway implementation:
The Lagom development environment provides an implementation of a Service Gateway based on Akka HTTP and the (now legacy) implementation based on Netty.
You may opt in to use the old Netty implementation.
In the Maven root project pom:
<plugin>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-maven-plugin</artifactId>
    <version>${lagom.version}</version>
    <configuration>
        <serviceGatewayImpl>netty</serviceGatewayImpl>
    </configuration>
</plugin>
In sbt:
// Implementation of the service gateway: "akka-http" (default) or "netty"
lagomServiceGatewayImpl in ThisBuild := "netty"
In any case, please create an issue on GitHub and we can investigate a solution in the framework.

Backup Spring Session with GemFire

Spring documentation says that Spring Session can transparently leverage Redis to back a web application’s HttpSession when using REST endpoints.
Does anyone know if Spring supports GemFire in its place, instead of Redis, to back a web application's HttpSession?
Ref: http://docs.spring.io/spring-session/docs/current/reference/html5/guides/rest.html
Not yet ;).
However, I did spend a little time researching the effort involved in implementing a GemFire adapter for Spring Session to back (store/replicate) an HttpSession. I still need to dig a little deeper and I will be tracking this effort in JIRA here (SGF-373).
Also know that GemFire already has support for HTTP server session replication using GemFire's HTTP Session Management Module.
Will post back when I have more details.
Will these 3 steps (at a high level) be sufficient to allow Spring Session to write to a GemFire repository instead of Redis?
Step 1: Implement a Configuration class that provides the same functionality as the annotation:
- Allow Spring to load the configuration class
- Register the Spring Session filter in the container
- Establish the repository connection factory
- Configure the repository connection
- We will continue to re-use Spring Session's springSessionRepositoryFilter
Step 2: Develop an equivalent GemfireOperationsSessionRepository implementing the SessionRepository interface (a rough sketch follows below).
Step 3: SessionMessageListener.java
3.1. Decide on a technique to identify and save delta changes in the Session to the underlying repository
3.2. Work out how session expiration notifications from the underlying repository can be captured to invoke SessionDestroyEvent and cleanup operations
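For Step 2, a bare-bones repository might look roughly like the sketch below. This is purely illustrative: it assumes the Spring Session 2.x SessionRepository interface, MapSession as the session type, and an already-configured GemFire/Geode Region (the class and region names are hypothetical); the delta handling and expiration callbacks from Step 3 are not covered.

import org.apache.geode.cache.Region
import org.springframework.session.MapSession
import org.springframework.session.SessionRepository

class GemFireSessionRepository(
    // Assumed to be injected, e.g. a replicated/partitioned region named "sessions"
    private val sessions: Region<String, MapSession>
) : SessionRepository<MapSession> {

    override fun createSession(): MapSession = MapSession()

    override fun save(session: MapSession) {
        // Naive full write; a real implementation would handle delta changes (Step 3.1)
        sessions.put(session.id, session)
    }

    override fun findById(id: String): MapSession? = sessions[id]

    override fun deleteById(id: String) {
        sessions.remove(id)
    }
}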