When a Spring Cloud Config item is refreshed, is there a way for the client to know, so it can re-calculate some things?

With Spring Cloud Config, when I update a configuration and call refresh on any clients, is there a way to be notified that this happened? If I am constructing objects based on some @ConfigurationProperties, I will want to refresh these objects with the new state of those hierarchical properties. I would rather not perform lookups each time I need to reference the config props; in my case it is best to refresh certain objects at the time of config changes. So, is there a way to hook into that refresh lifecycle?
Edit: Ideally, a @Configuration class could be notified of the refresh event and re-bind/re-instantiate some relevant Spring @Beans.

OK, so when I posted this question a month ago, I was obviously pretty new to this, and pretty naive. I have since learned that @ConfigurationProperties beans get updated when Spring Cloud Config clients are refreshed. Say that you have a bean (Lombok to reduce boilerplate, of course):
@Data
public class ClientSettings {

    private List<String> list1 = new ArrayList<>();
    private List<String> list2 = new ArrayList<>();
    private List<String> list3 = new ArrayList<>();
}
And you have a @Configuration class like this:
@Configuration
public class PropsConfig {

    @Bean
    @RefreshScope
    @ConfigurationProperties(prefix = "settings")
    public ClientSettings clientSettings() {
        return new ClientSettings();
    }
}
If you update the config file that the Spring Cloud Config server is serving, and then call the refresh actuator endpoint on the client, the underlying bean will be updated, and any services that have this bean wired in will see the updated values.
So, bravo to Spring for implementing this magic voodoo wizardry so well! All kidding aside, the autowired bean is proxied, so it makes sense that if the bean registry is updated with new values, the services holding the injected singleton will see the updated values. That brings up the question of what would happen if the same @Configuration @Bean method also added @Scope(SCOPE_PROTOTYPE). Since @RefreshScope is a specialized scope for Spring Cloud, and since the annotation is not repeatable, I am not sure what would happen.
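As for the original question of hooking into the refresh lifecycle: a minimal sketch, assuming Spring Cloud's RefreshScopeRefreshedEvent (published after @RefreshScope beans are rebound; EnvironmentChangeEvent is the coarser-grained alternative):

import org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class RefreshHook {

    // Runs after a refresh has rebound the @RefreshScope beans, so an injected
    // ClientSettings already reflects the new property values at this point.
    @EventListener
    public void onRefresh(RefreshScopeRefreshedEvent event) {
        // re-calculate or re-instantiate any derived objects here
    }
}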

Related

Project Reactor Schedulers elastic using old threadlocal value

I am using Spring WebFlux to call one service from another via Schedulers.elastic():
Mono<Integer> anaNotificationCountObservable = wrapWithRetryForFlux(wrapWithTimeoutForFlux(
        notificationServiceMediatorFlux.getANANotificationCountForUser(userId)
                .subscribeOn(reactor.core.scheduler.Schedulers.elastic())
)).onErrorReturn(0);
In the main thread I am setting an InheritableThreadLocal variable, and in the child thread I am trying to access it; that works fine.
This is my class for storing the thread-local value:
@Component
public class RequestCorrelation {

    public static final String CORRELATION_ID = "correlation-id";

    private InheritableThreadLocal<String> id = new InheritableThreadLocal<>();

    public String getId() {
        return id.get();
    }

    public void setId(final String correlationId) {
        id.set(correlationId);
    }

    public void removeCorrelationId() {
        id.remove();
    }
}
Now the issue: the first time, it works fine, meaning the value I set in the thread local is passed to the other services.
But on the second request, it is still using the old id (generated during the previous request).
I tried using Schedulers.newSingle() instead of elastic(), and then it works fine.
So I think that since elastic() re-uses threads, the stale value survives in the pooled thread instead of being cleared.
How should I resolve this issue?
I am setting the thread local in my filter and clearing it in the same filter:
requestCorrelation.setId(UUID.randomUUID().toString());
chain.doFilter(req, res);
requestCorrelation.removeCorrelationId();
You should never tie resources or information to a particular thread when leveraging a Reactor pipeline. Reactor is itself scheduling-agnostic; developers using your library can choose to schedule work on another scheduler, and if you decide to force a scheduling model you might lose performance benefits.
Instead, you can store data inside the Reactor Context. This is a map-like structure that is tied to the subscriber and independent of the scheduling arrangement.
This is how projects like Spring Security and Micrometer store information that would usually belong in a ThreadLocal.
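A minimal sketch of that approach, assuming the Reactor 3 Context API of the Schedulers.elastic() era (subscriberContext was later renamed to contextWrite/deferContextual); callDownstream is a hypothetical stand-in for the remote service call:

import java.util.UUID;

import reactor.core.publisher.Mono;
import reactor.util.context.Context;

// Read the correlation id from the Context where it is needed...
Mono<Integer> result = Mono.subscriberContext()
        .map(ctx -> ctx.<String>get(RequestCorrelation.CORRELATION_ID))
        .flatMap(correlationId -> callDownstream(correlationId))
        // ...and write it at subscription time; the value travels with the
        // subscriber, regardless of which scheduler's thread runs each step.
        .subscriberContext(Context.of(RequestCorrelation.CORRELATION_ID, UUID.randomUUID().toString()));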

Can I create a request-scoped object and access it from anywhere, and avoid passing it around as a parameter in JAX-RS?

Say I have a web service / a REST resource that is called with some HTTP header parameters. The resource method builds a complex data object (currently a POJO) and eventually returns it to the client (via Gson as JSON, but that doesn't matter).
So I have this call hierarchy:
@Path("foo") ProjectResource @GET getProject()
    -> new Project()
    -> new List<Participant> which contains lots of new Participant()s
    -> new Affiliation()
If I want the Affiliation object to be e.g. populated in English or German depending on a header parameter, I have to pass that as a parameter down the chain. I want to avoid having to do that. Maybe this is just fundamentally impossible, but it feels so wrong. All these objects only live inside the request, so wouldn't it be convenient to be able to access information tied to the request from anywhere?
I was hoping I could e.g. define a CDI @RequestScoped object that initializes itself (or gets populated by some WebFilter) and that I can then inject wherever I might need it.
But obviously that doesn't work from inside the POJOs, and I also had trouble getting hold of the headers from inside the request-scoped object.
I've read many SO questions/answers about EJBs and JAX-RS Context and CDI but I can't wrap my head around it.
Am I expecting too much? Is passing down the parameter really the preferred option?
If I understand what you need, you can try the following (I just wrote this solution off the top of my head, but it should work):
Defining a class to store the data you need
Define a class annotated with @RequestScoped which will store the data you need:
@RequestScoped
public class RequestMetadata {

    private Locale language;

    // Default constructor, getters and setters omitted
}
Ensure you are using the @RequestScoped annotation from the javax.enterprise.context package.
Creating a request filter
Create a ContainerRequestFilter to populate the RequestMetadata:
@Provider
@PreMatching
public class RequestMetadataFilter implements ContainerRequestFilter {

    @Inject
    private RequestMetadata requestMetadata;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        requestMetadata.setLanguage(requestContext.getLanguage());
    }
}
Performing the injection
And then you can finally perform the injection of the RequestMetadata using @Inject:
@Stateless
public class Foo {

    @Inject
    private RequestMetadata requestMetadata;
    ...
}
Please be aware that anywhere is too broad: the injection will work in beans managed by the container, such as servlets, JAX-RS classes, EJBs and CDI beans, for example.
You won't be able to perform injection into beans you instantiate yourself, nor into JPA entities.
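If a plain object you created yourself really does need the value, one workaround (my assumption, not part of the answer above; it requires CDI 1.1+) is a programmatic lookup:

import java.util.Locale;

import javax.enterprise.inject.spi.CDI;

// Resolves the current request's RequestMetadata from code running on a request thread.
RequestMetadata requestMetadata = CDI.current().select(RequestMetadata.class).get();
Locale language = requestMetadata.getLanguage();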

SpringData GemFire inserting fake data on Dev env

I am developing an app using GemFire and it would be great to be able to provide some fake data while in the Dev environment.
So instead of doing it in the code like I do today, I was thinking about using Spring's application-context.xml to pre-load some dummy data in the region I am currently working on. Something close to what DBUnit does, but for the Dev rather than Test scope.
Later I could just switch envs on Spring and that data would not be loaded.
Is it possible to add data using SpringData Gemfire to a local data grid?
Thanks!
There is no direct support in Spring Data GemFire to load data into a GemFire cluster. However, there are several options afforded to an SDG/GemFire developer to load data.
The most common approach is to define a GemFire CacheLoader attached to the Region. However, this approach is "lazy" and only loads data from a (potentially) external data source on a cache miss. Of course, you could program the logic in the CacheLoader to "prefetch" a number of entries in a somewhat "predictive" manner based on data access patterns. See GemFire's User Guide for more details.
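For illustration, a minimal CacheLoader sketch (package names per the GemFire 7/8 API; loadFromExternalSource is a hypothetical stand-in for your data source):

import com.gemstone.gemfire.cache.CacheLoader;
import com.gemstone.gemfire.cache.CacheLoaderException;
import com.gemstone.gemfire.cache.LoaderHelper;

public class DevDataCacheLoader implements CacheLoader<String, Object> {

    // Invoked only on a cache miss for the requested key.
    @Override
    public Object load(LoaderHelper<String, Object> helper) throws CacheLoaderException {
        return loadFromExternalSource(helper.getKey());
    }

    @Override
    public void close() {
        // release any resources held by the loader
    }

    private Object loadFromExternalSource(String key) {
        return "dummy value for " + key; // stand-in for a real lookup
    }
}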
Still, we can do better than this since it is more likely that you want to "preload" a particular data set for development purposes.
Another, more effective technique is to use a Spring BeanPostProcessor registered in your Spring ApplicationContext that post-processes your "Region" bean after initialization. For instance...
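A sketch of such a bean definition (the region bean name, keys and values here are illustrative assumptions):

<bean class="example.RegionPutAllBeanPostProcessor">
    <property name="targetRegionBeanName" value="exampleRegion"/>
    <property name="regionData">
        <map>
            <entry key="keyOne" value="valueOne"/>
            <entry key="keyTwo" value="valueTwo"/>
        </map>
    </property>
</bean>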
Where the RegionPutAllBeanPostProcessor is implemented as...
package example;

import java.util.Collections;
import java.util.Map;

import com.gemstone.gemfire.cache.Region;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;

public class RegionPutAllBeanPostProcessor implements BeanPostProcessor {

    private Map regionData;

    private String targetRegionBeanName;

    protected Map getRegionData() {
        return (regionData != null ? regionData : Collections.emptyMap());
    }

    public void setRegionData(final Map regionData) {
        this.regionData = regionData;
    }

    protected String getTargetRegionBeanName() {
        Assert.state(StringUtils.hasText(targetRegionBeanName), "The target Region bean name was not properly specified!");
        return targetRegionBeanName;
    }

    public void setTargetRegionBeanName(final String targetRegionBeanName) {
        Assert.hasText(targetRegionBeanName, "The target Region bean name must be specified!");
        this.targetRegionBeanName = targetRegionBeanName;
    }

    @Override
    public Object postProcessBeforeInitialization(final Object bean, final String beanName) throws BeansException {
        return bean;
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (beanName.equals(getTargetRegionBeanName()) && bean instanceof Region) {
            ((Region) bean).putAll(getRegionData());
        }
        return bean;
    }
}
It is not too difficult to imagine that you could inject a DataSource of some type to pre-populate the Region. The RegionPutAllBeanPostProcessor was designed to accept a specific Region (based on the Region bean's ID) to populate. So you could define multiple instances, each taking a different Region and (perhaps) a different DataSource to populate the Region(s) of choice. This BeanPostProcessor just takes a Map as the data source, but, of course, it could be any Spring-managed bean.
Finally, it is a simple matter to ensure that this, or multiple instances of the RegionPutAllBeanPostProcessor, is only used in your DEV environment by taking advantage of Spring bean profiles...
<beans>
    ...
    <beans profile="DEV">
        <bean class="example.RegionPutAllBeanPostProcessor">
            ...
        </bean>
        ...
    </beans>
</beans>
Usually, loading pre-defined data sets is very application-specific in terms of the "source" of the pre-defined data. As my example illustrates, the source could be as simple as another Map. However, it could be a JDBC DataSource, or perhaps a Properties file, or, well, anything for that matter. It is usually up to the developer's preference.
Though, one thing that might be useful to add to Spring Data GemFire would be to load data from a GemFire Cache Region snapshot, i.e. data that may have been dumped from a QA or UAT environment, or perhaps even scrubbed from PROD for testing purposes. See the GemFire Snapshot Service for more details.
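For reference, a minimal sketch of importing such a snapshot via GemFire's own snapshot API (GemFire 7+; the file path is hypothetical):

import java.io.File;

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.snapshot.RegionSnapshotService;
import com.gemstone.gemfire.cache.snapshot.SnapshotOptions.SnapshotFormat;

public static void loadSnapshot(Region<String, Object> region) throws Exception {
    // Imports a .gfd snapshot file previously exported from another environment.
    RegionSnapshotService<String, Object> snapshotService = region.getSnapshotService();
    snapshotService.load(new File("/data/exampleRegion.gfd"), SnapshotFormat.GEMFIRE);
}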
Also see the JIRA ticket (SGF-408) I just filed to add this support.
Hopefully this gives you enough information and/or ideas to get going. Later, I will add first-class support into SDG's XML namespace for preloading data sets.
Regards,
John

connection string injection

I am trying to learn Ninject and started with what I thought was a very simple thing. I can't make it work. Obviously, I am missing something basic.
So, I have this little console application that listens for WCF calls on a certain port and saves data that comes in via WCF to a database. There are 3 projects in the solution: 1. data access library, 2. WCF stuff, and 3. a console acting as a host. Ninject is not used yet. So the dependencies between projects are like this: 3 -> 2 -> 1
I want to start with injecting the connection string, which the console host takes from its config, into the data access library. Googling for Ninject injection of a connection string brought up some examples, but they are not complete.
One of the examples suggested to bind in the host's Main() like this:
static void Main(string[] args)
{
    new StandardKernel().Bind<ConnectionStringProvider>().ToConstant(
        new ConnectionStringProvider { ConnectionString = Config.ConnectionString });
}
where ConnectionStringProvider is a simple class that contains only one property, ConnectionString. What I can't figure out is how to get hold of that ConnectionStringProvider in the data access library. I tried
var csprovider = new StandardKernel().Get<ConnectionStringProvider>();
but it doesn't work, meaning that it returns a new instance of the provider instead of the one that was created during binding. I also tried adding .InSingletonScope() to the binding, with the same result.
You need to keep a reference to the kernel you set up. It doesn't work if you instantiate a new kernel every time.
public static IKernel Ninject { get; private set; }

static void Main(string[] args)
{
    Ninject = new StandardKernel();
    Ninject.Bind<ConnectionStringProvider>().ToConstant(
        new ConnectionStringProvider { ConnectionString = Config.ConnectionString });
}
On the consumer side, you can call the Ninject static property from your Main.
Obvious note aside: this is sample code; in production code you may want a better design than that global static variable.
The kernel is what keeps track of all the bindings for you. However, you are creating a new instance each time. That won't work. Instead, create the kernel and then store it off (here I'm storing it off in a local variable, but you'd probably want to store it in a field in some class):
var connectionStringProvider = new ConnectionStringProvider { ConnectionString = Config.ConnectionString };
var kernel = new StandardKernel();
kernel.Bind<ConnectionStringProvider>().ToConstant(connectionStringProvider);
Now obtain instances by accessing the existing kernel.
var csprovider = kernel.Get<ConnectionStringProvider>();
That being said, using it in this fashion is the wrong way to go about it: this pattern is known as the service locator pattern, which is the antithesis of dependency injection. Generally speaking, you have a top-level class (for example, your application class with the Main method) that is either obtained via Kernel.Get or injected via Kernel.Inject, and all other dependencies are injected normally through constructors or [Inject]'ed properties.
Also, there are usually plugins available for most situations so that you don't have to instantiate the kernel yourself. However, I'm not aware of one for console apps.

IQueryable Repository with StructureMap (IoC) - How do I implement IDisposable?

If I have the following Repository:
public IQueryable<User> Users()
{
    var db = new SqlDataContext();
    return db.Users;
}
I understand that the connection is opened only when the query is fired:
public class ServiceLayer
{
    public IRepository repo;

    public ServiceLayer(IRepository injectedRepo)
    {
        this.repo = injectedRepo;
    }

    public List<User> GetUsers()
    {
        return repo.Users().ToList(); // connection opened, query fired, connection closed. (or is it??)
    }
}
If this is the case, do I still need to make my Repository implement IDisposable?
The Visual Studio Code Metrics certainly think I should.
I'm using IQueryable because I give control of the queries to my service layer (filters, paging, etc.), so please no architectural discussions over the fact that I'm using it.
BTW - SqlDataContext is my custom class which extends Entity Framework's ObjectContext class (so I can have POCO parties).
So the question: do I really HAVE to implement IDisposable?
If so, I have no idea how this is possible, as each method shares the same repository instance.
EDIT
I'm using Dependency Injection (StructureMap) to inject the concrete repository into the service layer. This pattern is followed down the app stack - I'm using ASP.NET MVC, and the concrete service is injected into the Controllers.
In other words:
User requests URL
Controller instance is created, which receives a new ServiceLayer instance, which is created with a new Repository instance.
Controller calls methods on service (all calls use same Repository instance)
Once request is served, controller is gone.
I am using Hybrid mode to inject dependencies into my controllers, which, according to the StructureMap documentation, causes the instances to be stored in HttpContext.Current.Items.
So, I can't do this:
using (var repo = new Repository())
{
    return repo.Users().ToList();
}
As this defeats the whole point of DI.
A common approach used with NHibernate is to create your session (ObjectContext) in BeginRequest (or some other similar lifecycle event) and then dispose of it in EndRequest. You can put that code in an HttpModule.
You would need to change your Repository so that it has the ObjectContext injected. Your Repository should get out of the business of managing the ObjectContext lifecycle.
I would say you definitely should. Unless Entity Framework handles connections very differently than LINQ to SQL (which is what I've been using), you should implement IDisposable whenever you are working with connections. It might be true that the connection automatically closes after your transaction successfully completes. But what happens if it doesn't complete successfully? Implementing IDisposable is a good safeguard for making sure you don't have any connections left open after you're done with them. A simpler reason is that it's a best practice to implement IDisposable.
Implementation could be as simple as putting this in your repository class:
public void Dispose()
{
    SqlDataContext.Dispose();
}
Then, whenever you do anything with your repository (e.g., with your service layer), you just need to wrap everything in a using clause. You could do several "CRUD" operations within a single using clause, too, so you only dispose when you're all done.
Update
In my service layer (which I designed to work with LinqToSql, but hopefully this would apply to your situation), I do new up a new repository each time. To allow for testability, I have the dependency injector pass in a repository provider (instead of a repository instance). Each time I need a new repository, I wrap the call in a using statement, like this.
using (var repository = GetNewRepository())
{
    ...
}

public Repository<TDataContext, TEntity> GetNewRepository()
{
    return _repositoryProvider.GetNew<TDataContext, TEntity>();
}
If you do it this way, you can mock everything (so you can test your service layer in isolation), yet still make sure you are disposing of your connections properly.
If you really need to do multiple operations with a single repository, you can put something like this in your base service class:
public void ExecuteAndSave(Action<Repository<TDataContext, TEntity>> action)
{
    using (var repository = GetNewRepository())
    {
        action(repository);
        repository.Save();
    }
}
action can be a series of CRUD actions or a complex query, but you know that if you call ExecuteAndSave(), when it's all done, your repository will be disposed of properly.
EDIT - Advice Received From Ayende Rahien
Got an email reply from Ayende Rahien (of Rhino Mocks, Raven, Hibernating Rhinos fame).
This is what he said:
Your problem is that you initialize your context like this:
_genericSqlServerContext = new GenericSqlServerContext(new EntityConnection("name=EFProfDemoEntities"));
That means that the context doesn't own the entity connection, which means that it doesn't dispose it. In general, it is vastly preferable to have the context create the connection. You can do that by using:
_genericSqlServerContext = new GenericSqlServerContext("name=EFProfDemoEntities");
Which definitely makes sense - however, I would have thought that disposing of a SqlServerContext would also dispose of the underlying connection. Guess I was wrong.
Anyway, that is the solution - now everything is getting disposed of properly.
So I no longer need to use a using block on the repository:
public ICollection<T> FindAll<T>(Expression<Func<T, bool>> predicate, int maxRows) where T : Foo
{
    // don't need this anymore
    //using (var cr = ObjectFactory.GetInstance<IContentRepository>())
    return _fooRepository.Find().OfType<T>().Where(predicate).Take(maxRows).ToList();
}
And in my base repository, I implement IDisposable and simply do this:
Context.Dispose(); // Context is an instance of my custom sql context.
Hope that helps others out.