In the non-reactive ElasticsearchRepository there is a class AbstractElasticsearchRepository that does
try {
    if (createIndexAndMapping()) {
        createIndex();
        putMapping();
    }
} catch (Exception exception) {
    LOGGER.warn("Cannot create index: {}", exception.getMessage());
}
Is there a different, more manual setup for ReactiveElasticsearchRepository? My index mapping only gets created when a record is saved, not on startup.
For reactive repositories this is implemented in 4.1.0-M1, so it will be available in the next release (or when using the milestone).
If you cannot switch to the milestone version, you will need to create a non-reactive client as well and do the createIndex and putMapping with this client.
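A minimal sketch of what that manual startup step could look like, assuming Spring Data Elasticsearch 4.0's IndexOperations API and a hypothetical MyEntity document class (this is only a workaround sketch, not the repository's own mechanism):
import javax.annotation.PostConstruct;

import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.IndexOperations;
import org.springframework.stereotype.Component;

@Component
public class MyEntityIndexInitializer {

    // backed by the non-reactive client/template
    private final ElasticsearchOperations operations;

    public MyEntityIndexInitializer(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    @PostConstruct
    public void createIndexAndMapping() {
        IndexOperations indexOps = operations.indexOps(MyEntity.class);
        if (!indexOps.exists()) {
            indexOps.create();
            indexOps.putMapping(indexOps.createMapping(MyEntity.class));
        }
    }
}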
Related
Let's say I have a method store(Flux<DataBuffer> bufferFlux) which receives some data as a flux of DataBuffers, calculates an identifier, creates an AsynchronousFileChannel and then uses DataBufferUtils to write the data to the channel.
I started like this. Please note that the following code will not work; it should just illustrate how I create a FileChannel and how I would like to write the data, while releasing the used buffers and closing the channel afterwards.
public Mono<Void> store(Flux<DataBuffer> bufferFlux) {
    var channelMono = Mono.defer(() -> {
        try {
            log.info("opening file {}", filePath);
            return Mono.just(AsynchronousFileChannel
                    .open(filePath, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE));
        } catch (IOException ex) {
            log.error("error opening file", ex);
            return Mono.error(ex);
        }
    });

    // calculate identifier
    // store buffers to AsynchronousFileChannel
    return DataBufferUtils
            .write(bufferFlux, fileChannel)
            .doOnNext(DataBufferUtils.releaseConsumer())
            .doFinally(f -> {
                try {
                    fileChannel.close();
                } catch (IOException ioException) {
                    log.error("error closing file channel", ioException);
                }
            })
            .then();
}
The problem is that I just started with reactive programming and have no clue how I could bring these two building blocks together, so that
the data is written to the channel
all buffers are gracefully released
the channel is closed after writing the data
the whole operation just signals complete or error (I guess this is what Mono<Void> is used for)
Can anyone help me choose the right operators or point me to a conceptual problem (perhaps there is a good reason why I cannot find a suitable operator)? :)
Thank you!
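For reference, an untested sketch of one way the two building blocks might be chained with flatMap; openChannel() is just a placeholder for the Mono.defer block above, and this is only one possible approach rather than a definitive answer:
public Mono<Void> store(Flux<DataBuffer> bufferFlux) {
    return openChannel()                                         // the Mono<AsynchronousFileChannel> from above
            .flatMap(channel -> DataBufferUtils
                    .write(bufferFlux, channel)                  // writes and re-emits each buffer
                    .doOnNext(DataBufferUtils.releaseConsumer()) // release every buffer after it was written
                    .doFinally(signal -> {
                        try {
                            channel.close();                     // close on complete, error or cancel
                        } catch (IOException e) {
                            log.error("error closing file channel", e);
                        }
                    })
                    .then());                                    // reduce to a completion/error signal
}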
I have two Debezium SQL Server connectors that have to connect to one database and publish two different tables. Their name and database.history.kafka.topic values are unique. Still, when adding the second one (using a POST request) I get the exceptions below. I don't want to use a unique value for database.server.name, which counterintuitively is used for the metric name.
java.lang.RuntimeException: Unable to register the MBean 'debezium.sql_server:type=connector-metrics,context=schema-history,server=mydatabase'
Caused by: javax.management.InstanceAlreadyExistsException: debezium.sql_server:type=connector-metrics,context=schema-history,server=mydatabase
We won't be using JMX/MBeans, so it's okay to disable it, but the question is how. If there is a common way to do it for the JVM, please advise.
I even see the code below in Debezium, where it registers an MBean. Looking just at the first two lines, it seems one way to bypass this issue is forcing ManagementFactory.getPlatformMBeanServer() to return null. So another way of asking the same question may be: how do I force ManagementFactory.getPlatformMBeanServer() to return null?
public synchronized void register() {
try {
final MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
if (mBeanServer == null) {
LOGGER.info("JMX not supported, bean '{}' not registered", name);
return;
}
// During connector restarts it is possible that Kafka Connect does not manage
// the lifecycle perfectly. In that case it is possible the old metric MBean is still present.
// There will be multiple attempts executed to register new MBean.
for (int attempt = 1; attempt <= REGISTRATION_RETRIES; attempt++) {
try {
mBeanServer.registerMBean(this, name);
break;
}
catch (InstanceAlreadyExistsException e) {
if (attempt < REGISTRATION_RETRIES) {
LOGGER.warn(
"Unable to register metrics as an old set with the same name exists, retrying in {} (attempt {} out of {})",
REGISTRATION_RETRY_DELAY, attempt, REGISTRATION_RETRIES);
final Metronome metronome = Metronome.sleeper(REGISTRATION_RETRY_DELAY, Clock.system());
metronome.pause();
}
else {
LOGGER.error("Failed to register metrics MBean, metrics will not be available");
}
}
}
// If the old metrics MBean is present then the connector will try to unregister it
// upon shutdown.
registered = true;
}
catch (JMException | InterruptedException e) {
throw new RuntimeException("Unable to register the MBean '" + name + "'", e);
}
}
You should use a single Debezium SQL Server connector for this, and use the table.include.list property on the connector to list the two tables you want to capture.
https://debezium.io/documentation/reference/stable/connectors/sqlserver.html#sqlserver-property-table-include-list
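With a single connector, both tables can then be listed in one configuration property, for example "table.include.list": "dbo.customers,dbo.orders" in the connector's JSON configuration (the table names here are just placeholders).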
I am using the JanusGraph git code example example-remotegraph.
It works well when I create elements and run some queries.
But it reports an exception on update and delete:
java.util.concurrent.CompletionException: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: Could not locate method: DefaultGraphTraversal.none()
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
at org.apache.tinkerpop.gremlin.driver.ResultSet.one(ResultSet.java:107)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.hasNext(ResultSet.java:159)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.next(ResultSet.java:166)
at org.apache.tinkerpop.gremlin.driver.ResultSet$1.next(ResultSet.java:153)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal$TraverserIterator.next(DriverRemoteTraversal.java:142)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal$TraverserIterator.next(DriverRemoteTraversal.java:127)
at org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteTraversal.nextTraverser(DriverRemoteTraversal.java:108)
at org.apache.tinkerpop.gremlin.process.remote.traversal.step.map.RemoteStep.processNextStart(RemoteStep.java:80)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:128)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.next(AbstractStep.java:38)
at org.apache.tinkerpop.gremlin.process.traversal.Traversal.iterate(Traversal.java:203)
at org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal.iterate(GraphTraversal.java:2694)
at org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal$Admin.iterate(GraphTraversal.java:178)
at org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal.iterate(DefaultGraphTraversal.java:48)
at org.janusgraph.example.GraphApp.deleteElements(GraphApp.java:301)
at org.janusgraph.example.GraphApp.runApp(GraphApp.java:350)
at org.janusgraph.example.RemoteGraphApp.main(RemoteGraphApp.java:227)
Here is the code:
public void deleteElements() {
    try {
        if (g == null) {
            return;
        }
        LOGGER.info("deleting elements");
        // note that this will succeed whether or not pluto exists
        g.V().has("name", "pluto").drop().iterate();
        if (supportsTransactions) {
            g.tx().commit();
        }
    } catch (Exception e) {
        LOGGER.error(e.getMessage(), e);
        if (supportsTransactions) {
            g.tx().rollback();
        }
    }
}
Emmm... I think I have fixed this problem.
The only likely cause is that the library version used doesn't match the Gremlin Server's version;
I changed the Gremlin driver library to version 3.2.9, and it works well.
You need to use the same TinkerPop version that JanusGraph is using, as this is an incompatible change that was introduced in TinkerPop.
For a new project I'm building a REST API that references resources from a second service. For the sake of client convenience I want this association to be serialized as an _embedded entry.
Is this possible at all? I thought about building a fake CrudRepository (a facade for a Feign client) and manually changing all URLs for that fake resource with resource processors. Would that work?
A little deep dive into the functionality of spring-data-rest:
Spring Data REST wraps all entities into PersistentEntityResource objects that extend the Resource<T> type that Spring HATEOAS provides. This particular implementation has a list of embedded objects that will be serialized as the _embedded field.
So in theory the solution to my problem should be as simple as implementing a ResourceProcessor<Resource<MyType>> and adding my reference object to the embeds.
In practice this approach has some ugly but solvable issues:
PersistentEntityResource is not generic, so while you can build a ResourceProcessor for it, that processor will by default catch everything. I am not sure what happens when you start using Projections. So that is not a solution.
PersistentEntityResource implements Resource<Object> and as a result cannot be cast to Resource<MyType> and vice versa. If you want to access the embedded field, all casts have to be done with PersistentEntityResource.class.cast() and Resource.class.cast().
Overall my solution is simple, effective and not very pretty. I hope Spring HATEOAS gets full-fledged HAL support in the future.
Here is my ResourceProcessor as a sample:
@Bean
public ResourceProcessor<Resource<MyType>> typeProcessorToAddReference() {
    // DO NOT REPLACE WITH LAMBDA!!!
    return new ResourceProcessor<Resource<MyType>>() {
        @Override
        public Resource<MyType> process(Resource<MyType> resource) {
            try {
                // XXX all resources here are PersistentEntityResource instances, but they can't be cast normally
                PersistentEntityResource halResource = PersistentEntityResource.class.cast(resource);
                List<EmbeddedWrapper> embedded = Lists.newArrayList(halResource.getEmbeddeds());
                ReferenceObject reference = spineClient.findReferenceById(resource.getContent().getReferenceId());
                embedded.add(embeddedWrappers.wrap(reference, "reference-relation"));
                // XXX all resources here are PersistentEntityResource instances, but they can't be cast normally
                resource = Resource.class.cast(PersistentEntityResource.build(halResource.getContent(), halResource.getPersistentEntity())
                        .withEmbedded(embedded).withLinks(halResource.getLinks()).build());
            } catch (Exception e) {
                log.error("Something went wrong", e);
                // swallow
            }
            return resource;
        }
    };
}
If you would like to work in a type-safe manner and with links only (adding references to custom controller methods), you can find inspiration in this sample code:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.server.RepresentationModelProcessor;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

@Configuration
public class MyTypeLinkConfiguration {

    public static class MyType {}

    @Bean
    public RepresentationModelProcessor<EntityModel<MyType>> myTypeProcessorAddLifecycleLinks(MyTypeLifecycleStates myTypeLifecycleStates) {
        // WARNING, no lambda can be passed here, because the type is crucial for applying this bean processor.
        return new RepresentationModelProcessor<EntityModel<MyType>>() {
            @Override
            public EntityModel<MyType> process(EntityModel<MyType> resource) {
                // add custom export link for a single MyType
                myTypeLifecycleStates
                        .listReachableStates(resource.getContent().getState())
                        .forEach(reachableState -> {
                            try {
                                // for each possible next state, generate the relation which will get us to that state
                                switch (reachableState) {
                                    case DRAFT:
                                        resource.add(linkTo(methodOn(MyTypeLifecycleController.class).requestRework(resource.getContent().getId(), null)).withRel("requestRework"));
                                        break;
                                    case IN_REVIEW:
                                        resource.add(linkTo(methodOn(MyTypeLifecycleController.class).requestReview(resource.getContent().getId(), null)).withRel("requestReview"));
                                        break;
                                    default:
                                        throw new RuntimeException("Link for target state " + reachableState + " is not implemented!");
                                }
                            } catch (Exception ex) {
                                // swallowed
                                log.error("error while adding lifecycle link for target state " + reachableState + "! ex=" + ex.getMessage(), ex);
                            }
                        });
                return resource;
            }
        };
    }
}
Note that myTypeLifecycleStates is an autowired "service"/"business logic" bean.
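For completeness, a hypothetical shape of the collaborators the sample refers to (these are assumptions, not part of the original answer; the real state enum, service and controller will differ):
import java.util.Set;

// possible lifecycle states used in the switch above
public enum MyTypeState { DRAFT, IN_REVIEW }

// the "business logic" bean that knows which transitions are allowed
public interface MyTypeLifecycleStates {
    Set<MyTypeState> listReachableStates(MyTypeState currentState);
}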
I have a component with a custom model (extending the standard Wicket Model class). My model loads the data from a database/web service when Wicket calls getObject().
This lookup can fail for several reasons. I'd like to handle this error by displaying a nice message on the web page with the component. What is the best way to do that?
public class MyCustomModel extends Model {
    @Override
    public String getObject() {
        try {
            return Order.lookupOrderDataFromRemoteService();
        } catch (Exception e) {
            logger.error("Failed silently...");
            // How do I propagate this to the component/page?
        }
        return null;
    }
}
Note that the error happens inside the Model which is decoupled from the components.
Handling an exception that happens in the model's getObject() is tricky, since by this time we are usually deep in the response phase of the whole request cycle, and it is too late to change the component hierarchy. So the only place to handle the exception is very much non-local, not anywhere near your component or model, but in the RequestCycle.
There is a way around that though. We use a combination of a Behavior and an IRequestCycleListener to deal with this:
IRequestCycleListener#onException allows you to examine any exception that was thrown during the request. If you return an IRequestHandler from this method, that handler will be run and rendered instead of whatever else was going on beforehand.
We use this on its own to catch generic stuff like Hibernate's StaleObjectException and redirect the user to a generic "someone else modified your object" page.
For more specific cases we add a RuntimeExceptionHandler behavior:
public abstract class RuntimeExceptionHandler extends Behavior {
public abstract IRequestHandler handleRuntimeException(Component component, Exception ex);
}
In IRequestCycleListener we walk through the current page's component tree to see whether any component has an instance of RuntimeExceptionHandler. If we find one, we call its handleRuntimeException method, and if it returns an IRequestHandler that's the one we will use. This way you can have the actual handling of the error local to your page.
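The listener itself is not shown above; here is a rough sketch of what it might look like, assuming Wicket's AbstractRequestCycleListener (RuntimeExceptionHandlerListener is a made-up name and the tree walk is simplified, so treat it as an illustration of the idea rather than the original implementation):
import org.apache.wicket.Component;
import org.apache.wicket.Page;
import org.apache.wicket.core.request.handler.IPageRequestHandler;
import org.apache.wicket.request.IRequestHandler;
import org.apache.wicket.request.cycle.AbstractRequestCycleListener;
import org.apache.wicket.request.cycle.RequestCycle;
import org.apache.wicket.util.visit.IVisit;
import org.apache.wicket.util.visit.IVisitor;

public class RuntimeExceptionHandlerListener extends AbstractRequestCycleListener {

    @Override
    public IRequestHandler onException(RequestCycle cycle, Exception ex) {
        IRequestHandler active = cycle.getActiveRequestHandler();
        if (!(active instanceof IPageRequestHandler)) {
            return null; // not a page request, let Wicket's default handling kick in
        }
        Page page = (Page) ((IPageRequestHandler) active).getPage();

        // walk the component tree and delegate to the first RuntimeExceptionHandler that feels responsible
        return page.visitChildren(Component.class, new IVisitor<Component, IRequestHandler>() {
            @Override
            public void component(Component component, IVisit<IRequestHandler> visit) {
                for (RuntimeExceptionHandler handler : component.getBehaviors(RuntimeExceptionHandler.class)) {
                    IRequestHandler result = handler.handleRuntimeException(component, ex);
                    if (result != null) {
                        visit.stop(result); // use this handler's result for the rest of the request
                    }
                }
            }
        });
    }
}
The listener would then be registered once in the application's init() with getRequestCycleListeners().add(new RuntimeExceptionHandlerListener()).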
Example:
public MyPage() {
    ...
    this.add(new RuntimeExceptionHandler() {
        @Override
        public IRequestHandler handleRuntimeException(Component component, Exception ex) {
            if (ex instanceof MySpecialException) {
                // just an example, you really can do anything you want here.
                // show a feedback message...
                MyPage.this.error("something went wrong");
                // then hide the affected component(s) so the error doesn't happen again...
                myComponentWithErrorInModel.setVisible(false); // ...
                // ...then finally just re-render this page:
                return new RenderPageRequestHandler(new PageProvider(MyPage.this));
            } else {
                return null;
            }
        }
    });
}
Note: This is not something shipped with Wicket, we rolled our own. We simply combined the IRequestCycleListener and Behavior features of Wicket to come up with this.
Your model could implement IComponentAssignedModel, thus being able to get hold of the owning component.
But I wonder how often are you able to reuse MyCustomModel?
I know that some devs advocate creating standalone model implementations (often in separate packages). While there are general cases where this is useful (e.g. FeedbackMessagesModel), in my experience it's easier to just create component-specific inner classes.
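To illustrate the IComponentAssignedModel route, a rough, untested sketch; the Order lookup and error message are taken from the question, and older Wicket versions may additionally require implementing setObject() and detach() on the outer model (included here for safety):
import org.apache.wicket.Component;
import org.apache.wicket.model.AbstractWrapModel;
import org.apache.wicket.model.IComponentAssignedModel;
import org.apache.wicket.model.IModel;
import org.apache.wicket.model.IWrapModel;

public class MyCustomModel implements IComponentAssignedModel<String> {

    @Override
    public String getObject() {
        // only called if the model is used without a component
        return Order.lookupOrderDataFromRemoteService();
    }

    @Override
    public IWrapModel<String> wrapOnAssignment(final Component component) {
        // once assigned to a component, errors can be reported against it
        return new AbstractWrapModel<String>() {
            @Override
            public IModel<?> getWrappedModel() {
                return MyCustomModel.this;
            }

            @Override
            public String getObject() {
                try {
                    return Order.lookupOrderDataFromRemoteService();
                } catch (Exception e) {
                    component.error("Could not load order data");
                    return null;
                }
            }
        };
    }

    @Override
    public void setObject(String object) {
        // read-only model
    }

    @Override
    public void detach() {
    }
}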
Since the main issue here is that models are by design decoupled from the component hierarchy, you could implement a component-aware model that reports all errors against a specific component.
Remember to make sure it implements Detachable so that the related Component will be detached.
If the Model will perform an expensive operation, you might be interested in using LoadableDetachableModel instead (take into account that Model.getObject() might be called multiple times).
public class MyComponentAwareModel extends LoadableDetachableModel {

    private Component comp;

    public MyComponentAwareModel(Component comp) {
        this.comp = comp;
    }

    protected Object load() {
        try {
            return Order.lookupOrderDataFromRemoteService();
        } catch (Exception e) {
            logger.error("Failed silently...");
            comp.error("This is an error message");
        }
        return null;
    }

    protected void onDetach() {
        comp.detach();
    }
}
It might also be worth giving Session.get().error() a try instead.
I would add a FeedbackPanel to the page and call error("some description") in the catch clause.
You might want to simply return null in getObject, and add logic to the controller class to display a message if getObject returns null.
If you need custom messages for different failure reasons, you could add a property like String errorMessage; to the model, which is set when catching the exception in getObject, so your controller class can do something like this:
if (model.getObject() == null) {
    add(new Label("label", model.getErrorMessage()));
} else {
    /* display your model object */
}