Is there a way to add a specific behavior that should only be executed on the Ignite coordinator server node (and follow it if the coordinator changes)?
Is there an add-on hook for adding such custom behavior?
Thanks in advance.
Greg
AFAIK, there is no built-in hook to achieve this.
However, it can be achieved with a node singleton Ignite service together with the TcpDiscoverySpi.isLocalNodeCoordinator() API.
If a discovery mechanism other than TcpDiscovery is used, the approach mentioned by @Alexandr can be used to determine the coordinator node.
Define an Ignite Service as follows. It schedules a task that periodically executes on every node of the cluster and runs the specific logic only if the local node is the coordinator.
import java.util.Timer;
import java.util.TimerTask;
import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;
import org.apache.ignite.spi.discovery.DiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class IgniteServiceImpl implements Service {

    @IgniteInstanceResource
    private Ignite ignite;

    private final Timer timer = new Timer();

    @Override
    public void cancel(ServiceContext ctx) {
        // Stop the periodic task when the service is undeployed.
        timer.cancel();
    }

    @Override
    public void init(ServiceContext ctx) throws Exception {
        System.out.println("Starting a service");
    }

    @Override
    public void execute(ServiceContext ctx) throws Exception {
        // Run the check every 30 seconds on each node; only the coordinator does the work.
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("Inside a service");

                if (ignite != null) {
                    DiscoverySpi discoverySpi = ignite.configuration().getDiscoverySpi();

                    if (discoverySpi instanceof TcpDiscoverySpi) {
                        TcpDiscoverySpi tcpDiscoverySpi = (TcpDiscoverySpi) discoverySpi;

                        if (tcpDiscoverySpi.isLocalNodeCoordinator())
                            doSomething();
                    } else {
                        // For other discovery SPIs, treat the oldest alive server node as the coordinator.
                        ClusterNode coordinatorNode =
                            ((IgniteEx) ignite).context().discovery().discoCache().oldestAliveServerNode();
                        UUID localNodeId = ((IgniteEx) ignite).context().localNodeId();

                        if (localNodeId.equals(coordinatorNode.id()))
                            doSomething();
                    }
                } else {
                    System.out.println("Ignite is null");
                }
            }
        }, 5, 30 * 1000L);
    }

    private void doSomething() {
        System.out.println("Hi, I am the coordinator node");
    }
}
Start the above service as a node singleton using Ignite.services() as follows:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setIgniteInstanceName("ignite-node");
Ignite ignite = Ignition.start(igniteConfiguration);
IgniteServices services = ignite.services();
services.deployNodeSingleton("test-service", new IgniteServiceImpl());
For low-level logic, you can extend Ignite with custom plugins.
I'm not sure whether there is an easy way to check if a node is indeed the coordinator, but you can check for the oldest one:
private boolean isCoordinator() {
    ClusterNode node = ((IgniteEx) ignite()).context().discovery().discoCache().oldestAliveServerNode();

    return node != null && node.isLocal();
}
Otherwise, just run custom initialization logic or a compute task once a node is started.
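For the last option, here is a minimal sketch (the class name CoordinatorInitBean is mine, not part of any API) that runs the same oldest-alive-server-node check once the local node has started, using an Ignite LifecycleBean. Note that the coordinator can change later, so a periodic check like the service above is still needed if the behavior must follow the coordinator.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.internal.IgniteEx;
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;
import org.apache.ignite.resources.IgniteInstanceResource;

public class CoordinatorInitBean implements LifecycleBean {

    @IgniteInstanceResource
    private Ignite ignite;

    @Override
    public void onLifecycleEvent(LifecycleEventType evt) throws IgniteException {
        if (evt == LifecycleEventType.AFTER_NODE_START) {
            // Same heuristic as above: the oldest alive server node is the coordinator.
            ClusterNode oldest = ((IgniteEx) ignite).context().discovery()
                    .discoCache().oldestAliveServerNode();

            if (oldest != null && oldest.isLocal())
                System.out.println("Local node started as the coordinator");
        }
    }

    public static void main(String[] args) {
        // Register the bean before starting the node.
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setLifecycleBeans(new CoordinatorInitBean());
        Ignition.start(cfg);
    }
}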
I have written a reactive API using Spring WebFlux version 2.3.0.RELEASE with reactor-netty version 0.9.10. As part of the API's SLA, I want to time out the request if the server takes longer than the configured WriteTimeout.
Sharing the code snippet below, where I have implemented a customizer for NettyReactiveWebServerFactory.
@Bean
public WebServerFactoryCustomizer serverFactoryCustomizer() {
    return new NettyTimeoutCustomizer();
}

class NettyTimeoutCustomizer implements WebServerFactoryCustomizer<NettyReactiveWebServerFactory> {

    @Override
    public void customize(NettyReactiveWebServerFactory factory) {
        int connectionTimeout = 1000;
        int writeTimeout = 1;

        factory.addServerCustomizers(server -> server.tcpConfiguration(tcp ->
            tcp.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connectionTimeout)
               .doOnConnection(connection ->
                   connection.addHandlerLast(new WriteTimeoutHandler(writeTimeout)))));
    }
}
In spite of the customizer, the WriteTimeout is not working for the API.
Instead of defining a WebServerFactoryCustomizer bean, create a bean of NettyReactiveWebServerFactory to override Spring's auto-configuration.
@Bean
public NettyReactiveWebServerFactory nettyReactiveWebServerFactory() {
    NettyReactiveWebServerFactory webServerFactory = new NettyReactiveWebServerFactory();
    webServerFactory.addServerCustomizers(new MyCustomizer());
    return webServerFactory;
}
Now the MyCustomizer will look something like this:
public class MyCustomizer implements NettyServerCustomizer {

    @Override
    public HttpServer apply(HttpServer httpServer) {
        return httpServer.tcpConfiguration(tcpServer ->
            tcpServer.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000)
                .bootstrap(serverBootstrap -> serverBootstrap.childHandler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel channel) throws Exception {
                        channel.pipeline().addLast("writeTimeoutHandler", new WriteTimeoutHandler(1));
                    }
                }))
        );
    }
}
This is the approach suggested in the official API docs.
I want to trigger an email after the RabbitMQ listener's retries are exhausted and the message handling has still failed.
The retry logic works with the code below, but how do I trigger the functionality (the email) once the maximum retry attempts are exceeded?
@Bean
public SimpleMessageListenerContainer container() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(connectionFactory());
    container.setQueues(myQueue());
    container.setDefaultRequeueRejected(false);
    Advice[] adviceArray = new Advice[] { interceptor() };
    container.setAdviceChain(adviceArray);
    return container;
}

@Bean
public IntegrationFlow inboundFlow() {
    return IntegrationFlows.from(
            Amqp.inboundAdapter(container()))
            .log()
            .handle(listenerBeanName, listenerMethodName)
            .get();
}

@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(retryMaxAttempts)
            .backOffOptions(initialInterval, multiplier, maxInterval)
            //.recoverer(new RejectAndDontRequeueRecoverer())
            .recoverer(new CustomRejectAndRecoverer())
            .build();
}
Adding the code of the custom recoverer:
@Service
public class CustomRejectAndRecoverer implements MessageRecoverer {

    @Autowired
    private EmailGateway emailgateway;

    @Override
    public void recover(Message message, Throwable cause) {
        // INSERT CODE HERE.... HOW TO CALL GATEWAY
        // emailgateway.sendMail(cause);
        throw new ListenerExecutionFailedException("Retry Policy Exhausted",
                new AmqpRejectAndDontRequeueException(cause), message);
    }
}
That's exactly what the .recoverer() in that RetryInterceptorBuilder is for.
You currently use a RejectAndDontRequeueRecoverer, but nothing stops you from implementing your own MessageRecoverer that delegates to the RejectAndDontRequeueRecoverer and also sends a message to some MessageChannel that performs the email-sending logic.
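For illustration, a minimal sketch of such a recoverer. It assumes your existing EmailGateway exposes a sendMail(Throwable) method, as hinted by the commented-out line in your code; the class name is mine.

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.retry.MessageRecoverer;
import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EmailingRejectAndRecoverer implements MessageRecoverer {

    // Delegate that rejects the exhausted message without requeuing it
    // (so it goes to the DLQ, if one is configured).
    private final RejectAndDontRequeueRecoverer delegate = new RejectAndDontRequeueRecoverer();

    @Autowired
    private EmailGateway emailgateway;

    @Override
    public void recover(Message message, Throwable cause) {
        // Send the notification first, then delegate so the message is still rejected.
        emailgateway.sendMail(cause);
        delegate.recover(message, cause);
    }
}

Then reference this recoverer in the RetryInterceptorBuilder instead of the CustomRejectAndRecoverer above; injecting it as a bean rather than creating it with new ensures the EmailGateway is actually autowired.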
I have a project built on ASP.NET Core 2 that uses the Quartz.NET scheduler 3 (beta 1).
I've got the following job that I want to execute:
public class TestJob: IJob
{
private readonly AppDbContext _dbContext;
public TestJob(AppDbContext dbContext)
{
_dbContext = dbContext;
}
public Task Execute(IJobExecutionContext context)
{
Debug.WriteLine("Test check at " + DateTime.Now);
var testRun = _dbContext.TestTable.Where(o => o.CheckNumber > 10).ToList();
Debug.WriteLine(testRun.Count);
return Task.CompletedTask;
}
}
Unfortunately it never works, and there are no error logs to indicate an issue.
Yet when I remove everything and just leave the Debug.WriteLine, it works, as in the example below.
public class TestJob: IJob
{
public Task Execute(IJobExecutionContext context)
{
Debug.WriteLine("Test check at " + DateTime.Now);
return Task.CompletedTask;
}
}
How can I get my job to execute the database call?
EDIT 1: Job Creation
var schedulerFactory = new StdSchedulerFactory(properties);
_scheduler = schedulerFactory.GetScheduler().Result;
_scheduler.Start().Wait();
var testJob = JobBuilder.Create<TestJob>()
.WithIdentity("TestJobIdentity")
.Build();
var testTrigger = TriggerBuilder.Create()
.WithIdentity("TestJobTrigger")
.StartNow()
.WithSimpleSchedule(x => x.WithIntervalInMinutes(1).RepeatForever())
.Build();
if (CheckIfJobRegistered(testJob.Key).Result == false)
_scheduler.ScheduleJob(testJob, testTrigger).Wait();
The main problem here is that Quartz can't create the job and swallows the exception.
The Documentation states:
When a trigger fires, the JobDetail (instance definition) it is associated to is loaded, and the job class it refers to is instantiated via the JobFactory configured on the Scheduler. The default JobFactory simply calls the default constructor of the job class using Activator.CreateInstance, then attempts to call setter properties on the class that match the names of keys within the JobDataMap. You may want to create your own implementation of JobFactory to accomplish things such as having your application’s IoC or DI container produce/initialize the job instance.
Quartz provides the IJobFactory to achieve that, and it works really well with dependency injection. A JobFactory can look like this:
public class JobFactory : IJobFactory
{
//TypeFactory is just the DI Container of your choice
protected readonly TypeFactory Factory;
public JobFactory(TypeFactory factory)
{
Factory = factory;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
try
{
return Factory.Create(bundle.JobDetail.JobType) as IJob;
}
catch (Exception e)
{
//Log the error and return null
//every exception thrown will be swallowed by Quartz
return null;
}
}
public void ReturnJob(IJob job)
{
//Don't forget to implement this,
//or the memory will not be released
Factory.Release(job);
}
}
Then just register your JobFactory with the scheduler and everything should work:
_scheduler.JobFactory = new JobFactory(/*container of choice*/);
Edit:
Additionally, you can take a look at one of my previous answers.
I think I am missing something here. I am trying to create a simple Rabbit listener that can accept a custom object as the message type. Now, as per the docs:
In versions prior to 1.6, the type information to convert the JSON had to be provided in message headers, or a custom ClassMapper was required. Starting with version 1.6, if there are no type information headers, the type can be inferred from the target method arguments.
I am putting the message into the queue manually using the RabbitMQ admin dashboard, and I get an error like:
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.example.Customer] for GenericMessage [payload=byte[21], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=customer, amqp_deliveryTag=1, amqp_consumerQueue=customer, amqp_redelivered=false, id=81e8a562-71aa-b430-df03-f60e6a37c5dc, amqp_consumerTag=amq.ctag-LQARUDrR6sUcn7FqAKKVDA, timestamp=1485635555742}]
My configuration:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
    connectionFactory.setUsername("test");
    connectionFactory.setPassword("test1234");
    connectionFactory.setVirtualHost("/");
    return connectionFactory;
}

@Bean
RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(new Jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public AmqpAdmin amqpAdmin() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(connectionFactory());
    return rabbitAdmin;
}

@Bean
public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
    return new Jackson2JsonMessageConverter();
}
Also, with this exception the message is not put back in the queue.
I am using Spring Boot 1.4, which brings in spring-amqp 1.6.1.
Edit 1: I added the Jackson converter as above (probably not required with Spring Boot) and set the content type in the RabbitMQ admin, but I still got the error below. As you can see above, I am not configuring any listener container yet.
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [[B] to [com.example.Customer] for GenericMessage [payload=byte[21], headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=customer, content_type=application/json, amqp_deliveryTag=3, amqp_consumerQueue=customer, amqp_redelivered=false, id=7f84d49d-037a-9ea3-e936-ed5552d9f535, amqp_consumerTag=amq.ctag-YSemzbIW6Q8JGYUS70WWtA, timestamp=1485643437271}]
If you are using Boot, you can simply add a Jackson2JsonMessageConverter @Bean to the configuration and it will be automatically wired into the listener (as long as it's the only converter). You need to set the content_type property to application/json if you are using the administration console to send the message.
Conversion errors are considered fatal by default because there is generally no reason to retry; otherwise they'd loop forever.
EDIT
Here's a working boot app...
@SpringBootApplication
public class So41914665Application {

    public static void main(String[] args) {
        SpringApplication.run(So41914665Application.class, args);
    }

    @Bean
    public Queue queue() {
        return new Queue("foo", false, false, true);
    }

    @Bean
    public Jackson2JsonMessageConverter converter() {
        return new Jackson2JsonMessageConverter();
    }

    @RabbitListener(queues = "foo")
    public void listen(Foo foo) {
        System.out.println(foo);
    }

    public static class Foo {

        public String bar;

        public String getBar() {
            return this.bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }

        @Override
        public String toString() {
            return "Foo [bar=" + this.bar + "]";
        }
    }
}
I sent a message from the admin console (content_type set to application/json, body {"bar":"baz"}) with this result:
2017-01-28 21:49:45.509 INFO 11453 --- [ main] com.example.So41914665Application : Started So41914665Application in 4.404 seconds (JVM running for 5.298)
Foo [bar=baz]
Boot will define an admin and template for you.
I ran into the same issue; it turned out that a git stash/merge had messed up my config, and I needed to include this package in my main class again:
@SpringBootApplication(scanBasePackages = {
    "com.example.amqp" // <- git merge messed this up
})
public class TeamActivityApplication {
public static void main(String[] args) {
SpringApplication.run(TeamActivityApplication.class, args);
}
}
I'm wondering if anyone can help me. I have a WCF service running over TCP which makes use of a duplex service. Currently this service calls a business object which in turn does some processing. While this processing is happening on a background thread, I want the UI to be updated at certain points. I've attached my code below. TestStatus is broken up into six parts, and the service should update the Windows Forms UI each time it changes.
The ScenarioComponent class is a singleton (shown below).
public void BeginProcessingPendingTestCases()
{
ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessPendingTestCases));
}
private void ProcessPendingTestCases(object state)
{
while (this.IsProcessingScenarios)
{
ProcessNextPendingTestCase();
}
}
private void ProcessNextPendingTestCase()
{
while (this.ServiceStatus == Components.ServiceStatus.Paused)
{
//Wait.
}
var testOperation = this.PendingTestCases.Dequeue();
if (testOperation.OperationStatus == TestStatus.Pending)
{
throw new NotImplementedException(); //TODO : Handle test.
if (testOperation.OperationStatus != TestStatus.Failed)
{
testOperation.OperationStatus = TestStatus.Processed;
}
this.CompletedTestCases.Enqueue(testOperation);
}
}
Initially I was using MSMQ to update the UI, which worked sufficiently well; however, this is no longer acceptable due to client restrictions.
My Service is as follows:
public class TestHarnessService : ITestHarnessService
{
public bool Ping()
{
return true;
}
public bool IsProcessingScenarios()
{
return ScenarioComponent.Instance.IsProcessingScenarios;
}
public void BeginProcessingScenarios(string xmlDocument, Uri webServiceUri)
{
var doc = new XmlDocument();
doc.LoadXml(xmlDocument);
var scenarios = ScenarioComponent.Deserialize(doc);
ScenarioComponent.Instance.EnqueueScenarioCollection(scenarios, webServiceUri);
ScenarioComponent.Instance.BeginProcessingPendingTestCases();
}
public void ValidateScenarioDocument(string xmlDocument)
{
var doc = new XmlDocument();
doc.LoadXml(xmlDocument);
ScenarioComponent.ValidateScenarioSchema(doc);
}
ITestOperationCallBack Callback
{
get
{
return OperationContext.Current.GetCallbackChannel<ITestOperationCallBack>();
}
    }
}
Now I need the UI to update each time a TestOperation changes or completes, but I am unsure how to accomplish this. Any feedback would be greatly appreciated.
Thank you!
Instead of using WinForms, you could use WPF and binding, which would handle the updating for you.