I have a basic stream processing flow which looks like
master topic -> my processing in a mapper/filter -> output topics
and I am wondering about the best way to handle "bad messages". This could potentially be things like messages that I can't deserialize properly, or perhaps the processing/filtering logic fails in some unexpected way (I have no external dependencies so there should be no transient errors of that sort).
I was considering wrapping all my processing/filtering code in a try catch and if an exception was raised then routing to an "error topic". Then I can study the message and modify it or fix my code as appropriate and then replay it on to master. If I let any exceptions propagate, the stream seems to get jammed and no more messages are picked up.
Is this approach considered best practice?
Is there a convenient Kafka streams way to handle this? I don't think there is a concept of a DLQ...
What are the alternative ways to stop Kafka jamming on a "bad message"?
What alternative error handling approaches are there?
For completeness here is my code (pseudo-ish):
class Document {
// Fields
}
class AnalysedDocument {
Document document;
String rawValue;
Exception exception;
Analysis analysis;
// All being well
AnalysedDocument(Document document, Analysis analysis) {...}
// Analysis failed
AnalysedDocument(Document document, Exception exception) {...}
// Deserialisation failed
AnalysedDocument(String rawValue, Exception exception) {...}
}
KStreamBuilder builder = new KStreamBuilder();
KStream<String, AnalysedDocument> analysedDocumentStream = builder
.stream(Serdes.String(), Serdes.String(), "master")
.mapValues(new ValueMapper<String, AnalysedDocument>() {
@Override
public AnalysedDocument apply(String rawValue) {
Document document;
try {
// Deserialise
document = ...
} catch (Exception e) {
return new AnalysedDocument(rawValue, e);
}
try {
// Perform analysis
Analysis analysis = ...
return new AnalysedDocument(document, analysis);
} catch (Exception e) {
return new AnalysedDocument(document, e);
}
}
});
// Branch based on whether analysis mapping failed to produce errorStream and successStream
errorStream.to(Serdes.String(), customPojoSerde(), "error");
successStream.to(Serdes.String(), customPojoSerde(), "analysed");
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
Any help greatly appreciated.
Right now, Kafka Streams offers only limited error handling capabilities. There is work in progress to simplify this. For now, your overall approach seems to be a good way to go.
One comment about handling de/serialization errors: handling those errors manually requires you to do de/serialization "manually" as well. This means you need to configure ByteArraySerdes for the key and value of the input/output topics of your Streams app and add a map() that does the de/serialization (i.e., KStream<byte[],byte[]> -> map() -> KStream<keyType,valueType> -- or the other way round if you also want to catch serialization exceptions). Otherwise, you cannot try-catch deserialization exceptions.
With your current approach, you "only" validate that the given string represents a valid document -- but it could be the case that the message itself is corrupted and cannot be converted into a String in the source operator in the first place. Thus, your code doesn't actually cover deserialization exceptions. However, if you are sure a deserialization exception can never happen, your approach would be sufficient, too.
Update
This issue is tackled via KIP-161 and will be included in the next release, 1.0.0. It allows you to register a callback via the parameter default.deserialization.exception.handler. The handler will be invoked every time an exception occurs during deserialization and allows you to return a DeserializationHandlerResponse (CONTINUE -> drop the record and move on, or FAIL, which is the default).
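For example, a minimal sketch of wiring up the built-in log-and-skip handler (the application id and bootstrap servers are illustrative):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // illustrative
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
// log and drop any record that cannot be deserialized, instead of failing the app
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
          LogAndContinueExceptionHandler.class);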
Update 2
With KIP-210 (which will be part of Kafka 1.1) it's also possible to handle errors on the producer side, similar to the consumer part, by registering a ProductionExceptionHandler via the config default.production.exception.handler; the handler can return CONTINUE.
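A minimal sketch of such a handler (the class name is illustrative; register it via the config above):

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class AlwaysContinueProductionExceptionHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record,
                                                     Exception exception) {
        // log and keep processing instead of shutting down the Streams app
        System.err.println("Failed to produce to " + record.topic() + ": " + exception.getMessage());
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(Map<String, ?> configs) { }
}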
Update Mar 23, 2018: Kafka 1.0 provides much better and easier handling for bad messages ("poison pills") via KIP-161 than what I describe below. See default.deserialization.exception.handler in the Kafka 1.0 docs.
This could potentially be things like messages that I can't deserialize properly [...]
Ok, my answer here focuses on the (de)serialization issues as this might be the most tricky scenario to handle for most users.
[...] or perhaps the processing/filtering logic fails in some unexpected way (I have no external dependencies so there should be no transient errors of that sort).
The same thinking (for deserialization) can also be applied to failures in the processing logic. Here, most people tend to gravitate towards option 2 below (minus the deserialization part), but YMMV.
I was considering wrapping all my processing/filtering code in a try catch and if an exception was raised then routing to an "error topic". Then I can study the message and modify it or fix my code as appropriate and then replay it on to master. If I let any exceptions propagate, the stream seems to get jammed and no more messages are picked up.
Is this approach considered best practice?
Yes, at the moment this is the way to go. Essentially, the two most common patterns are (1) skipping corrupted messages or (2) sending corrupted records to a quarantine topic aka a dead letter queue.
Is there a convenient Kafka streams way to handle this? I don't think there is a concept of a DLQ...
Yes, there is a way to handle this, including the use of a dead letter queue. However, it's (at least IMHO) not that convenient yet. If you have any feedback on how the API should allow you to handle this -- e.g. via a new or updated method, a configuration setting ("if serialization/deserialization fails send the problematic record to THIS quarantine topic") -- please let us know. :-)
What are the alternative ways to stop Kafka jamming on a "bad message"?
What alternative error handling approaches are there?
See my examples below.
FWIW, the Kafka community is also discussing the addition of a new CLI tool that allows you to skip over corrupted messages. However, as a user of the Kafka Streams API, I think ideally you want to handle such scenarios directly in your code, and fallback to CLI utilities only as a last resort.
Here are some patterns for the Kafka Streams DSL to handle corrupted records/messages aka "poison pills". This is taken from http://docs.confluent.io/current/streams/faq.html#handling-corrupted-records-and-deserialization-errors-poison-pill-messages
Option 1: Skip corrupted records with flatMap
This is arguably what most users would like to do.
We use flatMap because it allows you to output zero, one, or more output records per input record. In the case of a corrupted record we output nothing (zero records), thereby ignoring/skipping the corrupted record.
Benefit of this approach compared to the other options listed here: we need to manually deserialize a record only once!
Drawback of this approach: flatMap "marks" the input stream for potential data re-partitioning, i.e. if you perform a key-based operation such as groupings (groupBy/groupByKey) or joins afterwards, your data will be re-partitioned behind the scenes. Since this might be a costly step we don't want that to happen unnecessarily. If you KNOW that the record keys are always valid OR that you don't need to operate on the keys (thus keeping them as "raw" keys in byte[] format), you can change from flatMap to flatMapValues, which will not result in data re-partitioning even if you join/group/aggregate the stream later.
Code example:
Serde<byte[]> bytesSerde = Serdes.ByteArray();
Serde<String> stringSerde = Serdes.String();
Serde<Long> longSerde = Serdes.Long();
// Input topic, which might contain corrupted messages
KStream<byte[], byte[]> input = builder.stream(bytesSerde, bytesSerde, inputTopic);
// Note how the returned stream is of type KStream<String, Long>,
// rather than KStream<byte[], byte[]>.
KStream<String, Long> doubled = input.flatMap(
(k, v) -> {
try {
// Attempt deserialization
String key = stringSerde.deserializer().deserialize(inputTopic, k);
long value = longSerde.deserializer().deserialize(inputTopic, v);
// Ok, the record is valid (not corrupted). Let's take the
// opportunity to also process the record in some way so that
// we haven't paid the deserialization cost just for "poison pill"
// checking.
return Collections.singletonList(KeyValue.pair(key, 2 * value));
}
catch (SerializationException e) {
// log + ignore/skip the corrupted message
System.err.println("Could not deserialize record: " + e.getMessage());
}
return Collections.emptyList();
}
);
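If your record keys are known to be valid (or you keep them as raw byte[] keys), here is a minimal sketch of the flatMapValues variant mentioned above, reusing inputTopic and longSerde from the previous example:

// Only the value is deserialized, so the stream is not marked for
// re-partitioning even if you group/join/aggregate it later.
KStream<byte[], Long> doubledValues = input.flatMapValues(
    v -> {
      try {
        long value = longSerde.deserializer().deserialize(inputTopic, v);
        return Collections.singletonList(2 * value);
      }
      catch (SerializationException e) {
        // log + ignore/skip the corrupted message
        System.err.println("Could not deserialize record: " + e.getMessage());
      }
      return Collections.emptyList();
    }
);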
Option 2: dead letter queue with branch
Compared to option 1 (which ignores corrupted records) option 2 retains corrupted messages by filtering them out of the "main" input stream and writing them to a quarantine topic (think: dead letter queue). The drawback is that, for valid records, we must pay the manual deserialization cost twice.
KStream<byte[], byte[]> input = ...;
KStream<byte[], byte[]>[] partitioned = input.branch(
(k, v) -> {
boolean isValidRecord = false;
try {
stringSerde.deserializer().deserialize(inputTopic, k);
longSerde.deserializer().deserialize(inputTopic, v);
isValidRecord = true;
}
catch (SerializationException ignored) {}
return isValidRecord;
},
(k, v) -> true
);
// partitioned[0] is the KStream<byte[], byte[]> that contains
// only valid records. partitioned[1] contains only corrupted
// records and thus acts as a "dead letter queue".
KStream<String, Long> doubled = partitioned[0].map(
(key, value) -> KeyValue.pair(
// Must deserialize a second time unfortunately.
stringSerde.deserializer().deserialize(inputTopic, key),
2 * longSerde.deserializer().deserialize(inputTopic, value)));
// Don't forget to actually write the dead letter queue back to Kafka!
partitioned[1].to(Serdes.ByteArray(), Serdes.ByteArray(), "quarantine-topic");
Option 3: Skip corrupted records with filter
I only mention this for completeness. This option looks like a mix of options 1 and 2, but is worse than either of them. Compared to option 1, you must pay the manual deserialization cost for valid records twice (bad!). Compared to option 2, you lose the ability to retain corrupted records in a dead letter queue.
KStream<byte[], byte[]> validRecordsOnly = input.filter(
(k, v) -> {
boolean isValidRecord = false;
try {
stringSerde.deserializer().deserialize(inputTopic, k);
longSerde.deserializer().deserialize(inputTopic, v);
isValidRecord = true;
}
catch (SerializationException e) {
// log + ignore/skip the corrupted message
System.err.println("Could not deserialize record: " + e.getMessage());
}
return isValidRecord;
}
);
KStream<String, Long> doubled = validRecordsOnly.map(
(key, value) -> KeyValue.pair(
// Must deserialize a second time unfortunately.
stringSerde.deserializer().deserialize(inputTopic, key),
2 * longSerde.deserializer().deserialize(inputTopic, value)));
Any help greatly appreciated.
I hope I could help. If yes, I'd appreciate your feedback on how we could improve the Kafka Streams API to handle failures/exceptions in a better/more convenient way than today. :-)
For the processing logic you could take this approach:
someKStream
.mapValues(inputValue -> {
// for each execution the below "return" could provide a different class than the previous run!
// e.g. "return isFailedProcessing ? failValue : successValue;"
// where failValue and successValue are instances of two unrelated classes
return someObject; // someObject's class varies at runtime depending on your business logic
}) // here you'll have KStream<whateverKeyClass, Object> -> yes, Object for the value!
// you could have a different logic for choosing
// the target topic, below is just an example
.to((k, v, recordContext) -> v instanceof FailValueClass ?
"dead-letter-topic" : "success-topic",
// you could completely ignore the "Produced" part
// and rely on spring-boot properties only, e.g.
// spring.kafka.streams.properties.default.key.serde=yourKeySerde
// spring.kafka.streams.properties.default.value.serde=org.springframework.kafka.support.serializer.JsonSerde
Produced.with(yourKeySerde,
// JsonSerde could be an instance configured as you need
// (with type mappings or headers setting disabled, etc)
new JsonSerde<>()));
Your classes, though different and landing in different topics, will serialize as expected.
When not using to() and instead continuing with other processing, you could use branch(), splitting the logic based on the class of the Kafka value; the trick for branch() is to return KStream<keyClass, ?>[] so that the individual array items can then be cast to the appropriate class, as sketched below.
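A rough sketch of that branch()-based variant, assuming String keys for illustration (FailValueClass and SuccessValueClass are illustrative stand-ins for your two unrelated value classes):

// split by the runtime class of the value
KStream<String, ?>[] branches = someKStream.branch(
        (k, v) -> v instanceof FailValueClass,
        (k, v) -> true);

// cast each branch back to a concrete value type to continue processing
@SuppressWarnings("unchecked")
KStream<String, FailValueClass> failures = (KStream<String, FailValueClass>) branches[0];
@SuppressWarnings("unchecked")
KStream<String, SuccessValueClass> successes = (KStream<String, SuccessValueClass>) branches[1];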
If you want to send an exception (custom exception) to another topic (ERROR_TOPIC_NAME):
@Bean
public KStream<String, ?> kafkaStreamInput(StreamsBuilder kStreamBuilder) {
KStream<String, InputModel> input = kStreamBuilder.stream(INPUT_TOPIC_NAME);
return service.messageHandler(input);
}
public KStream<String, ?> messageHandler(KStream<String, InputModel> inputTopic) {
KStream<String, Object> output;
output = inputTopic.mapValues(v -> {
try {
//return InputModel
return normalMethod(v);
} catch (Exception e) {
//return ErrorModel
return errorHandler(e);
}
});
output.filter((k, v) -> (v instanceof ErrorModel)).to(KafkaStreamsConfig.ERROR_TOPIC_NAME);
output.filter((k, v) -> (v instanceof InputModel)).to(KafkaStreamsConfig.OUTPUT_TOPIC_NAME);
return output;
}
If you want to handle Kafka exceptions and skip it:
@Autowired
public ConsumerErrorHandler(
KafkaProducer<String, ErrorModel> dlqProducer) {
this.dlqProducer = dlqProducer;
}
@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
ObjectProvider<ConsumerFactory<Object, Object>> kafkaConsumerFactory) {
ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
configurer.configure(factory, kafkaConsumerFactory.getIfAvailable());
factory.setErrorHandler(((exception, data) -> {
ErrorModel errorModel = ErrorModel.builder().message(exception.getMessage())
.status("500").build();
assert data != null;
dlqProducer.send(new ProducerRecord<>(DLQ_TOPIC, data.key().toString(), errorModel));
}));
return factory;
}
All the above answers, although valid and useful, assume that your streams topology is stateless. For example, going back to the original example:
master topic -> my processing in a mapper/filter -> output topics
"my processing in a mapper/filter" should be stateless. I.e. Not re-partitioning (aka writing to a persistent re-partition topic) or doing a toTable() (aka writing to a changelog topic). If the processing fails further down the topology and you commit the transaction (by following any of the 3 option mention above - flatmap, branch or filter - then you have to cater for manually or programmatically eventually deleting that inconsistent state. That would mean writing extra custom code for automatic this.
I would personally expect Streams to also give you a LogAndSkip option for any unhandled runtime exception, not only for deserialization and production ones.
Does anyone have any ideas on this?
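For what it's worth, the closest thing that exists today is the client-level handler from KIP-671 (Kafka 2.8+). A minimal sketch; note that it reacts at thread granularity (replace or shut down) rather than skipping the offending record:

import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler;

streams.setUncaughtExceptionHandler(exception -> {
    System.err.println("Uncaught error in stream thread: " + exception);
    // REPLACE_THREAD restarts the thread and reprocesses from the last
    // committed offset -- the bad record is retried, not skipped
    return StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD;
});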
I don't believe these examples work at all when working with Avro.
When the schema can't be resolved (i.e. there is a bad/non-Avro message corrupting the topic, for example), there is no key or value to deserialize in the first place, because by the time the DSL .branch() code is called, the exception has already been thrown (or handled).
Can anyone confirm whether this is indeed the case? The very fluent approach you refer to here isn't possible when working with Avro?
KIP-161 does explain how to use a handler, however, it's much more fluent to see it as part of the topology.
Newbie question. Imagine a ParseTreeListener implementation with dozens of enter- and exit- methods which require exception handling. To avoid coding try-catch for each of these 40+ methods individually, I'd prefer a solution which would allow to catch listener's exceptions in a centralized manner and still have a reference to the context (line, position) where the exception was thrown, just like in the imaginary code below:
TestParser parser = new TestParser(tokens);
ParseTreeWalker walker = new ParseTreeWalker();
TestListener listener = new DefaultTestListener();
ParseTree tree = parser.entrynode();
try {
walker.walk(listener, tree);
} catch (RuntimeException e) {
LOG.info("Exception at " + tree.getContextWhereExceptionWasThrown());
}
Is it possible in any way?
Given that you have only shown imaginary code, you need to verify the true source and kind of the Exceptions you are seeing.
Walking an otherwise valid parse tree should not throw any exception other than those you choose to throw. OTOH, the parser will throw exceptions of the type you seem to be concerned with.
If they are indeed parser exceptions, you can catch them explicitly as RecognitionException rather than RuntimeException exceptions. That exception object has the methods you seem to be looking for: getContext, getOffendingToken, etc.
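A sketch of catching them centrally, assuming the exception actually escapes the parse (ANTLR's default error strategy recovers and reports instead of throwing, so this depends on your error strategy):

try {
    ParseTree tree = parser.entrynode();
    walker.walk(listener, tree);
} catch (RecognitionException e) {
    Token offending = e.getOffendingToken();
    LOG.info("Exception at line " + offending.getLine()
            + ", position " + offending.getCharPositionInLine());
}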
If they are instead occurring in the execution of the walker, you will need to clarify your question regarding the type of exception. If you are throwing the exception, include the relevant token indexes and intervals, obtained from the TerminalNodes and ParserRuleContext objects in the then current context, in a 'one size fits all' exception.
I wrote some code in an MVC Framework that looks something like:
class Controller_Test extends Controller
{
public function action_index()
{
$obj = new MyObject();
$errors = array();
try
{
$results = $obj->doSomething();
}
catch(MyObject_Exception $e)
{
$errors = $e->getErrors();
}
catch(Exception $e)
{
$errors[] = $e->getMessage();
}
}
}
My friend argues that the Controller should know nothing about MyObject, and therefore I should not catch MyObject_Exception.
He argues that the code should do something like this instead:
class Controller_Test extends Controller
{
public function action_index()
{
$obj = new MyObject();
$errors = array();
if($obj->doSomething())
{
$results = $obj->getResults();
}
else
{
$errors = $obj->getErrors();
}
}
}
I definitely understand his approach, but feel as though state management can lead to unintended side effects.
What is the right or preferred approach?
Edit: mistakenly put $obj->getErrors() in MyObject_Exception catch clause instead of $e->getErrors();
The debate about exceptions vs. returned error codes is a long and bloody one.
His argument breaks down in that, by using a getErrors() function, you are learning information about the object. If that is your reason for using a boolean return to indicate success, then you are wrong. In order for the Controller to handle the error properly, it has to know about the object it was touching and what the specific error was. Was it a network error? Memory error? It has to know in some way or another.
I prefer the exception model because it's cleaner and allows me to handle more errors in a more controlled fashion. It also provides a clear cut way for the data relating to an exception to be passed.
However, I disagree with your use of a function like getErrors(). Any data pertaining to the exception that would help me handle it should be included with the exception. I should not have to go hunting into the object again to get information about what went wrong.
Did the network connection timeout? The exception should contain the host/port it tried to connect to, how long it waited, and any data from the lower networking levels.
Let's do this by example (in pseudo-C#):
public class NetworkController {
Socket MySocket = null;
public void EstablishConnection() {
try {
this.MySocket = new Socket("1.1.1.1",90);
this.MySocket.Open();
} catch(SocketTimeoutException ex) {
//Attempt a Single Reconnect
}
catch(InvalidHostNameException ex) {
Log("InvalidHostname");
Exit();
}
}
}
Using his method:
public class NetworkController {
Socket MySocket = null;
public Boolean EstablishConnection() {
this.MySocket = new Socket("1.1.1.1",90);
if(this.MySocket.Open()) {
return true;
} else {
switch(this.MySocket.getError()) {
case "timeout":
// Reattempt
break;
case "badhost":
Log("InvalidHostname");
break;
}
}
return false;
}
}
Ultimately, you need to know what happened to the object to know how to respond to it, and there is no sense in using some convoluted if statement set or switch-case to determine that. Use the exceptions and love them.
EDIT: I accidentally dropped the last half of a sentence.
In general, I would say that what's important is whether the controller understands the meaning of the exception and can handle it properly. In many cases (if not most), the controller will not know how to properly handle the exception, and so should not catch and handle it.
On the other hand, the controller might reasonably be permitted to understand some specific exception like a "DatabaseUnavailableException", even if it has no idea how or why MyObject used a database. The controller might be permitted to retry the call to MyObject a certain number of times, all without knowing about how MyObject is implemented.
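A sketch of that retry idea (in Java; DatabaseUnavailableException, maxRetries, and the types are illustrative):

int attempts = 0;
Results results = null;
while (results == null) {
    try {
        results = myObject.doSomething();
    } catch (DatabaseUnavailableException e) {
        // The controller understands this one failure mode and retries,
        // without knowing how or why MyObject uses a database.
        if (++attempts >= maxRetries) {
            throw e; // give up and let the error propagate
        }
    }
}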
First of all, the controller is not meant to handle the underlying exceptions thrown by other classes.
Even if one occurs, the controller should halt and report that something went wrong in the underlying layer.
This way we make sure that the controller really does only the job of flow control.
The other classes that give the controller some output should be error-free, unless the error is very much controller-specific.
In the code below I want to neutralize the throw and continue the method. Can it be done?
public class TestChild extends TestParent{
private String s;
public void doit(String arg) throws Exception {
if(arg == null) {
Exception e = new Exception("exception");
throw e;
}
s=arg;
}
}
The net result should be that, in case the exception is triggered (arg == null):
throw e is replaced by Log(e)
s=arg is executed
Thanks
PS: I can 'swallow' the exception or replace it with another exception, but in all cases the method does not continue; all my interventions take place after the harm is done (i.e. the exception has been thrown).
I strongly doubt that a general solution exists. But for your particular code and requirements 1 and 2:
privileged public aspect SkipNullBlockAspect {
public pointcut needSkip(TestChild t1, String a1): execution(void TestChild.doit(String))
&& this(t1) && args(a1) ;
void around(TestChild t1, String a1): needSkip(t1, a1){
if (a1 == null) // if the argument is null, apply the workaround
{
a1 = ""; // alter the argument so the null check in doit() is skipped
proceed(t1, a1);
t1.s=null;
a1=null; //restore argument
System.out.println("Little hack.");
}
else
proceed(t1, a1);
}
}
I think that what you want generally makes no sense in most cases, because if an application throws an exception it has a reason to do so, and that reason almost always includes the intention not to continue with the normal control flow of the method where the exception was thrown, due to possible subsequent errors caused by bogus data. For example, what if you could neutralise the throw in your code and the next lines did something like this:
if(arg == null)
throw new Exception("exception");
// We magically neutralise the exception and are here with arg == null
arg.someMethod(); // NullPointerException
double x = 11.0 / Integer.parseInt(arg); // NumberFormatException
anotherMethod(arg); // might throw exception if arg == null
Do you get my point? You take incalculable risks by continuing control flow here, assuming you can at all. Now what are the alternatives?
Let us assume you know exactly that a value of null does not do any harm here. Then why not just catch the exception with an after() throwing advice?
Or if null is harmful and you know about it, why not intercept method execution and overwrite the parameter so as to avoid the exception to begin with?
Speculatively assuming that the method content is a black box to you and you are trying to do some hacky things here, you can use an around() advice and from there call proceed() multiple times with different argument values (e.g. some authentication token or password) until the called method does not throw an exception anymore.
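For that last variant, a sketch in annotation-style AspectJ (the pointcut, target class, and candidate tokens are all illustrative):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class RetryWithFallbackAspect {
    @Around("execution(* AuthService.login(String)) && args(token)")
    public Object retry(ProceedingJoinPoint pjp, String token) throws Throwable {
        String[] candidates = { token, "fallbackToken1", "fallbackToken2" }; // illustrative
        Throwable lastError = null;
        for (String candidate : candidates) {
            try {
                return pjp.proceed(new Object[] { candidate }); // re-run with a different argument
            } catch (Exception e) {
                lastError = e; // try the next candidate
            }
        }
        throw lastError; // every candidate failed
    }
}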
As you see, there are many ways to solve your practical problem depending on what exactly the problem is and what you want to achieve.
Having said all this, now let us return to your initial technical question of not catching, but actually neutralising an exception, i.e. somehow avoiding its being thrown at all. Because the AspectJ language does not contain technical means to do what you want (thank God!), you can look at other tools which can manipulate Java class files in a more low-level fashion. I have never used them productively, but I am pretty sure that you can do what you want using BCEL or Javassist.