private final ReactiveStringRedisTemplate redisTem;

@Override
public GatewayFilter apply(List<Role> roles) {
    return (exchange, chain) -> {
        ReactiveValueOperations<String, String> ops = redisTem.opsForValue();
        ops
            .get("aa")
            .map(t -> {
                log.info("[Redis] get value: {}", t);
                if (t != null) throw new GlobalException(ResCode.F_FOBID);
                return t;
            })
            .subscribe();
        return chain.filter(exchange);
    };
}
Hello! I want to throw a GlobalException when a value is returned from Redis
(GlobalException is my custom global exception).
But this code does not return the exception to the user and just proceeds to the next filter,
and I get this output in the terminal:
[Redis] get value: bbbbbbbbbbbbbbbbbbbbbb
[Operators:324] - Operator called default onErrorDropped
Caused by: com.api.sendgateway.errors.globals.GlobalException: Forbidden
at com.api.sendgateway.filters.part.AuthTokenFilter.lambda$apply$0(AuthTokenFilter.java:59)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:82)
at reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onNext(FluxUsingWhen.java:345)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:122)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:122)
at reactor.core.publisher.FluxFilter$FilterSubscriber.onNext(FluxFilter.java:113)
at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:82)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
at reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onNext(FluxDefaultIfEmpty.java:101)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:122)
at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:82)
at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:82)
at io.lettuce.core.RedisPublisher$ImmediateSubscriber.onNext(RedisPublisher.java:886)
at io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:291)
at io.lettuce.core.RedisPublisher$SubscriptionCommand.doOnComplete(RedisPublisher.java:773)
at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:65)
at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:63)
at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:747)
at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:682)
at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:599)
The filter flow I want to build is:
get the key from the user,
look up data in Redis with that key,
and if data is found in Redis, throw the exception (returning an error to the user without moving on to the next filter), as in the sketch below.
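For reference, a minimal sketch of that flow (hedged: resolveKey is a hypothetical helper standing in for however the key is obtained from the user's request, and Mono is reactor.core.publisher.Mono). The essential change from the code above is returning the Redis Mono as part of the filter chain instead of calling subscribe() on it separately, so the thrown error propagates to the Gateway's error handling instead of being dropped:
@Override
public GatewayFilter apply(List<Role> roles) {
    return (exchange, chain) -> redisTem.opsForValue()
            .get(resolveKey(exchange))                                                  // hypothetical: resolve the key from the request
            .flatMap(value -> Mono.<Void>error(new GlobalException(ResCode.F_FOBID)))   // value found in Redis -> reject
            .switchIfEmpty(Mono.defer(() -> chain.filter(exchange)));                   // no value -> continue the chain
}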
My dependencies are below:
dependencies {
// redis
implementation 'org.springframework.boot:spring-boot-starter-data-redis-reactive'
// actuator
implementation 'org.springframework.boot:spring-boot-starter-actuator'
// gateway
implementation 'org.springframework.cloud:spring-cloud-starter-gateway'
// lombok
compileOnly 'org.projectlombok:lombok'
annotationProcessor 'org.projectlombok:lombok'
// jwt
implementation group: 'io.jsonwebtoken', name: 'jjwt', version: '0.9.1'
implementation group: 'javax.xml.bind', name: 'jaxb-api', version: '2.3.1'
// test
testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
Related
I am trying to write to a peripheral in Android Kotlin using RxAndroidBle. The application writes to the peripheral, and the peripheral then responds to indicate whether the write request was successful.
That is, the peripheral evaluates the information sent to it and responds to the app: if it is the expected information it sends one response, otherwise it responds with a different one. In summary, it is a scenario very similar to an HTTP POST request: information is sent and the server responds with a status depending on whether the information meets the requirements. I have already managed to connect and read information from the peripheral in the following way:
override fun connectDeviceToGetInfoHardwareByBle(mac: String): Observable<Resource<HardwareInfoResponse>> {
val device: RxBleDevice = bleClient.getBleDevice(mac)
return Observable.defer {
device.bluetoothDevice.createBond()// it is a blocking function
device.establishConnection(false) // return Observable<RxBleConnection>
}
.delay(5, TimeUnit.SECONDS)
.flatMapSingle { connection ->
connection.requestMtu(515)
.flatMap {
Single.just(connection)
}
}
.flatMapSingle {
it.readCharacteristic(UUID.fromString(GET_HARDWARE_INFORMATION_CHARACTERISTIC))
.map { byteArray ->
evaluateHardwareInfoResponse(byteArray = byteArray)
}
}
.map {
Resource.success(data = it)
}
.take(1)
.onErrorReturn {
Timber.i("Rointe Ble* Error getting ble information. {$it}")
Resource.error(data = null, message = it.message.toString())
}
.doOnError {
Timber.i("Rointe Ble*","Error getting ble information."+it)
}
.subscribeOn(ioScheduler)
.observeOn(uiScheduler)
}
As you can see, the MTU is needed by the peripheral, and it answers with what I need. After that response I close the BLE connection and the app does other, independent work over the network (HTTP). It is then required to connect again, but this time it is necessary to write JSON information to the peripheral; the device analyzes that JSON and returns some answers that I need. How do I implement a write that waits for a response from the peripheral? Is a long write necessary for the JSON, given that I am negotiating the MTU on the connection? I'm developing this in Kotlin using the Repository pattern.
The JSON sent is this:
{
"data": {
"id_hardware": "[ID_HARDWARE]",
"product_brand": <value>,
"product_type": <value>,
"product_model": <value>,
"nominal_power": <value>,
"industrialization_process_date": <value>,
"platform_api_path": "[Host_API_REST]",
"platform_streaming_path": "[Host_STREAMING]",
"updates_main_path": "[Host_UPDATES]",
"updates_alternative_path": "[Host_ALTERNATIVE_UPDATES]",
"check_updates_time": <value>,
"check_updates_day": <value>,
"auth_main_path": "[Host_AUTHORIZATION]",
"auth_alternative_path": "[Host_BACKUP_AUTHORIZATION]",
"analytics_path": "[Host_ANALYTICS]",
"idToken": "[ID_TOKEN]",
"refreshToken": "[REFRESH_TOKEN]",
"expiresIn": "3600",
"apiKey": "[API_KEY]",
"factory_wifi_ssid": "[FACTORY_WIFI_SSID]",
"factory_wifi_security_type": "[FACTORY_WIFI_TYPE]",
"factory_wifi_passphrase": "[FACTORY_WIFI_PASS]",
"factory_wifi_dhcp": 1,
"factory_wifi_device_ip": "[IPv4]",
"factory_wifi_subnet_mask": "[SubNetMask_IPv4]",
"factory_wifi_gateway": "[IPv4]"
},
"factory_version": 1,
"crc": ""
}
The peripheral analyzes that JSON and gives me some answers according to the JSON sent.
Now, this is how I am trying to do the write while expecting a response:
private fun setupNotifications(connection: RxBleConnection): Observable<Observable<ByteArray>> =
connection.setupNotification(UUID.fromString(SET_FACTORY_SETTINGS_CHARACTERISTIC))
private fun performWrite(connection: RxBleConnection, notifications: Observable<ByteArray>, data: ByteArray): Observable<ByteArray> {
return connection.writeCharacteristic(UUID.fromString(SET_FACTORY_SETTINGS_CHARACTERISTIC), data).toObservable()
}
override fun connectDeviceToWriteFactorySettingsByBle(mac: String, data: ByteArray): Observable<Resource<HardwareInfoResponse>> {
val device: RxBleDevice = bleClient.getBleDevice(mac)
return Observable.defer {
//device.bluetoothDevice.createBond()// it is a blocking function
device.establishConnection(false) // return Observable<RxBleConnection>
}
.delay(5, TimeUnit.SECONDS)
.flatMapSingle { connection ->
connection.requestMtu(515)
.flatMap {
Single.just(connection)
}
}
.flatMap(
{ connection -> setupNotifications(connection).delay(5, TimeUnit.SECONDS) },
{ connection, deviceCallbacks -> performWrite(connection, deviceCallbacks, data) }
)
.flatMap {
it
}
//.take(1) // after the successful write we are no longer interested in the connection so it will be released
.map {
Timber.i("Rointe Ble: Result write: ok ->{${it.toHex()}}")
Resource.success(data = evaluateHardwareInfoResponse(it))
}
//.take(1)
.onErrorReturn {
Timber.i("Rointe Ble: Result write: failed ->{${it.message.toString()}}")
Resource.error(data = HardwareInfoResponse.NULL_HARDWARE_INFO_RESPONSE, message = "Error write on device.")
}
.doOnError {
Timber.i("Rointe Ble*","Error getting ble information."+it)
}
//.subscribeOn(ioScheduler)
.observeOn(uiScheduler)
}
As can be seen, the MTU is negotiated to the maximum and a single packet (the JSON shown above) is sent.
When I run my code it connects but shows this error:
com.polidea.rxandroidble2.exceptions.BleCannotSetCharacteristicNotificationException:
Cannot find client characteristic config descriptor (code 2) with
characteristic UUID 4f4a4554-4520-4341-4c4f-520001000002
Any help on Kotlin?
Thanks a lot!!
When I run my code it connects but shows this error:
com.polidea.rxandroidble2.exceptions.BleCannotSetCharacteristicNotificationException:
Cannot find client characteristic config descriptor (code 2) with
characteristic UUID 4f4a4554-4520-4341-4c4f-520001000002
You can fix this in two ways:
Change your peripheral code to include a Client Characteristic Config Descriptor on the characteristic you want to use notifications on; this is the preferred way, as it makes the peripheral conform to the Bluetooth specification
Use COMPAT mode when setting up the notification, which does not set the CCCD value at all; see the sketch below
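For example, a sketch against the rxandroidble2 API (NotificationSetupMode is com.polidea.rxandroidble2.NotificationSetupMode; connection and the characteristic UUID constant are the ones from the question):
// COMPAT mode skips writing the Client Characteristic Config Descriptor,
// so notifications can be set up on characteristics that do not expose a CCCD.
Observable<Observable<byte[]>> notifications = connection.setupNotification(
        UUID.fromString(SET_FACTORY_SETTINGS_CHARACTERISTIC),
        NotificationSetupMode.COMPAT);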
How do I clear the cached characteristic UUIDs? It seems the library caches the last registered UUIDs. How do I clear this cache?
It is possible to clear the cache by using BluetoothGatt#refresh and subsequently getting the new set of services, which allows bypassing the library's UUID helper; you need to use the functions that accept BluetoothGattCharacteristic instead of UUID.
Code that refreshes BluetoothGatt:
RxBleCustomOperation<Void> bluetoothGattRefreshCustomOp = (bluetoothGatt, rxBleGattCallback, scheduler) -> {
try {
Method bluetoothGattRefreshFunction = bluetoothGatt.getClass().getMethod("refresh");
boolean success = (Boolean) bluetoothGattRefreshFunction.invoke(bluetoothGatt);
if (!success) return Observable.error(new RuntimeException("BluetoothGatt.refresh() returned false"));
return Observable.<Void>empty().delay(500, TimeUnit.MILLISECONDS);
} catch (NoSuchMethodException e) {
return Observable.error(e);
} catch (IllegalAccessException e) {
return Observable.error(e);
} catch (InvocationTargetException e) {
return Observable.error(e);
}
};
Code that discovers services bypassing the library caches:
RxBleCustomOperation<List<BluetoothGattService>> discoverServicesCustomOp = (bluetoothGatt, rxBleGattCallback, scheduler) -> {
boolean success = bluetoothGatt.discoverServices();
if (!success) return Observable.error(new RuntimeException("BluetoothGatt.discoverServices() returned false"));
return rxBleGattCallback.getOnServicesDiscovered()
.take(1) // so this RxBleCustomOperation will complete after the first result from BluetoothGattCallback.onServicesDiscovered()
.map(RxBleDeviceServices::getBluetoothGattServices);
};
I noticed that Schedulers.enableMetrics() got deprecated, but I don't know what I should do to get all my schedulers metered in a typical use case (a Spring Boot application).
The Javadoc suggests using timedScheduler, but how should that be achieved in Spring Boot?
First off, here are my thoughts on why the Schedulers.enableMetrics() approach was deprecated:
The previous approach was flawed in several ways:
intrinsic dependency on the MeterRegistry#globalRegistry() without any way of using a different registry.
wrong level of abstraction and limited instrumentation:
it was not the schedulers themselves that were instrumented, but individual ExecutorService instances assumed to back the schedulers.
schedulers NOT backed by any ExecutorService couldn't be instrumented.
schedulers backed by MULTIPLE ExecutorService instances (e.g. a pool of workers) would produce multiple levels of metrics that are difficult to aggregate.
instrumentation was all-or-nothing, potentially polluting metrics backend with metrics from global or irrelevant schedulers.
A deliberate constraint of the new approach is that each Scheduler must be explicitly wrapped, which ensures that the correct MeterRegistry is used and that metrics are recognizable and aggregated for that particular Scheduler (thanks to the mandatory metricsPrefix).
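For example, wrapping a single, explicitly created Scheduler is a one-liner (a sketch; Micrometer here is reactor.core.observability.micrometer.Micrometer from the reactor-core-micrometer module, and the registry and prefix names are arbitrary):
MeterRegistry registry = new SimpleMeterRegistry();
// Wrap the Scheduler; its metrics go to the given registry under the mandatory prefix.
Scheduler timed = Micrometer.timedScheduler(
        Schedulers.newBoundedElastic(4, 100, "work"),
        registry,
        "my.app.work");
timed.schedule(() -> { /* task */ });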
I'm not a Spring Boot expert, but if you really want to instrument all the schedulers, including the global ones, here is a naive approach that aggregates data from all the schedulers of the same category, demonstrated in a Spring Boot app:
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@Configuration
static class SchedulersConfiguration {
@Bean
@Order(1)
public Scheduler originalScheduler() {
// For comparison, we can capture a new original Scheduler (which won't be disposed by setFactory, unlike the global ones)
return Schedulers.newBoundedElastic(4, 100, "compare");
}
@Bean
public SimpleMeterRegistry registry() {
return new SimpleMeterRegistry();
}
@Bean
public Schedulers.Factory instrumentedSchedulers(SimpleMeterRegistry registry) {
// Let's create a Factory that does the same as the default Schedulers factory in Reactor-Core, but with instrumentation
return new Schedulers.Factory() {
@Override
public Scheduler newBoundedElastic(int threadCap, int queuedTaskCap, ThreadFactory threadFactory, int ttlSeconds) {
// The default implementation maps to the vanilla Schedulers so we can delegate to that
Scheduler original = Schedulers.Factory.super.newBoundedElastic(threadCap, queuedTaskCap, threadFactory, ttlSeconds);
// IMPORTANT NOTE: in this example _all_ the schedulers of the same type will share the same prefix/name
// this would especially be problematic if gauges were involved as they replace old gauges of the same name.
// Fortunately, for now, TimedScheduler only uses counters, timers and longTaskTimers.
String prefix = "my.instrumented.boundedElastic"; // TimedScheduler will add `.scheduler.xxx` to that prefix
return Micrometer.timedScheduler(original, registry, prefix);
}
@Override
public Scheduler newParallel(int parallelism, ThreadFactory threadFactory) {
Scheduler original = Schedulers.Factory.super.newParallel(parallelism, threadFactory);
String prefix = "my.instrumented.parallel"; // TimedScheduler will add `.scheduler.xxx` to that prefix
return Micrometer.timedScheduler(original, registry, prefix);
}
@Override
public Scheduler newSingle(ThreadFactory threadFactory) {
Scheduler original = Schedulers.Factory.super.newSingle(threadFactory);
String prefix = "my.instrumented.single"; // TimedScheduler will add `.scheduler.xxx` to that prefix
return Micrometer.timedScheduler(original, registry, prefix);
}
};
}
@PreDestroy
void resetFactories() {
System.err.println("Resetting Schedulers Factory to default");
// Later on if we want to disable instrumentation we can reset the Factory to defaults (closing all instrumented schedulers)
Schedulers.resetFactory();
}
}
@Service
public static class Demo implements ApplicationRunner {
final Scheduler forComparison;
final SimpleMeterRegistry registry;
final Schedulers.Factory factory;
Demo(Scheduler forComparison, SimpleMeterRegistry registry, Schedulers.Factory factory) {
this.forComparison = forComparison;
this.registry = registry;
this.factory = factory;
Schedulers.setFactory(factory);
}
public void generateMetrics() {
Schedulers.boundedElastic().schedule(() -> {});
Schedulers.newBoundedElastic(4, 100, "bounded1").schedule(() -> {});
Schedulers.newBoundedElastic(4, 100, "bounded2").schedule(() -> {});
Micrometer.timedScheduler(
forComparison,
registry,
"my.custom.instrumented.bounded"
).schedule(() -> {});
Schedulers.newBoundedElastic(4, 100, "bounded3").schedule(() -> {});
}
public String getCompletedSummary() {
return Search.in(registry)
.name(n -> n.endsWith(".scheduler.tasks.completed"))
.timers()
.stream()
.map(c -> c.getId().getName() + "=" + c.count())
.collect(Collectors.joining("\n"));
}
@Override
public void run(ApplicationArguments args) throws Exception {
generateMetrics();
System.err.println(getCompletedSummary());
}
}
}
Which prints:
my.instrumented.boundedElastic.scheduler.tasks.completed=4
my.custom.instrumented.bounded.scheduler.tasks.completed=1
Notice how the metrics for the four Scheduler instances produced by the instrumented factory are aggregated together.
There's a bit of a hacky workaround for this: by default Schedulers uses ReactorThreadFactory, an internal private class which happens to be a Supplier<String>, supplying the "simplified name" (i.e. toString but without the configuration options) of the Scheduler.
One could use the following method to tentatively extract that name:
static String inferSimpleSchedulerName(ThreadFactory threadFactory, String defaultName) {
if (!(threadFactory instanceof Supplier)) {
return defaultName;
}
Object supplied = ((Supplier<?>) threadFactory).get();
if (!(supplied instanceof String)) {
return defaultName;
}
return (String) supplied;
}
Which can be applied to, e.g., the newParallel method in the factory:
String simplifiedName = inferSimpleSchedulerName(threadFactory, "para???");
String prefix = "my.instrumented." + simplifiedName; // TimedScheduler will add `.scheduler.xxx` to that prefix
This can then be demonstrated by submitting a few tasks to different parallel schedulers in the Demo#generateMetrics() part:
Schedulers.parallel().schedule(() -> {});
Schedulers.newParallel("paraOne").schedule(() -> {});
Schedulers.newParallel("paraTwo").schedule(() -> {});
And now it prints (blank lines for emphasis):

my.instrumented.paraOne.scheduler.tasks.completed=1
my.instrumented.paraTwo.scheduler.tasks.completed=1
my.instrumented.parallel.scheduler.tasks.completed=1

my.custom.instrumented.bounded.scheduler.tasks.completed=1
my.instrumented.boundedElastic.scheduler.tasks.completed=4
I'm trying to add additional baggage to the existing span on an HTTP server. I want to add a path variable to the span so it can be accessed from the log MDC and propagated on the wire to the next server I call via HTTP or Kafka.
My setup: Spring Cloud Sleuth Hoxton.SR5 and Spring Boot 2.2.5.
I tried adding the following setup and configuration:
spring:
  sleuth:
    propagation-keys: context-id, context-type
    log:
      slf4j:
        whitelisted-mdc-keys: context-id, context-type
and added an HTTP interceptor:
public class HttpContextInterceptor implements HandlerInterceptor {
private final Tracer tracer;
private final HttpContextSupplier httpContextSupplier;
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
if (httpContextSupplier != null) {
addContext(request, handler);
}
return true;
}
private void addContext(HttpServletRequest request, Object handler) {
final Context context = httpContextSupplier.getContext(request);
if (!StringUtils.isEmpty(context.getContextId())) {
ExtraFieldPropagation.set(tracer.currentSpan().context(), TracingHeadersConsts.HEADER_CONTEXT_ID, context.getContextId());
}
if (!StringUtils.isEmpty(context.getContextType())) {
ExtraFieldPropagation.set(tracer.currentSpan().context(), TracingHeadersConsts.HEADER_CONTEXT_TYPE, context.getContextType());
}
}
}
and an HTTP filter to affect the current span (according to the Spring docs):
public class TracingFilter extends OncePerRequestFilter {
private final Tracer tracer;
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
try (Tracer.SpanInScope ws = tracer.withSpanInScope(tracer.currentSpan())){
filterChain.doFilter(request, response);
}
}
}
The problem is that the logs don't contain my custom context-id and context-type, although I see them in the span context.
What am I missing?
Similar question: Spring cloud sleuth adding tag,
and an answer to it: https://stackoverflow.com/a/66554834
For some context: This is from the Spring Docs.
In order to automatically set the baggage values to Slf4j's MDC, you have to set the spring.sleuth.baggage.correlation-fields property with a list of allowed local or remote keys. E.g. spring.sleuth.baggage.correlation-fields=country-code will set the value of the country-code baggage into MDC.
Note that the extra field is propagated and added to MDC starting with the next downstream trace context. To immediately add the extra field to MDC in the current trace context, configure the field to flush on update.
// configuration
@Bean
BaggageField countryCodeField() {
return BaggageField.create("country-code");
}
@Bean
ScopeDecorator mdcScopeDecorator() {
return MDCScopeDecorator.newBuilder()
.clear()
.add(SingleCorrelationField.newBuilder(countryCodeField())
.flushOnUpdate()
.build())
.build();
}
// service
@Autowired
BaggageField countryCodeField;
countryCodeField.updateValue("new-value");
A way to flush the MDC in the current span is also described in the official Sleuth 2.0 -> 3.0 migration guide:
@Configuration
class BusinessProcessBaggageConfiguration {
BaggageField BUSINESS_PROCESS = BaggageField.create("bp");
/** {@link BaggageField#updateValue(TraceContext, String)} now flushes to MDC */
@Bean
CorrelationScopeCustomizer flushBusinessProcessToMDCOnUpdate() {
return b -> b.add(
SingleCorrelationField.newBuilder(BUSINESS_PROCESS).flushOnUpdate().build()
);
}
}
I wanted to set up a basic producer-consumer with Flink on Kafka, but I am having trouble producing data to an existing consumer via Java.
CLI solution
I set up a Kafka broker using the kafka_2.11-2.4.0 zip from https://kafka.apache.org/downloads with the commands
bin/zookeeper-server-start.sh config/zookeeper.properties
and bin/kafka-server-start.sh config/server.properties
I create a topic called transactions1 using
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic transactions1
Now I can use a producer and consumer on the command line to see that the topic has been created and works.
To set up the consumer I run
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic transactions1 --from-beginning
Now if any producer sends data to the topic transactions1 I will see it in the consumer console.
I test that the consumer is working by running
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic transactions1
and enter the following data lines in the producer CLI, which also show up in the consumer CLI.
{"txnID":1,"amt":100.0,"account":"AC1"}
{"txnID":2,"amt":10.0,"account":"AC2"}
{"txnID":3,"amt":20.0,"account":"AC3"}
Now I want to replicate step 3, i.e. the producer and consumer, in Java code, which is the core problem of this question.
So I set up a Gradle Java 8 project with this build.gradle:
...
dependencies {
testCompile group: 'junit', name: 'junit', version: '4.12'
compile group: 'org.apache.flink', name: 'flink-connector-kafka_2.11', version: '1.9.0'
compile group: 'org.apache.flink', name: 'flink-core', version: '1.9.0'
// https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-java
compile group: 'org.apache.flink', name: 'flink-streaming-java_2.12', version: '1.9.2'
compile group: 'com.google.code.gson', name: 'gson', version: '2.8.5'
compile group: 'com.twitter', name: 'chill-thrift', version: '0.7.6'
compile group: 'org.apache.thrift', name: 'libthrift', version: '0.11.0'
compile group: 'com.twitter', name: 'chill-protobuf', version: '0.7.6'
compile group: 'org.apache.thrift', name: 'protobuf-java', version: '3.7.0'
}
...
I set up a custom class, Transaction.class, where you can suggest changes to the serialization logic using Kryo, Protobuf or TBaseSerializer by extending the relevant Flink classes.
import com.google.gson.Gson;
import org.apache.flink.api.common.functions.MapFunction;
public class Transaction {
public final int txnID;
public final float amt;
public final String account;
public Transaction(int txnID, float amt, String account) {
this.txnID = txnID;
this.amt = amt;
this.account = account;
}
public String toJSONString() {
Gson gson = new Gson();
return gson.toJson(this);
}
public static Transaction fromJSONString(String some) {
Gson gson = new Gson();
return gson.fromJson(some, Transaction.class);
}
public static MapFunction<String, String> mapTransactions() {
MapFunction<String, String> map = new MapFunction<String, String>() {
@Override
public String map(String value) throws Exception {
if (value != null && value.trim().length() > 0) {
try {
return fromJSONString(value).toJSONString();
} catch (Exception e) {
return "";
}
}
return "";
}
};
return map;
}
@Override
public String toString() {
return "Transaction{" +
"txnID=" + txnID +
", amt=" + amt +
", account='" + account + '\'' +
'}';
}
}
Now it is time to use Flink to produce and consume streams on the topic transactions1.
public class SetupSpike {
public static void main(String[] args) throws Exception {
System.out.println("begin");
List<Transaction> txns = new ArrayList<Transaction>(){{
add(new Transaction(1, 100, "AC1"));
add(new Transaction(2, 10, "AC2"));
add(new Transaction(3, 20, "AC3"));
}};
// This list txns needs to be serialized in Flink as Transaction.class->String->ByteArray
//via producer and then to the topic in Kafka broker
//and deserialized as ByteArray->String->Transaction.class from the Consumer in Flink reading Kafka broker.
String topic = "transactions1";
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("zookeeper.connect", "localhost:2181");
properties.setProperty("group.id", topic);
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
//env.getConfig().addDefaultKryoSerializer(Transaction.class, TBaseSerializer.class);
// working Consumer logic below which needs edit if you change serialization
FlinkKafkaConsumer<String> myConsumer = new FlinkKafkaConsumer<String>(topic, new SimpleStringSchema(), properties);
myConsumer.setStartFromEarliest(); // start from the earliest record possible
DataStream<String> stream = env.addSource(myConsumer).map(Transaction::toJSONString);
//working Producer logic below which works if you are sinking a pre-existing DataStream
//but needs editing to work with Java List<Transaction> datatype.
System.out.println("sinking expanded stream");
MapFunction<String, String> etl = new MapFunction<String, String>() {
@Override
public String map(String value) throws Exception {
if (value != null && value.trim().length() > 0) {
try {
return fromJSONString(value).toJSONString();
} catch (Exception e) {
return "";
}
}
return "";
}
};
FlinkKafkaProducer<String> myProducer = new FlinkKafkaProducer<String>(topic,
new KafkaSerializationSchema<String>() {
@Override
public ProducerRecord<byte[], byte[]> serialize(String element, @Nullable Long timestamp) {
try {
System.out.println(element);
return new ProducerRecord<byte[], byte[]>(topic, stringToBytes(etl.map(element)));
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
}, properties, Semantic.EXACTLY_ONCE);
// stream.timeWindowAll(Time.minutes(1));
stream.addSink(myProducer);
JobExecutionResult execute = env.execute();
}
}
As you can see, I am not able to do this with the List<Transaction> txns provided. The above is working code I could gather from the Flink documentation to redirect topic stream data and to send data manually via the CLI producer. The problem is writing KafkaProducer code in Java that sends the data to the topic, which is further compounded by issues like:
Adding Timestamps, Watermarks
KeyBy operations
GroupBy/WindowBy operations
Adding custom ETL logic before Sinking.
Serialization/Deserialization logic in Flink
Can someone who has worked with Flink please help me produce the txns list to the transactions1 topic in Flink and then verify that it works with a consumer?
Also, any help with adding timestamps or doing some processing before sinking would be of great help. You can find the source code at https://github.com/devssh/kafkaFlinkSpike; the intent is to generate Flink boilerplate that adds details for "AC1" from an in-memory store and joins them with the Transaction events coming in in real time, sending the expanded output to the user.
Several points, in no particular order:
It would be better not to mix Flink version 1.9.2 together with version 1.9.0 as you've done here:
compile group: 'org.apache.flink', name: 'flink-connector-kafka_2.11', version: '1.9.0'
compile group: 'org.apache.flink', name: 'flink-core', version: '1.9.0'
compile group: 'org.apache.flink', name: 'flink-streaming-java_2.12', version: '1.9.2'
For tutorials on how to work with timestamps, watermarks, keyBy, windows, etc., see the online training materials from Ververica.
To use List<Transaction> txns as an input stream, you can do this (docs):
DataStream<Transaction> transactions = env.fromCollection(txns);
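From there, one way to sink that stream to the transactions1 topic is sketched below (a sketch reusing the topic and properties from the question, and using FlinkKafkaProducer's simpler SerializationSchema-based constructor instead of the KafkaSerializationSchema one):
transactions
        .map(Transaction::toJSONString)   // Transaction -> JSON string
        .addSink(new FlinkKafkaProducer<>(topic, new SimpleStringSchema(), properties));
env.execute("produce-transactions");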
For an example of how to handle serialization / deserialization when working with Flink and Kafka, see the Flink Operations Playground, in particular look at ClickEventDeserializationSchema and ClickEventStatisticsSerializationSchema, which are used in ClickEventCount.java and defined here. (Note: this playground has not yet been updated for Flink 1.10.)
For some reason the context inside the doAfterSuccessOrError method is not available (populated) from upstream. I've tried to access it using Mono.subscriberContext() (see the snippet). I would expect it to be present, but for some reason it is not. Am I doing something wrong?
public class LoggingRequestExchangeFunction implements ExchangeFilterFunction {
private final Logger log = LoggerFactory.getLogger(getClass());
@Override
public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next) {
long start = System.currentTimeMillis();
return next.exchange(request).doAfterSuccessOrError((res, ex) -> {
Mono.subscriberContext().map((ctx -> {
log.info("doAfterSuccessOrError Context {}",ctx);
// log req/res ...
return ctx;
})).subscribe();
}).subscriberContext( ctx -> {
log.info("SubscriberContext: {}" , ctx);
return ctx;
});
}
}
Here is a log output
23:16:59.426 INFO [reactor-http-epoll-2] .p.c.LoggingRequestExchangeFunction [] SubscriberContext: Context1{nexmo-tracing-context=TracingContext{{traceId=f04961da-933a-4d1d-85d5-3bea2c47432f, clientIp=N/A}}}
23:16:59.589 INFO [reactor-http-epoll-2] .p.c.LoggingRequestExchangeFunction [] doAfterSuccessOrError Context Context0{}
The reason is that you create a new Mono inside doAfterSuccessOrError which is independent of the original reactive chain, since you subscribe to it separately.
If you just want to log something there, an alternative is the doOnEach operator, which besides the signal type also gives you access to the context:
Mono.just("hello")
.doOnEach((signal) ->
{
if (signal.isOnError() || signal.isOnComplete())
{
Context ctx = signal.getContext();
log.info("doAfterSuccessOrError Context {}",ctx);
// log req/res ...
}
})
.subscriberContext( ctx -> {
log.info("SubscriberContext: {}" , ctx);
return ctx;
})
.subscribe();