I've searched for examples and I have not found any.
My intention is to use a Redis Stream as a source for Spring Cloud Dataflow and route messages to AWS Kinesis or S3 data sinks.
Redis is not listed as a Spring Cloud Dataflow source. Will I have to create a custom binder?
Redis only seems to be available as a sink, using Pub/Sub.
There used to be a redis-binder for Spring Cloud Stream, but that has been deprecated for a while now. We have plans to implement a binder for Redis Streams in the future, though.
That said, if you have data in Redis, it'd be good to start building a redis-source as a custom application. We have many suppliers/sources that you can use as a reference.
There's also a blog series currently in the works that can provide further guidance on building custom applications.
Lastly, feel free to contribute the redis-supplier/source to the applications repo; we can collaborate on a pull request.
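For illustration, here is a minimal sketch of what such a redis-source could look like, using Spring Cloud Stream's functional style together with Spring Data Redis's StreamReceiver. This is not an official application; the stream key "my-stream" and the bean name are placeholders:

```java
import java.util.function.Supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.connection.stream.StreamOffset;
import org.springframework.data.redis.stream.StreamReceiver;

import reactor.core.publisher.Flux;

@SpringBootApplication
public class RedisSourceApplication {

    public static void main(String[] args) {
        SpringApplication.run(RedisSourceApplication.class, args);
    }

    // Spring Cloud Stream binds this Supplier to the output destination
    // (spring.cloud.stream.bindings.redisSupplier-out-0.destination).
    @Bean
    public Supplier<Flux<String>> redisSupplier(ReactiveRedisConnectionFactory factory) {
        StreamReceiver<String, MapRecord<String, String, String>> receiver =
                StreamReceiver.create(factory);
        // "my-stream" is a placeholder stream key; each Redis Stream record is
        // emitted downstream with its field/value map rendered as a String.
        return () -> receiver
                .receive(StreamOffset.fromStart("my-stream"))
                .map(record -> record.getValue().toString());
    }
}
```

The binder on the classpath (Kafka, RabbitMQ, or the Kinesis binder) then decides where the emitted messages go, so the same supplier could feed the Kinesis or S3 sinks mentioned in the question.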
Do I really need to use Confluent (the CLI, maybe)? Can I write my own custom connector?
How can I write my first Kafka sink? How do I deploy it?
For now, let's assume we have the following details:
Topic: curious.topic
S3 bucket name: curious.s3
Data in the topic: Text/String
My OS: Mac
Start with the documentation for the S3 Sink connector, look over the configuration properties, and understand how to run Connect itself and deploy any connector (via the REST API); no, the Confluent CLI is never needed.
You don't need to "write your own sink" because Confluent already has an S3 Sink Connector. Sure, you could fork their open-source repo and compile it yourself, but that doesn't seem to be what you're asking.
You can download the connector using the separate confluent-hub command.
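With the details from the question, deployment could look roughly like this. The Connect host/port, region, flush.size, and connector name are assumptions, and the ByteArrayFormat/ByteArrayConverter pairing is just one way to land the text payloads as-is:

```
# Download the connector (one-time), assuming a local Confluent installation:
confluent-hub install confluentinc/kafka-connect-s3:latest

# Register the connector with the Connect REST API
# (assumes Connect on localhost:8083 and a bucket in us-east-1):
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "curious-s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "topics": "curious.topic",
    "s3.bucket.name": "curious.s3",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
    "flush.size": "1000",
    "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
}'
```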
Note: pinterest/secor does the same thing, without Kafka Connect.
I'm pretty new to S3. I'm trying to create a bucket and receive notifications on Object Created events using code only (not through the AWS Management Console).
I'm writing in dotnet so I'm using the AWSSDK.Core nuget package.
So far I've managed to create a bucket using the SDK.
It seems like a trivial task, though I couldn't find any references around the web on how to accomplish it.
Also, the object storage is S3-compatible, not actual AWS S3.
I tried configuring an SNS topic, but it seems that in order to enable notifications, the API requires SQS as the queueing service, not RabbitMQ.
I did see another approach: configuring a Lambda function that forwards messages to RabbitMQ, but I couldn't find references or documentation for that either.
Any help is appreciated :)
I'm building a proxy server that streams large files from clients (iOS, web, etc.) to S3. I'm planning to use Spring reactive with Netty. I'm catching up with Netty and reactive architecture, and so far it looks very promising. Has anyone of you solved something like this before? If yes, can you please share some pointers or a GitHub URL for a starter project? That would be great.
A few questions:
Is this possible to do with my current tech stack? I think it is. But wanted to get feedback.
With Netty and reactive architecture, chunks of data will be coming in an async fashion; how do I make sure I send the packets to S3 in sequence?
Also, does the AmazonS3 client support reactive file operations in their Java SDK? If not, then I will probably need to call their API directly using Spring's reactive WebClient.
I understand this question is not very specific and quite broad. The intent here is to find out whether anyone has solved something like this and can provide some tips.
Thanks.
With the upcoming AWS SDK 2.0, you should be able to use reactive file operations with S3, as it calls subscribe on the publisher stream you pass to it.
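As a rough sketch of that with the AWS SDK for Java 2.x (not a confirmed setup from the question): S3AsyncClient.putObject accepts an AsyncRequestBody built from any Reactive Streams Publisher of ByteBuffers, and the SDK subscribes to it. The bucket, key, and the assumption that the content length is known up front are placeholders here:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;

import org.reactivestreams.Publisher;

import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;

public class ReactiveS3Upload {

    private final S3AsyncClient s3 = S3AsyncClient.create();

    // Streams the given publisher of ByteBuffers to S3 without buffering the whole file.
    // The SDK subscribes to the publisher and consumes it sequentially, so chunks are
    // written in the order they are emitted.
    public CompletableFuture<PutObjectResponse> upload(Publisher<ByteBuffer> body, long contentLength) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("my-bucket")          // placeholder bucket name
                .key("uploads/large-file")    // placeholder object key
                .contentLength(contentLength) // needed when the SDK cannot infer the length
                .build();
        return s3.putObject(request, AsyncRequestBody.fromPublisher(body));
    }
}
```

Since the SDK consumes the publisher sequentially, this also covers the ordering concern for a single put; multipart uploads would need more care.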
I want to transfer data from Kafka to HDFS with Confluent, and I have successfully run the quickstart experiments in CLI mode.
Now I intend to deploy the Confluent Platform in a production environment. Is there a detailed tutorial about distributed deployment?
Also, there are many topics in Kafka, such as register_info, video_play_info, video_like_info, video_repost_info, etc.
I need to process messages with different converters and write them to different Hive tables.
What should I do?
Regarding "I need to process messages with different converters and write them to different Hive tables":
Run bin/connect-distributed etc/kafka/connect-distributed.properties
Create individual JSON files for each HDFS connector (a sample config is sketched after these steps)
POST them to the REST endpoint of Kafka Connect
Distributed mode is documented here
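As a hedged illustration of step 2, one such JSON file per topic might look like the following; the connector name, HDFS URL, Hive metastore URI, database, flush.size, and the choice of AvroFormat are placeholder values:

```
{
  "name": "hdfs-sink-register-info",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "register_info",
    "hdfs.url": "hdfs://namenode:8020",
    "flush.size": "1000",
    "format.class": "io.confluent.connect.hdfs.avro.AvroFormat",
    "hive.integration": "true",
    "hive.metastore.uris": "thrift://hive-metastore:9083",
    "hive.database": "default",
    "schema.compatibility": "BACKWARD"
  }
}
```

Repeating this with a different name, topics value, and (where needed) format.class or converter settings gives each topic its own converter and its own Hive table.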
I'm deploying a small Spring Cloud Stream project, using only http sources and jdbc sinks (3 instances each). The estimated load is 10 hits/second.
I was thinking of using Redis because I feel more comfortable with it, but in the latest documentation almost all the references are to Kafka and RabbitMQ, so I am wondering whether Redis is not going to be supported in the future or whether there is any issue with using Redis.
Regards
Redis is not recommended for production with Spring Cloud Stream - the binder is not fully functional and message loss is possible.