Kafka S3 Sink basic doubts

Do I really need to use Confluent (the CLI, maybe)? Can I write my own custom connector?
How can I write my first Kafka sink connector? How do I deploy it?
For now, let's assume we have the following details:
Topic: curious.topic
S3 bucket name: curious.s3
Data in the topic: Text/String
My OS: Mac

Start with the documentation for the S3 Sink connector, look over its configuration properties, and learn how to run Connect itself and deploy any connector (use the REST API); no, the Confluent CLI is never needed.
You don't need to "write your own sink" because Confluent already has an S3 Sink Connector. Sure, you could fork their open-source repo and compile it yourself, but that doesn't seem to be what you're asking.
You can download the connector using a different command, confluent-hub.
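For example, here is a minimal sketch of registering the sink through the Connect REST API, assuming a distributed Connect worker on localhost:8083, the connector already installed, and us-east-1 as the bucket's region (adjust to yours):

    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
      "name": "curious-s3-sink",
      "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "curious.topic",
        "s3.bucket.name": "curious.s3",
        "s3.region": "us-east-1",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
        "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
        "flush.size": "100"
      }
    }'

ByteArrayFormat paired with the ByteArrayConverter writes records out untouched, which suits plain text/string data; the flush.size of 100 is only an example value.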
Note: pinterest/secor does the same thing, without Kafka Connect.

Related

S3 Buckets notifications to RabbitMQ using dotnet sdk

I'm pretty new to S3. I'm trying to create a bucket and receive notifications on Object Created events using code only (not with the AWS Management UI).
I'm writing in dotnet, so I'm using the AWSSDK.Core NuGet package.
So far I've managed to create a bucket using the SDK.
It seems like a trivial task, though I couldn't find references around the web for accomplishing it.
Also, the object storage is S3-compatible, not AWS S3.
I tried configuring an SNS topic, but it seems that in order to enable notifications, the API requires SQS as the queueing service, not RabbitMQ.
I did see another approach, configuring a Lambda function that forwards messages to RabbitMQ, but I couldn't find references or documentation for that either.
Any help is appreciated :)

Kafka S3 Source Connector

I have a requirement where sources outside of our application will drop a file in an S3 bucket that we have to load into a Kafka topic. I am looking at Confluent's S3 Source connector and am currently working on defining the configuration for setting up the connector in our environment. But a couple of posts indicated that you can use the S3 Source connector only if you have used the S3 Sink connector to drop the file in S3.
Is the above true? Where / what property do I use to define the output topic in the configuration? And can the messages be transformed when reading them from S3 and putting them on the topic? Both will be JSON / Avro formats.
Confluent's Quick Start example also assumes you have used the S3 Sink connector, hence the question.
Thank you
I received a response from Confluent confirming that the S3 Source connector can only be used with the Confluent S3 Sink connector; it cannot be used independently.
Confluent released version 2.0.0 on 2021-12-15. This version includes a generalized S3 source mode.
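With 2.0.0 or later, a generalized-mode configuration might look like the sketch below; the bucket name, directory, and topic mapping are placeholders, and the property names should be double-checked against the connector documentation for your version:

    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
      "name": "s3-generic-source",
      "config": {
        "connector.class": "io.confluent.connect.s3.source.S3SourceConnector",
        "mode": "GENERIC",
        "s3.bucket.name": "my-bucket",
        "topics.dir": "incoming",
        "topic.regex.list": "my.output.topic:.*",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "tasks.max": "1"
      }
    }'

topic.regex.list is what maps S3 object keys to the output topic. For light transformation on the way in, Kafka Connect's Single Message Transforms (the transforms property) can be applied to a source connector like any other.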

How can I configure Redis as a Spring Cloud Dataflow Source?

I've searched for examples and I have not found any.
My intention is to use a Redis Stream as a source to Spring Cloud Dataflow and route messages to AWS Kinesis or S3 data sinks
Redis is not listed as a Spring Cloud Dataflow source. Will I have to create a custom binder?
Redis only seems to be available as a sink, using Pub/Sub.
There used to be a redis-binder for Spring Cloud Stream, but that has been deprecated for a while now. We have plans to implement a binder for Redis Streams in the future, though.
That said, if you have data in Redis, it'd be good to start building a redis-source as a custom application. We have many suppliers/sources that you can use as a reference.
There's currently also a blog series in the works, which can provide further guidance when building custom applications.
Lastly, feel free to contribute the redis-supplier/source to the applications repo, we can collaborate on a pull request.
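Until an official redis-source exists, a custom one can be quite small. Below is a minimal, hypothetical sketch of a Spring Cloud Stream source that polls a Redis Stream via Spring Data Redis; the stream key "events" and the binding name are assumptions, and a production source would want consumer groups so offsets survive restarts:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;
    import java.util.stream.Collectors;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;
    import org.springframework.data.redis.connection.stream.MapRecord;
    import org.springframework.data.redis.connection.stream.ReadOffset;
    import org.springframework.data.redis.connection.stream.StreamOffset;
    import org.springframework.data.redis.core.StringRedisTemplate;

    @SpringBootApplication
    public class RedisStreamSourceApplication {

        public static void main(String[] args) {
            SpringApplication.run(RedisStreamSourceApplication.class, args);
        }

        // Spring Cloud Stream polls this Supplier and publishes each returned
        // batch to the destination bound to redisSource-out-0 (e.g. Kinesis
        // or S3 via the corresponding binder).
        @Bean
        public Supplier<List<String>> redisSource(StringRedisTemplate redis) {
            AtomicReference<String> lastSeenId = new AtomicReference<>("0-0");
            return () -> {
                // XREAD everything after the last entry we emitted.
                List<MapRecord<String, Object, Object>> records = redis.opsForStream()
                    .read(StreamOffset.create("events", ReadOffset.from(lastSeenId.get())));
                if (records == null || records.isEmpty()) {
                    return List.of();
                }
                lastSeenId.set(records.get(records.size() - 1).getId().getValue());
                // Flatten each stream entry's field map into a String payload.
                return records.stream()
                    .map(record -> record.getValue().toString())
                    .collect(Collectors.toList());
            };
        }
    }

Note this sketch keeps the last stream ID in memory only; a real source would use XREADGROUP through a consumer group and acknowledge entries after they are published.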

How to send data to AWS S3 from Kafka using Kafka Connect without Confluent?

I have a local instance of Apache Kafka 2.0.0, and it is running very well. In my test I produce and consume data from Twitter and put it in a specific topic, twitter_tweets, and everything is OK. But now I want to consume the topic twitter_tweets with Kafka Connect using the Kafka Connect S3 connector and, obviously, store the data in AWS S3 without using the Confluent CLI.
Can I do this without Confluent? Anyone have an example or something to help me?
without Confluent
S3 Sink is open source; so is Apache Kafka Connect.
The Connect framework is not specific to Confluent.
You may use a Kafka Connect Docker image, for example, or you may use confluent-hub to install the S3 connector into your own Kafka Connect installation.
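As an illustration, here is a hedged sketch of running the S3 sink against a plain Apache Kafka installation in standalone mode; the bucket name and region are placeholders, and AWS credentials are assumed to come from the default provider chain:

    # Install the connector into your Connect plugin path (one-time).
    confluent-hub install confluentinc/kafka-connect-s3:latest

    # s3-sink.properties
    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    topics=twitter_tweets
    s3.bucket.name=my-bucket
    s3.region=us-east-1
    storage.class=io.confluent.connect.s3.storage.S3Storage
    format.class=io.confluent.connect.s3.format.json.JsonFormat
    flush.size=1000

    # Start a standalone worker from the Kafka installation directory.
    bin/connect-standalone.sh config/connect-standalone.properties s3-sink.properties

The confluent-hub CLI here only downloads the plugin; neither Confluent Platform nor the Confluent CLI is required to run the worker itself.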

Can't find extensions to add Redis as a destination in Mirth Connect

I am new to Mirth Connect and want to publish a message after processing an HL7 file. I tried to find an extension for Redis but couldn't find anything.
Can anyone help me find a Redis extension for Mirth Connect?
This is something you have to do through custom code. The easiest solution would be to download the Redis Java client. From there you can bring in the jar and reference it inside of a JavaScript connector.
To solve this, take a look at:
https://redislabs.com/lp/redis-java/ for how to reference the specific Redis methods, and
Mirth connect to mongo db connectivity for how to reference external libraries in Mirth Connect.
This should give you all that you need to solve the issue.
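For illustration, these are roughly the Jedis calls involved, shown here as plain Java; from a Mirth JavaScript step the same class is reachable as Packages.redis.clients.jedis.Jedis once the jar is on the channel's classpath. The host, port, and channel name are assumptions:

    import redis.clients.jedis.Jedis;

    public class PublishProcessedHl7 {
        public static void main(String[] args) {
            // Connect to Redis (host and port are assumptions).
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Publish the processed HL7 payload to a Pub/Sub channel.
                jedis.publish("hl7.processed", "<processed HL7 message here>");
            }
        }
    }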