SASL support in WSO2 Stream Processor / Siddhi

Question 1: Does a Siddhi application support connecting to Kafka over the SASL_SSL protocol with PLAIN as the SASL mechanism?
Question 2: If not, what SASL options are available? I am using WSO2 Stream Processor 4.4.
Below is a simple app that is expected to read from a Kafka topic and write the content as-is to the server console.
Note 1: The connection establishes just fine (deployment on the worker node is also successful), but nothing is reflected on the console.
Note 2: The program runs just fine when connected to a non-secure Kafka cluster (I remove optional.configuration and change the bootstrap.servers value appropriately).
@App:name("SKAppOne")

@source(
    type='kafka',
    topic.list='skapp1',
    group.id='g1',
    partition.no.list='0',
    threading.option='single.thread',
    bootstrap.servers='KAFKABROKERIP:KAFKABROKERPORT',
    optional.configuration="sasl.mechanism:PLAIN,
        security.protocol:SASL_SSL,
        sasl.jaas.config:org.apache.kafka.common.security.plain.PlainLoginModule required username='validuserid' password='validpassword';,
        ssl.truststore.location:validlocationfor_client.truststore.jks file,
        ssl.truststore.password:validpassword,
        ssl.keystore.location:validlocationfor_server.keystore.jks file,
        ssl.keystore.password:validpassword,
        ssl.key.password:validpassword",
    @map(type='json'))
define stream InputStreamFromSecureKafka (name string, location string);
@sink(type='log')
define stream SOutputStreamToConsole (name string, location string);
@info(name='kafkatosconsole')
from InputStreamFromSecureKafka
select *
insert into SOutputStreamToConsole;
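
For comparison, the working non-secure variant described in Note 2 looks roughly like this sketch (the broker placeholder is illustrative; optional.configuration is simply dropped):

@source(
    type='kafka',
    topic.list='skapp1',
    group.id='g1',
    partition.no.list='0',
    threading.option='single.thread',
    bootstrap.servers='PLAINTEXTBROKERIP:PLAINTEXTBROKERPORT',
    @map(type='json'))
define stream InputStreamFromKafka (name string, location string);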

Related

Where can I pass the properties file in DataStage when using the Kafka Connector?

There are some properties that I want to change, for instance security.protocol from SASL_PLAINTEXT to SASL_SSL, but the Kafka Connector in DataStage exposes only a limited set of properties (host, use Kerberos, principal name, keytab, topic name, consumer group, max poll records, max messages, reset policy, timeout and classpath).
Reading the documentation, the very first thing to do is to pass the JAAS configuration file. But my questions are:
Where should I put this file, on the DataStage side or the Kafka side?
How can I point to this file?
This is what I tried:
Added a before-job subroutine in DataStage that runs the following command:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
Added -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf to the Kafka Client Classpath in the Kafka Connector properties in DataStage.
But no matter what I do, every time I run the job the security.protocol parameter remains unchanged:
Kafka_Connector_2,1: security.protocol = SASL_PLAINTEXT
which means it is not reading the properties file.
Have you faced a similar problem?
The Kafka Connector does have support for SASL SSL
Kafka Connector Properties
This was added in JR61201 for 11.5 and is available in 11.7.1.1.
If you want to insert a JVM option such as
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
you should be able to leverage the CC_JVM_OPTIONS environment variable.
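
For example, a minimal sketch (where exactly you set the variable depends on your DataStage installation, e.g. dsenv or the job's environment; the path is the one from the question):

export CC_JVM_OPTIONS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"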

Is it possible to implement rate limiting with Geode Redis Adapter?

I am trying to use the Geode Redis Adapter as the server for the rate limiting provided by Spring Cloud Gateway. If I use a real Redis server everything works perfectly, but with the Geode Redis Adapter it doesn't.
I am not sure whether this functionality is supported.
I tried to start a Geode image (https://hub.docker.com/r/apachegeode/geode/) exposing the default Redis port 6379. After starting the container, I executed the following command using gfsh:
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
When I try to connect from my local machine with redis-cli -h localhost -p 6379, I can get connected.
My implementation is simple:
application.yaml
- id: rate-limitter
  predicates:
    - Path=${GUI_CONTEXT_PATH:/rate-limit}
    - Host=${APP_HOST:localhost:8080}
  filters:
    - name: RequestRateLimiter
      args:
        key-resolver: "#{@remoteAddrKeyResolve}"
        redis-rate-limiter:
          replenishRate: ${rate.limit.replenishRate:1}
          burstCapacity: ${rate.limit.burstCapacity:2}
  uri: ${APP_HOST:localhost:8080}
Application.java
@Bean
KeyResolver remoteAddrKeyResolve() {
    // Key the limiter on the caller's remote address, matching the bean's name.
    return exchange -> Mono.just(
            exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
}
When my application starts and I try to access /rate-limit, I expect it to connect to Redis and my page to be displayed.
However, my Spring application kept trying to reconnect (i.l.c.p.ReconnectionHandler: Reconnected to localhost:6379), so the page was not displayed and kept loading. FIXED in Edit 1 below.
The remaining problem: I am using RedisRateLimiter and tried to simulate the access with a for loop. Checking RedisRateLimiter.REMAINING_HEADER, the value is always -1. That doesn't seem right, because I don't have this issue with Redis itself.
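
A minimal version of such a probe (a sketch; it assumes the gateway listens on localhost:8080 and that the limiter populates the standard X-RateLimit-Remaining response header):

import org.springframework.web.reactive.function.client.WebClient;

public class RateLimitProbe {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");
        for (int i = 0; i < 5; i++) {
            // Print the HTTP status and the remaining-tokens header for each call.
            client.get().uri("/rate-limit")
                    .exchange()
                    .doOnNext(resp -> System.out.println(resp.statusCode() + " remaining="
                            + resp.headers().asHttpHeaders().getFirst("X-RateLimit-Remaining")))
                    .block();
        }
    }
}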
During application startup, I also see these messages when connecting to the Geode Redis Adapter:
Starting without optional epoll library
Starting without optional kqueue library
Is anything missing in my Geode Redis Adapter setup, or anything else in Spring?
Thank you
Edit 1: I was missing starting a locator and creating the region; that's why I wasn't able to connect.
start locator --name=locator
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
create region --name=redis-region --type=REPLICATE_PERSISTENT
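
With those in place, a quick smoke test from the client side (illustrative key and value):

redis-cli -h localhost -p 6379 SET probe 1
redis-cli -h localhost -p 6379 GET probe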

How to get a graph with transaction support from a remote Gremlin Server?

I have the following configuration: a remote Gremlin Server (TinkerPop 3.2.6) with JanusGraph as the graph database.
I have the gremlin-console (with the JanusGraph plugin) and this conf in remote.yaml:
hosts: [10.1.3.2] # IP of the gremlin-server host
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
I want to make the connection through the gremlin-server (not directly to JanusGraph via graph = JanusGraphFactory.build().set("storage.backend", "cassandra").set("storage.hostname", "127.0.0.1").open();) and get a graph that supports transactions.
Is that possible? As far as I can see, none of the TinkerFactory graphs support transactions.
As I understand it, to use JanusGraph through the Gremlin Server you should:
Define the IP and port in the config file of the gremlin-console:
conf/remote.yaml
Connect from the gremlin-console to the gremlin-server:
:remote connect tinkerpop.server conf/remote.yaml
==>Configured localhost/10.1.23.113:8182
...and work in remote mode (using :> or :remote console), i.e. send ALL commands (or scripts) to the gremlin-server:
:> graph.addVertex(...)
or
:remote console
==>All scripts will now be sent to Gremlin Server - [10.1.2.222/10.1.2.222:8182]
graph.addVertex(...)
You don't need to define variables for the graph and the traversal; instead use the server-side bindings:
graph - for the graph
g - for the traversal
In this case, you can use all the graph features provided by JanusGraph.
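
For example, in remote console mode (a sketch; the label and property names are illustrative):

:remote console
graph.addVertex(label, 'person', 'name', 'alice')
g.V().has('name', 'alice').valueMap()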
TinkerPop provides a Cluster object to hold the connection config. Using the Cluster object, a GraphTraversalSource can be spawned:
this.cluster = Cluster.build()
        .addContactPoints("192.168.0.2", "192.168.0.1")
        .port(8082)
        .credentials(username, password)
        .serializer(new GryoMessageSerializerV1d0(
                GryoMapper.build().addRegistry(JanusGraphIoRegistry.getInstance())))
        .maxConnectionPoolSize(8)
        .maxContentLength(10000000)
        .create();
this.gts = AnonymousTraversalSource
        .traversal()
        .withRemote(DriverRemoteConnection.using(cluster));
The gts object is thread-safe, and with a remote connection each query is executed in a separate transaction. Ideally gts should be a singleton.
Make sure to call gts.close() and cluster.close() on application shutdown, otherwise you may leak connections.
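
A usage sketch under those assumptions (label and property names are illustrative; close() may throw a checked Exception):

// Each remote traversal runs in its own server-side transaction.
List<Object> names = gts.V().hasLabel("person").values("name").toList();
gts.addV("person").property("name", "alice").iterate();

// On shutdown, release driver resources:
gts.close();
cluster.close();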
I believe that connecting a Java application to a running Gremlin Server using withRemote() will not support transactions. I have had trouble finding information on this as well, but as far as I can tell, if you want to do anything but read the graph, you need to use embedded JanusGraph and keep your remotely hosted persistent data in a storage backend that you connect to from your application, as you describe in the second half of your question.
https://groups.google.com/forum/#!topic/janusgraph-users/t7gNBeWC844
Some discussion I found around it here ^^ mentions auto-committing of single transactions in remote mode, but it doesn't seem to do that when I try.

Configuring SSL channel connectivity on MQ client machine

From a Linux server with the MQ client installed, we are trying to set up a connection to a secured channel. I am an ETL person and our MQ admin is struggling. Anyway, I will explain what I tried (which obviously hasn't worked yet); please let me know what else needs to be done to set up the connectivity. Thanks :)
tmp/mqmutility/keyrepmodmq> ls
AMQCLCHL.TAB key.kdb key.rdb key.sth MODE_MODELTAP_DEV_keyStLst.txt
export MQSSLKEYR=/tmp/mqmutility/keyrepmodmq/key
export MQCHLLIB=/tmp/mqmutility/keyrepmodmq
export MQCHLTAB=AMQCLCHL.TAB
/opt/mqm/samp/bin> amqsputc <queue_name> <queue_manager_name>
Sample AMQSPUT0 start
MQCONN ended with reason code 2058
Note: I can connect to the same queue manager over a non-SSL channel.
Any help would be great; other approaches you follow for SSL channel connectivity from a client machine would also be helpful.
When using a Client Channel Definition Table (CCDT) file (your AMQCLCHL.TAB), a return code of 2058 (MQRC_Q_MGR_NAME_ERROR) usually means that the queue manager name the application tried to use (your queue_manager_name) was not found in any of the channel entries in the CCDT file.
If you're using MQ V8, you can very easily display the entries in your CCDT file, and the queue manager names they are configured for, using the following commands:
runmqsc -n
DISPLAY CHANNEL(*) QMNAME
If none of the channels in the file has the queue manager name you are using when running the amqsputc sample, then this is the cause of your 2058 reason code.
Hopefully it will be clear from the listed entries which queue manager name you should be using; if not, update your question with more details (such as the contents of the file and the queue manager details) and we can help further.
You must ensure that you have a CLNTCONN channel defined with the queue manager name you want to use in its QMNAME field, and a matching named SVRCONN channel defined on the queue manager. Since you are using SSL, you must also ensure that both channels specify the same SSLCIPH.
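
For illustration, a matching pair might look like this in MQSC (channel name, connection name, queue manager name and cipher are all placeholders):

DEFINE CHANNEL(APP.SSL.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)
DEFINE CHANNEL(APP.SSL.SVRCONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('mqhost(1414)') QMNAME(QM1) SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256)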
Please read Creating server-connection and client-connection definitions on the server and its child topics.

ActiveMQ: how to make a broker distribute messages among several transportConnectors

Is it possible to make the ActiveMQ broker distribute messages received on one transportConnector to other transportConnectors as well?
The concrete use case is this: I have a Java client sending messages over the openwire transportConnector, and I would like to be able to read them over the mqtt transportConnector.
I use the sample jndi.properties file from the ActiveMQ page http://activemq.apache.org/jndi-support.html:
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
# use the following property to configure the default connector
java.naming.provider.url = tcp://localhost:61616
# use the following property to specify the JNDI name the connection factory
# should appear as.
#connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactory
# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue
# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
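
For context, this is roughly how such a jndi.properties is consumed on the Java side (a sketch; the lookup names "connectionFactory" and "MyTopic" come from the properties above):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JndiPublisher {
    public static void main(String[] args) throws Exception {
        // Look up the factory and topic registered in jndi.properties, then publish one message.
        Context ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("connectionFactory");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = (Topic) ctx.lookup("MyTopic");
        MessageProducer producer = session.createProducer(topic);
        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}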
I had to replace the default 'vm' transportConnector with the 'tcp' one because it did not work using 'vm'.
The messages are pushed to my Java MessageListener instance, but my MQTT client does not show them. I tried different topics, from 'example.MyTopic' up to '/example/MyTopic'.
Any help would be much appreciated.
Many thanks,
Roman
The broker does that by default, so you are not doing something right; check the admin console for the producers and consumers registered on the given destinations to see what is going on. You must remember that a Topic consumer will not receive messages sent to that Topic unless it was online at the time they were sent, or it had previously created a durable topic subscription.
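
For instance, with the Eclipse Paho client (a sketch; it assumes the broker's mqtt transportConnector listens on the default port 1883, and relies on ActiveMQ mapping the MQTT '/' separator to the JMS '.' separator, so the JMS topic example.MyTopic appears as example/MyTopic over MQTT):

import org.eclipse.paho.client.mqttv3.MqttClient;

public class MqttTopicSubscriber {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "mqtt-subscriber");
        client.connect();
        // Subscribe before the openwire producer sends, since this is not a durable subscription.
        client.subscribe("example/MyTopic", (topic, message) ->
                System.out.println(topic + ": " + new String(message.getPayload())));
        // Keep the JVM alive long enough to receive messages.
        Thread.sleep(60_000);
        client.disconnect();
    }
}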