I started Zookeeper with the following properties, i.e. zookeeper.properties:
dataDir=/tmp/zookeeper
clientPort=2186
maxClientCnxns=0
auto.offset.reset=smallest
authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
zookeeper_jaas.conf
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
server.properties
group.initial.rebalance.delay.ms=0
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
super.users=User:admin
zookeeper.set.acl=true
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
The error is as follows:
java.lang.SecurityException: zookeeper.set.acl is true, but the verification of the JAAS login file failed.
I have tried the solution below, but despite making the changes it fails again with a different error.
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
server.properties is the same as above, but startup fails with the following error:
[2018-02-23 10:16:04,459] ERROR Invalid ACL (kafka.utils.ZKCheckedEphemeral)
[2018-02-23 10:16:04,459] ERROR Invalid ACL (kafka.utils.ZKCheckedEphemeral)
[2018-02-23 10:16:04,460] FATAL [Kafka Server 0], Fatal error during KafkaServer
startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkException:
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL
In Kafka you also need to configure the SASL client that is used when connecting to Zookeeper. This is done using the Client context in the Kafka JAAS config, e.g.
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
If needed, the context name can be changed using the zookeeper.sasl.clientconfig system property.
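For example, assuming the combined JAAS file above is saved as kafka_server_jaas.conf, the broker can be started with the JAAS file passed as a JVM option (the path is illustrative):

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties

By default the Zookeeper client looks for a context named Client; passing -Dzookeeper.sasl.clientconfig=MyContext would make it use MyContext instead.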
I am trying to put a value in an Infinispan cache using the Hotrod Node.js client. The code runs fine if the server is installed locally. However, when I run the same code with the Infinispan server hosted in a Docker container, I get the following error:
java.lang.SecurityException: ISPN006017: Unauthorized 'PUT' operation
const infinispan = require('infinispan');

// inside an async function
let client;
try {
  client = await infinispan.client({
    port: 11222,
    host: '127.0.0.1'
  }, {
    cacheName: 'testcache'
  });
  console.log(`Connected to cache`);
  await client.put('test', 'hello 1');
  await client.disconnect();
} catch (e) {
  console.log(e);
  if (client) {
    await client.disconnect();
  }
}
I have also tried setting the CORS Allow All option on the server.
You need to provide a custom config.yaml to the Docker container with the following configuration:

endpoints:
  hotrod:
    auth: false
    enabled: false
    qop: auth
    serverName: infinispan
Unfortunately the Node.js client doesn't support authentication yet. The issue tracking this is https://issues.redhat.com/projects/HRJS/issues/HRJS-36
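As a sketch of how the file might be supplied, assuming the infinispan/server image reads a user config mounted into the container (both the mount point and the -c flag are assumptions; check the documentation for your image version):

$ docker run -p 11222:11222 \
    -v $(pwd)/config.yaml:/user-config/config.yaml \
    infinispan/server -c /user-config/config.yaml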
During a GET request in Postman (https://localhost:9001/test) I received an error:
Error: write EPROTO 8768:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:c:\users\administrator\buildkite-agent\builds\pm-electron\postman\electron-release\vendor\node\deps\openssl\openssl\ssl\record\ssl3_record.c:252:
Warning: This request did not get sent completely and might not have all the required system headers.
Postman Configuration:
SSL certificate verification is disabled;
Proxy configuration - Default Proxy Configuration and Proxy configurations for sending requests are disabled;
Request timeout in ms - 0.
For localhost you don't really need HTTPS; please try http://localhost:9001/test
Check that you are using the HTTP protocol, not HTTPS, to send requests to the server:
example
export const config = {
  baseUrl: "http://localhost:4000"
}
It has to be http, not https, for localhost:
"http://localhost:3000"
NOT
"https://localhost:3000"
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: 'smtp.gmail.com',
  port: 587,
  secure: false,      // port 587 upgrades the connection via STARTTLS, so secure stays false
  requireTLS: true,
  auth: {
    user: 'yourmail@gmail.com',
    pass: 'yourpass'
  }
});
Notice that secure: is false.
The error occurs when secure: is true.
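With the transporter configured, a minimal send might look like this (addresses are placeholders):

transporter.sendMail({
  from: 'yourmail@gmail.com',
  to: 'recipient@example.com',
  subject: 'Test',
  text: 'Hello from nodemailer'
}, (err, info) => {
  if (err) {
    console.error(err);        // auth or TLS problems surface here
  } else {
    console.log(info.response);
  }
});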
In my case the reason was an incorrect link. In the controller I had used

@RequestMapping(value="/{baseSiteId}/test")

and this {baseSiteId} was not what I expected.
I am trying to configure pithos.io on my server testmbr1.kabuter.com:8081:
Here is how I start pithos.io:
java -jar pithos-0.7.5-standalone.jar -f pithos.yaml
My pithos.yaml:

service:
  host: "0.0.0.0"
  port: 8081

logging:
  level: info
  console: true
  overrides:
    io.pithos: debug

options:
  service-uri: testmbr1.kabuter.com
  default-region: myregion

keystore:
  keys:
    AKIAIOSFODNN7EXAMPLE:
      master: true
      tenant: test@example.com
      secret: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'

bucketstore:
  default-region: myregion
  cluster: "45.33.37.148"
  keyspace: storage

regions:
  myregion:
    metastore:
      cluster: "45.33.37.148"
      keyspace: storage
    storage-classes:
      standard:
        cluster: "45.33.37.148"
        keyspace: storage
        max-chunk: "128k"
        max-block-chunk: 1024

cassandra:
  saved_caches_directory: "target/db/saved_caches"
  data_file_directories:
    - "target/db/data"
  commitlog_directory: "target/db/commitlog"
I am using the AWS Java SDK to connect. Below is my JUnit test:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.junit.Test;

public class PithosIOTest {
    @Test
    public void testPithosIO() {
        try {
            ClientConfiguration config = new ClientConfiguration();
            config.setSignerOverride("S3SignerType");
            EndpointConfiguration endpointConfiguration = new EndpointConfiguration(
                    "http://testmbr1.kabuter.com:8081", "myregion");
            BasicAWSCredentials awsCreds = new BasicAWSCredentials("AKIAIOSFODNN7EXAMPLE",
                    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion("myregion")
                    .withClientConfiguration(config)
                    .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                    .withEndpointConfiguration(endpointConfiguration)
                    .build();
            s3Client.createBucket("mybucket1");
            System.out.println(s3Client.getRegionName());
            System.out.println(s3Client.listBuckets());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
My problems are: 1) I am getting: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to mybucket1.testmbr1.kabuter.com:8081 [mybucket1.testmbr1.kabuter.com/198.105.254.130, mybucket1.testmbr1.kabuter.com/104.239.207.44] failed: connect timed out
This was fixed by adding a mybucket1.testmbr1 CNAME record pointing to testmbr1.kabuter.com.
2) While trying to create a bucket with s3Client.createBucket("mybucket1") I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: The request signature we calculated does not match the signature you provided. Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d98b7908-d11e-458a-be27-254b136f344a), S3 Extended Request ID: d98b7908-d11e-458a-be27-254b136f344a
How do I get this working? pithos.io seems to have limited documentation.
Any pointers?
Since my endpoint was using a non-standard port:
http://testmbr1.kabuter.com:8081
I had to define service-uri in pithos.yaml with the port as well:
service-uri: testmbr1.kabuter.com:8081
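That is, keeping the rest of pithos.yaml unchanged, the options section becomes:

options:
  service-uri: testmbr1.kabuter.com:8081
  default-region: myregion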
I am facing the following error while enabling SASL authentication for Zookeeper and the broker.
[2017-04-18 15:54:10,476] DEBUG Size of client SASL token: 0
(org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,476] ERROR cnxn.saslServer is null: cnxn object did not initialize its saslServer properly. (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,478] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-04-18 15:54:10,478] DEBUG Received event: WatchedEvent state:AuthFailed type:None path:null (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Leaving process event (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient... (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-04-18 15:54:10,478] DEBUG Closing ZooKeeper connected to localhost:2181 (org.I0Itec.zkclient.ZkConnection)
[2017-04-18 15:54:10,478] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient...done (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,480] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-04-18 15:54:10,482] INFO shutting down (kafka.server.KafkaServer)
The following configuration is given in the JAAS file, which is passed via KAFKA_OPTS as a JVM parameter:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
The Kafka broker's server.properties has the following extra fields set:
zookeeper.set.acl=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=path
ssl.keystore.password=anything
ssl.key.password=anything
ssl.truststore.location=path
ssl.truststore.password=anything
Zookeeper properties are as follows:
authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
I found the issue by increasing the log level to DEBUG. Basically, follow the steps below. I don't use SSL, but you can integrate it without any issue.
Following are my configuration files:
server.properties
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
advertised.host.name=localhost
num.partitions=1
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=30000000
log.flush.interval.ms=1800000
log.retention.minutes=30
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
super.users=User:admin
zookeeper.properties
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none
consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
Now here are the most important files for making your server start without any issue:
zookeeper_jaas.conf
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
After doing all this configuration, on a first terminal window:
Terminal 1 (start the Zookeeper server)
From the Kafka root directory:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Terminal 2 (start the Kafka server)
From the Kafka root directory:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties
[BEGIN UPDATE]
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret";
};
Terminal 3 (start the Kafka consumer)
On a client terminal, export the client JAAS conf file and start the consumer:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
Terminal 4 (start the Kafka producer)
If you also want to produce, run this in another terminal window:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties
[END UPDATE]
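With the broker running as the admin super user, topic-level permissions for other principals can be granted with the kafka-acls.sh tool; for example, to allow a hypothetical user alice to read test-topic:

$ bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --operation Read --topic test-topic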
You need to create a JAAS config file for Zookeeper and make Zookeeper use it.
Create a JAAS config file for Zookeeper with content like this:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="admin-secret";
};
where the user (admin) and password (admin-secret) must match the username and password that you have in the Client section of the Kafka JAAS config file.
To make Zookeeper use the JAAS config file, pass the following JVM flag to Zookeeper, pointing to the file created before:
-Djava.security.auth.login.config=/path/to/server/jaas/file.conf
If you are using the Zookeeper included with the Kafka package, you can launch Zookeeper like this, assuming that your Zookeeper JAAS config file is located at ./config/zookeeper_jaas.conf:
EXTRA_ARGS=-Djava.security.auth.login.config=./config/zookeeper_jaas.conf ./bin/zookeeper-server-start.sh ./config/zookeeper.properties
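To verify that the SASL ACLs are actually applied, you can inspect a znode with the zookeeper-shell tool that ships with Kafka (/brokers is just an example path):

$ bin/zookeeper-shell.sh localhost:2181
getAcl /brokers

With zookeeper.set.acl=true you should see a sasl ACL entry for the admin principal on Kafka-created znodes, in addition to (or instead of) the open world:anyone ACL.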
I have an error:
Caused by: FWLSE0099E: An error occurred while invoking procedure [project EMoney]InquiryAdapters/HttpRequestFWLSE0100E: parameters: [project EMoney]
Http request failed: org.apache.http.conn.HttpHostConnectException: Connect to rss.cnn.com:80 [rss.cnn.com/74.125.200.121] failed: Connection timed out: connect
FWLSE0101E: Caused by: [project EMoney]org.apache.http.conn.HttpHostConnectException: Connect to rss.cnn.com:80 [rss.cnn.com/74.125.200.121] failed: Connection timed out: connectjava.lang.RuntimeException: Http request failed: org.apache.http.conn.HttpHostConnectException: Connect to rss.cnn.com:80 [rss.cnn.com/74.125.200.121] failed: Connection timed out: connect
Caused by: org.apache.http.conn.HttpHostConnectException: Connect to rss.cnn.com:80 [rss.cnn.com/74.125.200.121] failed: Connection timed out: connect
Caused by: java.net.ConnectException: Connection timed out: connect
inquiryAdapters.xml
<wl:adapter name="InquiryAdapters"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:wl="http://www.ibm.com/mfp/integration"
    xmlns:http="http://www.ibm.com/mfp/integration/http">
  <displayName>InquiryAdapters</displayName>
  <description>InquiryAdapters</description>
  <connectivity>
    <connectionPolicy xsi:type="http:HTTPConnectionPolicyType">
      <protocol>http</protocol>
      <domain>rss.cnn.com</domain>
      <port>80</port>
      <connectionTimeoutInMilliseconds>30000</connectionTimeoutInMilliseconds>
      <socketTimeoutInMilliseconds>30000</socketTimeoutInMilliseconds>
      <maxConcurrentConnectionsPerNode>50</maxConcurrentConnectionsPerNode>
      <!-- The following properties are used by the adapter's key manager for choosing a specific certificate from the key store
      <sslCertificateAlias></sslCertificateAlias>
      <sslCertificatePassword></sslCertificatePassword>
      -->
    </connectionPolicy>
  </connectivity>
  <procedure name="getStories"/>
  <procedure name="getStoriesFiltered"/>
  <procedure name="getFeedsFiltered"/>
</wl:adapter>
inquiryAdapters.impl
function getStories(interest) {
  var path = getPath(interest);
  var input = {
    method : 'get',
    returnedContentType : 'xml',
    path : path
  };
  return WL.Server.invokeHttp(input);
}

function getStoriesFiltered(interest) {
  var path = getPath(interest);
  var input = {
    method : 'get',
    returnedContentType : 'xml',
    path : path,
    transformation : {
      type : 'xslFile',
      xslFile : 'filtered.xsl'
    }
  };
  return WL.Server.invokeHttp(input);
}

function getFeedsFiltered() {
  var input = {
    method : 'get',
    returnedContentType : 'xml',
    path : "rss.xml",
    transformation : {
      type : 'xslFile',
      xslFile : 'filtered.xsl'
    }
  };
  return WL.Server.invokeHttp(input);
}

function getPath(interest) {
  if (interest == undefined || interest == '') {
    interest = '';
  } else {
    interest = '_' + interest;
  }
  return 'rss/edition' + interest + '.rss';
}
This happens when I try to invoke the (HTTP) adapters.
If you have followed the steps below and still get a "Connection timed out" error, you likely have a network issue unrelated to MobileFirst Platform 6.3: check for any firewalls that prevent your connection from reaching CNN.com.
Create a new project
Create a new HTTP adapter
Right-click on the adapter folder > Deploy MobileFirst Adapter
Right-click on the adapter folder > Call MobileFirst Adapter
A browser window with the response should now open.
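For reference, the same procedure can also be invoked from client-side code; a minimal sketch using the Worklight client API (the 'world' parameter value is illustrative):

WL.Client.invokeProcedure({
  adapter : 'InquiryAdapters',
  procedure : 'getStories',
  parameters : ['world']
}, {
  onSuccess : function(result) { console.log(result.invocationResult); },
  onFailure : function(error) { console.log(error); }
});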