I'm trying to connect to a remote Kafka broker as a consumer (Kafka version 0.11.0.3) over SSL using the command-line tool. The command is the following:
kafka-console-consumer.bat \
--bootstrap-server host:port \
--topic topicName \
--from-beginning \
--group groupId \
--consumer.config ssl.properties
The ssl.properties file:
security.protocol=SSL
ssl.truststore.location=path/to/truststore.jks
ssl.truststore.password=1234567
ssl.keystore.location=path/to/keystore.jks
ssl.keystore.password=1234567
ssl.key.password=1234567
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.truststore.type=JKS
ssl.keystore.type=JKS
The exception I get:
[2020-04-28 17:24:39,522] ERROR [Consumer clientId=consumer-1, groupId=binomix] Connection to node -1 (/178.208.149.84:9301) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-04-28 17:24:39,524] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1582)
at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:544)
at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1216)
at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1184)
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:471)
at org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:448)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:313)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:265)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:170)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:547)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:235)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:317)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1226)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1191)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1176)
at kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:439)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:105)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:77)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
at sun.security.ssl.Alerts.getSSLException(Alerts.java:198)
at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:333)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:325)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1689)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:226)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1082)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:1015)
at sun.security.ssl.Handshaker$1.run(Handshaker.java:1012)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1520)
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:402)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:484)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:340)
... 18 more
Caused by: java.security.cert.CertificateException: No subject alternative names present
at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:145)
at sun.security.util.HostnameChecker.match(HostnameChecker.java:94)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:459)
at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:440)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:284)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:144)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1676)
... 27 more
Processed a total of 0 messages
Any ideas why this happens? Is it an issue with the certificates? I did not generate them; they were given to me by the Kafka server's owner.
Thanks in advance.
It seems that the Subject Alternative Name (SAN) is missing from your brokers' certificates (for example, the certificate stored in /var/private/ssl/kafka.server.keystore.jks).
To add a SAN, append the argument -ext SAN=DNS:{FQDN} to the keytool command:
keytool \
-keystore kafka.server.keystore.jks \
-alias localhost \
-validity {validity} \
-genkey \
-keyalg RSA \
-ext SAN=DNS:{FQDN}
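To check whether an existing broker certificate already carries a SAN, you could inspect the certificate the broker actually presents, for example (with hypothetical host/port placeholders):
openssl s_client -connect {broker_host}:{broker_port} </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"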
Make sure to include the SAN when creating the servers' keystores. This is also mentioned in Confluent's Security Tutorial:
If host name verification is enabled, clients will verify the server’s
fully qualified domain name (FQDN) against one of the following two
fields:
Common Name (CN)
Subject Alternative Name (SAN)
Both fields are valid, however RFC-2818 recommends the use of SAN. SAN is also more flexible,
allowing for multiple DNS entries to be declared. Another advantage is
that the CN can be set to a more meaningful value for authorization
purposes.
Alternatively, you can choose to disable server host verification:
Disable server host name verification by setting
ssl.endpoint.identification.algorithm to an empty string.
Therefore, you just need to set the following configuration in server.properties and then restart your Kafka cluster:
ssl.endpoint.identification.algorithm=
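Note that hostname verification is performed by the connecting client, so if only the client-side check needs to be relaxed, the same override can instead go into the client configuration; for the console consumer above that would be the ssl.properties file passed via --consumer.config:
# added to the consumer's ssl.properties (the file passed with --consumer.config)
ssl.endpoint.identification.algorithm=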
Please note that inclusion of the 'SAN' when generating the key-pair (and hence the key-store) is not enough.
keytool \
-keystore kafka.server.keystore.jks \
-alias {alias} \
-validity {validity} \
-genkey \
-keyalg RSA \
-ext SAN=DNS:{FQDN}
This needs to be followed by inclusion of the 'SAN' when creating the certificate signing request as well.
keytool \
-keystore kafka.server.keystore.jks \
-alias {alias} \
-certreq -ext SAN=DNS:{FQDN} \
-file {csr_filename}
One can verify the certificate signing request thus created, and it should have the relevant SAN present.
keytool \
-v -printcertreq -file {csr_filename}
Later, if openssl's x509 command is used to fulfil the certificate signing request, care should be taken to explicitly include the x509 version 3 extensions. The 'SAN' is one such extension and unless explicitly included, will not find its way into the certificate that is finally issued.
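As a rough sketch of that signing step (assuming a bash shell, a CA key pair named ca-key/ca-cert, and a hypothetical {signed_cert_filename} placeholder alongside the names used above), the extension can be passed through -extfile so the SAN survives into the issued certificate:
openssl x509 -req \
  -CA ca-cert -CAkey ca-key -CAcreateserial \
  -in {csr_filename} \
  -out {signed_cert_filename} \
  -days {validity} \
  -extfile <(printf "subjectAltName=DNS:{FQDN}")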
The concept is discussed in detail here - Subject Alternative Name not present in certificate.
I've been trying to set up a cluster following https://docs.confluent.io/platform/current/security/security_tutorial.html, with SSL keys and username/password as described there.
But I failed to find a proper way to set up the dname of a key and the broker's "super.users" parameter.
The tutorial says to create a key with:
# Without user prompts, pass command line arguments
keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -validity {validity} -genkey \
  -storepass {keystore-pass} -keypass {key-pass} \
  -dname {distinguished-name} -ext SAN=DNS:{hostname}
Later on, when configuring the broker's server.properties, we need to set up super.users:
Because this tutorial configures the inter-broker security protocol as
SSL, set the super user name to be the distinguished name configured
in the broker’s certificate. (See other authorization configuration
options).
super.users=User:;User:;User:;User:kafka-broker-metric-reporter
The problem is that the dname must follow the pattern "CN=cName, OU=orgUnit, O=org, L=city, S=state, C=countryCode".
Moreover, there is a restriction on the CN for Kafka: it must be equal to the SAN FQDN setting.
So, the question is:
in case we are on localhost and setting up a cluster with a single broker, should we set the key's dname to "CN=localhost", so that the command becomes:
keytool -keystore kafka.server.keystore.jks -alias localhost -genkey \
-dname "CN=localhost" -ext SAN=DNS:localhost
and then have the following entry in server.properties:
super.users=User:CN=localhost
?
And if that is true, the second question:
in case we are still on localhost but setting up 2 separate brokers there, will they have the same dname?
Actually, it was correct to add the users to the config with their dname:
super.users=User:CN=localhost
It was not so obvious, but it works.
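For completeness, and as an assumption about how this scales rather than something shown in the tutorial: super.users takes a semicolon-separated list, and with SSL authentication each entry is the full distinguished name from the corresponding certificate (broker2.example.com and the organisation fields below are hypothetical):
super.users=User:CN=localhost;User:CN=broker2.example.com,OU=IT,O=SomeOrg,L=City,ST=State,C=US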
I am trying to connect to a Kafka 2.0 server using SSL. I have been provided with a truststore file and a keystore file, and since I am using Python I tried to extract the client certificate using the command:
keytool -exportcert -alias localhost -keystore kafka.client.keystore.jks -rfc -file certificate.pem
taken from here (a similar command was also provided here). The problem is that I am getting an error:
keytool error: java.lang.Exception: Alias does not exist
which I have some trouble deciphering. Am I supposed to create an alias myself, or should I ask for an alias from whoever provided the JKS containers? I am not really familiar with SSL configuration, so I may be missing something here.
I have also tried to check the available aliases on my machine using the command (from here):
keytool -list -keystore /etc/ssl/certs/java/cacerts -storepass changeit
but I am not sure 1) whether this is the right place to search for the aliases, and 2) I could not find any relevant entry there in any case.
If someone can provide some instructions on how I should proceed from here it would be great.
Well, after some research I think I found a solution to this problem.
1. First I used this command to find out the aliases included in the files:
keytool -list -rfc -keystore kafka.client.keystore.jks
in my case there were 2 aliases: client and caroot. The output looks like this:
Keystore type: PKCS12
Keystore provider: XXX
Your keystore contains 2 entries
Alias name: caroot
Creation date: Sep 4, 2020
Entry type: trustedCertEntry
-----BEGIN CERTIFICATE-----
.....
where it is clear what the aliases are.
2. Then I used the proper alias in place of localhost:
keytool -exportcert -alias client -keystore kafka.client.keystore.jks -rfc -file certificate.pem
to extract the client certificate.
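Note that if the Python client also has to present its own certificate (mutual TLS), the certificate alone is usually not enough; the private key must be extracted from the JKS file as well. keytool cannot export private keys directly, so a common approach (sketched here with the alias and file names from above, which may differ in your setup) is to convert the keystore entry to PKCS#12 and let openssl pull out the key:
keytool -importkeystore -srckeystore kafka.client.keystore.jks \
  -destkeystore client.p12 -deststoretype PKCS12 -srcalias client
openssl pkcs12 -in client.p12 -nocerts -nodes -out client.key
The resulting certificate.pem and client.key (plus the CA certificate exported from the truststore) can then be handed to the Python client, for example via kafka-python's ssl_certfile, ssl_keyfile and ssl_cafile options.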
Before I start, I have looked at 2 other questions:
keytool error: java.lang.Exception: Public keys in reply and keystore don't match
And
java.lang.Exception: Public keys in reply and keystore don't match
But I believe that the error comes from the way I am generating the CSR that I submit to my provider (Digicert). I will detail my commands below; note that this is the way our department has always done this, and up to this point I can't understand why it is not working (I am also not really equipped for system administration tasks, but this landed on me).
First - Generating the keystore
keytool -genkey -alias aliasItem -keyalg RSA -sigalg SHA256withRSA -keysize 2048 -keypass <password> -dname "CN=server.domain.whatever, OU=IT, O=SOME NAME, L=City, ST=State, C=COUNTRY" -keystore keycerts -storepass <password>
I changed the important items as you might well assume for security concerns. Afterwards:
keytool -keycerts -keyalg RSA -sigalg SHA256withRSA -v -alias aliasItem -file outputfile.csr -keystore keycerts
After I get the CSR, I submit it to my provider; there is no copy/paste error in this case since I submit the file directly. They provide two .crt files: one from the service provider, and one for the server I am requesting it for. After I move these files to my server and attempt to import the service provider's .crt into the keystore, I get an error. This is the command I use for importing the .crt into the keystore:
keytool -import -v -alias aliasItem -file <Provider>.crt -keystore keycerts
Which outputs the error:
keytool error: java.lang.Exception: Public keys in reply and keystore don't match
java.lang.Exception: Public keys in reply and keystore don't match
at sun.security.tools.KeyTool.establishCertChain(KeyTool.java:2688)
at sun.security.tools.KeyTool.installReply(KeyTool.java:1940)
at sun.security.tools.KeyTool.doCommands(KeyTool.java:855)
at sun.security.tools.KeyTool.run(KeyTool.java:194)
at sun.security.tools.KeyTool.main(KeyTool.java:188)
I have tried changing parts of the commands a total of 8 times now, all following the notes and documentation provided to me, with no positive results. What strikes me as odd is that this server is identical in all aspects to another of our test servers, for which I had done this before with no issues. I am still trying different things to solve the issue, but given my limited knowledge in this area I believe there must be something wrong with what I am doing from the very beginning.
Any input will be greatly appreciated.
The second command, which generates the CSR, seems to be incorrect because "-keycerts" is an illegal parameter. It must be "-certreq".
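A corrected version of that CSR command would presumably look like this (keeping the alias, keystore, and output file names from the question):
keytool -certreq -sigalg SHA256withRSA -v -alias aliasItem -file outputfile.csr -keystore keycerts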
Now, the error states that the private key does not match the certificate which you are trying to install. Kindly check the following:
a) Please make sure you are using the same keystore file which you used to generate the CSR.
b) Please check that you are using the correct alias name while importing. The alias should match the one you used when generating the keystore file.
c) Please make sure you are importing the correct server certificate and not the intermediate or root. Digicert should have provided you all three files, namely the server certificate (its CN must match the one you used when generating the CSR file, and this is the one which needs to be imported), the intermediate, and the root.
If the above steps don't work, then you will have to generate a new CSR and keystore file and ask Digicert to reissue the certificates. They will do it for you free of cost.
I installed Rundeck on a new RHEL 7.7 box, using the rpm method. I can access the server just fine with http, but when I follow the directions in the docs, the server is not accessible from browsers or by curling localhost.
The only error I receive is:
WARN SslContextFactory --- [ main] No supported ciphers from [SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,...(many more ciphers)
Grails application running at https://localhost:4443 in environment: production
curl localhost:4443
curl: (35) Peer reports it experienced an internal error.
Relevant parts of the configuration files are as follows:
/etc/rundeck/profile:
RDECK_JVM="-Drundeck.jaaslogin=$JAAS_LOGIN \
-Djava.security.auth.login.config=$JAAS_CONF \
-Dloginmodule.name=$LOGIN_MODULE \
-Drdeck.config=$RDECK_CONFIG \
-Drundeck.server.configDir=$RDECK_SERVER_CONFIG \
-Dserver.datastore.path=$RDECK_SERVER_DATA/rundeck \
-Drundeck.server.serverDir=$RDECK_INSTALL \
-Drdeck.projects=$RDECK_PROJECTS \
-Drdeck.runlogs=$RUNDECK_LOGDIR \
-Drundeck.config.location=$RDECK_CONFIG_FILE \
-Djava.io.tmpdir=$RUNDECK_TEMPDIR \
-Drundeck.server.workDir=$RUNDECK_WORKDIR \
-Dserver.http.port=$RDECK_HTTP_PORT \
-Drdeck.base=$RDECK_BASE \
-Djdk.tls.ephemeralDHKeySize=jdk8 \
-Drundeck.rundeck.jetty.connector.ssl.excludedCipherSuites=SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA384,SSL_RSA_WITH_AES_256_CBC_SHA256,SSL_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,SSL_ECDH_RSA_WITH_AES_256_CBC_SHA384,SSL_DHE_RSA_WITH_AES_256_CBC_SHA256,SSL_DHE_DSS_WITH_AES_256_CBC_SHA256,SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_AES_256_CBC_SHA,SSL_ECDH_ECDSA_WITH_AES_256_CBC_SHA,SSL_ECDH_RSA_WITH_AES_256_CBC_SHA,SSL_DHE_RSA_WITH_AES_256_CBC_SHA,SSL_DHE_DSS_WITH_AES_256_CBC_SHA,SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA256,SSL_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,SSL_ECDH_RSA_WITH_AES_128_CBC_SHA256,SSL_DHE_RSA_WITH_AES_128_CBC_SHA256,SSL_DHE_DSS_WITH_AES_128_CBC_SHA256,SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_ECDH_ECDSA_WITH_AES_128_CBC_SHA,SSL_ECDH_RSA_WITH_AES_128_CBC_SHA,SSL_DHE_RSA_WITH_AES_128_CBC_SHA,SSL_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,SSL_ECDHE_RSA_WITH_AES_256_GCM_SHA384,SSL_RSA_WITH_AES_256_GCM_SHA384,SSL_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,SSL_ECDH_RSA_WITH_AES_256_GCM_SHA384,SSL_DHE_DSS_WITH_AES_256_GCM_SHA384,SSL_DHE_RSA_WITH_AES_256_GCM_SHA384,SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256,SSL_RSA_WITH_AES_128_GCM_SHA256,SSL_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,SSL_ECDH_RSA_WITH_AES_128_GCM_SHA256,SSL_DHE_RSA_WITH_AES_128_GCM_SHA256,SSL_DHE_DSS_WITH_AES_128_GCM_SHA256"
#
# Set min/max heap size
#
RDECK_JVM="$RDECK_JVM $RDECK_JVM_SETTINGS"
#
# SSL Configuration - Uncomment the following to enable. Check SSL.properties for details.
#
if [ -n "$RUNDECK_WITH_SSL" ] ; then
RDECK_JVM="$RDECK_JVM -Drundeck.ssl.config=$RDECK_SERVER_CONFIG/ssl/ssl.properties -Dserver.https.port=${RDECK_HTTPS_PORT} -Dorg.eclipse.jetty.util.ssl.LEVEL=DEBUG"
fi
/etc/sysconfig/rundeckd:
export RUNDECK_WITH_SSL=true
export RDECK_HTTPS_PORT=4443
If I add export RDECK_JVM_OPTS="-Dserver.ssl.ciphers=SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384" to /etc/sysconfig/rundeckd I get the following:
[2020-03-29 09:01:51.533] WARN config --- [ main] Weak cipher suite SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 enabled for SslContextFactory#1456dec8[provider=null,keyStore=file:///etc/rundeck/ssl/keystore,trustStore=file:///etc/rundeck/ssl/truststore]
Grails application running at https://localhost:4443 in environment: production
curl: (35) Peer reports it experienced an internal error.
Other configurations:
/etc/rundeck/framework.properties:
framework.server.name = server-dns
framework.server.hostname = server-dns
framework.server.port = 4443
framework.server.url = https://server-dns
framework.rundeck.url = https://server-dns
/etc/rundeck/rundeck-config.properties:
grails.serverURL=https://server-dns:4443
The keystore and truststore exist; I have attempted both self-signed and real certs.
I'm at a loss here. I followed all sorts of guides and advice from the internet leading to my current (mis?)configuration.
Thanks
Edited to fix mistakes in the post.
Maybe you need to reference your keystore/truststore in the ssl.properties file (usually at the /etc/rundeck/ssl/ssl.properties path). I wrote a little guide to set up Rundeck with SSL.
1.- install Rundeck.
rpm -Uvh https://repo.rundeck.org/latest.rpm
yum install rundeck
2.- create keystore: (if you already have a certificate in .key/.crt or .p12 formats, skip to 2b)
keytool -keystore /etc/rundeck/ssl/keystore -alias rundeck -genkey -keyalg RSA -keypass password -storepass password
2b.- in case you have your own certificate, do below:
If you have .crt and .key files, create a .p12 file:
openssl pkcs12 -export -in YOUR.crt -inkey YOUR.key -out NEW.p12
Convert it to a .jks (also if you have only the .p12 file):
keytool -importkeystore -destkeystore keystore -srckeystore NEW.p12 -srcstoretype pkcs12
3.- copy keystore as truststore.
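For example, assuming the default paths used in this guide:
cp /etc/rundeck/ssl/keystore /etc/rundeck/ssl/truststore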
4.- edit /etc/rundeck/ssl/ssl.properties file:
keystore=/etc/rundeck/ssl/keystore
keystore.password=password
key.password=password
truststore=/etc/rundeck/ssl/truststore
truststore.password=password
5.- edit /etc/rundeck/framework.properties file:
framework.server.port = 4443
framework.server.url = https://localhost:4443
6.- edit /etc/rundeck/rundeck-config.properties file:
grails.serverURL=https://localhost:4443
7.- edit/create /etc/sysconfig/rundeckd file:
export RUNDECK_WITH_SSL=true
8.- start the rundeck service.
systemctl start rundeckd
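To verify that the service is now answering over HTTPS (using -k here on the assumption of a self-signed certificate; drop it if you use a real one):
curl -k https://localhost:4443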
I am getting the following error while trying to connect to an LDAP server.
Is there a way to ignore the SSL security certificate? I am able to connect to the server outside of JMeter using other tools.
Thread Name: Thread Group 1-1
Sample Start: 2018-09-23 12:16:48 EDT
Load time: 154
Connect Time: 0
Latency: 0
Size in bytes: 555
Sent bytes:0
Headers size in bytes: 0
Body size in bytes: 555
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: 800
Response message: javax.naming.CommunicationException: x.x.x.x:1636
[Root exception is javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to
find valid certification path to requested target]
Response headers:
SampleResult fields:
ContentType: text/xml
DataEncoding: UTF-8
The best (and most common) way to solve this is to trust the LDAPS server, i.e. add the server's certificate to the JRE's cacerts file using keytool. There is already a Stack Overflow answer on how to do this (here: Is there a java setting for disabling certificate validation?); the gist is (taken from there):
cd %JRE_HOME%
keytool -alias REPLACE_TO_ANY_UNIQ_NAME -import -keystore ..\lib\security\cacerts -file your.crt
When you don't have the public key (certificate file) yet, you can e.g. get it by connecting to the LDAPS server with Apache Directory Studio (https://directory.apache.org/studio/) which stores all public keys of LDAPS servers you trust. The exact routine was described on the mailing list already (here: http://mail-archives.apache.org/mod_mbox/directory-users/201004.mbox/%3C4BBF6471.6040900#apache.org%3E), so I'm just giving the gist (again largely taken from there)
find ~/.ApacheDirectoryStudio -name \*.jks # gives you the keystores managed by DirectoryStudio
keytool -list -keystore path/to/permanent.jks
keytool -exportcert -alias <aliasname> -keystore path/to/permanent.jks -file your.crt
Most probably it indicates an issue with your LDAP server's SSL setup, i.e. one of the certificates in the chain cannot be checked against an authority. I would recommend double-checking the certificate chain using, for example:
OpenSSL tool like: openssl s_client -showcerts -connect yourhost:yourport
SSLPoke tool like: java -Djavax.net.debug=ssl SSLPoke yourhost yourport
You have 2 ways:
Add the certificate into the JVM truststore like:
keytool -import -file your_ldap_certificate -alias certificate -keystore trustStore.keystore
Create a custom class which trusts all certificates and set the java.naming.ldap.factory.socket system property to point to that class (the class must be in the JMeter classpath).
In case you need more information on LDAP server performance testing with JMeter, check out the How to Load Test LDAP with Apache JMeter™ article.