https://www.youtube.com/watch?v=5qecyZHL-GU
The key log file below, associated with the above video, contains the following content.
https://drive.google.com/file/d/1hGM3fU-k4o2cLb1xvcYd1W5OkqtTgjXH/view
$ cat sslkeylog.log
CLIENT_HANDSHAKE_TRAFFIC_SECRET 83ac6b24496f208daee39dfdfcbd36b7c428245af5e3775e42099dbd48741d4a db6f3d27b40b7c8e10ed415281b39e45ca6ef2b59468f943dbe6e81e1f82e0f0
SERVER_HANDSHAKE_TRAFFIC_SECRET 83ac6b24496f208daee39dfdfcbd36b7c428245af5e3775e42099dbd48741d4a d819660e194d9439e7152ceac2a439b41584afbeb5d719663cecb3c63b5c2eb1
CLIENT_TRAFFIC_SECRET_0 83ac6b24496f208daee39dfdfcbd36b7c428245af5e3775e42099dbd48741d4a 71d4806141cb1b247c1d1f3f7747a804fcc5e06c4192d8f53fc763a27b92316c
SERVER_TRAFFIC_SECRET_0 83ac6b24496f208daee39dfdfcbd36b7c428245af5e3775e42099dbd48741d4a 2ca17b0f7ff708fb3001be17a1c85163219221a4595462415e9e9e6653daf1fa
EXPORTER_SECRET 83ac6b24496f208daee39dfdfcbd36b7c428245af5e3775e42099dbd48741d4a 3f74b0cbe802d3e3dd3b5f6dee4114f928ec936a0cd388643d146cfb606f62a4
It has these labels:
CLIENT_HANDSHAKE_TRAFFIC_SECRET
SERVER_HANDSHAKE_TRAFFIC_SECRET
CLIENT_TRAFFIC_SECRET_0
SERVER_TRAFFIC_SECRET_0
EXPORTER_SECRET
But the following example only has CLIENT_RANDOM.
$ SSLKEYLOGFILE=sslkey.log curl -s https://httpbin.org > /dev/null
$ cat sslkey.log
CLIENT_RANDOM 73c0277fd99b097691bc1745f14376cf9cca3c75f357ce4d276de9402d17e1b3 1cccf53210ce60caf626c39e55bf988d2666146dd0597437ba3b3feb745f53360683e86e00f77c7f93068f63fc24f551
Why do they have different fields? Do I capture the key log completely in the curl example?
The format of the SSLKEYLOGFILE is documented in the NSS Key Log Format. From the documentation it can be seen that several labels are only used with TLS 1.3, while others are only used with TLS 1.2 and lower.
The difference between your two files is that the first represents a TLS 1.3 session while the second represents a TLS 1.2 or lower session; the single CLIENT_RANDOM line is the complete key log for such a session.
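If you want to reproduce both shapes with curl, you can pin the protocol version per request. This is a minimal sketch; it assumes a curl build whose TLS backend honours SSLKEYLOGFILE and supports the --tlsv1.2 / --tls-max / --tlsv1.3 flags, and that the server still offers TLS 1.2:
$ # TLS 1.2 session: the key log gets a CLIENT_RANDOM line
$ SSLKEYLOGFILE=tls12.log curl -s --tlsv1.2 --tls-max 1.2 https://httpbin.org > /dev/null
$ # TLS 1.3 session: the key log gets the *_TRAFFIC_SECRET and EXPORTER_SECRET lines
$ SSLKEYLOGFILE=tls13.log curl -s --tlsv1.3 https://httpbin.org > /dev/null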
Related
I'm working on a task where I need to decrypt all the TLS 1.3 encrypted packets in Wireshark (using the Edit -> Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename option) for debugging purposes. So I stored all the keys of the TLS 1.3 sessions into a file by registering a key log callback through the OpenSSL API, as shown below:
SSL_CTX_set_keylog_callback(pCtx_m, Keylog_cb_func);

void Keylog_cb_func(const SSL *ssl, const char *line) {
    // Code to log the line into a file in append mode
}
With this, I'm able to decrypt the TLS 1.3 packets in Wireshark with the keys logged into the file, but only for the three ciphers below:
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_GCM_SHA256
The other two TLS 1.3 ciphers, TLS_AES_128_CCM_8_SHA256 and TLS_AES_128_CCM_SHA256, are not getting decrypted in Wireshark using the keys that the OpenSSL API provided, even though the connections are established successfully over these two ciphers.
I have read this Stack Overflow post about the default enabled cipher suites, but I'm not sure whether this issue is related to that. Could anyone help me understand what I'm doing wrong here?
Sample Key File:
SERVER_HANDSHAKE_TRAFFIC_SECRET dd228de6d3f32ae5d83a9009c2e5908cb71c16d8624f5930dd05cabea3b7cc63 aa48ffe195090f87138caf32a520ecd41644f23f6d778f6436b5e2d697452572
CLIENT_HANDSHAKE_TRAFFIC_SECRET dd228de6d3f32ae5d83a9009c2e5908cb71c16d8624f5930dd05cabea3b7cc63 415e363721b8e204d3e2a2f94682d25792f565770a0f9221e86738d1c540e21b
EXPORTER_SECRET dd228de6d3f32ae5d83a9009c2e5908cb71c16d8624f5930dd05cabea3b7cc63 25276ba5f066f69b6caac00d35e981ff0b7d70e20d541c9435538d32a640ecdc
SERVER_TRAFFIC_SECRET_0 dd228de6d3f32ae5d83a9009c2e5908cb71c16d8624f5930dd05cabea3b7cc63 9d683bacbb0ce07fa5d0425c2763920f11d98af85c7461a52e26efb6bdaf74b5
CLIENT_TRAFFIC_SECRET_0 dd228de6d3f32ae5d83a9009c2e5908cb71c16d8624f5930dd05cabea3b7cc63 bfca7bc330de3836b0e49619375b504f4d2794206c4ecc7fcd1dfdab363bd36a
SERVER_HANDSHAKE_TRAFFIC_SECRET cb2136f271de0e168eb92a9f2a6811c76fe87d08c9b4ae374f4efc72ccdc7ea5 1be3cc095c1e336168ab10619c266eaf7a9057edf724d65f3016ba64d642ab0f
EXPORTER_SECRET cb2136f271de0e168eb92a9f2a6811c76fe87d08c9b4ae374f4efc72ccdc7ea5 0cfa53e94a8a6cf085af19a45a062152299b5ccea84cbffe237f136f0854672e
SERVER_TRAFFIC_SECRET_0 cb2136f271de0e168eb92a9f2a6811c76fe87d08c9b4ae374f4efc72ccdc7ea5 7de206865a34e1b3b004d47075d534ebfd60b47a16ce525f3c0da47c9c78c57f
CLIENT_HANDSHAKE_TRAFFIC_SECRET cb2136f271de0e168eb92a9f2a6811c76fe87d08c9b4ae374f4efc72ccdc7ea5 1a4cad91d84607f44cd6a65cf1e9bcddaf3488d2dcb67b5246995485226dfa5c
CLIENT_TRAFFIC_SECRET_0 cb2136f271de0e168eb92a9f2a6811c76fe87d08c9b4ae374f4efc72ccdc7ea5 be0df5b77fb5edb1785118e6adb119a5820beacb74d01d8cdee344d3feb56488
Wireshark version: 3.6.1 (v3.6.1-0-ga0a473c7c1ba).
Thanks in advance
Prakash
It turns out that Wireshark 3.6.7 (v3.6.7-0-g4a304d7ec222) itself currently has an issue in its TLS 1.3 packet decryption for these two ciphers (TLS_AES_128_CCM_8_SHA256 and TLS_AES_128_CCM_SHA256) due to an invalid AAD length, and it will be fixed in an upcoming version. We have reported the bug on the Wireshark GitLab issue tracker, linked below:
https://gitlab.com/wireshark/wireshark/-/issues/18277
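Until then, here is a rough way to re-test the CCM suites from the command line once a fixed Wireshark build is available; the host, port and file names are illustrative, and it assumes OpenSSL 1.1.1+ (for the -ciphersuites and -keylogfile options) and a server that actually enables the CCM suites:
# Force a TLS 1.3 handshake over the CCM suites and write the secrets to a key log
openssl s_client -connect server.example.com:4433 -tls1_3 -ciphersuites TLS_AES_128_CCM_SHA256:TLS_AES_128_CCM_8_SHA256 -keylogfile ccm-keys.log
# Point tshark at the capture plus the key log and check that the records decrypt
tshark -r ccm-capture.pcap -o tls.keylog_file:ccm-keys.log -Y tls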
I am able to check the status of a NooBaa bucket using the noobaa bucket status <bucket> command.
$ noobaa bucket status XYZ
INFO[0005] ✅ Exists: NooBaa "noobaa"
INFO[0005] ✅ Exists: Service "noobaa-mgmt"
INFO[0006] ✅ Exists: Secret "noobaa-operator"
INFO[0006] ✅ Exists: Secret "noobaa-admin"
INFO[0008] ✈️ RPC: bucket.read_bucket() Request: {Name:XYZ}
INFO[0010] ✅ RPC: bucket.read_bucket() Response OK: took 14.3ms
Bucket status:
Bucket : XYZ
OBC Namespace : xyz-namespace
OBC BucketClass : default-bucket-class
Type : REGULAR
Mode : OPTIMAL
ResiliencyStatus : OPTIMAL
QuotaStatus : QUOTA_NOT_SET
Num Objects : 1
Data Size : 3.000 B
Data Size Reduced : 5.000 B
Data Space Avail : 1.000 PB
But I am not able to check the content present inside a NooBaa bucket.
How can we check the content of a NooBaa bucket, using the NooBaa CLI or any other way?
Your question made me realize that the noobaa CLI should have a noobaa object list command, so I opened a new issue for this enhancement on the operator GitHub repo. Thanks :)
Until this is added, there are several ways we use to list objects:
Run noobaa ui - notice that it opens the browser quickly, but on the terminal it prints the credentials for you to use to log in. You can probably find the buckets and drill down to the objects in the UI on your own, and you can also check out some recorded videos that navigate the UI - for example this video.
Take the admin S3 credentials and endpoint from noobaa status and then use your favorite s3 client - I currently use aws-cli or rclone:
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $NOOBAA_S3_ENDPOINT --no-verify-ssl s3'
and then:
s3 ls XYZ
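If you prefer rclone, a rough equivalent setup is below; the remote name noobaa is arbitrary, the NOOBAA_* variables are the same credentials and endpoint taken from noobaa status, and --no-check-certificate plays the role of --no-verify-ssl above for the self-signed certificate:
rclone config create noobaa s3 provider Other access_key_id "$NOOBAA_ACCESS_KEY" secret_access_key "$NOOBAA_SECRET_KEY" endpoint "$NOOBAA_S3_ENDPOINT"
rclone --no-check-certificate ls noobaa:XYZ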
Not many have noticed, but the NooBaa system CR contains a useful readme text in its status, with commands to "Test S3 client" - ready to copy-paste to set up your aws-cli, including kubectl port-forward to support secure networks and reading the credentials from secrets. Check it out with kubectl describe noobaa. This 40-second YouTube video shows it briefly. BTW, the readme text is generated per system, but it does not contain actual secrets, only kubectl commands to read those secrets if you are permitted to.
$ kubectl describe noobaa
...
Phase: Ready
Readme:
Welcome to NooBaa!
-----------------
NooBaa Core Version: 5.3.0-9f579d9
NooBaa Operator Version: 2.1.0
Lets get started:
1. Connect to Management console:
Read your mgmt console login information (email & password) from secret: "noobaa-admin".
kubectl get secret noobaa-admin -n backup-service -o json | jq '.data|map_values(@base64d)'
Open the management console service - take External IP/DNS or Node Port or use port forwarding:
kubectl port-forward -n backup-service service/noobaa-mgmt 11443:443 &
open https://localhost:11443
2. Test S3 client:
kubectl port-forward -n backup-service service/s3 10443:443 &
NOOBAA_ACCESS_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d')
NOOBAA_SECRET_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d')
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3'
s3 ls
...
The last option, which should have been mentioned first, but which unfortunately I just noticed is broken in the current version v2.1.0 (opened a new issue), is to use the generic noobaa api command to call the object_api list_objects method like so:
noobaa api object list_objects '{ "bucket": "first.bucket" }'
I hope that helps, feel free to open github issues with suggestions/issues.
Thanks!
(NooBaa CTO)
So I am trying to modify a third-party library (libtorrent) to only accept the TLS 1.2 protocol.
Part of the setup of the SSL context:
boost::shared_ptr<context> ctx = boost::make_shared<context>(
    boost::ref(m_ses.get_io_service()), context::tlsv12);
ctx->set_options(context::default_workarounds
    | boost::asio::ssl::context::no_sslv2
    | boost::asio::ssl::context::no_sslv3
    | boost::asio::ssl::context::no_tlsv1
    | boost::asio::ssl::context::no_tlsv1_1
    | boost::asio::ssl::context::single_dh_use);
However, when I test my connection with OpenSSL s_client, it still seems to accept TLS 1.0 and TLS 1.1 connections.
Is there something I am doing wrong?
EDIT: Added "| boost::asio::ssl::context::no_tlsv1_1" to the options. I realized I was referring to an old Boost reference guide. It did not change anything, however.
EDIT: I just realized that I had not mentioned that this is a two-way/mutual authentication connection. Not sure if that changes anything.
Older versions of asio::ssl::context have no constant for restricting the context to TLS 1.2 only, but you can use the native OpenSSL API to do that:
#include <openssl/ssl.h>

// Start with every protocol version disallowed, then re-allow TLS 1.2 only
long ssl_disallowed = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1 | SSL_OP_NO_TLSv1_2;
ssl_disallowed &= ~SSL_OP_NO_TLSv1_2;
SSL_CTX_set_options(ctx->native_handle(), ssl_disallowed);
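To verify the result from the shell, you can probe the listener with s_client. This is only a sketch: localhost:6881 is a placeholder for wherever your libtorrent SSL listener runs, and since you mentioned mutual authentication you may also need -cert/-key for the client certificate:
# Should now fail during the handshake
openssl s_client -connect localhost:6881 -tls1_1 < /dev/null
# Should complete and print the negotiated protocol and cipher
openssl s_client -connect localhost:6881 -tls1_2 < /dev/null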
This is a total shot in the dark, but try this: create a string of ciphers specific to TLS 1.2 and then call
const char* TLS_12_CIPHERS = "... list of ciphers specific to TLS 1.2";
SSL_CTX_set_cipher_list(ctx->native_handle(), TLS_12_CIPHERS);
Then set the option on the context (assuming it's a server context) so that the server, not the client, gets to choose which ciphers to use.
SSL_CTX_set_options(ctx->native_handle(), SSL_OP_CIPHER_SERVER_PREFERENCE);
You'd think that boost::asio::ssl would take care of this for you when you specify the no_X options, but I can't be sure. Like I said, this is a shot in the dark, but explicitly configuring the context with the OpenSSL API in this way should enforce the conditions you're after. Even if some conflicting option is being set somewhere to allow non-TLS-1.2 connections, with these options any non-TLS-1.2 connection will fail with the error "no shared cipher".
Why your server is even advertising that non-1.2 connections are acceptable is unknown, but one possible explanation is that there is a default context advertising this. This is why sehe made the point about "applying to all connections."
Here is a list of TLS 1.2 specific ciphers.
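If you go the cipher-list route, one way to get candidate suite names is sketched below; this assumes a reasonably recent OpenSSL and uses the TLSv1.2 keyword of OpenSSL's cipher-string syntax, which selects the suites defined for TLS 1.2:
openssl ciphers -v 'TLSv1.2'
Join the names you want with ':' and pass the resulting string to SSL_CTX_set_cipher_list().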
I'm currently querying Okta for a list of events. The result set is in the tens of thousands, but each request limits the return to 1000 results. Per the Okta API docs (http://developer.okta.com/docs/getting_started/design_principles.html#pagination), I can use the Link header with the value "next".
How do I use curl to capture this value, issue another curl request against that URL to get the rest of the values, and loop until the end?
In cURL, if you include the '-i' option, the response headers are returned to the console. Within these headers, you'll find a Link with rel="next". If you replace the URL in the first GET call with this one, you'll get the next set of results.
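For example, a page of Okta results comes back with Link headers along these lines (the after cursor and limit values are made up for illustration):
Link: <https://COMPANY.okta.com/api/v1/users?limit=200>; rel="self"
Link: <https://COMPANY.okta.com/api/v1/users?after=00u1abcdEXAMPLE&limit=200>; rel="next"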
The pagination code is based on https://michaelheap.com/follow-github-link-header-bash
#!/usr/bin/env bash
# Set these:
url="https://COMPANY.okta.com/api/v1/users"
token="..."
# Pagination code based on https://michaelheap.com/follow-github-link-header-bash
while [ "$url" ]; do
r=$(curl --compressed -Ss -i -H "authorization: SSWS $token" "$url" | tr -d '\r')
echo "$r" | sed '1,/^$/d' | jq -r '.[].profile.login'
url=$(echo "$r" | sed -n -E 's/link: <(.*)>; rel="next"/\1/pi')
done
Note that if you're making multiple requests (which you probably are, since you're paginating), using a language like Python with the requests package is much faster than curl. A requests session will keep the connection open, whereas curl closes it and reopens it every time.
#!/usr/bin/env python
import requests
# Set these:
url = 'https://COMPANY.okta.com/api/v1/users'
token = '...'
# If you're making multiple API calls, using a session is much faster.
session = requests.Session()
session.headers['authorization'] = 'SSWS ' + token
def get_objects(url):
    while url:
        r = session.get(url)
        for o in r.json():
            yield o
        url = r.links.get('next', {}).get('url')

for user in get_objects(url):
    print(user['profile']['login'])
This version of the shell script has better variable names to make it easier to understand.
As commented above, running curl with the -i or --include option includes the response headers.
#!/usr/bin/env bash
# Set these:
url='https://COMPANY.okta.com/api/v1/users'
token='...'
# Pagination code based on https://michaelheap.com/follow-github-link-header-bash
while [ "$url" ]; do
r=$(curl -i --compressed -Ss -H "authorization: SSWS $token" "$url" | tr -d '\r')
headers=$(echo "$r" | sed '/^$/q')
body=$(echo "$r" | sed '1,/^$/d')
echo "$body" | jq -r '.[].profile.login'
url=$(echo "$headers" | sed -n -E 's/link: <(.*)>; rel="next"/\1/pi')
done
Apache Benchmark (ab) fails when using a lengthy custom Authorization header:
ab.exe -n 1 -c 1 -S -H "Authorization: SAML PHRyaWJ1dGUgQXR0cmlidXRlTmFtZT0iaGFzX3NlbGVjdCIgQXR0cmlidXRlTmFtZXNwYWNlPSJodHRwOi8vc2NoZW1hcy5iZW50bGV5LmNvbS93cy8yMDExLzAzL2lkZW50aXR5L2NsYWltcyI+PHNhbWw6QXR0cmlidXRlVmFsdWU+dHJ1ZTwvc2FtbDpBdHRyaWJ1dGVWYWx1ZT48L3NhbWw6QXR0cmlidXRlPjwvc2FtbDpBdHRyaWJ1dGVTdGF0ZW1lbnQ+PGRzOlNpZ25hdHVyZSB4bWxuczpkcz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC8wOS94bWxkc2lnIyI+PGRzOlNpZ25lZEluZm8+PGRzOkNhbm9uaWNhbGl6YXRpb25NZXRob2QgQWxnb3JpdGhtPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxLzEwL3htbC1leGMtYzE0biMiIC8+PGRzOlNpZ25hdHVyZU1ldGhvZCBBbGdvcml0aG09Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvMDQveG1sZHNpZy1tb3JlI3JzYS1zaGEyNTYiIC8+PGRzOlJlZmVyZW5jZSBVUkk9IiNfZDA4MTc0NWItOTU2ZC00NmE4LTg4MzItMTQ5MjBmNmY4ZTZmIj48ZHM6VHJhbnNmb3Jtcz48ZHM6VHJhbnNmb3JtIEFsZ29yaXRobT0iaHR0cDovL3d3dy53My5vcmcvMjAwMC8wOS94bWxkc2lnI2VudmVsb3BlZC1zaWduYXR1cmUiIC8+PGRzOlRyYW5zZm9ybSBBbGdvcml0aG09Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvMTAveG1sLWV4Yy1jMTRuIyIgLz48L2RzOlRyYW5zZm9ybXM+PGRzOkRpZ2VzdE1ldGhvZCBBbGdvcml0aG09Imh0dHA6Ly93d3cudzMub3JnLzIwMDEvMDQveG1sZW5jI3NoYTI1NiIgLz48ZHM6RGlnZXN0VmFsdWU+Q1hISDVsWCtZTTROVDhFUU1tVHIyN2lBdEk1dE9XajhuQ0pSTFEzYnFQbz08L2RzOkRpZ2VzdFZhbHVlPjwvZHM6UmVmZXJlbmNlPjwvZHM6U2lnbmVkSW5mbz48ZHM6U2lnbmF0dXJlVmFsdWU+alhVNG5Ud1lQYVZWazJhcUhjNW1DMk5FWWRSdXBPV1A2Nis4bmNlbndFQmlseG81S0RmdldBNldUaDdHSGlVNUpSdCtzbWsyY1dWajRPTElnUEhIWFlFZjNkaVNYNkd2ejBqVGh4VDBzVDNmSDNob081dkt6S2dITVdNWGM5cEswU2xrQVRSenhPZmxCTHFkclQ3bnBBd3lrbEtlTFhhaTRIVlJ6MU5aN3pFNXFVaHArVmJ3Y3FCZXpkZ1pPMWlCMHkyRHdzWFRJSFhmN2hOUmFLVUYzenF2bm5IYlBNZG95cGZNbzhxU0hwcFhWNzRuN0lLQ3hHakhlYzdJak1JMWlrZVNFaTE2ODJUN0Y0cHRESXNjYmcxWW5LTTRhdHZxMWZMSi83QWx0ck90SGFMSkZBSU1jeG43TXhCckNuNXhSR2I2ZDN2c0U0VzVHN2E3MmY2OUpBPT08L2RzOlNpZ25hdHVyZVZhbHVlPjxLZXlJbmZvIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwLzA5L3htbGRzaWcjIj48WDUwOURhdGE+PFg1MDlDZXJ0aWZpY2F0ZT5NSUlHampDQ0JYYWdBd0lCQWdJUUFxQzZUbmZlVFJQeUQrN0Q2YlE4RERBTkJna3Foa2lHOXcwQkFRVUZBREJJTVFzd0NRWURWUVFHRXdKVlV6RVZNQk1HQTFVRUNoTU1SR2xuYVVObGNuUWdTVzVqTVNJd0lBWURWUVFERXhsRWFXZHBRMlZ5ZENCVFpXTjFjbVVnVTJWeWRtVnlJRU5CTUI0WERURXpNVEF4TlRBd01EQXdNRm9YRFRFME1UQXlNREV5TURBd01Gb3dkekVMTUFrR0ExVUVCaE1DVlZNeEZUQVRCZ05WQkFnVERGQmxibTV6ZVd4MllXNXBZVEVPTUF3R0ExVUVCeE1GUlhoMGIyNHhKakFrQmdOVkJBb1RIVUpsYm5Sc1pYa2dVM2x6ZEdWdGN5d2dTVzVqYjNKd2IzSmhkR1ZrTVJrd0Z3WURWUVFERXhCaGRYUm9MbUpsYm5Sc1pYa3VZMjl0TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFwemdYV29tVlpObS81NnBiUnlaeThWYlhyVUNsUWpPWU13N3dGYUZUTE9XRU9iOWhncUVWWnZOVUxLNFBmV2dmTUo4cUxxZkhodnFXbG8xSkFxRFdiUUpybFJIQVNLdUNjOWVPa3JQMzdpMW1WenNtNWxQUlBWb242R1RhUkE5SVRBRitiNW5OMDB5MHczOThNWnFJZ2xJdDRQRmNlM1NXd3lYQkdOSUpyWURNYndmRjFFQU5hU1V4Skh3eUd0eVVLVWlxeFB3SllFZ3JacHJoZHlMQnRRdXJaZGJwNkViSEduQVZJWHpKR2Y0TC9WN2tpdWExNWFmeW9Ycm5tZ2d0SGJBMEFOVkY5eXFBcG5oc0FxR2pDdHpSSGpvWG9FMFNnZFY1U3hvUm1lb3hXZWowOEZYdW96bHRLc2d5Z25ZbXZoZFB5VGl4SjVqYSttZlVxUjZCeXdJREFRQUJvNElEUXpDQ0F6OHdId1lEVlIwakJCZ3dGb0FVa0hIYk4rdHp5Ty9jMVI0U3RqUzZLMXFncHBJd0hRWURWUjBPQkJZRUZKaHM1L1d2bkYwbHFSdGhMYklwOUlLTUVwMXRNQnNHQTFVZEVRUVVNQktDRUdGMWRHZ3VZbVZ1ZEd4bGVTNWpiMjB3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpCaEJnTlZIUjhFV2pCWU1DcWdLS0FtaGlSb2RIUndPaTh2WTNKc015NWthV2RwWTJWeWRDNWpiMjB2YzNOallTMW5OQzVqY213d0txQW9vQ2FHSkdoMGRIQTZMeTlqY213MExtUnBaMmxqWlhKMExtTnZiUzl6YzJOaExXYzBMbU55YkRDQ0FjUUdBMVVkSUFTQ0Fic3dnZ0czTUlJQnN3WUpZSVpJQVliOWJBRUJNSUlCcERBNkJnZ3JCZ0VGQlFjQ0FSWXVhSFIwY0RvdkwzZDNkeTVrYVdkcFkyVnlkQzVqYjIwdmMzTnNMV053Y3kxeVpYQnZjMmwwYjNKNUxtaDBiVENDQVdRR0NDc0dBUVVGQndJQ01JSUJWaDZDQVZJQVFRQnVBSGtBSUFCMUFITUFaUUFnQUc4QVpnQWdBSFFBYUFCcEFITUFJQUJEQUdVQWNnQjBBR2tBWmdCcEFHTUFZUUI
wQUdVQUlBQmpBRzhBYmdCekFIUUFhUUIwQUhVQWRBQmxBSE1BSUFCaEFHTUFZd0JsQUhBQWRBQmhBRzRBWXdCbEFDQUFid0JtQUNBQWRBQm9BR1VBSUFCRUFHa0Fad0JwQUVNQVpRQnlBSFFBSUFCREFGQUFMd0JEQUZBQVV3QWdBR0VBYmdCa0FDQUFkQUJvQUdVQUlBQlNBR1VBYkFCNUFHa0FiZ0JuQUNBQVVBQmhBSElBZEFCNUFDQUFRUUJuQUhJQVpRQmxBRzBBWlFCdUFIUUFJQUIzQUdnQWFRQmpBR2dBSUFCc0FHa0FiUUJwQUhRQUlBQnNBR2tBWVFCaUFHa0FiQUJwQUhRQWVRQWdBR0VBYmdCa0FDQUFZUUJ5QUdVQUlBQnBBRzRBWXdCdkFISUFjQUJ2QUhJQVlRQjBBR1VBWkFBZ0FHZ0FaUUJ5QUdVQWFRQnVBQ0FBWWdCNUFDQUFjZ0JsQUdZQVpRQnlBR1VBYmdCakFHVUFMakI0QmdnckJnRUZCUWNCQVFSc01Hb3dKQVlJS3dZQkJRVUhNQUdHR0doMGRIQTZMeTl2WTNOd0xtUnBaMmxqWlhKMExtTnZiVEJDQmdnckJnRUZCUWN3QW9ZMmFIUjBjRG92TDJOaFkyVnlkSE11WkdsbmFXTmxjblF1WTI5dEwwUnBaMmxEWlhKMFUyVmpkWEpsVTJWeWRtVnlRMEV1WTNKME1Bd0dBMVVkRXdFQi93UUNNQUF3RFFZSktvWklodmNOQVFFRkJRQURnZ0VCQUtQQUhQcGNnd0FiSXNjT2JKYm1jbVA4RzJRWkI4VVVzeVIxOFd3OTdIWUlxRXM3MExScXFyVGpza1BsZnBYanRKSm1pVjgydzZOYWZrQ3dBbWxycy94TTJFako3UkVacDZ2YmVJZHlGTjIwM3dlQkViRm9RYjg0Wk9COWRHQmR4MktZdzBuUHJSM0F1YkRQVTZJNjlhWTBFNGJIRUM2Y09nOGxjeUhkWXFyZDNSRlByUjJiN1hGWVdyaXRXeElXbHFNRUVzNVU3dkRsYjkrby9HRlZqbE1QSW4xcG0yNDBUYW41WkR0Si9NcXZZRTYrQzh1ZUtEZW1lcndiR0ZCNzFJTVI3TUU1SHYzayt3N0hFMm1ETjF0RnVjb1ZyQVVFZndYVUNzTUdzMkJJbzJsVVNGL3FIYUY3akhHdnJPUmloVFlKQnBtQzRpWmJRTGpLQmZBRXBTVT08L1g1MDlDZXJ0aWZpY2F0ZT48L1g1MDlEYXRhPjwvS2V5SW5mbz48L2RzOlNpZ25hdHVyZT48L3NhbWw6QXNzZXJ0aW9uPg==" -v 2 http://localhost:55495/rest/1
Please note that I am testing a .NET REST API which needs a lengthy user token in the Authorization header to proceed.
Apache Benchmark has a maximum request size of 2048 characters. Your header alone is more than 4KB in size, so that's why your test is failing. The maximum size of the request is given in ab.c. Look for char _request[2048]; in that file and try increasing it. Note that the passed-in headers are also parsed into a header structure that may have limits as well.
You can build ab from the httpd source code, which now sets char _request[8192];.
The code can be found in httpd-2.4.53/support/ab.c
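If you need an even larger buffer than current releases ship with, a rough sketch of rebuilding ab from the httpd source is below. The download URL, version and the 16384 size are illustrative, and configure expects the APR, APR-util and PCRE development packages to be installed:
wget https://archive.apache.org/dist/httpd/httpd-2.4.53.tar.gz
tar xzf httpd-2.4.53.tar.gz && cd httpd-2.4.53
# Bump the request buffer beyond the shipped 8192 bytes
sed -i 's/char _request\[8192\]/char _request[16384]/' support/ab.c
./configure && make
./support/ab -V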