Usage difference between SSL_add0_chain_cert and SSL_add1_chain_cert?

In OpenSSL documentation it says:
All these functions are implemented as macros. Those containing a 1 increment the reference count of the supplied certificate or chain so it must be freed at some point after the operation. Those containing a 0 do not increment reference counts and the supplied certificate or chain MUST NOT be freed after the operation.
But when I looked at examples of where each one should be used, I got confused.
First OpenSSL:
It uses SSL_add0_chain_cert itself in use_certificate_chain_file (the helper behind SSL_CTX_use_certificate_chain_file) in ssl_rsa.c. Here is the source:
static int use_certificate_chain_file(SSL_CTX *ctx, SSL *ssl, const char *file) {
    if (ctx)
        ret = SSL_CTX_use_certificate(ctx, x);
    else
        ret = SSL_use_certificate(ssl, x);
    ......
    while ((ca = PEM_read_bio_X509(in, NULL, passwd_callback,
                                   passwd_callback_userdata))
           != NULL) {
        if (ctx)
            r = SSL_CTX_add0_chain_cert(ctx, ca);
        else
            r = SSL_add0_chain_cert(ssl, ca);
        ......
    }
Second usage I see is OpenResty Lua:
It uses SSL_add0_chain_cert in one way of setting a certificate (ngx_http_lua_ffi_ssl_set_der_certificate), see here:
int ngx_http_lua_ffi_ssl_set_der_certificate(ngx_http_request_t *r,
    const char *data, size_t len, char **err) {
    ......
    if (SSL_use_certificate(ssl_conn, x509) == 0) {
        *err = "SSL_use_certificate() failed";
        goto failed;
    }
    ......
    while (!BIO_eof(bio)) {
        x509 = d2i_X509_bio(bio, NULL);
        if (x509 == NULL) {
            *err = "d2i_X509_bio() failed";
            goto failed;
        }
        if (SSL_add0_chain_cert(ssl_conn, x509) == 0) {
            *err = "SSL_add0_chain_cert() failed";
            goto failed;
        }
    }
    BIO_free(bio);
    *err = NULL;
    return NGX_OK;

failed:
    .......
}
Yet it uses SSL_add1_chain_cert in another way (ngx_http_lua_ffi_set_cert), see here:
int ngx_http_lua_ffi_set_cert(ngx_http_request_t *r,
    void *cdata, char **err) {
    ......
    if (SSL_use_certificate(ssl_conn, x509) == 0) {
        *err = "SSL_use_certificate() failed";
        goto failed;
    }
    x509 = NULL;

    /* read the rest of the chain */
    for (i = 1; i < sk_X509_num(chain); i++) {
        x509 = sk_X509_value(chain, i);
        if (x509 == NULL) {
            *err = "sk_X509_value() failed";
            goto failed;
        }
        if (SSL_add1_chain_cert(ssl_conn, x509) == 0) {
            *err = "SSL_add1_chain_cert() failed";
            goto failed;
        }
    }
    *err = NULL;
    return NGX_OK;    /* no free of x509 here */

failed:
    ......
}
Yet I don't see a clear difference between the two calls in the Lua code, and it doesn't seem like the cert (x509), once set successfully, gets freed in either case. According to my understanding of the OpenSSL doc, I would expect X509_free(x509) to be called somewhere after SSL_add1_chain_cert is called on that x509. Is that understanding correct?
Last, the OpenSSL implementation of ssl_cert_add1_chain_cert (what the SSL_add1_chain_cert macro boils down to) does indeed show that it is just a wrapper around ssl_cert_add0_chain_cert that increments the reference count on the cert, but how should that be reflected in the calling code?
int ssl_cert_add1_chain_cert(SSL *s, SSL_CTX *ctx, X509 *x)
{
    if (!ssl_cert_add0_chain_cert(s, ctx, x))
        return 0;
    X509_up_ref(x);
    return 1;
}
Nginx itself only deals with a different function, SSL_CTX_add_extra_chain_cert, which sidesteps this choice entirely, since Nginx does not switch certificates on a per-SSL-connection basis. In my case I need to patch Nginx to add exactly that capability: switching the certificate per connection (but without using Lua).
So which one should I be using, SSL_add0_chain_cert or SSL_add1_chain_cert? And what is the correct freeing practice in each case?
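For what it's worth, the pattern implied by the reference-counting rule is roughly the following (a hedged sketch, not an authoritative answer; ssl, bio and chain stand in for whatever parsing context is at hand):

/* add0 hands your reference over: on success the SSL object owns ca
 * and will free it, so do NOT call X509_free(ca) yourself. */
X509 *ca = d2i_X509_bio(bio, NULL);
if (ca == NULL || SSL_add0_chain_cert(ssl, ca) == 0)
    goto failed;

/* add1 takes an extra reference: your original reference is still
 * yours, so you (or the stack that owns the cert) must free it later. */
X509 *x = sk_X509_value(chain, 1);       /* the stack keeps ownership */
if (SSL_add1_chain_cert(ssl, x) == 0)
    goto failed;
......
sk_X509_pop_free(chain, X509_free);      /* releases the stack's references */

This matches the two OpenResty functions above: the add0 variant is used for certs it parses and hands off, the add1 variant for certs still owned by the chain stack.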


OpenSSL 1.0.1 SSL_read() function return 0 byte on certain https Websites

I'm trying to build an HTTPS client with OpenSSL 1.0.1u that can visit websites over the SSL protocol.
When visiting most HTTPS websites (like google.com, yahoo.com, facebook.com, ...), it works well and the home page content is returned. However, with certain (relatively small) websites, the server returns 0 bytes. Here are some details:
I use SSLv23_method() to create my OpenSSL context:
this->_sslContext = SSL_CTX_new(SSLv23_method()); // SSLv23_method: Negotiate highest available SSL/TLS version
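(As an aside: if the old protocol versions need to be excluded despite the negotiate-highest behaviour, that is done with options on the same context; a minimal sketch using the standard option flags:)

SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());                  // negotiates the highest shared version
SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);  // allow TLS only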
Then I traced the following call sequence (outermost first):
(ssl_lib.c) SSL_read(SSL *s, void *buf, int num) ---->
(s3_lib.c) ssl3_read(SSL *s, void *buf, int len) ---->
(s3_lib.c) ssl3_read_internal(SSL *s, void *buf, int len, int peek) ---->
(s3_pkt.c) int ssl3_read_bytes(SSL *s, int type, unsigned char *buf, int len, int peek)
With some websites (the failing case), SSL_read() returns 0 bytes because inside ssl3_read_bytes(), alert_descr is set to SSL_AD_CLOSE_NOTIFY and the function simply returns 0. Here is the source code:
...
if (alert_level == SSL3_AL_WARNING)
{
    s->s3->warn_alert = alert_descr;
    if (alert_descr == SSL_AD_CLOSE_NOTIFY) {
        s->shutdown |= SSL_RECEIVED_SHUTDOWN;
        return (0);
    }
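(In other words, the peer closed the connection cleanly at the TLS level. A caller can detect this case explicitly; a minimal sketch using the documented SSL_get_error() API:)

int n = SSL_read(s, buf, sizeof(buf));
if (n <= 0) {
    switch (SSL_get_error(s, n)) {
    case SSL_ERROR_ZERO_RETURN:
        /* peer sent close_notify: a clean TLS shutdown, not an error */
        break;
    default:
        /* a real error: inspect the error queue via ERR_get_error() */
        break;
    }
}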
Can anyone give me a hint on how to fix this problem? Thanks.
=== UPDATE ===
Upon Steffen Ullrich's suggestion, I'm posting the source code that sends the request and gets the response. My small experimental HTTPS client is composed of Socket and SSLSocket classes and a helper WebpageFetcher class. The function WebpageFetcher::fetchPage is used to send the HTTPS request and get the response from the private function WebpageFetcher::_getResponse():
wchar_t * WebpageFetcher::fetchPage(wchar_t * url, int port, bool useSSL)
{
    wchar_t * response = NULL;
    Socket * socket = Socket::createSocket(false, useSSL);
    if (socket == nullptr)
    {
        response = String(L"Connection failed. Unable to create a SSLSocket!\n").toCharArray();
        return response;
    }
    if (!socket->connect(url, port)) // Connection failed
    {
        response = String(L"Connection failed. Possible reason: Wrong server URL or port.\n").toCharArray();
    }
    else // Connection succeeded
    {
        // Send request to server socket
        static const char * REQUEST = "GET / \r\n\r\n";
        static const int REQUEST_LEN = (const int)strlen(REQUEST);
        socket->send((void *)REQUEST, REQUEST_LEN);
        // Get the response from server
        response = _getResponse(socket);
        socket->shutDown();
        socket->close();
    }
    delete socket;
    return response;
}
// ============================================================================
wchar_t * WebpageFetcher::_getResponse(Socket * socket)
{
    static const int READSIZE = 1024; // Reading buffer size, the larger the better performance
    int responseBufferSize = READSIZE + 1;
    char * readBuf = new char[READSIZE];
    char * responseBuf = new char[responseBufferSize];
    int bytesReceived;
    int totalBytesReceived = 0;
    while ((bytesReceived = socket->recv(readBuf, READSIZE)) > 0)
    {
        // Check if we need to expand responseBuf
        if (totalBytesReceived + bytesReceived >= responseBufferSize) // Not enough capacity, expand the response buffer
        {
            responseBufferSize += READSIZE;
            char * tempBuf = new char[responseBufferSize];
            memcpy(tempBuf, responseBuf, totalBytesReceived);
            delete[] responseBuf;
            responseBuf = tempBuf; // Response buffer expanded
        }
        // Append data from readBuf
        memcpy(responseBuf + totalBytesReceived, readBuf, bytesReceived);
        totalBytesReceived += bytesReceived;
        responseBuf[totalBytesReceived] = '\0';
    }
    wchar_t * response = (wchar_t *)(totalBytesReceived == 0 ? // Generate the response as a C wide string
        String(L"Received nothing from server. Possible reason: Wrong port.\n").toCharArray() :
        StringUtil::charsToWchars(responseBuf));
    delete[] readBuf;
    delete[] responseBuf;
    return response;
}
I pass useSSL as true when calling the factory function Socket::createSocket(), so the socket I get is an SSLSocket instance, which overrides the default connect(), _send() and _recv() functions to let OpenSSL do the actual work. Here is the constructor of my SSLSocket class, which derives from Socket:
SSLSocket::SSLSocket(bool isServerSocket, int port, int socketType, int socketProtocol, int uOptions, wchar_t * strBindingAddress, wchar_t * cerPath, wchar_t * keyPath, wchar_t * keyPass) :
    Socket(isServerSocket, port, socketType, socketProtocol, uOptions, strBindingAddress)
{
    // Register the error strings
    SSL_load_error_strings();
    // Register the available ciphers and digests
    SSL_library_init();
    // Create an SSL_CTX structure by choosing an SSL/TLS protocol version
    this->_sslContext = SSL_CTX_new(SSLv23_method()); // Negotiate highest available SSL/TLS version
    // Create an SSL struct (client only; a server does not need one)
    this->_sslHandle = (this->_isServer ? NULL : SSL_new(this->_sslContext));
    bool success = false;
    if (!this->_isServer) // is a client socket
    {
        success = (this->_sslHandle != NULL);
    }
    else if (cerPath != NULL && keyPath != NULL) // is a server socket
    {
        success = ......
    }
    if (!success)
        this->close();
}
The following functions override the virtual functions in the parent Socket class, letting OpenSSL do the relevant work:
bool SSLSocket::connect(wchar_t * strDestination, int port, int timeout)
{
    SocketAddress socketAddress(strDestination, port);
    return this->connect(&socketAddress, timeout);
}

bool SSLSocket::connect(SocketAddress * sockAddress, int timeout)
{
    bool success =
        (this->_sslHandle != NULL &&
         Socket::connect(sockAddress, timeout) &&                  // Regular TCP connection
         SSL_set_fd(this->_sslHandle, (int)this->_hSocket) == 1 && // Connect the SSL struct to our connection
         SSL_connect(this->_sslHandle) == 1);                      // Initiate SSL handshake
    if (!success)
        this->close();
    return success;
}

int SSLSocket::_recv(void * lpBuffer, int size, int flags)
{
    MonitorLock cs(&_mutex);
    return SSL_read(this->_sslHandle, lpBuffer, size);
}

int SSLSocket::_send(const void * lpBuffer, int size, int flags)
{
    return SSL_write(this->_sslHandle, lpBuffer, size);
}

OpenLDAP - Enabling CRL check for LDAP TLS connections

I have a client that connects to an LDAP server using TLS. For this connection, I want to enable CRL checking and reject the connection only if any server/client certificate has been revoked.
In special cases (like a missing or expired CRL) I want to ignore the error and establish the connection anyway.
So I thought to override the default SSL verify callback to ignore those specific errors.
But my callback is not called at all; only the default callback is ever invoked.
Here is my callback:
static int verify_callback(int ok, X509_STORE_CTX *ctx)
{
    X509 *cert = X509_STORE_CTX_get_current_cert(ctx);
    if (ok)
        return ok;
    int sslRet = X509_STORE_CTX_get_error(ctx);
    const char *err = NULL;
    switch (sslRet)
    {
    case X509_V_ERR_UNABLE_TO_GET_CRL:
    case X509_V_ERR_CRL_HAS_EXPIRED:
    case X509_V_ERR_CRL_NOT_YET_VALID:
        printf("CRL: Verification failed... but ignored : %d\n", sslRet);
        return 1;
    default:
        err = X509_verify_cert_error_string(sslRet);
        if (err)
            printf("CRL: Failed to verify : %s\n", err);
        return 0;
    }
    return sslRet; /* unreachable: every switch branch returns */
}
The default verify callback is overridden using the LDAP connect-callback option:
void ldap_tls_cb(LDAP *ld, SSL *ssl, SSL_CTX *ctx, void *arg)
{
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, verify_callback);
    printf("verify callback is set...\n");
    return;
}
Main Program:
int main(int argc, char **argv)
{
    LDAP *ldap;
    int auth_method = LDAP_AUTH_SIMPLE; // LDAP_AUTH_SASL
    int ldap_version = LDAP_VERSION3;
    char *ldap_host = "10.104.40.35";
    int ldap_port = 389;

    if ((ldap = ldap_init(ldap_host, ldap_port)) == NULL) {
        perror("ldap_init failed");
        return (EXIT_FAILURE);
    }
    int result = ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION, &ldap_version);
    if (result != LDAP_OPT_SUCCESS) {
        ldap_perror(ldap, "ldap_set_option failed!");
        return (EXIT_FAILURE);
    }
    int requireCert = LDAP_OPT_X_TLS_DEMAND;
    result = ldap_set_option(NULL, LDAP_OPT_X_TLS_REQUIRE_CERT, &requireCert);
    if (result != LDAP_OPT_SUCCESS) {
        ldap_perror(ldap, "ldap_set_option - req cert - failed!");
        return (EXIT_FAILURE);
    }
    result = ldap_set_option(NULL, LDAP_OPT_X_TLS_CACERTFILE, "/etc/certs/Cert.pem");
    if (result != LDAP_OPT_SUCCESS) {
        ldap_perror(ldap, "ldap_set_option - cert file - failed!");
        return (EXIT_FAILURE);
    }
    int crlvalue = LDAP_OPT_X_TLS_CRL_ALL;
    result = ldap_set_option(NULL, LDAP_OPT_X_TLS_CRLCHECK, &crlvalue);
    if (result != LDAP_OPT_SUCCESS) {
        ldap_perror(ldap, "ldap_set_option failed!");
        return (EXIT_FAILURE);
    }
    int debug = 7;
    ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, &debug);
    result = ldap_set_option(ldap, LDAP_OPT_X_TLS_CONNECT_CB, (void *)ldap_tls_cb);
    if (result != LDAP_SUCCESS) {
        fprintf(stderr, "ldap_set_option(LDAP_OPT_X_TLS_CONNECT_CB): %s\n", ldap_err2string(result));
        return (1);
    }
    int msgidp = 0;
    result = ldap_start_tls(ldap, NULL, NULL, &msgidp);
    if (result != LDAP_OPT_SUCCESS) {
        ldap_perror(ldap, "start tls failed!");
        return result;
    } else {
        printf("Start tls success.\n");
    }
    LDAPMessage *resultm;
    struct timeval timeout;
    result = ldap_result(ldap, msgidp, 0, &timeout, &resultm);
    if (result == -1 || result == 0) {
        printf("ldap_result failed; retC=%d\n", result);
        return result;
    }
    result = ldap_parse_extended_result(ldap, resultm, NULL, NULL, 0);
    if (result == LDAP_SUCCESS) {
        result = ldap_install_tls(ldap);
        printf("installing tls... %s\n", ldap_err2string(result));
    }
    int request_id = 0;
    result = ldap_sasl_bind(ldap, "", LDAP_SASL_SIMPLE, NULL, 0, 0, &request_id);
    if (result != LDAP_SUCCESS) {
        fprintf(stderr, "ldap_x_bind_s: %s\n", ldap_err2string(result));
        printf("LDAP bind error .. %d\n", result);
        return (EXIT_FAILURE);
    } else {
        printf("LDAP connection successful.\n");
    }
    ldap_unbind(ldap);
    return (EXIT_SUCCESS);
}
Can someone help me figure out why my verify callback is not called?
I think you need to set the callback on the SSL object directly instead of on the context, so:
void ldap_tls_cb(LDAP *ld, SSL *ssl, SSL_CTX *ctx, void *arg)
{
    SSL_set_verify(ssl, SSL_VERIFY_PEER, verify_callback);
    printf("verify callback is set...\n");
    return;
}
The reason for this is that the SSL handle has already been initialised by the time your connect callback is called (see the OpenLDAP code), and
it's too late to set this callback through the context at that point:
If no special callback was set before, the default callback for the underlying ctx is used, that was valid at the time ssl was created with SSL_new(3).
OpenLDAP can be built with GnuTLS, so you may need to check that it's using OpenSSL before setting the callback. The LDAP_OPT_X_TLS_PACKAGE option could be used for this (note that I haven't tested this code):
char *package = NULL;
int result = ldap_get_option(NULL, LDAP_OPT_X_TLS_PACKAGE, (void *)&package);
if (result != LDAP_OPT_SUCCESS) {
    ldap_perror(ldap, "ldap_get_option failed!");
    return (EXIT_FAILURE);
} else {
    if (strcmp(package, "OpenSSL") == 0) {
        // Set your callback
    }
    ldap_memfree(package);
}

How to do ECDHE handshake without exportable private key

I'm building an OpenSSL engine that implements ECDSA_METHOD, which includes the signature creation and signature verification functions. Since the certificate's private key is only used for signature creation in an ECDHE handshake, exporting the key from the engine and presenting it anywhere else is not required.
However, if I don't supply the private key to the SSL context through the SSL_set_private_key function, the SSL handshake fails with the error below:
error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure
I've also tried providing a mock key (one that is not related to the public key in the cert) to the SSL_set_private_key function, but that function does verify that the private and public keys match, and throws a bad-certificate error if they don't.
It looks like OpenSSL allows bypassing this validation in some cases; for example, this is what I found in ssl/ssl_rsa.c:
#ifndef OPENSSL_NO_RSA
    /*
     * Don't check the public/private key, this is mostly for smart
     * cards.
     */
    if ((pkey->type == EVP_PKEY_RSA) &&
        (RSA_flags(pkey->pkey.rsa) & RSA_METHOD_FLAG_NO_CHECK)) ;
    else
#endif
    if (!X509_check_private_key(c->pkeys[i].x509, pkey)) {
        X509_free(c->pkeys[i].x509);
        c->pkeys[i].x509 = NULL;
        return 0;
    }
I think I need something similar for an EC key, but I didn't find it anywhere. Any other solutions are appreciated as well.
Any other solutions are appreciated as well.
This might not be the only option you have, but I think you can achieve what you are looking for by creating your own EVP_PKEY_METHOD and implementing its functions as required. That way, you can store a handle to your own (for example, smart-card based) key and invoke the proper sign methods at the right moment. You have to set the proper methods with the EVP_PKEY_meth_set_Xyz() functions, like EVP_PKEY_meth_set_sign(<yourSigningFunction>). For example, if you were using the Windows crypto API, you would invoke NCryptSignHash() from your signing function. That way, you do not have to export the private key from the Windows key store to obtain a signature.
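A minimal sketch of that wiring (the EVP_PKEY_meth_* calls are the real OpenSSL 1.0.2+ API, but hsm_sign_digest() and its key handling are hypothetical placeholders for your engine/smart-card code):

#include <openssl/evp.h>

/* Hypothetical: forwards the digest to the hardware signer. */
static int my_sign(EVP_PKEY_CTX *ctx, unsigned char *sig, size_t *siglen,
                   const unsigned char *tbs, size_t tbslen)
{
    return hsm_sign_digest(sig, siglen, tbs, tbslen); /* placeholder */
}

static int register_method(void)
{
    EVP_PKEY_METHOD *pmeth = EVP_PKEY_meth_new(EVP_PKEY_EC, 0);
    if (pmeth == NULL)
        return 0;
    EVP_PKEY_meth_set_sign(pmeth, NULL /* sign_init */, my_sign);
    return EVP_PKEY_meth_add0(pmeth); /* make it visible to EVP lookups */
}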
I have done this before, and the only big thing I ran into (apart from the lack of documentation and examples) was missing key-store functionality at the EVP level. There seems to be some work in progress, as you can see here. As a workaround, I had to select keys/certificates from a store as part of the key generation mechanism, which is not really what that mechanism is intended for.
If you decide to go this route, then be prepared for a few weeks of trial and error.
Here is how you can bypass the OpenSSL validation rules: provide an EC_KEY with the public key set equal to that of the public cert and the private key set to any non-zero value (in my example I simply set it equal to the X coordinate of the public key). After the key is created and stored in a file, it can be passed as a regular private key to the SSL context.
I think that, ideally, OpenSSL should address this issue in a more systematic and transparent way, but until that is done, the suggested solution can be used as a workaround:
#include <string.h>
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/x509v3.h>

static char *my_prog = "dummykey";
static char *key_file = NULL;
static char *cert_file = NULL;
int verbose = 0;

static void print_help() {
    fprintf(stderr, "Version: %s\nUSAGE: %s -cert in_cert_file -key out_key_file\n",
            VERSION, my_prog);
}

static void parse_args(int argc, char **argv) {
    argc--;
    argv++;
    while (argc >= 1) {
        if (!strcmp(*argv, "-key")) {
            key_file = *++argv;
            argc--;
        }
        else if (!strcmp(*argv, "-cert")) {
            cert_file = *++argv;
            argc--;
        }
        else if (!strcmp(*argv, "-v")) {
            verbose = 1;
        }
        else {
            fprintf(stderr, "%s: Invalid param: %s\n", my_prog, *argv);
            print_help();
            exit(1);
        }
        argc--;
        argv++;
    }
    if (key_file == NULL || cert_file == NULL) {
        print_help();
        exit(1);
    }
}

int get_curve_nid(X509 *c) {
    int ret = 0;
    if (c->cert_info->key->algor->parameter) {
        ASN1_TYPE *p = c->cert_info->key->algor->parameter;
        if (p && p->type == V_ASN1_OBJECT) {
            ret = OBJ_obj2nid(c->cert_info->key->algor->parameter->value.object);
        }
    }
    return ret;
}

int main(int argc, char **argv) {
    X509 *c = NULL;
    FILE *fp = NULL;
    FILE *ofp = NULL;
    EC_POINT *ec_point = NULL;
    BIGNUM *x = NULL;
    BIGNUM *y = NULL;
    EC_KEY *ec_key = NULL;
    EC_GROUP *grp = NULL;

    parse_args(argc, argv);
    fp = fopen(cert_file, "r");
    if (!fp) {
        fprintf(stderr, "%s: Can't open %s\n", my_prog, cert_file);
        return 1;
    }
    c = PEM_read_X509(fp, NULL, (int (*)()) 0, (void *) 0);
    if (c) {
        x = BN_new();
        y = BN_new();
        /* The public key is an uncompressed EC point: 0x04 || X || Y */
        int len = c->cert_info->key->public_key->length - 1;
        BN_bin2bn(c->cert_info->key->public_key->data + 1, len / 2, x);
        BN_bin2bn(c->cert_info->key->public_key->data + 1 + len / 2, len / 2, y);
        grp = EC_GROUP_new_by_curve_name(get_curve_nid(c)); /* assign the outer grp so it is freed below */
        ec_key = EC_KEY_new();
        int sgrp = EC_KEY_set_group(ec_key, grp);
        int sprk = EC_KEY_set_private_key(ec_key, x); /* "private" key := X coordinate */
        if (sgrp && sprk) {
            ec_point = EC_POINT_new(grp);
            int ac = EC_POINT_set_affine_coordinates_GFp(grp, ec_point, x, y, BN_CTX_new());
            int spub = EC_KEY_set_public_key(ec_key, ec_point);
            ofp = fopen(key_file, "w");
            int r = 0;
            if (ofp) {
                r = PEM_write_ECPrivateKey(ofp, ec_key, NULL, NULL, 0, NULL, NULL);
                if (!r)
                    fprintf(stderr, "%s: Can't write EC key %p to %s\n", my_prog, ec_key, key_file);
            }
            else {
                fprintf(stderr, "%s: Can't open %s\n", my_prog, key_file);
            }
        }
    }
    if (ec_key)
        EC_KEY_free(ec_key);
    if (grp)
        EC_GROUP_free(grp);
    if (x)
        BN_free(x);
    if (y)
        BN_free(y);
    if (c)
        X509_free(c);
    if (fp)
        fclose(fp);
    if (ofp)
        fclose(ofp);
    return 0;
}

RSA public key decryption on OS X using SecTransform API (or other system API)

I'm trying to replace my use of OpenSSL, which was deprecated long ago and has been removed from the 10.11 SDK, with the Security Transform API. My use of OpenSSL is simply for license key verification. The problem I've run into is that license keys are generated (server side) using OpenSSL's RSA_private_encrypt() function, rather than the (probably more appropriate) RSA_sign(). In the current OpenSSL code, I verify them using RSA_public_decrypt() like so:
int decryptedSize = RSA_public_decrypt([signature length], [signature bytes], checkDigest, rsaKey, RSA_PKCS1_PADDING);
BOOL success = [[NSData dataWithBytes:checkDigest length:decryptedSize] isEqualToData:[digest sha1Hash]];
Unfortunately, I'm unable to replicate this using the SecTransform APIs. I have the following:
SecTransformRef decryptor = CFAutorelease(SecDecryptTransformCreate(pubKey, &error));
if (error) { showSecError(error); return NO; }
SecTransformSetAttribute(decryptor, kSecTransformInputAttributeName, (CFDataRef)signatureData, &error);
if (error) { showSecError(error); return NO; }
CFDataRef result = SecTransformExecute(decryptor, &error);
if (error) { showSecError(error); return NO; }
return CFEqual(result, (CFDataRef)[digest sha1Hash]);
The call to SecTransformExecute() fails with a CSSMERR_CSP_INVALID_KEY_CLASS error.
Am I missing something, or is there no equivalent to OpenSSL's RSA_public_decrypt() in Security.framework? Perhaps a SecVerifyTransform can be used (I have been unable to get this to work either, but then the same is true of OpenSSL's RSA_sign()). I am certainly willing to use another system API (e.g. CDSA/CSSM) if it will enable me to do this.
Unfortunately, as this code needs to verify existing license codes, I cannot simply change my license generation code to use RSA_sign() or similar instead.
I figured out how to do this using CDSA/CSSM. Code below:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
NSData *ORSDecryptDataWithPublicKey(NSData *dataToDecrypt, SecKeyRef publicKey)
{
    const CSSM_KEY *cssmPubKey = NULL;
    SecKeyGetCSSMKey(publicKey, &cssmPubKey);
    CSSM_CSP_HANDLE handle;
    SecKeyGetCSPHandle(publicKey, &handle);
    CSSM_DATA inputData = {
        .Data = (uint8_t *)[dataToDecrypt bytes],
        .Length = [dataToDecrypt length],
    };
    CSSM_DATA outputData = {
        .Data = NULL,
        .Length = 0,
    };
    CSSM_ACCESS_CREDENTIALS credentials;
    memset(&credentials, 0, sizeof(CSSM_ACCESS_CREDENTIALS));

    CSSM_CC_HANDLE contextHandle;
    CSSM_RETURN result = CSSM_CSP_CreateAsymmetricContext(handle, cssmPubKey->KeyHeader.AlgorithmId, &credentials, cssmPubKey, CSSM_PADDING_PKCS1, &contextHandle);
    if (result) { NSLog(@"Error creating CSSM context: %i", result); return nil; }

    CSSM_CONTEXT_ATTRIBUTE modeAttribute = {
        .AttributeType = CSSM_ATTRIBUTE_MODE,
        .AttributeLength = sizeof(UInt32),
        .Attribute.Uint32 = CSSM_ALGMODE_PUBLIC_KEY,
    };
    result = CSSM_UpdateContextAttributes(contextHandle, 1, &modeAttribute);
    if (result) { NSLog(@"Error setting CSSM context mode: %i", result); return nil; }

    CSSM_SIZE numBytesDecrypted = 0;
    CSSM_DATA remData = {
        .Data = NULL,
        .Length = 0,
    };
    result = CSSM_DecryptData(contextHandle, &inputData, 1, &outputData, 1, &numBytesDecrypted, &remData);
    if (result) { NSLog(@"Error decrypting data using CSSM: %i", result); return nil; }
    CSSM_DeleteContext(contextHandle);

    outputData.Length = numBytesDecrypted;
    return [NSData dataWithBytesNoCopy:outputData.Data length:outputData.Length freeWhenDone:YES];
}
#pragma clang diagnostic pop
Note that as documented here, while CDSA is deprecated, Apple recommends its use "if none of the other cryptographic service APIs support what you are trying to do". I have filed radar #23063471 asking for this functionality to be added to Security.framework.

openssl header ssl

Is there an additional header that OpenSSL adds before sending the message to the socket?
Thanks
I assume you're talking about TLS ("Secured TCP").
Then yes. Once the handshake between client and server is done, each "data" message starts with a record header: one content-type byte, two protocol-version bytes, and a two-byte length. That header is what tells the TLS layer that the frame is ciphered. (The function below only inspects the first 3 bytes.)
On the other hand, you cannot assume that the size of a ciphered frame will be the same as that of the raw frame/data.
Here is an example function in C/C++:
bool isCiphered(const char* buf, size_t buflen)
{
    if (buflen < 3)
    {
        return false;
    }
    uint8_t c = buf[0];
    switch (c)
    {
        // TLS record content types: 0x14 ChangeCipherSpec, 0x15 Alert,
        // 0x16 Handshake, 0x17 ApplicationData
        case 0x14:
        case 0x15:
        case 0x16:
        case 0x17:
        {
            uint8_t v1 = buf[1];
            uint8_t v2 = buf[2];
            /* TLS v1 */
            if ((v1 == 0x03) && (v2 == 0x01))
            {
                return true;
            }
            /* DTLS v1 */
            if ((v1 == 0xfe) && (v2 == 0xff))
            {
                return true;
            }
            break;
        }
    }
    return false;
}
I had to adapt my existing code, so I'm not sure that it compiles, but you should get the idea.
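Following on from the point that ciphered and raw sizes differ: the actual record length is carried in the header itself. A small sketch (assuming a complete 5-byte TLS record header is available in buf):

size_t tlsRecordLength(const unsigned char* buf, size_t buflen)
{
    if (buflen < 5)
    {
        return 0; // header incomplete
    }
    // Bytes 3-4 hold the payload length, big-endian
    return ((size_t)buf[3] << 8) | buf[4];
}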