Specify padding for creating a digital signature on OS X - Objective-C

On iOS there is the SecKeyRawSign() method for generating a signature, which lets you specify the padding type:
OSStatus SecKeyRawSign (
    SecKeyRef key,
    SecPadding padding,
    const uint8_t *dataToSign,
    size_t dataToSignLen,
    uint8_t *sig,
    size_t *sigLen
);
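A minimal sketch of a typical call (untested; the private key and the precomputed SHA-256 digest are assumed to come from elsewhere):
#include <Security/Security.h>

// Untested sketch: sign a precomputed SHA-256 digest (32 bytes) on iOS with
// PKCS#1 v1.5 padding. `privateKey` and `digest` are assumed to already exist.
static OSStatus SignSHA256Digest(SecKeyRef privateKey,
                                 const uint8_t *digest, size_t digestLen,
                                 uint8_t *sig, size_t *sigLen)
{
    return SecKeyRawSign(privateKey,
                         kSecPaddingPKCS1SHA256, // input is a SHA-256 hash
                         digest, digestLen,      // digestLen should be 32
                         sig, sigLen);           // in: buffer size, out: signature size
}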
On OS X this approach doesn't work, so I'm using Security Transforms instead, as described here: https://developer.apple.com/library/mac/#documentation/Security/Conceptual/SecTransformPG/SigningandVerifying/SigningandVerifying.html#//apple_ref/doc/uid/TP40010801-CH4-SW3
/* Create the transform objects */
signer = SecSignTransformCreate(privatekey, &error);
if (error) { CFShow(error); exit(-1); }

SecTransformSetAttribute(
    signer,
    kSecTransformInputAttributeName,
    sourceData,
    &error);
if (error) { CFShow(error); exit(-1); }

signature = SecTransformExecute(signer, &error);
if (error) { CFShow(error); exit(-1); }

if (!signature) {
    fprintf(stderr, "Signature is NULL!\n");
    exit(-1);
}
Is there any way to set the padding here? Please provide an example if it's possible.
On iOS, the Security framework defines kSecPaddingPKCS1SHA256 for specifying SHA-256 padding. On OS X there is no such constant, so what is the equivalent to use? I need to set the padding for SHA-256.
Thanks!

This is untested, experimental code, but you need to specify PKCS#1 padding, that you're using a SHA-2 digest type, and that the digest length is 256:
SecTransformSetAttribute(
    signer,
    kSecPaddingKey,
    kSecPaddingPKCS1Key,
    &error);
if (error) { CFShow(error); exit(-1); }

SecTransformSetAttribute(
    signer,
    kSecDigestTypeAttribute,
    kSecDigestSHA2,
    &error);
if (error) { CFShow(error); exit(-1); }

int digestLength = 256;
CFNumberRef dLen = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &digestLength);
Boolean set = SecTransformSetAttribute(
    signer,
    kSecDigestLengthAttribute,
    dLen,
    &error);
CFRelease(dLen);
if (!set || error) { CFShow(error); exit(-1); }
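For completeness, the verification side would presumably use the same attributes on a verify transform; a minimal, untested sketch (the public key, the signed data, and the signature are assumed to already exist as CF objects):
#include <Security/Security.h>

// Untested sketch: verify a PKCS#1 / SHA-256 signature using the same
// attributes as the signing transform above. `publicKey`, `sourceData`
// (the signed bytes) and `signature` are assumed to exist already.
static Boolean VerifySHA256Signature(SecKeyRef publicKey,
                                     CFDataRef sourceData, CFDataRef signature)
{
    CFErrorRef error = NULL;
    SecTransformRef verifier = SecVerifyTransformCreate(publicKey, signature, &error);
    if (!verifier) { if (error) CFShow(error); return false; }

    SecTransformSetAttribute(verifier, kSecTransformInputAttributeName, sourceData, &error);
    if (!error) SecTransformSetAttribute(verifier, kSecPaddingKey, kSecPaddingPKCS1Key, &error);
    if (!error) SecTransformSetAttribute(verifier, kSecDigestTypeAttribute, kSecDigestSHA2, &error);

    int digestLength = 256;
    CFNumberRef dLen = CFNumberCreate(kCFAllocatorDefault, kCFNumberIntType, &digestLength);
    if (!error) SecTransformSetAttribute(verifier, kSecDigestLengthAttribute, dLen, &error);
    CFRelease(dLen);

    // SecTransformExecute returns kCFBooleanTrue for a valid signature.
    CFTypeRef result = error ? NULL : SecTransformExecute(verifier, &error);
    if (error) CFShow(error);

    Boolean valid = (result == kCFBooleanTrue);
    if (result) CFRelease(result);
    CFRelease(verifier);
    return valid;
}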

Related

Usage difference between SSL_add0_chain_cert and SSL_add1_chain_cert?

The OpenSSL documentation says:
All these functions are implemented as macros. Those containing a 1 increment the reference count of the supplied certificate or chain so it must be freed at some point after the operation. Those containing a 0 do not increment reference counts and the supplied certificate or chain MUST NOT be freed after the operation.
But when I looked at examples of which one should be used where, I got confused.
First, OpenSSL itself:
It uses SSL_add0_chain_cert in the implementation of SSL_CTX_use_certificate_chain_file in ssl_rsa.c. Here is the source:
static int use_certificate_chain_file(SSL_CTX *ctx, SSL *ssl, const char *file) {
    if (ctx)
        ret = SSL_CTX_use_certificate(ctx, x);
    else
        ret = SSL_use_certificate(ssl, x);
    ......
    while ((ca = PEM_read_bio_X509(in, NULL, passwd_callback,
                                   passwd_callback_userdata))
           != NULL) {
        if (ctx)
            r = SSL_CTX_add0_chain_cert(ctx, ca);
        else
            r = SSL_add0_chain_cert(ssl, ca);
        ......
    }
The second usage I see is in OpenResty's Lua module:
It uses SSL_add0_chain_cert in one way of setting the certificate (ngx_http_lua_ffi_ssl_set_der_certificate), see here:
int ngx_http_lua_ffi_ssl_set_der_certificate(ngx_http_request_t *r,
    const char *data, size_t len, char **err) {
    ......
    if (SSL_use_certificate(ssl_conn, x509) == 0) {
        *err = "SSL_use_certificate() failed";
        goto failed;
    }
    ......
    while (!BIO_eof(bio)) {
        x509 = d2i_X509_bio(bio, NULL);
        if (x509 == NULL) {
            *err = "d2i_X509_bio() failed";
            goto failed;
        }
        if (SSL_add0_chain_cert(ssl_conn, x509) == 0) {
            *err = "SSL_add0_chain_cert() failed";
            goto failed;
        }
    }

    BIO_free(bio);
    *err = NULL;
    return NGX_OK;

failed:
    .......
}
Yet it uses SSL_add1_chain_cert in another way (ngx_http_lua_ffi_set_cert), see here:
int ngx_http_lua_ffi_set_cert(ngx_http_request_t *r,
    void *cdata, char **err) {
    ......
    if (SSL_use_certificate(ssl_conn, x509) == 0) {
        *err = "SSL_use_certificate() failed";
        goto failed;
    }

    x509 = NULL;

    /* read rest of the chain */
    for (i = 1; i < sk_X509_num(chain); i++) {
        x509 = sk_X509_value(chain, i);
        if (x509 == NULL) {
            *err = "sk_X509_value() failed";
            goto failed;
        }
        if (SSL_add1_chain_cert(ssl_conn, x509) == 0) {
            *err = "SSL_add1_chain_cert() failed";
            goto failed;
        }
    }

    *err = NULL;
    return NGX_OK; /* No free of x509 here */

failed:
    ......
}
Yet I don't see a clear difference in what changes between these two calls in the Lua code, and the X509 cert, once set successfully, doesn't seem to get freed in either case. From my understanding of the OpenSSL docs, I would expect X509_free(x509) to be called somewhere after SSL_add1_chain_cert is called on that x509. Is that the correct understanding?
Lastly, the OpenSSL implementation of ssl_cert_add1_chain_cert (which the SSL_add1_chain_cert macro boils down to) does indeed show that it is just a wrapper around ssl_cert_add0_chain_cert that increments the reference count on the cert, but how should that be reflected in the calling code?
int ssl_cert_add1_chain_cert(SSL *s, SSL_CTX *ctx, X509 *x)
{
    if (!ssl_cert_add0_chain_cert(s, ctx, x))
        return 0;
    X509_up_ref(x);
    return 1;
}
Nginx itself only uses another function, SSL_CTX_add_extra_chain_cert, which sidesteps this choice because it does not switch certificates on a per-SSL-connection basis. In my case I need to patch Nginx with that capability, switching the certificate per connection (but without using Lua).
So which one should I be using, SSL_add0_chain_cert or SSL_add1_chain_cert? And what is the correct freeing practice in each case?
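For illustration, here is a minimal, untested sketch of the two ownership conventions implied by the documentation quoted above (only public OpenSSL calls; error handling reduced to return codes):
#include <openssl/ssl.h>
#include <openssl/x509.h>

/* Untested sketch of the two ownership conventions. */

/* SSL_add0_chain_cert: hand your reference over; do not free it afterwards. */
static int chain_cert_add0(SSL *ssl, X509 *ca)
{
    if (SSL_add0_chain_cert(ssl, ca) != 1)
        return 0;
    /* `ca` now belongs to `ssl`; the caller must NOT call X509_free(ca). */
    return 1;
}

/* SSL_add1_chain_cert: the SSL object takes its own extra reference, so the
 * caller's reference remains the caller's responsibility. This is the natural
 * choice when the cert also lives somewhere else, e.g. in a STACK_OF(X509)
 * that is freed as a whole later, which would explain why
 * ngx_http_lua_ffi_set_cert has no per-cert X509_free. */
static int chain_cert_add1(SSL *ssl, X509 *ca)
{
    if (SSL_add1_chain_cert(ssl, ca) != 1)
        return 0;
    /* Release the caller's own reference once the caller no longer needs the
     * cert itself; here that is immediately. */
    X509_free(ca);
    return 1;
}
In short: with add0 you hand over your reference and free nothing; with add1 the SSL object keeps its own reference, so whoever owns your reference (you directly, or the containing chain) still has to release it eventually.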

OpenLDAP - Enabling CRL check for LDAP TLS connections

I have a client that connects to an LDAP server using TLS. For this connection, I want to enable CRL checking and reject the connection only if any server/client certificate has been revoked.
In special cases (such as a missing or expired CRL) I want to ignore the error and establish the connection.
So I thought I would override the default SSL verify callback to ignore those specific errors.
But my callback is not called at all; only the default callback is ever invoked.
Here is my callback:
static int verify_callback(int ok, X509_STORE_CTX *ctx)
{
    X509* cert = X509_STORE_CTX_get_current_cert(ctx);
    if (ok)
        return ok;

    int sslRet = X509_STORE_CTX_get_error(ctx);
    const char* err = NULL;

    switch (sslRet)
    {
        case X509_V_ERR_UNABLE_TO_GET_CRL:
        case X509_V_ERR_CRL_HAS_EXPIRED:
        case X509_V_ERR_CRL_NOT_YET_VALID:
            printf( "CRL: Verification failed... but ignored : %d\n", sslRet);
            return 1;
        default:
            err = X509_verify_cert_error_string(sslRet);
            if (err)
                printf( "CRL: Failed to verify : %s\n", err);
            return 0;
    }
    return sslRet;
}
The default verify callback is overridden using the LDAP connect-callback option:
void ldap_tls_cb(LDAP * ld, SSL * ssl, SSL_CTX * ctx, void * arg)
{
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER , verify_callback);
    printf("verify call back is set...\n");
    return;
}
Main Program:
int main( int argc, char **argv )
{
    LDAP *ldap;
    int auth_method = LDAP_AUTH_SIMPLE; //LDAP_AUTH_SASL
    int ldap_version = LDAP_VERSION3;
    char *ldap_host = "10.104.40.35";
    int ldap_port = 389;

    if ( (ldap = ldap_init(ldap_host, ldap_port)) == NULL ) {
        perror( "ldap_init failed" );
        return( EXIT_FAILURE );
    }

    int result = ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION, &ldap_version);
    if (result != LDAP_OPT_SUCCESS ) {
        ldap_perror(ldap, "ldap_set_option failed!");
        return(EXIT_FAILURE);
    }

    int requireCert = LDAP_OPT_X_TLS_DEMAND;
    result = ldap_set_option(NULL, LDAP_OPT_X_TLS_REQUIRE_CERT, &requireCert);
    if (result != LDAP_OPT_SUCCESS ) {
        ldap_perror(ldap, "ldap_set_option - req cert -failed!");
        return(EXIT_FAILURE);
    }

    result = ldap_set_option(NULL, LDAP_OPT_X_TLS_CACERTFILE, "/etc/certs/Cert.pem");
    if (result != LDAP_OPT_SUCCESS ) {
        ldap_perror(ldap, "ldap_set_option - cert file - failed!");
        return(EXIT_FAILURE);
    }

    int crlvalue = LDAP_OPT_X_TLS_CRL_ALL;
    result = ldap_set_option(NULL, LDAP_OPT_X_TLS_CRLCHECK, &crlvalue);
    if (result != LDAP_OPT_SUCCESS ) {
        ldap_perror(ldap, "ldap_set_option failed!");
        return(EXIT_FAILURE);
    }

    int debug = 7;
    ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, &debug);

    result = ldap_set_option(ldap, LDAP_OPT_X_TLS_CONNECT_CB, (void *)ldap_tls_cb);
    if (result != LDAP_SUCCESS) {
        fprintf(stderr, "ldap_set_option(LDAP_OPT_X_TLS_CONNECT_CB): %s\n", ldap_err2string(result));
        return(1);
    }

    int msgidp = 0;
    result = ldap_start_tls(ldap,NULL,NULL,&msgidp);
    if (result != LDAP_OPT_SUCCESS ) {
        ldap_perror(ldap, "start tls failed!");
        return result;
    } else {
        printf("Start tls success.\n");
    }

    LDAPMessage *resultm;
    struct timeval timeout;
    result = ldap_result(ldap, msgidp, 0, &timeout, &resultm );
    if ( result == -1 || result == 0 ) {
        printf("ldap_result failed;retC=%d \n", result);
        return result;
    }

    result = ldap_parse_extended_result(ldap, resultm, NULL, NULL, 0 );
    if ( result == LDAP_SUCCESS ) {
        result = ldap_install_tls (ldap);
        printf("installing tls... %s\n", ldap_err2string(result));
    }

    int request_id = 0;
    result = ldap_sasl_bind(ldap, "", LDAP_SASL_SIMPLE, NULL, 0, 0, &request_id);
    if ( result != LDAP_SUCCESS ) {
        fprintf(stderr, "ldap_x_bind_s: %s\n", ldap_err2string(result));
        printf("LDAP bind error .. %d\n", result);
        return(EXIT_FAILURE);
    } else {
        printf("LDAP connection successful.\n");
    }

    ldap_unbind(ldap);
    return(EXIT_SUCCESS);
}
Can someone help me figure out why my verify callback is not called?
I think you need to set the callback on the SSL object directly instead of on the context, so:
void ldap_tls_cb(LDAP * ld, SSL * ssl, SSL_CTX * ctx, void * arg)
{
    SSL_set_verify(ssl, SSL_VERIFY_PEER, verify_callback);
    printf("verify call back is set...\n");
    return;
}
The reason for this is that the SSL handle has already been initialised by the time your connect callback is called (see the OpenLDAP code), and
it's too late to set this callback through the context at that point:
If no special callback was set before, the default callback for the underlying ctx is used, that was valid at the time ssl was created with SSL_new(3).
OpenLDAP can be built with GnuTLS, so you may need to check that it's using OpenSSL before setting the callback. The LDAP_OPT_X_TLS_PACKAGE option could be used for this (note that I haven't tested this code):
char* package = NULL;
int result = ldap_get_option(NULL, LDAP_OPT_X_TLS_PACKAGE, (void *)&package);
if (result != LDAP_OPT_SUCCESS) {
    ldap_perror(ldap, "ldap_get_option failed!");
    return(EXIT_FAILURE);
} else {
    if (strcmp(package, "OpenSSL") == 0) {
        // Set your callback
    }
    ldap_memfree(package);
}

How to use io_control on a Boost.Asio socket with a custom command

I'm trying to reproduce an ioctl call in C++ style using the Boost.Asio library.
Here is my C-style code:
int sockfd;
char * id;
struct iwreq wreq;

memset(&wreq, 0, sizeof(struct iwreq));
sprintf(wreq.ifr_name, IW_INTERFACE);

if((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
    fprintf(stderr, "Cannot open socket \n");
    fprintf(stderr, "errno = %d \n", errno);
    fprintf(stderr, "Error description is : %s\n",strerror(errno));
    exit(1);
}
printf("Socket opened successfully \n");

id = malloc(IW_ESSID_MAX_SIZE+1);
wreq.u.essid.pointer = id;

if (ioctl(sockfd, SIOCGIWESSID, &wreq)) {
    fprintf(stderr, "Get ESSID ioctl failed \n");
    fprintf(stderr, "errno = %d \n", errno);
    fprintf(stderr, "Error description : %s\n",strerror(errno));
    exit(2);
}
printf("IOCTL Successfull\n");
printf("ESSID is %s\n", wreq.u.essid.pointer);
I found a relevant example, but I'm not clear on how to use it correctly.
Main function:
boost::asio::ip::udp::socket socket(io_service);

struct iwreq wreq;
memset(&wreq, 0, sizeof(struct iwreq));
sprintf(wreq.ifr_name, IW_INTERFACE);
id = malloc(IW_ESSID_MAX_SIZE+1);
wreq.u.essid.pointer = id;

boost::asio::detail::io_control::myCommand command;
command.set(&wreq);
boost::system::error_code ec;
socket.io_control(command, ec);
if (ec)
{
    // An error occurred.
}
Custom command:
#include <boost/asio/detail/config.hpp>
#include <cstddef>
#include <boost/config.hpp>
#include <boost/asio/detail/socket_types.hpp>
#include <boost/asio/detail/push_options.hpp>

namespace boost {
namespace asio {
namespace detail {
namespace io_control {

// I/O control command for getting number of bytes available.
class myCommand
{
public:
  // Default constructor.
  myCommand()
    : value_(0)
  {
  }

  // Get the name of the IO control command.
  int name() const
  {
    return static_cast<int>(SIOCGIWESSID);
  }

  // Set the value of the I/O control command.
  void set(struct iwreq* value)
  {
    value_ = static_cast<detail::ioctl_arg_type>(value);
  }

  // Get the current value of the I/O control command.
  std::size_t get() const
  {
    return static_cast<struct iwreq*>(value_);
  }

  // Get the address of the command data.
  detail::ioctl_arg_type* data()
  {
    return &value_;
  }

  // Get the address of the command data.
  const detail::ioctl_arg_type* data() const
  {
    return &value_;
  }

private:
  detail::ioctl_arg_type value_;
};

} // namespace io_control
} // namespace detail
} // namespace asio
} // namespace boost
However, the code does not work.
If you have any example code or solution, please let me know.
Thank you.
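One possible direction, as an untested sketch (not from the original thread): Boost.Asio exposes the underlying descriptor via native_handle() (Boost >= 1.47), so the SIOCGIWESSID ioctl from the C code above can be issued directly on that descriptor, sidestepping the custom io_control command entirely. The socket must already be open (e.g. socket.open(boost::asio::ip::udp::v4())), and the interface name parameter stands in for the question's IW_INTERFACE macro.
#include <boost/asio.hpp>
#include <linux/wireless.h>
#include <sys/ioctl.h>
#include <cstdio>
#include <cstring>
#include <string>

// Untested sketch: read the ESSID via a plain ioctl() on the socket's native
// descriptor instead of a custom io_control command. `ifname` is the wireless
// interface name.
std::string get_essid(boost::asio::ip::udp::socket& sock, const char* ifname)
{
    struct iwreq wreq;
    std::memset(&wreq, 0, sizeof(wreq));
    std::snprintf(wreq.ifr_name, IFNAMSIZ, "%s", ifname);

    char essid[IW_ESSID_MAX_SIZE + 1] = {0};
    wreq.u.essid.pointer = essid;
    wreq.u.essid.length = sizeof(essid) - 1;

    if (::ioctl(sock.native_handle(), SIOCGIWESSID, &wreq) == -1) {
        std::perror("SIOCGIWESSID");
        return std::string();
    }
    return std::string(essid);
}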

RSA public key decryption on OS X using SecTransform API (or other system API)

I'm trying to replace my use of OpenSSL, which was deprecated long ago and has been removed from the 10.11 SDK, with the Security Transform API. My use of OpenSSL is simply for license key verification. The problem I've run into is that license keys are generated (server side) using OpenSSL's RSA_private_encrypt() function, rather than the (probably more appropriate) RSA_sign(). In the current OpenSSL code, I verify them using RSA_public_decrypt(), like so:
int decryptedSize = RSA_public_decrypt([signature length], [signature bytes], checkDigest, rsaKey, RSA_PKCS1_PADDING);
BOOL success = [[NSData dataWithBytes:checkDigest length:decryptedSize] isEqualToData:[digest sha1Hash]];
Unfortunately, I'm unable to replicate this using the SecTransform APIs. I have the following:
SecTransformRef decryptor = CFAutorelease(SecDecryptTransformCreate(pubKey, &error));
if (error) { showSecError(error); return NO; }
SecTransformSetAttribute(decryptor, kSecTransformInputAttributeName, (CFDataRef)signatureData, &error);
if (error) { showSecError(error); return NO; }
CFDataRef result = SecTransformExecute(decryptor, &error);
if (error) { showSecError(error); return NO; }
return CFEqual(result, (CFDataRef)[digest sha1Hash]);
The call to SecTransformExecute() fails with a CSSMERR_CSP_INVALID_KEY_CLASS error.
Am I missing something, or is there no equivalent to OpenSSL's RSA_public_decrypt() in Security.framework? Perhaps a SecVerifyTransform can be used (I have been unable to get this to work either, but then the same is true of OpenSSL's RSA_sign()). I am certainly willing to use another system API (e.g. CDSA/CSSM) if it will enable me to do this.
Unfortunately, as this code needs to verify existing license codes, I can not simply change my license generation code to use RSA_sign() or similar instead.
I figured out how to do this using CDSA/CSSM. Code below:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"

NSData *ORSDecryptDataWithPublicKey(NSData *dataToDecrypt, SecKeyRef publicKey)
{
    const CSSM_KEY *cssmPubKey = NULL;
    SecKeyGetCSSMKey(publicKey, &cssmPubKey);

    CSSM_CSP_HANDLE handle;
    SecKeyGetCSPHandle(publicKey, &handle);

    CSSM_DATA inputData = {
        .Data = (uint8_t *)[dataToDecrypt bytes],
        .Length = [dataToDecrypt length],
    };
    CSSM_DATA outputData = {
        .Data = NULL,
        .Length = 0,
    };

    CSSM_ACCESS_CREDENTIALS credentials;
    memset(&credentials, 0, sizeof(CSSM_ACCESS_CREDENTIALS));

    CSSM_CC_HANDLE contextHandle;
    CSSM_RETURN result = CSSM_CSP_CreateAsymmetricContext(handle, cssmPubKey->KeyHeader.AlgorithmId, &credentials, cssmPubKey, CSSM_PADDING_PKCS1, &contextHandle);
    if (result) { NSLog(@"Error creating CSSM context: %i", result); return nil; }

    CSSM_CONTEXT_ATTRIBUTE modeAttribute = {
        .AttributeType = CSSM_ATTRIBUTE_MODE,
        .AttributeLength = sizeof(UInt32),
        .Attribute.Uint32 = CSSM_ALGMODE_PUBLIC_KEY,
    };
    result = CSSM_UpdateContextAttributes(contextHandle, 1, &modeAttribute);
    if (result) { NSLog(@"Error setting CSSM context mode: %i", result); return nil; }

    CSSM_SIZE numBytesDecrypted = 0;
    CSSM_DATA remData = {
        .Data = NULL,
        .Length = 0,
    };
    result = CSSM_DecryptData(contextHandle, &inputData, 1, &outputData, 1, &numBytesDecrypted, &remData);
    if (result) { NSLog(@"Error decrypting data using CSSM: %i", result); return nil; }

    CSSM_DeleteContext(contextHandle);

    outputData.Length = numBytesDecrypted;
    return [NSData dataWithBytesNoCopy:outputData.Data length:outputData.Length freeWhenDone:YES];
}

#pragma clang diagnostic pop
Note that as documented here, while CDSA is deprecated, Apple recommends its use "if none of the other cryptographic service APIs support what you are trying to do". I have filed radar #23063471 asking for this functionality to be added to Security.framework.

Resolving SRV records with iOS SDK

I want to resolve DNS SRV records using the iOS SDK.
I've already tried the high-level Bonjour APIs Apple is providing, but they're not what I need. Now I'm using DNS SD.
void *processQueryForSRVRecord(void *record) {
    DNSServiceRef sdRef;
    int context;

    printf("Setting up query for record: %s\n", record);
    DNSServiceQueryRecord(&sdRef, 0, 0, record, kDNSServiceType_SRV, kDNSServiceClass_IN, callback, &context);

    printf("Processing query for record: %s\n", record);
    DNSServiceProcessResult(sdRef);

    printf("Deallocating query for record: %s\n", record);
    DNSServiceRefDeallocate(sdRef);
    return NULL;
}
This works as long as it only gets correct SRV records (for example: _xmpp-server._tcp.gmail.com), but when the record is mistyped, DNSServiceProcessResult(sdRef) goes into an infinite loop.
Is there a way to stop DNSServiceProcessResult or must I cancel the thread calling it?
Use good old select(). This is what I have at the moment:
- (void)updateDnsRecords
{
    if (self.dnsUpdatePending == YES)
    {
        return;
    }
    else
    {
        self.dnsUpdatePending = YES;
    }

    NSLog(@"DNS update");

    DNSServiceRef sdRef;
    DNSServiceErrorType err;

    const char* host = [self.dnsHost UTF8String];
    if (host != NULL)
    {
        NSTimeInterval remainingTime = self.dnsUpdateTimeout;
        NSDate* startTime = [NSDate date];

        err = DNSServiceQueryRecord(&sdRef, 0, 0,
                                    host,
                                    kDNSServiceType_SRV,
                                    kDNSServiceClass_IN,
                                    processDnsReply,
                                    &remainingTime);

        // This is necessary so we don't hang forever if there are no results
        int dns_sd_fd = DNSServiceRefSockFD(sdRef);
        int nfds = dns_sd_fd + 1;
        fd_set readfds;
        int result;

        while (remainingTime > 0)
        {
            FD_ZERO(&readfds);
            FD_SET(dns_sd_fd, &readfds);

            struct timeval tv;
            tv.tv_sec = (time_t)remainingTime;
            tv.tv_usec = (remainingTime - tv.tv_sec) * 1000000;

            result = select(nfds, &readfds, (fd_set*)NULL, (fd_set*)NULL, &tv);
            if (result == 1)
            {
                if (FD_ISSET(dns_sd_fd, &readfds))
                {
                    err = DNSServiceProcessResult(sdRef);
                    if (err != kDNSServiceErr_NoError)
                    {
                        NSLog(@"There was an error reading the DNS SRV records.");
                        break;
                    }
                }
            }
            else if (result == 0)
            {
                NBLog(@"DNS SRV select() timed out");
                break;
            }
            else
            {
                if (errno == EINTR)
                {
                    NBLog(@"DNS SRV select() interrupted, retry.");
                }
                else
                {
                    NBLog(@"DNS SRV select() returned %d errno %d %s.", result, errno, strerror(errno));
                    break;
                }
            }

            NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:startTime];
            remainingTime -= elapsed;
        }

        DNSServiceRefDeallocate(sdRef);
    }
}
static void processDnsReply(DNSServiceRef sdRef,
                            DNSServiceFlags flags,
                            uint32_t interfaceIndex,
                            DNSServiceErrorType errorCode,
                            const char* fullname,
                            uint16_t rrtype,
                            uint16_t rrclass,
                            uint16_t rdlen,
                            const void* rdata,
                            uint32_t ttl,
                            void* context)
{
    NSTimeInterval* remainingTime = (NSTimeInterval*)context;

    // If a timeout occurs the value of the errorCode argument will be
    // kDNSServiceErr_Timeout.
    if (errorCode != kDNSServiceErr_NoError)
    {
        return;
    }

    // The flags argument will have the kDNSServiceFlagsAdd bit set if the
    // callback is being invoked when a record is received in response to
    // the query.
    //
    // If kDNSServiceFlagsAdd bit is clear then callback is being invoked
    // because the record has expired, in which case the ttl argument will
    // be 0.
    if ((flags & kDNSServiceFlagsMoreComing) == 0)
    {
        *remainingTime = 0;
    }

    // Record parsing code below was copied from Apple SRVResolver sample.
    NSMutableData * rrData = [NSMutableData data];
    dns_resource_record_t * rr;
    uint8_t u8;
    uint16_t u16;
    uint32_t u32;

    u8 = 0;
    [rrData appendBytes:&u8 length:sizeof(u8)];
    u16 = htons(kDNSServiceType_SRV);
    [rrData appendBytes:&u16 length:sizeof(u16)];
    u16 = htons(kDNSServiceClass_IN);
    [rrData appendBytes:&u16 length:sizeof(u16)];
    u32 = htonl(666);
    [rrData appendBytes:&u32 length:sizeof(u32)];
    u16 = htons(rdlen);
    [rrData appendBytes:&u16 length:sizeof(u16)];
    [rrData appendBytes:rdata length:rdlen];

    rr = dns_parse_resource_record([rrData bytes], (uint32_t) [rrData length]);

    // If the parse is successful, add the results.
    if (rr != NULL)
    {
        NSString *target;
        target = [NSString stringWithCString:rr->data.SRV->target encoding:NSASCIIStringEncoding];
        if (target != nil)
        {
            uint16_t priority = rr->data.SRV->priority;
            uint16_t weight = rr->data.SRV->weight;
            uint16_t port = rr->data.SRV->port;

            [[FailoverWebInterface sharedInterface] addDnsServer:target priority:priority weight:weight port:port ttl:ttl]; // You'll have to do this with your own method.
        }
    }

    dns_free_resource_record(rr);
}
Here's the Apple SRVResolver sample from which I got the RR parsing.
This Apple sample mentions that it may block forever, yet strangely enough suggests using NSTimer if you want to add a timeout yourself. I think using select() is a much better way.
I have one to-do left: implement cache flushing with DNSServiceReconfirmRecord, but I won't do that now.
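If/when that gets implemented, the call would presumably just reuse the values handed to processDnsReply; an untested sketch:
#include <dns_sd.h>

// Untested sketch for the cache-flush to-do: ask mDNSResponder to reconfirm
// (and, if stale, flush) a record, using the values passed to processDnsReply.
static void reconfirmSrvRecord(DNSServiceFlags flags, uint32_t interfaceIndex,
                               const char *fullname, uint16_t rrtype,
                               uint16_t rrclass, uint16_t rdlen,
                               const void *rdata)
{
    DNSServiceReconfirmRecord(flags, interfaceIndex, fullname,
                              rrtype, rrclass, rdlen, rdata);
}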
Be aware, this code is working, but I'm still testing it.
You need to add libresolv.dylib to your Xcode project's 'Linked Frameworks and Libraries'.