I would like to set the TLS 1.2 cipher configuration as below in my Linux application.
ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
I know of the API SSL_CTX_set_cipher_list(SSL_CTX *ctx, const char *str). My question is: how do I set more than one cipher using this API? Should I call it twice, or is there another way?
Currently I am setting only one configuration like this:
SSL_CTX_set_cipher_list(ctx, "ECDHE-RSA-AES256-GCM-SHA384");
How do I set both the configurations as listed above?
Based on https://www.openssl.org/docs/man1.0.2/man1/ciphers.html "The cipher list consists of one or more cipher strings separated by colons. Commas or spaces are also acceptable separators but colons are normally used."
I would say 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256'. Note that SSL_CTX_set_cipher_list expects the OpenSSL cipher names (with dashes), not the underscore-separated suite names from the TLS RFCs.
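A minimal sketch of how that could look in code (the TLSv1_2_method() context setup and the error handling are my additions, not from the question):

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Sketch: restrict a context to the two ECDHE-ECDSA GCM suites.
   One colon-separated string covers both ciphers in a single call. */
SSL_CTX *make_tls12_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLSv1_2_method());
    if (ctx == NULL)
        return NULL;

    if (SSL_CTX_set_cipher_list(ctx,
            "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256") != 1) {
        ERR_print_errors_fp(stderr);   /* none of the requested ciphers is available */
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}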
Currently, I am using the below code to set parameters to retrieve data from PACS.
DcmDataset findParams = DcmDataset();
findParams.putAndInsertString(DCM_QueryRetrieveLevel, "SERIES");
findParams.putAndInsertString(DCM_SpecificCharacterSet, "ISO_IR 192");
However, I just wanted to check whether we can support multiple character sets when importing data at the same time. The code would look something like below; I am trying to check whether this is possible, as I don't have the facility to verify it myself.
findParams.putAndInsertString(DCM_SpecificCharacterSet, "ISO_IR 192", "ISO_IR 100");
I think that what you want to express is that "this Query SCU can accept responses in the following character sets". This is plainly not possible. See a discussion in the DICOM newsgroup for reference. It ends with a proposal to add character set negotiation to the association negotiation. But such a supplement has not been submitted yet, and I am not aware of anyone working on it currently.
The semantics of the attribute Specific Character Set (0008,0005) in the context of the Query Retrieve Service Class:
PS3.4, C.4.1.1.3.1 Request Identifier Structure
Conditionally, the Attribute Specific Character Set (0008,0005). This Attribute shall be included if expanded or replacement character sets may be used in any of the Attributes in the Request Identifier. It shall not be included otherwise
I.e. it describes nothing but the character encoding of your request dataset.
and
C.4.1.1.3.2 Response Identifier Structure
Conditionally, the Attribute Specific Character Set (0008,0005). This Attribute shall be included if expanded or replacement character sets may be used in any of the Attributes in the Response Identifier. It shall not be included otherwise. The C-FIND SCP is not required to return responses in the Specific Character Set requested by the SCU if that character set is not supported by the SCP. The SCP may return responses with a different Specific Character Set.
I.e. you cannot control the character set in which the SCP will send you the responses. Surprising but a matter of fact.
Sending multiple values for the attribute is possible, but has different semantics. It means that the request contains characters from different character sets which are switched using Code Extension Techniques as defined in ISO 2022. An illustrative example of how this would look and what it would mean can be found in PS3.5, H.3.2.
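For illustration only (I have no PACS at hand to verify this, and the two character sets chosen here are arbitrary): with DCMTK, multiple values would be sent as one backslash-separated string, and the values then have to be the ISO 2022 variants of the defined terms, e.g.:

// Hypothetical example: ISO 2022 code extension with Latin-1 and Cyrillic.
// A single backslash separates the two values of (0008,0005); in C++ source
// it has to be written as "\\".
findParams.putAndInsertString(DCM_SpecificCharacterSet,
                              "ISO 2022 IR 100\\ISO 2022 IR 144");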
What implementers usually do to avoid character set compatibility issues is configure "the one and only" character set for a particular installation (= hospital) in a locale setting that is defined upon system setup. It works pretty well; e.g., an installation in Russia will very likely support Cyrillic (ISO_IR 144) or Unicode (ISO_IR 192) or both. In case of "both", you can select the character set that you prefer when configuring your system.
I'm currently trying to create some code to decode the SCT extension in an X.509 certificate (OID: 1.3.6.1.4.1.11129.2.4.2). It has mostly been successful; however, the SCT signature turns out wrong when I compare my result to OpenSSL's. An example:
Mine:
37:F6:....:E2:16:....
OpenSSL:
30:45:02:20:37:F6:....:02:21:00:E2:16:....
To clarify, the "...." parts are the same for both. For all my test cases, the same pattern of 30:45:02:20 and 02:21:00 comes up. When trying to interpret these additional octets, my guess is that 0x30 represents a SEQUENCE, 0x02 an INTEGER, and the following octets lengths, etc. Should these be included in the signature, or is there something I have missed?
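To show what I mean, here is a minimal sketch of that reading (the helper is mine; it assumes short-form lengths, which holds here since the whole signature is well under 128 bytes):

#include <cstdint>
#include <cstdio>
#include <vector>

// Walk a DER-encoded ECDSA signature, i.e. SEQUENCE { INTEGER r, INTEGER s },
// and print where the two integer values sit.
bool dump_ecdsa_der(const std::vector<std::uint8_t>& sig)
{
    std::size_t i = 0;
    if (sig.size() < 2 || sig[i++] != 0x30) return false;   // 0x30: SEQUENCE tag
    if (sig[i++] != sig.size() - 2) return false;           // e.g. 0x45 = 69 content octets

    const char* names[] = { "r", "s" };
    for (const char* name : names) {
        if (i + 2 > sig.size() || sig[i++] != 0x02) return false;  // 0x02: INTEGER tag
        std::size_t len = sig[i++];                                // e.g. 0x20 (32) or 0x21 (33)
        if (len == 0 || i + len > sig.size()) return false;
        // A leading 0x00 is prepended when the top bit of the value is set,
        // which is why one INTEGER is often 33 bytes long (02:21:00:...).
        std::printf("%s: %zu value octets at offset %zu\n", name, len, i);
        i += len;
    }
    return i == sig.size();
}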
Is it possible if I do it like this?
http://myurl:abc
Port : abc instead of 123
No.
According to RFC3986, the port part of a URI is defined thus (my bold):
The port subcomponent of authority is designated by an optional port number in decimal following the host and delimited from it by a single colon (":") character.
That RFC has been updated by both RFC6874 and RFC7320, but neither of those makes any changes to the relevant section.
No, that cannot be the case. The port is always a number, per the specification.
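A tiny sketch of that rule, assuming one simply wants to validate the port substring of a URI (the helper name is mine):

#include <cctype>
#include <string>

// RFC 3986: port = *DIGIT. Anything containing a non-digit, such as "abc",
// is therefore not a valid port; an empty port is syntactically allowed.
bool is_valid_uri_port(const std::string& port)
{
    for (unsigned char c : port)
        if (!std::isdigit(c))
            return false;
    return true;
}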
I am using GnuTLS 3.4.1. I have an X.509 certificate with a SET OF SEQUENCEs inside. The certificate is stored that way on a smart card.
GnuTLS rearranges the sequences via the function _asn1_ordering_set_of, which appears to be causing a verification failure.
Here's what the sequence looks like:
SEQUENCE :
  ...
  SET :
    SEQUENCE :
      OBJECT_IDENTIFIER : 'CN (2.5.4.3)'
      PrintableString : '0000'
    SEQUENCE :
      OBJECT_IDENTIFIER : 'SN (2.5.4.4)'
      TeletexString : 'XXX'
    SEQUENCE :
      OBJECT_IDENTIFIER : 'G (2.5.4.42)'
      TeletexString : 'YYY'
OpenSSL (and probably the Java PKCS11 provider) loads this construction as is.
GnuTLS, on loading the certificate, sorts this construction in _asn1_ordering_set_of, so that it becomes:
SEQUENCE :
  ...
  SET :
    SEQUENCE :
      OBJECT_IDENTIFIER : 'G (2.5.4.42)'
      TeletexString : 'YYY'
    SEQUENCE :
      OBJECT_IDENTIFIER : 'SN (2.5.4.4)'
      TeletexString : 'XXX'
    SEQUENCE :
      OBJECT_IDENTIFIER : 'CN (2.5.4.3)'
      PrintableString : '0000'
Why does GnuTLS sort the SET OF SEQUENCEs? What way should it be done? Is it a GnuTLS bug, or do other libraries simply omit ordering?
RFC5280 has:
4.1. Basic Certificate Fields
The X.509 v3 certificate basic syntax is as follows. For signature
calculation, the data that is to be signed is encoded using the ASN.1
distinguished encoding rules (DER) [X.690].
So it seems to me that GnuTLS is doing the right thing.
It also looks like it's trying to encode a Distinguished Name, but does it wrong. It's valid according to the ASN.1, because the spec is just really weird: you can have multiple values for each part. But you want the CN, SN and so on each in its own SET, so all those SEQUENCEs should have had their own SET, as sketched below.
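In the same dump notation as above, the conventional encoding of that name would look something like this (same placeholder values):

SEQUENCE :
  SET :
    SEQUENCE :
      OBJECT_IDENTIFIER : 'CN (2.5.4.3)'
      PrintableString : '0000'
  SET :
    SEQUENCE :
      OBJECT_IDENTIFIER : 'SN (2.5.4.4)'
      TeletexString : 'XXX'
  SET :
    SEQUENCE :
      OBJECT_IDENTIFIER : 'G (2.5.4.42)'
      TeletexString : 'YYY'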
What way should it be done ...
The ITU recommends SET OF be encoded using BER, as long as there's no need for CER or DER. As best I can tell, there's no need. See below for a more detailed explanation in the realm of the ITU and ASN.1.
However, GnuTLS may be complying with a standard that creates the need. In this case, I'm not aware of which standard it is. See Kurt's answer.
I looked at RFC 5280, PKIX Certificate and CRL Profile, but I could not find the restriction. Maybe it's in another PKIX document.
is it a GnuTLS bug or do other libraries simply omit ordering?
I don't believe it's a bug in GnuTLS per se. It's just the way the library does things. Take this modulo a requirement to do so in an RFC or other standard.
Also note that other libraries don't omit ordering. They use the order the attributes are presented in the certificate, which is an ordering :)
(comment) The problem is that GnuTLS's rearranging results in failed SSL authentication
That sounds like a bug to me (modulo standards requirements). In this case, the bug is reordering the SET OF after a signature is placed upon the TBS/Certificate.
If GnuTLS is building the TBS/Certificate, then it's OK to reorder until the signature is placed upon it.
(comment) Does GnuTLS put the elements of a SET OF type in the correct order according to DER rules?
In ASN.1 encoding rules, X.690, BER/CER/DER:
8.12 Encoding of a set-of value
...
8.12.3 The order of data values need not be preserved by the encoding and subsequent decoding.
A SET OF does not appear to require any particular order (for example, lexicographical order), so the sender can put the components in any order, and a receiver can reorder them.
However, 11.6 says:
11 Restrictions on BER employed by both CER and DER
...
11.6 Set-of components
The encodings of the component values of a set-of value shall appear in ascending order, the encodings being compared as octet strings with the shorter components being padded at their trailing end with 0-octets.
NOTE – The padding octets are for comparison purposes only and do not appear in the encodings.
In the above, they are saying BER can be any order, but CER and DER require ascending order.
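A minimal sketch of that 11.6 comparison (the helper is mine, not libtasn1 code): the encoded components are compared as octet strings, with the shorter one conceptually padded at its end with 0x00 octets.

#include <algorithm>
#include <cstdint>
#include <vector>

// Compare two encoded SET OF components per X.690 11.6.
// A DER or CER encoder would sort the components with this predicate
// before writing them out; a BER encoder may leave them as they are.
bool setof_component_less(const std::vector<std::uint8_t>& a,
                          const std::vector<std::uint8_t>& b)
{
    const std::size_t n = std::max(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i) {
        const std::uint8_t ai = (i < a.size()) ? a[i] : 0x00;  // virtual trailing padding
        const std::uint8_t bi = (i < b.size()) ? b[i] : 0x00;
        if (ai != bi)
            return ai < bi;
    }
    return false;  // equal after padding; relative order does not matter
}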
And last but not least, the Introduction says:
Introduction
...
... The basic encoding rules is more suitable than the canonical or distinguished encoding rules if the encoding contains a set value or set-of value and there is no need for the restrictions that the canonical and distinguished encoding rules impose ...
So the Introduction recommends BER for SET OF.
But in the big picture: the certificate is in BER. That's how it was signed. GnuTLS cannot change that once it gets hold of the certificate, because of the signature over the certificate's data.
GnuTLS is free to create certificates in DER encoding. It just can't impose the encoding after the fact.
(comment) gnutls_certificate_set_x509_key_file(xcred, CERT_URL, KEY_URL, GNUTLS_X509_FMT_PEM);
I looked at the latest GnuTLS sources. That appears to be the way it's used in src/serv.c.
Apparently, _asn1_ordering_set_of was not working as expected in the past. It was improved in April 2014. See PATCH 1/3: Make _asn1_ordering_set_of() really sort (and friends) on the GnuTLS mailing list.
Here are the hits for it in the sources:
$ grep -R -n _asn1_ordering_set_of * | grep -v doc
lib/minitasn1/coding.c:832: /* Function : _asn1_ordering_set_of */
lib/minitasn1/coding.c:843: _asn1_ordering_set_of (unsigned char *der, int der_len, asn1_node node)
lib/minitasn1/coding.c:1261: err = _asn1_ordering_set_of (der + len2, counter - len2, p);
The use around line 1261 is for asn1_der_coding. asn1_der_coding is used more frequently in other components...
(comment) but I'm not sure that it's a bug in GnuTLS and not on the server side, so I'd like to find out how it should work before doing anything
You should probably reach out to the GnuTLS folks as detailed in B.3 Bug Reports. It looks like a bug in the processing of non-GnuTLS certificates.
To be clear, GnuTLS uses DER when it creates certificates and that's fine. But GnuTLS cannot impose ordering after it receives a non-GnuTLS certificate because that invalidates the signature.
Their test suite probably misses it because GnuTLS DER-encodes SET OF itself. They are likely not aware it's happening.
In an HTTP header, line breaks are the tokens that separate the fields.
But if I want to send a literal line break in a custom field, how should I escape it?
If you are designing your own custom extension field, you may use Base64 or quoted-printable to escape (and unescape) the value.
The actual answer to this question is that there is no standard for encoding line breaks.
You can use any binary-to-text encoding such as URL-encoding or Base64, but obviously that's only going to work if both sender and receiver implement the same method.
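A minimal sketch of the URL-encoding (percent-encoding) option, assuming both ends agree on it (the helper name is mine):

#include <cctype>
#include <string>

// Percent-encode a value so it can travel in a custom header field.
// CR, LF and every other non-unreserved octet become %XX escapes,
// so the header value stays on a single line.
std::string percent_encode(const std::string& value)
{
    static const char* hex = "0123456789ABCDEF";
    std::string out;
    for (unsigned char c : value) {
        if (std::isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~') {
            out += static_cast<char>(c);
        } else {
            out += '%';
            out += hex[c >> 4];
            out += hex[c & 0x0F];
        }
    }
    return out;
}

// e.g. percent_encode("line1\r\nline2") yields "line1%0D%0Aline2"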
RFC 2616 did allow header values to be 'folded' (i.e. wrapped) over multiple lines, but the line breaks were treated as a single space character and were not part of the parsed field value.
However, that specification has been obsoleted by RFC 7230 which forbids folding:
Historically, HTTP header field values could be extended over multiple lines by preceding each extra line with at least one space or horizontal tab (obs-fold).
This specification deprecates such line folding except within the message/http media type (Section 8.3.1).
A sender MUST NOT generate a message that includes line folding
There is not – and never was – a standard for line breaks in HTTP header field values.
According to RFC2616 4.2 Message Headers:
Header fields can be extended over
multiple lines by preceding each extra
line with at least one SP or HT.
where SP means a space character (0x20) and HT means a horizontal tab character (0x09).
The idea is that HTTP is ASCII-only, and newlines and such are not allowed. If both sender and receiver can interpret YOUR encoding, then you can encode whatever you want, however you want. That's how DNS internationalized names are handled with the Host header (it's called Punycode).
Short answer is: You don't, unless you control both sender and receiver.
If it's a custom field, how you escape it depends entirely on how the targeted application is going to parse it. If this is some add-on you created, you could stick with URL encoding, since it's pretty tried and true and lots of languages have encoding/decoding methods built in: your web app would encode it and your plug-in (or whatever you're working on) would decode it.