With the class javacard.security.KeyAgreement (Java Card 3.0.1 classic) it is possible to perform EC-DH key agreement. But no specific curve is defined. As I understand it, the IEEE P1363 standard does not specify domain parameters. So which curve is used in the Java Card implementation?
That's simple: none. You have to set the domain parameters yourself, and which parameters are supported depends on the card. The same goes for key sizes. For JCOP (on a chip with an asymmetric co-processor), for instance, you can be reasonably certain that curves over F(p) with a maximum key size of 320 bits are supported.
So you should check the user manual (or other documentation) of your Java Card runtime environment to see which curves are supported. After that you need to set the domain parameter values yourself on the ECPublicKey using the various setters (all but setW), then generate an (ephemeral) key pair and perform ECDH key agreement. Obviously you can also set all the parameters, including the public/private key values, instead of generating a new key pair.
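For illustration only: the same pick-your-own-curve principle, shown with the desktop JCA rather than the Java Card API (which cannot run off-card). The caller has to name the curve explicitly; the API never chooses one for you. On-card you would use the setters mentioned above instead.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.spec.ECGenParameterSpec;
import java.util.Arrays;
import javax.crypto.KeyAgreement;

public class EcdhSketch {
    // Derives the shared secret from our private key and the peer's public key.
    static byte[] derive(KeyPair mine, KeyPair peer) throws Exception {
        KeyAgreement ka = KeyAgreement.getInstance("ECDH");
        ka.init(mine.getPrivate());
        ka.doPhase(peer.getPublic(), true);
        return ka.generateSecret();
    }

    public static void main(String[] args) throws Exception {
        // The API does not pick a curve for you: name it explicitly.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1")); // a.k.a. NIST P-256

        KeyPair alice = kpg.generateKeyPair();
        KeyPair bob = kpg.generateKeyPair();

        // Both sides derive the same shared secret.
        System.out.println(Arrays.equals(derive(alice, bob), derive(bob, alice)));
    }
}
```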
In the case of JCOP you may need to copy the domain parameters to the ECPrivateKey as well before generating the key pair.
In 3.0.1 you can only choose to make the private key transient, which means that all of the domain parameters need to be stored in RAM as well. The public key must be in persistent memory (EEPROM or flash). In 3.0.5 it is possible to create separate domain parameters in EEPROM/flash and then create keys using the KeyBuilder.buildKeyWithSharedDomain method. This allows the parameters to stay in EEPROM while the actual keys can be stored in (transient) memory.
You could check your user manual to see if any curves have been stored in ROM. But domain parameters take quite a bit of space, so this is not all that likely. I personally like the Brainpool curves such as BrainpoolP256r1 most, but NIST curves such as P-256 can be used as well. Bouncy Castle (core) contains definitions for a whole bunch of curves.
There is a scenario in which a Credential class contains a digital Proof. Each type of Credential can support multiple different types of Proof, like JWSProof or a simple RSASignatureProof, derived from the same base class (Proof) and each specifying different verification behaviour.
Suppose that a client sends a credential containing a JWS proof. After receiving the data, the controller endpoint maps it to a Credential (which could eventually be stored afterwards) and somehow recognizes the proof type contained inside it. Its task is to verify the proof before processing (or saving) the credential, so it should use the logic implemented in the related class (JWSProof in this case), constructing it via a factory after collecting the cryptographic material necessary for verification.
So my question is: do you think this is the correct approach? I feel like JWSProof is somehow useless, as its logic could easily become a static function. How would you model this case, and how many repositories are necessary to store all of these types of objects/classes?
Based on your explanation (see the comments on the question) that each type of Proof operates on the same kind of input data in terms of data structure (in your case simply a string), we are only concerned with different proof verification logic.
This sounds like Proof is behaviour only (logic) and does not represent data in your domain model. This is why I would suggest handling this via the strategy pattern: there is a strategy implementation containing the verification logic for each different kind of proof. I would not suggest using static methods, because they make testing and maintenance difficult.
You could implement a Credential factory that creates the corresponding Credential aggregate and already injects the specific proof strategy based on the proof type.
If this solution is feasible for your problem, you do not even require different Credential class types for different proof logic, because when storing the Credential aggregate it always has the same structure and only the proof type's value differs (e.g. create a custom ProofType value object for strong typing).
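A minimal sketch of the strategy approach in Java. All names here are illustrative, not from any framework, and the verification body is a placeholder for a real cryptographic check:

```java
import java.util.Map;

// A strategy per proof type; implementations hold only behaviour, no data.
interface ProofVerificationStrategy {
    boolean verify(String proofValue);
}

class JwsProofStrategy implements ProofVerificationStrategy {
    public boolean verify(String proofValue) {
        // Placeholder: a real implementation would parse and verify the JWS.
        return proofValue.split("\\.").length == 3;
    }
}

class Credential {
    private final String proofValue;
    private final ProofVerificationStrategy strategy;

    Credential(String proofValue, ProofVerificationStrategy strategy) {
        this.proofValue = proofValue;
        this.strategy = strategy;
    }

    boolean verifyProof() {
        return strategy.verify(proofValue);
    }
}

class CredentialFactory {
    private static final Map<String, ProofVerificationStrategy> STRATEGIES =
            Map.of("JWSProof", new JwsProofStrategy());

    // Injects the matching strategy based on the proof type.
    static Credential create(String proofType, String proofValue) {
        ProofVerificationStrategy s = STRATEGIES.get(proofType);
        if (s == null) throw new IllegalArgumentException("Unknown proof type: " + proofType);
        return new Credential(proofValue, s);
    }
}
```

Note that the Credential structure stays the same for every proof type; only the injected strategy differs.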
For pure computational logic that doesn't need to change (and I'd suggest that a proof algorithm is such a thing: it likely has no state from verification to verification), source code (e.g. a static function) is the correct repository.
Your ProofRepository would then fundamentally just be a map from an identifier for a proof (e.g. a string like "JWSProof") to a static function (different languages will make this easier or harder to model, especially if the proof implementations need to take different arguments).
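In Java, such a repository of functions might look like this (identifiers hypothetical, verification bodies stand-ins for real signature checks):

```java
import java.util.Map;
import java.util.function.Predicate;

class ProofFunctionRepository {
    // Maps a proof-type identifier to its verification function.
    private static final Map<String, Predicate<String>> VERIFIERS = Map.of(
            "JWSProof", proof -> proof.split("\\.").length == 3,   // placeholder
            "RSASignatureProof", proof -> !proof.isEmpty()          // placeholder
    );

    static boolean verify(String proofType, String proofValue) {
        Predicate<String> verifier = VERIFIERS.get(proofType);
        if (verifier == null) throw new IllegalArgumentException("Unknown proof type: " + proofType);
        return verifier.test(proofValue);
    }
}
```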
You might by the same token have a repository for cryptographic materials needed by particular proofs.
ByteBuddy offers two mechanisms for representing a constant of a primitive Class:
JavaConstant.Dynamic.ofPrimitiveType(Class)
TypeDescription.ForLoadedType.of(Class)
I am aware that the first one creates a "true" dynamic constant in the constant pool. I am aware that the second is specially recognized by ByteBuddy and ultimately results in some other path to storing some sort of class constant in the constant pool. (For example, you can see in the documentation of FixedValue#value(TypeDescription) that a TypeDescription will end up being transformed into a constant in the constant pool in some unspecified non-ByteBuddy-specific format that (presumably) is not the same as a dynamic constant.)
I am also aware that ByteBuddy supports JVMs back to 1.5 and that only JDKs of version 11 or greater support true dynamic constants. I am using JDK 15 and personally don't need to worry about anything earlier than that.
Given all this: Should I make constants-representing-primitive-classes using JavaConstant.Dynamic.ofPrimitiveType(Class), or should I make them using TypeDescription.ForLoadedType.of(Class)? Is there some advantage I'm missing to one representation or the other?
Dynamic constants are bootstrapped, which causes a minimal runtime overhead. The static constants are therefore likely the better choice, but if using the dynamic ones simplifies your code, there's no danger in doing so.
I am working on an applet which has to share some keys of type AESKey with different terminals. The thing is I don't know in advance how many terminals it will have to handle.
As there is no structure like HashTable in Java Card, it's getting complicated. I can still fix an upper bound and instantiate that many AESKey objects, but I would like to find another way.
I thought I could do something with byte arrays, but is it a bad practice to store keys in byte[]?
I think the answer is yes, and it is only recommended to store keys in transient arrays for computations. Otherwise, I don't understand the role of AESKey objects. Just want to be sure.
Important security-relevant data like keys and PINs should always be stored in the designated objects from the Java Card API, e.g. AESKey. The smartcard operating system performs additional internal operations to protect their values from leaking. If you don't know how many terminals the card will encounter, you could encapsulate the keys in an object which is part of a linked list:
class KeyElement {
    KeyElement next;
    AESKey key;
}
Technically, it is possible to store key values in a byte[] with some 'unknown level of security' by using the following scheme:
Store only wrapped (i.e. encrypted) values of the key in the persistent byte array using some persistent wrapping key.
Prior to the key use, unwrap the desired key using the same wrapping key into a transient key object. Then use it at will.
Advantage: Probably more memory efficient than the 'many AESKey objects' approach.
Drawback: It is quite weird. I would do my best not to implement it this way.
Disclaimer: I am no crypto expert, so please do validate my thoughts.
Disclaimer 2: Of course the most reasonable way is to use key derivation as Maarten Bodewes noted...
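For illustration, the wrap/unwrap round trip sketched with the desktop JCA's "AESWrap" transformation. Java Card itself has no such transformation string, so this is only an analogue of the scheme, not card code:

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyWrapSketch {
    // Wrap a key under the wrapping key (KEK); the result is safe to persist.
    static byte[] wrapKey(SecretKey kek, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.WRAP_MODE, kek);
        return c.wrap(key);
    }

    // Unwrap back into a key object just before use.
    static SecretKey unwrapKey(SecretKey kek, byte[] wrapped) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.UNWRAP_MODE, kek);
        return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey kek = kg.generateKey();        // persistent wrapping key
        SecretKey sessionKey = kg.generateKey(); // key to be stored wrapped

        byte[] wrapped = wrapKey(kek, sessionKey);       // persist only this
        SecretKey restored = unwrapKey(kek, wrapped);    // unwrap before use
        System.out.println(Arrays.equals(sessionKey.getEncoded(), restored.getEncoded()));
    }
}
```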
In fact, creating an AESKey array is possible in Java Card. I thought that only byte arrays (byte[]) were allowed, but that is not the case.
So nothing forbids me from declaring an AESKey array (AESKey[]) if I accept that I have to fix an upper bound to limit the number of keys in my applet.
The Dynamic Ice section of the manual doesn't explain how to obtain a list of operations (their names, argument and result types) implemented by an object, which seems to be pretty much necessary to "create applications such as object browsers, protocol analyzers". Is it possible? I am thinking of a case where a client doesn't have access to all Slice interfaces known to the server (e.g. because new ones can be loaded dynamically) and so wants to learn about them at runtime. Is there any built-in way to do this in Ice?
Ice doesn't provide any introspection along the lines of the CORBA interface repository. You can create requests dynamically (without using compiled stubs), and you can respond to them dynamically (without using compiled skeletons) but, if you need to find out what types are involved, you have to get this knowledge from somewhere else.
Michi.
I'm undecided about what classes to design to fit into an existing system, an online video game. Here's what I want to achieve:
Get a series of settings from objects in the server.
Listen for clients to connect.
For each client, check that the settings on the client correspond with those from the server.
If settings don't correspond (something has been tampered with), either disconnect the client or change their settings.
This will be handled by a class that will act as an entry point and can serve as a form of controller.
Now, the settings are strewn across a number of instances: players, weapons, flags, lights, etc. In procedural programming, I'd get all this information and store it in an array. However, is there a better way of doing this according to an OO approach? Can I make one or more classes that hold the values of these settings and act as a form of facade?
Encapsulate the settings data and behavior into at least one object (i.e. Settings). Depending on how your system is constructed, this becomes part of other objects' composition (e.g. Player, Weapon, etc.), perhaps via dependency injection, or is referenced from some global context. Settings is responsible for validating the match between client and server (e.g. Settings.validateClientServerSettingsMatch()). For retrieving individual settings, there are two possible approaches: explicit or implicit.
Explicit
Your Settings object, or perhaps other entities that make up its composition, have methods for each managed setting. So it could be something like Settings.getPlayerSettings().getSomeSetting() or Settings.getSomePlayerSetting(). How nested it is really depends on your system. Either way has the advantage of making clear to client developers what settings are available, and it provides compile-time type checking if you're using a language such as Java. The tradeoff is needing to alter an object every time a new setting comes into play.
Implicit
This just has a generic method on the Settings object - Settings.getSetting(settingName). This makes it very easy to add settings, at the expense of any sort of useful type checking, unless you do something on your own using some meta magic in a language such as Python or Ruby, or large case statements in Java.
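A minimal sketch of the implicit variant in Java (names hypothetical). Validation then just compares the client's map against the server's:

```java
import java.util.Map;
import java.util.Objects;

class Settings {
    private final Map<String, Object> values;

    Settings(Map<String, Object> values) {
        this.values = values;
    }

    // Implicit access: one generic getter instead of a method per setting.
    Object getSetting(String name) {
        return values.get(name);
    }

    // True when every server-side setting matches the client's value.
    boolean validateClientServerSettingsMatch(Settings client) {
        return values.entrySet().stream()
                .allMatch(e -> Objects.equals(e.getValue(), client.getSetting(e.getKey())));
    }
}
```

The controller can then disconnect the client, or overwrite its settings, whenever validateClientServerSettingsMatch returns false.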