Export ipfs key to human readable text format - cryptography

How do I export my IPFS key to a file (and use it the way I would a GPG key)?
I need to export the key in an OpenSSL/GPG-style format.

You can export keys from go-ipfs using ipfs key export as long as the daemon isn't running. I'm not sure what you mean by exporting the key in a text format, but the keys are libp2p keys, whose format is described here.
You can of course encode the key in any representation you want (e.g. base16, base32, etc.). If you want to transform the libp2p keys into some other format, you can write a small conversion program; a libp2p key unmarshalling function in Go is here.
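As a concrete illustration of re-encoding the raw key bytes, here is a small self-contained Java sketch. The class name and sample bytes are made up for the example; in practice the bytes would be read from the exported key file (e.g. with Files.readAllBytes):

```java
import java.util.Base64;
import java.util.HexFormat; // Java 17+

public class KeyEncode {
    // Re-encode raw key bytes as base64.
    public static String toBase64(byte[] keyBytes) {
        return Base64.getEncoder().encodeToString(keyBytes);
    }

    // Re-encode raw key bytes as base16 (lowercase hex).
    public static String toBase16(byte[] keyBytes) {
        return HexFormat.of().formatHex(keyBytes);
    }

    public static void main(String[] args) {
        // Stand-in for the bytes of an exported key file; real key bytes
        // would come from something like Files.readAllBytes(Path.of("myoutput.key")).
        byte[] keyBytes = {0x0A, 0x12, (byte) 0xAB, (byte) 0xCD};

        // Only the textual representation changes; the key bytes don't.
        System.out.println(toBase64(keyBytes)); // ChKrzQ==
        System.out.println(toBase16(keyBytes)); // 0a12abcd
    }
}
```

Any such representation round-trips back to the same bytes, so the choice is purely about where the text needs to be pasted.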
An example of running export in PowerShell is below:
C:\Users\adin> ipfs key gen example
k51qzi5uqu5dlxhpvewosfhwueh87q9c0rttznvu0k8fhui8mvjd0qmpt2n9b0
C:\Users\adin> ipfs key list
self
example
C:\Users\adin> ipfs key export example -o="myoutput.key"
C:\Users\adin> [Convert]::ToBase64String([IO.File]::ReadAllBytes("myoutput.key"))
CAESQCncEZprjyHaWjMkduj9qcma/Hk7Rjb2sqObS06Rwv+g5pgBN5fZ0DdRMmVLs49OJP0hM/NfkPa2kdOK64u0dNw=

Related

A hot key <hot-key-name> was detected in our Dataflow pipeline

We have been facing a hot key issue in our Dataflow pipeline (a streaming pipeline doing batch loads into BigQuery -- we use batch loads to keep costs down):
We ingest data into corresponding tables based on each element's decoder value. For example, data with the http decoder goes to the http table, and data with the ssl decoder goes to the ssl table.
So the BigQuery ingestion is using dynamic destinations.
The key is the destination table spec for the data.
An example error log:
A hot key
'key: tableSpec: ace-prod-300923:ml_dataset_us_central1.ssl tableDescription: Table for ssl shard: 1'
was detected in step
'insertTableRowsToBigQuery/BatchLoads/SinglePartitionsReshuffle/GroupByKey/ReadStream' with age of '1116.266s'.
This is a symptom of key distribution being skewed.
To fix, please inspect your data and pipeline to ensure that elements are evenly distributed across your key space.
Error is detected in this step: 'insertTableRowsToBigQuery/BatchLoads/SinglePartitionsReshuffle/GroupByKey/ReadStream'
The hot key issue stems from the nature of the data: some decoders have disproportionately many values. And our pipeline is a streaming pipeline.
We have read the documentation provided by Google but are still not sure how to fix it:
Dataflow shuffle: our project is already using Streaming Engine.
Rekey: doesn't seem to apply to our case, as the key is the destination table spec. To make the ingestion work, the key has to match the existing table spec in BigQuery.
Combine.PerKey.withHotKeyFanout(): I don't know how to apply this, because the key is generated in the insertTableRowsToBigQuery step, where we use BigQueryIO to write to BigQuery. The key comes from dynamically generating BigQuery table names based on the current window or the current value ("Sharding BigQuery output tables").
Attached the code where the hot key is detected:
toBq.apply("insertTableRowsToBigQuery",
    BigQueryIO
        .<DataIntoBQ>write()
        .to((ValueInSingleWindow<DataIntoBQ> dataIntoBQ) -> {
            try {
                String decoder = dataIntoBQ.getValue().getDecoder(); // getter function
                String destination = String.format("%s:%s.%s",
                    PROJECT_ID, DATASET, decoder);
                if (!listOfProtocols.contains(decoder)) {
                    LOG.error("wrong bigquery table decoder destination: " + decoder);
                }
                return new TableDestination(
                    destination,            // table spec
                    "Table for " + decoder  // table description
                );
            } catch (Exception e) {
                LOG.error("insertTableRowsToBigQuery error", e);
                return null;
            }
        })
        .withFormatFunction(
            (DataIntoBQ elem) -> new DataIntoBQ().toTableRow(elem)
        )
        .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
        .withTriggeringFrequency(Duration.standardMinutes(3))
        .withAutoSharding()
        .withCustomGcsTempLocation(
            ValueProvider.StaticValueProvider.of(options.getGcpTempLocation()))
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
You can still try the rekey strategy. For example, you can apply a transformation before apply("insertTableRowsToBigQuery") such that:
elements with the key "http" become "randomVal_http", where randomVal is a value in a specific range (say from 0 to 10); the width of the range depends on how many splits you want your elements with key "http" to be divided into. For example, if you have 10 million elements with key "http" and you want them split into 10 groups of approximately 1 million elements each, generate uniform random numbers between 0 and 9.
Apply the same mapping to elements that belong to other hot keys; elements with non-hot keys don't need to be rekeyed.
Now, in your "insertTableRowsToBigQuery", you know how to go from a key like "someVal_http" back to "http" - split the string.
That should help.
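A minimal plain-Java sketch of that rekeying idea (the helper names are mine, not Beam APIs; in the pipeline they would run inside DoFns before the grouping and inside the table-destination function):

```java
import java.util.concurrent.ThreadLocalRandom;

public class Rekey {
    // Spread a hot key (e.g. "http") across `fanout` synthetic keys:
    // "0_http", "1_http", ..., "(fanout-1)_http".
    public static String rekey(String key, int fanout) {
        int shard = ThreadLocalRandom.current().nextInt(fanout);
        return shard + "_" + key;
    }

    // Recover the original key (the table decoder) by stripping the shard prefix.
    public static String originalKey(String shardedKey) {
        return shardedKey.substring(shardedKey.indexOf('_') + 1);
    }

    public static void main(String[] args) {
        String sharded = rekey("http", 10);
        System.out.println(sharded);              // e.g. "7_http"
        System.out.println(originalKey(sharded)); // "http"
    }
}
```

The destination function would then call originalKey(...) before building the table spec, so the table names still match what exists in BigQuery while the GroupByKey sees the spread-out synthetic keys.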
Regarding Combine.PerKey.withHotKeyFanout(), I am not sure how to do this for IO transforms. If it were some intermediate transform, I could have helped.

how to use i18next-chained-backend plugin for missing key level fallback?

It seems that by default react-i18next falls back to the translation key if no translation was found at the remote URL I passed via the HTTP backend's loadPath, e.g.:
// No translation defined for bill_type_blank yet
i18next.t('bill_type_blank') // Returns 'bill_type_blank'
If no translation is found for the key at the remote loadPath, I would prefer to fetch that same key from the local locales path "locales/{{lng}}/translation.json". I am already using ChainedBackend for network fallback; I want the same for key-level fallback. How can I achieve something like this using the i18next library?

How to convert exported Cosmos SDK private key to format that can be imported by Keplr or Metamask?

Keplr just added the feature to import an account using a private key (which is awesome!). This was primarily requested because a number of Lunie users lost their mnemonic phrases but maintained their access to Lunie itself. That meant Lunie could export a private key but not the mnemonic itself. It may also be useful for future Ethermint users who want to migrate from MetaMask (which also provides the ability to export and import private keys).
The keys command in the Cosmos SDK CLI also allows a user to export a private key; however, I'm unsure how to convert a key from that format into one that could be imported into Keplr. The CLI command is as follows:
gaiacli keys export [name]
It requests the passphrase to decrypt the key and then a new passphrase to encrypt the exported key. The results are in the following format:
-----BEGIN TENDERMINT PRIVATE KEY-----
type: secp256k1
kdf: bcrypt
salt: C49BCB6A8358745812F5770A63BD09AD
NmqXg+dPDvVKawZwyER6l3V41tKxWaiIU/or2G7t9SBKRJ0oRREchssK4NpRp+Di
5KNHxHz2QXHLhbPQweo9iVkPPrNQ1uiSGH7maoY=
=GHXH
-----END TENDERMINT PRIVATE KEY-----
(this is an example created for the purpose of this question)
How can I convert this key into something I can use with Keplr or MetaMask?
This feature just got merged!
https://github.com/cosmos/cosmos-sdk/pull/8043
The command is:
packaged keys export <name> --unarmored-hex --unsafe
where packaged is the name of your daemon binary CLI and name is the name of the key you want to export.

phpseclib ssh login fails with no errors

Here is my code:
error_reporting(E_ALL);

require __DIR__ . '/vendor/autoload.php';

use phpseclib\Net\SSH2;
use phpseclib\Crypt\RSA;

$ssh = new SSH2('stg.net');
$key = new RSA();
$key->loadKey(file_get_contents('/Users/me/.ssh/my_private_key'));

if (!$ssh->login('username', $key)) {
    print_r($ssh->getLastError());
    print_r($ssh->getErrors());
    exit('Login Failed');
}

echo $ssh->exec('pwd');
echo $ssh->exec('ls -la');
Output:
Array
(
)
In vendor/phpseclib/phpseclib/phpseclib/Net/SSH2.php there is a function _privatekey_login($username, $privatekey) containing:

$publickey = $privatekey->getPublicKey(RSA::PUBLIC_FORMAT_RAW);
if ($publickey === false) {
    return false;
}
I'm getting false. Maybe I have to set the public key too? How can that be done, and how can I debug this?
+++ UPDATE +++
I tried the advice/hints from these tickets too:
phpseclib always gives login failed with no log
Net/SSH2 - phpseclib login failing - error: "failedArray"
The problem is that the key in question is an ECDSA key. Quoting https://github.com/phpseclib/phpseclib/issues/1082#issuecomment-396122366 :
My library supports EdDSA keys in the OpenSSH format (ie. the kind
that ssh-keygen would generate), the PuTTY format (ie. the kind
puttygen would generate), in libsodium format and in the format
specified in this IETF Draft:
https://datatracker.ietf.org/doc/html/draft-ietf-curdle-pkix-07
If libsodium / sodium_compat are being used the keys are converted
from whatever format they were in to the libsodium format to
facilitate libsodium's use.
Encrypted OpenSSH private keys are not supported for the same reason
sodium_compat does not support Argon2i - it's too slow. OpenSSH uses a
custom form of bcrypt that does bcrypt 128 times as I recall and
encrypts a different string etc so PHP's bcrypt implementation cannot
be used and since bcrypt uses a custom key expansion OpenSSL's
implementation of Blowfish can't be used either.
Here the author is talking about EdDSA - not ECDSA - but from the rest of the post it sounds like ECDSA over prime finite fields is complete as well.
Quoting the post that follows that one:
Also, I'm not ready to make my code public yet. I just thought I'd post a progress report for anyone interested.
My guess is that this implementation will live in the master branch and not the 2.0 branch. I say that because the DSA changes were in the master branch and not the 2.0 branch. Eventually the master branch (as I understand it) will become 3.0.0 but idk when that'd happen.

NSS Secret (symmetric) Key Import

I am trying to figure out how to import a symmetric key into NSS for use with encryption at the core crypto boundary. These functions are described here:
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Reference/NSS_cryptographic_module/FIPS_mode_of_operation
I have been able to do every other type of crypto operation by following the documentation because it mirrors PKCS 11 described here:
http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/cos01/pkcs11-base-v2.40-cos01.html
However, attempting to import any template where the CK_OBJECT_CLASS is CKO_SECRET_KEY always returns CKR_ATTRIBUTE_VALUE_INVALID (0x00000013), whereas I have no problem with asymmetric (public/private) keys:
CK_RV crv;
CK_FUNCTION_LIST_PTR pFunctionList;
CK_OBJECT_CLASS keyClass = CKO_SECRET_KEY;
CK_ATTRIBUTE keyTemplate[] = {
    {CKA_CLASS, &keyClass, sizeof(keyClass)}
};
crv = pFunctionList->C_CreateObject(hRwSession, keyTemplate, 1, &hKey);
printf("failed with 0x%08X\n", crv);
But according to the documentation this should be returning CKR_TEMPLATE_INCOMPLETE, as CKO_SECRET_KEY is a valid object class.
Again, I have had no trouble with asymmetric keys. I should also point out that my function pointer list is for FIPS mode only. Any insight is greatly appreciated!
It looks like the code you pasted is either incomplete or simply wrong. In particular, there's no concrete value for the key you're creating in the template (CKA_VALUE), which can easily cause the error you're getting from C_CreateObject.