How to use the i18next-chained-backend plugin for missing-key-level fallback?

It seems that by default react-i18next falls back to the translation key itself if no translation is found at the remote URL I passed via the HttpBackend loadPath, e.g.:
// No translation defined for bill_type_blank yet
i18next.t('bill_type_blank') // Returns 'bill_type_blank'
If no translation is found for a key at the remote loadPath, I would prefer to fetch that same key from the local path "locales/{{lng}}/translation.json". I am already using ChainedBackend for network-level fallback; I want the same at the key level. How can I achieve something like this using the i18next library?
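One possible approach is sketched below (untested against your setup; the loadPath URL, the file location, and the localEn import are illustrative). Note that i18next-chained-backend only moves to the next backend when a whole load fails, so the second backend covers network failures; for individual keys missing from the remote bundle, a parseMissingKeyHandler can resolve them against the bundled local copy:

```javascript
import i18next from 'i18next';
import ChainedBackend from 'i18next-chained-backend';
import HttpBackend from 'i18next-http-backend';
import resourcesToBackend from 'i18next-resources-to-backend';

// Local copy bundled with the app (path is illustrative)
import localEn from './locales/en/translation.json';

i18next.use(ChainedBackend).init({
  lng: 'en',
  backend: {
    backends: [
      HttpBackend,                                          // remote first
      resourcesToBackend({ en: { translation: localEn } }), // local second
    ],
    // One options object per backend, in the same order
    backendOptions: [
      { loadPath: 'https://example.com/{{lng}}/{{ns}}.json' },
      {},
    ],
  },
  // The chain above only kicks in when the remote *load* fails.
  // For key-level fallback, resolve a missing key from the local copy
  // (assumes a flat translation.json; nested keys would need a lookup helper).
  parseMissingKeyHandler: (key) => localEn[key] ?? key,
});
```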


Why am I getting a different asset amount using the Algorand Indexer vs the daemon?

So I created a new ASA (AKA: Algorand Standard Asset) and set the total amount of that asset to be maximum.
Here's a quick snippet of how I did it:
const UINT64_MAX: bigint = BigInt('18446744073709551615');
Now, when I check how many tokens the asset creator has with Algorand's Daemon API:
curl http://localhost:8980/v2/accounts/3IELQKOD...3C5IB3BP4V4A/assets
I get it exactly right as: 18446744073709551615
But when I check it with the Indexer in the SDK, it's different.
It shows the total as "18446744073709552000", which is not correct.
What am I doing wrong here, or is this an error in the library?
You need to set your client to use BigInt or mixed int decoding,
since JavaScript's Number type can only safely represent integers up to 2^53 - 1.
You can easily set this via the IntDecoding method for all JSON requests created by the client here.
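The rounding itself has nothing to do with Algorand; it can be reproduced in plain JavaScript, which shows why the decoding mode matters (check the algosdk docs for the exact IntDecoding option names):

```javascript
// JavaScript's Number is an IEEE-754 double, which represents integers
// exactly only up to Number.MAX_SAFE_INTEGER (2^53 - 1).
const raw = '18446744073709551615'; // UINT64_MAX, the ASA total from the question

// Parsing into a plain Number rounds to the nearest representable double
const asNumber = Number(raw);
console.log(asNumber);            // 18446744073709552000 (precision lost)

// Parsing into a BigInt preserves the full 64-bit value
const asBigInt = BigInt(raw);
console.log(asBigInt.toString()); // 18446744073709551615
```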

Export IPFS key to a human-readable text format

How do I export my IPFS key to a file (to use it similarly to gpg)?
I need to export the key in an openssl/gpg-style format.
You can export keys from go-ipfs using ipfs key export as long as the daemon isn't running. I'm not sure what you mean by needing to export the key in text mode, but the keys are libp2p keys, whose format is described here.
You can of course encode the key in any representation you want (e.g. base16, base32, etc.). On the other hand, if you want to transform the libp2p keys into some other format then you can do so by writing some conversion program to convert the key. A libp2p key unmarshalling function in Go is here.
An example of running export in PowerShell is below:
C:\Users\adin> ipfs key gen example
k51qzi5uqu5dlxhpvewosfhwueh87q9c0rttznvu0k8fhui8mvjd0qmpt2n9b0
C:\Users\adin> ipfs key list
self
example
C:\Users\adin> ipfs key export example -o="myoutput.key"
C:\Users\adin> [Convert]::ToBase64String([IO.File]::ReadAllBytes("myoutput.key"))
CAESQCncEZprjyHaWjMkduj9qcma/Hk7Rjb2sqObS06Rwv+g5pgBN5fZ0DdRMmVLs49OJP0hM/NfkPa2kdOK64u0dNw=
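As an aside, the exported file is the libp2p PrivateKey protobuf mentioned above. Assuming the two-field schema from the libp2p crypto spec (field 1 = key type, field 2 = raw key bytes), a small Node.js sketch can peek inside the exported blob; the hand-rolled decoder below is for illustration only and assumes key data shorter than 128 bytes:

```javascript
// Decode a libp2p PrivateKey protobuf:
//   field 1 (varint) = key type (0 = RSA, 1 = Ed25519, 2 = Secp256k1, 3 = ECDSA)
//   field 2 (bytes)  = the raw key data
function decodeLibp2pKey(buf) {
  let i = 0;
  const out = {};
  while (i < buf.length) {
    const tag = buf[i++];
    const field = tag >> 3;
    const wire = tag & 7;
    if (wire === 0) {            // varint
      let v = 0, shift = 0;
      while (buf[i] & 0x80) { v |= (buf[i++] & 0x7f) << shift; shift += 7; }
      v |= buf[i++] << shift;
      if (field === 1) out.type = v;
    } else if (wire === 2) {     // length-delimited (assumes length < 128)
      const len = buf[i++];
      if (field === 2) out.data = buf.subarray(i, i + len);
      i += len;
    }
  }
  return out;
}

// The exported key from the PowerShell session above:
const b64 = 'CAESQCncEZprjyHaWjMkduj9qcma/Hk7Rjb2sqObS06Rwv+g5pgBN5fZ0DdRMmVLs49OJP0hM/NfkPa2kdOK64u0dNw=';
const key = decodeLibp2pKey(Buffer.from(b64, 'base64'));
console.log(key.type);        // 1 (Ed25519)
console.log(key.data.length); // 64 (private + public halves)
```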

NSS Secret (symmetric) Key Import

I am trying to figure out how to import a symmetric key into NSS for use with encryption at the core crypto boundary. These functions are described here:
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Reference/NSS_cryptographic_module/FIPS_mode_of_operation
I have been able to do every other type of crypto operation by following the documentation because it mirrors PKCS 11 described here:
http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/cos01/pkcs11-base-v2.40-cos01.html
However, attempting to import any template where the CK_OBJECT_CLASS is CKO_SECRET_KEY always returns CKR_ATTRIBUTE_VALUE_INVALID (0x00000013). But I have no problem with asymmetric (public/private) keys:
CK_RV crv;
CK_FUNCTION_LIST_PTR pFunctionList;
CK_OBJECT_CLASS keyClass = CKO_SECRET_KEY;
CK_ATTRIBUTE keyTemplate[] = {
    {CKA_CLASS, &keyClass, sizeof(keyClass)}
};
crv = pFunctionList->C_CreateObject(hRwSession, keyTemplate, 1, &hKey);
printf("failed with 0x%08lX\n", crv);
But according to the documentation this should return CKR_TEMPLATE_INCOMPLETE, since CKO_SECRET_KEY is a valid object class.
Again, I have had no trouble with asymmetric keys. I should also point out that my function list pointer is for FIPS mode only. Any insight is greatly appreciated!
It looks like the code you pasted is either incomplete or simply wrong. In particular, there's no concrete value for the key you're creating in the template (CKA_VALUE), which can easily cause the error you're getting from C_CreateObject.

Default project id in BigQuery Java API

I am performing a query using the BigQuery Java API with the following code:
try (FileInputStream input = new FileInputStream(serviceAccountKeyFile)) {
    GoogleCredentials credentials = GoogleCredentials.fromStream(input);
    BigQuery bigQuery = BigQueryOptions.newBuilder()
        .setCredentials(credentials)
        .build()
        .getService();
    QueryRequest request = QueryRequest.of("SELECT * FROM foo.Bar");
    QueryResponse response = bigQuery.query(request);
    // Handle the response ...
}
Notice that I am using a specific service account whose key file is given by serviceAccountKeyFile.
I was expecting that the API would pick up the project_id from the key file. But it is actually picking up the project_id from the default key file referenced by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
This seems like a bug to me. Is there a way to workaround the bug by setting the default project explicitly?
Yeah, that doesn't sound right at all; it does sound like a bug. I always just export the GOOGLE_APPLICATION_CREDENTIALS environment variable in our applications.
Anyway, try explicitly setting the project id to see if it works:
BigQuery bigQuery = BigQueryOptions.newBuilder()
    .setCredentials(credentials)
    .setProjectId("project-id") // <-- try setting it here
    .build()
    .getService();
I don't believe the project is coming from GOOGLE_APPLICATION_CREDENTIALS. I suspect that the project being picked up is the gcloud default project set by gcloud init or gcloud config set project.
From my testing, BigQuery doesn't default to the project where the service account was created. I think the key is used only for authorization, and you always have to set a target project. There are a number of ways:
.setProjectId(<target-project>) in the builder
Define GOOGLE_CLOUD_PROJECT
gcloud config set project <target-project>
The query job will then be created in target-project. Of course, your service key must have access to target-project, which may or may not be the project where the key was created. That is, you can run a query against projects other than the one where your key was created, as long as the key has permission to do so.

Rackspace Cloud Files PHP get_objects at the "Root level"

I have been trying to figure out how to get the files at the root level, meaning all files that don't have a path prefix in their name.
I have a container that looks like this
image.png image/png
ui application/directory
ui/css application/directory
ui/css/test.css text/css
ui/image2.jpg image/jpg
I'm using the call
Container->get_objects(0, null, null, 'ui/');
which returns 2 CF_Objects:
ui/css
ui/image2.jpg
This is the desired output
but if I request the files at the "root level"
Container->get_objects(0, null, null, '/');
returns an empty array.
Container->get_objects(0, null, null, '');
returns all the files in the container.
Ideally it would return two CF_Objects: image.png and ui.
Is there a way to do this?
Thank you!
The Cloud Files Developer Guide of Nov 15, 2011 (page 20) says:
You can also use a delimiter parameter to represent a nested directory
hierarchy without the need for the directory marker objects. You can
use any single character as a delimiter. The listings can return
virtual directories - they are virtual in that they don't actually
represent real objects. Like the directory markers, though, they will
have a content-type of application/directory and be in a subdir
section of JSON and XML results.
If you have the objects photos/photo1, photos/photo2, movieobject, and
videos/movieobj4 in a container, a delimiter parameter query using
slash (/) would give you
photos,
movieobject,
videos.
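To make the quoted behaviour concrete, the virtual-directory collapsing that a delimiter performs can be sketched in a few lines of JavaScript (this mimics the server-side listing logic; it is not part of the PHP SDK):

```javascript
// Collapse a flat object listing into one level of virtual directories,
// the way a delimiter query does server-side.
function listWithDelimiter(names, delimiter, prefix = '') {
  const results = new Set();
  for (const name of names) {
    if (!name.startsWith(prefix)) continue;
    const rest = name.slice(prefix.length);
    const i = rest.indexOf(delimiter);
    // Keep plain objects as-is; truncate nested ones to their first segment.
    results.add(i === -1 ? rest : rest.slice(0, i + 1));
  }
  return [...results];
}

// The guide's example container:
const names = ['photos/photo1', 'photos/photo2', 'movieobject', 'videos/movieobj4'];
console.log(listWithDelimiter(names, '/'));
// → [ 'photos/', 'movieobject', 'videos/' ]
```

Run against the questioner's container, the same logic yields image.png, ui, and the virtual directory ui/.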
The "delimiter" parameter is not supported by get_objects in the PHP SDK, and using it seems to be the only way to list the base-level files.
There is currently a merge request in github [this request has since been approved] adding this particular parameter to the get_objects method.
Other users of the Rackspace Cloud Files API PHP SDK have also added support for this parameter.
See if the original php-cloudfiles repo gets updated, or create a fork of the original and add your own code. If you don't feel comfortable making your own changes, clone a fork that has already added the delimiter parameter, such as
https://github.com/michealmorgan/php-cloudfiles
or
https://github.com/onema/php-cloudfiles
The merge request referenced in the answer was approved on May 09, 2012: an optional $delimiter parameter was added to get_objects.
However, an error was introduced into the code at some other point which falsely reports that the container name is not set if one tries to use any of the optional parameters.
A request has been put in to correct this error.