How to use keys from Solana keygen to use a web wallet? - phantomjs

I ran this CLI command
solana-keygen new --outfile ~/my-solana-wallet/my-keypair.json
And I copied down the public key, the BIP39 passphrase, and the 12 seed words. When I paste the seed words into Phantom and Sollet, they show empty accounts. I have sent SOL to that public-key address and am worried I have lost it.
How do I access my account through the Sollet or Phantom wallets?

Copy the contents of my-keypair.json, hit "Import Wallet" in Phantom, and paste the private key there. Your account should then show up.
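Note that my-keypair.json holds the keypair as a JSON array of 64 bytes, while Phantom's import dialog (in some versions) expects a base58-encoded string. If pasting the raw array doesn't work, here is a minimal, unofficial sketch of the conversion; the placeholder bytes below are not a real key:

```python
import json

# Bitcoin/Solana base58 alphabet (no 0, O, I, l)
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # every leading zero byte maps to a leading '1'
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

# In practice: keypair = json.load(open("my-keypair.json")) -- a list of 64 ints
# (32-byte private key followed by the 32-byte public key).
keypair = [1] * 64  # placeholder bytes, NOT a real key
print(b58encode(bytes(keypair)))  # paste the printed string into Phantom
```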
Also check which network you're on: it could be devnet, testnet, or localnet, and the balances differ between networks.
To get SOL on devnet you can use the airdrop function from the CLI:
solana airdrop 1 --url devnet

The first command you should run is:
solana-keygen recover --force 'prompt:?key=0/0' --outfile ~/.config/solana/id.json
The tool will then prompt you for your seed words and BIP39 passphrase. After recovering, you can send SOL with a command like this (the recipient address below is just a placeholder):
solana transfer --from ~/.config/solana/id.json asjfdklasjdfklasjdfaksjdlkdkdkdjfdkjk 60.75 --fee-payer ~/.config/solana/id.json

Related

Azcopy command issue with parameters

I'm using Azcopy within a shell script to copy blobs within a container from one storage account to another on Azure.
Using the following command -
azcopy copy "https://$source_storage_name.blob.core.windows.net/$container_name/?$source_sas" "https://$dest_storage_name.blob.core.windows.net/$container_name/?$dest_sas" --recursive
I'm generating the SAS token for both source and destination accounts and passing them as parameters in the command above along with the storage account and container names.
On execution, I keep getting this error:
failed to parse user input due to error: the inferred source/destination combination could not be identified, or is currently not supported
When I manually enter the storage account names, container name and SAS tokens, the command executes successfully and storage data gets transferred as expected. However, when I use parameters in the azcopy command I get the error.
Any suggestions on this would be greatly appreciated.
Thanks!
You can use the below PowerShell Script
param
(
[string] $source_storage_name,
[string] $source_container_name,
[string] $dest_storage_name,
[string] $dest_container_name,
[string] $source_sas,
[string] $dest_sas
)
.\azcopy.exe copy "https://$source_storage_name.blob.core.windows.net/$source_container_name/?$source_sas" "https://$dest_storage_name.blob.core.windows.net/$dest_container_name/?$dest_sas" --recursive=true
To execute the above script you can run the below command.
.\ScriptFileName.ps1 -source_storage_name "<XXXXX>" -source_container_name "<XXXXX>" -source_sas "<XXXXXX>" -dest_storage_name "<XXXXX>" -dest_container_name "<XXXXXX>" -dest_sas "<XXXXX>"
I am generating the SAS tokens for both storage accounts from the Azure portal. Make sure to tick all the permission boxes (Read, Add, Create, Write, List, etc.) when generating them.
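As a quick sanity check before invoking azcopy, you can assemble the URLs in code and print them: the "inferred source/destination combination" error often shows up when one of the expanded variables is empty or the SAS query string (which contains & characters) gets split by the shell. A minimal sketch with placeholder values, not real credentials:

```python
# Placeholder values (assumptions), not real account names or credentials.
source_storage_name = "srcaccount"
container_name = "mycontainer"
source_sas = "sv=2021-08-06&sig=abc123"

# Fail fast if any parameter came through empty.
assert all([source_storage_name, container_name, source_sas]), "empty parameter"

# Assemble the URL exactly as azcopy should receive it, then inspect it.
src_url = f"https://{source_storage_name}.blob.core.windows.net/{container_name}/?{source_sas}"
print(src_url)
```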

Trying to understand SSLKEYLOGFILE environment variable output format

I have been messing around with the SSLKEYLOGFILE environment variable, and I am trying to understand everything inside the output it gives me (the .log file with all the session keys).
Here is a picture of what the output looks like:
I understand that these are keys, but I notice a space in the middle of each line, suggesting they are two separate values. What exactly are the two different values being logged, and how is Wireshark able to use this file to decrypt SSL/TLS traffic?
The answer to your question is in a comment from the commit that added this feature:
* - "CLIENT_RANDOM xxxx yyyy"
* Where xxxx is the client_random from the ClientHello (hex-encoded)
* Where yyyy is the cleartext master secret (hex-encoded)
* (This format allows non-RSA SSL connections to be decrypted, i.e.
* ECDHE-RSA.)
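To make the two fields concrete, here is a minimal sketch that parses one pre-TLS 1.3 CLIENT_RANDOM line into its two hex-encoded parts; the line itself is fabricated example data, not real key material:

```python
def parse_keylog_line(line: str):
    # Pre-TLS 1.3 format: "CLIENT_RANDOM <64 hex chars> <96 hex chars>"
    label, client_random, secret = line.strip().split(" ")
    assert label == "CLIENT_RANDOM"
    return bytes.fromhex(client_random), bytes.fromhex(secret)

# Fabricated example line -- not real key material.
line = "CLIENT_RANDOM " + "aa" * 32 + " " + "bb" * 48
client_random, master_secret = parse_keylog_line(line)
print(len(client_random), len(master_secret))  # 32-byte random, 48-byte master secret
```

Wireshark matches the client_random it sees on the wire against the first field, then uses the corresponding master secret to derive the session keys.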

In Lettuce (4.x) for Redis, how to reduce round trips and use the output of one command as input for another, especially for GEORADIUS

I have seen this: pass results to another command in redis,
and via the command line this command works well:
src/redis-cli keys '*' | xargs src/redis-cli mget
However, how can we achieve the same effect via Lettuce (I started trying out 4.0.2.Final)?
Also a solution to this is particularly important in the following scenario :
Say we are using geolocation capabilities, and we add a set of locations of "my-location-category"
using GEOADD
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1" 8.3796281 48.9978127 "location-id:2" 8.665351 49.553302 "location-id:3"
Next, say we do a GeoRadius to get locations within 10 km radius of 8.6582361 49.5285495 for "category-1"
Now say we get back "location-id:1" & "location-id:3".
Given that I have already set values for the keys "location-id:1" & "location-id:3",
I want to pipe commands so as to do the GEORADIUS as well as an MGET on all the matching results.
Does Redis provide a feature to do that?
And/or how can we achieve this via the Lettuce client library without first manually iterating through the results of GEORADIUS and then doing a manual MGET?
That would be more efficient for the program that uses it.
Does anyone know how we can do this?
Update
This is the piped command for the scenario I discussed above :
src/redis-cli GEORADIUS "category-1" 8.6582361 49.5285495 10 km | xargs src/redis-cli mget
Now we need to know how to do this via Lettuce
IMPORTANT: never use KEYS, always use SCAN instead if you must.
This isn't really a question about Lettuce nor Java so I can actually answer it :)
What you're trying to do is use the results from a read operation (GEORADIUS) as input (key names) for another read operation (MGET). This type of flow can't be pipelined precisely because of that: pipelining means you don't need the answers to operations right away, but in your case you do.
However.
Since you're reading String keys with MGET, you might as well just denormalize everything (remember, we're NoSQL) and store the contents of these keys in the Sorted Set's members, e.g.:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1:moredata:evenmoredata:{maybe a JSON document here}:orperhapsmsgpack"
This will allow you to get the locations and their "data" with one GEORADIUS call. Of course, any updates to location:1's data will need to be done across all categories.
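The denormalization idea can be sketched as follows; the "id|json" member format and the field names are made up for illustration, not a Redis convention:

```python
import json

# Hypothetical member format "id|<json>": pack a location's data into the
# sorted-set member itself so a single GEORADIUS call returns id and payload.
def pack_member(location_id: str, payload: dict) -> str:
    return f"{location_id}|{json.dumps(payload, separators=(',', ':'))}"

def unpack_member(member: str):
    location_id, raw = member.split("|", 1)  # split on the first '|' only
    return location_id, json.loads(raw)

member = pack_member("location-id:1", {"name": "Depot A"})
print(member)
print(unpack_member(member))
```

The string produced by pack_member is what you would pass as the member argument of GEOADD; unpack_member recovers the id and data from each GEORADIUS result.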
A note about Lua scripts: while a Lua script could definitely save on the back and forth in this case, any such script will be against best practices/not cluster safe.
After digging around and studying Lua scripting, my conclusion is that removing round trips in this way can only be done via Lua scripts, as suggested by Itamar Haber.
I ended up creating a lua script file (myscript.lua) as below
local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km')
if #locationKeys == 0 then
  return nil
else
  return redis.call('MGET', unpack(locationKeys))
end
** Of course we should be passing in parameters to this (via KEYS/ARGV); this is just a proof of concept :)
Now you can execute it via this command:
src/redis-cli EVAL "$(cat myscript.lua)" 0
Then to reduce the network-overhead of sending across the entire script to Redis for execution, we have the option of registering the script with Redis.
Redis will give us a sha1 digested code for future references for that script, which can be used for next calls to that script.
This can be done as below :
src/redis-cli SCRIPT LOAD "$(cat myscript.lua)"
This should give back a sha1 code, something like: 49730aa2ed3034ee48f818e486b2bdf1b500b19e
Subsequent calls can be made using this code, e.g.:
src/redis-cli evalsha 49730aa2ed3034ee48f818e486b2bdf1b500b19e 0
The sad part here, however, is that the sha1 digest is remembered only as long as the Redis instance is running. If it is restarted, the sha1 digest is lost and you have to do SCRIPT LOAD once again. If nothing in the script has changed, the sha1 digest will be the same.
Ideally, when using a client API, we should first attempt EVALSHA; if that returns a NOSCRIPT ("no matching script") error, then as a fallback do SCRIPT LOAD, procure the sha1 code again, keep an internal map of it, and use that sha1 for further calls.
This can be done via Lettuce; I could find the methods for it. Hope this gives good insight into a solution for the problem.
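The EVALSHA-with-fallback flow described above can be sketched like this, using a stand-in client object instead of a live Redis connection (a real client such as Lettuce or redis-py exposes the equivalent evalsha/script-load operations):

```python
import hashlib

class NoScriptError(Exception):
    """Stands in for the NOSCRIPT error Redis returns for an unknown sha1."""

class FakeRedis:
    """Minimal stand-in for a Redis client; NOT a real connection."""
    def __init__(self):
        self.scripts = {}
    def script_load(self, script: str) -> str:
        sha = hashlib.sha1(script.encode()).hexdigest()
        self.scripts[sha] = script
        return sha
    def evalsha(self, sha: str):
        if sha not in self.scripts:
            raise NoScriptError(sha)
        return "script-result"  # pretend the script executed

def run_script(client, script: str, sha_cache: dict):
    # Try the cached (or locally computed) sha first; reload only on NOSCRIPT.
    sha = sha_cache.get(script) or hashlib.sha1(script.encode()).hexdigest()
    try:
        return client.evalsha(sha)
    except NoScriptError:
        # e.g. after a Redis restart: re-register the script, then retry once
        sha_cache[script] = client.script_load(script)
        return client.evalsha(sha_cache[script])

client, cache = FakeRedis(), {}
print(run_script(client, "return 1", cache))  # first call falls back to SCRIPT LOAD
print(run_script(client, "return 1", cache))  # second call succeeds via EVALSHA
```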

Generate a ZMK Component only

I have seen the command that generates and also prints a ZMK component: Generate and Print a ZMK Component, with command code 'OC'.
But I don't want it to be printed, and in the 'OC' command printing seems to be mandatory:
Question:
Is there any way I can tweak this? Or is there any other command which just generates a ZMK without the need to print it? I'm using a Thales payShield 9000 HSM.
From the 1270A546-016 Host Command Reference v2.3b, Notes: This command is superseded by host command 'A2'.
A printer must be attached to one of the USB ports on the payShield 9000. Serial-to-USB and parallel-to-USB cables are available from Thales, on request.
I believe that there is no way to use the Generate and Print a ZMK Component (OC) command without using a printer.
Follow up
Check the command Generate a Key (A0).
Mode = 0 (Generate key)
Key Type = 000 (Zone Master Key, ZMK)
This is the A0 response using the Thales Test LMK
HEADA100U6809C450D3F68AC78E80BA0C80E1D071F5EE20U6809C450D3F68AC78E80BA0C80E1D071F5EE20

How to simulate keys input in DOS command / C# (GPG)

I'm executing GPG via DOS commands in C#, and it mostly works.
I've managed to read passwords from standard input (similarly to what is written here).
But I'm stuck on key deletion, for which you need to execute the command below:
gpg --delete-key "Key Name"
The problem is that GPG asks if you are sure you want to delete this key, and you need to press Y <ENTER>, which I'm not able to achieve...
It seems GPG doesn't read this from stdin.
I've tried a DOS-like solution:
echo Y | gpg --delete-key "Key Name"
and making a txt file with Y as the 1st line and <Enter> as the 2nd:
type yes.txt | gpg --delete-key "Key Name"
Neither worked...
Any idea how to make it work?
Try passing --yes as a parameter (combined with --batch for non-interactive use):
gpg --batch --yes --delete-key "Key Name"
From the GPG manual:
--delete-key name
Remove key from the public keyring. In batch mode either --yes is required or the key must be specified by fingerprint. This is a safeguard against accidental deletion of multiple keys.
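Putting the manual's advice together for the non-interactive case: --batch suppresses prompts and --yes answers the confirmation. A small sketch that only builds the argument list; actually running it requires gpg to be installed, and in C# the same flags would go into ProcessStartInfo.Arguments:

```python
import shlex

# Build the non-interactive deletion command; running it is left to the caller.
def build_delete_cmd(key_name: str) -> list:
    return ["gpg", "--batch", "--yes", "--delete-key", key_name]

cmd = build_delete_cmd("Key Name")
print(shlex.join(cmd))  # shell-quoted form of the command
```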