I've been tasked by our IT/Security department with validating the integrity of my downloads from npm. I'm a programmer, and while I understand at a high level what performing a SHA checksum is, I'm having trouble figuring out how to do that on my npm packages.
I successfully performed a check on a single file downloaded from the browser for something other than npm. npm installs come with an "integrity" value in package-lock.json, but I am unsure how to use that value. For example, when trying to perform this check on the D3 library, which has the integrity value "sha512-4PL5hHaHwX4m7Zr1UapXW23apo6pexCgdetdJ5kTmADpG/7T9Gkxw0M0tf/pjoB63ezCCm0u5UaFYy2aMt0Mcw==", I have been unable to recreate that value. I tried creating a tarball of the package folder using 7-Zip, and I even tried directly downloading the ".tgz" file from the "resolved" value "https://registry.npmjs.org/d3/-/d3-5.16.0.tgz", which still does not yield the right checksum.
I have used both of the following commands, which give me the same result (e0f2f9847687c17e26ed9af551aa575b6ddaa68ea97b10a075eb5d2799139800e91bfed3f46931c34334b5ffe98e807addecc20a6d2ee54685632d9a32dd0c73):
Get-FileHash -Path C:\Path\to\d3-5.16.0.tgz -Algorithm SHA512
certutil -hashfile C:\Path\to\d3-5.16.0.tgz sha512
If anyone can walk me through what I'm doing wrong or missing, it would be much appreciated.
You just missed one step: you need to convert the result from hex to Base64.
The digest you generated is hexadecimal (the default output of those tools), but npm's integrity field is a Subresource Integrity (SRI) string: the algorithm name, a dash, then the Base64 encoding of the same raw digest.
Summary:
Hash your file with the algorithm of your choice (e.g. SHA-256).
By default, that gives you a hexadecimal digest. You need to convert it into a Base64-encoded hash.
Option 1: You can generate SRI hashes from the command line with openssl:
cat FILENAME.js | openssl dgst -sha384 -binary | openssl base64 -A
Option 2: Or with shasum
shasum -b -a 384 FILENAME.js | awk '{ print $1 }' | xxd -r -p | base64
Option 3: Use online tools.
Firstly, upload and hash your file with any online tool (I was using this); it should produce a hexadecimal hash output.
Then, convert the hexadecimal hash output into a Base64-encoded value (I was using this).
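Option 4: Since the question used Get-FileHash on Windows, here is a rough PowerShell sketch of the same conversion (the file path is the one from the question; npm prefixes the Base64 digest with the algorithm name):
# hash the file, turn the hex digest into raw bytes, then Base64-encode
$hex = (Get-FileHash -Path C:\Path\to\d3-5.16.0.tgz -Algorithm SHA512).Hash
$bytes = [byte[]] -split ($hex -replace '..', '0x$& ')
"sha512-" + [Convert]::ToBase64String($bytes)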
References:
W3's Subresource Integrity
Related
The redis tarball hashes page at https://github.com/redis/redis-hashes/ lists the following SHA-256 hash for redis-7.0.5.tar.gz:
hash redis-7.0.5.tar.gz sha256 67054cc37b58c125df93bd78000261ec0ef4436a26b40f38262c780e56315cc3
How is this hash generated, and on what platform?
After downloading the .tar.gz file, I tried matching the hash, and the hashes do not match.
Tried on Red Hat Enterprise Linux Server release 7.9 (Maipo)
Tried sha256sum with and without -b option
$ sha256sum redis-7.0.5.tar.gz
40827fcaf188456ad9b3be8e27a4f403c43672b6bb6201192dc15756af6f1eae redis-7.0.5.tar.gz
$ sha256sum -b redis-7.0.5.tar.gz
40827fcaf188456ad9b3be8e27a4f403c43672b6bb6201192dc15756af6f1eae *redis-7.0.5.tar.gz
Tried python hashlib.sha256() (reading file in "rb" mode)
$ python a.py
40827fcaf188456ad9b3be8e27a4f403c43672b6bb6201192dc15756af6f1eae
Tried Windows 10
certutil -hashfile redis-7.0.5.tar.gz SHA256
SHA256 hash of redis-7.0.5.tar.gz:
40827fcaf188456ad9b3be8e27a4f403c43672b6bb6201192dc15756af6f1eae
How did the redis site get 67054cc37b58c125df93bd78000261ec0ef4436a26b40f38262c780e56315cc3?
What am I missing?
Your downloaded file is corrupted. Just delete the file and download it again.
I have just tested this on my PC. My OS is Debian, and I downloaded the file with the Firefox web browser.
$ sha256sum redis-7.0.5.tar.gz
67054cc37b58c125df93bd78000261ec0ef4436a26b40f38262c780e56315cc3 redis-7.0.5.tar.gz
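If it helps, after re-downloading you can let sha256sum do the comparison itself; it reads "hash  filename" lines (note the two spaces) and reports OK on a match:
echo "67054cc37b58c125df93bd78000261ec0ef4436a26b40f38262c780e56315cc3  redis-7.0.5.tar.gz" | sha256sum -c -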
I'm playing with RISC-V.
I have a .img file and I want to disassemble it into a .asm file, so I ran the following command:
> riscv64-unknown-elf-objdump -d xxx.img > xxx.asm
However, I got this issue:
riscv64-unknown-elf-objdump: xxx.img: file format not recognized
How can I fix it? I have no idea what to do with this issue.
If you run:
riscv64-unknown-elf-objdump --help
You'll see a line like:
riscv64-unknown-elf-objdump: supported architectures: riscv riscv:rv64 riscv:rv32
These are the supported architectures that you need to pass as the -m argument. Normally, an ELF file encodes this information, so there is no guesswork, but with a flat binary image there's no way for objdump to know how the instructions are supposed to be interpreted. The final command (using the .img file and output redirection from the question) is:
riscv64-unknown-elf-objdump -b binary -m riscv:rv64 -D xxx.img > xxx.asm
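If your image is linked at a non-zero base address, you can also rebase the printed addresses with --adjust-vma (the 0x80000000 below is only an assumed example, common for RISC-V firmware; use your own link address):
riscv64-unknown-elf-objdump -b binary -m riscv:rv64 --adjust-vma=0x80000000 -D xxx.img > xxx.asm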
I'm wondering how to verify the checksum of a tarball backup against the original directory after creation.
Is it possible to do so without extracting it, for example if it's a large 20 GB backup?
Example, a directory with two files:
mkdir test &&
echo "one" > test/one.txt &&
echo "two" > test/two.txt
Get checksum of directory:
find test/ -type f -print0 | sort -z | xargs -0 shasum | shasum
Resulting checksum of directory content:
d191c793cacc4bec1f070eb96fa68524cca566f8 -
Create tarball:
tar -czf test.tar.gz test/
The checksum of the directory content stays constant.
But when I create the archive and take the checksum of the archive itself, I noticed that the result varies from run to run. Why is that?
How would I go about getting a checksum of the tarball's content to compare to the directory content checksum?
Or is there a better way to check that the archive contains all the necessary content from the original directory (without extracting it, if it's large)?
The archive's own checksum varies because the gzip header embeds a modification timestamp, so two compressions of identical content made at different times produce different bytes. Your directory checksum is calculating the SHA-1 of each file's contents, so to compare against it you would need to read and decompress the entire tar archive and do the same calculation. That doesn't mean you'd need to save the contents of the archive anywhere: you'd just need to read it sequentially into memory and do the calculation there.
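A rough sketch of that idea with GNU tar, whose --to-command option runs a command once per archive member, with the member's contents on stdin and its path in $TAR_FILENAME (assumes GNU tar and filenames without spaces):
tar -xzf test.tar.gz --to-command='shasum | sed "s|-$|$TAR_FILENAME|"' | sort -k2 | shasum
Each member is hashed as it streams out of the archive, the stdin placeholder "-" in shasum's output is replaced by the member's path, and sorting by path mirrors your find | sort pipeline, so the final digest should match the directory digest as long as the stored paths (test/...) agree.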
I'm having trouble with git secret in GitLab CI jobs.
What I've done:
initialized git secret, added users, added files, and hid them using git secret
created a job where I want to reveal the files:
git secret:
stage: init
before_script:
- sh -c "echo 'deb https://gitsecret.jfrog.io/artifactory/git-secret-deb git-secret main' >> /etc/apt/sources.list"
- wget -qO - 'https://gitsecret.jfrog.io/artifactory/api/gpg/key/public' | apt-key add -
- apt-get update && apt-get install -y git-secret
script:
- echo $GPG_PRIVATE_KEY | tr ',' '\n' > ./pkey.gpg
- export GPG_TTY=$(tty)
- gpg --batch --import ./pkey.gpg
- git secret reveal -p ${GPG_PASSPHRASE}
Result logs:
...
$ gpg --batch --import ./pkey.gpg
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key SOMEKEY: public key "Email Name <ci@email.com>" imported
gpg: key SOMEKEY: secret key imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: secret keys read: 1
gpg: secret keys imported: 1
$ git secret reveal -p ${GPG_PASSPHRASE}
gpg: [don't know]: partial length invalid for packet type 20
git-secret: abort: problem decrypting file with gpg: exit code 2: /path/to/decrypted/file
I don't understand where the problem is. What does "packet type 20" mean? And the length of what?
Locally, the files reveal fine. The command git secret whoknows shows that the email used in the CI environment can decrypt. The passphrase has been checked and is passed to the job.
For me, the problem was the GnuPG versions being different between the encryption machine (v2.3) and the decryption side (v2.2).
After I downgraded it to v2.2 (v2.3 was not yet available on Debian), the problem went away.
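A quick way to check for this mismatch is to print the version on each machine and compare:
gpg --version | head -n1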
This is a common problem with the format of the keys.
Since you're using GitLab CI, you should take advantage of the File type in the CI/CD Variables instead of storing the value of the GPG key as a Variable type.
First of all, forget about flattening the armor into one line with | tr '\n' ',': store the proper multiline armor.
Second, add it to your GitLab CI Variables with type "File", add an empty line at the end and then delete it (this seems silly, but it will save you headaches, since there seems to be a problem when copying directly from the shell into the textbox in GitLab).
Third, import the file directly into your keyring:
gpg --batch --import $GPG_PRIVATE_KEY
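With a File-type variable, GitLab writes the contents to a temporary file and puts that file's path into the variable, so the script section shrinks to something like this (a sketch, keeping the variable names from the question):
script:
  - gpg --batch --import "$GPG_PRIVATE_KEY"
  - git secret reveal -p "$GPG_PASSPHRASE"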
Whenever I try to SCP files (in bash), they end up in a seemingly random(?) order.
I've found a simple but not-very-elegant way of keeping a desired order, described below. Is there a clever way of doing it?
Edit: deleted my early solution from here, cleaned, adapted using other suggestions, and added as an answer below.
To send files from a local machine (e.g. your laptop) to a remote (e.g. your calculation server), you can use Merlin2011's clever solution:
Go into the folder in your local machine where you want to copy files from.
Execute the scp command, assuming you have an access key for the remote server:
scp -r $(ls -rt) user@foo.bar:/where/you/want/them/.
Note: if you don't have a public access key, it may be better to do something similar using tar: create the tarball with tar -zcvf files.tar.gz $(ls -rt), and then send that one file on its own using scp.
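For that fallback, the whole sequence would look something like this (the paths are placeholders; extraction order follows archive order, so the ls -rt ordering survives):
tar -zcvf files.tar.gz $(ls -rt)
scp files.tar.gz user@foo.bar:/where/you/want/them/
ssh user@foo.bar 'cd /where/you/want/them && tar -zxvf files.tar.gz'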
But to do it the other way around, you might not be able to run the scp command directly from the remote server to send files to, say, your laptop. Instead, you may need to pull the files into your laptop. My brute-force solution is:
On the remote server, cd into the folder you want to copy files from.
Create a list of the files in the order you want. For example, for reverse order of creation (most recent copied last):
ls -rt > ../filenames.txt
Now you need to prepend the path to each file name. Before you go up to the directory where the list is, print the working directory with pwd. Then go up: cd ..
You now need to add this path to each file name in the list. There are many ways to do this, here's one using awk:
cat filenames.txt | awk '{print "path/to/files/" $0}' > delete_me.txt
You need the filenames to be on a single line, separated by spaces, so change newlines to spaces:
tr '\n' ' ' < delete_me.txt > filenames.txt
Get filenames.txt onto the local machine, and put it in the folder you want to copy the files into.
The scp run would be:
scp -r user@foo.bar:"$(cat filenames.txt)" .
Similarly, this assumes you have a private access key; otherwise it's much simpler to tar the files on the remote and bring the tarball over.
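If you'd rather not shuttle filenames.txt around at all, the same idea fits on one line by generating the ordered list remotely on the fly (a sketch; /path/to/files is a placeholder, classic scp path handling is assumed, and filenames must not contain spaces):
scp user@foo.bar:"$(ssh user@foo.bar 'ls -rt /path/to/files/*' | tr '\n' ' ')" .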
One can achieve file transfer with alphabetical order using rsync:
rsync -P -e ssh -r user@remote_host:/some_path local_path
-P allows partial (resumable) transfers and shows progress, -e sets the remote shell to ssh, and -r downloads recursively.
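The same flags work for pushing in the other direction, from local to remote:
rsync -P -e ssh -r local_path user@remote_host:/some_path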
You can do it in one line, without an intermediate file, using xargs:
ls -r <directory> | xargs -I {} scp <directory>/{} user@foo.bar:folder/
Of course, this would require you to type your password multiple times if you do not have public key authentication.
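If you can't set up a key, one way to avoid the repeated prompts is OpenSSH connection sharing, so every scp call reuses a single authenticated connection (a sketch; the ControlPath location is arbitrary):
# authenticate once and keep a master connection in the background
ssh -Nf -o ControlMaster=yes -o ControlPath=~/.ssh/cm-%r@%h:%p user@foo.bar
ls -r <directory> | xargs -I {} scp -o ControlPath=~/.ssh/cm-%r@%h:%p <directory>/{} user@foo.bar:folder/
# close the master connection when done
ssh -o ControlPath=~/.ssh/cm-%r@%h:%p -O exit user@foo.bar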
You can also use cd and still skip the intermediate file.
cd <directory>
scp $(ls -r) user@foo.bar:folder/