How to extract all used hash160 addresses from Bitcoin blockchain

I have all 150 GB of Bitcoin blocks; now what? How do I open and read them in Python? I need to extract every hash160 used so far.
I tried to open them with Berkeley DB but had no success; it seems these files aren't Berkeley DB files.
And what is the difference between blkxxxxx.dat and revxxxxx.dat files anyway? The revxxxxx.dat files seem to be noticeably smaller.
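A minimal sketch of one way to do this in Python, assuming the files follow the standard blk*.dat layout of [4-byte mainnet magic][4-byte length][serialized block] records: it collects hash160 values only from standard P2PKH and P2SH output scripts, skips witness programs and nonstandard scripts, and the path at the bottom is hypothetical.

    import io
    import struct

    MAGIC = b'\xf9\xbe\xb4\xd9'  # mainnet network magic

    def read_varint(f):
        n = f.read(1)[0]
        if n < 0xfd:
            return n
        if n == 0xfd:
            return struct.unpack('<H', f.read(2))[0]
        if n == 0xfe:
            return struct.unpack('<I', f.read(4))[0]
        return struct.unpack('<Q', f.read(8))[0]

    def hash160_from_script(script):
        # P2PKH: OP_DUP OP_HASH160 PUSH(20) <hash160> OP_EQUALVERIFY OP_CHECKSIG
        if len(script) == 25 and script[:3] == b'\x76\xa9\x14' and script[23:] == b'\x88\xac':
            return script[3:23]
        # P2SH: OP_HASH160 PUSH(20) <hash160> OP_EQUAL
        if len(script) == 23 and script[:2] == b'\xa9\x14' and script[22] == 0x87:
            return script[2:22]
        return None  # witness program or nonstandard script: skipped here

    def parse_tx(buf, found):
        buf.read(4)                          # version
        n_in = read_varint(buf)
        segwit = n_in == 0                   # BIP144 marker byte
        if segwit:
            buf.read(1)                      # flag byte (0x01)
            n_in = read_varint(buf)
        for _ in range(n_in):
            buf.read(36)                     # prev txid + output index
            buf.read(read_varint(buf))       # scriptSig
            buf.read(4)                      # sequence
        for _ in range(read_varint(buf)):    # outputs
            buf.read(8)                      # value in satoshis
            h = hash160_from_script(buf.read(read_varint(buf)))
            if h:
                found.add(h)
        if segwit:                           # skip the witness stacks
            for _ in range(n_in):
                for _ in range(read_varint(buf)):
                    buf.read(read_varint(buf))
        buf.read(4)                          # locktime

    def scan_blk_file(path, found):
        with open(path, 'rb') as f:
            while True:
                magic = f.read(4)
                if len(magic) < 4 or magic != MAGIC:
                    break                    # EOF or zero padding at end of file
                size = struct.unpack('<I', f.read(4))[0]
                block = io.BytesIO(f.read(size))
                block.read(80)               # block header
                for _ in range(read_varint(block)):
                    parse_tx(block, found)

    found = set()
    scan_blk_file('blocks/blk00000.dat', found)  # hypothetical path
    print(len(found), 'distinct hash160 values')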

Related

How to make the uploaded file available for use after saving it with GetRandomFileName according to the FileHelpers example?

In the documentation's sample code for how to deal with user-uploaded files, they save it under a trusted filename for file storage via GetRandomFileName, and under a trusted filename for HTML display.
In the comments it says: "In most production scenarios, an anti-virus/anti-malware scanner API is used on the file before making the file available for download or for use by other systems."
Is that done before the file is saved with a random filename, or after? Because isn't that the point of saving it under a random filename, so that it doesn't get executed? And when the scanning is done, how is the file made available? I guess the file just has to be renamed if it passes the scan, or else deleted? If so, what is the proper way to get the original file extension? And do you know of any good scanners that are gratis and popular to use?
I'm trying to learn web development. Thanks for your time and help.
The renaming of the file here has nothing to do with anti-virus protection. Files don't tend to execute themselves, whatever their name is. Same with the virus scan: it's not for the server's protection, it's for the users' protection. If your server executes a binary it received from a client, that's a security breach regardless of whether the binary is a virus.
The renaming here is probably done just to be able to store duplicates. That being said, in production scenarios you'll probably never store incoming files as physical files on the file system. They usually go into the DB as blobs, so the name is not an issue.
This is just a sample app designed to teach how to work with binary streams and file controllers. Don't expect too much from it in terms of applicability to real solutions.
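On the extension question specifically, a minimal sketch of the usual pattern, written here in Python since the idea is framework-agnostic: keep the untrusted original name only for display, and carry just its extension over to a random storage name. The directory and helper names are hypothetical:

    import os
    import secrets

    UPLOAD_DIR = '/srv/uploads'  # hypothetical storage directory

    def save_upload(original_name, data):
        # Keep only the extension from the untrusted client-supplied name.
        _, ext = os.path.splitext(os.path.basename(original_name))
        stored_name = secrets.token_hex(16) + ext.lower()
        path = os.path.join(UPLOAD_DIR, stored_name)
        with open(path, 'wb') as fh:
            fh.write(data)
        # Return both: the random name for storage, the original for display
        # (HTML-encode the original before rendering it).
        return stored_name, original_name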

Use RStudio to connect to, and run queries on, a locally stored, compressed SQL database

I'm trying to connect to and run queries on two large, locally-stored SQL databases with file extensions like so:
filename.sql.zstd.part
filename2.sql.zstd
My preference is to use the RMySQL package; however, I am finding it hard to find documentation on a) how to access locally stored SQL files, and b) how to deal with the zstd extension.
This may be very basic but help is appreciated!
It seems the file extensions are the source of the confusion.
filename.sql.zstd.part
.part usually means you were downloading a file from the internet but the download isn't complete yet (i.e. a download that is in progress or has been stopped).
So to get from filename.sql.zstd.part to filename.sql.zstd, you need to complete the download.
.zstd means it is a compressed file (to save disk space). You need a decompression program to get from filename.sql.zstd to filename.sql
The compression algorithm used is called Zstandard, so you need a decompressor specifically for this format. Look at https://facebook.github.io/zstd/ for such a program.
There was also once an R package for this, but it has been archived; you could still download an older version
(https://cran.r-project.org/web/packages/zstdr/index.html).
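If the archived R package is a dead end, the decompression step can also be scripted outside R; a minimal sketch using the Python zstandard package (pip install zstandard), with the filenames taken from the question:

    import zstandard

    # Stream-decompress filename.sql.zstd to filename.sql without
    # loading the whole file into memory.
    dctx = zstandard.ZstdDecompressor()
    with open('filename.sql.zstd', 'rb') as src, open('filename.sql', 'wb') as dst:
        dctx.copy_stream(src, dst)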
filename.sql is actually not a database either. An .sql file usually contains SQL statements for creating/modifying database structures. You'd have to install a database, e.g. MariaDB, and then import this .sql file (e.g. something like mysql -u user -p mydb < filename.sql) to actually have the data in a database on your computer. And then you would access this database via R.

Why would someone upload a binary file to a DB web server?

I'm doing a project on SQLNinja for school, and among the common types of attacks they discuss uploading binary files, as seen in the upload section here: http://sqlninja.sourceforge.net/sqlninja-howto.html#ss2.6
I assume this is done to gain access to the database or modify it, but I would basically like to know how. What would be uploaded to allow a user to proceed with an attack? Or why else would someone upload a file to a server in this way?
One of the biggest cases of identity theft occurred in 2007, when a hacker uploaded a binary to ATMs that recorded credit and debit card numbers. He accomplished the upload using SQL injection.
Read "The Great Cyberheist" (New York Times)
https://www.nytimes.com/2010/11/14/magazine/14Hacker-t.html
https://www.owasp.org/index.php/Unrestricted_File_Upload
File upload can enable attacks ranging from filling up disk space to making it easier to execute the uploaded binary later through other flaws in the system.

How do services like Dropbox implement delta encoding if their files are stored in the cloud?

Dropbox claims that during syncing only the portions of files that change are transmitted back to the main server, which is obviously great functionality, but how do they apply those changes to files stored in Amazon S3? For example, say a 30-page document on a user's desktop has changes only on page 4. Dropbox syncs the blocks representing the changes, but what happens on the backend if the files they store are in the cloud? Does that mean they have to download the 30-page document from S3 to their servers, replace the blocks representing page 4, and then upload the whole thing back to the cloud? I doubt this is the case, because that would be rather inefficient. The other option I could think of is that Amazon S3 supports updating a stored file by byte range, e.g. a PUT request to file X for bytes 100-200 that replaces those bytes with the body of the PUT request. So I was curious how companies that use cloud storage services such as Amazon's implement this type of syncing.
Thanks
As S3 and similar storages don't offer filesystem capabilities, anything that pretends to store files and directories needs to emulate a file system. When doing this, files are often split into pages of a certain size, with each page stored as a separate file in the storage. That way a changed block requires uploading only one page (for example), not the whole file.
I should note that with files like office documents this approach can be defeated if the file size changes: if you insert a page at the beginning or delete one, the whole file changes and the complete file has to be re-uploaded. We didn't analyze how Dropbox in particular does its job; I've just described the common scenario. There also exist various "patch algorithms", where a patch can be created locally (if Dropbox has an older local copy in its cache) and then applied to one or more blocks on the server.
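A minimal sketch of the paging idea described above, in Python, assuming a hypothetical page size and leaving out the actual S3 upload: only pages whose digest changed since the last sync would need to be re-uploaded.

    import hashlib

    PAGE_SIZE = 4 * 1024 * 1024  # hypothetical 4 MiB pages

    def page_digests(path):
        """Map page index -> SHA-256 digest of that page's bytes."""
        digests = {}
        with open(path, 'rb') as f:
            index = 0
            while True:
                page = f.read(PAGE_SIZE)
                if not page:
                    break
                digests[index] = hashlib.sha256(page).hexdigest()
                index += 1
        return digests

    def changed_pages(old, new):
        """Pages that must be (re)uploaded: new or modified ones."""
        return [i for i, d in new.items() if old.get(i) != d]

    # Usage sketch: compare the last-synced state with the current file,
    # then upload only the changed pages, e.g. under keys like
    # "myfile/page-00042". File names here are hypothetical.
    before = page_digests('document_v1.bin')
    after = page_digests('document_v2.bin')
    print('pages to upload:', changed_pages(before, after))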
There are several synchronization tools which transfer deltas over the wire, like rsync, rdiff, rdiff-backup, etc. For bi-directional synchronization with S3 there are paid services, s3rsync for example. For pure client-side synchronization, tools like zsync can be considered (which is what many people use to roll out app updates).
An alternative approach would be to tarball a directory, generate a delta file (using rdiff or xdelta3), and upload the delta file using a timestamp as part of the key. To sync, all you need to do is perform these two checks client-side (a minimal sketch follows the list):
1. You have all the delta files from S3. If not, pull them and apply them to generate the latest backup state.
2. Your last backup state corresponds to your current directory. If not, generate a new delta file and push it to S3.
The concerning factor here is the client-side space utilization of at least 100% extra. But this approach will let you revert changes if needed.
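A minimal sketch of those two checks, assuming the xdelta3 command-line tool is installed and using a local directory as a stand-in for the S3 bucket; all paths are hypothetical:

    import hashlib
    import os
    import subprocess
    import time

    STATE = 'backup_state.tar'    # last known synced tarball
    CURRENT = 'current.tar'       # freshly tarred working directory
    DELTA_DIR = 'deltas'          # stands in for the S3 bucket here

    def file_digest(path):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                h.update(chunk)
        return h.hexdigest()

    def apply_remote_deltas():
        """Check 1: replay delta files, oldest first (a real tool would
        track which deltas were already applied instead of replaying all)."""
        for name in sorted(os.listdir(DELTA_DIR)):
            patched = STATE + '.patched'
            subprocess.run(['xdelta3', '-d', '-s', STATE,
                            os.path.join(DELTA_DIR, name), patched], check=True)
            os.replace(patched, STATE)

    def push_local_delta():
        """Check 2: if the current tarball differs, create and 'upload' a delta."""
        if file_digest(STATE) == file_digest(CURRENT):
            return                # nothing changed since the last sync
        delta = os.path.join(DELTA_DIR, f'{int(time.time())}.xd3')  # timestamped key
        subprocess.run(['xdelta3', '-e', '-s', STATE, CURRENT, delta], check=True)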

How to create a fixed blocked (FB) file for IBM mainframe/FTP in VBA

I've got VBA code that generates a text file with some pretty basic information included. I then upload that file via FTP.
I got a message from the server admin of the IBM mainframe today that my file was in variable blocking (VB) format, while their job process uses fixed blocking (FB) up to a max size of 256.
How is this done? During file creation? With a third-party tool?
You can simply convert the VB file into FB on the mainframe before running the actual process. A VB-to-FB conversion is a small JCL step.
You can use LOCSITE to set the record format on the host dataset (file).
You can find the full list of FTP subcommands in the user guide below:
IP User's Guide and Commands, SC31-8780-05
Sorry all, I have a feeling I didn't explain this correctly, because I do now have an answer which is rather simple. These two commands seem to have set up the environment correctly for the file to be FB and not VB:
ftp> quote site lr=94
ftp> quote site rec=fb
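For scripting the same thing rather than typing at the ftp> prompt, a minimal sketch with Python's ftplib; the host, credentials, and dataset name are hypothetical, and the SITE parameters mirror the two commands above:

    from ftplib import FTP

    ftp = FTP('mainframe.example.com')     # hypothetical host
    ftp.login('myuser', 'mypassword')      # hypothetical credentials

    # Same effect as "quote site lr=94" and "quote site rec=fb":
    # the host allocates the target dataset as fixed-blocked, LRECL 94.
    ftp.sendcmd('SITE LRECL=94 RECFM=FB')

    # storlines sends the file line by line in ASCII mode, letting the
    # server handle the ASCII-to-EBCDIC translation.
    with open('output.txt', 'rb') as fh:
        ftp.storlines("STOR 'MYHLQ.MY.DATASET'", fh)
    ftp.quit()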
If I remember rightly, FB stores records in multiples of the block size; that is just how DASD stores files on disk. The data must fit within that multiple of the block size, which increases speed and throughput on the mainframe. If the data file doesn't fall on a multiple of the block size (this has nothing to do with the actual size of the data), the DASD system still accesses the file in blocks of, say, 256 bytes, and a host of special fields describing the blocking gets inserted into the data file when it is transferred to the mainframe and when that data is written to magnetic tape backups.
There should be a script available on the mainframe to convert it using JCL (Job Control Language); ask the mainframe administrator to do it for you.
By the way, be aware of the character set used in your data file: the mainframe uses the EBCDIC character set. There are plenty of tools out there that can convert ASCII data into a format readable by the mainframe; just something to bear in mind. If the data gets converted, that could affect the file size. Thought it was worth mentioning, as it's important!
There is also a Unix/Linux utility, dd, that can convert data to a fixed block size, although I don't think it would be the right way to do it here.
Here's a useful link that will help you understand this, and also a similar question here on SO from a user asking about MVS/TSO data.