Is Base64 part of cryptography? Does Base64 provide any security? Does Base64 encrypt and decrypt data?
No.
Well, to be really pedantic: if you are trying to hide content from a complete computer novice who doesn't want the content very badly, sure, it's plenty good. It's a bit better than the Caesar cipher.
No. Base64 is an encoding. It only translates data from one representation to another.
From Wikipedia:
Base64 is a group of similar encoding schemes that represent binary data in an ASCII string format by translating it into a radix-64 representation.
Base64 is simply a method of encoding data, nothing else. I suggest you read the Wikipedia article on Base64 encoding.
No, it isn't.
Base64 is an encoding scheme.
Encoding binary files in Base64 was, and is, a way to send files over email by representing the binary content as text.
From Wikipedia:
Base64 encoding schemes are commonly used when there is a need to encode binary data that needs to be stored and transferred over media that are designed to deal with textual data.
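To see that Base64 offers no secrecy, here is a minimal sketch using the standard java.util.Base64 class (JDK 8+): the transformation is fully reversible by anyone, with no key involved.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        String secret = "attack at dawn";

        // "Hiding" data with Base64 is just a change of representation...
        String encoded = Base64.getEncoder()
                .encodeToString(secret.getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded); // YXR0YWNrIGF0IGRhd24=

        // ...and anyone can undo it without any key or password.
        String decoded = new String(Base64.getDecoder().decode(encoded),
                StandardCharsets.UTF_8);
        System.out.println(decoded); // attack at dawn
    }
}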
What format does OpenSSL produce from the following call?
EC_POINT_point2oct(ecGroup, EC_KEY_get0_public_key(key), POINT_CONVERSION_COMPRESSED, _pub._key, sizeof(_pub._key), 0)
It wouldn't be anything high level like DER, PKCS*, or anything ASN.1. (Would it?) I'm guessing a raw BN containing an EC compressed point.
I'm curious as to whether this result is something that could be ported to other languages, e.g. Java using BouncyCastle's EC classes.
If you browse the source deep enough you will see statements such as these:
ret = (form == POINT_CONVERSION_COMPRESSED) ? 1 + field_len : 1 + 2*field_len;
so it should not apply any additional encoding, as you expected. It is easy enough to try, too, of course.
Returning a compressed point should not be too hard. Retrieving the value back is trickier and may get you into trouble regarding software patents.
It seems likely to just be in the ANSI X9.62 format, which is very standard, yes, being used e.g. to encode EC points in TLS handshakes.
In particular, BouncyCastle's EC classes can read them, supporting uncompressed, compressed, and hybrid encodings (basically everything). In lightweight API, if you have an instance of org.bouncycastle.math.ec.ECCurve, you can call ECCurve.decodePoint on the encoding to get back an ECPoint on the curve. ECPoint instances can then be used to create public/private keys.
If you are using BC (or most other providers, I expect) via the JCE, I'd be confident it's straightforward to decode them via that API too.
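As a rough sketch of the lightweight-API route (the curve name secp256r1 is an assumption; it must match whatever curve the OpenSSL side used):

import org.bouncycastle.jce.ECNamedCurveTable;
import org.bouncycastle.jce.spec.ECNamedCurveParameterSpec;
import org.bouncycastle.math.ec.ECCurve;
import org.bouncycastle.math.ec.ECPoint;

public class DecodePointSketch {
    // Decodes the bytes written by EC_POINT_point2oct(..., POINT_CONVERSION_COMPRESSED, ...).
    public static ECPoint decode(byte[] encodedPoint) {
        // Assumption: the OpenSSL key was generated on secp256r1.
        ECNamedCurveParameterSpec spec = ECNamedCurveTable.getParameterSpec("secp256r1");
        ECCurve curve = spec.getCurve();

        // decodePoint understands the X9.62 encodings: uncompressed (0x04),
        // compressed (0x02/0x03) and hybrid (0x06/0x07).
        return curve.decodePoint(encodedPoint);
    }
}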
I am wondering what the differences are between binary and text-based protocols.
I read that binary protocols are more compact and faster to process.
How does that work? Don't you have to send the same amount of data either way?
E.g. how would the string "hello" differ in size in a binary format?
If all you are doing is transmitting text, then yes, the difference between the two isn't very significant. But consider trying to transmit things like:
Numbers - do you use a string representation of a number, or the binary form? Especially for large numbers, the binary form will be more compact (see the sketch after this list).
Data Structures - how do you denote the beginning and end of a field in a text protocol? Sometimes a binary protocol with fixed-length fields is more compact.
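As a small sketch of the first point (class and variable names are just illustrative), the number 1000000000 takes 10 bytes as decimal text but only 4 bytes as a fixed-width binary int:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class NumberSizeDemo {
    public static void main(String[] args) throws IOException {
        int value = 1_000_000_000;

        // Text representation: one byte per decimal digit.
        byte[] asText = Integer.toString(value).getBytes(StandardCharsets.US_ASCII);

        // Binary representation: a fixed-width 4-byte big-endian int.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        new DataOutputStream(buffer).writeInt(value);
        byte[] asBinary = buffer.toByteArray();

        System.out.println("text: " + asText.length + " bytes");     // text: 10 bytes
        System.out.println("binary: " + asBinary.length + " bytes"); // binary: 4 bytes
    }
}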
Text protocols are better in terms of readability, ease of reimplementing, and ease of debugging. Binary protocols are more compact.
However, you can compress your text using a library like LZO or Zlib, and this is almost as compact as binary (with very little performance hit for compression/decompression); see the sketch after the link below.
You can read more info on the subject here:
http://www.faqs.org/docs/artu/ch05s01.html
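For the compression point, a minimal sketch using the JDK's built-in zlib support (the message content is just an illustration of repetitive protocol text):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;

public class CompressTextDemo {
    public static void main(String[] args) throws IOException {
        // Repetitive text compresses well with zlib (java.util.zip).
        String message = "status=OK;status=OK;status=OK;status=OK;status=OK;";
        byte[] raw = message.getBytes(StandardCharsets.UTF_8);

        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream deflater = new DeflaterOutputStream(compressed)) {
            deflater.write(raw);
        }

        System.out.println("raw: " + raw.length + " bytes");
        System.out.println("compressed: " + compressed.size() + " bytes");
    }
}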
Binary protocols are better if you are using control bits/bytes.
I.e. instead of sending msg:Hello,
in binary it can be 0x01 followed by your message (assuming 0x01 is a control byte that stands for msg).
So, since in a text protocol you send msg:Hello\0, it involves 10 bytes,
whereas in a binary protocol it would be 0x01Hello\0, which involves 7 bytes.
And another example: suppose you want to send the number 255. In text it's 3 bytes,
whereas in binary it's 1 byte, i.e. 0xFF.
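A minimal sketch of those byte counts (the 0x01 control byte is just the assumption made above, not part of any real protocol):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ControlByteDemo {
    public static void main(String[] args) throws IOException {
        // Text framing: "msg:Hello" plus a terminating NUL -> 10 bytes.
        byte[] text = "msg:Hello\0".getBytes(StandardCharsets.US_ASCII);

        // Binary framing: 0x01 control byte, the payload, then a NUL -> 7 bytes.
        ByteArrayOutputStream binary = new ByteArrayOutputStream();
        binary.write(0x01);
        binary.write("Hello".getBytes(StandardCharsets.US_ASCII));
        binary.write(0x00);

        System.out.println(text.length);                  // 10
        System.out.println(binary.toByteArray().length);  // 7

        // The number 255: three bytes as text, one byte as binary.
        System.out.println("255".getBytes(StandardCharsets.US_ASCII).length); // 3
        System.out.println(new byte[] { (byte) 0xFF }.length);                // 1
    }
}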
The string "hello" itself wouldn't differ in size. The size/performance difference is in the additional information that Serialization introduces (Serialization is how the program represents the data to be transferred so that it can be re-construted once it gets to the other end of the pipe).
For example, when serializing the following in .NET using XML (one of the text serialization methods):
string helloWorld = "Hello World!";
You might get something like (I know this isn't exact):
<helloWorld type="String">Hello World!</helloWorld>
Whereas Binary Serialization would be able to represent that data natively in binary without all the extra markup.
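To put rough numbers on that overhead, here is a hand-written sketch (the binary envelope is an invented one-byte type tag, not the actual .NET binary serializer format):

import java.nio.charset.StandardCharsets;

public class SerializationOverheadDemo {
    public static void main(String[] args) {
        String payload = "Hello World!";

        // A text (XML-style) envelope similar to the example above.
        String xml = "<helloWorld type=\"String\">" + payload + "</helloWorld>";

        // A simple binary envelope: one type-tag byte plus the raw UTF-8 payload.
        int binaryLength = 1 + payload.getBytes(StandardCharsets.UTF_8).length;

        System.out.println(xml.getBytes(StandardCharsets.UTF_8).length); // 51 bytes
        System.out.println(binaryLength);                                // 13 bytes
    }
}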
You need to be clear as to what is part of the protocol and what is part of the data.
Text protocols can send binary data and binary protocols can send text data.
The protocol is the part of the message that states "Hi, can I connect? I've got some data, where should I put it? You've got a reply for me? Great! Thanks, bye!"
Each bit of that conversation is (probably) much smaller in a binary protocol. Take HTTP, for example (which is text-based):
if you had an encoding standard, I bet you could come up with a sequence of characters smaller than the 4 bytes needed for the word 'POST'.
Some say that binary protocols are more secure, like, for example, Mike Hearn in What should follow the web?.
I wouldn't say that binary formats are faster to process. If you have a look at CSV or a fixed-field-length textual format, it can still be processed fast.
I would say everything depends on who the consumer is. If a human being is at the end (as with HTTP or RSS), then there is no need to compact the data much, except perhaps by compressing it.
Binary protocols need parsers/converters and are harder to extend while keeping backward compatibility. The higher you go in the protocol stack, the more human-oriented the protocols are (TCP is binary, because packets have to be processed by routers at high speed, but XML is more human-friendly).
I think size variations do not matter much today. For your example, "hello" will take the same amount of space in a binary format as in a text format, because the text format is also "binary" to the computer; only the way we interpret the data matters.
Could someone please explain the difference between encryption and encoding? In what scenarios should you use each, and why?
The main difference between the two is that encoding converts data into another well-known representation that is not a secret. For example, you might encode some text to Base64 so it can be stored or transmitted as plain characters; when you need it again, you simply decode it back to the original.
Encryption also transforms data, but it does so for security reasons, so that the data cannot be read without the right key.
Decryption should not be possible for everyone: only someone with the decryption key can recover the data.
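A minimal Java sketch of that distinction, using only standard JDK classes (AES-GCM is just one possible cipher choice): the Base64 step is reversible by anyone, while the decryption step fails without the key.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EncodingVsEncryption {
    public static void main(String[] args) throws Exception {
        byte[] data = "top secret".getBytes(StandardCharsets.UTF_8);

        // Encoding: reversible by anyone, no key involved.
        String encoded = Base64.getEncoder().encodeToString(data);
        byte[] decoded = Base64.getDecoder().decode(encoded);

        // Encryption: reversible only with the secret key.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(data);

        // Decryption requires the same key (and IV) again.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] plaintext = cipher.doFinal(ciphertext);
    }
}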
Though the two might seem similar, what you wish to accomplish is the main difference.
Where encryption is used to seal the contents of a file so no one else can read it, encoding is used for other purposes. Encryption mostly involves a password or passphrase of some sort.
For instance, if you compress a file with zip, you're encoding it. Anyone can decode it if they know the correct algorithm. However, if you protect the zip with a passphrase, it's encrypted as well.
Examples of encryption are:
SSL
Encrypted zip archives
...
Examples of encoding are:
Compression
Channel coding (adding extra bits to data sent across a channel so you can check whether the received data is correct, and possibly correct it if it wasn't)
...
I have some binary data (blobs) from a database, and I need to know what compression method was used on them to decompress them.
How do I determine which compression method was used?
Actually it is easier than that. Assuming one of the standard methods was used, there are probably some magic bytes at the beginning. I suggest taking the hex values of the first 3-4 bytes and searching for them on Google.
It makes no sense to develop your own compression, so unless the case was special, or the programmer was foolish, one of the well-known compression methods was used. You could also take libraries for the most popular ones and just try what they say.
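A small sketch of the magic-byte check (the signature list covers only a few common formats and is not exhaustive):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class MagicByteSniffer {
    public static String guessCompression(Path file) throws IOException {
        // Look at the first few bytes only (padded with zeros if the file is shorter).
        byte[] head = Arrays.copyOf(Files.readAllBytes(file), 4);

        if ((head[0] & 0xFF) == 0x1F && (head[1] & 0xFF) == 0x8B) return "gzip";
        if (head[0] == 'P' && head[1] == 'K') return "zip";
        if (head[0] == 'B' && head[1] == 'Z' && head[2] == 'h') return "bzip2";
        // zlib has no fixed magic, but 0x78 followed by 0x01/0x9C/0xDA is typical.
        if ((head[0] & 0xFF) == 0x78) return "zlib (probably)";
        return "unknown";
    }
}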
The only way to do this, in general, would be to store which compression method was used when you store the BLOB.
Starting from the blob in the database you can do the following:
Store the blob in a file.
For my use case I used DBeaver to export multiple blobs to separate files.
Find out more about the magic numbers in the file by running:
file -i filename
In my case the files are application/zlib; charset=binary.
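Once the data is known to be zlib-compressed, decompressing it in Java is straightforward; a sketch (the file names are placeholders):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.InflaterInputStream;

public class ZlibBlobDecompressor {
    public static void main(String[] args) throws IOException {
        Path in = Path.of("blob.bin");       // exported blob (placeholder name)
        Path out = Path.of("blob.decoded");  // decompressed output (placeholder name)

        // InflaterInputStream with the default Inflater expects the zlib wrapper,
        // i.e. the data reported by `file -i` as application/zlib.
        try (InputStream zlib = new InflaterInputStream(Files.newInputStream(in))) {
            Files.copy(zlib, out);
        }
    }
}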
In the Wikipedia article on block cipher modes they have a neat little diagram of an unencrypted image, the same image encrypted using ECB mode, and another version of the same image encrypted using another method.
At university I have developed my own implementation of DES (you can find it here) and we must demo our implementation in a presentation.
I would like to display a similar example as shown above using our implementation. However most image files have header blocks associated with them, which when encrypting the file with our implementation, also get encrypted. So when you go to open them in an image viewer, they are assumed to be corrupted and can't be viewed.
I was wondering if anybody new of a simple header-less image format which we could use to display these? Or if anyone had any idea's as to how the original creator of the images above achieved the above result?
Any help would be appreciated,
Thanks
Note: I realise rolling your own cryptography library is stupid, and DES is considered broken, and ECB mode is very flawed for any useful cryptography, this was purely an academic exercise for school. So please, no lectures, I know the drill.
If you are using a high-level language, like Java, python, etc, one thing you could do is load an image and read the pixel data into an array in memory. Then perform the encryption on those raw bytes, then save the image when you are done. Let all of the header data be handled by the libraries of whatever language you are using. In other words, don't treat the file as a raw sequence of bytes. Hope that helps.
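A sketch of that approach in Java (file names and the DES key are placeholders, and JCE's DES/ECB stands in for the hand-rolled implementation):

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;
import javax.imageio.ImageIO;

public class EcbImageDemo {
    public static void main(String[] args) throws Exception {
        // Let ImageIO deal with all header/format details.
        BufferedImage input = ImageIO.read(new File("input.png")); // placeholder name

        // Copy into a known layout: 3 bytes per pixel, no header in the buffer.
        BufferedImage rgb = new BufferedImage(input.getWidth(), input.getHeight(),
                BufferedImage.TYPE_3BYTE_BGR);
        rgb.getGraphics().drawImage(input, 0, 0, null);
        byte[] pixels = ((DataBufferByte) rgb.getRaster().getDataBuffer()).getData();

        // Encrypt only whole 8-byte DES blocks of the raw pixel data, in place.
        SecretKey key = SecretKeyFactory.getInstance("DES")
                .generateSecret(new DESKeySpec("8bytekey".getBytes(StandardCharsets.US_ASCII)));
        Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
        des.init(Cipher.ENCRYPT_MODE, key);
        int fullBlocks = (pixels.length / 8) * 8;
        byte[] encrypted = des.doFinal(pixels, 0, fullBlocks);
        System.arraycopy(encrypted, 0, pixels, 0, fullBlocks);

        // The saved file has a valid header but visibly ECB-patterned pixels.
        ImageIO.write(rgb, "png", new File("output.png")); // placeholder name
    }
}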
Just cut off the headers before you encrypt (save them somewhere). Then encrypt only the rest. Then add the headers in front of the result.
This is especially easy with the Netpbm formats, because you only have to know how many lines to cut off. In the plain variants the pixel data is stored as decimal numbers, so you should probably take that into account when encrypting (convert to the binary variants first).
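A sketch of the header-splitting approach for a binary P6 PPM (file names are placeholders; it assumes a minimal three-line header with no comment lines):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class PpmHeaderSplit {
    public static void main(String[] args) throws Exception {
        // Assumes a binary P6 file whose header is exactly three lines:
        // "P6", "width height", "maxval".
        byte[] ppm = Files.readAllBytes(Path.of("input.ppm")); // placeholder name

        // The pixel data starts right after the third newline.
        int newlines = 0, bodyStart = 0;
        while (newlines < 3) {
            if (ppm[bodyStart++] == '\n') newlines++;
        }
        byte[] header = Arrays.copyOfRange(ppm, 0, bodyStart);
        byte[] body = Arrays.copyOfRange(ppm, bodyStart, ppm.length);

        // ... encrypt `body` with your DES/ECB implementation here ...

        // Re-attach the untouched header so image viewers still open the file.
        byte[] out = new byte[header.length + body.length];
        System.arraycopy(header, 0, out, 0, header.length);
        System.arraycopy(body, 0, out, header.length, body.length);
        Files.write(Path.of("output.ppm"), out); // placeholder name
    }
}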