How to use nettle library, GCM mode - aes-gcm

I am using the Nettle cryptography library, but I cannot get GCM mode to work properly.
Here is how I am doing it.
What am I doing wrong?
#include <iostream>
#include <nettle/gcm.h>
#include <nettle/aes.h>

using namespace std;

int main()
{
    unsigned char key[] = "1234567890123456";
    unsigned char iv[] = "123456789012";
    unsigned char src[33] = "12345678901234567890123456789012";
    unsigned char encoded[32], digest[16], datum[8] = {0,}, decoded[32];

    struct gcm_key gk, gk2;
    struct gcm_ctx gc, gc2;
    struct aes128_ctx ac, ac2;

    aes128_set_encrypt_key(&ac, key);
    gcm_set_key(&gk, &ac, (nettle_cipher_func*)aes128_encrypt);
    gcm_set_iv(&gc, &gk, 12, iv);
    gcm_update(&gc, &gk, 8, datum);
    gcm_encrypt(&gc, &gk, &ac, (nettle_cipher_func*)aes128_encrypt, 32, encoded, src);
    gcm_digest(&gc, &gk, &ac, (nettle_cipher_func*)aes128_encrypt, 16, digest);

    aes128_set_decrypt_key(&ac2, key);
    gcm_set_key(&gk2, &ac2, (nettle_cipher_func*)aes128_decrypt);
    gcm_set_iv(&gc2, &gk2, 12, iv);
    gcm_update(&gc2, &gk2, 8, datum);
    gcm_decrypt(&gc2, &gk2, &ac2, (nettle_cipher_func*)aes128_decrypt, 32, decoded, encoded);
    gcm_digest(&gc2, &gk2, &ac2, (nettle_cipher_func*)aes128_decrypt, 16, digest);

    for(unsigned char c : src) cerr << hex << +c;
    cout << endl;
    for(uint8_t c : encoded) cerr << hex << +c;
    cout << endl;
    for(uint8_t c : decoded) cerr << hex << +c;
    cout << endl;
}
and the output is
31323334353637383930313233343536373839303132333435363738393031320
80435d9ceda763309ec12a876556f72c14641344ef19fbc5c9ca2f51ebeef
f064f9e8db7ae3466979c7b79de95ba6c50714023758ad9abd6eac24d6f565
The first line is the source and the last line is the decoded one.
They do not match. Because I am trying to make a templated wrapper class around
GCM, I cannot use the gcm_aes convenience functions.

GCM only uses the encrypt function of the underlying cipher, just like CTR mode. So you need to replace aes128_set_decrypt_key and aes128_decrypt with aes128_set_encrypt_key and aes128_encrypt everywhere they appear (one and three occurrences, respectively).
Your example works for me after that change.
To get proper authentication, you also need to compare the digest after decryption, preferably using memeql_sec.
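For reference, here is a minimal sketch of the corrected decrypt-and-verify side, reusing the buffers and names from the question (memeql_sec is declared in <nettle/memops.h>); treat it as an illustration rather than a drop-in replacement:
// Decryption side: set up the *encrypt* key schedule and pass aes128_encrypt,
// exactly as on the encryption side; GCM never calls the cipher's decrypt function.
aes128_set_encrypt_key(&ac2, key);
gcm_set_key(&gk2, &ac2, (nettle_cipher_func*)aes128_encrypt);
gcm_set_iv(&gc2, &gk2, 12, iv);
gcm_update(&gc2, &gk2, 8, datum);
gcm_decrypt(&gc2, &gk2, &ac2, (nettle_cipher_func*)aes128_encrypt, 32, decoded, encoded);

// Recompute the tag and compare it against the one produced during encryption.
unsigned char digest2[16];
gcm_digest(&gc2, &gk2, &ac2, (nettle_cipher_func*)aes128_encrypt, 16, digest2);
if (!memeql_sec(digest, digest2, 16))
    cerr << "authentication failed" << endl;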

Related

Crypto++ Raw RSA sign short message with a short key

I have a message with a size limit of 128 bits, which I want to sign so that I can be verified as the originator. I have only one requirement: the ciphertext must not exceed the 128-bit limit.
After some research on signing and verification, I decided to use raw RSA with a key length of 128 bits, because I did not find any other suitable algorithm that fulfils the requirement. Other algorithms always seem to have some overhead that makes the ciphertext larger, but maybe I missed the right ones (if so, a hint would be great).
I know that this is crackable in no time, but the main requirement is the short message.
At first I tried to encrypt the message with the public key. I wanted to use the high-level schemes that Crypto++ provides (e.g. CryptoPP::RSAES_OAEP_SHA_Encryptor), but they add overhead for digests and other data, so I used raw RSA to do the calculations directly.
My C++ code for encryption/decryption looks as follows:
// This is our message
std::string message = "0123456789ABCDEX";
CryptoPP::Integer m((const byte *)message.data(), message.size());
std::cout << "message ( " << std::dec << m.ByteCount() << " bytes) : " << std::hex << m << std::endl;

// Generate keys
Integer n("0x3a1a51415e596a0d3e261661a35a68f99");  // modulus
Integer e("0x11");                                  // public exponent
Integer d("0x15dfbe36ba1ba36848b5d3ad478bb011");    // private exponent

RSA::PrivateKey privKey;
RSA::PublicKey pubKey;
privKey.Initialize(n, e, d); // Used for decryption
pubKey.Initialize(n, e);     // Used for encryption

// Encrypt
CryptoPP::Integer c = pubKey.ApplyFunction(m); // generate cipher
std::cout << "cipher ( " << std::dec << c.ByteCount() << " bytes) : " << std::hex << c << std::endl;

// Decrypt
CryptoPP::Integer r = privKey.CalculateInverse(prng, c);
std::cout << "decrypted ( " << std::dec << r.ByteCount() << " bytes) : " << std::hex << r << std::endl;

if (r == m)
{
    std::cout << "Decryption successful" << std::endl;
}
else
{
    std::cout << "Decryption failed" << std::endl;
}
Output :
message ( 16 bytes) : 30313233343536373839414243444558h
cipher ( 16 bytes) : 2e294ff384751724c7dbbc31def66511h
decrypted ( 16 bytes) : 30313233343536373839414243444558h
Decryption successful
Now, in order to sign my message, I need to use the private key for "encryption" and the public key for "decryption" (in quotes, since it is technically not encryption). Under the above-mentioned link there is also a paragraph on how to perform private-key encryption. It says that one only has to swap the e (public exponent) and d (private exponent) parameters when creating the public and private keys, and then follow the same steps as for encryption:
Encode the message
ApplyFunction (with the private key)
CalculateInverse (with the public key)
But this does not work, since pubKey has no CalculateInverse method. So I tried something odd and now do the following:
Encode the message
CalculateInverse (with the private key)
ApplyFunction (with the public key)
Or, in code:
// This is our message
std::string message = "0123456789ABCDEX";
CryptoPP::Integer m((const byte *)message.data(), message.size());
std::cout << "message ( " << std::dec << m.ByteCount() << " bytes) : " << std::hex << m << std::endl;

// Generate keys
Integer n("0x3a1a51415e596a0d3e261661a35a68f99");  // modulus
Integer e("0x11");                                  // now private exponent
Integer d("0x15dfbe36ba1ba36848b5d3ad478bb011");    // now public exponent

RSA::PrivateKey privKey;
RSA::PublicKey pubKey;
privKey.Initialize(n, d, e); // Used for signing (d and e are swapped)
pubKey.Initialize(n, d);     // Used for verification

// Sign with the private key
CryptoPP::Integer c = privKey.CalculateInverse(prng, m);
std::cout << "cipher ( " << std::dec << c.ByteCount() << " bytes) : " << std::hex << c << std::endl;

// Verify with the public key
CryptoPP::Integer r = pubKey.ApplyFunction(c);
std::cout << "decrypted ( " << std::dec << r.ByteCount() << " bytes) : " << std::hex << r << std::endl;

if (r == m) { /* success */ } else { /* failed */ }
I am very uncertain whether this is the correct way of doing it. And why do I have to swap the exponents? It still works even if I do not swap them.

CGAL example cannot read input files?

This is my first Stack Overflow question, so I hope the following text meets the question requirements. If not, please tell me what needs to be changed so I can adapt the question.
I'm new to CGAL and C++ in general. I would like to use CGAL 5.0.2 on a Macbook Pro early 2015 with macOS Catalina Version 10.15.4.
So to begin with, I followed the instruction steps given by the CGAL documentation using the package manager Homebrew. Since CGAL is a header-only library I configured it using CMake, as is recommended by the documentation.
It all worked out fine, so I went on trying the recommended examples given in the file CGAL-5.0.2.tar.xz, which is provided here. I'm particularly interested in the example Voronoi_Diagram_2.
Using the Terminal, I executed the command cmake -DCGAL_DIR=$HOME/CGAL-5.0.2 -DCMAKE_BUILD_TYPE=Release . in the example folder called Voronoi_Diagram_2. Then I executed the command make. All went well and no error messages were printed, but executing the resulting binary did not produce any output.
After some research I managed to modify the code so that it prints the values of some variables. The problem seems to be that the input file, which contains the line segments for which the Voronoi diagram is to be calculated, is not read correctly.
The while loop that I highlighted in the code below with //// comment markers does not seem to be entered. That is why I assume the stream ifs is empty, even though the input file "data1.svd.cin", which can be found in the "data" folder of the example, is not.
Does anyone have an idea what causes this behaviour? Any help is appreciated.
This is the vd_2_point_location_sdg_linf.cpp file included in the example, which I modified:
// standard includes
#include <iostream>
#include <fstream>
#include <cassert>

// includes for defining the Voronoi diagram adaptor
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Segment_Delaunay_graph_Linf_filtered_traits_2.h>
#include <CGAL/Segment_Delaunay_graph_Linf_2.h>
#include <CGAL/Voronoi_diagram_2.h>
#include <CGAL/Segment_Delaunay_graph_adaptation_traits_2.h>
#include <CGAL/Segment_Delaunay_graph_adaptation_policies_2.h>

// typedefs for defining the adaptor
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Segment_Delaunay_graph_Linf_filtered_traits_2<K> Gt;
typedef CGAL::Segment_Delaunay_graph_Linf_2<Gt> DT;
typedef CGAL::Segment_Delaunay_graph_adaptation_traits_2<DT> AT;
typedef CGAL::Segment_Delaunay_graph_degeneracy_removal_policy_2<DT> AP;
typedef CGAL::Voronoi_diagram_2<DT,AT,AP> VD;

// typedef for the result type of the point location
typedef AT::Site_2                    Site_2;
typedef AT::Point_2                   Point_2;
typedef VD::Locate_result             Locate_result;
typedef VD::Vertex_handle             Vertex_handle;
typedef VD::Face_handle               Face_handle;
typedef VD::Halfedge_handle           Halfedge_handle;
typedef VD::Ccb_halfedge_circulator   Ccb_halfedge_circulator;

void print_endpoint(Halfedge_handle e, bool is_src) {
  std::cout << "\t";
  if ( is_src ) {
    if ( e->has_source() ) std::cout << e->source()->point() << std::endl;
    else std::cout << "point at infinity" << std::endl;
  } else {
    if ( e->has_target() ) std::cout << e->target()->point() << std::endl;
    else std::cout << "point at infinity" << std::endl;
  }
}

int main()
{
  std::ifstream ifs("data/data1.svd.cin");
  assert( ifs );

  VD vd;
  Site_2 t;

  // /////////// Inserted Comment ////////////////////////////////
  std::cout << "In the following the insertion from ifs should take place" << std::flush;

  // ///////////////// while loop which doesn't seem to be active //////////////////
  while ( ifs >> t ) {
    // Existing Code to insert the points in the voronoi structure
    vd.insert(t);
    // Inserted Code to check if while loop is entered
    std::cout << "Entered while loop" << std::flush;
  }
  // ///////////////////////////////////////////////////////////////////////////////
  ifs.close();

  assert( vd.is_valid() );

  std::ifstream ifq("data/queries1.svd.cin");
  assert( ifq );

  Point_2 p;
  while ( ifq >> p ) {
    std::cout << "Query point (" << p.x() << "," << p.y()
              << ") lies on a Voronoi " << std::flush;

    Locate_result lr = vd.locate(p);
    if ( Vertex_handle* v = boost::get<Vertex_handle>(&lr) ) {
      std::cout << "vertex." << std::endl;
      std::cout << "The Voronoi vertex is:" << std::endl;
      std::cout << "\t" << (*v)->point() << std::endl;
    } else if ( Halfedge_handle* e = boost::get<Halfedge_handle>(&lr) ) {
      std::cout << "edge." << std::endl;
      std::cout << "The source and target vertices "
                << "of the Voronoi edge are:" << std::endl;
      print_endpoint(*e, true);
      print_endpoint(*e, false);
    } else if ( Face_handle* f = boost::get<Face_handle>(&lr) ) {
      std::cout << "face." << std::endl;
      std::cout << "The vertices of the Voronoi face are"
                << " (in counterclockwise order):" << std::endl;
      Ccb_halfedge_circulator ec_start = (*f)->ccb();
      Ccb_halfedge_circulator ec = ec_start;
      do {
        print_endpoint(ec, false);
      } while ( ++ec != ec_start );
    }
    std::cout << std::endl;
  }
  ifq.close();

  return 0;
}

CGAL hole filling with color

I need to implement 3D hole filling with the CGAL library, with support for color.
Is it possible to do this without modifying the CGAL library? I need to fill the hole with the average color of the faces around the hole's boundary.
Regards, Ali
....
int main(int argc, char* argv[])
{
  const char* filename = (argc > 1) ? argv[1] : "data/mech-holes-shark.off";
  Mesh mesh;
  OpenMesh::IO::read_mesh(mesh, filename);

  // Incrementally fill the holes
  unsigned int nb_holes = 0;
  BOOST_FOREACH(halfedge_descriptor h, halfedges(mesh))
  {
    if(CGAL::is_border(h, mesh))
    {
      std::vector<face_descriptor> patch_facets;
      std::vector<vertex_descriptor> patch_vertices;
      bool success = CGAL::cpp11::get<0>(
        CGAL::Polygon_mesh_processing::triangulate_refine_and_fair_hole(
          mesh,
          h,
          std::back_inserter(patch_facets),
          std::back_inserter(patch_vertices),
          CGAL::Polygon_mesh_processing::parameters::vertex_point_map(get(CGAL::vertex_point, mesh)).
            geom_traits(Kernel())) );

      CGAL_assertion(CGAL::is_valid_polygon_mesh(mesh));

      std::cout << "* FILL HOLE NUMBER " << ++nb_holes << std::endl;
      std::cout << "  Number of facets in constructed patch: " << patch_facets.size() << std::endl;
      std::cout << "  Number of vertices in constructed patch: " << patch_vertices.size() << std::endl;
      std::cout << "  Is fairing successful: " << success << std::endl;
    }
  }
  CGAL_assertion(CGAL::is_valid_polygon_mesh(mesh));

  OpenMesh::IO::write_mesh(mesh, "filled_OM.off");
  return 0;
}
If you use CGAL::Surface_mesh as Mesh, you can use dynamic property maps to define attributes for your simplices, which allows you, for example, to define colors per face. The "standard" syntax for this is
mesh.add_property_map<face_descriptor, CGAL::Color>("f:color")
I think. There are examples in the documentation of Surface_mesh.
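A rough sketch of what that could look like, assuming Mesh is defined as a CGAL::Surface_mesh (rather than an OpenMesh type) and that patch_facets comes from the hole-filling loop above; patch_color is a placeholder for the averaged boundary color, which you would compute yourself:
#include <CGAL/Surface_mesh.h>
#include <CGAL/IO/Color.h>

typedef CGAL::Surface_mesh<Kernel::Point_3>        Mesh;
typedef boost::graph_traits<Mesh>::face_descriptor face_descriptor;

// Create (or fetch) a per-face color property map, defaulting to white.
Mesh::Property_map<face_descriptor, CGAL::Color> fcolor;
bool created;
boost::tie(fcolor, created) =
  mesh.add_property_map<face_descriptor, CGAL::Color>("f:color", CGAL::Color(255, 255, 255));

// Assign the chosen color to every face of the filled patch.
CGAL::Color patch_color(128, 128, 128);  // placeholder for the averaged boundary color
BOOST_FOREACH(face_descriptor f, patch_facets)
  fcolor[f] = patch_color;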

CGAL-4.8.1 Arrangements - Bezier Curves Save Arrangement to File Error

I am new to CGAL.
I tried to modify the Arrangement_on_surface_2 example Bezier_curves.cpp to save the arrangement to a file, as shown below:
//! \file examples/Arrangement_on_surface_2/Bezier_curves.cpp
// Constructing an arrangement of Bezier curves.

#include <fstream>
#include <CGAL/basic.h>

#ifndef CGAL_USE_CORE
#include <iostream>
int main ()
{
  std::cout << "Sorry, this example needs CORE ..." << std::endl;
  return 0;
}
#else

#include <CGAL/Cartesian.h>
#include <CGAL/CORE_algebraic_number_traits.h>
#include <CGAL/Arr_Bezier_curve_traits_2.h>
#include <CGAL/Arrangement_2.h>
#include <CGAL/IO/Arr_iostream.h>
#include "arr_inexact_construction_segments.h"
#include "arr_print.h"

typedef CGAL::CORE_algebraic_number_traits Nt_traits;
typedef Nt_traits::Rational                NT;
typedef Nt_traits::Rational                Rational;
typedef Nt_traits::Algebraic               Algebraic;
typedef CGAL::Cartesian<Rational>          Rat_kernel;
typedef CGAL::Cartesian<Algebraic>         Alg_kernel;
typedef Rat_kernel::Point_2                Rat_point_2;
typedef CGAL::Arr_Bezier_curve_traits_2<Rat_kernel, Alg_kernel, Nt_traits>
                                           Traits_2;
typedef Traits_2::Curve_2                  Bezier_curve_2;
typedef CGAL::Arrangement_2<Traits_2>      Arrangement_2;
//typedef CGAL::Arrangement_2<Traits_2>    Arrangement;

int main (int argc, char *argv[])
{
  // Get the name of the input file from the command line, or use the default
  // Bezier.dat file if no command-line parameters are given.
  const char *filename = (argc > 1) ? argv[1] : "Bezier.dat";
  const char *outfilename = (argc > 1) ? argv[1] : "BezierOut.dat";

  // Open the input file.
  std::ifstream in_file (filename);
  if (! in_file.is_open()) {
    std::cerr << "Failed to open " << filename << std::endl;
    return 1;
  }

  // Read the curves from the input file.
  unsigned int n_curves;
  std::list<Bezier_curve_2> curves;
  Bezier_curve_2 B;
  unsigned int k;

  in_file >> n_curves;
  for (k = 0; k < n_curves; k++) {
    // Read the current curve (specified by its control points).
    in_file >> B;
    curves.push_back (B);
    std::cout << "B = {" << B << "}" << std::endl;
  }
  in_file.close();

  // Construct the arrangement.
  Arrangement_2 arr;
  insert (arr, curves.begin(), curves.end());

  // Print the arrangement size.
  std::ofstream out_file;
  out_file.open(outfilename);
  out_file << "The arrangement size:" << std::endl
           << " V = " << arr.number_of_vertices()
           << ", E = " << arr.number_of_edges()
           << ", F = " << arr.number_of_faces() << std::endl;
  out_file << arr;
  out_file.close();

  return 0;
}

#endif
If I comment out the line out_file << arr; it works fine. Otherwise it generates a C2678 error in read_x_monotone_curve in Arr_text_formatter.h.
I am using Visual Studio 15 x86.
Thank you for any help.
I solved this by changing the print_arrangement(arr) routine in arr_print.h into a save_arrangement(arr) routine that writes to a std::ofstream instead of std::cout.
It appears that the << operator does not work.
If someone has a better solution, I am open to it.
Points of intersection in an arrangement of Bezier curves cannot be represented in an exact manner. Therefore, such an arrangement cannot be saved using the default export (<<) operator and the standard format.
The easiest solution is to store the curves themselves, but this means that the arrangement must be recomputed each time the curves are read back. Perhaps other solutions could be devised, but they are not implemented.
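A minimal sketch of that approach, using the types from the question: store the curves, then re-read them and rebuild the arrangement. It assumes the stream output of Curve_2 can be read back by its input operator, in the same way the example already reads Bezier.dat; if that does not hold, the control points would have to be written out explicitly.
// Persist the curves rather than the arrangement.
void save_curves(const std::list<Bezier_curve_2>& curves, const char* path)
{
  std::ofstream out(path);
  out << curves.size() << std::endl;
  for (std::list<Bezier_curve_2>::const_iterator it = curves.begin();
       it != curves.end(); ++it)
    out << *it << std::endl;
}

// Re-read the curves and recompute the arrangement from scratch.
void load_arrangement(Arrangement_2& arr, const char* path)
{
  std::ifstream in(path);
  unsigned int n_curves;
  in >> n_curves;

  std::list<Bezier_curve_2> curves;
  Bezier_curve_2 B;
  for (unsigned int k = 0; k < n_curves; ++k) {
    in >> B;
    curves.push_back(B);
  }
  insert(arr, curves.begin(), curves.end());
}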

Set padding in OpenSSL for AES_ecb_encrypt

I'm decrypting some Java-encrypted text with OpenSSL. Reading this post, I wrote the following code.
unsigned int i = 0;

printf("Out array - Before\n");
for(i = 0; i < sizeof(out); i++) {
    if(i % 32 == 0)
        printf("\n");
    printf("%02X", out[i]);
}
printf("\n");

AES_set_decrypt_key((const unsigned char *)a.at(3).c_str(), 128, &aesKey_);
for(i = 0; i < sizeof(bytes); i += AES_BLOCK_SIZE) {
    std::cout << "Decrypting at " << i << " of " << sizeof(bytes) << "\n";
    AES_ecb_encrypt(bytes + i, out + i, &aesKey_, AES_DECRYPT);
}

std::cout << "HEX : " << a.at(2).c_str() << "\n"
          << "Decrypting : " << bytes << "\n"
          << "With Key : " << a.at(3).c_str() << "\n"
          << "Becomes : " << out << "\n";

printf("Out array - AFTER\n");
for(i = 0; i < sizeof(out); i++) {
    if(i % 32 == 0)
        printf("\n");
    printf("%02X", out[i]);
}
printf("\n");
It appears to decrypt the data fine, though the PKCS#5 padding is decrypted along with the plaintext, plus some extra garbage (I'm assuming this is due to the padding).
Out array - BEFORE 0000000000000000000000000000000000000000000000000000000000000000
Decrypting at 0 of 18
Decrypting at 16 of 18
HEX : B00FE0383F2E3CBB95A5A28FA91923FA00
Decrypting : ��8?.<������#�
With Key : I'm a secret key
Becomes : no passwordHQ�EZ��-�=%.7�n
Out array - AFTER 6E6F2070617373776F72644851030303C7457F5ACCF12DAA053D252E3708846E
The above is the output from my code. no passwordHQ (6E6F2070617373776F72644851) is the expected output, but you can see that the padding is decoded as well (030303), followed by the garbage C7457F5ACCF12DAA053D252E3708846E.
So how do I set the padding in OpenSSL?
I expected there to be an AES_set_padding (or similar) function, but I'm obviously missing it in the documentation.
Try to use the higher-level functions of the EVP_* API instead. For those functions PKCS#7 padding is the default. Note that PKCS#5 padding is officially defined only for 8-byte block ciphers.
After some searching I found that evp.h should contain:
const EVP_CIPHER *EVP_aes_128_ecb(void);
which you should be able to use with
int EVP_EncryptInit(EVP_CIPHER_CTX *ctx, const EVP_CIPHER *type,
unsigned char *key, unsigned char *iv);
Additional information about the EVP functions suggests that they automatically apply the correct padding. The IV is of course ignored for ECB mode, so any pointer should do.
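As a rough sketch of what that looks like for this case (assuming a 16-byte key in key, the raw ciphertext in bytes with length inlen, and AES-128-ECB with PKCS#7 padding as in the question; error checking omitted):
#include <openssl/evp.h>

unsigned char plain[64];          /* large enough for the example data */
int len = 0, plain_len = 0;

EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
EVP_DecryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL); /* IV is ignored for ECB */
/* Padding is on by default; EVP_CIPHER_CTX_set_padding(ctx, 1) would make it explicit. */

EVP_DecryptUpdate(ctx, plain, &len, bytes, inlen);
plain_len = len;
EVP_DecryptFinal_ex(ctx, plain + len, &len);   /* verifies and strips the PKCS#7 padding */
plain_len += len;

EVP_CIPHER_CTX_free(ctx);
/* plain now holds plain_len bytes of unpadded plaintext ("no passwordHQ"). */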