I use a MAC (Message Authentication Code) from the Crypto++ library to secure a MANET.
The library provides a lot of MAC algorithms.
I found DMAC in the MAC list and I don't know what it means.
Best regards
Hajji
A bit hidden in the class description is a link:
CBC MAC for Real-Time Data Sources (08.15.1997) by Erez Petrank and Charles Rackoff
which then points to the paper, whose abstract reads:
We first present a variant of CBC MAC, called double MAC (DMAC) which handles messages of variable unknown lengths. Computing DMAC on a message is virtually as simple and as efficient as computing the standard CBC MAC on the message. We provide a rigorous proof that its security is implied by the security of the underlying block cipher. Next, we argue that the basic CBC MAC is secure when applied to prefix free message space. A message space can be made prefix free by authenticating also the (usually hidden) last character which marks the end of the message.
Double MAC consists of performing CBC-MAC followed by another CBC-MAC over the result, which should thwart length extension attacks on CBC-MAC.
I haven't heard of it before; I guess everybody uses CMAC instead nowadays. CMAC is more efficient: it needs only one additional block cipher call to derive its subkeys, and that call happens up front during key setup rather than once per message.
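For completeness, here is a minimal sketch of computing a MAC tag with Crypto++. It uses CMAC<AES> from cmac.h; if I remember the library correctly, DMAC<AES> from dmac.h drops into the same pipeline, since both are MessageAuthenticationCode implementations. The message text and key handling are placeholders, not anything specific to your MANET code.

#include <cryptopp/aes.h>
#include <cryptopp/cmac.h>
#include <cryptopp/filters.h>
#include <cryptopp/hex.h>
#include <cryptopp/osrng.h>
#include <cryptopp/secblock.h>
#include <iostream>
#include <string>

int main()
{
    using namespace CryptoPP;

    // Random 128-bit key; in a real deployment this would be the shared key.
    AutoSeededRandomPool prng;
    SecByteBlock key(AES::DEFAULT_KEYLENGTH);
    prng.GenerateBlock(key, key.size());

    std::string message = "routing update to authenticate";   // placeholder payload
    std::string mac, encoded;

    // Compute the tag by pumping the message through a HashFilter.
    CMAC<AES> cmac(key, key.size());
    StringSource ss1(message, true, new HashFilter(cmac, new StringSink(mac)));

    // Hex-encode the tag for display.
    StringSource ss2(mac, true, new HexEncoder(new StringSink(encoded)));
    std::cout << "tag: " << encoded << std::endl;
    return 0;
}

Verification is the same pipeline with a HashVerificationFilter in place of the HashFilter.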
I know a way to bypass filter mode, but I don't know how to bypass strict mode.
The 64-bit binary does the following:
1 Reads 1024 bytes into a buffer buf mapped rwxp.
2 Runs buf().
3 Reads an address and a value with scanf, then writes the value (long) to that address (long).
It has only a stack canary and partial RELRO.
Given this, how can I bypass strict-mode seccomp?
If you find a way, you'll get a CVE!
In seriousness, it is possible to find a way around seccomp indirectly: if the handler thread listening for messages from the seccomp-ed jail thread has, say, a length-check bug, you could exploit the handler thread (which presumably isn't as tightly seccomp-ed as the jail thread) and then follow a normal exploit chain.
However, in the general case, people put seccomp on a process precisely because it's doing untrusted work. As a result it's unlikely that code execution in a jailed thread will allow privilege escalation, simply because it's unlikely the rest of the code trusts inputs from the jailed thread!
Bypassing seccomp directly would be really hard: you'd have to find a kernel vulnerability in one of the allowed system calls, or a processor-level vulnerability. In strict mode, where only read(), write(), _exit() and sigreturn() are permitted, this is normally considered intractable.
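To make the "allowed system calls" point concrete, here is a small illustration (not exploit code; the messages are placeholders) of what strict mode leaves a thread with:

#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    std::printf("entering SECCOMP_MODE_STRICT...\n");
    std::fflush(stdout);                 // flush while arbitrary syscalls still work

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        std::perror("prctl");
        return 1;
    }

    // read(), write(), _exit() and sigreturn() remain available...
    const char ok[] = "write() still works\n";
    write(STDOUT_FILENO, ok, sizeof(ok) - 1);

    // ...but the very next other syscall gets the thread SIGKILLed by the kernel.
    syscall(SYS_getpid);
    write(STDOUT_FILENO, "never reached\n", 14);
    _exit(0);    // exit_group() (what a plain return would use) isn't on the whitelist either
}

Once in strict mode there is nothing left to remap memory, open files, or install a new filter, which is why the practical attack surface shrinks to bugs on the other end of the read()/write() channel, or to kernel/hardware bugs reachable through those four calls.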
I am using a keygen application (.exe). There are two input fields in its GUI:
p1 - at least 1 digit, 10 digits max - ^[0-9]{1,10}$
p2 - 12 chars max - uppercase letters/digits/underscores - ^[A-Z0-9_]{0,12}$
Pressing the generate button produces a key x.
x - 20 digits exactly - ^[0-9]{20}$
For each pair (p1,p2), there is only one x (in other words: f(p1,p2) = x is a function)
I am interested in its encryption algorithm.
Is there any way of reverse engineering the algorithm?
I thought of two ways:
Decompiling. I used Snowman, but the output is too polluted; the decompiled code probably contains irrelevant parts, such as the GUI.
Analyzing input and output. I wonder if there is any way to determine the algorithm used by analyzing a set of f(p1,p2) = x results.
As you mentioned, using Snowman or some other decompiling tool is probably the way to go.
I doubt you would be able to determine the algorithm just by looking at input/output combinations, since the keygen could implement any arbitrary algorithm that behaves in any way its author chose.
Perhaps you could just ask the author what algorithm they're using?
Unless it's something really simple, I'd rule out your option 2 of trying to figure it out by looking at input and output pairs.
For decompiling / reverse engineering a static binary, you should first determine whether it's a .NET application or something else. If it's written in .NET you can try this for decompilation:
https://www.jetbrains.com/decompiler/
It's really easy to use, unless the binary has been obfuscated.
If the application is not a .NET application, you can try Ghidra and/or Cutter, which both have pretty impressive decompilers built in:
https://ghidra-sre.org/
https://cutter.re/
If static code analysis is not enough, you can add a debugger to it. Ghidra and x64dbg work really well together, and can be synced via a plugin installed in both.
If you're new to this, I recommend looking into basic x86 assembly so you have a general idea of how the CPU works. Another way to get started is "crackme"-style challenges from CTF competitions; there are often great write-ups with the solution, so you have both the question and the answer available.
Good luck!
Type in p1 and p2, scan the process memory for those byte strings, then put a hardware breakpoint on memory access on them. Generate the key and the breakpoint will hit. Now you have the address of the code that accesses the input and can start reversing from there in Ghidra (don't forget the BASE + OFFSET translation, since Ghidra's output won't have the same base as the running application). The relevant code has to access the inputs, either directly or somewhere fairly close in that call chain, so this points you at the algorithm. Beyond that, nobody can say more without actually seeing the executable.
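For the BASE + OFFSET step, the translation is just this arithmetic; the three values below are made-up examples, not taken from any real binary:

#include <cstdint>
#include <iostream>

int main()
{
    // Take the real values from the debugger's module list and from Ghidra's Memory Map window.
    std::uint64_t runtime_module_base = 0x00007ff6a23c0000ULL; // base of the EXE in x64dbg
    std::uint64_t runtime_hit_address = 0x00007ff6a23c51b0ULL; // where the hardware breakpoint fired
    std::uint64_t ghidra_image_base   = 0x0000000140000000ULL; // image base Ghidra shows for the PE

    std::uint64_t offset      = runtime_hit_address - runtime_module_base;
    std::uint64_t ghidra_addr = ghidra_image_base + offset;

    std::cout << std::hex << "offset 0x" << offset
              << " -> Ghidra address 0x" << ghidra_addr << "\n";
    return 0;
}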
I want sample code for sending a message to a server and getting the response back on a Verifone Vx520 terminal using ISO 8583.
As noted in a comment on your question, this is not a code sharing site, so such an open-ended question is a bit difficult to answer, but perhaps I can get you started on the right foot.
First of all, let me start by suggesting that if you have control over both the terminal code and the server it will be talking to, you NOT use ISO8583. Yes, it's an industry standard and yes, it communicates data efficiently, BUT it is much more difficult to use than, say, VISA-1, XML, or JSON. That means you have more opportunities for bugs to creep into your code. It also means that if something goes wrong, it takes a lot more effort to figure out what happened and fix it. I have used all these protocols and others besides, and I'll tell you that ISO8583 is one of my least favorites to work with.
Assuming you do not have a choice and must use ISO8583, it's worth noting that ISO8583 is nothing but a specification for how to assemble data packets in order to communicate. There is nothing special about the Vx520 terminal (or any other VeriFone terminal) that would change how you would implement it versus how you might do so on any other C++ platform, EXCEPT that VeriFone DOES provide you with a library for working with this spec that you are free to use or ignore as you see fit.
You don't need to use this library at all. You can roll your own and be just fine. You can find more information on the specification itself at Wikipedia, Code Project, and several other places (just ask your favorite search engine). Note that when I did my 8583 project, this library was not available to me. Perhaps I wouldn't have hated this protocol so much if I had had access to it... who knows?
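To give you an idea of how simple rolling your own can be for the basics, here is a hedged sketch (not VeriFone code; the field choices and the ASCII hex bitmap are just examples) of assembling the front of an ISO 8583 message: the MTI, the primary bitmap, and the data elements in bit order. A real implementation must follow your host's interface spec exactly (packed vs. ASCII bitmap, LLVAR lengths, and so on).

#include <bitset>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::bitset<64> bitmap;                              // bit 1 = leftmost bit of the map
    auto set_bit = [&](int de) { bitmap.set(64 - de); };

    std::string mti  = "0200";         // financial request
    std::string de3  = "000000";       // processing code (n6)
    std::string de11 = "000001";       // system trace audit number (n6)
    std::string de41 = "TERM0001";     // terminal ID (ans8)
    set_bit(3); set_bit(11); set_bit(41);

    // Hex-encode the bitmap (ASCII variant; many hosts use packed binary instead).
    std::ostringstream bmp;
    for (int i = 0; i < 8; ++i) {
        unsigned byte = static_cast<unsigned>((bitmap.to_ullong() >> (56 - 8 * i)) & 0xFF);
        bmp << std::uppercase << std::hex << std::setw(2) << std::setfill('0') << byte;
    }

    std::string packet = mti + bmp.str() + de3 + de11 + de41;
    std::cout << packet << "\n";       // prints 02002020000000800000000000000001TERM0001
    return 0;
}

Disassembly is the mirror image: read the MTI, read the bitmap, then walk the set bits and consume each field according to its definition. That is exactly the table-driven job the VeriFone engine described below automates for you.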
If you are still reading this, then I'll assume that ISO8583 is a requirement (or you are a glutton for punishment) and that you are interested in trying out this engine that VeriFone has provided.
The first thing you will need to do (and hopefully, you have already done it) is to install ACT as part of the development suite (I also suggest you head over to DevNet and get the latest version of ACT before you get started...). Once installed, the library header can be found at %evoact%\include\iso8583.h. Documentation on how to use it can be found at %evoact%\docs. In particular, see chapter 6 of DOC00310_Verix_eVo_ACT_Programmers_Guide.pdf.
Obviously, trying to include a whole chapter's worth of information here would be out of scope, but to give you a high-level idea of how the engine works, allow me to share a couple excerpts:
This engine is designed to be table driven. A single routine is used for the assembly and disassembly of ISO 8583 packets. The assembly and disassembly of ISO 8583 packets is driven by the following structures:
Maps: one or more collections of 64 bits that drive packet assembly and indicate what is in a message.
Field table: defines all the fields used by the application.
Convert table: defines data-conversion routines.
Variant tables: optional tables used to define variant fields.
The process_8583() routine is used for the assembly and disassembly of ISO 8583 packets.
An example of using process_8583() from the documentation follows:
#include "appl8583.h"
int packet_sz;
void assemble_packet ()
{
packet_sz = process_8583 (0, field_table, test_map, buffer, sizeof( buffer));
printf ("\ fOUTPUT SIZE %d", packet_sz);
}
void disassemble_packet ()
{
packet_sz = process_8583 (1, field_table, test_map, buffer, packet_sz);
printf ("\ fINPUT NOT PROCESSED %d", packet_sz);
}
To incorporate this engine into an application, modify the APPL8583.C
and APPL8583.H files so that each has all the application variables
required in the bit map and set up the map properly. Compile
APPL8583.C and link it with your application and the ISO 8583 library.
Use the following procedures to transmit or receive an ISO 8583 packet
using the ISO 8583 Interface Engine:
To transmit an ISO 8583 packet
1 Set data values in the application variables for those to transmit.
2 Call the prot8583_main() routine. This constructs the complete
message and returns the number of bytes in the constructed message.
3 Call write() to transmit the message.
To receive a message
1 Call read() to receive the message.
2 Call the process_8583() routine. This results in all fields being
deposited into the application variables.
3 Use the values in the application variables.
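None of the framing itself is VeriFone-specific, by the way. To make the write() and read() steps concrete: many hosts expect each ISO 8583 message to be prefixed with a two-byte, big-endian length header, though you must check your host's spec. Here is a rough, generic sketch of that framing over an ordinary descriptor; the descriptor setup, error handling and buffer contents are placeholders, and a production version would also loop on short writes:

#include <cstdint>
#include <cstring>
#include <unistd.h>

// Send one ISO 8583 message with a 2-byte big-endian length prefix.
// 'fd' is whatever connected descriptor your comms layer gives you.
ssize_t send_iso8583(int fd, const unsigned char *msg, std::uint16_t len)
{
    unsigned char frame[2 + 0xFFFF];
    frame[0] = static_cast<unsigned char>(len >> 8);     // high byte first
    frame[1] = static_cast<unsigned char>(len & 0xFF);
    std::memcpy(frame + 2, msg, len);
    return write(fd, frame, 2u + len);
}

// Receive one framed response: read the 2-byte header, then exactly that many bytes.
ssize_t recv_iso8583(int fd, unsigned char *buf, std::uint16_t bufsz)
{
    unsigned char hdr[2];
    if (read(fd, hdr, 2) != 2) return -1;
    std::uint16_t len = static_cast<std::uint16_t>((hdr[0] << 8) | hdr[1]);
    if (len > bufsz) return -1;

    std::uint16_t got = 0;
    while (got < len) {                                  // read() may return short counts
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0) return -1;
        got += static_cast<std::uint16_t>(n);
    }
    return got;
}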
I'm experimenting with SSL in Erlang, and I've run into a problem.
The device which I'm talking to requires me to set the max send fragment size. In OpenSSL, this would be done with SSL_CTX_ctrl(ctx, SSL_CTRL_SET_MAX_SEND_FRAGMENT, ...).
Is there any way to do this in Erlang?
Erlang does not rely on OpenSSL for its SSL protocol implementation: the ssl application implements TLS in Erlang and only uses OpenSSL (via the crypto application) for the cryptographic primitives, so there is no direct equivalent of the SSL_CTX_ctrl options.
Unfortunately, it also does not currently offer an option to limit the send fragment size, nor RFC 6066's maximum fragment length negotiation. It simply fragments at 16 KB (2^14), the maximum fragment size defined in RFC 2246.
The code that splits fragments is in ssl_record:encode_data/3. Supporting an option like OpenSSL's looks trivial to implement, and RFC 6066 negotiation does not seem hard either; you would probably just need to extend the connection_state record. Please do not hesitate to submit a patch.
I'm looking for a little advice on "hacking" Mono (and in fact, .NET too).
Context: As part of the Isis2 library (Isis2.codeplex.com) I want to support very fast "zero copy" replication of memory-mapped files on machines that have the right sort of hardware (InfiniBand NICs), and minimal copying for more standard Ethernet with UDP. So the setup is this: we have a set of processes {A,B....} all linked to Isis2, and some member, maybe A, has a big memory-mapped file, call it F, and asks Isis2 to replicate F onto B, D, G, and X. The library will do this very efficiently and very rapidly, even with heavy use by many concurrent initiators. The idea would be to offer this to HPC and cloud developers who are running big-data applications.
Now, Isis2 is coded in C# on .NET and cross-compiles to Linux via Mono. Both .NET and Mono are managed, so neither wants to let me do zero-copy network I/O -- the normal model would be "copy your data into a managed byte[] object, then use SendTo or SendAsync to send. To receive, same deal: Receive or ReceiveAsync into a byte[] object, then copy to the target location in the file." This will be slower than what the hardware can sustain.
Turns out that on .NET I can hack around the normal memory protections. I built my own mapped-file wrapper (in fact based on one posted years ago by a researcher at Columbia). I pull in kernel32.dll and then use Win32 methods to map my file, initiate the socket Send and Receive calls, etc. With a bit of hacking I can mimic .NET asynchronous I/O this way, and I end up with something fairly clean and coded entirely in C# with nothing .NET even recognizes as unsafe code. I get to treat my mapped file as a big unmanaged byte array, avoiding all that unneeded copying. Obviously I'll protect all of this from my Isis2 users; they won't know.
Now we get to the crux of my question: on Linux, I obviously can't load the Win32 kernel DLL since it doesn't exist. So I need to implement some basic functionality using core Linux O/S calls: the mmap() call will map my file. Linux has its own form of asynchronous I/O too: for InfiniBand, I'll use the Verbs library from Mellanox, and for UDP, I'll work with raw IP sends and signals ("interrupts") on completion. Ugly, but I can get this to work, I think. Again, I'll then try to wrap all this to look as much like standard Windows asynchronous I/O as possible, for code cleanness in Isis2 itself, and I'll hide the whole unmanaged, unsafe mess from end users.
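Roughly, I expect the Linux side of the file mapping itself to look something like this minimal sketch (plain POSIX, no Verbs or async plumbing yet; the path and flags are placeholders I'd refine):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "bigfile.dat";

    int fd = open(path, O_RDWR);
    if (fd < 0) { std::perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }

    // Map the whole file; the managed layer would then hand out offsets into
    // 'base' instead of copying into managed byte[] buffers.
    void *base = mmap(nullptr, static_cast<size_t>(st.st_size),
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::printf("mapped %lld bytes at %p\n", static_cast<long long>(st.st_size), base);

    munmap(base, static_cast<size_t>(st.st_size));
    close(fd);
    return 0;
}

The hard part is then handing pointers into that mapping to the Verbs or UDP send path without the managed runtime copying them, which is the heart of my question.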
Since I'll be sending a gigabyte or so at a time, in chunks, one key goal is that data sent in order would ideally be received in the order I post my async receives. Obviously I do have to worry about unreliable communication (things can get dropped, in which case I'll have to copy). But if nothing is dropped I want the n'th chunk I send to end up in the n'th receive region...
So here's my question: Has anyone already done this? Does anyone have any tips on how Mono implements the asynchronous I/O calls that .NET uses so heavily? I should presumably do it the same way. And does anyone have any advice on how to do this with minimal pain?
One more question: Win32 is limited to 2 GB of mapped files. Cloud systems would often run Win64. Any suggestions on how to maximize interoperability while allowing full use of Win64 for those who are running it? (A kind of O/S reflection issue...)