How to slow down program execution - executable

I have a simple executable binary. It takes a user-supplied string as input and compares it with a private string using strcmp. How can I slow down the execution of this program so that I can launch a statistical timing attack on the string comparison? Currently the timing difference from strcmp's early exit is too slight to detect.
Assume I have local privileges, the binary is owned by another user, and the system is ulimit protected against fork bombs.
While I get that I could use the strings command or reverse engineering to get the private string, this is intended as a POC for the feasibility of timing attacks on compiled programs on modern systems.
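The statistical side of such a POC can be sketched as follows. This is a minimal illustration, not a working exploit: it assumes a hypothetical target binary that reads the guess from stdin, and it simply takes the median of many timings per guess to damp scheduler noise.

```python
import statistics
import subprocess
import time

def time_guess(binary, guess, samples=200):
    """Run the target binary many times with one guess and return the
    median wall-clock time; the median damps scheduler noise."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        subprocess.run([binary], input=guess.encode(),
                       stdout=subprocess.DEVNULL, check=False)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def best_next_char(binary, prefix, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Pick the candidate whose guess takes longest to reject: a slower
    comparison suggests strcmp matched one byte further."""
    return max(alphabet, key=lambda c: time_guess(binary, prefix + c))
```

Whether the signal survives at all depends on exactly the question asked above: per-character timing differences in strcmp are on the order of nanoseconds, far below process start-up jitter, which is why slowing the target down (or sampling enormously more) is needed.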

Related

How to communicate between Labview and DM software

Hello, I need to use DM software to analyse a txt file and get numbers. Each number is sent to a Labview program which controls the moving stage. Labview then signals that it's done, and DM takes a picture and saves the files. How can this be done? I found a few samples with DM script. Please give me a direction. Thanks.
If speed is no issue, you could make Labview save an empty .txt file. Your DM software could check whether the file exists and take a picture as soon as it does. Of course better, faster/safer methods exist, but I don't know how versatile your DM software is. A virtual COM port, for example, or ActiveX; there are many options to make software communicate with each other.
There are not a lot of 'outward' or 'inward' communication possibilities in current DigitalMicrograph, and some options are only available in later GMS versions.
I also don't know the options Labview has, so you will need to find out what works and what doesn't. Suggestions are:
If you are using GMS 2.3 or later, you can use the command LaunchExternalProcess() to start any routine from within DigitalMicrograph the same way you would do from the command prompt.
If Labview allows some functionality to be triggered by being called with parameters from the command prompt, this might be the easiest option. The DM-script will continue either when the launched process is finished or after a specified time-out.
If you are using GMS 3.1 or later, you can do the opposite and have an outside program call DigitalMicrograph.exe with a command-line parameter to trigger the start of a DM-script.
Essentially, this is the reverse of the first suggestion. Labview would need to "call" DigitalMicrograph whenever it wants the next action performed. I do not know Labview enough to judge if this is a possibility or not.
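The launch-and-wait behaviour of the first suggestion can be sketched generically (here in Python rather than DM-script, purely for illustration; the helper name and flag in the comment are made up):

```python
import subprocess

def launch_and_wait(command, args, timeout_s=30):
    """Start an external program and block until it exits or the
    time-out expires, mirroring LaunchExternalProcess() semantics.
    Returns the exit code, or None if the time-out hit first."""
    try:
        result = subprocess.run([command, *args], timeout=timeout_s)
        return result.returncode
    except subprocess.TimeoutExpired:
        return None

# Hypothetical: tell a Labview-side helper to move the stage one step.
# launch_and_wait("labview_runner.exe", ["--move-stage", "1"])
```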
There are script commands for serial communication via the COM port (RS232) provided your installation has the SerialControl.dll in the plugin-folder.
If Labview supports this, you may be able to establish the inter-program communication using this. The serial communication script calls are not officially supported, but the commands are rather self-explanatory:
Number SPOpen( Number port, Number baud, Number stop, Number parity, Number data )
Number SPOpen( String prefix )
void SPClose( Number serialPortL )
Number SPSendString( Number serialPortL, String string )
Number SPSendHex( Number serialPortL, String string )
void SPFlushInput( Number serialPortL )
Number SPGetPendingBytes( Number serialPortL )
Number SPGetTime( )
String SPReceiveString( Number serialPortL, Number maxLength, NumberVariable actual )
String SPReceiveHexString( Number serialPortL, Number maxLength, NumberVariable actual )
void SPSetRTS( Number serialPortL, Boolean on )
void SPSetDTR( Number serialPortL, Boolean on )
You can also establish 'communication' with a workaround as suggested by Gelliant in his answer. A DM-script can 'monitor' a specific folder on the harddrive and trigger some action whenever a (specific) file in this folder gets created or modified.
If Labview is capable of something similar, this "write-to-disk" and "watch-for-change" method can be used to keep the two programs synchronized with each other.
If Labview does not support this directly, you may be able to achieve a similar "hacked" synchronization by using a third-party scripting language for the general system. I've personally used a program called AutoIt in the past to synchronize otherwise incompatible software to control hardware.
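The watch-for-change workaround boils down to two tiny primitives, sketched here in Python for illustration (the trigger-file path is whatever both programs agree on; polling interval and time-out are arbitrary):

```python
import os
import time

def wait_for_trigger(path, poll_s=0.5, timeout_s=60.0):
    """Poll until the trigger file appears, then delete it so the
    next round starts clean. Returns True if seen, False on time-out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            os.remove(path)
            return True
        time.sleep(poll_s)
    return False

def signal_done(path):
    """The other side drops an empty file to say 'your turn'."""
    with open(path, "w"):
        pass
```

One program calls signal_done() after each stage move or acquisition; the other sits in wait_for_trigger(). Deleting the file inside the waiter is what prevents one signal from being consumed twice.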
If you know C++ programming, you may get the "Software Development Kit (SDK)" for DigitalMicrograph and create your own Labview-communication plugin for DigitalMicrograph.
This option is of course the most versatile, as you're only limited by what you can achieve with your own C++ code. The disadvantage is that you might need to recompile the plugin DLL for different versions of DigitalMicrograph.

Hacking Mono to support async I/O on memory-mapped files

I'm looking for a little advice on "hacking" Mono (and in fact, .NET too).
Context: As part of the Isis2 library (Isis2.codeplex.com) I want to support very fast "zero copy" replication of memory-mapped files on machines that have the right sort of hardware (Infiniband NICs), and minimal copying for more standard Ethernet with UDP. So the setup is this: we have a set of processes {A,B,...} all linked to Isis2, and some member, maybe A, has a big memory-mapped file, call it F, and asks Isis2 to replicate F onto B, D, G, and X. The library will do this very efficiently and very rapidly, even with heavy use by many concurrent initiators. The idea would be to offer this to HPC and cloud developers who are running big-data applications.
Now, Isis2 is coded in C# on .NET and cross-compiles to Linux via Mono. Both .NET and Mono are managed, so neither wants to let me do zero-copy network I/O -- the normal model would be "copy your data into a managed byte[] object, then use SendTo or SendAsync to send. To receive, same deal: Receive or ReceiveAsync into a byte[] object, then copy to the target location in the file." This will be slower than what the hardware can sustain.
Turns out that on .NET I can hack around the normal memory protections. I built my own mapped-file wrapper (in fact based on one posted years ago by a researcher at Columbia). I pull in the kernel32.dll library, and then use Win32 methods to map my file, initiate the socket Send and Receive calls, etc. With a bit of hacking I can mimic .NET asynchronous I/O this way, and I end up with something fairly clean and coded entirely in C#, with nothing .NET even recognizes as unsafe code. I get to treat my mapped file as a big unmanaged byte array, avoiding all that unneeded copying. Obviously I'll protect all of this from my Isis2 users; they won't know.
Now we get to the crux of my question: on Linux, I obviously can't load the Win32 kernel DLL since it doesn't exist. So I need to implement some basic functionality using core Linux OS calls: the mmap() call will map my file. Linux has its own form of asynchronous I/O too: for Infiniband, I'll use the Verbs library from Mellanox, and for UDP, I'll work with raw IP sends and signals ("interrupts") on completion. Ugly, but I think I can get this to work. Again, I'll then try to wrap all this to look as much like standard Windows async I/O as possible, for code cleanness in Isis2 itself, and I'll hide the whole unmanaged, unsafe mess from end users.
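The Linux mapping step itself is the easy part; as a rough sketch (in Python rather than C#/Mono, just to show the shape of the mmap(2) call), mapping a file shared and read-write gives a view into the page cache with no user-space copy:

```python
import mmap
import os

def map_file(path):
    """Map a whole file read-write and shared; slices of the returned
    mmap are views into the page cache, so no user-space copy is made."""
    fd = os.open(path, os.O_RDWR)
    try:
        size = os.fstat(fd).st_size
        return mmap.mmap(fd, size, prot=mmap.PROT_READ | mmap.PROT_WRITE)
    finally:
        os.close(fd)  # the mapping holds its own reference to the file
```

A socket's send()/recv_into() can then work directly on memoryview slices of the map, which is the closest stdlib analogue of the zero-copy path described above; the real Verbs-based path would register the same region with the NIC instead.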
Since I'll be sending a gigabyte or so at a time, in chunks, one key goal is that data sent in order would ideally be received in the order I post my async receives. Obviously I do have to worry about unreliable communication (causes stuff to end up dropped, and I'll then have to copy). But if nothing is dropped I want the n'th chunk I send to end up in the n'th receive region...
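Since UDP can reorder packets even when nothing is dropped, the usual way to get the n'th chunk into the n'th region is to frame each chunk with a sequence number and scatter it into place on receipt. A minimal sketch of that framing (the 4-byte header size and chunking scheme are illustrative choices, not anything Isis2 actually does):

```python
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number

def frame(seq, payload):
    """Prepend the chunk index so the receiver can place it correctly."""
    return HEADER.pack(seq) + payload

def scatter(packets, chunk_size, total_size):
    """Write each packet's payload at seq * chunk_size, so the n-th
    chunk sent lands in the n-th region regardless of arrival order."""
    buf = bytearray(total_size)
    for pkt in packets:
        (seq,) = HEADER.unpack_from(pkt)
        payload = pkt[HEADER.size:]
        buf[seq * chunk_size: seq * chunk_size + len(payload)] = payload
    return bytes(buf)
```

In the zero-copy version, buf would be the mapped file region itself and the scatter would be done by posting receives at the right offsets, but the sequencing logic is the same.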
So here's my question: Has anyone already done this? Does anyone have any tips on how Mono implements the asynchronous I/O calls that .NET uses so heavily? I should presumably do it the same way. And does anyone have any advice on how to do this with minimal pain?
One more question: Win32 limits mapped files to 2 GB. Cloud systems would often run Win64. Any suggestions on how to maximize interoperability while allowing full use of Win64 for those who are running it? (A kind of OS reflection issue...)

Arbitrary JVM Behaviour

Imagine a setup of 6-7 identical servers, each with
java version "1.6.0_18"
OpenJDK Runtime Environment (IcedTea6 1.8) (fedora-36.b18.fc11-i386)
OpenJDK Server VM (build 14.0-b16, mixed mode)
each running a memory- and CPU-intensive program for hours, even days, completing successfully many times (gathering statistical data, that sort of stuff). But on one machine, no matter the parameters, how I've compiled it (javac -source 1.5 *.java / javac -O -source 1.5 *.java / javac **, imagine any combination yourself :))
or run it (-Xms200000k, or just java blabla.java, you get the idea),
I eventually get, not at any specific moment or iteration, "java.lang.ArrayIndexOutOfBoundsException: -1341472392"?! First things first: the program would never work with such a large index, let alone a negative one. (The line of code is a contains call on an ArrayList of Integers.) (The number is different every time, as I've noticed.)
Note also that I can "resume" a crashed test, and I can on this machine; it does a few more tests, then crashes again.
It's not much of a bother; I don't own the boxes and all the others work, but this is quite strange to me.
Out of personal interest: how does this happen on the not-very-rosy-anyway OpenJDK?
Sounds strange. Is the variable used for indexing the array a long, or is it ever influenced by a long variable? In that case, access to the variable is not guaranteed to be atomic:
From http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html#28733
If a double or long variable is not declared volatile, then for the purposes of load, store, read, and write actions they are treated as if they were two variables of 32 bits each: wherever the rules require one of these actions, two such actions are performed, one for each 32-bit half. The manner in which the 64 bits of a double or long variable are encoded into two 32-bit quantities is implementation-dependent. The load, store, read, and write actions on volatile variables are atomic, even if the type of the variable is double or long.
You could try declaring the index variable as volatile, or use some other means of synchronization (for instance AtomicLong or something similar), if you suspect this could be the issue.
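The failure mode the quoted rule allows can be simulated directly: a torn 64-bit read combines the 32-bit halves of two different values, and the resulting garbage, reinterpreted as a signed 32-bit Java int, looks exactly like the huge negative index in the exception. (Python sketch only; the actual tearing happens inside the JVM, and the specific values below are made up.)

```python
def torn_read(old, new):
    """Combine the high 32 bits of one value with the low 32 bits of
    the other, as a non-atomic long read may do under the old JLS
    memory model when a write races with the read."""
    high = new & 0xFFFFFFFF00000000
    low = old & 0x00000000FFFFFFFF
    return high | low

def as_int32(x):
    """Reinterpret the low 32 bits as a signed Java int, i.e. what the
    array-indexing code would actually see."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x
```

That said, this explanation requires the index to be shared between threads; for a single-threaded run, the hardware-fault suggestion below is the more plausible one.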
If this is a single-threaded Java application, I'd suspect a hardware fault. Of course this could be hard to prove, unless you've got some way to run hardware (e.g. memory) diagnostics.

VB.NET SLOW Compile Time - No disk or CPU activity

We have a project for a client that is written in VB.NET. In one of the projects, we have about 100 modules, which are all VERY simple. They're extension methods that convert between object types. Here is a small snippet:
Public Module ScheduleExtensions
    <System.Runtime.CompilerServices.Extension()> _
    Public Function ToServicesData(ByVal source As Schedule) As ScheduleServicesData
        If (source IsNot Nothing) Then
            Dim target As New ScheduleServicesData
            With target
                .CenterId = source.CenterId
                .EmployeeGuid = source.EmployeeGuid
                .EndDateTime = source.EndDateTime
                ' ... more one-to-one property assignments ...
            End With
            Return target
        End If
        Return Nothing
    End Function
End Module
The problem is, this project alone takes 2+ minutes to build. I ran diskmon and filemon, and it doesn't access the file system while the build appears to hang. The CPU usage is also low during the majority of the execution. After about 2 minutes, the build finishes and there is disk and CPU activity. The problem can be reproduced on any machine (4 tried so far).
I went so far as to compile the project using the vbc command line, and the problem is there as well.
Is there something about VB.NET extension methods that lead to poor compile time? That is the only feature we're using that is more complex than looping/getting/setting, etc.
Performance problems that show no significant CPU or disk activity are usually related to network waits: either network performance itself or, more likely, waiting for responses from services on other systems. Now, I don't see anything in the sample that should have that problem, so I must assume the cause is something else in your project, your project settings, your VS environment, or your system's environment.
You might try to get a tool that can monitor all network calls from your system and see what's going on.
It's hard to know what the problem is based on a small sample like this. There is nothing inherently slow about extension-method support in the compiler, and we have numerous regression tests in this area. If there is a bug, it's likely a combination of several factors causing the problem.
If you have the time please file a bug on this issue at
http://connect.microsoft.com
It makes the bug much easier to investigate if you can provide a small sample which reproduces the problem.

Why is there no main() function in VxWorks?

When using VxWorks as a development platform, we can't write our application with the standard main() function. Why can't we have a main function?
Before version 6.0, VxWorks supported only the kernel execution environment for tasks and did not support processes, which are the traditional application execution environment on OSes like Unix or Windows. Tasks have an entry point, which is the address of the code to execute as a task. This address corresponds to a C or assembly function. It can be a symbol named "main", but there are C/C++ language assumptions about the main() function that are not supported in the kernel environment (in particular the traditional handling of the argc and argv parameters). Furthermore, prior to VxWorks 6.0, all tasks execute kernel code. You can picture the kernel as a common repository of code all linked together, and then you'll see that you cannot have several symbols of the same name ("main"), since this would create name collisions.
Now, this is accurate only if you link your application code into the kernel image. If you instead download your application code, the module loader will accept loading several modules, each with a main() routine. However, the last "main" symbol registered in the system symbol table is the only one you can access via the target shell. If you want to start tasks executing the code of one of the earlier-loaded modules, you'd have to use the addresses of the previous main() functions. This is possible but not convenient. It is far more practical to give different names to the entry points of tasks (perhaps something like "xxxStart", where "xxx" is a name meaningful for what the task is supposed to do).
Starting with VxWorks 6.0, the OS supports a process environment. This means, among many other things, that you can have a traditional main() routine, that its argc and argv parameters are properly handled, and that the application code executes in a context (the user context) which is different from the kernel context, thus ensuring isolation between application code (which can be flaky) and kernel code (which is not supposed to be flaky).