Why is CBMC unwinding more times than expected? - cbmc

Consider the code given below. Why is CBMC unwinding the loop more times than the upper bound suggests, even though we have assumed that the initial value of i0 is at least 2?
#include <assert.h>
void main()
{
    int i0;
    int o1;
    __CPROVER_assume(i0 >= 2);
    //assert(i0>=0);
    while (i0 <= 10)
    {
        i0 = i0 + 1;
    }
    o1 = i0 + 1;
    assert((o1 <= 1));
}
CBMC Output:
CBMC version 5.8 64-bit x86_64 linux
Parsing /tmp/in1_1524461553_1936466587.c
Converting
Type-checking in1_1524461553_1936466587
Generating GOTO Program
Adding CPROVER library (x86_64)
Removal of function pointers and virtual functions
Partial Inlining
Generic Property Instrumentation
Starting Bounded Model Checking
Unwinding loop main.0 iteration 1 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 2 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 3 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 4 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 5 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 6 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 7 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 8 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 9 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 10 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 11 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 12 file /tmp/in1_1524461553_1936466587.c line 9 function main thread 0
Unwinding loop main.0 iteration 13 file /tmp/in1_1524461553_1936466587.c

I guess symex hasn't been smart enough to see that the loop condition will always be false beyond a certain iteration. It tries to simplify expressions a bit as it goes along, but not all that much. When the unwound program is converted to a formula and passed to the SAT solver, the solver will quickly see that the guards for those extra iterations can never be satisfied and will discard those parts of the formula, so this shouldn't affect correctness (though of course it may mean that CBMC takes a long time to run).
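To make that concrete, here is a hand-unrolled sketch (my own illustration, unrolled to depth 3 only; the real transformation also appends an unwinding assumption or assertion after the last copy) of roughly what symex produces for the loop. Because i0 is symbolic, symex cannot tell that the guard of a later copy contradicts __CPROVER_assume(i0 >= 2); only the SAT solver discovers that and drops those copies:
#include <assert.h>

void main()
{
    int i0;
    int o1;
    __CPROVER_assume(i0 >= 2);

    /* while (i0 <= 10) { i0 = i0 + 1; }  unwound into nested guarded copies */
    if (i0 <= 10) { i0 = i0 + 1;          /* copy of iteration 1 */
        if (i0 <= 10) { i0 = i0 + 1;      /* copy of iteration 2 */
            if (i0 <= 10) { i0 = i0 + 1;  /* copy of iteration 3 */
                /* ... symex keeps emitting further copies even though, with i0
                   starting at 2 or more, the guards of copies 10, 11, 12, ...
                   can never hold; that only falls out inside the SAT solver. */
            }
        }
    }

    o1 = i0 + 1;
    assert((o1 <= 1));
}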

Related

Python Multiprocessing -- how to make all processes enter a code block at the same time

My code looks like this, for each process:
def foo():
    while True:
        a = func1()
        func2()
and I wish to collect the results of func1 from different processes and process them in func2. Processes may have to enter func2 together, since a varies across iterations and across processes.
Is there any way to synchronize different processes so that they always enter func2 at the same time?
More explanation:
Say func1 gives out values from the number series 1, 2, 3, ...
and func2 adds up the value a from every process. What I hope to achieve:
process 1 executes func1 and gives a=1
process 2 executes func1 and gives a=2
processes 1 and 2 execute func2: 1+2=3
process 1 executes func1 and gives a=3
process 2 executes func1 and gives a=4
processes 1 and 2 execute func2: 3+4=7
What I actually achieved :(
process 1 executes func1 and gives a=1
process 2 executes func1 and gives a=2
processes 1 and 2 execute func2: 1+2=3
process 1 executes func1 and gives a=3
process 1 executes func2: 2+3=5
So maybe they need to enter func2 at the same time?
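One way to get that lock-step behaviour is a barrier: no process starts func2 until every process has finished func1 for the current step. Below is a minimal sketch using multiprocessing.Barrier; the two-process setup, the queue and the stand-in bodies for func1/func2 are my own placeholders, not the poster's code.
import multiprocessing as mp

def worker(barrier, results, rank, steps=2):
    for step in range(steps):
        a = rank + 1 + step * 2    # stand-in for func1(): rank 0 yields 1, 3, ...; rank 1 yields 2, 4, ...
        results.put(a)
        barrier.wait()             # nobody continues until every process has produced its a
        if rank == 0:              # stand-in for func2(): one process combines the values
            total = results.get() + results.get()
            print(f"step {step}: sum = {total}")
        barrier.wait()             # keep a fast process from racing into the next step

if __name__ == "__main__":
    barrier = mp.Barrier(2)        # number of cooperating processes
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(barrier, results, r)) for r in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
This prints sum = 3 for the first step and sum = 7 for the second, matching the desired trace above.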

open MPI - ring_c on multiple hosts fails

I have recently installed Open MPI on two Ubuntu 14.04 hosts and I am now testing its functionality with the two provided test programs hello_c and ring_c. The hosts are called 'hermes' and 'zeus', and both have a user 'mpiuser' that can log in non-interactively (via ssh-agent).
The commands mpirun hello_c and mpirun --host hermes,zeus hello_c both work properly.
Running mpirun --host zeus ring_c locally also works. Output for both hermes and zeus:
mpiuser@zeus:/opt/openmpi-1.6.5/examples$ mpirun --host zeus ring_c
Process 0 sending 10 to 0, tag 201 (1 processes in ring)
Process 0 sent to 0
Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Process 0 decremented value: 3
Process 0 decremented value: 2
Process 0 decremented value: 1
Process 0 decremented value: 0
Process 0 exiting
But calling mpirun --host zeus,hermes ring_c fails and gives the following output:
mpiuser@zeus:/opt/openmpi-1.6.5/examples$ mpirun --host hermes,zeus ring_c
Process 0 sending 10 to 1, tag 201 (2 processes in ring)
[zeus:2930] *** An error occurred in MPI_Recv
[zeus:2930] *** on communicator MPI_COMM_WORLD
[zeus:2930] *** MPI_ERR_TRUNCATE: message truncated
[zeus:2930] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
Process 0 sent to 1
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 2930 on
node zeus exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
I haven't found any documentation on how to solve such a problem and I don't have a clue where to look for the mistake on the basis of the error output. How can I fix this?
You've changed two things between the first and second runs - you've increased the number of processes from 1 to 2, and run on multiple hosts rather than a single host.
I'd suggest you first check that you can run 2 processes on the same host:
mpirun -n 2 ring_c
and see what you get.
When debugging on a cluster it's often useful to know where each process is running. You should always print out the total number of processes as well. Try using the following code at the top of ring_c.c:
char nodename[MPI_MAX_PROCESSOR_NAME];
int namelen;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Get_processor_name(nodename, &namelen);
printf("Rank %d out of %d running on node %s\n", rank, size, nodename);
The error you're getting is saying that the incoming message is too large for the receive buffer, which is weird given that the code always sends and receives a single integer.
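For reference, the usual way that error arises is a receive whose count is smaller than the incoming message. A minimal sketch (my own example, not taken from ring_c; run it with mpirun -n 2):
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int payload[4] = {10, 11, 12, 13};
        MPI_Send(payload, 4, MPI_INT, 1, 201, MPI_COMM_WORLD);  /* sends 4 ints */
    } else if (rank == 1) {
        int small;
        /* count 1 < 4 incoming ints -> MPI_ERR_TRUNCATE ("message truncated"),
           which is fatal under the default MPI_ERRORS_ARE_FATAL handler */
        MPI_Recv(&small, 1, MPI_INT, 0, 201, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
ring_c never posts a mismatched receive like this, which is what makes the message surprising.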

VerifyError #1023 when trying to modify AVM2 bytecodes

I'm trying to patch the bytecode of a SWF using RABCDAsm. Here's my patch:
findpropstrict QName(PackageNamespace("flash.net"),"URLRequest")
pushstring "http://www.example.com/fake_proxied_post"
constructprop QName(PackageNamespace("flash.net"),"URLRequest"), 1
coerce QName(PackageNamespace("flash.net"),"URLRequest")
setlocal 9
getlocal 9
getlex QName(PackageNamespace("sample.loaderDanmu"),"CModule")
getlocal3
pushbyte 16
callproperty QName(PackageNamespace(""),"readString"), 2
setproperty QName(PackageNamespace(""),"data")
getlocal 9
pushstring "POST"
coerce_a
setproperty QName(PackageNamespace(""),"method")
findpropstrict QName(PackageNamespace("flash.net"),"URLLoader")
constructprop QName(PackageNamespace("flash.net"),"URLLoader"), 0
getlocal 9
callpropvoid QName(PackageNamespace(""),"load"), 1
I got the error VerifyError #1023 stack overflow occurred. Is there any problem in my patch? The original SWF uses FlasCC and I'm patching a file generated by FlasCC. I believe _loc3_ is a string buffer.
Finally I got it working. I needed to enlarge the operand stack: the patch pushes more values onto the stack than the original code did, so the declared maxstack of 3 is exceeded, which is what VerifyError #1023 ("stack overflow occurred") reports.
body
- maxstack 3
+ maxstack 4
initscopedepth 0

FORTRAN 90 Open file issues

I've been searching around this code for a long time now and can't seem to find the reason it's not working... Maybe an outsider's view can help.
!I open File 1
!Opening File 1
open(2, File='File1.txt',status='old')
read(2,*)!File 1 header
PRINT*,'File1.txt read'
!Read it
DO b=1,nb
  DO i=1,ni(b)
    READ(2,*)dum(b,i),Qr(1,xbu(b),i),hr(1,xbu(b),i),Ar(1,xbu(b),i),Pr(1,xbu(b),i),dx(xbu(b),i),sx(xbu(b),i)
  END DO
END DO
And it's fine. I've printed it, it's all there. But when I go to File 2, doing the exact same thing:
PRINT*,'Reading File 2 '
open(3, File='File2.txt',status='old') !<- It stays here forever.
PRINT*,'File2.txt read'
The files are plain txt, with real values like this
File 1:
11 0 0 0 0 6500 1.2
File 2
11 0.00 0.00 0.00 0.0
Any thoughts on what could cause the same code to fail the second time?
You should probably add some error checking in there. Try putting
open(3, File='File2.txt',status='old',iostat=io_status, err=100)
And somewhere put
100 write(*,*) 'io status = ', io_status
stop
I also recommend writing a function that finds the first available Fortran unit number rather than hard-coding it, something along the lines of "getting free unit number in fortran".
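A minimal sketch of such a helper (the name get_free_unit and the 10-999 search range are my own choices):
! Returns the first unit number in 10-999 that is not currently connected,
! or -1 if none is free.
integer function get_free_unit() result(unit)
    implicit none
    logical :: in_use
    integer :: u
    unit = -1
    do u = 10, 999
        inquire(unit=u, opened=in_use)
        if (.not. in_use) then
            unit = u
            return
        end if
    end do
end function get_free_unit
Store the returned value in a variable and use it in the open and read statements instead of the hard-coded 2 and 3.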

MSBuild script fails but produces no errors

I have a MSBuild script that I am executing through TeamCity.
One of the tasks that it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK.
This task used to take around 40 minutes and the rest of the MSBuild file executed perfectly and finished without errors.
For some reason this task is now taking 1 hr 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully, but the MSBuild script fails at that point. I have a task directly after the VeilProject task and its output never appears. Using diagnostic output from MSBuild I can see the following:
My questions are:
1. Would it be possible that the MSBuild script has timed out? I.e. once the task has completed, a certain timeout period has already passed, so it just fails?
2. Why would the build fail with no errors and no warnings?
[05:39:06]: [Target "Obfuscate"] Finished.
[05:39:06]: [Target "Obfuscate"] Saving exception map
[05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
[05:49:22]: [Target "Obfuscate"] Done.
[05:49:51]: MSBuild output:
Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
Done. (TaskId:8)
Done executing task "VeilProject" -- FAILED. (TaskId:8)
Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.
Project Performance Summary:
6535484 ms C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx 1 calls
6535484 ms All 1 calls
Target Performance Summary:
156 ms PreClean 1 calls
266 ms SetBuildVersionNumber 1 calls
2406 ms CopyFiles 1 calls
6532391 ms Obfuscate 1 calls
Task Performance Summary:
16 ms MakeDir 2 calls
31 ms TeamCitySetBuildNumber 1 calls
31 ms Message 1 calls
62 ms RemoveDir 2 calls
234 ms GetAssemblyIdentity 1 calls
2406 ms Copy 1 calls
6528047 ms VeilProject 1 calls
Build FAILED.
0 Warning(s)
0 Error(s)
Time Elapsed 01:48:57.46
[05:49:52]: Process exit code: 1
[05:49:55]: Build finished
If the .exe is not returning standard exit codes then you may want to ignore the exit code by using the Exec task with IgnoreExitCode="true". If that doesn't work, try the additional parameter IgnoreStandardErrorWarningFormat="true".
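For example, a sketch of the relevant part of the project file (the command is only a placeholder for however the obfuscator is actually invoked; the two attributes are the point):
<Target Name="Obfuscate">
  <!-- Placeholder: substitute the real obfuscator command line here. -->
  <Exec Command="$(ObfuscatorCommand)"
        IgnoreExitCode="true"
        IgnoreStandardErrorWarningFormat="true" />
</Target>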