Error attempting CrashPlan Home restore using PlanC - "Failed to open block manifest for reading" - backup

Background to Plan C
Code42 decided to terminate their "CrashPlan for Home" service. After the shutdown date of October 22, 2018, CrashPlan will delete your backup from their servers, which is to be expected. Much more annoyingly, you will also no longer be able to restore CrashPlan backups that you stored locally. Effectively, Code42 is reaching into your computer to break your backups for you.
PlanC is an open-source project that enables restores from existing CrashPlan Home backups.
My Problem
However, when attempting to restore I received an error:
MacBook-Pro:CrashPlanHomeRecovery daniel$ ./plan-c-osx/plan-c --key 07B... --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "J:/..." restore
Caching block indexes in memory...
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Failed to open block manifest for reading: ./sg2015/642033544161964565/cpbf0000000000017581637/cpbmf
Abort trap: 6
The file referenced in the error appeared to read fine, but the reported error provided no further information.
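A quick way to sanity-check the file named in the error from the shell (plain macOS commands, nothing PlanC-specific) is to confirm it exists and that its first bytes read back without an I/O error:
ls -l ./sg2015/642033544161964565/cpbf0000000000017581637/cpbmf                # file exists and has a sane size
head -c 64 ./sg2015/642033544161964565/cpbf0000000000017581637/cpbmf | xxd     # first bytes read back cleanly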

I reported this as GitHub Issue #9.
I then made a minor change to the error reporting (GitHub Pull Request #10) and worked out that the underlying error was a "Too many open files" error:
MacBook-Pro:CrashPlanHomeRecovery daniel$ ./plan-c-osx/plan-c --key 07B... --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "J:/..." restore
Caching block indexes in memory...
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Failed to open block manifest (../../sg2015/642033544161964565/cpbf0000000000017581637/cpbmf) for reading: Too many open files
Abort trap: 6
Just a note that if my pull request (only just submitted) has not been merged (and a new binary released), you will need to build from my fork.
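For reference, a minimal sketch of building from a fork, assuming the fork is on GitHub and that a plain make build works (the URL is a placeholder and the make step is an assumption; check the repository's README for the actual build instructions):
git clone https://github.com/<your-username>/plan-c.git   # placeholder fork URL
cd plan-c
make                                                       # assumed build step; see the README for prerequisites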
I then fixed this with a ulimit change. The shell's default limits showed only 256 open files allowed:
MacBook-Pro:PlanC daniel$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1418
virtual memory (kbytes, -v) unlimited
I increased the number of open files for the shell to 1024:
MacBook-Pro:PlanC daniel$ ulimit -S -n 1024
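To confirm the new soft limit took effect in the same shell before re-running the restore (the key and filename below are placeholders, not the real values):
ulimit -S -n                                   # should now report 1024
./plan-c-osx/plan-c --key <KEY> --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "<PATH>" restore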
Recording this answer in case others have problems - backups are important after all :)

Related

"IOError: Can't find a path to system files" for x86 full system on Mac

I'm trying to set up gem5 x86 full system simulation on macOS Mojave (10.14).
First I did a git clone to get the gem5 sources, which are located at ~/gem5.
Then I ran scons build/x86/gem5.fast to build the whole thing. I had to change some of the -Werror flags to get it to compile, but it seems to work.
To test it, I ran build/x86/gem5.fast configs/example/se.py -c tests/test-progs/hello/bin/x86/linux/hello and got the following output:
gem5 Simulator System. http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 compiled Jan 18 2020 15:28:50
gem5 started Jan 18 2020 17:48:35
gem5 executing on My-MacBook-Pro-208.local, pid 89984
command line: build/x86/gem5.fast configs/example/se.py -c tests/test-progs/hello/bin/x86/linux/hello
Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7000
**** REAL SIMULATION ****
info: Entering event queue # 0. Starting simulation...
Hello world!
Exiting # tick 5941500 because exiting with last active thread context
I wanted to configure full system simulation, so I went to the "Full-System Stuff" section at http://gem5.org/Download and downloaded the Full System Files. I extracted the tar into ~/gem5/x86-system.
So now there is ~/gem5/x86-system/binaries, which contains x86_64-vmlinux-2.6.22.9, and ~/gem5/x86-system/disks, which contains linux-x86.img.
In ~/.bash_profile I added export M5_PATH="/Users/me/gem5/x86-system".
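As a plain-shell sanity check (not something from the gem5 docs), the variable and the expected directory layout can be verified from the same shell that launches gem5:
echo "$M5_PATH"                            # should print /Users/me/gem5/x86-system
ls "$M5_PATH/binaries" "$M5_PATH/disks"    # should list the kernel and the disk image extracted above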
However, when I run scons build/x86/tests/fast/quick, almost all of the tests fail. A lot of them have a failure like this:
...
File "/Users/me/gem5/configs/common/SysPaths.py", line 62, in __call__
raise IOError("Can't find a path to system files.")
IOError: Can't find a path to system files.
I also tried to run build/x86/gem5.fast configs/example/fs.py but I get the following error:
...
File "/Users/me/gem5/configs/common/SysPaths.py", line 71, in __call__
raise IOError("Can't find file '%s' on path." % filename)
IOError: Can't find file 'x86root.img' on path.
I'm not sure what part of the configuration I'm missing. The docs and Google searches aren't giving any working solutions...

Running gem5 with SPEC2006

When running gem5 X86 in SE mode, I am trying to run bzip2 from SPEC2006. At first it failed because gem5 said it can't run dynamically linked executables, so I compiled the benchmark with the -static flag.
Now I get this error:
gem5 Simulator System. http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.
gem5 compiled Oct 27 2018 00:36:02
gem5 started Dec 22 2018 18:16:40
gem5 executing on Dan
command line: ./build/X86/gem5.opt configs/example/se.py -c /home/dan/SPEC2006/benchspec/CPU2006/401.bzip2/exe/bzip2_base.ia64-gcc42 -i /home/dan/SPEC2006/benchspec/CPU2006/401.bzip2/data/test/input/dryer.jpg
Could not import 03_BASE_FLAT
Could not import 03_BASE_NARROW
Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (4096 Mbytes)
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
**** REAL SIMULATION ****
info: Entering event queue # 0. Starting simulation...
panic: Tried to write unmapped address 0xffffedd8. Inst is at 0x400da4
# tick 5500
[invoke:build/X86/arch/x86/faults.cc, line 160]
Memory Usage: 4316736 KBytes
Program aborted at tick 5500
Aborted (core dumped)
I am running gem5 on Ubuntu 17.10.
I tried to find solutions on Google but didn't see anyone referring to this problem. Does anyone know how to fix it?
Please check your host machine configuration. bzip2 does not work on a 32-bit machine. My desktop is a dual-core 32-bit x86 machine; when I tried to run bzip2 there, it showed the same error.
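A quick way to check both the host and the benchmark binary with standard tools (the binary path is the one from the command line above):
uname -m                                                                          # x86_64 means a 64-bit host
file /home/dan/SPEC2006/benchspec/CPU2006/401.bzip2/exe/bzip2_base.ia64-gcc42     # reports whether the executable is 32- or 64-bit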

Push to HG repository through http

I've just set up an hg server with Apache. I'm able to clone and pull, but not to push, to some older repositories.
If I make a clean repository on the server, clone it, and make and commit a test file, the push works fine.
If I use an existing repository converted from svn, the push doesn't work. This repository was hosted on an older hg server that we shared with another project, and pushing worked fine there. But after migrating to the newer hg, this started to happen:
$ hg push -r 2 --debug --traceback http://sdsvn.mbid.cz:3145/b
pushing to http://sdsvn.mbid.cz:3145/b
using http://sdsvn.mbid.cz:3145/b
proxying through http://wsab1.lb.mbid.cz:8008
sending capabilities command
http authorization required for http://sdsvn.mbid.cz:3145/b
realm: Moneta Apps Mercurial repository
user: macik
password:
http auth: user macik, password *******
query 1; heads
sending batch command
searching for changes
all remote heads known locally
preparing listkeys for "phases"
sending listkeys command
received listkey for "phases": 15 bytes
checking for updated bookmarks
preparing listkeys for "bookmarks"
sending listkeys command
received listkey for "bookmarks": 0 bytes
sending branchmap command
sending branchmap command
preparing listkeys for "bookmarks"
sending listkeys command
received listkey for "bookmarks": 0 bytes
1 changesets found
list of changesets:
6b69f3649ab68022671048cdd56e94bdfa3d2f8c
bundle2-output-bundle: "HG20", 4 parts total
bundle2-output-part: "replycaps" 155 bytes payload
bundle2-output-part: "check:heads" streamed payload
bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
bundle2-output-part: "pushkey" (params: 4 mandatory) empty payload
sending unbundle command
sending 13699 bytes
bundle2-input-bundle: with-transaction
bundle2-input-part: "output" (advisory) (params: 0 advisory) supported
bundle2-input-part: total payload size 18
remote: adding changesets
bundle2-input-part: "output" (advisory) supported
bundle2-input-part: total payload size 38
remote: transaction abort!
remote: rollback completed
bundle2-input-part: "error:abort" (params: 1 mandatory) supported
bundle2-input-bundle: 2 parts total
remote: stream ended unexpectedly (got 0 bytes, expected 4)
Traceback (most recent call last):
File "mercurial\dispatch.pyo", line 239, in _runcatchfunc
File "mercurial\dispatch.pyo", line 842, in _dispatch
File "mercurial\dispatch.pyo", line 594, in runcommand
File "mercurial\dispatch.pyo", line 850, in _runcommand
File "mercurial\dispatch.pyo", line 839, in <lambda>
File "mercurial\util.pyo", line 1051, in check
File "mercurial\commands.pyo", line 5292, in push
File "mercurial\exchange.pyo", line 481, in push
File "mercurial\exchange.pyo", line 922, in _pushbundle2
Abort: push failed on remote
abort: push failed on remote
I'm using Mercurial 4.1 with Python 2.7 and Apache 2.2.31 on the server, and TortoiseHg 4.1 on the client.
Unfortunately, I can't install something like Wireshark to monitor the traffic (corporate policies), so this is the only reasonable output I have.
Can you please direct me where to look for the root of this problem, and even better, how to solve it?
thx
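For anyone hitting the same thing, two hedged server-side places to look (the repository path and log location are assumptions for illustration, not known details of this setup):
hg verify -R /path/to/b                   # check the converted repository itself for corruption
tail -n 50 /var/log/apache2/error.log     # the Apache error log may show the real reason the remote transaction aborted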

How do you get Valgrind to show line errors?

How do you get Valgrind to show exactly where an error occurred? I compiled my program (on a Windows machine over a Linux terminal via PuTTY) with the -g debug option.
When I run Valgrind, I get the leak and heap summaries, and I definitely have lost memory, but I never get information about where it happens (file name, line number). Shouldn't Valgrind be telling me on which line the memory that I allocate, and later fail to deallocate, was allocated?
==15746==
==15746== HEAP SUMMARY:
==15746== in use at exit: 54 bytes in 6 blocks
==15746== total heap usage: 295 allocs, 289 frees, 11,029 bytes allocated
==15746==
==15746== LEAK SUMMARY:
==15746== definitely lost: 12 bytes in 3 blocks
==15746== indirectly lost: 42 bytes in 3 blocks
==15746== possibly lost: 0 bytes in 0 blocks
==15746== still reachable: 0 bytes in 0 blocks
==15746== suppressed: 0 bytes in 0 blocks
==15746== Rerun with --leak-check=full to see details of leaked memory
==15746==
==15746== For counts of detected and suppressed errors, rerun with: -v
==15746== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 15 from 8)
I've repeatedly gotten hosed on this, and couldn't figure out why '--leak-check=full' wasn't working for me, so I thought I'd bump up tune2fs's comment.
The most likely problem is that you (not ShrimpCrackers, but whoever is reading this post right now) have placed --leak-check=full at the end of your command line. Valgrind wants the flag to come before the actual command line that runs your program.
i.e.:
valgrind --leak-check=full ./myprogram
NOT:
valgrind ./myprogram --leak-check=full
Try valgrind --leak-check=full
This normally prints more useful information.
Also add the -O0 flag when compiling so your code doesn't get optimized.
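For example, a typical compile line combining both flags might look like this (the file and program names are placeholders):
gcc -g -O0 -o myprogram myprogram.c   # -g keeps debug symbols, -O0 disables optimisation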
It's not a valgrind option. Instead, the code has to be compiled with the -g option in order to preserve debug symbols.
cc -g main.c
valgrind --trace-children=yes --track-fds=yes --track-origins=yes --leak-check=full --show-leak-kinds=all ./a.out
Let me be more specific for other readers (I had the same problem but my arguments were in the right order):
I found out that valgrind needs the path to the executable; if you don't give it, valgrind will still run, but it won't give you the line numbers.
In my case the executable was in a different directory which was in my PATH, but to get the line information you have to run:
valgrind --leak-check=full path_to_myprogram/myprogram
In order for valgrind to show the lines where the errors occurred in the file,
I had to add -g to the END of my compile command.
For Example:
gcc -o main main.c -g
Then just run valgrind:
valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes ./main

Terminate process running inside valgrind

Killing the valgrind process itself leaves no report on the inner process' execution.
Is it possible to send a terminate signal to a process running inside valgrind?
There is no "inner process" as both valgrind itself and the client program it is running execute in a single process.
Signals sent to that process will be delivered to the client program as normal. If the signal causes the process to terminate, then valgrind's normal exit handlers will run and (for example) report any leaks.
So, for example, if we start valgrind on a sleep command:
bericote [~] % valgrind sleep 240
==9774== Memcheck, a memory error detector
==9774== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==9774== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info
==9774== Command: sleep 240
==9774==
then kill that command:
bericote [~] % kill -TERM 9774
then the process will exit and valgrind's exit handlers will run:
==9774==
==9774== HEAP SUMMARY:
==9774== in use at exit: 0 bytes in 0 blocks
==9774== total heap usage: 30 allocs, 30 frees, 3,667 bytes allocated
==9774==
==9774== All heap blocks were freed -- no leaks are possible
==9774==
==9774== For counts of detected and suppressed errors, rerun with: -v
==9774== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 6 from 6)
[1] 9774 terminated valgrind sleep 240
The only exception is kill -9: in that case the process is killed by the kernel without ever being informed of the signal, so valgrind has no opportunity to do anything.
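In short, to get a report, send a catchable signal; using the PID from the example above:
kill -TERM 9774   # catchable: valgrind's exit handlers run and the summary is printed
kill -KILL 9774   # equivalent to kill -9: the kernel kills the process outright, so no summary appears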