I've noticed that gem5 has a TARMAC tracer at: https://github.com/gem5/gem5/blob/05c4c2b566ce351ab217b2bd7035562aa7a76570/src/arch/arm/tracers/TarmacTrace.py
This seems to be the same format used by Arm Fast Models: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0532c/CHDHFCEE.html which might make comparing executions easier.
How do I enable that tracer in gem5, e.g. in fs.py?
It does not seem to be exposed in any config as of 9048ef0ffbf21bedb803b785fb68f83e95c04db8, but you can easily enable it with a tiny hack patch, e.g. on fs.py:
diff --git a/configs/example/fs.py b/configs/example/fs.py
index 4d2165884..e3b74ebeb 100644
--- a/configs/example/fs.py
+++ b/configs/example/fs.py
@@ -374,5 +374,7 @@ if buildEnv['TARGET_ISA'] == "arm" and not options.bare_metal \
sys = getattr(root, sysname)
sys.dtb_filename = create_dtb_for_system(sys, '%s.dtb' % sysname)
+for cpu in test_sys.cpu:
+    cpu.tracer = TarmacTracer()
Simulation.setWorkCountOptions(test_sys, options)
Simulation.run(options, root, test_sys, FutureClass)
Then, if you run gem5.opt with --debug-file, the debug file (m5out/trace.txt) is written in TARMAC format instead of the usual format controlled by --debug-flags.
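If you prefer not to patch fs.py, the same two lines work from any config script that builds its own system. A minimal sketch (the system and cpu names here are illustrative, not taken from fs.py):

# Minimal sketch: attach the TARMAC tracer to every CPU in a custom gem5 config.
# 'system' stands for whatever System object your own script builds.
from m5.objects import TarmacTracer

for cpu in system.cpu:
    cpu.tracer = TarmacTracer()  # replaces the CPU's default instruction tracer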
Executive summary: I want to use GDB to extract the coverage execution counts stored in memory in my embedded target, and use them to create .gcda files (for feeding to gcov/lcov).
The setup:
I can successfully cross-compile my binary, targeting my specific embedded target - and then execute it under QEMU.
I can also use QEMU's GDB support to debug the binary (i.e. use tar extended-remote localhost:... to attach to the running QEMU GDB server, and fully control the execution of my binary).
Coverage:
Now, to perform "on-target" coverage analysis, I cross-compile with
-fprofile-arcs -ftest-coverage. GCC then emits 64-bit counters to keep track of execution counts of specific code blocks.
Under normal (i.e. host-based, not cross-compiled) execution, when the app finishes, __gcov_exit is called - and it gathers all these execution counts into .gcda files (which gcov then uses to report coverage details).
In my embedded target however, there's no filesystem to speak of - and libgcov basically contains empty stubs for all __gcov_... functions.
Workaround via QEMU/GDB: To address this, and do it in a GCC-version-agnostic way, I could list the coverage-related symbols in my binary via MYPLATFORM-readelf, and grep-out the relevant ones (e.g. __gcov0.Task1_EntryPoint, __gcov0.worker, etc):
$ MYPLATFORM-readelf -s binary | grep __gcov
...
46: 40021498 48 OBJECT LOCAL DEFAULT 4 __gcov0.Task1_EntryPoint
...
I could then use the offsets/sizes reported to automatically create a GDB script - a script that extracts the counters' data via simple memory dumps (from offset, dump length bytes to a local file).
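To illustrate, here is a rough sketch of such a generator (the readelf column layout is assumed from the listing above, and the output file names are just placeholders):

# Rough sketch: turn `MYPLATFORM-readelf -s` output into GDB "dump binary memory"
# commands, one per __gcov0.* counter block. Column positions are assumed from
# the readelf -s listing shown above.
import subprocess, sys

binary = sys.argv[1]
out = subprocess.check_output(["MYPLATFORM-readelf", "-s", binary], text=True)
for line in out.splitlines():
    fields = line.split()
    # Num: Value Size Type Bind Vis Ndx Name
    if len(fields) >= 8 and fields[7].startswith("__gcov0."):
        addr, size, name = int(fields[1], 16), int(fields[2]), fields[7]
        print("dump binary memory %s.bin 0x%x 0x%x" % (name, addr, addr + size))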
What I don't know (and failed to find any relevant info/tool about) is how to convert the resulting pairs of (memory offset, memory data) into .gcda files. If such a tool/script exists, I'd have a portable (platform-agnostic) way to do coverage on any QEMU-supported platform.
Is there such a tool/script?
Any suggestions/pointers would be most appreciated.
UPDATE: I solved this myself, as you can read below - and wrote a blog post about it.
Turned out there was a (much) better way to do what I wanted.
The Linux kernel includes portable GCOV-related functionality that abstracts away the GCC version-specific details by providing this endpoint:
size_t convert_to_gcda(char *buffer, struct gcov_info *info)
So basically, I was able to do on-target coverage via the following steps:
Step 1
I added three slightly modified versions of the Linux gcov files to my project: base.c, gcc_4_7.c and gcov.h. I had to replace some Linux-isms inside them - like vmalloc, kfree, etc. - to make the code portable (and thus compilable on my embedded platform, which has nothing to do with Linux).
Step 2
I then provided my own __gcov_init...
typedef struct tagGcovInfo {
    struct gcov_info *info;
    struct tagGcovInfo *next;
} GcovInfo;
GcovInfo *headGcov = NULL;

void __gcov_init(struct gcov_info *info)
{
    printf(
        "__gcov_init called for %s!\n",
        gcov_info_filename(info));
    fflush(stdout);
    GcovInfo *newHead = malloc(sizeof(GcovInfo));
    if (!newHead) {
        puts("Out of memory!");
        exit(1);
    }
    newHead->info = info;
    newHead->next = headGcov;
    headGcov = newHead;
}
...and __gcov_exit:
void __gcov_exit()
{
    GcovInfo *tmp = headGcov;
    while (tmp) {
        char *buffer;
        /* calling convert_to_gcda with a NULL buffer just returns the size needed */
        int bytesNeeded = convert_to_gcda(NULL, tmp->info);
        buffer = malloc(bytesNeeded);
        if (!buffer) {
            puts("Out of memory!");
            exit(1);
        }
        convert_to_gcda(buffer, tmp->info);
        printf("Emitting %6d bytes for %s\n", bytesNeeded, gcov_info_filename(tmp->info));
        free(buffer);
        tmp = tmp->next;
    }
}
Step 3
Finally, I scripted my GDB (driving QEMU remotely) via this:
$ cat coverage.gdb
tar extended-remote :9976
file bin.debug/fputest
# breaks on the "Emitting" printf in __gcov_exit
b base.c:88
commands 1
silent
set $filename = tmp->info->filename
set $dataBegin = buffer
set $dataEnd = buffer + bytesNeeded
eval "dump binary memory %s 0x%lx 0x%lx", $filename, $dataBegin, $dataEnd
c
end
c
quit
And finally, executed both QEMU and GDB - like this:
$ # In terminal 1:
qemu-system-MYPLATFORM ... -kernel bin.debug/fputest -gdb tcp::9976 -S
$ # In terminal 2:
MYPLATFORM-gdb -x coverage.gdb
...and that's it - I was able to generate the .gcda files in my local filesystem, and then see coverage results with gcov and lcov.
UPDATE: I wrote a blog post showing the process in detail.
Sadly, there is no RCS tag on Unix.stackexchange or ServerFault, so I am posting this on StackOverflow.
I'm spoiled by SVN/Git, and I need to see the history of a file. With my scripts I am using RCS to track changes made to system configuration files, so it would be neat if I could view them like I do with Git. For Git I use git log -p to get this kind of output.
Is there a flag for rlog or rcsdiff or anything that lets me get a log that has the diffs?
Or must I use rcsdiff and a shell script to implement this myself?
rcshist (written by Ian Dowse) does what was requested. I do not know of prebuilt packages, but it builds easily.
Here is sample output:
REV:1.346 aclocal.m4 2012/09/03 17:21:43 tom
tags: xterm-281s, xterm-281r, xterm-281q, xterm-281p, xterm-281o,
xterm-281n, xterm-281m, xterm-281l, xterm-281k, xterm-281j,
xterm-281i, xterm-281h, xterm-281g, xterm-281f, xterm-281e
change default for --with-xpm
--- aclocal.m4 2012/08/25 23:05:32 1.345
+++ aclocal.m4 2012/09/03 17:21:43 1.346
@@ -1,4 +1,4 @@
-dnl $XTermId: rcshist.html,v 1.16 2015/03/01 20:34:33 tom Exp $
+dnl $XTermId: rcshist.html,v 1.16 2015/03/01 20:34:33 tom Exp $
dnl
dnl ---------------------------------------------------------------------------
dnl
@@ -3554,7 +3554,7 @@
AC_SUBST(no_pixmapdir)
])dnl
dnl ---------------------------------------------------------------------------
-dnl CF_WITH_XPM version: 1 updated: 2012/07/22 09:18:02
+dnl CF_WITH_XPM version: 2 updated: 2012/09/03 05:42:04
dnl -----------
dnl Test for Xpm library, update compiler/loader flags if it is wanted and
dnl found.
@@ -3571,7 +3571,7 @@
AC_ARG_WITH(xpm,
[ --with-xpm=DIR use Xpm library for colored icon, may specify path],
[cf_Xpm_library="$withval"],
- [cf_Xpm_library=no])
+ [cf_Xpm_library=yes])
AC_MSG_RESULT($cf_Xpm_library)
if test "$cf_Xpm_library" != no ; then
rlog filename
Will show you the basic history.
rcsdiff -r5.1 -r5.2 filename
To see a diff between two revisions. Do not put a space after the -r.
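If you do decide to script it yourself, here is a rough sketch of the idea (assuming rlog and rcsdiff are on PATH, and only walking trunk revisions):

# Rough sketch: emulate "git log -p" for one RCS-tracked file by listing trunk
# revisions with rlog (newest first) and diffing each one against its predecessor.
import subprocess, sys

def run(*cmd):
    # rcsdiff exits non-zero when the revisions differ, so don't use check=True
    return subprocess.run(cmd, capture_output=True, text=True).stdout

filename = sys.argv[1]
log = run("rlog", filename)
revs = [line.split()[1] for line in log.splitlines() if line.startswith("revision ")]

for newer, older in zip(revs, revs[1:]):
    print(run("rlog", "-r" + newer, filename))          # log message for this revision
    print(run("rcsdiff", "-u", "-r" + older, "-r" + newer, filename))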
Read the ,v file. It contains the full history.
I'm using Redis (2.4.2) and with the INFO command I can read stats about my Redis server.
There are many stats, including some about how much memory is used. And one is "used_memory_peak" that seems to hold the maximum amount of memory Redis has ever taken.
I've deleted a bunch of keys, and I'd like to reset this stat, since it affects the scale of my Munin graphs.
There is a CONFIG RESETSTAT command, but it doesn't seem to affect this particular stat.
Any idea how I could do this, without having to export/delete/import my dataset ?
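For reference, the behaviour is easy to reproduce from a client; here is a quick sketch with redis-py (assuming a local instance on the default port):

# Quick check with redis-py: CONFIG RESETSTAT leaves used_memory_peak untouched.
import redis

r = redis.Redis()  # assumes a local instance on localhost:6379
print(r.info()["used_memory_peak"])
r.execute_command("CONFIG", "RESETSTAT")
print(r.info()["used_memory_peak"])  # same value as before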
EDIT:
According to @antirez himself (issue 369 on GitHub), this is intended behavior, but this feature could be improved to be more useful in a future release.
The implementation of CONFIG RESETSTAT is quite simple:
} else if (!strcasecmp(c->argv[1]->ptr,"resetstat")) {
    if (c->argc != 2) goto badarity;
    server.stat_keyspace_hits = 0;
    server.stat_keyspace_misses = 0;
    server.stat_numcommands = 0;
    server.stat_numconnections = 0;
    server.stat_expiredkeys = 0;
    addReply(c,shared.ok);
So it does not initialize the server.stat_peak_memory field used to store the maximum amount of memory ever used by Redis. I don't know if it is a bug or a feature.
Here is a hack to reset the value without having to stop Redis. The idea is to use gdb in batch mode to just change the value of the variable (which is part of a static structure). Normally Redis is compiled with debugging symbols.
# Here we have plenty of things in this instance
> ./redis-cli info | grep peak
used_memory_peak:1363052184
used_memory_peak_human:1.27G
# Let's do some cleaning: everything is wiped out
# don't do this in production !!!
> ./redis-cli flushdb
OK
# Again the same values, while some memory has been freed
> ./redis-cli info | grep peak
used_memory_peak:1363052184
used_memory_peak_human:1.27G
# Here is the magic command: reset the parameter with gdb (output and warnings to be ignored)
> gdb -batch -n -ex 'set variable server.stat_peak_memory = 0' ./redis-server `pidof redis-server`
Missing separate debuginfo for /lib64/libm.so.6
Missing separate debuginfo for /lib64/libdl.so.2
Missing separate debuginfo for /lib64/libpthread.so.0
[Thread debugging using libthread_db enabled]
[New Thread 0x41001940 (LWP 22837)]
[New Thread 0x40800940 (LWP 22836)]
Missing separate debuginfo for /lib64/libc.so.6
Missing separate debuginfo for /lib64/ld-linux-x86-64.so.2
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff51ff000
0x00002af0b5eef218 in epoll_wait () from /lib64/libc.so.6
# And now, result is different: great !
> ./redis-cli info | grep peak
used_memory_peak:718768
used_memory_peak_human:701.92K
This is a hack: think twice before applying this trick on a production instance.
Simple trick to clear peak memory:
Step 1:
/home/logproc/redis/bin/redis-cli BGREWRITEAOF
Wait till it finishes rewriting the AOF file.
Step 2:
Restart the Redis server.
Done. That's it.
I have a compressed Hadoop SequenceFile from a customer which I'd like to inspect. I do not have full schema information at this time (which I'm working on separately).
But in the interim (and in the hopes of a generic solution), what are my options for inspecting the file?
I found the tool forqlift: http://www.exmachinatech.net/01/forqlift/
I have tried 'forqlift list' on the file. It complains that it can't load the classes for the custom Writable subclasses included, so I will need to track down those implementations.
But is there any other option available in the meantime? I understand that most likely I can't extract the data, but is there some tool for scanning the file to see how many key/value pairs it holds, and of what types?
From shell:
$ hdfs dfs -text /user/hive/warehouse/table_seq/000000_0
or directly from hive (which is much faster for small files, because it is running in an already started JVM)
hive> dfs -text /user/hive/warehouse/table_seq/000000_0
works for sequence files.
Check the SequenceFileReadDemo class in the sample code for 'Hadoop: The Definitive Guide'. Sequence files have the key/value types embedded in them. Use SequenceFile.Reader.getKeyClass() and SequenceFile.Reader.getValueClass() to get the type information.
My first thought would be to use the Java API for sequence files to try to read them. Even if you don't know which Writable is used by the file, you can guess and check the error messages (there may be a better way that I don't know).
For example:
private void readSeqFile(Path pathToFile) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, pathToFile, conf);

    Text key = new Text(); // this could be the wrong type
    Text val = new Text(); // also could be wrong

    while (reader.next(key, val)) {
        System.out.println(key + ":" + val);
    }
}
This program would crash if those are the wrong types, but the Exception should say which Writable type the key and value actually are.
Edit:
Actually, if you do less file.seq, you can usually read some of the header and see what the Writable types are (at least for the first key/value). On one file, for example, I see:
SEQ^F^Yorg.apache.hadoop.io.Text"org.apache.hadoop.io.BytesWritable
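If you only need the class names and don't have Hadoop at hand, the header is simple enough to parse directly. A small sketch in pure Python (it assumes the class names are shorter than 128 bytes, so each vint-encoded length fits in a single byte, which is the normal case):

# Print the key/value Writable class names from a SequenceFile header.
import sys

def read_classname(f):
    length = f.read(1)[0]            # single-byte vint length (fine for short names)
    return f.read(length).decode("utf-8")

with open(sys.argv[1], "rb") as f:
    if f.read(3) != b"SEQ":
        sys.exit("not a SequenceFile")
    version = f.read(1)[0]
    print("version:    ", version)
    print("key class:  ", read_classname(f))
    print("value class:", read_classname(f))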
I'm not a Java or Hadoop programmer, so my way of solving the problem may not be the best one, but anyway.
I spent two days solving the problem of reading a SequenceFile locally (Debian Linux amd64) without installing Hadoop.
The provided sample
while (reader.next(key, val)) {
    System.out.println(key + ":" + val);
}
works well for Text, but didn't work for BytesWritable compressed input data.
What did I do?
I downloaded this utility for creating (writing) SequenceFile Hadoop data:
github.com/shsdev/sequencefile-utility/archive/master.zip
got it working, and then modified it for reading input Hadoop SequenceFiles.
Instructions for running this utility from scratch on Debian:
sudo apt-get install maven2
sudo mvn install
sudo apt-get install openjdk-7-jdk
edit "sudo vi /usr/bin/mvn",
change `which java` to `which /usr/lib/jvm/java-7-openjdk-amd64/bin/java`
Also I've added (probably not required)
'
PATH="/home/mine/perl5/bin${PATH+:}${PATH};/usr/lib/jvm/java-7-openjdk-amd64/"; export PATH;
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export JAVA_VERSION=1.7
'
to ~/.bashrc
Then usage:
sudo mvn install
~/hadoop_tools/sequencefile-utility/sequencefile-utility-master$ /usr/lib/jvm/java-7-openjdk-amd64/bin/java -jar ./target/sequencefile-utility-1.0-jar-with-dependencies.jar
-- and this doesn't break the default java 1.6 installation that is required for FireFox/etc.
To resolve an error with SequenceFile compatibility (e.g. "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable"), I used the libs from the Hadoop master server as is (a kind of hack):
scp root@10.15.150.223:/usr/lib/libhadoop.so.1.0.0 ~/
sudo cp ~/libhadoop.so.1.0.0 /usr/lib/
scp root@10.15.150.223:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/libjvm.so ~/
sudo cp ~/libjvm.so /usr/lib/
sudo ln -s /usr/lib/libhadoop.so.1.0.0 /usr/lib/libhadoop.so.1
sudo ln -s /usr/lib/libhadoop.so.1.0.0 /usr/lib/libhadoop.so
One night over coffee, I wrote this code for reading SequenceFile Hadoop input files (run with "/usr/lib/jvm/java-7-openjdk-amd64/bin/java -jar ./target/sequencefile-utility-1.3-jar-with-dependencies.jar -d test/ -c NONE"):
import org.apache.hadoop.io.*;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.ValueBytes;
import org.apache.hadoop.fs.Path;
import java.io.DataOutputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;

// 'fs', 'conf' and 'logger' come from the surrounding SequenceFileWriter.java (see below)
Path file = new Path("/home/mine/mycompany/task13/data/2015-08-30");
SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
long pos = reader.getPosition();
logger.info("GO from pos " + pos);
DataOutputBuffer rawKey = new DataOutputBuffer();
ValueBytes rawValue = reader.createValueBytes();

int DEFAULT_BUFFER_SIZE = 1024 * 1024;
DataOutputBuffer kobuf = new DataOutputBuffer(DEFAULT_BUFFER_SIZE);
kobuf.reset();

int rl;
do {
    // nextRaw reads the record without deserializing it into Writables
    rl = reader.nextRaw(kobuf, rawValue);
    logger.info("read len for current record: " + rl + " and in more details ");
    if (rl >= 0) {
        logger.info("read key " + new String(kobuf.getData(), 0, kobuf.getLength())
                + " (keylen " + kobuf.getLength() + ") and data " + rawValue.getSize());
        FileOutputStream fos = new FileOutputStream("/home/mine/outb");
        DataOutputStream dos = new DataOutputStream(fos);
        rawValue.writeUncompressedBytes(dos);
        dos.close();
        kobuf.reset();
    }
} while (rl > 0);
I've just added this chunk of code to file src/main/java/eu/scape_project/tb/lsdr/seqfileutility/SequenceFileWriter.java just after the line
writer = SequenceFile.createWriter(fs, conf, path, keyClass,
valueClass, CompressionType.get(pc.getCompressionType()));
Thanks to these sources of info:
Links:
If you use hadoop-core instead of mahout, you will have to download asm-3.1.jar manually:
search.maven.org/remotecontent?filepath=org/ow2/util/asm/asm/3.1/asm-3.1.jar
search.maven.org/#search|ga|1|asm-3.1
The list of available Mahout repos:
repo1.maven.org/maven2/org/apache/mahout/
Intro to Mahout:
mahout.apache.org/
A good resource for learning the interfaces and sources of Hadoop Java classes (I used it for writing my own code for reading SequenceFiles):
http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.action/0.2.7/org/apache/hadoop/io/BytesWritable.java
Sources of the tb-lsdr-seqfilecreator project that I used for creating my own SequenceFile reader:
www.javased.com/?source_dir=scape/tb-lsdr-seqfilecreator/src/main/java/eu/scape_project/tb/lsdr/seqfileutility/ProcessParameters.java
stackoverflow.com/questions/5096128/sequence-files-in-hadoop - the same example (read key, value) that doesn't work
https://github.com/twitter/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/RawSequenceFileRecordReader.java - this one helped me (I used reader.nextRaw, the same as in nextKeyValue() and other subs)
Also, I changed ./pom.xml to use native apache.hadoop instead of mahout.hadoop, but this is probably not required, because the bugs for reader.next(key, value) are the same for both, so I had to use reader.nextRaw(keyRaw, valueRaw) instead:
diff ../../sequencefile-utility/sequencefile-utility-master/pom.xml ./pom.xml
9c9
< <version>1.0</version>
---
> <version>1.3</version>
63c63
< <version>2.0.1</version>
---
> <version>2.4</version>
85c85
< <groupId>org.apache.mahout.hadoop</groupId>
---
> <groupId>org.apache.hadoop</groupId>
87c87
< <version>0.20.1</version>
---
> <version>1.1.2</version>
93c93
< <version>1.1</version>
---
> <version>1.1.3</version>
I was just playing with Dumbo. When you run a Dumbo job on a Hadoop cluster, the output is a sequence file. I used the following to dump out an entire Dumbo-generated sequence file as plain text:
$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
-input totals/part-00000 \
-output unseq \
-inputformat SequenceFileAsTextInputFormat
$ bin/hadoop fs -cat unseq/part-00000
I got the idea from here.
Incidentally, Dumbo can also output plain text.
Following the answer of Praveen Sripati, here is a small example of SequenceFileReadDemo.java from 'Hadoop: The Definitive Guide' by Tom White.
The data is in HDFS at user/hduser/output-hashsort/ and the file is part-r-00001.
In Eclipse, I passed that path in the Arguments tab of the run configuration and inspected part of the output with the debugger.
I am currently trying to configure collective.xsendfile, Apache mod_xsendfile and Plone 4.
Apparently the Apache process does not see the blobstorage files on the file system because of their restrictive permissions:
ls -lh var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x18/0xd5/0x19/0x038ea09d0eddc611.blob
-r-------- 1 plone plone 1006K May 28 15:30 var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x18/0xd5/0x19/0x038ea09d0eddc611.blob
How do I configure blobstorage to give additional permissions, so that Apache could access these files?
The modes with which the blobstorage writes its directories and files are hardcoded in ZODB.blob. Specifically, the standard ZODB.blob.FilesystemHelper class creates secure directories (readable and writable only by the current user) by default.
You could provide your own implementation of FilesystemHelper that either makes this configurable or simply sets the directory modes to 0750, and then patch ZODB.blob.BlobStorageMixin to use your class instead of the default:
import os

from ZODB import utils
from ZODB.blob import FilesystemHelper, BlobStorageMixin
from ZODB.blob import log, LAYOUT_MARKER


class GroupReadableFilesystemHelper(FilesystemHelper):

    def create(self):
        if not os.path.exists(self.base_dir):
            os.makedirs(self.base_dir, 0750)
            log("Blob directory '%s' does not exist. "
                "Created new directory." % self.base_dir)
        if not os.path.exists(self.temp_dir):
            os.makedirs(self.temp_dir, 0750)
            log("Blob temporary directory '%s' does not exist. "
                "Created new directory." % self.temp_dir)

        if not os.path.exists(os.path.join(self.base_dir, LAYOUT_MARKER)):
            layout_marker = open(
                os.path.join(self.base_dir, LAYOUT_MARKER), 'wb')
            layout_marker.write(self.layout_name)
        else:
            layout = open(os.path.join(self.base_dir, LAYOUT_MARKER), 'rb'
                          ).read().strip()
            if layout != self.layout_name:
                raise ValueError(
                    "Directory layout `%s` selected for blob directory %s, but "
                    "marker found for layout `%s`" %
                    (self.layout_name, self.base_dir, layout))

    def isSecure(self, path):
        """Ensure that (POSIX) path mode bits are 0750."""
        return (os.stat(path).st_mode & 027) == 0

    def getPathForOID(self, oid, create=False):
        """Given an OID, return the path on the filesystem where
        the blob data relating to that OID is stored.

        If the create flag is given, the path is also created if it didn't
        exist already.
        """
        # OIDs are numbers and sometimes passed around as integers. For our
        # computations we rely on the 64-bit packed string representation.
        if isinstance(oid, int):
            oid = utils.p64(oid)

        path = self.layout.oid_to_path(oid)
        path = os.path.join(self.base_dir, path)
        if create and not os.path.exists(path):
            try:
                os.makedirs(path, 0750)
            except OSError:
                # We might have lost a race. If so, the directory
                # must exist now
                assert os.path.exists(path)
        return path


def _blob_init_groupread(self, blob_dir, layout='automatic'):
    self.fshelper = GroupReadableFilesystemHelper(blob_dir, layout)
    self.fshelper.create()
    self.fshelper.checkSecure()
    self.dirty_oids = []

BlobStorageMixin._blob_init = _blob_init_groupread
Quite a handful - you may want to make this a feature request for ZODB3 :-)
While setting up a backup routine for a ZOPE/ZEO setup, I ran into the same problem with blob permissions.
After trying to apply the monkey patch that Mikko wrote (which is not that easy), I came up with a "real" patch to solve the problem.
The patch suggested by Martijn is not complete, it still does not set the right mode on blob files.
So here's my solution:
1.) Create a patch containing:
Index: ZODB/blob.py
===================================================================
--- ZODB/blob.py (Revision 121959)
+++ ZODB/blob.py (Arbeitskopie)
@@ -337,11 +337,11 @@
     def create(self):
         if not os.path.exists(self.base_dir):
-            os.makedirs(self.base_dir, 0700)
+            os.makedirs(self.base_dir, 0750)
             log("Blob directory '%s' does not exist. "
                 "Created new directory." % self.base_dir)
         if not os.path.exists(self.temp_dir):
-            os.makedirs(self.temp_dir, 0700)
+            os.makedirs(self.temp_dir, 0750)
             log("Blob temporary directory '%s' does not exist. "
                 "Created new directory." % self.temp_dir)
@@ -359,8 +359,8 @@
                 (self.layout_name, self.base_dir, layout))

     def isSecure(self, path):
-        """Ensure that (POSIX) path mode bits are 0700."""
-        return (os.stat(path).st_mode & 077) == 0
+        """Ensure that (POSIX) path mode bits are 0750."""
+        return (os.stat(path).st_mode & 027) == 0

     def checkSecure(self):
         if not self.isSecure(self.base_dir):
@@ -385,7 +385,7 @@
         if create and not os.path.exists(path):
             try:
-                os.makedirs(path, 0700)
+                os.makedirs(path, 0750)
             except OSError:
                 # We might have lost a race. If so, the directory
                 # must exist now
@@ -891,7 +891,7 @@
             file2.close()
         remove_committed(f1)
     if chmod:
-        os.chmod(f2, stat.S_IREAD)
+        os.chmod(f2, stat.S_IRUSR | stat.S_IRGRP)
 if sys.platform == 'win32':
     # On Windows, you can't remove read-only files, so make the
You can also take a look at the patch here -> http://pastebin.com/wNLYyXvw
2.) Store the patch under name 'blob.patch' in your buildout root directory
3.) Extend your buildout configuration:
parts +=
    patchblob
    postinstall
[patchblob]
recipe = collective.recipe.patch
egg = ZODB3
patches = blob.patch
[postinstall]
recipe = plone.recipe.command
command =
    chmod -R g+r ${buildout:directory}/var
    find ${buildout:directory}/var -type d | xargs chmod g+x
update-command = ${:command}
The postinstall section sets the desired group read permissions on already existing blobs. Note that execute permission must also be given to the blob folders, so that the group can enter the directories.
I've tested this patch with ZODB 3.10.2 and 3.10.3.
As Martijn suggested, this should be configurable and part of the ZODB directly.