Optimizing size of eps/pdf files generated by Mathematica

How to optimize size of an eps or pdf file generated by Mathematica?
It is common that the file size is 50-100x bigger than it should be (an example below). For some applications (e.g. putting a figure in a publication, or even more so, putting it on a large poster) I need to have the axes in vector graphics, so using raster graphics for everything is not the best option for me.
Every practical solution (either with setting the right options in Mathematica or with doing further conversions in other applications) will be appreciated.
For example, the following code:
plot = ListDensityPlot[
Table[Random[], {100}, {100}],
InterpolationOrder -> 0]
Export["testplot.eps", plot]
Export["testplot.pdf", plot]
produces an eps file of 3.3 MB and a pdf of 5 MB (on Mathematica 7 on Mac OS X 10.6, if it makes a difference).
For comparison, a 3x3 plot with the same axes is 8 kB (pdf) to 20 kB (eps).
100x100 points is 30 kB as bmp (and a bit less as png).
The issue is the same for other types of plots, with the emphasis on ListPlot3D.

You may have figured out how to apply Alexey's answer in the link he provided. But in case you are having trouble, here is how I apply the technique to 2D graphics.
I have found out the hard way that if you want to create a good plot you need to be very specific with Mathematica. For this reason, as you may have noticed in my post Rasters in 3D, I created an object specifying all the options so that Mathematica can be happy.
in = 72;
G2D = Graphics[{},
AlignmentPoint -> Center,
AspectRatio -> 1,
Axes -> False,
AxesLabel -> None,
BaseStyle -> {FontFamily -> "Arial", FontSize -> 12},
Frame -> True,
FrameStyle -> Directive[Black],
FrameTicksStyle -> Directive[10, Black],
ImagePadding -> {{20, 5}, {15, 5}},
ImageSize -> 5 in,
LabelStyle -> Directive[Black],
PlotRange -> All,
PlotRangeClipping -> False,
PlotRangePadding -> Scaled[0.02]
];
I should mention here that you must specify ImagePadding. If you set it to All, your eps file will be different from what Mathematica shows you. In any case, I think having this object allows you to change properties much more easily.
Now we can move on to your problem:
plot = ListDensityPlot[
Table[Random[], {100}, {100}],
InterpolationOrder -> 0,
Options[G2D]
]
The following separates the axes and the raster and combines them into result:
axes = Graphics[{}, AbsoluteOptions[plot]];
fig = Show[plot, FrameStyle -> Directive[Opacity[0]]];
fig = Magnify[fig, 5];
fig = Rasterize[fig, Background -> None];
axes = First@ImportString[ExportString[axes, "PDF"], "PDF"];
result = Show[axes, Epilog -> Inset[fig, {0, 0}, {0, 0}, ImageDimensions[axes]]]
The only difference here, which at this point I cannot explain, is the axes labels: they now have a decimal point. Finally, we export the result:
Export["Result.pdf", result];
Export["Result.eps", result];
The result is files of 115 KB for the pdf and 168 KB for the eps.
UPDATE:
If you are using Mathematica 7, the eps file will not come out correctly. All you will see is your main figure with black on the sides. This is a bug in version 7; it is fixed in Mathematica 8.
I had mentioned previously that I did not know why the axes labels were different. Alexey Popkov came up with a fix for that. To create axes, fig and result, use the following:
axes = Graphics[{}, FilterRules[AbsoluteOptions[plot], Except[FrameTicks]]];
fig = Show[plot, FrameStyle -> Directive[Opacity[0]]];
fig = Magnify[fig, 5];
fig = Rasterize[fig, Background -> None];
axes = First@ImportString[ExportString[axes, "PDF"], "PDF"];
result = Show[axes, Epilog -> Inset[fig, {0, 0}, {0, 0}, ImageDimensions[axes]]]

I have had some success with both of the following:
(1) Rasterizing plots before saving. Quality is generally reasonable, size drops considerably.
(2) Save to PostScript, then (I'm on a Linux machine) use ps2pdf to get the pdf. This tends to be significantly smaller than saving directly to pdf from Mathematica.
Daniel Lichtblau

ImageResolution works well for .pdf but I haven't had success with .eps.
Export["testplot600.pdf", plot, ImageResolution -> 600]
The output size is 242 KB for 600 dpi and 94 KB for 300 dpi. You can also set ImageSize for Export.
If you want to go the third-party route, I'd recommend GraphicConverter. It is very reliable and has many, many options.


Images rotated when added to PDF in itext7

I'm using the following extension method I built on top of itext7's com.itextpdf.layout.Document type to apply images to PDF documents in my application:
fun Document.writeImage(imageStream: InputStream, page: Int, x: Float, y: Float, width: Float, height: Float) {
    val imageData = ImageDataFactory.create(imageStream.readBytes())
    val image = Image(imageData)
    val pageHeight = pdfDocument.getPage(page).pageSize.height
    image.scaleAbsolute(width, height)
    val lowerLeftX = x
    val lowerLeftY = pageHeight - y - image.imageScaledHeight
    image.setFixedPosition(page, lowerLeftX, lowerLeftY)
    add(image)
}
Overall, this works, but with one exception: I've encountered a subset of documents where the images are placed as if the document origin is rotated 90 degrees, even though the content of the document is presented properly oriented underneath.
Here is a redacted copy of one of the PDFs I'm experiencing this issue with. I'm wondering if anyone would be able to tell me why itext7 is having difficulties writing to this document, and what I can do to fix it -- or alternatively, if it's a potential bug in the higher level functionality of com.itextpdf.layout in itext7?
Some Additional Notes
I'm aware that drawing on a PDF works via a series of instructions concatenated to the PDF. The code above works on other PDFs we've had issues with in the past, so com.itextpdf.layout.Document does appear to be normalizing the coordinate space prior to drawing. Thus, the issue I describe above seems to be going undetected by itext?
The rotation metadata in the PDF that itext7 reports from a "good" PDF without this issue seems to be the same as the rotation metadata in PDFs like the one I've linked above. This means I can't perform some kind of brute-force fix through detection.
I would love any solution that does not require me to flatten the PDF through some form of broad operation.
I can talk only about the document you've shared.
It contains 4 pages.
The /Rotate property of the first page is 0; for the other pages it is 270 (which defines a 90-degree counterclockwise rotation).
iText indeed tries to normalize the coordinate space for each page.
That's why, when you add an image to pages 2-4 of the document, it is rotated by 270 (90 counterclockwise) degrees.
... Even though the content of the document is presented properly oriented underneath.
Content of pages 2-4 looks like
q
0 -612 792 0 0 612 cm
/Im0 Do
Q
This is an image with a transformation applied.
0 -612 792 0 0 612 cm represents the composite transformation matrix.
From ISO 32000
A transformation matrix in PDF shall be specified by six numbers,
usually in the form of an array containing six elements. In its most
general form, this array is denoted [a b c d e f]; it can represent
any linear transformation from one coordinate system to another.
We can extract the rotation from that matrix.
You can find how to decompose the matrix here:
https://math.stackexchange.com/questions/237369/given-this-transformation-matrix-how-do-i-decompose-it-into-translation-rotati
The rotation is defined by the following matrix:
0 -1
1 0
This is a rotation by -90 (270) degrees.
Important note: in this case a positive angle means a counterclockwise rotation.
ISO 32000
Rotations shall be produced by [rc rs -rs rc 0 0], where rc = cos(q)
and rs = sin(q) which has the effect of rotating the coordinate system
axes by an angle q counter clockwise.
So the image has been rotated by the same angle, but in the opposite direction, relative to the page.
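To make this concrete, here is a minimal sketch of extracting that rotation angle from the six numbers of a cm matrix (assuming the matrix is only rotation plus scaling, with no shear, as in this document):
import math

def cm_rotation_degrees(a, b, c, d, e, f):
    # for rotation-plus-scaling matrices: a = s*cos(q), b = s*sin(q)
    return math.degrees(math.atan2(b, a))

# the matrix from pages 2-4: "0 -612 792 0 0 612 cm"
print(cm_rotation_degrees(0, -612, 792, 0, 0, 612))  # -90.0, i.e. 270 degrees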

Tuning first_stage_anchor_generator in faster rcnn model

I am trying to detect some very small objects (~25x25 pixels) in large images (~2040x1536 pixels) using the Faster RCNN model from the object detection API here: https://github.com/tensorflow/models/tree/master/research/object_detection
I am very confused about the following configuration parameters (I have read the proto file and also tried modifying them and testing):
first_stage_anchor_generator {
  grid_anchor_generator {
    scales: [0.25, 0.5, 1.0, 2.0]
    aspect_ratios: [0.5, 1.0, 2.0]
    height_stride: 16
    width_stride: 16
  }
}
I am very new to this area; if someone can explain a bit about these parameters to me, it would be much appreciated.
My question is: how should I adjust the above (or other) parameters to accommodate the fact that I have very small, fixed-size objects to detect in large images?
Thanks
I don't know the actual answer, but I suspect that the way Faster RCNN works in Tensorflow object detection is as follows:
this article says:
"Anchors play an important role in Faster R-CNN. An anchor is a box. In the default configuration of Faster R-CNN, there are 9 anchors at a position of an image. The following graph shows 9 anchors at the position (320, 320) of an image with size (600, 800)."
and the author gives an image showing an overlap of boxes; those are the proposed regions that contain the object, based on the "CNN" part of the "RCNN" model. Next comes the "R" part of the "RCNN" model, which is the region proposal. To do that, there is another neural network that is trained alongside the CNN to figure out the best-fit box. There are a lot of "proposals" for where an object could be, based on all the boxes, but we still don't know where it is.
This "region proposal" neural net's job is to find the correct region and it is trained based on the labels you provide with the coordinates of each object in the image.
Looking at this file, I noticed:
line 174: heights = scales / ratio_sqrts * base_anchor_size[0]
line 175: widths = scales * ratio_sqrts * base_anchor_size[1]
which seems to be the final goal of the configuration found in the config file (to generate a list of sliding windows with known widths and heights), while base_anchor_size is created with a default of [256, 256]. In the comments the author of the code wrote:
"For example, setting scales=[.1, .2, .2]
and aspect ratios = [2,2,1/2] means that we create three boxes: one with scale
.1, aspect ratio 2, one with scale .2, aspect ratio 2, and one with scale .2
and aspect ratio 1/2. Each box is multiplied by "base_anchor_size" before
placing it over its respective center."
which gives insight into how these boxes are created: the code seems to be creating a list of boxes based on the scales = [...] and aspect_ratios = [...] parameters that will be used to slide over the image. The scale is fairly straightforward: it is how much the default square box of 256 by 256 should be scaled before it is used. The aspect ratio is what changes the original square box into a rectangle that is closer to the (scaled) shape of the objects you expect to encounter.
Meaning, to optimally configure the scales and aspect ratios, you should find the "typical" sizes of the objects in the image, whatever they are (e.g. 20 by 30, 5 by 10, etc.), figure out how much the default 256 by 256 square box should be scaled to optimally fit them, then find the "typical" aspect ratios of your objects (according to Google, an aspect ratio is the ratio of the width to the height of an image or screen) and set those as your aspect ratio parameters.
Note: it seems that the number of elements in the scales and aspect_ratios lists in the config file should be the same, but I don't know for sure.
Also, I am not sure how to find the optimal stride, but if your objects are smaller than 16 by 16 pixels, the sliding window you created by setting the scales and aspect ratios might just skip your object altogether.
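To illustrate what those two lines produce for the config in the question, here is a rough sketch of the resulting anchor sizes (based on my reading of grid_anchor_generator; the cartesian product of scales and aspect ratios is an assumption, and the default base_anchor_size of 256 is used):
import numpy as np

scales = np.array([0.25, 0.5, 1.0, 2.0])
aspect_ratios = np.array([0.5, 1.0, 2.0])
base_anchor_size = 256.0  # default, assumed square here

# grid_anchor_generator appears to combine every scale with every aspect ratio
scales_grid, ratios_grid = np.meshgrid(scales, aspect_ratios)
ratio_sqrts = np.sqrt(ratios_grid.ravel())

heights = scales_grid.ravel() / ratio_sqrts * base_anchor_size  # line 174
widths = scales_grid.ravel() * ratio_sqrts * base_anchor_size   # line 175

for w, h in zip(widths, heights):
    print("%6.1f x %6.1f" % (w, h))
# the smallest square anchor (scale 0.25, ratio 1.0) is 64 x 64, still much
# bigger than a 25x25 object, which is why smaller scales are usually needed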
As far as I know, proposal anchors are generated only for Faster RCNN model types. In this file you can see which parameters may be set for anchor generation within the line you mentioned from the config.
I tried setting base_anchor_size, but failed. However, the FasterRCNNTutorial mentions that:
[...] you also need to configure the anchor sizes and aspect ratios in the .config file. The base anchor size is 255,255.
The anchor ratios will multiply the x dimension and divide the y dimension, so if you have an aspect ratio of 0.5 your 255x255 anchor becomes 128x510. Each aspect ratio in the list is applied, then the results are multiplied by the scales. So the first step is to resize your images to the training/testing size, then manually check what the smallest and largest objects you expect are, and what the most extreme aspect ratios will be. Set up the config file with values that will cover these cases when the base anchor size is adjusted by the aspect ratios and multiplied by the scales.
I think it's pretty straightforward. I also used this 'workaround'.

OpenCV detect blobs on the image

I need to find (and draw a rect around) / get the max and min radius blobs on the image (samples below).
The problem is to find the correct filters for the image that will allow a Canny or Threshold transformation to highlight the blobs. Then I am going to use findContours to find the rectangles.
I've tried:
Threshold - with different levels
blur->erode->erode->grayscale->canny
changing the image tone with a variety of "lines"
etc. The best result was to detect a piece (20-30%) of a blob, and this info did not allow drawing a rect around the blob. Also, thanks to shadows, dots not related to the blob were detected, which also prevents detecting the area.
As I understand it, I need to find a contour that has hard contrast (not smooth like in a shadow). Is there any way to do that with OpenCV?
Update
cases separately: image 1, image 2, image 3, image 4, image 5, image 6, image 7, image 8, image 9, image 10, image 11, image 12
One more Update
I believe that the blobs have a contrast area at the edge. So I've tried to make the edges stronger: I created two grayscale Mats, A and B, applied a Gaussian blur to the second one, B (to reduce noise a bit), then made some calculations: go around every pixel and find the max difference between (Xi, Yi) of A and the nearby dots from B,
and apply that max difference at (Xi, Yi). So I get something like this:
Am I on the right track? BTW, can I achieve something like this via OpenCV methods?
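One possible way to express that per-pixel max difference with vectorized OpenCV calls (a sketch, not something from the original post): the maximum of |A(x, y) - B(neighbour)| over a 3x3 neighbourhood can be written with a dilation and an erosion of B.
import cv2
import numpy as np

img = cv2.imread("blob.jpg")                      # hypothetical input file
A = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
B = cv2.GaussianBlur(A, (5, 5), 0)                # the slightly denoised copy

kernel = np.ones((3, 3), np.uint8)
B_max = cv2.dilate(B, kernel)                     # max of B over each 3x3 neighbourhood
B_min = cv2.erode(B, kernel)                      # min of B over each 3x3 neighbourhood

# max over the neighbourhood of |A(x, y) - B(neighbour)|
diff = np.maximum(B_max - A, A - B_min)
diff = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("edge_strength.png", diff)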
Update: Image denoising helps to reduce noise and Sobel to highlight the contours; then threshold + findContours and a custom convexHull gets something similar to what I'm looking for, but it is not good for some blobs.
Since there are big differences between the input images, the algorithm should be able to adapt to the situation. Since Canny is based on detecting high frequencies, my algorithm treats the sharpness of the image as the parameter used for preprocessing adaptation. I didn't want to spend a week figuring out the functions for all the data, so I applied a simple, linear function based on 2 images and then tested with a third one. Here are my results:
Have in mind that this is a very basic approach and is only proving a point. It will need experiments, tests, and refining. The idea is to use Sobel and sum over all the pixels acquired. That, divided by the size of the image, should give you a basic estimation of high freq. response of the image. Now, experimentally, I found values of clipLimit for CLAHE filter that work in 2 test cases and found a linear function connecting the high freq. response of the input with a CLAHE filter, yielding good results.
sobel = get_sobel(img)
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
That's the adaptive part. Now for the contours. It took me a while to figure out a correct way of filtering out the noise. I settled on a simple trick: using contour finding twice. First I use it to filter out the unnecessary, noisy contours. Then I continue with some morphological magic to end up with correct blobs for the objects being detected (more details in the code). The final step is to filter the bounding rectangles based on the calculated mean, since, on all of the samples, the blobs are of relatively similar size.
import cv2
import numpy as np
def unsharp_mask(img, blur_size = (5,5), imgWeight = 1.5, gaussianWeight = -0.5):
    gaussian = cv2.GaussianBlur(img, blur_size, 0)
    return cv2.addWeighted(img, imgWeight, gaussian, gaussianWeight, 0)

def smoother_edges(img, first_blur_size, second_blur_size = (5,5), imgWeight = 1.5, gaussianWeight = -0.5):
    img = cv2.GaussianBlur(img, first_blur_size, 0)
    return unsharp_mask(img, second_blur_size, imgWeight, gaussianWeight)

def close_image(img, size = (5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

def open_image(img, size = (5,5)):
    kernel = np.ones(size, np.uint8)
    return cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)

def shrink_rect(rect, scale = 0.8):
    center, (width, height), angle = rect
    width = width * scale
    height = height * scale
    rect = center, (width, height), angle
    return rect

def clahe(img, clip_limit = 2.0):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(5,5))
    return clahe.apply(img)

def get_sobel(img, size = -1):
    sobelx64f = cv2.Sobel(img, cv2.CV_64F, 2, 0, size)
    abs_sobel64f = np.absolute(sobelx64f)
    return np.uint8(abs_sobel64f)
img = cv2.imread("blobs4.jpg")
# save color copy for visualizing
imgc = img.copy()
# resize image to make the analytics easier (a form of filtering)
resize_times = 5
img = cv2.resize(img, None, img, fx = 1 / resize_times, fy = 1 / resize_times)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# use sobel operator to evaluate high frequencies
sobel = get_sobel(img)
# experimentally calculated function - needs refining
clip_limit = (-2.556) * np.sum(sobel)/(img.shape[0] * img.shape[1]) + 26.557
# don't apply clahe if there is enough high freq to find blobs
if(clip_limit < 1.0):
clip_limit = 0.1
# limit clahe if there's not enough details - needs more tests
if(clip_limit > 8.0):
clip_limit = 8
# apply clahe and unsharp mask to improve high frequencies as much as possible
img = clahe(img, clip_limit)
img = unsharp_mask(img)
# filter the image to ensure edge continuity and perform Canny
# (values selected experimentally, using trackbars)
img_blurred = (cv2.GaussianBlur(img.copy(), (2*2+1,2*2+1), 0))
canny = cv2.Canny(img_blurred, 35, 95)
# find first contours
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# prepare black image to draw contours
canvas = np.ones(img.shape, np.uint8)
for c in cnts:
    l = cv2.arcLength(c, False)
    x, y, w, h = cv2.boundingRect(c)
    aspect_ratio = float(w)/h
    # filter "bad" contours (values selected experimentally)
    if l > 500:
        continue
    if l < 20:
        continue
    if aspect_ratio < 0.2:
        continue
    if aspect_ratio > 5:
        continue
    if l > 150 and (aspect_ratio > 10 or aspect_ratio < 0.1):
        continue
    # draw all the other contours
    cv2.drawContours(canvas, [c], -1, (255, 255, 255), 2)
# perform closing and blurring, to close the gaps
canvas = close_image(canvas, (7,7))
img_blurred = cv2.GaussianBlur(canvas, (8*2+1,8*2+1), 0)
# smooth the edges a bit to make sure canny will find continuous edges
img_blurred = smoother_edges(img_blurred, (9,9))
kernel = np.ones((3,3), np.uint8)
# erode to make sure separate blobs are not touching each other
eroded = cv2.erode(img_blurred, kernel)
# perform necessary thresholding before Canny
_, im_th = cv2.threshold(eroded, 50, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(im_th, 11, 33)
# find contours again. this time mostly the right ones
_, cnts, _ = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# calculate the mean area of the contours' bounding rectangles
sum_area = 0
rect_list = []
for i, c in enumerate(cnts):
    rect = cv2.minAreaRect(c)
    _, (width, height), _ = rect
    area = width*height
    sum_area += area
    rect_list.append(rect)
mean_area = sum_area / len(cnts)
# choose only rectangles that fulfill requirement:
# area > mean_area*0.6
for rect in rect_list:
    _, (width, height), _ = rect
    box = cv2.boxPoints(rect)
    box = np.int0(box * 5)
    area = width * height
    if(area > mean_area*0.6):
        # shrink the rectangles, since the shadows and reflections
        # make the resulting rectangle a bit bigger
        # the value was guessed - might need refining
        rect = shrink_rect(rect, 0.8)
        box = cv2.boxPoints(rect)
        box = np.int0(box * resize_times)
        cv2.drawContours(imgc, [box], 0, (0,255,0), 1)
# resize for visualizing purposes
imgc = cv2.resize(imgc, None, imgc, fx = 0.5, fy = 0.5)
cv2.imshow("imgc", imgc)
cv2.imwrite("result3.png", imgc)
cv2.waitKey(0)
Overall I think that's a very interesting problem, a little bit too big to be answered here. The approach I presented is to be treated as a road sign, not a complete solution. The basic idea being:
Adaptive preprocessing.
Finding contours twice: for filtering and then for the actual classification.
Filtering the blobs based on their mean size.
Thanks for the fun and good luck!
Here is the code I used:
import cv2
from sympy import Point, Ellipse
import numpy as np
x1='C:\\Users\\Desktop\\python\\stack_over_flow\\XsXs9.png'
image = cv2.imread(x1,0)
image1 = cv2.imread(x1,1)
x,y=image.shape
median = cv2.GaussianBlur(image,(9,9),0)
median1 = cv2.GaussianBlur(image,(21,21),0)
a=median1-median
c=255-a
ret,thresh1 = cv2.threshold(c,12,255,cv2.THRESH_BINARY)
kernel=np.ones((5,5),np.uint8)
dilation = cv2.dilate(thresh1,kernel,iterations = 1)
kernel=np.ones((5,5),np.uint8)
opening = cv2.morphologyEx(dilation, cv2.MORPH_OPEN, kernel)
cv2.imwrite('D:\\test12345.jpg',opening)
ret,contours,hierarchy = cv2.findContours(opening,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
c=np.size(contours[:])
Blank_window=np.zeros([x,y,3])
Blank_window=np.uint8(Blank_window)
for u in range(0, c-1):
    if (np.size(contours[u]) > 200):
        ellipse = cv2.fitEllipse(contours[u])
        (center, axes, orientation) = ellipse
        majoraxis_length = max(axes)
        minoraxis_length = min(axes)
        eccentricity = (np.sqrt(1-(minoraxis_length/majoraxis_length)**2))
        if (eccentricity < 0.8):
            cv2.drawContours(image1, contours, u, (255,1,255), 3)
cv2.imwrite('D:\\marked.jpg',image1)
Here the problem is to find a near-circular object. This simple solution is based on finding the eccentricity of each and every contour. The objects being detected are the drops of water.
I have a partial solution in place.
FIRST
I initially converted the image to the HSV color space and tinkered with the value channel. On doing so I came across something unique. In almost every image, the droplets have a tiny reflection of light. This was highlighted distinctly in the value channel.
Upon inverting this I was able to obtain the following:
Sample 1:
Sample 2:
Sample 3:
SECOND
Now we have to extract the location of those points. To do so, I performed anomaly detection on the inverted value channel. By anomaly I mean the black dots present in it.
In order to do this I calculated the median of the inverted value channel. I treated pixel values within 70% above and below the median as normal pixels, and every pixel value lying beyond this range as an anomaly. The black dots fit perfectly there.
Sample 1:
Sample 2:
Sample 3:
It did not turn out well for a few images.
As you can see the black dot is due to the reflection of light which is unique to the droplets of water. Other circular edges might be present in the image but the reflection distinguishes the droplet from those edges.
THIRD
Now, since we have the location of these black dots, we can perform Difference of Gaussians (DoG) (also mentioned in the update of the question) and obtain relevant edge information. If the obtained location of a black dot lies within the edges discovered, it is said to be a water droplet.
Disclaimer: This method does not work for all the images. You can add your suggestions to this.
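Since no code was included, here is a rough sketch of the three steps as I read them (the 70% band, blur sizes, and file names are assumptions to tune):
import cv2
import numpy as np

img = cv2.imread("droplets.jpg")                   # hypothetical input file
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]  # FIRST: the value channel
inv = 255 - v                                      # invert it: reflections become dark dots

# SECOND: anomaly detection around the median of the inverted value channel
med = np.median(inv)
anomalies = ((inv < 0.3 * med) | (inv > 1.7 * med)).astype(np.uint8) * 255

# THIRD: Difference of Gaussians for edge information
g1 = cv2.GaussianBlur(v, (3, 3), 0).astype(np.int16)
g2 = cv2.GaussianBlur(v, (21, 21), 0).astype(np.int16)
dog = cv2.convertScaleAbs(g1 - g2)

cv2.imwrite("anomalies.png", anomalies)
cv2.imwrite("dog.png", dog)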
Good day, I am working on this subject, and my advice to you is: first apply denoising filters such as a Gaussian filter, and process the image after that.
You can detect these circles with blob detection rather than with contours, for example as sketched below.
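A minimal SimpleBlobDetector sketch (the parameter values are guesses and would need tuning on the actual droplet images):
import cv2

img = cv2.imread("droplets.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
img = cv2.GaussianBlur(img, (9, 9), 0)                  # denoise first, as suggested above

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100             # guessed lower bound on blob size
params.filterByCircularity = True
params.minCircularity = 0.5      # keep roughly circular shapes

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)

# draw circles whose size reflects the detected blob radius
out = cv2.drawKeypoints(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), keypoints, None,
                        (0, 255, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("blobs_detected.png", out)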

Matplotlib difference between two images

I have images (4000x2000 pixels) that are derived from the same image, but with subtle differences in less than 1% of the pixels. I'd like to plot the two images side-by-side and highlight the regions of the arrays that are different (by highlight I mean I want the pixels that differ to jump out, but still display the color that matches their value). I've been using unfilled rectangles to outline the edges of such pixels so far. I can do this very nicely in small images (~50x50) with:
fig=figure(figsize=(20,15))
ax1=fig.add_subplot(1,2,1)
imshow(image1,interpolation='nearest',origin='lower left')
colorbar()
ax2=fig.add_subplot(122,sharex=ax1, sharey=ax1)
imshow(image2,interpolation='nearest',origin='lower left')
colorbar()
#now show differences
Xspots = image1 != image2
Xx, Xy = nonzero(Xspots)
for x, y in zip(Xx, Xy):
    # a separate Rectangle for each axes; one artist cannot be added to two axes
    ax1.add_patch(Rectangle((y-.5, x-.5), 1, 1, color='w', fill=False, ec='w'))
    ax2.add_patch(Rectangle((y-.5, x-.5), 1, 1, color='w', fill=False, ec='w'))
However this doesn't work so well when the image is very large. Strange things happen, for example when I zoom in the patch disappears. Also, this way sucks because it takes forever to load things when I zoom in/out.
I feel like there must be a better way to do this, maybe one where there is only one patch that determines where all of the things are, rather than a whole bunch of patches. I could do a scatter plot on top of the imshow image, but I don't know how to fix it so that the points will stay exactly the size of the pixel when I zoom in/out.
Any ideas?
I would try something with the alpha channel:
import copy

import numpy as np
import matplotlib.colors
import matplotlib.cm as cm
from matplotlib.pyplot import figure, imshow
N, M = 20, 40
test_data = np.random.rand(N, M)
mark_mask = np.random.rand(N, M) < .01 # mask 1%
# this is redundant in this case, but in general you need it
my_norm = matplotlib.colors.Normalize(vmin=0, vmax=1)
# grab a copy of the color map
my_cmap = copy.copy(cm.get_cmap('cubehelix'))
c_data= my_cmap(my_norm(test_data))
c_data[:, :, 3] = .5 # make everything half alpha
c_data[mark_mask, 3] = 1 # reset the marked pixels as full opacity
# plot it
figure()
imshow(c_data, interpolation='none')
No idea if this will work with your data or not.

pygtk / rsvg - getting size of drawing?

Is it possible for RSVG and Cairo to find the extents of a drawing within an SVG image?
i.e. not the page width/height, but the space actually used by drawing elements.
This doesn't work, it just returns page size:
svg = rsvg.Handle(file="myfile.svg")
(w, h, w2, h2) = svg.get_dimension_data() # gives document's declared size
This doesn't seem to return any information about size:
svg.render_cairo(context) # returns None
This doesn't work, it also returns the page size:
self.svg.get_pixbuf().get_width()
This is with pygtk-all-in-one-2.24.0.win32-py2.7 and RSVG 2.22.3-1_win32, in which I can't find the get_dimensions_sub() function mentioned in other answers.
I've searched the web tonight trying to solve this seemingly simple problem. There does not seem to be a simple way of getting the bounding box of the drawing with rsvg, cairo or similar tools. Unless I'm missing something obvious.
However, you can call Inkscape with the --query-all option. This gives you the dimensions of all objects in an SVG file, and the full drawing as the first entry in the list.
import subprocess
output = subprocess.check_output(["inkscape", "--query-all", "myfile.svg"])
values = [line.split(',') for line in output.split('\n')]
whole_drawing = values[0]
[x, y, width, height] = map(float, whole_drawing[1:])
Now you'll have the drawing's position in x, y and its width and height. With this, it becomes simple to use rsvg and cairo to redraw the clipped SVG to a new file.
I created a simple tool to do this; I hope the code is rather easy to understand.
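For reference, a minimal sketch of that redraw step (not the tool itself; it reuses the old-style rsvg binding from the question and assumes the Inkscape values can be used directly as the surface size in points):
import cairo
import rsvg

# x, y, width, height as obtained from "inkscape --query-all" above
x, y, width, height = 10.0, 20.0, 300.0, 200.0  # placeholder values

surface = cairo.SVGSurface("clipped.svg", width, height)
ctx = cairo.Context(surface)
ctx.translate(-x, -y)          # shift so the drawing's bounding box starts at the origin
svg = rsvg.Handle(file="myfile.svg")
svg.render_cairo(ctx)
surface.finish()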