How to apply a texture pattern to 3D bars in a PDF generated using iReport?

I have a requirement to apply a texture pattern to the 3D bars in a 3D bar chart using iReport. I can see the texture pattern in the default JRViewer, but when I generate a PDF from the same report, the texture pattern is gone and the 3D bars are transparent.
Does anyone have a solution?

With a little research we found the answer. There is an option in iReport for charts called renderType; set it to svg (Scalable Vector Graphics) and the texture pattern is applied in the PDF as well.
The disadvantage of doing this is that the PDF file size increases.
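For reference, this is roughly what the setting looks like in the report's JRXML source (a sketch; the attribute name follows the JasperReports JRXML schema, and the surrounding chart definition is abbreviated):

    <bar3DChart>
        <!-- renderType="svg" keeps the chart as vector graphics in the PDF,
             so fills such as texture patterns survive the export -->
        <chart renderType="svg">
            ...
        </chart>
        ...
    </bar3DChart>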

Related

Rendering line art with constant screen width

I have a line art texture applied to an object in 3D space. The default behavior is for the object and the texture to receive perspective scaling based on the perspective model view projection matrix. Is there any established technique to keep the positioning and scaling of the 3D object, while keeping the line width constant relative to the screen? The desired effect is as though a pen (fixed screen width) were used to trace a path on the 3D object.
Would something like SDF-based font rendering help?
Or maybe some kind of projective texture mapping?
Or render the object and texture to a buffer and expand the lines using edge detection?
Unfortunately, I'm using OGL ES 2, so I can't use a geometry shader or anything like that.
The solution I came up with is inspired by procedural SDF generation, as @Felipe suggested, combined with Chris Green's Improved Alpha-Tested Magnification for Vector Textures and Special Effects.
Basically I hand draw shapes into textures using pure red, green, and blue. Then I render the scene using those textures, and generate an SDF on the fly in a second render pass. The SDF generation uses Green's algorithm with a small spread to improve performance. The SDF is then passed to a final render pass that thresholds and antialiases the SDF per Green's approach, using fwidth to maintain a constant line weight regardless of the distance of the object to the camera.
Since the original question was just for the approach/concept, I'm not posting an example at the moment. But I'll see if I can put together a shadertoy sometime soon.
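In the meantime, the gist of the final pass might look roughly like this (a sketch only, not the author's actual shader; uSdf, vUv and the 0.5 iso-level are assumed names and values):

    // GLSL ES 2 fragment shader: threshold the SDF and antialias with
    // fwidth so the line weight stays constant in screen space.
    #extension GL_OES_standard_derivatives : enable  // needed for fwidth in ES 2
    precision mediump float;
    uniform sampler2D uSdf;   // SDF generated in the second render pass
    varying vec2 vUv;
    void main() {
        float d = texture2D(uSdf, vUv).r;      // distance sample; 0.5 = edge
        float w = fwidth(d);                   // how fast d changes per pixel
        float alpha = smoothstep(0.5 - w, 0.5 + w, d);
        gl_FragColor = vec4(vec3(0.0), alpha); // constant-width black line
    }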
You could create the texture procedurally in a fragment shader and use the size of a pixel for interpolations.
See:
Fabrice Neyret's blog
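For instance, something along these lines (a sketch; the stripe pattern and all names are made-up illustrations of the idea):

    // GLSL ES 2 sketch: procedural stripe texture, antialiased over one
    // pixel using the screen-space derivative of the pattern coordinate.
    #extension GL_OES_standard_derivatives : enable
    precision mediump float;
    varying vec2 vUv;
    void main() {
        float s = abs(fract(vUv.x * 10.0) - 0.5);  // 0 at each stripe centre
        float w = fwidth(s);                       // one pixel in pattern units
        float ink = 1.0 - smoothstep(0.05 - w, 0.05 + w, s);
        gl_FragColor = vec4(vec3(1.0 - ink), 1.0); // thin dark lines on white
    }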

How to detect an image between shapes from camera

I've been searching around the web for how to do this and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are about detecting separate shapes or about template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this but from what I already saw, I believe I need to go native for this and any Objective C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
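Here is a sketch of that construction in Python/OpenCV (the helper name, all coordinates and the 400x200 output size are made-up examples, not from the answer):

    import cv2
    import numpy as np

    def similarity_from_two_points(src0, src1, dst0, dst1):
        # A similarity transform maps (x, y) to (a*x - b*y + tx, b*x + a*y + ty).
        # Treating points as complex numbers, a + i*b is simply the ratio of
        # the destination segment to the source segment (rotation and scale).
        z = complex(dst1[0] - dst0[0], dst1[1] - dst0[1]) / \
            complex(src1[0] - src0[0], src1[1] - src0[1])
        a, b = z.real, z.imag
        tx = dst0[0] - (a * src0[0] - b * src0[1])   # translation so that
        ty = dst0[1] - (b * src0[0] + a * src0[1])   # src0 lands on dst0
        return np.float32([[a, -b, tx],
                           [b,  a, ty]])

    frame = cv2.imread("camera_frame.jpg")           # hypothetical input
    # Two detected circle centres, and where they belong in the rectified crop.
    M = similarity_from_two_points((120, 340), (560, 310), (20, 100), (380, 100))
    crop = cv2.warpAffine(frame, M, (400, 200))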

How to use toon shader to convert 3D models to patent drawings

The USPTO requires patent drawings to be black-and-white line images.
I'm using Blender to make 3D models. At first I got this:
The problem is that it's grayscale with no black lines. There's an answer suggesting the use of a toon shader: Convert 3D models to patent diagrams
I checked "Edge" and set "Threshold" to max 255 in "Render" tab, I got:
It's getting better, but more edges need to be drawn. I searched and found a tutorial, http://www.minimaexpresion.es/?p=1070&lang=en , and then I got:
It's too complicated for me and I don't know how to use render layers. So I tried another tutorial, http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/80_Blender%20tutorial%20Toon%20Shading.html , which says I should assign different materials with different colors to different objects, so I tried and got this:
That leaves only one approach to try: render layers. Is there any simple method to make this work? I only want an image like this, converted to indexed colors with a black-and-white palette:
And the "Freestyle" only has one option about line thickness:
You were close in the second image. Instead of using the Edge postprocessor, look in the Render panel and check the box labelled "Freestyle".
Then in the Render Layers panel there will be a list of configurable options for Freestyle, including how thick you want the lines and the minimum angle required to render an edge.
The best results are if you use mostly shadeless materials with simple textures (just solid colour).
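If you prefer to script it, the same settings can be toggled through Blender's Python API (a sketch; the property names are from the 2.7x API, so adjust for your version, and the numeric values are arbitrary examples):

    import bpy

    scene = bpy.context.scene
    scene.render.use_freestyle = True       # the "Freestyle" checkbox
    scene.render.line_thickness = 1.5       # line thickness in pixels
    # Per render layer: minimum crease angle (in radians) for an edge to be drawn.
    scene.render.layers.active.freestyle_settings.crease_angle = 2.6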

What is PDF stroking, non-stroking and filling?

I've just started using Apache PDFBox and I'm completely baffled as to what is meant by stroking, non-stroking and filling when applied to text and lines.
Please can someone point me to a reference / guide which explains what these terms mean (for beginners) and what the difference is between them.
It's pretty simple. Consider a rectangle located at (0,0), 50 units wide and high. It is described as a path with vertices at (0,0), (0,50), (50,50) and (50,0).
Now, if you stroke the path with black (imagine drawing along the path using a pen), what you get is a black square outline; the interior of the square is whatever was on the page before you drew the border (probably nothing, so white).
If you fill the path, you get a filled in square, but no border drawn.
If you fill and stroke the path you get a filled in square with a border. Because the fill and stroke colours can be different you can have the square filled in one colour and the border drawn in another.
See the PDF Reference, section 4.4 "Path Construction and Painting"
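To make the operators concrete, here is a sketch in raw PDF content-stream syntax (the operator names are from the PDF Reference; the coordinates and colours are arbitrary examples):

    1 0 0 RG          % set the stroking colour to red
    0 0 1 rg          % set the non-stroking (fill) colour to blue
    0 0 50 50 re      % append a 50x50 rectangle at (0,0) to the current path
    B                 % fill with blue, then stroke the border in red
                      % (S would stroke only; f would fill only)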
Update (by -kp-)
I've copied the following table of text rendering modes from the official PDF-1.7 specification:

    Mode 0: Fill text
    Mode 1: Stroke text
    Mode 2: Fill, then stroke text
    Mode 3: Neither fill nor stroke text (invisible)
    Mode 4: Fill text and add to path for clipping
    Mode 5: Stroke text and add to path for clipping
    Mode 6: Fill, then stroke text and add to path for clipping
    Mode 7: Add text to path for clipping

Here too, you can stroke or fill or do both to glyph shapes. You can even do neither stroke nor fill, but still define the shapes: that is, you get invisible text -- a very useful mode for placing OCR-ed text on top of a scanned image! It makes the text searchable, copy-and-paste-able and screen-reader aware.
I am currently writing a book The ABC of PDF with iText that introduces you to all these principles.
You are talking about the "Graphics State" and the syntax that is used to define objects on a page. This syntax is stored in content streams.
Ignoring "Text State" (a subset of "Graphics State") for the moment, the idea is that you create paths and shapes (shapes are closed paths). These path and shapes can be drawn using stroke and fill operators. If you fill a path, you need to define whether you're using the non-zero winding rule or the even-odd rule (if you've studied geometry at college level, you've already encountered these rules).
Stroke and fill operators will use the colors of the current graphics state. Lines will be drawn using the stroking color. Shapes will be filled using the non-stroking color.
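To see the difference between the two fill rules mentioned above, consider this sketch in the same content-stream syntax (coordinates are arbitrary):

    0 0 100 100 re    % outer square
    25 25 50 50 re    % inner square, added to the same path
    f*                % even-odd rule: the inner square becomes a hole
                      % (f, the non-zero winding rule, would paint it solid,
                      % because re always winds rectangles the same way)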
There's much more info in the free ebook you can download from Leanpub.

Simple algorithm for tracking a rectangular blob

I have created an experimental fast rectangular object tracking system; it will be used for head tracking and for controlling objects in a 3D engine (Ogre3D).
For now I can show the webcam any kind of brightly colored rectangle (text markers are good objects) and the system registers the object's basic properties (hue/value/lightness and the initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) 4 corners for the last object position (it's always a rectangle but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest in the frame. I can get coordinates of any point of the blob without problems, point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the object corners themselves.
I need the simplest possible (quick-and-dirty would be great) algorithm to scan the image starting from some known coordinates (a point inside the blob) and detect the 4 new (x,y) coordinates of the "blobish" rectangle's corners (not the corners of a bounding box, but the corners of the rectangular blob itself).
A ready-to-use C++ function would be awesome, but somehow Google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract 4 points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU because I'll be running the 3D engine at the same time), then I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. Using the detected lines, you can calculate their intersections to find the corner coordinates of the blob.
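A rough Python/OpenCV sketch of that idea (the file name and all thresholds are assumptions; the asker wanted C++, but the calls map one-to-one):

    import cv2
    import numpy as np

    def intersect(l1, l2):
        # Hough normal form (rho, theta): x*cos(t) + y*sin(t) = rho.
        (r1, t1), (r2, t2) = l1, l2
        A = np.array([[np.cos(t1), np.sin(t1)],
                      [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(A)) < 1e-6:          # near-parallel sides
            return None
        x, y = np.linalg.solve(A, np.array([r1, r2]))
        return (float(x), float(y))

    prob = cv2.imread("probability_map.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(prob, 128, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)   # (rho, theta) pairs

    corners = []
    if lines is not None:
        sides = [tuple(l[0]) for l in lines[:4]]  # 4 strongest lines = 4 sides
        for i in range(4):
            for j in range(i + 1, 4):
                p = intersect(sides[i], sides[j])
                if p is not None:
                    corners.append(p)  # the two parallel pairs drop out above

Note that if the blob is already segmented, cv2.minAreaRect on its contour followed by cv2.boxPoints is an even shorter route to the four corners of a rotated rectangle; that is a different (non-Hough) technique, but it fits the quick-and-dirty, light-on-CPU requirement.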