The USPTO requires patent drawings to be black-and-white line images.
I'm using Blender to make 3D models. At first I got this:
The problem is that it's grayscale with no black lines. There's an answer suggesting a toon shader: Convert 3D models to patent diagrams
I checked "Edge" and set "Threshold" to max 255 in "Render" tab, I got:
It's getting better but need more edges to be drawn. I searched and found a tutorial http://www.minimaexpresion.es/?p=1070&lang=en , then I got:
It's too complicated for me, and I don't know how to use render layers. So I tried another tutorial, http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/80_Blender%20tutorial%20Toon%20Shading.html , which says I should assign different materials with different colors to different objects. I tried that and got this:
That leaves only one option to try: render layers. Is there any simple method to make this work? I only want this, which I can then convert to indexed colors with a black-and-white palette:
And "Freestyle" only has one option, for line thickness:
You were close in the second image. Instead of using the Edge postprocessor, look in the Render panel and check the box labelled "Freestyle".
Then in the Render Layers panel there will be a list of configurable options for Freestyle, including how thick you want the lines and the minimum angle required to render an edge.
You will get the best results if you use mostly shadeless materials with simple textures (just solid colour).
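If you prefer to set this up from Blender's Python console instead of clicking through the UI, here is a minimal sketch, assuming the 2.7-era bpy API (property names may differ in other versions):

    import bpy

    scene = bpy.context.scene

    # Turn on Freestyle instead of the Edge postprocessor
    scene.render.use_freestyle = True

    # Overall line thickness, in pixels (the single option mentioned in the question)
    scene.render.line_thickness = 2.0

    # Per-render-layer Freestyle settings, e.g. the crease angle that decides
    # how sharp an edge must be before a line is drawn for it
    fs = scene.render.layers.active.freestyle_settings
    fs.crease_angle = 2.0944  # radians, roughly 120 degrees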
I've been searching around the web about how to do this and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've searched, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image for grabbing what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've already seen I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle with a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
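Here is a minimal sketch of that construction in Python/OpenCV, assuming you have already located two reference points (say, two of the circle centres; all coordinates and file names below are made up) and know where they should land in the output:

    import cv2
    import numpy as np

    def similarity_from_two_points(src_pts, dst_pts):
        """Build a 2x3 matrix (rotation + uniform scale + translation) mapping
        src_pts[0] -> dst_pts[0] and src_pts[1] -> dst_pts[1]."""
        # Treat the 2D points as complex numbers: the ratio of the two segment
        # vectors encodes rotation and uniform scale in a single division.
        p0, p1 = (complex(*p) for p in src_pts)
        q0, q1 = (complex(*q) for q in dst_pts)
        r = (q1 - q0) / (p1 - p0)      # scale * e^(i*angle)
        a, b = r.real, r.imag
        # Choose the translation so that p0 lands exactly on q0.
        tx = q0.real - (a * p0.real - b * p0.imag)
        ty = q0.imag - (b * p0.real + a * p0.imag)
        return np.float32([[a, -b, tx], [b, a, ty]])

    # Hypothetical circle centres in the camera frame, and where they should
    # end up in a 300x200 output image:
    src = [(412.0, 381.0), (912.0, 407.0)]
    dst = [(20.0, 100.0), (280.0, 100.0)]

    img = cv2.imread("camera_frame.jpg")        # assumed input file
    M = similarity_from_two_points(src, dst)
    out = cv2.warpAffine(img, M, (300, 200))    # crop + rotate + scale in one go
    cv2.imwrite("extracted.png", out)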
I'm using CreateJS to build a simple game.
I have an image (white fill and black stroke) and I would like to change the black color to another one.
Is it possible?
Thanks
The three ways to do color adjustments in EaselJS are:
Composite Operations: You can draw an image using a composite operation (such as "destination-in") to determine how pixels are laid down. This is probably not going to give you the result you want. Here is an example of a black PNG being changed to different colors using compositeOperation.
Color Filters: EaselJS has both a ColorFilter and a ColorMatrixFilter, which assist with modifying colors. The first uses parameters to multiply and add to the color and alpha channels, but is a little harder to use. The second uses a ColorMatrix to adjust hue, saturation, contrast, and brightness. This may not work for you, since changing the black pixels is kind of the opposite of what color filters do.
A Custom Filter: EaselJS supports custom filters (such as the Threshold Filter in the extras folder). This is probably your best option, and might take some massaging to get what you need. A sketch of the per-pixel idea follows this list.
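To make that third option concrete, the operation such a threshold filter performs on each pixel is roughly the following. This is a Python/Pillow sketch rather than EaselJS code, purely to illustrate the idea; the file names and threshold value are made up:

    from PIL import Image

    def recolor_black(src_path, dst_path, new_rgb, threshold=64):
        # Replace near-black pixels with new_rgb, keeping each pixel's alpha.
        img = Image.open(src_path).convert("RGBA")
        pixels = img.load()
        w, h = img.size
        for y in range(h):
            for x in range(w):
                r, g, b, a = pixels[x, y]
                # Any pixel darker than the threshold counts as "the black stroke".
                if r < threshold and g < threshold and b < threshold:
                    pixels[x, y] = (*new_rgb, a)
        img.save(dst_path)

    # e.g. turn the black stroke red:
    recolor_black("shape.png", "shape_red.png", (255, 0, 0))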
Hope that sheds some light.
In Photoshop you can set a layer's blending mode to "Hue". If that layer is, for example, filled with blue, then it seems to take the layer below and make it all blue wherever a non-whiteish color exists.
I'm wondering what it's actually doing, though. If I have a background layer with a pixel 0xAARRGGBB, and the layer on top of that is set to blend mode "Hue" with a pixel 0xAARRGGBB of its own, how are those two values combined to give the result that we see?
It doesn't just drop the rrggbb from the layer below. If it did that it'd color white and black as well. It also wouldn't allow color variations through.
If a background pixel is 0xff00ff00 and the corresponding hue layer pixel is 0xff0000ff then I'm assuming the end result will just be 0xff0000ff because the ff blue replaces the ff green. But, if the background pixel is 0x55112233 and the hue layer pixel is 0xff0000ff, how does it come up with the shade of blue that it comes up with?
The reason I ask is that I'd like to take various images and change the hue of the image programmatically in my app. Rather than storing 8 different versions of the same image with different colors, I'd like to store one image and color it as needed.
I've been researching a way to replicate that blending mode in JavaScript/canvas, but I've only come up with the "colorize" filter/blend mode (examples below).
Colorize algorithm:
convert the colors from RGB to HSL;
change the Hue value to the wanted one (in my case 172° or 0.477);
convert the updated HSL back to RGB.
Note: this is OK on the desktop, but I found it noticeably slow on a smartphone.
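For reference, here are the same three steps for a single pixel, sketched in Python with the standard colorsys module (which uses HLS argument order rather than HSL):

    import colorsys

    def colorize_pixel(r, g, b, new_hue=0.477):
        # RGB -> HSL, replace the hue, convert back.
        # r, g, b are 0-255; new_hue is 0-1 (0.477 is roughly 172 degrees).
        h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        r2, g2, b2 = colorsys.hls_to_rgb(new_hue, l, s)
        return round(r2 * 255), round(g2 * 255), round(b2 * 255)

    print(colorize_pixel(200, 120, 40))  # a brownish pixel pushed toward 172 degrees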
You can see the difference by comparing these three images. Original:
colorize:
Fireworks' "blend hue" algorithm (which I think is the same as Photoshop's):
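I can't say for certain that this is Photoshop's exact formula (it appears to do its own luminosity handling), but a common approximation of the hue blend is: take only the hue from the top layer and keep the base pixel's saturation and lightness, which is why white, black and greys pass through unchanged. A sketch, ignoring alpha:

    import colorsys

    def hue_blend_pixel(base_rgb, blend_rgb):
        # Hue from the blend layer; saturation and lightness from the base layer.
        bh, bl, bs = colorsys.rgb_to_hls(*(c / 255.0 for c in base_rgb))
        th, _, _ = colorsys.rgb_to_hls(*(c / 255.0 for c in blend_rgb))
        r, g, b = colorsys.hls_to_rgb(th, bl, bs)
        return tuple(round(c * 255) for c in (r, g, b))

    # 0x112233 under a pure-blue (0x0000FF) hue layer:
    print(hue_blend_pixel((0x11, 0x22, 0x33), (0x00, 0x00, 0xFF)))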
The colorize filter might be a good substitute.
RGB/HSL conversion question
Hue/Chroma and HSL on Wikipedia
I found an algorithm to convert RGB to HSV here:
http://www.cs.rit.edu/~ncs/color/t_convert.html
Of course, at the bottom of that page it mentions that the Java Color object already has methods for converting between RGB and HSV, so I just used that.
I have a requirement to apply a texture pattern to the bars of a 3D bar chart using iReport. I can see the texture pattern in the default JRViewer, but when I generate a PDF from the same report, I cannot see the texture pattern; instead I see transparent 3D bars.
Does anyone have a solution?
With a little research we found the answer. There is an option in iReport for charts called renderType. We need to set this to svg (Scalable Vector Graphics).
Then the texture pattern is applied in the PDF as well.
The disadvantage of using this is that the PDF file size increases.
I'm completely new to ArcGIS and ArcMap, but someone suggested this program to me for a project I'm working on.
I would like to animate individual entities on a map, and was wondering if it is possible to do so in ArcMap. I asked this earlier here and a member directed me to a tutorial on animating in ArcGIS. The animation in the guide was over a map spread (i.e. each pixel on the map displays, say, a different color to indicate population data in the area). However, I realized that if I zoom in a lot, the image will eventually degenerate into pixels, which is why I need an actual object to mark a certain point. I checked some online tutorials and it seems we can place markers on the map. Can someone tell me if it is possible to animate these markers (for example via a for-loop)? And if so, could you point me toward where to start?
Thanks in advance!
The short answer is that you can animate layers in ArcMap. It's not as simple as using the timeline feature in Google Earth, for example, but then ArcMap is much more than just a visualization tool.
This help page on the ESRI web help looks like a good place to start.
I'm not 100% sure what you mean by the image degenerating into pixels. Are you saying that the markers were single points in the layer? Unlike Google Earth, you are not confined to simply plotting points on the map. You can draw completely arbitrary shapes in ArcMap, which can be defined to cover actual areas of the map, so when you zoom in, the shape gets larger.
The way you need to load data into ArcMap to produce an animation isn't entirely straightforward. There might be other ways to do this, but the way I know of is to generate a NetCDF file. This file contains a 3D matrix of layer data, where the layers are separated through time. Because you generate a matrix, you are effectively placing a raster image over the map. Thus, if you want to cover a large area, each matrix becomes large, and you multiply that by the number of time slices you wish to animate over.
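Here is a minimal sketch of building such a file with the netCDF4 Python module (the grid, names and units are purely illustrative; the point is the 3D value variable indexed by time, lat and lon):

    import numpy as np
    from netCDF4 import Dataset

    # A tiny NetCDF file: a 10x10 grid with 5 time slices.
    nc = Dataset("animation.nc", "w", format="NETCDF3_CLASSIC")
    nc.createDimension("time", 5)
    nc.createDimension("lat", 10)
    nc.createDimension("lon", 10)

    times = nc.createVariable("time", "f8", ("time",))
    lats = nc.createVariable("lat", "f4", ("lat",))
    lons = nc.createVariable("lon", "f4", ("lon",))
    data = nc.createVariable("value", "f4", ("time", "lat", "lon"))

    times.units = "days since 2010-01-01"
    times[:] = np.arange(5)
    lats[:] = np.linspace(50.0, 51.0, 10)
    lons[:] = np.linspace(-1.0, 0.0, 10)

    # One 2D layer per time slice -- the 3D matrix described above.
    for t in range(5):
        data[t, :, :] = np.random.rand(10, 10) * t

    nc.close()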
Once you have a NetCDF file with your data in it, however, getting ArcMap to animate it and produce, say, an .avi file is pretty simple.
To get started, you could try loading some of the example NetCDF datasets into ArcMap to see how (or whether) they work.
Hope that helps.
The upcoming v10 will have better time-aware capabilities, which will allow for animation.