I filled an unwrapped object with a gradient color. As illustrated in the image below, some pixels in the UV/Texture view along seam edge boundaries are not filled with the expected color. It looks like an edge has to cross more than half of a pixel to be colorized.
Is there a way to force all pixels crossed by a seam edge to be colorized properly?
Found a solution. Enter Texture Paint mode for the given object, open Options, and modify the Bleed property, which defines how many pixels outside the seam edges will be colorized in the UV texture. The default is 2 px.
I have polygon features on a Mapbox map. Their sizes vary a lot (some are as big as streets, others as small as a tree).
Each polygon has a point feature on it (a circle) that acts as a handle to open a popup related to the polygon's data.
But depending on the zoom level, the circle is sometimes bigger than the polygon itself, since the polygon "sticks" to the map while the circle's size stays the same.
What I would like to achieve is to hide the polygon (and its handle) if the polygon's size in pixels is smaller than the circle:
when zoom changes
get the size in pixels of a bounding box containing the polygon
compare it to the size of the circle
hide both of them if the circle radius > the polygon's smaller side.
I think I'm capable of coding this, but then... how can I hide the features?
There is a minzoom / maxzoom setting for sources and layers, but how can I achieve this per feature?
Thanks!
Within the Style Specification, there's no way to access a feature's size. https://docs.mapbox.com/mapbox-gl-js/style-spec/
How many features do you have on your map? Is it possible to precompute the size of each feature and use that to decide when it's visible?
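For illustration, a rough sketch of that approach with the Mapbox GL JS API. Everything below is an assumption about your setup: a map variable, a local polygonFeatures array whose properties carry a unique id and a precomputed [west, south, east, north] bbox, layers named 'polygons' and 'handles', and a fixed 10 px circle radius.

const CIRCLE_RADIUS_PX = 10; // assumed fixed handle radius

map.on('zoomend', () => {
  const hidden: number[] = [];
  for (const f of polygonFeatures) {
    // Project the precomputed bbox corners from lng/lat to screen pixels.
    const [w, s, e, n] = f.properties.bbox;
    const sw = map.project([w, s]);
    const ne = map.project([e, n]);
    const minSide = Math.min(Math.abs(ne.x - sw.x), Math.abs(sw.y - ne.y));
    // Smaller than the circle's diameter? Remember it.
    if (minSide < CIRCLE_RADIUS_PX * 2) hidden.push(f.properties.id);
  }
  // Filter the too-small features out of both layers at once.
  const filter = ['!', ['in', ['get', 'id'], ['literal', hidden]]];
  map.setFilter('polygons', filter);
  map.setFilter('handles', filter);
});

Note that setFilter replaces any filter already set on a layer, so if you filter for other reasons you would need to combine the expressions with ['all', ...].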
I have a Poliigon Texture Demo c4d file. The file includes a sphere with a texture which renders correctly (bottom sphere in image). However, when I create a sphere (top sphere in image), convert it to a polygonal object, and apply the same texture, it is stretched horizontally.
I can fix this by changing the "Length U" setting to 50% in the Texture Tag, but I notice that the sphere below does not need this modification, so I was wondering how to convert the top sphere to a polygonal object the same way the bottom sphere was.
Cinema 4d Example
I have included a screengrab. The only notable difference is that the sphere below has additional diagonal division.
I am quite new to 3D, so I hope this all makes sense.
I think you only need to change the Sphere's Type to a triangular type, like the sphere at the bottom.
If this helps, please consider up-voting and marking your question as solved.
When applying a 2D transform with a percentage in WebKit, in this case translateY(-50%), it seems that if the element's size isn't an even number of pixels, the result is blurry edges.
Does anybody know how to prevent this effect?
Check this example: the red background box has blurry top and bottom edges.
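One workaround I know of, when the element's height can't be forced to an even number, is to compute the translate in whole pixels from script rather than using a percentage. A minimal sketch, assuming the element carries the (hypothetical) class .box:

const box = document.querySelector('.box') as HTMLElement;
// translateY(-50%) of an odd height lands on a half pixel; rounding the
// offset keeps the element's edges on the pixel grid.
box.style.transform = `translateY(-${Math.round(box.offsetHeight / 2)}px)`;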
Is it possible to change the origin of an NSImage? If so, how would I go about doing this? I have coordinates in a regular Cartesian system, some of them with negative values, and I am trying to draw them at the corresponding points in the NSImage, but since the origin is at (0,0) some of them are missing.
EDIT: Say I have some drawing that needs to be done on an image at the point (-10,-10); currently it doesn't show up. Is there a way to fix that?
If it's like in iOS (you may have to adapt the code a little) and if my memory is still good, you have to do this, since origin is read-only:
// origin can't be assigned directly, so copy the whole frame,
// change its origin, and assign the frame back.
CGRect myFrame = yourImage.frame;
myFrame.origin.x = newX;
myFrame.origin.y = newY;
yourImage.frame = myFrame;
I think you are confusing an NSImage with its container. An NSImage has no bounds or frame, and thus no origin. It does have a size, which may represent the pixel dimensions of its bitmap representation (if it has one) or otherwise its bounding box (if it is a vector image). Drawing in an image at a pixel location of (-10,-10) doesn't really make sense.
An NSImage is displayed in a container ( typically an NSImageView), and the container's bounds.origin will dictate the placement of the image relative to the imageView, but you can't modify pixels beyond the edge of the bitmap plane.
In any case you probably want to be using a subclassed NSView in which you would override the drawRect method for your custom drawing. NSView does have a bounds.origin, but this is not relevant to your in-drawing coordinates; rather, it positions the drawn content as a whole relative to the view's bounding box. The coordinate system you will be drawing into is referenced to your graphics context, which will (usually) pin the origin (0,0) to the bottom-left corner (OS X) or top-left corner (iOS). If you are trying to represent negative points on a Cartesian plane, you will need to apply a translation transform to map your points into this positive coordinate space.
I'm trying to explain in a few words, badly, something which Apple explains in great detail in their Quartz 2D Programming Guide.
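To illustrate just that last step (mapping negative Cartesian points into positive drawing space), here is the translation idea sketched with the HTML canvas API as an analogy; in Quartz the equivalent calls are CGContextTranslateCTM and CGContextScaleCTM inside your drawRect:

const canvas = document.createElement('canvas');
canvas.width = 200;
canvas.height = 200;
const ctx = canvas.getContext('2d')!;
// Put the Cartesian origin at the center and flip y so it points up.
ctx.translate(canvas.width / 2, canvas.height / 2);
ctx.scale(1, -1);
// A point at (-10,-10) now lands near the center instead of falling
// outside the drawable area.
ctx.fillRect(-10, -10, 2, 2);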
Hello, I am having a hard time making this UI element look the way I want (see screenshot). Notice the image on the right: the line width and darkness look inconsistent compared to the image on the left (which happens to be a screen grab from Safari), where the border width is more consistent. How does Apple make its lines so perfect?
I'm using a CALayer and the Core Graphics API to draw the image on the right. Is it possible to draw such perfect lines with the standard APIs?
The problem with drawing a 1-pixel path is that Quartz draws paths on an exact point grid, starting from {0,0}. This means that if you stroke a vertical path starting at {10,10} with a 1-point width, half of that line will render in the pixel to the left of the coordinate and half in the pixel to the right, causing a blurring effect.
You should therefore shift your drawing by {0.5,0.5} if you want lines to draw on exact pixels.
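The same point-grid behavior exists in other antialiased 2D APIs, so the effect is easy to sketch with the HTML canvas API as an analogy (the Quartz version is the same idea with CGContext stroke calls):

const ctx = document.createElement('canvas').getContext('2d')!;
ctx.lineWidth = 1;
// Stroked on the integer grid, a 1-pixel vertical line straddles two
// pixel columns and renders as a roughly 2px grey blur:
ctx.moveTo(10, 0);
ctx.lineTo(10, 100);
ctx.stroke();
// Shifted by half a pixel, it fills exactly one column and stays crisp:
ctx.beginPath();
ctx.moveTo(20.5, 0);
ctx.lineTo(20.5, 100);
ctx.stroke();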
You can definitely draw what you want with Quartz.
Apple uses images for the tab elements.