How to extend planes and find intersection point [closed] - blender

I'm new to Blender. I want to restore a sharp edge by creating a vertex at the geometric intersection of the selected triangles. Is there a fast way to do this?
The selected triangles are orange.

This isn't very easy to do without changing the volume of the mesh, but here's the best method I've found:
Select the whole mesh (or just the bevelled faces you want to sharpen).
Select the Shrink/Fatten tool from the toolbar.
Shrink the faces until the vertices at the corners overlap.
IMPORTANT: if you want to preserve the volume of the object, make a note of the exact value that you shrank the mesh by (visible in the bottom left of the screen).
Press the M key and select Merge by distance (or go to Mesh > Merge > By distance), and increase the Merge distance parameter until all of the duplicate vertices have been merged.
Finally, use the Shrink/Fatten tool again to resize the mesh to its original volume. (If you want to be exact, use the negative of the amount you shrank it by to begin with.)
Bonus tip: if you want to get rid of the annoying triangles on the faces, you can use the Tris to Quads operator to remove them.
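If you would rather script these steps than click through them, here is a minimal sketch using Blender's Python API (bpy), run from the Text Editor or Python console with the object in Edit Mode and the bevelled faces selected. The shrink amount and merge threshold are placeholder values you would tune for your own mesh, and the sign of the shrink may need flipping depending on your normals:

```python
import bpy

shrink_amount = 0.05  # placeholder; adjust until the corner vertices overlap

# Shrink the selected faces along their normals (the Shrink/Fatten tool).
bpy.ops.transform.shrink_fatten(value=-shrink_amount)

# Merge the now-overlapping vertices (Mesh > Merge > By Distance).
bpy.ops.mesh.remove_doubles(threshold=0.001)

# Push the faces back out by the same amount to restore the original volume.
bpy.ops.transform.shrink_fatten(value=shrink_amount)

# Bonus: turn the leftover triangles on the flat faces back into quads.
bpy.ops.mesh.tris_convert_to_quads()
```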

Related

How to draw a quartic Bezier curve in inkscape [closed]

When I draw a cubic Bezier curve with 4 control points,
I choose a regular Bezier path,
draw a line, then drag 2 handles out.
But now I need to draw a quartic curve with 5 control points,
and I don't know how to do it.
How can I add the 5th control point to the handles?
Am I right to consider the handles as control points?
Thank you, friends~~
I think you are looking for a BSpline path rather than a regular Bezier path. This is the 3rd mode available (see your 1st attached screenshot).
The regular Bezier mode interpolates (passes through) the points you click, whereas the BSpline mode treats them as control points that pull the curve, which is closer to the classic higher-degree Bézier behaviour you describe.
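To make the control-point terminology concrete: a quartic Bézier curve is defined by 5 control points, and every point on the curve is a blend of all of them. Here is a small Python sketch (independent of Inkscape, with made-up example coordinates) that evaluates such a curve with de Casteljau's algorithm:

```python
def bezier_point(control_points, t):
    """Evaluate a Bézier curve of any degree at parameter t in [0, 1]
    using de Casteljau's algorithm."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
               for p0, p1 in zip(pts, pts[1:])]
    return pts[0]

# A quartic (degree-4) curve needs 5 control points (example values):
quartic = [(0, 0), (1, 2), (2.5, 3), (4, 2), (5, 0)]
samples = [bezier_point(quartic, i / 100) for i in range(101)]
```

Note that the curve only passes through the first and last control points, and that SVG path data (and therefore Inkscape's node editing) only supports quadratic and cubic segments, so a true quartic segment can only be approximated.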

IBM Watson Visual Recognition. Is it possible to get X,Y coordinates of a specific object? [closed]

I'm starting a university project and I'm looking for a tool that helps me find the (X, Y) pixel coordinates of a specific object in an image (I'm not talking about text). I want to know whether IBM Watson Visual Recognition could help me achieve this, or if you know of any other tool that would work better.
Thank you.
You can also take a hybrid "cloud-edge" approach, as described here: https://medium.com/unsupervised-coding/dont-miss-your-target-object-detection-with-tensorflow-and-watson-488e24226ef3
It uses a TensorFlow model running locally to detect regions, then uses Watson VR to identify what is in each region. This combines the flexibility of TensorFlow with Watson VR's ability to classify many (tens of thousands of) different types of objects.
You can "kind of" do this with Watson visual recognition. First you need to train a custom classifier to "find" the objects that you are looking for. Once you have done this, you're halfway done.
The second part involves taking the image that you want to find the object in, and splitting it up into four parts (upper left, lower left, upper right, lower right). Then you search each portion of the image for your target object. If you find it in one of those quadrants, you then take that quadrant and break it up into four parts, and search each portion of the image for the target object. If you continue and do this recursively (and keep track of the pixel boundaries of each quadrant and sub-quadrant), eventually you will narrow down on the object you are searching for.
You will also want to consider other search patterns. Take the case where your target object sits in the center of the image: it may not be fully contained in any single quadrant. If your object happens to span a quadrant boundary, you will not get an accurate location, so multiple search patterns are needed, but the strategy and approach are the same.
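As a rough illustration of the quadrant-search idea (not an official Watson API, just a sketch): `contains_object` below stands in for whatever call you make to crop the region and ask your classifier whether the target is present.

```python
def locate_object(image, box, contains_object, min_size=32):
    """Recursively narrow down the pixel region containing the target.

    box: (left, upper, right, lower) pixel bounds to search.
    contains_object(image, box): hypothetical callback that crops the box,
    sends it to your classifier (e.g. Watson VR) and returns True/False.
    """
    left, upper, right, lower = box
    if right - left <= min_size or lower - upper <= min_size:
        return box  # small enough: report this region as the location

    mid_x, mid_y = (left + right) // 2, (upper + lower) // 2
    quadrants = [
        (left, upper, mid_x, mid_y),   # upper left
        (mid_x, upper, right, mid_y),  # upper right
        (left, mid_y, mid_x, lower),   # lower left
        (mid_x, mid_y, right, lower),  # lower right
    ]
    for quad in quadrants:
        if contains_object(image, quad):
            return locate_object(image, quad, contains_object, min_size)
    # Object spans a quadrant boundary; stop here rather than descend further.
    return box
```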

Convert expanded blend to one simple vector shape [closed]

Is it possible to convert an expanded blend to a simple, lightweight vector shape, without all of the in-between paths of the n steps? It seems like a complicated object to work with, since the computer has to recalculate all of the intermediate paths whenever the inner paths are changed.
Go to Object > Blend > Expand. Then, with all of the steps selected, go to Pathfinder and merge all the shapes together.
I don't believe there is a way to convert it back to a single vector shape as it would have to be able to translate the blend into either a linear gradient, radial gradient, or gradient mesh.
The beauty of blends is that they aren't bound by the same rules that allow the gradient or gradient mesh tools to work, and you can get some really awesome color blends across complicated shapes.

How to convert 3D models to SVG line art? [closed]

I often work with 3D CAD models, which I receive as SolidWorks or PDF files. I need to turn them into black & white line art, like you'd find in a patent application. (In fact, exactly like what you find in a patent application!)
Acrobat-9 allows me to rotate & scale the models, so I can print them with reasonable resolution, but the rest of my drawing toolchain deals with SVG files, while all I can get out of Acrobat is bitmaps. (I also make models from scratch in Blender, and make line drawings using rendering procedures there, but that also produces bitmaps.)
Is there some way to get from a 3D view to an SVG picture (preferably with relatively simple Bezier curves and scaled line weights)?
(As an example, imagine that I have a 3D model of a cube. I position it as desired, then (somehow) convert it to an SVG image with several straight lines where the edges are, with the line weights scaled according to the distance between the edge and the camera/viewer.)
If you have rendered views as PDF files, you can use Inkscape's command-line tool to convert PDF to SVG, as discussed in this post.
In case there are no rendered PDFs available, you can export PDF snapshots from within CAD prior to converting them.
You can also try other converters made for this purpose, like verydoc or PDF-tron.
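As a sketch of the Inkscape route, here is a small Python wrapper around the command-line conversion. The flags shown are for Inkscape 1.x (older 0.92.x releases use `--without-gui --file=in.pdf --export-plain-svg=out.svg` instead), and the file names are just examples:

```python
import subprocess

def pdf_to_svg(pdf_path, svg_path):
    """Convert a rendered PDF view to SVG using Inkscape's command line."""
    subprocess.run(
        ["inkscape", pdf_path, "--export-type=svg",
         f"--export-filename={svg_path}"],
        check=True,
    )

pdf_to_svg("cube_view.pdf", "cube_view.svg")  # example file names
```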

How to make real natural photos less-real for games? [closed]

I am a web developer trying to make a 2D game for the first time. I am not good at graphic design, so I am using raster photos of real objects as graphics for my game, like this one:
http://www.cgtextures.com/texview.php?id=23142
But the overall look of the game is not good because the graphics look very 'real' and unprofessional. How easily can I convert the photos to be more like this:
http://fc06.deviantart.net/fs44/f/2009/076/4/3/VW_DragBus_Destroyer_Carbon_by_M2M_design.jpg
I know you are laughing now, as it is clearly not easy to convert a real photo into such a professional, polished, brilliant vector one, but I need something close. Can I use some combination of Photoshop filters and tricks to accomplish this? Can I convert the photos to vector graphics, then convert them back to raster graphics and add some effects?
Thanks.
The only thing I can think of is to run a filter over the image to reduce the detail; this amounts to smoothing the image with quite a high value.
Consider that when tidying up a photo taken with a high ISO value (say 1600, which creates a lot of noise in the image), a smoothing value of around 50% would reduce the noise but leave the detail intact.
Here you would want to really go overboard on that value, say 400%, which would reduce the image to one that looks almost as if it has been painted.
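If you want to experiment with that idea outside of Photoshop, here is a small sketch using Python and Pillow. The filter size, pass count, and posterize depth are just starting guesses for the "go overboard" smoothing described above, and the file names are examples:

```python
from PIL import Image, ImageFilter, ImageOps

def painterly(in_path, out_path, passes=4, size=7):
    """Heavily smooth a photo so it reads as 'painted' rather than photographic."""
    img = Image.open(in_path).convert("RGB")
    for _ in range(passes):
        # ModeFilter flattens local detail into solid patches of colour.
        img = img.filter(ImageFilter.ModeFilter(size=size))
    # Reducing colour depth exaggerates the flat, vector-like look.
    img = ImageOps.posterize(img, 4)
    img.save(out_path)

painterly("photo.jpg", "photo_stylised.png")  # example file names
```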