Can this topology be simplified? [Blender]

I have a part of a mesh, as shown below. I would like to know if there's a better / lower-poly way to achieve the same result. Also, would this be considered good topology?
The highlighted area is the actual needed part of the mesh; the rest is my attempt to simplify it.

You can inset a face by selecting it and then pressing I, then moving the mouse to get the desired size.
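If you need to script it, the same operation is exposed through Blender's Python API. A minimal sketch, assuming the mesh object is active and the target face is already selected (the thickness value is just an example):

import bpy

# Enter Edit Mode on the active object and inset the selected face,
# equivalent to pressing I and dragging the mouse.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.inset(thickness=0.1, depth=0.0)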

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating each point's position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity; I understand it has something to do with the tangent in the projection, but frankly, it is too late for my brain to function anymore. (See the sketch after this list.)
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get the result in a usable format I would need to project it back into 2D coordinates.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
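Regarding the first approach, the blow-up near the camera comes from the perspective divide: a point at or behind the camera plane has w <= 0 in clip space, so dividing by w sends its pixel coordinates towards infinity. Below is a language-agnostic Python/NumPy sketch of the corner-projection idea (the function name, the column-vector matrix convention, and the conservative full-viewport fallback for behind-camera corners are all assumptions, not Godot API): project the 8 corners of the world-space AABB, take the 2D min/max, and fall back to the whole viewport whenever a corner is behind the camera. A tighter result would clip the box edges against the near plane instead of falling back.

import numpy as np

def screen_space_aabb(corners_world, view_proj, viewport_w, viewport_h):
    """Project the 8 corners of a world-space AABB and take the 2D min/max.

    corners_world: iterable of 8 (x, y, z) corner positions.
    view_proj:     4x4 combined view-projection matrix (column vectors).
    Returns (min_x, min_y, max_x, max_y) in pixels, clamped to the viewport.
    """
    mins = np.array([np.inf, np.inf])
    maxs = np.array([-np.inf, -np.inf])
    for corner in corners_world:
        clip = view_proj @ np.append(corner, 1.0)         # world -> clip space
        if clip[3] <= 0.0:
            # Corner is behind the camera: the divide below would blow up,
            # so return the whole viewport as a conservative answer.
            return 0.0, 0.0, float(viewport_w), float(viewport_h)
        ndc = clip[:2] / clip[3]                          # perspective divide, [-1, 1]
        px = (ndc[0] * 0.5 + 0.5) * viewport_w            # NDC x -> pixels
        py = (1.0 - (ndc[1] * 0.5 + 0.5)) * viewport_h    # NDC y -> pixels (y down)
        mins = np.minimum(mins, (px, py))
        maxs = np.maximum(maxs, (px, py))
    mins = np.clip(mins, 0.0, (viewport_w, viewport_h))
    maxs = np.clip(maxs, 0.0, (viewport_w, viewport_h))
    return mins[0], mins[1], maxs[0], maxs[1]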
What I am looking for:
An output that can be passed to a shader, telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep calculations in the shader to a minimum.
Some solutions I have tried before have partially worked, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning that at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

Rhombus-framed buttons in Objective-C?

Basically, I am trying to edit not only the appearance of the button (that's the easy part) but also the frame that detects the touch, so that it is a rhombus rather than a square.
HTML example: http://irwinproject.com
I've tried CGAffineTransform; however, it doesn't allow me to make non-rectangular objects. Is there a way to skew the touch area?
I'm just wondering if this is possible because, if not, could someone point me in a direction? The only viable answer I've found resorts to something along the lines of SpriteKit.
I found this; however, this implementation leaves dead spots on buttons where they overlap:
custom UIButton with skewed area in iPhone
Is there a way to message people on here? There was a gentleman who said he figured out how to do the transform but never posted his solution.
In order to modify the touch area of an oddly shaped button, you could use a solution similar to OBShapedButton. I have used this particular project in the past myself for adjacent hexagonal buttons and it worked perfectly. That said, you may have to modify it a bit to work with drawn shapes instead of images.
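The core of that approach, and the reason it avoids dead spots where rectangular frames overlap, is replacing the default rectangular hit test (pointInside:withEvent: in UIKit) with a precise point-in-shape test. Here is a minimal, language-agnostic Python sketch of the even-odd ray-casting test such code would perform (the vertex coordinates are illustrative, not taken from the linked project):

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Does a horizontal ray from (x, y) cross the edge poly[j] -> poly[i]?
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A rhombus inscribed in a 100x100 button frame:
rhombus = [(50, 0), (100, 50), (50, 100), (0, 50)]
print(point_in_polygon(50, 50, rhombus))  # True  (centre of the rhombus)
print(point_in_polygon(5, 5, rhombus))    # False (corner of the square frame)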

Detecting what percentage of a drawn image has been "erased"

Let's say I have a solid, irregularly shaped (but enclosed) shape on screen in iOS (one colour). I then want to "erase" portions of that shape by dragging my finger around, as you would in a typical kids' colouring app, erasing with a fixed brush size wherever I touch the screen.
I could easily accomplish all this with something like an image mask and touch detection. However, as a requirement, I also need to determine the rough percentage of the shape that remains.
For example I need to know when 50% of the random enclosed shape has been "erased".
What's the best way of approaching this problem? Are there any existing iOS-compatible libraries that can handle it? I'm thinking that I would need to keep track of a ton of polygons and calculate all the overlaps, but it seems like there must be a simpler solution to this problem.
EDIT: I have done research into this problem; however, tracking all the polygons manually and calculating all their positions and area overlaps seems overly complicated. I was simply wondering if anyone else has run into a similar issue and found a better solution.
You will first need to know the fixed pixel space of the image view. Then you will need to know the percentage of non-blank pixels when the image is first loaded:
double percentageFilledIn = ((double)nonBlankPixelCount / totalPixels);
After you get that value, you will need to use that percentage as your baseline for the existing drawing.
Your new calculation, giving the fraction of the original drawing that remains, will look like this:
double percentageOfImageLeft = ((double)nonBlankPixelCount / totalPixels) / percentageFilledIn;
This calculation will likely be processor-intensive, so I would only run it sparingly.
Since this post is not about code and more about logic, I will let you determine your own logic for detecting non-blank pixels.
Here is how to find a pixel's color:
How to get Coordinates and PixelColor of TouchPoint in iOS/ObjectiveC
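To make the baseline-and-ratio idea concrete, here is a small Python/Pillow sketch of the same calculation (the file names and the alpha threshold are placeholders; on iOS you would read the bitmap bytes directly, for example via a CGBitmapContext, rather than loading PNG files):

from PIL import Image

ALPHA_THRESHOLD = 0  # a pixel counts as "drawn" if its alpha exceeds this

def non_blank_fraction(path):
    """Fraction of pixels whose alpha channel is above the threshold."""
    img = Image.open(path).convert("RGBA")
    alpha = img.getchannel("A").getdata()
    non_blank = sum(1 for a in alpha if a > ALPHA_THRESHOLD)
    return non_blank / (img.width * img.height)

baseline = non_blank_fraction("shape_full.png")    # percentageFilledIn
current = non_blank_fraction("shape_erased.png")   # after some erasing
print(f"{100.0 * current / baseline:.1f}% of the shape remains")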
Good luck.

Photoshop, merging both these layers so they look the same

Here is my little test image:
http://imageshack.us/photo/my-images/851/testnb.jpg/
Basically they are two images; as you can see, the 1st one is on the left and the 2nd one on the right. (And yes, I know one of the images has overlapped and "cut off" the wire.)
Unfortunately my lighting was bad, so I didn't get the same background color, or this would have been easy.
But now that you see it, how do I make it look like one picture? (How do I get rid of that middle seam and get the same background color?)
Thanks!
You can try this: select the first part of your picture (layer), then go to Image --> Adjustments --> Match Color, then select your source PSD (the same file) and the layer!
It's not always a great solution, but give it a chance!
Or you can play around with layer masks and layer effects.

Which pixels did that drawmesh operation just draw to?

OK, it's a relatively simple problem: I want to know where, in screen space, a particular mesh was just drawn. I plan on then storing that information in a data store of some kind so that when I interact with something in screen space, I can look it up in the register and find the object, i.e., click on the spaceship drawn on the screen and then select it as a target, etc.
I can't find any way of finding out which pixels the mesh was drawn to though...
Alternatively, if I'm missing something obvious regarding what it is that I want to do, please let me know!
There is no easy way to do that. But you can use another texture as a render target and render those meshes to it with unique colors.
So, for example, you give #FF0000 to your mesh A and also draw it to your second render target with that color. Now, when you select a pixel from the 2nd render target and look at its color, if it is #FF0000 you know that the pixel is part of mesh A. Thus you can easily pick the mesh drawn at a certain pixel when you click one of those pixels.
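The rendering itself would be done with your normal DirectX draw calls and a flat-color shader; the only extra piece is a reversible mapping between mesh IDs and colors. A minimal Python sketch of that packing, assuming a 24-bit RGB ID render target (which supports about 16.7 million distinct meshes):

def id_to_color(mesh_id):
    """Pack a mesh ID into a unique 24-bit RGB color for the ID render target."""
    return ((mesh_id >> 16) & 0xFF, (mesh_id >> 8) & 0xFF, mesh_id & 0xFF)

def color_to_id(rgb):
    """Recover the mesh ID from a pixel read back from the ID render target."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# Draw each mesh to the second render target with id_to_color(i) as its flat
# color; on click, read the pixel under the cursor and decode it.
assert color_to_id(id_to_color(42)) == 42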
Why don't you unproject your screen-space coordinates into 3D space? The only complication I had was the fact that I'd be left with a plane; I could check if a mesh intersected with that plane, but I often had multiple candidates for 'picking'.
Check out Google for DirectX Unproject and you will find various articles discussing it. It can be complicated to implement, but done well it's actually pretty nifty; don't be put off by the people online who say it doesn't work, because it does work!
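For reference, the unproject step itself is just an inverse view-projection transform. Here is a language-agnostic Python/NumPy sketch, assuming Direct3D's NDC depth range of [0, 1] and a column-vector matrix convention; unprojecting both the near and the far plane gives a full picking ray rather than a plane, so taking the nearest intersection among the candidate meshes resolves the ambiguity mentioned above:

import numpy as np

def screen_point_to_ray(px, py, viewport_w, viewport_h, inv_view_proj):
    """Unproject a screen pixel into a world-space picking ray (origin, direction)."""
    # Pixel -> normalised device coordinates in [-1, 1], with y flipped.
    ndc_x = (px / viewport_w) * 2.0 - 1.0
    ndc_y = 1.0 - (py / viewport_h) * 2.0
    near = inv_view_proj @ np.array([ndc_x, ndc_y, 0.0, 1.0])  # point on near plane
    far = inv_view_proj @ np.array([ndc_x, ndc_y, 1.0, 1.0])   # point on far plane
    near, far = near[:3] / near[3], far[:3] / far[3]           # perspective divide
    direction = far - near
    return near, direction / np.linalg.norm(direction)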