Basically, I am trying to edit not only the appearance of the button (that's the easy part) but also the frame that detects the touch, so that it is a rhombus rather than a square.
HTML example: http://irwinproject.com
I've tried CGAffineTransform, however it doesn't allow me to make non-rectangular objects. Is there a way to skew the touch area?
I'm just wondering if this is possible; if not, could someone point me in a direction? The only viable answer I've found is resorting to something along the lines of SpriteKit.
I found this, however this implementation leaves dead spots on the buttons where they overlap:
custom UIButton with skewed area in iPhone
Is there a way to message people on here? There was a gentleman who said he figured out how to do the transform but never posted his solution.
In order to modify the touch area of an oddly shaped button you could use a solution similar to OBShapedButton. I have used this particular project in the past myself for adjacent hexagon buttons and it worked perfectly. That said, you may have to modify it a bit to work with drawn shapes instead of images.
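If pulling in a library feels like overkill, the same idea can be sketched by hand: subclass UIButton and override pointInside:withEvent: so the button only claims touches that fall inside a path describing its shape. The sketch below assumes a diamond inscribed in the button's bounds; RhombusButton is just an illustrative name, and you would adapt the path to your actual rhombus.

```objc
// RhombusButton.m — a minimal sketch of shape-based hit testing.
// Assumes the touchable rhombus is a diamond inscribed in the button's bounds.
#import <UIKit/UIKit.h>

@interface RhombusButton : UIButton
@end

@implementation RhombusButton

- (UIBezierPath *)rhombusPath {
    CGRect b = self.bounds;
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:CGPointMake(CGRectGetMidX(b), CGRectGetMinY(b))]; // top
    [path addLineToPoint:CGPointMake(CGRectGetMaxX(b), CGRectGetMidY(b))]; // right
    [path addLineToPoint:CGPointMake(CGRectGetMidX(b), CGRectGetMaxY(b))]; // bottom
    [path addLineToPoint:CGPointMake(CGRectGetMinX(b), CGRectGetMidY(b))]; // left
    [path closePath];
    return path;
}

// Only report a hit when the touch falls inside the rhombus, so adjacent
// skewed buttons whose rectangular frames overlap don't steal each other's touches.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    return [[self rhombusPath] containsPoint:point];
}

@end
```

OBShapedButton does something similar, except it samples the alpha of the button's image at the touch point instead of testing against a fixed path.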
I've been trying to find the piece of code that draws, or initiates the drawing of, the double black arrow visual indicators that show up when a transform rotate is executed by pressing the R key (or a resize with the S key).
I've been stepping through the code of the Rotate operator, various drawing functions, etc., with no success. I suppose I do not have a good enough picture of the code structure.
I would appreciate it very much if someone could point me in the right direction.
Does someone know at least the right terminology to look for?
I'm using Blender 2.76 but I suppose insight into any version would be helpful.
(What I'm trying to do is locate the point in the code where the decision is made whether or not to draw the indicator. I explained the "problem" in this question. The goal is to get it to show always.)
I have finally found the place, not by stepping through the code but by browsing it, lol!
The function that draws the indicators is drawHelpline(), and the check for the region being 'WINDOW' is done in helpline_poll(); both are in the transform.c file.
The actual decision is made in wm_paintcursor_draw() from the wm_draw.c file, which calls helpline_poll() indirectly via pc->poll(C).
wm_paintcursor_draw() is called by wm_method_draw_triple(), which in turn is called from wm_draw_update(), which is called from WM_main().
That answers my question.
However, that does not solve my actual problem, because the active subwindow in these functions is the region from which the operator was executed - in my case the ToolShelf! This is because cursor_warp(), which I use to move the mouse in my operator, changes only the mouse pointer position and does not update anything else (i.e. it does not update the active subwindow).
So, if I force helpline_poll() to return 1, it will draw the indicator only over the ToolShelf.
One solution is to hack WM_cursor_warp() from wm_window.c to set win->screen->subwinactive to the correct window id, but that is really an ugly hack and not directly related to the question I asked here.
A better solution is to use a modal timer operator to allow Blender to update the active subwindow, as explained here.
Starting with an arbitrary rectangle, a user can place any number of circles within.
The circles are allowed to overlap each other without restrictions.
The circles can be of different sizes.
What would be the best way to test if the rectangle is completely covered by the circles?
It seems like a very tricky algorithm, but fortunately somebody has thought about it before :)
Check this question:
https://cs.stackexchange.com/questions/11163/circles-covering-a-rectangular-how-to-verify-it
It seems to address the same problem as yours.
I eventually found that the simplest solution (for me anyway), in both JS and Objective-C, was to simply iterate over each pixel and check its colour (assuming the circles are coloured). As soon as a pixel is found whose colour is not that of a circle (or its border), the iteration stops, since the area is obviously not fully covered by the shapes.
The advantage to this solution is that the actual shape doesn't matter (we ended up adding other shapes also).
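For what it's worth, a rough Objective-C sketch of that pixel scan might look like the following. It assumes the circles have been drawn over a white rectangle and rendered into a UIImage; isRectangleFullyCovered() is just an illustrative helper name, not an existing API.

```objc
// A sketch of the pixel-scan approach: any remaining white (background)
// pixel means the rectangle is not fully covered by the circles.
#import <UIKit/UIKit.h>

static BOOL isRectangleFullyCovered(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image into an RGBA bitmap we can inspect byte by byte.
    size_t bytesPerRow = width * 4;
    uint8_t *pixels = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    BOOL covered = YES;
    for (size_t y = 0; y < height && covered; y++) {
        for (size_t x = 0; x < width; x++) {
            uint8_t *p = pixels + y * bytesPerRow + x * 4;
            // A white pixel is uncovered background, so we can stop early.
            if (p[0] > 250 && p[1] > 250 && p[2] > 250) {
                covered = NO;
                break;
            }
        }
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return covered;
}
```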
Let's say I have a solid, irregularly shaped (but enclosed) shape on screen in iOS (one colour). I then want to "erase" portions of that shape by dragging my finger around like you would in a typical kids colouring app, erasing with a fixed brush size where I touch the screen.
I could easily accomplish all this with something like an image mask and touch detection; however, as a requirement, I also need to determine the rough percentage of the shape that remains.
For example I need to know when 50% of the random enclosed shape has been "erased".
What's the best way of approaching this problem? Are there any existing iOS-compatible libraries that can handle it? I'm thinking that I would need to keep track of a ton of polygons and calculate all the overlaps, but it seems like there must be a simpler solution to this problem.
EDIT: I have done research into this problem; however, tracking all the polygons manually and calculating all their positions and area overlaps seems overly complicated. I was simply wondering if anyone else has run into a similar issue and found a better solution.
You will first need to know the fixed size of the image view. Then you will need to know what proportion of it is filled in when the image is first loaded:
double percentageFilledIn = ((double)nonBlankPixelCount/totalpixels);
After you get that value, you will need to use that percentage as your baseline for the initial fill.
Your new calculation will look like this:
double percentageOfImageLeft = ((double)nonBlankPixelCount / totalpixels) / percentageFilledIn;
This calculation will likely be processor intensive, so I would only run it sparingly.
Since this post is not so much about code as about logic, I will let you determine your own logic for detecting non-blank pixels.
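That said, here is a rough sketch of one way the counting could be done. It assumes the shape is rendered into a UIImage and that "erased" areas are fully transparent; countNonBlankPixels() is an illustrative name, not an existing API.

```objc
// A minimal sketch: count the pixels that still have any opacity.
#import <UIKit/UIKit.h>

static NSUInteger countNonBlankPixels(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Render into an alpha-only bitmap; one byte per pixel keeps the scan cheap.
    uint8_t *alpha = calloc(width * height, 1);
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width,
                                             NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    NSUInteger nonBlank = 0;
    for (size_t i = 0; i < width * height; i++) {
        if (alpha[i] > 0) {
            nonBlank++;  // still part of the shape, i.e. not yet erased
        }
    }

    CGContextRelease(ctx);
    free(alpha);
    return nonBlank;
}
```

Plugging the returned count into the two formulas above gives you the baseline and then the percentage of the shape that remains.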
Here is how to find a pixel color:
How to get Coordinates and PixelColor of TouchPoint in iOS/ObjectiveC
Good luck.
I'm trying to customize a UISegmentedControl to use a custom image for each segment. I've done a lot of searching, but haven't had any luck with the solutions I've tried so far. This is the most recent post I can find, which is fairly out of date by now and still seems pretty hacky. Are there any better or more recent guides on how to do this?
Thanks
Unfortunately, UISegmentedControl doesn't make it easy to set a separate background image for each segment. If your control is always a known width, you might be able to make a full-size background image with the three segments drawn in, like this: (yellow][green][red) (where parentheses represent rounded corners), and then use -[UISegmentedControl setBackgroundImage:forState:barMetrics:] to set your image.
However, that solution isn’t very flexible if you want to resize the control later. You might be better off faking it with three adjacent UIButtons, or even subclassing UIControl to make a custom segmented control which can have a separate image for each segment.
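As a rough illustration of the first approach, something like the following could work, assuming a fixed-width control and a pre-drawn asset named "segments-background" that already contains all three coloured segments side by side (both the asset name and sizes are placeholders):

```objc
// e.g. inside a view controller's viewDidLoad
UISegmentedControl *control =
    [[UISegmentedControl alloc] initWithItems:@[@"One", @"Two", @"Three"]];
control.frame = CGRectMake(0, 0, 300, 32); // must match the background image's size

// One full-width image supplies the look of all three segments at once.
[control setBackgroundImage:[UIImage imageNamed:@"segments-background"]
                   forState:UIControlStateNormal
                 barMetrics:UIBarMetricsDefault];

// Use an empty divider image so the segment edges baked into the
// background show through instead of the default separators.
[control setDividerImage:[UIImage new]
     forLeftSegmentState:UIControlStateNormal
       rightSegmentState:UIControlStateNormal
              barMetrics:UIBarMetricsDefault];
```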
Hi, I'm thinking about making a MIDI step sequencer, and I need to make a note grid/matrix that resizes/adapts when you zoom. I've been searching for different ways of doing this but can't figure out an approach that works well.
I thought about drawing cell objects made with NSRect, but I couldn't figure out how to get the right interaction when resizing.
This is my first "biggish" Objective-C project, so please don't kill me; I'm still battling with the frameworks and the syntax is still quite foreign to me.
You could use Core Animation layers to create your grid.
Take a look at Apple's Geek Game Board sample code project:
http://developer.apple.com/library/mac/#samplecode/GeekGameBoard/Introduction/Intro.html
The code shows a way to display different kinds of card/board games using CALayer.
The Checkers game looks to be the closest to the grid you want to create.
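As a very rough sketch of that idea, a layer-backed AppKit view could build its grid out of CALayers like this (the NoteGridView name, sizes, and colours are placeholders, not from the sample project):

```objc
// NoteGridView.m — a minimal sketch of a CALayer-based note grid.
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

@interface NoteGridView : NSView
@end

@implementation NoteGridView

- (void)awakeFromNib {
    [super awakeFromNib];
    self.wantsLayer = YES;

    const NSInteger rows = 8, columns = 16;
    const CGFloat cellSize = 24.0, spacing = 2.0;

    // One CALayer per grid cell; each layer can later be recoloured
    // or tagged to represent an active note.
    for (NSInteger row = 0; row < rows; row++) {
        for (NSInteger col = 0; col < columns; col++) {
            CALayer *cell = [CALayer layer];
            cell.frame = CGRectMake(col * (cellSize + spacing),
                                    row * (cellSize + spacing),
                                    cellSize, cellSize);
            cell.backgroundColor = [NSColor darkGrayColor].CGColor;
            cell.cornerRadius = 3.0;
            [self.layer addSublayer:cell];
        }
    }
}

// Zooming can then be done by scaling the sublayers instead of
// recreating every cell, e.g. from a magnification gesture.
- (void)setZoom:(CGFloat)zoom {
    self.layer.sublayerTransform = CATransform3DMakeScale(zoom, zoom, 1.0);
}

@end
```

The appeal of this approach is that Core Animation handles the redrawing and scaling for you, so zooming stays smooth even with a few hundred cells.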