Image resizing using approximation

I've been given a task to do image resizing using approximation. The problem is that I have no idea how it works. The previous task was to do this using interpolation, and everything was clear there. Now I need to use approximation. For example, if I have a 512x512 picture and take every other pixel, I get a picture half the size in each dimension. Now I would like to use that 256x256 picture to get back to my original one, but I have no idea how to reconstruct that data using approximation. Any help will be appreciated :)
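To make the setup concrete, here is a minimal Java sketch of what I mean (the names and types are mine; the replication step is just a naive placeholder for whatever approximation method is supposed to fill the pixels back in):

// Assumes an 8-bit grayscale image stored as int[rows][cols].
public class ResizeSketch {

    // The "every other pixel" step: 512x512 -> 256x256.
    static int[][] decimate(int[][] src) {
        int h = src.length / 2, w = src[0].length / 2;
        int[][] dst = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y][x] = src[2 * y][2 * x]; // keep every other pixel
        return dst;
    }

    // Naive reconstruction: each kept sample fills a 2x2 block.
    // A real approximation scheme (e.g. fitting a smooth function
    // through the samples) would replace this step.
    static int[][] replicate(int[][] src) {
        int h = src.length * 2, w = src[0].length * 2;
        int[][] dst = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y][x] = src[y / 2][x / 2];
        return dst;
    }
}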

How can I render 19451 circles on a React Native map efficiently?

I have 19451 points, exported as coordinates in a JSON file. I am trying to render them efficiently on the map as circles. How can I achieve this? It is the first time I am using https://github.com/react-native-maps/react-native-maps with expo, so I am not that experienced with map services. I don't even know where to start. I was thinking of rendering the points dynamically, based on whether a point falls inside the region of the map currently shown on the screen, although I have no idea how to actually achieve this. The first thing I tried, obviously, was to render them all at once: it takes ages and is very buggy!
You have several options:
Use some kind of clustering when there are multiple circles in the same area, for example when you're zoomed out. Have a look at react-native-maps-clustering. Performance-wise it is decent enough, but it may lag on older devices.
Past a certain zoom level, you can limit the number of circles you draw; they presumably overlap anyway. When the limit has been reached, you can display a warning to let users know that the number of circles was capped and that they should zoom in. From my experience, drawing at most 50 custom markers was the upper limit to avoid lag on older devices. With circles, that limit might be different.
Manually filter your data and decide whether each circle belongs to the current viewport (the visible part of the map) or not; see the sketch below.
Sharing some code would help me give you more specific hints.
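To make the third option concrete, here is a rough sketch of the viewport check. I'm sketching it in Java, but the logic ports one-to-one to JavaScript; the Region fields mirror react-native-maps' region object (latitude, longitude, latitudeDelta, longitudeDelta), which onRegionChangeComplete hands you.

import java.util.ArrayList;
import java.util.List;

record Point(double latitude, double longitude) {}
record Region(double latitude, double longitude,
              double latitudeDelta, double longitudeDelta) {}

class ViewportFilter {
    // Returns only the points inside the currently visible region.
    // Note: ignores longitude wrap-around at +/-180 for simplicity.
    static List<Point> visible(List<Point> points, Region r) {
        double minLat = r.latitude() - r.latitudeDelta() / 2;
        double maxLat = r.latitude() + r.latitudeDelta() / 2;
        double minLon = r.longitude() - r.longitudeDelta() / 2;
        double maxLon = r.longitude() + r.longitudeDelta() / 2;
        List<Point> out = new ArrayList<>();
        for (Point p : points) {
            if (p.latitude() >= minLat && p.latitude() <= maxLat
                    && p.longitude() >= minLon && p.longitude() <= maxLon) {
                out.add(p);
            }
        }
        return out;
    }
}

Re-running this filter on every region change and rendering only the returned subset keeps the number of mounted circles small.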

Special Kind of ScrollView

So I have my game, made with SpriteKit and Obj-C. I want to know a couple of things.
1) What is the best way to make scroll-views in SpriteKit?
2) How do I get this special kind of scroll-view to work?
The kind of scroll-view I'd like to use is one that, without prior knowledge, seems like it could be pretty complicated. You scroll through the objects in it, and as they get close to the center of the screen, they get larger. As they're scrolled away from the center of the screen, they get smaller and smaller until they hit a limit and stop shrinking. A corresponding limit applies to how large they can get as they approach the center of the screen.
Also, I should probably note that I have tried a few different cheap remakes of scroll views, like simply adding the objects to an SKNode and moving the SKNode's position relative to the finger's movement, but that is not what I want. If there is no real way to add a scroll-view to my game, here is what I'm asking instead: will I simply have to use some sort of formula? Make the images bigger when they get closer to a certain spot, and run that formula each time -touchesMoved is called? If so, what sort of formula would that be? Some math subtracting the node's position from the center of the screen and sizing it accordingly? Something like that? If that's the case, will you please give me such a formula, in code (possibly a full function)?
If ALL else fails, and there is no good way to do this, what would some other way be?
It is possible to use UIScrollViews with your SpriteKit scenes, but there's a bit of a workaround involved. My recommendation is to take a look at this GitHub project; it is what I based my UIScrollView on in my own projects. From the looks of it, most of the code has now been converted to Swift rather than the Objective-C it used when I first found the project, so I don't know how that will fare with you.
The project linked above would result in your SKScene being larger than the screen (I assume that is why it needs to be scrolled), so determining what is and is not close to the center of the scene won't be difficult. One thing you can do is use SpriteKit's update loop to constantly update the size of sprites (perhaps just those on-screen) based on their distance from a fixed, known center point. For instance, if you have a screen of width and height 10, the midpoint is (x, y) = (5, 5). You could then say that size = 1.0 - (2 * distance_from_midpoint). At the midpoint the size is 1.0 (1.0 - (2 * 0)), and the farther away you get, the smaller your scale becomes. This is a crude example: it does not account for a fixed maximum or minimum size (note that with a multiplier of 2 the scale goes negative beyond a distance of 0.5), so you will need to tune it.
Good luck with your project.
Edit:
Alright, I'll go a bit out of my way here and help you out with the equation, although mine still isn't perfect.
Now, this doesn't really give you a minimum scale, but it does give you a maximum one (basically at the midpoint). The equation does have some flaws, though. For one, you might use it to compute separate x and y scales for your objects based on their distance from the midpoint. However, you don't really want two different components to your scale. What if your sprite is right next to the x midpoint, and the x scale comes out to 0.95? That's almost full size. But if the sprite is far from the midpoint on the y axis, and that gives you a y scale of, say, 0.20, then you have a problem.
To solve that, I take the magnitude (hypotenuse) of the vector between the midpoint and the sprite's current coordinate. That hypotenuse gives me a single number representing the true distance, which eliminates the clash between the two scale values.
I've made an example of how to calculate this in the Go Playground, so you can run the code and see what scales you get for different coordinates. The equation used there is slightly modified: it is basically the same as above, but without the "maxScale -" part at the front.
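In case the playground link goes stale, here is the same clamped-distance idea restated in Java; maxScale, minScale and falloff are illustrative constants you would tune for your scene.

public class ScaleDemo {
    static double scaleFor(double x, double y, double midX, double midY) {
        double maxScale = 1.0;   // size at the midpoint
        double minScale = 0.2;   // sprites never shrink below this
        double falloff = 0.002;  // scale lost per unit of distance
        // Hypotenuse = true distance, avoiding the per-axis clash above.
        double distance = Math.hypot(x - midX, y - midY);
        return Math.max(minScale, maxScale - falloff * distance);
    }

    public static void main(String[] args) {
        System.out.println(scaleFor(160, 240, 160, 240));   // 1.0 at the midpoint
        System.out.println(scaleFor(0, 0, 160, 240));       // ~0.42 farther out
        System.out.println(scaleFor(1000, 1000, 160, 240)); // clamped to 0.2
    }
}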
Hope this helps out!
See this code in play.golang.org.

"[img]" tag formatting for x% simple visualization code and full link to directly load in 1:1 scale

I just want to know a simple addition to the [img] tag to get the image shown at x% of its original size.
I was thinking about [img width="50%" height="50%"], but that doesn't make any sense. Instead of a huge chunk of code, what could I place in the brackets to show the image at an x% scale?
Is that PHP code?
And the other question: I've seen a little "tag" you could place in the direct link that loads an image at its full resolution or at a lower size. I tried to find it on the web, but I don't know how to describe it specifically enough in my search. For example, I'd paste something like http://example.com/image.jpg=fullscreen and the image would load across the entire screen instead of requiring a click to zoom in. Or http://example.com/image.jpg=half to show it at half resolution.

Detecting what percent of a drawn image has been "erased"

Let's say I have a solid, irregular but enclosed shape on screen in iOS (one colour). I then want to "erase" portions of that shape by dragging my finger around, like you would in a typical kids' colouring app, erasing with a fixed brush size wherever I touch the screen.
I could easily accomplish all of this with something like an image mask and touch detection; however, as a requirement, I also need to determine the rough percentage of the shape that remains.
For example I need to know when 50% of the random enclosed shape has been "erased".
What's the best way of approaching this problem? Are there any existing iOS-compatible libraries that can handle it? I'm thinking that I would need to keep track of a ton of polygons and calculate all the overlaps, but it seems like there must be a simpler solution to this problem.
EDIT: I have done research into this problem; however, tracking all the polygons manually and calculating all their positions and area overlaps seems overly complicated. I was simply wondering if anyone else has run into a similar issue and found a better solution.
You will first need to know the fixed size of the image view. Then you will need to know the percentage of blank space when the image is first loaded, from its pixel counts:
double percentageFilledIn = (double)nonBlankPixelCount / totalPixels;
After you get that value, you will need to use that percentage as your baseline for the existing fill.
Your new calculation will look like this:
double percentageOfImageLeft = ((double)nonBlankPixelCount / totalPixels) / percentageFilledIn;
This calculation will likely be processor intensive, so I would only run it sparingly.
Since this post is not about code and more about logic, I will let you determine your own way of detecting non-blank pixels.
Here is how to find a pixel's color:
How to get Coordinates and PixelColor of TouchPoint in iOS/ObjectiveC
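If you do want a starting point for the counting itself, here is the shape of it sketched in Java; on iOS the same loop would run over a CGBitmapContext's raw byte buffer, and treating "blank" as fully transparent is an assumption you may need to adjust.

import java.awt.image.BufferedImage;

class ErasedPercent {
    // Fraction of the view's pixels that are still visible (alpha != 0).
    static double percentageFilledIn(BufferedImage img) {
        int total = img.getWidth() * img.getHeight();
        int nonBlank = 0;
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int alpha = (img.getRGB(x, y) >>> 24) & 0xFF;
                if (alpha != 0) nonBlank++;
            }
        }
        return (double) nonBlank / total;
    }
}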
Good luck.

java3d simple way to translate object

I am making my first program using Java3D. I have set up some TransformGroups that I now need to move in calculated directions. When I looked this up, I found interpolators and alpha objects and waveforms, and couldn't understand a word of it. I have done this in the past in OpenGL using simple vectors and frame refreshes. Is there a similarly simple way in Java3D? Thanks.
There's no reason you couldn't do it with vectors and frame refreshes in Java3D as well.
A simple way would be to attach a behavior to the scene graph with a WakeupOnElapsedFrames(0) condition, and then have it update the needed transform every frame; see the sketch below.
At its simplest, that is what the interpolators are doing for you. Once you get this working, it will probably make more sense how you could do the same thing with interpolators.
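Here is a minimal sketch of that behavior, assuming the classic javax.media.j3d packages; the class and field names are illustrative, not from your code.

import java.util.Enumeration;
import javax.media.j3d.Behavior;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.media.j3d.WakeupOnElapsedFrames;
import javax.vecmath.Vector3f;

public class FrameTranslateBehavior extends Behavior {
    private final TransformGroup target;
    private final Vector3f velocity;  // your calculated direction per frame
    private final Vector3f position = new Vector3f();
    private final Transform3D t3d = new Transform3D();
    private final WakeupOnElapsedFrames everyFrame = new WakeupOnElapsedFrames(0);

    public FrameTranslateBehavior(TransformGroup target, Vector3f velocity) {
        this.target = target;
        this.velocity = velocity;
    }

    @Override
    public void initialize() {
        wakeupOn(everyFrame);            // arm for the first frame
    }

    @Override
    public void processStimulus(Enumeration criteria) {
        position.add(velocity);          // step the position
        t3d.setTranslation(position);
        target.setTransform(t3d);        // needs ALLOW_TRANSFORM_WRITE
        wakeupOn(everyFrame);            // re-arm for the next frame
    }
}

Remember to give the behavior scheduling bounds (for example a large BoundingSphere via setSchedulingBounds) and to set the TransformGroup's ALLOW_TRANSFORM_WRITE capability before the branch goes live, or nothing will move.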