How to correctly create a thumbnail picker with a UISlider for selection from a video? (Objective-C)

Hi, it's a simple question about a hard problem (for me, so far anyway).
I don't consider myself a beginner programmer, but for the love of god I cannot figure this one out. My problem is that a UIImageView is not being updated when my slider moves left and right. The app does slow down a little while I drag, which tells me work is happening and that the interface is connected correctly to its methods. What I'm doing is retrieving image data for a frame at a specific time, so it can be selected as a thumbnail depending on the position of the UISlider; in other words, a manual thumbnail picker.
I have tried many things, both by connecting it through Interface Builder and by doing it programmatically.
This is what I have done so far:
.h file
.m file
the sliderValueChanged method that gets called
and finally the class method I use to retrieve a thumbnail image as NSData, passing in a video and a specified time position.
I have read here on Stack Overflow that updating a UIImageView regularly can cause memory leaks, since [UIImage imageNamed:] caches images, and that [UIImage imageWithData:] should be used instead to avoid them. I have implemented this in my code, yet my thumbnail view still fails to load the images based on the slider's position. (The slider's min and max values are set from 0 to the duration of the video, so that the slider can only ever hold a value that picks a valid time in the video in question.)
If anyone could guide me toward fixing this problem, I would appreciate the help; it has been beyond me for hours now. Thank you.
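For reference, here is a minimal sketch of the kind of helper I mean, assuming AVAssetImageGenerator does the frame grab (names here are illustrative rather than my exact code):

#import <AVFoundation/AVFoundation.h>

// Sketch: return image data for one frame of a video at a given time.
+ (NSData *)thumbnailDataForVideoURL:(NSURL *)videoURL atTime:(CMTime)time {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
    AVAssetImageGenerator *generator =
        [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES; // respect video orientation

    NSError *error = nil;
    CMTime actualTime;
    CGImageRef cgImage = [generator copyCGImageAtTime:time
                                           actualTime:&actualTime
                                                error:&error];
    if (cgImage == NULL) return nil;

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // "copy" methods follow the Create/Copy ownership rule
    return UIImagePNGRepresentation(image);
}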

I realised what was happening here: the slider values I had passed into my method weren't appropriate for what I required.
The class method calls CMTimeMake(value1, value2), and after doing some research I understood how it works.
The two values are a numerator and a denominator. value2 is the timescale you specify (in my case 60, based on some code I copied), and whatever you substitute for value1 is counted in units of that timescale. So 1/60 equals one 60th of a second, and 2/60 equals the time position of two 60ths of a second. If I wanted 3 seconds I would need 60 * 3 = 180, since 180/60ths of a second equates to 3 seconds in total. Because my slider values were mapped to the duration of the video, whose maximum was 15 seconds, the highest time frame I could get using the code in my question was 15/60ths of a second, which is barely any different from the first 60th of a second; hence I could not see the UIImageView being updated.
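A quick illustration of that arithmetic:

#import <CoreMedia/CoreMedia.h>

// CMTime is a rational number: value / timescale seconds.
CMTime oneSixtieth  = CMTimeMake(1, 60);   // 1/60   = one 60th of a second
CMTime twoSixtieths = CMTimeMake(2, 60);   // 2/60   = two 60ths of a second
CMTime threeSeconds = CMTimeMake(180, 60); // 180/60 = 3 seconds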
To correct this, I multiplied my slider values by 60, since they were mapped to seconds, and of course the image then updated like a charm. However, I now need to figure out how to speed this process up, since at the moment it appears to be synchronous and it lags the interface greatly.
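Something along these lines should keep the UI responsive; this is a sketch assuming the work is done by an AVAssetImageGenerator stored in a hypothetical self.generator property (with a self.thumbnailView outlet), using the asynchronous generation API:

- (IBAction)sliderValueChanged:(UISlider *)slider {
    // Slider value is in seconds; scale by the 60-unit timescale as above.
    CMTime time = CMTimeMake((int64_t)(slider.value * 60.0), 60);
    NSValue *timeValue = [NSValue valueWithCMTime:time];

    [self.generator cancelAllCGImageGeneration]; // drop stale requests while dragging
    [self.generator generateCGImagesAsynchronouslyForTimes:@[timeValue]
                                         completionHandler:^(CMTime requestedTime,
                                                             CGImageRef image,
                                                             CMTime actualTime,
                                                             AVAssetImageGeneratorResult result,
                                                             NSError *error) {
        if (result == AVAssetImageGeneratorSucceeded) {
            UIImage *thumb = [UIImage imageWithCGImage:image];
            dispatch_async(dispatch_get_main_queue(), ^{
                self.thumbnailView.image = thumb; // UI work back on the main thread
            });
        }
    }];
}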

Related

Adding many actors to the stage

I am creating a project to play with some sort of simulation. I create a map, which is actually a grid of cells; each cell is two actors: one background and one icon that shows the cell type (forest, mountain, person, etc.).
Here is how it looks:
Everything works just fine, but when I try to increase the grid from 20x20 to 100x100 cells it takes about 20-30 seconds to load. It doesn't seem to lag after it loads, so it runs just fine, but now the question: is there a way to optimize the loading time, or is it impossible?
Today's systems should easily be able to handle as few as 100x100 cells. I guess your problem is somewhere in your code.
Some common mistakes are:
Creating new objects in your render method (with the "new" keyword) instead of reusing your objects
Loading your images every time you render them
Maybe you can add some code from your render method to the question; without any code it's hard to see the problem. The second mistake and its fix are sketched below.
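The original question is about a Java game engine, but the principle is the same in any render loop. Here it is sketched in Objective-C, with drawRect: standing in for render() and cachedIcon/iconPath as hypothetical properties:

// Render method called every frame.
- (void)drawRect:(CGRect)rect {
    // Bad: [UIImage imageWithContentsOfFile:self.iconPath] here would
    // re-read and re-decode the file on every single frame.

    // Good: load once, cache in a property, reuse on every later frame.
    if (self.cachedIcon == nil) {
        self.cachedIcon = [UIImage imageWithContentsOfFile:self.iconPath];
    }
    [self.cachedIcon drawAtPoint:CGPointZero];
}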

iOS framerate - always 30fps with UIImage?

I want to create a frame-by-frame animation with an array of images.
There is an animationDuration to set; the standard value is 1.
Can I be sure that it is always 30 fps on every iOS device when I start the animation with startAnimating?
So I need exactly 15 images for a 1-second animation - is there a special calculation I can use when I have more or fewer than 15 images that have to be animated in exactly one second?
E.g. 60/15 * (number of images in the array)
I don't think you can count on UIImageView's animationImages to give you 30 FPS everywhere and every time.
Indeed, this is meant for very simple and "opportunistic" animations. If you google for it, you will find reports of that method hogging a device (in specific conditions). On the other hand, if your images are small, then chances are that it could work, but you get no guarantees (nor ways to enforce the FPS you need).
So I need exactly 15 images for a 1-second animation - is there a special calculation I can use when I have more or fewer than 15 images that have to be animated in exactly one second?
If I understand your question right, then you can try with:
duration = number_of_images / FPS; // e.g., 60 images / 30 FPS = 2 seconds
Of course, whether the device will then properly show the 60 images at 30 FPS is another story.
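Applied to UIImageView, the calculation would look something like this (frames is an assumed NSArray of UIImage objects):

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
imageView.animationImages = frames;
imageView.animationDuration = frames.count / 30.0; // duration = images / FPS
imageView.animationRepeatCount = 0;                // 0 means repeat forever
[imageView startAnimating];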

How do I calculate fps in a simple Direct2D app?

Hey guys, and thanks for looking. I have built the simple D2D app from MSDN, available here. Now, I want to draw some primitives and add an fps counter.
I have an OnRender() event, where I draw the rectangles and so on. I also have a call to RenderTextInfo() where I call RenderTarget->DrawText. Where do I add the logic for counting the number of frames per second?
Thanks much.
I don't know the exact Direct2D stuff, but this might help.
Basically, you have two choices. Either you update the framerate when you draw a frame, or each second (or any other time interval).
If you count it when you draw a frame, you can simply get the current time when you draw a frame, and subtract from it the time you drew the last frame. That gets you the time spent drawing this frame. The reciprocal of that (i.e. 1/x) is the framerate.
If you count it at a regular time interval, you need to have some event firing at every interval that checks how many frames were drawn since the last time that event fired. Divide that by your interval (if it's one second, you don't need to divide, of course) and that's your fps count. Don't forget to increment some counter every time you draw a frame.
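Both variants, sketched in plain C (now_seconds() is a placeholder for whatever high-resolution clock your platform provides, e.g. QueryPerformanceCounter on Windows):

// Hypothetical monotonic clock returning seconds as a double.
extern double now_seconds(void);

// Variant 1: update per frame. Call once from OnRender();
// FPS is the reciprocal of the time between consecutive frames.
static double g_lastFrame = 0.0;
double FPSPerFrame(void) {
    double now = now_seconds();
    double fps = (g_lastFrame > 0.0) ? 1.0 / (now - g_lastFrame) : 0.0;
    g_lastFrame = now;
    return fps;
}

// Variant 2: update per interval. Call once from OnRender();
// recomputes FPS roughly once per second from a frame counter.
static int    g_frames = 0;
static double g_intervalStart = 0.0;
static double g_fps = 0.0;
void FPSPerInterval(void) {
    double now = now_seconds();
    if (g_intervalStart == 0.0) g_intervalStart = now;
    g_frames++;
    if (now - g_intervalStart >= 1.0) {
        g_fps = g_frames / (now - g_intervalStart); // frames / elapsed seconds
        g_frames = 0;
        g_intervalStart = now;
    }
}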

How to set/correct the resting position of UIScrollView contents after it has scrolled

I am implementing a view similar to a table view, containing rows of data. What I am trying to do is that after scrolling, each row snaps to a set of correct positions so the boundaries of the top and bottom rows are completely visible, not clipped as normally happens. Is there a way to get the scroll destination before the scrolling starts? That way I would be able to correct the final y-position, for example to multiples of the row height.
I asked the same question a couple of weeks ago.
There is definitely no public API to determine the final resting Y offset of a scroll deceleration. After researching it further, I wasn't able to figure out Apple's formula for how they manage deceleration. I gathered a bunch of data from scrolling events, recording the beginning velocity and how far the deceleration traveled, and from that made some rough estimates of where it was likely to stop.
My goal was to predict well in advance where it would stop, and to convert the deceleration into a specific move to an offset. The problem with this technique is that scrollRectToVisible:animated: always occurs over a set period of time, so instead of the velocity the user expects from a flick gesture, it's either much faster or much slower, depending on the strength of the flick.
Another choice is to observe the deceleration and wait until it slows down to some threshold, then call scrollRectToVisible:animated:, but even this is difficult to get "just right."
A third choice is to wait until the deceleration completes on its own, check to see if it happened to stop at your desired offset multiple, and then adjust if not. I don't care for this personally, as you either coast to a stop and then speed up or coast to a stop and reverse direction.
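If you go with the third choice, the snap-back might look like this (kRowHeight is an assumed fixed row height):

static const CGFloat kRowHeight = 44.0; // illustrative row height

// UIScrollViewDelegate callback: fires when deceleration finishes on its own.
- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView {
    CGFloat y = scrollView.contentOffset.y;
    CGFloat snappedY = round(y / kRowHeight) * kRowHeight; // nearest row boundary
    if (snappedY != y) {
        [scrollView setContentOffset:CGPointMake(scrollView.contentOffset.x, snappedY)
                            animated:YES];
    }
}

For what it's worth, iOS 5 later added scrollViewWillEndDragging:withVelocity:targetContentOffset:, which exposes the predicted resting offset and lets you modify it before deceleration begins.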

cameraViewTransform and CGAffineTransformMakeScale

I'm trying to implement a digital zoom in an application, and I use the following line to change the zoom factor (it can be called many times while the camera interface is displayed):
picker.cameraViewTransform = CGAffineTransformMakeScale(zoomFactor, zoomFactor);
It works perfectly the first time I display the camera interface, but not after that; the transform used by the camera is not the transform I set. Any ideas?
Not sure I understand exactly what you are doing, but I can tell you that transforms do not accumulate unless you feed the existing transform back in each time.
For example, say you have a transform that rotates an object 45 degrees and you want to use it to spin the object. The first time you call it, it rotates the object 45 degrees, but it doesn't rotate it any further on subsequent calls. That is because you're just setting the exact same transform over and over; a 45-degree transform is always the same.
To make the object spin, you apply the 45-degree rotation, then take the resulting transform and rotate that by another 45 degrees, then take the result of that and rotate it again, and so on.
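In code, the difference looks like this (view stands in for whatever is being transformed):

// Absolute: always "45 degrees from identity", so repeating it changes nothing.
view.transform = CGAffineTransformMakeRotation(M_PI_4);

// Accumulating: each call adds another 45 degrees to the current transform.
view.transform = CGAffineTransformRotate(view.transform, M_PI_4);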
You need to do something like:
picker.cameraViewTransform = CGAffineTransformScale(picker.cameraViewTransform, zoomFactor, zoomFactor);
That way, your transforms will accumulate and you can zoom up and down.
This isn't so much an answer as a clue. Each time you bring the camera back to the front of the app (presumably using presentModalViewController:), a new transform is created at cameraViewTransform. The tricky thing is, it seems to take about a second for this process to complete, and I can find no delegate method to let us know exactly when the new transform is safely in place. In my app, I end up waiting for about one second and THEN modifying the cameraViewTransform to suit my needs. Hacky, but the only solution I've found so far...
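In code, that workaround looks roughly like this (the one-second delay is the hack; picker and zoomFactor as in the question):

[self presentModalViewController:picker animated:NO];

// Wait for the camera UI to install its own transform, then override it.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    picker.cameraViewTransform = CGAffineTransformMakeScale(zoomFactor, zoomFactor);
});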