Hey guys, and thanks for looking. I have built the simple D2D app from MSDN, available here. Now, I want to draw some primitives and add an FPS counter.
I have an OnRender() event, where I draw the rectangles and so on. I also have a call to RenderTextInfo() where I call RenderTarget->DrawText. Where do I add the logic for counting the number of frames per second?
Thanks much.
I don't know the exact Direct2D stuff, but this might help.
Basically, you have two choices: either you update the framerate every time you draw a frame, or you update it at a regular interval (each second, or any other interval you like).
If you count it when you draw a frame, you can simply get the current time when you draw a frame, and subtract from it the time you drew the last frame. That gets you the time spent drawing this frame. The reciprocal of that (i.e. 1/x) is the framerate.
If you count it at a regular time interval, you need to have some event firing at every interval that checks how many frames were drawn since the last time that event fired. Divide that by your interval (if it's one second, you don't need to divide, of course) and that's your fps count. Don't forget to increment some counter every time you draw a frame.
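The counting logic isn't Direct2D-specific, so here is a minimal sketch of both approaches, written in Swift just for illustration; the FPSCounter name and the timing function are my own choices, not anything from your MSDN sample. In your app you would call the equivalent of frameRendered() from OnRender() and pass the result to RenderTextInfo().

    import Foundation
    import QuartzCore  // CACurrentMediaTime()

    // Minimal FPS bookkeeping illustrating both approaches described above.
    // Nothing here is Direct2D-specific; names are made up for the example.
    final class FPSCounter {
        // Approach 1: per-frame delta
        private var lastFrameTime: CFTimeInterval?

        // Approach 2: count frames over a fixed interval
        private var frameCount = 0
        private var intervalStart = CACurrentMediaTime()
        private let interval: CFTimeInterval = 1.0

        // Call this once per rendered frame (the equivalent of the end of OnRender()).
        // Returns the instantaneous FPS; prints the averaged FPS once per interval.
        func frameRendered() -> Double? {
            let now = CACurrentMediaTime()

            // Approach 1: instantaneous FPS = 1 / (time since the previous frame)
            var instantaneous: Double?
            if let last = lastFrameTime {
                instantaneous = 1.0 / (now - last)
            }
            lastFrameTime = now

            // Approach 2: average FPS over the last full interval
            frameCount += 1
            if now - intervalStart >= interval {
                let average = Double(frameCount) / (now - intervalStart)
                print("average FPS over last interval: \(average)")
                frameCount = 0
                intervalStart = now
            }
            return instantaneous
        }
    }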
Related
So I've got this animated pie chart working now. It can indicate e.g. progress over time (similar to UIProgressView).
For legacy reasons I am still using it with a timer that fires approx. every second and increases progress. It should now be possible to get rid of this timer and set the overall duration of a pie animation e.g. to 1/2 hour instead of letting the timer fire 30 * 60 times and starting as many short incremental animations.
So my question is this: are there any good reasons that speak against using such long (say up to 1/2 hour long) animations in iOS? In the example of the pie chart no more than approx. 360 frames would be needed even over 1/2 hour.
There is a good reason against very long animations: memory.
CoreAnimation will create a presentationLayer for every frame (see for example your other question), and (at least up to iOS 7.1) it will allocate and initialize them in the background the moment you add the animation to the layer.
The frame rate depends on the device, not on the magnitude of the change of the animated property; moreover, there doesn't seem to be a way to tweak CoreAnimation's frame rate on iOS (while on OS X NSAnimation has a frameRate property). So if you animate progress (it would be the same with any property) and set a duration of 30 minutes, you will end up with a lot of wasted memory.
Some numbers. I scheduled some CABasicAnimations with the key path progress on your DZRoundProgressLayer, and added some logging in -initWithLayer:. This revealed that on the simulator, roughly 50 shadow copies (frames) are needed per second of animation.
This means 90K shadow copies are going to be created for 30 minutes: for several seconds after the beginning of the animation, CoreAnimation was still allocating the first thousands of copies. Adding some data payload to the instance variables of DZRoundProgressLayer showed the memory usage rising by several MB in the first seconds (then some memory management took over the unconstrained allocations, presumably freeing the old copies).
Is it a bad idea? It's a waste of resources, memory and CPU, even if your layer occupies only a few bytes in memory, considering that the change in the pie area per frame is too small to be noticed. Setting up an NSTimer or KVO doesn't require many lines of code, so it might be worth changing the approach.
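To make the alternative concrete, here is a rough sketch of the timer-based approach in Swift. It assumes a layer that exposes an animatable progress value (like the DZRoundProgressLayer mentioned above); the class and property names are placeholders, not anything from your project.

    import UIKit

    // Drives a long-running progress update with a timer instead of one
    // 30-minute CAAnimation, so CoreAnimation never has to build thousands
    // of presentation-layer copies up front.
    final class PieProgressDriver {
        private weak var progressLayer: CALayer?
        private var timer: Timer?
        private let totalDuration: TimeInterval
        private var elapsed: TimeInterval = 0

        init(layer: CALayer, totalDuration: TimeInterval) {
            self.progressLayer = layer
            self.totalDuration = totalDuration
        }

        func start(tickInterval: TimeInterval = 5.0) {
            timer = Timer.scheduledTimer(withTimeInterval: tickInterval, repeats: true) { [weak self] t in
                guard let self = self, let layer = self.progressLayer else {
                    t.invalidate()
                    return
                }
                self.elapsed += tickInterval
                let progress = min(self.elapsed / self.totalDuration, 1.0)
                // "progress" stands in for whatever animatable key the layer exposes.
                // If the layer defines an action for that key, each tick gets a short
                // implicit animation instead of one huge explicit one.
                layer.setValue(progress, forKey: "progress")
                if progress >= 1.0 { t.invalidate() }
            }
        }

        func stop() {
            timer?.invalidate()
            timer = nil
        }
    }

With a half-hour total duration, even a tick every few seconds gives a visually smooth pie, because the per-frame change is far below what the eye can notice anyway.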
Hi, it's a simple question about a hard problem (for me so far, anyway).
I don't consider myself a beginner programmer, but for the love of god I cannot seem to figure this one out. I currently have a problem with updating a UIImageView: it is not being updated when my slider moves left and right. Dragging the slider does slow the application down a little, which tells me processes are happening and that the interface is correctly connected to its methods. What I'm doing at the moment is trying to retrieve image data for a specific frame, specified by time, so I can show it as a thumbnail depending on the position of the UISlider. So it's a manual thumbnail picker.
I have tried many things, both by connecting it through Interface Builder and by doing it programmatically.
This is what I have done so far:
.h file
.m file
The slider method, sliderValueChanged, which gets called:
And finally my class method that I use to retrieve a thumbnail image, returning NSData, passing in a video and a specified time position:
I have read here on Stack Overflow that updating a UIImageView can cause memory leaks if it is updated regularly, since it caches images, and that you should use [UIImage imageWithData:] instead to avoid leaks, so I have implemented this in my code. Yet my thumbnail view still fails to load images based on the slider's position. (The slider's min and max values are set from 0 to the duration of the video, so the slider can only ever have a value that correctly picks a time frame in the video in question.)
If anyone could guide me on how to fix this problem I would appreciate the help; it has been beyond me for hours now. Thank you.
I realised what was happening here: the slider values I had passed into my method weren't appropriate for what I required.
The class method calls CMTimeMake(value 1, value 2), and after doing some research I understood how it works.
Basically, value 2 is a timescale you specify (in my case 60, based on some code I copied), and whatever you substitute for value 1 is counted in units of that timescale. They are a numerator and denominator, so 1/60 equals one 60th of a second, and 2/60 is the time position of two 60ths of a second. If I wanted 3 seconds I would need 60 * 3 = 180, because 180/60 equals 180 60ths of a second, which equates to 3 seconds in total. Because my slider values were mapped to the duration of the video, whose maximum was 15 seconds, the highest time frame I could get using the code I wrote in my question was 15/60ths of a second, which, time-frame-wise, is not really different from the first 60th of a second; hence the reason I could not see the UIImageView being updated.
So to correct this, I multiplied my slider values by 60 (since they were mapped to seconds), and of course the image updated like a charm. However, I now need to figure out how to speed up this process, since at the moment it appears to be synchronous and it lags the interface greatly.
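For anyone hitting the same lag: assuming the class method wraps AVAssetImageGenerator (the usual way to grab a frame at a CMTime), a sketch of an asynchronous version might look like this in Swift. The function and variable names here are illustrative, not the actual code from the question.

    import UIKit
    import AVFoundation

    // Picks a thumbnail for the slider's current position without blocking the UI.
    // Assumes the slider's range is 0...duration-in-seconds, as in the question.
    func updateThumbnail(for asset: AVAsset, sliderValue: Float, imageView: UIImageView) {
        let generator = AVAssetImageGenerator(asset: asset)
        generator.appliesPreferredTrackTransform = true

        // The slider is in seconds, so scale by the timescale (60) to build the CMTime:
        // CMTimeMake(value, timescale) == value / timescale seconds.
        let time = CMTimeMake(value: Int64(sliderValue * 60), timescale: 60)

        // Generate asynchronously so dragging the slider stays responsive.
        generator.generateCGImagesAsynchronously(forTimes: [NSValue(time: time)]) { _, cgImage, _, result, _ in
            guard result == .succeeded, let cgImage = cgImage else { return }
            DispatchQueue.main.async {
                imageView.image = UIImage(cgImage: cgImage)
            }
        }
    }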
I would like to know how I can get the diameter (or radius) of an expanding circle animation at any point in time during the animation. I will end up stopping the animation right after I get the size as well, but I figure I can't stop and remove it from the layer until I get the size of the circle.
For an example of how the expanding circle animation is implemented, it is a variation on the implementation shown in the addGrowingCircleAtPoint:(CGPoint)point method in the answer in the iPhone Quartz2D render expanding circle question.
I have tried checking various values on the layers, the animation, etc., but can't seem to find anything. I figure that, worst case, I can attempt a best guess by taking how far into its animation it currently is and using that to work out where it "should" be based on its to and from size states. This seems like overkill for what I would assume is a value that is incrementing someplace I can just read easily.
Update:
I have tried several properties on the presentation layer, including the transform, which never seems to change; the values are always the same regardless of what size the circle is at the time I check.
Okay, here is how you get the current state of an animation while it is animating.
While Rob was close, he left out two pieces of key information.
First, from layer.presentationLayer.sublayers you have to get the layer you are animating on, which for me is the only sublayer available.
Second, from this sublayer you cannot just access the transform directly; you have to use valueForKeyPath: to get transform.scale.x. I used x because it's a circle, so x and y are the same.
I then use this to calculate the size of the circle at the time of the check, based on the values used to create the arc.
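Roughly, in Swift, the two steps look like this (hostLayer and initialRadius are placeholder names; the radius math assumes the circle path was created at initialRadius and then scaled):

    import UIKit

    // Reads the in-flight scale of the animating circle, as described above.
    func currentCircleRadius(hostLayer: CALayer, initialRadius: CGFloat) -> CGFloat? {
        // Step 1: go through the presentation layer to the sublayer that is
        // actually being animated (for me, the only sublayer there).
        guard let animatingCopy = hostLayer.presentation()?.sublayers?.first else {
            return nil
        }
        // Step 2: ask for the key path rather than reading the transform directly.
        // x alone is enough because it is a circle, so x and y scale together.
        guard let scaleX = animatingCopy.value(forKeyPath: "transform.scale.x") as? CGFloat else {
            return nil
        }
        return initialRadius * scaleX
    }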
I assume what you're trying to get to is the current CATransform3D, and that from that, you can get to your circle size.
What you want is the layer.presentationLayer.transform. See the CALayer docs for details on the presentationLayer. Also see the Core Animation Rendering Architecture.
I need to have multiple time frames on a ZedGraph. I have to display stock data on a daily time frame, and if the user wishes to view it on a monthly or hourly time frame I need to support that too. Note that the data must be candlestick bars and not line bars.
Currently I have 3 curves and I display only one at a time and hide the others. For example, initially I set up my graph to be on the daily time frame and hide the hourly and monthly candlestick curves. When the user gives the command to see the hourly graph, I hide the daily candlesticks and show the hourly curve. However, I am not able to change the X axis, as it still shows daily time instead of changing to hourly. I need to do something to change the X axis time frame from daily to hourly.
Any kind of help is appreciated. Please advise even if there is a workaround. Thanks.
You can probably do it by changing the Min, Max and Step properties of the XAxis.Scale object.
So, your method/event handler that supports this user action should:
- show/hide the proper curves on the pane,
- adjust the scale using the properties listed above,
- refresh the graph.
Note that the Refresh() method of ZedGraphControl isn't cheap. It redraws all elements on your graph, so if you have a lot of data it isn't a good idea to use it.
In that situation you should use a combination of the AxisChange() and Invalidate() methods instead. That should be faster and cheaper.
I am implementing a view similar to a table view, which contains rows of data. What I am trying to do is have each row snap to a set of correct positions after scrolling, so the boundaries of the top and bottom rows are completely visible and not clipped as normally happens. Is there a way to get the scroll destination before the scrolling starts? That way I would be able to correct the final y-position, for example to a multiple of the row height.
I asked the same question a couple of weeks ago.
There is definitely no public API to determine the final resting Y offset of a scroll deceleration. After researching it further, I wasn't able to figure out Apple's formula for how they manage deceleration. I gathered a bunch of data from scrolling events, recording the beginning velocity and how far the deceleration traveled, and from that made some rough estimates of where it was likely to stop.
My goal was to predict well in advance where it would stop, and to convert the deceleration into a specific move to an offset. The problem with this technique is that scrollRectToVisible:animated: always occurs over a set period of time, so instead of the velocity the user expects from a flick gesture, it's either much faster or much slower, depending on the strength of the flick.
Another choice is to observe the deceleration and wait until it slows down to some threshold, then call scrollRectToVisible:animated:, but even this is difficult to get "just right."
A third choice is to wait until the deceleration completes on its own, check to see if it happened to stop at your desired offset multiple, and then adjust if not. I don't care for this personally, as you either coast to a stop and then speed up or coast to a stop and reverse direction.
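For completeness, here is a rough sketch of that third choice in Swift, assuming a plain UIScrollView with a constant row height (the delegate name and the 44-point height are illustrative):

    import UIKit

    // Lets the deceleration finish on its own, then nudges the offset to the
    // nearest multiple of the row height (the "coast to a stop, then adjust"
    // behaviour described above).
    final class SnappingScrollDelegate: NSObject, UIScrollViewDelegate {
        let rowHeight: CGFloat = 44.0  // illustrative value

        func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
            snapToRow(scrollView)
        }

        // Also handle the case where the finger lifts without a fling.
        func scrollViewDidEndDragging(_ scrollView: UIScrollView, willDecelerate decelerate: Bool) {
            if !decelerate { snapToRow(scrollView) }
        }

        private func snapToRow(_ scrollView: UIScrollView) {
            let y = scrollView.contentOffset.y
            let snappedY = (y / rowHeight).rounded() * rowHeight
            guard snappedY != y else { return }
            scrollView.setContentOffset(CGPoint(x: scrollView.contentOffset.x, y: snappedY),
                                        animated: true)
        }
    }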