As far as I know, love.update and love.draw are called every frame. You can turn vsync off (unlimited calls to love.update) or leave it on (calls fixed to the refresh rate). Since different computers have different refresh rates, you need to support different update rates (UPS), otherwise the game will run at different speeds on different computers.
There are two solutions I can think of:
1. Cap the UPS.
2. Run at an arbitrary UPS.
There were a few issues with solution 2, so I think a constant UPS might be better. My computer's refresh rate is 57 Hz, so I used that in the code.
function love.update(dt)
    t = t + dt  -- t is assumed to be initialized to 0 elsewhere (e.g. in love.load)
    while t >= 1/57 do
        t = t - 1/57
        -- fixed-timestep game logic ("stuff") goes here
    end
end
The game runs fine if I turn vsync on, but if it's off it's slightly jittery, and I think it would probably be like that on other PCs regardless of vsync. Is there a better way to cap the UPS, or should I just use solution 2?
You need to use the dt argument in the update function.
Let's say you have a sprite moving 1 pixel every frame:
function love.update(dt)
    sprite.x = sprite.x + 1
end
For you, with a refresh rate of 57 Hz, the sprite will move 57 pixels per second, but someone running at 60 Hz would see it move 60 pixels per second.
function love.update(dt)
    sprite.x = sprite.x + 60*dt
end
Now at 57 FPS you call update 57 times in one second and your sprite moves exactly 60 pixels (60*(1/57)*57), and someone running at 60 FPS also sees it move 60 pixels (60*(1/60)*60).
Using dt, your computation is time-relative rather than frame-relative, which removes the problem altogether.
It seems like you know how to set vsync, but here's the code I'm using in love.load() to set it:
love.window.setMode(1, 1, {vsync = false, fullscreen = true})
At the start of my update loop I have the following code:
self.t1 = love.timer.getTime()
This grabs the time at the beginning of the loop and is paired with the following code at the end of the loop:
self.delta = love.timer.getTime() - self.t1
if self.delta < self.fps then
    -- self.fps holds the target frame time in seconds (e.g. 1/60);
    -- sleep away the leftover time to maintain 60 FPS
    love.timer.sleep(self.fps - self.delta)
end
-- calculate the achieved frame rate
self.delta = love.timer.getTime() - self.t1
self.frameRate = 1 / self.delta
If your computer is able to run the update faster than 60 FPS, the game will sleep at the end of the update loop to enforce a constant frame rate. Unfortunately this only caps the frame rate (solution 1 in your post). That said, this method runs with no jitter (I have tested it on multiple Windows machines).
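Put together, the whole thing looks roughly like this (a sketch; the method name is mine, and self.fps is assumed to hold the target frame time in seconds, e.g. 1/60):

function Game:update()
    self.t1 = love.timer.getTime()

    -- ... this frame's game logic goes here ...

    self.delta = love.timer.getTime() - self.t1
    if self.delta < self.fps then
        love.timer.sleep(self.fps - self.delta)  -- sleep off the leftover frame time
    end

    -- measure the achieved frame time, including the sleep
    self.delta = love.timer.getTime() - self.t1
    self.frameRate = 1 / self.delta
end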
I believe I got this method from this link.
Be careful with vsync, since computer screens can run at different refresh rates (60 Hz, 120 Hz, ...). And even between screens running at the same refresh rate, enabling vsync in love2d is a setting that can be overridden by the graphics card settings. So you should always consider using the delta time (dt). Refer to nepta's answer for how the delta time should be used.
I'm working on a 2D video game framework, and I've never written a game loop before. Most frameworks I've looked into seem to implement both draw and update methods.
For my project I implemented a loop that calls these two methods. I noticed that in other frameworks these methods don't always alternate; some frameworks will run update far more often than draw. Also, most frameworks of this type run at 60 FPS, so I figure I'll need some sort of sleep in here.
My question is: what is the best method for implementing this type of loop? Do I call draw then update, or vice versa? In my case I'm writing a wrapper around SDL2, so maybe that library requires something to be set up in a certain way?
Here's some "pseudo" code I'm thinking of for the implementation.
loop do
  clear_screen
  draw
  update
  sleep(16.milliseconds)
  break if window_is_closed
end
Though my project is being written in Crystal-Lang, I'm more looking for a general concept that could be applied to any language.
It depends on what you want to achieve. Some games prefer the game logic to run more frequently than the frame rate (I believe Source games do this); for some games you may want the game logic to run less frequently (the only example of this I can think of is the servers of some multiplayer games, quite famously Overwatch).
It's also important to consider that this is a question of resolution, not speed. A game with a logic rate of 120 and a frame rate of 60 is not necessarily running at 2x speed; any time-critical operations within the game logic should be done relative to the clock*, not the tick rate, or your game will literally go into slow motion if the frames take too long to render.
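For instance (a quick sketch with invented names, separate from the loop below), a two-second cooldown measured against the clock lasts two real seconds whether the logic runs at 30 or 120 tics per second:

cooldown   = 2.seconds
last_fired = Time.monotonic - cooldown  # raw clock time; see the footnote below about abstracting it

def fire_weapon
  puts "fired"
end

# inside the game-logic update:
if Time.monotonic - last_fired >= cooldown
  fire_weapon
  last_fired = Time.monotonic
end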
I would recommend writing a loop like this:
loop do
  time_until_update = (update_interval + time_of_last_update) - current_time
  time_until_draw = (draw_interval + time_of_last_draw) - current_time

  work_done = false

  # Update the game if it's been enough time
  if time_until_update <= 0
    update
    time_of_last_update = current_time
    work_done = true
  end

  # Draw the screen if it's been enough time
  if time_until_draw <= 0
    clear_screen
    draw
    time_of_last_draw = current_time
    work_done = true
  end

  # Nothing to do, sleep for the smallest period
  if work_done == false
    smaller = time_until_update
    if time_until_draw < smaller
      smaller = time_until_draw
    end
    sleep_for(smaller)
  end

  # Leave, maybe
  break if window_is_closed
end
You don't want to wait for 16 ms every frame, otherwise you might end up over-waiting if the frame takes a non-trivial amount of time to complete. The work_done variable is there so we know whether the intervals we calculated at the start of the loop are still valid: we may have done 5 ms of work, which would throw our sleeping completely off, so in that scenario we go back around and calculate fresh values.
* You may want to abstract the clock; using the clock directly can have some weird effects. For example, if you save the game and store the last time you used a magical power as a clock time, it will instantly come off cooldown when you load the save, as that time is now minutes, hours or even days in the past. Similar issues exist with the process being suspended by the operating system.
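One way to do that (a rough sketch, names invented) is a small game clock that only accumulates time while the game is actually running, so timestamps stored in a save file stay meaningful after loading or after the OS suspends the process:

class GameClock
  def initialize
    @elapsed = 0.seconds
    @last    = Time.monotonic
  end

  # Advance game time; call once per loop iteration while the game is unpaused.
  def tick
    now = Time.monotonic
    @elapsed += now - @last
    @last = now
  end

  # Call when coming back from a pause, load or suspend so that gap is skipped.
  def resume
    @last = Time.monotonic
  end

  # Current game time; store and compare cooldown timestamps against this.
  def now
    @elapsed
  end
end

Cooldowns and save-file timestamps would then be compared against the game clock's now value instead of the raw clock, with resume called right after loading a save.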
I am new to Qt and trying to implement a real-time plot using QSplineSeries with Qt 5.7. I need to scroll the x axis as new data comes in every 100 ms. It seems the CPU usage reaches 100% if I do not purge the old data that was appended to the series, using graphSeriesX->remove(0). I found two ways of scrolling the x axis.
const uint8_t X_RANGE_COUNT = 50;
const uint8_t X_RANGE_MAX = X_RANGE_COUNT - 1;

qreal y = (axisX->max() - axisX->min()) / axisX->tickCount();
m_x += y;
if (m_x > axisX->max()) {
    axisX->setMax(m_x);
    axisX->setMin(m_x - 100);
}
if (graphSeries1->count() > X_RANGE_COUNT) {
    graphSeries1->remove(0);
    graphSeries2->remove(0);
    graphSeries3->remove(0);
}
The problem with the above is that m_x is of type qreal, and if I keep the demo running continuously it will at some point reach its maximum value, the axisX->setMax call will fail, and the plot stops working. What would be the correct way to fix this use case?
qreal x = plotArea().width() / X_RANGE_MAX;
chart->scroll(x, 0);

if (graphSeries1->count() > X_RANGE_COUNT) {
    graphSeries1->remove(0);
    graphSeries2->remove(0);
    graphSeries3->remove(0);
}
However, it's not clear to me how I can use the graphSeriesX->remove(0) call in this scenario. The graph keeps getting wiped out: once the series have X_RANGE_COUNT values appended, the if block is always true and removes the 0th value, but the scroll somehow does not work the way manually setting the x-axis maximum does, and after a while I have no graph. Scroll works if I do not call remove, but then my CPU usage reaches 100%.
Can someone point me in the right direction on how to use scroll while using remove to keep the CPU usage low?
It seems like the best way to update data for a QChart is through void QXYSeries::replace(QVector<QPointF> points). According to the documentation, it's much faster than clearing all the data (and don't forget to use a vector instead of a list). The audio example from the documentation does exactly that. Updating the axes with setMin, setMax and setRange all seem to use a lot of CPU; I'll try to see if there's a way around that.
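If it helps, here is a rough sketch of that approach (the function and variable names are my own; maxPoints would be your X_RANGE_COUNT):

#include <QtCharts/QSplineSeries>
#include <QPointF>
#include <QVector>

using namespace QtCharts;

// Keep a rolling window of the latest samples in a reused buffer and push the
// whole buffer to the series with one replace() call instead of append() + remove(0).
static void addSample(QSplineSeries *series, QVector<QPointF> &buffer,
                      qreal x, qreal y, int maxPoints)
{
    buffer.append(QPointF(x, y));
    if (buffer.size() > maxPoints)
        buffer.remove(0);        // drop the oldest sample
    series->replace(buffer);     // single bulk update, no per-point signals
}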
What do you mean by "does not work the way manually setting maximum for x axis works"? The second method you showed works if you define the x-axis range to be between 0 and X_RANGE_MAX. Is this not what you are after?
Something like: chart->axisX()->setRange(0, X_RANGE_MAX);
I am looking for a function to schedule a function call asynchronously, for example presenting an image after 100 ms, 200 ms and 300 ms, and masking this image at 150 ms, 250 ms and 350 ms.
I am sure I can do this with delays, but I would prefer to do this asynchronously. I was able to do this in pyepl with timing.timedCall.
To be genuinely 'asynchronous' this would need threads and, as Jonas suggests, those aren't safe for use with OpenGL (your graphics card doesn't know which thread a command is coming from, and if its commands are executed out of order because of two interleaved threads then the results are unpredictable and could lead to a crash).
I'd naturally handle this with a function like
def checkTimes(t, listOfPermissible):
    for start, stop in listOfPermissible:
        if start < t < stop:
            return True  # we found a valid window
    return False  # if we got here there was no valid window
and then in my script I'd have:
targetTimes = [[0.1, 0.15], [0.2, 0.25]]
maskTimes = [[0.18, 0.2], [0.28, 0.3]]

continueTrial = True
while continueTrial:
    t = trialClock.getTime()
    # check if we need target
    if checkTimes(t, targetTimes):
        target.draw()
    # check if we need mask
    if checkTimes(t, maskTimes):
        mask.draw()
    # drawing complete so flip the window
    win.flip()
    # check for response
    keys = event.getKeys()
    if keys:
        continueTrial = False
Jonas is also right, though, that you should use frame numbers instead of clock time if you have brief stimuli and care about them being precisely timed. As a cheeky example, in the code above I've added some impossible times, for example 0.18 (180 ms), which isn't possible on a 60 Hz monitor. In the code above the 0.18 will effectively get rounded up to the next frame, and the stimulus will appear at 183 ms (11 frames into the block).
The rest of the logic above (checking in a list of start/stops) should still work just the same though.
Jon
I don't think that there's currently any way to do this. Previous attempts to run parts of PsychoPy's stimulus presentation in separate threads have failed, as far as I know. It's something about OpenGL not really being robust to this.
If there is a way to display stimuli asynchronously, beware that visual stimuli should really be timed in terms of number of frames rather than milliseconds for the durations you're considering. Presenting at e.g. 100 ms could just barely miss the 6th frame, thus showing the image on the 7th frame (after 116.7 ms). This is one of the points where I think many other stimulus presentation software packages mislead the user.
The psychopy.visual.Window.flip() method allows for timing using frames.
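For illustration, here is a rough frame-based version of Jon's loop above (a sketch, not from either answer; it assumes a 60 Hz monitor and reuses the win, target, mask, event, targetTimes and maskTimes names from the earlier snippet):

frameRate = 60  # assumed refresh rate; win.getActualFrameRate() can measure it

# convert the second-based windows into frame-number windows
targetFrames = [[round(start * frameRate), round(stop * frameRate)]
                for start, stop in targetTimes]
maskFrames = [[round(start * frameRate), round(stop * frameRate)]
              for start, stop in maskTimes]

frameN = 0
continueTrial = True
while continueTrial:
    if any(start <= frameN < stop for start, stop in targetFrames):
        target.draw()
    if any(start <= frameN < stop for start, stop in maskFrames):
        mask.draw()
    win.flip()  # blocks until the next screen refresh
    frameN += 1
    if event.getKeys():
        continueTrial = False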
How can I prevent lag-related bugs in Flash games? For example, suppose the game has a 1-minute countdown timer and the player has to catch as many items as possible.
Here are the lag-related issues:
If items are moving (don't have a static position), the more lag a player has, the slower the items move;
The timer counts down more slowly when the player lags (CPU usage at 90-100%).
So, for example, if a player without lag can get 100 points, a player with a slow or bad computer can get 4-6x more, like 400-600.
I think this happens because everything runs on the client side, but how do I move it to the server side? Should I insert (and keep updating) the countdown time in a database? But how would I update it every millisecond?
And what about the item positions? If a player has heavy lag, the items move very slowly and are easy to click on. Do you have any ideas?
Moving the functionality to the server side doesn't solve the problem: now, if there are many players connected to the server, the server will lag and give those players more time to react.
To make your logic independent of lag, do not base it on the screen update, because that assumes a constant time between screen updates (frames). Instead, base your logic on the actual time that passed between frames. Use getTimer to measure how much time passed between the current frame and the last one.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/package.html
Of course, your logic should include calculations for what happens in between frames.
In order to mostly fix speed issues on the client, you need to base all your speed-related code on actual time, not frames. Here is a fairly typical example of code used to move an object based on frames:
// speed = pixels per frame
var xSpeed:Number = 5;
var ySpeed:Number = 5;

addEventListener(Event.ENTER_FRAME, update);
function update(e:Event):void {
    player.x += xSpeed;
    player.y += ySpeed;
}
While this code is simple and good enough for a single client, it is very dependent on the frame rate, and as you know the frame rate is very "elastic": the actual frame rate is heavily influenced by the client's CPU speed.
Instead, here is an example where the movement is based on actual elapsed time:
// speed = pixels per second
var xSpeed:Number = 5 * stage.frameRate;
var ySpeed:Number = 5 * stage.frameRate;

var lastTime:int = getTimer();
addEventListener(Event.ENTER_FRAME, update);
function update(e:Event):void {
    var currentTime:int = getTimer();
    var elapsedSeconds:Number = (currentTime - lastTime) / 1000;

    player.x += xSpeed * elapsedSeconds;
    player.y += ySpeed * elapsedSeconds;

    lastTime = currentTime;
}
The crucial part here is that the current time is tracked using getTimer(), and each update moves the player based on the actual elapsed time, not a fixed amount. I set xSpeed and ySpeed to 5 * stage.frameRate to illustrate how it can be equivalent to the other example, but you don't have to do it that way. The end result is that the second example has a consistent speed of movement regardless of the actual frame rate.
I'm working on a drum computer with sequencer for the iPad. The drum computer is working just fine and writing the sequencer wasn't that much of a problem either. However, the sequencer is currently only capable of a straight beat (each step has equal duration). I would like to add a swing (or shuffle as some seem to call it) option, but I'm having trouble figuring out how.
'Swing' according to Wikipedia
Straight beat (midi, low volume)
Beat with Swing (midi, low volume)
If I understand correctly, swing is pretty much achieved by offsetting the eighth notes between the 1-2-3-4 by a configurable amount. So instead of
1 + 2 + 3 + 4 +
it becomes something like
1 +2 +3 +4 +
The linked midi files illustrate this better...
However, the sequencer works with 1/16th or even 1/32nd steps, so if the 2/8th (4/16th) note is offset, how would that affect the 5/16th note?
I'm probably not approaching this the right way. Any pointers?
Sequencer code
This is the basic structure of how I implemented the sequencer. I figured altering stepDuration at certain points should give me the swing effect I want, but how?
#define STEPS_PER_BAR 32

// thread
- (void) sequencerLoop
{
    while(isRunning)
    {
        NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];

        // prepare for step
        currentStep++;
        if(currentStep >= STEPS_PER_BAR * activePatternNumBars)
            currentStep = 0;

        // handle the step/tick
        ...

        // calculate the time to sleep until the next step
        NSTimeInterval stepDuration = (60.0f / (float)bpm) / (STEPS_PER_BAR / 4);
        nextStepStartTime = nextStepStartTime + stepDuration;
        NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];

        // sleep if there is time left
        if(nextStepStartTime > now)
            [NSThread sleepUntilDate:[NSDate dateWithTimeIntervalSinceReferenceDate:nextStepStartTime]];
        else {
            NSLog(@"WARNING: sequencer loop is lagging behind");
        }

        [pool release];
    }
}
Edit: added code
I'm not familiar with the sequencer APIs on iOS, but usually sequencers subdivide steps or beats into "ticks", so the way to do this would be to shift the notes that don't fall right on a beat back by a few ticks during playback. So if the user programmed:
1 + 2 + 3 + 4 +
Instead of playing it back like that, you shift any notes falling on the "and" back by however many ticks (depending on exactly where the note falls, how much swing was used, and how many ticks there are per beat):
1   +2   +3   +4   +
Sorry if that's not much help, or if I'm not doing much more than restating the question, but the point is you should be able to do this, probably using something called "ticks". You may need to access another layer of the API to do this.
Update:
So say there are 32 ticks per beat. That means the "+" in the diagram above is tick #16 -- that's what needs to be shifted. (That's not really a lot of resolution, so having more ticks is better.)
Let's call the amount we move it the "swing factor", s. For no swing, s = 1; for "infinite" swing, s = 2. You probably want to use a value like 1.1 or 1.2. For simplicity, we'll use linear interpolation to determine the new position. (As a side note, for more on linear interpolation and how it pertains to audio, I wrote a little tutorial.) We need to break the time before and after tick 16 into two sections, since the time before is going to be stretched and the time after is going to be compressed:
if( tick <= 16 )
    tick *= s;                        // stretch
else
    tick = (2-s)*tick + 32*(s-1);     // compress
How you deal with rounding is up to you. Obviously, you'll want to do this on playback only and not store the new values, since you won't be able to recover the original value exactly due to rounding.
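As a quick illustration (a sketch, not part of the answer above; plain C so it drops straight into Objective-C), with 32 ticks per beat and s = 1.2, tick 16 maps to 19.2 while ticks 0 and 32 stay put:

#include <stdio.h>

// Remap a tick position within one beat (32 ticks) by swing factor s.
static double swungTick(double tick, double s)
{
    if (tick <= 16.0)
        return tick * s;                          // stretch the first half
    return (2.0 - s) * tick + 32.0 * (s - 1.0);   // compress the second half
}

int main(void)
{
    double s = 1.2;
    printf("%g %g %g\n", swungTick(0, s), swungTick(16, s), swungTick(32, s));
    // prints: 0 19.2 32
    return 0;
}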
Change the number of steps to 12 instead of 16. Then each beat has 3 steps instead of 4. Triplets instead of 16th notes. Put sounds on the first and third triplet and it swings. Musicians playing swing use the second triplet also.
Offsetting the notes to create a shuffle does not give you access to the middle triplet.