Frame rendering time measurement anomaly - Kotlin

I draw 2D content on a GLSurfaceView using a VBO/IBO with the following function:
override fun onDrawFrame(gl: GL10?) {
    val renderTime = measureTimeMillis {
        GLES20.glClearColor(bgComps[0], bgComps[1], bgComps[2], 1f)
        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT or GLES20.GL_COLOR_BUFFER_BIT)
        bindAttributes()
        GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo)
        // draw half of the polygons
        val pieces = GameDataLoader.GameData.piecesByDiff(gameData.diff)
        piecesProgram!!.setUniforms(projMatrix, imageTextureId, maskTextureId, PointF(4f, 4f), 0f)
        GLES20.glDrawElements(
            GLES20.GL_TRIANGLE_STRIP,
            pieces * MeshBuilder.INDICES_PER_PIECE,
            GLES20.GL_UNSIGNED_SHORT,
            0
        )
        // draw the other half
        piecesProgram!!.setUniforms(projMatrix, imageTextureId, maskTextureId, PointF(10f, 10f), sin(frame.toFloat() / 60 * 2 * PI).toFloat())
        GLES20.glDrawElements(
            GLES20.GL_TRIANGLE_STRIP,
            pieces * MeshBuilder.INDICES_PER_PIECE,
            GLES20.GL_UNSIGNED_SHORT,
            pieces * MeshBuilder.INDICES_PER_PIECE * MeshBuilder.BYTES_PER_SHORT
        )
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0)
        GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0)
    }
    val newFrameTS = System.currentTimeMillis()
    if (lastFrameTS != -1L)
        println("frame time:${newFrameTS - lastFrameTS},render time:$renderTime")
    lastFrameTS = newFrameTS
    frame++
}
There are about 5.6K polygons on the screen. When I run this code I see in the console a frame time of ~33 ms and a render time of 0-2 ms(!). In most cases the render time is 0. If I comment out everything except glClearColor and glClear, the render time jumps to 30 ms. How can it be that the long code appears to execute faster than the short one?
UPD: The question relates to the Android OS. I observe this behavior on both the emulator and a real device.
The question is not why onDrawFrame is called too often or too seldom (in fact the time between the two last onDrawFrame calls is measured correctly). The question is: how can I measure the time of the OpenGL calls within the onDrawFrame function? How can these calls take 0 ms?

This is usually caused by V-sync. The driver waits at the end of every frame until the next image can be presented on the display, and that wait delays the execution of your program.
Depending on your hardware, you may be able to force this on or off. Most PC hardware lets the application choose whether it wants to wait; on other platforms, especially mobile devices, you cannot.
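To actually time the GL work inside onDrawFrame (the follow-up question above), keep in mind that GLES20 calls mostly just enqueue commands for the driver, so measureTimeMillis around them measures command submission rather than execution. A minimal, profiling-only sketch of one way to include the GPU work in the measured interval (my assumption, not part of the answer above): force a sync with glFinish() before stopping the clock, accepting that this stalls the pipeline.
import android.opengl.GLES20
import kotlin.system.measureTimeMillis

// Profiling-only helper: glFinish() blocks until all queued GL commands have
// executed, so the measured interval includes the GPU work. Don't ship this;
// it defeats the pipelining the driver normally gives you.
fun measureGlWork(issueCommands: () -> Unit): Long = measureTimeMillis {
    issueCommands()    // the draw calls are only queued here
    GLES20.glFinish()  // wait for the GPU before stopping the clock
}
Wrapping the body of onDrawFrame in measureGlWork would then report the real cost of the draw calls instead of 0-2 ms.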

Related

How can I observe another app's launch?

For a mental health app project, I need to intercept the startup of specific apps (like Instagram) and check whether they have been opened for the n-th time, possibly opening a questionnaire, etc.
Searching for solutions online, I came across the "android.app.usage" API, but I could not get my head around how to use it.
Do I need a permanently running background service which does active polling with the usage API?
Or is there a way to say "run this code or start this app/service when app XY launches"?
Looking forward to any kind of input :)
Greetings, Pascal
You can't listen out for an "app is being opened" intent, unfortunately; Android doesn't support it. Your approach is likely the best workaround. To state it explicitly:
Have a foreground service running at all times (so it is less likely to be killed by the OS).
Regularly check which app is currently in the foreground, and see if it's the one you're looking for.
If it's different from the last time you checked, do whatever you need to do. Perhaps this will include keeping track of the last time the app was opened, how many times it has been opened, etc.
As a warning, however: the OS won't really like this, and there's likely to be an impact on battery life. Whatever you do, make sure the check doesn't run while the screen is off, runs as infrequently as possible, and doesn't include any unnecessary computation; otherwise the user will quickly notice the negative impact.
Here's an extract from an article that looks like it'll be able to fetch the latest app even on later versions:
var foregroundAppPackageName: String? = null
val currentTime = System.currentTimeMillis()
// The `queryEvents` method takes `beginTime` and `endTime` to retrieve the usage events.
// In our case, beginTime = currentTime - 10 minutes (1000 * 60 * 10 milliseconds)
// and endTime = currentTime
val usageEvents = usageStatsManager.queryEvents(currentTime - (1000 * 60 * 10), currentTime)
val usageEvent = UsageEvents.Event()
while (usageEvents.hasNextEvent()) {
    usageEvents.getNextEvent(usageEvent)
    Log.e("APP", "${usageEvent.packageName} ${usageEvent.timeStamp}")
}
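For completeness, the extract above assumes a usageStatsManager is already available. It can be obtained from any Context, and usage access is a special permission the user has to grant in Settings; a small sketch of the usual pattern (helper names like hasUsageAccess are mine, not from the article):
import android.app.AppOpsManager
import android.app.usage.UsageStatsManager
import android.content.Context
import android.os.Process

// Obtain the manager from a Context (e.g. inside your foreground service).
fun Context.usageStatsManager(): UsageStatsManager =
    getSystemService(Context.USAGE_STATS_SERVICE) as UsageStatsManager

// PACKAGE_USAGE_STATS is a special permission: declare it in the manifest and
// check at runtime whether the user has granted usage access.
fun Context.hasUsageAccess(): Boolean {
    val appOps = getSystemService(Context.APP_OPS_SERVICE) as AppOpsManager
    val mode = appOps.checkOpNoThrow(
        AppOpsManager.OPSTR_GET_USAGE_STATS, Process.myUid(), packageName
    )
    return mode == AppOpsManager.MODE_ALLOWED
}
If hasUsageAccess() returns false, you can send the user to Settings.ACTION_USAGE_ACCESS_SETTINGS so they can grant it.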

GPS data blocked by other tasks in the while loop

I am trying to parse GPS data while reading a pressure sensor and an IMU sensor and writing some data to an SD card. Since reading the pressure sensor, reading the IMU sensor, and writing to the SD card take some time, and the GPS doesn't wait for my command to send its data, I lose some GPS data and my parser cannot find a meaningful message. I use a UART receive interrupt to take the GPS data and a circular buffer to save it, and then I parse it. Since I don't know how many bytes come from the GPS, I read them one by one. I tried FreeRTOS but it did not work. How can I prevent other tasks from blocking the GPS data? I am using an STM32F401CC.
Here are my FreeRTOS tasks:
void StartDefaultTask(void* argument)
{
    IMU_setParameters(&imu, &hi2c1, imu_ADD_LOW, GPIOB, GPIOB,
                      GPIO_PIN_1, GPIO_PIN_2);
    IMU_init(&imu, &htim3);
    while ((calState.accel != 3 || calState.system != 3 || calState.gyro != 3 || calState.mag != 3) && calibFlg)
    {
        IMU_getCalibrationState(&imu, &calState);
    }
    preSensor_init_default_params(&preSensor.params);
    preSensor.addr = preSensor_I2C_ADDRESS_1;
    preSensor.i2c = &hi2c1;
    preSensor_init(&preSensor, &preSensor.params);
    initSD_CARD(&USERFatFS, USERPath);
    samplePacket(&telemetry);
    controlRecoveryFile(&recoveryFile, "recoveryFile.txt", &telemetry);
    for (;;)
    {
        IMU_getDatas(&imu, &calState, &linearAccel, &IMU, &imuFlg, &offsetFlg, &calibCount);
        preSensor_force_measurement(&preSensor);
        preSensor_read_float(&preSensor, &temperature, &pressure, &humidty);
        preSensor_get_height(pressure, &height);
        telemetry.Altitude_PL = height;
        telemetry.Pressure_PL = pressure;
        telemetry.Temperature = temperature;
        telemetry.YRP[0] = IMU.yaw;
        telemetry.YRP[1] = IMU.roll;
        telemetry.YRP[2] = IMU.pitch;
        if (calibCount % 10 == 0)
        {
            writoToTelemetryDatas(&logFile, "tulparLog.txt", &telemetry, 0);
            if (!writeToRecoveryDatas(&recoveryFile, "recoveryFile.txt", &telemetry))
                connectionFlg = 1;
        }
        osDelay(1);
    }
}

void StartTask02(void* argument)
{
    arrangeCircularBuffer(&gpsCircular, buffer, BUFFER_LENGTH);
    initGPS(&huart1, &rDATA, &gps);
    for (;;)
    {
        getGPSdata(&huart1, &gpsCircular, &gps, &rDATA);
        osDelay(1);
    }
}
Here is my solution to the problem.
First of all, I do not use FreeRTOS at all; I do everything in the main loop. The problem was a race condition. My GPS parser has four states: MSG_ID, Finish, Check, Parse. These states do not take four loop iterations to find a meaningful message; it depends on the message length and can take up to 103 iterations. Besides, in the main loop my IMU sensor, pressure sensor and SD card module take approximately 80 ms per pass. As you know, the GPS works independently of our code: it does not wait for a command and sends its data every second. Now imagine that the GPS sends data every second and your circular buffer holds 200 bytes. Your parser begins to parse a message, but it needs at least 30+ iterations to find it, and 30 * 80 ms = 2400 ms (2.4 s). By the time you find meaningful data the GPS has sent two more batches and the buffer has overflowed.
To fix this, I call the GPS parser in a for loop inside the main loop, and I send a command to the GPS so that it only outputs GPGGA and GPRMC sentences (for the GPS commands you can look here). I use the UART receive interrupt to store the incoming bytes in my circular buffer. After receiving two '\n' characters I stop taking data and let the parser process them; at the end I restart the UART reception to take meaningful data again. The important thing here is calling the parser in a for loop (it can be 8, 16, or 24 iterations, depending on the delay of your other tasks).
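To make the structure of that fix concrete, here is a small sketch of the pattern, written in Kotlin purely for readability (the original project is STM32 C, and all names and sizes here are illustrative): the interrupt handler only appends bytes to a ring buffer, and the main loop drains at most a few complete NMEA sentences per pass, so the slow sensor/SD work can never starve the parser.
// Illustrative only: mirrors the ISR + bounded-parse structure described above.
class NmeaRing(capacity: Int = 256) {
    private val buf = ByteArray(capacity)
    private var head = 0 // write index, advanced by the "interrupt"
    private var tail = 0 // read index, advanced by the main loop

    fun onByteReceived(b: Byte) {          // UART RX interrupt only stores the byte
        buf[head] = b
        head = (head + 1) % buf.size
    }

    fun nextSentence(): String? {          // main loop: one complete line, or null
        var i = tail
        while (i != head) {
            if (buf[i] == '\n'.code.toByte()) {
                val sb = StringBuilder()
                while (tail != (i + 1) % buf.size) {
                    sb.append(buf[tail].toInt().toChar())
                    tail = (tail + 1) % buf.size
                }
                return sb.toString().trim()
            }
            i = (i + 1) % buf.size
        }
        return null
    }
}

// One pass of the main loop: bounded parsing work, then back to the other tasks.
fun mainLoopPass(ring: NmeaRing) {
    repeat(8) {                            // cap the parser work per pass
        val sentence = ring.nextSentence() ?: return
        if (sentence.startsWith("\$GPGGA") || sentence.startsWith("\$GPRMC")) {
            // hand the sentence off to the NMEA parser here
        }
    }
}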

Testing Flink window

I have a simple Flink application, which sums up the events with the same id and timestamp within the last minute:
DataStream<String> input = env
        .addSource(consumerProps)
        .uid("app");
DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));
pixels
        .keyBy("id", "timestampRoundedToMinutes")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(dynamoDBSink);
env.execute(jobName);
I am trying to test this application with the approach recommended in the documentation. I have also looked at this Stack Overflow question, but adding the sink hasn't helped.
I do have a @ClassRule in my test class, as recommended. The function looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
CollectSink.values.clear();
Pixel testPixel1 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel2 = Pixel.builder().id(2).timestampRoundedToMinutes("202002261220").constant(1).build();
Pixel testPixel3 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel4 = Pixel.builder().id(3).timestampRoundedToMinutes("202002261220").constant(1).build();
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
        .keyBy("id", "timestampRoundedToMinutes")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(new CollectSink());
JobExecutionResult result = env.execute("AggregationTest");
assertNotEquals(0, CollectSink.values.size());
CollectSink is copied from the documentation.
What am I doing wrong? Is there also a simple way to test the application with embedded Kafka?
Thanks!
The reason why your test is failing is because the window is never triggered. The job runs to completion before the window can reach the end of its allotted time.
The reason for this has to do with the way you are working with time. By specifying
.keyBy("id","timestampRoundedToMinutes")
you are arranging for all the events for the same id and with timestamps within the same minute to be in the same window. But because you are using processing time windowing (rather than event time windowing), your windows won't close until the time of day when the test is running crosses over the boundary from one minute to the next. With only four events to process, your job is highly unlikely to run long enough for this to happen.
What you should do instead is something more like this: set the time characteristic to event time, and provide a timestamp extractor and watermark assigner. Note that by doing this, there's no need to key by the timestamp rounded to minute boundaries -- that's part of what event time windows do anyway.
public static void main(String[] args) throws Exception {
    ...
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
            .assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
            .keyBy("id")
            .timeWindow(Time.minutes(1))
            .sum("constant")
            .addSink(new CollectSink());
    env.execute();
}

private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Pixel> {
    public TimestampsAndWatermarks() {
        super(/* delay to handle out-of-orderness */);
    }

    @Override
    public long extractTimestamp(Pixel pixel) {
        // assumes Pixel carries an event-time timestamp in epoch milliseconds
        return pixel.timestamp;
    }
}
See the documentation and the tutorials for more about event time, watermarks, and windowing.

GML Alarm event not working second time

I have my game set up so that it starts in (and later returns to) a loading screen room for 45 steps, after which the next room is randomized. So at alarm[0] the following code runs:
randomize();
chosenRoom = choose(rm_roomOne, rm_roomTwo, rm_roomThree, rm_roomFour);
room_goto(chosenRoom);
The code here works fine the first time, but when it goes back from the randomly chosen room to the loading screen room it stays there and doesn't execute the code again.
Any help would be very much appreciated.
This may sound stupid, but did you remember to set the alarm again after it has gone off? I know I've done this several times without thinking. Without seeing your code, I assume that after the alarm goes off it's not being set again, so it won't go off again.
I'm guessing the control object is "persistent", so it only exists once and remains forever (including after switching rooms); thus the Create event only gets fired once, and the alarm only gets set once.
Try moving your code to the Room Start event in your controller and it will work.
You can use event_perform(ev_alarm, 0);.
The code below performs alarm[0] after 45 steps, and after another 45 steps it triggers alarm[0] again. Note that you have to put it in the Step event, and you have to initialize the wait and times variables to zero in the Create event.
times counts the repeats and wait is the number of steps between events.
if (wait == 45 && times != 2) {
    event_perform(ev_alarm, 0);
    times++;
    wait = 0;
}
else {
    wait++;
}

Background in Pygame causes graphical issues

When I run my game, which has a scrolling background, the right side of the screen periodically starts to glitch out. It does this even if the background scrolls 4x slower than previously tested. After the glitchy part has moved along for a while, everything goes back to normal until it happens again.
The piece of code that controls the animation is this (got this somewhere off the Internet):
def background():
    global screen, bgOne, bgTwo, bgOne_x, bgTwo_x
    screen.blit(bgOne, (bgOne_x, 0))
    screen.blit(bgTwo, (bgTwo_x, 0))
    bgOne_x -= 1
    bgTwo_x -= 1
    if bgOne_x == -1 * bgOne.get_width():
        bgOne_x = bgTwo_x + bgTwo.get_width()
    if bgTwo_x == -1 * bgTwo.get_width():
        bgTwo_x = bgOne_x + bgOne.get_width()
Picture of the glitch:
(Posted on behalf of the OP).
The cause turned out to be a simple oversight. For anyone else encountering this problem: check the dimensions of the background picture you're using against the dimensions of the display that Pygame uses. In this case the image was narrower (807 px) than the screen (1024 px). I hope this helps beginners like me in the future.