Read values out of PetscEventPerfInfo - petsc

I have an application using PETSc. For performance monitoring under (near) production runs, I'd like to log a small number of values, some generated by PETSc and some not.
Now I wonder: how can I read the time value out of PetscEventPerfInfo to write it into my file? I can't find a documentation entry for PetscEventPerfInfo, so I'm not sure whether I'm even supposed to touch it.
However, I found the following function, which basically reveals the structure of PetscEventPerfInfo:
PetscErrorCode EventPerfInfoClear(PetscEventPerfInfo *eventInfo)
{
  PetscFunctionBegin;
  eventInfo->id            = -1;
  eventInfo->active        = PETSC_TRUE;
  eventInfo->visible       = PETSC_TRUE;
  eventInfo->depth         = 0;
  eventInfo->count         = 0;
  eventInfo->flops         = 0.0;
  eventInfo->flops2        = 0.0;
  eventInfo->flopsTmp      = 0.0;
  eventInfo->time          = 0.0;
  eventInfo->time2         = 0.0;
  eventInfo->timeTmp       = 0.0;
  eventInfo->numMessages   = 0.0;
  eventInfo->messageLength = 0.0;
  eventInfo->numReductions = 0.0;
  PetscFunctionReturn(0);
}
I have a strong guess that it's just eventInfo->time, but I'm absolutely not sure whether it's safe to read it or whether there's an "official" way to read from that structure.
So, what should I do if I just want to read the time value into a variable for further usage?

Good news: there is an example in PETSc, src/dm/impls/plex/examples/tests/ex9.c, which makes use of the time field of PetscEventPerfInfo to print performance-related logs!
The structure PetscEventPerfInfo is defined in petsc-3.7.6/include/petsclog.h.
There are plenty of comments in the file.
The field time is indeed defined as:
PetscLogDouble time, time2, timeTmp; /* The time and time^2 taken for this event */
There are clues in eventlog.c that time contains the execution time. Indeed, in the function PetscLogEventBeginComplete() there is an eventPerfLog->eventInfo[event].time -= curTime;, which matches the eventPerfLog->eventInfo[event].time += curTime; of PetscLogEventEndComplete().
Consequently, the value of the time field is meaningful if and only if both PetscLogEventBeginComplete() and PetscLogEventEndComplete() have been called, or both PetscLogEventBeginDefault() and PetscLogEventEndDefault().
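For reference, here is a minimal sketch of how reading the time could look without poking into the log structures directly, assuming your PETSc version provides PetscLogEventGetPerfInfo() (declared in petsclog.h), which copies the PetscEventPerfInfo of a given stage/event. "MyClass" and "MyEvent" are placeholder names, and logging has to be switched on (e.g. with -log_summary/-log_view or PetscLogBegin()) or the counters stay at zero:
#include <petscsys.h>

/* Sketch only: register an event, wrap the code to be measured with
   PetscLogEventBegin/End, then read the accumulated time back out. */
PetscErrorCode LogMyEventTime(void)
{
  PetscLogEvent      myEvent;
  PetscEventPerfInfo info;
  PetscClassId       classid;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = PetscClassIdRegister("MyClass", &classid);CHKERRQ(ierr);
  ierr = PetscLogEventRegister("MyEvent", classid, &myEvent);CHKERRQ(ierr);

  ierr = PetscLogEventBegin(myEvent, 0, 0, 0, 0);CHKERRQ(ierr);
  /* ... the code whose time you want to log ... */
  ierr = PetscLogEventEnd(myEvent, 0, 0, 0, 0);CHKERRQ(ierr);

  /* Stage 0 is the default "Main Stage"; info is a copy of the log entry. */
  ierr = PetscLogEventGetPerfInfo(0, myEvent, &info);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "MyEvent: %g s accumulated on this rank\n", (double)info.time);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}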

Related

Unwanted click when using SoXR Library to do variable rate resampling

I am using the SoXR library's variable rate feature to dynamically change the sampling rate of an audio stream in real time. Unfortunately I have noticed that an unwanted clicking noise is present when changing the rate from 1.0 to a larger value (e.g. 1.01) when testing with a sine wave. I have not noticed any unwanted artifacts when changing from a value larger than 1.0 back to 1.0. I looked at the waveform it was producing, and it appeared as if a few samples right at the rate change are transposed incorrectly.
Here's a picture of an example of a stereo 440 Hz sine wave stored using signed 16-bit interleaved samples:
I was also unable to find any documentation covering the variable rate feature beyond the fifth code example. Here is my initialization code:
bool DynamicRateAudioFrameQueue::intialize(uint32_t sampleRate, uint32_t numChannels)
{
    mSampleRate = sampleRate;
    mNumChannels = numChannels;
    mRate = 1.0;
    mGlideTimeInMs = 0;

    // Initialize buffer
    size_t intialBufferSize = 100 * sampleRate * numChannels / 1000; // 100 ms
    pFifoSampleBuffer = new FiFoBuffer<int16_t>(intialBufferSize);

    soxr_error_t error;

    // Use signed int16 with interleaved channels
    soxr_io_spec_t ioSpec = soxr_io_spec(SOXR_INT16_I, SOXR_INT16_I);

    // "When creating a var-rate resampler, q_spec must be set as follows:" - example code
    // Using SOXR_VR makes sense, but I'm not sure if the quality can be altered when using var-rate
    soxr_quality_spec_t qualitySpec = soxr_quality_spec(SOXR_HQ, SOXR_VR);

    // Using the var-rate io-spec is undocumented beyond a single code example which states
    // "The ratio of the given input rate and output rates must equate to the
    //  maximum I/O ratio that will be used."
    // My tests show this is not true
    double inRate = 1.0;
    double outRate = 1.0;

    mSoxrHandle = soxr_create(inRate, outRate, mNumChannels, &error, &ioSpec, &qualitySpec, NULL);
    if (error == 0) // soxr_error_t == 0; no error
    {
        mIntialized = true;
        return true;
    }
    else
    {
        return false;
    }
}
Any idea what may be causing this to happen? Or does anyone have a suggestion for an alternative library that is capable of variable rate audio resampling in real time?
After speaking with the developer of the SoXR library I was able to resolve this issue by adjusting the maximum ratio parameters in the soxr_create call. The developer's response can be found here.
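In code, my reading of that fix looks roughly like the sketch below (not the developer's exact code): for a SOXR_VR resampler the inRate/outRate pair passed to soxr_create() declares the maximum I/O ratio that will ever be used, and the working ratio is then set at run time with soxr_set_io_ratio(). maxRate is a placeholder for the largest rate the queue will ever request:
#include <soxr.h>

/* Sketch only: create a var-rate resampler whose maximum I/O ratio is declared up front. */
soxr_t create_var_rate_resampler(unsigned numChannels, double maxRate, soxr_error_t *error)
{
    soxr_io_spec_t      ioSpec      = soxr_io_spec(SOXR_INT16_I, SOXR_INT16_I);
    soxr_quality_spec_t qualitySpec = soxr_quality_spec(SOXR_HQ, SOXR_VR);

    /* For SOXR_VR the inRate/outRate pair encodes the *maximum* ratio,
       not the initial one, so pass maxRate/1.0 here rather than 1.0/1.0. */
    soxr_t handle = soxr_create(maxRate, 1.0, numChannels, error, &ioSpec, &qualitySpec, NULL);

    /* The working ratio is then set (and later varied) at run time. */
    if (handle && !*error)
        soxr_set_io_ratio(handle, 1.0, 0); /* 0 = change the ratio immediately, no slew */
    return handle;
}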

Velocity of a joint between two frames using Kinect SDK

I've been stuck on this problem for a long time and I would really appreciate it if anyone could help me with it.
I have asked many times in many forums and searched a lot, but found no answer that really helped me.
I'm developing an application where I have to calculate the velocity of a skeleton joint, using VS C# 2012 and Kinect SDK 1.7.
First I want to be sure of the logic of things before asking the question itself.
If I understood correctly, the delta_time I'm looking for to calculate velocity is not the duration of one frame (1/30 s), but must be calculated from two instants:
1- the instant when the "joint point" is detected and saved in the first frame
2- the instant when the same "joint point" is detected and saved in the next frame
If that's not true, thank you for clarifying things.
Starting from this hypothesis, I wrote code to:
detect a person
track the spine joint ==> if it is tracked, save its coordinates into a list (for the moment I only work on the Y axis to simplify)
record the time when the coordinates are saved
increment the frame counter (initially equal to zero)
if the frame counter is > 1, calculate the velocity (x2 - x1)/(T2 - T1) and save it
Here is a piece of the code:
System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
double msNow;
double msPast;
double diff;
TimeSpan currentTime;
TimeSpan lastTime = new TimeSpan(0);
List<double> Sylist = new List<double>();
private int framecounter = 0;

private void KinectSensorOnAllFramesReady(object sender, AllFramesReadyEventArgs allFramesReadyEventArgs)
{
    Skeleton first = GetFirstSkeleton(allFramesReadyEventArgs);
    if (first == null) // if there is no skeleton
    {
        txtP.Text = "No person detected"; // (Idle mode)
        return;
    }
    else
    {
        txtP.Text = "A person is detected";
        skeletonDetected = true;
        // look if the person is totally detected
        find_coordinates(first);

        /*******************************
         *        time computing       *
         *******************************/
        currentTime = stopWatch.Elapsed;
        msNow = currentTime.Seconds * 1000 + currentTime.Milliseconds;
        if (lastTime.Ticks != 0)
        {
            msPast = lastTime.Seconds * 1000 + lastTime.Milliseconds;
            diff = msNow - msPast;
        }
        lastTime = currentTime;
    }
    //framecounter++;
}

void find_coordinates(Skeleton first)
{
    //*modification 07052014 *****/
    Joint Spine = first.Joints[JointType.Spine];
    if (Spine.TrackingState == JointTrackingState.Tracked)
    {
        double Sy = Spine.Position.Y;

        /*******************************
         *        time starting        *
         *******************************/
        stopWatch.Start();
        Sylist.Add(Sy);
        framecounter++;
    }
    else
        return;

    if (framecounter > 1)
    {
        // difference between the last two saved Y values
        // (indexing with Sylist[Sylist.Count] would be out of range)
        double delta_Distance = Sylist[Sylist.Count - 1] - Sylist[Sylist.Count - 2];
    }
}
To be honest, I don't really know how to use TimeSpan and Stopwatch in this context (I mean when there are frames to process many times per second).
I will be thankful for any help!
First:
The SkeletonFrame has a property called Timestamp that you can use. It's better to use that than to create your own time system, because the timestamp is generated directly by the Kinect.
Second:
Keep track of the previous timestamp and location.
Then it's just a matter of calculation:
(CurrentLocation - PreviousLocation) = distance difference
(CurrentTimestamp - PreviousTimestamp) = time taken to travel that distance
For example, you might get 0.1 meter per 33 milliseconds.
So you can get the meters per second like this: (1 second / time taken to travel) * distance difference. In the example this is (1000/33)*0.1 = 3.03 meters per second.
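For reference, the same arithmetic as a tiny standalone program (plain C with made-up sample values, not real Kinect SDK calls):
#include <stdio.h>

int main(void)
{
    /* Placeholder samples: Y position (meters) and timestamp (milliseconds)
       of the same joint in two consecutive frames. */
    double prevY = 0.50, currY = 0.60;   /* meters       */
    double prevT = 0.0,  currT = 33.0;   /* milliseconds */

    double distance  = currY - prevY;              /* 0.1 m     */
    double dtSeconds = (currT - prevT) / 1000.0;   /* 0.033 s   */
    double velocity  = distance / dtSeconds;       /* ~3.03 m/s */

    printf("velocity = %.2f m/s\n", velocity);
    return 0;
}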

How do I compare a constant value to a continuously-updated Accelerometer value each time round a loop?

As was suggested to me in a previous post of mine, the following code takes the data coming from the accelerometer the "minute" the assignment CMAccelerometerData* data = [manager accelerometerData]; is performed, then extracts from that data the acceleration exerted on the x-axis and stores its value in a double (double x):
CMMotionManager* manager = [[CMMotionManager alloc] init];
CMAccelerometerData* data = [manager accelerometerData];
double x = [data acceleration].x;
Suppose the value stored is 0.03 and suppose that I want to use it in a while loop as follows:
while (x > 0)
{
    // do something
}
The above loop will obviously run forever.
However, what if I used the following code instead:
CMMotionManager* manager = [[CMMotionManager alloc] init];
while ([[manager accelerometerData] acceleration].x > 0)
{
    // do something
}
wouldn't I now be comparing zero to a different value each time round the loop?
(which is what I'm going for in my project anyway..)
Any thoughts?
The reason I'm asking this is the following:
I want to check the values coming from the x-axis over a certain period of time, rather than keep checking them at regular intervals, so I basically want to write a loop that would look something like this:
if ([[manager accelerometerData] acceleration].x > 0)
{
    // initialiseTimer
}
while ([[manager accelerometerData] acceleration].x > 0)
{
    if (checkTimer >= 250ms)
    {
        stopTimer;
        printOut("X-Axis acceleration was greater than zero for at least 250ms");
        breakFromLoop;
    }
}
I know the code in my second if-block isn't valid Objective-C. This was just to give you an idea of what I'm going for.
This has a simple solution.
1) Declare an instance variable x that you update each time the accelerometer tells you to.
2) Compare this x to whatever value you need in the loop.
Hope this helps.
Regards,
George
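To illustrate the idea in a language-neutral way, here is a rough C sketch of the 250 ms check from the question. read_accel_x() is a hypothetical stand-in for whatever supplies the latest x-axis reading (e.g. the value your accelerometer handler stores in the instance variable); it is not a real CoreMotion call:
#include <stdbool.h>
#include <time.h>

/* Hypothetical: returns the most recent x-axis acceleration sample. */
extern double read_accel_x(void);

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

bool x_positive_for_250ms(void)
{
    if (read_accel_x() <= 0)
        return false;
    double start = now_ms();
    while (read_accel_x() > 0) {          /* re-sampled on every iteration */
        if (now_ms() - start >= 250.0)
            return true;                  /* stayed positive for >= 250 ms */
    }
    return false;                         /* dropped to <= 0 before 250 ms */
}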

Realloc not expanding my array

I'm having trouble implementing realloc in a very basic way.
I'm trying to expand the region of memory at **ret, which points to an array of structs, with ret = realloc(ret, newsize);. Based on my debug strings I know newsize is correctly increasing over the course of the loop (going from the original size of 4 to 8 to 12 etc.), but when I do sizeof(ptr) it's still returning the original size of 4, and the things I'm trying to place into the newly allocated space can't be found. (I think I've narrowed it down to realloc(), which is why I'm formatting the question like this.)
I can post the function in its entirety if the problem isn't immediately evident to you; I'm just trying not to "cheat" with my homework too much (the code is kind of messy right now anyway, with heavy use of printf() for debugging).
[EDIT] Alright, so based on your answers I'm failing at debugging my code, so I guess I'll post the whole function so you can tell me more about what I'm doing wrong.
(You can ignore the printf()'s since most of that is debug output that isn't even working.)
Booking **bookingSelectPaid(Booking **booking)
{
    Booking **ret = malloc(sizeof(Booking*));
    printf("Initial address of ret = %p\n", ret);
    size_t i = 0;
    int numOfPaid = 0;
    while (booking[i] != NULL)
    {
        if (booking[i]->paid == 1)
        {
            printf("Paying customer! sizeof(Booking*) = %d\n", (int)sizeof(Booking*));
            ++numOfPaid;
            size_t newsize = sizeof(Booking*) * (numOfPaid + 1);
            printf("Newsize = %d\n", (int)newsize);
            Booking **temp = realloc(NULL, (size_t)newsize);
            if (temp != NULL)
                printf("Expansion success! => %p sizeof(new pointer) = %d ret = %p\n", temp, (int)sizeof(temp), ret);
            ret = realloc(ret, newsize);
            ret[i] = booking[i];
            ret[i+1] = NULL;
        }
        ++i;
        printf("Sizeof(ret) = %d numOfPaid = %d\n", (int)sizeof(ret), numOfPaid);
    }
    return ret;
}
[EDIT2] --> http://pastebin.com/xjzUBmPg
[EDIT3] Just to be clear, the printf()'s, the temp pointer and things of that nature are debug code, not part of the intended functionality. The line that is puzzling me is either the one with realloc(ret, newsize); or ret[i] = booking[i];.
Basically I know for sure that booking contains an array of struct pointers that ends in NULL, and I'm trying to bring the ones that have a specific value set to 1 (paid) onto the new array, which is what my main() is trying to get from this function... So where am I going wrong?
I think the problem here is that your sizeof(ptr) only returns the size of the pointer, which will depend on your architecture (you say 4, so that would mean you're running a 32-bit system).
If you allocate memory dynamically, you have to keep track of its size yourself.
Because sizeof(ptr) returns the size of the pointer, not the allocated size
Yep, sizeof(ptr) is a constant. As the other answer says, it depends on the architecture: on a 32-bit architecture it will be 4 and on a 64-bit architecture it will be 8.
Good luck.
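To make "keep track of its size yourself" concrete, here is a small generic C sketch of the usual growable-array pattern (plain ints rather than the Booking type from the question). Note that sizeof on the pointer never changes; only the count and capacity you maintain do:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t capacity = 4, count = 0;           /* we track the size ourselves */
    int **items = malloc(capacity * sizeof *items);
    if (!items) return 1;

    for (int value = 0; value < 10; ++value) {
        if (count == capacity) {
            size_t newcap = capacity * 2;
            /* realloc into a temporary: if it fails, the old block is still valid */
            int **tmp = realloc(items, newcap * sizeof *items);
            if (!tmp) { free(items); return 1; }
            items = tmp;
            capacity = newcap;
        }
        items[count] = malloc(sizeof *items[count]);
        *items[count] = value;
        ++count;
    }

    printf("count = %zu, capacity = %zu, sizeof items = %zu\n",
           count, capacity, sizeof items);    /* sizeof items is just the pointer size */

    for (size_t i = 0; i < count; ++i) free(items[i]);
    free(items);
    return 0;
}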

Timing function for "Wheel of Fortune" style ticker

I'm trying to create a function that changes a label's text. I want it in a "Wheel of Fortune" style where the text changes really quickly at first but slows down over time, then stops at the final text. I have the text as strings in an array.
I'm guessing I would need an exponential function to do this but maths is not my strong point. Currently I'm trying:
- (void)timer {
    float time = 0.5;
    float increase = 0.05;
    for (int x = 0; x < 100; x++) {
        sleep(time);   // note: sleep() takes whole seconds, so this float is truncated
        time = time + increase;
        NSLog(@"%f", time); // Log to show time of each iteration
    }
}
I don't need help linking it up with labels etc., I just need help to get the timing right.
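For the timing itself, one simple approach is to grow the delay geometrically each tick until it hits a cap; the changes then start fast and visibly slow down. A rough C sketch of just that part (usleep() instead of sleep(), since sleep() only accepts whole seconds; in a real app you would drive this from a timer rather than block the UI thread, and the start delay, factor and cap are values you would tune):
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    double delay = 0.05;            /* start fast: 50 ms between changes   */
    const double factor = 1.25;     /* each tick is 25% slower than before */
    const double maxDelay = 1.0;    /* stop once a tick takes a full second */

    for (int tick = 0; delay < maxDelay; ++tick) {
        /* here you would advance to the next string in the label array */
        printf("tick %d, delay %.3f s\n", tick, delay);
        usleep((useconds_t)(delay * 1e6));  /* usleep() takes microseconds */
        delay *= factor;
    }
    return 0;
}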