CVDisplayLink constant outputTime value - objective-c

I'm writing this post in case anyone else is having the same issue I've been having with the lack of documentation for the CVDisplayLink API.
Intro:
In my CVDisplayLink code I've been using the following code to obtain the deltaSeconds value between calls to its callback:
float deltaTime = 1.0 / (outputTime->rateScalar * (float)outputTime->videoTimeScale / (float)outputTime->videoRefreshPeriod);
It seems like this line of code is widely used across different apps & engines.
The issue:
While running my OpenGL app I've noticed that this value is now constant (0.016669, to be precise). I haven't made any significant changes that would account for this change in behaviour, other than updating to Mavericks and the new development tools.
Finding the cause has been a lost cause so far.

I've found what I believe is a good way to calculate the deltaSeconds between frames by using the following alternative code:
double deltaSeconds = (outputTime->videoTime - self.previousOutputVideoTime) / (double)outputTime->videoTimeScale;
self.previousOutputVideoTime = outputTime->videoTime;
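For context, here is a minimal sketch of how this could sit inside the display link's output callback. The callback signature is the standard CVDisplayLinkOutputCallback; MyView and its previousOutputVideoTime property are illustrative stand-ins for whatever object owns the link:

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *displayLinkContext)
{
    MyView *view = (__bridge MyView *)displayLinkContext;
    // videoTime counts units of the videoTimeScale clock (units per second)
    double deltaSeconds = (outputTime->videoTime - view.previousOutputVideoTime)
                          / (double)outputTime->videoTimeScale;
    view.previousOutputVideoTime = outputTime->videoTime;
    // Note: the first invocation yields a meaningless delta unless
    // previousOutputVideoTime is seeded before the link starts.
    return kCVReturnSuccess;
}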


Implementing components of a computer processor using .hdl and the Hardware Simulator (nand2tetris)

I'm having trouble getting my .hdl files to load in the HardwareSimulator. So far I have implemented FullAdder.hdl and Add16.hdl.
The error message I'm receiving is
line 22, zab has no source pin
Here is the relevant code for the Add16:
CHIP Add16 {
    IN x[16], y[16];
    OUT out[16];

PARTS:
    HalfAdder(x=x[0], y=y[0], sum=out[0], carry=c);
    FullAdder(x=x[1], y=y[1], c=c, sum=out[1], carry=d);
    FullAdder(x=x[2], y=y[2], c=d, sum=out[2], carry=e);
    FullAdder(x=x[3], y=y[3], c=e, sum=out[3], carry=f);
    FullAdder(x=x[4], y=y[4], c=f, sum=out[4], carry=g);
    FullAdder(x=x[5], y=y[5], c=g, sum=out[5], carry=h);
    FullAdder(x=x[6], y=y[6], c=h, sum=out[6], carry=i);
    FullAdder(x=x[7], y=y[7], c=i, sum=out[7], carry=j);
    FullAdder(x=x[8], y=y[8], c=j, sum=out[8], carry=k);
    FullAdder(x=x[9], y=y[9], c=k, sum=out[9], carry=l);
    FullAdder(x=x[10], y=y[10], c=l, sum=out[10], carry=m);
    FullAdder(x=x[11], y=y[11], c=m, sum=out[11], carry=n);
    FullAdder(x=x[12], y=y[12], c=n, sum=out[12], carry=o);
    FullAdder(x=x[13], y=y[13], c=o, sum=out[13], carry=p);
    FullAdder(x=x[14], y=y[14], c=p, sum=out[14], carry=q);
    FullAdder(x=x[15], y=y[15], c=q, sum=out[15], carry=drop);
}
I'm struggling to find the error, since I'm pretty sure I've implemented this chip in exactly the same way in the past and it worked fine.
As for the full adder, it's the same error message but for line 16.
I'll provide the relevant code for this part also:
CHIP FullAdder {
    IN x, y, z;    // 1-bit inputs
    OUT sum,       // Right bit of x + y + z
        carry;     // Left bit of x + y + z

PARTS:
    HalfAdder(x=x, y=y, sum=xy, carry=zxy);
    HalfAdder(x=z, y=xy, sum=sum, carry=s);
    Or(x=zab, y=s, out=carry);
}
I can't wrap my mind around the error referring to line 16. That's way after the terminating bracket in FullAdder.
I've browsed the internet, and as far as I can tell my implementation is perfectly correct. Any advice from anyone who is proficient in the computer-processors area? It would definitely be useful for anyone else running into the same or similar problems.
Thanks
edit: According to this link (Logic Gates) my implementation looks more or less exactly the same. Could it be a faulty HardwareSimulator on my end? I doubt that, though, since I've used it in the past and it was recommended to me by my University.
The problem is in the x input to your Or gate in the FullAdder. You are referring to a signal (pin) "zab" but no such signal is defined.
In future, please remember to post the entire files. It is hard to help you diagnose an error in line 16 when it is difficult to tell what line that actually is.
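For concreteness: the carry output of your first HalfAdder is named zxy, so (keeping the x/y pin naming used throughout your snippet) the last PARTS line should read:

Or(x=zxy, y=s, out=carry);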

Root finding with a kinked function using NLsolve in Julia

I am currently trying to solve a complementarity problem with a function that features a downward discontinuity, using the mcpsolve() function of the NLsolve package in Julia. The function was plotted, for specific parameters, in a three-panel figure; the numbers below refer to those panels.
Unfortunately, the algorithm does not always return the interior solution, even though it exists:
In (1), when starting at 0, the algorithm stays at 0, thinking that the boundary constraint binds,
In (2), when starting at 0, the algorithm stops right before the downward jump, even though the solution lies to the right of this point.
These problems occur regardless of the method used - trust region or Newton's method. Ideally, the algorithm would look for potential solutions in the entire set before stopping.
I was wondering if some of you had worked with similar functions, and had found a clever solution to bypass these issues. Note that
Starting to the right of the solution would not solve these problems, as they would also occur for other parametrizations - see (3) this time,
It is not known a priori where in the parameter space the particular cases occur.
As an illustrative example, consider the following piece of code. Note that this function is smoother, and yet here too the algorithm cannot find the solution.
function f!(x, fvec)
    if x[1] <= 1.8
        fvec[1] = 0.1 * (sin(3*x[1]) - 3)
    else
        fvec[1] = 0.1 * (x[1]^2 - 7)
    end
end

NLsolve.mcpsolve(f!, [0.], [Inf], [0.], reformulation = :smooth, autodiff = true)
Once more, setting the initial value to something other than 0 would only postpone the problem. Also, as pointed out by Halirutan, fzero from the Roots package would probably work, but I'd rather use mcpsolve() as the problem is initially a complementarity problem.
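For reference, here is the kind of stopgap I have in mind: a brute-force multi-start loop that runs mcpsolve from a coarse grid of starting points and keeps the first run that converges to an interior root. The name multistart_mcpsolve and the grid are purely illustrative, and the sketch assumes the same (older) NLsolve API as the snippet above:

function multistart_mcpsolve(f!, lower, upper, starts; tol = 1e-6)
    for x0 in starts
        r = NLsolve.mcpsolve(f!, lower, upper, [x0],
                             reformulation = :smooth, autodiff = true)
        fx = [0.0]
        f!(r.zero, fx)
        # accept only interior roots, i.e. points where f itself vanishes
        if NLsolve.converged(r) && abs(fx[1]) < tol
            return r
        end
    end
    return nothing  # no interior solution found from any starting point
end

multistart_mcpsolve(f!, [0.], [Inf], 0.0:0.5:5.0)

This only papers over the sensitivity to the starting point rather than fixing it, which is why I would still welcome a cleverer approach.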
Thank you very much in advance for your help.

Creating robust real-time monitors for variables

We can create a real-time monitor for a variable like this:
CreatePalette@Panel@Row[{"x = ", Dynamic[x]}]
(This is more interesting and useful if x happens to be something like $Assumptions. It's so easy to set a value and then forget about it.)
Unfortunately this stops working if the kernel is re-launched (Quit[], then evaluate something). The palette won't show changes in the value of x any more.
Is there a way to do this so it keeps working even across kernel sessions? I find myself restarting the kernel quite often. (If the resulting palette causes the kernel to be automatically started after Quit that's fine.)
Update: As mentioned in the comments, it turns out that the palette ceases working only if we quit by evaluating Quit[]. When using Evaluation -> Quit Kernel -> Local, it will keep working.
Link to same question on MathGroup.
I can only guess, because on my Ubuntu machine the situation seems buggy. The trick with Quit from the menu that Leonid suggested did not work here. Another observation: in a fresh Mathematica session with only one notebook open,
Dynamic[x]
x = 1
Dynamic[x]
x = 2
gives as expected
2
1
2
2
Typing Quit on the next line, evaluating it, and then entering x=3 updates only the first of the Dynamic[x] outputs.
Nevertheless, have you checked the command
Internal`GetTrackedSymbols[]
This returns not only the tracked symbols but also some kind of ID indicating which Dynamic content they belong to. If you can find out what exactly these numbers are, and investigate the other functions in the Internal` context, you may be able to re-register your palette's Dynamic content manually after restarting the kernel.
I thought I had something like that with
Internal`SetValueTrackExtra
but I'm currently not able to reproduce the behavior.
@halirutan's answer jarred my memory...
Have you ever come across Experimental`ValueFunction? (documentation address: Experimental/ref/ValueFunction)
Although the documentation contains no examples, the 'more information' section provides the following tidbit:
The assignment ValueFunction[symb] = f specifies that whenever symb gets a new value val, the expression f[symb, val] should be evaluated.
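Putting the two ideas together, here is an untested sketch of how that hook might feed a restart-proof palette. ValueFunction is undocumented, so treat this as an assumption to verify; the TaggingRules key "xValue" is just an illustrative name. The palette tracks a front-end value (which survives kernel restarts), while the hook, which has to be re-registered after each kernel start, pushes new kernel-side values of x into it:

(* palette watches a front-end value, not the kernel symbol *)
CreatePalette@Panel@Row[{"x = ",
  Dynamic[CurrentValue[$FrontEnd, {TaggingRules, "xValue"}]]}]

(* re-evaluate after each kernel start: copy new values of x to the front end *)
Experimental`ValueFunction[x] =
  Function[{sym, val},
    CurrentValue[$FrontEnd, {TaggingRules, "xValue"}] = val];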

Circumventing R's `Error in if (nbins > .Machine$integer.max)`

This is a saga which began with the problem of how to do survey weighting. Now that I appear to be doing that correctly, I have hit a bit of a wall (see previous post for details on the import process and where the strata variable came from):
> require(foreign)
> ipums <- read.dta('/path/to/data.dta')
> require(survey)
> ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=~perwt)
Error in if (nbins > .Machine$integer.max) stop("attempt to make a table with >= 2^31 elements") :
missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In pd * (as.integer(cat) - 1L) : NAs produced by integer overflow
2: In pd * nl : NAs produced by integer overflow
> traceback()
9: tabulate(bin, pd)
8: as.vector(data)
7: array(tabulate(bin, pd), dims, dimnames = dn)
6: table(ids[, 1], strata[, 1])
5: inherits(x, "data.frame")
4: is.data.frame(x)
3: rowSums(table(ids[, 1], strata[, 1]) > 0)
2: svydesign.default(id = ~serial, weights = ~perwt, strata = ~strata,
data = ipums)
1: svydesign(id = ~serial, weights = ~perwt, strata = ~strata, data = ipums)
This error seems to come from the tabulate function, which I hoped would be straightforward enough to circumvent, first by changing .Machine$integer.max
> .Machine$integer.max <- 2^40
and, when that didn't work, by replacing the whole source code of tabulate:
> tabulate <- function(bin, nbins = max(1L, bin, na.rm = TRUE))
{
    if (!is.numeric(bin) && !is.factor(bin))
        stop("'bin' must be numeric or a factor")
    #if (nbins > .Machine$integer.max)
    if (nbins > 2^40) # replacement line
        stop("attempt to make a table with >= 2^31 elements")
    .C("R_tabulate",
       as.integer(bin),
       as.integer(length(bin)),
       as.integer(nbins),
       ans = integer(nbins),
       NAOK = TRUE,
       PACKAGE = "base")$ans
}
Neither circumvented the problem (presumably because code inside the survey and base namespaces resolves tabulate and .Machine to the originals rather than to my copies in the global environment). Apparently this is one reason why the ff package was created, but what worries me is the extent to which this is a problem I cannot avoid in R. This post seems to indicate that even if I were to use a package that avoided this problem, I would only be able to access 2^31 elements at a time. My hope was to use SQL (either SQLite or PostgreSQL) to get around the memory problems, but I'm afraid I'll spend a while getting that to work, only to run into the same fundamental limit.
Attempting to switch back to Stata doesn't solve the problem either. Again see the previous post for how I use svyset, but the calculation I would like to run causes Stata to hang:
svy: mean age, over(strata)
Whether throwing more memory at it will solve the problem, I don't know. I run R on my desktop, which has 16 GB, and I use Stata through a Windows server, currently setting memory allocation to 2000 MB, but I could theoretically experiment with increasing that.
So in sum:
1. Is this a hard limit in R?
2. Would SQL solve my R problems?
3. If I split it up into many separate files, would that fix it (a lot of work...)?
4. Would throwing a lot of memory at Stata do it?
5. Am I seriously barking up the wrong tree somehow?
Yes, R uses 32-bit indexes for vectors, so they can contain no more than 2^31 - 1 entries, and you are trying to create something with 2^40. There is talk of introducing 64-bit indexes, but that is some way off from appearing in R. Vectors have the stated hard limit, and that is it as far as base R is concerned.
I am unfamiliar with the details of what you are doing to offer any further advice on the other parts of your Q.
Why do you want to work with the full data set? Wouldn't a smaller sample that fits into the restrictions R places on you be just as useful? You could use SQL to store all the data and query it from R to return a random subset of more appropriate size.
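For example, a minimal sketch of that approach with RSQLite (the database file ipums.sqlite and the table name ipums are illustrative assumptions; SQLite's random() gives a quick-and-dirty sample of roughly 1% of the rows):

library(DBI)
library(RSQLite)
con <- dbConnect(SQLite(), "ipums.sqlite")
# pull roughly 1% of rows at random; adjust the modulus for other fractions
sub <- dbGetQuery(con, "SELECT * FROM ipums WHERE abs(random() % 100) = 0")
dbDisconnect(con)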
Since this question was asked some time ago, I'd like to point out that my answer here uses version 3.3 of the survey package.
If you check the code of svydesign, you can see that the function that causes all the trouble lives within a check step that looks at whether you should set the nest parameter to TRUE or not. This step can be disabled by setting the option check.strata=FALSE.
Of course, you shouldn't disable a check step unless you know what you are doing. In this case, you should be able to decide yourself whether you need to set the nest option to TRUE or FALSE. nest should be set to TRUE when the same PSU (cluster) id is recycled in different strata.
Concretely for the IPUMS dataset, since you are using the serial variable for cluster identification and serial is unique for each household in a given sample, you may want to set nest to FALSE.
So, your survey design line would be:
ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=perwt, check.strata=FALSE, nest=FALSE)
Extra advice: even after circumventing this problem you will find that the code is pretty slow unless you remap strata to a range from 1 to length(unique(ipums$strata)):
ipums$strata <- match(ipums$strata,unique(ipums$strata))
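For illustration: match(c(30012, 10455, 30012), unique(c(30012, 10455, 30012))) returns 1 2 1, so the recoded strata become small consecutive integers while preserving which observations share a stratum.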
Both @Gavin and @Martin deserve credit for this answer, or at least for leading me in the right direction. I'm mostly answering it separately to make it easier to read.
In the order I asked:
Yes, 2^31 is a hard limit in R, though it seems to matter what type it is (which is a bit strange, given that the stated problem is the length of the vector rather than the amount of memory, of which I have plenty). Do not convert strata or id variables to factors; that will just fix their length and nullify the effects of subsetting (which is the way to get around this problem).
SQL could probably help, provided I learn how to use it correctly. I did the following test:
library(multicore) # make svy fast!
ri.ny <- subset(ipums, statefips_num %in% c(36, 44))
ri.ny.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ri.ny)
svyby(~incwage, ~strata, ri.ny.design, svymean, data=ri.ny, na.rm=TRUE, multicore=TRUE)
ri <- subset(ri.ny, statefips_num==44)
ri.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ri)
ri.mean <- svymean(~incwage, ri.design, data=ri, na.rm=TRUE)
ny <- subset(ri.ny, statefips_num==36)
ny.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ny)
ny.mean <- svymean(~incwage, ny.design, data=ny, na.rm=TRUE, multicore=TRUE)
And found the means to be the same, which seems like a reasonable test.
So: in theory, provided I can split up the calculation by either using plyr or sql, the results should still be fine.
See 2.
Throwing a lot of memory at Stata definitely helps, but now I'm running into annoying formatting issues. I seem to be able to perform most of the calculation I want (much quicker and with more stability as well) but I can't figure out how to get it into the form I want. Will probably ask a separate question on this. I think the short version here is that for big survey data, Stata is much better out of the box.
In many ways, yes. Trying to do analysis with data this big is not something I should have taken on lightly, and I'm far from figuring it out even now. I was using the svydesign function correctly, but I didn't really know what was going on. I have a (very slightly) better grasp now, and it's heartening to know I was generally correct about how to solve the problem. @Gavin's general suggestion of trying out small data with external results to compare to is invaluable, something I should have started ages ago. Many thanks to both @Gavin and @Martin.

Calling functions from within function(float *VeryBigArray,long SizeofArray) from within objC method fails with EXC_BAD_ACCESS

Ok, I finally found the problem. It was inside the C function (CarbonTuner2), not the objC method. I was creating, inside the function, an array of the same size as the file, so if the file size was big it created a really big array, and my guess is that when I called another function from there, the local variables were put on the stack, which created the EXC_BAD_ACCESS.
What I did then, instead of using a variable to declare the size of the array, was to put the number in directly. Then the code didn't even compile; it knew. The error was something like: array size too big. I guess working 20+ hours in a row isn't good XD. But I am definitely going to look into tools other than step-by-step debugging to figure these ones out. Thanks for your help.
Here is the code; if you divide gFileByteCount by 2, you don't get the error anymore:
// ConverterController.h
#import <Cocoa/Cocoa.h>
#import "Converter.h"

@interface ConverterController : NSObject {
    UInt64 gFileByteCount;
}
- (IBAction)ProcessFile:(id)sender;
void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp);
@end

// ConverterController.m
#include "ConverterController.h"

@implementation ConverterController
- (IBAction)ProcessFile:(id)sender {
    // gTotalPacketCount is defined elsewhere in the original project
    UInt32 packets = gTotalPacketCount; // alloc a buffer of memory to hold the data read from disk.
    gFileByteCount = 250000;
    long LENGTH = (long)gFileByteCount;
    CarbonTuner2(LENGTH, (long)8192/2, (long)4*2);
}
@end

void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp)
{
    long numFrames = numSampsToProcess / fftFrameSize * osamp;
    // variable-length array on the stack; with a large numFrames this
    // overflows the stack, producing EXC_BAD_ACCESS
    float g2DFFTworksp[numFrames+2][2 * fftFrameSize];
    double hello = sin(2.345);
}
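A minimal sketch of the usual fix, assuming the goal is simply to get the working buffer off the stack: allocate it on the heap instead, so its size is bounded by available memory rather than by the (much smaller) stack:

#include <math.h>
#include <stdlib.h>

void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp)
{
    long numFrames = numSampsToProcess / fftFrameSize * osamp;
    // pointer to a C99 variably modified array type, allocated on the heap;
    // the g2DFFTworksp[frame][bin] indexing works exactly as before
    float (*g2DFFTworksp)[2 * fftFrameSize] =
        calloc(numFrames + 2, sizeof *g2DFFTworksp);
    if (g2DFFTworksp == NULL)
        return; // allocation failed; handle as appropriate
    /* ... FFT work using g2DFFTworksp ... */
    free(g2DFFTworksp);
}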
Your crash has nothing to do with incompatibilities between C and ObjC.
And as previous posters said, you don't need to include math.h.
Run your code under gdb, and see where the crash happens by using backtrace.
Are you sure you're not sending bad arguments to the math functions?
E.g. this causes BAD_ACCESS:
double t = cos(*(double *)NULL);
Objective C is built directly on C, and the C underpinnings can and do work.
For an example of using math.h and parts of standard library from within an Objective C module, see:
http://en.wikibooks.org/wiki/Objective-C_Programming/syntax
There are other examples around.
Some care is needed when passing variables around: use the C variables for the C and standard library calls, and don't mix C data types and Objective-C data types incautiously. You'll usually want a conversion here.
If that is not the case, then please consider posting the code involved, and the error(s) you are receiving.
And with all respect due to Mr Hellman's response, I've hit errors when I don't have the header files included; I prefer to include the headers. But then, I tend to dial the compiler diagnostics up a couple of notches, too.
For what it's worth, I don't include math.h in my Cocoa app but have no problem using math functions (in C).
For example, I use atan() and don't get compiler errors, or run time errors.
Can you try this without including math.h at all?
First, you should add your code to your question, rather than posting it as an answer, so people can see what you're asking about. Second, you've got all sorts of weird problems with your memory management here - gFileByteCount is used to size a bunch of buffers, but it's set to zero, and doesn't appear to get re-set anywhere.
err = AudioFileReadPackets(fileID,
                           false, &bytesReturned, NULL, 0,
                           &packets, (Byte *)rawAudio);
So, at this point, you pass a zero-sized buffer to AudioFileReadPackets, which promptly overruns the heap, corrupting the value of who knows what other variables...
fRawAudio = malloc(gFileByteCount/(BITS/8)*sizeof(fRawAudio));
Here's another, minor error - you want sizeof(*fRawAudio) here, since you're trying to allocate an array of floats, not an array of float pointers. Fortunately, those entities are the same size, so it doesn't matter.
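In other words, with the same variables as above, the allocation would read:

fRawAudio = malloc(gFileByteCount/(BITS/8) * sizeof(*fRawAudio));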
You should probably start with some example code that you know works (SpeakHere?) and modify it. I suspect there are other similar problems in the code you posted, but I don't have time to find them right now. At least get the rawAudio buffer appropriately sized, and use the values returned from AudioFileReadPackets appropriately.