How to make an object appear after a specific amount of time in Processing - arraylist

I'm trying to make a program where you are a ship and you simply avoid comets that fly towards you. I somewhat know how to use ArrayLists to add and remove objects, but I'm not sure how to get the program to add and remove objects after a specific time, like 5 seconds. My goal is to make each comet spawn 2 seconds apart, but I'm not sure how. If anyone can help, please let me know!

Processing exposes a useful variable frameCount that you can use for such timing behaviours.
You could use it in combination with the modulo operator % (an operator that returns the remainder after the division of two numbers), as follows:
void draw() {
  // ...
  if (frameCount % t == 0) {
    spawnComet();
  }
  // ...
}
Assuming the frame rate is fixed at 60, t should be 60 * (desired delay in seconds). You want to spawn comets every 2 seconds: 60 * 2 = 120, so set t to 120. This means spawnComet() will be called every 120 frames, i.e. every 2 seconds.
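To tie this back to the ArrayList part of your question, here is a minimal sketch of how that timing check could drive adding and removing comets. The Comet class below is just a placeholder so the example runs on its own; your own comet class will look different:
ArrayList<Comet> comets = new ArrayList<Comet>();
int t = 120;  // 2 seconds at 60 frames per second

void setup() {
  size(640, 360);
  frameRate(60);
}

void draw() {
  background(0);
  // spawn a new comet every t frames
  if (frameCount % t == 0) {
    comets.add(new Comet(width, random(height)));
  }
  // update and draw the comets, removing the ones that have left the screen
  for (int i = comets.size() - 1; i >= 0; i--) {
    Comet c = comets.get(i);
    c.update();
    c.display();
    if (c.x < -20) {
      comets.remove(i);
    }
  }
}

// placeholder comet that drifts from right to left
class Comet {
  float x, y;
  Comet(float x, float y) {
    this.x = x;
    this.y = y;
  }
  void update() {
    x -= 4;
  }
  void display() {
    ellipse(x, y, 20, 20);
  }
}
If you would rather not tie the timing to the frame rate, millis() returns the elapsed milliseconds since the sketch started and can be compared against a stored timestamp in the same way.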

Create a variable to count from 1 to n in AnyLogic

I am looking to add a variable that counts from 1 to 217 every hour in AnyLogic, in order to use it as a choice condition to set a parameter's row reference.
I am assuming I either need to use an event or a statechart, however I am really struggling with the exact implementation and cannot find anything online.
If you have any tips please let me know, any help would be appreciated.
Thank you,
Tash
A state machine isn't necessary here; this can be achieved with a calculation or a timed event. AnyLogic has a time() function which returns the time since model start as a double, in the model's time units.
For example: if the model time unit is seconds and the model has been running for 2h 2min 10s, then time(SECOND) will return 7330.0 (it is always a double value). 1/217th of an hour corresponds to about 3600/217 = 16.58 seconds. Also, Java has a handy function Math.floor() which rounds a double value down, so Math.floor(8.37) = 8.0.
Assembling it all together:
// how many full hours have elapsed from the start of the model
double fullHrsFromStart = Math.floor(time(HOUR));
// how many seconds have elapsed in the current model hour
double secondsInCurrentHour = time(SECOND) - fullHrsFromStart * 3600.0;
// how many full 16.58 (1/217th of an hour) intervals have elapsed
int fullIntervals = (int)(secondsInCurrentHour / 16.58);
All of this can be packaged into a function that can be called at any time, and it is pretty fast.
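As a rough sketch (the function name and the 1-based numbering are my own choices, not AnyLogic built-ins), an AnyLogic function returning int could look like this; using 3600.0 / 217.0 directly avoids the small rounding error of 16.58:
int currentInterval() {
    // how many full hours have elapsed since the start of the model
    double fullHrsFromStart = Math.floor(time(HOUR));
    // how many seconds have elapsed in the current model hour
    double secondsInCurrentHour = time(SECOND) - fullHrsFromStart * 3600.0;
    // which 1/217th slice of the current hour we are in, counted 1..217
    return (int) Math.floor(secondsInCurrentHour / (3600.0 / 217.0)) + 1;
}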
Alternatively, an Event can be created which increments some count by 1 every 16.58 seconds and then resets it back to 0 when the count reaches 217.
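As a sketch of that event-based variant: with an int variable (called count here, the name is just a placeholder) and a cyclic event whose recurrence time is 3600.0 / 217.0 seconds, the event's action could simply be:
// action of a cyclic event firing every 3600.0 / 217.0 seconds
count++;
if (count == 217) {
    count = 0;  // reset at 217 as described above; adjust the bounds if you need the values 1..217
}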

What is the time complexity of the function below?

I was reading a book about competitive programming and encountered a problem where we have to count all possible paths in an n*n matrix.
The conditions are:
1. Every cell must be visited exactly once (no cell may be left unvisited or visited more than once)
2. The path should start at (1,1) and end at (n,n)
3. The possible moves are right, left, up, down from the current cell
4. You cannot go out of the grid
Now this is my code for the problem:
typedef long long ll;

// counts paths starting at (r, c) that visit every not-yet-visited cell
// exactly once and end at (n-1, n-1); done[i][j] marks visited cells
ll path_count(ll n, vector<vector<bool>>& done, ll r, ll c) {
    ll count = 0;
    done[r][c] = true;
    if (r == (n - 1) && c == (n - 1)) {
        // reached the target: accept only if every cell has been visited
        for (ll i = 0; i < n; i++) {
            for (ll j = 0; j < n; j++) if (!done[i][j]) {
                done[r][c] = false;
                return 0;
            }
        }
        count++;
    }
    else {
        // try every direction that stays inside the grid and is unvisited
        if ((r + 1) < n  && !done[r + 1][c]) count += path_count(n, done, r + 1, c);
        if ((r - 1) >= 0 && !done[r - 1][c]) count += path_count(n, done, r - 1, c);
        if ((c + 1) < n  && !done[r][c + 1]) count += path_count(n, done, r, c + 1);
        if ((c - 1) >= 0 && !done[r][c - 1]) count += path_count(n, done, r, c - 1);
    }
    done[r][c] = false;  // un-mark on the way back (backtracking)
    return count;
}
Here, if we define a recurrence relation, it could be something like: T(n) = 4T(n-1) + n^2
Is this recurrence relation correct? I don't think so, because if we use the master theorem it gives O(4^n * n^2), and I don't think the function can be of that order.
The reason I say this is that for a 7*7 matrix the code takes around 110.09 seconds, and I don't think O(4^n * n^2) should take that much time for n = 7.
If we calculate it for n = 7, the approximate instruction count would be 4^7 * 7^2 = 802816 ≈ 10^6. That many instructions should not take that long. So I conclude that my recurrence relation is wrong.
The code outputs 111712 for n = 7, which matches the book's answer, so the code is correct.
So what is the correct time complexity?
No, the complexity is not O(4^n * n^2).
Consider the 4^n in your notation. That would mean recursing to a depth of at most n (7 in your case) with 4 choices at each level. But that is not what happens: at the 8th level you still have several choices of where to go next. In fact, you keep branching until you complete a path, and a path has depth n^2.
So a non-tight bound is O(4^(n^2) * n^2). This bound, however, is far from tight, since it assumes you have 4 valid choices in every recursive call, which is not the case.
I am not sure how much tighter it can get, but a first improvement drops it to O(3^(n^2) * n^2), since you never go back to the cell you just came from. This bound is still far from optimal.
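To make that concrete with the numbers from the question (a rough counting argument, not a tight analysis): the recursion tree has depth at most n^2 because each call marks one more cell as visited, the root has at most 4 children and every other node at most 3, and each call that reaches (n-1, n-1) pays an O(n^2) scan of done. That gives at most on the order of 4 * 3^(n^2 - 1) * n^2 = O(3^(n^2) * n^2) work, and for n = 7 that is about 3^49 ≈ 2.4 * 10^23, astronomically more than the work the program actually does in 110 seconds, which is consistent with the bound being very loose.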

WAIT UP TO <milliseconds> in ABAP

According to the ABAP documentation, the statement WAIT UP TO x SECONDS needs an operand of type i. However, I'd like to wait up to x milliseconds or something similar. Neither the official documentation nor several other forum posts have been helpful so far.
Is there any way to specify a wait for a fraction of a second?
You can simply pass a decimal value like:
WAIT UP TO '0.5' SECONDS
or something like:
WAIT UP TO '0.01' SECONDS
See also How to make an abap program pause.
If you want to avoid the implicit database commit that WAIT UP TO triggers, create a simple RFC function:
FUNCTION ZSLEEP .
*"--------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(DURATION) TYPE SDURATION_SECONDS
*"--------------------------------------------------------------------
* To wait 50 milliseconds, call it like this:
* DATA duration TYPE sduration_seconds VALUE '0.050'.
* CALL FUNCTION 'ZSLEEP' DESTINATION 'NONE' KEEPING LOGICAL UNIT OF WORK EXPORTING duration = duration.
  WAIT UP TO duration SECONDS.
ENDFUNCTION.
I've just solved it like this:
DATA: timestart TYPE timestampl,
      timeend   TYPE timestampl,
      millisecs TYPE timestampl,
      imilli    TYPE i VALUE 200.

GET TIME STAMP FIELD timestart.
millisecs = imilli / 1000.
timestart = timestart + millisecs.

DO.
  GET TIME STAMP FIELD timeend.
  IF timestart < timeend.
    EXIT.
  ENDIF.
ENDDO.

WRITE timeend.
If I now rewrite this as a function taking an integer as an import parameter (in place of imilli) I'll - to my knowledge - have exactly what I wanted.
I'll leave this up for a little before tagging it as the correct answer in the hopes that someone may have a better / more elegant solution.
Without questioning the requirement, two ways to do this are:
GET RUN TIME
(where SET RUN TIME CLOCK RESOLUTION can be important)
or
GET TIME STAMP with a target field of type TIMESTAMPL.
Do not use WAIT UP TO for fine-grained timing, because of work process switching.
WAIT also carries other side effects that are not immediately obvious.

NSSpeechSynthesizer and track duration

This time I have a logic question; I hope someone can help me. Using NSSpeechSynthesizer you can set the rate, e.g. 235 words per minute, 100 words per minute and so on...
I found that words per minute is generally calculated using a standardized word length of 5 characters per word, counting spaces and symbols too.
I need to automatically subdivide a long text into tracks with a pre-selected duration, let's say 15 minutes per track.
How can we calculate the correct number of characters to pass to the speech engine for each 'split'?
My solution is as follows:
// duration is the number of minutes per track
numberOfWordsPerTrack = [rateSlider floatValue] * duration;
splits = [[NSMutableArray alloc] init];
finished = NO;
NSUInteger position = 0;
while( !finished ) {
    NSRange range;
    // the idea is: I take 5*numberOfWordsPerTrack characters
    // as long as the text allows me to select them
    range = NSMakeRange( position, 5*numberOfWordsPerTrack );
    if( range.location+range.length > mainTextView.string.length ) {
        // If there is not another full track of characters left,
        // we take the tail of the remaining text
        finished = YES;
        range = NSMakeRange( position, mainTextView.string.length-position );
    }
    // Here we get the track and add it to the split list
    if( range.location+range.length <= mainTextView.string.length ) {
        currentSplit = [mainTextView.string substringWithRange:range];
        [splits addObject:currentSplit];
    }
    position += range.length;
}
The problem with this solution is that the track duration is not correct. It is not very far from the desired value, but it is not right. For example, using 235 words per minute with a duration of 50 minutes, I get 40 minutes per track. If I set 120 minutes per track, I get 1h:39m per track... and so on...
Where do you think the logic error is?
EDIT AFTER JanX2's REPLY
Well, while thinking it over I came to the following hypothesis. Could you tell me what you think about it before I implement it, because it is not a light change to my code.
If I used the speechSynthesizer:willSpeakWord:ofString: delegate method, I could check the .aiff file size frequently, i.e. before speaking the next word (a real word, not a standardized one). Because we know the sample rate, bit depth and channel count the synthesizer creates those files with, and because we know they are not compressed, we could make a reasonable guess about the current length of the track.
The biggest drawback of this solution could be the continuous disk access, which could seriously degrade performance.
What do you think?
I can only guess, but the heuristic you use counts “silent” characters as if they were spoken. Why not try to compensate for the measured error? The error appears to be pretty much linear, so you could factor it into your calculation:
40 / 50 = 80%
99 / 120 = 82.5%
So the tracks come out at roughly 80-82.5% of the target length. Scaling the character count you calculate above up by about 1/0.8 ≈ 1.25 should get you much closer. This is crude, but you are already using a heuristic.
BTW: You probably should consider using -enumerateSubstringsInRange:options:usingBlock: to achieve sentence granularity instead of arbitrary word splits.
Using “-speechSynthesizer:willSpeakWord:ofString:” causes bigger issues: in my experience it can be out of sync with the position in the file being written by several hundred ms up to several seconds. And speaking up the next word seems to have problems when used with the Nuance voices.

Elm - How to modify the parameterisation of one signal based on another signal

How can I parameterise one signal based on another signal?
e.g. Suppose I wanted to modify the fps based on the x-position of the mouse. The types are:
Mouse.x : Signal Int
fps : number -> Signal Time
How could I make Elm understand something along the lines of this pseudocode:
fps (Mouse.x) : Signal Time
Obviously, lift doesn't work in this case. I think the result would be Signal (Signal Time) (but I'm still quite new to Elm).
Thanks!
Preamble
fps Mouse.x
This results in a type error: fps requires an Int, not a Signal Int.
lift fps Mouse.x : Signal (Signal Time)
You are correct there. As CheatX's answer mentions, you cannot use these "nested signals" in Elm.
Answer to your question
It seems like you're asking for something that doesn't exist yet in the Standard Libraries. If I understand your question correctly you would like a time (or fps) signal of which the timing can be changed dynamically. Something like:
dynamicFps : Signal Int -> Signal Time
Using the built-in functions like lift does not give you the ability to construct such a function yourself from a function of type Int -> Signal Time.
I think you have three options here:
Ask to have this function added to the Time library on the mailing-list. (The feature request instructions are a little bloated for a request of such a function so you can skip stuff that's not applicable)
Work around the problem, either from within Elm or in JavaScript, using Ports to connect to Elm.
Find a way to not need a dynamically changing Time signal.
I advise option 1. Option 3 is sad; you should be able to do what you asked for in Elm. Option 2 is perhaps not a good idea if you're new to Elm. Option 1 is not a lot of work, and the folks on the mailing-list don't bite ;)
To elaborate on option 2, should you want to go for that:
If you specify an outgoing port for Signal Int and an incoming port for Signal Time you can write your own dynamic time function in JavaScript. See http://elm-lang.org/learn/Ports.elm
If you want to do this from within Elm, it'll take an uglier hack:
dynamicFps frames =
  let start = (0,0)
      time = every millisecond -- this strains your program enormously
      input = (,) <~ frames ~ time
      step (frameTime,now) (oldDelta,old) =
        let delta = now - old
        in  if (oldDelta,old) == (0,0)
              then (frameTime,now) -- this is to skip the (0,0) start
              else if delta * frameTime >= second
                then (delta,now)
                else (0,old)
  in  dropIf ((==) 0) 0 <| fst <~ foldp step start input
Basically, you remember an absolute timestamp, ask for the new time as fast as you can, and check whether the time between the remembered timestamp and now is big enough to fit the timeframe you want. If so, you send out that time delta (fps gives time deltas) and remember now as the new timestamp. Because foldp sends out everything it remembers, you get both the new delta and the new time, so with fst <~ you keep only the delta. But the input time signal ticks (most likely) much faster than the timeframe you want, so you also get a lot of (0,old) values out of foldp. That's why there is a dropIf ((==) 0).
Nested signals are explicitly forbidden by Elm's type system [part 3.2 of this paper].
As far as I understand FRP, nested signals are only useful when some kind of flattening is provided (a monadic 'join' function, for example). And that operation is hard to implement without keeping the entire signal history.