Cyclic executive scheduling - embedded

The figure shows a model of a real-time application. The system periodically consumes the inputs X and Y and produces the output Z. The inputs X and Y are not updated simultaneously; input Y is delayed 500 ms relative to input X. The subsystems A, B and C also have different execution times (eA ≈ 100 ms, eB ≈ 200 ms, eC ≈ 200 ms).
However, I don't understand the intended functionality of the application.
So far I have understood that the subtasks A, B and C have the following timing parameters, given as (start time, execution time) pairs:
A(0,100)
B(600, 200)
C(800,200)
But I don't understand how to sketch the schedule table for this, or how to handle the execution times and periods of the inputs X and Y and of the subsystems A, B and C.
Any hint would be great, thanks.
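As a starting point, here is a minimal sketch in Python that tabulates the cyclic-executive timeline from the (start time, execution time) pairs above. The 1000 ms major cycle is an assumption (it is not stated in the question), so treat the frame layout as illustrative only:

# Sketch of a cyclic-executive schedule table.
# Assumption: a 1000 ms major cycle; tasks are (name, start offset ms, exec ms).
# Per the question, input X would be read at the start of each cycle
# and input Y 500 ms later.
TASKS = [("A", 0, 100), ("B", 600, 200), ("C", 800, 200)]
MAJOR_CYCLE = 1000  # hypothetical major cycle length in ms

def timeline(cycles=2):
    rows = []
    for c in range(cycles):
        base = c * MAJOR_CYCLE
        for name, start, duration in TASKS:
            rows.append((base + start, base + start + duration, name))
    return rows

print("start - end   task")
for start, end, name in timeline():
    print(f"{start:5d} - {end:5d} {name}")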

Dependence of data transfer latency on number of target nodes

Suppose some fixed size data is to be transferred between machines. Let us consider two scenarios:
1 target: Data is transferred from machine A to machine B. Suppose this takes x seconds.
2 targets: Data is transferred from machine A to machine B and C. Suppose this takes y seconds.
What can be said about the relationship between x and y? I think y should usually be greater than x. Am I right? Is there any work/paper/blog that gives such an insight?
I also think the relationship between x and y should depend on the topology of the network connecting the machines.
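To make the topology point concrete, here is a toy calculation (assumptions: a fixed data size S, identical per-link bandwidth B, no protocol overhead or pipelining; the numbers are hypothetical):

# Toy model: transfer time for 1 vs 2 targets under two topologies.
S = 1.0   # data size in GB (hypothetical)
B = 0.5   # bandwidth per link in GB/s (hypothetical)

x = S / B                # 1 target: A -> B over one link
y_shared = 2 * S / B     # 2 targets behind A's single shared uplink: y = 2x
y_parallel = S / B       # 2 targets on independent links, sent concurrently: y = x

print(f"x = {x:.1f}s, y(shared uplink) = {y_shared:.1f}s, y(parallel) = {y_parallel:.1f}s")

So y >= x in both cases here, but how much bigger depends on whether A's outgoing link is shared; multicast or chained transfers (A -> B -> C) change the picture again.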

Linear programming for wait time optimization

I am trying to solve a problem using the simplex method. Although this is a mathematical problem, I need to solve it using a programming language. I am stuck at the very first phase: dealing with the modulus operations while setting up the system Ax = B that is used to solve the problem in the general simplex case.
Route   Departure   Runtime   Arrival         Wait time
A-B     x           4         MOD(x+4, 24)    MOD(y - MOD(x+4, 24), 24)
B-C     y           6         MOD(y+6, 24)    MOD(z - MOD(y+6, 24), 24)
C-D     z           8         MOD(z+8, 24)    MOD(8 - MOD(z+8, 24), 24)
The objective is to minimize the total wait time
subject to constraints
0 <= x, y, z <= 24
Simplex is not specifically required, any method may be used.
edit -
This is part of a much bigger problem, so just assuming z = 0 and starting from there won't help; I need to solve the entire thing. I want to know how to deal with the modulus.
The expression
y = mod(x,24)
is not linear, so we cannot use it in a continuous LP (Linear Programming) model. However, it can be modeled in a Mixed Integer Program as
x = k*24 + y
k : integer variable
0 <= y <= 23.999
You'll need a MIP solver for this.
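As a concrete illustration, here is a minimal sketch in Python using the PuLP library (the choice of solver library is an assumption; any MIP solver works). It relies on the identity mod(y - mod(x+4, 24), 24) = mod(y - (x+4), 24), so each wait-time term needs only one free integer "wrap count" variable:

import pulp

prob = pulp.LpProblem("wait_time", pulp.LpMinimize)

x = pulp.LpVariable("x", lowBound=0, upBound=24)  # departure A-B
y = pulp.LpVariable("y", lowBound=0, upBound=24)  # departure B-C
z = pulp.LpVariable("z", lowBound=0, upBound=24)  # departure C-D

# One free integer wrap count and one bounded wait variable per modulo term.
k1 = pulp.LpVariable("k1", cat="Integer")
k2 = pulp.LpVariable("k2", cat="Integer")
k3 = pulp.LpVariable("k3", cat="Integer")
wAB = pulp.LpVariable("wAB", lowBound=0, upBound=23.999)
wBC = pulp.LpVariable("wBC", lowBound=0, upBound=23.999)
wCD = pulp.LpVariable("wCD", lowBound=0, upBound=23.999)

prob += wAB + wBC + wCD                  # objective: total wait time

prob += y - (x + 4) == 24 * k1 + wAB     # wAB = mod(y - (x+4), 24)
prob += z - (y + 6) == 24 * k2 + wBC     # wBC = mod(z - (y+6), 24)
prob += 8 - (z + 8) == 24 * k3 + wCD     # wCD = mod(8 - (z+8), 24)

prob.solve()
print(pulp.value(x), pulp.value(y), pulp.value(z), pulp.value(prob.objective))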

Running a logistic model in JAGS - Can you vectorize instead of looping over individual cases?

I'm fairly new to JAGS, so this may be a dumb question. I'm trying to run a model in JAGS that predicts the probability that a one-dimensional random walk process will cross boundary A before crossing boundary B. This model can be solved analytically via the following logistic model:
Pr(A,B) = 1/(1 + exp(-2 * (d/sigma) * theta))
where "d" is the mean drift rate (positive values indicate drift toward boundary A), "sigma" is the standard deviation of that drift rate and "theta" is the distance between the starting point and the boundary (assumed to be equal for both boundaries).
My dataset consists of 50 participants, who each provide 1800 observations. My model assumes that d is determined by a particular combination of observed environmental variables (which I'll just call 'x'), and a weighting coefficient that relates x to d (which I'll call 'beta'). Thus, there are three parameters: beta, sigma, and theta. I'd like to estimate a single set of parameters for each participant. My intention is to eventually run a hierarchical model, where group level parameters influence individual level parameters. However, for simplicity, here I will just consider a model in which I estimate a single set of parameters for one participant (and thus the model is not hierarchical).
My model in rjags would be as follows:
model {
  for (i in 1:Ntotal) {
    d[i] <- x[i] * beta
    probA[i] <- 1 / (1 + exp(-2 * (d[i] / sigma) * theta))
    y[i] ~ dbern(probA[i])
  }
  beta ~ dunif(-10, 10)
  sigma ~ dunif(0, 10)
  theta ~ dunif(0, 10)
}
This model runs fine, but takes ages to run. I'm not sure how JAGS executes the code, but if this code were run in R, it would be rather inefficient because it would have to loop over cases, running the model for each case individually. The time required to run the analysis therefore increases rapidly with sample size, and I have a rather large sample, so this is a concern.
Is there a way to vectorise this code so that it calculates the likelihood for all of the data points at once? For example, if I were to run this as a simple maximum likelihood model, I would vectorize the model and calculate the probability of the data given particular parameter values for all 1800 cases provided by the participant (and thus would not need the for loop). I would then take the log of these likelihoods and add them together to give a single log-likelihood for all observations given by the participant. This method has enormous time savings. Is there a way to do this in JAGS?
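For reference, the vectorised maximum-likelihood computation described above might look like this in Python/NumPy (a sketch; x, y, and the parameter names follow the question, and the data here are simulated only to make the snippet runnable):

import numpy as np

def log_likelihood(beta, sigma, theta, x, y):
    d = x * beta                                    # drift for all trials at once
    probA = 1.0 / (1.0 + np.exp(-2.0 * (d / sigma) * theta))
    # Sum of Bernoulli log-likelihoods over all observations.
    return np.sum(y * np.log(probA) + (1 - y) * np.log(1 - probA))

# Example with simulated data for one participant:
rng = np.random.default_rng(0)
x = rng.normal(size=1800)
y = rng.integers(0, 2, size=1800)
print(log_likelihood(beta=0.5, sigma=1.0, theta=2.0, x=x, y=y))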
EDIT
Thanks for the responses, and for pointing out that the parameters in the model I showed might be unidentifiable. I should've pointed out that the model above was a simplified version. The full model is below:
model {
  for (i in 1:Ntotal) {
    aExpectancy[i] <- 1 / (1 + exp(-gamma * (aTimeRemaining[i] - aDiscrepancy[i] * aExpectedLag[i])))
    bExpectancy[i] <- 1 / (1 + exp(-gamma * (bTimeRemaining[i] - bDiscrepancy[i] * bExpectedLag[i])))
    aUtility[i] <- aValence[i] * aExpectancy[i] / (1 + discount * aTimeRemaining[i])
    bUtility[i] <- bValence[i] * bExpectancy[i] / (1 + discount * bTimeRemaining[i])
    aMotivationalValueMean[i] <- aUtility[i] * aQualityMean[i]
    bMotivationalValueMean[i] <- bUtility[i] * bQualityMean[i]
    aMotivationalValueVariance[i] <- (aUtility[i] * aQualitySD[i])^2 + (bUtility[i] * bQualitySD[i])^2
    bMotivationalValueVariance[i] <- (aUtility[i] * aQualitySD[i])^2 + (bUtility[i] * bQualitySD[i])^2
    mvDiffVariance[i] <- aMotivationalValueVariance[i] + bMotivationalValueVariance[i]
    meanDrift[i] <- aMotivationalValueMean[i] - bMotivationalValueMean[i]
    probA[i] <- 1 / (1 + exp(-2 * (meanDrift[i] / sqrt(mvDiffVariance[i])) * theta))
    y[i] ~ dbern(probA[i])
  }
  theta ~ dunif(0, 10)
  discount ~ dunif(0, 10)
  gamma ~ dunif(0, 1)
}
In this model, the estimated parameters are theta, discount, and gamma, and these parameters can be recovered. When I run the model on the observations for a single participant (Ntotal = 1800), it takes about 5 minutes, which is totally fine. However, when I run it on the entire sample (45 participants x 1800 cases each = 81,000 observations), I've had it running for 24 hours and it's less than 50% of the way through. This seems odd, as I would expect it to take roughly 45 times as long, so 4 or 5 hours at most. Am I missing something?
I hope I am not misreading this situation (and I apologize in advance if I am), but your question seems to come from a conceptual misunderstanding of how JAGS works (or WinBUGS or OpenBUGS, for that matter).
Your program does not actually run, because what you wrote is not written in a programming language, so vectorizing will not help.
What you wrote is just a description of your model, because JAGS' language is a declarative one.
Once JAGS reads your model, it assembles a transition matrix to run an MCMC whose stationary distribution is the posterior distribution of your parameters given your (observed) data. JAGS does nothing else with your program.
All the time you have been waiting for the program to run was actually spent waiting (and hoping) to reach the relaxation time of your MCMC.
So what is making your run take so long is that the resulting transition matrix must have poor relaxation properties, or something like that.
That is why vectorizing a program that is read only once will be of very little help.
So your problem lies somewhere else.
I hope this helps and, if not, sorry.
All the best.
You can't vectorise in the same way that you would in R, but if you can group observations that share the same probability expression (i.e. a common d[i]), then you can use a Binomial rather than a Bernoulli distribution, which will help enormously (see the sketch below). If each observation has a unique d[i] then you are stuck, I'm afraid.
Another alternative is to look at Stan, which is generally faster for large data sets like yours.
Matt
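To illustrate the Binomial grouping Matt suggests: if the covariate driving d[i] takes only a few distinct values, the Bernoulli trials can be collapsed into counts before the data are handed to JAGS. A Python preprocessing sketch (the variable names and simulated data are hypothetical):

import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=1800)   # discrete covariate -> repeated d[i]
y = rng.integers(0, 2, size=1800)   # Bernoulli outcomes

levels = np.unique(x)
n = np.array([np.sum(x == v) for v in levels])     # trials per level
s = np.array([np.sum(y[x == v]) for v in levels])  # successes per level

# Pass levels, n and s to JAGS and replace the 1800 dbern() nodes with
#   s[j] ~ dbin(probA[j], n[j])
# so the model loops over a handful of levels instead of every trial.
print(levels, n, s)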

Big Theta, Big O, Big Omega for a given function

Consider the function F(n) = 2^(3*n) + n^2.
Can the function A(n) = 2^(3*n) be used as a Big Theta, Big Omega, or Big O characterisation of F? Why?
I'm revising the concepts of Big Omega, Big Theta and Big O, and I came across this example but don't know where to start.
No.
2^(3*n) is the leading term, but unless you're doing something very wrong it's not going to take you that long to compute. Wikipedia has a good list of time complexities of various functions. The most complicated operation you're doing is raising to a power, the complexity of which is discussed in other posts.
Since you're looking at a function of the form g(f(x)), where g(x) = 2^x and f(x) = 3x, the time to compute is going to be O(h) + O(k), where h is the time complexity of g and k is the time complexity of f. Think about it: it should never be more than this; if it were, you could just break the operation in two and save time by doing the parts separately. Because h dominates this sum, you'll typically leave the k term off.
Yes, all three. In general, you only need to pay attention to the fastest growing term, as slower growing terms will be "swamped" by faster growing terms.
In detail:
Obviously F grows faster than A, so F ∈ Ω(A) is a no-brainer: there is a positive multiple of A (namely A itself) that is smaller than F for all sufficiently large n.
Try plotting F against 2*A. You will find that 2*A quickly gets bigger than F and stays bigger. Thus there is a positive multiple of A (namely 2*A) that is bigger than F for all sufficiently large arguments, so by the definition of O, F ∈ O(A).
Finally, since F ∈ Ω(A) and F ∈ O(A), we have F ∈ Θ(A).
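A quick numeric sanity check of the two bounds used above, in Python:

# Check A <= F <= 2*A for F = 2^(3n) + n^2 and A = 2^(3n).
# 2*A - F = 2^(3n) - n^2 > 0 for every n >= 1, so both bounds hold there.
for n in range(1, 11):
    F = 2 ** (3 * n) + n ** 2
    A = 2 ** (3 * n)
    assert A <= F <= 2 * A   # consistent with F ∈ Θ(A)
print("bounds hold for n = 1..10")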

Efficiency of the modulo operator for bounds checking

I was reading this page on operation performance in .NET and saw that there's a really huge difference between the division operation and the rest.
So the modulo operator is slow, but how slow compared with the cost of a conditional block we could use for the same purpose?
Let's assume we have a positive number y that can't be >= 20. Which one is more efficient as a general rule (not only in .NET)?
This:
x = y % 10
or this:
x = y
if (x >= 10)
{
x -= 10
}
How many times are you calling the modulo operation? If it's in some tight inner loop that's being called many times a second, maybe you should look at other ways of preventing array overflow. If it's being called fewer than (say) 10,000 times, I wouldn't worry about it.
As for the performance of your two snippets - test them (with some real-world data if possible). You don't know what the compiler/JIT and the CPU are doing under the hood. The % could be optimized to an & if the second argument is constant and a power of 2. At the CPU level you're talking about the difference between division and branch prediction, which is going to depend on the rest of your code.
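As a rough template for such a test, here is a micro-benchmark sketch (in Python, so the absolute numbers say nothing about what .NET's JIT would produce; the value of 17 for y is an arbitrary assumption satisfying y < 20):

import timeit

setup = "y = 17"  # hypothetical input with y < 20, as in the question
mod_version = "x = y % 10"
cond_version = "x = y\nif x >= 10:\n    x -= 10"

print("modulo:     ", timeit.timeit(mod_version, setup=setup, number=1_000_000))
print("conditional:", timeit.timeit(cond_version, setup=setup, number=1_000_000))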