Scilab Xcos, insert a difference equation in Xcos diagram - differential-equations

How can I insert a difference equation in Xcos diagram, like:
y(k+1) = y(k) * [a * sqrt(y(k-1))] + b * y(k-1)
?
Thanks, Best Regards
EDIT
After searching the Internet, I think the best way to address my problem is to use the scifunc block.

I would actually do something like the following: implement the recursion with feedback through delay blocks, where the delay blocks are set to the sample time of your model (I assume we are dealing with discrete signals here) and given appropriate initial conditions. Make sure to choose a discrete solver and use the same (appropriate for your problem) sample time for the model and the delay blocks.
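If it helps to sanity-check the diagram, here is a minimal Python sketch of the same recursion (assuming the bracket in the question means multiplication and that y stays non-negative so the square root is defined; the values of a, b and the initial conditions are placeholders):

import math

def simulate(a, b, y0, y1, n_steps):
    # Iterate y(k+1) = y(k) * (a * sqrt(y(k-1))) + b * y(k-1),
    # starting from the initial conditions y(0) = y0 and y(1) = y1.
    y = [y0, y1]
    for k in range(1, n_steps):
        y.append(y[k] * (a * math.sqrt(y[k - 1])) + b * y[k - 1])
    return y

# Example run with placeholder values: a = 0.5, b = 0.1, y(0) = y(1) = 1.0
print(simulate(0.5, 0.1, 1.0, 1.0, 10))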

Related

Procedural generated 2D caves/dungeons for sidescroller like game

I would like to procedurally generate 2D caves. I already tried using some 1D simplex noise to determine the terrain of the floor, which is basically everything you can change in a sidescroller, but it turned out rather unimpressive.
I would like to have an interesting terrain for my cave/dungeon and if possible some alternative paths.
I couldn't come up with any ideas for this kind of terrain and I also could not find any promising ways to do this kind of stuff on the internet.
Assuming you've tried something like abs(simplex) > 0.1 ? nocave : cave, maybe the problem you're running into is that the result is too connected and there are no dead ends anywhere. You could extend this by testing simplex1*simplex1 > threshold(simplex2), where threshold(t) = a*t/(t+b); a roughly controls the maximum threshold (cave opening size) and b roughly controls how much of the underground is caves versus solid rock. The threshold function rushes to zero quickly, which is meant to make dead ends look more rounded and less pointy; working with the squared noise value serves the same purpose.
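A minimal sketch of that test in Python (the noise samples and the values of a and b are placeholders; plug in your own simplex noise mapped to the 0..1 range):

def threshold(t, a=0.35, b=0.25):
    # a roughly controls the maximum opening size, b how much of the
    # underground ends up as cave; the function rushes to zero for small t.
    return a * t / (t + b)

def is_cave(simplex1, simplex2, a=0.35, b=0.25):
    # simplex1 and simplex2 are two independent noise samples in [0, 1]
    # taken at the same (x, y) cell; squaring simplex1 rounds off dead ends.
    return simplex1 * simplex1 > threshold(simplex2, a, b)

# Example with made-up noise values for one cell
print(is_cave(0.7, 0.2))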
Have you tried Perlin Noise? Seems to be the standard way of doing this sort of thing.

Setting initial values for non-linear parameters via tabuSearch

I'm trying to fit the LPPL model to the KLSE index to predict the most probable crash time. Many papers suggest tabuSearch to identify the initial values for the non-linear parameters, but none of them publish their code. I have tried to fit the mentioned index using nls and the Log-Periodic Power Law (LPPL) in R, but the obtained error and p-values are not significant. I believe that the initial values are not accurate. Can anyone help me with how to find proper initial values?
library(tseries)
library(zoo)

## daily closes for the KLSE index
ts <- get.hist.quote(instrument = "^KLSE", start = "2003-04-18", end = "2008-01-30",
                     quote = "Close", provider = "yahoo", origin = "1970-01-01",
                     compression = "d", retclass = "zoo")
df <- data.frame(ts)
df <- data.frame(Date = as.Date(rownames(df)), Y = df$Close)
df <- df[!is.na(df$Y), ]

library(minpack.lm)
library(ggplot2)

df$days <- as.numeric(df$Date - df[1, ]$Date)

## LPPL model: a + (tc - t)^m * (b + c * cos(omega * log(tc - t) + phi))
f <- function(pars, xx) {
  pars$a + (pars$tc - xx)^pars$m * (pars$b + pars$c * cos(pars$omega * log(pars$tc - xx) + pars$phi))
}
resids <- function(p, observed, xx) observed - f(p, xx)

nls.out <- nls.lm(par = list(a = 600, b = -266, tc = 3000, m = .5, omega = 7.8, phi = -4, c = -14),
                  fn = resids, observed = df$Y, xx = df$days,
                  control = nls.lm.control(maxiter = 1024, ftol = 1e-6, maxfev = 1e6))
par <- nls.out$par

nls.final <- nls(Y ~ (a + (tc - days)^m * (b + c * cos(omega * log(tc - days) + phi))),
                 data = df, start = par, algorithm = "plinear",
                 control = nls.control(maxiter = 10024, minFactor = 1e-8))
summary(nls.final)
I would look at some of the newer research on this topic; there is a good trigonometric reformulation of the model that makes the optimization far better behaved. Additionally, you can use R's built-in linear least-squares solver to find the linearizable parameters, so you only need to optimize over three dimensions. The link below should get you started. Citing recent literature and personal experience, I would strongly advise against a tabu search.
https://www.ethz.ch/content/dam/ethz/special-interest/mtec/chair-of-entrepreneurial-risks-dam/documents/dissertation/master%20thesis/MAS_final_Tuncay.pdf
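A rough sketch of that idea in Python (not the thesis author's code): profile out the linear parameters with ordinary least squares, following the standard trigonometric reformulation of the LPPL, and search only over (tc, m, omega). The starting values and the toy data below are placeholders.

import numpy as np
from scipy.optimize import minimize

def lppl_design(t, tc, m, omega):
    # Design matrix for the linear parameters (A, B, C1, C2) given the
    # non-linear parameters (tc, m, omega); assumes tc > max(t).
    dt = tc - t
    f = dt ** m
    lg = omega * np.log(dt)
    return np.column_stack([np.ones_like(t), f, f * np.cos(lg), f * np.sin(lg)])

def sse(nonlin, t, y):
    # Solve for the linear part by least squares and return the residual
    # sum of squares, so only (tc, m, omega) remain to be searched.
    tc, m, omega = nonlin
    if tc <= t.max():            # keep (tc - t) strictly positive
        return np.inf
    X = lppl_design(t, tc, m, omega)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Toy usage with synthetic data; real prices would go in t (days) and y (level)
t = np.arange(0.0, 1000.0)
y = 600 + (3000 - t) ** 0.5 * (-2 + 0.5 * np.cos(7.8 * np.log(3000 - t)))
res = minimize(sse, x0=[1100.0, 0.5, 7.8], args=(t, y), method="Nelder-Mead")
print(res.x)   # estimated (tc, m, omega); the linear part comes back via lstsq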

Maximum Likelihood Estimation of a log function with several parameters

I am trying to find out the parameters for the function below:
$$
\log L(\alpha,\beta,v) = \frac{v}{\beta}\left(e^{-\beta T} - 1\right) + \frac{\alpha}{\beta} \sum_{i=1}^{n}\left(e^{-\beta(T-t_i)} - 1\right) + \sum_{i=1}^{N}\log\left(v\, e^{-\beta t_i} + \alpha \sum_{j=1}^{j_{\max}(t_i)} e^{-\beta(t_i - t_j)}\right).
$$
However, the conventional methods like fmin, fminsearch are not converging properly. Any suggestions on any other methods or open libraries which I can use?
I was trying CVXPY, but they don't support the division by a variable in the expression.
The problem may not be convex (I have not verified this but it could be why CVXPY refused it). We don't have the data so we cannot try things out, but I can give some general advice:
Provide exact gradients (and second derivatives if needed) or use a modeling system with automatic differentiation. The first derivatives especially should be quite precise; with finite differences you may lose half the precision.
Provide a good starting point, perhaps by using an alternative estimation method.
Some solvers can use bounds on the variables to restrict the feasible region where functions will be evaluated. This can be used to restrict the search to interesting areas only and also to protect operations like division and log functions.
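To make the bounds advice concrete, here is a minimal Python sketch that minimizes the negative of the log-likelihood above with scipy's L-BFGS-B, using bounds to keep alpha, beta and v strictly positive (which also protects the division and the log). The event times below are made up, and gradients are left to finite differences in this sketch, although supplying exact ones, as recommended above, would be better.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, T):
    # Negative of the log-likelihood in the question; t holds the event
    # times in [0, T], and params = (alpha, beta, v).
    alpha, beta, v = params
    term1 = v / beta * (np.exp(-beta * T) - 1.0)
    term2 = alpha / beta * np.sum(np.exp(-beta * (T - t)) - 1.0)
    # Intensity at each event: baseline plus contributions from earlier events
    lam = np.empty_like(t)
    for i, ti in enumerate(t):
        lam[i] = v * np.exp(-beta * ti) + alpha * np.sum(np.exp(-beta * (ti - t[:i])))
    term3 = np.sum(np.log(lam))
    return -(term1 + term2 + term3)

# Toy event times; replace with the real data
t = np.sort(np.random.default_rng(0).uniform(0.0, 10.0, size=50))
T = 10.0

res = minimize(neg_log_lik, x0=[0.5, 1.0, 1.0], args=(t, T),
               method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, None), (1e-6, None)])
print(res.x, res.fun)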

How can I compare two NSImages for differences?

I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm
Mac-specific direct pixel access: http://www.markj.net/iphone-uiimage-pixel-color/
Does anyone have any advice about what solutions make most sense for the above two cases and have code samples to show how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale, so you compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
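A minimal Python sketch of that grid-of-histograms comparison (Pillow and NumPy are assumed, the grid size and bin count are placeholders, and the images are assumed to have the same dimensions):

import numpy as np
from PIL import Image  # Pillow

def grid_histogram_difference(path_a, path_b, n=4, bins=32):
    # Grayscale both images, split each into an n x n grid, and return the
    # largest normalised histogram difference over all subimage pairs (0..1).
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.float64)
    h, w = a.shape
    worst = 0.0
    for i in range(n):
        for j in range(n):
            rows = slice(i * h // n, (i + 1) * h // n)
            cols = slice(j * w // n, (j + 1) * w // n)
            ha = np.histogram(a[rows, cols], bins=bins, range=(0, 255))[0]
            hb = np.histogram(b[rows, cols], bins=bins, range=(0, 255))[0]
            ha = ha / ha.sum()
            hb = hb / hb.sum()
            worst = max(worst, 0.5 * np.abs(ha - hb).sum())  # in [0, 1]
    return worst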
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations. Are they shapes? Are they different objects, and you want to know what class of object each one is? Are they the same object, and you want to distinguish between instances of it? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all, it really depends. Given what you've said, if it is only about ease of coding and performance, I would suggest just taking the absolute value of the difference between the pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
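For what it's worth, the absolute pixel difference mentioned above could look like this in Python (Pillow and NumPy assumed; same-sized images assumed):

import numpy as np
from PIL import Image  # Pillow

def mean_abs_difference(path_a, path_b):
    # Mean absolute per-pixel difference, as a percentage of full scale.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    return 100.0 * np.abs(a - b).mean() / 255.0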
You can use the fairly basic subtraction technique that the lads above suggested. @carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. A second image, taken some (simulation) time later, t + dt, was subtracted from the first. The subtracted image (in black and white for clarity) then shows how the simulation has changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way.
This is some old nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am doing it on a two-colour palette, you would do this operation for R, G and B: compute the intensities or values in each cell/pixel and store them in an array. Do the same for the other image, and subtract one array from the other; this will leave you with a colourful subtraction image. My advice would be to do as the lads suggest above and compute the magnitude of the sum of the R, G and B components so you just get one value. Write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array to it; this will give a much clearer picture as a result.
* =============================================================
SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine subtracts one image from another
* =============================================================
* Common :
INCLUDE 'CONST.CMN'
INCLUDE 'IO.CMN'
INCLUDE 'SYNCH.CMN'
INCLUDE 'PGP.CMN'
* Input :
CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
integer IOS
* Variables:
logical glue
character fullname*(szlin)
character dir*(szlin),ftype*(3)
integer i,j,nxy1,nxy2
real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
IOS = 1
nomap=.true.
ftype='map'
dir='./pictures'
! reading first image
if(.not.glue(dir,fname2,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy2
read(unit2,err=11)rad,dxy
do i=1,nxy2
do j=1,nxy2
read(unit2,err=11)si2(i,j)
enddo
enddo
CLOSE(unit2)
! reading second image
if(.not.glue(dir,fname1,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy1
read(unit2,err=11)rad,dxy
do i=1,nxy1
do j=1,nxy1
read(unit2,err=11)si1(i,j)
enddo
enddo
CLOSE(unit2)
! subtracting images
if(nxy1.eq.nxy2)then
nxy=nxy1
do i=1,nxy1
do j=1,nxy1
si(i,j)=si2(i,j)-si1(i,j)
enddo
enddo
else
print *,'SUBTRACT: Different sizes of image arrays'
IOS=0
return
endif
* normal finishing
IOS=0
nomap=.false.
return
* exceptional finishing
10 write (*,30) fullname
return
11 write (*,32) fullname
return
30 format('Cannot open file ',72A)
31 format('Improper filename ',72A)
32 format('Error reading from file ',72A)
end
! =============================================================
Hope this is of some use. All the best.
Of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, key point matching will provide far more accurate results if you want a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image
Add up the differences between corresponding buckets
If the total difference is above a certain threshold, then the percentage is 0%
Otherwise the colours found in the images are similar, so do a pixel-by-pixel comparison and convert the difference into a percentage.
I don't know of any precise algorithms for finding the key points of an image. However, once you have found them for each image, you can do a pixel-by-pixel comparison for each of the key points.
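A rough Python sketch of that two-stage check (Pillow and NumPy assumed; the threshold and bin count are placeholder values, and both images are assumed to have the same dimensions):

import numpy as np
from PIL import Image  # Pillow

def percentage_similarity(path_a, path_b, hist_threshold=0.5, bins=64):
    # Stage 1: compare per-channel colour histograms; if they differ too much,
    # report 0%. Stage 2: otherwise compare pixel by pixel and report a percentage.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    hist_diff = 0.0
    for ch in range(3):
        ha = np.histogram(a[..., ch], bins=bins, range=(0, 255))[0]
        hb = np.histogram(b[..., ch], bins=bins, range=(0, 255))[0]
        hist_diff += 0.5 * np.abs(ha / ha.sum() - hb / hb.sum()).sum()
    if hist_diff / 3.0 > hist_threshold:
        return 0.0
    return 100.0 * (1.0 - np.abs(a - b).mean() / 255.0)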

Process of going from problem to code: how did you learn?

I'm teaching/helping a student to program.
I remember the following process always helped me when I started; it seems pretty intuitive and I wonder if anyone else has had a similar approach.
Read the problem and understand it (of course).
Identify possible "functions" and variables.
Write down how I would do it step by step (the algorithm).
Translate it into code; if there is something you cannot do, create a function that does it for you and keep moving.
With time and practice I seem to have forgotten how hard it was to go from a problem description to a coding solution, but by applying this method I managed to learn how to program.
So for a project description like:
A system has to calculate the price of an Item based on the following rules ( a description of the rules... client, discounts, availability etc.. etc.etc. )
My first step is to understand what the problem is.
Then identify the item, the rules, the variables, etc.
Then pseudo-code something like:
function getPrice( itemPrice, quantity, clientAge, hourOfDay ) : int
    discount = 0
    if( hourOfDay > 18 ) then
        discount += 5%
    if( quantity > 10 ) then
        discount += 5%
    if( clientAge > 60 or clientAge < 18 ) then
        discount += 5%
    return itemPrice - discount
end
And then pass it to the programming language..
public class Problem1{
public int getPrice( int itemPrice, int quantity, int hourOfDay ) {
int discount = 0;
if( hourOfDay > 18 ) {
// uh uh.. U don't know how to calculate percentage...
// create a function and move on.
discount += percentOf( 5, itemPrice );
.
.
.
you get the idea..
}
}
public int percentOf( int percent, int i ) {
// ....
}
}
Did you take a similar approach? Did someone teach you a similar approach, or did you discover it yourself (as I did)?
I go via the test-driven approach.
1. I write down (on paper or in a plain text editor) a list of tests or specifications that would satisfy the needs of the problem.
- simple calculations (no discounts and concessions) with:
- single item
- two items
- maximum number of items that doesn't have a discount
- calculate for discounts based on number of items
- buying 10 items gives you a 5% discount
- buying 15 items gives you a 7% discount
- etc.
- calculate based on hourly rates
- calculate morning rates
- calculate afternoon rates
- calculate evening rates
- calculate midnight rates
- calculate based on buyer's age
- children
- adults
- seniors
- calculate based on combinations
- buying 10 items in the afternoon
2. Look for the item that I think would be the easiest to implement and write a test for it. E.g. single items look easy.
The sample below uses NUnit and C#.
[Test] public void SingleItems()
{
Assert.AreEqual(5, GetPrice(5, 1));
}
Implement that using:
public decimal GetPrice(decimal amount, int quantity)
{
return amount * quantity; // easy!
}
Then move on to the two items.
[Test]
public void TwoItems()
{
Assert.AreEqual(10, GetPrice(5, 2));
}
The implementation still passes the test so move on to the next test.
3. Be always on the lookout for duplication and remove it. You are done when all the tests pass and you can no longer think of any test.
This doesn't guarantee that you will create the most efficient algorithm, but as long as you know what to test for and it all passes, it will guarantee that you are getting the right answers.
the old-school OO way:
write down a description of the problem and its solution
circle the nouns, these are candidate objects
draw boxes around the verbs, these are candidate messages
group the verbs with the nouns that would 'do' the action; list any other nouns that would be required to help
see if you can restate the solution using the form noun.verb(other nouns)
code it
[this method precedes CRC cards, but it's been so long (over 20 years) that I don't remember where I learned it]
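As a rough illustration of that noun/verb pass applied to the pricing problem earlier in the thread, sketched in Python (all class names and discount rules here are made up for the example):

# Nouns from the problem statement become candidate classes; verbs become methods.
class Client:
    def __init__(self, age: int):
        self.age = age

class Item:
    def __init__(self, price: float):
        self.price = price

class Order:
    # The order "calculates" a price for an item bought by a client at some hour,
    # which gives the noun.verb(other nouns) form: order.price_for(item, client, ...)
    def price_for(self, item: Item, client: Client, quantity: int, hour_of_day: int) -> float:
        discount = 0.0
        if hour_of_day > 18:
            discount += 0.05
        if quantity > 10:
            discount += 0.05
        if client.age > 60 or client.age < 18:
            discount += 0.05
        return item.price * quantity * (1 - discount)

print(Order().price_for(Item(10.0), Client(65), quantity=12, hour_of_day=20))  # 102.0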
When learning programming, I don't think TDD is helpful. TDD is good later on, when you have some concept of what programming is about, but for starters, having an environment where you write code and see the results with the quickest possible turnaround time is the most important thing.
I'd go from problem statement to code instantly. Hack it around. Help the student see different ways of composing software and structuring algorithms. Teach the student to change their mind and rework the code. Try to teach a little bit about code aesthetics.
Once they can hack around code, then introduce the idea of formal restructuring in terms of refactoring. Then introduce the idea of TDD as a way to make the process a bit more robust, but only once they are feeling comfortable manipulating code to do what they want. Being able to specify tests is somewhat easier at that stage. The reason is that TDD is about design. When learning, you don't really care so much about design as about what you can do, what toys you have to play with, how they work, and how you combine them. Once you have a sense of that, then you want to think about design, and that's when TDD really kicks in.
From there I'd start introducing micro patterns, leading into design patterns.
I did something similar.
Figure out the rules/logic.
Figure out the math.
Then try and code it.
After doing that for a couple of months it just gets internalized. You don't realize you're doing it until you come up against a complex problem that requires you to break it down.
I start at the top and work my way down. Basically, I'll start by writing a high level procedure, sketch out the details inside of it, and then start filling in the details.
Say I had this problem (yoinked from Project Euler):
The sum of the squares of the first ten natural numbers is 1^2 + 2^2 + ... + 10^2 = 385.
The square of the sum of the first ten natural numbers is (1 + 2 + ... + 10)^2 = 55^2 = 3025.
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
So I start like this:
(display (- (sum-of-squares (list-to 10))
            (square-of-sums (list-to 10))))
Now, in Scheme, there is no sum-of-squares, square-of-sums or list-to functions. So the next step would be to build each of those. In building each of those functions, I may find I need to abstract out more. I try to keep things simple so that each function only really does one thing. When I build some piece of functionality that is testable, I write a unit test for it. When I start noticing a logical grouping for some data, and the functions that act on them, I may push it into an object.
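The same top-down move, sketched in Python for illustration (the helper names mirror the Scheme ones above; inventing them first and filling them in afterwards is the whole point):

def list_to(n):
    # The natural numbers 1..n.
    return list(range(1, n + 1))

def sum_of_squares(numbers):
    return sum(x * x for x in numbers)

def square_of_sums(numbers):
    return sum(numbers) ** 2

print(square_of_sums(list_to(10)) - sum_of_squares(list_to(10)))  # 2640, as in the worked example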
I've enjoyed TDD ever since it was introduced to me. It helps me plan out my code, and it puts me at ease to have all my tests return "success" every time I modify my code, letting me know I'm going home on time today!
Wishful thinking is probably the most important tool to solve complex problems. When in doubt, assume that a function exists to solve your problem (create a stub, at first). You'll come back to it later to expand it.
A good book for beginners looking for a process: Test Driven Development: By Example
My dad had a bunch of flow chart stencils that he made me use when he first taught me about programming. To this day I draw squares and diamonds to build out a logical process for how to analyze a problem.
I think there are about a dozen different heuristics I know of when it comes to programming and so I tend to go through the list at times with what I'm trying to do. At the start, it is important to know what is the desired end result and then try to work backwards to find it.
I remember an Algorithms class covering some of these ways like:
Reduce it to a known problem or trivial problem
Divide and conquer (MergeSort being a classic example here)
Use Data Structures that have the right functions (HeapSort being an example here)
Recursion (Knowing trivial solutions and being able to reduce to those)
Dynamic programming
Organizing a solution as well as testing it for odd situations, e.g. if someone thinks L should be a number, are what I'd usually use to test out the idea in pseudo code before writing it up.
Design patterns can be a handy set of tools to use for specific cases like where an Adapter is needed or organizing things into a state or strategy solution.
Yes... well, TDD didn't exist (or was not that popular) when I began. Would TDD be the way to go to get from problem description to code? Isn't that a little bit advanced? I mean, when a "future" developer hardly understands what a programming language is, wouldn't it be counterproductive?
What about Hamcrest, to make the transition from algorithm to code?
I think there's a better way to state your problem.
Instead of defining it as 'a system,' define what is expected in terms of user inputs and outputs.
"On a window, a user should select an item from a list, and a box should show him how much it costs."
Then, you can give him some of the factors determining the costs, including sample items and what their costs should end up being.
(this is also very much a TDD-like idea)
Keep in mind, if you get 5% off and then another 5% off, you don't get 10% off. Rather, you pay 95% of 95%, which is 90.25%, or 9.75% off. So you shouldn't simply add the percentages.
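A two-line check of that arithmetic in Python:

price = 100.0
print(price * 0.95 * 0.95)        # 90.25 -> compounding gives an effective 9.75% off
print(price * (1 - 0.05 - 0.05))  # 90.00 -> what naively adding the discounts would give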