I want to add a new custom TA indicator to a stock chart in R, but I have no idea how to convert my SQL-style conditional strategy into a self-defined R function and add it to the chartSeries. The questions are explained as comments in the following code.
library("quantmod")
library("FinancialInstrument")
library("PerformanceAnalytics")
library("TTR")
stock <- getSymbols("002457.SZ",auto.assign=FALSE,from="2012-11-26",to="2014-01-30")
head(stock)
chartSeries(stock, theme = "white", subset = "2013-07-01/2014-01-30",
            TA = "addSMA(n=5,col=\"gray\");addSMA(n=10,col=\"yellow\");addSMA(n=20,col=\"pink\");addSMA(n=30,col=\"green\");addSMA(n=60,col=\"blue\");addVo()")
Question: How can I rewrite the code below to make it available as a function in R?
#Signal Design
#Today's volume is the lowest during the last 20 trading days
lowvolume <- VOL<=LLV(VOL,20);
#several moving average lines stick together
X1:=ABS(MA(C,10)/MA(C,20)-1)<0.01;
X2:=ABS(MA(C,5)/MA(C,10)-1)<0.01;
X3:=ABS(MA(C,5)/MA(C,20)-1)<0.01;
#If the following condition is satisfied, then the signal appears
MA(C,5)>REF(MA(C,5),1) AND X1 AND X2 AND X3 AND lowvolume;
#Convert the above SQL code into the following R custom function
VOLINE <- function(x) {
}
#Create a new TA function for the chartseries and then add it up.
addVoline <- newTA(FUN=VOLINE,
                   preFUN=Cl,
                   col=c(rep(3,6),
                         rep("#333333",6)),
                   legend="VOLINE")
I don't think you need SQL in this case. Try this:
require(quantmod)
# fetch the data
s <- getSymbols('yhoo', auto.assign = FALSE)
# add the indicators
s$ma5 <- SMA(Cl(s) ,5)
s$ma10 <- SMA(Cl(s) ,10)
s$ma20 <- SMA(Cl(s) ,20)
s$llv <- rollapply(Vo(s), 20, min, align = "right") # trailing 20-day lowest volume, like LLV(VOL,20)
# generate the signal
s$signal <- (abs(s$ma10 / s$ma20 - 1) < 0.01 &
             abs(s$ma5 / s$ma10 - 1) < 0.01 &
             abs(s$ma5 / s$ma20 - 1) < 0.01 &
             Vo(s) == s$llv)
# draw
chart_Series(s)
add_TA(s$signal == 1, on = 1, col='red')
I'm not sure what REF means, but I'm sure you can work that out yourself.
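If REF(x, 1) simply means the previous bar's value (as it does in most chart-formula languages), xts's lag() does exactly that shift. A minimal sketch to fold the remaining MA(C,5) > REF(MA(C,5),1) condition into the signal before drawing:
# today's 5-day average is above yesterday's
s$ma5_up <- s$ma5 > lag(s$ma5, 1)
s$signal <- s$signal & s$ma5_up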
This is the output (I can't seem to upload the photo, but you would see a chart with horizontal lines where the signal equals 1).
Write your function as a wrapper around sqldf() from the sqldf package. The argument to sqldf() will be a SELECT statement over the data frame that holds the data.
A good tutorial for this can be found at Burns Statistics.
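For example, a minimal sketch of that pattern (df and its columns close and vol are made-up names for illustration):
library(sqldf)
df <- data.frame(close = c(10, 11, 12), vol = c(300, 100, 200))
# rows whose volume equals the minimum volume in the frame
sqldf("SELECT close, vol FROM df WHERE vol <= (SELECT MIN(vol) FROM df)")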
I need to analyse data from a very large dataset. For that, I need to separate a character variable into more than a thousand columns.
The structure of this variable is :
number$number$number$ and so on for a thousand numbers
My data is stored in a .db file from SQLite. I then imported it into R using the package "RSQLite".
I tried splitting this column into multiple columns using tidyr's separate():
#d is a data.table with my data
d2=d %>% separate(column_to_separate, paste0("S",c(1:number_of_final_columns)))
It works, but it is also taking forever. Does someone have a solution to split this column faster (either in R or using SQLite)?
Thanks.
You may use the tidyfast package (see here), which builds on data.table. In this test, it is approximately three times faster:
test <- data.frame(
long.var = rep(paste0("V", 1:1000, "$", collapse = ""), 1000)
)
system.time({
test |>
tidyr::separate(long.var, into = paste0("N", 1:1001), sep="\\$")
})
#> user system elapsed
#> 0.352 0.012 0.365
system.time({
test |>
tidyfast::dt_separate(long.var, into = paste0("N", 1:1001), sep="\\$")
})
#> user system elapsed
#> 0.117 0.000 0.118
Created on 2023-02-03 with reprex v2.0.2
You can try writing the column to a file as-is and then loading it back with fread, which is in general rather fast.
library(data.table)
library(dplyr)
library(tidyr)
# Prepare example
x <- matrix(rnorm(1000*10000), ncol = 1000)
dta <- data.frame(value = apply(x, 1, function(x) paste0(x, collapse = "$")))
# Run benchmark
microbenchmark::microbenchmark({
dta_2 <- dta %>%
separate(col = value, sep = "\\$", into = paste0("col_", 1:1000))
},
{
tmp_file <- tempfile()
fwrite(dta, tmp_file, col.names = FALSE) # no header row, so fread sees data only
dta_3 <- fread(tmp_file, sep = "$", header = FALSE)
}, times = 3
)
Edit: I tested the speed and it seems faster than dt_separate from tidyfast, but it depends on the size of your dataset.
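Another data.table route worth trying (a sketch on the same dta as above, not benchmarked here) is tstrsplit(), which splits a character column into many columns in one pass:
library(data.table)
setDT(dta)
# columns come back as character; add type.convert = TRUE to convert to numeric
dta[, paste0("col_", 1:1000) := tstrsplit(value, "$", fixed = TRUE)]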
I've written an R chunk which should give me a coloured ggplot boxplot. All needed packages are loaded, and so is the data.
The data for "Healthy" & "BodyTemperature" is contained in the data set "Hospital".
Healthy can only be 0 or 1.
It should plot two boxplots next to each other on the x-axis, one showing Healthy (0) and the other Unhealthy (1), compared to the BodyTemperature of the patients on the y-axis.
The boxplots should be coloured with the "Brewer" palette.
Every time I try to run this chunk, a warning occurs. What's the solution?
colour:
colour <- brewer.pal(n = 2, name = "Set1")
colour
Warning: minimal value for n is 3, returning requested palette with 3 different levels
[1] "#E41A1C" "#377EB8" "#4DAF4A"
R-Chunk
colour = brewer.pal(n = 2, name = "Set1")
ggplot(Hospital, aes(x = Healthy, y = BodyTemperature)) +
geom_boxplot(fill=c(colour)) +
ylab("Temperature") +
xlab("Healthy") +
ggtitle("Health compared to Temperature")
The error that occurs:
Error in `check_aesthetics()`:
! Aesthetics must be either length 1 or the same as the data (1): fill
Backtrace:
1. base (local) `<fn>`(x)
2. ggplot2:::print.ggplot(x)
4. ggplot2:::ggplot_build.ggplot(x)
5. ggplot2 (local) by_layer(function(l, d) l$compute_geom_2(d))
6. ggplot2 (local) f(l = layers[[i]], d = data[[i]])
7. l$compute_geom_2(d)
8. ggplot2 (local) f(..., self = self)
9. self$geom$use_defaults(data, self$aes_params, modifiers)
10. ggplot2 (local) f(..., self = self)
11. ggplot2:::check_aesthetics(params[aes_params], nrow(data))
Error in check_aesthetics(params[aes_params], nrow(data)) :
As you want to colour your boxplots by the value of Healthy, you can do so by mapping Healthy onto the fill aesthetic. Also, to use one of the Brewer palettes, ggplot2 already offers convenience functions, which in the case of the fill aes is called scale_fill_brewer. I'm not sure whether you want a legend, but IMHO it does not make sense here, so I removed it via guides. Finally, as you provided no data, it's not clear whether your Healthy column is a numeric or a categorical variable; for this reason I wrapped it in factor() to make it categorical.
Using some fake random example data:
set.seed(123)
library(ggplot2)
Hospital <- data.frame(
Healthy = rep(c(0, 1), 50),
BodyTemperature = runif(100)
)
ggplot(Hospital, aes(x = factor(Healthy), y = BodyTemperature)) +
geom_boxplot(aes(fill = factor(Healthy))) +
scale_fill_brewer(palette = "Set1") +
ylab("Temperature") +
xlab("Healthy") +
ggtitle("Health compared to Temperature") +
guides(fill = "none")
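If you would rather keep your original brewer.pal() approach: the warning happens because brewer.pal() insists on n >= 3 (exactly as the message says), so a common workaround (a sketch) is to request three colours, keep the first two, and supply them with scale_fill_manual():
library(RColorBrewer)
colour <- brewer.pal(3, "Set1")[1:2]
# then use scale_fill_manual(values = colour) in place of scale_fill_brewer()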
I'm trying to find the optimal weights for a specific target return using the PortfolioAnalytics library and ROI optimization; however, even though I know that the target return should be feasible and should be part of the efficient frontier, the ROI optimization does not find any solution.
The code that I'm using is the following:
for(i in 0:n){
target=minret+(i)*Del
p <- portfolio.spec(assets = colnames(t_EROAS)) # specification of asset classes
p <- add.constraint(p, type = "full_investment") # weights have to sum to 1
p <- add.constraint(portfolio=p, type="box", min=0, max=1) # long-only, no short positions
p <- add.constraint(p,
type="group",
groups=group_list,
group_min=VCONSMIN[,1],
group_max=VCONSMAX[,1])
p <- add.constraint(p, type = "return", name = "mean", return_target = target)
p <- add.objective(p, type="risk", name="var")
eff.opt <- optimize.portfolio(t_EROAS, p, optimize_method = "ROI",trace=TRUE)}
n = 30, but it is only finding 27 portfolios, and the efficient frontier that I'm creating looks empty from portfolio 27 to portfolio 30; portfolios 28 and 29 seem to have no solution, but I'm not sure that this is correct.
What I want to have is an efficient frontier in a data frame format with a fixed number of portfolios, and it seems that the only way to achieve this is by this method. Any help or any ideas that could help?
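Not a verified fix, but one avenue worth trying: PortfolioAnalytics can trace the frontier itself with create.EfficientFrontier(), where n.portfolios fixes the number of points and the result's $frontier component can be coerced to a data frame. A sketch, assuming t_EROAS holds the returns and p is the spec above without the return constraint:
library(PortfolioAnalytics)
library(ROI)
ef <- create.EfficientFrontier(R = t_EROAS, portfolio = p,
                               type = "mean-StdDev", n.portfolios = 30)
frontier_df <- as.data.frame(ef$frontier)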
I have this data set:
var_1 = rnorm(1000,1000,1000)
var_2 = rnorm(1000,1000,1000)
var_3 = rnorm(1000,1000,1000)
sample_data = data.frame(var_1, var_2, var_3)
I would like to split this data set into 10 different datasets (each containing 100 rows) and then upload them on to a server.
I know how to do this by hand:
sample_1 = sample_data[1:100,]
sample_2 = sample_data[101:200,]
sample_3 = sample_data[201:300,]
# etc.
library(DBI)
#establish connection (my_connection)
dbWriteTable(my_connection, SQL("sample_1"), sample_1)
dbWriteTable(my_connection, SQL("sample_2"), sample_2)
dbWriteTable(my_connection, SQL("sample_3"), sample_3)
# etc
Is there a way to do this "quicker"?
I thought of a general way to do this - but I am not sure how to correctly write the code for this:
i = seq(1:1000, by = 100)
j = 1 - 99
{
sample_i = sample_data[ i:j,]
dbWriteTable(my_connection, SQL("sample_i"), sample_i)
}
Can someone please help me with this?
Thank you!
Here's an example using the SQLite database engine. We'll start with your sample data set:
var_1 = rnorm(1000,1000,1000)
var_2 = rnorm(1000,1000,1000)
var_3 = rnorm(1000,1000,1000)
sample_data = data.frame(var_1, var_2, var_3)
Now we'll break your large data frame into a list of 10 data frames using split(). The result will be stored in a list:
list_of_dfs <- split(
sample_data, (seq(nrow(sample_data))-1) %/% 100
)
We'll create a vector with the names of the tables in the database. Here, I'm just making a simple vector with the names sample_1, sample_2, etc.
table_names <- paste0("sample_", 1:10)
Now we're ready to write to the database. We'll make a connection and then iterate over the list of data frames and the vector of table names simultaneously, calling dbWriteTable() each time:
library(DBI)
library(purrr) # for map2()
connection <- dbConnect(RSQLite::SQLite(), dbname = "test.db")
map2(
  table_names,
  list_of_dfs,
  function(x, y) dbWriteTable(connection, x, y)
)
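To check the result, DBI's dbListTables() lists what was written, and dbDisconnect() closes the connection when you're done:
dbListTables(connection) # should include sample_1 through sample_10
dbDisconnect(connection)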
I am new to R and am trying to find a better way of accomplishing this fairly simple task efficiently.
I have a data.frame M with 100,000 lines (and many columns, out of which 2 columns are relevant to this problem; I'll call them M1 and M2). I have another data.frame where column V1 with about 10,000 elements is essential to this task. My task is this:
For each element in V1, find where it occurs in M2 and pull out the corresponding M1. I am able to do this using a for-loop and it is terribly slow! I am used to Matlab and Perl and this is taking forever in R! Surely there's a better way. I would appreciate any valuable suggestions in accomplishing this task...
for (x in 1:length(V$V1)) {
  start[x] <- M$M1[M$M2 == V$V1[x]]
}
There is only 1 element that will match, so I can use the logical statement to directly get the element into the start vector. How can I vectorize this?
Thank you!
Here is another solution, using the same example by @aix.
M[match(V$V1, M$M2),]
To benchmark performance, we can use the R package rbenchmark.
library(rbenchmark)
f_ramnath = function() M[match(V$V1, M$M2),]
f_aix = function() merge(V, M, by.x='V1', by.y='M2', sort=F)
f_chase = function() M[M$M2 %in% V$V1,] # modified to return full data frame
benchmark(f_ramnath(), f_aix(), f_chase(), replications = 10000)
         test replications elapsed relative
2     f_aix()        10000  12.907 7.068456
3   f_chase()        10000   2.010 1.100767
1 f_ramnath()        10000   1.826 1.000000
Another option is to use the %in% operator:
> set.seed(1)
> M <- data.frame(M1 = sample(1:20, 15, FALSE), M2 = sample(1:20, 15, FALSE))
> V <- data.frame(V1 = sample(1:20, 10, FALSE))
> M$M1[M$M2 %in% V$V1]
[1] 6 8 11 9 19 1 3 5
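One caveat worth noting: the two approaches order their results differently, which matters if you need the output aligned with V$V1:
M$M1[M$M2 %in% V$V1]    # results in M's row order
M$M1[match(V$V1, M$M2)] # results in V$V1's order, NA where unmatched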
Sounds like you're looking for merge:
> M <- data.frame(M1=c(1,2,3,4,10,3,15), M2=c(15,6,7,8,-1,12,5))
> V <- data.frame(V1=c(-1,12,5,7))
> merge(V, M, by.x='V1', by.y='M2', sort=F)
V1 M1
1 -1 10
2 12 3
3 5 15
4 7 3
If V$V1 might contain values not present in M$M2, you may want to specify all.x=T. This will fill in the missing values with NAs instead of omitting them from the result.
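For instance, with a made-up V2 containing the value 99, which does not occur in M$M2:
> V2 <- data.frame(V1=c(-1, 99))
> merge(V2, M, by.x='V1', by.y='M2', all.x=T)
  V1 M1
1 -1 10
2 99 NA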