Pass along data from input selection into valueBox in Shiny

I am trying to pass values from a Shiny input into a valueBox. I have seen versions of this, but I cannot seem to get it to work. Below is a sample of what I am trying to do. I would like the selection from the input field to be passed along to the valueBox, and the value box to return another column (the sum is fine, or just the value of the column). It's hard to demonstrate with the mpg data because there is no one-to-one relationship between manufacturer and the city column, but in my dataset one entry in the drop-down aligns with one entry in another table. I cannot seem to pass the input list to the valueBox.
library(shiny)
library(shinydashboard)
ui <- dashboardPage(
  dashboardHeader(title = "Dynamic boxes"),
  dashboardSidebar(
    selectInput('column', 'Column:', mpg$manufacturer)
  ),
  dashboardBody(
    valueBoxOutput("vbox")
  )
)
server <- function(input, output) {
  output$vbox <- renderValueBox({
    valueBox(
      paste('Sum', input$column),
      sum(mpg$cty[[input$column]])
    )
  })
}
shinyApp(ui, server)

Is this what you're looking for? The choices are deduplicated with unique(), ggplot2 is loaded so mpg is available, and the valueBox sums cty over the rows whose manufacturer matches the selection via %in%:
library(shiny)
library(shinydashboard)
library(ggplot2)
ui <- dashboardPage(
  dashboardHeader(title = "Dynamic boxes"),
  dashboardSidebar(
    selectInput('column', 'Column:', unique(mpg$manufacturer))
  ),
  dashboardBody(
    valueBoxOutput("vbox")
  )
)
server <- function(input, output) {
  output$vbox <- renderValueBox({
    valueBox(
      paste('Sum', input$column),
      sum(mpg$cty[mpg$manufacturer %in% input$column])
    )
  })
}
shinyApp(ui, server)
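Since the asker mentions a one-to-one mapping between the drop-down entry and a row in another table, the same pattern reduces to a plain lookup. A minimal sketch (the lookup_df table and its key/val columns are made up for illustration):
library(shiny)
library(shinydashboard)
# hypothetical one-to-one lookup table standing in for the asker's second table
lookup_df <- data.frame(key = c("a", "b", "c"), val = c(10, 20, 30))
ui <- dashboardPage(
  dashboardHeader(title = "One-to-one lookup"),
  dashboardSidebar(selectInput("key", "Key:", lookup_df$key)),
  dashboardBody(valueBoxOutput("vbox"))
)
server <- function(input, output) {
  output$vbox <- renderValueBox({
    req(input$key)  # wait until a selection exists
    # exactly one row matches, so this returns a single value
    valueBox(lookup_df$val[lookup_df$key == input$key],
             paste("Value for", input$key))
  })
}
shinyApp(ui, server)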

Related

Optimization: splitting a column into a thousand columns in R or SQLite

I need to analyse data from a very large dataset. For that, I need to separate a character variable into more than a thousand columns.
The structure of this variable is:
number$number$number$ and so on, for a thousand numbers.
My data is stored in a .db file from SQLite. I then imported it into R using the "RSQLite" package.
I tried splitting this column into multiple columns using separate() from tidyr:
#d is a data.table with my data
d2 <- d %>% separate(column_to_separate, paste0("S", 1:number_of_final_columns))
It works, but it is also taking forever. Does anyone have a solution to split this column faster (either in R or using SQLite)?
Thanks.
You may use the tidyfast package (see here), which leverages data.table. In this test, it is approximately three times faster:
test <- data.frame(
  long.var = rep(paste0("V", 1:1000, "$", collapse = ""), 1000)
)
system.time({
  test |>
    tidyr::separate(long.var, into = paste0("N", 1:1001), sep = "\\$")
})
#>  user  system elapsed
#> 0.352   0.012   0.365
system.time({
  test |>
    tidyfast::dt_separate(long.var, into = paste0("N", 1:1001), sep = "\\$")
})
#>  user  system elapsed
#> 0.117   0.000   0.118
Created on 2023-02-03 with reprex v2.0.2
You can write the file out as is and then load it with fread, which is generally rather fast.
library(data.table)
library(dplyr)
library(tidyr)
# Prepare example
x <- matrix(rnorm(1000 * 10000), ncol = 1000)
dta <- data.frame(value = apply(x, 1, function(x) paste0(x, collapse = "$")))
# Run benchmark
microbenchmark::microbenchmark({
  dta_2 <- dta %>%
    separate(col = value, sep = "\\$", into = paste0("col_", 1:1000))
},
{
  tmp_file <- tempfile()
  fwrite(dta, tmp_file)
  dta_3 <- fread(tmp_file, sep = "$", header = FALSE)
}, times = 3
)
Edit: I tested the speed and it seems faster than dt_separate from tidyfast, but it depends on the size of your dataset.
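Another option worth benchmarking (my addition, not from the answers above) is data.table::tstrsplit(), which splits a character column into many columns in a single pass and can assign them by reference, since d is already a data.table. A sketch, assuming the split really yields number_of_final_columns fields:
library(data.table)
# split on the literal "$" and assign S1..Sn by reference, without copying d
d[, paste0("S", 1:number_of_final_columns) :=
      tstrsplit(column_to_separate, "$", fixed = TRUE)]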

How to apply a value from one row to all rows with the same ID in R

I currently have a large data set in long format. I am hoping to apply a value given only at baseline (repeat_instance = 0) to all follow-up instances (repeat_instance = 1, 2, 3+) based on the record_id.
While I cannot share the actual data, I have created a simplified example below to illustrate the question.
record_id <- c(1,1,1,2,3,4,4,5,6,7,8,8,9,10,10,10)
repeat_instance <- c(0,1,2,0,0,0,1,0,0,0,0,1,0,0,1,2)
reason_for_visit <- c(1,NA,NA,1,2,1,NA,1,2,3,1,NA,1,1,NA,NA)
Current format and desired outcome: (tables from the original post not reproduced here)
I have seen solutions in Excel, but I am not sure which approach would work in R.
We can use fill() from tidyr:
library(tidyr)
fill(df1, reason_for_visit)
data
df1 <- data.frame(record_id, repeat_instance, reason_for_visit)
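One caveat (my note, not part of the answer above): fill() carries values downward across all rows, including across record_id boundaries. That is safe here because every baseline row is non-missing, but grouping first makes the intent explicit:
library(dplyr)
library(tidyr)
df1 %>%
  group_by(record_id) %>%
  fill(reason_for_visit) %>%
  ungroup()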

Create dynamic SQL query depending on user input in R Shiny App

I have a Shiny app where users can filter a SQL database of movies. So far, you can only filter by country.
con <- dbConnect(RSQLite::SQLite(), 'Movies.db')
movies_data <- dbReadTable(con, 'Movies')
ui <- fluidPage(
  fluidRow(
    selectInput(
      inputId = "country",
      label = "Country:",
      choices = movies_data$journal,
      multiple = TRUE
    ),
    br(),
    fluidRow(width = "100%",
             dataTableOutput("table")
    )
  )
)
server <- function(input, output, session) {
  output$table <- renderDataTable({
    dbGetQuery(
      conn = con,
      statement = 'SELECT * FROM movies WHERE country IN ( ? )',
      params = list(input$country))
  })
}
shinyApp(ui = ui, server = server)
Now I want to give the user more filters, for example actor or genre. All filters are multi-select and optional. How can I create the statement dynamically? Would I use some switch statement for every possible combination (e.g. no filter on country but only action movies)? That seems a bit exhausting to me.
First off, you say the filter is optional but I see no way to disable it in your code. I'm assuming that deselecting all options is your way of disabling the filter, or at least that it's intended to work that way. If all options are selected for any filter, then the current approach should work fine, and will just show all films.
You can probably just construct the overall query piece by piece, and then paste it all together at the end.
Base query: 'SELECT * FROM movies'
Country filter: 'country IN (...)', built from input$country
Actor filter: 'actor IN (...)', built from input$actor
Genre filter: 'genre IN (...)', built from input$genre
Then you put it all together with paste.
To summarize: Base query. Then, if any of the filters are active, add a WHERE. Join all filters together, separating by AND. Pass the final query in as a direct string.
You can even put the filters into a list for easier parsing.
# Here, filterList is a list containing input$country, input$actor, input$genre
# and filterNames contains the corresponding column names in the database
# e.g. filterList <- list("c1", list("a1", "a2"), "g1")
#      filterNames <- list("c", "a", "g")
baseQuery <- "SELECT * FROM movies"
# If any of the filters has a selection, we need a WHERE clause
filterCheck <- any(sapply(filterList, length) > 0)
# NOTE: If you have a different selection available for None,
# just modify the sapply function accordingly
if (filterCheck) {
  baseQuery <- paste(baseQuery, "WHERE")
  # Keep only the filters that actually have a selection
  keep <- sapply(filterList, length) > 0
  filterList <- filterList[keep]
  filterNames <- filterNames[keep]
  # Collapse each multi-select into a quoted, comma-separated string
  # (values are quoted so string comparisons are valid SQL)
  filterList <- sapply(filterList, function(v) paste0("'", v, "'", collapse = ", "))
  # Now construct the individual filter clauses
  filterList <- sapply(seq_along(filterList), function(x)
    paste0(filterNames[x], " IN (", filterList[x], ")"))
  # Paste the clauses together
  filterList <- paste(filterList, collapse = " AND ")
  baseQuery <- paste(baseQuery, filterList)
}
# Final output using the sample input above:
# "SELECT * FROM movies WHERE c IN ('c1') AND a IN ('a1', 'a2') AND g IN ('g1')"
Now use baseQuery as the direct query statement.
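Pasting raw input values into SQL also opens the door to quoting mistakes and injection. A safer variant of the same idea (a sketch, not part of the original answer; build_movie_query() and its filters argument are hypothetical names) builds the query with glue::glue_sql(), which quotes identifiers and values through the DBI connection:
library(DBI)
library(glue)
build_movie_query <- function(con, filters) {
  # drop filters with no selection, mirroring the length check above
  filters <- filters[sapply(filters, length) > 0]
  if (length(filters) == 0) return(SQL("SELECT * FROM movies"))
  # {`col`} quotes the column name; {vals*} quotes and comma-collapses the values
  clauses <- mapply(function(col, vals) {
    glue_sql("{`col`} IN ({vals*})", col = col, vals = vals, .con = con)
  }, names(filters), filters)
  glue_sql("SELECT * FROM movies WHERE {clauses_sql}",
           clauses_sql = SQL(paste(clauses, collapse = " AND ")), .con = con)
}
# usage in the server:
# dbGetQuery(con, build_movie_query(con, list(country = input$country,
#                                             actor   = input$actor,
#                                             genre   = input$genre)))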

Add new column to MonetDBLite table

I am trying to use R + MonetDB as a large-data analysis stack and am having trouble creating a new column and populating it with data from my analysis. Here is a toy example:
library(MonetDBLite)
library(DBI)
data(mtcars)
db <- dbConnect(MonetDB.R::MonetDB(), embedded="./test.db")
# load mtcars into the database
dbWriteTable(conn=db, value = mtcars, name = "mtcars", overwrite=TRUE)
## Add a new column
dbSendQuery(db, "ALTER TABLE mtcars ADD v1 DOUBLE;")
## insert data into the new column
v1 <- mtcars["mpg"] * pi
dbSendQuery(db, "INSERT INTO mtcars (v1) VALUES (?)", bind.data=v1)
And the error message:
Error in .local(conn, statement, ...) :
Unable to execute statement 'INSERT INTO mtcars (v1) VALUES ('c(65.9734457253857, 65.9734457253857, 71.6283125018473, 67.23008278...'.
Server says 'ERROR: 49.6371639267187, 61.8893752757189, 47.1238898038469, 67.2300827868216)' to type dbl failed.
' [#conversion of string 'c(65.9734457253857, 65.9734457253857, 71.6283125018473, 67.2300827868216, 58.7477826221291, 56.8628270299753, 44.924774946334, 76.6548607475909, 71.6283125018473, 60.318578948924, 55.9203492338983, 51.5221195188726, 54.3495529071034, 47.7522083345649, 32.6725635973338, 32.6725635973338, 46.18141200777, 101.787601976309, 95.5044166691297, 106.499990956694, 67.5442420521806, 48.6946861306418, 47.7522083345649, 41.7831822927443, 60.318578948924, 85.7654794430014, 81.6814089933346, 95.5044166691297,
].
In addition: Warning message:
In if (is.na(value)) statement <- sub("?", "NULL", statement, fixed = TRUE) else if (valueClass %in% :
the condition has length > 1 and only the first element will be used
From this error I surmise that maybe bind.data can't be used with MonetDBLite?
Question:
How can I add a column(s) to a MonetDBLite table and populate it with data from my R session?
First of all, the INSERT command in your last statement is not correct; you need an UPDATE statement instead.
That being said, I propose a solution where you can populate your MonetDBLite table directly from R:
library(MonetDBLite)
library(DBI)
data(mtcars)
db <- dbConnect(MonetDB(), embedded="./test.db")
# I added a rownbr to the dataset so it will be easier later
mtcars$rownbr <- 1:nrow(mtcars)
# load mtcars into the database
dbWriteTable(conn=db, value = mtcars, name = "mtcars", overwrite=TRUE)
## Add a new column
dbSendQuery(db, "ALTER TABLE mtcars ADD v1 DOUBLE;")
## insert data into the new column
v1 <- mtcars["mpg"] * pi
for (i in 1:nrow(mtcars)) {
  # note the spaces around the SQL keywords; without them paste0() would
  # produce e.g. "SET v1 = 65.97where rownbr =1", which is invalid SQL
  myquery <- paste0("UPDATE mtcars SET v1 = ", v1$mpg[i], " WHERE rownbr = ", i, ";")
  dbSendQuery(db, myquery)
}
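Row-by-row updates get slow as tables grow. An untested set-based alternative (my sketch, assuming MonetDBLite accepts a standard correlated subquery in UPDATE) stages the new values in a helper table and updates in a single statement:
# stage the computed column plus the join key in a helper table
dbWriteTable(conn = db, name = "mtcars_v1",
             value = data.frame(rownbr = mtcars$rownbr, v1 = mtcars$mpg * pi),
             overwrite = TRUE)
# one UPDATE instead of nrow(mtcars) round trips
dbSendQuery(db, "UPDATE mtcars SET v1 = (SELECT v1 FROM mtcars_v1 WHERE mtcars_v1.rownbr = mtcars.rownbr);")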
There seems to be an issue with parameterized SQL queries via dbBind with MonetDBLite (see https://github.com/hannesmuehleisen/MonetDBLite-R/issues/16). The code below works with SQLite:
library(RSQLite)
data(mtcars)
db <- dbConnect(SQLite(), ":memory:")
# load mtcars into the database
dbWriteTable(conn=db, value = mtcars, name = "mtcars", overwrite = TRUE, row.names=TRUE)
## Add a new column
dbSendQuery(db, "ALTER TABLE mtcars ADD v1 DOUBLE;")
## do computation with R
mtcars$v1 <- mtcars$mpg * pi
mtcars$row_names <- rownames(mtcars)
update_query <- dbSendQuery(db, 'update mtcars set "v1"=$v1 WHERE row_names=$row_names')
dbBind(update_query, mtcars[, c("v1", "row_names")]) # send the updated data
dbClearResult(update_query) # release the prepared statement
dbReadTable(db, "mtcars")
dbDisconnect(db)
However, with MonetDBLite, it is producing an error (and a warning) for the dbBind step:
> dbBind(update_query, mtcars[, c("v1", "row_names")]) # send the updated data
Error in vapply(params, function(x) { : values must be length 1,
but FUN(X[[1]]) result is length 32
In addition: Warning message:
In if (is.na(x)) "NULL" else if (is.numeric(x) || is.logical(x)) { :
the condition has length > 1 and only the first element will be used
A workaround I came up with is to use glue_data_sql() from the glue package to "manually" compose the queries (without needing to loop through rows):
library(MonetDBLite)
library(DBI)
data(mtcars)
db <- dbConnect(MonetDB(), embedded="./test.db")
dbWriteTable(conn=db, value = mtcars, name = "mtcars", overwrite = TRUE, row.names=TRUE)
dbSendQuery(db, "ALTER TABLE mtcars ADD v1 DOUBLE;")
library(glue)
mtcars$row_names <- rownames(mtcars)
mtcars$v1 <- mtcars$mpg * pi
update_query <- glue_data_sql(mtcars, "update mtcars set v1 = {v1} where row_names = {row_names};", .con=db)
lapply(update_query, dbSendQuery, conn=db)
# verify
dbReadTable(db, "mtcars")
dbDisconnect(db)
Yet another workaround that does the insert in one query (closer to @Zelazny7's original attempt):
library(MonetDBLite)
library(DBI)
data(mtcars)
db <- dbConnect(MonetDB(), embedded="./test.db")
dbSendQuery(db, "CREATE TABLE mtcars (
row_names VARCHAR(32),
v1 DOUBLE);")
library(glue)
mtcars$row_names <- rownames(mtcars)
mtcars$v1 <- mtcars$mpg * pi
insert_values <- glue_data(mtcars, "('{row_names}', {v1})")
insert_values <- glue_collapse(insert_values, sep=", ", last="")
insert_query <- glue("INSERT INTO mtcars (row_names, v1) VALUES {insert_values}")
dbSendQuery(db, insert_query)
dbReadTable(db, "mtcars")
dbDisconnect(db)

Customizing new trading strategy in R using quantmod

I want to create a new custom TA indicator for a stock symbol in R, but I have no idea how to convert my SQL-style conditional strategy into a self-defined R function and add it to the chartSeries.
The questions are embedded as comments in the following code.
library("quantmod")
library("FinancialInstrument")
library("PerformanceAnalytics")
library("TTR")
stock <- getSymbols("002457.SZ",auto.assign=FALSE,from="2012-11-26",to="2014-01-30")
head(stock)
chartSeries(stock, theme = "white", subset = "2013-07-01/2014-01-30",
            TA = "addSMA(n=5,col=\"gray\");addSMA(n=10,col=\"yellow\");addSMA(n=20,col=\"pink\");addSMA(n=30,col=\"green\");addSMA(n=60,col=\"blue\");addVo()")
Question: How can I rewrite the code below to make it available as a function in R?
#Signal Design
#Today's volume is the lowest during the last 20 trading days
lowvolume <- VOL<=LLV(VOL,20);
#several moving average lines stick together
X1:=ABS(MA(C,10)/MA(C,20)-1)<0.01;
X2:=ABS(MA(C,5)/MA(C,10)-1)<0.01;
X3:=ABS(MA(C,5)/MA(C,20)-1)<0.01;
#If the following condition is satisfied, then the signal appears
MA(C,5)>REF(MA(C,5),1) AND X1 AND X2 AND X3 AND lowvolume;
#Convert the above SQL code into the following R custom function
VOLINE <- function(x) {
}
#Create a new TA function for the chartseries and then add it up.
addVoline <- newTA(FUN = VOLINE,
                   preFUN = Cl,
                   col = c(rep(3, 6),
                           rep("#333333", 6)),
                   legend = "VOLINE")
I don't think you need SQL in this case.
Try this:
require(quantmod)
# fetch the data
s <- get(getSymbols('yhoo'))
# add the indicators
s$ma5 <- SMA(Cl(s), 5)
s$ma10 <- SMA(Cl(s), 10)
s$ma20 <- SMA(Cl(s), 20)
s$llv <- rollapply(Vo(s), 20, min)
# generate the signal; abs() keeps the "stick together" tests two-sided,
# matching the ABS(...) < 0.01 conditions in the question
s$signal <- (abs(s$ma10 / s$ma20 - 1) < 0.01 &
             abs(s$ma5 / s$ma10 - 1) < 0.01 &
             abs(s$ma5 / s$ma20 - 1) < 0.01 &
             Vo(s) == s$llv)
# draw
chart_Series(s)
add_TA(s$signal == 1, on = 1, col = 'red')
I'm not sure what REF means, but I'm sure you can work that out yourself.
This is the output (I can't seem to upload the photo, but you would see a chart with horizontal lines where the signal equals 1).
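For what it's worth, REF(x, 1) in that formula language refers to x one bar earlier, so the remaining MA(C,5) > REF(MA(C,5),1) condition could be bolted on with an xts lag (my addition, not part of the answer above):
# lag.xts(x, 1) aligns each bar with the previous bar's value
s$signal <- s$signal & (s$ma5 > lag.xts(s$ma5, 1))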
Use the function as a wrapper for sqldf() in the sqldf package. The argument to sqldf() will be a select statement on the data frame that has the data.
A good tutorial for this can be found at Burns Statistics.
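As a rough, hypothetical illustration of that idea (the data frame and query below are mine, not from the answer), the lowvolume condition VOL <= LLV(VOL, 20) from the question could be expressed as a correlated subquery over a data frame:
library(sqldf)
# toy price/volume data
df <- data.frame(day = 1:60,
                 close = 100 + cumsum(rnorm(60)),
                 volume = sample(1000:5000, 60, replace = TRUE))
# flag days whose volume is at the minimum of the trailing 20 days
sqldf("SELECT a.day, a.volume,
              a.volume <= (SELECT MIN(b.volume) FROM df b
                           WHERE b.day BETWEEN a.day - 19 AND a.day) AS lowvolume
       FROM df a
       WHERE a.day >= 20")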