R: can a function in a package read a text file in the same package?

The use case is as follows: I have created a package which contains functions that run queries against a database. Instead of defining the query in an R script, e.g. (excuse the pseudo-code):
house_price_query <- "select * from house_prices"
get.house_prices <- function() run_query(house_price_query)
Can I save the query as a text file in, say, queries/house_prices.sql, and then read this text file in at run time?
Cheers

You can put your file house_prices.sql in inst/queries in your package folder, and then get its path from within R (once your package is installed) with:
system.file(file.path("queries","house_prices.sql"), package="your_package_name")
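For example, a minimal sketch of a package function that reads the bundled query and runs it (run_query() here is the hypothetical query runner from the question):

get.house_prices <- function() {
  path <- system.file("queries", "house_prices.sql",
                      package = "your_package_name")
  # read the whole file into a single query string
  query <- paste(readLines(path), collapse = "\n")
  run_query(query)
}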

Related

How to use sqlite3 in C without the dev library installed?

I'm trying to load a local DB with SQLite on a Red Hat Linux server. I have a C program that loads the database from a very large file, splitting the columns. The bad news is that sqlite3 is not installed on the machine (fatal error: sqlite3.h: No such file or directory) and I won't be able to get permission to install libsqlite3-dev (according to this), so I can only use it through bash or python:
[dhernandez#zl1:~]$ locate sqlite3
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so.0
/opt/xiv/host_attach/xpyv/lib/libsqlite3.so.0.8.6
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3
/opt/xiv/host_attach/xpyv/lib/python2.7/lib-dynload/_sqlite3.so
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/__init__.py
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/dbapi2.py
/opt/xiv/host_attach/xpyv/lib/python2.7/sqlite3/dump.py
/usr/bin/sqlite3
/usr/lib64/libsqlite3.so.0
/usr/lib64/libsqlite3.so.0.8.6
/usr/lib64/python2.6/sqlite3
/usr/lib64/python2.6/lib-dynload/_sqlite3.so
/usr/lib64/python2.6/sqlite3/__init__.py
/usr/lib64/python2.6/sqlite3/__init__.pyc
/usr/lib64/python2.6/sqlite3/__init__.pyo
/usr/lib64/python2.6/sqlite3/dbapi2.py
/usr/lib64/python2.6/sqlite3/dbapi2.pyc
/usr/lib64/python2.6/sqlite3/dbapi2.pyo
/usr/lib64/python2.6/sqlite3/dump.py
/usr/lib64/python2.6/sqlite3/dump.pyc
/usr/lib64/python2.6/sqlite3/dump.pyo
/usr/lib64/xulrunner/libmozsqlite3.so
/usr/share/man/man1/sqlite3.1.gz
/usr/share/mime/application/x-kexiproject-sqlite3.xml
/usr/share/mime/application/x-sqlite3.xml
Which of the following options would be faster?
Split the columns in my C program, and then execute the insert like this:
system("echo 'insert into t values(1,2);' | sqlite3 mydb.db");
Split the columns in my C program, save them to a temp file, and when I've got 500,000 rows execute the script like this (and then empty the temp file to continue loading rows):
system("sqlite3 mydb.db < temp.sql");
Split the columns in my C program adding a delimiter between them, save it all to a temp file, and import it like this:
.separator '#'
.import temp.txt t
You can use the amalgamation version. It is a single .c file you can include in your project, and all of SQLite is available. No need for dynamic linking.
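A minimal sketch of that, assuming sqlite3.c and sqlite3.h from the amalgamation sit next to your source and that table t already exists (build with something like gcc load.c sqlite3.c -lpthread -ldl -o load):

#include <stdio.h>
#include "sqlite3.h"   /* header from the amalgamation, not the system */

int main(void) {
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("mydb.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    /* one transaction around the bulk of the inserts is what makes
       loading fast -- much faster than shelling out per row */
    sqlite3_exec(db, "BEGIN", NULL, NULL, &err);
    sqlite3_exec(db, "INSERT INTO t VALUES (1, 2)", NULL, NULL, &err);
    sqlite3_exec(db, "COMMIT", NULL, NULL, &err);
    if (err != NULL) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}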
You could probably try to dynamically load the sqlite3 library at runtime.
There is quite a bit to learn about it, but it is a powerful facility and I am quite sure it would solve your problem.
Here is a link describing how to do it: http://tldp.org/HOWTO/Program-Library-HOWTO/dl-libraries.html
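A rough sketch of that approach, using the /usr/lib64/libsqlite3.so.0 visible in your locate output (no sqlite3.h needed; build with gcc load.c -ldl -o load):

#include <stdio.h>
#include <dlfcn.h>

typedef struct sqlite3 sqlite3;                    /* opaque handle */
typedef int (*open_fn)(const char *, sqlite3 **);  /* sqlite3_open  */
typedef int (*close_fn)(sqlite3 *);                /* sqlite3_close */

int main(void) {
    void *lib = dlopen("/usr/lib64/libsqlite3.so.0", RTLD_LAZY);
    if (lib == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* resolve the entry points we need by name */
    open_fn p_open = (open_fn)dlsym(lib, "sqlite3_open");
    close_fn p_close = (close_fn)dlsym(lib, "sqlite3_close");
    if (p_open == NULL || p_close == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        return 1;
    }
    sqlite3 *db;
    if (p_open("mydb.db", &db) == 0)   /* 0 == SQLITE_OK */
        p_close(db);
    dlclose(lib);
    return 0;
}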
Download the devel package and use its headers from your project directory. You only need them for compilation.

How to create a file on the application server with ABAP programming

I have a file on the D: drive of my computer, and I want to copy it to an SAP application server so that I can see it with transaction AL11.
I know that I can create a file with AL11, but I want to do this in ABAP.
In my search I found the code below, but I cannot solve my problem with it.
DATA: unixcom LIKE rlgrap-filename.
DATA: BEGIN OF tabl OCCURS 500,
        line(400),
      END OF tabl.

unixcom = 'mkdir mydir'.   "command to create the directory

"execute the unix command
CALL 'SYSTEM' ID 'COMMAND' FIELD unixcom
              ID 'TAB' FIELD tabl[].
To write a file to the application server, there are three steps to follow. To open the file for writing, use the statement below:
Step 1: OPEN DATASET dset FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
To write into the file on the application server, use:
Step 2: TRANSFER line TO dset.
Don't forget to close the file once the transfer is done:
Step 3: CLOSE DATASET dset.
Please mark as the correct answer if it helps! :)
If you want to do this using ABAP you could create a small report that uses the function module GUI_UPLOAD to get the file from your local disk into an internal table and then write it to the application server with something like this:
DATA: lv_filename TYPE string,
      lt_contents TYPE TABLE OF string,
      lv_line     TYPE string.

" lt_contents has been filled from the local file (e.g. via GUI_UPLOAD)
lv_filename = '\\path\to\al11\directory\file.txt'.

OPEN DATASET lv_filename FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
LOOP AT lt_contents INTO lv_line.
  TRANSFER lv_line TO lv_filename.
ENDLOOP.
CLOSE DATASET lv_filename.
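The GUI_UPLOAD step that fills lt_contents might look like this (a rough sketch; the local path is a placeholder):

" fills the lt_contents table declared above
DATA: lv_local TYPE string.

lv_local = 'D:\file.txt'.               "placeholder local path

CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename = lv_local
    filetype = 'ASC'                    "plain text, line by line
  TABLES
    data_tab = lt_contents
  EXCEPTIONS
    OTHERS   = 1.
IF sy-subrc <> 0.
  MESSAGE 'Upload from the local PC failed' TYPE 'E'.
ENDIF.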
I used transaction CG3Z, and with it I was able to copy a file into the application server directory.

Execute scripts by relative path in Oracle SQL Developer

First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and btw, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/
  A/
    runAll.sql
    A1.sql
    A2.sql
  B/
    runAll.sql
    B1.sql
    B2.sql
I would like to have a file scripts/runEverything.sql something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means a relative path in SQL*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using '&1' and passing in the root directory. I.e.:
scripts/runEverything.sql:
@'&1/A/runAll.sql' '&1/A'
@'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path: c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default script directory.
- Open a driver script, e.g. runAll.sql (which then makes its own folder the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
Set up your default scripts folder. On the SQL Developer toolbar, use this navigation:
Tools > Preferences
In the Preferences dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a scripts folder and place all the associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql
@A2.sql
To test this approach, in SQL Developer, click on File and navigate and open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating and opening the runAll.sql worksheet, the default file folder becomes "script".
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
@'&path/A/runAll.sql' '&path/A'
@'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything it runs, does not also (re)define path, this should work; you just need to choose a unique name if there is any risk of a clash.
Again I can't verify this but I'm sure I've done exactly this in the past...
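Putting that together, the master script might look like this (same caveat: a sketch I haven't verified against SQL Developer 3.2):

-- runEverything.sql: &1 is the scripts root passed by the caller;
-- capture it once so children redefining &1 can't affect later calls
define path=&1

@'&path/A/runAll.sql' '&path/A'
@'&path/B/runAll.sql' '&path/B'

-- clean up so a rerun in the same session starts fresh
undefine path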
You need to provide the path of the file as a string; put the path in double quotes and it will work.
For example:
@"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of SQL:
@yourPath\yourFileName.sql
How to pass parameters to a script:
@A1.sql (parameter)
@A2.sql (parameter)
This is not an absolute or relative path issue. It's a SQL interpreter issue: by default it looks for files that have a .sql extension.
Please make sure the file name ends in .sql.
Ex: if the workspace has a file named "A", rename it to "A.sql".

BIDS Import from changing file name [wildcard?]

I'm attempting to create a process to import data. I created the entire process and it works, but I'm having trouble creating the variable to find the file name of the CSV I want to import automatically. Each time a new CSV is uploaded to me it has a timestamp on it. I want to be able to grab that file no matter what the name is and do work on it.
So for example this week the file name would be
filename_4-14-2014.csv
And next week
filename_4_21_2014.csv
And so on into eternity...
Is there a way to create a variable that picks up the full file name even though its changing?
After doing some poking around, I've discovered the following...
You can use a File System Task to perform the copy operation I was referring to. You can set the input file and the output file as variables. This way you always know that the file you use for import is named the same, and has the right data.
You just need to add the variables and a File System Task to your package.
OK, so to accomplish what I wanted, I created a Foreach Loop Container. Using the Foreach Loop Container, I had it look for any files ending in .csv in my specified folder by using a wildcard (denoted by an asterisk: *.csv).
The steps within the Foreach Loop Container are as follows.
Step 1: File System Task - rename the file.
Step 2: Data Flow Task - import the data to SQL.
Step 3: File System Task - copy the file to another folder, appending the datetime to the file name.
Step 4: File System Task - delete the source file.
I used variables to get all the file and folder names plus datetimes.

Using R functions lapply and read.csv.sql

I am trying to open multiple csv files using a list such as the one below:
filenames <- list.files("temp", pattern = "\\.csv$", full.names = TRUE)
I have found examples that use lapply and read.csv to open all the files in the temp directory, but I know a priori what data I need to extract from each file, so to save reading time I want to use the SQL extension of this:
somefile = read.csv.sql("temp/somefile.csv", sql = "select * from file", eol = "\n")
However, I am having trouble combining these two pieces of functionality into a single command such that I can read all the files in a directory, applying the same SQL query to each.
Has anybody had success doing this?
If you want a list of data frames, one per file (assuming your working directory contains the .csv files):
library(sqldf)
filenames <- list.files(".", pattern = "\\.csv$")
df.list <- sapply(filenames, read.csv.sql, sql = "select * from file", eol = "\n", simplify = FALSE)
Or if you want them all combined:
library(plyr)
df <- ldply(filenames, read.csv.sql, sql = "select * from file", eol = "\n")
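To match the question's setup, where the files live in temp/, the same call works with full paths (a sketch assuming the sqldf package is installed):

library(sqldf)

# full.names = TRUE keeps the "temp/" prefix, so read.csv.sql can
# find the files outside the current working directory
filenames <- list.files("temp", pattern = "\\.csv$", full.names = TRUE)
df.list <- sapply(filenames, read.csv.sql,
                  sql = "select * from file", eol = "\n",
                  simplify = FALSE)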