I would like to import all csv files in a loop to a DolphinDB database. Is there a function to extract all file names in a folder? Thanks!
Use this:
exec filename from files(yourdir)
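To run the import loop over those names, something like this should work (a minimal sketch; dfs://mydb, mytable, and yourDir are placeholders for your own database, table, and folder):

fileNames = exec filename from files(yourDir) where filename like "%.csv"
for (f in fileNames) {
    t = loadText(yourDir + "/" + f)                // parse one CSV into an in-memory table
    loadTable("dfs://mydb", "mytable").append!(t)  // append it to the target table
}

DolphinDB also has loadTextEx, which loads text files directly into a partitioned table and can be more convenient for bulk imports.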
I have one folder that includes 10-12 subfolders, and from each subfolder I need to read a specific .xlsx file. I am stuck: I want to use os.walk to get all the .xlsx files, but I don't know how to proceed further.
for root, dirs, files in os.walk(path):
    for name in files:
        if name.endswith("abc.xlsx"):
If you would like to use os.walk, this is how:
import os

reqfiles = []
for root, dirs, files in os.walk(path):
    # accumulate matches from every subfolder; os.path.join keeps the full path
    reqfiles += [os.path.join(root, f) for f in files if f.endswith("abc.xlsx")]
If the files were all in one folder, you could use just os.listdir (note it does not recurse into subfolders):
reqfiles = [i for i in os.listdir(path) if i.endswith("abc.xlsx")]
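To then read each matched workbook, a minimal sketch with pandas (assuming the full-path reqfiles list from the os.walk version above, and that openpyxl is installed for .xlsx support):

import pandas as pd

# read every matched workbook into a dict keyed by its file path
dfs = {f: pd.read_excel(f) for f in reqfiles}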
I am trying to create a dataset containing multiple CSV files from the Blob. In the dataset's file path setting, I create a parameter, @dataset().FolderName, and add FolderName under Parameters.
I leave File (in File path) empty, as I want to grab all files in the folder. However, there is no data when I preview the data. Is there anything missing? Thank you
I have tested it on my side and it works fine.
(screenshot: add the FolderName parameter)
(screenshot: preview data)
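For reference, the dataset JSON ends up looking roughly like this (a sketch only; AzureBlobStorage1 and mycontainer are placeholders for your own linked service and container names):

{
    "name": "CsvFolderDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "AzureBlobStorage1",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "FolderName": { "type": "string" }
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "mycontainer",
                "folderPath": {
                    "value": "@dataset().FolderName",
                    "type": "Expression"
                }
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}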
If you want to merge all the CSV files in a Data Flow, you can do this:
1. Output to a single file.
2. Set Single partition.
I have a folder with several csv files, with file names between 100 and 400 (e.g. 142.csv, 278.csv, etc.). Not all the numbers between 100 and 400 are associated with a file; for example, there is no 143.csv. I want to write a loop that imports 5 random files into separate dataframes in pandas instead of manually searching and typing out the file names over and over. Any ideas to get me started with this?
You can use glob to list all the CSV files in the directory, then sample from that list:

import glob

import numpy as np
import pandas as pd

files = glob.glob('*.csv')
# replace=False ensures 5 distinct files
random_files = np.random.choice(files, 5, replace=False)
dataframes = []
for fp in random_files:
    dataframes.append(pd.read_csv(fp))
From this you can choose 5 random files from the directory and then read them separately.
Hope this answers your question.
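Alternatively, a version using only the standard library's random.sample, which also samples without replacement:

import glob
import random

import pandas as pd

files = glob.glob('*.csv')
# sample 5 distinct file names, then read each one
dataframes = [pd.read_csv(fp) for fp in random.sample(files, 5)]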
I have a folder in the server
/home/test/pictures
There are multiple .jpg files inside this pictures folder.
I want to delete these files from the pictures folder using PL/SQL.
Is there a utility for this ?
You can use the UTL_FILE.FREMOVE procedure. It removes a file from a directory. Its signature is UTL_FILE.FREMOVE(location IN VARCHAR2, filename IN VARCHAR2).
location must be the name of a directory object, which you can check in the ALL_DIRECTORIES view.
So a call would look like this:
begin
  -- MY_DIR is a directory object listed in ALL_DIRECTORIES
  UTL_FILE.FREMOVE('MY_DIR', 'text.txt');
end;
/
You can read more in the UTL_FILE documentation: https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_file.htm#BABEGHIG
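For the /home/test/pictures case, a rough sketch (assuming you have the privilege to create a directory object; note that PL/SQL cannot list a directory's contents by itself, so the file names must be supplied some other way, and a.jpg/b.jpg are placeholders):

-- one-time setup
CREATE OR REPLACE DIRECTORY pictures_dir AS '/home/test/pictures';

BEGIN
  -- delete an explicit list of files from the directory object
  FOR r IN (SELECT column_value AS fname
              FROM TABLE(sys.odcivarchar2list('a.jpg', 'b.jpg'))) LOOP
    UTL_FILE.FREMOVE('PICTURES_DIR', r.fname);
  END LOOP;
END;
/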
I am trying to open multiple csv files using a list such as the one below:
filenames <- list.files("temp", pattern="*.csv", full.names=TRUE)
I have found examples that use lapply and read.csv to open all the files in the temp directory, but I know a priori what data I need to extract from each file, so to save reading time I want to use the SQL extension of this:
somefile = read.csv.sql("temp/somefile.csv", sql="select * from file ",eol="\n")
However, I am having trouble combining these two pieces of functionality into a single command so that I can read all the files in a directory, applying the same SQL query to each.
Has anybody had success doing this?
If you want a list of data frames, one per file (assuming your working directory contains the .csv files):

library(sqldf)  # provides read.csv.sql

filenames <- list.files(".", pattern = "\\.csv$")
df.list <- sapply(filenames, read.csv.sql, sql = "select * from file", eol = "\n", simplify = FALSE)
Or if you want them all combined into one data frame:

library(plyr)  # provides ldply

df <- ldply(filenames, read.csv.sql, sql = "select * from file", eol = "\n")
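Since you know a priori what you need, you can push the filter into the query itself; for example (assuming each file has a column named price, which is just a placeholder here):

df <- ldply(filenames, read.csv.sql, sql = "select * from file where price > 100", eol = "\n")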