I have this simple script to load filenames:
Files:
LOAD
Distinct
FileName() as File
FROM [C:\Matias\Capacity Tracker\AllFiles\*];
When I run the script, the log shows the following:
Files << Analyst Time Sheet - Adam W - 0730-0805 0 lines fetched
Files << Analyst Time Sheet - Adam W - 0806-0812 0 lines fetched
Files << Analyst Time Sheet - Agnieszka J - 0702-0708 2 lines fetched
Files << Analyst Time Sheet - Agnieszka J - 0709-0715 3 lines fetched
Files << Analyst Time Sheet - Agnieszka J - 0716-0722 4 lines fetched
And so on...
The strange thing is that the files from "Adam W" don't load anything (0 lines fetched), so I end up with the list of files except those. I find it very strange because, since I'm only asking for the filename, it shouldn't be a formatting issue (I think).
Any idea what could be happening and how I could solve it?
Thank you in advance
Matias
Although QlikView offers that * wildcard option in the filename of the LOAD statement, the results are sometimes a little bit random. I would recommend that you try a different approach and see if it works.
// Loop over every file in the folder and load just its name.
For Each FILE in FileList('C:\Matias\Capacity Tracker\AllFiles\*')
Files:
LOAD
Distinct FileName() as File
FROM [$(FILE)];
Next FILE
Hope this helps.
Thanks for your idea. I tried that and unfortunately I had the same problem. Finally, I solved it like this:
Files:
LOAD
Distinct
FileName() as File
FROM [C:\Matias\Capacity Tracker\AllFiles\*];

// Suppress script errors while retrying the files that fetched 0 lines.
SET ErrorMode=0;

// Retry the missing files as Excel 2007+ (ooxml) workbooks.
Files:
LOAD
Distinct
FileName() as File
FROM [C:\Matias\Capacity Tracker\AllFiles\*]
(ooxml, no labels, table is [Task Log])
Where Not Exists(File, FileName());

// If that failed, retry them as legacy Excel (biff) workbooks.
IF ScriptError <> 0 THEN
Files:
LOAD
FileName() as File
FROM [C:\Matias\Capacity Tracker\AllFiles\*]
(biff, no labels, table is [Task Log$])
Where Not Exists(File, FileName());
ENDIF
Although they are all .xls files, there seem to be formatting differences between them. The ones not loaded at first were picked up by the following (ooxml) statement, or, if that failed, by the second one (biff). Quite strange.
Maybe this is not the best or most proper solution, but it was the only one that worked to load all the filenames from the folder.
Related
I'm on macOS and I am looking for a way to rename multiple photo files and their variations like the following:
Originals - "productcode"_"photonumber"
3102_1529.jpg
3102_1534.jpg
3102_1766.jpg
3103_1844.jpg
3103_1845.jpg
3103_1854.jpg
3103_1856.jpg
Renamed - "productcode"_"count"
3102_1.jpg
3102_2.jpg
3102_3.jpg
3103_1.jpg
3103_2.jpg
3103_3.jpg
3103_4.jpg
As you can see above, some files have 3 variations while some have 4 or 5, and I want to number the variations (after the underscore) starting the count from 1.
Can you help me, please?
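For reference, a minimal Python sketch of one way to do this, assuming all the photos sit in a single folder (the folder path is a placeholder) and that sorting the original names puts each product's variations in the desired order:

import os
from collections import defaultdict

folder = "/Users/you/photos"  # placeholder: adjust to your folder

# Group the files by product code (the part before the underscore).
groups = defaultdict(list)
for name in sorted(os.listdir(folder)):
    if name.endswith(".jpg") and "_" in name:
        code = name.split("_")[0]
        groups[code].append(name)

# Within each group, renumber the variations starting from 1.
for code, names in groups.items():
    for count, name in enumerate(names, start=1):
        new_name = "%s_%d.jpg" % (code, count)
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))

Since this renames in place, it's worth testing on a copy of the folder first: a target name like 3102_1.jpg could clash if a file with exactly that name already exists.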
I have run into a problem.
I have 10 large separate files, without column headers, which total nearly 4 GB and need to be merged. I have been told they are text files and pipe-delimited, so I added the txt file extension to each file, which I hope is not the problem. RStudio crashes when I use the following code...
multmerge = function(mypath){
  filenames = list.files(path = mypath, full.names = TRUE)
  datalist = lapply(filenames, function(x){
    read.csv(file = x, header = FALSE, sep = "|")
  })
  Reduce(function(x, y){merge(x, y, all = TRUE)}, datalist)
}
mymergeddata = multmerge("C://FolderName//FolderName")
or when I try to do something like this...
temp1 <- read.csv(file = "filename.txt", sep = "|")
...
temp10 <- read.csv(file = "filename.txt", sep = "|")
SomeData <- Reduce(function(x, y) merge(x, y), list(temp1, ..., temp10))
I am seeing errors such as
"Error: C stack usage is too close to the limit r" and
"In scan(file = file, what = what, sep = sep, quote = quote, dec = dec, :
Reached total allocation of 8183Mb: see help(memory.size)"
Then I saw that someone asked a similar question on SO as I was writing this one (here), so I was wondering whether SQL commands could be used in RStudio or SSMS to merge these large files. If they can, how should the merge be done? If it can be done, please can you advise me how. I will keep looking around on the net.
If it can't, then what is the best method to merge these rather large files? Can this be achieved in RStudio, or is there an open-source alternative?
I am working on a PC with 64-bit Windows and 8 GB of RAM. I have included the R and SQL tags to see what options there are.
Thanks in advance if anyone can help me.
Your machine doesn't have enough memory for your selected operations.
You have 10 files ~ 4GB in total.
When you merge the 10 files you create another object which is also about 4GB, putting you very close to your machine's limit.
Your operating system, R, and whatever else you're running also consume RAM, so it's no surprise you run out.
I'd suggest taking a stepwise approach if you don't have access to a bigger machine; a rough sketch follows the list:
- Take the first two files and merge them.
- Delete the file objects from R and keep only the merged one.
- Load the third file and merge it with the earlier result.
- Repeat until done.
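Purely as an illustration, here is the same stepwise idea sketched in Python with pandas (assumptions: pandas is available, the files match the placeholder path, and an outer join on the shared columns plays the role of merge(x, y, all = TRUE)):

import gc
import glob
import pandas as pd  # assumption: pandas is installed

# Placeholder path/pattern for the ten pipe-delimited files.
files = sorted(glob.glob("C:/FolderName/FolderName/*.txt"))

# Start from the first file, then fold the remaining files in one at a time.
merged = pd.read_csv(files[0], header=None, sep="|")
for path in files[1:]:
    nxt = pd.read_csv(path, header=None, sep="|")
    merged = pd.merge(merged, nxt, how="outer")  # like merge(x, y, all = TRUE)
    del nxt       # drop the per-file object...
    gc.collect()  # ...and release its memory before loading the next file

Even so, the final merged object still has to fit in RAM, so keep an eye on the size of the intermediate result as you go.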
I have 24 SPSS files in .sav format in a single folder. All these files have the same structure. I want to run the same syntax on all of them. Is it possible to write code in SPSS to do this?
You can use the SPSSINC PROCESS FILES user-submitted command to do this, or write your own macro. So first let's create some very simple fake data to work with.
FILE HANDLE save /NAME = "Your Handle Here!".
*Creating some fake data.
DATA LIST FREE / X Y.
BEGIN DATA
1 2
3 4
END DATA.
DATASET NAME Test.
SAVE OUTFILE = "save\X1.sav".
SAVE OUTFILE = "save\X2.sav".
SAVE OUTFILE = "save\X3.sav".
EXECUTE.
*Creating a syntax file to call.
DO IF $casenum = 1.
PRINT OUTFILE = "save\TestProcess_SHOWN.sps" /"FREQ X Y.".
END IF.
EXECUTE.
Now we can use the SPSSINC PROCESS FILES command to specify the sav files in the folder and apply the TestProcess_SHOWN.sps syntax to each of those files.
*Now example calling the syntax.
SPSSINC PROCESS FILES INPUTDATA="save\X*.sav"
SYNTAX="save\TestProcess_SHOWN.sps"
OUTPUTDATADIR="save" CONTINUEONERROR=YES
VIEWERFILE= "save\Results.spv" CLOSEDATA=NO
MACRONAME="!JOB"
/MACRODEFS ITEMS.
Another (less advanced) way is to use the INSERT command. To do so, repeatedly GET each .sav file, run the syntax with INSERT, and save the file. Probably something like this:
get 'file1.sav'.
insert file='syntax.sps'.
save outf='file1_v2.sav'.
dataset close all.
get 'file2.sav'.
insert file='syntax.sps'.
save outf='file2_v2.sav'.
etc etc.
Good luck!
If the syntax you need to run is completely independent of the files, then you can either use INSERT FILE = 'syntax.sps' or put the code in a macro, e.g.
Define !Syntax ()
* Put Syntax here
!EndDefine.
You can then run either of these 'manually':
get file = 'file1.sav'.
insert file='syntax.sps'.
save outfile ='file1_v2.sav'.
Or
get file = 'file1.sav'.
!Syntax.
save outfile ='file1_v2.sav'.
Or, if the files follow a reasonably strict naming structure, you can embed either of the above in a simple bit of Python:
Begin Program.
import spss
# Loop over file1.sav ... file24.sav, apply the syntax, save a _v2 copy.
for i in range(1, 24 + 1):
    syntax = "get file = 'file" + str(i) + ".sav'.\n"
    syntax += "insert file='syntax.sps'.\n"
    syntax += "save outfile = 'file" + str(i) + "_v2.sav'.\n"
    print syntax
    spss.Submit(syntax)
End Program.
I'm doing some optimization using a model whose number of constraints and variables exceeds the cap of the student version of, say, AMPL, so I've found a webpage [http://www.neos-server.org/neos/solvers/milp:Gurobi/AMPL.html] which can solve my type of model.
I've found, however, that when using a solver where you can provide a command file (which I assume is the same as a .run file), the NEOS server documentation tells you to see the documentation for the input file. I'm using AMPL input, which according to [http://www.neos-guide.org/content/FAQ#ampl_variables] should be able to print the decision variables using a command file of the form:
solve;
display _varname, _var;
The problem is that NEOS claims that you cannot add the
data datafile;
model modelfile;
commands to the .run file, with the result that the compiler cannot find the variables.
Does anyone know of a way to work around this?
Thanks in advance!
EDIT: If anyone else has this problem (which, based on my Internet search, I believe many people do): try removing any reset; command from the .run file!
You don't need to specify model or data commands in the script file submitted to NEOS. It loads the model and data files automatically, solves the problem, and then executes the script (command file) you provide. For example, submitting the diet1.mod model, the diet1.dat data, and this trivial command file
display _varname, _var;
produces output that includes
: _varname _var :=
1 "Buy['Quarter Pounder w/ Cheese']" 0
2 "Buy['McLean Deluxe w/ Cheese']" 0
3 "Buy['Big Mac']" 0
4 "Buy['Filet-O-Fish']" 0
5 "Buy['McGrilled Chicken']" 0
6 "Buy['Fries, small']" 0
7 "Buy['Sausage McMuffin']" 0
8 "Buy['1% Lowfat Milk']" 0
9 "Buy['Orange Juice']" 0
;
As you can see this is the output from the display command.
I have about 350000 one-column csv files, which are essentially 200 - 2000 numbers printed one under another. The numbers are formatted like this: "-1.32%" (no quotes). I want to merge the files to create a monster of a csv file where each file is a separate column. The merged file will have 2000 rows maximum (each column may have a different length) and 350000 columns.
I thought of doing it with MySQL, but there is a 30000-column limit. An awk or sed script could do the job, but I don't know them all that well and I am afraid it would take a very long time. I could use a server if the solution requires it. Any suggestions?
This Python script will do what you want:
#!/usr/bin/env python2
import sys
import codecs

# Open a handle for every file named on the command line.
fhs = [codecs.open(filename, 'r', 'utf-8') for filename in sys.argv[1:]]

while True:
    # Read the next line from each file; exhausted files contribute ''.
    row = [fh.readline() for fh in fhs]
    if not any(row):
        break  # every file is exhausted
    # Join the stripped lines into one comma-separated output row;
    # exhausted files simply produce empty cells.
    sys.stdout.write(','.join(cell.rstrip() for cell in row))
    sys.stdout.write('\n')

for fh in fhs:
    fh.close()
Call it with all the CSV files you want to merge and it will print a new file to stdout.
Note that you can't merge all the files at once: for one thing, you can't pass 350,000 file names as arguments to a process, and for another, a process can typically only open 1024 files at once.
So you'll have to do it in several passes, i.e. merge files 1-1000, then 1001-2000, etc. Then you should be able to merge the 350 resulting intermediate files at once.
Or you could write a wrapper script which uses os.listdir() to get the names of all the files and calls this script several times.
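For instance, a rough sketch of such a wrapper (the folder name, batch size, and the merge.py filename are assumptions; it presumes the script above was saved as merge.py):

import os
import subprocess

SRC = 'csvdir'  # placeholder: folder holding the 350,000 input files
BATCH = 1000    # stay well under the per-process open-file limit

names = sorted(os.listdir(SRC))
for n, start in enumerate(range(0, len(names), BATCH)):
    batch = [os.path.join(SRC, f) for f in names[start:start + BATCH]]
    # Run the merge script on this batch and capture its stdout in a file.
    with open('intermediate_%04d.csv' % n, 'w') as out:
        subprocess.check_call(['python', 'merge.py'] + batch, stdout=out)

A second pass with the same script over the ~350 intermediate files then produces the final merged CSV.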