Directory.EnumerateFiles with Take and Where - vb.net

I have the following problem:
For Each _file As String In Directory.EnumerateFiles(Quellpfad, "*.rdy").Take(500).Where(Function(item) item.Replace(Quellpfad, "").Length <= 11)
This code should take files from a directory whose path is saved in the string Quellpfad, using these two criteria:
1.) only 500 files
2.) filename length <= 11, e.g.: 0330829.rdy
The file 0330829.rdy is in the directory, but I can't find it with the code above.

You should apply Take last, because you want the filter to run first. You should also use Path.GetFileName or Path.GetFileNameWithoutExtension instead of String.Replace:
Dim files = From file In Directory.EnumerateFiles(Quellpfad, "*.rdy")
Where Path.GetFileName(file).Length <= 11
Take 500
VB.NET query syntax supports Take, so I'd prefer that.

You need to reorder your statement to put the Where before the Take:
For Each _file As String In Directory.EnumerateFiles(Quellpfad, "*.rdy").Where(Function(item) item.Replace(Quellpfad, "").Length <= 11).Take(500)
The Where first returns all files matching your condition, and the Take then limits those down to 500.
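The difference in ordering can be sketched in Python, with itertools.islice standing in for Take and a generator expression for Where (the directory listing here is hypothetical, and the cap is shrunk to 1 to make the effect visible):

```python
from itertools import islice

# Hypothetical directory listing: a long name first, a short one second.
files = ['averylongfilename.rdy', '0330829.rdy']

# Take(1).Where(...): the cap is applied before the filter,
# so the short name never gets a chance to match.
take_then_where = [f for f in islice(files, 1) if len(f) <= 11]

# Where(...).Take(1): the filter sees every name, then the cap applies.
where_then_take = list(islice((f for f in files if len(f) <= 11), 1))

print(take_then_where)  # []
print(where_then_take)  # ['0330829.rdy']
```

Because both LINQ and islice are lazy, putting the filter first does not force a full directory scan once 500 matches have been found.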

Related

include a txt file in GAMS - errors 140 & 36 in the first lines of my txt file

I am a new user of GAMS and I am trying to include a txt file in my code, but I keep getting the same errors (error 140 & error 36) on the first lines of my txt file.
Could anyone help?
My code goes like this (I have also attached the txt file):
* define the set of asset classes
set n Number of returns /n1*n120/;
* define Tables, Parameters, Scalars
Scalar T /120/;
$INCLUDE prices.txt
Please note that later in the code I need to use the data from the text file in an equation like this:
EQ1.. sum(n, p(n)*prices(n)) =e= price0*exp(r*time);
Thanks
Your prices.txt does not contain valid GAMS syntax. What you need is something like this:
Parameter prices(n) /
n1 455.5
n2 44545.5
/;
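If prices.txt currently holds nothing but raw numbers, one per line, a short script can wrap them in that Parameter syntax before the $INCLUDE; a minimal sketch (the parameter name and set labels follow the answer above, the helper itself is hypothetical):

```python
def to_gams_parameter(values, name='prices'):
    # Wrap raw one-per-line price values in GAMS Parameter syntax,
    # labelling them n1, n2, ... to match the set /n1*n120/.
    rows = '\n'.join('n%d %s' % (i, v.strip())
                     for i, v in enumerate(values, start=1) if v.strip())
    return 'Parameter %s(n) /\n%s\n/;' % (name, rows)

print(to_gams_parameter(['455.5', '44545.5']))
```

Writing the result back over prices.txt (or into a separate .inc file) lets the existing $INCLUDE line pick it up unchanged.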

Creating a $variable from a specific part of a txt file

I'm trying to get PowerShell to use a specific section of a text file as a $variable to be used later in the script.
With Get-Content and an index I can get as far as having a whole line, but I just want one word to be the variable, not the whole thing.
The alphanumeric code will always be in exactly the same location:
line 5 (counting the first line as 0, of course), between characters 22 and 30 (i.e. the last 8 characters of that line).
I would like that section of the document to be identified as $txtdoc, to be used later in:
$inputfield = $ie.Document.getElementByID('input5')
$inputfield.value = $txtdoc
The txt file contains the following
From: *************
Sent: *************
To: *******************
Subject: *************
On-Demand Tokencode: 79960739
Expires after use or 60 minutes
this maybe?
$variable = ( gc mytext.txt )[5].substring(21,8)
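For comparison, the same extraction in Python; Substring(21, 8) maps onto the slice [21:29] (the sample line is taken from the question):

```python
line = "On-Demand Tokencode: 79960739"

# Characters 22-30 (zero-based start 21, length 8) hold the code,
# same as PowerShell's .substring(21,8).
token = line[21:29]
print(token)  # 79960739
```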

How to run same syntax on multiple spss files

I have 24 SPSS files in .sav format in a single folder. All these files have the same structure. I want to run the same syntax on all of them. Is it possible to write code in SPSS to do this?
You can use the user-submitted SPSSINC PROCESS FILES command to do this, or write your own macro. So first, let's create some very simple fake data to work with.
*FILE HANDLE save /NAME = "Your Handle Here!".
*Creating some fake data.
DATA LIST FREE / X Y.
BEGIN DATA
1 2
3 4
END DATA.
DATASET NAME Test.
SAVE OUTFILE = "save\X1.sav".
SAVE OUTFILE = "save\X2.sav".
SAVE OUTFILE = "save\X3.sav".
EXECUTE.
*Creating a syntax file to call.
DO IF $casenum = 1.
PRINT OUTFILE = "save\TestProcess_SHOWN.sps" /"FREQ X Y.".
END IF.
EXECUTE.
Now we can use the SPSSINC PROCESS FILES command to specify the sav files in the folder and apply the TestProcess_SHOWN.sps syntax to each of those files.
*Now example calling the syntax.
SPSSINC PROCESS FILES INPUTDATA="save\X*.sav"
SYNTAX="save\TestProcess_SHOWN.sps"
OUTPUTDATADIR="save" CONTINUEONERROR=YES
VIEWERFILE= "save\Results.spv" CLOSEDATA=NO
MACRONAME="!JOB"
/MACRODEFS ITEMS.
Another (less advanced) way is to use the INSERT command. To do so, repeatedly GET each .sav file, run the syntax via INSERT, and save the file. Probably something like this:
get 'file1.sav'.
insert file='syntax.sps'.
save outf='file1_v2.sav'.
dataset close all.
get 'file2.sav'.
insert file='syntax.sps'.
save outf='file2_v2.sav'.
etc etc.
Good luck!
If the syntax you need to run is completely independent of the files, then you can either use INSERT FILE = 'Syntax.sps' or put the code in a macro, e.g.:
Define !Syntax ()
* Put Syntax here
!EndDefine.
You can then run either of these 'manually':
get file = 'file1.sav'.
insert file='syntax.sps'.
save outfile ='file1_v2.sav'.
Or
get file = 'file1.sav'.
!Syntax.
save outfile ='file1_v2.sav'.
Or, if the files follow a reasonably strict naming structure, you can embed either of the above in a simple bit of Python:
Begin Program.
import spss
for i in range(1, 24 + 1):
    syntax = "get file = 'file" + str(i) + ".sav'.\n"
    syntax += "insert file='syntax.sps'.\n"
    syntax += "save outfile ='file" + str(i) + "_v2.sav'.\n"
    print syntax
    spss.Submit(syntax)
End Program.

which set command to display entire line in sql output to txt file

I am trying to output a query result to a txt file in Windows. The query is executed in Oracle. I want to export the entire record for each row, but the output gets cut off at the end; the query itself, however, displays the full line.
I thought the command:
SET linesize 2000
would do the trick, but no luck:
Getting:
2702M11F13-XL 38550116-06 Test 3 325 http://www.test.com/clot
Should get (what shows in query output):
2702M11F13-XL 38550116-06 Text 3 325 http://www.test.com/clothing/outerwear/coats/test/hybridge-lite-vest/p/38550116 CAD
Please help.
Thanks in advance
It should be possible using SET COLSEP.
Refer to the Oracle SQL*Plus documentation for SET COLSEP.
Note that there is also SET TAB {ON|OFF}, which controls whether tabs are converted to spaces.

Create a 350000 column csv file by merging smaller csv files

I have about 350000 one-column csv files, which are essentially 200 - 2000 numbers printed one under another. The numbers are formatted like this: "-1.32%" (no quotes). I want to merge the files to create a monster of a csv file where each file is a separate column. The merged file will have 2000 rows maximum (each column may have a different length) and 350000 columns.
I thought of doing it with MySQL, but there is a 30000-column limit. An awk or sed script could do the job, but I don't know them all that well, and I am afraid it would take a very long time. I could use a server if the solution requires one. Any suggestions?
This Python script will do what you want:
#!/usr/bin/env python2
import sys
import codecs

# Open one handle per input file; each file becomes one column.
fhs = []
for filename in sys.argv[1:]:
    fhs.append(codecs.open(filename, 'r', 'utf-8'))

while True:
    # Read the next line of every file; readline() returns '' at EOF.
    lines = [fh.readline() for fh in fhs]
    if not any(lines):
        # Every file is exhausted, so stop.
        break
    delim = ''
    for line in lines:
        sys.stdout.write(delim)
        delim = ','
        sys.stdout.write(line.rstrip())
    sys.stdout.write('\n')

for fh in fhs:
    fh.close()
Call it with all the CSV files you want to merge, and it will print the merged file to stdout.
Note that you can't merge all the files at once: for one thing, you can't pass 350,000 file names as arguments to a process, and for another, a process can typically only open around 1024 files at once.
So you'll have to do it in several passes, i.e. merge files 1-1000, then 1001-2000, etc. You should then be able to merge the 350 resulting intermediate files in one go.
Or you could write a wrapper script which uses os.listdir() to get the names of all the files and calls this script several times.
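Such a wrapper could look roughly like this (a sketch: merge.py stands for the merge script, and the batch size of 1000 is an assumption chosen to stay under the argument-length and open-file limits):

```python
import os
import subprocess

def batches(names, batch_size=1000):
    # Split the full file list into chunks small enough for one pass.
    for i in range(0, len(names), batch_size):
        yield names[i:i + batch_size]

def merge_all(src_dir, out_dir, batch_size=1000):
    names = sorted(n for n in os.listdir(src_dir) if n.endswith('.csv'))
    for k, batch in enumerate(batches(names, batch_size)):
        paths = [os.path.join(src_dir, n) for n in batch]
        out_path = os.path.join(out_dir, 'part%05d.csv' % k)
        # Run the merge script on this batch, writing one intermediate file.
        with open(out_path, 'w') as out:
            subprocess.check_call(['python', 'merge.py'] + paths, stdout=out)
```

The resulting part*.csv intermediates can then be combined with one final run of the same merge script.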