Is it possible to direct output to a file with the same name as the input file, i.e. to overwrite it? - awk

I would like the output file to overwrite the input file, because of the limited disk space I have on my system. Is it possible? I know this is not recommended, but I already have the input files backed up. I will use a loop in a shell script to run the cut command.
#!/bin/bash
for i in {1..1000}
do
cut --delimiter=' ' --fields=1,3-7 input$i.txt > input$i.txt
done

You could always use a temporary file to which you redirect, and then when you're sure everything went fine, you rename it to the original file.
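A minimal sketch of that approach, based on the loop from the question (the && guard, which only replaces the original if cut succeeded, is an assumption about the desired behaviour):
#!/bin/bash
for i in {1..1000}
do
    # write to a temporary file first, then replace the original
    # only if cut succeeded
    cut --delimiter=' ' --fields=1,3-7 "input$i.txt" > "input$i.txt.tmp" \
        && mv "input$i.txt.tmp" "input$i.txt"
done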

Some GNU utilities (such as sed) have a -i option that allows you to change a file in place. Most of the file filtering and editing that cut does can also be done with sed.
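As a rough sed sketch of the cut job above, dropping the second space-separated field in place (this only approximates --fields=1,3-7, since it also keeps fields past the seventh; note too that GNU sed -i itself works through a temporary file behind the scenes, so it does not lower peak disk usage):
# delete the second space-delimited field, keeping field 1 and
# everything from field 3 onwards
sed -i -E 's/^([^ ]+) [^ ]+ /\1 /' input1.txt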

The shell parses the command and handles the redirections first. When it sees "> afile" it truncates "afile" and opens it for writing. Your data is now destroyed. Then the shell hands the filename to cut, which now has nothing to read.
This is how I learned:
some < my_file | pipeline > my_file.tmp
ln my_file my_file.bak # this is a hard link
mv my_file.tmp my_file
That keeps the original data in place for as long as possible.
If disk space is so tight that you cannot hold even a temporary copy, you will have to read the input file into memory entirely before overwriting it.
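A hedged sketch of that idea, using a shell variable as the in-memory buffer (this assumes the file is text that fits in memory and contains no NUL bytes; command substitution also strips trailing blank lines):
# read the filtered result into memory first ...
content=$(cut --delimiter=' ' --fields=1,3-7 input1.txt)
# ... then overwrite the original file from memory
printf '%s\n' "$content" > input1.txt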

In case of very limited disk space (disk quota) you could try to place a compressed copy of the source file in RAM (/dev/shm) and use that as the source, uncompressing it to stdout and piping that to your script.
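A rough sketch, assuming a tmpfs is mounted at /dev/shm and there is enough RAM for the compressed copy:
# park a compressed copy of the source in RAM, then free the disk space
gzip -c input1.txt > /dev/shm/input1.txt.gz
rm input1.txt
# decompress from RAM and filter straight back to disk
zcat /dev/shm/input1.txt.gz | cut --delimiter=' ' --fields=1,3-7 > input1.txt
rm /dev/shm/input1.txt.gz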

Related

OpenVMS: Extracting an RMS Indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years. It was my first OS. I've been asked if it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one has experience or knowledge of the record structures etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt would be to try and use Datatrieve (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT/FDL=nnnn.FDL and changing the Relative organization to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help....
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla....
Another plan is writing a C application to read a file and output it as strings..... or DCL too..... it doesn't have to be quick...
Any ideas?
As mentioned before:
The simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT/FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY - because this really only works well for a TEXT-ONLY source .DAT file.
There should be no CR, LF, FF, or NUL characters in the source, or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP, then you will need to find a record definition somewhere which maps bytes to integers or floating points or dates as needed.
These definitions can be COBOL LIB files or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC dictionaries.
To use such a definition you likely need a program that just reads along the 'map' and writes/prints as text. Normally that's not too hard - notably not when you can find an example program on the server to tweak.
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write and use F$EXTRACT to select the chunks you like.
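A hedged DCL sketch of such a loop (the file names and byte offsets are made-up examples; pick ranges that match your actual record layout):
$ open/read in_file test.DAT
$ open/write out_file test.TXT
$ loop:
$   read/end_of_file=done in_file record
$!  keep only bytes 0-19 and 30-39 of each record (offsets are examples)
$   write out_file f$extract(0, 20, record) + " " + f$extract(30, 10, record)
$   goto loop
$ done:
$ close in_file
$ close out_file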
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

How to split SQL in Mac OS X?

Is there any app for Mac to split SQL files, or even a script?
I have a large file which I have to upload to a host that doesn't support files over 8 MB.
*I don't have SSH access
You can use this: http://www.ozerov.de/bigdump/
Or
Use this command to split the SQL file:
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
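For reference, enabling it is a one-line setting in phpMyAdmin's config.inc.php (the directory name here is just an example):
// import files placed in this server-side directory become
// selectable on phpMyAdmin's Import page
$cfg['UploadDir'] = './upload';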
You can split into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
This makes up to 99 files named 'db-part[n]' from db.sql.
You can use "CREATE TABLE" or "INSERT INTO" instead of "# Dump of ..."
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!

OpenVMS - Add STRING to Filename via DCL

I have a number of files created by a program on our selling system that are produced in a format like the following:
CRY_SKI_14_EDI.LIS
CRY_SUM_14_EDI.LIS
THO_SKI_14_EDI.LIS
THO_LAK_14_EDI.LIS
CRY_SKI_IE_14_EDI.LIS
The number of these files differs depending on the split of our product across different brandings. Is it possible to rename them all so that they read like the following:
CRY_SKI_14_EDI_DEMO.LIS
CRY_SUM_14_EDI_DEMO.LIS
THO_SKI_14_EDI_DEMO.LIS
THO_LAK_14_EDI_DEMO.LIS
CRY_SKI_IE_14_EDI_DEMO.LIS
I need the files to be correctly named prior to their FTP, because a hardcoded file name could fail to exist (when that brand is not on sale) and terminate the FTP, which would prevent the files following it from being transmitted to our FTP server.
The OpenVMS rename command is more handy (imho) than the Windows or Unix variants, because it can bulk-change chunks of the full file name, such as 'name', 'type' or (sub)directory.
For example:
$ rename *.dat *.old
That's great, but it will not change text within the chunks (components), like the name part requested here.
For that, the classic DCL approach is a quick loop, either parsing directory output (Yuck!) or using F$SEARCH. For example:
$loop:
$ file = f$search("*EDI.LIS")
$ if file .eqs. "" then exit
$ name = f$parse(file,,,"name","syntax_only") ! grab name component from full name
$ rename/log 'file' 'name'_demo ! rename 'fills in the blanks'
$ goto loop
Personally I use PERL one-liners for this kind of work.
(and I test with -le using 'print' instead of 'rename' first. :-)
$ perl -e "for (<*edi.lis>) { $old = $_; s/_edi/_edi_demo/; rename $old,$_}"
Enjoy!
Hein

batch scripting: how to get parent dir name without full path?

I'm working on a script that processes a folder and there is always one file in it I need to rename. The new name should be the parent directory name. How do I get this in a batch file? The full path to the dir is known.
It is not very clear how the script is supposed to become acquainted with the path in question, but the following example should at least give you an idea of how to proceed:
FOR %%D IN ("%CD%") DO SET "DirName=%%~nxD"
ECHO %DirName%
This script gets the path from the CD variable and extracts just the name portion into DirName.
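Since the question says the full path is already known, the same trick works on a variable holding that path (the path below is hypothetical):
REM the known full path (example value)
SET "FullPath=C:\data\projects\myfolder"
FOR %%D IN ("%FullPath%") DO SET "DirName=%%~nxD"
ECHO %DirName%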
You can use the basename command:
FULLPATH=/the/full/path/is/known
JUSTTHENAME=$(basename "$FULLPATH")
You can use built-in bash tricks:
FULLPATH=/the/full/path/is/known
JUSTTHENAME=${FULLPATH##*/}
Explanations:
the first # means 'remove the pattern from the beginning'
a second # means 'remove the longest possible match'
*/ is the pattern
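A quick demonstration of the difference between one # and two, using a hypothetical path:
FULLPATH=/the/full/path/is/known
echo "${FULLPATH#*/}"    # prints 'the/full/path/is/known' (shortest match of */ removed)
echo "${FULLPATH##*/}"   # prints 'known' (longest match of */ removed)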
Using the bash built-in avoids calling an external command (i.e. basename), which makes your script faster. However, the script is less portable.

Windows Batch File - Using Append with File Name that has spaces

I am creating a batch file to consolidate some hard-coded text with a few other existing text files.
For this I am using the below:
set "txtFile=.\text.txt"
call:Append "C:\test 123\test.txt" %textFile%
When I execute it, it throws an error, as it is not able to proceed with the path because it has spaces.
How should this be addressed?
I have no idea what your append batch file is doing, but you can simply use copy to concatenate two files.
It's not clear to me what needs to be appended to what, but the following will append the contents of text.txt to C:\test 123\test.txt by writing everything to C:\test 123\test.txt.
set txtFile=.\text.txt
copy "C:\test 123\test.txt" /a + %txtFile% /a "C:\test 123\test.txt"
If you want a different output file, just change the last parameter.
Btw: it's better not to rely on a specific working directory.
The following:
set txtFile=%~dp0text.txt
makes sure that the text.txt is used that is in the same directory as your batch file.