I have a big file and I would like to replace its first line with other content.
When I use {ok, IoDev} = file:open("/root/FileName", [write, raw, binary]), the whole content of the file is removed.
But when I use {ok, IoDev} = file:open("/root/FileName", [append, raw, binary]) and file:pwrite(IoDev, {bof,0}, <<"new content\n">>), I get the result {error, badarg}.
If I set Location to 0: file:pwrite(IoDev, 0, <<"new content\n">>), the string is appended at the end of the file.
You seem to be confused about the actual file API.
file:open/2 will truncate the file if you pass [write, raw, binary] as you do:
(about write mode): The file is opened for writing. It is created if it does not exist. If the file exists, and if write is not combined with read, the file will be truncated.
So you need to pass either [write, read] or [write, append] as documented.
file:pwrite/3 also works exactly as documented. It allows you to write at a given position in the file. In particular, you cannot pass {bof, 0} as the second argument, since you opened the file in raw mode:
If IoDevice has been opened in raw mode, some restrictions apply: Location is only allowed to be an integer; and the current position of the file is undefined after the operation.
The following sample code shows how they work:
ok = file:write_file("/tmp/file", "This is line 1.\nThis is line 2.\n"),
{ok, F} = file:open("/tmp/file", [read, write, raw, binary]),
ok = file:pwrite(F, 0, <<"This is line A.\n">>),
ok = file:close(F),
{ok, Content} = file:read_file("/tmp/file"),
io:put_chars(Content),
ok = file:delete("/tmp/file").
It will output:
This is line A.
This is line 2.
This works because the text "This is line A.\n" is exactly as long as "This is line 1.\n". It does not really replace the line, just the bytes. If you need to replace the first line with content of a different length, you need to rewrite the whole content of the file. A common approach is indeed to write a new file and swap them eventually. If the file is small enough, however, you can read it entirely into memory and rewrite it. file:read_file/1 and file:write_file/2 would work:
replace_first_line(Path, NewLine) ->
    {ok, Content} = file:read_file(Path),
    %% split on the first newline only; the old first line is discarded
    [_FirstLine | Tail] = binary:split(Content, <<"\n">>),
    NewContent = [NewLine, <<"\n">> | Tail],
    ok = file:write_file(Path, NewContent).
The question is not really related to Erlang but rather to general file operations.
Replacing a line in a file requires rewriting the file as a whole. The easiest way to do so is to write all the new content to a new file and then move the new file over the old one.
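To make the idea concrete, here is a minimal sketch of that approach, written in Python only because the point is language-independent; the function name, temporary-file suffix and chunk size are illustrative, and new_first_line is assumed to be a bytes string:
import os
def replace_first_line(path, new_first_line):
    tmp_path = path + ".tmp"
    src = open(path, "rb")
    dst = open(tmp_path, "wb")
    try:
        src.readline()                      # skip the old first line
        dst.write(new_first_line + b"\n")   # write the replacement line
        chunk = src.read(65536)             # copy the rest of the file in chunks
        while chunk:
            dst.write(chunk)
            chunk = src.read(65536)
    finally:
        src.close()
        dst.close()
    os.rename(tmp_path, path)  # swap in the new file (in place on POSIX; on Windows remove the old file first)
Writing to a temporary file first also means the original stays intact if anything fails halfway through.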
Related
I want to edit the data in my FITS file using astropy and then save it back to its original file. Below are my code and the error message. Please ignore any redundant line; obviously I opened the file twice, but I still get the error after deleting that line.
# (imports implied by the snippet)
import glob
import numpy as np
from astropy.io import fits
file_list = sorted(glob.glob('*.fits'))  # read in my three fits files
hdudata = np.full((3, 720, 1440), 0)     # a test list to store the data
for im in range(len(file_list)):
    hdu_list = fits.open(file_list[im])
    hdudata[im] = hdu_list[0].data       # read in the data from fits file
    if im == 2:                          # I only want to change the last image
        with fits.open(file_list[im], mode='update') as hdus:
            hdu = hdus[0]
            hdu.data = (hdudata[im-1] + hdudata[im])/2.  # basically add two images
                                                         # and take the average
            hdu.close()  # this is required otherwise an error message pops up saying
                         # the next line cannot proceed as the file is being run
            hdu.flush()  # the error line
VerifyError:
Verification reported errors:
HDU 0:
'NAXIS1' card at the wrong place (card 4).
'NAXIS2' card at the wrong place (card 5).
'EXTEND' card at the wrong place (card 6).
Note: astropy.io.fits uses zero-based indexing.
I have only accessed and changed the data, so why is the error occurring in my header? I had no problem reading the headers (though I didn't include that in the code above), so why does it fail when saving?
For a CSV file generated in WLST / Jython 2.2.1, I want to update the header (the first line of the output file) when new metrics have been detected. This works fine by using seek to go to the first line and overwriting it, but it fails when the first line exceeds 8091 characters.
I made a simplified script that reproduces the issue I am facing:
#!/usr/bin/python
#
import sys
global maxheaderlength
global initheader
maxheaderlength=8092
logFilename = "test.csv"
# Create (overwrite existing) file
logfileAppender = open(logFilename,"w",0)
logfileAppender.write("." * maxheaderlength)
logfileAppender.write("\n")
logfileAppender.close()
# Append some lines
logfileAppender = open(logFilename,"a",0)
logfileAppender.write("2nd line\n")
logfileAppender.write("3rd line\n")
logfileAppender.write("4th line\n")
logfileAppender.write("5th line\n")
logfileAppender.close()
# Seek back to beginning of file and add data
logfileAppender = open(logFilename,"r+",0)
logfileAppender.seek(0) ;
header = "New Header Line" + "." * maxheaderlength
header = header[:maxheaderlength]
logfileAppender.write(header)
logfileAppender.close()
When maxheaderlength is 8091 or lower, I get the results I expect: the file test.csv starts with "New Header Line" followed by 8076 dots, and then by the lines
2nd line
3rd line
4th line
5th line
When maxheaderlength is 8092 or greater, test.csv ends up starting with 8092 dots, followed by "New Header Line" and then by 8077 dots. The 2nd to 5th lines are not shown, probably overwritten by the dots.
Any idea how to work around or fix this?
I too was able to reproduce this extremely odd behaviour, and indeed it works correctly in Jython 2.5.3, so I think we can safely say this is a bug in 2.2.1 (which unfortunately you're stuck with for WLST).
My usual recourse in these circumstances is to fall back to native Java methods. Changing the last block of code as follows seems to work as expected:
# Seek back to beginning of file and add data
from java.io import RandomAccessFile
logfileAppender = RandomAccessFile(logFilename, "rw")
logfileAppender.seek(0) ;
header = "New Header Line" + "." * maxheaderlength
header = header[:maxheaderlength]
logfileAppender.writeBytes(header)
logfileAppender.close()
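Not part of the original answer, but as a quick sanity check you can read the first line back and confirm it now starts with the new header:
# optional check (illustration only, not from the original answer)
check = open(logFilename, "r")
print check.readline()[:15]   # should print "New Header Line"
check.close()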
So here's the issue, guys:
I have a very simple little program that reads in some setup details from a file (to make it reusable for other sets of data) and stores them in variables.
It then uses one of those variables to open another file that I need to write some results to, as well as various search parameters.
When I pass the variable to the open() function, it fails, saying it can't find the file; but when I pass the exact same information as a literal string instead of a variable, it works.
Is this a known problem, or am I just doing something wrong?
The code (the problem line is marked with a comment):
def urlTrawl(filename):
    import urllib
    read = open(getMediaPath(filename), "rt")
    baseurl = read.readline()
    orgurl = read.readline()
    lasturlfile = read.readline()
    linksfile = read.readline()
    read.close()
    webpage = ""
    links = ""
    counter = 0
    lasturl = ""
    nexturl = ""
    url = ""
    connection = ""
    try:
        read = open(lasturlfile, "rt")
        lasturl = read.readline()
    except IOError:
        print "IOError"
    webpage = connection.read()
    connection.close()
    file = open(linksfile, "wt")   # <-- the problem line: fails when given the variable
    file.close()
    file = open(lasturlfile, "wt")
    file.write(nexturl)
    return 1
The information being passed in
http://www.questionablecontent.net/
http://www.questionablecontent.net/view.php?comic=2480
C:\\Users\\James\\Desktop\\comics\\qclast.txt
C:\\Users\\James\\Desktop\\comics\\comiclinksqc.txt
strip\"
src=\"
\"
Pevious
Next
f=\"
\"
EDIT: removed the working code to narrow down the problem area, and updated the code to use a direct reference rather than a relative one.
I found the problem in the end.
The problem was that it was reading in the \n at the end of each line in my details file, and of course the \n isn't anywhere in the website data I'm reading. Removing the last character of each read did the trick:
baseurl = baseurl[:-1]
orgurl = orgurl[:-1]
lasturlfile = lasturlfile[:-1]
linksfile = linksfile[:-1]
search1 = search1[:-1]
search2 = search2[:-1]
search3 = search3[:-1]
search4 = search4[:-1]
search5 = search5[:-1]
search6 = search6[:-1]
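As an aside (a suggestion, not part of the original fix): str.rstrip() does the same cleanup a bit more robustly, since it also handles \r\n line endings and a final line with no trailing newline:
# e.g. instead of baseurl[:-1]
baseurl = baseurl.rstrip("\r\n")
lasturlfile = lasturlfile.rstrip("\r\n")
linksfile = linksfile.rstrip("\r\n")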
I might not be right, but I think this is what's happening.
You're saying this works fine:
file = open('C:\\Users\\James\\Desktop\\comics\\comiclinksqc.txt', "wt")
But this doesn't:
# After reading three lines
linksfile = read.readline()
file = open(linksfile, "wt")
There is a difference between these two. In the first piece of code, the double slashes are escapes. They resolve to single slashes when Python is done parsing. Like so:
>>> print 'C:\\Users\\James\\Desktop\\comics\\comiclinksqc.txt'
C:\Users\James\Desktop\comics\comiclinksqc.txt
But when you read that same text from the file, there's no parsing of the text. That means that the string stored in your variable still has double slashes.
Try this command out. I bet it fails the same way as when you read the file path in:
file = open(r'C:\\Users\\James\\Desktop\\comics\\comiclinksqc.txt', "wt")
The r stands for "raw"; it prevents Python from interpreting escape characters. If it does fail the same way, then the double slashes are your problem. To fix it, in your file, you need to remove the double slashes:
C:\Users\James\Desktop\comics\comiclinksqc.txt
This isn't a problem in CPython 2.7, and I'm betting it's not in 3.x either. CPython handles double slashes in such a way that they are effectively a single slash (in most cases, at least). So this may be an issue specific to Jython.
If unclean paths cause errors, you might want to consider doing something to clean them up. os.path.abspath might be helpful, although I can't say if Jython's implementation works as well as CPython's:
>>> print os.path.abspath(r'C:\\Users\\James\\Desktop\\comics\\comiclinksqc.txt')
C:\Users\James\Desktop\comics\comiclinksqc.txt
>>> print os.path.abspath(r'C:/Users/James/Desktop/comics/comiclinksqc.txt')
C:\Users\James\Desktop\comics\comiclinksqc.txt
I am trying to create a script which will list the datasource names and show the connection pool utilization (pooled connections, free pool size, etc.).
But I am facing an issue when listing the connection pools: if the data source name has a space in it, like "Default Datasource",
then it is listed as "Default Datasource and the datasource name is not parsed correctly for the next function.
datasource = AdminConfig.list('DataSource', AdminConfig.getid('/Cell:' + cell + '/')).splitlines()
for datasourceID in datasource:
    datasourceName = datasourceID.split('(')[0]
    print datasourceName
Please help if possible; you can drop me a mail at bubuldey#gmail.com
Regards,
Bubul
I'm writing a short script in Lua to replicate Search/Replace functionality. The goal is to enter a search term and a replacement term, and it will comb through all the files of a given extension (not input-determined yet) and replace the Search term with the Replacement term.
Everything seems to do what it's supposed to, except the files are not actually written to. My Lua interpreter (compiled by myself in Pelles-C) does not throw any errors or exit abnormally; the script completes as if it worked.
At first I didn't have i:flush(), but I added it after reading that it is supposed to save any written data to the file (see the Lua docs). It didn't change anything, and the files are still not written to.
I think it might have something to do with how I'm opening the file to edit it, since the "w" option works (but overwrites everything in my test files).
Source:
io.write("Enter your search term:")
term = io.read()
io.write("Enter your replace term:")
replacement = io.read()
io.stdin:read()
t = {}
for z in io.popen('dir /b /a-d'):lines() do
if string.match(string.lower(z), "%.txt$") then
print(z)
table.insert(t, z)
end
end
print("Second loop")
for _, w in pairs(t) do
print(w)
i = io.open(w, "r+")
print(i)
--i:seek("set", 6)
--i:write("cheese")
--i:flush()
for y in i:lines() do
print(y)
p, count = string.gsub(y, term, replacement, 1)
print(p)
i:write(p)
i:flush()
io.stdin:read()
end
i:close()
end
This is the output I get (which is what I want to happen), but in reality it isn't being written to the file.
There was one time where it did write output to a file, but it only wrote to one file, and after that write my script crashed with the message "No error". The line number pointed at the for y in i:lines() do line, but I don't know why it broke there. I've noticed file:lines() will break with an odd, gibberish error if the file itself has nothing in it, but there are things in my text files.
Edit1
I tried doing this in my for loop:
for y in i:lines() do
    print(y)
    p, count = string.gsub(y, term, replacement, 1)
    print(p)
    i:write(p)
    i:seek("set", 3) --New
    i:write("TESTESTTEST") --New
    i:flush()
    io.stdin:read()
end
in order to see if I could force it to write regular text. It does, but then it crashes with "No error" and still doesn't write the replacement string (just TESTESTTEST). I don't know what the problem could be.
I guess one can't write to a file while traversing its lines: with a file opened for update, C's stdio (which Lua's io library sits on) requires a seek between a read and a following write, so a pattern like this has undefined behaviour:
for y in i:lines() do
    i:write(p)
    i:flush()
end
I need SAS to read many large log files, which are set up to have the most recent activities at the bottom. All I need is the most recent time a particular activity occurred, and I was wondering if it's possible for SAS to skip parsing the (long) beginning parts of the file.
I looked online and found how to read a dataset backwards, but that would require SAS to first parse everything in the .log file into a dataset. Is it possible to read the file directly starting from the very end, so that I can stop the data step as soon as I find the most recent activity of a particular type?
I read up on infile and the firstobs option as well, but I have no idea how long these log files are until they are parsed, right? Sounds like a catch-22 to me. So is what I'm describing doable?
I'd probably set up a filename pipe statement to use an operating system command like tail -r or tac to present the file in reverse order to SAS. That way SAS can read the file normally and you don't have to worry about how long the file is.
If you mean parsing a SAS log file, I am not sure reading the log file backwards is worth the trouble in practice. For instance, the following code executes in less than a tenth of a second on my PC, and it is writing and then reading a log file of more than 10,000 lines. How big are your log files and how many are there? Also, as shown below, you don't have to "parse" everything on every line. You can selectively read some parts of the line and, if it is not what you are looking for, just go on to the next line.
%let pwd = %sysfunc(pathname(WORK));
%put pwd=&pwd;
x cd &pwd;
/* test file. more than 10,000 line log file */
data _null_;
    file "test.log";
    do i = 1 to 1e4;
        r = ranuni(0);
        put r binary64.;
        if r < 0.001 then put "NOTE: not me!";
    end;
    put "NOTE: find me!";
    do until (r < 0.1);
        r = ranuni(0);
        put r binary64.;
    end;
    stop;
run;
/* find the last line that starts with
NOTE: and get the rest of the line. */
data _null_;
    length msg $80;
    retain msg;
    infile "test.log" lrecl=80 eof=eof truncover;
    input head $char5. #;
    if head = "NOTE:" then input #6 msg $char80.;
    else input;
    return;
eof:
    put "last note was on line: " _n_;
    put "and msg was: " msg $80.;
run;
/* on log
last note was on line: 10013
and msg was: find me!
*/