Python.NET garbage collection

I'm trying to integrate an external C# library into a Python project. The library reads and converts some file contents that I want to consume in Python; afterwards I want to delete the file:
import clr
from pathlib import Path
import os
import gc
my_dll = Path('C:/MyLib') #path without .dll suffix
clr.AddReference(str(my_dll))
from MyLib import MyFileReader
file = Path('C:/myfile.txt')
reader = MyFileReader()
result = reader.Initialize(str(file))
# do something with result
del reader
clr.System.GC.Collect()
gc.collect()
os.remove(str(file)) # WinError 32 The process cannot access the file because another process has locked a portion of the file.
MyFileReader properly reads and returns the file contents, but I'm not able to delete the file with os.remove, as the reader is not garbage collected and still holds a lock on the file. I assume the library's authors didn't implement closing of the file, but I can't check, since I don't have access to the source code. How can I properly remove MyFileReader from memory?
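If the library follows .NET conventions, the underlying issue is usually disposal rather than garbage collection: file handles are released by Dispose(), not by collecting the object. A minimal sketch, assuming MyFileReader implements System.IDisposable and therefore exposes Dispose() (an assumption, since the source isn't available; the hasattr guard covers the case where it doesn't):
reader = MyFileReader()
try:
    result = reader.Initialize(str(file))
    # do something with result
finally:
    # Dispose() releases unmanaged resources such as file handles
    # immediately, instead of waiting for a GC finalizer to run
    if hasattr(reader, 'Dispose'):
        reader.Dispose()
os.remove(str(file))
If Dispose() isn't exposed, calling clr.System.GC.WaitForPendingFinalizers() after clr.System.GC.Collect() is worth a try, since Collect() alone doesn't wait for finalizers to run and release their handles.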

Related

pyunpack Archive.extractall hangs on password protected .rar file

I'm writing a python script to process some old archives that include many .rar files. I can successfully extract data from them using Archive.extractall from pyunpack, but haven't been able to find much documentation. The documentation link at https://pypi.org/project/pyunpack/0.1.2/ is broken. Everything was going well until the program hung on a specific .rar file. I looked at the file with 7zip, and it said it was password protected. I have no idea what the password is, so I would like the script to simply skip it. I already have extractall in a try block, but I guess there is no exception. How can I test if the file is password protected and not try to extract it if so? Here's the relevant portion of my code.
import os
import rarfile
from pyunpack import Archive

for subdir, dirs, files in os.walk(directoryToExtract):
    for file in files:
        if rarfile.is_rarfile(os.path.join(subdir, file)):
            print("Found rarfile " + os.path.join(subdir, file))
            try:
                Archive(os.path.join(subdir, file)).extractall(os.path.join(subdir, file[0:len(file)-4]), auto_create_dir=True)
            except Exception as e:
                print("ERROR: Couldn't unrar " + os.path.join(subdir, file))
                print(str(e))
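One thing worth trying, sketched here on the assumption that these archives mark encryption in their headers: rarfile.RarFile exposes needs_password(), which reports whether an archive requires a password, so such files can be skipped before extraction is attempted.
import os
import rarfile
from pyunpack import Archive

for subdir, dirs, files in os.walk(directoryToExtract):
    for file in files:
        full_path = os.path.join(subdir, file)
        if rarfile.is_rarfile(full_path):
            # Skip archives that report needing a password
            if rarfile.RarFile(full_path).needs_password():
                print("Skipping password-protected rarfile " + full_path)
                continue
            Archive(full_path).extractall(full_path[0:len(full_path)-4], auto_create_dir=True)
Keeping the existing try/except around extractall is still sensible, since needs_password() depends on what the archive headers expose.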

Replacing one character in a long list of file names

I have a list of image files (jpg, png, etc.) in which the dot before the suffix was replaced with #, which makes the files unrecognizable to the Android OS. In DOS it would be easy. How can I do it easily in JS? I've never used JS before.
If you're using Node.js, you can use the fs module.
I haven't been able to make time to test a solution, but a naive implementation would probably look like this:
// Import the fs and path modules
const fs = require('fs');
const path = require('path');
const dir = '/path/to/images/folder/';
// Read all file names in the directory (readdirSync, not readDirSync)
const files = fs.readdirSync(dir);
// Rename each file synchronously, swapping the '#' back to a '.'
files.forEach(name =>
  fs.renameSync(path.join(dir, name), path.join(dir, name.replace('#', '.'))));

How to open and read a .gz file in Nim (preferably line by line)

I just sat down to write my first Nim script to parse a .vcf (Variant Call Format) file. This file format stores genetic mutations from sequencing data.
For scripting languages, I 'grew up' on Perl and later migrated to Python, but I would love to use a language with the speed that Nim offers. I realize Nim is still young, but I couldn't even find a clear example for how to open and read a .gz (gzip) file (preferably line by line).
Can anyone provide a simple example to open and read a gzip file using Nim, line by line?
In Python, I'm accustomed to the following (uber-simple) code:
import gzip
my_file = gzip.open('my_file.vcf.gz', 'rt')  # read as text; 'w' would open for writing
for line in my_file:
    # do something
    pass
my_file.close()
I have seen related questions, but they're not clear. The posts are also relatively old and I hope/suspect something better has come about. Here's what I've found:
Read gzip-compressed file line by line
File, FileStream, and GZFileStream
Reading files from tar.gz archive in Nim
Really appreciate it.
P.S. I also think it would be useful if someone created a Nim tag in StackOverflow. I do not have the reputation to create tags.
Just in case you need to handle VCF rather than .gz, there's a nice wrapper for htslib written by Brent Pedersen:
https://github.com/brentp/hts-nim
You need to install htslib on your system, and then require the library in your .nimble file with requires "hts", or install it with nimble install hts. If you are going to do NGS analysis in Nim, you'll need it.
The code you need:
import hts

var v: VCF
doAssert open(v, "myfile.vcf.gz")
# The VCF file is now loaded in v; you can access the headers
# through the v.header property
for record in v:
  # Each iteration yields a Record per line, e.g. extract the Ref and Alts:
  echo record.REF, " ", record.ALT
v.close()
Be sure to follow the docs, because some things differ from Python, especially when getting the INFO and FORMAT fields.
Check out Brent's whole repo. It has plenty of wrappers, code samples and utilities for NGS problems (e.g. an ultra-fast coverage utility called Mosdepth).
Per suggestion from Maurice Meyer, I looked at the tests for the Nim zip package. It turned out to be quite simple. This is my first Nim script, so my apologies if I didn't follow convention, etc.
import zip/gzipfiles  # Import the zip package

block:
  let vcf = newGzFileStream("my_file.vcf.gz")  # Open the gzip file
  defer: vcf.close()  # Close the file (like a 'finally' clause in a 'try' block)
  var line: string  # Declare the line variable
  # Loop over each line in the file
  while not vcf.atEnd():
    line = vcf.readLine()
    # Cure disease with my VCF file
To install the zip package, which is already in the Nim package library, I simply ran:
> nimble refresh
> nimble install zip
I tried to use Nim some time ago to parse a fastq or fastq.gz file.
The code should be available here:
https://gitlab.pasteur.fr/bli/qaf_demux/blob/master/Nim/src/qaf_demux.nim
I don't remember exactly how this works, but apparently, I did an import zip/gzipfiles and used newGZFileStream on the input file name to obtain a Stream from which lines can be read using .readLine() in this piece of code:
proc fastqParser(stream: Stream): iterator(): Fastq =
  result = iterator(): Fastq =
    var
      nameLine: string
      nucLine: string
      quaLine: string
    while not stream.atEnd():
      nameLine = stream.readLine()
      nucLine = stream.readLine()
      discard stream.readLine()
      quaLine = stream.readLine()
      yield [nameLine, nucLine, quaLine]
It is used in something that amounts to this piece of code:
let inputFqs = fastqParser(newGZFileStream($inFastqFilename))
Hopefully you can adapt this to your case.
My .nimble file has a requires "zip#head". I suppose this triggers the installation of zip/gzipfiles.

How does numpy handle mmap's over npz files?

I have a case where I would like to open a compressed numpy file using mmap mode, but can't seem to find any documentation about how it will work under the covers. For example, will it decompress the archive in memory and then mmap it? Will it decompress on the fly?
There is no documentation for that configuration. The short answer, based on reading the code, is that archiving and compression, whether via np.savez or gzip, is not compatible with accessing files in mmap_mode. It's not just a matter of how it is done, but whether it can be done at all.
Relevant bits in the np.load function:
elif isinstance(file, gzip.GzipFile):
    fid = seek_gzip_factory(file)
...
if magic.startswith(_ZIP_PREFIX):
    # zip-file (assume .npz)
    # Transfer file ownership to NpzFile
    tmp = own_fid
    own_fid = False
    return NpzFile(fid, own_fid=tmp)
...
if mmap_mode:
    return format.open_memmap(file, mode=mmap_mode)
Look at np.lib.npyio.NpzFile. An npz file is a ZIP archive of .npy files. It loads as a dictionary-like object, and only loads the individual variables (arrays) when you access them (e.g. obj[key]). There's no provision in its code for opening those individual files in mmap_mode.
It's pretty obvious that a file created with np.savez cannot be accessed as a mmap. ZIP archiving and compression are not the same as the gzip compression addressed earlier in np.load.
But what of a single array saved with np.save and then gzipped? Note that format.open_memmap is called with file, not fid (which might be a gzip file).
More details on open_memmap are in np.lib.npyio.format. Its first test is that file must be a string (a filename), not an open file object like fid. It ends up delegating the work to np.memmap. I don't see any provision in that function for gzip.
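A quick sketch of the behavior described above (the file names and printed types are my own illustration; that mmap_mode is effectively ignored for npz members is my reading of the code, not documented behavior):
import numpy as np

a = np.arange(10)
np.save('a.npy', a)      # plain .npy file
np.savez('a.npz', a=a)   # ZIP archive containing a.npy

# mmap works on the plain .npy file
m = np.load('a.npy', mmap_mode='r')
print(type(m))           # <class 'numpy.memmap'>

# on the archive, np.load returns an NpzFile; members are only read
# when accessed by key, and come back as ordinary in-memory arrays
z = np.load('a.npz', mmap_mode='r')
print(type(z['a']))      # <class 'numpy.ndarray'>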

Preventing overwriting when using numpy.savetxt

Is there built-in error handling to prevent overwriting a file when using numpy.savetxt?
If 'my_file' already exists, and I run
numpy.savetxt("my_file", my_array)
I want an error to be generated telling me the file already exists, or the user to be asked whether they are sure they want to write to the file.
You can check if the file already exists before you write your data:
import os
if not os.path.exists('my_file'):
    numpy.savetxt('my_file', my_array)
Instead of a filename, you can pass a file handle to np.savetxt(), e.g.,
import numpy as np
a = np.random.rand(10)
with open("/tmp/tst.txt", 'w') as f:
    np.savetxt(f, a)
So you could write a helper for opening the file.
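For example, a minimal sketch of such a helper (savetxt_noclobber is a name I made up): opening the file in exclusive-creation mode 'x' makes open() raise FileExistsError if the file already exists, which also avoids the check-then-write race of the os.path.exists approach.
import numpy as np

def savetxt_noclobber(fname, arr, **kwargs):
    # 'x' = exclusive creation: raises FileExistsError if fname exists
    with open(fname, 'x') as f:
        np.savetxt(f, arr, **kwargs)

savetxt_noclobber('my_file', my_array)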
Not in NumPy. I suggest writing to a NamedTemporaryFile and checking whether the destination file exists. If not, rename the temporary file to the destination; otherwise, raise an error.
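A rough sketch of that suggestion (variable names are mine; error handling kept minimal):
import os
import tempfile
import numpy as np

# Write to a temporary file in the current directory first
with tempfile.NamedTemporaryFile('w', dir='.', suffix='.txt', delete=False) as tmp:
    np.savetxt(tmp, my_array)

# Only move it into place if the destination does not exist yet
if os.path.exists('my_file'):
    os.remove(tmp.name)  # clean up the temporary file
    raise FileExistsError("my_file already exists")
os.rename(tmp.name, 'my_file')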
Not an error handler, but it's possible to create a new version in the form of:
file
filev2
filev2v3
filev2v3v4
so that no file ever gets overwritten.
import os
file = 'my_file'  # base name; 'v2', 'v3', ... get appended below
n = 2
while os.path.exists(f'{file}.txt'):
    file = file + f'v{n}'
    n += 1
numpy.savetxt(f'{file}.txt', my_array)