Why does a read-only open of a named pipe block?

I've noticed a couple of oddities when dealing with named pipes (FIFOs) under various flavors of UNIX (Linux, FreeBSD and MacOS X) using Python. The first, and perhaps most annoying, is that attempts to open an empty/idle FIFO read-only will block (unless I use os.O_NONBLOCK with the lower-level os.open() call). However, if I open it for read/write then I get no blocking.
Examples:
f = open('./myfifo', 'r') # Blocks unless data is already in the pipe
f = os.open('./myfifo', os.O_RDONLY) # ditto
# Contrast to:
f = open('./myfifo', 'w+') # does NOT block
f = os.open('./myfifo', os.O_RDWR) # ditto
f = os.open('./myfifo', os.O_RDONLY|os.O_NONBLOCK) # ditto
Note: the behavior is NOT Python-specific (the examples are in Python only to make them easier to replicate and understand for a broader audience).
I'm just curious why. Why does the open call block rather than some subsequent read operation?
I've also noticed that a non-blocking file descriptor can exhibit two different behaviors in Python. If I use os.open() with os.O_NONBLOCK for the initial open, then os.read() seems to return an empty string when no data is ready on the file descriptor. However, if I instead set the flag afterwards with fcntl.fcntl(f.fileno(), fcntl.F_SETFL, fcntl.fcntl(f.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK), then os.read() raises an exception (OSError with errno.EWOULDBLOCK).
Is there some other flag being set by the normal open() that's not set by my os.open() example? How are they different and why?

That's just the way it's defined. From the Open Group page for the open() function:

O_NONBLOCK
    When opening a FIFO with O_RDONLY or O_WRONLY set:
    If O_NONBLOCK is set:
        An open() for reading only will return without delay. An open()
        for writing only will return an error if no process currently
        has the file open for reading.
    If O_NONBLOCK is clear:
        An open() for reading only will block the calling thread until a
        thread opens the file for writing. An open() for writing only
        will block the calling thread until a thread opens the file for
        reading.
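To see both cases from the quoted text in action, here is a minimal sketch (assuming ./myfifo already exists; the messages are mine). It also illustrates the two read results asked about above: with no writer attached, a non-blocking read sees end-of-file and returns b'', while a connected-but-silent writer produces EAGAIN/EWOULDBLOCK.

import errno
import os

# A non-blocking, read-only open of a FIFO returns without waiting for a writer.
fd = os.open('./myfifo', os.O_RDONLY | os.O_NONBLOCK)
try:
    data = os.read(fd, 1024)
    # No writer connected: read() sees end-of-file and returns b''.
    print('read:', data)
except OSError as e:
    if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
        # A writer is connected but has not sent anything yet.
        print('no data ready yet')
    else:
        raise
finally:
    os.close(fd)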

Related

How to do an incremental read of binary files

TL;DR: can I do an incremental read of binary files with Red or Rebol?
I would like to use Red to process some large (13MB to 2GB) structured binary files (Kurzweil synthesizer files). I've used other languages (C, Go, Tcl, Ruby, Dart) to walk through these files, and now I'd like to do the same with Red or Rebol.
Is there a way to incrementally read binary files, byte by byte? All I see is read/binary which seems to slurp the entire file at once (or a part of a file).
I'll need to jump around a little bit, too (either peek at the next byte, or skip to the end of a section, or skip past variable length strings to the start of data).
(Yes, I could make some helpers that tracked the position and used read/part/seek.)
I would like to make a call to the low level OS read/seek if that is possible - something new to learn.
This is on macOS, but a portable solution would be great.
Thanks!
PS: "open/read %abc" gives an error "*** Script Error: open does not allow file! for its port argument", even though the help message say the port argument is "port [port! file! url! block!]"
Rebol has ports for that; they are planned for the 0.7.0 release of Red. So current I/O is very basic and buffer-only, and open is a preliminary stub.
I would like to make a call to the low level OS read/seek if that is possible - something new to learn.
You can leverage the Rebol or Red/System FFI as a learning exercise.
Here is how you would do it in Rebol:
>> file: open/direct/binary %file.dat
>> until [none? probe copy/part file 20]
>> close file
#{732F7072696E74657253657474696E6773312E62}
#{696E504B01022D00140006000800000021006149}
#{0910890100001103000010000000000000000000}
...
#{000000006A290000646F6350726F70732F617070}
#{2E786D6C504B0506000000000D000D0068030000}
#{292C00000000}
none
first file or pick file 1 will return the next byte value (integer!)
This even works with text files: with open/lines/direct, copy/part file 20 will return 20 lines, and you can use pick file 1 or first file to get the next line.
Soon this will be available on Red too.
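Until then, the stopgap the question itself hints at can be wrapped in a tiny helper. This is only a sketch: it assumes read's /seek and /part refinements behave as the question describes, and read-at is a name I made up.

read-at: func [file [file!] offset [integer!] len [integer!]][
    ; read len bytes starting at the given byte offset (assumed zero-based);
    ; note this is still buffered I/O, not streaming
    read/binary/seek/part file offset len
]

probe read-at %file.dat 0 20    ; would peek at the first 20 bytes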

How To Compress Folder-Contents in 1 Statement on Windows?

I'm attempting to zip a folder containing subfolders and items, using Windows shell CopyHere command:
https://msdn.microsoft.com/en-us/library/windows/desktop/bb787866(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/ms723207(v=vs.85).aspx
Update: Note, I prefer a native solution--this is for a distributed Excel VBA tool, so bundling 3rd-party files is not ideal. And I need synchronous compression.
I can easily add a folder and its contents to the zip:
oShell.Namespace(sZipPath).CopyHere "C:\My Folder"
So we know CopyHere can process multiple objects inside a folder in 1 statement.
The problem is, the above command puts the containing folder at the root of the zip, with its contents inside it. But I don't want the containing folder--just its contents.
The doc mentions a wildcard (option 128), but when I use a wildcard, I get an error:
oShell.Namespace(sZipPath).CopyHere "C:\My Folder\*"
The file name you specified is not valid or too long.
Perhaps there's a way to use my 1st command above, and then move the items in the zip to the root of the zip?
It would be acceptable to loop through each item in the source folder, adding one at a time to the zip. But, because CopyHere is asynchronous, each subsequent CopyHere fails if the previous CopyHere is not finished. None of the fixes work for this issue:
Comparing the number of items in the source folder and the destination zip fails, because a folder inside the zip counts as only 1 item (the items it contains are not counted). https://stackoverflow.com/a/16603850/209942
Waiting a while between each item works, but a timer is unacceptable: it's arbitrary. I cannot guess in advance the size or compress-time of each object.
Checking to see if the zip is locked for access failed for me. If I block my loop until the file is not locked, I still get a file-access error. https://stackoverflow.com/a/6666663/209942
Function FileIsOpen(sPathname As String) As Boolean ' true if file is open
    Dim lFileNum As Long
    lFileNum = FreeFile
    Dim lErr As Long
    On Error Resume Next
    Open sPathname For Binary Access Read Write Lock Read Write As #lFileNum
    lErr = Err
    Close #lFileNum
    On Error GoTo 0
    FileIsOpen = (lErr <> 0)
End Function
Update: VBA can call shell commands synchronously (instead of creating a Shell32.Shell object in VBA), so if CopyHere works on the command line or in PowerShell, that could be the solution. Investigating...
Automating Shell objects really isn't a viable approach as you have already discovered. The Explorer Shell doesn't really expose this capability in any other manner though, at least not before Windows Vista and then not in any fashion easily used from VB6 programs or VBA macros.
Your best bet is a 3rd party ActiveX library, but be careful about 64-bit VBA hosts where you'll need a 64-bit version of such a library.
Another option is to acquire a later copy of the zlibwapi.dll and use some VB6 wrapper code with it. This is also a 32-bit solution.
That's what the "Zipper & ZipWriter, Zipping from VB programs" thread does. Considering your requirements (which for some reason include a fear of the Timer control) you could use the synchronous ZipperSync class. See post #4 there. That code includes a simple AddFolderToZipperSync bundling up the logic to add a folder instead of just a single file.
The downside of the synchronous class is that a large archival operation freezes your program UI until it completes. If you don't want that use the Zipper UserControl instead.
You could also take the ideas from that to write your own wrapper class.
Solution:
Windows contains another native compression utility: the .NET ZipFile.CreateFromDirectory method, callable at a PowerShell prompt.
https://msdn.microsoft.com/en-us/library/system.io.compression.zipfile.createfromdirectory(v=vs.110).aspx
https://blogs.technet.microsoft.com/heyscriptingguy/2015/03/09/use-powershell-to-create-zip-archive-of-folder/
This requires .NET Framework 4.5 or later (ZipFile lives in the System.IO.Compression.FileSystem assembly):
> Add-Type -AssemblyName System.IO.Compression.FileSystem
> $src = "C:\Users\v1453957\documents\Experiment\rezip\aFolder"
> $zip="C:\Users\v1453957\Documents\Experiment\rezip\my.zip"
> [io.compression.zipfile]::CreateFromDirectory($src, $zip)
Note, you may have to provide complete pathnames--the current directory was not assumed on my machine.
The above compression is synchronous at the PowerShell prompt, as the OP requests.
The next step is executing it synchronously from VBA. The solution there is the .Run method in the Windows Script Host Object Model. In VBA, set a reference to that library, and do the following, setting the 3rd parameter of .Run, bWaitOnReturn, to True:
Function SynchronousShell(sCmd As String) As Long
    Dim oWSH As New IWshRuntimeLibrary.WshShell
    ' 3rd argument True = bWaitOnReturn, so .Run blocks until the command exits
    SynchronousShell = oWSH.Run(sCmd, 3, True)
    Set oWSH = Nothing
End Function
Now call SynchronousShell, and pass it the entire compression script.
I believe the only way for this to work is if CreateFromDirectory is executed in the same session as Add-Type.
So we must pass the whole thing as 1 string: load all 4 commands into a single sCmd variable, so that Add-Type remains associated with the subsequent CreateFromDirectory. In PowerShell syntax, you can separate them with ;
https://thomas.vanhoutte.be/miniblog/execute-multiple-powershell-commands-on-one-line/
Also, you'll want to use single quotes instead of double quotes, or else the double quotes around the strings are stripped when the daisy-chained commands are passed to powershell.exe:
https://stackoverflow.com/a/39801732/209942
sCmd = "ps4 Add-Type -AssemblyName System.IO.Compression; $src = 'C:\Users\v1453957\documents\Experiment\rezip\aFolder'; $zip='C:\Users\v1453957\Documents\Experiment\rezip\my.zip'; [io.compression.zipfile]::CreateFromDirectory($src, $zip)"
Solved. The above constitutes the complete solution.
Extra info: Additional comments below are for special circumstances:
Multi-version .Net environments
If an older .NET is the active environment on your OS, then System.IO.Compression.FileSystem does not exist there--the Add-Type command will fail. But if your machine has the .NET 4.x assemblies available, you can still do this:
Create a batch file which runs PowerShell with .Net 4. See https://stackoverflow.com/a/31279372
In your Add-Type command above, use the exact path to the .Net 4 Compression assembly. On my Win Server 2008:
Add-Type -Path "C:\Windows\Microsoft.NET\assembly\GAC_MSIL\System.IO.Compression.FileSystem\v4.0_4.0.0.0__b77a5c561934e089\System.IO.Compression.FileSystem.dll"
Portability
Turns out, on my machine, I can copy the compression dll to any folder, and make calls to the copy and it works:
Add-Type -Path "C:\MyFunnyFolder\System.IO.Compression.FileSystem.dll"
I don't know what's required to ensure this works-- it might require the full .Net 4.0 or 2.0 files to be located in their expected directories. I assume the dll makes calls to other .Net assemblies. Maybe we just got lucky with this one :)
Character Limit
Depending on the depth of our paths and filenames, character-count may be a concern. PowerShell may have a 260-character limit (not sure).
https://support.microsoft.com/en-us/kb/830473
https://social.technet.microsoft.com/Forums/windowsserver/en-US/f895d766-5ffb-483f-97bc-19ac446da9f8/powershell-command-size-limit?forum=winserverpowershell
Since .Run goes through the Windows shell, you also have to worry about that character limit, but at 8k+, it's a bit roomier:
https://blogs.msdn.microsoft.com/oldnewthing/20031210-00/?p=41553
https://stackoverflow.com/a/3205048/209942
The site below offers a 24k+ character method, but I've not studied it yet:
http://itproctology.blogspot.com/2013/06/handling-freakishly-long-strings-from.html
At minimum, since we can put the dll wherever we like, we can put it in a folder near C: root-- keeping our character-count down.
Update: This post shows how we can put the whole thing in a script-file, and call it with ps4.cmd. This may become my preferred answer:
.\ps4.cmd GC .\zipper.ps1 | IEX
-- depending on the answer here.
CopyHere:
Re the question: can the CopyHere command execute on the command line?
CopyHere can be executed directly at a PowerShell prompt (code below). However, even in PowerShell it is asynchronous--control returns to the PowerShell prompt before the process is finished--so it is no solution for the OP. Here's how it's done:
> $shellapp=new-object -com shell.application
> $zippath="test.zip"
> $zipobj=$shellapp.namespace((Get-Location).Path + "\$zippath")
> $srcpath="src"
> $srcobj=$shellapp.namespace((Get-Location).Path + "\$srcpath")
> $zipobj.Copyhere($srcobj.items())

Setting kermit line on command-line instead of .kermrc

I am establishing a kermit connection to my target via a ~/.kermrc file as suggested...
set line /dev/ttyUSB0
set flow-control none
set carrier-watch off
set speed 115200
connect
... But I periodically must power-cycle the target, which changes the ttyUSB enumeration, which means I need to change the device name in the ~/.kermrc file.
My question is: is there a way to dynamically populate the USB line number without modifying ~/.kermrc? If I remove the 'set line /dev/ttyUSB' line and attempt to set it via the command line like this...
kermit -l /dev/ttyUSB0
... the ~/.kermrc file is read first and the command-line configuration is not respected, so you receive errors that a line must be set before the additional options in ~/.kermrc can be applied.
Thanks in advance for response.
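One possible approach is to skip ~/.kermrc entirely and pass everything on the command line. This is an untested sketch: -Y (skip the init file), -l (line), -b (speed) and -C (comma-separated interactive commands) are C-Kermit options, but the shell substitution that picks the first enumerated device is my own assumption about your setup.

# pick whichever ttyUSB device currently exists, and bypass ~/.kermrc with -Y
kermit -Y -l "$(ls /dev/ttyUSB* | head -n 1)" -b 115200 \
       -C "set flow-control none, set carrier-watch off, connect"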

DATASET_CANT_CLOSE error number 32 "Broken Pipe"

I experienced an error in SAP ABAP which says DATASET_CANT_CLOSE with error number 32 (Broken Pipe). The question is: what procedure triggers this kind of error?
As far as I know, this error was triggered by:
CLOSE DATASET dset
But I can't reproduce the error, since I don't know what actually triggers it.
This is the code I use:
method GENERATE_TXT_FILE.
  DATA:
    lwa_data TYPE t_line,
    lv_param TYPE sxpgcolist-parameters.

  "Upload File to Server
* Open Dataset
  OPEN DATASET im_file_name FILTER 'dos2ux'
    FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

  CLEAR lwa_data.
  LOOP AT it_data INTO lwa_data.
    CATCH SYSTEM-EXCEPTIONS file_access_errors = 4
                            OTHERS = 8.
      TRANSFER lwa_data-lines TO im_file_name.
    ENDCATCH.
    IF sy-subrc <> 0.
      CLEAR lwa_data.
      EXIT.
    ENDIF.
    CLEAR lwa_data.
  ENDLOOP.

* Close Dataset
  CLOSE DATASET im_file_name.
endmethod.
As far as I could tell from the background job log, the server running the background job had not yet been mapped to the text-file folder. The solution was to re-map the server to the text-file folder.
You are using the FILTER extension to OPEN DATASET - which can be a HUGE security issue as well as raise loads of portability issues unless you know what you're doing, but that's not what the question is about. From the documentation:
When the statement OPEN DATASET is executed, a process is started in
the operating system for the specified statement. When the file is
opened for reading, a channel (pipe) is linked with STDOUT of the
process, from which the data is read during file reading. The file
itself is linked with STDIN of the process. When the file is opened
for writing, a channel (pipe) is linked to STDIN of the process, to
which data is passed when writing. The output of the process is
diverted to this file.
In your case, the filter command probably decided to bail out - see this answer among many. Why it did is hard to investigate - you may have to go through various system logs to find out. If the problem really is some unmapped network folder, you could try switching to UNC paths.

How to fetch data from a file and run it using Tcl code

In Tcl, how do I get the data from a file at one location and run that data using Tcl code?
For example:
In folder 1 there is a config file. I want to get the information from the config file, check whether the information is present, and execute it.
If the configuration file contains Tcl code, it's just:
# Put the filename in quotes if you want, or in a variable, or ...
source /the/path/to/the/file.tcl
If the file contains Tcl code but you don't trust it, you can use a “safe interpreter” context. This disables many commands, giving a much more restricted set of capabilities that you can then add specific exceptions to (with interp alias):
# Make the context
set i [interp create -safe]
# Set up a way for the context to let the master find out about what to
# really set
interp alias $i configure {} recordConfiguration
proc recordConfiguration args {
    puts "configured with $args"
}
# Evaluate the script (note that [source] is hidden by default) in the context
$i invokehidden source /the/path/to/the/file.tcl
# Dispose the context
interp delete $i
If the file isn't Tcl code, you have to parse it. That's a substantially more complex matter, so much so that we'll need to know the format of the file before we can answer.
If you are trying to read data (like text strings) from a file, then you'll have to open a channel for that particular file, like this:
set fileid [open "path/to/your/file.txt" r]
Read the open manual page.
Then you can use the gets command to read data from the file through the channel fileid.
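For example, a minimal sketch of that loop (the file name is a placeholder):

# open the file read-only, then read it line by line; gets returns -1 at EOF
set fileid [open "path/to/your/file.txt" r]
while {[gets $fileid line] >= 0} {
    puts "read: $line"    ;# process each line here
}
close $fileid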