GnuCOBOL compiler cobc does not work correctly with the -I option for NFS-mounted disks

I am trying to compile a .cbl file that contains statements like:
01 WS02-RETURN-STATUS COPY "boolind.ef"..
COPY "boolind.va".
Both files (boolind.ef and boolind.va) are located on a disk that is NOT local:
df -k /clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd
Filesystem 1K-blocks Used Available Use% Mounted on
isiloncorp1:/dev_storage 1073741824 373998592 699743232 35% /AMD/DEV/storage
When I run the compilation like this:
cobc -c fcd_demo.cbl -I/clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd
I receive an error like:
fcd_demo.cbl:57: error: boolind.ef: No such file or directory
but when I copy the file boolind.ef to a local directory or to /tmp, it works fine.
How can I use copybooks ("includes") located on a non-local disk?
Could you help?
I have tried the -ext option, but unfortunately it does not help: when I add something like -ext=ef I receive the same compilation error.
Interestingly, this happens only when the copybook files are located on an NFS disk; when I copy them to a local disk it works well, i.e.:
cobc -c myfile.cbl -I/tmp/ttt
Is there some option that allows copybooks to be used from NFS?
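One workaround that may be worth trying, as an untested sketch: GnuCOBOL also reads the COBCPY environment variable for copybook search directories, which would at least rule out -I handling as the culprit (the path below is the NFS directory from the question):
# untested sketch: pass the copybook directory via COBCPY instead of -I
export COBCPY=/clchome/clc/ccclc/mb_ccclc/proj/clb134/clcgdd
cobc -c fcd_demo.cbl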

Related

How to mirror directories using Bitvise sftpc.exe

The Bitvise SSH Client version history states that v8.15 supports directory mirroring:
The graphical SSH Client and sftpc now support recursive directory mirroring. A directory and all of its subdirectories and files can be synchronized either in the upload or download direction.
I can find it in the GUI, but I can't find how to do it using sftpc.exe. There is no mention of mirroring in sftpc.exe -help.
How can I do directory mirroring from the command line?
You point out a tangential design issue in sftpc: getting help for SFTP commands requires you to use sftpc interactively and connect to the server. You can then get help from the interactive prompt.
This is inconvenient, so I opened a feature request for us to make the interactive help available from the command line, as well.
The help text you are looking for is as follows - for the put command:
sftp> help put
USAGE: put local-path [remote-path] [-bg | -fg] [-s] [-o] [-r]
[-f] [-noTime] [-m=mode] [-dm=mode] [-mirror [-erase]]
[-b | -lf | -std | -tlf | -t]
DESCRIPTION: Upload file.
PARAMETERS:
-bg Start (queue) upload in background.
-fg Start upload in foreground.
-s Include subdirectories (recursive).
-r Synchronize file content. If synchronization is not available,
resume existing incomplete files using a heuristic resume.
Heuristic resume MAY result in an inconsistent destination file
if the destination file content has been modified in the middle.
-o Synchronize file content. If synchronization is not available,
force existing file to be overwritten. If -r is also specified,
heuristic resume is tried first.
-del Remove local file after successful upload.
-f Assume remote-path is a file (not a directory)
-noTime Do not synchronize file modification times.
-m=mode Set the access mode for remote files to 'mode'.
-dm=mode Set the access mode for new remote directories to 'mode'.
If directory already exists, access mode will not be changed.
-mirror Mirror local-path to remote-path. Local files that do not exist
remotely will be uploaded. Remote files that are different than
their local versions will be overwritten.
-erase With -mirror, erase remote files that are not present locally.
FILE TRANSFER MODE - if present, overrides mode selected with "type":
-b Upload files as binary; no conversions.
-lf Auto-detect text files. In text files, replace CRLF with LF.
Binary files are unaffected.
-std Auto-detect text files. Upload text files using the SFTP v4+ text
file transfer mechanism. Binary files are unaffected. Not
available when SFTP version 3 or lower is in use.
-tlf Upload all files as textual. Replace all CRLF bytes with LF.
-t Upload all files using the SFTP v4+ text file transfer mechanism.
Not available when SFTP version 3 or lower is in use.
And for the get command:
sftp> help get
USAGE: get remote-path [local-path] [-bg | -fg] [-s] [-o] [-r]
[-f] [-noTime] [-lit] [-mirror [-erase]]
[-b | -lf | -std | -tlf | -t]
DESCRIPTION: Download file.
PARAMETERS:
-bg Start (queue) download in background.
-fg Start download in foreground.
-s Include subdirectories (recursive).
-r Synchronize file content. If synchronization is not available,
resume existing incomplete files using a heuristic resume.
Heuristic resume MAY result in an inconsistent destination file
if the destination file content has been modified in the middle.
-o Synchronize file content. If synchronization is not available,
force existing file to be overwritten. If -r is also specified,
heuristic resume is tried first.
-del Remove remote file after successful download.
-f Assume remote-path is a file (not a directory).
-noTime Do not synchronize file modification times.
-lit Treat remote-path literally (not a wildcard pattern).
-mirror Mirror remote-path to local-path. Remote files that do not exist
locally will be downloaded. Local files that are different than
their remote versions will be overwritten.
-erase With -mirror, erase local files that are not present remotely.
FILE TRANSFER MODE - if present, overrides mode selected with "type":
-b Download files as binary; no conversions.
-lf Auto-detect text files. In text files, replace LF with CRLF.
Binary files are unaffected.
-std Behaves same as -lf when downloading. Not available when SFTP
version 3 or lower is in use.
-tlf Download all files as textual. Replace all LF bytes with CRLF.
-t Download all files using the SFTP v4 text file transfer mechanism.
Not available when SFTP version 3 or lower is in use.
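Putting these together, a mirroring session from the interactive prompt might look like this (the paths are placeholders, and the flags come from the help text above; whether -mirror implies -s is not stated there, so -s is included for recursion):
sftp> put C:\projects\site /var/www/site -s -mirror -erase
sftp> get /var/www/site C:\projects\site -s -mirror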
I hope this helps!
I don't normally monitor Stack Overflow, so please feel free to call my attention by opening a support case with Bitvise if you need me to look at something else.
I recommend also using the latest Bitvise SSH Client version. Currently, this is 8.35. It's free of charge for use in any environment, and we try to ensure that each version is a strict upgrade that does not introduce new difficulties. We want there to be no reason to stay behind. :-)

'gunzip' is not recognized as an internal or external command, operable program or batch file. System command 'gunzip' failed

I am trying to analyse my raw GNSS data with the GNSS Analyser app from https://github.com/google/gps-measurement-tools. The installation guide includes the following step:
4.2 gunzip installation
The automatic ftp code inside GnssAnalysis will download ephemeris zip files, and attempt to
unzip them using gunzip.
Download gzip.exe from here http://ftp.gnu.org/gnu/gzip/gzip-1.9.zip
Extract the files from the zip file, rename gzip.exe to gunzip.exe
Move gunzip.exe to somewhere in your Windows path (type path in the Windows
Command Prompt to see what your path is, typically you will find a directory
C:\Windows\system32 and you can put gunzip.exe there.)
However, the downloaded zip contains no gzip.exe (it is a source archive), so I tried renaming the gzip.c and gzip.h files instead. That did not work, and I got the error above when attempting to process my own raw data.
I have just tried this and succeeded in importing a DB from a backup file:
gzip -d < C:\Users\my-user\Downloads\my-db-backup.sql.gz | mysql -u root -p MY_DB_NAME
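As a side note on the original question: the zip from ftp.gnu.org contains gzip's source code, not a compiled gzip.exe, so it must be built first or a prebuilt Windows binary obtained elsewhere (the GnuWin32 project is one common source, though that is an assumption here). Once a working gzip.exe is in hand, a renamed copy really does behave as gunzip, because gunzip is equivalent to gzip -d:
:: run in the directory containing the compiled gzip.exe
copy gzip.exe gunzip.exe
gunzip --version
:: equivalently: gzip -d file.gz does the same as gunzip file.gz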

How do I copy a file into a docker-cloud container? (AKA How to copy a file over ssh without using scp)

docker-machine has an scp command, but docker-cloud doesn't seem to have any way to transfer a file from my local machine to the cloud container or vice-versa.
I'm submitting an answer below that I've finally figured out (in hopes that it will help someone), but I'd love to hear better answers if there are any!
(I realize docker-cloud is going away, but perhaps this will be helpful for other cloud platforms as well)
To transfer a file from your local machine to a docker-cloud instance that is running Linux with the tee command available:
docker-cloud container exec id12345 tee filename.ext < file_to_copy.ext > /dev/null
(you'll want to redirect output to /dev/null as shown unless you want the entire contents of the file to be echoed to the terminal... twice)
Transferring a file to your local machine is somewhat easier:
docker-cloud container exec id12345 cat file_to_copy.ext > filename.ext
Note: I'm not sure this works for binary files, and it can even cause issues with linefeed characters in text files, based on terminal settings, etc. - but it's the best answer I've got short of using an external service like https://transfer.sh
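For binary-safe transfers, one untested sketch is to wrap the stream in base64, assuming the container has sh and base64 available and that exec passes stdin through unmodified (on macOS, older base64 builds use -D instead of -d):
# upload: encode locally, decode inside the container
base64 file_to_copy.bin | docker-cloud container exec id12345 sh -c 'base64 -d > filename.bin'
# download: encode inside the container, decode locally
docker-cloud container exec id12345 base64 file_to_copy.bin | base64 -d > filename.bin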

Rule conflict between multiple SSH

While trying to use grunt-rsync, I run into a "code 12" error. My understanding is that there is a conflict between the multiple ssh installations on my computer (Git's and cwRsync's):
where ssh
C:\Program Files\cmder\vendor\msysgit\bin\ssh.exe
C:\Program Files\cwRsync\ssh.exe
C:\Program Files (x86)\Git\bin\ssh.exe
How can I resolve that conflict?
Thanks a lot.
Your msysgit ssh version is taking precedence over the cwRsync one, which causes issues.
You either need to change your PATH environment variable or create a batch file that overrides it:
@echo off
SETLOCAL
SET CWRSYNCHOME=C:\Program Files\cwRsync
SET HOME=c:\Users\*YourUserName*\
SET CWOLDPATH=%PATH%
:: put cwRsync's bin directory first so its ssh.exe wins the PATH lookup
SET PATH=%CWRSYNCHOME%\bin;%PATH%
"C:\Program Files\cwRsync\bin\rsync.exe" %*
(Note: the above also sets the home directory; you should point HOME at the directory containing your .ssh keys.)
I managed to fix this problem by simply adding a single line to my .bashrc file:
export PATH=/c/Program\ Files/cwRsync/:$PATH
This adds the cwRsync directory to the beginning of your PATH environment variable, which means that its copy of ssh moves to the top of the list when you do where ssh and consequently becomes the default.
For me, this fixed problems I was having running the grunt-rsync Grunt task from msysgit (I mention it in case anyone else is having the same problem).
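To confirm the change took effect, open a fresh Git Bash session and check which ssh now wins the PATH lookup (the expected output assumes the cwRsync install path shown in the question):
source ~/.bashrc
which ssh
# expected: /c/Program Files/cwRsync/ssh.exe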

rsync "failed to set times on "XYZ": No such files or directory (2)

I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned, and my solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter on the machine where I run rsync.
3) Performed the rsync restore command outlined above, minus the --delete option (which really shouldn't have been there) and plus the --size-only option; see the reconstructed command after this list. With --size-only, timestamps were essentially the only thing that got restored, since the files themselves had already been restored properly.
4) Unmounted the left drive from the Atom machine and returned it to the dns-323, while also removing the right drive. The right drive needed to be removed so that the dns-323 would recognize that the RAID volume was degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
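For reference, the timestamp-fixing command from step 3 looked roughly like this, with /mnt/left_drive standing in as a hypothetical mount point for the USB-attached disk:
# --size-only: file contents already match, so only attributes such as times get fixed
rsync --archive --human-readable --inplace --numeric-ids --size-only /mnt/dlink_backup/dns-323/ /mnt/left_drive/ --progress --verbose --itemize-changes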
A possible alternate solution:
I've read enough about rsync and NFS/Samba/cifs now to understand that this problem is likely related to permissions on the NFS server (dns-323). Internally, the user/group ids in the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0 IIRC.
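A sketch of that untested idea, for anyone in the same spot (renumbering an account has side effects, and files owned by the old uid/gid would need a chown afterwards):
# align the local account with the NAS's internal 501/501 owner (untested)
sudo usermod -u 501 youruser
sudo groupmod -g 501 yourgroup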