CVS header keywords not replaced - header

I have a unique problem with a few of my files where the header information (Author/RCSfile/Date, etc.) is not automatically replaced when I commit my changes using Eclipse.
It comes back blank:
######################## COPYRIGHT ################################
#
# $RCSfile$
# Owner: My Company
# $Revision$
# $Date$
#
# Last $Author$
#
# Description: Some description
#
####################################################################
Any idea why this would happen? Out of 500 files, only 6 have this issue.

Keyword substitution is configurable for each file (so that binaries aren't corrupted when you check them out). Do a cvs log -h on the offending files and check the "keyword substitution" line. The default is
keyword substitution: kv
Compare against other files that don't have this problem.
If that's not it, could there be non-printable characters in the file? Try cat -A filename if your system supports it, or cat -v filename if it doesn't (pipe the output to head or your favorite pager).
EDIT: Looks like the keyword substitution line was the problem. Use cvs admin to fix it. (I thought it would be better to have this information in the answer rather than just in a comment.)
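If the mode does turn out to be wrong, a rough check-and-fix sketch (the filename is a placeholder) could look like this; -kkv restores the default keyword-expansion mode and -A clears any sticky option left on the working copy:
$ cvs log -h broken_file.sh | grep 'keyword substitution'
keyword substitution: b
$ cvs admin -kkv broken_file.sh
$ cvs update -A broken_file.sh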

Related

scp fails with "protocol error: filename does not match request"

I have a script that uses SCP to pull a file from a remote Linux host on AWS. After running the same code nightly for about 6 months without issue, it started failing today with protocol error: filename does not match request. I reproduced the issue on some simpler filenames below:
$ scp -i $IDENT $HOST_AND_DIR/"foobar" .
# the file is copied successfully
$ scp -i $IDENT $HOST_AND_DIR/"'foobar'" .
protocol error: filename does not match request
# used to work, i swear...
$ scp -i $IDENT $HOST_AND_DIR/"'foobarbaz'" .
scp: /home/user_redacted/foobarbaz: No such file or directory
# less surprising...
The reason for my single quotes was that I was grabbing a file with spaces in the name originally. To deal with the spaces, I had done $HOST_AND_DIR/"'foo bar'" for many months, but starting today, it would only accept $HOST_AND_DIR/"foo\ bar". So, my issue is fixed, but I'm still curious about what's going on.
I Googled the error message, but I don't see any real mentions of it, which surprises me.
Both hosts involved have OpenSSL 1.0.2g in the output of ssh -v localhost, and bash --version says GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Any ideas?
I ended up having a look through the source code and found the commit where this error is thrown:
GitHub Commit
check in scp client that filenames sent during remote->local directory
copies satisfy the wildcard specified by the user.
This checking provides some protection against a malicious server
sending unexpected filenames, but it comes at a risk of rejecting
wanted files due to differences between client and server wildcard
expansion rules.
For this reason, this also adds a new -T flag to disable the check.
They added a new -T flag that disables this new check, so the old behaviour remains available. However, it's worth finding out why the filenames we're using get flagged by the check in the first place.
In my case, I had [] characters in the filename that needed to be escaped using one of the options listed here. For example:
scp USERNAME@IP_ADDR:"/tmp/foo\[bar\].txt" /tmp
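For the original problem (a remote filename containing spaces), two things appear to work with the stricter client: escape the spaces so the client-side check matches what the server sends back, or pass -T to skip the check (the $IDENT and $HOST_AND_DIR variables are the ones from the question; the filename is illustrative):
$ scp -i $IDENT $HOST_AND_DIR/"foo\ bar" .        # escaped spaces satisfy the new wildcard check
$ scp -T -i $IDENT $HOST_AND_DIR/"'foo bar'" .    # -T disables the check, so the old quoting works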

script sh linux SQL procedure execution [duplicate]

I am making an NW.js app on macOS, and want to run the app in dev mode
by double-clicking on an icon.
In the first step, I'm trying to make my shell script work.
Using VS Code on Windows (I wanted to save time), I have created a run-nw file at the root of my project, containing this:
#!/bin/bash
cd "src"
npm install
cd ..
./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &
but I get this output:
$ sh ./run-nw
: command not found
: No such file or directory
: command not found
: No such file or directory
Usage: npm <command>
where <command> is one of: (snip commands list)
(snip npm help)
npm#3.10.3 /usr/local/lib/node_modules/npm
: command not found
: No such file or directory
: command not found
Some things I don't understand.
It seems that it takes empty lines as commands.
In my editor (VS Code) I have tried to replace \r\n with \n
(in case the \r creates problems) but it changes nothing.
It seems that it doesn't find the folders
(with or without the dirname instruction),
or maybe it doesn't know about the cd command?
It seems that it doesn't understand the install argument to npm.
The part that really weirds me out is that it still runs the app
(if I did an npm install manually)...
Not able to make it work properly, and suspecting something weird with
the file itself, I created a new one directly on the Mac, using vim this time.
I entered the exact same instructions, and... now it works without any
issues.
A diff on the two files reveals exactly zero difference.
What can be the difference? What can make the first script not work? How can I find out?
Update
Following the accepted answer's recommendations, after the wrong line
endings came back, I checked multiple things.
It turns out that since I copied my ~/.gitconfig from my Windows
machine, I had autocrlf=true, so every time I modified the bash
file under Windows, it re-set the line endings to \r\n.
So, in addition to running dos2unix (which you will have to
install using Homebrew on a Mac), if you're using Git, check your
.gitconfig file.
Yes. Bash scripts are sensitive to line-endings, both in the script itself and in data it processes. They should have Unix-style line-endings, i.e., each line is terminated with a Line Feed character (decimal 10, hex 0A in ASCII).
DOS/Windows line endings in the script
With Windows or DOS-style line endings, each line is terminated with a Carriage Return followed by a Line Feed character. You can see this otherwise invisible character in the output of cat -v yourfile:
$ cat -v yourfile
#!/bin/bash^M
^M
cd "src"^M
npm install^M
^M
cd ..^M
./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &^M
In this case, the carriage return (^M in caret notation or \r in C escape notation) is not treated as whitespace. Bash interprets the first line after the shebang (consisting of a single carriage return character) as the name of a command/program to run.
Since there is no command named ^M, it prints : command not found
Since there is no directory named "src"^M (or src^M), it prints : No such file or directory
It passes install^M instead of install as an argument to npm which causes npm to complain.
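A quick way to reproduce the symptom outside the original project (crlf-demo.sh is just an illustrative name; newer bash versions show the offending character as $'\r', while older ones print the bare messages seen above):
$ printf '#!/bin/bash\r\n\r\ncd "src"\r\n' > crlf-demo.sh
$ bash crlf-demo.sh
crlf-demo.sh: line 2: $'\r': command not found
crlf-demo.sh: line 3: cd: $'src\r': No such file or directory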
DOS/Windows line endings in input data
Like above, if you have an input file with carriage returns:
hello^M
world^M
then it will look completely normal in editors and when writing it to screen, but tools may produce strange results. For example, grep will fail to find lines that are obviously there:
$ grep 'hello$' file.txt || grep -x "hello" file.txt
(no match because the line actually ends in ^M)
Appended text will instead overwrite the line, because the carriage return moves the cursor to the start of the line:
$ sed -e 's/$/!/' file.txt
!ello
!orld
String comparison will seem to fail, even though strings appear to be the same when writing to screen:
$ a="hello"; read b < file.txt
$ if [[ "$a" = "$b" ]]
then echo "Variables are equal."
else echo "Sorry, $a is not equal to $b"
fi
Sorry, hello is not equal to hello
Solutions
The solution is to convert the file to use Unix-style line endings. There are a number of ways this can be accomplished:
This can be done using the dos2unix program:
dos2unix filename
Open the file in a capable text editor (Sublime, Notepad++, not Notepad) and configure it to save files with Unix line endings, e.g., with Vim, run the following command before (re)saving:
:set fileformat=unix
If you have a version of the sed utility that supports the -i or --in-place option, e.g., GNU sed, you could run the following command to strip trailing carriage returns:
sed -i 's/\r$//' filename
With other versions of sed, you could use output redirection to write to a new file. Be sure to use a different filename for the redirection target (it can be renamed later).
sed 's/\r$//' filename > filename.unix
Similarly, the tr translation filter can be used to delete unwanted characters from its input:
tr -d '\r' <filename >filename.unix
Cygwin Bash
With the Bash port for Cygwin, there’s a custom igncr option that can be set to ignore the Carriage Return in line endings (presumably because many of its users use native Windows programs to edit their text files).
This can be enabled for the current shell by running set -o igncr.
Setting this option applies only to the current shell process so it can be useful when sourcing files with extraneous carriage returns. If you regularly encounter shell scripts with DOS line endings and want this option to be set permanently, you could set an environment variable called SHELLOPTS (all capital letters) to include igncr. This environment variable is used by Bash to set shell options when it starts (before reading any startup files).
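As a rough illustration (Cygwin-specific; plain GNU bash has no igncr option):
$ set -o igncr        # ignore carriage returns in the current shell only
$ export SHELLOPTS    # SHELLOPTS now contains igncr, so child bash processes inherit it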
Useful utilities
The file utility is useful for quickly seeing which line endings are used in a text file. Here’s what it prints for each file type:
Unix line endings: Bourne-Again shell script, ASCII text executable
Mac line endings: Bourne-Again shell script, ASCII text executable, with CR line terminators
DOS line endings: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
The GNU version of the cat utility has a -v, --show-nonprinting option that displays non-printing characters.
The dos2unix utility is specifically written for converting text files between Unix, Mac and DOS line endings.
Useful links
Wikipedia has an excellent article covering the many different ways of marking the end of a line of text, the history of such encodings and how newlines are treated in different operating systems, programming languages and Internet protocols (e.g., FTP).
Files with classic Mac OS line endings
With Classic Mac OS (pre-OS X), each line was terminated with a Carriage Return (decimal 13, hex 0D in ASCII). If a script file was saved with such line endings, Bash would only see one long line like so:
#!/bin/bash^M^Mcd "src"^Mnpm install^M^Mcd ..^M./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &^M
Since this single long line begins with an octothorpe (#), Bash treats the line (and the whole file) as a single comment.
Note: In 2001, Apple launched Mac OS X which was based on the BSD-derived NeXTSTEP operating system. As a result, OS X also uses Unix-style LF-only line endings and since then, text files terminated with a CR have become extremely rare. Nevertheless, I think it’s worthwhile to show how Bash would attempt to interpret such files.
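For completeness, a CR-only file can be converted to Unix line endings with tr (filenames here are illustrative; the dos2unix package usually also ships a mac2unix tool for this):
$ tr '\r' '\n' < mac_script.sh > unix_script.sh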
On JetBrains products (PyCharm, PHPStorm, IDEA, etc.), you'll need to click on CRLF/LF to toggle between the two types of line separators (\r\n and \n).
I was trying to start up my Docker container from Windows and got this:
Bash script and /bin/bash^M: bad interpreter: No such file or directory
I was using Git Bash, and the problem was with my Git config. I did the steps below and it worked; this configures Git to not convert line endings on checkout:
1. git config --global core.autocrlf input
2. Delete your local repository.
3. Clone it again.
Many thanks to Jason Harmon in this link:
https://forums.docker.com/t/error-while-running-docker-code-in-powershell/34059/6
Before that, I tried these, which didn't work:
dos2unix scriptname.sh
sed -i -e 's/\r$//' scriptname.sh
sed -i -e 's/^M$//' scriptname.sh
If you're using the read command to read from a file (or pipe) that is (or might be) in DOS/Windows format, you can take advantage of the fact that read will trim whitespace from the beginning and ends of lines. If you tell it that carriage returns are whitespace (by adding them to the IFS variable), it'll trim them from the ends of lines.
In bash (or zsh or ksh), that means you'd replace this standard idiom:
IFS= read -r somevar # This will not trim CR
with this:
IFS=$'\r' read -r somevar # This *will* trim CR
(Note: the -r option isn't related to this, it's just usually a good idea to avoid mangling backslashes.)
If you're not using the IFS= prefix (e.g. because you want to split the data into fields), then you'd replace this:
read -r field1 field2 ... # This will not trim CR
with this:
IFS=$' \t\n\r' read -r field1 field2 ... # This *will* trim CR
If you're using a shell that doesn't support the $'...' quoting mode (e.g. dash, the default /bin/sh on some Linux distros), or your script even might be run with such a shell, then you need to get a little more complex:
cr="$(printf '\r')"
IFS="$cr" read -r somevar # Read trimming *only* CR
IFS="$IFS$cr" read -r field1 field2 ... # Read trimming CR and whitespace, and splitting fields
Note that normally, when you change IFS, you should put it back to normal as soon as possible to avoid weird side effects; but in all these cases, it's a prefix to the read command, so it only affects that one command and doesn't have to be reset afterward.
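Putting this together, a small sketch that reads a possibly CRLF-terminated file line by line without keeping the stray CR (data.txt is a hypothetical input file):
while IFS=$'\r' read -r line; do
  printf '[%s]\n' "$line"    # the brackets would look broken on screen if a CR had survived
done < data.txt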
Coming from a duplicate, if the problem is that you have files whose names contain ^M at the end, you can rename them with
for f in *$'\r'; do
mv "$f" "${f%$'\r'}"
done
You probably want to fix whatever caused these files to have broken names in the first place (probably a script which created them should be dos2unixed and then rerun?) but sometimes this is not feasible.
The $'\r' syntax is Bash-specific; if you have a different shell, maybe you need to use some other notation. Perhaps see also Difference between sh and bash
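In a POSIX shell without $'...' quoting, the same rename can be sketched by building the CR character with printf first:
cr=$(printf '\r')
for f in *"$cr"; do
  mv -- "$f" "${f%"$cr"}"
done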
Since VS Code is being used, we can see CRLF or LF in the bottom right depending on which is in use, and clicking on it lets us switch between them.
We can also use the "Change End of Line Sequence" command from the command palette. Whatever's easier to remember, since they're functionally the same.
For Notepad++ users, this can be solved from the menu: Edit > EOL Conversion > Unix (LF).
One more way to get rid of the unwanted CR ('\r') character is to run the tr command, for example:
$ tr -d '\r' < dosScript.py > nixScript.py
I ran into this issue when I use git with WSL.
Git has a feature where it changes the line endings of files according to the OS you are using. On Windows it makes sure the line endings are \r\n, which is not compatible with Linux, which uses only \n.
You can resolve this problem by adding a file named .gitattributes to your Git root directory and adding lines like the following:
config/* text eol=lf
run.sh text eol=lf
In this example, all files inside the config directory, as well as the run.sh file, will get LF-only line endings.
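Files that were checked out before the .gitattributes change keep their old endings until Git rewrites them; on Git 2.16 or newer, something like this should renormalize the working tree (commit or stash local changes first):
git add --renormalize .
git status    # lists the files whose line endings were rewritten in the index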
The simplest way on macOS / Linux: create a file using the touch command, open it with the vi or vim editor, paste your code and save. This would automatically remove the Windows characters.
If you are using a text editor like BBEdit, you can do it from the status bar. There is a selector where you can switch.
For IntelliJ users, here is the solution for writing Linux scripts:
Use LF - Unix and macOS (\n)
Scripts may call each other.
An even better magic solution is to convert all scripts in the folder/subfolders:
find . -name "*.sh" -exec sed -i -e 's/\r$//' {} +
You can use dos2unix too but many servers do not have it installed by default.
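Where dos2unix is installed, the same bulk conversion can be done with it instead of sed:
find . -name "*.sh" -exec dos2unix {} +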
For the sake of completeness, I'll point out another solution which can solve this problem permanently without the need to run dos2unix all the time:
sudo ln -s /bin/bash `printf 'bash\r'`

Set and get BIOS settings using iLO

How can I set/get BIOS settings using iLO connection, I need to automate the configuring of BIOS setting that I do manually. It does not matter what programming/scripting language to use, and also I have a wide variety of machines vendors(IBM, HP, DELL) so let me know how to do that on any of them.
For HP, use conrep!
description:
command-line utility for capturing/applying BIOS settings using xml definition files.
capture current settings to xml file
/opt/hp/conrep -s -f <filename>
apply settings from xml input file
/opt/hp/conrep -l -f <filename>
help
Usage /opt/hp/conrep -s | -l [-f output filename] [-x xml configuration filename] [-?]
-s Saves the current configuration to a file.
-l Loads configuration setting from a file.
-f Name of the output file.
-x Name of the XML definition file. Provide the /opt/hp/conrep.xml full path.
If not present, the XML configuration will default to conrep.xml.
If not present, the output filename defaults to conrep.dat.
Error Codes:
0 - Success
1 - Bad XML File
2 - Bad Data File
4 - Admin Password set
5 - No XML Tag
6 - Platform is not supported with current XML definition file.
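A rough capture-and-replicate workflow (hostnames and filenames are placeholders) would be to capture once on a reference server, copy the file over, and apply it on each target, checking the exit status against the table above:
# on the reference server
/opt/hp/conrep -s -f /tmp/bios-reference.dat || echo "capture failed with code $?"
# copy /tmp/bios-reference.dat to each target server, then on the target:
/opt/hp/conrep -l -f /tmp/bios-reference.dat || echo "apply failed with code $?"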

How to set PATH variable in crontab via whenever gem

Is it possible to set the PATH or SHELL variable in a crontab via the whenever schedule.rb file?
# here I want to set the PATH and SHELL variable somehow
every 3.hours do
  # some cronjob
end
I want this output in my crontab after my capistrano deploy:
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11
# some cronjobs
OK, it seems I found the solution. I found it here: https://gist.github.com/jjb/950975
I will update this answer when I have tested it.
I had to put this into my schedule.rb:
# If your ruby binary isn't in a standard place (for example if it's in /usr/local/bin,
# because you installed it yourself from source, or from a third-party package like REE),
# this tells whenever (or really, the rails runner) where to find it.
env :PATH, '/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin'
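To also get the SHELL line from the question into the generated crontab, whenever's env helper writes plain NAME=value lines at the top of the output, so (untested here) a sketch of the full schedule.rb would be:
# schedule.rb
env :SHELL, '/bin/bash'
env :PATH, '/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11'

every 3.hours do
  # some cronjob
end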
You are already doing it when running zenity, where you set DISPLAY, LANG, etc.
If you want to set the shell, set it in the first line of /home/username/script/script1.sh using #!/bin/bash.
If you want to set the path, one way to do it is to set it before running the command:
5 9-20 * * * PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11 /home/username/script/script1.sh > /dev/null
An alternative/better way is to create a simple wrapper script like so:
#!/bin/bash
export PATH=/usr/local/bin:/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/bin/X11
# Absolute path to this script
SCRIPT=$(readlink -f "$0")
# Absolute directory this script is in
SCRIPTPATH=$(dirname "$SCRIPT")
# Make sure we are in the same directory as script1.sh - useful in case the script
# assumes it is running from the directory it lives in and makes relative directory/file references
cd "$SCRIPTPATH"
# Run the final script, passing through all parameters that were passed to the wrapper script
/home/username/script/script1.sh "$@"
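The crontab entry then only needs to call the wrapper (the wrapper path below is a placeholder for wherever you save it):
5 9-20 * * * /home/username/script/wrapper.sh > /dev/null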

Remove line in host's known_host file through Vagrant

I've narrowed down this question - regarding a MySQL over SSH connection only working once - to a conflicting line in my host computer's known_hosts file.
Essentially, I can not get into my Database GUI of choice because the key is different for the same IP address (after re-provisioning, reloading, etc.).
Once I delete any offending lines, I can get in just fine.
So, through Vagrant's shell command (that I'm provisioning with) how can I modify the host machine's ~/.ssh/known_hosts file?
EDIT:
I found a temp fix that involves adding/creating a ~/.ssh/config file (this involves using a private IP address):
Host 192.168.*.*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
This should let you in. It's not a real fix, as this kind of workaround can be a security concern. Look below for a much better answer.
Sorry for taking you away from what you need!
Changing HOST files from Vagrantfile:
What you actually want is very simple. The Vagrantfile is interpreted by Vagrant each time you run a vagrant command. It is regular Ruby code, so if you want to change a HOST file, all you need to do is put Ruby code that performs this change in the Vagrantfile. Here's the example code I've put at the end of my Vagrantfile:
require 'tempfile'
require 'fileutils'

path = File.expand_path('~/.ssh/known_hosts')  # expand ~ explicitly; File.open will not do it
temp_file = Tempfile.new('foo')
begin
  File.open(path, 'r') do |file|
    file.each_line do |line|
      if line !~ /REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE/ then
        temp_file.puts line
      end
    end
  end
  temp_file.rewind
  FileUtils.mv(temp_file.path, path)
ensure
  temp_file.close
  temp_file.unlink
end
Note to edit the code above by putting your own value for REGEX_OF_LINE_YOU_WANT_TO_EXCLUDE.
Hope I at least partially fixed my mistake by providing this answer :)
Original Answer
For anyone's (mine as well) future reference, I'm leaving part of the answer that refers to changing GUEST OS files or copying files to GUEST OS:
Vagrant provides couple of provisioners.
Solution number 1: For simple file copying you may use the Vagrant file provisioner. The following code in your Vagrantfile would copy the file ~/known_hosts.template from your host system to the VM's file /home/vagrant/.ssh/known_hosts:
# Context -------------------------------------------------------
Vagrant.configure('2') do |config|
  # ...

  # This is the section to add ----------------------------------
  config.vm.provision :file do |file|
    file.source = '~/known_hosts.template'
    file.destination = '/home/vagrant/.ssh/known_hosts'
  end
  #--------------------------------------------------------------
end
The file provisioner is poorly documented on the Vagrant site, and we've got to thank @tmatilai, who answered a similar question on Server Fault.
Keep in mind that you should use absolute paths in destination field, and that copying is being performed by vagrant user, so file will have vagrant's owner:group.
Solution number 2: If you need to copy a file with root privileges, or really have to change the file without using templates, consider using the well documented shell provisioner. File copying in this case would work only if you have the file placed in a folder visible from within the VM (guest OS), but you have all the power of shell.
Solution number 3: Though it would be overkill in this case, you might use the very powerful Chef or Puppet as provisioners and perform the action via one of those frameworks. I know nothing about Puppet, and can talk only about Chef. The cookbook would be very simple: create a template file (.erb) with the desired content, and then your recipe just places the file where necessary. Of course you'll need a box with the Chef packages in it.
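For solution number 3, the recipe described above would amount to little more than a template resource; a hedged sketch (cookbook, template and path names are hypothetical):
# recipes/default.rb - drop a pre-built known_hosts generated from an .erb template
template '/home/vagrant/.ssh/known_hosts' do
  source 'known_hosts.erb'   # lives in the cookbook's templates/ directory
  owner  'vagrant'
  group  'vagrant'
  mode   '0644'
end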
I use plain ssh to enter my machines in order to do provisioning:
$ ssh-add ~/.vagrant.d/insecure_private_key
With this setup the known_hosts file is bound to give problems, but I do not want to turn off host key checking, as I use that for external hosts too. Given my hosts include the pattern foo, I did this on the shell:
$ sed -i '' '/foo/d' ~/.ssh/known_hosts
Remove the empty '' argument after -i if you have a GNU/Linux host instead of BSD/macOS.
You can then install the vagrant trigger plugin:
$ vagrant plugin install vagrant-triggers
And add a snippet like the following to the Vagrantfile (mind the backticks):
config.trigger.after :destroy do
  puts "Removing known host entries"
  `sed -i '' '/foo/d' ~/.ssh/known_hosts`
end
This is what I do:
I define the IP_ADDRESS and DOMAIN_NAME variables at the top of the Vagrantfile.
Then inside Vagrant.configure I add:
config.trigger.after :destroy do |trigger|
  trigger.info = "Removing known_hosts entries"
  # a single run entry: assigning trigger.run twice would keep only the last command
  trigger.run = {inline: "bash -c 'ssh-keygen -R #{IP_ADDRESS}; ssh-keygen -R #{DOMAIN_NAME}'"}
end