Replacing a shared library on AIX

I have a shared library (.so) on AIX.
I know which processes are using it,
and I have stopped all the concerned processes.
I need to replace the above .so file (with the new library) using the cp -p command.
But that command gives an error:
"Cannot remove the running program"
When I try "cp -p -f" it works fine,
but I need to use only "cp -p".
Any idea on this matter will be helpful.
Thanks.

I used the slibclean command, then "cp -p" worked fine.

The safe way is to use a temporary file:
cp -p /from/libfoo.so /target/libfoo.so.tmp
mv -f /target/libfoo.so.tmp /target/libfoo.so
You don't have to stop any program to do this, and there won't be any moment when libfoo.so is missing from the target directory.
Also, it doesn't hurt to call slibclean occasionally to keep memory clean. Use 'genkld | wc -l' before and after it to check whether it has done anything.
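For example, a quick shell sketch of that check (genkld lists the shared objects currently cached; slibclean unloads the ones no longer in use):
before=$(genkld | wc -l)   # count cached shared objects
slibclean                  # unload libraries no longer in use
after=$(genkld | wc -l)
echo "slibclean freed $((before - after)) cached entries"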

RSYNC and folder hierarchy

After making a full forensic copy of a hard drive using dd, I would like to keep up with changes between the original and the backup disk, so I started using rsync.
Whenever I run
rsync -a -v -n --progress /media/drive1 /media/drive2
the command starts listing all files contained in drive1, even though only a couple of them have changed since I ran dd.
Trying that on a single folder
rsync -a -v -n --progress /media/drive1/folder /media/drive2
works fine and just displays the new files in that folder - those which are not contained in /media/drive2/folder.
However, executing the command on the level of both volumes
rsync -a -v -n --progress /media/drive1 /media/drive2
does not account for the differences, contrary to the widely available documentation, but lists all the files that are already on both drives.
What is my mistake?
The way rsync treats its source and destination paths is easy to get wrong. When you use the command:
rsync -a -v -n --progress /media/drive1 /media/drive2
...it tries to sync the drive1 folder into drive2; that is, it creates and populates /media/drive2/drive1. When you add "/folder" to the source path, it works as expected because then it's trying to sync with /media/drive2/folder, which is what you want.
Fortunately, the solution is easy: add "/" to the end of the source path, which tells it to sync the contents of drive1 into drive2, rather than the folder itself:
rsync -a -v -n --progress /media/drive1/ /media/drive2
BTW, the -n flag you're already using is the short form of --dry-run; keep it in place until you've confirmed the command does what you want, then drop it to run "for real". You'll probably also have to delete /media/drive2/drive1.
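To make the difference concrete, a minimal sketch using the question's paths (keep -n / --dry-run until the output looks right):
rsync -a -v -n --progress /media/drive1 /media/drive2    # nests a copy under /media/drive2/drive1
rsync -a -v -n --progress /media/drive1/ /media/drive2   # syncs drive1's contents directly into drive2
rm -rf /media/drive2/drive1                              # removes the accidentally nested copy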

How to add EnvironmentFile directive to systemctl using Docker with centos7/httpd base image

I am not sure if this is possible without creating my own base image, but I use environment variables in /etc/environment on our servers and typically make them accessible to apache by doing the following:
$ printf "HTTP_VAR1=var1-value\n\
HTTP_VAR2=var2-value"\
>> /etc/environment
$ mkdir /usr/lib/systemd/system/httpd.service.d
$ printf "[Service]\n\
EnvironmentFile=/etc/environment"\
> /usr/lib/systemd/system/httpd.service.d/environment.conf
$ systemctl daemon-reload
$ systemctl restart httpd
$ reboot
The variables are then available in any PHP calls to getenv('HTTP_VAR1'); and so on. However, when running this from a Dockerfile I get dbus errors on the systemctl commands. Without the systemctl commands, the variables do not seem to be available to Apache, as the new EnvironmentFile directive doesn't take effect. My Dockerfile snippet:
FROM centos/httpd:latest
RUN printf "HTTP_VAR1=var1-value\n\
HTTP_VAR2=var2-value"\
>> /etc/environment
RUN mkdir /usr/lib/systemd/system/httpd.service.d &&\
printf "[Service]\n\
EnvironmentFile=/etc/environment"\
> /usr/lib/systemd/system/httpd.service.d/environment.conf
RUN systemctl daemon-reload &&\
systemctl restart httpd
COPY entrypoint.sh /entrypoint.sh
So I happened upon the answer to the issue today. It seems that systemd drops backslashes inside single quotes, and from what I saw in testing it may affect double quotes too. I found the systemd development mailing list thread from April 2014 where patching the issue was being discussed. It seems the fix never made it in, so we have to work around it.
In attempting to work around it I noticed some issues with actually reading the variables at all. Sometimes Apache or php-cli would get the correct variables, and sometimes not at all; it took a bit of sleuthing to figure out what was going on. Then I started reading up on systemd's EnvironmentFile directive to see if there was more to gain from the docs. It turns out it does not evaluate bash, so export won't work: it expects a plain text file with variable assignments, and herein lies one of the main issues that might keep this from being resolved.
I then devised a workable solution. Using systemd's ExecStartPre directive I am able to run a script on startup of the httpd service. The script reads in the environment file and writes a new plain-text one that is then used by httpd's systemd unit. Here is the code:
Firstly, I moved my variables to the /etc/profile.d/ directory rather than the /etc/environment file.
file: /etc/profile.d/environment.sh
This is where we store all our environment variables; it gets sourced on all interactive shell logins. In the rarer cases where we need these variables available non-interactively, we can either pass the --login flag to /bin/bash or source the file manually (a sketch of both follows the file contents).
export HTTP_VAR1=var1-value-with-a-back\slash
export HTTP_VAR2=var2-value
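A minimal sketch of those two options (assuming a typical setup where a login shell reads /etc/profile, which in turn sources /etc/profile.d/*.sh):
/bin/bash --login -c 'echo "$HTTP_VAR1"'
. /etc/profile.d/environment.sh && echo "$HTTP_VAR1"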
file: /usr/lib/systemd/system/httpd.service.d/environment.conf
Our drop-in unit file extends how the httpd service works. I add a script that runs before httpd starts up; it gets run on every httpd start and restart. The script generates a plain text file at /etc/profile.d/environment.env, which we subsequently tell systemd to load as an EnvironmentFile.
[Service]
ExecStartPre=/usr/bin/bash -c "/usr/local/bin/generate-plain-environment-file"
EnvironmentFile=/etc/profile.d/environment.env
file: /usr/local/bin/generate-plain-environment-file
Here is the script I am using. I whipped it together quickly, so I don't think it is especially robust and it could be better. It just removes the export from the beginning of each line and then escapes any backslashes, since systemd drops single backslashes. A more proper solution might be to have bash evaluate each line and obtain each variable's value (in case the assignments reference other variables or use other bash constructs), then output them as plain-text name=value assignments; however, that is not part of my use case, so I didn't bother.
#!/bin/bash
cd /etc/profile.d/
rm -f "./environment.env"
# Read environment.sh line by line; the || [[ -n "$line" ]] also catches
# a final line that lacks a trailing newline.
while IFS='' read -r line || [[ -n "$line" ]]; do
    # Strip the leading "export " and double up backslashes,
    # since systemd drops single backslashes.
    echo "${line}" | sed 's/^export //' | sed 's/\\/\\\\/g' >> "./environment.env"
done < "./environment.sh"
file: /etc/profile.d/environment.env
This is the resulting file generated by the described script.
HTTP_VAR1=var1-value-with-a-back\\slash
HTTP_VAR2=var2-value
The conclusion is that I now have two files with the same content, but I only need to maintain one; the other is generated each time we restart httpd. Also, we fix the backslash issue in the process. Hurray!

scp command - transfer folder over ssh

I have an Arduino Yun and want to set up the server for the Yun.
So what I want is to copy a folder that contains a .py file and an index.html to my Yun.
I used the Mac terminal to do this operation;
the command looks like this:
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
The terminal then asked for the password;
after I typed it, it shows:
scp: /mnt/sda1/LobsterHeartRate: Not a directory
I didn't type /mnt/sda1/LobsterHeartRate, so why does it show this error?
Your code
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
requires that the remote directory /mnt/sda1 exists, which appears not to be the case here. Check it using ssh root@192.168.240.1 ls /mnt/sda1.
scp is a simple tool: it does not allow you to rename directories on the fly, and the target directory must exist. You might try
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/
ssh root@192.168.240.1 mv /mnt/LobsterHeartRate /mnt/sda1
or similar, if that suits your needs. But for copying more files, rsync is usually more suitable. Check its manual page and give it a try next time.
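For example, a minimal rsync sketch for the same transfer (an illustration, assuming rsync is installed on both the Mac and the Yun and that /mnt/sda1 exists as a directory; -a preserves permissions and timestamps):
rsync -a /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1/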
As @Jens Höpken notes, your post is a bit sparse. But reading between the lines, I suspect that LobsterHeartRate is a DIRECTORY on your local system but a FILE named LobsterHeartRate on your target system. This might be happening right at the top of the directory tree, or perhaps you have directories/files of the same name further down the tree. scp -rv might help resolve any confusion here.
Beware: scp -r resolves symbolic links. If you want to preserve symlinks you need to do something else. For historic reasons I use the following, though cpio with a find front-end opens up interesting possibilities for fine-grained file selections.
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -xf -'
For a safe "dry run" you could change the -xf to a -tf. The && chains are required to prevent bad things from happening if any prior command fails.
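That dry run would look like this; tar -tf only lists the archive contents instead of extracting them:
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -tf -'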
Disclaimer: any debugging is left as an exercise for the student.

How to check if scp command is available?

I am looking for a multiplatform solution that would allow me to check whether the scp command is available.
The problem is that scp does not have a --version command-line option, and when called without parameters it returns exit code 1 (an error).
Update: in case it wasn't clear, by multiplatform I mean a solution that will work on Windows, OS X and Linux without requiring me to install anything.
Use the command which scp. It lets you know whether the command is available, and its path as well. If scp is not available, nothing is returned.
#!/bin/sh
scp_path=`which scp || echo NOT_FOUND`
if test "$scp_path" != "NOT_FOUND"; then
    if test -x "${scp_path}"; then
        echo "$scp_path is usable"
        exit 0
    fi
fi
echo "No usable scp found"
exit 1
sh does not have a built-in which, so we rely on a system-provided which command. I'm not entirely sure the -x check is needed - on my system, which actually verifies that the found file is executable by the user, but this may not be portable. In the rare case where the system has no which command, one can define a which function here.
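As an alternative sketch that avoids depending on an external which binary: POSIX sh provides the command -v built-in, which prints the command's path and returns a non-zero status when the command is missing (note it may also report shell functions or aliases, not only executable files):
#!/bin/sh
if scp_path=$(command -v scp); then
    echo "$scp_path is usable"
else
    echo "No usable scp found" >&2
    exit 1
fi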

Portable makefile creation of directories

I'm looking to save myself some effort further down the line by making a fairly generic makefile that will put together relatively simple C++ projects for me with minimal modifications required to the makefile.
So far I've got it so it will use all .cpp files in the same directory and specified child directories, place all of these within a matching structure in an obj subdir, and place the resulting file in another subdir called bin. Pretty much what I want.
However, getting the required obj and bin directories created if they don't exist is proving awkward cross-platform - specifically, I'm just testing with Windows 7 & Ubuntu (can't remember the version), and I can't get it to work on both at the same time.
Windows misreads mkdir -p dir and creates a -p directory, and obviously the two platforms use \ and / respectively as the path separator - I get errors when using the wrong one.
Here are a few selected portions of the makefile that are relevant:
# Manually edited directories (in this example with forward slashes)
SRC_DIR = src src/subdir1 src/subdir2
# Automagic object directories + the "fixed" bin directory
OBJ_DIR = obj $(addprefix obj/,$(SRC_DIR))
BIN_DIR = bin
# Example build target
debug: checkdirs $(BIN)
# The actual directory creation
checkdirs: $(BIN_DIR) $(OBJ_DIR)
$(BIN_DIR):
	@mkdir $@
$(OBJ_DIR):
	@mkdir -p $@
This has been put together by me over the last week or so from things I've been reading (mostly on Stack Overflow), so if I'm following some horribly bad practice or anything of that nature, please let me know.
Question in a nutshell:
Is there a simple way to get this directory creation to work from a single makefile in a way that provides as much portability as possible?
I don't know autoconf; every experience I've had with it has been tedious. The problem with zwol's solution is that on Windows mkdir returns an error if the directory already exists, unlike mkdir -p on Linux. This could break your make rule. The workaround is to ignore the error with the - flag before the command, like this:
-mkdir dir
The problem with this is that make still throws an ugly warning for the user. The workaround for this is to run an "always true" command after the mkdir fails as described here, like this:
mkdir dir || true
The problem with this is that Windows and Linux have different syntax for true.
Anyway, I spent too much time on this. I wanted a makefile that worked in both POSIX-like and Windows environments. In the end I came up with the following:
ifeq ($(shell echo "check_quotes"),"check_quotes")
WINDOWS := yes
else
WINDOWS := no
endif
ifeq ($(WINDOWS),yes)
mkdir = mkdir $(subst /,\,$(1)) > nul 2>&1 || (exit 0)
rm = $(wordlist 2,65535,$(foreach FILE,$(subst /,\,$(1)),& del $(FILE) > nul 2>&1)) || (exit 0)
rmdir = rmdir $(subst /,\,$(1)) > nul 2>&1 || (exit 0)
echo = echo $(1)
else
mkdir = mkdir -p $(1)
rm = rm $(1) > /dev/null 2>&1 || true
rmdir = rmdir $(1) > /dev/null 2>&1 || true
echo = echo "$(1)"
endif
The functions/variables are used like so:
rule:
	$(call mkdir,dir)
	$(call echo, CC $@)
	$(call rm,file1 file2)
	$(call rmdir,dir1 dir2)
Rationale for the definitions:
mkdir: Fix up the path and ignore any errors.
rm: In Windows, del doesn't delete any files if one of the named files is in a directory that doesn't exist. For example, if you try to delete a set of files and dir/file.c is in the list, but dir doesn't exist, no files will be deleted. This implementation works around that issue by invoking del once for each file.
rmdir: Fix up the path and ignore any errors.
echo: The output's appearance is preserved and doesn't show the extraneous "" in Windows.
I spent a lot of time on this. Perhaps I would have been better off spending my time learning autoconf.
See also:
OS detecting makefile
Windows mkdir always does what Unix mkdir does with the -p switch on. And you can deal with the backslash problem with $(subst). So, on Windows, you want this:
$(BIN_DIR) $(OBJ_DIR):
	mkdir $(subst /,\\,$@)
and on Unix you want this:
$(BIN_DIR) $(OBJ_DIR):
	mkdir -p -- $@
Choosing between these is not practical to do within a makefile. This is what Autoconf is for.
As a side note, never, ever use the @command feature in your makefiles. There will come a day when you need to debug your build process on a machine you do not have direct access to, and on that day, you will regret it.
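To illustrate with a hypothetical rule, shown in two variants (use one or the other, not both):
# Silenced: if mkdir fails, you see only its error message, not what make ran.
$(OBJ_DIR):
	@mkdir -p $@
# Echoed: make prints the command before running it, which is what you want
# when debugging a build you cannot reproduce locally.
$(OBJ_DIR):
	mkdir -p $@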
I solved the portability problem by creating a Python script called mkdir.py and calling it from the Makefile. A limitation is that Python must be installed, but this is most likely true for any version of UNIX.
#!/usr/bin/env python
# Cross-platform mkdir command.
import os
import sys

if __name__ == '__main__':
    if len(sys.argv) != 2:
        sys.exit('usage: mkdir.py <directory>')
    directory = sys.argv[1]
    try:
        os.makedirs(directory)
    except OSError:
        # The directory already exists; mimic mkdir -p and succeed silently.
        pass
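A minimal sketch of the corresponding makefile rule (assuming mkdir.py sits next to the Makefile and python is on the PATH):
$(BIN_DIR) $(OBJ_DIR):
	python mkdir.py $@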