Find UID from EUID - AIX

I have an AIX 5.3 host that we log in to, using the pbrun tool when needed to become root. The question is: how do I find, from the command line, which user I logged in as before becoming this privileged/root user? In other words, if I am not wrong, how do I find my UID from my current EUID? I tried whoami and who am i; both give root as the output.

"who am i" is coming from utmp. If utmp shows you as root then your pbrun tool must be changing it from what it was when you first logged in.
You could do:
ps l $$
which prints out a line with the PID and PPID. Take the PPID and do that again:
ps l <PPID>
The UID column is your numeric user id. If the PPID shows as 1, then pbrun did an exec rather than a fork/exec (which implies that it is a function or alias within your shell). In that case, you could fall back on "last", which will show who logged in to which tty at what time.
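If you want to walk the whole parent chain without repeating this by hand, a small loop along these lines should work (my own sketch, not part of the original suggestion; it relies on the POSIX ps -o ppid= syntax, which AIX provides):
pid=$$
while [ "$pid" -gt 1 ]; do
    ps l "$pid"                               # long listing: UID, PID, PPID, ...
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')  # step up to the parent
done
The first UID other than 0 in that chain should be the user you originally logged in as.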
======
Another idea. You can get the terminal the program is executing on via ps. This is called the controlling terminal. You can also get it via the "tty" command:
tty
/dev/pts/18
Now, feed that to "last" but remove the leading /dev/ part and take the first hit:
last pts/18 | head -1
myname pts/18 myhost.mydomain.com Nov 14 10:22 still logged in.
That is the last person to log into that particular terminal. Will that work?
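For convenience, the two steps can be combined into a single command (just my combination of the commands above):
last "$(tty | sed 's|^/dev/||')" | head -1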

Related

Replace the kscreenlocker_greet

Apologies for my English.
I need the following:
If you open the menu Start -> Shutdown -> Lock, you will see the lock screen.
Now press Ctrl+Alt+F2 and log in on this terminal as the root user.
Looking at the running processes with ps -eH | less, we can see the process tree and who started whom.
You can find the following:
ksmserver -> kscreenlocker_greet
If you look at how kscreenlocker_greet was launched, using ps -aux | grep greet, you will see the following:
/usr/libexec/kf5/kscreenlocker_greet --immediateLock --ksldfd
Is it possible (I hope it is not hard-coded) to replace this command with another command or application to launch? If so, where can it be found? This is very important to me!
I would like to replace this command call with another application.

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on a CI/CD script to deploy a complex environment into another environment. We have multiple technologies involved, and I want to optimize this script because it is taking too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment for each application of a specific project. I have tried to find a quick way to do so, but so far this is the only solution I have found:
oc rollout history dc -n <Project_name>
This will give me the following output
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find, for each application, the latest revision that has the status "Complete".
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
Then for each application and each revision I do:
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest revision that is Complete, i.e. neither running nor failed.
This gives me the output with the information I need, which is the version of the EAR and the version of the configuration that were used to create the image used for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I require?
Unless I am mistaken, it looks like there is no one-liner that can get this information for the currently running and accessible applications.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets the list of deployments, feeds it to awk to extract the name ($1) and revision ($2), builds your command to extract the details, and finally sources it from standard input to execute it. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
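If you would rather use xargs after all, an equivalent pipeline (my variant, relying on the same assumption that the second column of oc get dc is the current revision) would be:
oc get dc -a --no-headers | awk '{print $1, $2}' | xargs -n 2 sh -c 'oc rollout history dc "$0" --revision="$1"'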
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea: with expressions like .spec.template.spec.containers[0].env you can reach specific variables, labels, etc. Unfortunately, the jsonpath output is not available with oc rollout history.
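If you also want the revision number of the currently active deployment in the same pass, the DeploymentConfig status should expose it (assuming the usual .status.latestVersion field):
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.latestVersion}{"\n"}{end}'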
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks

How can I view all comments posted by users in a Bitbucket repository

On the repository home page, I can see comments posted in the recent activity section at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way to do this?
Comments of pull requests, issues and commits can be retrieved using bitbucket’s REST API.
However it seems that there is no way to list all of them at one place, so the only way to get them would be to query the API for each PR, issue or commit of the repository.
Note that this takes a long time, since bitbucket has seemingly set a limit to the number of accesses via API to repository data: I got Rate limit for this resource has been exceeded errors after retrieving around a thousand results, then I could retrieve about only one entry per second elapsed from the time of the last rate limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the resulting URL (referred to below as $url) by verifying that the repository information is correctly retrieved from it as JSON data: curl -u username $url.
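As an illustration (my own sketch, assuming the remote is named origin), the two values can be extracted from the remote URL and the resulting URL checked in one go:
remote=$(git remote get-url origin)    # e.g. https://bitbucket.org/{repoOwnerName}/{repoName}.git
url="https://api.bitbucket.org/2.0/repositories/$(printf '%s' "$remote" | sed 's|.*bitbucket\.org[:/]||; s|\.git$||')"
curl -u username "$url"                # should return the repository information as JSON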
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated.
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
pw=$1
id=$2
json=$(curl -s -u username:"$pw" \
    https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
printf '%s' "$json" | grep -q '"type": "error"' \
    && printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
    && printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility for the error mentioned above:
A) Prepare the input data. From the local repository, generate the list of commits as wanted (run git fetch -a first to update the local git repo if needed); see git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter – so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, in one line; also see security considerations below. The processing is parallelised onto 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the two commands below to update the input and output files. Wait a while for the request limit to increase, then re-execute the xargs command above and repeat until all the data has been processed (that is, until wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits’ comments, but for the following two adjustments:
Edit the script to replace in the URL commit by pullrequests or by issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but it is not guaranteed to work, and even if it does, it leaves it visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ); although it seems that there are systems that let all users access this information (do a ls -l /proc/*/environ to check the environment files’ permissions).
In the script, simply replace the lines pw=$1 and id=$2 with id=$1, then put pw="$passwd" before xargs in the command line invocation. This will make the environment variable pw visible to xargs and all of its descendant processes, that is the script and its children (curl, grep, cut, etc.), which may or may not read the variable. curl does not read the password from the environment, but if its password-hiding trick mentioned above works, then it might be good enough.
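Concretely, the change described above amounts to this (a sketch, with everything else kept as before):
# in getNComments.sh, replace the two lines "pw=$1" and "id=$2" with:
id=$1
# and invoke it with pw set only for this pipeline:
pw="$passwd" xargs -P 15 -L 1 ./getNComments.sh < remaining.id > commits.temp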
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
A simple alternative to an environment variable that will not appear in ps output in any case is via a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl’s -K option as in the paragraph above. In the command line invocation replace $passwd with the filename.
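Putting the file-based variant together (a sketch; username:password is of course a placeholder for the real credentials):
touch authfile && chmod 600 authfile            # restrict access before writing the secret
printf '%s\n' 'username:password' > authfile    # the first line holds the credentials
< remaining.id xargs -P 15 -L 1 ./getNComments.sh authfile > commits.temp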
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe&
pid=$!
exec 3<pipe
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all of them without drilling into each commit, I will update this answer.

Expect gets stuck sometimes during login

I have the following script. Sometimes it runs fine and other times it gets stuck. What could be wrong here?
#!/usr/bin/env expect
# set Variables
set timeout 60
set ipaddr [lindex $argv 0]
# start telnet connection
spawn telnet $ipaddr
match_max 100000
# Look for user prompt
expect "username:*"
send -- "admin\r"
expect "password:?"
# Send pass
send "thisisthepass\n"
# look for WWP prompt
expect ">"
send "sendthiscommand\r"
expect ">"
send "exit\r"
interact
The script runs fine to the end, but sometimes it gets stuck during login. This behaviour occurs even with the same IP: for example, it may succeed on 1 out of 5 tries for the same IP.
I have tried adding a sleep between sending the username and the password, but it is still the same. I have also tried sending the password string directly after the username, without an expect in between, but the result is the same: sometimes the script runs fine, but other times it asks for the password again as if it were incorrect...
username: admin
password:
username:
Things I would do:
change send "thisisthepass\n" to send "thisisthepass\r"
include exp_internal 1 somewhere at the top of your script, and see what is going on when you have a failed attempt
exp_internal 1 will enable debugging with lots of good information on what is going on with expect's pattern matching. You can share it here and I'll be glad to take a look at it.
Are you sure the password prompt has an extra character after it (the ? in expect "password:?")? Is it always there? Any chance different devices have slightly different password prompts?

Expect 'send' not working as expected

I am a beginner with expect... I have written a small script which has to log in to a router and execute a few commands.
But somehow I find that even though I have used send "admin show platform" three times, it only works twice for me: I only get the output of admin show platform twice.
Can anyone check the code and point out where I am going wrong?
Gsaxena#
Gsaxena#
Gsaxena# ./testTool
spawn /usr/bin/ksh
telnet 5.28.7.103
$ telnet 5.28.7.103
Trying 5.28.7.103...
Connected to 5.28.7.103.
Escape character is '^]'.
User Access Verification
Username:
Username: lab
Password:
RP/0/RP0/CPU0:Billorani#debug ospf ospf1 adj
Mon Oct 14 17:16:06.144 UTC
**RP/0/RP0/CPU0:Billorani#show platform**
Mon Oct 14 17:16:06.416 UTC
Node Type PLIM State Config State
------------- ----------------- ------------------ --------------- ---------------
x/x/x0 xxxxG N/A IN-RESET PWR,NSHUT,MON
**RP/0/RP0/CPU0:Billorani#show platform**
Mon Oct 14 17:16:06.416 UTC
Node Type PLIM State Config State
------------- ----------------- ------------------ --------------- ---------------
x/x/xxx0 xxxxG N/A IN-RESET PWR,NSHUT,MON
RP/0/RP0/CPU0:Billorani#
Gsaxena#
Gsaxena#
Gsaxena#
Gsaxena#
Gsaxena#
#!/usr/bin/expect
set timeout 30
set hostcut "Bil"
sleep 5
set timeout 5
spawn /usr/bin/ksh
send "telnet 5.8.7.103\r"
expect ".*\'\^\]\'\. *"
send "\r"
expect "Username\:"
send "lab\n"
expect "Password\: "
send "lab\n"
sleep 10
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "debug ospf ospf1 adj\n"
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "admin show platform\n"
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "admin show platform\n"
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "admin show platform\n"
I should really be placing this not in an actual answer but in a comment, since I do not have a final answer for you, but it seems comments can only be left by folks who have been around for some time (there's a minimum reputation before you can leave them).
Anyway, what I wanted to suggest was that you add exp_internal 1 somewhere near the start of your script. This will provide a ton of useful debugging information, and will most likely point at what is going on. Feel free to post it here if you need help with it.
I can't tell what is wrong from the information you posted... nothing seems obviously at fault. One thing I would do differently is, instead of spawning a Korn shell process and then sending a telnet command to it, just spawn the telnet command directly (less code, fewer resources). But that is not what is bothering you, so never mind that.
I'm not familiar with the OS you're logging in to... is that Cisco IOS XR? I'm baffled by the fact that you issue an admin show platform command, yet only show platform shows up in your output. Also, what's the deal with the double asterisks (**) that some prompts show while others don't?
One last thing, which may seem dumb, but... can you manually access the device and issue those 4 exact commands, in that exact order?
Regards,
James
In your code, in the line
send "admin show platform\n"
use "\r" instead of "\n".