Accurev: How can I get a list of all users from a stream?

I need to get a list of all users who've contributed to a stream. I think I can just dump the entire history of the stream then parse it for the users like this (see hist for details):
accurev hist -s <stream> -a -fv
but this seems very crude, especially since I'm not interested in the history itself. Is there a more elegant way of doing this?

This works nicely:
accurev hist -p <depot> -s <stream> -a -fv | sed -n 's/.*user: \(.*\)/\1/p' | sort | uniq

You need to run the accurev hist command to obtain this information.
You can add the "-k promote" option to restrict the output to show only promote operations.
You can also use the -fx option to format the output as XML and write a script to generate a simple list of users.
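For example, a minimal sketch combining both options (this assumes the transaction elements in the -fx output carry a user="..." attribute):
accurev hist -p <depot> -s <stream> -a -k promote -fx \
  | grep -o 'user="[^"]*"' | cut -d'"' -f2 | sort -u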

Related

How can I get the disk IO trace with actual input values?

I want to generate a trace file from disk IO, but the problem is that I need the actual input data along with the timestamp, logical address, access block size, etc.
I've been trying to solve the problem by using "blktrace | blkparse" with "iozone" on an Ubuntu VirtualBox environment, but it doesn't seem to work.
There is an option in blkparse for setting the output format to show the packet data, -f "%P", but it does not print anything.
Below is the command that I use:
$> sudo blktrace -a issue -d /dev/sda -o - | blkparse -i - -o ./temp/blktrace.sda.iozone -f "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n"
$> iozone -w -e -s 16M -f ./mnt/iozone.dummy -i 0
In the printing format "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n", everything else is printed correctly, but the "%P" is not printed at all.
Does anyone know why the packet data is not displayed?
Or does anyone know another way to get the disk IO packet data with the actual input values?
As far as I know, blktrace does not capture the actual data; it captures only the metadata. One way to capture the real data is to write your own kernel module. Some students at FIU did that in this paper:
"I/O deduplication: Utilizing content similarity to ..."
I would also ask this question on the linux-btrace mailing list:
http://vger.kernel.org/majordomo-info.html

How can I view all comments posted by users in a Bitbucket repository

On the repository home page, I can see comments posted under recent activity at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way?
Comments of pull requests, issues and commits can be retrieved using Bitbucket's REST API.
However, it seems that there is no way to list all of them in one place, so the only way to get them is to query the API for each PR, issue or commit of the repository.
Note that this takes a long time, since Bitbucket has seemingly set a limit on the number of API accesses to repository data: I got "Rate limit for this resource has been exceeded" errors after retrieving around a thousand results; after that I could retrieve only about one entry per second, counted from the time of the last rate limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the constructed URL (below referred to as $url) by verifying that repository information is correctly retrieved as JSON data from it: curl -u username $url.
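For example (placeholders as above; substitute your own values):
url=https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
curl -u username "$url"    # should print the repository metadata as JSON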
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated.
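If you need the comments themselves rather than just their count, a minimal sketch for walking the pages could look like this (it assumes each paginated response contains a "next" member holding the URL of the following page):
next="$url/commit/{commitHash}/comments"
while [ -n "$next" ]; do
  page=$(curl -s -u username "$next")
  printf '%s\n' "$page"    # process the page's comments here
  next=$(printf '%s' "$page" | grep -o '"next": "[^"]*"' | cut -d'"' -f4)
done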
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
# Usage: getNComments.sh <password> <commitHash>
pw=$1
id=$2
# Fetch only the comment count for this commit (partial response via fields=size).
json=$(curl -s -u username:"$pw" \
    https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
# A soft error (e.g. an unknown commit) is reported but does not stop the batch.
printf '%s' "$json" | grep -q '"type": "error"' \
    && printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
# Anything non-numeric (e.g. a rate-limit error) aborts with a hard error.
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
    && printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility of the error mentioned above:
A) Prepare input data. From the local repository, generate the list of commits as wanted (run git fetch -a first if the local git repo needs updating); see git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter, so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, in one line; also see the security considerations below. The processing is parallelised over 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is, when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the commands below to update the input and output files. Wait a while for the request limit to reset, then re-execute the xargs command above, and repeat until all the data is processed (that is, until wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits' comments, except for the following two adjustments:
Edit the script to replace in the URL commit by pullrequests or by issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but this is not guaranteed to work, and even when it does, the password stays visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ), although it seems that some systems let all users access this information (run ls -l /proc/*/environ to check the environment files' permissions).
In the script, simply replace the lines pw=$1 and id=$2 with id=$1, then prepend pw="$passwd" to the xargs invocation on the command line. This makes the environment variable pw visible to xargs and all of its descendant processes, that is, the script and its children (curl, grep, cut, etc.), which may or may not read the variable. curl does not read the password from the environment, but if its password-hiding trick mentioned above works, then that might be good enough.
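The invocation from step B would then become something like:
< remaining.id pw="$passwd" xargs -P 15 -L 1 ./getNComments.sh > commits.temp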
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
A simple alternative to an environment variable, and one that will not appear in ps output in any case, is a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl's -K option as in the paragraph above. In the command line invocation, replace $passwd with the filename.
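A minimal sketch of this variant (the file name bb-auth is arbitrary):
umask 077    # new files get owner-only permissions, equivalent to chmod 600
printf '%s\n' 'username:password' > bb-auth
< remaining.id xargs -P 15 -L 1 ./getNComments.sh bb-auth > commits.temp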
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe &   # keep feeding the pipe
pid=$!                                                             # remember the writer's PID for cleanup
exec 3<pipe                                                        # hold the pipe open for reading so the writer is not blocked between reads
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all without drilling into each commit, I will update this answer.

AccuRev: How to get all files changed?

I am looking to get the list of files changed between two timestamps.
For example, 2013/11/11 11:10:00 to now.
The accurev hist command gives the files changed on that particular stream, but it does not include the changes that came from the parent stream.
Is there a way to get the list of changes that flowed in from parent streams?
Change the basis time of your child stream to 2013/11/11 11:10:00, then perform a diff by files across the child and parent streams.
AccuRev 6 added some new arguments for the diff command, so the following should do the trick:
accurev diff -a -i -v MyStream -V MyStream -t "2013/11/11 11:10:00-now"
Alternatively, you could try the accurev.py script from the ac2git repo, which will return all the transactions that could have affected your stream. Run it like this:
python accurev.py deep-hist -p MyDepot -s MyStream -t "2013/11/11 11:10:00-now"

Executing batches of commands using redis cli

I have a long text file of redis commands that I need to execute using the redis command line interface:
e.g.
DEL 9012012
DEL 1212
DEL 12214314
etc.
I can't seem to figure out a way to enter the commands faster than one at a time. There are several hundred thousand lines, so I don't want to just pile them all into one DEL command; they also don't all need to run at once.
The following works for me with Redis 2.4.7 on Mac:
./redis-cli < temp.redisCmds
Does that satisfy your requirements? Or are you looking to see if there's a way to programmatically do it faster?
If you don't want to make a file, use echo -e and \n:
echo -e "DEL 9012012\nDEL 1212" | redis-cli
The redis-cli --pipe option can be used for mass insertion. It is available since 2.6-RC4 and in Redis 2.4.14.
For example:
cat data.txt | redis-cli --pipe
More info in: http://redis.io/topics/mass-insert
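For the original question, a minimal sketch for building such a file from a list of keys (assuming keys.txt holds one key per line):
awk '{print "DEL " $1}' keys.txt > data.txt
redis-cli --pipe < data.txt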
I know this is an old thread, but I'm adding this since it seems to have been missed among the other answers, and it works well for me.
A heredoc works well here if you don't want to use echo, explicitly add \n, or create a new file:
redis-cli <<EOF
select 15
get a
EOF

Categorize the output of ec2-describe-images or ec2-describe-instances

Is there any command/tool/script to categorize the bulky output of ec2-describe-images or ec2-describe-instances?
I have a list of around 100 servers, each with every detail. I want to categorize them under suitable headings like RESERVATION, INSTANCE, BLOCKDEVICE, TAG (whatever categories are available in the output).
Did you get this sorted? If not...
run ec2-describe-images with the option --headers to give you the categories
ec2-describe-images --private-key ~/private.key --cert ~/my.crt --region us-west-1 --headers
if you just want certain fields, pipe the output of the above through the Linux command cut, picking the fields (columns) you are after. Say you want ImageID, Name and Architecture; these would be fields 2, 3 and 8 of the output above. Example:
ec2-describe-images --private-key ~/private.key --cert ~/my.crt --region us-west-1 --headers | cut -f2,3,8 -s
Doing the same for ec2-describe-instances would be similar.
It's a job for awk (or perl, python, or another general-purpose scripting language).
awk can handle records with varying record/field lengths, can create associative arrays, and it's a reporting language that is usually installed on every *nix.
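A minimal sketch of that approach (it assumes, as the question suggests, that each output line starts with its record type, e.g. RESERVATION, INSTANCE, BLOCKDEVICE or TAG):
ec2-describe-instances | awk '
  { lines[$1] = lines[$1] $0 "\n" }    # bucket each line under its leading record type
  END { for (t in lines) printf "=== %s ===\n%s", t, lines[t] }'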
Add this to your ~/.bashrc or ~/.bash_profile:
ez-ec2-describe-instances() {
    # list only running/pending instances, keeping a few useful columns
    ec2-describe-instances "$@" --headers | egrep '(ReservationID|running|pending)' | cut -f 2,3,4,6,7,10,12
}
Log out and back in, or run ". ~/.bashrc". Then you can use:
$ ez-ec2-describe-instances
ReservationID Owner Groups
i-6f194113 ami-1624987f ec2-107-20-75-13.compute-1.amazonaws.com running t1.micro us-east-1a
You can pass arguments to ez-ec2-describe-instances, just like you would pass them to the regular ec2-describe-instances. E.g.:
$ ez-ec2-describe-instances --region eu-west-1
ReservationID Owner Groups
i-e4fd6eaf ami-c37474b7 ec2-54-246-38-35.eu-west-1.compute.amazonaws.com pending t1.micro eu-west-1a