Can I view the RCS log with diffs? - rcs

Sadly, there is no RCS tag on Unix.stackexchange or ServerFault, so I am posting this on StackOverflow.
I'm spoiled by SVN/Git, and I need to see the history of a file. My scripts use RCS to track changes made to system configuration files, so it would be neat if I could view their history the way I do with Git; for Git I use git log -p to get this kind of output.
Is there a flag for rlog or rcsdiff or anything that lets me get a log that has the diffs?
Or must I use rcsdiff and a shell script to implement this myself?

rcshist (written by Ian Dowse) does what was requested. I do not know of prebuilt packages, but it builds easily.
Here is sample output:
REV:1.346 aclocal.m4 2012/09/03 17:21:43 tom
tags: xterm-281s, xterm-281r, xterm-281q, xterm-281p, xterm-281o,
xterm-281n, xterm-281m, xterm-281l, xterm-281k, xterm-281j,
xterm-281i, xterm-281h, xterm-281g, xterm-281f, xterm-281e
change default for --with-xpm
--- aclocal.m4 2012/08/25 23:05:32 1.345
+++ aclocal.m4 2012/09/03 17:21:43 1.346
@@ -1,4 +1,4 @@
-dnl $XTermId: rcshist.html,v 1.16 2015/03/01 20:34:33 tom Exp $
+dnl $XTermId: rcshist.html,v 1.16 2015/03/01 20:34:33 tom Exp $
dnl
dnl ---------------------------------------------------------------------------
dnl
@@ -3554,7 +3554,7 @@
AC_SUBST(no_pixmapdir)
])dnl
dnl ---------------------------------------------------------------------------
-dnl CF_WITH_XPM version: 1 updated: 2012/07/22 09:18:02
+dnl CF_WITH_XPM version: 2 updated: 2012/09/03 05:42:04
dnl -----------
dnl Test for Xpm library, update compiler/loader flags if it is wanted and
dnl found.
@@ -3571,7 +3571,7 @@
AC_ARG_WITH(xpm,
[ --with-xpm=DIR use Xpm library for colored icon, may specify path],
[cf_Xpm_library="$withval"],
- [cf_Xpm_library=no])
+ [cf_Xpm_library=yes])
AC_MSG_RESULT($cf_Xpm_library)
if test "$cf_Xpm_library" != no ; then

rlog filename
will show you the basic history.
rcsdiff -r5.1 -r5.2 filename
will show a diff between two revisions. Do not put a space after the -r.
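If rcshist is not an option, the shell-script route the question anticipates is short. Below is a minimal sketch in Python rather than shell, assuming a linear trunk history and rlog/rcsdiff on PATH (rcsdiff passes -u through to diff for unified output):

#!/usr/bin/env python
"""Rough `git log -p` emulation for one RCS-controlled file (sketch)."""
import re
import subprocess
import sys

filename = sys.argv[1]

# rlog prints one "revision X.Y" header per revision, newest first.
log = subprocess.check_output(["rlog", filename]).decode()
revisions = re.findall(r"^revision (\S+)", log, re.MULTILINE)

# Diff each revision against its predecessor, newest pair first,
# much like git log -p ordering.
for newer, older in zip(revisions, revisions[1:]):
    print("==== {} -> {} ====".format(older, newer))
    # rcsdiff exits non-zero when the revisions differ, so ignore its status.
    subprocess.call(["rcsdiff", "-u", "-r" + older, "-r" + newer, filename])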

Read the ,v file. It contains the full history.

Related

How to use subprocess.run() to run Hive query?

So I'm trying to execute a hive query using the subprocess module, and save the output into a file data.txt as well as the logs (into log.txt), but I seem to be having a bit of trouble. I've looked at this gist as well as this SO question, but neither seems to give me what I need.
Here's what I'm running:
import subprocess
query = "select user, sum(revenue) as revenue from my_table where user = 'dave' group by user;"
outfile = "data.txt"
logfile = "log.txt"
log_buff = open("log.txt", "a")
data_buff = open("data.txt", "w")
# note - "hive -e [query]" would normally just print all the results
# to the console after finishing
proc = subprocess.run(["hive" , "-e" '"{}"'.format(query)],
stdin=subprocess.PIPE,
stdout=data_buff,
stderr=log_buff,
shell=True)
log_buff.close()
data_buff.close()
I've also looked into this SO question regarding subprocess.run() vs subprocess.Popen, and I believe I want .run() because I'd like the process to block until finished.
The final output should be a file data.txt with the tab-delimited results of the query, and log.txt with all of the logging produced by the hive job. Any help would be wonderful.
Update:
With the above way of doing things I'm currently getting the following output:
log.txt
[ralston@tpsci-gw01-vm tmp]$ cat log.txt
Java HotSpot(TM) 64-Bit Server VM warning: Using the ParNew young collector with the Serial old collector is deprecated and will likely be removed in a future release
Java HotSpot(TM) 64-Bit Server VM warning: Using the ParNew young collector with the Serial old collector is deprecated and will likely be removed in a future release
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/y/share/hadoop-2.8.3.0.1802131730/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/y/libexec/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Logging initialized using configuration in file:/home/y/libexec/hive/conf/hive-log4j.properties
data.txt
[ralston@tpsci-gw01-vm tmp]$ cat data.txt
hive> [ralston@tpsci-gw01-vm tmp]$
And I can verify the java/hive process did run:
[ralston@tpsci-gw01-vm tmp]$ ps -u ralston
  PID TTY          TIME CMD
14096 pts/0    00:00:00 hive
14141 pts/0    00:00:07 java
14259 pts/0    00:00:00 ps
16275 ?        00:00:00 sshd
16276 pts/0    00:00:00 bash
But it looks like it's not finishing and not logging everything that I'd like.
So I managed to get this working with the following setup:
import subprocess
import time

query = "select user, sum(revenue) as revenue from my_table where user = 'dave' group by user;"
outfile = "data.txt"
logfile = "log.txt"

log_buff = open(logfile, "a")
data_buff = open(outfile, "w")

# Remove shell=True from proc, and add "> outfile.txt" to the command
proc = subprocess.Popen(["hive", "-e", '"{}"'.format(query), ">", "{}".format(outfile)],
                        stdin=subprocess.PIPE,
                        stdout=data_buff,
                        stderr=log_buff)

# keep track of job runtime and set limit (in minutes)
start, elapsed, finished, limit = time.time(), 0, False, 60

while not finished:
    try:
        outs, errs = proc.communicate(timeout=10)
        print("job finished")
        finished = True
    except subprocess.TimeoutExpired:
        elapsed = abs(time.time() - start) / 60.
        if elapsed >= limit:
            print("Job took over {} mins".format(limit))
            break
        print("Comm timed out. Continuing")
        continue

print("done")
log_buff.close()
data_buff.close()
Which produced the output as needed. I knew about process.communicate() but that previously didn't work. I believe the issue was related to not adding an output file with > ${outfile} to the hive query.
Feel free to add any details. I've never seen anyone have to loop over proc.communicate(), so I suspect I might be doing something wrong.
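For what it's worth, the communicate() loop should not be necessary. With the argument list fixed (note the comma after "-e") and without shell=True, a plain subprocess.run() call blocks until hive exits and writes both files directly. A minimal sketch, untested against a real hive install:

import subprocess

query = "select user, sum(revenue) as revenue from my_table where user = 'dave' group by user;"

with open("log.txt", "a") as log_buff, open("data.txt", "w") as data_buff:
    # The list form hands "-e" and the query to hive as two separate
    # arguments, so no extra quoting around the query is needed and no
    # shell is involved.
    subprocess.run(["hive", "-e", query],
                   stdout=data_buff,
                   stderr=log_buff,
                   check=True,      # raise if hive exits non-zero
                   timeout=3600)    # optional safety limit, in seconds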

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on a CICD script to deploy a complex environment into another environment. We have multiple technologies involved, and I want to optimize this script because it is taking too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment for each application in a specific project. I have tried to find a quick way to do so, but so far I have only found this solution:
oc rollout history dc -n <Project_name>
This will give me the following output
deploymentconfigs "<Application_name>"
REVISION        STATUS          CAUSE
1               Complete        config change
2               Complete        config change
3               Failed          manual change
4               Running         config change
deploymentconfigs "<Application_name2>"
REVISION        STATUS          CAUSE
18              Complete        config change
19              Complete        config change
20              Complete        manual change
21              Failed          config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete".
In the above example, I would get this list :
<Application_name> : 2
<Application_name2> : 20
Then for each application and each revision I do :
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest Complete revision that is neither running nor failed.
This gives me the output with the information I need: the version of the ear and the version of the configuration that were used in the creation of the image used for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I require?
Unless I am mistaken, it looks like there is no one-liner that can get the information on the currently running and accessible application.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets a list of deployments, feeds it to awk to extract the name ($1) and revision ($2), builds your command to extract the details, and finally sends it to standard input to execute. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea, with expressions like .spec.template.spec.containers[0].env you can reach for specific variables, labels, etc. Unfortunately the jsonpath output is not available with oc rollout history.
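If the currently active deployment is not necessarily the latest successful one, a variation on the same idea is to pull the replication controllers as JSON in a single call and filter on the deployment annotations OpenShift sets on them. A sketch in Python; the annotation names (openshift.io/deployment-config.name, openshift.io/deployment-config.latest-version, openshift.io/deployment.phase) are what 3.x deployers set as far as I know, but verify them on your cluster:

import json
import subprocess

# One oc call fetches every deployment's replication controllers.
raw = subprocess.check_output(
    ["oc", "get", "rc", "-n", "<Project_name>", "-o", "json"])

latest = {}
for rc in json.loads(raw)["items"]:
    ann = rc["metadata"].get("annotations", {})
    name = ann.get("openshift.io/deployment-config.name")
    phase = ann.get("openshift.io/deployment.phase")
    version = int(ann.get("openshift.io/deployment-config.latest-version", 0))
    # Keep the highest Complete revision seen for each application.
    if name and phase == "Complete" and version > latest.get(name, 0):
        latest[name] = version

for name in sorted(latest):
    print("{} : {}".format(name, latest[name]))

That is one oc call per project instead of one per application, which is where the two minutes were going.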
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the pods. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks

GVim not recognizing commands in plugin

How do I get gvim to recognize sqlcomplete.vim commands?
I'm unable to use the sqlcomplete.vim plugin. When running :version I get the following output:
and scrolling all the way to the bottom here is the rest of the output:
and the env variables:
:echo $VIM
c:\users\me\.babun\cygwin\etc\
:echo $HOME
H:\
Here is the output of :scriptnames:
When running the sqlcomplete.vim command such as :SQLSetType sqlanywhere the output I get is:
Another piece of helpful information is the output of :echo &rtp:
H:\vimfiles,H:\.vim\bundle\Vundle.vim,H:\.vim\bundle\dbext.vim,H:\.vim\bundle\SQLComplete.vim,C:\Users\me\.babun\cygwin\etc\vimfiles,C:\Users\me\.babun\cygwin\etc\,C:\Users\me\.babun\cygwin\etc\vimfiles/after,H:\vimfiles/after,H:\.vim/bundle/Vundle.vim,H:\.vim\bundle\Vundle.vim/after,H:\.vim\bundle\dbext.vim/after,H:\.vim\bundle\SQLComplete.vim/after
Some points you could check:
:scriptnames shows plugin\sqlcomplete.vim
But the link you provided points to .../vim/runtime/autoload/sqlcomplete.vim; there is no .../vim/runtime/plugin/sqlcomplete.vim, and the version at vim.org also doesn't contain a /plugin file:
install details
Copy sqlcomplete.vim to:
.vim/autoload/sqlcomplete.vim (Unix)
vimfiles\autoload\sqlcomplete.vim (Windows)
For documentation:
:h sql.txt
Maybe you have installed it incorrectly.
The file on your link has version 12 in its header, while the latest version is 15. Try updating to the latest version.
Note that this plugin does not define the SQLSetType command.
You can check that by simply searching the file on the link. And from its header:
" Vim OMNI completion script for SQL
" Language: SQL
" Maintainer: David Fishburn <dfishburn dot vim at gmail dot com>
" Version: 15.0
" Last Change: 2013 May 13
" Homepage: http://www.vim.org/scripts/script.php?script_id=1572
" Usage: For detailed help
" ":help sql.txt"
" or ":help ft-sql-omni"
" or read $VIMRUNTIME/doc/sql.txt
Following :help sql.txt:
2.1 SQLSetType *sqlsettype* *SQLSetType*
--------------
For the people that work with many different databases, it is nice to be
able to flip between the various vendors rules (indent, syntax) on a per
buffer basis, at any time. The ftplugin/sql.vim file defines this function: >
SQLSetType
And :scriptnames is not listing ftplugin/sql.vim.

knitr 1.5 / patchDVI 1.9 doesn't seem to generate a concordance acceptable to evince + emacs

Setup: here is sessionInfo():
R version 3.0.2 (2013-09-25)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=fr_FR.UTF-8 LC_NUMERIC=C
[3] LC_TIME=fr_FR.UTF-8 LC_COLLATE=fr_FR.UTF-8
[5] LC_MONETARY=fr_FR.UTF-8 LC_MESSAGES=fr_FR.UTF-8
[7] LC_PAPER=fr_FR.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=fr_FR.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] patchDVI_1.9 knitr_1.5
loaded via a namespace (and not attached):
[1] compiler_3.0.2 evaluate_0.5.1 formatR_0.9 highr_0.2.1 stringr_0.6.2
[6] tcltk_3.0.2 tools_3.0.2
I am trying to get Emacs and AUCTeX to synchronize my .Rnw source file with evince, going from source to compiled text and back.
I have already checked that the synchronization works fine between a .tex source and a PDF.
My .Rnw file starts with:
\documentclass[a4paper,twoside,12pt]{article}
\synctex=1 %% Should force concordance generation
\pdfcompresslevel=0 %% Should force avoidance of PDF compression, which patchDVI does
\pdfobjcompresslevel=0 %% not handle
<<include=FALSE>>= %% Modification of what Sweave2knitr does
## opts_chunk$set(concordance=TRUE, self.contained=TRUE) ## No possible effect
opts_knit$set(concordance=TRUE, self.contained=TRUE) ## Seems reasonable
@
%% \SweaveOpts{concordance=TRUE} %% That's where inspiration came from
Consider the following log (irrelevant parts edited):
> options("knitr.concordance")
$knitr.concordance
[1] TRUE
> opts_knit$get("concordance")
[1] TRUE
> knit("IntroStat.Rnw")
processing file: IntroStat.Rnw
|...................... | 33%
ordinary text without R code
|........................................... | 67%
label: unnamed-chunk-1 (with options)
List of 1
$ include: logi FALSE
|.................................................................| 100%
ordinary text without R code
output file: IntroStat.tex
[1] "IntroStat.tex"
> system("pdflatex -synctex=1 IntroStat.tex")
[ Edited irrelevancies ]
SyncTeX written on IntroStat.synctex.gz.
Note: a concordance *has* been generated!
Transcript written on IntroStat.log.
Let's do that again to fix references:
> system("pdflatex -synctex=1 IntroStat.tex")
[ Edited irrelevancies ]
Output written on IntroStat.pdf (1 page, 136907 bytes).
SyncTeX written on IntroStat.synctex.gz.
Note: a concordance has been generated *again*!
Transcript written on IntroStat.log.
> patchDVI("IntroStat.pdf")
[1] "0 patches made. Did you set \\SweaveOpts{concordance=TRUE}?"
*This I do not understand.*
> patchSynctex("IntroStat.synctex.gz")
[1] "0 patches made. Did you set \\SweaveOpts{concordance=TRUE}?"
*Ditto.*
It appears that something in this set of tools does not work as advertised: either patchDVI does not recognize legal concordance \specials, or pdflatex does not generate them. It does generate something, however...
I checked that the resulting PDF lets evince synchronize with the .tex file, but not with the .Rnw file. Furthermore, when the .Rnw file is open in Emacs, starting the viewer with 'C-c C-v' (View) in AUCTeX does start the viewer (after requesting to open a server, which I authorize), but the viewer is empty, and I get this:
"TeX-evince-sync-view: Couldn't find the Evince instance for file:///home/charpent/Boulot/Cours/ODF/Chapitres/Ch3-StatMath/IntroStat.Rnw.pdf"
in the "Messages" buffer.
So we have a second problem here.
A third one would be to integrate all of this transparently in the AucTeX production chain, but this is another story...
I'd really like to keep Emacs as my main tool for R/LaTeX/Sage work, rather than switch to RStudio, which probably won't much like SageTeX and the other various tools I need on a daily/weekly basis...
Any thoughts?
Maybe this https://github.com/jan-glx/patchKnitrSynctex will help. I tried it on a simple file, and it does work.
As for the second and third problems, I have this script (note that I source the above code from jan-glx; modify path accordingly):
#!/bin/bash
FILE=$1
BASENAME=$(basename "$FILE" .Rnw)
Rscript -e 'library(knitr); opts_knit$set("concordance" = TRUE); knit("'"$1"'")'
pdflatex --synctex=1 --file-line-error --shell-escape "${1%.*}"
Rscript -e "source('~/Sources/patchKnitrSynctex.R'); patchKnitrSynctex('${1%.*}.tex')"
ln -sf "$BASENAME.synctex.gz" "$BASENAME.Rnw.synctex.gz"  # -f so reruns don't fail
ln -sf "$BASENAME.pdf" "$BASENAME.Rnw.pdf"
The links are my kludgy way of getting around the "Couldn't find the instance (...) ".
If you have your .Rnw in an Emacs buffer, go to a shell buffer, and call that script. When finished, C-c C-v from Emacs will open your configured PDF viewer (okular in my case). In the PDF, shift + left mouse click (okular at least) will bring you to the right place in the Emacs .Rnw buffer.
This is not ideal: if you jump to an error, it goes to the .tex, not the .Rnw. And I'd like to be able to invoke it via C-c C-c or similar (but I don't know how; elisp ignorance).

Nano hacks: most useful tiny programs you've coded or come across

Laziness: it's the first great virtue of programmers. All of us have, at one time or another, automated a task with a bit of throw-away code. Sometimes it takes a couple of seconds to tap out a one-liner; sometimes we spend an exorbitant amount of time automating away a two-second task and then never use it again.
What tiny hack have you found useful enough to reuse? To go so far as to make an alias for?
Note: before answering, please check that it's not already covered in the favourite command-line tricks using BASH or perl/ruby one-liner questions.
I found this on dotfiles.org just today. It's very simple, but clever. I felt stupid for not having thought of it myself.
###
### Handy Extract Program
###
extract () {
    if [ -f "$1" ] ; then
        case "$1" in
            *.tar.bz2)  tar xvjf "$1"   ;;
            *.tar.gz)   tar xvzf "$1"   ;;
            *.bz2)      bunzip2 "$1"    ;;
            *.rar)      unrar x "$1"    ;;
            *.gz)       gunzip "$1"     ;;
            *.tar)      tar xvf "$1"    ;;
            *.tbz2)     tar xvjf "$1"   ;;
            *.tgz)      tar xvzf "$1"   ;;
            *.zip)      unzip "$1"      ;;
            *.Z)        uncompress "$1" ;;
            *.7z)       7z x "$1"       ;;
            *)          echo "'$1' cannot be extracted via >extract<" ;;
        esac
    else
        echo "'$1' is not a valid file"
    fi
}
Here's a filter that puts commas in the middle of any large numbers in standard input.
$ cat ~/bin/comma
#!/usr/bin/perl -p
s/(\d{4,})/commify($1)/ge;

sub commify {
    local $_ = shift;
    1 while s/^([ -+]?\d+)(\d{3})/$1,$2/;
    return $_;
}
I usually wind up using it for long output lists of big numbers, and I tire of counting decimal places. Now instead of seeing
-rw-r--r-- 1 alester alester 2244487404 Oct 6 15:38 listdetail.sql
I can run that as ls -l | comma and see
-rw-r--r-- 1 alester alester 2,244,487,404 Oct 6 15:38 listdetail.sql
This script saved my career!
Quite a few years ago, I was working remotely on a client database. I updated a shipment to change its status, but I forgot the WHERE clause.
I'll never forget the feeling in the pit of my stomach when I saw (6834 rows affected). I basically spent the entire night going through event logs and figuring out the proper status for all those shipments. Crap!
So I wrote a script (originally in awk) that would start a transaction for any updates, and check the rows affected before committing. This prevented any surprises.
So now I never do updates from the command line without going through a script like this. Here it is (now in Python):
import sys
import subprocess as sp

pgm = "isql"

if len(sys.argv) == 1:
    print "Usage: \nsql sql-string [rows-affected]"
    sys.exit()

sql_str = sys.argv[1].upper()

max_rows_affected = 3
if len(sys.argv) > 2:
    max_rows_affected = int(sys.argv[2])

if sql_str.startswith("UPDATE"):
    sql_str = "BEGIN TRANSACTION\\n" + sql_str
    p1 = sp.Popen([pgm, sql_str], stdout=sp.PIPE, shell=True)
    (stdout, stderr) = p1.communicate()
    print stdout
    # example -> (33 rows affected)
    affected = stdout.splitlines()[-1]
    affected = affected.split()[0].lstrip('(')
    num_affected = int(affected)
    if num_affected > max_rows_affected:
        print "WARNING! ", num_affected, "rows were affected, rolling back..."
        sql_str = "ROLLBACK TRANSACTION"
        ret_code = sp.call([pgm, sql_str], shell=True)
    else:
        sql_str = "COMMIT TRANSACTION"
        ret_code = sp.call([pgm, sql_str], shell=True)
else:
    ret_code = sp.call([pgm, sql_str], shell=True)
I use this script under assorted linuxes to check whether a directory copy between machines (or to CD/DVD) worked or whether copying (e.g. ext3 utf8 filenames -> fuseblk) has mangled special characters in the filenames.
#!/bin/bash
## dsum Do checksums recursively over a directory.
## Typical usage: dsum <directory> > outfile
export LC_ALL=C # Optional - use sort order across different locales
if [ $# != 1 ]; then echo "Usage: ${0/*\//} <directory>" 1>&2; exit; fi
cd "$1" 1>&2 || exit
#findargs=-follow # Uncomment to follow symbolic links
find . $findargs -type f | sort | xargs -d'\n' cksum
Sorry, don't have the exact code handy, but I coded a regular expression for searching source code in VS.Net that allowed me to search anything not in comments. It came in very useful in a particular project I was working on, where people insisted that commenting out code was good practice, in case you wanted to go back and see what the code used to do.
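The exact VS.Net expression is gone, but the idea is easy to reproduce outside the IDE. Here is a rough Python sketch of the same trick for C-style sources; it deliberately ignores the corner case of comment markers inside string literals:

import re
import sys

# Matches // line comments and /* ... */ block comments.
COMMENT = re.compile(r"//[^\n]*|/\*.*?\*/", re.DOTALL)

def blank_comments(source):
    """Overwrite comments with spaces, keeping newlines so line numbers hold."""
    return COMMENT.sub(lambda m: re.sub(r"[^\n]", " ", m.group()), source)

if __name__ == "__main__":
    text = open(sys.argv[1]).read()
    for m in re.finditer(sys.argv[2], blank_comments(text)):
        line = text.count("\n", 0, m.start()) + 1
        print("line {}: {}".format(line, m.group()))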
I have two Ruby scripts that I modify regularly to download various webcomics. Extremely handy! Note: they require wget, so probably Linux. Note 2: read these before you try them; they need a little bit of modification for each site.
Date based downloader:
#!/usr/bin/ruby -w
Day = 60 * 60 * 24
Format = "hjlsdahjsd/comics/st%Y%m%d.gif"
t = Time.local(2005, 2, 5)
MWF = [1, 3, 5]
until t == Time.local(2007, 7, 9)
  if MWF.include? t.wday
    `wget #{t.strftime(Format)}`
    sleep 3
  end
  t += Day
end
Or you can use the number based one:
#!/usr/bin/ruby -w
Format = "http://fdsafdsa/comics/%08d.gif"
1.upto(986) do |i|
  `wget #{sprintf(Format, i)}`
  sleep 1
end
Instead of having to repeatedly open files in SQL Query Analyser and run them, I found the syntax needed to make a batch file, and could then run 100 at once. Oh the sweet sweet joy! I've used this ever since.
isqlw -S servername -d dbname -E -i F:\blah\whatever.sql -o F:\results.txt
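For illustration, the run-them-all loop sketched in Python instead of a .bat file, reusing the same isqlw flags as above (the paths are placeholders):

import glob
import subprocess

# Run every .sql script in the folder, one results file per script.
for script in sorted(glob.glob(r"F:\blah\*.sql")):
    out = script[:-4] + ".results.txt"
    subprocess.call(["isqlw", "-S", "servername", "-d", "dbname",
                     "-E", "-i", script, "-o", out])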
This goes back to my COBOL days but I had two generic COBOL programs, one batch and one online (mainframe folks will know what these are). They were shells of a program that could take any set of parameters and/or files and be run, batch or executed in an IMS test region. I had them set up so that depending on the parameters I could access files, databases(DB2 or IMS DB) and or just manipulate working storage or whatever.
It was great because I could test that date function without guessing, or test why there was truncation, or why there was a database ABEND. The programs grew in size as time went on to include all sorts of tests and became a staple of the development group. Everyone knew where the code resided and included them in their unit testing as well. Those programs got so large (most of the code was commented-out tests), and it was all contributed by people through the years. They saved so much time and settled so many disagreements!
I coded a Perl script to map dependencies, without going into an endless loop, for a legacy C program I inherited that also had a diamond dependency problem (see the sketch after this list).
I wrote a small program that e-mailed me when I received e-mails from friends on a rarely used e-mail account.
I wrote another small program that sent me text messages if my home IP changes.
To name a few.
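The Perl itself is lost, but the core trick behind both problems (no endless loop, diamonds reported once) is just a visited set in the depth-first walk. A minimal sketch of the idea in Python, with a toy graph standing in for the real dependency scan:

def walk(graph, node, seen=None, depth=0):
    """Depth-first dependency walk that survives diamonds and cycles."""
    if seen is None:
        seen = set()
    print("  " * depth + node)
    if node in seen:  # already expanded elsewhere: diamond or cycle
        return
    seen.add(node)
    for dep in graph.get(node, []):
        walk(graph, dep, seen, depth + 1)

# Diamond: main depends on a and b, and both depend on util.
deps = {"main": ["a", "b"], "a": ["util"], "b": ["util"], "util": []}
walk(deps, "main")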
Years ago I built a suite of applications on a custom web application platform in Perl.
One cool feature was to convert SQL query strings into human readable sentences that described what the results were.
The code was relatively short but the end effect was nice.
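That code is long gone, but the flavor is easy to sketch. A toy Python version that only understands simple SELECT statements:

import re

def describe(sql):
    """Turn a simple SELECT into a rough English sentence (sketch only)."""
    m = re.match(
        r"select\s+(?P<cols>.+?)\s+from\s+(?P<table>[\w.]+)"
        r"(?:\s+where\s+(?P<cond>.+?))?\s*;?\s*$",
        sql.strip(), re.IGNORECASE | re.DOTALL)
    if not m:
        return "Unrecognized statement."
    sentence = "Show {} for every row of {}".format(
        m.group("cols"), m.group("table"))
    if m.group("cond"):
        sentence += " where {}".format(m.group("cond"))
    return sentence + "."

print(describe("SELECT name, total FROM orders WHERE total > 100"))
# -> Show name, total for every row of orders where total > 100.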
I've got a little app that you run and it dumps a GUID into the clipboard. You can run it /noui or not. With UI, it's a single button that drops a new GUID every time you click it. Without UI, it drops a new one and then exits.
I mostly use it from within VS. I have it as an external app mapped to a shortcut. I'm writing an app that relies heavily on XAML and GUIDs, so I always find I need to paste a new GUID into XAML...
Any time I write a clever list comprehension or use of map/reduce in Python. There was one like this:
if reduce(lambda x, c: x and locks[c], locknames, True):
    print "Sub-threads terminated!"
The reason I remember that is that I came up with it myself, then saw the exact same code on somebody else's website. Nowadays it'd probably be done like:
if all(map(lambda z: locks[z], locknames)):
    print "ya trik"
I've got 20 or 30 of these things lying around, because once I coded up the framework for my standard console app in Windows I could pretty much drop in any logic I wanted, so I have a lot of little things that solve specific problems.
I guess the ones I'm using a lot right now are a console app that takes stdin and colorizes the output based on XML profiles that match regular expressions to colors, which I use for watching my log files from builds, and a command-line launcher so I don't pollute my PATH env var (it would exceed the limit on some systems anyway, namely win2k).
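A toy version of that colorizer, sketched in Python with the rules hard-coded instead of read from an XML profile:

import re
import sys

# (regex, ANSI color code) pairs - stand-ins for the XML profile rules.
RULES = [(re.compile(r"\berror\b", re.I), "31"),    # red
         (re.compile(r"\bwarning\b", re.I), "33"),  # yellow
         (re.compile(r"\bsuccess\b", re.I), "32")]  # green

for line in sys.stdin:
    for pattern, color in RULES:
        if pattern.search(line):
            line = "\033[{}m{}\033[0m\n".format(color, line.rstrip("\n"))
            break
    sys.stdout.write(line)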
I'm constantly connecting to various Linux servers from my own desktop throughout my workday, so I created a few aliases that will launch an xterm on those machines and set the title, background color, and other tweaks:
alias x="xterm" # local
alias xd="ssh -Xf me@development_host xterm -bg aliceblue -ls -sb -bc -geometry 100x30 -title Development"
alias xp="ssh -Xf me@production_host xterm -bg thistle1 ..."
I have a bunch of servers I frequently connect to, as well, but they're all on my local network. This Ruby script prints out the command to create aliases for any machine with ssh open:
#!/usr/bin/env ruby
require 'rubygems'
require 'dnssd'
handle = DNSSD.browse('_ssh._tcp') do |reply|
  print "alias #{reply.name}='ssh #{reply.name}.#{reply.domain}';"
end
sleep 1
handle.stop
Use it like this in your .bash_profile:
eval `ruby ~/.alias_shares`