How to remove packages from yum list result?

Probably a quick question for experts. On Linux (CentOS),
I did something like
$ yum list | grep cdh
which returns a bunch of items (I was searching for the Cloudera packages so I could remove them).
I know that I can remove them by
$ yum remove ...
but I'd certainly like to do it in one command. Is there a way to do so? Many thanks.
Currently I manually output the list result to a text file, and then copy/paste the names into yum remove. There must be a better way. Thanks.

yum remove '*cdh*' will do what you want. Though be careful to check the list of packages it wants to remove to make sure the pattern didn't catch too much.

You can remove in bulk by using a wildcard.
yum remove 'cdh*'
There is more information here: rpm - erase multiple packages.
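For completeness, the manual list-then-remove workflow from the question can also be collapsed into one command line. A sketch (it assumes the package name is the first column of yum list installed output; yum will still ask for confirmation before removing anything):
# Sketch: feed the matching installed package names straight to yum remove.
yum remove $(yum list installed | grep cdh | awk '{print $1}')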


Is there a way to move a file from one branch to another in ClearCase?

A user checked in new files on the wrong branch. I would like to move them in the most efficient way; there are a lot of them. My first thought is to remove the elements from the branch and have the user re-check in the files on the proper branch. But I was hoping there was a way I could change the pointers?
/VOB/DIRECTORY/file##/main/1.00/1 to /VOB/DIRECTORY/file##/main/2.00/1
Whenever there are a lot of files to checkout and move, clearfsimport is a viable option.
Simply set a view to the destination branch, and import the files found in the source (and wrong) view.
See "How can I use ClearCase to “add to source control …” recursively?"
That will checkout, add, modify or remove files in the destination view in order to mirror the ones from the source (here the source is a ClearCase view, but it could actually be any folder, ClearCase view or not, where the files are).
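A typical invocation might look like this (a sketch only; the paths are placeholders, and -preview lets you review what clearfsimport would do before anything is checked in):
# Review the planned operations first, then drop -preview to run the import for real.
clearfsimport -preview -recurse -nsetevent /view/wrong_branch_view/vobs/myvob/some/dir /vobs/myvob/some/dir
Once the import has run in the destination view, check the results in.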
That will be enough to "recheck in the files on the proper branch", but it won't remove the versions from the wrong branch, and I would advise against using cleartool rmver (even though I used that here).
Perhaps a subtractive merge is better.
If you know where they are, and where you want them, you could:
1) Merge the directory and files over.
2) Use cleartool ln in a view in the destination branch to link in the files, and then merge the files individually.
If you use clearfsimport, and don't purge the added-in-the-wrong-place files, you can set yourself up for down-the-road "fun" caused by "evil twins."
Personally, since you know the files and directories that got added, where, when and by whom, you could do something like this (command lines are off the top of my head):
Get the list of files to copy/merge
cleartool find -type d -element "created_by(baduser) && created_since(25-Jul-2016) && !created_since(26-Jul-2016)" -print > dirlist.txt
cleartool find -type fl -element "created_by(baduser) && created_since(25-Jul-2016) && !created_since(26-Jul-2016)" -print > filelist.txt
Pull the directories over by merging the parent directories while CD'd/set in a view using the destination path. Not knowing the OS involved, I can't say which way you would need to parse this. If you use perl, you can grab the offset of the last instance of the directory separator and use that in substr to get the parent directory path. In the Windows command prompt, you can do something like this:
SET SRCDRIVE=D:
for /f "delims==" %x in (dirlist.txt) do cleartool co -nc %~px & cleartool merge -to %~px %SRCDRIVE%~px
for /f "delims==" %x in (dirlist.txt) do cleartool co -nc %~px & cleartool merge -to %~px\%~nx %SRCDRIVE%~px\%~nx
Yes, you can do all that in a single script, and do better error checking and not trying 40x to check out the same directory.
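For a Unix shell, a rough equivalent of the directory pass, using dirname rather than the perl rindex/substr parsing mentioned above (a sketch only; /view/srcview stands in for wherever the source view is mounted):
# Check out each parent directory in the destination view, then merge it
# from the same path as seen through the source (wrong-branch) view.
while read -r elem; do
    parent=$(dirname "$elem")
    cleartool co -nc "$parent"
    cleartool merge -to "$parent" "/view/srcview$parent"
done < dirlist.txt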
You might also be able to merge them to the 2.0 branch (using a view selecting the 2.0 branch). To identify the elements involved, you can run a 'cleartool find' command something like this:
% cd /vobs/myvob
% cleartool find -all -version 'brtype(1.0) && created_by(user_x)' -print
The 'created_since(date-time)' query might also be useful in the compound query.
Once you're convinced you have the right set of versions, you can use '-exec' in place of the '-print' to actually perform the merge. It might look something like this:
% cleartool find -all -version 'brtype(1.0) && created_by(user_x) && created_since(29-Jun)' -exec 'cleartool merge -to $CLEARCASE_PN -version $CLEARCASE_ID_STR'
If you're happy with the results, check everything in. Then you just have to decide if you need to remove the versions on the 1.0 branch (which you can do with another 'cleartool find ... -exec ...' command).
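If you do decide to remove them, a sketch of that second find/exec pass might be (this is destructive, so run the same query with -print first, and keep in mind the earlier caution about rmver):
% cleartool find -all -version 'brtype(1.0) && created_by(user_x) && created_since(29-Jun)' -exec 'cleartool rmver -force -version $CLEARCASE_ID_STR $CLEARCASE_PN'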

Colorizing golang test run output

I like it when terminal/console test runs actually show their output in either red or green text. It seems like a lot of the testing libraries available for Go have this. However, I'd like to just use the default testing package that comes with Go. Is there a way to colorize its output with red and green?
You can use grc, a generic colourizer, to colourize anything.
On Debian/Ubuntu, install with apt-get install grc. On a Mac with Homebrew, brew install grc.
Create a config directory in your home directory:
mkdir ~/.grc
Then create your personal grc config in ~/.grc/grc.conf:
# Go
\bgo.* test\b
conf.gotest
Then create a Go test colourization config in ~/.grc/conf.gotest, such as:
regexp==== RUN .*
colour=blue
-
regexp=--- PASS: .*
colour=green
-
regexp=^PASS$
colour=green
-
regexp=^(ok|\?) .*
colour=magenta
-
regexp=--- FAIL: .*
colour=red
-
regexp=[^\s]+\.go(:\d+)?
colour=cyan
Now you can run Go tests with:
grc go test -v ./...
To avoid typing grc all the time, add an alias to your shell (if using Bash, either ~/.bashrc or ~/.bash_profile or both, depending on your OS):
alias go='grc go'
Now you get colourization simply by running:
go test -v ./...
You can create a wrapper shell script for this and color it using color escape sequences. Here's a simple example on Linux (I'm not sure how this would look on Windows, but I guess there is a way :) ):
go test -v . | sed ''/PASS/s//$(printf "\033[32mPASS\033[0m")/'' | sed ''/FAIL/s//$(printf "\033[31mFAIL\033[0m")/''
There's also a tool called richgo that does exactly this, in a user-friendly way.
You would still need a library to add the color escape codes, for example:
for Windows: mattn/go-colorable or shiena/ansicolor
for Unix or Windows: fatih/color or kortschak/ct
for Unix or Windows: logrusorgru/aurora (mentioned by Ivan Black in the comments)
From there, you specify what you want to color (StdOut or StdErr, like in this example)
rakyll/gotest (screenshot) is a binary that does this.
Example:
$ gotest -v github.com/rakyll/hey
Emoji
You can color text, as others mentioned in their answers, either with escape codes or by using a third-party library.
But you can use emoji instead! For example, you can use ⚠️ for warning messages and 🛑 for error messages.
Or simply use these notebook emoji as color markers:
📕: error message
📙: warning message
📗: ok status message
📘: action message
📓: canceled status message
📔: Or anything you like and want to recognize immediately by color
🎁 Bonus:
This method also helps you to quickly scan and find logs directly in the source code.
Note that on some Linux distributions the default emoji font is not colored, so you may need to install a color emoji font first.
BoltDB has some test methods that look like this:
func assert(tb testing.TB, condition bool, msg string, v ...interface{}) {
	if !condition {
		_, file, line, _ := runtime.Caller(1)
		fmt.Printf("\033[31m%s:%d: "+msg+"\033[39m\n\n", append([]interface{}{filepath.Base(file), line}, v...)...)
		tb.FailNow()
	}
}
Here are the rest. I added the green dots here.

Shorthand command to remove a dir, while retaining integrity of everything under it?

I am working in a Unix environment, and I am curious if there is a nifty/shorthand way to do the following:
remove the first of the foo directories.
(Note: the first foo dir has nothing in it but the subsequent foo dir/*.)
/directory1/foo/foo/bar/baz/
to:
/directory1/foo/bar/baz/
while keeping the integrity of everything under foo/bar/baz.
I am building a server and I inadvertently created an additional folder.
Of course, I can copy everything from foo/* into another folder, delete everything, then grab it and put it back without the first foo. I'm curious if there is a shorthand way of doing this.
Thanks
You could find the inode of the foo directories, then unlink the first one and link the second one in its place, but I have yet to find a way to link inodes directly with the ln command.
Instead, you could do this:
ln /directory/foo/foo /directory/foonew && unlink /directory/foo/foo && unlink /directory/foo && mv /directory/foonew /directory/foo
mv /directory1/foo/foo/* /directory1/foo && rmdir /directory1/foo/foo

Git - how do I view the change history of a method/function?

So I found the question about how to view the change history of a file, but the change history of this particular file is huge and I'm really only interested in the changes of a particular method. So would it be possible to see the change history for just that particular method?
I know this would require git to analyze the code and that the analysis would be different for different languages, but method/function declarations look very similar in most languages, so I thought maybe someone has implemented this feature.
The language I'm currently working with is Objective-C and the SCM I'm currently using is git, but I would be interested to know if this feature exists for any SCM/language.
Recent versions of git log learned a special form of the -L parameter:
-L :<funcname>:<file>
Trace the evolution of the line range given by "<start>,<end>" (or the function name regex <funcname>) within the <file>. You may not give any pathspec limiters. This is currently limited to a walk starting from a single revision, i.e., you may only give zero or one positive revision arguments. You can specify this option more than once.
...
If “:<funcname>” is given in place of <start> and <end>, it is a regular expression that denotes the range from the first funcname line that matches <funcname>, up to the next funcname line. “:<funcname>” searches from the end of the previous -L range, if any, otherwise from the start of file. “^:<funcname>” searches from the start of file.
In other words: if you ask Git to git log -L :myfunction:path/to/myfile.c, it will now happily print the change history of that function.
git gui blame is hard to make use of in scripts, and whilst git log -G and git log -S (the "pickaxe") can each show you when the method definition appeared or disappeared, I haven't found any way to make them list all the changes made to the body of your method.
However, you can use gitattributes and the textconv property to piece together a solution that does just that. Although these features were originally intended to help you work with binary files, they work just as well here.
The key is to have Git remove from the file all lines except the ones you're interested in before doing any diff operations. Then git log, git diff, etc. will see only the area you're interested in.
Here's the outline of what I do in another language; you can tweak it for your own needs.
Write a short shell script (or other program) that takes one argument -- the name of a source file -- and outputs only the interesting part of that file (or nothing if none of it is interesting). For example, you might use sed as follows:
#!/bin/sh
sed -n -e '/^int my_func(/,/^}/ p' "$1"
Define a Git textconv filter for your new script. (See the gitattributes man page for more details.) The name of the filter and the location of the command can be anything you like.
$ git config diff.my_filter.textconv /path/to/my_script
Tell Git to use that filter before calculating diffs for the file in question.
$ echo "my_file diff=my_filter" >> .gitattributes
Now, if you use -G. (note the .) to list all the commits that produce visible changes when your filter is applied, you will have exactly those commits that you're interested in. Any other options that use Git's diff routines, such as --patch, will also get this restricted view.
$ git log -G. --patch my_file
Voilà!
One useful improvement you might want to make is to have your filter script take a method name as its first argument (and the file as its second). This lets you specify a new method of interest just by calling git config, rather than having to edit your script. For example, you might say:
$ git config diff.my_filter.textconv "/path/to/my_command other_func"
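A sketch of a filter script adapted to take that extra argument (the argument order is an assumption: Git appends the file name after whatever is in the config value, and the regex is the same illustrative one as above):
#!/bin/sh
# Hypothetical two-argument filter: $1 is the method name from the config
# value, $2 is the file Git passes to the textconv command.
method="$1"
file="$2"
sed -n -e "/^int ${method}(/,/^}/ p" "$file"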
Of course, the filter script can do whatever you like, take more arguments, or whatever: there's a lot of flexibility beyond what I've shown here.
The closest thing you can do is to determine the position of your function in the file (e.g. say your function i_am_buggy is at lines 241-263 of foo/bar.c), then run something to the effect of:
git log -p -L 200,300:foo/bar.c
This will open less (or an equivalent pager). Now you can type in /i_am_buggy (or your pager equivalent) and start stepping through the changes.
This might even work, depending on your code style:
git log -p -L /int i_am_buggy\(/,+30:foo/bar.c
This limits the search from the first hit of that regex (ideally your function declaration) to thirty lines after it. The end argument can also be a regexp, though detecting the end of a function with a regexp is an iffier proposition.
git log has an option -G that can be used to find all differences.
-G Look for differences whose added or removed line matches the
given <regex>.
Just give it a proper regex of the function name you care about. For example,
$ git log --oneline -G'^int commit_tree'
40d52ff make commit_tree a library function
81b50f3 Move 'builtin-*' into a 'builtin/' subdirectory
7b9c0a6 git-commit-tree: make it usable from other builtins
The correct way is to use git log -L :function:path/to/file as explained in eckes' answer.
But in addition, if your function is very long, you may want to see only the changes each commit introduced, rather than the whole function body (including unmodified lines) for every commit that perhaps touched only one of those lines, the way a normal diff would show it.
Normally git log can show differences with -p, but this does not work together with -L.
So you have to grep the git log -L output to show only the lines involved, plus the commit/file headers that give them context. The trick is to add the --color switch and match only terminal-coloured lines with a regex. Finally:
git log -L :function:path/to/file --color | grep --color=never -E -e "^(^[\[[0-9;]*[a-zA-Z])+" -3
Note that ^[ should be actual, literal ^[. You can type them by pressing ^V^[ in bash, that is Ctrl + V, Ctrl + [. Reference here.
The final -3 switch prints 3 lines of output context before and after each matched line. You may want to adjust it to your needs.
Show function history with git log -L :<funcname>:<file> as shown in eckes' answer and the git docs.
If it shows nothing, refer to Defining a custom hunk-header to add something like *.java diff=java to the .gitattributes file to support your language.
Show function history between commits with git log commit1..commit2 -L :functionName:filePath
Show overloaded function history (there may be many functions with the same name but different parameters) with git log -L :sum\(double:filepath
git blame shows you who last changed each line of the file; you can specify the lines to examine so as to avoid getting the history of lines outside your function.
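For example (the line range is illustrative, reusing the i_am_buggy example from above; the :funcname form needs a reasonably recent Git):
git blame -L 241,263 foo/bar.c      # blame only the lines of the function
git blame -L :i_am_buggy foo/bar.c  # or let Git locate the function by its funcname regex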

Remove unknown files in Bazaar

I have a bunch of unknown files in my Bazaar working tree that I no longer want. I can get a list of them using bzr stat, but I'd like an easy way to get rid of them. (I'd expect an option for bzr revert to do this, but I'm not finding one.)
I can always write a tiny script to parse the output of bzr stat and rm or mv the unknowns, but I thought something might already exist.
I have Bazaar (bzr) 1.13.1.
bzr clean-tree will get rid of all unknown files in a working tree. It also has switches to remove ignored files, merge backups, and other types of unwanted files. See bzr clean-tree --usage for full details.
Edit to add: This is true for Bazaar 2.0.0, I'm not sure about 1.13
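For example (switch names as of Bazaar 2.x; check bzr help clean-tree on your version first):
bzr clean-tree --dry-run              # only list what would be deleted
bzr clean-tree                        # delete unknown files (asks for confirmation)
bzr clean-tree --ignored --detritus   # also remove ignored files and merge/backup leftovers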
Made a script:
#!/usr/bin/env ruby
# Move unknown files in a Bazaar repository to the trash.
#
# Author: Benjamin Oakes
require 'fileutils'

TRASH_DIRECTORY = File.expand_path('~/.Trash/')

stdout = %x(bzr stat)

within = false
stdout.each_line do |line|
  if line.match(/^unknown:$/)
    within = true
    next
  elsif line.match(/^[a-z]+:$/i)
    within = false
    next
  end

  if within
    FileUtils.move(line.match(/^\s+(.*?)$/)[1], TRASH_DIRECTORY)
  end
end
I've only tested it a little, but it seems to work just fine. Please let me know if you find an issue via the comments.
On a separate topic, should I learn sed & awk? I tend to write these things using ruby -e "some ruby code".