Is there some way I can programmatically change the user's account image on OS X? I know I can retrieve it, but can it be changed the way Apple does it on the account page in the Settings app?
You can use the Address Book framework. You need to use the me method to get the record for the current user, and then the setImageData: method to set the user's image:
#import <AddressBook/AddressBook.h>
@implementation YourClass
- (void)setUserImage:(NSImage*)anImage
{
    // Fetch the record for the logged-in user, then replace its image data
    ABPerson* me = [[ABAddressBook addressBook] me];
    [me setImageData:[anImage TIFFRepresentation]];
}
@end
There's more detail in the Address Book framework docs.
You can write the picture path out to a file and merge it into the user's record using the /usr/bin/dsimport command, which could be run from an NSTask. Here is an example of how to do so in bash as root; this could be done with passed credentials as well:
export OsVersion=`sw_vers -productVersion | awk -F"." '{print $2;exit}'`
declare -x UserPicture="/path/to/$UserName.jpg" # assumes $UserName is already set
# Add the LDAP picture to the user record if dsimport is available (10.6+)
if [ -f "$UserPicture" ] ; then
# On 10.6 and higher this works
if [ "$OsVersion" -ge "6" ] ; then
declare -x Mappings='0x0A 0x5C 0x3A 0x2C'
declare -x Attributes='dsRecTypeStandard:Users 2 dsAttrTypeStandard:RecordName externalbinary:dsAttrTypeStandard:JPEGPhoto'
declare -x PictureImport="/Library/Caches/$UserName.picture.dsimport"
printf "%s %s \n%s:%s" "$Mappings" "$Attributes" "$UserName" "$UserPicture" >"$PictureImport"
# Check to see if the username is correct and import picture
if id "$UserName" &>/dev/null ; then
# No credentials passed as we are running as root
dsimport -g "$PictureImport" /Local/Default M &&
echo "Successfully imported users picture."
fi
fi
fi
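For reference, the import file generated by the printf above contains the field mappings and attribute description on its first line, followed by one record per line (alice and the picture path here are hypothetical values):
0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Users 2 dsAttrTypeStandard:RecordName externalbinary:dsAttrTypeStandard:JPEGPhoto
alice:/path/to/alice.jpg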
Gist of this code
I am almost certain that you can do this through the OpenDirectory interface. See this guide:
https://developer.apple.com/library/mac/#documentation/Networking/Conceptual/Open_Directory/Introduction/Introduction.html#//apple_ref/doc/uid/TP40000917
Basically you have to open an Open Directory node (say the /Search node), then find the ODRecord for your user, and then use:
setValue:forAttribute:error:
on the ODRecord to set the JPEGPhoto attribute.
If you query for this attribute using dscl from the command line, you will see its value:
dscl /Search read /Users/luser JPEGPhoto
I believe the dscl tool uses the Open Directory framework (or the older, harder-to-use, deprecated Directory Services framework) to read and write attributes in user records. You can read and write any other attribute using this tool and the associated framework; I don't see any reason JPEGPhoto would be any different.
The /Search node MIGHT be read-only (since it is a kind of meta-node). Not really sure. You might have to explicitly open the appropriate node (for example the /Local/Default node) before you can write to the record:
dscl /Local/Default read /Users/luser JPEGPhoto
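As a quick sanity check from the shell, you can dump the stored photo back out to a file. This is only a sketch: it assumes dscl prints the binary attribute as hex lines following a header line (luser is the placeholder username from above):
# Reassemble the hex dump that dscl prints into the original JPEG bytes
dscl /Local/Default read /Users/luser JPEGPhoto \
    | tail -n +2 | tr -d ' \n' | xxd -r -p > luser.jpg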
There is documentation about setting the window title: https://www.gnu.org/software/screen/manual/html_node/Naming-Windows.html
I don't want to change .bashrc on all machines to get dynamic titles. However, it would be good to set a hotkey in my .screenrc that sets the title using the :title (C-a A) command with the proper argument.
Maybe there is a way to feed the output of uname -n to the :title command, or something similar which automatically or semi-automatically sets the window title to the hostname.
The questions are:
Is there a way to provide the output of uname -n to the :title command?
Is there any other way to set the window title to the current hostname without changing .bashrc?
There are two approaches to solve this:
Print an escape sequence which GNU screen parses for the window title, ESC k my-title ESC \. Screen will extract my-title. It can be done via:
printf '\ek%s\e\\' "$(uname -n)"
Run screen -X title my-title to set the title of the current window.
Since I want to do this for every host I ssh into, the trick is to wrap the actual ssh command in an ssh() function using one of the aforementioned approaches.
E.g.:
ssh() { printf '\ek%s\e\\' "$1"; command ssh "$@"; }
Thanks to #screen on freenode : )
ref.: https://www.gnu.org/software/screen/manual/html_node/Dynamic-Titles.html#Dynamic-Titles
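A small extension of that wrapper, as a sketch (the ${1##*@} stripping assumes the host is the first argument, possibly prefixed with user@):
ssh() {
    printf '\ek%s\e\\' "${1##*@}"     # strip a possible user@ prefix
    command ssh "$@"
    printf '\ek%s\e\\' "$(uname -n)"  # restore the local hostname on exit
}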
I'm using screen to monitor several parallel jobs to test small variations of my program. I gave each screen session a different logfile. I do not remember which logfile I set for which session, and now wish I did!
Is there a way to query which session name (usually of the form #####.ttys000N.hostname) goes with which logfile, or vice-versa?
(To whom it concerns: the gnu-screen tag suggests determining which SX site the question is most relevant to. Based on the help pages of SuperUser and StackOverflow, this question appears roughly equally applicable to either community. Feel free to migrate it if you think it belongs elsewhere.)
I didn't find my suggestion from the comments (using screen -ls to list the process IDs, then running lsof -p on them to find the filenames; sketched at the end of this answer) very satisfactory, so here is another not entirely satisfactory alternative:
There is an option -X to send commands to a remote screen, but unfortunately any output is shown on the remote. There is an option -Q to send a command and print the result locally, but it only accepts a very limited set of commands. However, one of these is lastmsg, which repeats the last message displayed.
So you can use -X logfile to display the name of the logfile remotely, then immediately use -Q lastmsg to duplicate that display locally! There is, of course, the possibility of some event occurring in the middle of this non-atomic action. The two commands cannot be combined. Here's an example:
#!/bin/bash
screen -ls |
while read session rest
do if [[ "$session" =~ [0-9]+\..+ ]]
then screen -S "$session" -X logfile # shows in status
msg=$(screen -S "$session" -Q lastmsg)
# logfile is '/tmp/xxxxx'
echo "$session $msg"
fi
done
and some typical output:
21017.test2 logfile is '/tmp/xxxxx'
20166.test logfile is '/tmp/mylog.%n'
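For completeness, here is roughly what the screen -ls plus lsof approach mentioned at the top looks like. It is only a sketch: lsof can see the logfile only while logging is active, since that is when screen holds it open for writing.
#!/bin/bash
screen -ls |
while read session rest
do if [[ "$session" =~ ([0-9]+)\..+ ]]
   then pid=${BASH_REMATCH[1]}
        # print files the session has open for writing (FD column ends in w)
        lsof -p "$pid" 2>/dev/null |
        awk -v s="$session" '$4 ~ /w$/ {print s, $NF}'
   fi
done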
The code I have in my .zshrc is:
ytdcd () { # youtube-dl that automatically puts stuff in a specific folder and returns to the former working directory after.
cd ~/youtube/new/ && {
youtube-dl "$@"
cd - > /dev/null
}
}
ytd() { # so far, this function can only take one page, so I can only send one youtube video code per line. Will modify it to accept multiple lines.
for i in $*;
do
params=" $params https://youtu.be/$i"
done
ytdcd -f 18 $params
}
So, on the command line (terminal), when I enter ytd DFreHo3UCD0, I would like the video at https://youtu.be/DFreHo3UCD0 to be downloaded. The problem is that when I enter the commands in succession, the system just tries to download the video from the previous command and rightly claims the download is complete.
For example, entering:
> ytd DFreHo3UCD0
> ytd L3my9luehfU
would not attempt to download the video for L3my9luehfU, but only the video for DFreHo3UCD0, twice.
First -- there's no point in returning to the old directory in ytdcd: you can change to a new directory only inside a subshell, and then exec youtube-dl to replace that subshell with the application process:
This has fewer things to go wrong: Aborting the function's execution can't leave things in the wrong directory, because the parent shell (the one you're interactively using) never changed directories in the first place.
ytdcd () {
(cd ~/youtube/new/ && exec youtube-dl "$@")
}
Second -- use an array when building argument lists, not a string.
If you use set -x to log its execution, you'll see that your original command runs something like:
ytdcd -f 18 'https://youtu.be/one https://youtu.be/two https://youtu.be/three'
See those quotes? That's because $params is a string, passed as a single argument, not an array. (In bash -- or another shell following POSIX rules -- an unquoted string expansion would be string-split and glob-expanded, but zsh doesn't follow POSIX rules).
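A quick way to observe the difference (paste into either shell):
params='a b c'
printf '<%s>\n' $params   # zsh prints <a b c>; bash prints <a> <b> <c>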
The following builds up an array of separate arguments and passes them individually:
ytd() {
local -a params=( )
local i
for i; do
params+=( "https://youtu.be/$i" )
done
ytdcd -f 18 "${params[@]}"
}
Finally, it's come up that you don't actually intend to pass all the URLs to just one youtube-dl instance. To run a separate instance per URL, use:
ytd() {
local i retval=0
for i; do
ytdcd -f 18 "$i" || retval=$?
done
return "$retval"
}
Note here that we're capturing non-success exit status, so as not to hide an error in any ytdcd instance other than the last (which would otherwise occur).
I would declare params as local, so that you are not appending URL after URL...
You can try to add this awesome function to your .zshrc:
funfun() {
    local _fun1="$_fun1 fun1!"   # local: scoped to this call, does not accumulate across calls
    _fun2="$_fun2 fun2!"         # not local: global, so it keeps growing on every call
    echo "1 says: $_fun1"
    echo "2 says: $_fun2"
}
To observe the thing ;)
EDIT (Explanation):
When you source a shell script, you add it to your current environment; that is why you can run the functions you define. So, when those functions use variables, by default those variables are global and accessible from anywhere in your environment! Therefore, in this case params is defined globally for the whole length of your shell session. Since you want to allow downloading several videos at once, you keep appending values to this global variable, which grows all the time.
Enforcing local tells zsh to limit the scope of params to the function only.
Another solution is to reset the variable when you call the function.
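Applied to your ytd, the local approach looks like this minimal sketch (${=params} is zsh's forced word splitting, needed because zsh does not split an unquoted scalar):
ytd() {
    local params=""   # local: reset on every call instead of accumulating
    local i
    for i in "$@"; do
        params+=" https://youtu.be/$i"
    done
    ytdcd -f 18 ${=params}   # split the string back into separate arguments
}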
I need to be able to read in a path file from my simple_switch.py application. I have added the following code to my simple_switch.py in Python.
import logging
from ryu import cfg

LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.register_cli_opts([
    cfg.StrOpt('path-file', default='test.txt',
               help='path-file')
])
I attempt to start the application as follows.
bin/ryu-manager --observe-links --path-file test.txt ryu/app/simple_switch.py
However, I get the following error.
usage: ryu-manager [-h] [--app-lists APP_LISTS] [--ca-certs CA_CERTS]
[--config-dir DIR] [--config-file PATH]
[--ctl-cert CTL_CERT] [--ctl-privkey CTL_PRIVKEY]
[--default-log-level DEFAULT_LOG_LEVEL] [--explicit-drop]
[--install-lldp-flow] [--log-config-file LOG_CONFIG_FILE]
[--log-dir LOG_DIR] [--log-file LOG_FILE]
[--log-file-mode LOG_FILE_MODE]
[--neutron-admin-auth-url NEUTRON_ADMIN_AUTH_URL]
[--neutron-admin-password NEUTRON_ADMIN_PASSWORD]
[--neutron-admin-tenant-name NEUTRON_ADMIN_TENANT_NAME]
[--neutron-admin-username NEUTRON_ADMIN_USERNAME]
[--neutron-auth-strategy NEUTRON_AUTH_STRATEGY]
[--neutron-controller-addr NEUTRON_CONTROLLER_ADDR]
[--neutron-url NEUTRON_URL]
[--neutron-url-timeout NEUTRON_URL_TIMEOUT]
[--noexplicit-drop] [--noinstall-lldp-flow]
[--noobserve-links] [--nouse-stderr] [--nouse-syslog]
[--noverbose] [--observe-links]
[--ofp-listen-host OFP_LISTEN_HOST]
[--ofp-ssl-listen-port OFP_SSL_LISTEN_PORT]
[--ofp-tcp-listen-port OFP_TCP_LISTEN_PORT] [--use-stderr]
[--use-syslog] [--verbose] [--version]
[--wsapi-host WSAPI_HOST] [--wsapi-port WSAPI_PORT]
[--test-switch-dir TEST-SWITCH_DIR]
[--test-switch-target TEST-SWITCH_TARGET]
[--test-switch-tester TEST-SWITCH_TESTER]
[app [app ...]]
ryu-manager: error: unrecognized arguments: --path-file
It does look like I need to register a new command line option somewhere before I can use it. Can someone point out to me how to do that? Also, can someone explain how to access the file (test.txt) inside the program?
You're on the right track; however, the CONF entry that you are creating actually needs to be loaded before your app is loaded, otherwise ryu-manager has no way of knowing it exists!
The file you are looking for is flags.py, under the ryu directory of the source tree (or under the root installation directory).
This is how the ryu/tests/switch/tester.py Ryu app defines its own arguments, so you might use that as your reference:
CONF.register_cli_opts([
# tests/switch/tester
cfg.StrOpt('target', default='0000000000000001', help='target sw dp-id'),
cfg.StrOpt('tester', default='0000000000000002', help='tester sw dp-id'),
cfg.StrOpt('dir', default='ryu/tests/switch/of13',
help='test files directory')
], group='test-switch')
Following this format, the CONF.register_cli_opts takes an array of config types exactly as you have done it (see ryu/cfg.py for the different types available).
You'll notice that when you run the ryu-manager help, i.e.
ryu-manager --help
the list that comes up is sorted by application (e.g. the group of arguments under 'test-switch options'). For that reason, you will want to specify a group name for your set of commands.
Now let us say that you used the group name 'my-app' and have an argument named 'path-file' in that group; the command line argument will be --my-app-path-file (this can get a little long), while you can access it in your application like this:
from ryu import cfg
CONF = cfg.CONF
path_file = CONF['my-app']['path_file']
Note the use of dash versus the use of underscores.
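So, assuming you register your option under the hypothetical group name 'my-app', your original invocation would become:
bin/ryu-manager --observe-links --my-app-path-file test.txt ryu/app/simple_switch.py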
Cheers!
On the repository home page, I can see comments posted in Recent Activity at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way?
Comments of pull requests, issues and commits can be retrieved using bitbucket’s REST API.
However, it seems that there is no way to list all of them in one place, so the only way to get them would be to query the API for each PR, issue, or commit of the repository.
Note that this takes a long time, since bitbucket has seemingly set a limit on the number of API accesses to repository data: I got Rate limit for this resource has been exceeded errors after retrieving around a thousand results, and after that I could retrieve only about one entry per second elapsed since the time of the last rate-limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the forged URL (below referred to as $url) by verifying that repository information is correctly retrieved as JSON data from it: curl -u username $url.
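For example, with hypothetical owner and repository names:
git remote -v
# origin  git@bitbucket.org:myteam/myrepo.git (fetch)
url='https://api.bitbucket.org/2.0/repositories/myteam/myrepo'
curl -u username "$url"   # should print the repository information as JSON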
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated.
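A sketch of following the pagination, assuming the standard 2.0 API behaviour where each page of results carries a next member pointing to the following page:
next="$url/commit/$id/comments"
while [ -n "$next" ]
do  json=$(curl -s -u username "$next")
    printf '%s\n' "$json"   # process this page's comments here
    next=$(printf '%s' "$json" | grep -o '"next": "[^"]*"' | cut -d'"' -f4)
done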
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
pw=$1
id=$2
json=$(curl -s -u username:"$pw" \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
printf '%s' "$json" | grep -q '"type": "error"' \
&& printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
&& printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility of the error mentioned above:
A) Prepare input data. From the local repository, generate the list of commits as wanted (run git fetch -a beforehand to update the local git repo if needed); check out git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter, so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, on one line; also see the security considerations below. The processing is parallelised across 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is, when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the two commands below to update the input and output files. Wait a while for the request limit to be lifted, then re-execute the xargs command and repeat until all the data is processed (that is, until wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits' comments, with the following two adjustments:
Edit the script to replace commit in the URL with pullrequests or with issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but it is not guaranteed to work, and even if it does, it leaves it visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ), although it seems that there are systems that let all users access this information (do an ls -l /proc/*/environ to check the environment files' permissions).
In the script, simply replace the lines pw=$1 and id=$2 with id=$1, then pass pw="$passwd" before xargs in the command line invocation. This will make the environment variable pw visible to xargs and all of its descendant processes, that is, the script and its children (curl, grep, cut, etc.), which may or may not read the variable. curl does not read the password from the environment, but if its password-hiding trick mentioned above works, then it might be good enough.
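Concretely, the invocation from step B above becomes:
< remaining.id pw="$passwd" xargs -P 15 -L 1 ./getNComments.sh > commits.temp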
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
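With that replacement, the script's curl call would look like this sketch (authinfo holding username:password as described, and printf being a shell built-in):
printf -- '-s\n-u "%s"\n' "$authinfo" |
    curl -K - https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size'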
A simple alternative to an environment variable that will not appear in ps output in any case is via a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl’s -K option as in the paragraph above. In the command line invocation replace $passwd with the filename.
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe&
pid=$!
exec 3<pipe
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all without drilling into each commit, I will update this answer.