TCL: How to name a variable that includes another variable

In TCL, I'm writing a procedure that returns the depth of a clock. But since I have several clocks, I want to name the variable depth_$clk:
proc find_depth {} {
    foreach clk $clocks {
        …
        set depth_$clk $max_depth
        echo $depth_$clk
    }
}
But I get:
Error: can't read "depth_": no such variable
Use error_info for more info. (CMD-013)

Your problem is this line:
echo $depth_$clk
The issue is that the $ syntax only treats a limited set of characters after it as part of the variable name, and a second $ is not one of them, so $depth_$clk is parsed as the variable depth_ followed by the value of $clk. Instead, you can use the set command with a single argument; $ is effectively syntactic sugar for that, but the command form lets you build the name with arbitrary substitutions.
echo [set depth_$clk]
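A quick interactive tclsh transcript makes the parsing visible (the clock name and depth value here are made up for illustration):
% set clk core_clk
core_clk
% set depth_core_clk 7
7
% puts $depth_$clk
can't read "depth_": no such variable
% puts [set depth_$clk]
7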
HOWEVER!
The real correct thing to do here is to switch to using an associative array. It's a slightly larger change to your code, but it lets you do more, since you get proper access to substitutions in array element names:
proc find_depth {} {
    foreach clk $clocks {
        …
        set depth($clk) $max_depth
        echo $depth($clk)
    }
}
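As a side benefit, once the depths live in an array you can walk over everything you've collected, for example to report all clocks at the end (a sketch; array names is standard Tcl, and echo is your tool's print command as in the code above):
foreach clk [array names depth] {
    echo "clock $clk has depth $depth($clk)"
}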

echo ${depth}_$cell
Braces can also help delimit a variable name from the text that follows it, but note that ${depth}_$cell reads a variable named depth and then appends _ and the value of $cell; it does not read a dynamically constructed name, so [set ...] or an array is still needed for that.
Thanks!

How do I store the value returned by either run or shell?

Let's say I have this script:
# prog.p6
my $info = run "uname";
When I run prog.p6, I get:
$ perl6 prog.p6
Linux
Is there a way to store a stringified version of the returned value and prevent it from being output to the terminal?
There's already a similar question but it doesn't provide a specific answer.
You need to enable the stdout pipe, which otherwise defaults to $*OUT, by setting :out. So:
my $proc = run("uname", :out);
my $stdout = $proc.out;
say $stdout.slurp;
$stdout.close;
which can be shortened to:
my $proc = run("uname", :out);
say $proc.out.slurp(:close);
If you want to capture output on stderr separately from stdout, you can do:
my $proc = run("uname", :out, :err);
say "[stdout] " ~ $proc.out.slurp(:close);
say "[stderr] " ~ $proc.err.slurp(:close);
or if you want to capture stdout and stderr to one pipe, then:
my $proc = run("uname", :merge);
say "[stdout and stderr] " ~ $proc.out.slurp(:close);
Finally, if you don't want to capture the output and don't want it output to the terminal:
my $proc = run("uname", :!out, :!err);
exit( $proc.exitcode );
The solution covered in this answer is concise, and that sometimes outweighs its disadvantages:
Doesn't store the result code. If you need that, use ugexe's solution instead.
Doesn't store output to stderr. If you need that, use ugexe's solution instead.
Potential vulnerability. This is explained below. Consider ugexe's solution instead.
Documentation of the features explained below starts with the quote adverb :exec.
Safest unsafe variant: q
The safest variant uses a single q:
say qx[ echo 42 ] # 42
If there's an error then the construct returns an empty string and any error message will appear on stderr.
This safest variant is analogous to a single quoted string like 'foo' passed to the shell. Single quoted strings don't interpolate so there's no vulnerability to a code injection attack.
That said, you're passing a single string to a shell that may not be the shell you're expecting, so it may not parse the string the way you expect.
Least safe unsafe variant: qq
The following line produces the same result as the q line but uses the least safe variant:
say qqx[ echo 42 ]
This double q variant is analogous to a double quoted string ("foo"). This form of string quoting does interpolate which means it is subject to a code injection attack if you include a variable in the string passed to the shell.
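For example, a hypothetical sketch of the risk ($file and its contents are made up; imagine the value came from user input):
my $file = 'x; echo INJECTED';
say qqx[ echo $file ];   # the shell is handed "echo x; echo INJECTED", so the injected command runs too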
By default run just passes the STDOUT and STDERR to the parent process's STDOUT and STDERR.
You have to tell it to do something else.
The simplest is to just give it :out to tell it to keep STDOUT. (Short for :out(True))
my $proc = run 'uname', :out;
my $result = $proc.out.slurp(:close);
Or, to process the output line by line:
my $proc = run 'uname', :out;
for $proc.out.lines(:close) {
    .say;
}
You can also effectively tell it to just send STDOUT to /dev/null with :!out. (Short for :out(False))
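A minimal sketch of that form (the exit code is still available even though the output is discarded):
my $proc = run 'uname', :!out;   # stdout goes nowhere instead of the terminal
say $proc.exitcode;              # 0 on success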
There are more things you can do with :out:
{
    my $file will leave {.close} = open :w, 'test.out';
    run 'uname', :out($file); # write directly to a file
}
print slurp 'test.out'; # Linux
my $proc = run 'uname', :out;
react {
    whenever $proc.out.Supply {
        .print;
        LAST {
            $proc.out.close;
            done; # in case there are other whenevers
        }
    }
}
If you are going to do that last one, it is probably better to use Proc::Async.
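For reference, a minimal Proc::Async sketch along the same lines, still using uname as the command; the tap simply prints each chunk of stdout as it arrives:
my $proc = Proc::Async.new('uname');
$proc.stdout.tap(-> $chunk { print $chunk });
await $proc.start;   # the Promise is kept once the process exits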

perl6: Is there a way to do editable prompt input?

In bash shell, if you hit up or down arrows, the shell will show you your previous or next command that you entered, and you can edit those commands to be new shell commands.
In perl6, if you do
my $name = prompt("Enter name: ");
it will print "Enter name: " and then wait for input. Is there a way to have perl6 give you a default value that you then just edit to become the new value? E.g.:
my $name = prompt("Your name:", "John Doe");
and it prints
Your name: John Doe
where the John Doe part is editable, and when you hit enter, the edited string is the value of $name.
https://docs.raku.org/routine/prompt does not show how to do it.
This is useful if you have to enter many long strings, each of which differs from the others by only a few characters.
Thanks.
To get the editing part going, you could use the Linenoise module:
zef install Linenoise
(https://github.com/hoelzro/p6-linenoise)
Then, in your code, do:
use Linenoise;
sub prompt($p) {
    my $l = linenoise $p;
    linenoiseHistoryAdd($l);
    $l
}
Then you can do your loop with prompt (a sketch follows below). Remember, basically all Perl 6 builtin functions can be overridden lexically. Now, how to fill in the original string, that I haven't figured out just yet; perhaps the libreadline docs can help you with that.
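For example, a minimal read-edit loop on top of that lexical override (the quit word and the prompt text are arbitrary):
loop {
    my $line = prompt('> ');
    last if !$line || $line eq 'quit';   # stop on empty input or "quit"
    say "You entered: $line";
}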
Well by default, programs are completely unaware of their terminals.
You need your program to communicate with the terminal to do things like pre-fill an input line, and it's unreasonable to expect Perl 6 to handle something like this as part of the core language.
That said, your exact case is handled by the Readline library as long as you have a compatible terminal.
It doesn't look like the Perl 6 Readline binding has pre-input hooks set up, so you need to handle the callback and read loop yourself, unfortunately. Here's my rough attempt that does exactly what you want:
use v6;
use Readline;
sub prompt-prefill($question, $suggestion) {
    my $rl = Readline.new;
    my $answer;
    my sub line-handler( Str $line ) {
        rl_callback_handler_remove();
        $answer = $line;
    }
    rl_callback_handler_install( "$question ", &line-handler );
    $rl.insert-text($suggestion);
    $rl.redisplay;
    while (!$answer) {
        $rl.callback-read-char();
    }
    return $answer;
}
my $name = prompt-prefill("What's your name?", "Bob");
say "Hi $name. Go away.";
If you are still set on using Linenoise, you might find the 'hints' feature good enough for your needs (it's used extensively by the redis-cli application if you want a demo). See the hint callback used with linenoiseSetHintsCallback in the linenoise example.c file. If that's not good enough you'll have to start digging into the guts of linenoise.
Another solution: use IO::Prompt.
With that you can set a default value and even a default type:
my $a = ask( "Life, the universe and everything?", 42, type => Num );
Life, the universe and everything? [42]
Int $a = 42
You can install it with:
zef install IO::Prompt
However, if just a default value is not enough, then it is better to use the approach Liz has suggested.

issue with a modification of youtube-dl in .zshrc

The code I have in my .zshrc is:
ytdcd () { # youtube-dl that automatically puts stuff in a specific folder and returns to the former working directory after.
    cd ~/youtube/new/ && {
        youtube-dl "$@"
        cd - > /dev/null
    }
}
ytd() { # so far, this function can only take one page, so I can only send one youtube video code per line; will modify it to accept multiple lines.
    for i in $*;
    do
        params=" $params https://youtu.be/$i"
    done
    ytdcd -f 18 $params
}
So, on the command line, when I enter ytd DFreHo3UCD0, I would like the video at https://youtu.be/DFreHo3UCD0 to be downloaded. The problem is that when I enter the command again for another video, the system just tries to download the video from the previous command and rightly claims that the download is already complete.
For example, entering:
> ytd DFreHo3UCD0
> ytd L3my9luehfU
would not attempt to download the video for L3my9luehfU, but only the video for DFreHo3UCD0, twice.
First: there's no point in ytdcd returning to the old directory. You can do the cd inside a subshell and then exec youtube-dl to replace that subshell with the application process. This has fewer things to go wrong: aborting the function's execution can't leave things in the wrong directory, because the parent shell (the one you're interactively using) never changed directories in the first place.
ytdcd () {
    (cd ~/youtube/new/ && exec youtube-dl "$@")
}
Second: use an array when building argument lists, not a string.
If you use set -x to log its execution, you'll see that your original command runs something like:
ytdcd -f 18 'https://youtu.be/one https://youtu.be/two https://youtu.be/three'
See those quotes? That's because $params is a string, passed as a single argument, not an array. (In bash, or another shell following POSIX rules, an unquoted string expansion would be string-split and glob-expanded, but zsh doesn't follow POSIX rules.)
The following builds up an array of separate arguments and passes them individually:
ytd() {
    local -a params=( )
    local i
    for i; do
        params+=( "https://youtu.be/$i" )
    done
    ytdcd -f 18 "${params[@]}"
}
Finally, it's come up that you don't actually intend to pass all the URLs to just one youtube-dl instance. To run a separate instance per URL, use:
ytd() {
    local i retval=0
    for i; do
        ytdcd -f 18 "$i" || retval=$?
    done
    return "$retval"
}
Note here that we're capturing non-success exit status, so as not to hide an error in any ytdcd instance other than the last (which would otherwise occur).
I would declare params as local, so that you are not appending URL after URL across calls...
You can try to add this awesome function to your .zshrc:
funfun() {
    local _fun1="$_fun1 fun1!"
    _fun2="$_fun2 fun2!"
    echo "1 says: $_fun1"
    echo "2 says: $_fun2"
}
Run it a couple of times to observe the difference ;)
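Here is roughly what you should see when you call it twice in a fresh shell (a sketch of the expected output):
$ funfun
1 says:  fun1!
2 says:  fun2!
$ funfun
1 says:  fun1!
2 says:  fun2! fun2!
_fun1 starts fresh on every call because it is local to the function, while _fun2 keeps growing because it lives in the global environment; that is exactly what happens to params in your ytd function.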
EDIT (Explanation):
When you source a shell script, you add its contents to your current environment; that is why you can run the functions it defines. When those functions use variables, the variables are global by default and accessible from anywhere in your environment. In this case, params is therefore defined globally for the whole length of your shell session, and since you want to allow downloading several videos at once, you keep appending values to this global variable, which grows all the time.
Declaring it local tells zsh to limit the scope of params to the function only.
Another solution is to reset the variable when you call the function.

Sharing variables between C Shell and TCL

I have a variable which is being used in two separate scripts: a C Shell one and a TCL one. Is there a way to define it just once and access it in both scripts?
vars.sh
#!/usr/bin/env tcsh
set a=b
run.sh
#!/usr/bin/env tcsh
source vars.sh
echo $a
vars.tcl
#!/usr/bin/env tclsh
set a b
run.tcl
#!/usr/bin/env tclsh
source vars.tcl
puts $a
I do not like the idea of maintaining two separate files that store the same variables in two different formats. Is there a way to use a single vars file and have the variables available to both C Shell and TCL?
The simplest method is to make the variables be environment variables, since those are inherited by a child process from their parent. (On the Tcl side, they're elements in the ::env global array, and on the C shell side, they can be read like any other variable but need to be set via setenv.)
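For example, a minimal sketch of the environment-variable approach, mirroring your file layout (the variable a and its value b are kept from your example):
vars.csh
#!/usr/bin/env tcsh
setenv a b
run.csh
#!/usr/bin/env tcsh
source vars.csh
echo $a          # the C shell reads it like any other variable
tclsh run.tcl    # the child tclsh inherits the environment
run.tcl
#!/usr/bin/env tclsh
puts $::env(a)   ;# environment variables appear in Tcl's ::env array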
Sharing a single configuration file is much harder, since the two languages use a different syntax. Provided you don't use anything complicated in the way of quoting, you can make Tcl parse the C shell format.
proc loadVariablesFromCshellFile {filename arrayName} {
    upvar 1 $arrayName array
    set f [open $filename]
    while {[gets $f line] >= 0} {
        if {[regexp {^\s*set (\w+)=(["']?)(.*)\2\s*$} $line -> key quote value]} {
            set array($key) $value
        }
    }
    close $f
}
This isn't complete code, since it doesn't handle substitution of variables inside the value, but it is enough to get you started. (I also hope you're not using that feature; if you are, portability is going to be quite a bit harder to achieve.) Here's how you'd use it:
#!/usr/bin/env tclsh
proc loadVariablesFromCshellFile {filename arrayName} {
    upvar 1 $arrayName array
    set f [open $filename]
    while {[gets $f line] >= 0} {
        if {[regexp {^\s*set (\w+)=(["']?)(.*)\2\s*$} $line -> key quote value]} {
            set array($key) $value
        }
    }
    close $f
}
loadVariablesFromCshellFile vars.sh myvars
puts $myvars(a)
While it is entirely possible to load the values straight into scalar globals on the Tcl side, I really don't recommend it, as it pollutes the global variable space from a source outside the Tcl program, which is a known piece of poor practice.

Using echo and read $variable not working in UNIX! (UNIX beginner!)

So I just started learning UNIX yesterday, and I'm trying to create a basic script that asks for your contact details (name, address, phone number) and then stores them in a file called details.out.
This is driving me NUTS! It's such an easy/basic thing, yet I can't do it, and I've been stuck on it for a solid hour now...
After much googling and searching, I still can't find the answer. So this is what I've done so far, and I was wondering where I am going wrong!
echo Please type your first and last name
read $firstname $lastname
echo Please type in your address
read $address
echo Please type in your phone number
read $phone
echo Thank you very much!
echo The details have been stored in '"details.out"'
cat >> details.out <<EOF
Name: echo $firstname echo $lastname
Address: echo $address
Phone Number: echo $phone
EOF
When I read "details.out", it displays as follows:
Name: echo
Address: echo
Phone Number: echo
ANY help would be appreciated! (And if you can try and point me in the right direction as opposed to straight up giving me the answer, I would appreciate that!)
P.S I'm using Putty if that helps!
when you use read (or assign variables), don't put the $ sigil on the variable names
when you display a variable, always put double quotes around it, e.g. echo "$var" (see the short illustration after this list)
when you use a here-doc, there is no need to put an echo command inside it
when you use echo, use quotes:
"Double quote" every expansion, and anything that could contain a special character, e.g. "$var", "$@", "${array[@]}", "$(command)". Use 'single quotes' to make something literal, e.g. 'Costs $5 USD'. See http://mywiki.wooledge.org/Quotes http://mywiki.wooledge.org/Arguments and http://wiki.bash-hackers.org/syntax/words
Whenever you put a $ before a variable name, you're retrieving the current value of that variable. You don't want to do that in your read command. The variables are empty when the script starts, so the empty values are substituted in place of $firstname and $lastname, and read is called with no arguments, causing it to read a line and discard it.
Setting a variable with assignment:
var=value
Setting a variable with read:
read var
Neither of them uses $var, because they don't want to look at the current value; they want to replace it.
There's no need for those echos in the heredoc either. They aren't in command position, so they'll just get copied as part of the input to cat.
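Putting those points together, a corrected version of your script might look like this (a sketch; the prompts are kept from your original):
echo "Please type your first and last name"
read firstname lastname
echo "Please type in your address"
read address
echo "Please type in your phone number"
read phone
echo "Thank you very much!"
echo 'The details have been stored in "details.out"'
cat >> details.out <<EOF
Name: $firstname $lastname
Address: $address
Phone Number: $phone
EOF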