How to get a list of system users in vlang? - cross-platform

I am trying to list all the system users. I know that on Linux I could use
awk -F: '{ print $1 }' /etc/passwd and on Windows I could use wmic useraccount get name, but I was wondering if there was a function in V that would be better suited for this and didn't require different commands for Linux and Windows.

EDIT: You can now use fn user_names() ?[]string from the os module to get the names of the users on the system.
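For example (a minimal sketch; it assumes a V version where os.user_names is available and handles the error branch of the optional):

import os

fn main() {
    users := os.user_names() or {
        eprintln('could not list users: $err')
        return
    }
    for name in users {
        println(name)
    }
}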
The original answer follows. At the time there was no built-in way to get a list of users on the system; however, you can use compile-time if statements to execute the commands you referred to above.
import os

fn main() {
    mut result := os.Result{}
    $if windows {
        result = os.execute('wmic useraccount get name')
    } $else $if linux {
        // single quotes keep the shell from expanding $1; \$ keeps V from interpolating it
        result = os.execute("awk -F: '{ print \$1 }' /etc/passwd")
    } $else {
        println('unsupported os')
        return
    }
    // do something with result
    if result.exit_code != 0 {
        println('command failed: ${result.output}')
        return
    }
    println(result.output)
}
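On Linux the output is already one name per line; a short follow-up sketch for turning it into a V array (split_into_lines is a standard string method):

// note: on Windows the first line will be the 'Name' header printed by wmic
names := result.output.split_into_lines().filter(it.trim_space() != '')
println(names)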

Related

perl6 Is there a way to do editable prompt input?

In bash shell, if you hit up or down arrows, the shell will show you your previous or next command that you entered, and you can edit those commands to be new shell commands.
In perl6, if you do
my $name = prompt("Enter name: ");
it will print "Enter name: " and then ask for input; is there a way to have perl6 give you a default value and then you just edit the default to be the new value. E.g.:
my $name = prompt("Your name:", "John Doe");
and it prints
Your name: John Doe
where the John Doe part is editable, and when you hit enter, the edited string is the value of $name.
https://docs.raku.org/routine/prompt does not show how to do it.
This is useful if you have to enter many long strings, each of which is only a few characters different from the others.
Thanks.
To get the editing part going, you could use the Linenoise module:
zef install Linenoise
(https://github.com/hoelzro/p6-linenoise)
Then, in your code, do:
use Linenoise;
sub prompt($p) {
    my $l = linenoise $p;
    linenoiseHistoryAdd($l);
    $l
}
Then you can do your loop with prompt. Remember, basically all Perl 6 builtin functions can be overridden lexically. Now, how to fill in the original string, that I haven't figured out just yet. Perhaps the libreadline docs can help you with that.
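A sketch of such a loop, following the pattern in the module's README (linenoise returns an undefined value on Ctrl-D):

while (my $line = linenoise("Your name: ")).defined {
    linenoiseHistoryAdd($line) if $line.chars;
    say "Hello, $line";
}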
Well by default, programs are completely unaware of their terminals.
You need your program to communicate with the terminal to do things like pre-fill an input line, and it's unreasonable to expect Perl 6 to handle something like this as part of the core language.
That said, your exact case is handled by the Readline library as long as you have a compatible terminal.
It doesn't look like the Perl 6 Readline binding has pre-input hooks set up, so you need to handle the callback and read loop yourself, unfortunately. Here's my rough attempt that does exactly what you want:
use v6;
use Readline;

sub prompt-prefill($question, $suggestion) {
    my $rl = Readline.new;
    my $answer;
    my sub line-handler( Str $line ) {
        rl_callback_handler_remove();
        $answer = $line;
    }
    rl_callback_handler_install( "$question ", &line-handler );
    $rl.insert-text($suggestion);
    $rl.redisplay;
    while (!$answer) {
        $rl.callback-read-char();
    }
    return $answer;
}
my $name = prompt-prefill("What's your name?", "Bob");
say "Hi $name. Go away.";
If you are still set on using Linenoise, you might find the 'hints' feature good enough for your needs (it's used extensively by the redis-cli application if you want a demo). See the hint callback used with linenoiseSetHintsCallback in the linenoise example.c file. If that's not good enough you'll have to start digging into the guts of linenoise.
Another solution: use the IO::Prompt module. With it you can set a default value and even a default type:
my $a = ask( "Life, the universe and everything?", 42, type => Num );

This prints the prompt with the default shown in brackets:

Life, the universe and everything? [42]

and pressing Enter accepts the default, leaving $a = 42.
You can install it with:
zef install IO::Prompt
However, if just a default value is not enough, then it is better to use the approach Liz has suggested.

issue with a modification of youtube-dl in .zshrc

The code I have in my .zshrc is:

ytdcd () { # youtube-dl wrapper that automatically puts stuff in a specific folder and returns to the former working directory after
    cd ~/youtube/new/ && {
        youtube-dl "$@"
        cd - > /dev/null
    }
}

ytd() { # so far, this function can only take one page, so I can only send one youtube video code per line; will modify it to accept multiple lines
    for i in $*;
    do
        params=" $params https://youtu.be/$i"
    done
    ytdcd -f 18 $params
}
So, on the command line (terminal), when I enter ytd DFreHo3UCD0, I would like the video at https://youtu.be/DFreHo3UCD0 to be downloaded. The problem is that when I enter such commands in succession, the system just tries to download the video from the previous command again and rightly claims the download is complete.
For example, entering:
> ytd DFreHo3UCD0
> ytd L3my9luehfU
would not attempt to download the video for L3my9luehfU but only the video for DFreHo3UCD0 twice.
First -- there's no point in returning to the old directory in ytdcd: you can change to the new directory inside a subshell and then exec youtube-dl to replace that subshell with the application process:

ytdcd () {
    (cd ~/youtube/new/ && exec youtube-dl "$@")
}

This leaves fewer things to go wrong: aborting the function's execution can't leave the shell in the wrong directory, because the parent shell (the one you're interactively using) never changed directories in the first place.
Second -- use an array when building argument lists, not a string.
If you use set -x to log its execution, you'll see that your original command runs something like:
ytdcd -f 18 'https://youtu.be/one https://youtu.be/two https://youtu.be/three'
See those quotes? That's because $params is a string, passed as a single argument, not an array. (In bash -- or another shell following POSIX rules -- an unquoted string expansion would be string-split and glob-expanded, but zsh doesn't follow POSIX rules).
The following builds up an array of separate arguments and passes them individually:
ytd() {
    local -a params=( )
    local i
    for i; do
        params+=( "https://youtu.be/$i" )
    done
    ytdcd -f 18 "${params[@]}"
}
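With the array version, set -x shows each URL passed to ytdcd as its own argument, something like (hypothetical IDs):

ytdcd -f 18 https://youtu.be/one https://youtu.be/two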
Finally, it's come up that you don't actually intend to pass all the URLs to just one youtube-dl instance. To run a separate instance per URL, use:
ytd() {
    local i retval=0
    for i; do
        ytdcd -f 18 "$i" || retval=$?
    done
    return "$retval"
}
Note here that we're capturing non-success exit status, so as not to hide an error in any ytdcd instance other than the last (which would otherwise occur).
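With either fixed version, the question's example now behaves as intended:

> ytd DFreHo3UCD0
> ytd L3my9luehfU

Each call downloads only the video(s) named in that call, because no state leaks between invocations.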
I would declare params as local, so that you are not appending URL after URL across calls...
You can try to add this awesome function to your .zshrc:
funfun() {
    local _fun1="$_fun1 fun1!"
    _fun2="$_fun2 fun2!"
    echo "1 says: $_fun1"
    echo "2 says: $_fun2"
}
To observe the thing ;)
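Run it twice in the same shell and you should see something like this -- only the variable that was not declared local keeps growing:

$ funfun
1 says:  fun1!
2 says:  fun2!
$ funfun
1 says:  fun1!
2 says:  fun2! fun2!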
EDIT (Explanation):
When you source a shell script, you add it to your current environment; that is why you can run the functions you define there. When those functions use variables, by default those variables are global and accessible from anywhere in your environment. Therefore, in this case params is defined globally for the whole length of your shell session. Since you want to allow downloading several videos at once, you keep appending values to this global variable, which grows all the time.
Enforcing local tells zsh to limit the scope of params to the function only.
Another solution is to reset the variable when you call the function.
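Applied to the original function, that reset would look like this (a sketch; the array-based version above is still the cleaner fix):

ytd() {
    params=""   # clear the global left over from any previous call
    for i in $*; do
        params=" $params https://youtu.be/$i"
    done
    ytdcd -f 18 $params
}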

Function doesn't have the same behavior if called from a function

I have written a small function that converts a hex number to decimal using bc. When called directly, it works; however, when called multiple times from a single statement in another function, it fails.
Here's a demo script to reproduce the issue:
awk 'function hd(h) {
    cmd=sprintf("echo \"ibase=16; obase=A; %s\"|bc", h);
    cmd|getline d;
    printf("hd(%s)=%s\n", h, d);
    return d;
}
function test() {
    printf("A=%d, FF=%d\n", hd("A"), hd("FF"));
}
BEGIN {
    printf("A=%d, FF=%d\n", hd("A"), hd("FF"));
    test();
}'
Here is the output of this:
hd(A)=10
hd(FF)=255
A=10, FF=255
hd(A)=255
hd(FF)=255
A=255, FF=255
As you can see, when executed directly in BEGIN, it works; however when executed through the test() function it fails.
I am using GNU Awk 3.1.5. I also tried GNU Awk 4.1.1 on another machine; it fails in a similar fashion.
The problem is that you didn't close the pipe after cmd|getline d. Awk keeps each command pipe open until you close it, so when test() builds the very same command strings again, getline reads from the already-exhausted pipes, returns 0, and leaves the global variable d holding its previous value -- which is why every later call reports 255. Adding

close(cmd)

after the getline fixes the problem. getline should be used with caution.
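For reference, here is hd() with the fix applied:

function hd(h) {
    cmd=sprintf("echo \"ibase=16; obase=A; %s\"|bc", h);
    cmd|getline d;
    close(cmd);    # release the pipe so the next identical command reruns
    printf("hd(%s)=%s\n", h, d);
    return d;
}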
P.S. printf in awk is a statement, not a function.

Cronjob does not execute command line in perl script

I am unfamiliar with Linux and the Linux environment, so do pardon me if I make any mistakes; do comment to clarify.
I have created a simple Perl script. This script creates an SQL file and, as shown, feeds it to mysql so that the INSERT lines in the file are run against the database.
#!/usr/bin/perl
use strict;
use warnings;
use POSIX 'strftime';

my $SQL_COMMAND;
my $HOST = "i";
my $USERNAME = "need";
my $PASSWORD = "help";
my $NOW_TIMESTAMP = strftime '%Y-%m-%d_%H-%M-%S', localtime;

open my $out_fh, '>>', "$NOW_TIMESTAMP.sql" or die 'Unable to create sql file';
printf {$out_fh} "INSERT INTO BOL_LOCK.test(name) VALUES ('wow');";

sub insert()
{
    my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
    while ( my $sql_file = glob '*.sql' )
    {
        my $status = system( "$SQL_COMMAND < $sql_file" );
        if ( $status == 0 )
        {
            print "pass";
        }
        else
        {
            print "fail";
        }
    }
}

insert();
This works if I execute it while I am logged in as a user (I do not have admin access). However, when I set a cron job to run this file, let's say at 10.08 am, using the following line (in crontab -e):
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl > /dev/null 2>&1
I know the script is being executed as the sql file is created. However no new rows are inserted into the database after 10.08am. I've searched for solutions and some have suggested using the DBI module but it's not available on the server.
EDIT: Didn't manage to solve it in the end. A root/admin account was used to execute the script, so that "solved" the problem.
First things first, get rid of the > /dev/null 2>&1 at the end of your crontab entry (at least temporarily) so you can actually see any errors that may be occurring.
In other words, change it temporarily to something like:
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1
Then you can examine the /tmp/myfile file to see what's being output.
The most likely case is that mysql is not actually on the path in your cron job, because cron itself gives a rather minimal environment.
To fix that problem (assuming that's what it is), see this answer, which gives some guidelines on how best to expand the cron environment to give you what you need. That will probably just involve adding the MySQL executable directory to your PATH variable.
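For example, a crontab along these lines (the MySQL directory is illustrative; adjust it to wherever mysql actually lives on your system):

PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/mysql/bin
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1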
The other thing you may want to consider is closing $out_fh before trying to pass the file to mysql - if the buffers haven't been flushed, it may still be an empty file as far as other processes are concerned.
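For example, right after the printf and before insert() is called:

close $out_fh or die "Unable to close sql file: $!";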
The expression glob(".* *") matches all files in the current working directory.
- http://perldoc.perl.org/functions/glob.html
you should not rely on the wd in a cron job. If you want to use a glob (or any file operation) with a relative path, set the wd with chdir first.
source: http://www.perlmonks.org/bare/?node_id=395387
So if your working directory is, for example /home/user, you should insert
chdir('/home/user/');
before the while loop, i.e.:
sub insert()
{
    my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
    chdir('/home/user/');
    while ( my $sql_file = glob '*.sql' )
    {
    ...
replace /home/user with wherever your sql files are being created.
It's better to do as much processing within Perl as possible. It avoids the overhead of generating a separate shell process and leaves everything under the control of the program, so that you can handle any errors much more simply.
Database access from Perl is done using the DBI module. This program demonstrates how to achieve with DBI what you have written using the mysql utility. As you can see, it's also much more concise:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
my $host = "i";
my $username = "need";
my $password = "help";
my $dbh = DBI->connect("DBI:mysql:database=test;host=$host", $username, $password);
my $insert = $dbh->prepare('INSERT INTO BOL_LOCK.test(name) VALUES (?)');
my $rv = $insert->execute('wow');
print $rv ? "pass\n" : "fail\n";

Open a piped process (sqlplus) in perl and get information from the query?

Basically, I'd like to open a pipe to sqlplus using Perl, sending a query and then getting back the information from the query.
Current code:
open(PIPE, '-|', "sqlplus user/password\@server_details");
while (<PIPE>) {
    print $_;
}
This allows me to jump into sqlplus and run my query.
What I'm having trouble figuring out is how to let Perl send sqlplus the query (since it's always the same query), and once that's done, how can I get the information written back to a variable in my Perl script?
PS - I know about DBI... but I'd like to know how to do it using the above method, as inelegant as it is :)
Made some changes to the code, and I can now send my query to sqlplus but it disconnects... and I don't know how to get the results back from it.
my $squery = "select column from table where rownum <= 10;"
# Open pipe to sqlplus, connect to server...
open(PIPE, '|-', "sqlplus user/password#server_details") or die "I cannot fork: $!";
# Print the query to PIPE?
print PIPE $squery;
Would it be a case of grabbing the STDOUT from sqlplus and then storing it using the Perl (parent) script?
I'd like to store it in an array for parsing later, basically.
Flow diagram:
Perl script (parent) -> open pipe into sqlplus (child) -> print query on pipe -> sqlplus outputs results on screen (STDOUT?) -> read the STDOUT into an array in the Perl script (parent)
Edit: It could be that forking the process into sqlplus might not be viable using this method and I will have to use DBI. Just waiting to see if anyone else answers...
Forget screen scraping, Perl has a perfectly cromulent database interface.
I think you probably want IPC::Run. You'll be using the start function to get things going (@cat here is the command and its arguments, e.g. my @cat = ('cat');):

my $h = start \@cat, \$in, \$out;

You would assign your query to the $in variable and pump until you got the expected output in the $out variable.
$in = "first input\n";
## Now do I/O. start() does no I/O.
pump $h while length $in; ## Wait for all input to go
## Now do some more I/O.
$in = "second input\n";
pump $h until $out =~ /second input/;
## Clean up
finish $h or die "cat returned $?";
This example is stolen from the CPAN page, which you should visit if you want more examples.
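Applied to the sqlplus case, a rough, untested sketch might look like this (the connect string and query are the placeholders from the question; -S merely suppresses the sqlplus banner):

use strict;
use warnings;
use IPC::Run qw( start pump finish );

my @sqlplus = ( 'sqlplus', '-S', 'user/password@server_details' );
my ( $in, $out ) = ( '', '' );
my $h = start \@sqlplus, \$in, \$out;

$in = "select column from table where rownum <= 10;\n";
pump $h while length $in;    # wait until sqlplus has consumed the query
$in = "exit\n";              # ask sqlplus to quit; finish() flushes and reaps
finish $h or die "sqlplus returned $?";

my @rows = split /\n/, $out; # the captured query output, one line per element
print "$_\n" for @rows;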
If your query is static, consider moving it into its own file and having sqlplus load and execute it:
open(my $pipe, '-|', 'sqlplus', 'user/password@server_details',
    '@/path/to/sql-lib/your-query.sql', 'query_param_1', 'query_param_2') or die $!;
while (<$pipe>) {
    print $_;
}
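For illustration, a hypothetical your-query.sql could pick up the two arguments passed above, since sqlplus exposes positional script arguments as the substitution variables &1 and &2:

-- your-query.sql (hypothetical contents)
select some_column from some_table where rownum <= &1 and other_column = '&2';
exit;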