Where are ksh user-defined variables stored on a UNIX machine?

Where are KornShell (ksh) user-defined variables (UDVs) stored on an AIX (Advanced Interactive eXecutive) machine?
Sample Commands:
#:/dir #variable=fooValue
#:/dir #echo $variable
fooValue
So is there a file on the AIX server with "fooValue" in text? Is the value stored in memory? Can the variable be sniffed out in any way?

The shell is a running process with its own little chunk of memory dealt to it by the operating system.
As you define and set variables, the shell stores their names and values inside its own process memory.
When the shell process exits, that memory is released back to the operating system, and the variables and their values are lost.
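A quick way to see this for yourself is to ask a child process (prompts abbreviated to $; foo is an arbitrary name):

$ foo=fooValue                      # stored only in this shell's memory
$ ksh -c 'echo "child sees: $foo"'
child sees:
$ export foo                        # now copied into the environment handed to children
$ ksh -c 'echo "child sees: $foo"'
child sees: fooValue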

It's certainly not stored on disk. For variables that have been exported, a program can access the memory region where the environment is stored using
extern char **environ;
see man 5 environ.
The less low-level way is via
char *getenv(const char *varname);
which returns the value of a single environment variable (which ought to be part of the environ pointer list); see man 3 getenv.
Note that a user-defined shell variable that has not been exported lives only in the shell's own process memory and never appears in the environment of child processes, so getenv() in a child will not find it.
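A minimal C sketch of both approaches (the variable name fooVar is just an example; only exported variables will show up):

#include <stdio.h>
#include <stdlib.h>

extern char **environ;   /* NULL-terminated list of "NAME=value" strings */

int main(void)
{
    /* walk the whole environment block, one entry per line */
    for (char **p = environ; *p != NULL; p++)
        printf("%s\n", *p);

    /* look up a single variable by name */
    const char *v = getenv("fooVar");
    printf("fooVar=%s\n", v ? v : "(not in the environment)");
    return 0;
}

Compiled as a.out, running fooVar=fooValue ./a.out shows the variable, while setting it in ksh without export does not.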

Raku-native disk space usage

Purpose:
Save a program that writes data to disk from vain attempts to write to a full filesystem;
Save bandwidth (don't download if nowhere to store);
Save the user's and the programmer's time and nerves (notify them of the problem instead of leaving them tearing their hair out over misleading error messages and wondering "why the heck is this software not working!").
The question comes in two parts:
Reporting storage space statistics (available, used, total etc.), either of all filesystems or of the filesystem that path in question belongs to.
Reporting a filesystem error on running out of space.
Part 1
Please share NATIVE Raku alternative(s) (TIMTOWTDIBSCINABTE, "Tim Toady Bicarbonate") to:
raku -e 'qqx{ df -P $*CWD }.print'
Here, raku -e executes the external df (disk free) program via shell quoting with interpolation (qqx{}), feeding it the -P (portable-format) argument and $*CWD (the current working directory), then .prints df's output.
The snippet was initially written as raku -e 'qqx{ df -hP $*CWD }.print', with both -h (human-readable) and -P (portable), but it turned out that this is not a ubiquitously valid command. On OpenBSD 7.0, it exits with an error: df: -h and -i are incompatible with -P.
For adding human-readability, you may consider the Number::Bytes::Human module.
raku -e 'run <<df -hP $*CWD>>'
If you're just outputting what df gives you on STDOUT, you don't need to do anything.
The << >> perform word quoting with interpolation, so the $*CWD will be interpolated.
Part 1 — Reporting storage space statistics
There's no built-in function for reporting storage space statistics. Options include:
Write Raku code (a few lines) that uses NativeCall to invoke a platform- or filesystem-specific system call (such as statvfs()) and use the information returned by that call.
Use a suitable Raku library. FileSystem::Capacity picks and runs an external program for you, and then makes its resulting data available in a portable form.
Use run (or similar[1]) to invoke a specific external program such as df; see the sketch after this list.
Use an Inline::* foreign-language adaptor to enable invoking of a foreign PL's solution for reporting storage space statistics, and use the info it provides.[2]
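A minimal sketch of the third option, assuming the POSIX df -P -k column layout (filesystem, 1024-blocks, used, available, capacity, mounted on):

raku -e 'my @c = run(<<df -Pk $*CWD>>, :out).out.slurp(:close).lines[1].words; say "available on @c[0]: {@c[3] * 1024} bytes"'

run captures df's STDOUT via :out, the second line of output is split into words, and the available-blocks column is converted to bytes.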
Part 2 — Reporting running out of space
Raku seems to neatly report No space left on device:
> spurt '/tmp/failwrite', 'filesystem is full!'
Failed to write bytes to filehandle: No space left on device
in block <unit> at <unknown file> line 1
> mkdir '/tmp/failmkdir'
Failed to create directory '/tmp/failmkdir' with mode '0o777': Failed to mkdir: No space left on device
in block <unit> at <unknown file> line 1
(Programmers will need to avoid throwing away these exceptions.)
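For part 2, here is a minimal sketch of catching that exception rather than discarding it (the path and the handler body are illustrative):

raku -e '
    spurt "/tmp/failwrite", "some data";
    say "write succeeded";
    CATCH {
        default { note "write failed: {.message}"; exit 1 }
    }
'

The CATCH block handles exceptions thrown anywhere in the surrounding block; default matches any exception, and .message carries the "No space left on device" text shown above.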
Footnotes
[1] run runs an external command without involving a shell, which eliminates the risks attendant on involving a shell. That said, Raku also supports use of a shell (because that can be convenient and appropriate in some scenarios). See the exchange of comments under the question (e.g. this one) for some brief discussion of this, and the shell doc for a summary of the risk:
All shell metacharacters are interpreted by the shell, including pipes, redirects, environment variable substitutions and so on. Shell escapes are a severe security concern and can cause confusion with unusual file names. Use run if you want to be safe.
[2] Foreign-language adaptors for Raku (Raku modules in the Inline:: namespace) allow Raku code to use code written in other languages. These adaptors are not part of the Raku language standard, and most have barely reached experimental status, if that; conversely, the best are in great shape and allow Raku code to use foreign libraries as if they were written for Raku. (As of 2021, Inline::Perl5 is the most polished.)

What is the x86 opcode for assembly variables and constants?

I know that every instruction has an opcode.
I could find the opcodes for the mov and sub instructions.
But what is the opcode for a variable and its type?
We use assembler directives to define variables and constants, so how are they represented in x86 opcodes?
NASM assembler, x86:
segment .bss
largest resb 2 ; reserves two bytes for largest
segment .data
number1 DW 12345 ; defines a constant number1
I tried this online assembly-to-opcode converter: https://defuse.ca/online-x86-assembler.htm#disassembly. But when I used NASM code to define a variable, it showed an error!
There's no opcode for variables. There are not even variables in machine code.
There is a CPU and memory. Memory contains some values (bytes).
The CPU has the cs:ip instruction pointer, which points to the memory address of the next instruction to execute; the CPU reads byte(s) from that address, interprets them as an opcode, and executes them as an instruction.
Whether the memory holds stored data or machine code doesn't matter; both are byte values.
What makes part of memory "data" or a "variable" is the logical interpretation created by the running code: it is the code that uses certain parts of memory only as "data/variables" and other parts as "code" (or occasionally as both at the same time, as in this 51-byte DOS COM program drawing the Greek flag on screen, where the XLAT instruction uses the code's opcodes also as source data for the blue/white stripe configuration).
Whether you write in your source:
x:
add al,al
or
x:
db 0x00, 0xC0
makes no difference: the resulting machine code is identical. In both cases the CPU will execute add al,al when pointed at that memory as an instruction, and in both cases mov ax,[x] will set ax to 0xC000 when the memory is used as a "variable".
You may want to check the listing file from the assembler (the -l <listing_file_name> command-line option for nasm) to see for yourself that there's no way to tell which bytes are code and which are data.
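For example, assembling either snippet above with nasm -l prog.lst produces a listing line roughly like this (exact column spacing varies between NASM versions), with the same machine bytes 00C0 whether the source said add al,al or db 0x00, 0xC0:

1 00000000 00C0    x: add al,al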
Assembler directives like segment, resb, or dw are not instructions and do not correspond to opcodes; that's why they are called directives rather than instructions. Roughly speaking, there are two kinds of directives:
one kind of directive configures the assembler. For example, the segment directive configures the assembler to continue assembly in the section you provided.
another kind of directive emits data. For example, the dw directive emits the given datum into the object file. This can be used to place arbitrary data into memory for use with your program.
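As a minimal sketch, the following NASM fragment contains no instructions at all, yet the data directives still emit bytes into the object file (the byte values in the comments are what ends up on disk):

segment .data
greeting db 'Hi', 0    ; emits the three bytes 48 69 00
number1  dw 12345      ; emits the two bytes 39 30 (12345 = 0x3039, little-endian)
segment .bss
largest  resb 2        ; emits nothing; merely reserves two bytes of run-time memory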

How exactly do you use variables in Jenkins?

Can someone concisely explain the differences between the three variable forms below? In all honesty, when I create a Jenkins job I randomly guess among the three until something works, but I'd love to understand rather than pick blindly.
${ENV,var="BUILD_USER"}
${BUILD_USER}
$BUILD_USER
Also, are there other ways of writing variables in Jenkins that I missed other than the 3 ways above?
When used in a statement:
${ENV,var="BUILD_USER"} -- token-macro syntax, understood by plugins that apply token expansion (the Email-ext plugin, for example); it looks up the environment variable BUILD_USER and returns its value.
example: curl ${ENV,var="BUILD_USER"}/api/xml
${BUILD_USER} -- expands to the value of the BUILD_USER variable in the current context.
example: curl ${BUILD_USER}/api/xml
$BUILD_USER -- the same expansion as ${BUILD_USER}; the braces are only needed when the name would otherwise run into adjacent text.
example: curl $BUILD_USER/api/xml
In general, variable expansion is up to the plugin that interprets a configuration value.
For example, if you set up a job parameter GIT_REPOSITORY and use it to configure an address where git clone should go by putting $GIT_REPOSITORY into the git repository field, it works, but only because the Jenkins git plugin has implemented variable expansion support.
Many plugins do implement it, but you cannot know without testing. These days, however, the support is so common that it is safe to assume it should work.
Both forms of reference, $VAR and ${VAR}, work and are equivalent. The latter form is useful when the variable sits next to other characters that could be interpreted as part of the variable name, like $VARX (Jenkins would look for a variable named VARX) versus ${VAR}X (Jenkins understands the variable is named VAR).
These rules have been modeled after variable expansion rules in Unix shells. Indeed, the job variables are made available as environment variables to build steps and in the Unix shell build step the variables are used the same way as above.
In a Windows CMD build step the variables are again used like any Windows environment variable: %VAR%.
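As a minimal illustration, assuming a job parameter named GIT_REPOSITORY:

In a Unix shell build step:
echo "Cloning $GIT_REPOSITORY"
echo "Mirror: ${GIT_REPOSITORY}_mirror"    # braces keep _mirror out of the variable name

In a Windows CMD build step:
echo Cloning %GIT_REPOSITORY%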

Batch file FOR command

I need to know some basic information about the FOR command in batch scripts. One thing I want to know is why it's %%variable instead of %variable%.
%A is for use on command lines only.
%%A is the form used inside batch files.
At the command line, the single-percent %a syntax indicates a FOR-loop variable.
The percent-bracketed %foo% represents the value of an environment variable named foo.
From ss64.com:
If you are using the FOR command at the command line rather than in a batch program, specify %parameter instead of %%parameter
So %%param is for batch scripts and %param is for live commands. Greg is right: %param% is a different kind of variable. The "variable" in the FOR command exists only in that scope, while environment %variables% persist in a wider scope.
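A minimal sketch of the two kinds side by side in a batch file (save as loop.bat; at an interactive prompt the loop variable would be written %f instead):

@echo off
rem %%f is a FOR-loop variable; it exists only within this loop
for %%f in (*.txt) do echo found: %%f
rem MSG is an ordinary environment variable; it persists for the rest of the script
set MSG=all done
echo %MSG%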

limitations of #! in scripts

It seems as if a script with a #! prefix can have the interpreter name and ONLY one argument. Thus:
#!/bin/ls -l
works, but
#!/usr/bin/env ls -l
doesn't
Do you agree? Any thoughts?
Francesc
Different Unixes interpret #! differently. Here's a comprehensive-looking writeup: http://www.in-ulm.de/~mascheck/various/shebang/
It seems that the lowest common denominator across platforms is "the interpreter (which must not itself be a script) and no more than one argument".
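You can watch the single-argument behaviour directly. On Linux, which passes everything after the interpreter path as one argument, env receives the literal string "ls -l" as its first argument and fails (output shown is roughly what GNU env prints; /tmp/t is an arbitrary file name):

$ printf '#!/usr/bin/env ls -l\n' > /tmp/t && chmod +x /tmp/t && /tmp/t
/usr/bin/env: 'ls -l': No such file or directory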
Originally, we only had one shell on Unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec() would succeed and the command would run. If the exec() failed, the shell would not give up; instead, it would try to interpret the command file as if it were a shell script.
Then Unix got more shells and the situation became confused. Most folks would write scripts in one shell and type commands in another, and each shell had different rules for feeding scripts to an interpreter.
This is when the "#! /" trick was invented. The idea was to let the kernel's exec() system calls succeed with shell scripts. When the kernel tries to exec() a file, it looks at the first 4 bytes, which represent an integer called a magic number. This tells the kernel whether or not it should try to run the file. "#! /" was added to the magic numbers the kernel knows, and the kernel was extended to actually be able to run shell scripts by itself. But some people could not type "#! /"; they kept leaving the space out. So the kernel was extended a bit again to allow "#!/" to work as a special 3-byte magic number.
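A minimal C sketch of that magic-number check, greatly simplified (a real kernel would go on to parse the interpreter path and at most one argument from the rest of the first line):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    unsigned char magic[2] = {0, 0};
    size_t n = fread(magic, 1, 2, f);   /* read the candidate magic bytes */
    fclose(f);
    if (n == 2 && memcmp(magic, "#!", 2) == 0)
        puts("#! script: run the interpreter named on the first line");
    else
        puts("no #! magic: try the other magic numbers (a.out, ELF, ...)");
    return 0;
}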