How to edit a JSON file that resides on a remote server? - ssh

I have a JSON file on my remote server.
Location at remote host: ".docker/test.json"
{
"key1" : "Value1",
"Key2" : "Value2"
}
I want to add a new element to test.json from my local machine. I am trying the following command, but it is not working:
ssh <test-server> "jq '.key3 = "Value3"' .docker/test.json > .docker/test2.json && mv .docker/test2.json .docker/test.json"
It's giving me the following error:
bash: .docker/test2.json: No such file or directory

You have a shell quoting issue. You didn't escape the inner double quotes.
You can try the following:
ssh <test-server> 'jq ".key3 = \"Value3\"" .docker/test.json > .docker/test2.json && mv .docker/test2.json .docker/test.json'
which replaces the outer double quotes with single ones, because you don't need variable expansion in this statement.
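As a variant, and assuming the remote jq supports --arg (a long-standing jq option), you can pass the value as a jq variable so that no inner quotes need escaping at all:
ssh <test-server> "jq --arg v Value3 '.key3 = \$v' .docker/test.json > .docker/test2.json && mv .docker/test2.json .docker/test.json"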


Dynamic import in Jsonnet

I want to read an input file in Jsonnet, and the following works great for me:
local input = import './inputfile.json';
The problem is that I want to pass the file name through the Jsonnet CLI. I tried to use --ext-str or TLA, but in both cases I'm getting the following error:
computed imports are not allowed.
I also tried to use --ext-code like here:
jsonnet -J grafonnet-lib --ext-code input=(import "./inputfile.json") createDash.jsonnet
but then I'm getting:
zsh: unknown file attribute: i
Is there any solution for this problem?
I'm guessing zsh is gobbling up either the parentheses or the double quotes. Put both inside single quotes:
➜ tmp jsonnet -J grafonnet-lib --ext-code input='(import "./inputfile.json")' createDash.jsonnet
{ }
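Then, inside createDash.jsonnet, the external value can be read with std.extVar instead of a hard-coded import (a one-line sketch; the rest of the file stays as it was):
local input = std.extVar('input');  // replaces: local input = import './inputfile.json';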

Postgresql 12 command line arguments initdb via Python 3 code, how to proceed?

Using Python 3.8.1 code, I unzipped the PostgreSQL 12 distribution into a specific folder under the C: root; inside the code it creates two other folders, data and log. In the code I write the following line to set up a first database:
subP = subprocess.run([link_initdb, "-U NewDataBase", "-A AlphaBeta", "-E utf8", "-D C:\\Database\\pgsql\\data"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8")
But I get the following error message:
subP.stderr: initdb: error: could not create directory " C:": Invalid argument
I don't know what I'm doing wrong!
The problem is the space: each option and its value are passed to initdb as a single argument, so initdb sees a data directory whose name starts with a space (hence the " C:" in the error). Pass each flag and its value as separate list elements, or use the --option=value form with no space.
To solve it you can use:
subP = subprocess.run(
    [
        link_initdb,
        "-U", "postgres",                      # superuser name
        "-A", "password",                      # authentication method
        "-E", "utf8",                          # encoding
        "--pgdata=C:\\Database\\pgsql\\data",  # data directory, no space before the path
    ],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    encoding="utf-8",
)
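For reference, the corrected call is equivalent to running the following from a command prompt (assuming link_initdb points at initdb.exe):
initdb -U postgres -A password -E utf8 --pgdata=C:\Database\pgsql\data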

Passing Parameter in pig

A = load '$path' using PigStorage('$Delimiter') as ($table_schema);
I want to pass these parameters to the Pig command dynamically.
Can anyone help me with this by showing an example?
Try this :
test.cfg
path=/input/file/path
delimiter=,
table_schema=requiredschema:chararray
N.B. Give valid values for the above keys before the test run.
test.pig
A = load '$path' using PigStorage('$delimiter') as ($table_schema);
DUMP A;
Invocation :
pig -f test.pig -m test.cfg
-f : To specify the Pig script file name
-m : To specify the param file that holds the parameter values
Ref : Error getting when passing parameter through pig script for a similar use case.
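If you prefer not to keep a separate config file, the same parameters can be passed directly on the command line with -param (a sketch using the test.pig above):
pig -f test.pig -param path=/input/file/path -param delimiter=',' -param table_schema='requiredschema:chararray'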

WebLogic - Using environment variable / double quotes in "Arguments" in "Server Start"

I have an admin server, NodeManager, and 1 managed server, all on the same machine.
I am trying to enter something similar to this to the arguments field in the Server Start tab:
-Dmy.property=%USERPROFILE%\someDir\someJar.jar
But when the managed server is started it throws this exception:
Error opening zip file or JAR manifest missing : %USERPROFILE%\someDir\someJar.jar
It appears that the environment variable is not being translated into its value; it is just passed on to the managed server as plain text.
I tried surrounding the path with double quotes (") but the console validates the input and does not allow this: "Arguments may not contain '"'"
Even editing the config.xml file manually does not work, as the admin server fails to start up after this:
<Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason: [Management:141266]Parsing failure in config.xml: java.lang.IllegalArgumentException: Arguments may not contain '"'.>
I also tried using %20, to no avail; it is just passed as %20.
I thought that perhaps this had something to do with the spaces in the value of %USERPROFILE% (which is "C:\documents and settings.."), but the same thing happens with other environment variables which point to directories with no spaces.
My question:
Is there any supported way of:
using double quotes? What if I have to reference a folder with spaces in its name?
referencing an environment variable? What if I have to rely on its value for distributed servers where I do not know the variable's value in advance?
Edit based on comments:
Approach 1:
Open setDomainEnv.sh (Linux) or setDomainEnv.cmd (Windows) and search for export SERVER_NAME (Linux) or set SERVER_NAME (Windows). Skip the current and the next line.
On the line after that, insert:
customServerList="server1,server2"  # this server list should be taken as input
isCurrServerCustom=$(echo ${customServerList} | tr ',' '\n' | grep ${SERVER_NAME} | wc -l)
if [ $isCurrServerCustom -gt 0 ]; then
    # append the custom Java argument to the existing options
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dmy.property=${USERPROFILE}/someDir/someJar.jar"
fi
Save the setDomainEnv.sh file and restart the servers.
Note that I have only given the logic for Linux; for Windows, similar logic can be used with batch scripting syntax.
Approach 2:
Assuming the domain is already installed and the user provides the list of servers to which the JVM argument -Dmy.property needs to be added. The following is a Jython script (use wlst.sh to execute it). WLST Reference.
Usage: wlst.sh script_name props_file_location
import os
import sys
from java.io import File
from java.io import FileInputStream
from java.util import Properties
# extract properties from the properties file.
print 'Loading input properties...'
propsFile = sys.argv[1]
propInputStream = FileInputStream(propsFile)
configProps = Properties()
configProps.load(propInputStream)
domainDir = configProps.get("domainDir")
# serverList in the properties file should be comma separated
serverList = configProps.get("serverList")
# The current machine's logical name, as given while creating the domain. Basically the machine name on which the NM for the current host is configured.
# This param may not be required as an input if the machine name is the same as the hostname; in that case the socket module can be imported and socket.gethostname() used.
currMachineName = configProps.get("machineName")
jarDir = os.environ["USERPROFILE"]
argToAdd = '-Dmy.property=' + jarDir + File.separator + 'someDir' + File.separator + 'someJar.jar'
readDomain(domainDir)
for srvr in serverList.split(",") :
    cd('/Server/' + srvr)
    listenAddr = get('ListenAddress')
    if listenAddr != currMachineName :
        # Only change the current host's servers
        continue
    cd('/Server/' + srvr + '/ServerStart/' + srvr)
    argsOld = get('Arguments')
    if argsOld is not None :
        set('Arguments', argsOld + ' ' + argToAdd)
    else:
        set('Arguments', argToAdd)
updateDomain()
closeDomain()
# now restart all affected servers (i.e serverList)
# one way is to connect to adminserver and shutdown them and then start again
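A minimal sketch of the properties file and the invocation, assuming the script above is saved as add_jvm_arg.py (both file names are illustrative):
# add_jvm_arg.props
domainDir=/path/to/domains/mydomain
serverList=server1,server2
machineName=machine1
Invocation:
wlst.sh add_jvm_arg.py add_jvm_arg.props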
The script has to be run on every host where the managed servers are going to be deployed, in order to pick up the host-specific value of "USERPROFILE" for the JVM argument.
BTW, to answer your question in a line: it looks like the JVM arguments ultimately have to be supplied as literal text, and WLS does not translate environment variables provided as JVM arguments. It gives the impression of translating them when started from startWebLogic.cmd (e.g. using %DOMAIN_HOME%), but it is the shell/cmd executor that does the translation and then starts the JVM.

More efficient way of looping over SSH in KSH?

I currently have the following lines of code in a script:
set -A ARRAY OPTION1 OPTION2 OPTION3 OPTION4
set -A matches
for OPTION in ${ARRAY[@]}; do
    DIFF=$(ssh $USER@$host "diff $PERSONALCONF $PRESETS$OPTION")
    if [[ $DIFF == "" ]]; then
        set -A matches "${matches[@]}" $OPTION
    fi
done
Basically, I have a loop that goes through each element in a pre-defined array, connects to a remote server (same server each time), and then compares a file with a file as defined by the loop using the diff command. Basically, it compares a personal.conf file with personal.conf.option1, personal.conf.option2, etc. If there is no difference, it adds it to the array. If there is a difference, nothing happens.
I was wondering if it's possible to execute this, or get the same result (storing the matching files in an array ON THE HOST MACHINE, not on the server being connected to), by connecting only once via SSH. I cannot store anything on the remote server, nor can I execute a remote script on that server; I can only issue commands via ssh (kind of a goofy setup). Currently, it connects as many times as there are options, which seems inefficient. If anyone has a better solution I'd love to hear it.
Several options:
You can use OpenSSH multiplexing feature (see ssh(1)).
Also, most shells will gladly accept a script to run over stdin, so you could just run something like
cat script.sh | ssh $HOST /bin/sh
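Building on the stdin idea, here is a minimal single-connection sketch in ksh. It assumes the same variables as in the question; the remote side prints only the options whose diff is empty, and the local side collects them into the array:
REMOTE_CMD=""
for OPTION in "${ARRAY[@]}"; do
    REMOTE_CMD="$REMOTE_CMD diff \"$PERSONALCONF\" \"$PRESETS$OPTION\" >/dev/null 2>&1 && echo $OPTION;"
done
# a single ssh connection; its output is the list of matching options
set -A matches $(ssh "$USER@$host" "$REMOTE_CMD")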
Most scripting languages (Perl, Python, Ruby, etc.) have some SSH module that allows connection reuse:
#!/usr/bin/perl
use Net::OpenSSH;
my ($user, $host) = (...);
my @options = (...);
my @matches;
my $ssh = Net::OpenSSH->new("$user\@$host");
for my $option (@options) {
    my $diff = $ssh->capture("diff $personal_conf $presets$option");
    if ($ssh->error) {
        warn "command failed: " . $ssh->error;
    }
    else {
        push @matches, $option if $diff eq '';
    }
}
print "@matches\n";