What is a simple API for copying file(s) on VxWorks (something like CopyFile() in the Windows API)?
I assume you are talking about working in the command shell, so the commands may look like:
-> ls // lists the current directory contents
Myfile1
Myfile2
YourFile2.txt
value = 0 = 0x0 // return status of the ls command - executed w/o errors
-> copy "Myfile1","/YourDirectory/Myfile1" // FORMAT: copy "src" , "dest"
// NB: src & dest arguments must be strings
value = 0 = 0x0 // return status of copy command.
-> cd "/YourDirectory/" // change default directory - notice the trailing slash (/)
value = 0 = 0x0 // return status of cd command
-> ls
xyzfile
Myfile1
value = 0 = 0x0
I hope this helps
HadziJo
Generally, anything that can be executed at the shell can be called from a program other than the shell.
status = copy("Myfile1", "/YourDirectory/Myfile1");
if (status == OK) .....
You might look at the man page for xcopy as well depending on the functionality you need.
You can also use the "cp" command in the cmd shell (VxWorks 6.x), but that is not an API, so it probably doesn't answer your question exactly.
The best method I found is xcopy("fromPath", "toPath"). It recursively copies everything from fromPath to toPath, including folders and subfolders.
Check out the VxWorks manual: http://www.vxdev.com/docs/vx55man/vxworks/ref/usrFsLib.html#xcopy
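If you need to do this from code rather than from the shell, here is a minimal C sketch based on the usrFsLib routines named above (assumption: usrFsLib is built into your image and exposes copy() and xcopy() as documented; the paths are placeholders):
/* copyExample.c - hedged sketch, not production code */
#include <vxWorks.h>
#include <usrFsLib.h>   /* copy(), xcopy() - check the header name for your VxWorks version */
#include <stdio.h>

STATUS copyExample (void)
    {
    /* single-file copy, like CopyFile() on Windows */
    if (copy ("Myfile1", "/YourDirectory/Myfile1") != OK)
        {
        printf ("copy failed\n");
        return (ERROR);
        }

    /* recursive copy of a whole directory tree */
    if (xcopy ("/SrcDirectory", "/DestDirectory") != OK)
        {
        printf ("xcopy failed\n");
        return (ERROR);
        }

    return (OK);
    }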
I am a newcomer to Nextflow and I am trying to process multiple files in a workflow. There are more than 300 of these files, so I would prefer not to paste them onto the command line as an option. What I have done instead is create a file listing the names of the files I need to process, but I am not sure how to pass it into the process. This is what I've tried:
params.SRRs = "srr_ids.txt"
process tmp {
input:
file ids
output:
path "*.txt"
script:
'''
while read id; do
touch ${id}.txt;
echo ${id} > ${id}.txt;
done < $ids
'''
}
workflow {
tmp(params.SRRs)
}
The script is supposed to read in the file srr_ids.txt and create files that have their IDs in them (just testing on a smaller task). The error log says that the id variable is unbound, but I don't understand why. What is the conventional way of passing lots of filenames to a pipeline? Should I write some other process that parses the list?
Maybe there's a typo in your question, but the error is actually that the ids variable is unbound:
Command error:
.command.sh: line 5: ids: unbound variable
The problem is that when you use a single-quoted script string, you will not be able to access Nextflow variables in your script block. You can either define your script using a double-quoted string and escape your shell variables:
params.SRRs = "srr_ids.txt"
process tmp {
input:
path ids
output:
path "*.txt"
script:
"""
while read id; do
touch "\${id}.txt"
echo "\${id}" > "\${id}.txt"
done < "${ids}"
"""
}
workflow {
SRRs = file(params.SRRs)
tmp(SRRs)
}
Or, use a shell block which uses the exclamation mark ! character as the variable placeholder for Nextflow variables. This makes it possible to use both Nextflow and shell variables in the same piece of code without having to escape each of the shell variables:
params.SRRs = "srr_ids.txt"
process tmp {
input:
path ids
output:
path "*.txt"
shell:
'''
while read id; do
touch "${id}.txt"
echo "${id}" > "${id}.txt"
done < "!{ids}"
'''
}
workflow {
SRRs = file(params.SRRs)
tmp(SRRs)
}
What is the conventional way of passing lots of filenames to a pipeline?
The conventional way, I think, is to actually supply one (or more) glob patterns to the fromPath channel factory method. For example:
params.SRRs = "./path/to/files/SRR*.fastq.gz"
workflow {
Channel
.fromPath( params.SRRs )
.view()
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.4
Launching `main.nf` [sleepy_bernard] DSL2 - revision: 30020008a7
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1910483.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1910482.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448795.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448793.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448794.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448792.fastq.gz
If instead you would prefer to pass in a list of filenames, like in your example, use either the splitCsv or the splitText operator to get what you want. For example:
params.SRRs = "srr_ids.txt"
workflow {
Channel
.fromPath( params.SRRs )
.splitText() { it.strip() }
.view()
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.4
Launching `main.nf` [fervent_ramanujan] DSL2 - revision: 89a1771d50
SRR1448794
SRR1448795
SRR1448792
SRR1448793
SRR1910483
SRR1910482
Should I write some other process that parses the list?
You may not need to. My feeling is that your code might benefit from using the fromSRA factory method, but we don't really have enough details to say one way or the other. If you need to, you could just write a function that returns a channel.
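For completeness, a rough sketch of such a helper function, built from the same operators used above (the function name get_srr_channel is just illustrative):
def get_srr_channel( ids_file ) {
    // turn a text file of ids (one per line) into a channel of id strings
    return Channel
        .fromPath( ids_file )
        .splitText() { it.strip() }
}

workflow {
    get_srr_channel( params.SRRs ).view()
}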
I maintain my user.lua per project folder. Is there anything in place that lets me exclude ZeroBrane's own environment paths when I check a module require statement with "Evaluate in Console"?
The reason for this is that I want to ensure that everything works within the plugin engine itself.
This is what gets checked for a missing module.
lualibs and bin are ZeroBrane specific, if I see it right.
Output
local toast = require("toast")
[string " local toast = require("toast")"]:1: module 'toast' not found:
no field package.preload['toast']
no file 'lualibs/toast.lua'
no file 'lualibs/toast/toast.lua'
no file 'lualibs/toast/init.lua'
no file './toast.lua'
no file '/usr/local/share/luajit-2.0.4/toast.lua'
no file '/usr/local/share/lua/5.1/toast.lua'
no file '/usr/local/share/lua/5.1/toast/init.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/toast.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/toast/init.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Modules/toast.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Modules/toast/init.lua'
no file 'bin/clibs/libtoast.dylib'
no file 'bin/clibs/toast.dylib'
no file './toast.so'
no file '/usr/local/lib/lua/5.1/toast.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/libtoast_64.so'
Here is my user.lua file at this time
--[[--
Use this file to specify **User** preferences.
Review [examples](+/Applications/ZeroBraneStudio.app/Contents/ZeroBraneStudio/cfg/user-sample.lua) or check [online documentation](http://studio.zerobrane.com/documentation.html) for details.
--]]--
--https://studio.zerobrane.com/doc-general-preferences#debugger
-- to automatically open files requested during debugging
editor.autoactivate = true
--enable verbose output
--debugger.verbose=true
--[[--
specify how print results should be redirected in the application being debugged (v0.39+). Use 'c' for ‘copying’ (appears in the application output and the Output panel), 'r' for ‘redirecting’ (only appears in the Output panel), or 'd' for ‘default’ (only appears in the application output). This is mostly useful for remote debugging to specify how the output should be redirected.
--]]--
debugger.redirect="c"
-- to force execution to continue immediately after starting debugging;
-- set to `false` to disable (the interpreter will stop on the first line or
-- when debugging starts); some interpreters may use `true` or `false`
-- by default, but can be still reconfigured with this setting.
debugger.runonstart = true
-- FlyWithLua.ini version 2.7.6 build 2018-10-24
-- Where to search for modules.
-- use this to find your project folder: select the print call below, right mouse button --> Evaluate in Console
--print(ide.filetree.projdir)
ZBSProjDir = "/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua"
INTERNALS_DIRECTORY = ZBSProjDir .. "/Internals/"
MODULES_DIRECTORY = ZBSProjDir .. "/Modules/"
package.path = table.concat({
package.path,
INTERNALS_DIRECTORY .. "?.lua",
INTERNALS_DIRECTORY .. "?/init.lua",
MODULES_DIRECTORY .. "?.lua",
MODULES_DIRECTORY .. "?/init.lua",
}, ";")
package.cpath = table.concat({
package.cpath,
INTERNALS_DIRECTORY .. "?.ext",
MODULES_DIRECTORY .. "?.ext",
}, ";")
-- Produce a correct name pattern for binary modules for OS and architecture.
-- This resolves clash between OS X and Linux binary modules by requiring "lib"
-- prefix for Linux ones.
local library_pattern = "?_64."
if SYSTEM == "IBM" then
library_pattern = library_pattern .. "dll"
elseif SYSTEM == "APL" then
library_pattern = library_pattern .. "so"
else
library_pattern = "lib" .. library_pattern .. "so"
end
package.cpath = package.cpath:gsub("?.ext", library_pattern)
Version --> ZeroBrane Studio (1.90; MobDebug 0.706)
Greetings Lars
You should get the desired effect if the toast module file is located in your project directory. When a command is executed in the Console, the current directory is set to the project directory, so even though the lualibs folder from the IDE may be in the path, it should make no difference (unless you copied the module into lualibs).
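If you still want to rule out the IDE's own search paths when testing a require, here is a rough sketch you could paste into the Console (assumption: the 'lualibs' entries in your output all come from ZeroBrane; adjust the pattern to your setup):
-- drop ZeroBrane's own lualibs entries from package.path before testing require
local kept = {}
for entry in package.path:gmatch("[^;]+") do
  if not entry:find("lualibs", 1, true) then kept[#kept+1] = entry end
end
package.path = table.concat(kept, ";")
print(package.path) -- verify that only your project/plugin paths remain
-- repeat the same idea with package.cpath and the 'bin/clibs' entries if needed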
I wrote the code below, which extracts the directory name along with the file name; I will then use the PURGE command on that extracted text.
$ sear VAXMANAGERS_ROOT:[PROC]TEMP.LIS LOG/out=VAXMANAGERS_ROOT:[DEV]FVLIM.TXT
$ OPEN IN VAXMANAGERS_ROOT:[DEV]FVLIM.TXT
$ LOOP:
$ READ/END_OF_FILE=ENDIT IN ABCD
$ GOTO LOOP
$ ENDIT:
$ close in
$ ERROR=F$EXTRACT(0,59,ABCD)
$ sh sym ERROR
$ purge/keep=1 'ERROR'
The output is as follows:
ERROR = "$1$DKC102:[PROD_LIVE.LOG]DP2017_TMP2.LIS;27392 "
The problem here is that the directory length varies every time (it may be 59 or 40 or some other value, but the directory and filename together will not exceed 59 characters on my system). So in the above output, the system is also fetching the version number of that file, and I am not able to purge the file while the version number is attached.
%PURGE-E-PURGEVER, version numbers not permitted
Any suggestion on how to eliminate the version number from the output?
I cannot use the exact length of the directory, as the directory length varies every time... :(
The answer with F$ELEMENT( 0, ";", ABCD ) should work, as confirmed. I might script something like this:
$ ERROR = F$PARSE(";",ERROR) ! will return $1$DKC102:[PROD_LIVE.LOG]DP2017_TMP2.LIS;
$ ERROR = ERROR - ";"
$ PURGE/KEEP=1 'ERROR'
Not sure why you have the read loop. What you will get is the last line in the file, but assuming that's what you want.
While HABO explained it, here are some more explanations.
Suppose I use f$search to check if a file exists
a = f$search("sys$manager:net$server.log")
then I find it exists
wr sys$output a
shows
SYS$SYSROOT:[SYSMGR]NET$SERVER.LOG;9
From the help of f$parse I get
help lex f$parse arg
shows, among other things
Specifies a character string containing the name of a field
in a file specification. Specifying the field argument causes
the F$PARSE function to return a specific portion of a file
specification.
Specify one of the following field names (do not abbreviate):
NODE Node name
DEVICE Device name
DIRECTORY Directory name
NAME File name
TYPE File type
VERSION File version number
So I can do
wr sys$output f$parse(a,,,"DEVICE")
which shows
SYS$SYSROOT:
and also
wr sys$output f$parse(a,,,"DIRECTORY")
so I get
[SYSMGR]
and
wr sys$output f$parse(a,,,"NAME")
shows
NET$SERVER
and
wr sys$output f$parse(a,,,"TYPE")
shows
.LOG
the version is
wr sys$output f$parse(a,,,"VERSION")
shown as
;9
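Putting those fields together gives a version-free specification that PURGE will accept; a small sketch continuing the example above:
$ ! sketch: rebuild the file spec without the version field
$ noversion = f$parse(a,,,"DEVICE") + f$parse(a,,,"DIRECTORY") + -
              f$parse(a,,,"NAME") + f$parse(a,,,"TYPE")
$ wr sys$output noversion       ! SYS$SYSROOT:[SYSMGR]NET$SERVER.LOG
$ purge/keep=1 'noversion'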
The lexical functions can be handy; check them out using
help lexical
it shows
F$CONTEXT F$CSID F$CUNITS F$CVSI F$CVTIME F$CVUI F$DELTA_TIME F$DEVICE F$DIRECTORY F$EDIT
F$ELEMENT F$ENVIRONMENT F$EXTRACT F$FAO F$FID_TO_NAME F$FILE_ATTRIBUTES F$GETDVI F$GETENV
F$GETJPI F$GETQUI F$GETSYI F$IDENTIFIER F$INTEGER F$LENGTH F$LICENSE F$LOCATE F$MATCH_WILD
F$MESSAGE F$MODE F$MULTIPATH F$PARSE F$PID F$PRIVILEGE F$PROCESS F$READLINK F$SEARCH
F$SETPRV F$STRING F$SYMLINK_ATTRIBUTES F$TIME F$TRNLNM F$TYPE F$UNIQUE F$USER
The code I have in my .zshrc is:
ytdcd () { #youtube-dl that automatically puts stuff in a specific folder and returns to the former working directory after.
cd ~/youtube/new/ && {
youtube-dl "$@"
cd - > /dev/null
}
}
ytd() { #sofar, this function can only take one page. so, i can only send one youttube video code per line. will modify it to accept multiple lines..
for i in $*;
do
params=" $params https://youtu.be/$i"
done
ytdcd -f 18 $params
}
So, on the command line (terminal), when I enter ytd DFreHo3UCD0, I would like the video at https://youtu.be/DFreHo3UCD0 to be downloaded. The problem is that when I enter the command in succession, the system just tries to download the video from the previous command and rightly claims the download is already complete.
For example, entering:
> ytd DFreHo3UCD0
> ytd L3my9luehfU
would not attempt to download the video for L3my9luehfU but only the video for DFreHo3UCD0 twice.
First -- there's no need to return to the old directory in ytdcd: you can do the cd inside a subshell, and then exec youtube-dl to replace that subshell with the application process:
This has fewer things to go wrong: Aborting the function's execution can't leave things in the wrong directory, because the parent shell (the one you're interactively using) never changed directories in the first place.
ytdcd () {
(cd ~/youtube/new/ && exec youtube-dl "$@")
}
Second -- use an array when building argument lists, not a string.
If you use set -x to log its execution, you'll see that your original command runs something like:
ytdcd -f 18 'https://youtu.be/one https://youtu.be/two https://youtu.be/three'
See those quotes? That's because $params is a string, passed as a single argument, not an array. (In bash -- or another shell following POSIX rules -- an unquoted string expansion would be string-split and glob-expanded, but zsh doesn't follow POSIX rules).
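A quick way to see that difference for yourself (a small illustration, not part of the original functions):
params="a b c"
printf '<%s>\n' $params      # zsh passes one word: <a b c>
printf '<%s>\n' ${=params}   # ${=...} forces sh-style word splitting: <a> <b> <c>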
The following builds up an array of separate arguments and passes them individually:
ytd() {
local -a params=( )
local i
for i; do
params+=( "https://youtu.be/$i" )
done
ytdcd -f 18 "${params[@]}"
}
Finally, it's come up that you don't actually intend to pass all the URLs to just one youtube-dl instance. To run a separate instance per URL, use:
ytd() {
local i retval=0
for i; do
ytdcd -f 18 "$i" || retval=$?
done
return "$retval"
}
Note here that we're capturing non-success exit status, so as not to hide an error in any ytdcd instance other than the last (which would otherwise occur).
I would declare params as local, so that you are not appending URL after URL...
You can try to add this awesome function to your .zshrc:
funfun() {
local _fun1="$_fun1 fun1!"
_fun2="$_fun2 fun2!"
echo "1 says: $_fun1"
echo "2 says: $_fun2"
}
To observe the thing ;)
EDIT (Explanation):
When you source a shell script, you add it to your current environment; that is why you can run the functions you define in it. When those functions use variables, those variables are by default global and accessible from anywhere in your environment. In this case params is defined globally for the whole length of your shell session. Since you want to allow downloading several videos at once, you keep appending values to this global variable, which grows all the time.
Enforcing local tells zsh to limit the scope of params to the function only.
Another solution is to reset the variable when you call the function.
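A rough sketch of that reset approach, keeping the original string-based function (the array/local version shown earlier is still the more robust fix):
ytd() {
    params=""                      # clear the global before appending
    for i in $*; do
        params="$params https://youtu.be/$i"
    done
    ytdcd -f 18 ${=params}         # ${=...} forces word splitting in zsh
}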
I have an admin server, NodeManager, and 1 managed server, all on the same machine.
I am trying to enter something similar to this to the arguments field in the Server Start tab:
-Dmy.property=%USERPROFILE%\someDir\someJar.jar
But when the managed server is started it throws this exception:
Error opening zip file or JAR manifest missing : %USERPROFILE%\someDir\someJar.jar
It appears that the environment variable is not being translated into its value. It is just passed on to the managed server as plain text.
I tried surrounding the path with double quotes (") but the console validates the input and does not allow this: "Arguments may not contain '"'"
Even editing the config.xml file manually does not work, as the admin server fails to start up after this:
<Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason: [Management:141266]Parsing failure in config.xml: java.lang
.IllegalArgumentException: Arguments may not contain '"'.>
I also tried using %20, to no avail; it is just passed as %20.
I thought that perhaps this had something to do with the spaces in the value of %USERPROFILE% (which is "C:\documents and settings.."), but the same thing happens with other environment variables that point to directories with no spaces.
My question:
Is there any supported way of:
using double quotes? What if I have to reference a folder with spaces in its name?
referencing an environment variable? What if I have to rely on its value for distributed servers where I do not know the variable's value in advance?
Edit based on comments:
Approach 1:
Open setDomainEnv.sh (Linux) or setDomainEnv.cmd (Windows) and search for export SERVER_NAME (Linux) or set SERVER_NAME (Windows). Skip two lines down (i.e., skip the current line and the next line).
On the current line, insert:
customServerList="server1,server2" #this serverList should be taken as input
isCurrServerCustom=$(echo ${customServerList} | tr ',' '\n' | grep ${SERVER_NAME} | wc -l)
if [ $isCurrServerCustom -gt 0 ]; then
    # add customJavaArg
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dmy.property=${USERPROFILE}/someDir/someJar.jar"
fi
Save the setDomainEnv.sh file and restart the servers.
Note that I have only given the logic for Linux; for Windows, similar logic can be used with batch scripting syntax (a rough sketch follows).
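A hedged sketch of the Windows (setDomainEnv.cmd) equivalent; the server names are placeholders and the findstr match is a simple substring check, like the grep above:
rem sketch only: mirror of the Linux snippet for setDomainEnv.cmd
set CUSTOM_SERVER_LIST=server1,server2
echo %CUSTOM_SERVER_LIST% | findstr /i "%SERVER_NAME%" >nul
if not errorlevel 1 (
    set JAVA_OPTIONS=%JAVA_OPTIONS% -Dmy.property=%USERPROFILE%\someDir\someJar.jar
)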
Approach 2:
This assumes the domain is already installed and that the user provides the list of servers to which the JVM argument -Dmy.property needs to be added. It is a Jython script (use wlst.sh to execute it). WLST Reference.
Usage: wlst.sh script_name props_file_location
import os
import sys
from java.io import File
from java.io import FileInputStream
from java.util import Properties
# extract properties from properties file.
print 'Loading input properties...'
propsFile = sys.argv[1]
propInputStream = FileInputStream(propsFile)
configProps = Properties()
configProps.load(propInputStream)
domainDir = configProps.get("domainDir")
# serverList in properties file should be comma separated
serverList = configProps.get("serverList")
# The current machine's logical name, as specified while creating the domain, has to be given. Basically, the machine name that the NM for the current host is configured on.
# This param may not be required as an input if the machine name is configured to be the same as the hostname, in which case the socket module can be imported and socket.gethostname() can be used.
currMachineName = configProps.get("machineName")
jarDir = os.environ["USERPROFILE"]
argToAdd = '-Dmy.property=' + jarDir + File.separator + 'someDir' + File.separator + 'someJar.jar'
readDomain(domainDir)
for srvr in serverList.split(",") :
    cd('/Server/' + srvr)
    listenAddr = get('ListenAddress')
    if listenAddr != currMachineName :
        # Only change current host's servers
        continue
    cd('/Server/' + srvr + '/ServerStart/' + srvr)
    argsOld = get('Arguments')
    if argsOld is not None :
        set('Arguments', argsOld + ' ' + argToAdd)
    else:
        set('Arguments', argToAdd)

updateDomain()
closeDomain()
# now restart all affected servers (i.e serverList)
# one way is to connect to adminserver and shutdown them and then start again
The script has to be run on every host where managed servers are going to be deployed, in order to pick up the host-specific value of "USERPROFILE" in the JVM argument.
BTW, to answer your question in a line: it looks like the JVM arguments ultimately have to be supplied as literal text. WLS does not translate environment variables provided as JVM arguments. It gives the impression of translating them when this is done from startWebLogic.cmd (e.g., using %DOMAIN_HOME%), but it is the shell/cmd interpreter that expands them and then starts the JVM.
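For example (the expanded values below are made up), compare what the script contains with what the JVM is actually started with after cmd.exe expands the variables:
rem what the script contains:
%JAVA_HOME%\bin\java -Dmy.property=%USERPROFILE%\someDir\someJar.jar ...
rem what actually runs, after cmd.exe has expanded the variables:
C:\jdk\bin\java -Dmy.property=C:\Users\me\someDir\someJar.jar ...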