Proper way to define a main() script in Deno

When writing a Deno script, sometimes it can be executed from the command line using deno run, but it may also contain functions that another script consumes through an import.
The equivalent in Python would be to put at the bottom of the script:
if __name__ == '__main__':
    main(sys.argv[1:])
How should this be done in Deno?

Deno exposes a boolean property, import.meta.main, which is true when the current module was invoked as the program entry point. Here is an example of how it should be used in a script:
if (import.meta.main) main()
// bottom of file
Note: the import namespace is not available in the Deno REPL as of v1.0.0.

What is the structure of the executable transformation script for transform_script of GCSFileTransformOperator?

I'm currently working on a task in Airflow that requires pre-processing a large CSV file using GCSFileTransformOperator. I've read the documentation on the class and its implementation, but I don't quite understand how the executable transformation script passed as transform_script should be structured.
For example, is the following script structure correct? If so, does that mean that with GCSFileTransformOperator, Airflow calls the executable transformation script and passes arguments from the command line?
# Import the required modules
import sys
# (imports of the pre-processing modules would go here)

# Define the function that takes the source_file and destination_file params
def preprocess_file(source_file, destination_file):
    # (1) code that processes the source_file
    # (2) code that then writes to destination_file
    ...

# Extract source_file and destination_file from the list of command-line arguments
source_file = sys.argv[1]
destination_file = sys.argv[2]
preprocess_file(source_file, destination_file)
GCSFileTransformOperator passes the script to subprocess.Popen, so your script will work, but you will need to add a shebang such as #!/usr/bin/python (or wherever Python is on your path in Airflow).
Your arguments are correct and the format of your script can be anything you want. Airflow passes in the path of the downloaded file, and a temporary new file:
cmd = (
    [self.transform_script]
    if isinstance(self.transform_script, str)
    else self.transform_script
)
cmd += [source_file.name, destination_file.name]
with subprocess.Popen(
    args=cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True
) as process:
    # ...
    process.wait()
(you can see the source here)
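Putting the answer together, a complete transform script could look like the sketch below. The uppercasing step is just a placeholder for real pre-processing; the argument order (source first, destination second) is what the operator code above passes in.

```python
#!/usr/bin/python
# Hypothetical transform script: Airflow invokes it as
#   <transform_script> <source_file> <destination_file>
import sys

def preprocess_file(source_file, destination_file):
    # Placeholder transformation: copy each line, uppercased.
    # Real pre-processing of the CSV would go here.
    with open(source_file) as src, open(destination_file, "w") as dst:
        for line in src:
            dst.write(line.upper())

if __name__ == "__main__" and len(sys.argv) >= 3:
    preprocess_file(sys.argv[1], sys.argv[2])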

New to Jython: need help extracting data from a file

I am new to scripting and programming in general. I am trying to run the WebSphere command-line tool, wsadmin, and it keeps failing. I am looking for answers to 2 questions about the following code:
import sys
import os
import re

execfile('wsadminlib.py')

appName = sys.argv[1]
configLocation = "/location/of/config/"
config_prop = open(configLocation + appName + "-Config.csv", "r")
for line in config_prop:
    line = line.split(",")
    print line
I launch the script through wsadmin from the command line as follows:
./wsadmin.sh -lang jython -f deploy.sh param1
Questions:
1. It fails on the for line in config_prop line with AttributeError: __getitem__. When I run this through Python on the same machine the code works; why does it fail when I run it through the wsadmin tool?
2. Is there another way to extract data from a comma-delimited txt or csv file that is only one line long, setting a variable for each word?
Update: the issue has been resolved. The Jython bundled with wsadmin is version 2.1, and the syntax I was using requires 2.2 or later.
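For question 2, the splitting can be isolated into a small helper. This is a sketch in modern Python syntax (the Jython 2.1 bundled with wsadmin would need a print statement instead of print()); the config line content is made up for illustration:

```python
def parse_config_line(line):
    # Split a comma-delimited line and strip whitespace/newlines from each field
    return [field.strip() for field in line.split(",")]

# Hypothetical one-line config file content: "appName,host,port"
app, host, port = parse_config_line("myApp, server01, 9080\n")
```

Unpacking the result into named variables gives you one variable per word, as asked.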

Generate data file at install time

My python package depends on a static data file which is automatically generated from a smaller seed file using a function that is part of the package.
It makes sense to me to do this generation at the time of running setup.py install. Is there a standard way in setup() to describe "run this function before installing this package's additional files" (the options in the docs are all static)? If not, where should I place the call to that function?
Best done in two steps using the cmdclass mechanism:
add a custom command to generate the data file
override build_py to call that before proceeding
from distutils.cmd import Command
from setuptools import setup
from setuptools.command.install import install

class GenerateDataFileCommand(Command):
    description = 'generate data file'
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        pass  # Generate the data file here...

class InstallCommand(install):
    def run(self):
        self.run_command('generate_data_file')
        return super().run()

setup(
    cmdclass={
        'generate_data_file': GenerateDataFileCommand,
        'install': InstallCommand,
    },
    # ...
)
This way you can call python setup.py generate_data_file to generate the data file as a stand-alone step, but the usual setup procedure (python setup.py install) will also ensure it's called.
(However, I'd recommend including the built file in the distribution archive, so end users don't have to build it themselves – that is, override build_py (class setuptools.command.build_py.build_py) instead of install.)
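A sketch of that recommended variant, hooking the generation into build_py instead; the generation step and the seed/data file names are made up for illustration:

```python
from setuptools.command.build_py import build_py

def generate_data_file(seed_path="seed.txt", data_path="data.txt"):
    # Hypothetical generation step: expand the seed file into the data file
    with open(seed_path) as seed, open(data_path, "w") as out:
        out.write(seed.read().upper())

class BuildPyCommand(build_py):
    """build_py that generates the data file before the normal build."""
    def run(self):
        generate_data_file()
        super().run()

# In setup.py:
# setup(cmdclass={'build_py': BuildPyCommand}, ...)
```

Because build_py runs when building both sdists and wheels, the generated file is included in the distribution archives.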

How to test "main()" routine from "go test"?

I want to lock down the user-facing command-line API of my Go program by writing a few anti-regression tests that focus on testing my binary as a whole. Testing the "binary as a whole" means that go test should:
be able to feed STDIN to my binary
be able to check that my binary produces correct STDOUT
be able to ensure that error cases are handled properly by the binary
However, it is not obvious to me what the best practice for this is in Go. If there is a good go test example, could you point me to it?
P.S. in the past I have been using autotools. And I am looking for something similar to AT_CHECK, for example:
AT_CHECK([echo "XXX" | my_binary -e arg1 -f arg2], [1], [],
[-f and -e can't be used together])
Just make your main() a single line:
import "myapp"

func main() {
    myapp.Start()
}
And test myapp package properly.
EDIT:
For example, popular etcd conf server uses this technique: https://github.com/coreos/etcd/blob/master/main.go
I think you're trying too hard: I just tried the following
func TestMainProgram(t *testing.T) {
    os.Args = []string{"sherlock",
        "--debug",
        "--add", "zero",
        "--ruleset", "../scripts/ceph-log-filters/ceph.rules",
        "../scripts/ceph-log-filters/ceph.log"}
    main()
}
and it worked fine. I can make a normal tabular test or a goConvey BDD from it pretty easily...
If you really want to do this type of testing in Go, you can use the os/exec package (https://golang.org/pkg/os/exec/) to execute your binary and test it as a whole - for example, by executing a go run main.go command. Essentially it is the equivalent of a shell script written in Go. You can use StdinPipe (https://golang.org/pkg/os/exec/#Cmd.StdinPipe) and StdoutPipe/StderrPipe (https://golang.org/pkg/os/exec/#Cmd.StdoutPipe and https://golang.org/pkg/os/exec/#Cmd.StderrPipe) to feed the desired input and verify the output. The examples on the package documentation page should give you a good starting point.
However, testing compiled programs goes beyond unit testing, so it is worth considering other tools (not necessarily Go-based) that are more typically used for functional / acceptance testing, such as Cucumber (http://cucumber.io).
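The black-box pattern itself is language-agnostic: drive the compiled binary as a subprocess, feed STDIN, and assert on the exit code and STDOUT. Here is a sketch in Python, using cat as a stand-in for the binary under test:

```python
import subprocess

def run_binary(cmd, stdin_text=""):
    # Execute the binary as a whole, capturing exit code, STDOUT and STDERR
    proc = subprocess.run(cmd, input=stdin_text,
                          capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

# Stand-in for: echo "XXX" | my_binary -e arg1 -f arg2
code, out, err = run_binary(["cat"], stdin_text="XXX\n")
```

Each AT_CHECK-style assertion then becomes one call to run_binary plus checks on the returned triple.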

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release and that kind of thing.
For the sake of simplicity I am trying to make a build target which updates those variables so that it transforms fun.spec.skel into fun.spec, which I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modification of fun.spec.skel is required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
We place a version.mk file in our project directories which define environment variables. Sample content includes:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. $(pwd)/version.mk
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though errors can be harder to deal with this way):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (provided by environment variables enabled through Jenkins) which provide the build number which plugs into the "Release" tag.
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
def version_fun(ctx):
    print(VERSION)

class version(Context):
    """Print out the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This checks the version at RPM build time.
b) create a target that modifies the specfile itself
from waflib.Context import Context
from waflib import Logs
import re

def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    data = None
    with open(spec.abspath()) as f:
        data = f.read()
    if data:
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*',
                      r'\1 {0}\n'.format(VERSION), data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find that spec file: '{0}'".format(spec.abspath()))

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.
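The core of approach (b) is just the regular-expression substitution; stripped of the waf plumbing, it can be exercised on its own (the sample spec text below is made up):

```python
import re

def bump_spec_version(spec_text, new_version):
    # Rewrite the value of the "Version:" tag, keeping the tag prefix intact
    return re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*$',
                  r'\g<1>{0}'.format(new_version),
                  spec_text, flags=re.MULTILINE)

spec = "Name: fun\nVersion: 1.0.0\nRelease: 1\n"
bumped = bump_spec_version(spec, "1.1.0")
```

Note the \g<1> group reference, which avoids ambiguity when the new version starts with a digit.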