How to use multiple URLs in a Python program - awk

I am using a tool named https://github.com/epinna/tplmap which tests for Template Injection in a site. This is how it tests each URL:
python tplmap.py -u https://leadform.microsoft.com/?lang=FUZZ
The developer has not included an option for multiple URLs.
How can I use awk/sed/cat to feed multiple URLs from a text file?

I have looked at tplmap.py and it seems amenable to altering so that it works as you wish. The thing that needs to change is the function named main, which looks as follows:
def main():
    args = vars(cliparser.options)

    if not args.get('url'):
        cliparser.parser.error('URL is required. Run with -h for help.')

    # Add version
    args['version'] = version

    checks.check_template_injection(Channel(args))
What is happening here: the args dict is created from the command-line arguments, the presence of url is checked, the version is added, and the checking function is called. I would alter that function to:
def main():
    args = vars(cliparser.options)

    if not args.get('url'):
        cliparser.parser.error('URL is required. Run with -h for help.')

    # Add version
    args['version'] = version

    url_filename = args['url']
    with open(url_filename, 'r') as f:
        for line in f:
            args['url'] = line.rstrip()
            checks.check_template_injection(Channel(args))
So now it opens the file given where the URL was previously provided; each line is rstrip-ped to jettison the trailing newline and passed as the url in args, and then the check is run. Note: I did not test that code. Please apply the described change and first try it with a file holding a few addresses to check that it does what you want.
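If you would rather not modify tplmap.py at all, a small wrapper script can invoke the tool once per URL instead. This is an untested sketch: urls.txt and the path to tplmap.py are placeholder names, and read_urls/run_tplmap_per_url are helpers invented here, not part of tplmap.

```python
import subprocess
import sys

def read_urls(url_file):
    """Return the non-empty, whitespace-stripped lines of url_file."""
    with open(url_file) as f:
        return [line.strip() for line in f if line.strip()]

def run_tplmap_per_url(url_file, tplmap_path='tplmap.py'):
    """Invoke tplmap once per URL, equivalent to: python tplmap.py -u <url>."""
    for url in read_urls(url_file):
        subprocess.call([sys.executable, tplmap_path, '-u', url])

if __name__ == '__main__':
    run_tplmap_per_url('urls.txt')
```

This keeps the upstream source untouched, at the cost of re-running tplmap's startup once per URL.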


Meson equivalent of automake's CONFIG_STATUS_DEPENDENCIES?

I have a project whose build options are complicated enough that I have to run several external scripts during the configuration process. If these scripts, or the files that they read, are changed, then configuration needs to be re-run.
Currently the project uses Autotools, and I can express this requirement using the CONFIG_STATUS_DEPENDENCIES variable. I'm experimenting with porting the build process to Meson and I can't find an equivalent. Is there currently an equivalent, or do I need to file a feature request?
For concreteness, a snippet of the meson.build in progress:
pymod = import('python')
python = pymod.find_installation('python3')
svf_script = files('scripts/compute-symver-floor')
svf = run_command(python, svf_script, files('lib'),
                  host_machine.system())
if svf.returncode() == 0
  svf_results = svf.stdout().split('\n')
  SYMVER_FLOOR = svf_results[0].strip()
  SYMVER_FILE = svf_results[2].strip()
else
  error(svf.stderr())
endif
# next line is a fake API expressing the thing I can't figure out how to do
meson.rerun_configuration_if_files_change(svf_script, SYMVER_FILE)
This is what custom_target() is for.
Minimal example
svf_script = files('svf_script.sh')
svf_depends = files('config_data_1', 'config_data_2') # files that svf_script.sh reads
svf = custom_target('svf_config',
                    command: svf_script,
                    depend_files: svf_depends,
                    build_by_default: true,
                    output: 'fake')
This creates a custom target named svf_config. When out of date, it runs the svf_script command. It depends on the files in the svf_depends file object, as well as all the files listed in the command keyword argument (i.e. the script itself).
You can also specify other targets as dependencies using the depends keyword argument.
output is set to 'fake' to stop meson from complaining about a missing output keyword argument. Make sure that there is a file of the same name in the corresponding build directory to stop the target from always being considered out-of-date. Alternatively, if your configure script(s) generate output files, you could list them in this array.

Any way to remove the red/green color of report.html output from robot framework api using python script

I am using the code below to run the suite I have created. It is working fine.
suite.run()
# suite.run(critical='Medium')
# Reading from output XML and creating Report and Log files
writer = ResultWriter('output.xml')
writer.write_results(report='report.html', log='log.html')
I need to remove the green/red color in the report.html output file.
Is there an argument in the function call to do that?
You could pass the reportbackground parameter to the write_results function:
writer = ResultWriter('output.xml')
writer.write_results(report='report.html', log='log.html', reportbackground='#ffffff:#ffffff:#ffffff')

Print file name in jython

Let's say the path is Desktop\Chem\test.png.
I want to print the name of the file without the .png.
This is my code:
def test():
    file = pickAFile()
    shortFile = getShortPath(file)
    end = shortFile.split('\\')[1]
    print "this is a", end
So the desired output would be "this is a test" instead of "this is a test.png".
First off, you should probably be using os.sep instead of an explicit \ (so this will work on Windows, Linux, OS X, etc. instead of just Windows). Better yet, in this case, use os.path.splitext and os.path.basename (as documented in the Jython docs, which appear to exactly match the Python equivalents).
something like:
import os

def test():
    file = pickAFile()
    shortFile = getShortPath(file)  # assume this returns c:\foo\bar\test.png
    basename = os.path.basename(shortFile)  # returns 'test.png'
    end, ext = os.path.splitext(basename)  # returns ('test', '.png')
    print "this is a", end, "which is a", ext, "file"
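The path handling can also be exercised on its own, independently of pickAFile. Here is a small sketch (name_and_ext is a helper name made up for illustration); ntpath is the Windows flavour of os.path and handles backslash separators on any platform, which sidesteps the os.sep concern mentioned above:

```python
import ntpath    # Windows path rules, usable on any platform
import os.path

def name_and_ext(path):
    """Split a Windows-style path into the bare file name and its extension."""
    basename = ntpath.basename(path)   # 'Desktop\\Chem\\test.png' -> 'test.png'
    return os.path.splitext(basename)  # 'test.png' -> ('test', '.png')
```

For example, name_and_ext('Desktop\\Chem\\test.png') yields ('test', '.png').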

rpm spec file skeleton to real spec file

The aim is to have a skeleton spec file, fun.spec.skel, which contains placeholders for Version, Release, and that kind of thing.
For the sake of simplicity I am trying to make a build target which updates those variables, transforming fun.spec.skel into fun.spec, which I can then commit to my GitHub repo. This is done so that rpmbuild -ta fun.tar works nicely and no manual modifications of fun.spec.skel are required (people tend to forget to bump the version in the spec file, but not in the build system).
Assuming the implied question is "How would I do this?", the common answer is to put placeholders in the file like ##VERSION## and then sed the file, or get more complicated and have autotools do it.
We place a version.mk file in our project directories which defines shell variables. Sample content includes:
RELPKG=foopackage
RELFULLVERS=1.0.0
As part of a script which builds the RPM, we can source this file:
#!/bin/bash
. $(pwd)/version.mk
export RELPKG RELFULLVERS
if [ -z "${RELPKG}" ]; then exit 1; fi
if [ -z "${RELFULLVERS}" ]; then exit 1; fi
This leaves us a couple of options to access the values which were set:
We can define macros on the rpmbuild command line:
% rpmbuild -ba --define "relpkg ${RELPKG}" --define "relfullvers ${RELFULLVERS}" foopackage.spec
We can access the environment variables using %{getenv:...} in the spec file itself (though this can make error handling harder):
%define relpkg %{getenv:RELPKG}
%define relfullvers %{getenv:RELFULLVERS}
From here, you simply use the macros in your spec file:
Name: %{relpkg}
Version: %{relfullvers}
We have similar values (provided by environment variables enabled through Jenkins) which provide the build number which plugs into the "Release" tag.
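The same flow can be sketched as a small helper that parses version.mk and assembles the rpmbuild command line; parse_version_mk and rpmbuild_command are names invented here, and the KEY=VALUE format follows the sample above:

```python
def parse_version_mk(text):
    """Parse simple KEY=VALUE lines such as those in version.mk."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('#') and '=' in line:
            key, _, value = line.partition('=')
            values[key.strip()] = value.strip()
    return values

def rpmbuild_command(values, specfile):
    """Build an rpmbuild invocation with a --define flag for each value."""
    cmd = ['rpmbuild', '-ba']
    cmd += ['--define', 'relpkg {0}'.format(values['RELPKG'])]
    cmd += ['--define', 'relfullvers {0}'.format(values['RELFULLVERS'])]
    cmd.append(specfile)
    return cmd
```

This mirrors the shell approach but makes missing keys fail loudly (a KeyError) instead of silently expanding to an empty string.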
I found two ways:
a) use something like
Version: %(./waf version)
where version is a custom waf target
def version_fun(ctx):
    print(VERSION)

class version(Context):
    """Printout the version and only the version"""
    cmd = 'version'
    fun = 'version_fun'
This checks the version at RPM build time.
b) create a target that modifies the specfile itself
from waflib.Context import Context
from waflib import Logs
import re

def bumprpmver_fun(ctx):
    spec = ctx.path.find_node('oregano.spec')
    if spec:
        with open(spec.abspath()) as f:
            data = f.read()
        data = re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*$',
                      r'\g<1>{0}'.format(VERSION),
                      data, flags=re.MULTILINE)
        with open(spec.abspath(), 'w') as f:
            f.write(data)
    else:
        Logs.warn("Didn't find the spec file 'oregano.spec'")

class bumprpmver(Context):
    """Bump version"""
    cmd = 'bumprpmver'
    fun = 'bumprpmver_fun'
The latter is used in my pet project oregano on GitHub.
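The substitution at the heart of bumprpmver_fun can be tried outside waf. A sketch (bump_spec_version is a made-up name; the spec text is a stand-in):

```python
import re

def bump_spec_version(spec_text, new_version):
    """Replace the value of the Version: tag in RPM spec text."""
    # \g<1> keeps the 'Version:' prefix and avoids ambiguity when the
    # new version starts with a digit (e.g. \1 + '1.0' would read as \11).
    return re.sub(r'^(\s*Version\s*:\s*)[\w.]+\s*$',
                  r'\g<1>{0}'.format(new_version),
                  spec_text, flags=re.MULTILINE)
```

Running it on 'Version: 0.83.2' with new_version '0.84.0' rewrites only that line, leaving the rest of the spec untouched.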

lua module is not loading libraries

Background Information:
I'm new to Lua and trying to understand how modules work. I'm trying to load a pre-existing module into a new script and run that script from the command line.
Code:
I have a module file called main.lua that looks something like this:
module(..., package.seeall)

-- Load libraries
require("luasql.postgres")
require("luasql.sqlite3")

local connect_to_db = function()
    if not global_con then
        env = assert(luasql.postgres())
        global_con = assert(env:connect(databasename, databaseUser, databasepassword, databaseserver))
        return true
    else
        return false
    end
end

update_widget = function(parm1, parm2, parm3)
    local connected = connect_to_db()
    if connected then
        -- do something else
        return true
    end
end -- end function
I'm now trying to create a test script for this module. I have the following logic in a separate lua file:
package.path = '/usr/share/myapp/main.lua;'
local my_object = require("main")
print(my_object.update_widget)
Problem:
I'm getting the following error when I try to run my test script:
attempt to call field 'postgres' (a table value)
The line it's failing on is in the connect_to_db() function, where I try to create an environment object:
env = assert (luasql.postgres())
What I've Tried so far:
I've modified the package.path in my test script to match what is being used by main.lua. I did so by executing main.lua the "regular" way - driven by a web app - and dumping out the contents of package.path to a log file.
I've copied the path from the log file and used it as the package.path value in my test script... of course, I had to modify it by adding an additional entry - a path leading to main.lua.
But other than that, the package paths are the same.
I've added print statements inside main.lua to prove that it is getting into the update_widget method... and that it's just failing when trying to create the postgres environment.
I've added the luasql.postgres library in the test script to see if that would help... like so:
package.path = '/var/x/appname/main.lua;'
local pgdb = require("luasql.postgres")
print(pgdb)
myenv = assert(luasql.postgres()) -- fails
The test script also dies trying to create this object... I'm going to keep hunting around. It must be a problem with the paths... but I can't see the difference between the path that's created when loaded by the web app and what I have in the test script.
I'm going to use a DIFF tool to compare for now.
Any suggestions would be appreciated.
Thanks.
EDIT 1
I definitely think it's the path, although I can't see just yet what's wrong with here.
I created yet another test script (let's call it test3)... but this time, I didn't explicitly set the path by assigning values to package.path.
I just tried to include the luasql.postgres package and use it the way the original test script does... and it works!
So here's code that works:
luasql = require "luasql.postgres"
local myenv = assert (luasql.postgres())
print(myenv)
But this fails:
package.path = package.path .. ';/usr/share/myapp/main.lua'
luasql = require "luasql.postgres"
myenv = assert (luasql.postgres())
print(myenv)
Also to greatwolf's point, I tried from interactive mode in lua... and my code works just fine.
Lua 5.1.5 Copyright (C) 1994-2012 Lua.org, PUC-Rio
> pgdb = require("luasql.postgres")
> print(pgdb)
table: 0x176cb228
> myenv=assert(luasql.postgres())
> print(myenv)
PostgreSQL environment (0x176c9d5c)
>
So... here's the package.path variable from interactive mode:
> print(package.path)
./?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/lib/lua/5.1/?.lua;/usr/local/lib/lua/5.1/?/init.lua;/usr/share/lua/5.1/?.lua;/usr/share/lua/5.1/?/init.lua
>
And here's the path from my original test script where it fails.
/usr/share/myapp/main.lua;./?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/lib/lua/5.1/?.lua;/usr/local/lib/lua/5.1/?/init.lua;/usr/share/lua/5.1/?.lua;/usr/share/lua/5.1/?/init.lua
It was a problem with the path. I'm still not sure exactly what was wrong, but I changed my logic in the test script from:
package.path = '/usr/share/myapp/main.lua;' -- resetting package path manually
package.path = package.path .. './?.lua;/usr/local/share/lua/5.1/?.lua;/usr/local/share/lua/5.1/?/init.lua;/usr/local/lib/lua/5.1/?.lua;/usr/local/lib/lua/5.1/?/init.lua;/usr/share/lua/5.1/?.lua;/usr/share/lua/5.1/?/init.lua'
to
package.path = package.path .. '/usr/share/myapp/main.lua' -- append to default path.
Now it finds the lua postgres package and lets me call the functions too
When Lua executes require "luasql.postgres", it tries to find postgres.lua in a luasql folder anywhere in its LUA_PATH, loads it, and executes it, thereby putting any non-local variables (including functions) appearing at module level of postgres.lua into the global namespace. The main.lua you show requires a module and then uses it as a function: luasql.postgres(). This will only work if some trick is used. For instance, if the loaded module returns a function, you could use
fn = require 'luasql.postgres'
fn()
to execute the function returned.
Also, unlike Python, where you can import items from within a module, in Lua you can't. So it's not as if postgres could be a function or callable table.
If you replace main.lua with the following,
require 'luasql.postgres'
luasql.postgres()
and run your test script, or run main.lua directly, you should get an error. If you don't, the module is definitely doing something special to support this use.
If you change your main.lua as above and it doesn't work, then neither can you do
env = assert (luasql.postgres())
but you could do any of these, depending on what postgres.lua does:
env = assert (luasql.postgres)
env = assert (someFunctionDefinedInPostgresModule)
env = assert (someFunctionDefinedInPostgresModule())
env = assert (luasql.postgres.someFunction)
env = assert (luasql.postgres.someFunction())