Is there a way to trigger the integration of an MBO through MIF using an automation script? Here's the use case:
A child object with no application to manage it is sent through integration.
The integration fails at the destination and needs to be resent.
An admin opens the automation script in the Automation Scripts application, updates it with the record ID to resend, and clicks our custom "Execute Script Manually" action, which runs the script without the need for a launch point.
At a high level the script would look something like this:
from psdi.server import MXServer
server = MXServer.getMXServer()
adminuser = server.getUserInfo("MAXADMIN")
matUseTransSet = server.getMboSet("MATUSETRANS", adminuser)
matUseTransSet.setWhere("MATUSETRANSID = 123456")
matUseTransSet.reset()
matUseTransMbo = matUseTransSet.moveFirst()
while matUseTransMbo:
    # Send integration here
    matUseTransMbo = matUseTransSet.moveNext()
Thanks!
Perhaps something along the lines of this:
from psdi.server import MXServer
server = MXServer.getMXServer()
adminuser = server.getUserInfo("MAXADMIN")
extSysName = 'SYSNAME'
ifaceName = 'iFaceName'
whereClause = "PRNUM = '12345'"
maxRecCount = 1
# Send integration here
server.lookup("MIC").exportData(ifaceName, extSysName, whereClause, adminuser, maxRecCount)
We are trying to take a backup of Splunk dashboard and report source code for versioning. We are on an enterprise implementation where our REST calls are restricted. We can create and access dashboards and reports via the Splunk UI, but we would like to know if we can automatically back them up and store them in our versioning system.
Automated versioning will be quite a challenge without REST access. I assume you don't have CLI access or you wouldn't be asking.
There are apps available to do this for you. See https://splunkbase.splunk.com/app/4355/ and https://splunkbase.splunk.com/app/4182/ .
There's also a .conf presentation on the topic. See https://conf.splunk.com/files/2019/slides/FN1315.pdf
For the time being, I have written a Python script to read/intercept the browser's Inspect -> Performance -> Network responses for Splunk's reports URL, which lists the complete set of reports under my app along with their full details.
from time import sleep
import json
from selenium import webdriver
from selenium.webdriver import DesiredCapabilities
# make chrome log requests
capabilities = DesiredCapabilities.CHROME
capabilities["goog:loggingPrefs"] = {"performance": "ALL"}  # "goog:loggingPrefs" replaces "loggingPrefs" on newer chromedriver versions
driver = webdriver.Chrome(
    desired_capabilities=capabilities, executable_path="/Users/username/Downloads/chromedriver_92"
)
spl_reports_url="https://prod.aws-cloud-splunk.com/en-US/app/sre_app/reports"
driver.get(spl_reports_url)
sleep(5) # wait for the requests to take place
# extract requests from logs
logs_raw = driver.get_log("performance")
logs = [json.loads(lr["message"])["message"] for lr in logs_raw]
# create directory to save all reports as .json files
from pathlib import Path
main_bkp_folder='splunk_prod_reports'
# Create a main directory in which all dashboards will be downloaded to
Path(f"./{main_bkp_folder}").mkdir(parents=True, exist_ok=True)
# Function to write json content to file
def write_json_to_file(filenamewithpath, json_source):
    with open(filenamewithpath, 'w') as jsonfileobj:
        json_string = json.dumps(json_source, indent=4)
        jsonfileobj.write(json_string)

def log_filter(log_):
    return (
        # is an actual response
        log_["method"] == "Network.responseReceived"
        # and json
        and "json" in log_["params"]["response"]["mimeType"]
    )
counter = 0
# extract Network entries from each log event
for log in filter(log_filter, logs):
    # print(log)
    request_id = log["params"]["requestId"]
    resp_url = log["params"]["response"]["url"]
    # only keep responses from the saved-searches (reports) endpoint
    if "searches" in resp_url:
        print(f"Caught {resp_url}")
        counter = counter + 1
        nw_resp_body = json.loads(driver.execute_cdp_cmd("Network.getResponseBody", {"requestId": request_id})['body'])
        for each_report in nw_resp_body["entry"]:
            report_name = each_report['name']
            print(f"Extracting report source for {report_name}")
            report_filename = f"./{main_bkp_folder}/{report_name.replace(' ','_')}.json"
            write_json_to_file(report_filename, each_report)
            print("Completed.")
print("All reports source code exported successfully.")
The above code is far from a production version; it still needs error handling, logging, and modularization.
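As a minimal sketch of the error handling and logging mentioned above (reusing the driver, json, and request_id names from the script), the Network.getResponseBody call can be wrapped so a single failed request doesn't abort the whole export:
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("splunk_report_backup")

def get_response_body(driver, request_id):
    """Return the parsed JSON body for a request, or None if it cannot be fetched."""
    try:
        raw = driver.execute_cdp_cmd("Network.getResponseBody", {"requestId": request_id})
        return json.loads(raw["body"])
    except Exception as exc:  # body may have been evicted or may not be valid JSON
        logger.warning("Could not fetch body for request %s: %s", request_id, exc)
        return None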
Also note that the above script drives the browser UI; in production the script will run in a Docker image with ChromeOptions set to headless mode.
Instead of
driver = webdriver.Chrome(
    desired_capabilities=capabilities, executable_path="/Users/username/Downloads/chromedriver_92"
)
Use:
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--window-size=1420,2080')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
driver = webdriver.Chrome(
    desired_capabilities=capabilities, options=chrome_options, executable_path="/Users/username/Downloads/chromedriver_92"
)
Here you can customize as per your need.
I use Chrome DevTools to debug JavaScript code, and I need to launch it programmatically from my plugin.
(If I understood the slightly brief question.)
You want to make an ILaunchConfigurationWorkingCopy, set the attributes on it, optionally save it, and then launch it.
The Launch Manager is very useful as you can do stuff with launches using it.
Here is a simple example:
ILaunchManager manager = DebugPlugin.getDefault().getLaunchManager();
ILaunchConfigurationType launchType = manager.getLaunchConfigurationType("type id (from plugin.xml)");
ILaunchConfigurationWorkingCopy wc = launchType.newInstance(null, manager.generateLaunchConfigurationName("Name Here"));
wc.setAttributes(launchAttributes); // launchAttributes: a Map of the attributes your launch type expects
ILaunchConfiguration lc = wc.doSave();
ILaunch launch = lc.launch(ILaunchManager.DEBUG_MODE, new NullProgressMonitor());
Please help me by providing a sample automation script (Jython) for retrieving data from a database table; the automation script runs in Maximo/Tivoli 7.5.
I'm new to this, so any details help.
Hopefully this helps: here is an example of how to load database information into Maximo via RMI:
https://www.ibm.com/developerworks/community/blogs/a9ba1efe-b731-4317-9724-a181d6155e3a/entry/import_data_from_db_into_maximo_by_using_rmi_with_jython?lang=en
You can review the Automation Scripting examples on the IBM Wiki:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20Maximo%20Asset%20Management/page/Customizing%20with%20automation%20scripts
This code gets data from the table directly:
from psdi.security import UserInfo
from psdi.server import MXServer
maximo = MXServer.getMXServer()
ui = maximo.getSystemUserInfo()
sessionSet = maximo.getMboSet("MAXSESSION", ui)
sessionSet.setWhere("issystem = '1'")
s = sessionSet.getMbo(0)
servername = s.getString("SERVERNAME")
print servername
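If you need more than the first record, the same moveFirst/moveNext pattern shown in the MIF question above can be used to walk the whole set; a sketch (adjust the object name and where clause to the table you need):
from psdi.server import MXServer

maximo = MXServer.getMXServer()
ui = maximo.getSystemUserInfo()

sessionSet = maximo.getMboSet("MAXSESSION", ui)
sessionSet.setWhere("issystem = '1'")

# Walk every Mbo in the set instead of reading only the first one
session = sessionSet.moveFirst()
while session:
    print session.getString("SERVERNAME")
    session = sessionSet.moveNext()

# Release the set when finished
sessionSet.close()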
I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.
Can someone provide more details on the following:
1) what sys.path's do I need to set
2) what modules do I need to import before importing the "account" module. Currently when I import the "account" module I get the following error:
AssertionError: The report "report.custom" already exists!
3) What is the proper way to get my database cursor. In the code below I am simply calling psycopg2 directly to get a cursor.
If this approach cannot work, can anyone suggest an alternative approach, other than writing XML files to load the data from the OpenERP application itself? This process needs to run outside of the standard OpenERP application.
PSEUDO CODE:
import sys
import psycopg2
# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")
# import OpenERP
import openerp
# import the account addon modules that contains the tables
# to be populated.
import account
# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"
# get a db connection
conn = psycopg2.connect(conn_string2)
# conn.cursor() will return a cursor object
cursor = conn.cursor()
# and finally use the ORM to insert data into table.
If you want to do it via web services, then have a look at the OpenERP XML-RPC web services.
Example code to work with the OpenERP web services:
import xmlrpclib
username = 'admin' #the user
pwd = 'admin' #the password of the user
dbname = 'test' #the database
# OpenERP common login service proxy object
# (replace localhost with the address of the server)
sock_common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)
# OpenERP Object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
partner = {
    'name': 'Fabien Pinckaers',
    'lang': 'fr_FR',
}
#calling remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)
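For the original question about loading sales taxes, the same execute/create pattern applies to the account.tax model; the field names below are only illustrative and should be checked against your account.tax schema:
# Hypothetical sales tax record -- verify the field names against your account.tax model
tax = {
    'name': 'Sales Tax 8%',
    'amount': 0.08,
    'type': 'percent',
}
tax_id = sock.execute(dbname, uid, pwd, 'account.tax', 'create', tax)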
Alternatively, you can use the OpenERP client library.
Example code with the client lib:
import openerplib
connection = openerplib.get_connection(hostname="localhost", database="test",
                                       login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]
Both ways work: with the client lib the code is shorter and easier to understand, while the xmlrpc proxy exposes the lower-level calls that you handle yourself.
Hope this helps.
In my view, you should go for the XML-RPC or NETSVC services provided by OpenERP for such needs.
You don't need to import the account module of OpenERP; it is possible that other modules have inherited the account.tax object and altered its behaviour to fit your business needs.
If you feed data by calling those methods manually without using the OpenERP web service, you may get undesired results, unexpected failures, or an inconsistent database state.
You can use ERPpeek to browse data, but I'm not sure whether you can really upload data to the DB with it; personally I use/prefer XML-RPC.
Why don't you use the XML-RPC interface of OpenERP? It doesn't require importing account or openerp, and you still get all the ORM functionality.
You can use a Python library to access the OpenERP server over the XML-RPC service.
Please check https://github.com/OpenERP/openerp-client-lib
It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just import psycopg2 and:
conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
# Use parameterized queries (%s placeholders) rather than string interpolation
cur.execute('select * from table where id = %s', (table_id,))
cur.execute('insert into table(column1, column2) values (%s, %s)', (value1, value2))
conn.commit()
cur.close()
conn.close()
Why do you want to do it like that? You should create a localization module and define the data in XML files; that is the standard way to handle this in OpenERP.
Which country do you want to insert sales taxes for? Please explain more.
from openerp.modules.registry import RegistryManager

registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    # userid and listids are placeholders for an existing user id and the record ids to read
    user = registry.get('res.users').browse(cr, userid, listids)
    print user
We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
import hudson.model.*
// Run from a system Groovy / Groovy Postbuild context
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it
            val += value
        }
    }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)
jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script
import hudson.model.*
import jenkins.model.*
def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
    job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
    println "Scheduling job name is: ${job.name}"
    job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"), new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")]))
}
If you don't need to pass in properties from one build to the other just take the ParametersAction out.
The build you schedule will have the same "cause" as your initial build; that's a nice way to pass along the "Changes". If you don't need this, just don't pass new Cause.UpstreamCause(build) in the call.
Since you are already starting the downstream jobs dynamically, how about waiting until they are done and copying the test result files to the parent workspace (I would archive them on the downstream jobs and then just download the build artifacts)? You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. Then, in a post-build step of the parent job, configure the appropriate test plugin.
Using the Groovy Postbuild Plugin, maybe something like this will work (haven't tried it)
def job = manager.hudson.getItem(jobname) // jobname is a placeholder for the downstream job's name
manager.hudson.queue.schedule(job, 0)
I am actually surprised that the aggregated results are not picked up if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job). In my understanding, Jenkins simply looks at md5sums to relate jobs for "Aggregate downstream test results", and triggering via the CLI should not affect aggregating results. Somehow there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using "Execute system groovy script":
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)