I am trying to programmatically retrieve information from a database (BRENDA) using Zeep.
The following is my code:
import zeep
import hashlib
wsdl = "https://www.brenda-enzymes.org/soap/brenda.wsdl"
password = hashlib.sha256("xx".encode('utf-8')).hexdigest()
parameters = "xxx," + password + ",ecNumber*{}#organism*{}#".format("2.7.1.2", "Homo sapiens")
client = zeep.Client(wsdl=wsdl)
print(client)
km_string = client.getKmValue(parameters)
However, I get the following error:
AttributeError: 'Client' object has no attribute 'getKmValue'
Could someone help me with this?
The above code works fine when using the SOAPpy library in Python 2. However, I couldn't successfully install SOAPpy in Python 3, so I tried Zeep instead.
The sample code that shows the SOAP implementation is available here.
We fixed the web service. It should work now. Please have a look at the SOAP documentation on our website.
Not the resolution, but some hints.
1) With Zeep you need to put .service between the client and the name of the method; the correct syntax is client.service.getKmValue(parameters) (take a look at the documentation).
Anyway, for Zeep, getKmValue doesn't exist (but it exists in the WSDL schema, and SoapUI sees it).
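To illustrate hint 1, here is a minimal sketch of that pattern (whether the operation is actually exposed depends on how Zeep parses this particular WSDL):
import zeep
# operations are exposed on client.service, not on the client object itself
client = zeep.Client(wsdl="https://www.brenda-enzymes.org/soap/brenda.wsdl")
result = client.service.getKmValue(parameters)  # note the .service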
You can also try py-suds, but for some reason I obtain a 403 when calling the WSDL.
from suds.client import Client
import hashlib
client = Client("https://www.brenda-enzymes.org/soap/brenda.wsdl")
I have been trying to update our test management API once the Katalon test has completed running.
We are using Adaptavist Test Management in JIRA. I am not trying to update the Katalon JIRA add-on by the way.
The API call, for Adaptavist, needs to be a POST and have a body message of items like the example {"projectKey": "FVS", "testCaseKey": "FVS-T1", "status": "Pass", "environment": "DEV"}
I would eventually replace these items with the Katalon test result variables as appropriate.
I have created a Service Call in the Object Repository which deals with auth settings, this works fine if I test the request in the editor with these sample values.
When I come to add the script in the Test Case itself I am struggling to get it to work, let alone replace the variables with the actual values.
I currently have this:
//run test
WebUI.openBrowser('')
WebUI.navigateToUrl(GlobalVariable.MainURL)
WebUI.verifyElementClickable(findTestObject('img_img-responsive_1'))
WebUI.verifyElementClickable(findTestObject('img_img-responsive_2'))
WebUI.verifyElementClickable(findTestObject('img_img-responsive_3'))
WebUI.closeBrowser()
//update JIRA
RequestObject getJIRAUpdateObject = (RequestObject) findTestObject('Web Service Calls/Update JIRA')
String vsRequestBody = '{"projectKey": "FVS", "testCaseKey": "FVS-T1", "status": "Pass", "environment": "DEV"}'
body = getJIRAUpdateObject.setHttpBody(vsRequestBody)
WS.sendRequest(getJIRAUpdateObject)
I also have the following additional imports
import com.kms.katalon.core.testobject.ResponseObject
import com.kms.katalon.core.testobject.RequestObject
Now in the script editor I am told that setHttpBody is deprecated in Katalon version 5.4+ (I am using 5.4.1) and that I should use setBodyContent(HttpBodyContent) instead, but when I look at the API documentation for this, I cannot work out the syntax for how I am supposed to use it.
Does anyone know how I should change the code, or have examples of how to change the above code to use this new method?
Any help is much appreciated.
As answered on the Katalon forum:
In your case the body content is a text body, so the suitable implementation should be:
import com.kms.katalon.core.testobject.impl.HttpTextBodyContent //for text in body
import com.kms.katalon.core.testobject.impl.HttpFileBodyContent //for file in body
import com.kms.katalon.core.testobject.impl.HttpFormDataBodyContent //for form data body
import com.kms.katalon.core.testobject.impl.HttpUrlEncodedBodyContent //for URL encoded text body
setBodyContent(new HttpTextBodyContent(your_text))
(API docs for HttpBodyContent implementation.)
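Applied to the script in the question, the change would look roughly like this (an untested sketch reusing the names from your test case):
RequestObject getJIRAUpdateObject = (RequestObject) findTestObject('Web Service Calls/Update JIRA')
String vsRequestBody = '{"projectKey": "FVS", "testCaseKey": "FVS-T1", "status": "Pass", "environment": "DEV"}'
// setBodyContent replaces the deprecated setHttpBody
getJIRAUpdateObject.setBodyContent(new HttpTextBodyContent(vsRequestBody))
WS.sendRequest(getJIRAUpdateObject)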
I am trying to do something which is very simple in other data services: make a relatively simple SQL query and return it as a dataframe in Python. I am on Windows 10 and using Python 2.7 (specifically Canopy 1.7.4).
Typically this would be done with pandas.read_sql_query, but due to some specifics of BigQuery, a different method is required: pandas.io.gbq.read_gbq
This method works fine unless the query returns a large result. If you run a big query on BigQuery, you get the error:
GenericGBQException: Reason: responseTooLarge, Message: Response too large to return. Consider setting allowLargeResults to true in your job configuration. For more information, see https://cloud.google.com/bigquery/troubleshooting-errors
This was asked and answered before in this ticket, but neither of the solutions is relevant for my case:
Python BigQuery allowLargeResults with pandas.io.gbq
One solution is for Python 3, so it is a nonstarter. The other gives an error because I am unable to set my credentials as an environment variable in Windows:
ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I was able to download the JSON credentials file and I have set it as an environment variable in the few ways I know how, but I still get the above error. Do I need to load it in some way in Python? It seems to be looking for the file but is unable to find it correctly. Is there a special way to set it as an environment variable in this case?
You can do it in Python 2.7 by changing the default dialect from legacy to standard in the pd.read_gbq function.
pd.read_gbq(query, 'my-super-project', dialect='standard')
Indeed, you can read in the BigQuery documentation for the allowLargeResults parameter:
allowLargeResults: For standard SQL queries, this flag is ignored and large results are always allowed.
I have found two ways of directly importing the JSON credentials file, both based on the original answer in Python BigQuery allowLargeResults with pandas.io.gbq:
1) Credit to Tim Swast
First
pip install google-api-python-client
pip install google-auth
pip install google-cloud-core
then replace
credentials = GoogleCredentials.get_application_default()
in create_service() with:
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file('path/file.json')
2) Set the environment variable manually in the code, like:
import os,os.path
os.environ['GOOGLE_APPLICATION_CREDENTIALS']=os.path.expanduser('path/file.json')
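For example, a minimal sketch combining this with the standard-dialect fix from the other answer (the path and project id are placeholders):
import os, os.path
import pandas as pd
# point the Google libraries at the downloaded service-account JSON file
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.expanduser('path/file.json')
# the standard dialect avoids the responseTooLarge error for large results
df = pd.read_gbq("<insert your big query here>", project_id='<your project id>', dialect='standard')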
I prefer method 2 since it does not require new modules to be installed and is also closer to the intended use of the JSON credentials.
Note:
You must create a destinationTable and add the information to run_query()
Here is code that works fully within Python 2.7 on Windows:
import pandas as pd
my_qry="<insert your big query here>"
### Put the data from your service-account credentials file here - all fields are available from there ###
my_file="""{
"type": "service_account",
"project_id": "cb4recs",
"private_key_id": "<id>",
"private_key": "<your private key>\n",
"client_email": "<email>",
"client_id": "<id>",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "<x509 url>"
}"""
df = pd.read_gbq(my_qry, project_id='<your project id>', private_key=my_file)
That's it :)
I'm trying to test my Luigi pipelines inside a Vagrant machine, using FakeS3 to simulate my S3 endpoints. For boto to be able to interact with FakeS3, the connection must be set up with the OrdinaryCallingFormat, as in:
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
conn = S3Connection('XXX', 'XXX', is_secure=False,
port=4567, host='localhost',
calling_format=OrdinaryCallingFormat())
but when using Luigi, this connection is buried in the s3 module. I was able to pass most of the options by modifying my luigi.cfg and adding an s3 section, as in:
[s3]
host=127.0.0.1
port=4567
aws_access_key_id=XXX
aws_secret_access_key=XXXXXX
is_secure=0
but I don't know how to pass the required object for the calling_format.
Now I'm stuck and don't know how to proceed. Options I can think of:
Figure out how to pass the OrdinaryCallingFormat to S3Connection through luigi.cfg
Figure out how to force boto to always use this calling format in my Vagrant machine, by setting an unknown option to me either in .aws/config or boto.cfg
Make FakeS3 accept the default calling_format used by boto, which happens to be SubdomainCallingFormat (whatever that means).
Any ideas about how to fix this?
Can you not pass it into the constructor as kwargs for the S3Client?
client = S3Client(aws_access_key, aws_secret_key,
                  {'calling_format': OrdinaryCallingFormat()})
target = S3Target('s3://somebucket/test', client=client)
I did not encounter any problem when using boto3 to connect to FakeS3.
import boto3
s3 = boto3.client(
    "s3",
    region_name="fakes3",
    use_ssl=False,
    aws_access_key_id="",
    aws_secret_access_key="",
    endpoint_url="http://localhost:4567"
)
No special calling format is required.
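As a quick smoke test against FakeS3 (the bucket name here is made up), the client can then be used like any other S3 endpoint:
# create a bucket and an object, then confirm the object is listed
s3.create_bucket(Bucket="test-bucket")
s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hello")
print(s3.list_objects_v2(Bucket="test-bucket").get("KeyCount"))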
Perhaps I am wrong that you really need OrdinaryCallingFormat. If my code doesn't work, please go through the GitHub issue on boto3 support:
https://github.com/boto/boto3/issues/334
You can set it with the calling_format parameter. Here is a configuration example for fake-s3:
[s3]
aws_access_key_id=123
aws_secret_access_key=abc
host=fake-s3
port=4569
is_secure=0
calling_format=boto.s3.connection.OrdinaryCallingFormat
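With that in place, Luigi should build its boto connection from the [s3] section automatically; a minimal check might look like this (the import path assumes an older Luigi, where the S3 code lives in luigi.s3):
from luigi.s3 import S3Target
# the underlying S3Client is configured from luigi.cfg
target = S3Target('s3://somebucket/test')
with target.open('w') as f:
    f.write('hello fake-s3\n')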
Currently I'm using Magento 1.9.01 and PHP 5.3.28. In ASP.NET, I'm trying to retrieve the catalog tree through the SOAP API using the following code:
var magentoService = new MagentoService.Mage_Api_Model_Server_Wsi_HandlerPortTypeClient();
var sessionId = magentoService.login(userName, apiKey);
var categoryTree = magentoService.catalogCategoryTree(sessionId, "", "");
The error I get is "Internal Error. Please see log for details."
And in the logs I can see the following:
Argument 1 passed to Mage_Catalog_Model_Category_Api::_nodeToArray() must be an instance of Varien_Data_Tree_Node, null given
From what I've read it can be a bug with PHP 5.4 or greater, but that is not the version I'm using... So if someone has any idea how to solve this, it would be greatly appreciated.
Seems pretty straightforward, though the error thrown suggests a much bigger problem. First, make sure that the variables are exactly as you specified in your Magento installation (pay attention to caps). Second, you can't pass empty strings; instead, try "Null".
Good luck
I need to write a "standalone" script in Python to upload sales taxes to the account_tax table in the database using ONLY the ORM module of OpenERP. What I would like to do is something like the pseudo code below.
Can someone provide me a more details on the following:
1) what sys.path entries do I need to set
2) what modules do I need to import before importing the "account" module. Currently, when I import the "account" module, I get the following error:
AssertionError: The report "report.custom" already exists!
3) what is the proper way to get my database cursor. In the code below I am simply calling psycopg2 directly to get a cursor.
If this approach cannot work, can anyone suggest an alternative approach other than writing XML files to load the data from the OpenERP application itself? This process needs to run outside of the standard OpenERP application.
PSEUDO CODE:
import sys
import psycopg2
# set Python paths to access openerp modules
sys.path.append("./openerp")
sys.path.append("./openerp/addons")
# import OpenERP
import openerp
# import the account addon modules that contains the tables
# to be populated.
import account
# define connection string
conn_string2 = "dbname='test2' user='xyz' password='password'"
# get a db connection
conn = psycopg2.connect(conn_string2)
# conn.cursor() will return a cursor object
cursor = conn.cursor()
# and finally use the ORM to insert data into table.
If you want to do it via web services, then have a look at the OpenERP XML-RPC web services.
Example code to work with the OpenERP web services:
import xmlrpclib
username = 'admin' #the user
pwd = 'admin' #the password of the user
dbname = 'test' #the database
# OpenERP Common login Service proxy object
sock_common = xmlrpclib.ServerProxy ('http://localhost:8069/xmlrpc/common')
uid = sock_common.login(dbname, username, pwd)
#replace localhost with the address of the server
# OpenERP Object manipulation service
sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
partner = {
'name': 'Fabien Pinckaers',
'lang': 'fr_FR',
}
#calling remote ORM create method to create a record
partner_id = sock.execute(dbname, uid, pwd, 'res.partner', 'create', partner)
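To read the record back through the same object service (the field list here is just illustrative):
# calling remote ORM read method to fetch fields of the new record
partner_data = sock.execute(dbname, uid, pwd, 'res.partner', 'read', [partner_id], ['name', 'lang'])
print partner_data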
More simply, you can also use the OpenERP client lib.
Example code with the client lib:
import openerplib
connection = openerplib.get_connection(hostname="localhost", database="test", \
login="admin", password="admin")
user_model = connection.get_model("res.users")
ids = user_model.search([("login", "=", "admin")])
user_info = user_model.read(ids[0], ["name"])
print user_info["name"]
You see, both ways are good, but when you use the client lib the code is shorter and easier to understand, while the xmlrpc proxy involves lower-level calls that you have to handle yourself.
Hope this will help you.
In my view, one should go for the XML-RPC or NETSVC services provided by OpenERP for such needs.
You don't need to import the account module of OpenERP; there is a possibility that other modules have inherited the account.tax object and altered its behaviour to suit your business needs.
Consequently, if you feed data by calling those methods manually without using the OpenERP web services, it is possible you'll get undesired results, unexpected failures, or an inconsistent database state.
You can use ERPpeek to browse data, but I'm not sure you can really upload data to the DB with it; personally I use/prefer XML-RPC.
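For what it's worth, a minimal ERPpeek sketch for browsing would look something like this (connection details are placeholders; I have not verified it against an accounting database):
import erppeek
# connect to the OpenERP server over XML-RPC
client = erppeek.Client('http://localhost:8069', 'test', 'admin', 'admin')
taxes = client.model('account.tax')
print taxes.search([])  # ids of existing tax records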
Why don't you use the XML-RPC interface of OpenERP? It does not require importing account or openerp, and you still get all the ORM functionality.
You can use a Python library to access the OpenERP server via the XML-RPC service.
Please check https://github.com/OpenERP/openerp-client-lib
It is officially supported by OpenERP SA.
If you want to interact directly with the DB, you could just import psycopg2 and:
conn = psycopg2.connect(dbname='dbname', user='dbuser', password='dbpassword', host='dbhost')
cur = conn.cursor()
# use parameterized queries rather than string formatting
cur.execute('select * from table where id = %s', (table_id,))
cur.execute('insert into table(column1, column2) values(%s, %s)', (value1, value2))
conn.commit()  # make the insert permanent
cur.close()
conn.close()
Why do you want to do it like that?! You should create a localization module and define the data in XML files. This is the standard way to solve such a problem in OpenERP.
For which country do you want to insert sales taxes? Please explain more.
from openerp.modules.registry import RegistryManager

# get a registry for the database and use the ORM through its cursor
registry = RegistryManager.get("databasename")
with registry.cursor() as cr:
    user = registry.get('res.users').browse(cr, userid, listids)
    print user