If I enter a wrong web code in my example below, I get a 500: Internal Server Error. Is it possible to get a validation error alert instead?
@http.route('/web/process', type="http", auth="public", website=True)
def send_ticket(self, **kwargs):
    values = {}
    for field_name, field_value in kwargs.items():
        values[field_name] = field_value
    if values['web_code'] != "9999":
        raise ValidationError(_('Wrong web code!'))
    vals = {'name': values['web_name']}
    create_new = http.request.env['project.task'].create(vals)
In the Odoo website module you cannot raise errors the way we do in the Sale and Purchase modules.
To display an error to the user, you need to re-render the same page with an error label in your template.
This link may help you further: raise warning in odoo website
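For illustration, a minimal sketch of that approach. The template XML ids ('my_module.process_form', 'my_module.ticket_thanks') and the 'error' variable are placeholders for whatever your module actually uses:

from odoo import http, _
from odoo.http import request


class TicketController(http.Controller):

    @http.route('/web/process', type="http", auth="public", website=True)
    def send_ticket(self, **kwargs):
        values = dict(kwargs)
        if values.get('web_code') != "9999":
            # Instead of raising ValidationError, re-render the form template
            # with an error message the page can display next to the field.
            return request.render('my_module.process_form', {
                'error': _('Wrong web code!'),
                'values': values,  # keep what the user already typed
            })
        request.env['project.task'].create({'name': values.get('web_name')})
        return request.render('my_module.ticket_thanks', {})

The template can then show the error when it is set, for example with a div guarded by t-if="error" that renders the message.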
I have an API which prints a success or error message to the console. I am new to Python and trying to read that response. Google throws up many examples using subprocess, but I don't want to run or call any command or subprocess. I just want to read the output after the API call below.
This is the console response on success:
17:50:52 | Logged in!!
This is the GitHub link for the SDK and its documentation:
https://github.com/5paisa/py5paisa
This is the code:
from py5paisa import FivePaisaClient

email = "myemailid@gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key"
}

client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)
client.login()
In general it is bad practice to get a value from STDOUT. There are ways, but it's pretty tricky (STDOUT is not made for that). And the problem doesn't come from you but from the API, which is badly designed: it should return a value, e.g. True or False (at least), to tell you whether you logged in, and it doesn't.
So, according to their documentation it is not possible to know whether you're logged in, but you may be able to tell by checking the client_code attribute on the client object.
If client.client_code is equal to something then you are probably logged in, and if it is equal to something else then you are not. You can compare its value after a successful login and after a failed one (wrong credentials, for instance). Then you can add a condition: if it is None or False or 0 (you will have to check this yourself), the login failed.
Can you try doing the following with a successful and a failed login:
client.login()
print(client.client_code)
Source of the API:
# Login function:
# (...)
message = res["body"]["Message"]
if message == "":
    log_response("Logged in!!")
else:
    log_response(message)
self._set_client_code(res["body"]["ClientCode"])
# (...)

# _set_client_code function:
def _set_client_code(self, client_code):
    try:
        self.client_code = client_code  # <<<< That's what we want
    except Exception as e:
        log_response(e)
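Building on that, a small sketch of the check described above, continuing from the login snippet in the question. The exact value client_code takes on a failed login is an assumption; compare a good and a bad login yourself:

client.login()

# The "failed" value (None / empty / 0) is an assumption -- verify it by
# printing client.client_code after a successful and a failed login:
if client.client_code:
    print("Login seems to have succeeded, client code:", client.client_code)
else:
    print("Login seems to have failed")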
Since this question asks how to capture "stdout", one way you can accomplish this is to intercept the log message before it hits stdout.
The minimum code to capture a log message within a Python script looks like this:
#!/usr/bin/env python3
import logging

logger = logging.getLogger(__name__)


class RequestHandler(logging.Handler):
    def emit(self, record):
        if record.getMessage().startswith("Hello"):
            print("hello detected")


handler = RequestHandler()
logger.addHandler(handler)
logger.warning("Hello world")
Putting it all together you may be able to do something like this:
import logging

from py5paisa import FivePaisaClient

email = "myemailid@gmail.com"
pw = "mypassword"
dob = "mydateofbirth"
cred = {
    "APP_NAME": "app-name",
    "APP_SOURCE": "app-src",
    "USER_ID": "user-id",
    "PASSWORD": "pw",
    "USER_KEY": "user-key",
    "ENCRYPTION_KEY": "enc-key"
}

client = FivePaisaClient(email=email, passwd=pw, dob=dob, cred=cred)


class PaisaClient(logging.Handler):
    def __init__(self):
        super().__init__()
        self.loggedin = False  # this is the variable we can use to see if we are "logged in"

    def emit(self, record):
        if record.getMessage().startswith("Logged in!!"):
            self.loggedin = True

    def login(self):
        client.login()


logging.getLogger("py5paisa")  # get the logger for the py5paisa library
# tutorial here: https://betterstack.com/community/questions/how-to-disable-logging-from-python-request-library/
c = PaisaClient()
# install the same handler instance on the root logger so its loggedin flag gets set
logging.basicConfig(handlers=[c], level=0, force=True)
c.login()
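After c.login() returns, the handler's flag set in emit() tells you whether the "Logged in!!" record was seen; a sketch of how you might use it:

# c.loggedin is flipped to True by emit() when the "Logged in!!" log record arrives
if c.loggedin:
    print("Successfully logged in")
else:
    print("Login message not seen -- the login probably failed")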
I am trying to extract the user id, rating and review from the following site using Selenium, and it is showing an "Invalid selector" error. I think the XPath I defined to get the review text is the reason for the error, but I am unable to resolve the issue. The site link is below:
teslamotor review
The code that I have used is as follows:
# Class for review web scraping from the consumeraffairs.com site
class CarForumCrawler():
    def __init__(self, start_link):
        self.link_to_explore = start_link
        self.comments = pd.DataFrame(columns=['rating', 'user_id', 'comments'])
        self.driver = webdriver.Chrome(executable_path=r'C:/Users/mumid/Downloads/chromedriver/chromedriver.exe')
        self.driver.get(self.link_to_explore)
        self.driver.implicitly_wait(5)
        self.extract_data()
        self.save_data_to_file()

    def extract_data(self):
        ids = self.driver.find_elements_by_xpath("//*[contains(@id,'review-')]")
        comment_ids = []
        for i in ids:
            comment_ids.append(i.get_attribute('id'))

        for x in comment_ids:
            # Extract the rating for each user on a page
            user_rating = self.driver.find_elements_by_xpath('//*[@id="' + x + '"]/div[1]/div/img')[0]
            rating = user_rating.get_attribute('data-rating')

            # Extract the user id for each user on a page
            userid_element = self.driver.find_elements_by_xpath('//*[@id="' + x + '"]/div[2]/div[2]/strong')[0]
            userid = userid_element.get_attribute('itemprop')

            # Extract the message for each user on a page
            user_message = self.driver.find_elements_by_xpath('//*[@id="' + x + '"]]/div[3]/p[2]/text()')[0]
            comment = user_message.text

            # Add rating, userid and comment for each user to the dataframe
            self.comments.loc[len(self.comments)] = [rating, userid, comment]

    def save_data_to_file(self):
        # save the dataframe content to a CSV file
        self.comments.to_csv('Tesla_rating-6.csv', index=None, header=True)

    def close_spider(self):
        # end the session
        self.driver.quit()


try:
    url = 'https://www.consumeraffairs.com/automotive/tesla_motors.html'
    mycrawler = CarForumCrawler(url)
    mycrawler.close_spider()
except:
    raise
The error that I am getting is as follows:
Also, the XPath that I tried to trace is from the following HTML:
You are seeing the classic error of...
as find_elements_by_xpath('//*[@id="' + x +'"]]/div[3]/p[2]/text()')[0] would select text nodes rather than elements; you need to pass an XPath expression that selects elements instead.
You need to change it to:
user_message = self.driver.find_elements_by_xpath('//*[@id="' + x +'"]]/div[3]/p[2]')[0]
References
You can find a couple of relevant detailed discussions in:
invalid selector: The result of the xpath expression "//a[contains(@href, 'mailto')]/@href" is: [object Attr] getting the href attribute with Selenium
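For completeness, a small self-contained sketch of the element-versus-text-node difference (the XPath and URL mirror the question and may not match the live page exactly):

from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
driver.get('https://www.consumeraffairs.com/automotive/tesla_motors.html')

# Ending the expression with /text() selects text nodes, which WebDriver cannot
# return, hence the "invalid selector" error:
#   driver.find_elements_by_xpath("//*[contains(@id,'review-')]/div[3]/p[2]/text()")

# Select the <p> element instead and read its text through the element API:
paragraphs = driver.find_elements_by_xpath("//*[contains(@id,'review-')]/div[3]/p[2]")
if paragraphs:
    print(paragraphs[0].text)

driver.quit()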
I have a question about using apollo-upload-client and graphene-django. Here I've discovered that apollo-upload-client adds the operations to formData. But here graphene-django is only trying to get the query parameter. The question is: where and how should this be fixed?
If you're referring to the data that has a header like this (when viewing the HTTP request in the Chrome dev tools):
Content-Disposition: form-data; name="operations"
and data like
{"operationName":"MyMutation","variables":{"myData"....}, "query":"mutation MyMutation"...},
then the graphene-python library interprets this and assembles it into a query for you, inserting the variables and removing the file data from the query. If you are using Django, you can find all of the uploaded files in info.context.FILES when writing a mutation.
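As a hedged sketch of what that looks like on the Django side (the mutation name and the description argument are illustrative, not part of either library's API):

import graphene

class UploadFile(graphene.Mutation):
    class Arguments:
        description = graphene.String()

    ok = graphene.Boolean()

    def mutate(root, info, description=None):
        # With graphene-django, info.context is the Django HttpRequest, so the
        # files sent by apollo-upload-client show up in its FILES mapping:
        for field_name, uploaded in info.context.FILES.items():
            print(field_name, uploaded.name, uploaded.size)
        return UploadFile(ok=True)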
Here's my solution to support the latest apollo-upload-client (8.1). I recently had to revisit my Django code when I upgraded from apollo-upload-client 5.x to 8.x. Hope this helps.
Sorry, I'm using an older graphene-django, but hopefully you can update the mutation syntax to the latest.
Upload scalar type (basically a passthrough):
class Upload(Scalar):
    '''A file upload'''

    @staticmethod
    def serialize(value):
        raise Exception('File upload cannot be serialized')

    @staticmethod
    def parse_literal(node):
        raise Exception('No such thing as a file upload literal')

    @staticmethod
    def parse_value(value):
        return value
My upload mutation:
class UploadImage(relay.ClientIDMutation):
    class Input:
        image = graphene.Field(Upload, required=True)

    success = graphene.Field(graphene.Boolean)

    @classmethod
    def mutate_and_get_payload(cls, input, context, info):
        with NamedTemporaryFile(delete=False) as tmp:
            for chunk in input['image'].chunks():
                tmp.write(chunk)
            image_file = tmp.name
        # do something with image_file
        return UploadImage(success=True)
The heavy lifting happens in a custom GraphQL view. Basically, it injects the file objects into the appropriate places in the variables map.
def maybe_int(s):
    try:
        return int(s)
    except ValueError:
        return s


class CustomGraphqlView(GraphQLView):
    def parse_request_json(self, json_string):
        try:
            request_json = json.loads(json_string)
            if self.batch:
                assert isinstance(request_json, list), (
                    'Batch requests should receive a list, but received {}.'
                ).format(repr(request_json))
                assert len(request_json) > 0, 'Received an empty list in the batch request.'
            else:
                assert isinstance(request_json, dict), 'The received data is not a valid JSON query.'
            return request_json
        except AssertionError as e:
            raise HttpError(HttpResponseBadRequest(str(e)))
        except BaseException:
            logger.exception('Invalid JSON')
            raise HttpError(HttpResponseBadRequest('POST body sent invalid JSON.'))

    def parse_body(self, request):
        content_type = self.get_content_type(request)
        if content_type == 'application/graphql':
            return {'query': request.body.decode()}
        elif content_type == 'application/json':
            return self.parse_request_json(request.body.decode('utf-8'))
        elif content_type in ['application/x-www-form-urlencoded', 'multipart/form-data']:
            operations_json = request.POST.get('operations')
            map_json = request.POST.get('map')
            if operations_json and map_json:
                operations = self.parse_request_json(operations_json)
                map = self.parse_request_json(map_json)
                for file_id, f in request.FILES.items():
                    for name in map[file_id]:
                        segments = [maybe_int(s) for s in name.split('.')]
                        cur = operations
                        while len(segments) > 1:
                            cur = cur[segments.pop(0)]
                        cur[segments.pop(0)] = f
                logger.info('parse_body %s', operations)
                return operations
            else:
                return request.POST
        return {}
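To use it, the custom view replaces the stock GraphQLView in urls.py; a hedged sketch, where the schema import path and URL are placeholders for your project's own:

from django.urls import path
from myproject.schema import schema  # assumed location of your graphene schema

urlpatterns = [
    path('graphql/', CustomGraphqlView.as_view(schema=schema)),
]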
I have made a web2py web application. The exposed API endpoints are as follows:
"/comments[comments]"
"/comments/id/{comments.id}"
"/comments/id/{comments.id}/:field"
"/comments/user-id/{comments.user_id}"
"/comments/user-id/{comments.user_id}/:field"
"/comments/date-commented/{comments.date_commented.year}"
"/comments/date-commented/{comments.date_commented.year}/:field"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/:field"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/:field"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/{comments.date_commented.hour}"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/{comments.date_commented.hour}/:field"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/{comments.date_commented.hour}/{comments.date_commented.minute}"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/{comments.date_commented.hour}/{comments.date_commented.minute}/:field"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/{comments.date_commented.hour}/{comments.date_commented.minute}/{comments.date_commented.second}"
"/comments/date-commented/{comments.date_commented.year}/{comments.date_commented.month}/{comments.date_commented.day}/{comments.date_commented.hour}/{comments.date_commented.minute}/{comments.date_commented.second}/:field"
"/comments/complaint-id/{comments.complaint_id}"
"/comments/complaint-id/{comments.complaint_id}/:field"
The comments model is as follows:
models/db.py
db.define_table(
    'comments',
    Field('user_id', db.auth_user),
    Field('comment_made', 'string', length=2048),
    Field('date_commented', 'datetime', default=datetime.now),
    Field('complaint_id', db.complaints),
    Field('detailed_status', 'string', length=2048),
)
I have been successful in retrieving a single comment via the following request:
localhost:8000/api/comments/id/1.json
Now I wish to retrieve all the comments, but I am not able to figure out how to use /comments[comments] to retrieve them all.
I have tried
localhost:8000/api/comments.json
but it gives "invalid path" as output.
I have realized that requests such as http://localhost:8000/api/comments/complaint-id/1.json
also give "invalid path" as output.
Please help.
EDIT:
Controllers/default.py
@request.restful()
def api():
    response.view = 'generic.' + request.extension

    def GET(*args, **kargs):
        patterns = 'auto'
        parser = db.parse_as_rest(patterns, args, kargs)
        if parser.status == 200:
            return dict(content=parser.response)
        else:
            raise HTTP(parser.status, parser.error)

    def POST(*args, **kargs):
        return dict()

    return locals()
routes.py in the main web2py folder to change the default application:
routers = dict(
    BASE = dict(
        default_application='GRS',
    )
)
Another observation:
I added another endpoint as below:
def comm():
    """ Comments api substitute """
    rows = db().select(db.comments.ALL)  # this line shows the error
    # rows = db(db.comments.id > 0).select()
    # rows = [[5, 6], [3, 4], [1, 2]]
    # for row in rows:
    #     print row.id
    return dict(message=rows)
Even now I am not able to retrieve all comments with "/comm.json". This gives a web2py error ticket which says "need more than 1 value to unpack" on the line "rows=db().select(db.comments.ALL)". Are the "invalid path" responses above and this error related in some way?
I am developing a mobile application in iOS for an ODOO / OpenERP website.
I want to convert an Opportunity to a Quotation in the ODOO mobile application ("Convert To Quotation").
For that I am using the model "crm.make.sale" and the method "makeOrder".
I chose this model and method by enabling developer mode, as shown in the screenshot below.
Below are the input parameters for the XMLRPC web-service call:
(
    Database name,
    1,
    Password,
    crm.make.sale,
    makeOrder,
    {
        id = 21;
        "partner_id" = 12;
    }
)
But for the above web-service call I am getting the following error:
invalid input syntax for integer: "partner_id"
LINE 2: ... WHERE "crm_make_sale".id IN ('partner_i...
Where am I going wrong?
Thanks
This might fix your error:
(Database name, 1, Password, crm.make.sale, makeOrder, {'id': 21, 'partner_id': 12})