Read line by line in Odoo 9

How can I print, using the new API, each line's name from my myrecord_ids field?
@api.multi
def func(self):
    for line in self.browse(myrecord_ids):
        print(line.name)
I want to print these lines: https://postimg.org/image/nvl45fuuh/

@api.multi
def func(self):
    for line in self.myrecord_ids:
        print(line.name)
If you encounter unexpected output, you can print all values of your related field as follows.
@api.multi
def func(self):
    for line in self.myrecord_ids:
        print(line.read([]))

If self.myrecord_ids returns myrecord_ids(1,) myrecord_ids(2,) myrecord_ids(3,), then you are working with a recordset, so you can use a list comprehension to get all names:
list_of_names = [i.name for i in self.myrecord_ids]
and then simply print it in a loop or with ', '.join(list_of_names).
In case you want to see the IDs as well, include them in the expression, for example:
list_with_ids = ["%d: %s" % (i.id, i.name) for i in self.myrecord_ids]
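Since the pattern above only needs iteration and attribute access, it can be demonstrated outside Odoo with stand-in objects (the Line class and the sample names below are assumptions for illustration, not Odoo records):

```python
# Stand-in for an Odoo recordset: any iterable of objects with .id and .name.
class Line:
    def __init__(self, id, name):
        self.id = id
        self.name = name

myrecord_ids = [Line(1, "first"), Line(2, "second"), Line(3, "third")]

# Same comprehensions as in the answer above.
list_of_names = [i.name for i in myrecord_ids]
print(', '.join(list_of_names))  # first, second, third

list_with_ids = ["%d: %s" % (i.id, i.name) for i in myrecord_ids]
print(list_with_ids)  # ['1: first', '2: second', '3: third']
```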

PySpark : AttributeError: 'DataFrame' object has no attribute 'values'

I'm a newbie in PySpark and I want to translate the following Pandas scripts into PySpark:
api_param_df = pd.DataFrame([[row[0][0], np.nan] if row[0][1] == '' else row[0] for row in http_path.values], columns=["api", "param"])
df = pd.concat([df['raw'], api_param_df], axis=1)
but I get the following error; the traceback is:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-df055fb7d6a1> in <module>()
21 # Notice we also make \? and the second capture group optional so that when there are no query parameters in http path, it returns NaN.
22
---> 23 api_param_df = pd.DataFrame([[row[0][0], np.nan] if row[0][1] == '' else row[0] for row in http_path.values], columns=["api", "param"])
24 df = pd.concat([df['raw'], api_param_df], axis=1)
25
/usr/local/lib/python3.7/dist-packages/pyspark/sql/dataframe.py in __getattr__(self, name)
1642 if name not in self.columns:
1643 raise AttributeError(
-> 1644 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
1645 jc = self._jdf.apply(name)
1646 return Column(jc)
AttributeError: 'DataFrame' object has no attribute 'values'
The full script is as follows; the comments explain how a regex is applied to the http_path column of df to parse out api and param, which are then merged/concatenated back onto df.
#Extract features from http_path ["API URL", "URL parameters"]
regex = r'([^\?]+)\?*(.*)'
http_path = df.filter(df['http_path'].rlike(regex))
# http_path
#0 https://example.org/path/to/file?param=42#frag...
#1 https://example.org/path/to/file
# api param
#0 https://example.org/path/to/file param=42#fragment
#1 https://example.org/path/to/file NaN
#where in regex pattern:
#- (?:https?://[^/]+/)? optionally matches domain but doesn't capture it
#- (?P<api>[^?]+) matches everything up to ?
#- \? matches ? literally
#- (?P<param>.+) matches everything after ?
# Notice we also make \? and the second capture group optional so that when there are no query parameters in http_path, it returns NaN.
api_param_df = pd.DataFrame([[row[0][0], np.nan] if row[0][1] == '' else row[0] for row in http_path.values], columns=["api", "param"])
df = pd.concat([df['raw'], api_param_df], axis=1)
df
Any help will be appreciated.
The syntax is valid for Pandas DataFrames, but that attribute doesn't exist on PySpark DataFrames. You can check the PySpark DataFrame documentation for the attributes it does provide.
Usually, the collect() method or the .rdd attribute would help you with these tasks.
You can use the following snippet to produce the desired result:
http_path = sdf.rdd.map(lambda row: row['http_path'].split('?'))
api_param_df = pd.DataFrame([[row[0], np.nan] if len(row) == 1 else row for row in http_path.collect()], columns=["api", "param"])
sdf = pd.concat([sdf.toPandas()['raw'], api_param_df], axis=1)
Note that I removed the comments to make it more readable and I've also substituted the regex with a simple split.
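The row-building logic of the snippet above can be checked without a Spark session, since after `.collect()` it is plain Python. In the sketch below the sample URLs are made up, and None stands in for np.nan so no extra dependency is needed:

```python
# Each http_path is split on '?'; paths without query parameters yield a
# one-element list and get a placeholder (None, standing in for np.nan)
# in the "param" slot.
paths = [
    "https://example.org/path/to/file?param=42",
    "https://example.org/path/to/file",
]

# Equivalent of sdf.rdd.map(lambda row: row['http_path'].split('?')).collect()
split_rows = [p.split('?') for p in paths]

rows = [[row[0], None] if len(row) == 1 else row for row in split_rows]
print(rows)
# [['https://example.org/path/to/file', 'param=42'],
#  ['https://example.org/path/to/file', None]]
```

These rows are what the answer then feeds into `pd.DataFrame(..., columns=["api", "param"])`.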

I get this error when I try to use Wolfram Alpha in VS Code Python: ValueError: dictionary update sequence element #0 has length 1; 2 is required

This is my code
import wolframalpha
app_id = '876P8Q-R2PY95YEXY'
client = wolframalpha.Client(app_id)
res = client.query(input('Question: '))
print(next(res.results).text)
The question I tried was 1 + 1.
When I run it, I get this error:
Traceback (most recent call last):
File "c:/Users/akshi/Desktop/Xander/Untitled.py", line 9, in <module>
print(next(res.results).text)
File "C:\Users\akshi\AppData\Local\Programs\Python\Python38\lib\site-packages\wolframalpha\__init__.py", line 166, in text
return next(iter(self.subpod)).plaintext
ValueError: dictionary update sequence element #0 has length 1; 2 is required
Please help me
I was getting the same error when I tried to run the same code.
You can refer to the "Implementing Wolfram Alpha Search" section of this website for a better understanding of how the result is extracted from the dictionary that is returned.
https://medium.com/@salisuwy/build-an-ai-assistant-with-wolfram-alpha-and-wikipedia-in-python-d9bc8ac838fe
Also, I tried the following code by referring to the above website....hope it might help you :)
import wolframalpha
client = wolframalpha.Client('<your app_id>')
query = str(input('Question: '))
res = client.query(query)
if res['@success'] == 'true':
    pod0 = res['pod'][0]['subpod']['plaintext']
    print(pod0)
    pod1 = res['pod'][1]
    if (('definition' in pod1['@title'].lower())
            or ('result' in pod1['@title'].lower())
            or (pod1.get('@primary', 'false') == 'true')):
        result = pod1['subpod']['plaintext']
        print(result)
else:
    print("No answer returned")
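The wolframalpha library parses the API's XML response into nested dicts in which XML attributes carry an '@' prefix, which is why the lookups above use keys like '@success' and '@title'. The dict-navigation part can be exercised without an app id using a mock response; the shape of mock_res below is an assumption modeled on that convention, not a captured API reply:

```python
# Mock of a parsed Wolfram Alpha response (assumed shape): XML attributes
# appear as '@'-prefixed keys, element text as plain keys.
mock_res = {
    '@success': 'true',
    'pod': [
        {'@title': 'Input', 'subpod': {'plaintext': '1 + 1'}},
        {'@title': 'Result', '@primary': 'true',
         'subpod': {'plaintext': '2'}},
    ],
}

answer = None
if mock_res['@success'] == 'true':
    pod1 = mock_res['pod'][1]
    # Same guard as the answer above: title mentions "result", or the pod
    # is flagged primary.
    if ('result' in pod1['@title'].lower()
            or pod1.get('@primary', 'false') == 'true'):
        answer = pod1['subpod']['plaintext']
print(answer)  # 2
```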

Scrapy Item not returning unicode when appended to dataframe?

I'm using Scrapy Pipeline to get all the items to a dataframe.
The code runs well but the unicode text is not showing correctly on the output of the dataframe.
However the result in csv file exported by feed_exporter is still fine. Could you guys please advise?
Here is the code:
# In pipelines.py
class CrawlerPipeline(object):
    def open_spider(self, spider):
        settings = get_project_settings()
        self.df = pd.DataFrame(columns=settings.get('FEED_EXPORT_FIELDS'))
        print('SUCCESS CREATE DATAFRAME', self.df.columns)

    def process_item(self, item, spider):
        self.df = self.df.append([dict(item)])  # I think the problem is in this line
        print('SUCCESS APPEND RECORD TO DATAFRAME, DF LEN:', len(self.df))
        return item
# In spider.py
def parse_detail_page(self, response):
    ads = CrawlerItem()
    ads['body'] = (response.css('#sgg > div > div > div.car_des > div::text').extract_first() or "").encode('utf-8').strip()
    yield ads
This is the incorrect output of the scraped text:
b'Salon \xc3\xb4 t\xc3\xb4 \xc3\x81nh L\xc3\xbd b\xc3\xa1n xe Kia Carens s\xe1\xba\xa3n xu\xe1\xba\xa5t 2015 m\xc3\xa0u c\xc3\xa1t'
The incorrect output you mention is the UTF-8-encoded bytes string corresponding to the desired text string.
You have two options:
Remove .encode('utf-8') from your code.
Add .decode('utf-8') when reading the string from the dataframe.
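Both options can be verified with a short round trip; the sample string below is the decoded form of the scraped output shown above:

```python
text = 'Salon ô tô Ánh Lý bán xe Kia Carens sản xuất 2015 màu cát'

# What the spider currently stores: a UTF-8 bytes string.
encoded = text.encode('utf-8')
print(encoded)  # b'Salon \xc3\xb4 t\xc3\xb4 \xc3\x81nh L\xc3\xbd ...'

# Option 2: decode when reading the bytes back out of the dataframe.
print(encoded.decode('utf-8') == text)  # True

# Option 1 is simpler: drop .encode('utf-8') in the spider and store
# the text string directly.
```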

Update line in odoo 9

How do I update a line in Odoo 9?
Source:
@api.multi
def my_func(self):
    for line in self.test_line_ids:
        # print(line.name)
        print(line.create_date)
        if line.create_date == "2017-01-25 10:50:56":
            self.write({'create_date': '2017-01-11 10:50:56'})
            print("YES")
        else:
            print("NO")
I tried the above example; the console prints YES, but the row is not updated in the database!
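One likely cause (an assumption, since no answer is recorded here): self.write(...) targets the current record, not the line matched inside the loop; writing on line would target the iterated row. Note also that create_date is one of Odoo's automatically maintained log-access fields, which may independently prevent the value from sticking. The difference between writing on self and on line can be sketched with stand-in objects (the Record class below is not Odoo):

```python
# Minimal stand-in for records with a write() method (illustration only).
class Record:
    def __init__(self, **vals):
        self.vals = vals

    def write(self, vals):
        self.vals.update(vals)

lines = [Record(create_date='2017-01-25 10:50:56'),
         Record(create_date='2017-02-01 09:00:00')]
parent = Record(create_date='2016-12-31 00:00:00', test_line_ids=lines)

for line in parent.vals['test_line_ids']:
    if line.vals['create_date'] == '2017-01-25 10:50:56':
        # self.write(...) would change the parent record;
        # line.write(...) changes the row being iterated.
        line.write({'create_date': '2017-01-11 10:50:56'})

print(lines[0].vals['create_date'])  # 2017-01-11 10:50:56
```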

Python 3 with the range function

I can type in the following code in the terminal, and it works:
for i in range(5):
    print(i)
And it will print:
0
1
2
3
4
as expected. However, I tried to write a script that does a similar thing:
print(current_chunk.data)
read_chunk(file, current_chunk)
numVerts, numFaces, numEdges = current_chunk.data
print(current_chunk.data)
print(numVerts)
for vertex in range(numVerts):
    print("Hello World")
current_chunk.data is gained from the following method:
def read_chunk(file, chunk):
    line = file.readline()
    while line.startswith('#'):
        line = file.readline()
    chunk.data = line.split()
The output for this is:
['OFF']
['490', '518', '0']
490
Traceback (most recent call last):
File "/home/leif/src/install/linux2/.blender/scripts/io/import_scene_off.py", line 88, in execute
load_off(self.properties.path, context)
File "/home/leif/src/install/linux2/.blender/scripts/io/import_scene_off.py", line 68, in load_off
for vertex in range(numVerts):
TypeError: 'str' object cannot be interpreted as an integer
So, why isn't it spitting out Hello World 490 times? Or is the 490 being thought of as a string?
I opened the file like this:
def load_off(filename, context):
file = open(filename, 'r')
'490' is a string. Try int('490').
Sigh, never mind, it did turn out to be evaluated as a string. Changing the for loop to
for vertex in range(int(numVerts)):
    print("Hello World")
fixed the problem.
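An alternative is to convert once, right after the split, since read_chunk stores whatever line.split() returns, and str.split always yields strings. The header line below is made up to match the output shown above:

```python
# Sample OFF header counts line, as read_chunk would see it after
# skipping '#' comment lines.
line = '490 518 0'

# Convert all three tokens to int in one step instead of calling int()
# at each use site.
numVerts, numFaces, numEdges = map(int, line.split())
print(numVerts)  # 490

for vertex in range(numVerts):
    pass  # 490 iterations of real per-vertex work would go here
```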