List of dictionaries to dataframe - pandas

I'm trying to make a dataframe out of a list of dictionaries. I am quite new to this whole programming thing, and Google just makes me more confused. That is why I am turning to you guys hoping for some assistance.
The first two list values ('YV01', '3nP3RFgGnBrOfILK4DF2Tp') I would like to have under columns called Name and GlobalId. I would like to drop Pset_WallCommon, AC_Pset_RenovationAndPhasing, and BaseQuantities, and use the rest of the keys (if that's what they are called) as column names.
It would be great if someone could give me a push in the right direction :)
For the record: I am parsing an IFC file with the IfcOpenShell package.
The data:
['YV01', '3nP3RFgGnBrOfILK4DF2Tp', {'Pset_WallCommon': {'Combustible': False, 'Compartmentation': False, 'ExtendToStructure': False, 'SurfaceSpreadOfFlame': '', 'ThermalTransmittance': 0.0, 'Reference': '', 'AcousticRating': '', 'FireRating': '', 'LoadBearing': False, 'IsExternal': False}, 'AC_Pset_RenovationAndPhasing': {'Renovation Status': 'New'}, 'BaseQuantities': {'Length': 13786.7314346, 'Height': 2700.0, 'Width': 276.0, 'GrossFootprintArea': 3.88131387595, 'NetFootprintArea': 3.88131387595, 'GrossSideArea': 37.9693748734, 'NetSideArea': 37.9693748734, 'GrossVolume': 10.4795474651, 'NetVolume': 10.4795474651}}, 'YV01', '1M4JyBJhXD5xt8fBFUcjUU', {'Pset_WallCommon': {'Combustible': False, 'Compartmentation': False, 'ExtendToStructure': False, 'SurfaceSpreadOfFlame': '', 'ThermalTransmittance': 0.0, 'Reference': '', 'AcousticRating': '', 'FireRating': '', 'LoadBearing': False, 'IsExternal': False}, 'AC_Pset_RenovationAndPhasing': {'Renovation Status': 'New'}, 'BaseQuantities': {'Length': 6166.67382573, 'Height': 2700.0, 'Width': 276.0, 'GrossFootprintArea': 1.6258259759, 'NetFootprintArea': 1.6258259759, 'GrossSideArea': 15.9048193295, 'NetSideArea': 15.9048193295, 'GrossVolume': 4.38973013494, 'NetVolume': 4.38973013494}}
all_walls = ifc_file.by_type('IfcWall')
wallList = []
for wall in all_walls:
    propertySets = ifcopenshell.util.element.get_psets(wall)
    wallList.append(wall.Name)
    wallList.append(wall.GlobalId)
    wallList.append(propertySets)
print(wallList)
wall_table = pd.DataFrame.from_records(wallList)
print(wall_table)
I have tried the basic pd.DataFrame.from_dict / from_records / from_arrays(data) calls, but the output looks like this:
[screenshot of the resulting DataFrame]
UPDATE: Thank you so much for your help, I am learning a lot from this!
So I made a dictionary out of the wallList and flattened that dict, like this:
# list of walls
for wall in all_walls:
    propertySets = ifcopenshell.util.element.get_psets(wall)
    wallList.append(wall.Name)
    wallList.append(wall.GlobalId)
    wallList.append(propertySets)

# dict from list
wall_dict = {i: wallList[i] for i in range(0, len(wallList))}
new_dict = {}

# flattening dict
for key, value in wall_dict.items():
    if isinstance(value, dict):
        for key in value.keys():
            for key2 in value[key].keys():
                new_dict[key + '_' + key2] = value[key][key2]
    else:
        new_dict[key] = value

wall_table = pd.DataFrame.from_dict(new_dict, orient='index')
print(wall_table)
It seems to work pretty well; the only problem is that the dataframe contains all walls, but only property set data from the first wall in the list. I can't seem to understand how the dict flattening loop works. I would also like the index names (Pset_WallCommon_Combustible, and so on) to be the columns in my dataframe. Is that possible?

EDIT: Simply flattening a list as I did goes nowhere. Actually, I think you should drop this list approach altogether and try to load the DataFrame from a dictionary. We'd need to see what all_walls looks like to help you with that, though.
Have you tried directly loading the all_walls dictionary into a dataframe: df = pd.DataFrame.from_dict(all_walls)?
I think if that doesn't work, flattening the dictionaries in a fashion similar to the following should do the trick.
new_dict = {}
for key, value in all_walls.items():
    if isinstance(value, dict):
        for key in value.keys():
            for key2 in value[key].keys():
                new_dict[key + '_' + key2] = value[key][key2]
    else:
        new_dict[key] = value
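Building on that idea, here is a minimal sketch (assuming the same all_walls, wall attributes, and ifcopenshell.util.element.get_psets call as in the question) that builds one flattened dictionary per wall and hands the list of those dictionaries to pandas, so each wall becomes a row and the prefixed property names become columns:

import ifcopenshell.util.element
import pandas as pd

rows = []
for wall in all_walls:
    psets = ifcopenshell.util.element.get_psets(wall)
    # start each row with the two plain attributes
    row = {'Name': wall.Name, 'GlobalId': wall.GlobalId}
    # flatten every property set into prefixed keys, e.g. Pset_WallCommon_Combustible
    for pset_name, props in psets.items():
        for prop_name, prop_value in props.items():
            row[pset_name + '_' + prop_name] = prop_value
    rows.append(row)

wall_table = pd.DataFrame(rows)   # one row per wall, flattened keys as columns
print(wall_table)

Because each wall gets its own dictionary, the property set values are no longer overwritten by the next wall, which is what happens when everything is flattened into a single new_dict. If you would rather drop the Pset_WallCommon prefix and keep only the inner keys as columns, use row[prop_name] = prop_value instead of the prefixed key.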

Related

Karate JSON filter paths not able to be parametrized

I am trying to dynamically substitute the value 'US' in the expression below, where configs is an array, not a JSON object.
* def selectedConfig = $configs[?(@.configId=='US')]
# the one above works
But I have not been successful in making the expression dynamic. I tried the variant below and failed; please help.
* def activeConfigId = 'US'
* def selectedConfig = karate.jsonPath("$configs[?(@.configId=='" + activeConfigId + "')]")
I see someone else has also asked a similar question, but there is no answer on how to make this expression dynamic:
Karate: parametric json path expressions
You need to read the docs and examples a little more carefully. Here is an example that works:
* def configId = 'US'
* def response = [{configId: 'AA', data: 'first'}, {configId: 'US', data: 'second'}]
* def selectedConfig = karate.jsonPath(response, "$[?(#.configId=='" + configId + "')]")
* match selectedConfig[0] == { configId: 'US', data: 'second' }
If using karate.jsonPath() is too hard, please look at karate.filter(): https://stackoverflow.com/a/62897131/143475
And since you seem to be trying to do some smart config switching, refer to this: https://stackoverflow.com/a/49693808/143475

How to get section headings of tables in Wikipedia through the API

How do I get the section headings for the individual tables: Xia dynasty (夏朝) (2070–1600 BC), Shang dynasty (商朝) (1600–1046 BC), Zhou dynasty (周朝) (1046–256 BC), etc. for the Chinese monarchs list on Wikipedia via the API? I use the code below to connect:
from pprint import pprint
import requests, wikitextparser
r = requests.get(
    'https://en.wikipedia.org/w/api.php',
    params={
        'action': 'query',
        'titles': 'List_of_Chinese_monarchs',
        'prop': 'revisions',
        'rvprop': 'content',
        'format': 'json',
    }
)
r.raise_for_status()
pages = r.json()['query']['pages']
body = next(iter(pages.values()))['revisions'][0]['*']
doc = wikitextparser.parse(body)
print(f'{len(doc.tables)} tables retrieved')
han = doc.tables[5].data()
doc.tables[6].data()
doc.tables[i].data() only returns the table values, without the <h2> section heading each table sits under. I would like to get back a list of title strings corresponding to each of the 83 tables returned.
Original website:
https://en.wikipedia.org/wiki/List_of_Chinese_monarchs
I'm not sure why you are using doc.tables when it is the sections you are interested in. This works for me:
for i in range(1, 94):
    print(doc.sections[i].title.replace('[[', '').replace(']]', ''))
I get 94 sections, though, rather than 83, and while you can use len(doc.sections), this will include "See also" etc. There must be a more elegant way of removing the wikilinks.
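If you want the heading that belongs to each table rather than just the flat list of section titles, one option is to walk the sections and collect the tables each one contains. A rough sketch, assuming the doc object from the snippet above, that wikitextparser's Section objects expose .tables and .level, and that the dynasty headings on this page are level-2 headings (so each table is counted once):

table_titles = []
for section in doc.sections:
    # skip the lead section (title is None) and any deeper sub-sections
    if section.title is None or section.level != 2:
        continue
    title = section.title.replace('[[', '').replace(']]', '').strip()
    for table in section.tables:
        table_titles.append((title, table))

for title, table in table_titles:
    print(title, '-', len(table.data()), 'rows')

If you instead want the closest sub-heading for tables that live in level-3 sub-sections, drop the level filter and keep only the innermost section containing each table.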

Erlang ets:select sublist

Is there a way in Erlang to create a select query on an ets table that will get all the elements containing the searched text?
ets:select(Table,
           [{ %% Match spec for select query
              {'_', #movie_data{genre = "Drama" ++ '_', _ = '_'}}, % Match pattern
              [],                                                  % Guard
              ['$_']                                               % Result
           }]).
This code gives me only the data that starts with (i.e. is prefixed by) the required text (text = "Drama"), but I also need the results that merely contain it, as in this example:
#movie_data{genre = "Action, Drama" }
I tried to change the guard to something like this:
{'_', #movie_data{genre = '$1', _='_'}}, [string:str('$1', "Drama") > 0] ...
But the problem is that it isn't a qualified guard expression.
Thanks for the help!!
It's not possible. You need to design your data structure to be searchable by the guard expressions, for example:
-record(movie_data, {genre, name}).
-record(genre, {comedy, drama, action}).

example() ->
    Table = ets:new('test', [{keypos, 2}]),
    ets:insert(Table, #movie_data{name = "Bean",
                                  genre = #genre{comedy = true}}),
    ets:insert(Table, #movie_data{name = "Magnolia",
                                  genre = #genre{drama = true}}),
    ets:insert(Table, #movie_data{name = "Fight Club",
                                  genre = #genre{drama = true, action = true}}),
    ets:select(Table,
               [{#movie_data{genre = #genre{drama = true, _ = '_'}, _ = '_'},
                 [],
                 ['$_']
                }]).

What does the path /shop/get_suggest mean?

I found a module that adds autocomplete search to an e-commerce website, with highlighting of the matched words and an image. But I did not really understand what each command does.
Can you please explain to me how this code works, and why they used /shop/get_suggest?
# imports needed for this snippet to run on its own
import json
from difflib import SequenceMatcher

from odoo import http
from odoo.http import request


class WebsiteSale(http.Controller):

    @http.route(['/shop/get_suggest'], type='http', auth="public", methods=['GET'], website=True)
    def get_suggest_json(self, **kw):
        query = kw.get('query')
        names = query.split(' ')
        domain = ['|' for k in range(len(names) - 1)] + [('name', 'ilike', name) for name in names]
        products = request.env['product.template'].search(domain, limit=15)
        products = sorted(products, key=lambda x: SequenceMatcher(None, query.lower(), x.name.lower()).ratio(),
                          reverse=True)
        results = []
        for product in products:
            results.append({'value': product.name, 'data': {'id': product.id, 'after_selected': product.name}})
        return json.dumps({
            'query': 'Unit',
            'suggestions': results
        })
This controller function will be activated when you load the page your_domain/shop/get_suggest.
The function just searches for products whose names are similar to the query given in the search box.
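To make the search part concrete, here is a small sketch of how the domain is assembled (the query string 'oak table' is just a made-up example): one '|' (OR) operator per extra word, in Odoo's prefix notation, followed by one ilike leaf per word:

# hypothetical query, only for illustration
query = 'oak table'
names = query.split(' ')
# one '|' per extra word (prefix notation), then one ilike condition per word
domain = ['|' for k in range(len(names) - 1)] + [('name', 'ilike', name) for name in names]
print(domain)
# ['|', ('name', 'ilike', 'oak'), ('name', 'ilike', 'table')]

So a product matches if its name contains any of the words, and the matches are then re-ordered by SequenceMatcher similarity to the full query before being returned as JSON suggestions.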
Please go through this documentation to learn the basics of building a website.

Update Context in Odoo Controller (request.env.context)

I want to update the request context,
request.env.context
At the moment I get this dictionary:
{'lang': u'en_US', 'tz': False, 'uid': 21}
I want to update the lang key, so the expected output of
request.env.context
would be:
{'lang': 'de_DE', 'tz': False, 'uid': 21}
Any idea how?
context = request.env.context.copy()
context.update({'lang': u'en_CA'})
request.env.context = context
The code below works well for me; I am guessing that you are working with the pos.order model:
pos_order = request.env['pos.order'].with_context(
    lang='de_DE')
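As a quick illustration of what with_context buys you, here is a minimal sketch; the product.template model, the search call, and the name field are hypothetical placeholders, not part of the original question:

# hypothetical example: read a record with German as the active language
ProductDE = request.env['product.template'].with_context(lang='de_DE')
product = ProductDE.search([], limit=1)
print(product.name)   # the de_DE translation of 'name', if one exists

Note that with_context returns a new recordset with the modified context; the original request.env.context is left untouched, which is why the first answer copies and reassigns the context when the change has to apply to the whole request.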