Jupyter .save_to_html function does not store config - kepler.gl

I'm trying to use the .save_to_html() function for a kepler.gl Jupyter notebook map.
It all works fine inside Jupyter, and I can re-load the same map with a defined config.
Where it goes wrong is when I use save_to_html(). The map saves to an HTML file, but the configuration reverts to the basic, pre-customization state.
I have tried changing the filters, colours, and point sizes; none of that carries over into the saved file.
Please help! I love kepler; once I solve this little thing, it will be our absolute go-to tool.
Thanks
map_1 = KeplerGl(height=500, data={'data': df}, config=config)
map_1

config = map_1.config
config

map_1.save_to_html(data={'data_1': df},
                   file_name='privateers.html', config=config)
Config
{'version': 'v1',
'config': {'visState': {'filters': [{'dataId': 'data',
'id': 'x8t9c53mf',
'name': 'time_update',
'type': 'timeRange',
'value': [1565687902187.5417, 1565775465282],
'enlarged': True,
'plotType': 'histogram',
'yAxis': None},
{'dataId': 'data',
'id': 'biysqlu36',
'name': 'user_id',
'type': 'multiSelect',
'value': ['HNc0SI3WsQfhOFRF2THnUEfmqJC3'],
'enlarged': False,
'plotType': 'histogram',
'yAxis': None}],
'layers': [{'id': 'ud6168',
'type': 'point',
'config': {'dataId': 'data',
'label': 'Point',
'color': [18, 147, 154],
'columns': {'lat': 'lat', 'lng': 'lng', 'altitude': None},
'isVisible': True,
'visConfig': {'radius': 5,
'fixedRadius': False,
'opacity': 0.8,
'outline': False,
'thickness': 2,
'strokeColor': None,
'colorRange': {'name': 'Uber Viz Qualitative 1.2',
'type': 'qualitative',
'category': 'Uber',
'colors': ['#12939A',
'#DDB27C',
'#88572C',
'#FF991F',
'#F15C17',
'#223F9A'],
'reversed': False},
'strokeColorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'radiusRange': [0, 50],
'filled': True},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': {'name': 'ride_id', 'type': 'string'},
'colorScale': 'ordinal',
'strokeColorField': None,
'strokeColorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}},
{'id': 'an8tbef',
'type': 'point',
'config': {'dataId': 'data',
'label': 'previous',
'color': [221, 178, 124],
'columns': {'lat': 'previous_lat',
'lng': 'previous_lng',
'altitude': None},
'isVisible': False,
'visConfig': {'radius': 10,
'fixedRadius': False,
'opacity': 0.8,
'outline': False,
'thickness': 2,
'strokeColor': None,
'colorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'strokeColorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'radiusRange': [0, 50],
'filled': True},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': None,
'colorScale': 'quantile',
'strokeColorField': None,
'strokeColorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}},
{'id': 'ilpixu9',
'type': 'arc',
'config': {'dataId': 'data',
'label': ' -> previous arc',
'color': [146, 38, 198],
'columns': {'lat0': 'lat',
'lng0': 'lng',
'lat1': 'previous_lat',
'lng1': 'previous_lng'},
'isVisible': True,
'visConfig': {'opacity': 0.8,
'thickness': 2,
'colorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'sizeRange': [0, 10],
'targetColor': None},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': None,
'colorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}},
{'id': 'inv52pp',
'type': 'line',
'config': {'dataId': 'data',
'label': ' -> previous line',
'color': [136, 87, 44],
'columns': {'lat0': 'lat',
'lng0': 'lng',
'lat1': 'previous_lat',
'lng1': 'previous_lng'},
'isVisible': False,
'visConfig': {'opacity': 0.8,
'thickness': 2,
'colorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'sizeRange': [0, 10],
'targetColor': None},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': None,
'colorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}}],
'interactionConfig': {'tooltip': {'fieldsToShow': {'data': ['time_ride_start',
'user_id',
'ride_id']},
'enabled': True},
'brush': {'size': 0.5, 'enabled': False}},
'layerBlending': 'normal',
'splitMaps': []},
'mapState': {'bearing': 0,
'dragRotate': False,
'latitude': 49.52565611453996,
'longitude': 6.2730441822977845,
'pitch': 0,
'zoom': 9.244725880765998,
'isSplit': False},
'mapStyle': {'styleType': 'dark',
'topLayerGroups': {},
'visibleLayerGroups': {'label': True,
'road': True,
'border': False,
'building': True,
'water': True,
'land': True,
'3d building': False},
'threeDBuildingColor': [9.665468314072013,
17.18305478057247,
31.1442867897876],
'mapStyles': {}}}}
Expected:
A fully configured map, as in the Jupyter widget.
Actual:
Colors and filters are not configured. The size and position of the map are carried over, so if I save it while looking at an empty area, the HTML file opens looking at the same spot.

In the Jupyter user guide for kepler.gl, under the save section:
# this will save current map
map_1.save_to_html(file_name='first_map.html')
# this will save map with provided data and config
map_1.save_to_html(data={'data_1': df}, config=config, file_name='first_map.html')
# this will save map with the interaction panel disabled
map_1.save_to_html(file_name='first_map.html', read_only=True)
So either it is a bug in the config parameter, or you are making changes to the map configuration after you assigned it to config. The latter would be fixed by passing the current config at save time:
map_1.save_to_html(data={'data_1': df},
                   file_name='privateers.html', config=map_1.config)
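To see why saving with an early snapshot can lose customizations, here is a minimal plain-Python sketch of the stale-snapshot pitfall; the dict below is a hypothetical stand-in for the widget's config, not the real kepler.gl object:

```python
# Hypothetical stand-in for the widget's live config (not the real kepler.gl API)
live_config = {'pointSize': 5}

snapshot = dict(live_config)    # like `config = map_1.config` taken too early
live_config['pointSize'] = 20   # later interactive customization in the widget

print(snapshot['pointSize'])    # the stale snapshot still holds the old value, 5
print(live_config['pointSize']) # the live config holds 20
```

Passing `map_1.config` at save time reads the current state instead of the stale snapshot.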

I think it is a bug (or a feature?) that happens when you save the map configuration in the same cell, or before you have actually displayed the map. Generally, the config only exists after you have really rendered the map.

The problem, as far as I can see it (and how I solved a similar problem), is that the 'data' key you used when instancing the map is different from the one you told save_to_html to use:
map_1 = KeplerGl(height=500, data={'data': df}, config=config)
map_1.save_to_html(data={'data_1': df}, file_name='privateers.html', config=config)
Name both keys the same and your HTML file will use the correct configuration.

Had this issue as well. Solved it by converting all pandas column dtypes to JSON-serializable ones, e.g. converting a datetime column from dtype <m8[ns] to object.
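A minimal sketch of that conversion, assuming a DataFrame with a datetime64 column (column names here are illustrative):

```python
import json
import pandas as pd

df = pd.DataFrame({'time_update': pd.to_datetime(['2019-08-13', '2019-08-14']),
                   'value': [1, 2]})

# datetime64[ns] values are not JSON serializable; cast them to strings (object dtype) first
df['time_update'] = df['time_update'].astype(str)

print(json.dumps(df.to_dict(orient='records')))
```

After the cast, the frame round-trips through json.dumps without a TypeError.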

Related

Python ML LSTM stock prediction with Dash: no output, code just keeps running

I'm trying to run the following code in a Jupyter notebook, but it just keeps running endlessly with no output. I'm following the tutorial from: https://data-flair.training/blogs/stock-price-prediction-machine-learning-project-in-python/
The code is from stock_app.py, which doesn't seem to be working:
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.graph_objs as go
from dash.dependencies import Input, Output
from keras.models import load_model
from sklearn.preprocessing import MinMaxScaler
import numpy as np
app = dash.Dash()
server = app.server
scaler=MinMaxScaler(feature_range=(0,1))
df_nse = pd.read_csv("./NSE-TATA.csv")
df_nse["Date"]=pd.to_datetime(df_nse.Date,format="%Y-%m-%d")
df_nse.index=df_nse['Date']
data=df_nse.sort_index(ascending=True,axis=0)
new_data=pd.DataFrame(index=range(0,len(df_nse)),columns=['Date','Close'])
for i in range(0,len(data)):
    new_data["Date"][i]=data['Date'][i]
    new_data["Close"][i]=data["Close"][i]
new_data.index=new_data.Date
new_data.drop("Date",axis=1,inplace=True)
dataset=new_data.values
train=dataset[0:987,:]
valid=dataset[987:,:]
scaler=MinMaxScaler(feature_range=(0,1))
scaled_data=scaler.fit_transform(dataset)
x_train,y_train=[],[]
for i in range(60,len(train)):
    x_train.append(scaled_data[i-60:i,0])
    y_train.append(scaled_data[i,0])
x_train,y_train=np.array(x_train),np.array(y_train)
x_train=np.reshape(x_train,(x_train.shape[0],x_train.shape[1],1))
model=load_model("saved_ltsm_model.h5")
inputs=new_data[len(new_data)-len(valid)-60:].values
inputs=inputs.reshape(-1,1)
inputs=scaler.transform(inputs)
X_test=[]
for i in range(60,inputs.shape[0]):
    X_test.append(inputs[i-60:i,0])
X_test=np.array(X_test)
X_test=np.reshape(X_test,(X_test.shape[0],X_test.shape[1],1))
closing_price=model.predict(X_test)
closing_price=scaler.inverse_transform(closing_price)
train=new_data[:987]
valid=new_data[987:]
valid['Predictions']=closing_price
df= pd.read_csv("./stock_data.csv")
app.layout = html.Div([
html.H1("Stock Price Analysis Dashboard", style={"textAlign": "center"}),
dcc.Tabs(id="tabs", children=[
dcc.Tab(label='NSE-TATAGLOBAL Stock Data',children=[
html.Div([
html.H2("Actual closing price",style={"textAlign": "center"}),
dcc.Graph(
id="Actual Data",
figure={
"data":[
go.Scatter(
x=train.index,
y=valid["Close"],
mode='markers'
)
],
"layout":go.Layout(
title='scatter plot',
xaxis={'title':'Date'},
yaxis={'title':'Closing Rate'}
)
}
),
html.H2("LSTM Predicted closing price",style={"textAlign": "center"}),
dcc.Graph(
id="Predicted Data",
figure={
"data":[
go.Scatter(
x=valid.index,
y=valid["Predictions"],
mode='markers'
)
],
"layout":go.Layout(
title='scatter plot',
xaxis={'title':'Date'},
yaxis={'title':'Closing Rate'}
)
}
)
])
]),
dcc.Tab(label='Facebook Stock Data', children=[
html.Div([
html.H1("Facebook Stocks High vs Lows",
style={'textAlign': 'center'}),
dcc.Dropdown(id='my-dropdown',
options=[{'label': 'Tesla', 'value': 'TSLA'},
{'label': 'Apple','value': 'AAPL'},
{'label': 'Facebook', 'value': 'FB'},
{'label': 'Microsoft','value': 'MSFT'}],
multi=True,value=['FB'],
style={"display": "block", "margin-left": "auto",
"margin-right": "auto", "width": "60%"}),
dcc.Graph(id='highlow'),
html.H1("Facebook Market Volume", style={'textAlign': 'center'}),
dcc.Dropdown(id='my-dropdown2',
options=[{'label': 'Tesla', 'value': 'TSLA'},
{'label': 'Apple','value': 'AAPL'},
{'label': 'Facebook', 'value': 'FB'},
{'label': 'Microsoft','value': 'MSFT'}],
multi=True,value=['FB'],
style={"display": "block", "margin-left": "auto",
"margin-right": "auto", "width": "60%"}),
dcc.Graph(id='volume')
], className="container"),
])
])
])
@app.callback(Output('highlow', 'figure'),
              [Input('my-dropdown', 'value')])
def update_graph(selected_dropdown):
    dropdown = {"TSLA": "Tesla", "AAPL": "Apple", "FB": "Facebook", "MSFT": "Microsoft"}
    trace1 = []
    trace2 = []
    for stock in selected_dropdown:
        trace1.append(
            go.Scatter(x=df[df["Stock"] == stock]["Date"],
                       y=df[df["Stock"] == stock]["High"],
                       mode='lines', opacity=0.7,
                       name=f'High {dropdown[stock]}', textposition='bottom center'))
        trace2.append(
            go.Scatter(x=df[df["Stock"] == stock]["Date"],
                       y=df[df["Stock"] == stock]["Low"],
                       mode='lines', opacity=0.6,
                       name=f'Low {dropdown[stock]}', textposition='bottom center'))
    traces = [trace1, trace2]
    data = [val for sublist in traces for val in sublist]
    figure = {'data': data,
              'layout': go.Layout(colorway=["#5E0DAC", '#FF4F00', '#375CB1',
                                            '#FF7400', '#FFF400', '#FF0056'],
                                  height=600,
                                  title=f"High and Low Prices for {', '.join(str(dropdown[i]) for i in selected_dropdown)} Over Time",
                                  xaxis={"title": "Date",
                                         'rangeselector': {'buttons': list([{'count': 1, 'label': '1M',
                                                                             'step': 'month',
                                                                             'stepmode': 'backward'},
                                                                            {'count': 6, 'label': '6M',
                                                                             'step': 'month',
                                                                             'stepmode': 'backward'},
                                                                            {'step': 'all'}])},
                                         'rangeslider': {'visible': True}, 'type': 'date'},
                                  yaxis={"title": "Price (USD)"})}
    return figure
@app.callback(Output('volume', 'figure'),
              [Input('my-dropdown2', 'value')])
def update_graph(selected_dropdown_value):
    dropdown = {"TSLA": "Tesla", "AAPL": "Apple", "FB": "Facebook", "MSFT": "Microsoft"}
    trace1 = []
    for stock in selected_dropdown_value:
        trace1.append(
            go.Scatter(x=df[df["Stock"] == stock]["Date"],
                       y=df[df["Stock"] == stock]["Volume"],
                       mode='lines', opacity=0.7,
                       name=f'Volume {dropdown[stock]}', textposition='bottom center'))
    traces = [trace1]
    data = [val for sublist in traces for val in sublist]
    figure = {'data': data,
              'layout': go.Layout(colorway=["#5E0DAC", '#FF4F00', '#375CB1',
                                            '#FF7400', '#FFF400', '#FF0056'],
                                  height=600,
                                  title=f"Market Volume for {', '.join(str(dropdown[i]) for i in selected_dropdown_value)} Over Time",
                                  xaxis={"title": "Date",
                                         'rangeselector': {'buttons': list([{'count': 1, 'label': '1M',
                                                                             'step': 'month',
                                                                             'stepmode': 'backward'},
                                                                            {'count': 6, 'label': '6M',
                                                                             'step': 'month',
                                                                             'stepmode': 'backward'},
                                                                            {'step': 'all'}])},
                                         'rangeslider': {'visible': True}, 'type': 'date'},
                                  yaxis={"title": "Transactions Volume"})}
    return figure
if __name__ == '__main__':
    app.run_server(debug=True)

How to make selenium page loading more efficient if accessing all data on the page requires 10k+ clicks of a lazyload button?

I am scraping one particular page with a headless chromedriver.
The page is really huge; to load it entirely I need 10k+ clicks on a lazy-load button.
The more I click, the slower things get.
Is there a way to make the process faster?
Here is the code:
def driver_config():
    chrome_options = Options()
    prefs = {"profile.managed_default_content_settings.images": 2}
    chrome_options.add_experimental_option("prefs", prefs)
    chrome_options.page_load_strategy = 'eager'
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome(options=chrome_options)
    return driver
def scroll_the_category_until_the_end(driver, category_url):
    driver.get(category_url)
    pbar = tqdm()
    pbar.write('initializing spin')
    while True:
        try:
            show_more_button = WebDriverWait(driver, 20).until(
                EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div[2]/div[2]/div[2]/button')))
            driver.execute_script("arguments[0].click();", show_more_button)
            pbar.update()
        except TimeoutException:
            pbar.write('docking')
            pbar.close()
            break
driver = driver_config()
scroll_the_category_until_the_end(driver, 'https://supl.biz/russian-federation/stroitelnyie-i-otdelochnyie-materialyi-supplierscategory9403/')
UPDATE:
I also tried to implement another strategy, but it didn't work:
deleting all company information on every iteration
clearing the driver cache
My hypothesis was that if I did this, the DOM would always stay small and fast:
driver = driver_config()
driver.get('https://supl.biz/russian-federation/stroitelnyie-i-otdelochnyie-materialyi-supplierscategory9403/')
pbar = tqdm()
pbar.clear()
while True:
    try:
        for el in driver.find_elements_by_class_name('a_zvOKG8vZ'):
            driver.execute_script("""var element = arguments[0];element.parentNode.removeChild(element);""", el)
        button = WebDriverWait(driver, 20).until(
            EC.element_to_be_clickable((By.XPATH, "//*[contains(text(), 'Показать больше поставщиков')]")))
        driver.execute_script("arguments[0].click();", button)
        pbar.update()
        driver.execute_script("window.localStorage.clear();")
    except Exception as e:
        pbar.close()
        print(e)
        break
First, the website invokes JavaScript to fetch new data: clicking the "more results" button triggers an HTTP GET request to an API, and the response is the data needed to populate the page with more results. You can view this request by inspecting the page --> Network tools --> XHR and then clicking the button.
The most efficient way to grab data from a website that invokes JavaScript is to re-engineer the HTTP request the JavaScript is making.
In this case it's relatively easy: I copied the request as a cURL command from the XHR tab and converted it to Python using curl.trillworks.com.
(Screenshots omitted.) In the XHR tab, before clicking the "more results" button, the request list is empty; after clicking, a new request appears. Copying that request as cURL and pasting it into curl.trillworks.com converts it into params, cookies and headers, and gives you boilerplate for the requests package.
Having played around with the request using the requests package: the copied request includes cookies, but they are actually not necessary. The simplest request is one without headers, parameters or cookies, but most API endpoints don't accept that. Here you need a user-agent header and the parameters that specify what data you get back from the API; in fact, the user-agent doesn't even have to be valid.
Now you can run a while loop that keeps making HTTP requests, growing the size in increments of 8. Unfortunately, simply requesting one huge size in the parameters doesn't get you all the data!
Coding Example
import requests
import time

i = 8
j = 1
headers = {
    'user-agent': 'M'
}
while True:
    params = (
        ('category', '9403'),
        ('city', 'russian-federation'),
        ('page', f'{j}'),
        ('size', f'{i}'),
    )
    response = requests.get('https://supl.biz/api/monolith/suppliers-catalog/search/',
                            headers=headers, params=params)
    if response.status_code != 200:
        break
    print(response.json()['hits'][0])
    i += 8
    j += 1
    time.sleep(4)  # be gentle on the server
Sample output:
{'id': 1373827,
'type': None,
'highlighted': None,
'count_days_on_tariff': 183,
'tariff_info': {'title_for_show': 'Поставщик Премиум',
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'origin_ru': {'id': 999, 'title': 'Санкт-Петербург'},
'title': 'ООО "СТАНДАРТ 10"',
'address': 'Пискаревский проспект, 150, корпус 2.',
'inn': '7802647317',
'delivery_types': ['self', 'transportcompany', 'suppliercars', 'railway'],
'summary': 'Сэндвич-панели: новые, 2й сорт, б/у. Холодильные камеры: новые, б/у. Двери для холодильных камер: новые, б/у. Строительство холодильных складов, ангаров и др. коммерческих объектов из сэндвич-панелей. Холодильное оборудование: новое, б/у.',
'phone': '79219602762',
'hash_id': 'lMJPgpEz7b',
'payment_types': ['cache', 'noncache'],
'logo_url': 'https://suplbiz-a.akamaihd.net/media/cache/37/9e/379e9fafdeaab4fc5a068bc90845b56b.jpg',
'proposals_count': 4218,
'score': 42423,
'reviews': 0,
'rating': '0.0',
'performed_orders_count': 1,
'has_replain_chat': False,
'verification_status': 2,
'proposals': [{'id': 20721916,
'title': 'Сэндвич панели PIR 100',
'description': 'Сэндвич панели. Наполнение Пенополиизлцианурат ПИР PIR. Толщина 100мм. Длина 3,2 метра. Rall9003/Rall9003. Вналичии 600м2. Количество: 1500',
'categories': [135],
'price': 1250.0,
'old_price': None,
'slug': 'sendvich-paneli-pir-100',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/72/4d/724d0ba4d4a2b7d459f3ca4416e58d7d.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/67/45/6745bb6f616b82f7cd312e27814b6b89.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 12,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 12,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722131,
'title': 'Сэндвич панели ппу100 б/у, 2,37 м',
'description': 'Сэндвич панели. Наполнение Пенополиуретан ППУ ПУР PUR. Толщина 100 мм. длинна 2,37 метра. rall9003/rall9003. БУ. В наличии 250 м2.',
'categories': [135],
'price': 800.0,
'old_price': None,
'slug': 'sendvich-paneli-ppu100-b-u-2-37-m',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/d1/49/d1498144bc7b324e288606b0d7d98120.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/10/4b/104b54cb9b7ddbc6b2f0c1c5a01cdc2d.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 4,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 4,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722293,
'title': 'Холодильная камера polair 2.56х2.56х2.1',
'description': 'Холодильная камера. Размер 2,56 Х 2,56 Х 2,1. Камера из сэндвич панелей ППУ80. Камера с дверью. -5/+5 или -18. В наличии. Подберем моноблок или сплит систему. …',
'categories': [478],
'price': 45000.0,
'old_price': None,
'slug': 'holodilnaya-kamera-polair-2-56h2-56h2-1',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/c1/9f/c19f38cd6893a3b94cbdcbdb8493c455.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/4d/b0/4db06a2508cccf5b2e7fe822c1b892a2.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 5,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 5,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722112,
'title': 'Сэндвич панели ппу 80 б/у, 2,4 м',
'description': 'Сэндвич панели. Наполнение ППУ. Толщина 80 мм. длинна 2,4 метра. БУ. В наличии 350 м2.',
'categories': [135],
'price': 799.0,
'old_price': None,
'slug': 'sendvich-paneli-ppu-80-b-u-2-4-m',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/ba/06/ba069a73eda4641030ad69633d79675d.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/4f/e9/4fe9f3f358f775fa828c532a6c08e7f2.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 8,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 8,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722117,
'title': 'Сэндвич панели ппу 60 мм, 2,99 м',
'description': 'Сэндвич панели. Наполнение Пенополиуретан ППУ ПУР PUR . Новые. В наличии 600 м2. Толщина 60 мм. длинна 2,99 метров. rall9003/rall9003',
'categories': [135],
'price': 1100.0,
'old_price': None,
'slug': 'sendvich-paneli-ppu-60-mm-2-99-m',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/e2/fb/e2fb6505a5af74a5a994783a5e51600c.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/9c/f5/9cf5905a26e6b2ea1fc16d50c19ef488.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 10,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 10,
'location': {'lon': 30.31413, 'lat': 59.93863}}}]}
Explanation
Here we make sure the response status is 200 before making another request. Using f-strings, we increase the page by 1 and the size of the JSON result by 8 on each iteration of the while loop. I've imposed a delay per request, because if you push too many HTTP requests at once you'll end up getting IP banned. Be gentle on the server!
The response.json() method converts the JSON payload into a Python dictionary. You haven't specified which data you need, but if you can handle a Python dictionary you can grab whatever you require.
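Once the JSON is a dictionary, pulling fields out is plain dict/list traversal. A sketch against the sample output above, trimmed to the keys actually touched:

```python
# One entry from response.json()['hits'], trimmed to the relevant keys
hit = {
    'id': 1373827,
    'title': 'ООО "СТАНДАРТ 10"',
    'origin_ru': {'id': 999, 'title': 'Санкт-Петербург'},
    'proposals': [
        {'id': 20721916, 'title': 'Сэндвич панели PIR 100', 'price': 1250.0},
        {'id': 20722131, 'title': 'Сэндвич панели ппу100 б/у, 2,37 м', 'price': 800.0},
    ],
}

supplier = hit['title']                          # top-level key
city = hit['origin_ru']['title']                 # nested dict
prices = [p['price'] for p in hit['proposals']]  # list of dicts

print(supplier, city, prices)
```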
Comments
The parameters come from the request itself: you can see the page and size values in the request URL in the XHR tab.

How to create an invoice and invoice lines via code - Odoo13

I am trying to create an invoice and invoice lines via python code.
Here is the code.
def createInvoice(self, date_ref):
    # ... skipped some code here ...
    invoice_values = contract._prepare_invoice(date_ref)
    for line in contract_lines:
        invoice_values.setdefault('invoice_line_ids', [])
        invoice_line_values = line._prepare_invoice_line(
            invoice_id=False
        )
        if invoice_line_values:
            invoice_values['invoice_line_ids'].append(
                (0, 0, invoice_line_values)
            )
    invoices_values.append(invoice_values)
The prepared values:
invoice_values = {'type': 'in_invoice', 'journal_id': 2, 'company_id': 1, 'line_ids': [(6, 0, [])],
'partner_id': 42, 'commercial_partner_id': 42, 'fiscal_position_id': False,
'invoice_payment_term_id': False, 'invoice_line_ids': [(6, 0, [])],
'invoice_partner_bank_id': False, 'invoice_cash_rounding_id': False,
'bank_partner_id': 42, 'currency_id': 130, 'invoice_date': datetime.date(2020, 11, 11),
'invoice_origin': 'Vendor COntract #1', 'user_id': 2, 'old_contract_id': 6}
invoice_line_values = {'move_id': False, 'journal_id': False, 'company_id': False,
'account_id': False, 'name': '[E-COM07] Large Cabinet', 'quantity': 1.0,
'price_unit': 1444.01, 'discount': 0.0, 'partner_id': False,
'product_uom_id': 1, 'product_id': 17, 'payment_id': False,
'tax_ids': [(6, 0, [])], 'analytic_line_ids': [(6, 0, [])],
'display_type': False, 'contract_line_id': 7, 'asset_id': False,
'analytic_account_id': False}
In the create function of account.move:
vals = {'date': datetime.date(2020, 2, 11), 'type': 'in_invoice', 'journal_id': 2,
'company_id': 1, 'currency_id': 130, 'line_ids': [(6, 0, [])], 'partner_id': 42,
'commercial_partner_id': 42, 'fiscal_position_id': False, 'user_id': 2,
'invoice_date': datetime.date(2020, 12, 11), 'invoice_origin': 'Vendor COntract #1',
'invoice_payment_term_id': False,
'invoice_line_ids': [(6, 0, []), (0, 0, {'journal_id': False, 'company_id': False,
'account_id': 109, 'name': '[E-COM07] Large Cabinet', 'quantity': 1.0, 'price_unit': 1444.01,
'discount': 0.0, 'partner_id': False, 'product_uom_id': 1, 'product_id': 17,
'payment_id': False, 'tax_ids': [(6, 0, [19])], 'analytic_line_ids': [(6, 0, [])],
'analytic_account_id': False, 'display_type': False, 'exclude_from_invoice_tab': False,
'contract_line_id': 7, 'asset_id': False}), (0, 0, {'journal_id': False,
'company_id': False, 'account_id': 109, 'name': '[E-COM09] Large Desk', 'quantity': 1.0,
'price_unit': 8118.04, 'discount': 0.0, 'partner_id': False, 'product_uom_id': 1,
'product_id': 19, 'payment_id': False, 'tax_ids': [(6, 0, [19])],
'analytic_line_ids': [(6, 0, [])], 'analytic_account_id': False, 'display_type': False,
'exclude_from_invoice_tab': False, 'contract_line_id': 8, 'asset_id': False})],
'invoice_partner_bank_id': False, 'invoice_cash_rounding_id': False, 'bank_partner_id': 42,
'old_contract_id': 6}
It creates the account_move (invoice) but not the account_move_line records (invoice lines).
What am I missing here?
Finally got the solution:
'line_ids': [(6, 0, [])],
The above line caused the problem. I removed it from invoice_values and then it worked.
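For context, a hedged sketch of why that line clobbers the lines: in Odoo's x2many command tuples, (0, 0, vals) creates a new related record, while (6, 0, ids) replaces the whole set of links with ids, so an explicit line_ids: [(6, 0, [])] tells the ORM the move's journal items should be empty. The dict below is a trimmed, hypothetical version of the vals shown above:

```python
# Trimmed, hypothetical vals for env['account.move'].create(vals)
vals = {
    'type': 'in_invoice',
    'line_ids': [(6, 0, [])],  # (6, 0, ids): replace all links; [] empties them
    'invoice_line_ids': [
        (0, 0, {'name': '[E-COM07] Large Cabinet', 'quantity': 1.0}),  # (0, 0, vals): create a line
    ],
}

# The fix: drop the empty line_ids command so the ORM derives the
# journal items from invoice_line_ids instead of forcing them empty.
vals.pop('line_ids', None)
print(sorted(vals))
```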

How to extract values from a JSON dataframe column by a particular key

json_details
{'dob': '1981-06-30', 'name': 'T ', 'date': None, 'val': {'ENG': None, 'US': None}}
{'dob': '2001-09-27', 'name': 'A NGR', 'date': None}
{'dob': '2000-07-12', 'name': 'T B MV', 'date': None, 'val': {'ENG': None, 'US': None}}
{'dob': '1983-01-01', 'name': 'E K', 'date': None, 'val': {'ENG': None, 'US': '2034-11-18'}}
{'dob': '1994-10-25', 'name': 'DF', 'date': None, 'val': {'ENG': '2034-11-18', 'US': None}}
I need to extract 2 keys from the json_details column. Some rows have no val key, so the following will throw a KeyError and stop:
df['json_details'][0]['val']['ENG']
df['json_details'][0]['val']['US']
Expected Out
df['json_details']['ENG']
None
No keys
None
None
2034-11-18
df['json_details']['US']
None
No keys
None
2034-11-18
None
The solution is:
df['eng'] = df['json_details'].str['val'].str['ENG']
df['us'] = df['json_details'].str['val'].str['US']
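End-to-end, this works because the .str[...] accessor does element-wise lookup and turns failures into NaN instead of raising KeyError; since 'ENG'/'US' sit inside the nested 'val' dict, two hops are needed (the column names df['eng']/df['us'] are my choice):

```python
import pandas as pd

df = pd.DataFrame({'json_details': [
    {'dob': '1981-06-30', 'name': 'T', 'val': {'ENG': None, 'US': None}},
    {'dob': '2001-09-27', 'name': 'A NGR'},  # no 'val' key at all
    {'dob': '1983-01-01', 'name': 'E K', 'val': {'ENG': None, 'US': '2034-11-18'}},
]})

# .str[...] indexes each element; rows missing the key become NaN, no KeyError
df['eng'] = df['json_details'].str['val'].str['ENG']
df['us'] = df['json_details'].str['val'].str['US']

print(df[['eng', 'us']])
```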

pandas same attribute comparison

I have the following dataframe:
df = pd.DataFrame([{'name': 'a', 'label': 'false', 'score': 10},
{'name': 'a', 'label': 'true', 'score': 8},
{'name': 'c', 'label': 'false', 'score': 10},
{'name': 'c', 'label': 'true', 'score': 4},
{'name': 'd', 'label': 'false', 'score': 10},
{'name': 'd', 'label': 'true', 'score': 6},
])
I want to return the names whose "false"-label score is at least double their "true"-label score. In my example it should return only the name "c".
First you can pivot the data, then look at the ratio and filter for what you want:
new_df = df.pivot(index='name',columns='label', values='score')
new_df[new_df['false'].div(new_df['true']).gt(2)]
output:
label false true
name
c 10 4
If you only want the label, you can do:
new_df.index[new_df['false'].div(new_df['true']).gt(2)].values
which gives
array(['c'], dtype=object)
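Putting the steps above together as one runnable check against the sample frame (same pivot/ratio approach):

```python
import pandas as pd

df = pd.DataFrame([{'name': 'a', 'label': 'false', 'score': 10},
                   {'name': 'a', 'label': 'true', 'score': 8},
                   {'name': 'c', 'label': 'false', 'score': 10},
                   {'name': 'c', 'label': 'true', 'score': 4},
                   {'name': 'd', 'label': 'false', 'score': 10},
                   {'name': 'd', 'label': 'true', 'score': 6}])

# one row per name, one column per label
wide = df.pivot(index='name', columns='label', values='score')
# keep names where the 'false' score is more than double the 'true' score
result = wide.index[wide['false'].div(wide['true']).gt(2)].tolist()
print(result)  # ['c']  (10/4 = 2.5 > 2; 'a' gives 1.25, 'd' gives 1.67)
```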
Update: since your data is the result of orig_df.groupby(...).count(), you could instead do:
orig_df['label'].eq('true').groupby(orig_df['name']).mean()
and keep the rows with values <= 1/3.