Decode HexBytes into readable strings in Web3.py

After fetching a transaction with hash = '0x0563998623f30f4cd01a22ef852a8155ef693a271a948db0b4ec4a1e4f95ed12' and calling web3.eth.get_transaction(hash), I get the following response:
AttributeDict({'blockHash': HexBytes('0x19b7cb60bba6a238e5443087cd5dc24f40d7c2c3273a21b9e3f93a3b007a231a'), 'blockNumber': 13701231, 'from': '0x488333BBD386acCdA756A6F30fda8462659B4821', 'gas': 1763092, 'gasPrice': 5000000000, 'hash': HexBytes('0x0563998623f30f4cd01a22ef852a8155ef693a271a948db0b4ec4a1e4f95ed12'), 'input': '0x4bb278f3', 'nonce': 24, 'to': '0xcc9362583D87799E8eaa21Dd53F8032Ba7f5DE16', 'transactionIndex': 727, 'value': 0, 'type': '0x0', 'v': 148, 'r': HexBytes('0x5d4484ff8f70fca0120f38d8bc317c3bf31fd7676709293a3d9b51c9d6b09bb9'), 's': HexBytes('0x0a2b6050dd3a5fefecaaf13fcc17fab0f8b9ac561abef1b4ec9fc0afff2b8ba2')})
What exactly are the HexBytes values tagged 'r' and 's'? How do you extract the information encoded in them? web3.eth.get_transaction(hash) does not seem to work on those. Thanks in advance for your help.
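For context: 'r' and 's' (together with 'v') are the components of the transaction's ECDSA signature; they are not encoded text, just large numbers, so "decoding" them really means rendering the bytes. A minimal sketch of how to display them, assuming the hexbytes package that web3.py returns these values with:
from hexbytes import HexBytes

r = HexBytes('0x5d4484ff8f70fca0120f38d8bc317c3bf31fd7676709293a3d9b51c9d6b09bb9')

print(r.hex())                   # hex string representation
print(int.from_bytes(r, 'big'))  # the same value as a (large) integer
print(bytes(r))                  # the raw bytes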

Related

fetch the data from array of objects sql BigQuery

I need to fetch the key/value pairs from the second object in an array and create new columns with the fetched data. I am only interested in the second object; some arrays have 3 objects, some have 4, etc. The data looks like this:
[{'adUnitCode': ca-pub, 'id': 35, 'name': ca-pub}, {'adUnitCode': hmies, 'id': 49, 'name': HMIES}, {'adUnitCode': moda, 'id': 50, 'name': moda}, {'adUnitCode': nova, 'id': 55, 'name': nova}, {'adUnitCode': listicle, 'id': 11, 'name': listicle}]
[{'adUnitCode': ca-pub, 'id': 35, 'name': ca-pub-73}, {'adUnitCode': hmiuk-jam, 'id': 23, 'name': HM}, {'adUnitCode': recipes, 'id': 26, 'name': recipes}]
[{'adUnitCode': ca-pub, 'id': 35, 'name': ca-pub-733450927}, {'adUnitCode': digital, 'id': 48, 'name': Digital}, {'adUnitCode': movies, 'id': 50, 'name': movies}, {'adUnitCode': cannes-film-festival, 'id': 57, 'name': cannes-film-festival}, {'adUnitCode': article, 'id': 57, 'name': article}]
The desired output:
adUnitCode    id    name
hmies         49    HMIES
hmiuk-jam     23    HM
digital       48    Digital
Below is for BigQuery Standard SQL
#standardSQL
select
  json_extract_scalar(second_object, "$.adUnitCode") as adUnitCode,
  json_extract_scalar(second_object, "$.id") as id,
  json_extract_scalar(second_object, "$.name") as name
from `project.dataset.table`, unnest(
  [json_extract_array(regexp_replace(mapping, r"(: )([\w-]+)(,|})", "\\1'\\2'\\3"))[safe_offset(1)]]
) as second_object
If applied to the sample data from your question, the output matches the desired result above.
As you can see, the "trick" here is to use a proper regexp in the regexp_replace function. I've included alphabetical characters and "-"; you can include more as needed.
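To illustrate what that regexp_replace is doing, here is a hypothetical Python sketch of the same substitution (re.sub standing in for regexp_replace); it wraps every bare value that follows ": " in quotes so json_extract_array can parse the string:
import re

row = "[{'adUnitCode': ca-pub, 'id': 35, 'name': ca-pub}, {'adUnitCode': hmies, 'id': 49, 'name': HMIES}]"

# quote every bare value that follows ": " and ends with "," or "}"
quoted = re.sub(r"(: )([\w-]+)(,|})", r"\1'\2'\3", row)
print(quoted)
# [{'adUnitCode': 'ca-pub', 'id': '35', 'name': 'ca-pub'}, {'adUnitCode': 'hmies', 'id': '49', 'name': 'HMIES'}]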
As an alternative, you can try regexp_replace(mapping, r"(: )([^,}]+)", "\\1'\\2'") as in the example below, so you cover potentially more cases without changing the code:
#standardSQL
select
  json_extract_scalar(second_object, "$.adUnitCode") as adUnitCode,
  json_extract_scalar(second_object, "$.id") as id,
  json_extract_scalar(second_object, "$.name") as name
from `project.dataset.table`, unnest(
  [json_extract_array(regexp_replace(mapping, r"(: )([^,}]+)", "\\1'\\2'"))[safe_offset(1)]]
) as second_object

How to make selenium page loading more efficient if accessing all data on the page requires 10k+ clicks of a lazyload button?

I am scraping one particular page with a headless chromedriver.
The page is really huge; to load it entirely I need 10k+ clicks on a lazy-load button.
The more I click, the slower things get.
Is there a way to make the process faster?
Here is the code:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from tqdm import tqdm

def driver_config():
    chrome_options = Options()
    prefs = {"profile.managed_default_content_settings.images": 2}
    chrome_options.add_experimental_option("prefs", prefs)
    chrome_options.page_load_strategy = 'eager'
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome(options=chrome_options)
    return driver

def scroll_the_category_until_the_end(driver, category_url):
    driver.get(category_url)
    pbar = tqdm()
    pbar.write('initializing spin')
    while True:
        try:
            show_more_button = WebDriverWait(driver, 20).until(
                EC.element_to_be_clickable((By.XPATH, '//*[@id="root"]/div/div[2]/div[2]/div[2]/button')))
            driver.execute_script("arguments[0].click();", show_more_button)
            pbar.update()
        except TimeoutException:
            pbar.write('docking')
            pbar.close()
            break

driver = driver_config()
scroll_the_category_until_the_end(driver, 'https://supl.biz/russian-federation/stroitelnyie-i-otdelochnyie-materialyi-supplierscategory9403/')
UPDATE:
I also tried to implement another strategy, but it didn't work:
deleting all company information on every iteration
clearing the driver cache
My hypothesis was that if I did this, the DOM would always stay clean and fast.
driver = driver_config()
driver.get('https://supl.biz/russian-federation/stroitelnyie-i-otdelochnyie-materialyi-supplierscategory9403/')
pbar = tqdm()
pbar.clear()
while True:
    try:
        for el in driver.find_elements_by_class_name('a_zvOKG8vZ'):
            driver.execute_script("""var element = arguments[0];element.parentNode.removeChild(element);""", el)
        button = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//*[contains(text(), 'Показать больше поставщиков')]")))
        driver.execute_script("arguments[0].click();", button)
        pbar.update()
        driver.execute_script("window.localStorage.clear();")
    except Exception as e:
        pbar.close()
        print(e)
        break
First, the website uses JavaScript to grab new data. The HTTP request is triggered by clicking the "more results" button: it calls an API, and the response contains the data needed to populate the page with more results. You can view this request by inspecting the page --> Network tools --> XHR and then clicking the button. It sends an HTTP GET request to an API which has data on each product.
The most efficient way to grab data from a website that invokes JavaScript is to re-engineer the HTTP request the JavaScript is making.
In this case it's relatively easy: I copied the request as a cURL command from the XHR tab and converted it to Python using curl.trillworks.com.
The XHR tab is empty before you click the "more results" button; once you click it, you can see the request that populated the page. Copy that request as cURL to grab the necessary headers, then paste it into curl.trillworks.com, which converts the request into params, cookies and headers and gives you boilerplate for the requests package.
I had a play around with the request using the requests package. The copied request provides cookies and various headers, but they're actually not necessary when you make the request.
The simplest request is one without headers, parameters or cookies, but most API endpoints don't accept that. In this case, having played around with the requests package, you need a user-agent and the parameters that specify what data you get back from the API. In fact, you don't even need a valid user-agent.
Now you can use a while loop to keep making HTTP requests, increasing the size in steps of 8. Unfortunately, just altering the size parameter of a single request doesn't get you all the data!
Coding Example
import requests
import time

i = 8   # number of results per page
j = 1   # page number

headers = {
    'user-agent': 'M'
}

while True:
    params = (
        ('category', '9403'),
        ('city', 'russian-federation'),
        ('page', f'{j}'),
        ('size', f'{i}'),
    )
    response = requests.get('https://supl.biz/api/monolith/suppliers-catalog/search/', headers=headers, params=params)
    if response.status_code != 200:
        break
    print(response.json()['hits'][0])
    i += 8
    j += 1
    time.sleep(4)
Output
Sample output
{'id': 1373827,
'type': None,
'highlighted': None,
'count_days_on_tariff': 183,
'tariff_info': {'title_for_show': 'Поставщик Премиум',
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'origin_ru': {'id': 999, 'title': 'Санкт-Петербург'},
'title': 'ООО "СТАНДАРТ 10"',
'address': 'Пискаревский проспект, 150, корпус 2.',
'inn': '7802647317',
'delivery_types': ['self', 'transportcompany', 'suppliercars', 'railway'],
'summary': 'Сэндвич-панели: новые, 2й сорт, б/у. Холодильные камеры: новые, б/у. Двери для холодильных камер: новые, б/у. Строительство холодильных складов, ангаров и др. коммерческих объектов из сэндвич-панелей. Холодильное оборудование: новое, б/у.',
'phone': '79219602762',
'hash_id': 'lMJPgpEz7b',
'payment_types': ['cache', 'noncache'],
'logo_url': 'https://suplbiz-a.akamaihd.net/media/cache/37/9e/379e9fafdeaab4fc5a068bc90845b56b.jpg',
'proposals_count': 4218,
'score': 42423,
'reviews': 0,
'rating': '0.0',
'performed_orders_count': 1,
'has_replain_chat': False,
'verification_status': 2,
'proposals': [{'id': 20721916,
'title': 'Сэндвич панели PIR 100',
'description': 'Сэндвич панели. Наполнение Пенополиизлцианурат ПИР PIR. Толщина 100мм. Длина 3,2 метра. Rall9003/Rall9003. Вналичии 600м2. Количество: 1500',
'categories': [135],
'price': 1250.0,
'old_price': None,
'slug': 'sendvich-paneli-pir-100',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/72/4d/724d0ba4d4a2b7d459f3ca4416e58d7d.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/67/45/6745bb6f616b82f7cd312e27814b6b89.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 12,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 12,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722131,
'title': 'Сэндвич панели ппу100 б/у, 2,37 м',
'description': 'Сэндвич панели. Наполнение Пенополиуретан ППУ ПУР PUR. Толщина 100 мм. длинна 2,37 метра. rall9003/rall9003. БУ. В наличии 250 м2.',
'categories': [135],
'price': 800.0,
'old_price': None,
'slug': 'sendvich-paneli-ppu100-b-u-2-37-m',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/d1/49/d1498144bc7b324e288606b0d7d98120.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/10/4b/104b54cb9b7ddbc6b2f0c1c5a01cdc2d.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 4,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 4,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722293,
'title': 'Холодильная камера polair 2.56х2.56х2.1',
'description': 'Холодильная камера. Размер 2,56 Х 2,56 Х 2,1. Камера из сэндвич панелей ППУ80. Камера с дверью. -5/+5 или -18. В наличии. Подберем моноблок или сплит систему. …',
'categories': [478],
'price': 45000.0,
'old_price': None,
'slug': 'holodilnaya-kamera-polair-2-56h2-56h2-1',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/c1/9f/c19f38cd6893a3b94cbdcbdb8493c455.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/4d/b0/4db06a2508cccf5b2e7fe822c1b892a2.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 5,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 5,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722112,
'title': 'Сэндвич панели ппу 80 б/у, 2,4 м',
'description': 'Сэндвич панели. Наполнение ППУ. Толщина 80 мм. длинна 2,4 метра. БУ. В наличии 350 м2.',
'categories': [135],
'price': 799.0,
'old_price': None,
'slug': 'sendvich-paneli-ppu-80-b-u-2-4-m',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/ba/06/ba069a73eda4641030ad69633d79675d.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/4f/e9/4fe9f3f358f775fa828c532a6c08e7f2.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 8,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 8,
'location': {'lon': 30.31413, 'lat': 59.93863}}},
{'id': 20722117,
'title': 'Сэндвич панели ппу 60 мм, 2,99 м',
'description': 'Сэндвич панели. Наполнение Пенополиуретан ППУ ПУР PUR . Новые. В наличии 600 м2. Толщина 60 мм. длинна 2,99 метров. rall9003/rall9003',
'categories': [135],
'price': 1100.0,
'old_price': None,
'slug': 'sendvich-paneli-ppu-60-mm-2-99-m',
'currency': 'RUB',
'price_details': 'Цена за шт.',
'image': {'preview_220x136': 'https://suplbiz-a.akamaihd.net/media/cache/e2/fb/e2fb6505a5af74a5a994783a5e51600c.jpg',
'image_dominant_color': '#ffffff',
'preview_140': 'https://suplbiz-a.akamaihd.net/media/cache/9c/f5/9cf5905a26e6b2ea1fc16d50c19ef488.jpg',
'hash': 'd41d8cd98f00b204e9800998ecf8427e'},
'additional_images': [],
'availability': 1,
'views': 10,
'seo_friendly': False,
'user': {'id': 1373827,
'name': 'ООО "СТАНДАРТ 10"',
'phone': '+79219602762',
'address': 'Пискаревский проспект, 150, корпус 2.',
'origin_id': 999,
'country_id': 1,
'origin_title': 'Санкт-Петербург',
'verified': False,
'score': 333,
'rating': 0.0,
'reviews': 0,
'tariff': {'title_for_show': 'Поставщик Премиум',
'count_days_on_tariff': 183,
'finish_date': '2021-02-13',
'url': '/supplier-premium-membership/',
'start_date': '2020-08-13'},
'performed_orders_count': 1,
'views': 10,
'location': {'lon': 30.31413, 'lat': 59.93863}}}]}
Explanation
Here we're making sure that the response status is 200 before making another request. Using f-strings, we increase the page by 1 and the size of the JSON result by 8 on each iteration of the while loop. I've imposed a time delay per request, because if you push too many HTTP requests at once you'll end up getting IP banned. Be gentle on the server!
The response.json() method converts the JSON response into a Python dictionary. You haven't specified what data you need, but if you can handle a Python dictionary you can grab the data you require.
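For example, if you only want a few fields per supplier, a small sketch using keys visible in the sample output above (title, phone, origin_ru, proposals_count) might look like this:
suppliers = []
for hit in response.json()['hits']:
    suppliers.append({
        'title': hit['title'],
        'phone': hit['phone'],
        'city': hit['origin_ru']['title'],
        'proposals_count': hit['proposals_count'],
    })
print(suppliers[0])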
Comments
The parameters come from the query string of the XHR request, where you can see the page and size values.

convert pandas dataframe to list and nest a dict?

I have a list:
l = [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
I can convert it to a DataFrame, but how do I convert it back?
frame = pd.DataFrame(l)
We have to_dict, where 'r' is shorthand for the 'records' orient:
frame.to_dict('r')
Out[67]: [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
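Note that recent pandas versions expect the full orient name rather than the single-letter abbreviation; a minimal round-trip sketch:
import pandas as pd

l = [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
frame = pd.DataFrame(l)

# 'records' returns one dict per row, the same shape as the original list
print(frame.to_dict('records'))
# [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]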

pandas same attribute comparison

I have the following dataframe:
df = pd.DataFrame([{'name': 'a', 'label': 'false', 'score': 10},
{'name': 'a', 'label': 'true', 'score': 8},
{'name': 'c', 'label': 'false', 'score': 10},
{'name': 'c', 'label': 'true', 'score': 4},
{'name': 'd', 'label': 'false', 'score': 10},
{'name': 'd', 'label': 'true', 'score': 6},
])
I want to return the names where the "false" label's score is at least double the "true" label's score. In my example, it should return only the name "c".
First you can pivot the data, then look at the ratio and filter what you want:
new_df = df.pivot(index='name',columns='label', values='score')
new_df[new_df['false'].div(new_df['true']).gt(2)]
output:
label  false  true
name
c         10     4
If you only want the names, you can do:
new_df.index[new_df['false'].div(new_df['true']).gt(2)].values
which gives
array(['c'], dtype=object)
Update: since your data is the result of orig_df.groupby().count(), you could instead work directly on the original frame:
orig_df['label'].eq('true').groupby(orig_df['name']).mean()
and look at the rows with values <= 1/3 (a "false" count of at least double the "true" count means the fraction of 'true' rows for that name is at most 1/3).

Jupyter .save_to_html function does not store config

I'm trying to use the .save_to_html() function for a kepler.gl jupyter notebook map.
It all works great inside jupyter, and I can re-load the same map with a defined config.
Where it goes wrong is when I use the save_to_html() function. The map saves to an HTML file, but the configuration reverts to the basic configuration it had before I customized it.
Please help! I love kepler; once I solve this little thing, it will be our absolute go-to tool.
Thanks
I have tried changing the filters, colours, and point sizes. None of this works.
map_1 = KeplerGl(height=500, data={'data': df}, config=config)
map_1
config = map_1.config
config
map_1.save_to_html(data={'data_1': df},
                   file_name='privateers.html', config=config)
Config
{'version': 'v1',
'config': {'visState': {'filters': [{'dataId': 'data',
'id': 'x8t9c53mf',
'name': 'time_update',
'type': 'timeRange',
'value': [1565687902187.5417, 1565775465282],
'enlarged': True,
'plotType': 'histogram',
'yAxis': None},
{'dataId': 'data',
'id': 'biysqlu36',
'name': 'user_id',
'type': 'multiSelect',
'value': ['HNc0SI3WsQfhOFRF2THnUEfmqJC3'],
'enlarged': False,
'plotType': 'histogram',
'yAxis': None}],
'layers': [{'id': 'ud6168',
'type': 'point',
'config': {'dataId': 'data',
'label': 'Point',
'color': [18, 147, 154],
'columns': {'lat': 'lat', 'lng': 'lng', 'altitude': None},
'isVisible': True,
'visConfig': {'radius': 5,
'fixedRadius': False,
'opacity': 0.8,
'outline': False,
'thickness': 2,
'strokeColor': None,
'colorRange': {'name': 'Uber Viz Qualitative 1.2',
'type': 'qualitative',
'category': 'Uber',
'colors': ['#12939A',
'#DDB27C',
'#88572C',
'#FF991F',
'#F15C17',
'#223F9A'],
'reversed': False},
'strokeColorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'radiusRange': [0, 50],
'filled': True},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': {'name': 'ride_id', 'type': 'string'},
'colorScale': 'ordinal',
'strokeColorField': None,
'strokeColorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}},
{'id': 'an8tbef',
'type': 'point',
'config': {'dataId': 'data',
'label': 'previous',
'color': [221, 178, 124],
'columns': {'lat': 'previous_lat',
'lng': 'previous_lng',
'altitude': None},
'isVisible': False,
'visConfig': {'radius': 10,
'fixedRadius': False,
'opacity': 0.8,
'outline': False,
'thickness': 2,
'strokeColor': None,
'colorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'strokeColorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'radiusRange': [0, 50],
'filled': True},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': None,
'colorScale': 'quantile',
'strokeColorField': None,
'strokeColorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}},
{'id': 'ilpixu9',
'type': 'arc',
'config': {'dataId': 'data',
'label': ' -> previous arc',
'color': [146, 38, 198],
'columns': {'lat0': 'lat',
'lng0': 'lng',
'lat1': 'previous_lat',
'lng1': 'previous_lng'},
'isVisible': True,
'visConfig': {'opacity': 0.8,
'thickness': 2,
'colorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'sizeRange': [0, 10],
'targetColor': None},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': None,
'colorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}},
{'id': 'inv52pp',
'type': 'line',
'config': {'dataId': 'data',
'label': ' -> previous line',
'color': [136, 87, 44],
'columns': {'lat0': 'lat',
'lng0': 'lng',
'lat1': 'previous_lat',
'lng1': 'previous_lng'},
'isVisible': False,
'visConfig': {'opacity': 0.8,
'thickness': 2,
'colorRange': {'name': 'Global Warming',
'type': 'sequential',
'category': 'Uber',
'colors': ['#5A1846',
'#900C3F',
'#C70039',
'#E3611C',
'#F1920E',
'#FFC300']},
'sizeRange': [0, 10],
'targetColor': None},
'textLabel': [{'field': None,
'color': [255, 255, 255],
'size': 18,
'offset': [0, 0],
'anchor': 'start',
'alignment': 'center'}]},
'visualChannels': {'colorField': None,
'colorScale': 'quantile',
'sizeField': None,
'sizeScale': 'linear'}}],
'interactionConfig': {'tooltip': {'fieldsToShow': {'data': ['time_ride_start',
'user_id',
'ride_id']},
'enabled': True},
'brush': {'size': 0.5, 'enabled': False}},
'layerBlending': 'normal',
'splitMaps': []},
'mapState': {'bearing': 0,
'dragRotate': False,
'latitude': 49.52565611453996,
'longitude': 6.2730441822977845,
'pitch': 0,
'zoom': 9.244725880765998,
'isSplit': False},
'mapStyle': {'styleType': 'dark',
'topLayerGroups': {},
'visibleLayerGroups': {'label': True,
'road': True,
'border': False,
'building': True,
'water': True,
'land': True,
'3d building': False},
'threeDBuildingColor': [9.665468314072013,
17.18305478057247,
31.1442867897876],
'mapStyles': {}}}}
Expected:
Fully configured map, as in the Jupyter widget
Actual:
Colors and filters are not configured. The size and position of the map are sent along, so if I store it while looking at an empty area, the HTML file opens looking at that same area.
In the Jupyter user guide for kepler.gl, under the save section:
# this will save current map
map_1.save_to_html(file_name='first_map.html')
# this will save map with provided data and config
map_1.save_to_html(data={'data_1': df}, config=config, file_name='first_map.html')
# this will save map with the interaction panel disabled
map_1.save_to_html(file_name='first_map.html', read_only=True)
So it looks like it's a bug if the configuration parameter doesn't work, or you are making changes to the map configuration after you set it equal to config. This would be fixed if you set:
map_1.save_to_html(data={'data_1': df},
                   file_name='privateers.html', config=map_1.config)
I think it is a bug (or feature?) that happens when you save the map configuration in the same cell, or when you haven't printed the map out yet. Generally, the config only exists after you actually print the map out.
The problem, as far as I see it and how I solved a similar problem, is that (1) the 'data' key you used when instancing the map is different from (2) the one you told it to save in the HTML:
map_1 = KeplerGl(height=500, data={'data': df},config=config)
map_1.save_to_html(data={'data_1': df}, file_name='privateers.html',config=config)
Name both keys the same and your HTML file will use the correct configuration.
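For example, keeping the key 'data' in both calls (same objects as above) should carry the configuration over into the saved HTML:
map_1 = KeplerGl(height=500, data={'data': df}, config=config)
map_1.save_to_html(data={'data': df}, file_name='privateers.html', config=config)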
I had this issue as well. I solved it by converting all pandas column dtypes to ones that are JSON serializable, i.e. converting a 'datetime' column from dtype <m8[ns] to object.
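A minimal sketch of that conversion, assuming df is the dataframe passed to KeplerGl (adjust the dtype selection to whichever non-serializable columns you have):
# cast datetime-like columns to plain strings (object dtype) so they can be JSON serialized
for col in df.select_dtypes(include=['datetime', 'timedelta']).columns:
    df[col] = df[col].astype(str)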