When you right-click a project tree item, a menu with several options appears. I'm trying to add one more button, ide.

I'm using the code below:
# Create the submenu for the context-menu entry.
context_menu = self.main.create_menu(DirViewMenust.Newt, _('Newtest'))

# Create the action the entry should trigger; `triggered` expects the
# slot itself (a lambda that merely returns the method never calls it).
new_file_action = self.main.create_action(
    DirViewActionst.NewFilet,
    text=_("Test"),
    icon=None,
    triggered=self.on_clicked,
)

# Attach the action to the menu.
self.main.add_item_to_menu(new_file_action, context_menu,
                           section=ExplorerWidgetMainToolbarSections.Main,
                           before=None)
I'm using the API below:
# Local imports
from spyder.plugins.explorer.widgets.explorer import (
    DirViewActions, DirViewContextMenuSections, DirViewMenus,
    DirViewNewSubMenuSections, ExplorerTreeWidgetActions)
from spyder.plugins.explorer.widgets.main_widget import (
    ExplorerWidgetMainToolbarSections, ExplorerWidgetOptionsMenuSections)
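For reference, here is a minimal sketch of the usual pattern with the identifiers those imports provide. It assumes the code runs in a class that uses Spyder's widget mixins (as the Explorer widgets do), and that DirViewMenus.Context and DirViewContextMenuSections.New are the ids of the context menu and its "New" section; the action id 'test_action' is hypothetical, so check these names against your Spyder version:

# A sketch under assumptions: the mixin helpers create_action/get_menu/
# add_item_to_menu exist on Spyder widget classes; the menu and section
# ids are taken from the imports above but not verified for your version.
test_action = self.create_action(
    'test_action',                  # hypothetical unique action id
    text=_("Test"),
    icon=None,
    triggered=self.on_clicked,      # the slot itself, not a lambda returning it
)
context_menu = self.get_menu(DirViewMenus.Context)
self.add_item_to_menu(
    test_action,
    menu=context_menu,
    section=DirViewContextMenuSections.New,
)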


homeeondemand/react-native-mapbox-navigation user location icon

https://github.com/homeeondemand/react-native-mapbox-navigation/blob/master/android/src/main/java/com/homee/mapboxnavigation/MapboxNavigationView.kt
The code below is from MapboxNavigationView.kt and displays the default user location puck:
this.locationPuck = LocationPuck2D(
    bearingImage = ContextCompat.getDrawable(
        context,
        // the default puck icon; point this at your own drawable
        // (e.g. a car image in res/drawable) to customize the puck
        R.drawable.mapbox_user_puck_icon
    )
)
setLocationProvider(navigationLocationProvider)
enabled = true
} // closes an enclosing apply { ... } block whose opening line is not shown here
How can I modify this code to display a custom image of a car instead of the icon above?

Is there a way to add a numerator/denominator to a PieDonut?

I am creating a PieDonut using the webr package, and I was wondering if there is a way to add the n/N counts to the plot to make it clearer.
This is the code:
PieDonut(PD, aes(Q6, Q7, count = n), title = "",
         explode = 2, showDonutName = FALSE)

Why doesn't the text() function of XPath return any data with scrapy-selenium?

I am trying to scrape a website with scrapy-selenium and I am facing two problems:
1. When I apply the XPath in the Chrome developer tools it matches all the elements, but after the code executes it returns only one Selector object.
2. The text() function of the XPath expression returns None.
This is the URL I am trying to scrape: http://www.atab.org.bd/Member/Dhaka_Zone
Here is my code:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy_selenium import SeleniumRequest
from selenium.webdriver.common.keys import Keys

class AtabDSpider(scrapy.Spider):
    name = 'atab_d'

    def start_requests(self):
        yield SeleniumRequest(
            url = "https://www.atab.org.bd/Member/Dhaka_Zone",
            #url = "https://www.bit2lead.com",
            #wait_time = 15,
            wait_time = 3,
            callback = self.parse
        )

    def parse(self, response):
        companies = response.xpath("//ul[@class='row']/li")
        print("Numbers Of Iterable Item: " + str(len(companies)))
        for company in companies:
            yield {
                "company": company.xpath(".//div[@class='card']/div[1]/div/a/h3[@data-bind='text: NameOfOrganization']/text()").get()
                # also tried:
                # "company": company.xpath(".//div[@class='card']/div[1]/div/a/h3/text()").get()
            }
And this is the URL I was practicing on before: https://www.algoslab.com. That worked well, although it's simple enough.
Why don't you try the site's backend API directly, like the following, to get everything in one go in the blink of an eye:
import requests

link = 'http://123.253.36.205:8051/API/Member/GetMembersList?searchString=&zone=0&blacklisted=false'

r = requests.get(link)
for item in r.json():
    _name = item['NameOfOrganization']
    phone = item['Phone']
    print(_name, phone)
The output looks like this (it should produce 3160 lines of results):
Aqib Travels & Tours Ltd. +88-029101468, 58151369
4S Tours & Travels Ltd 8954750
5M Logistics And Tours Ltd +880 2 48810030
The XPath you want could be simplified to //h3[@data-bind='text: NameOfOrganization'] to select the element, and then you can take its text.
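To illustrate, a minimal sketch of that simplified selector inside the spider's parse() callback from the question; the assumption (mine) is that the SeleniumRequest has given the JavaScript enough time to render the list before parsing:

def parse(self, response):
    # The simplified XPath selects every organization-name heading at once;
    # getall() returns the matched text nodes as a list of strings.
    names = response.xpath("//h3[@data-bind='text: NameOfOrganization']/text()").getall()
    for name in names:
        yield {"company": name.strip()}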

How to add a second dataset to a plotly annotated heatmap?

I'm trying to create an annotated heatmap with a dropdown menu to switch between two different sets of data. The datasets have the same format and I have added a working dropdown menu. But I can only add one dataset at a time. I am using
fig = ff.create_annotated_heatmap(data, annotation_text=numbers, showscale=True, colorscale=colorscale, text=hover, hoverinfo='text')
to create the annotated heatmap. Is there a way to add a second dataset to switch between with the dropdown menu?
Resolved: I had to add the second dataset to the args of the dropdown menu object, along with any other changes needed (such as hover text).
I just realized how easy it is to switch between two plots with a menu. You can just take the data from each figure to create a list of traces to switch between:
from plotly.offline import init_notebook_mode, iplot
import plotly.figure_factory as ff

init_notebook_mode(connected=True)

fig_1 = ff.create_annotated_heatmap(...)
fig_2 = ff.create_annotated_heatmap(...)

menu_items = ["Heatmap 1", "Heatmap 2"]

trace1 = fig_1.to_dict()["data"][0]
trace2 = fig_2.to_dict()["data"][0]

buttons = []
for i, menu_item in enumerate(menu_items):
    visibility = [i == j for j in range(len(menu_items))]
    button = dict(
        label = menu_item,
        method = 'update',
        args = [{'visible': visibility},
                {'title': menu_item}])
    buttons.append(button)

updatemenus = list([
    dict(buttons = buttons)
])

layout = dict(updatemenus = updatemenus, title = menu_items[0])
fig = dict(data = [trace1, trace2], layout = layout)
iplot(fig)
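One detail worth adding, as a hedged sketch (the list name all_annotations is mine): ff.create_annotated_heatmap stores the cell numbers as layout annotations rather than in the trace, so if they should change together with the data, pass each figure's annotations through the layout part of the buttons' args as well:

# Each figure's annotations live in its layout, not in the trace.
all_annotations = [fig_1.to_dict()["layout"]["annotations"],
                   fig_2.to_dict()["layout"]["annotations"]]

# Inside the button-building loop, include them in the layout updates:
button = dict(
    label = menu_item,
    method = 'update',
    args = [{'visible': visibility},
            {'title': menu_item,
             'annotations': all_annotations[i]}])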

Splinter - Element is not clickable because another element <p> obscures it

I am trying to get some thumbnail pictures from a website, from their src attributes, as well as click on a link so I can later get the big picture.
For that I'm using Splinter with BeautifulSoup.
This is the HTML for the first element I need to get:
In order to do that I have the following code:
from splinter import Browser
from bs4 import BeautifulSoup

executable_path = {"executable_path": "/path/to/geckodriver"}
browser = Browser("firefox", **executable_path, headless=False)

def get_player_images():
    url = 'https://www.premierleague.com/players'
    # Initiate a splinter instance of the URL
    browser.visit(url)
    browser.find_by_tag('div[class="table playerIndex"]')
    soup = BeautifulSoup(browser.html, 'html.parser')
    for el in soup:
        td = el.findAll('td')
        for each_td in td:
            link = each_td.find('a', href=True)
            if link:
                print(link['href'])
            image = each_td.find('img')
            if image:
                print(image['src'])

# run
get_player_images()
But I'm running into 2 issues after the browser opens:
I'm only getting actual src values for the first couple of players. After that, the photos are missing, and I don't get why:
/players/19970/Max-Aarons/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p232980.png
/players/13279/Abdul-Rahman-Baba/overview
https://resources.premierleague.com/premierleague/photos/players/40x40/p118335.png
/players/13286/Tammy-Abraham/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/3512/Adam-Smith/overview
//platform-static-files.s3.amazonaws.com/premierleague/photos/players/40x40/Photo-Missing.png
/players/10905/Che-Adams/overview
....
Also, if I try to click on the href link with:
if link:
    browser.click_link_by_partial_href(link['href'])
I get the error:
selenium.common.exceptions.ElementClickInterceptedException: Message: Element <a class="playerName" href="/players/19970/Max-Aarons/overview"> is not clickable at point (244,600) because another element <p> obscures it
What am I doing wrong? I'm running into a lot of trouble with selenium.
The player data is loaded dynamically via JavaScript. You can use the requests module to obtain the info.
For example:
import re
import json
import requests
from bs4 import BeautifulSoup

url = 'https://footballapi.pulselive.com/football/players?pageSize=30&compSeasons=274&altIds=true&page={page}&type=player&id=-1&compSeasonId=274'
img_url = 'https://resources.premierleague.com/premierleague/photos/players/250x250/{player_id}.png'
headers = {'Origin': 'https://www.premierleague.com'}

for page in range(1, 10):  # <--- increase this to desired number of pages
    data = requests.get(url.format(page=page), headers=headers).json()

    # uncomment this to print all data:
    # print(json.dumps(data, indent=4))

    for player in data['content']:
        print('{:<50} {}'.format(player['name']['display'],
                                 img_url.format(player_id=player['altIds']['opta'])))
Prints:
Ethan Ampadu https://resources.premierleague.com/premierleague/photos/players/250x250/p199598.png
Joseph Anang https://resources.premierleague.com/premierleague/photos/players/250x250/p447879.png
Florin Andone https://resources.premierleague.com/premierleague/photos/players/250x250/p93284.png
André Gomes https://resources.premierleague.com/premierleague/photos/players/250x250/p120250.png
Andreas Pereira https://resources.premierleague.com/premierleague/photos/players/250x250/p156689.png
Angeliño https://resources.premierleague.com/premierleague/photos/players/250x250/p145235.png
Faustino Anjorin https://resources.premierleague.com/premierleague/photos/players/250x250/p223332.png
Michail Antonio https://resources.premierleague.com/premierleague/photos/players/250x250/p57531.png
Cameron Archer https://resources.premierleague.com/premierleague/photos/players/250x250/p433979.png
Archie Davies https://resources.premierleague.com/premierleague/photos/players/250x250/p215061.png
Stuart Armstrong https://resources.premierleague.com/premierleague/photos/players/250x250/p91047.png
Marko Arnautovic https://resources.premierleague.com/premierleague/photos/players/250x250/p41464.png
Kepa Arrizabalaga https://resources.premierleague.com/premierleague/photos/players/250x250/p109745.png
Harry Arter https://resources.premierleague.com/premierleague/photos/players/250x250/p48615.png
Daniel Arzani https://resources.premierleague.com/premierleague/photos/players/250x250/p200797.png
... and so on.
Note: to get smaller thumbnails, change 250x250 in the image URLs to 40x40.
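And for the second half of the question, the intercepted click: a hedged workaround sketch, assuming you stay with Splinter. Scrolling the link into view and clicking it through JavaScript bypasses the overlapping <p>; browser.driver and element._element are Splinter's handles to the underlying selenium objects, and the a.playerName selector comes from the error message above:

# A sketch under assumptions: a JavaScript click ignores whichever element
# visually obscures the target, so ElementClickInterceptedException cannot
# occur. Verify that ._element is exposed by your Splinter version.
link_el = browser.find_by_css('a.playerName').first
browser.driver.execute_script("arguments[0].scrollIntoView();", link_el._element)
browser.driver.execute_script("arguments[0].click();", link_el._element)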