I'm looking for a web resource where you can preview what code would look like in different formatting standards so I can choose and possibly set it up in Prettier.
I recently came across this code and thought the formatting looked amazing.
{name: 'year',   size: 365 * 24 * 60 * 60 * 1},
{name: 'day',    size:       24 * 60 * 60 * 1},
{name: 'hour',   size:            60 * 60 * 1},
{name: 'minute', size:                 60 * 1},
{name: 'second', size:                      1}
I think it was formatted by hand, but I've seen people use this kind of alignment when assigning values to variables whose names have different lengths, like:
const bob   = "bob";
const alice = "alice";
If anyone could share some insight into what that style of alignment is called, I would greatly appreciate it.
And if you have any recommendations on which code formatting standard to follow in general for web dev, feel free to share.
I have a link: www.cagematch.net/?id=8&nr=1&page=15
At this link you can see a table of wrestlers. If you click on a wrestler's name, you can see that wrestler's details. I want to fetch all the wrestlers with their details in a quick and easy way. In my mind, I am thinking of something like this:
urls = [
    link1, link2, link3, link4
]
for u in urls:
    # ... do the scraping
But there are 275 wrestlers, and I don't want to enter all the links by hand like this. Is there an easier way to do it?
To get all the links into a list and then the info about each wrestler, you can use this example:
import requests
from bs4 import BeautifulSoup
url = "http://www.cagematch.net/?id=8&nr=1&page=15"
headers = {"Accept-Encoding": "deflate"}
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")
links = [
    "https://www.cagematch.net/" + a["href"] for a in soup.select(".TCol a")
]

for u in links:
    soup = BeautifulSoup(
        requests.get(u, headers=headers).content, "html.parser"
    )
    print(soup.h1.text)
    for info in soup.select(".InformationBoxRow"):
        print(
            info.select_one(".InformationBoxTitle").text.strip(),
            info.select_one(".InformationBoxContents").text.strip(),
        )
    # get other info here
    # ...
    print("-" * 80)
Prints:
Adam Pearce
Current gimmick: Adam Pearce
Age: 44 years
Promotion: World Wrestling Entertainment
Active Roles: Road Agent, Trainer, On-Air Official, Backstage Helper
Birthplace: Lake Forest, Illinois, USA
Gender: male
Height: 6' 2" (188 cm)
Weight: 238 lbs (108 kg)
WWW: http://twitter.com/ScrapDaddyAP https://www.facebook.com/OfficialAdamPearce https://www.youtube.com/watch?v=us91bK1ScL4
Alter egos: Adam O'BrienAdam Pearce a.k.a. US Marshall Adam J. PearceMasked Spymaster #2Tommy Lee Ridgeway
Roles: Singles Wrestler (1996 - 2014)Road Agent (2015 - today)Booker (2008 - 2010)Trainer (2013 - today)On-Air Official (2020 - today)Backstage Helper (2015 - today)
Beginning of in-ring career: 16.05.1996
End of in-ring career: 21.12.2014
In-ring experience: 18 years
Wrestling style: Allrounder
Trainer: Randy Ricci & Sonny Rogers
Nicknames: "Scrap Iron"
Signature moves: PiledriverFlying Body SplashRackbomb II
--------------------------------------------------------------------------------
AJ Styles
Current gimmick: AJ Styles
Age: 45 years
Promotion: World Wrestling Entertainment
Brand: RAW
Active Roles: Singles Wrestler
Birthplace: Jacksonville, North Carolina, USA
Gender: male
Height: 5' 11" (180 cm)
Weight: 218 lbs (99 kg)
Background in sports: Ringen, Football, Basketball, Baseball
WWW: http://AJStyles.org https://www.facebook.com/AJStylesOrg-110336188978264/ https://twitter.com/AJStylesOrg https://www.instagram.com/ajstylesp1/ https://www.twitch.tv/Stylesclash
Alter egos: AJ Styles a.k.a. Air StylesMr. Olympia
Roles: Singles Wrestler (1999 - today)Tag Team Wrestler (2001 - 2021)
Beginning of in-ring career: 15.02.1999
In-ring experience: 23 years
Wrestling style: Techniker, High Flyer
Trainer: Rick Michaels
Nicknames: "The Phenomenal""The Prince Of Phenomenal"
Signature moves: Styles ClashPelé KickCalf Killer/Calf CrusherStylin' DDTCliffhangerSpiral TapPhenomenal Forearm450 Splash
--------------------------------------------------------------------------------
...and so on.
In DBeaver I have a table containing some GPS coordinates stored in PostGIS LINESTRING format.
My question is: if I have, say, this geometry:
LINESTRING(20 20, 30 30, 40 40, 50 50, 60 60, 70 70)
which built-in ST function can I use to get every N-th element in that LINESTRING? For example, if I choose 2, I would get:
LINESTRING(20 20, 40 40, 60 60)
or, if I choose 3:
LINESTRING(20 20, 50 50)
and so on.
I've tried ST_Simplify and ST_PointN, but that's not exactly what I need, because I still want it to stay a LINESTRING, just with fewer points (lower resolution).
Any ideas?
Thanks :-)
Welcome to SO. Have you tried using ST_DumpPoints and applying a modulo (%) over the vertex paths? E.g. keeping every second vertex (note that path[1] % 2 = 0 keeps the even-numbered vertices; use % 2 = 1 instead if you want to keep the first point, as in your example):
WITH j AS (
    SELECT
        ST_DumpPoints('LINESTRING(20 20, 30 30, 40 40, 50 50, 60 60, 70 70)') AS point
)
SELECT ST_AsText(ST_MakeLine((point).geom)) FROM j
WHERE (point).path[1] % 2 = 0;
st_astext
-------------------------------
LINESTRING(30 30,50 50,70 70)
(1 row)
Further reading:
ST_MakeLine
CTE
ST_Simplify should return a linestring unless the simplification results in an invalid geometry for a linestring, i.e., fewer than 2 vertices. If you always want a linestring to be returned, consider ST_SimplifyPreserveTopology. It ensures that at least two vertices are returned in a linestring.
https://postgis.net/docs/ST_SimplifyPreserveTopology.html
I've been searching for a solution to this and the closest I got was here:
Convert time string expressed as <number>[m|h|d|s|w] to seconds in Python
however, none of the solutions work because the time format sometimes contains only one unit and is inconsistent throughout the column, e.g.
['4h 30m 24s', '13w 5d', '11w']
when I .apply() this over the entire column it fails. How can I convert all of these rows into seconds? I tried df['time_value'].str.split(), but that is a very messy and seemingly inefficient way to do this; there must be a better way?
How about applying this function?
def convert_to_seconds(s):
    seconds = 0
    seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
    for part in s.split():
        number = int(part[:-1])
        unit = part[-1]
        seconds += number * seconds_per_unit[unit]
    return seconds
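For example, applied over the whole column with .apply() (a minimal sketch; the DataFrame and the time_value column name are taken from the question, the time_seconds name is just an assumption):

import pandas as pd

df = pd.DataFrame({"time_value": ["4h 30m 24s", "13w 5d", "11w"]})

# convert every row of the column to seconds
df["time_seconds"] = df["time_value"].apply(convert_to_seconds)
print(df["time_seconds"].tolist())  # [16224, 8294400, 6652800]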
You can use stack and then map those units to seconds:
import pandas as pd

l = ['4h 30m 24s', '13w 5d', '11w']  # the example values from the question

s = pd.Series(l)
s = s.str.split(expand=True).stack().to_frame('ALL')
s['v'] = s['ALL'].str[:-1].astype(int)
s['t'] = s['ALL'].str[-1]
seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
(s.t.map(seconds_per_unit) * s.v).unstack()
Out[625]:
           0         1     2
0    14400.0    1800.0  24.0
1  7862400.0  432000.0   NaN
2  6652800.0       NaN   NaN
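To get one total per original string, you could sum across the columns of that result (a small follow-up sketch, not part of the original answer, reusing the same s and seconds_per_unit; NaN entries are skipped by sum):

totals = (s.t.map(seconds_per_unit) * s.v).unstack().sum(axis=1)
# totals is a Series: 16224.0, 8294400.0, 6652800.0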
My question is about building a ramp-up for planning production lines.
I have a WIP where a ramp-up category is selected for each MSO (Master Sew Order). The ramp-up is based on hour fences (for example 1-6 hours, 6-12 hours, etc.).
On the WIP, an MSO will have units (for example 1,920 units), divided by the capacity per hour (80 pcs/hr), to give the time needed: 24 hours. This then needs to be recalculated based on the ramp-up, for hours 1-6, 6-12, 12-18, and 18-24, multiplying by the related efficiency.
For example:
Hours 1-6: 20% efficiency * 80 units = 16 units/hr (6 x 16 = 96 units produced)
Hours 6-12: 40% efficiency * 80 units = 32 units/hr (192 units)
Hours 12-18: 60% efficiency * 80 Units = 48 units/hr (288 units)
Hours 18-24: 80% efficiency * 80 units = 64 units/hr (384 units)
Hours 24+: 100% efficiency * 80 units = 80 units/hr ((1920-960)/80)= 12 hours remaining
TOTAL TIME = 36 hours to produce
How would Power BI know to divide up the original 24-hour estimate into parts, multiply each by its respective efficiency, and return a new result of 36 hours?
Thank you so much in advance!
Kurt
I'm not sure how to do this in DAX, but you tagged Power Query, so here's a custom query that computes 36 based on your parameters:
let
    MSO = 1920,
    Capacity = 80,
    Efficiency = {
        {6, 0.2},
        {12, 0.4},
        {18, 0.6},
        {24, 0.8},
        {#infinity, 1.0}
    },
    Accumulated = List.Accumulate(Efficiency, [
        Remaining = MSO,
        RunningHours = 0
    ], (state, current) =>
        let
            until = current{0},
            eff = current{1},
            currentCapacity = eff * Capacity,
            RemainingHours = state[Remaining] / currentCapacity,
            CappedHours = List.Min({RemainingHours, until - state[RunningHours]})
        in [
            Remaining = state[Remaining] - currentCapacity * CappedHours,
            RunningHours = state[RunningHours] + CappedHours
        ]),
    Result = if Accumulated[Remaining] = 0
        then Accumulated[RunningHours]
        else error "Not enough time to finish!"
in
    Result
The inner lists for Efficiency are of the form {hour-the-efficiency-band-ends, efficiency-value}. Plug in #infinity to mean the last efficiency never stops.
In a normal iterative programming language you could update state with a for-loop, but in M you need to use List.Accumulate and package all your state into one value.
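For comparison, a rough sketch of the same accumulation written as an ordinary loop (in Python, purely to illustrate the state updates; the numbers are the ones from the question):

units = 1920
capacity = 80  # pcs/hr at 100% efficiency

# (hour the efficiency band ends, efficiency), mirroring the Efficiency list above
bands = [(6, 0.2), (12, 0.4), (18, 0.6), (24, 0.8), (float("inf"), 1.0)]

remaining = units
running_hours = 0.0
for until, eff in bands:
    current_capacity = eff * capacity
    remaining_hours = remaining / current_capacity
    capped_hours = min(remaining_hours, until - running_hours)
    remaining -= current_capacity * capped_hours
    running_hours += capped_hours

print(running_hours)  # 36.0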
In your data model you may have MSO in one table containing 2 fields, [Units] and [UnitsPerHour], and another table called EffTable which may store the efficiencies broken out by the hour fences.
Create 4 new calculated columns in your MSO table, one for each hour fence, eg [1--6]:
=
6 * LOOKUPVALUE ( EffTable[Efficiency], EffTable[Hours], "1--6" )
* [UnitsPerHour]
These are fields that hold how many units you would produce in the 4 time slots. Create a new calculated field for the total, [RampUpUnits]:
=
[1--6Hours] + [6--12Hours] + [12--18Hours] + [18--24Hours]
Finally calculate the total time as:
=
24
+ ( [Units] - [RampUpUnits] )
/ [UnitsPerHour]
This calculates the number of hours required for the remaining units and adds it to 24 for the ramp up time.
So I am calling the Twitter API:
openurl = urllib.urlopen("https://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&contributor_details&include_rts=true&screen_name="+user+"&count=3600")
and it returns some long file like:
[{"entities":{"hashtags":[],"user_mentions":[],"urls":[{"url":"http:\/\/t.co\/Hd1ubDVX","indices":[115,135],"display_url":"amzn.to\/tPSKgf","expanded_url":"http:\/\/amzn.to\/tPSKgf"}]},"coordinates":null,"truncated":false,"place":null,"geo":null,"in_reply_to_user_id":null,"retweet_count":2,"favorited":false,"in_reply_to_status_id_str":null,"user":{"contributors_enabled":false,"lang":"en","profile_background_image_url_https":"https:\/\/si0.twimg.com\/profile_background_images\/151701304\/theme14.gif","favourites_count":0,"profile_text_color":"333333","protected":false,"location":"North America","is_translator":false,"profile_background_image_url":"http:\/\/a0.twimg.com\/profile_background_images\/151701304\/theme14.gif","profile_image_url_https":"https:\/\/si0.twimg.com\/profile_images\/1642783876\/idB005XNC8Z4_normal.png","name":"User Interface Books","profile_link_color":"009999","url":"http:\/\/twitter.com\/ReleasedBooks\/genres","utc_offset":-28800,"description":"All new user interface and graphic design book releases posted on their publication day","listed_count":11,"profile_background_color":"131516","statuses_count":1189,"following":false,"profile_background_tile":true,"followers_count":732,"profile_image_url":"http:\/\/a2.twimg.com\/profile_images\/1642783876\/idB005XNC8Z4_normal.png","default_profile":false,"geo_enabled":false,"created_at":"Mon Sep 20 21:28:15 +0000 2010","profile_sidebar_fill_color":"efefef","show_all_inline_media":false,"follow_request_sent":false,"notifications":false,"friends_count":1,"profile_sidebar_border_color":"eeeeee","screen_name":"User","id_str":"193056806","verified":false,"id":193056806,"default_profile_image":false,"profile_use_background_image":true,"time_zone":"Pacific Time (US & Canada)"},"possibly_sensitive":false,"in_reply_to_screen_name":null,"created_at":"Thu Nov 17 00:01:45 +0000 2011","in_reply_to_user_id_str":null,"retweeted":false,"source":"\u003Ca href=\"http:\/\/twitter.com\/ReleasedBooks\/genres\" rel=\"nofollow\"\u003EBook Releases\u003C\/a\u003E","id_str":"136957158075011072","in_reply_to_status_id":null,"id":136957158075011072,"contributors":null,"text":"Digital Media: Technological and Social Challenges of the Interactive World - by William Aspray - Scarecrow Press. http:\/\/t.co\/Hd1ubDVX"},{"entities":{"hashtags":[],"user_mentions":[],"urls":[{"url":"http:\/\/t.co\/GMCzTija","indices":[119,139],"display_u
Well, the different objects are split into tables and dictionaries, and I want to extract the different parts, but to do this I have to know how many objects the file has:
example:
[{1:info , 2:info}][{1:info , 2:info}][{1:info , 2:info}][{1:info , 2:info}]
so to extract the info from 1 in the first table I would:
[0]['1']
>>>>info
But to extract it from the last object in the table, I need to know how many objects the table has.
This is what my code looks like:
table_timeline = json.loads(twitter_timeline)
table_timeline_inner = table_timeline[x]
lines = 0
while lines < linesmax:
    in_reply_to_user_id = table_timeline_inner['in_reply_to_status_id_str']
    lines += 1
So how do I find the value of the last object in this table?
thanks
I'm not entirely sure this is what you're looking for, but to get the last item in a Python list, use an index of -1. For example,
>>> alist = [{'position': 'first'}, {'position': 'second'}, {'position': 'third'}]
>>> print alist[-1]
{'position': 'third'}
>>> print alist[-1]['position']
third
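And since the question also mentions needing to know how many objects there are, len() gives that directly (a small addition using the same example list):

>>> len(alist)
3
>>> print alist[len(alist) - 1]['position']
third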