Sportsipy API request

I need to use the sportsipy API to get the schedule for all teams in a dataframe. This is what I have:
from sportsreference.nba.schedule import Schedule
league = ['MIL','CHO','LAL','LAC','SAC','ATL','MIA','DAL','POR',
'HOU','NOP','PHO','WAS','MEM','BOS','DEN','TOR','SAS',
'PHI','BRK','UTA','IND','OKC','ORL','MIN','DET',
'NYK','CLE','CHI','GSW']
for i in league:
    mil2019 = Schedule(i, year='2020')
    mil2019.dataframe_extended
The error I get is:
TypeError: unsupported operand type(s) for -: 'NoneType' and 'NoneType'

As mentioned in the comment above, I believe your import is wrong. Using the sportsipy package, version 0.6.0, and following the docs at https://sportsipy.readthedocs.io/en/stable/, I was able to achieve your desired result with the following code:
from sportsipy.nba.schedule import Schedule
# MIL removed from league list as it is used to initiate league_schedule
league = ['CHO','LAL','LAC','SAC','ATL','MIA','DAL','POR',
'HOU','NOP','PHO','WAS','MEM','BOS','DEN','TOR','SAS',
'PHI','BRK','UTA','IND','OKC','ORL','MIN','DET',
'NYK','CLE','CHI','GSW']
league_schedule = Schedule('MIL', year="2020").dataframe
for team in league:
    league_schedule = league_schedule.append(Schedule(team, year="2020").dataframe)
(Resulting dataframe has dimensions: [2286 rows x 15 columns])
The same should work with dataframe_extended, but it takes a rather long time to pull all that data, so maybe double-check whether you need all of it.
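If you are on a newer pandas release where DataFrame.append has been removed, here is a minimal sketch of the same idea using pd.concat (and dataframe_extended, if you really need the per-game box-score columns):
import pandas as pd
from sportsipy.nba.schedule import Schedule

league = ['MIL', 'CHO', 'LAL', 'LAC', 'SAC', 'ATL', 'MIA', 'DAL', 'POR',
          'HOU', 'NOP', 'PHO', 'WAS', 'MEM', 'BOS', 'DEN', 'TOR', 'SAS',
          'PHI', 'BRK', 'UTA', 'IND', 'OKC', 'ORL', 'MIN', 'DET',
          'NYK', 'CLE', 'CHI', 'GSW']

# Build one dataframe per team, then concatenate once at the end.
# This is a sketch; expect it to be slow, since dataframe_extended
# fetches every box score for every team.
frames = [Schedule(team, year="2020").dataframe_extended for team in league]
league_schedule = pd.concat(frames)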
In case I am wrong and the package you refer to in your question is correct, please add additional info to your question, such as where we can get that package.

It appears you are using the module from pip install sportsreference from here, which is on v0.5.2, in which case that is a valid import, even though you mentioned you're using sportsipy, which caused some confusion for others. The latest version has renamed the package to sportsipy.
If it wasn't a valid import, it would be throwing an import error on the very first line, so I'm not sure why folks are getting hung up on that.
You really should include the entire Python traceback, not just the final message, so we can determine exactly where in your code and the module's source code this exception is being raised. Also include the specific version of the library you're using, e.g. from pip freeze.
My initial thought is one of the requests somewhere for one of these teams is returning something unexpected and the library is not handling it properly, but without the full traceback that's just a theory.
It's probably a bug in v0.5.2 of sportsreference. I would try using the latest version from git and see if you can reproduce the error. Something, somewhere isn't validating that things are what it expects before trying to do things with them. If I had the full traceback, I could tell you exactly where.
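For example, installing straight from the repository (assuming the upstream project lives at github.com/roclark/sportsipy; adjust the URL if yours differs):
pip install git+https://github.com/roclark/sportsipy.git
Note that the latest versions use the sportsipy package name, so the import would change accordingly.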
You could try catching the TypeError and passing on it, to see if skipping it allows everything else to continue working, but without knowing exactly where the error is coming from it's hard to say for sure at this point.
for i in league:
    try:
        mil2019 = Schedule(i, year='2020')
        mil2019.dataframe_extended
    except TypeError:
        pass
This won't fix the problem; it actually hides it. But if it's just one record from one game that is returning something unexpected, this would at least let you get the rest of the results. It's also possible the issue would create other problems later, depending on exactly what it is. Again, this is where the whole traceback would have been helpful.
I will say that trying your code for just one team works for me. For example:
from sportsreference.nba.schedule import Schedule
mil2019 = Schedule("MIL", year="2020")
print(mil2019.dataframe_extended.head(10))
Returns this:
away_assist_percentage ... winning_name
201910240HOU 67.4 ... Milwaukee Bucks
201910260MIL 71.7 ... Miami Heat
201910280MIL 42.2 ... Cleveland Cavaliers
201910300BOS 55.3 ... Milwaukee Bucks
201911010ORL 51.1 ... Milwaukee Bucks
201911020MIL 70.6 ... Toronto Raptors
201911040MIN 48.0 ... Milwaukee Bucks
201911060LAC 42.9 ... Milwaukee Bucks
201911080UTA 41.2 ... Milwaukee Bucks
201911100OKC 57.4 ... Milwaukee Bucks
[10 rows x 82 columns]
It takes forever just to get the games for one team. The library is not passing around an existing requests.Session() when calling PyQuery (even though PyQuery supports a session kwarg), so every request for every box score renegotiates a fresh TCP connection, which is absurd, but I digress:
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): www.basketball-reference.com:80
DEBUG:urllib3.connectionpool:http://www.basketball-reference.com:80 "GET /teams/MIL/2020_games.html HTTP/1.1" 301 183
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): www.basketball-reference.com:443
DEBUG:urllib3.connectionpool:https://www.basketball-reference.com:443 "GET /teams/MIL/2020_games.html HTTP/1.1" 200 34571
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): www.basketball-reference.com:443
DEBUG:urllib3.connectionpool:https://www.basketball-reference.com:443 "GET /boxscores/201910240HOU.html HTTP/1.1" 200 46549
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): www.basketball-reference.com:443
I would add some debugging to your code to establish which team your code is working on when this exception is raised. Try first with one team as I did and confirm it generally works, then iterate through the list of teams with logging enabled, like so:
import logging
from sportsreference.nba.schedule import Schedule
logging.basicConfig(level=logging.DEBUG)
league = ['CHO', 'LAL', 'LAC', 'SAC', 'ATL', 'MIA', 'DAL', 'POR',
'HOU', 'NOP', 'PHO', 'WAS', 'MEM', 'BOS', 'DEN', 'TOR', 'SAS',
'PHI', 'BRK', 'UTA', 'IND', 'OKC', 'ORL', 'MIN', 'DET',
'NYK', 'CLE', 'CHI', 'GSW']
for i in league:
    logging.info("Working on team: %s", i)
    mil2019 = Schedule(i, year="2020")
    print(mil2019.dataframe_extended)
This way you will know specifically which team and which request is responsible for the issue, and that will help you determine the root cause.
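Putting those two suggestions together, here is a rough sketch (under the same sportsreference 0.5.2 import) that logs each team, keeps going past a failure, and records which teams raised the TypeError so you can investigate them separately:
import logging
from sportsreference.nba.schedule import Schedule

logging.basicConfig(level=logging.INFO)

league = ['MIL', 'CHO', 'LAL', 'LAC', 'SAC', 'ATL', 'MIA', 'DAL', 'POR',
          'HOU', 'NOP', 'PHO', 'WAS', 'MEM', 'BOS', 'DEN', 'TOR', 'SAS',
          'PHI', 'BRK', 'UTA', 'IND', 'OKC', 'ORL', 'MIN', 'DET',
          'NYK', 'CLE', 'CHI', 'GSW']

schedules = {}
failed = []
for team in league:
    logging.info("Working on team: %s", team)
    try:
        schedules[team] = Schedule(team, year="2020").dataframe_extended
    except TypeError:
        # Log the full traceback and remember the offending team
        # instead of silently swallowing the error.
        logging.exception("TypeError while fetching %s", team)
        failed.append(team)

logging.info("Finished. Failed teams: %s", failed)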

Related

Using Optaplanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a
variable (previousStandstill) with value (MUNO) which has a
sourceVariableName variable (nextVisit) with a value (WERBOMONT) which
is not null. Verify the consistency of your input problem for that
sourceVariableName variable.
I have not made any changes; I have only cloned and executed it. I import and solve it, and it throws this error.
Do you know what could be happening?
I am applying it in the development of a variant of VRP with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill, and visitIndex are always null.
It's been a long time since I looked at that code, and it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming @CollectionPlanningVariable etc.). That being said, we have multiple users and customers who used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, B.next is not A. So either the dataset importer didn't import it correctly before calling solve() (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
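To make that invariant concrete, here is a rough illustration of the consistency check the exception is complaining about, written in Python purely for brevity (the real model is Java, and the attribute names previous_standstill / next_visit are hypothetical stand-ins for the chained and shadow variables):
def check_chain_consistency(visits):
    # Report visits whose previous/next links do not mirror each other.
    problems = []
    for visit in visits:
        prev = visit.previous_standstill  # hypothetical chained variable
        # If A.previous_standstill is B, then B.next_visit must point back to A.
        if prev is not None and getattr(prev, "next_visit", None) is not visit:
            problems.append((visit, prev))
    return problems
Running a check like this on the imported dataset before solve() (or after a custom move under FULL_ASSERT) tells you whether the importer or a move broke the chain.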

BG95 Can't Activate - AT+QIACT=1 returning error

I'm trying to get a BG95 to activate on hologram.
Here are my commands:
AT+QCFG="band",F,180A,180A OK
AT+QCFG="iotopmode",2 OK
AT+QCFG="nwscanseq",020301 OK
AT+QCFG="nwscanmode",0 OK
AT+QCFG="snrscan",0 OK
AT+QICSGP=1,1,"hologram","","",1 OK
AT+QIACT=1 ERROR
At first I thought it was antenna/signal related so I ran AT+CSQ and got this:
+csq: 11,99
This tells me I have a good signal I believe.
Next I tried AT+QNWINFO and get this:
+QNWINFO: "eMTC","311480","LTE BAND 13",5230
In my mind this is saying it's connected to a network.
After trying that I tried to activate again and got this:
AT+QIACT=1
ERROR
The weird thing is that it activated just fine about a week ago with pure AT commands. I did try to use an Arduino library with it (WisLTEBG96TCPIP), which may have changed a setting. I've done a factory reset, but it still won't activate.
Another strange thing is the hologram dashboard: every once in a while it shows the SIM as connected, even though I can't activate.
I have tried with 2 different SIM cards and get the same activation error.
Any help would be greatly appreciated!
Verizon has cut off all non-ODI products. If your hardware has not been Verizon ODI 'certified', it will no longer be allowed to connect to their network; I have 5 new pet rocks thanks to them. The solution is to purchase new modems from vendors that have been through the Verizon ODI program, or switch carriers.
I had the same problem before; after a lot of mailing back and forth with the network operator I found out that there isn't an LTE Cat-M1 (eMTC) network in my area, and I tested in another area successfully.
Also, before the AT+QCFG commands try AT+CFUN=0, and after the AT+QCFG commands try AT+CFUN=1.
Before sending AT+QIACT=1, try the AT+CEREG? command several times and tell me what it returns.
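For reference, a rough sketch of that order of operations, reusing the settings from your question (the AT+CEREG? read is just to confirm registration; the second value should be 1 for registered-home or 5 for roaming):
AT+CFUN=0
AT+QCFG="band",F,180A,180A
AT+QCFG="iotopmode",2
AT+QCFG="nwscanseq",020301
AT+QCFG="nwscanmode",0
AT+QCFG="snrscan",0
AT+QICSGP=1,1,"hologram","","",1
AT+CFUN=1
AT+CEREG?    (repeat until the second value is 1 or 5)
AT+QIACT=1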

QuickFix Trouble - Repeating Groups

My fix engine keeps rejecting messages and I was hoping someone could help me figure out why... I'm receiving the following sample message:
8=FIXT.1.1 9=518 35=AE 34=4 1128=8 49=XXXXXXX 56=YYYYYYY 52=20130322-17:58:37 552=1 54=1 37=Z00097H4ON 11=NOREF 826=0 78=1 79=NOT SPECIFIED 80=100000.000000 5967=129776.520000 453=5 448=BCART6 452=3 447=D 448=BARX 452=1 447=D 448=BARX 452=16 447=D 448=bcart6 452=11 447=D 448=ABCDEFGHI 452=12 447=D 571=6611540 150=F 17=Z00097H4ON 32=100000.000000 38=100000.000000 15=EUR 1056=129776.520000 31=1.2977652 194=1.298120 195=-3.5480 64=20130409 63=W2 60=20130322-17:26:50 75=20130322 1057=Y 460=4 167=FOR 65=OR 55=EUR/USD 10=121
8=FIXT.1.1 9=124 35=3 34=4 49=XXXXXXX 52=20130322-17:58:37.917 56=YYYYYYY 45=4 58=Tag appears more than once 371=448 372=AE 373=13 10=216
But as you can see it's being rejected by the quickfix engine. I am using the 5.0sp1 data dictionary and have configured it in my config file:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ReconnectInterval=10
SocketReuseAddress=Y
FileStorePath=D:\XXX\Interface\ReutersStore
FileLogPath=D:\XXX\Interface\ReutersLog
[SESSION]
BeginString = FIXT.1.1
SenderCompID = XXXXX
TargetCompID= YYYYY
DefaultApplVerId = FIX.5.0
UseDataDictionary=Y
AppDataDictionary=FIX50SP1.xml
StartDay=sunday
StartTime=20:55:00
EndTime=06:05:00
EndDay=saturday
SocketConnectHost= A.B.C.D
SocketConnectPort= 123
Does anyone have any idea why the Engine would be rejecting this message? I know that quickfix is normally able to handle messages with repeating groups, is it a config thing? Any help would be greatly appreciated!
Your message seems to be in order. Try putting this in your config file:
ValidateFieldsOutOfOrder=N
QuickFIX sets that to Y by default, and the underlying structure storing the tag and field values is unable to see the count field (453) before the repeating 448 entries.
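For example, in the config from your question it could sit alongside the other defaults (a sketch; only the last line is new):
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ValidateFieldsOutOfOrder=N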
As a sidenote, always check these fields; they should point you to the source of the problem.
58=Tag appears more than once
371=448
Maybe it's a shot in the dark, but I had a similar problem when using a 5.0sp2 dictionary.
I resolved it by using an updated version of the QuickFIX library compiled from the library's SVN repository. If I remember correctly, this was the bug.
It seems that the QuickFIX library has not been updated in a long time, and for newer versions of FIX I suggest you use the trunk of the repo.
I had the same problem and I resolved it by tweaking my DataDictionary like the following in message AE TradeCaptureReport.

Troubleshooting WebRTC code

I'm pulling my hair out with this one. A month or so ago, I was able to put together a proof-of-concept WebRTC demo, using some sample code from the good folks at SignalR. The demo is located here, the source for it is here, and it does what it's supposed to do.
But since I took that code and moved it into our actual application, I haven't been able to get it to work. Of course the code had to be changed significantly - different backends, a different set of frameworks and supporting code, support for multiple simultaneous connections, that sort of thing - but the core logic is very similar. Still, I can't get it to work.
I've put together a sample app here that demonstrates the problem:
https://bitbucket.org/smithkl42/signalr.webrtc
The core WebRTC logic is all in this TypeScript file:
https://bitbucket.org/smithkl42/signalr.webrtc/src/tip/SignalR.WebRTC/Scripts/Media/WebRTC.ts?at=default
It's several hundred lines long, so I won't bother posting it here, but you can see it by clicking on the link above.
When it runs, it produces output like this:
12:17:58.531 WebRTCController.call(): Calling 7d9e0d39-5047-4afe-86e5-e6e01b9f5955 when preparations have finished
12:17:58.533 WebRTCController.prepareForCall(): Preparing for call: localSessionId='39d2df53-6854-415a-8748-b5230eda2eb1'; remoteSessionId='7d9e0d39-5047-4afe-86e5-e6e01b9f5955'
12:18:0.139 Object.(): The user has granted media device access, so proceeding to prepare for call
12:18:0.141 Connection.createPeerConnection(): Creating peer connection; using stunServer stun:stun1.l.google.com:19302
12:18:0.144 (): Preparations finished. Creating and sending JSEP offer. Util.js:21
12:18:0.272 Connection.handleIceCandidate(): STUN server has found an ICE candidate (event.type='icecandidate').
12:18:0.282 Connection.handleIceCandidate(): STUN server has found an ICE candidate (event.type='icecandidate').
(More like that)
12:18:0.655 WebRTCController.handleJsepAnswer(): Handling JsepAnswer from 7d9e0d39-5047-4afe-86e5-e6e01b9f5955
12:18:0.694 Object.(): Sending ICE candidate to the remote machine: {"sdpMLineIndex":0,"sdpMid":"audio","candidate":"a=candidate:2999745851 1 udp 2113937151 192.168.56.1 62978 typ host generation 0\r\n"}
12:18:0.706 Object.(): Sending ICE candidate to the remote machine: {"sdpMLineIndex":0,"sdpMid":"audio","candidate":"a=candidate:2999745851 2 udp 2113937151 192.168.56.1 62978 typ host generation 0\r\n"}
(More like that)
But then it never connects, i.e., the video from the other side never starts playing. At the signaling layer, I can tell by the logs and by stepping through the code that the first browser is sending a JSEP offer; the second browser is receiving it, storing it and sending back an appropriate JSEP answer; and the first machine is storing that answer. Each peerConnection is then finding the ICE candidates and sending them to the remote machine; and each peerConnection is receiving and apparently trying those ICE candidates; and the peerConnections are even raising the onaddstream event. But the video never starts playing.
The state of the peerConnection object all the way through looks like this:
(iceGatheringState=new; iceState=starting; readyState=active)
The frustrating bit is that every so often, maybe one time out of 20, it does work, i.e., both videos show up. So I'm not doing everything wrong. It sounds like a timing issue of some sort - but I can't figure out what it is. And so far as I can tell, there's not much in the WebRTC objects (specifically RTCPeerConnection) to tell you what's going wrong.
I hate to ask anybody else to do my troubleshooting for me, but... well, I'm running out of options. Does anybody else see anything I'm doing obviously wrong?
Update 2012-12-19: I'm making some progress. I realized I was calling peerConnection.setLocalDescription() synchronously, i.e., without specifying callbacks. So now I've got some lines of code that look like this:
// Answer the call by sending a JsepAnswer message.
connection.peerConnection.createAnswer(
    answer => {
        connection.peerConnection.setLocalDescription(answer, () => {
            var signalState: mData.SignalState = {
                FromSessionId: connection.localSessionId,
                ToSessionId: connection.remoteSessionId,
                Message: JSON.stringify(answer)
            };
            me.roomHub.server.jsepAnswer(signalState);
            mUtil.log("Sent JSEP answer: " + signalState.Message);
            connection.readyForIceCandidates.resolve();
        },
        error => {
            mUtil.error("Error setting local description from created answer: " + error + "; answer=" + JSON.stringify(answer));
        });
    },
    error => {
        mUtil.error("Error creating answer: " + error);
    }, me.mediaConstraints);
And the setLocalDescription() error callback is showing this error:
16:14:42.439 WebRTCController.handleJsepOffer(): Error setting local description from created answer: SetLocalDescription failed.; answer={"sdp":"v=0\r\no=- 439659381 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\nm=audio 1 RTP/SAVPF 103 104 111 0 8 107 106 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:1 IN IP4 0.0.0.0\r\na=ice-ufrag:vOKflTJ56gV0R9i0\r\na=ice-pwd:9nuXPMDvQ2mZATFCQyEzPRQz\r\na=sendrecv\r\na=mid:audio\r\na=rtcp-mux\r\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:m9q9pmLgLuFnfFC09KXKW5p8TjsKk+VdqX0OWv77\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:111 opus/48000/2\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:107 CN/48000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:548068416 cname:IXg8QRisWrd7+7f8\r\na=ssrc:548068416 msid:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf a0\r\na=ssrc:548068416 mslabel:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\na=ssrc:548068416 label:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjfa0\r\nm=video 1 RTP/SAVPF 100 116 117\r\nc=IN IP4 0.0.0.0\r\na=rtcp:1 IN IP4 0.0.0.0\r\na=ice-ufrag:vOKflTJ56gV0R9i0\r\na=ice-pwd:9nuXPMDvQ2mZATFCQyEzPRQz\r\na=sendrecv\r\na=mid:video\r\na=rtcp-mux\r\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:m9q9pmLgLuFnfFC09KXKW5p8TjsKk+VdqX0OWv77\r\na=rtpmap:100 VP8/90000\r\na=rtpmap:116 red/90000\r\na=rtpmap:117 ulpfec/90000\r\na=ssrc:1460425980 cname:IXg8QRisWrd7+7f8\r\na=ssrc:1460425980 msid:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf v0\r\na=ssrc:1460425980 mslabel:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\na=ssrc:1460425980 label:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjfv0\r\n","type":"answer"}
Now I just need to figure out why that particular SDP - which comes straight from the createAnswer() method - is failing.
Update 2012-12-20: I've created an online demonstration of the problem here: http://srdemo.alanta.com/. I've also turned on Chrome debug logging, with the result that I see a bunch of errors that look like this:
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
Not sure what relationship they have to my problem, but I'm continuing to look into it.
Edit 2012-12-20: I've managed (I think) to narrow the problem down. See this question for more precise details.
Figured it out. Turns out that SignalR 1.0 RC1 has a bug in it that changes any "+" in a string into a space. So lines in the SDP that looked like this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2+z
Were getting changed into this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2 z
But because not every SDP had a "+" in it on a critical line, sometimes it would work. Everything explained.
The bug has been reported to the good folks working on SignalR (see https://github.com/SignalR/SignalR/issues/1194), and in the meantime, a simple encodeURIComponent() and decodeURIComponent() around the strings in question fixed it.

Rails 3.2.2 log files unordered, requests intertwined

I recollect getting log files that were nicely ordered, so that you could follow one request, then the next, and so on.
Now, the log files are, as my 4 year old says "all scroggled up", meaning that they are no longer separate, distinct chunks of text. Loggings from two requests get intertwined/mixed up.
For instance:
Started GET /foobar
...
Completed 200 OK in 2ms (Views: 0.4ms | ActiveRecord: 0.8ms)
Patient Load (wait, that's from another request that has nothing to do with foobar!)
[ blank space ]
Something else
This is maddening, because I can't tell what's happening within one single request.
This is running on Passenger.
I tried to search for an answer to the same problem but couldn't find any good info. I'm not sure whether you should fix the server or the Rails code.
If you want more info about the issue here is the commit that removed old way of logging https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed
If you value production log readability over everything else you can use the
PassengerMaxInstancesPerApp 1
configuration. It might cause some scaling issues. Alternatively you could stuff something like this in application.rb:
process_log_filename = Rails.root + "log/#{Rails.env}-#{Process.pid}.log"
log_file = File.open(process_log_filename, 'a')
Rails.logger = ActiveSupport::BufferedLogger.new(log_file)
Yep! They have made some changes in ActiveSupport::BufferedLogger so it no longer waits until the request has ended to flush the logs:
http://news.ycombinator.com/item?id=4483390
https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed
But they have added ActiveSupport::TaggedLogging, which is very handy: you can stamp every log line with any kind of mark you want.
In your case it could be good to stamp the logs with the request UUID, like this:
# config/application.rb
config.log_tags = [:uuid]
Then even if the logs are interleaved, you can still tell which lines correspond to the request you are following.
You can do more fun things with this feature to help you in your log analysis:
How to log user_name in Rails?
http://zogovic.com/post/21138929607/running-time-in-rails-logs
Well, for me the TaggedLogging solution is a no-go. I can live with some logs getting lost if the server crashes badly, but I want my logs to be perfectly ordered. So, following advice from the issue comments, I'm applying this to my app:
# lib/sequential_logs.rb
module ActiveSupport
  class BufferedLogger
    def flush
      @log_dest.flush
    end

    def respond_to?(method, include_private = false)
      super
    end
  end
end

# config/initializers/sequential_logs.rb
require 'sequential_logs.rb'
Rails.logger.instance_variable_get(:@logger).instance_variable_get(:@log_dest).sync = false
As far as I can tell this hasn't affected my app; it is still running and now my logs make sense again.
They should add some quasi-random request id and write it on every line belonging to a single request. This way you won't get confused.
I haven't used it, but I believe Lumberjack's unit_of_work method may be what you're looking for. You call:
Lumberjack.unit_of_work do
  yield
end
And all logging done either in that block or in the yielded block is tagged with a unique ID.