100th post!
Hoorah! This blog and twitter have given me a wee voice with which to share the math that runs through my head. Holla to my loyal few!
Updated Ratings
NBA Ratings as of 11/29
This iteration includes predicted wins (assuming an average season's worth of home & away opponents).
Recent Twitterings:
-Tyler Zeller's offensive impact. This is based on the formulas in my prior post, along with some basic estimates of what a player's teammates produce. The method here does not encapsulate all usable offensive statistics the way Dean Oliver's offensive rating does, although I have done that in the past. Perhaps I should just stick with that?
-Good News for the 76ers and Bad News for the Magic -- although other stats-heads would likely tell you a similar story.
-The Bobcats (I know I said Hornets....gimme a break) are consistent -- and therefore consistently sub-par. The top of a 95% confidence interval maxes out the H...Bobcats at ~41 wins.
Finally - if anyone's interested, I can keep updating NBA league-wide win probabilities (which are probably more accurate than the expected output from my point ratings).
Praise for The Basketball Distribution:
"...confusing." - CBS
"...quite the pun master." - ESPN
win-probabilities
Statistically, we can try to estimate a team's overall win% against an average team, and say that's their adjusted Win% (similar to Ken Pomeroy's Pythagorean win%). But this is only part of the picture.
Here I have used my consistency and adjusted ratings to predict home and away win probabilities for every NBA matchup. Instead of predicting how a team would fare against an average team, I predict how they fare on average against every team.
Here are the results (the home teams are the rows, the away teams are the columns).
I plug the following into the NormDist function:
value=Rating(hometeam)-Rating(awayteam)+HomeCourtAdv
mean=0
standard deviation=sqrt(team1consistency^2+team2consistency^2)
(^this estimates overall standard deviation of the two teams' performance, assuming a covariance of zero.)
cumulative?=1
Speaking of which: Some of you may have thought in the past, "This guy doesn't plug stuff into the NormDist function correctly!" And you would be correct. Technically, I should plug in a value of zero and a mean of an estimated point margin. But in order to find that team's win probability, I would do 1-Normdist(0,est. margin). But this requires more typing, so I use the equivalent, Normdist(est. margin,0).
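For those who prefer code to spreadsheet cells, here's a minimal sketch of the recipe above in Python. The function just wraps the normal CDF; the default home-court advantage and any ratings/consistencies you feed it are made-up illustrative numbers, not my actual values.

```python
from math import erf, sqrt

def win_probability(home_rating, away_rating, home_sd, away_sd, home_court_adv=3.0):
    """P(home team wins), per the NormDist recipe above."""
    margin = home_rating - away_rating + home_court_adv  # expected point margin
    sd = sqrt(home_sd**2 + away_sd**2)                   # combined SD, assuming zero covariance
    # Normal CDF of the expected margin about a mean of 0 --
    # the same thing as 1 - NormDist(0, margin, sd, 1)
    return 0.5 * (1 + erf(margin / (sd * sqrt(2))))
```

Two evenly matched teams with no home-court edge come out at exactly 50%, as they should.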
Recency....
I think I need to adjust my ratings for recency! ( http://tinyurl.com/BBstatsNBA )
Here's the sample of the games on Nov. 26th
(Predictions->Games->Updated Predictions)
Predictions (before games) on 11/26:
DENVER>CHICAGO by 3.4
PHOENIX>LA CLIPPERS by 7.7
LA LAKERS>UTAH by .5
MEMPHIS>GOLDEN STATE by 4.1
PORTLAND>NEW ORLEANS by 1.3
ORLANDO>CLEVELAND by 9.8
CHARLOTTE>HOUSTON by 2.8
BOSTON>TORONTO by 8.9
MILWAUKEE>DETROIT by 6.2
MIAMI>PHILADELPHIA by 9.8
INDIANA>OKLAHOMA CITY by 3.3
SAN ANTONIO>DALLAS by 6.3
Actual spreads on 11/26 (*=predicted correctly):
*DENVER>CHICAGO by 1
*PHOENIX>LA CLIPPERS by 8
UTAH>LA LAKERS by 6
*MEMPHIS>GOLDEN STATE by 5
NEW ORLEANS>PORTLAND by 19
*ORLANDO>CLEVELAND by 11
*CHARLOTTE>HOUSTON by 10
*BOSTON>TORONTO by 9
DETROIT>MILWAUKEE by 14
*MIAMI>PHILADELPHIA by 9
OKLAHOMA CITY>INDIANA by 4
DALLAS>SAN ANTONIO by 9
Updated spreads on 11/26 (after inputting actual spreads):
*=retrodictively correct
*DENVER>CHICAGO by 3.1
*PHOENIX>LA CLIPPERS by 7.6
*UTAH>LA LAKERS by .4
GOLDEN STATE>MEMPHIS by 1.6
New Orleans @ Portland = tie
*ORLANDO>CLEVELAND by 9.9
*CHARLOTTE>HOUSTON by 3.6
*BOSTON>TORONTO by 9
*DETROIT>MILWAUKEE by 1.4
*MIAMI>PHILADELPHIA by 9.9
INDIANA>OKLAHOMA CITY by 2.3
SAN ANTONIO>DALLAS by 4.2
My very own ratings!
I have finally done something I've been wanting to do for a long time: make my own ratings system! I have found a source of easily-updated data, and a way to VERY quickly update my ratings!
Here's how it goes, as of games through 11/21.
To do list: adjust for tempo*, adjust for recency.
Adjusting for recency will have to be well-thought out...perhaps finding what weights most accurately predict more recent games/etc.
Tempo -- I'm not sure I'll be able to do this. To add this to my data set will most certainly be a pain, and I'm not quite sure that efficiency margin is a better measure of team quality than point margin -- or vice versa (as I have discussed previously -- relating to NCAA ball).
Turning 4-factors into efficiency (and vice versa) - Part I
I used the definitions of the 4 factors (and other statistics) to derive a formula that gives us Points and Points Per Possession from only the following stats:
-eFG%
-OR%
-TO%
-FTR(%)
-FT%*
-FG%*
I like to just use the 4 factors (as they're found on the game plan page of Ken Pomeroy's team pages) -- we can estimate FT% and FG% depending on how far we are into the season. FT% would be estimated by the team's average FT% (or the league's FT%), and FG% would be estimated by: [the team (or league) ratio of average FG%/average eFG%] x [Game eFG%].
So here's the formula otherwise:
To get points, you simply add FT points and FG points. To get efficiency of course, we add the two and divide by possessions played.
In the upcoming weeks, I'll be using Ken Pomeroy's estimates for efficiency to predict 4-factors according to this method. Later, I'll reverse the process (in hopes of getting a better picture of how teams control one another's four factors to produce the resultant efficiency).
In case you were wondering: the large bracketed term equals Field Goals Attempted.
In the first formula, we simply multiply FGA by (eFG% x 2), as eFG%=Points per Field Goal Attempted/2.
In the second formula, we simply multiply FTR (FTA/FGA) by the FGA term to cancel the FGAs out and give us FTA. Then FTA are multiplied by FT% to give us total points from free throws.
The large bracketed term was derived like so:
Possessions=FGA-OR+.44*FTA+TO
Poss=FGA-FGA*(1-FG%)*OR%+.44*(FGA*FTR%)+TO
(Poss-TO)=FGA*(1-(1-FG%)*OR%+.44*FTR%)
then divided to give us:
FGA=(Poss-TO)/(.44*FTR%+1-(1-FG%)*OR%)
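The whole derivation collapses into a few lines of Python. This is a sketch under the post's assumptions (the .44 free-throw coefficient, OR% applied to missed field goals); the function name and sample inputs are mine, not from the original spreadsheet:

```python
def points_from_factors(poss, efg, or_pct, to_pct, ftr, ft_pct, fg_pct):
    """Points scored, recovered from possessions plus the four factors (and FT%, FG%)."""
    to = poss * to_pct                                       # turnovers committed
    # FGA = (Poss - TO) / (.44*FTR% + 1 - (1-FG%)*OR%)  -- the bracketed term
    fga = (poss - to) / (0.44 * ftr + 1 - (1 - fg_pct) * or_pct)
    fg_points = fga * (efg * 2)        # eFG% = (points from FGs / FGA) / 2
    ft_points = (fga * ftr) * ft_pct   # FTA = FGA*FTR; each make is 1 point
    return fg_points + ft_points
```

Dividing the result by possessions gives points per possession, the efficiency figure the post is after.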
Redesign!
Blogger has kindly put out some new tools (and a sick new template!) that will hopefully make this blog a little less clunky.
Some current happenings in statistics land:
Check out my Carolina-Duke win probability meter. It will update incrementally as the season goes on, and is based on the probabilities derived from Ken Pomeroy's predictions, which for the '10-'11 season are currently available only by purchasing College Basketball Prospectus '10-'11.
Also, I've got a formula for estimating end-of-season raw efficiencies (where efficiency is points scored per 100 possessions) based on two variables:
1) The number of games a team has played
2) The team's current raw efficiency average (offensive or defensive)
Where g=games played, solve this: x=.141*LN(g)+.466
Where x gives you the weight a team's current average should have compared to the league's average. i.e.
Est. Final Avg= x*Current Avg + (1-x)*League Avg
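As a quick sketch in Python (the .141 and .466 coefficients come from the formula above; the efficiency numbers in any example call are made up):

```python
from math import log

def estimated_final_avg(games_played, current_avg, league_avg):
    """Estimated end-of-season raw efficiency from a team's current average."""
    x = 0.141 * log(games_played) + 0.466   # weight on the team's own average
    return x * current_avg + (1 - x) * league_avg
```

After one game, x is just .466, so the estimate leans toward the league average; the weight on the team's own numbers grows as more games are played.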
Have fun and be safe!
About Me
- Nathan
- I wish my heart were as often large as my hands.