League Points

Bloody Marvelous (joined Feb 7, 2015 -- Gouda, The Netherlands)
This is always a nice topic for discussion, and I've been collecting and reworking existing formulas for ages, trying to come up with something that works for me. I'm happy to say that I finally cracked it :).

First let me start by listing some of the formulas that have been suggested and that I've found along the way. (Sorry TexRex, I didn't include your points system because it depends on what happens during the game itself, which means there are too many variables to convert it to a formula.)

I've made graphs of the points distribution for the following 4 games:
  1. $150 Freezeout with 24 players (Prize-pool: $3,600)
  2. $50 R+A with 24 players (Prize-pool: $3,600)
  3. $75 Freezeout with 24 Players (Prize-pool $1,800)
  4. $75 Freezeout with 12 Players (Prize-pool $900)
I'll start with the simple formulas:
  • Points = Cash
    This is very straightforward. Just add however much money the players have won, and that's the points they accumulated. Only ITM finishes receive points (obviously), and the points curve is identical to the payout curve. The bigger the Prize-pool and the higher you finish, the more points you get. Rank, Players, Buy-ins, Rebuys, and Add-ons are all factors in this simple formula. I haven't graphed points here, since they mimic what you're paying out to your players.

  • Points = Players - Rank + 1 (by Pltrgyst)
    This is a linear points system where last place finish receives 1 point, and every place you finish higher gets you an additional point. First place finisher receives as many points as there were players in the game. Buy-ins aren't a factor in this formula, so games 1-3 have an identical points spread.

    Linear.png
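For anyone who wants to experiment, here's a minimal Python sketch of the linear formula above (the function name is mine):

```python
# Pltrgyst's linear formula: Points = Players - Rank + 1.
def linear_points(players: int, rank: int) -> int:
    """Last place gets 1 point; the winner gets `players` points."""
    return players - rank + 1

# 24-player field: winner scores 24, last place scores 1.
scores = [linear_points(24, r) for r in range(1, 25)]
```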
Then the more complex formulas which apply a curve:
  • Points = LN((Players + 1) / Rank) (by Dr. Neau)
    This is an oldie by Dr. Neau. Probably one of his first attempts to apply a curve to the points distribution using Natural Logarithms. The curve is about halfway between the linear formula above, and the curves below. Last place points decrease as the fields get bigger. Dr. Neau has stepped away from this formula since, and created a new one which I'll go into later. Again, since Buy-ins don't factor in, games 1-3 are identical when it comes to the points.

    Dr. Neau's LN.png
  • Points = 10 * SQRT(Players / Rank) - 5 (by bpbenda*)
    In this formula the points for last place are frozen at 5, regardless of the number of players. Points increase in a curve dictated by the 1/SQRT(Rank) in the formula. Extra points are awarded for larger fields by the square root of the field size ratio. Buy-ins don't factor in, so graphs 1-3 are identical.

    * I'm crediting bpbenda but am not sure he created the formula. Feel free to let me know if I should change the credit on this.

    CT.png
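The two curved formulas above can be sketched the same way (function names are mine; LN is the natural logarithm):

```python
import math

def ln_points(players: int, rank: int) -> float:
    """Dr. Neau's LN formula: LN((Players + 1) / Rank)."""
    return math.log((players + 1) / rank)

def bpbenda_points(players: int, rank: int) -> float:
    """bpbenda's formula: 10 * SQRT(Players / Rank) - 5."""
    return 10 * math.sqrt(players / rank) - 5

# In bpbenda's version, last place is pinned at 5 points for any field size,
# since Players / Rank is 1 when Rank equals Players.
```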
The following formulas add to the complexity by adding the buy-in or the Prize-pool as a factor in how many points you receive.
  • Points = SQRT(Prizemoney) / Rank^(3/5) (by PocketFives.com)
    The 1/Rank^(3/5) dictates the shape of the curve, and the square root of the Prize-pool dictates the points boost: 4x the Prize-pool gives 2x as many points, which at a fixed buy-in also means 4x as many players gives 2x as many points (observable in the graph, where $150 / 24 players gets twice as many points as $75 / 12 players). Points for last place decrease as the number of players increases. Since the size of the Prize-pool is identical for games 1 and 2, the graphs are identical as well.

    Simplified version:
    Points = SQRT(Players) / Rank^(3/5) if you only run freezeout tournaments with identical buy-ins.

    P5s.png
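A sketch of the PocketFives formula, taking Prizemoney to mean the total prize pool (the function name is mine):

```python
import math

def pocketfives_points(prize_pool: float, rank: int) -> float:
    """PocketFives: SQRT(Prizemoney) / Rank^(3/5)."""
    return math.sqrt(prize_pool) / rank ** 0.6

# Game 1 from the post: $3,600 prize pool, winner gets sqrt(3600) = 60 points.
# Quadrupling the prize pool doubles every finisher's points.
```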


  • Points = 10 * SQRT(Players / Rank) * (1 + LOG(Buy-in + 0.25)) (by Pokerstars)
    The points curve is dictated by 1/SQRT(Rank). The points are boosted for larger fields by the square root of the ratio of players. The points boost for higher buy-in tournaments is determined by the logarithm section. The Prize-pool doesn't factor in, so even if there are rebuys or add-ons, the points stay the same. Last place points are not affected by the field size, only by the amount of the buy-in. Pokerstars only awards points to the top 15% finishes, but I applied the formula to all players in this graph.

    Simplified version:
    Points = SQRT(Players / Rank) if buy-ins are identical for all your tournaments.

    PS.png
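A sketch of the Pokerstars formula; judging by the point totals quoted later in the thread, LOG here is base-10 (the function name is mine):

```python
import math

def pokerstars_points(players: int, rank: int, buy_in: float) -> float:
    """Pokerstars: 10 * SQRT(Players / Rank) * (1 + LOG10(Buy-in + 0.25))."""
    return 10 * math.sqrt(players / rank) * (1 + math.log10(buy_in + 0.25))

# Game 1 from the post ($150 buy-in, 24 players): 1st place scores about 155.63.
```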


  • Points = Buy-in * SQRT(Players / (Buy-in + Rebuys + Add-ons)) / (1 + Rank) (by Dr. Neau)
    This formula has been adopted in one form or another by many poker leagues, and it's the only formula I've come across which deducts points for having to rebuy or add on to your chipstack. The curve is determined by the 1/(1+Rank) segment of the formula, and point boosts are given for larger fields and higher buy-ins at the square root of the ratio, like in many of the formulas above. The baseline for your points is the buy-in, and points are deducted for rebuying or adding on at the square root of the ratio of your total spend to the buy-in: rebuy twice and add on for the amount of the buy-in, and your points are halved. The size of the Prize-pool isn't factored in. Points for last place decrease as the field size increases.

    Simplified versions:
    Points = SQRT(Players * Buy-in) / (1 + Rank) if you don't have rebuy tournaments.
    Points = SQRT(Players) / (1 + Rank) if all your tournaments have identical buy-ins.

    Dr.Neau.png
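A sketch of Dr. Neau's formula, where Rebuys and Add-ons are the individual player's own spending, per the description above (the function name is mine):

```python
import math

def dr_neau_points(players, rank, buy_in, rebuys=0.0, add_ons=0.0):
    """Dr. Neau: Buy-in * SQRT(Players / (Buy-in + Rebuys + Add-ons)) / (1 + Rank)."""
    return buy_in * math.sqrt(players / (buy_in + rebuys + add_ons)) / (1 + rank)

# Rebuying twice and adding on for the buy-in amount quadruples your spend,
# which exactly halves your points (sqrt of 1/4).
```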
So here's where my compulsive need to tinker with these formulas comes in. There are elements of Dr. Neau's, bpbenda's and Pokerstars' formulas that I love, and there are elements that I'm not too fond of. What I was looking for is a formula that doesn't only punish you for rebuying, but rewards you when others have to rebuy. This meant that for me the size of the Prize-pool should be a factor in the points.

I reworked Dr. Neau's formula to incorporate the size of the Prize-pool, and came up with the following:
  • Points = Prizemoney / SQRT(Players * (Buy-in + Rebuys + Add-ons)) / (1 + Rank)
    The characteristics of this formula are identical to Dr. Neau's formula except that I'm not using the buy-in as the baseline, but the average amount spent per player.

    Dr.Neau rwrkd.png
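The reworked formula as a sketch (the function name is mine; as above, Rebuys and Add-ons are the individual player's spending, while the Prize-pool reflects everyone's):

```python
import math

def reworked_neau_points(prize_pool, players, rank, buy_in, rebuys=0.0, add_ons=0.0):
    """Rework: Prizemoney / SQRT(Players * (Buy-in + Rebuys + Add-ons)) / (1 + Rank)."""
    spent = buy_in + rebuys + add_ons  # this player's total spend
    return prize_pool / math.sqrt(players * spent) / (1 + rank)

# With no rebuys/add-ons the prize pool is Players * Buy-in, and this
# collapses to Dr. Neau's freezeout form: SQRT(Players * Buy-in) / (1 + Rank).
```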
I was relatively content with this formula, except that last place finish points decreased as the field size increased, and the spikes were too pronounced for players who didn't have to rebuy or add-on. In another discussion it was also pointed out that, while there may be some truth to the experience level of the field increasing with the buy-in, that difference isn't nearly as large as the square root of the buy-in ratio.

That's when I came across bpbenda's formula, which was elegant in its simplicity. The points for last place were the same regardless of the field size, and the curve of the graph was similar. It was the points for last place that drew me to this formula, because I believe you don't play any better finishing last out of 10 than finishing last out of 1,000. Last is last. However, there was no provision for buy-ins or rebuys in the formula. So reworking ensued:
  • Points = (10 * SQRT(Players / Rank) - 5) * SQRT(Prizemoney / Players / SQRT(Buy-in + Rebuys + Add-ons))
    Yeah, I basically added part of Dr. Neau's formula to bpbenda's, and threw in an extra square root to reduce the points boost for higher buy-ins and flatten out the effects of rebuys and add-ons.

    CT rwrkd.png
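The reworked bpbenda formula as a sketch (name mine, same per-player reading of Rebuys and Add-ons):

```python
import math

def reworked_bpbenda_points(prize_pool, players, rank, buy_in, rebuys=0.0, add_ons=0.0):
    """Rework: (10 * SQRT(Players / Rank) - 5)
       * SQRT(Prizemoney / Players / SQRT(Buy-in + Rebuys + Add-ons))."""
    base = 10 * math.sqrt(players / rank) - 5   # bpbenda's curve
    spent = buy_in + rebuys + add_ons           # this player's total spend
    return base * math.sqrt(prize_pool / players / math.sqrt(spent))

# Game 1 ($3,600 pool, 24 players, $150 buy-in): last place scores about 17.50.
```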
Piece of cake. Now I'm happy. I'll sit here being happy. ... But these points are really close to Pokerstars' points, except the buy-in boosts are smaller in Pokerstars' formula. I want that. I need that. OK, let's see what we can do with Pokerstars' system here:
  • Points = 10 * SQRT(Players / Rank) * (1 + LOG(Prizemoney / Players + 0.25))^2 / (1 + LOG(Buy-in + Rebuys + Add-ons + 0.25))
    Yes, I'm probably making everything way more complicated than it really needs to be, but this is exactly what I was looking for. It has the fixed points for last place, the curve, the points boost for larger fields, and the reduced points boost for higher buy-ins, and it's got the punish/reward system for rebuys and add-ons. Did it really need a square root, a power, and two logarithms in a single formula? Well, yes, I guess it did for me.

    PS rwrkd.png
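And the final formula as a sketch (name mine; as with the Pokerstars formula it builds on, LOG is taken as base-10):

```python
import math

def final_points(prize_pool, players, rank, buy_in, rebuys=0.0, add_ons=0.0):
    """Final rework: 10 * SQRT(Players / Rank)
       * (1 + LOG10(Prizemoney / Players + 0.25))^2
       / (1 + LOG10(Buy-in + Rebuys + Add-ons + 0.25))."""
    curve = 10 * math.sqrt(players / rank)
    boost = (1 + math.log10(prize_pool / players + 0.25)) ** 2
    penalty = 1 + math.log10(buy_in + rebuys + add_ons + 0.25)
    return curve * boost / penalty
```

Note that for a plain freezeout, Prizemoney / Players equals the buy-in, so the boost and penalty cancel down to the Pokerstars multiplier exactly, and last place scores the same fixed amount regardless of field size.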
So there it is. My journey to the new formula for my poker league.
 
Bloody, thanks for this! Indeed I've tried to measure what actually happens during a game. For those wondering, my game has had from 12 to 27 players. The buy-in is constant, and we don't have re-buys.

One thing I think this clearly demonstrates is that there is more than one way to do this. Since almost every game (by which I don't mean a single tournament, but a group of tournaments by a particular group, at a fixed or rotating location) is unique in some way, someone could pick something based on their unique circumstances. Best of all, no one can really say your method is invalid -- just different!

Here's some observations I'd like some feedback on.

1 -- How do you deal with a late player who comes after another player has been knocked out?
It seems with all of these formulas, there is a reward at the bottom for showing up late. That to me is not the kind of behavior I want to encourage. Example: Players A through X (since you primarily used 24) -- A-W start the tournament. After 30 minutes (for us, halfway through Round 2), W is KO'd. At the start of Round 3, X shows up and enters. X goes all in on his second hand and loses. Yet, he finishes ahead of W because W was out before X started.

Based on my experience, showing up late isn't an advantage. I've been tracking this for about 40 tournaments, and a player who shows up late is far less likely to wind up in the money than one who is there on time. Maybe this is just my perception, but I see a rating system that puts X ahead of W as inherently flawed, because it doesn't measure anything other than where they finished. On average, we pay out just under 27% of the field, but late arrivals get in the money far less than 27% of the time. Here are some actual numbers: of the last 208 payout spots, only 4 went to players who showed up late (less than 2%), though that number by itself doesn't mean much. Of the last 65 players to show up late, only 4 made it into the money (6.15%). How do I define late? Arriving after the first hand is played. I can see the argument that this is a small sample size, but those gaps seem far too large to argue that showing up late should help someone.

How I've dealt with this issue -- My system doesn't rate players on where they finish unless they finish high enough to be among the better players. In theory, my system doesn't distinguish between the #11 finisher and the #30 finisher in a tournament based only on where they finish. They could distinguish themselves in other ways (by knocking out another player, for example).

2 -- What are most people trying to measure?
There is one formula you didn't include that I looked at: Card Player magazine's. That's actually the one I based my system on, though I included things they did not. It based points on where the top 5 players finished and how many players entered. When I started our "league" almost 2.5 years ago, I looked at Card Player and one of the Dr. Neau formulas. I liked them both, but sensed they were each trying to measure something I wasn't (and I could be completely wrong about this). It seemed like Dr. Neau was trying to identify the players most (or least) deserving of getting to a championship game, while Card Player was trying to rate thousands of players based on things easily measurable: field size and top-of-the-heap finishes. They approached things from opposite ends of the spectrum. Dr. Neau measured from the bottom up, covering everyone but for a limited number of players; Card Player measured from the top down without going very deep, with a system for an unlimited number of players. I liked elements of both, but didn't think either by itself got me what I wanted.

What I was trying to measure -- I was trying to determine, based on actual performance, who the best players were over 12 games (1 year), for those who played in at least 7 of those games, and to identify the player who performed the very best. The games are almost identical -- same buy-in (though some are bounty games), no re-buys. The maximum # of players is 30, and the fewest I've had is 12. I think the same buy-in and no re-buys make measuring this easier. In some games, players can get up to a 10% variation in starting chips (1-2 games this year, though I didn't do that the first 2 years).

Why it matters what you are trying to measure -- This seems obvious to me, but if you are trying to determine the best player over a given time with a finite number of players, you will likely do something different than if you are trying to determine which players in the group should make some type of championship game.

All formulas (or methods, since you've probably correctly stated that my system can't really be reduced to a formula) involve some level of value judgment about what should count the most, and that judgment is somewhat arbitrary whether people want to admit it or not.

Bloody, I loved your explanation of the Poker Stars system -- that it had what it needed for you!

I'm fascinated with systems that try to rate players. I always try to see if there is something I can learn from a system that might help improve my own. I think knowing what someone is trying to measure and why helps evaluate whether they have something that can improve my system.
 
No system is perfect. Faults can be found in any points system you can think of.

Everyone has different criteria in what they want to measure (as you mentioned).

Personally I prefer a single formula which can be universally applied to all players as opposed to inputting individual data per player per tournament. The trick for me was finding the right balance in evaluating the performance of the players based on a few basic and easily measured values.

I personally don't see why eliminating a player should get you extra points. The player you've eliminated may have been crippled through great play by other players, and you simply got the right pot odds to call his all-in with ATC. This is how most players are eliminated, because they are shortstacked and need to widen their range to prevent being blinded off. That doesn't make you a better player than the players who took most of his stack, yet you do reap the rewards.

I also believe that finishing 11th out of 30 means you played better than finishing 7th out of 12. That's why I prefer to award points to all players, rather than cutting them off below a set rank.

Points for making the final table also seems oddly random. If your final table seats 10 players and you started with 12, what is the achievement in making the final table? If you have 30 players I can see that it would be an accomplishment, but the points are awarded regardless of the number of players who entered.

Here's some observations I'd like some feedback on.

1 -- How do you deal with a late player who comes after another player has been knocked out?
It seems with all of these formulas, there is a reward at the bottom for showing up late. That to me is not the kind of behavior I want to encourage. Example: Players A through X (since you primarily used 24) -- A-W start the tournament. After 30 minutes (for us, halfway through Round 2), W is KO'd. At the start of Round 3, X shows up and enters. X goes all in on his second hand and loses. Yet, he finishes ahead of W because W was out before X started. Based on my experience, showing up late isn't an advantage.

The advantage of showing up late in the hopes that players have already been eliminated is negligible. The graphs level out near the lower finishes (except Pltrgyst's linear and Dr. Neau's LN formulas).

To give you an idea using the example you've given, here's the advantage player X has over W by arriving after W's been eliminated and then playing like a donkey:

Pltrgyst's linear formula :: X: 2; W: 1; 1st: 24. Advantage: 4.17% of 1st place points.
Dr. Neau's LN formula :: X: 0.08; W: 0.04; 1st: 3.22. Advantage: 1.32% of 1st place points.
bpbenda's formula :: X: 5.22; W: 5.00; 1st: 43.99. Advantage: 0.49% of 1st place points.
PocketFives' formula :: X: 9.14; W: 8.91; 1st: 60.00. Advantage: 0.38% of 1st place points.
Pokerstars' formula :: X: 32.45; W: 31.77; 1st: 155.63. Advantage: 0.44% of 1st place points.
Dr. Neau's new formula :: X: 2.50; W: 2.40; 1st: 30.00. Advantage: 0.33% of 1st place points.

The advantage is hardly worth arriving late for. If player X does this throughout the season he is guaranteed to finish low in the points, whereas player W will have ample opportunity to compensate for the bad luck that got him eliminated in last place.
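The advantage figures above can be reproduced with a quick sketch (shown for three of the formulas; $150 freezeout, 24 players, $3,600 prize pool, X finishing 23rd and W finishing 24th):

```python
import math

def advantage(points, first, x, w):
    """(X's points - W's points) as a share of first place's points."""
    return (points(x) - points(w)) / points(first)

linear = lambda r: 24 - r + 1                       # Pltrgyst
bpbenda = lambda r: 10 * math.sqrt(24 / r) - 5      # bpbenda
pocketfives = lambda r: math.sqrt(3600) / r ** 0.6  # PocketFives

print(f"linear:      {advantage(linear, 1, 23, 24):.2%}")       # 4.17%
print(f"bpbenda:     {advantage(bpbenda, 1, 23, 24):.2%}")      # 0.49%
print(f"PocketFives: {advantage(pocketfives, 1, 23, 24):.2%}")  # 0.38%
```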

I added TexRex's data sheet (with the players' names removed and replaced by generic Player X names). The sheet is too complex for me to describe how the points are calculated, so it's best to experiment with it yourself.
 

I'm not in favor of awarding league points based on bounty chip collections, mostly because the bounty chip is often won by a weaker player after the loser's stack has already been decimated, often by a stronger player. In this scenario, the stronger player is not rewarded for significantly slashing the loser's stack, while the weaker player is rewarded for finishing off the wounded player (usually an easier proposition). It's just not an accurate or consistent measure of player strength. Over the long term it may indicate 'level of aggression', but that's about it.

We track collected bounty chips and award them in different ways, but they have no bearing on the league points standings.
 
Bloody, my point about players showing up after someone is KO'd is that they score better even if they actually play worse. And getting KO'd on the second hand isn't necessarily playing like a donkey. A player can have a great hand but lose to a better one. If you are trying to find reasons to eliminate players from the last game, I'm just not sure that's a good criterion, but that's my opinion.

I agree that finishing 11th of 30 is better than finishing 30th of 30. It's harder to compare players who finish close together, though -- like 27/28 or 21/22. Especially with multiple tables, other factors might be the reason for those finishes. But if someone does measure everyone by order of finish, over time the better players post higher finishes and late arrivals hopefully even out. I just don't know if that happens effectively over only 12 games. If one player consistently finished 11-15 and another consistently in the 20s, I agree the higher finisher is better, but he's probably nowhere near the top of the heap, which is what I'm trying to measure.

I get the theory that knocking someone out may not count for much, but I can tell you some players are very good at KO'ing other players and some are not. And there are some good players who are good at it and some good players who are not good at it. However, in my formula, while it counts, it doesn't count for as much as anything else other than points themselves. I've collected this info enough to believe that one guy swooping up the leftovers of a weakened stack simply cannot account for the very wide differences between the number of KO's players accumulate. It does seem to me there is skill involved. Last year one player accumulated more than 2x as many KO's as any other player. That player was our top player. He would have been our top player if KO's hadn't counted. I checked and if you tossed out what players scored for KO's the top 12 would be unaffected, and I didn't check all the way down to 42.

I chose to reward only measurable, top tier performance for each game. It certainly isn't the only way to do it, but I think it measures our top players really well.

Bloody, if only 12 play and 10 make the final table, yes, they get extra points, but not as many as you might think. Every stat like that is compared to the overall total in the category. Over the course of 12 games, we have at most 119 final table appearances (since one tournament has only 9 slots, and others can end up with only 9 when multiple players are KO'd on the bubble). Those 119 slots count a lot more against 300 total appearances than against only 144. Same with the extra points for our In the Points finishes (top 7): there are 84 of those slots through the year, and they count more when there are 300 total spots as opposed to 144. My system is designed to reward most the things that are the hardest to do. Trying to figure out which things are really harder to do, and therefore require more skill, is a challenge.
 
... in my formula, while it counts, it doesn't count for as much as anything else other than points themselves. ... I checked and if you tossed out what players scored for KO's the top 12 would be unaffected, and I didn't check all the way down to 42.

Ah, so what you're saying is that they do receive points, but those points are low compared to points for finishing higher, and over the course of the season don't really influence the rankings as much. Kinda like a player who arrives a bit later might get an insignificant amount more points if someone's been knocked out already.

I guess the only difference between these two is that you manually enter these insignificant amounts of points for guys swooping up the leftovers of a weakened stack, while the late player automatically gets an insignificant amount of extra points because of how the formula works.

... Bloody, if only 12 play and 10 make the final table, yes, they get extra points, but not as many as you might think. Every stat like that is compared to the overall total in the category. Over the course of 12 games, we have at the most 119 final table appearances (since one has only 9, but others have only 9 when multiple players are KO'd on the bubble). ...

So over the course of 11 games the top 10 get an extra point, and for one game only 9 will, because you have one seat less at the table. It seems very random to base your points on the number of seats you decided to place around the table. I know it's only one point, and it won't account for much, but the principle of it is just odd. Especially since your system seems to be all about principle.

It also seems a very convoluted way to award more points to the top 10 (or 9) and name it final table points.

Players basically receive:
1st: 35 (34 + 1)
2nd: 22 (21 + 1)
3rd: 14 (13 + 1)
4th: 9 (8 + 1)
5th: 6 (5 + 1)
6th: 4 (3 + 1)
7th: 3 (2 + 1)
8th-10th (or 9th): 2 (1 + 1)
11th (or 10th) and onward: 1

Here's what that looks like when graphed (I also implemented the multiplier of (Players / 10)):

upload_2015-5-24_6-16-22.png
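Here's my reading of that schedule as a sketch (names are mine; TexRex clarifies further down that his actual system multiplies participation points by finish points rather than adding a +1 bonus, so treat this as the interpretation behind the graph, not his system):

```python
# Finish points follow a Fibonacci-like sequence; everyone below 7th gets 1.
FINISH = {1: 34, 2: 21, 3: 13, 4: 8, 5: 5, 6: 3, 7: 2}

def texrex_points(players: int, rank: int, ft_seats: int = 10) -> float:
    base = FINISH.get(rank, 1)            # 1 participation point below 7th
    bonus = 1 if rank <= ft_seats else 0  # +1 for making the final table
    return (base + bonus) * players / 10  # scaled by the field-size multiplier
```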


... If you are trying to find reasons to eliminate players from the last game, I'm just not sure that's a good criteria, but that's my opinion.

... If a player consistently finished 11-15 and another consistently in the 20s, I agree the higher finisher is better, but he's probably nowhere near the top of the heap, which is what I'm trying to measure.

... I checked and if you tossed out what players scored for KO's the top 12 would be unaffected, and I didn't check all the way down to 42.

I don't know, but it sounds like we're trying to measure the same thing. If you're looking only at the top 12 players at the end of the league, you're saying that these are the 12 best players in the league. Whether or not you want to hold a last game just for those 12 is not relevant for the scoring system you're using.

I'm really curious about your 2014 results. If you compare the results in your sheet against the various formulas, is there any difference in the top 12 players? We've already established that KO's don't influence these numbers, so they are only relevant for the lower finishes. I'm guessing final table finishes aren't influencing these numbers much either, since the top finishers already get far more points from the Fibonacci-style finish points. Which would suggest that it's the top 7 finishes that really determine who ends up in the top 12 at the end of the season.

The question then becomes: are you happy with the curve generated by the Fibonacci sequence? If you are, then that's all you really need. I can see it may be of interest to know how many knockouts someone's made over the season, or how many times he's finished in the top 10 or 9 (though I do believe you should choose between the two), but I don't see the added value of awarding league points for it.
 
I've had a look at Cardplayer.com's points system. This system is all about multiplying values from various lookup tables, rather than a single formula.

I've made a spreadsheet for this system for those who want to use it.

I've made a few alterations in the lookup tables so the multipliers are applied to homegame situations.

Buy-ins have been divided by 100 (so the $400 became $4 minimum buy-in).
Players have been divided by 10 (so the progression is now 1, 6, 7, 8 instead of 1, 60, 70, 80).

The original numbers are included in the Lookup Tables tab so you can copy/paste those into the columns with adjusted values if you want.

Here's the points graph for the 4 games from the first post:
Games 2 and 3 receive the same points.

upload_2015-5-24_7-48-43.png
 

In addition I've made a sheet for the WSOP POY points.

The system is similar to Cardplayer.com's system, as in that it uses lookup tables and multipliers.

Since the WSOP points system is based on large fields the numbers can be a little skewed in the low finishes when applied to home games, simply because the number of players at the final table is larger than 20% of the field. You'd need at least 5 tables for this system to really work (or 10 when playing 6-handed).

The WSOP only hosts 6- to 9-handed games, so it cannot be applied to 10-player final tables.
  • 9-handed: Holdem & Omaha
  • 8-handed: Stud (or mixed games including Stud, like HORSE)
  • 7-handed: Single Draw (or mixed games including Single Draw)
  • 6-handed: Triple Draw (or mixed games including Triple Draw, like 8/10 game and Dealer's Choice)

Here too, I've adjusted the values for the buy-in (1/100) and players (1/10) so it's better applicable to home games.

The WSOP points graph looks like this (games 2 & 3 are again identical):

upload_2015-5-24_11-18-5.png
 

Bloody, I’m answering your questions starting at the top. My general comment is that you do not understand the system at all.

Also, I misstated which rating system I looked at: it was Bluff magazine, not Card Player. Sorry for that error -- I'd heard a lot more about Card Player, but when I was looking I didn't know they had a system.

My system rewards most those things that are the most difficult to do. Consider that our top 3 finishers last year won 3, 2, and 1.5 tournaments. The next 4 all won 1 tournament. 3 people chopped for a tournament win. Those 10 were the only 10 of 42 that got ANY score for a Tournament Win (TW) because they were the only people who had a TW or a partial TW.

The hardest things to do, in order of hardest to easiest, are Tournament Wins, In the Points finish (top 7), Final Table appearances (top 10), KO’s, and then Points.

You don’t understand our points at all. Participation points (that measures total participants in a tournaments) are multiplied by finish points, not simply added. The lowest finish point is 1. If we had 30 players, the highest points for outright 1st place would be 102. Player 8 and below all get participation points only.

Here is an accurate score based on 30 players (participation points = 3):
Total points in tournament = 327
1st = 34 x 3.0 = 102 (.312 of total points, or what a player would score for this tournament)
2nd = 21 x 3.0 = 63 (.193 of total points, or what a player would score for this tournament)
3rd = 13 x 3.0 = 39 (.119 of total points, or what a player would score for this tournament)
4th = 8 x 3.0 = 24 (.073 of total points, or what a player would score for this tournament)
5th = 5 x 3.0 = 15 (.046 of total points, or what a player would score for this tournament)
6th = 3 x 3.0 = 9 (.028 of total points, or what a player would score for this tournament)
7th = 2 x 3.0 = 6 (.018 of total points, or what a player would score for this tournament)
8th – 30th = 1 x 3.0 = 3 (.009 each of total points, or what a player would score for this tournament)
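As a quick check of the arithmetic above (a sketch; the FINISH mapping is mine, taken from the listed finish points):

```python
# TexRex's scoring: finish points multiplied by participation points
# (players / 10), with each player's share being their points over the total.
FINISH = {1: 34, 2: 21, 3: 13, 4: 8, 5: 5, 6: 3, 7: 2}

players = 30
participation = players / 10                # 3.0 for a 30-player field
points = {r: FINISH.get(r, 1) * participation for r in range(1, players + 1)}
total = sum(points.values())

print(total)                                # 327.0
print(round(points[1] / total, 3))          # 0.312 for the winner
```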

We don’t award “points” for a KO, finishing in the points, making the final table, or winning a tournament. All of those are recorded by feat (either you did it or you didn’t do it). At the end of the year, there is a total of those categories, and a person is given a percentage score based on their percentage of the total. In parenthesis above, I gave the percentage score for points for that one tournament. However, in the end of the year scoring, a players total points are measured as a percentage of the total points awarded. In 2014, the highest score for points was .082 of the total and .067 of the per game average.

Points are accumulated throughout the year. At the end of the year, the total number of points awarded is calculated, and each player gets a percentage of that total. Since every player's score is based on a percentage, they are automatically compared to every other player's performance.

Overall rankings are based on a player's total percentage score. Each category is scored as a season total and again as a per-game average, and those 10 scores are summed. The highest overall score finishes 1st.

Comparing our #2 and #3 players: I was #3. The guy who finished #2 beat me primarily in 2 categories -- KO's and Tournament Wins -- but his TW score was decisive. He got his 2 wins in 8 games while I got my 1.5 in 12 games, so he scored much higher than me in that category: 33% more on TW and twice as much on TW/Game. Those two scores made up 20% of the total score.

Perhaps the best comparison is our #4-7 – all won 1 tournament. #4 beat #7 in all other categories though. In other words, it didn’t come down to a single factor, and it just so happened that none of the top 12 would have changed if you threw out KO’s. I said I didn’t check further, so there might have been a difference further down. Since players are listed alphabetically and not by rank, it takes time to go check every category, so I only checked the top 12 because I got tired of looking.

We don’t have a championship type game. We are not trying to eliminate anyone from an end of the year tournament. Some that do eliminate players for a championship game are basing it on how players finish in all prior tournaments, unless they leave some performances out (which strikes me as completely artificial). If part of that does not calculate the effect of showing up late, it could make a difference. In my system, if two players were equal in all other categories except KO’s (or any other single category), that category would be decisive between those two players.

In the first two years, our #1 players have run away with things. The first year, #1 won 2 tournaments, but finished 2nd in 3 others. The other players who won two didn’t perform nearly as well in other tournaments.

Final Table is a separate category. It may seem random, but twice last year we had 2 players KO’d on the bubble, and thus had only 9 at the final table. We planned for 10, but only 9 made it. We didn’t give either player credit for making the FT because they got KO’d at different tables. Even if they had been at the same table, we wouldn’t have given the one with the most chips credit for the FT because they didn’t make the FT.

Overall, the system is designed to separate players by what they actually do in the tournament, not what they didn’t do.

Starting this year, we have one tournament a year with only 9 FT slots available. You can call it random, but one of our key players died. We honor him with a memorial tournament where a place at the FT is reserved for him. Each player who makes the FT is given a 10,000 bonus for making it to divvy out Mike’s chips. So I don’t have trouble making my mind up. That was done to honor that player.

You are correct that the top 7 really determines how players finish. We are trying to determine who wins our Top Gun (best overall player), Men’s and Ladies Players of the Year, and Top Bounty Hunter (based strictly on KO’s). Since the last one stands alone, we are really trying to find the top 2 or 3 players. If the Top Gun winner is male, then MPOY will be the 2nd best male and LPOY will be the highest scoring female. Those awards are more about bragging rights than anything else. A player who never scores in the top 7 has 0 chance to win one of those awards. Really, a player who doesn’t win a tournament is unlikely to do well enough to win since 20% of their score would be .000, and it’s hard to get a 0 as 20% of your grade and finish at the top of the class.

Now, if we had a case that was very close, any single factor could decide it. While we do rate players in the top 10, that’s only so players will know how close they are to the best awards.

You asked if I'm happy with the curve generated. It's not accurate for our system for a couple of reasons. First, it doesn't accurately reflect our point system since it doesn't show the multiple based on field size. Second, points are only 20% of a player's score. I did look at a number of scenarios, and for points alone, though I didn't graph it like you did, I'm comfortable that our system on points measures what it is intended to measure -- field size and top tier finish. However, I'm open to ideas to improve it.
 
The hardest things to do, in order of hardest to easiest, are Tournament Wins, In the Points finish (top 7), Final Table appearances (top 10), KO’s, and then Points.

You don’t understand our points at all

I understand it completely, and I think it's ridiculous to award extra points for 'in the points', 'final tables', and 'knock-outs'. Accomplishing the first two are already rewarded as part of the awarded points structure, and the last one has nothing to do with 'hardness' or even competence.

You should start looking at this with an open mind, instead of just always defending your system and insisting it's great. It isn't, and it could use a revamp.
 
I fully admit I'm having a hard time grasping your points system. I've been over the excel sheet you've sent (and was posted a few posts above) numerous times, and I still can't figure out how the points for Finishes / KO's / FT relate to one another.

I think we're both probably focusing too much on things "wrong" with each other's systems that absolutely do not matter. Apparently (if I've understood correctly), the KO's and FT points only come into play as tie breakers, and I'm very much ok with that. The point of your scoring system appears to be to give three awards per season, and you don't really care about how many points the other players have scored. The points for other players are only there to keep track of their progress towards making one of these awards over the season. I understand having 9 players at a FT if you're commemorating a dearly departed friend, and I'm sorry for your loss, but it would've been nice to know that was the reason when you mentioned it the first time.

When it comes to a formula based scoring system, you seem too focused on having a final game for the top points finishers (which, again, is not the point of keeping track of a player's performance, but may be a nice way to end a season), and on what happens low in the points for players eliminated first, in hypothetical single-game situations, without looking at the impact on a whole season. If you really think that an extra 0.22 points (using bpbenda's formula for 24 players as an example) is going to make any difference over the course of 12 games in which an average of 84.48 points is given per player, there's nothing I can tell you that will make you understand formula based curved points systems.

I'm not trying to get you to abandon your points system, if it works for you that's great, but to me it's too complex for what it's trying to do. I'm 100% certain that any of the formulas listed above would award the exact same 3 players as the system you're using, with the added benefit of being easy to implement and easy to understand for the players (if what's been happening on this forum is any indication at least). You've tried to explain how your system works and how fair it is many times, and still nobody understands or sees how. I can't vouch for your system, because I haven't been keeping track of half this data and so can't enter the results of our last season. I would need to use it for an entire new season to see the final results, and it's just too time consuming and too much work. It would be nice if we had a comparison between the final outcome of your system and the various formulas, but for that I'd need your raw data for a completed season (like finishing places per game of every player: Player 1: 1,5,3,12,dnp,11,5,27,etc. / Player 2: 4,18,7,20,2,dnp,17,17,etc. / etc.), and the final results of the season using your system.

Bluff magazine's system (they also created the points system for the WSOP) works similarly to Cardplayer.com's system.
 
BG, if you understood it completely, you would know we don't award extra points for 'in the points', 'final tables', and 'knock-outs'. We don't award points for tournament wins. You would also know that in the points scoring and points awarded based on finish measure different things. Points measures order of finish for the last 7 considering field size. In the points doesn't measure field size, but just the ability to be one of the last ones standing. Each category is scored separately, and then the scores are added together.

As for revising, I keep revising my system, and am always looking for ways to make it better.

Bloody, it takes me just a few minutes to input data from a game. I don't understand the time consuming comment. Regardless of system, you have to input data. I start by inputting the top 10 in order of finish. The rest I put in alphabetically -- easy to do since most of them are already on a spreadsheet. I go back and record each player's KO's -- 2-3 minutes tops. Then I hit "Refresh All" and let the spreadsheet do the rest. Let's say I have 30 (3 more than the most we've had). Total input time is 5-7 minutes. If we recorded every player in reverse order of finish, it might be slightly quicker, but not much, and maybe not at all. Some might input data as the tournament goes, but I do not have a set up that lends itself to that. It's also not easy to determine when I have 3 tables in 3 different rooms who was first out if more than one is eliminated at nearly the same time. And I'm honestly not sure that matters. I really believe the cream will rise to the top.

In 2014, one of the better players (finished #6) lost on the first hand of one tournament when he went all-in on the flop with the nut flush and no pairs showing. He intended to force out anyone with a set (which one guy had). He got called by a guy who also had a flush, but that guy had suited connectors and an open ended straight flush draw, and he caught his straight flush on the turn. He had another tournament where in the first round he got KO'd by a donkey catching his hand on the river after making a stupid call. #6 flopped the nut full house, and lost to a donkey who had a smaller pair, but made runner-runner for 4 of a kind. Was #6 playing like a donkey either time? I don't think so. I'm not convinced a system that rates players all the way down effectively deals with good play gone bad on rare occasions, but I agree it's a way to do things that has some validity over the long run. Of course, those two performances hurt him simply because he didn't accumulate anything other than participation points.

Devising the system involved a lot of thought, and it's hard to explain everything about it without writing a book. It's designed to measure what actually happens, then weigh those factors to determine final scoring. Weighing the scoring method can always be tricky, and I can understand debate over how much each part should count. 5 people had input and all made good points about what to factor in and how much weight to give each factor. The reason we kept KO's out of the formula was because of the perception that most players KO'd have been weakened by others before the shark moves in for the kill. It's hard to prove that's not true, but it's easy to prove some are good at KO'ing others and some are not. Example: Some players average KO'ing a player once every 8 tournaments while some average over 2.4 per tournament. That's a huge difference! I don't think that's luck.

We tried to determine what things require a measurable skill. We didn't count KO's in 2013 (first year), but did track results from 4 bounty tournaments to award our Top Bounty Hunter. Even though the data was limited, it sure seemed like some people were really good at taking players out, and some other good players were not. So in 2014, we counted KO's because after tracking it, data strongly indicated that in fact that is a skill. After tracking all KO's in 2014, I'm even more convinced. Some players are very good at it, and some are not. Here's a good question I'd ask someone. Suppose you don't have a nut hand, but believe you have the best hand. Someone puts you all-in, you believe you have the best hand, and would normally call. If that player has KO'd more than anyone else in the group, do you call? Would your answer be different if that player was not good at KO'ing players? Would you ignore that completely?

So, it's measurable, and I think a skill. How do you weigh it? Our system doesn't give it much actual weight. It gives points even less weight. Making the FT carries more weight, making the top 7 even more, and winning the most weight of all. Even though each factor counts 20% in scoring, the more of something there is, the less it counts. So, for example, #1 last year, who had the most points, got 14.6% more for points than #41 (where we had a tie); but for TW, his edge was 43.4% compared to 0 for #41.

This year, using the Fibonacci sequence will separate those who make the top 7 even more from those who don't, but it is designed to create larger distinctions between the top players.

I think (and the other 4 agree) that in the money finishes should be weighed more heavily, and that's why we count both points and in the points finishes -- it gives that factor more weight than KO's and FT appearances. We also tracked all other stats leaving out KO's and FT, and it made no difference in the top 10. Then we tracked just KO's and FT. Guess what -- the top 2 didn't change at all in order, but the next several did.

Points were the primary factor we used in 2013 (primarily based on Bluff magazine's system), but when one player would have won every award (we dropped 2 of the awards since then), I realized we needed tiered awards since someone with a dominating performance would win everything. We used different weights in 2014. In 2013, we had only 2 tables and every tournament paid out 5 places (16-21 players). In 2014, players ranged from 16-27, and we paid 5 or 6 places. That created a problem. There were significantly more points given when we paid 6. Unfortunately, we only tracked final table and order of finish of in the money places. We realized the flaws in the system, but didn't try to change it during the year. Because of how much weight we gave to winning a tournament and finishing in the money, it was easy to see that the system identified the best player, and while there could be some debate about some of the better players, the top 3 would probably have been the top 3 with any system.

For 2015, we developed the "in the points" instead of in the money and made it consistently 7. All 5 of us are convinced the changes are improvements. But alas, the best laid plans and Mr. Murphy have interfered with the planned results. I had to cancel the May tournament (my mom's death and funeral the day of the tournament and no other date that would work). Now I'm not sure whether we go from 7 games to 6 games counting since our season will now be only 11 games.

The biggest flaw I see in the system is that it doesn't track money won as well as I'd like, but no one thinks we should put that as a category. We have some players who are more profitable than some who finished above them. But at the very top, it tracks it well. So I think our system does what it's designed to do. I'd love to make it better.
 
BG, if you understood it completely, you would know we don't award extra points for 'in the points', 'final tables', and 'knock-outs'. We don't award points for tournament wins. You would also know that in the points scoring and points awarded based on finish measure different things. Points measures order of finish for the last 7 considering field size. In the points doesn't measure field size, but just the ability to be one of the last ones standing. Each category is scored separately, and then the scores are added together.

The semantics may be different, but you are indeed awarding what-ever-you-want-to-call-it (scores?) for the same things more than once (except knockouts, which shouldn't be rewarded at all). And they all get added together to determine your top player/performer/whatever list.
 
It's time consuming because I have to wait until the next season starts to use it and see how it works, and will have to wait until that season is over before I can compare results over a whole season and see who comes out on top. In this case that's probably a year and a half in the future. Hence my request for data from a finished season so I can see what the differences are in the final rankings, but if that information is not available I can't make any educated assessment about your system, and can only speculate on how efficient and balanced it is. When using formulas the level of efficiency and balance is self-evident.

For me personally, especially since it takes so much explaining and so many examples to give an idea of how it works, it's too complex in the way it calculates player performance. While I like that you are continually trying to tweak and improve it (and I agree that you should never change the points system while the season is underway), this also means it will likely become more and more complex over time.

Tracking profit/loss may be detrimental to your league. The losing players would constantly be reminded of, and confronted with their losses, and may eventually stop playing. And since you are already awarding more points for higher finishes, as you would pay more money for higher finishes, I think that this would be repeating a previously measured value. BGinGA already pointed out that Final Tables, In the Points, and Tournament Wins are also already awarded through the points curve, but you are apparently using these extra values to balance out the Fibonacci curve somehow. From a point of view of uniformity I would look at the points curve itself, rather than try and add more elements to give the players the points they deserve. Perhaps the Fibonacci sequence just isn't the best choice here, seeing how much it apparently needs to be tweaked through other scoring elements.

Of course, using a formula doesn't exclude the possibility of awarding points to only an x-amount or x-percentage of players (my preference would be a percentage rather than a fixed number), like Pokerstars awarding points only to the top 15%, or Cardplayer Magazine awarding points only to ITM finishes. You can also choose to exclude outlier data (for instance like judges' scores in sports, where the top and bottom scores are removed from the final points), or allow players to miss a certain number of games or exclude a bad run without it affecting their scores too much, like Bluff Magazine counting only the 10 highest scores out of 12. There are a large number of options available to formulaic systems that are similar to the criteria your system has for awarding points.
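The "best N of M games" idea above (Bluff-style) can be sketched in a couple of lines. This is a minimal illustration, not anyone's actual spreadsheet; the function name and the sample scores are made up:

```python
# Best-N-of-M season scoring: drop a player's worst games by summing
# only their highest `count_best` per-game scores.

def season_score(game_scores, count_best=10):
    """Sum only the highest `count_best` scores; a bad run is dropped."""
    return sum(sorted(game_scores, reverse=True)[:count_best])

# A hypothetical 12-game season with two zero-point busts:
scores = [12.0, 3.5, 0.0, 8.0, 15.0, 1.0, 9.5, 0.0, 11.0, 4.0, 7.5, 2.0]
print(season_score(scores))  # best 10 of 12: the two 0.0 games are dropped
```

The same effect is easy to get in a spreadsheet, which is probably where most leagues would do it.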
 
Bloody, you could start gathering the info needed immediately. As we've identified something we think we might possibly use, we've started gathering the info to use it. You certainly could test a partial season. I believe I sent you the 2014 season, but again, we tweaked it since we discovered flaws. By the time we realized we had a problem, we had already failed to record critical info to fix it during the season.

I don't think that tracking profit/loss may be detrimental; I think it IS detrimental! And for exactly the reasons you pointed out. That's why it only shows up on my own data and not anything posted for our group.

What is being tracked is not identical info, though it may be similar info. I suspect that using just the points system (which is 20% of the player final score) would at least allow us to determine the best players. Think of it this way. In the NFL (which I realize you might not be familiar with), when looking over the course of a decade to determine the "team of the decade," they don't look at just the W/L ratio (roughly the equivalent of points). You can look at how many division titles a team won. With 8 divisions, there are 8 of those a year. With 12 playoff teams, you can measure how often a team made the playoffs. You can measure how many Super Bowls they won. Of course, winning a Super Bowl at least means they made the playoffs. They could also look at their playoff record (KO's of other teams).

In our system, winning a tournament would count like a Super Bowl win. To do it, you first have to survive to the final table, you have to make it to the top 7, and then you have to KO at least one player.

I'm not using all the factors to make fine separations between close players who aren't near the top. I'm using those factors to distinguish between those at the top whose performances may be very close and hard to distinguish.

Our first season (2013) we had 3 players who all won 2 tournaments. It was assumed that points would cover that, so it wasn't tracked separately and no one got anything for it other than their points. I'll compare only the top 3 overall performances. In the Money -- #1-6 times, #2-3 times, #3-7 times. Final Table (which by itself didn't count for anything -- only ITM performances did) -- #1-8, #2-4; #3-9. In the 4 regular season bounty tournaments, #s 1 & 3 both had 10 KO's, #2 had 6. Not that we track this, but #s 1 & 2 both KO'd each other 3 times. But one of them, #1, had 3 2nd place finishes while #3 had none. In that case, the points acquired by finishing 2nd was decisive. After the 2013 experience, I believed there had to be a better way to rate players.

Surviving is a skill that could be measured simply by recording the order a player is KO'd. Points can be done the same way. Field size could be measured by some combination of points and finish (Bluff method that we used). In 2013 we tracked stats for each of our planned awards separately. Each award, except the Bounty Hunter, was won outright using only the stats necessary to measure it. The Bounty Hunter was tied after the 4 bounty tournaments, so the winner was to be determined by either most KO's in our Main Event, or higher finish in the ME if it was still tied. The top 2 both made it to the ME FT without recording a KO. It was decided when #1 KO'd #2, accomplishing both more KO's and higher finish.

So in 2014, I wanted a system that allowed us to measure all awards by the same system and tier the awards (gold, silver, and bronze levels). So the basic strategy was to measure everything and weigh it. We measured 5 things, and could measure each by annual total and per game performance. We gave the two methods (total and per game) equal weight. How to weigh the 5 factors was more of a challenge. It seemed like a percentage comparison between all other players would provide its own weighting in a category.

The decision to measure each against all other players by giving a player 10 scores to 3 decimals (like a batting average in baseball) provided its own weighting system. Those things hardest to accomplish, tournament wins, would count the most since scores would be tiered at significant jumps. Highest score .250 total and lowest .000 (shared by 32 players). Everyone gets points for participating, and we give much higher points for good performance. However, the high number of points awarded and the fact that everyone at least got participation points made points the easiest thing to accumulate. Thus a single point would count less than any other factor.

Using our 2014 formula, the difference in combined points and points/game scores, #1 had .149, #2 had .097, and #41 had .003. Using the Fibonacci numbers, #1 had .199, #2 had .140, and #41 had .002. The Fibonacci sequence thus created greater separation between close performances, but had little impact on the lowest performers.

The spreadsheet allows me to compare whichever of the 10 scores I want to and compare results. In my opinion, the results are less accurate when TW, FT, and ITM are not counted; that gives too much weight to KO's. BG doesn't think I should count that at all, which is fine -- he doesn't have to. But the way things are currently measured, points make fine distinctions between players when the more heavily weighted factors are equal. So do KO's, but neither counts as much as TW and ITP. FT appearances count for more as a one time thing, but over time, the best players accumulate the most points, boosting their score in that category.

I don't think everything you can measure should count equally. I think the challenge is figuring out how to weigh the factors you do count. I don't throw out any games, good or bad. In college football, if they threw out the two worst games by every team, there would be a lot of undefeated teams. I can see throwing out best and/or worst scores when the scoring is by judges who might be prejudiced. But whether someone accomplishes something specific isn't subject to prejudice, unless there is a ruling that affects the outcome. That's very rare in our game.

We record info that we currently aren't using, and may never use to evaluate players. For example, we record what round a player is KO'd in. We use it primarily to determine how quickly KO's occur in the group so we know about when tables will break down, not for how many rounds of play each player plays. We started recording it with the idea it might be useful in player evaluations, but haven't figured out whether it should count for anything and if so, how.

BG, I'm curious. You say you don't think KO's should count. Is that because you don't believe it's a skill? If so, what kind of evidence would it take to convince you otherwise? Even if you believe it's a matter of picking off weak players, do you think there is skill involved in determining who to try to pick off and who not to?
 
I'm not arguing that being a KO specialist isn't a skill. I'm simply arguing that it shouldn't be a factor in determining the best player over a period of time.

Unless you can differentiate between players who used that skill to knock out other players versus those who just happened to be at the right place at the right time, non-skilled players are rewarded (unfairly, some might argue). The same goes for players who used their KO skills to cripple other players (setting them up to be knocked out) but receive no reward for their 'skillful' play.

For both of those reasons, it's not fair/feasible to use as a component of determining 'best player' -- because the best player doesn't always get credit for his KO actions (which don't always result in a KO; sometimes it's just a cripple, which is just as important and the same demonstration of that skill), and sometimes non-KO-skilled players do get full credit for something they really had no significant part in accomplishing.

Nothing wrong with tracking the KO metric and rewarding it in its own right, but it has no place in the overall rating system.
 
BG, that's a fair and, I believe, honest answer. When we looked, we thought, based on evidence of skill, that it should be a component. But to keep it from being too heavily weighted, we have a way to measure everything else at least twice, except for field size, which is still troublesome for us because some think it should be a bigger factor than it is. Our Top Bounty Hunter is based only on that (the annual total, with KO's/game as the tiebreaker). The first two years, Top Gun and Top Bounty Hunter have been the same player. Four games into this year, they are not the same player.
 
If your field sizes differ drastically, I'd measure the KO ratio per event vs simply counting total knockouts. For example, a player who scores 4 knockouts in an 11-player event would be awarded 0.364, versus a player who had 4 knockouts in a 14-player event (0.286). I'd also consider trying to incorporate a minimum stack size loss for KOs (in terms of BBs?) - which would help lessen the impact of KO's scored by players who really didn't 'earn' them. Harder to develop would be a system of rewarding KO points (or partial points) for players who cripple a larger stack (much harder to determine because this isn't always due to KO skills).

Just implementing those two changes (accounting for field size and requiring a minimum stack size to qualify) would go a long way towards validating the actual KO rankings. Instead of merely counting KOs, you would be ranking the KO skill index for each player.

Note that those could also be expressed as percentages (36.36 vs 28.57 in the above example). You can track them as either cumulative totals, or as an overall percentage of KOs per players faced. In fact, those may be the two different ways of tracking that you've been searching for.
 
I received the new 2015 points sheet (see attachment in post 4). I could use partial existing results if I were to leave out the KO's, but I'd much rather test your system with complete data to prevent any arguments over the results arising from missing data. If I were to include KO's with data from now on, it would still take 6 months to get a decent sample size, and I'm not going to drag this out for so long.

If you've got complete data for part of 2014 (at least 6 games) I'd love to run a comparison across all points systems mentioned in this thread and post my findings in points, rankings, and relative points per game and the 6 game season.

You can PM me the data (if available), and I'll remove all the names and replace them with generic Player X labels to protect the privacy of the players in your league.
 
Last year's field sizes varied from 16-27. This year's, so far, from 12-17. I don't know if that's what you consider drastic. Players tend to accumulate more KO's when the field size is bigger, which is what you'd expect. You're right that it's almost impossible to award anything for crippling a large stack. The thought of measuring KO's by field size makes some sense, but I'm not sure I understand your suggestion. Right now, KO's are simply measured and at the end of the year, each player's percentage is 10% of their score, and each player's KO/G average is also given a percentage and added. Players with the highest KO/G scores are really the ones who seem to be the best at it, but they also tend to be among the players who show up the most.

We express all of our percentages like a baseball batting average, so the scores you suggest above would be translated into .364 and .286, but that's measuring what he did in a specific game. We only count totals and convert to percentages for scoring the season, never an individual game.

The challenge I see with your suggestion is that if we had a player score 4 KO's with 12 players, he'd get .333 for that game. A guy with 6 w/18 players would get .333 too. Now, once you go to 3 tables, the KO% per player goes down, as one would expect, since someone has to be KO'ing other players. We had an interesting situation once where one table recorded 0 KO's. Another table

Notes... KO's seem to come in bunches. For whatever reason, it seems like one player gets a lot in a particular game (5-7 is commonly the most in a game)... Over 90% of KO's come from people who make the FT. In several tournaments (more than half), ALL the KO's are done by people who make the FT... In tournaments where there are KO's by people who don't make the FT, it's usually singles, with a few 2's. Anyone with 3 KO's has always made the FT. I suspect that's because our field is not that large, but still find it curious... Some people who otherwise perform very well don't do well in KO's, but it has yet to decide any award other than Top Bounty Hunter... The decision to chop has been a factor in annual award competition. Those who elect to chop tend to do worse. Of course, often they are chopping realizing the money, points, and tournament wins work much better than if they lose, so this could be misleading. But in both years, at least one player was out of the running for Top Gun because they had a chop instead of a win...

Bloody, I'll PM you w/2014 based on the Fibonacci sequence. For your purposes, it will work fine. I'll have to send it with actual names. I'd be curious about the results as well.
 
We express all of our percentages like a baseball batting average, so the scores you suggest above would be translated into .364 and .286, but that's measuring what he did in a specific game. We only count totals and convert to percentages for scoring the season, never an individual game.

I think that's a misguided approach. If you truly want to measure the success of a player's KO skill, it should be in relation to how many knockouts he scored of the potential targets he faced. In a 13-player field, he faced 12 opponents (true, he may or may not have actually played against all 12, but that's almost impossible to track). If he knocks out 4 of those players, his KO score would be .333 -- and if he scored 6 knockouts in the next event with field size 13, his KO score would be .417 (10/24).

If he missed the third tournament (field size 15), his KO score does not change - it would still be .417, since he has knocked out 10 of the 24 players he has faced. To include those 15 players he did not face skews the performance index, along with adding unwanted attendance factors into the equation. You can always use a minimum qualifying number of events or faced players to ensure everybody is on a level playing field.
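The cumulative KO index just described can be sketched in a few lines. This is a minimal illustration under the assumptions in the posts above (opponents faced = field size minus 1, missed events don't count); the function name and event tuples are made up, not from anyone's actual spreadsheet:

```python
# Cumulative KO index: total qualifying KO's divided by total opponents
# faced (field size - 1 for each event the player attended).

def ko_index(events):
    """events: list of (kos, field_size) tuples for events attended."""
    total_kos = sum(kos for kos, _ in events)
    opponents_faced = sum(field - 1 for _, field in events)
    return total_kos / opponents_faced if opponents_faced else 0.0

# The numbers from the example: 4 KO's of 12 opponents, then 6 of 12.
player = [(4, 13), (6, 13)]
print(round(ko_index(player), 3))  # 10/24 = .417
```

Because a missed event adds nothing to either sum, skipping the 15-player tournament leaves the index unchanged, exactly as described above.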
 
KO's are scored just like we score everything else.

What exactly would we input in a player's monthly data sheet? How would that lead to a truly different number than adding their totals? I suspect we have the data to measure what you are suggesting, and I'm willing to try it. But I have to understand how it would work.

If I understand what you seem to be suggesting, at the end of the year when the scores are added up, it seems that would still leave me with a season total and an amount per game. Tell me how that doesn't happen.

We had a discussion about this. We thought it could be best related to an outfielder's assist totals. Some say that outfielders with high assist totals have poor arms and get more opportunities. There's the possibility of that happening, but with enough opportunities, outfielders with the best arms easily outdistance those with lesser arms in assists. I can look at the stats and see those with high KO totals per game over many games are in fact the players who are best at KO'ing other players. There are short term variations, but those are more likely to come from a player getting really good cards one night. Players who do it consistently aren't getting lucky that often. It's easy to eliminate luck as a factor because whether they do it on great big hands or they pick off the weak, they are either calling all-ins or putting players all-in in situations where they have to win the hand. You can only eliminate players in show downs.
 
KO's are scored just like we score everything else. ... You can only eliminate players in showdowns.

I'm starting to suspect you are talking about literal KO's through the use of baseball bats...
 
LOL -- naw, nobody here would show up to a gunfight with just a baseball bat.... :)
 
What exactly would we input in a player's monthly data sheet? How would that lead to a truly different number than adding their totals?

You would need to track two items per player per event (I'm assuming that's what you mean by 'monthly'): the number of qualifying knockouts for that event/month, and the number of opponents faced in that event/month (field size-1). The KO index is calculated by taking the sum of all monthly qualifying knockouts and dividing it by the total number of opponents faced during those events.


Below is a quick look at how the KO index differs from simply counting total KO's, and how it differs from calculating the average number of KO's. We will use a small sample of just four events/months for just three players, but it should suffice for comparison purposes:

Field sizes:
Event 1 = 12 players
Event 2 = 15 players
Event 3 = 17 players
Event 4 = 14 players

Knockouts:
Player A: E1 = 4, E2 = 1, E3 = 0, E4 = 3 (but only 2 qualified under the KO index structure)
Player B: E1 = 1, E2 = 0, E3 = 5, E4 = 2
Player C: E1 = 5, E2 = 1, E3 = DNP, E4 = 2 (but only 1 qualified under the KO index structure)

Scoring:

Total knockouts (all four events)
Player A: 8
Player B: 8
Player C: 8

Average knockouts per event (total knockouts / total events)
Player A: 2.00
Player B: 2.00
Player C: 2.00

Using those two measuring systems, it appears at face value that all three players have demonstrated equal knockout skills over the course of the four events.

But what happens if you introduce the concept of 'qualified knockouts' and view those knockouts in the context of the total number of 'available' knockouts?

Total qualifying knockouts (all four events)
Player B: 8
Player A: 7
Player C: 7

Note that Player B moves up in the rankings if a qualifier is used to discount kills of weakened stacks.

KO index (total qualifying knockouts / opponents faced)
Player C: .184 = 7 / 38 (11+14+13)
Player B: .148 = 8 / 54 (11+14+16+13)
Player A: .130 = 7 / 54 (11+14+16+13)

Player C has clearly demonstrated more knockout skill during the four events (and it is a measurable metric). That's a pretty dramatic difference from just counting total knockouts, and very reflective of the actual performances (remember that counting just the totals indicated equal performances for all three players). Even if you don't use a knockout qualifier (instead counting all knockouts), the advantages of using a KO index to track demonstrated skill are still apparent:

KO index (total knockouts / opponents faced)
Player C: .211 = 8 / 38 (11+14+13)
Player B: .148 = 8 / 54 (11+14+16+13)
Player A: .148 = 8 / 54 (11+14+16+13)

Still pretty clear, but the leader's score is now artificially inflated, as is Player A's (since they didn't really 'earn' those knockouts).

So the better question becomes, what is the threshold for defining a 'qualified' knockout? My thought is that anything less than 5BB is probably scraps left over from a previous beating, and should be either discounted or discarded. Awarding 1/2-value for under 5BB, and no value for 2BB or less seems appropriate. Let's see how applying that threshold affects our sample group (and let's add a fourth player, too):

Knockouts:
Player A: E1 = 4, E2 = 1, E3 = 0, E4 = 3 (but one was discounted at 4BB and only worth 1/2 value)
Player B: E1 = 1, E2 = 0, E3 = 5, E4 = 2
Player C: E1 = 5, E2 = 1, E3 = DNP, E4 = 2 (but one was discarded at only 2BB)
Player D: E1 = DNP, E2 = DNP, E3 = 2, E4 = 1 (discounted at 4BB and only worth 1/2 value)

KO index (total qualifying knockouts / opponents faced)
Player C: .184 = 7 / 38 (11+14+13)
Player B: .148 = 8 / 54 (11+14+16+13)
Player A: .139 = 7.5 / 54 (11+14+16+13)
Player D: .086 = 2.5 / 29 (16+13)


To summarize, the KO index reflects a player's true KO skill level, using both field size and the 'quality' of knockouts in making that determination.
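For anyone who wants to play with the numbers, here's a hedged Python sketch of the KO index with the 5BB/2BB qualifier; the victim stack sizes passed in are illustrative, chosen only to trigger the discounts described above:

```python
# KO index with the 5BB/2BB qualifier from the example.
# Victim stack sizes (in big blinds) are illustrative.

def ko_credit(victim_bb):
    """Credit for a knockout, given the victim's stack in big blinds."""
    if victim_bb <= 2:
        return 0.0   # discarded: scraps left over from a previous beating
    if victim_bb < 5:
        return 0.5   # discounted: half value
    return 1.0       # full-value knockout

def ko_index(qualified_kos, fields_played):
    """qualified_kos: sum of per-KO credits; fields_played: field sizes."""
    opponents = sum(field - 1 for field in fields_played)
    return qualified_kos / opponents

fields = {"E1": 12, "E2": 15, "E3": 17, "E4": 14}

# Player A: 8 KOs over all four events, one discounted at 4BB.
a = ko_index(7 * ko_credit(10) + ko_credit(4), fields.values())
# Player B: 8 full-value KOs over all four events.
b = ko_index(8 * ko_credit(10), fields.values())
# Player C: 8 KOs over E1, E2, E4 only, one discarded at 2BB.
c = ko_index(7 * ko_credit(10) + ko_credit(2), [12, 15, 14])
# Player D: 3 KOs over E3 and E4, one discounted at 4BB.
d = ko_index(2 * ko_credit(10) + ko_credit(4), [17, 14])

print(round(a, 3), round(b, 3), round(c, 3), round(d, 3))
# 0.139 0.148 0.184 0.086
```

These match the half-value table above: C leads at .184, B at .148, A at .139, D at .086.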
 
If I understand your concept of a 'quality' KO: a KO only counts as half if the eliminated player had less than 5 BB, and doesn't count at all if he had 2 BB or less. Since I didn't find that until the end, I'm not 100% sure I got that.

Comment 1: Whoever KO’s a player STILL has to actually win the hand. I don’t know how many of these weakened KO’s we have, but I’d guess not a lot.

Comment 2: Regarding the number of opponents, in E1, we’d have 2 tables of 6, so no one faces more than 5 players until the FT, when they could face 9 (assuming no double KO’s on the bubble); in E2, it would be 8-7, so 7 and 6 opponents; in E3, 9 and 8, so 8 and 7; and in E4, 7/7 so 6/6.

Comment 3: Opponents faced could be misleading too because of player movement. It could be tracked, but then we have A faced 6 opponents for 45 minutes, 5 opponents for 15 minutes, 4 opponents for 1:15, and then 9 opponents and going down starting at the FT.

Suppose that, despite the issues I raise in these comments, I accept the above even though I'm not convinced of its validity.

Let me first point out how the KO's in your example would be scored in our current system. Let's assume no chops, so there would be a total of 54 KO's. At the bottom, I put in the actual scoring of our 2014 Top Bounty Hunter; notice that all of your players' scores are well above what he actually scored. Here's our scoring: first total KO's/games, then the player's share of the 54 total KO's, then the raw per-game average, then the per-game score (which can't be calculated without knowing the scores and per-game averages of all the other players, so I will measure it using ONLY these 4, in order of finish):

C: 8/3 – .148 – 2.667 – .327 = .475
A: 8/4 – .148 – 2.000 – .245 = .393
B: 8/4 – .148 – 2.000 – .245 = .393
D: 3/2 – .055 – 1.500 – .184 = .239
TBH 2014: 29/11 – .125 – 2.636 – .083 = .208

Comment 4: Our TBH got 29 KO’s out of the total of 232. So your scores are considerably higher than our actual best score. Player C has a comparable KO/G score, just .031 higher than our guy, or less than one standard deviation. I’d say all of your players are terrific at KO’ing players if they performed that well.

Comment 5: We averaged 19.33 KO's per game total in 2014. The average player's KO/G was .754, meaning the average player would KO slightly over 2 players every 3 games they played in. Anyone above .830 KO/G in our game would be in the above average category. Your worst player is almost twice that. That's not good or bad, just stating the fact using your example.

Comment 6: Note that C would do better under ours because his KO/G average is much higher than anyone else’s. It looks like you gave C credit for 2 KO/G instead of his actual 2.667 KO/G as we would have scored it. I assumed that was a simple math error.

This at least gives us something of a base to compare scores.

I see only one of your score sets that could go into our system, so here are your discounted scores put into our system.

Player A: E1 = 4, E2 = 1, E3 = 0, E4 = 3 (but one was discounted at 4BB and only worth 1/2 value)
Player B: E1 = 1, E2 = 0, E3 = 5, E4 = 2
Player C: E1 = 5, E2 = 1, E3 = DNP, E4 = 2 (but one was discarded at only 2BB)
Player D: E1 = DNP, E2 = DNP, E3 = 2, E4 = 1 (discounted at 4BB and only worth 1/2 value)

KO index (total qualifying knockouts / opponents faced)
Player C: .184 = 7 / 38 (11+14+13)
Player B: .148 = 8 / 54 (11+14+16+13)
Player A: .139 = 7.5 / 54 (11+14+16+13)
Player D: .086 = 2.5 / 29 (16+13)

Converts to (again recognizing that without the other scores, we can’t compare a player to all others, so we can only compare these 4 to each other and again using our 2014 TBH as the comparison to our scores):

Player C: .184 = 7 / 38 (11+14+13) – converts to .330
Player B: .148 = 8 / 54 (11+14+16+13) – converts to .266
Player A: .139 = 7.5 / 54 (11+14+16+13) – converts to .250
Player D: .086 = 2.5 / 29 (16+13) – converts to .154
TBH 2014: 29/11 – .125 – 2.636 – .083 = .208

To really do a valid comparison, we’d need to know how many of the 54 KO’s were either discounted or dropped, and we’d have to know how each other player did. For example, there could have been as few as 18 (all played in 1 except for C) total players or as many as 49 players (only A, B, C, and D common to any of the games). That is a huge spread.
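The 18-to-49 spread follows from simple counting; here's a quick sketch using the four field sizes from the example:

```python
# Bounding the number of distinct players across the four sample events.
fields = [12, 15, 17, 14]      # E1..E4 field sizes
total_entries = sum(fields)    # 58 seats filled in total

# Upper bound: only A, B, C, D appear in more than one event.
# A and B each played 4 events (3 repeat entries each),
# C played 3 (2 repeats), D played 2 (1 repeat).
repeats = 3 + 3 + 2 + 1
max_players = total_entries - repeats   # 49

# Lower bound: everyone from the smaller events could also have played
# E3 (17 players), except C, who skipped E3 entirely.
min_players = max(fields) + 1           # 18

print(min_players, max_players)  # 18 49
```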

Comment 7: I realize these numbers have some flaws, but let’s try to ignore that. These scores, put into our system as converted, would apparently be 1/9 of the total. I say that because I don’t see how you would convert these into a total performance and a per game performance, which our system does. Thus, you could use one combined score and count it only once, or you could count it twice, but I’ve assumed only once. Notice that they are much higher than our current scores that count 1/5. Even unconverted scores as 1/9 instead of 1/5 would raise the overall scores considerably. That would make KO’s hugely important! In my opinion, way beyond what they should count for.

So help me understand this. I thought your basic argument is that KO’s count too much in our system. Based on what I’m seeing, in yours KO’s count for considerably more than in our system, and that’s given the fact that we’ve already weighted other factors more to reduce the impact of KO’s in the scores. Maybe I've just completely misunderstood something. If so, what is it?

It’s possible that the sample of 4 players provided is too small for a valid comparison, and you did only use 4 games instead of 12. I get that it would take a long time to do a complete example. We also don’t have any method to measure the KO’s already on the books based on quality, as you suggest.

I'm perfectly willing to tweak. I could start gathering the chip stack related to BB and be prepared to start discounting some KO's. But I need to understand how this might be better than what we are already doing. Right now it looks like more difficult data to gather and convert KO's to a much higher score that I'm not convinced is justified.
 
I think we lost pretty much all the members here except for the three of us...

Still, I've applied all formulas on a 10 game season.

Just to make sure I'm comparing apples to apples and oranges to oranges, I've split the comparison into 3 categories.
  • Points based on rankings and field size
  • Points based on rankings, field size, and buy-in
  • Points based on rankings, field size, buy-in, and rebuys
This means I've eliminated certain systems from the comparisons, either because they are variations on another system (meaning my variations on Dr. Neau's, bpbenda's, and Pokerstars' systems) that produce the same or comparable results, or because they don't use buy-ins or rebuys in their formulas.

Here are the results from the season (I've used names of famous poker players to make it easier to reference them).

upload_2015-5-27_4-13-9.png


upload_2015-5-27_4-13-43.png


Analyzing the different rankings is going to be difficult, even when splitting them into three categories, but I'll try my best to make sense of the major discrepancies between the different formulas. I'll focus my analysis mostly on the top 50%, because those are the players who attended the most games and scored the best. As you drop below the top 50%, players have generally attended fewer than half of the games, and those that did attend more than half performed poorly in all of them.

I've sorted every table by Dr. Neau's rankings because it's the only system appearing in all three categories. I'll try to refrain from making value judgments on the different systems as much as possible, but some may slip in here and there.

Category 1: Points based on Finish and Number of players.

In order to be able to make a comparison of all the different scoring systems, the buy-ins for all games were set to $40.

upload_2015-5-27_6-8-17.png


I'll start with the obvious odd ones out.

TexRex
Instead of accumulating points over the season, TexRex converts the scores to percentages of points won, then averages the scores over the number of games played during the season, and compares those averages to achieve a ranking. Because the approach differs from the points accumulation of the other systems, the rankings are vastly different.

When playing in a league where scores are averaged over the number of games played, it introduces a new (likely undesirable) strategy for the players. In this case the best strategy for Sammy Farha would've been to stop playing for the rest of the season after the first game. His first place got him maximum percentage points, the maximum average percentage points per game (points/1), the maximum tournament wins per game (1.000), the maximum In The Points per game (1.000), and the maximum Final Tables/game (1.000). Continuing to play in the league only hurt his standing. Had Sammy stopped playing and assuming all the other players would've been eliminated in the same order, Sammy would have collected 0.777 points at season's end from that one game, almost 2x the points for 2nd place (0.396 for Bertrand in that scenario), making him the clear winner of the season. This strategy wouldn't apply if there was a minimum number of games the player had to have played in order to qualify, but I can't recall seeing any mention of that, and TexRex' 2015 sheet doesn't contain any such provisions.

As it is:
  • Bertrand Grospellier wins the league title, where in most other systems he finishes 3rd or 4th.
  • Sammy Farha, the winner in most points systems finishes 3rd because he played in too many tournaments, which brought his point average down.
  • Dan Harrington is ranked 6th overall because he played in only one game in which he finished 2nd. In all other systems Dan hasn't accumulated enough points over the season because he missed 9 games, and thus ends up in 22nd or 23rd position.
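The quit-early incentive under per-game averaging is easy to demonstrate with made-up numbers (illustrative only; these are not TexRex' actual percentage scores):

```python
# Illustrative only: under per-game averaging, adding any game scored
# below your current average lowers your ranking score.
def average_score(scores):
    return sum(scores) / len(scores)

after_one_win = average_score([1.0])               # stop after winning game 1
full_season = average_score([1.0, 0.3, 0.5, 0.2])  # keep playing, finish mid-pack

print(after_one_win, round(full_season, 3))  # 1.0 0.5
```

Under a cumulative system the full season (2.0 points) beats the single win (1.0); under averaging it's the reverse.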
Linear system
Because there is no points curve in the linear system, with 20 players 1st place gets only 5.3% more points than 2nd place, while 19th gets 100% more points than 20th. The final winner therefore is Patrick Antonius, even though he hardly ever finishes near the top, and when he does, it's offset by games where he finished last. Patrick finishes around 7th-9th in most other systems.

Pokerstars
The biggest upset in Pokerstars' system is bumping Bertrand Grospellier from 3rd/4th to 6th position. This is because Bertrand only played in 7 games, and participation alone in a 20-player game will net you 22.36% of first place's points. Compared to bpbenda's system, where participation only gets you 12.7% of first place's points in the same scenario, and 9.5% in Dr. Neau's, this percentage is quite high. Eli Elezra is also seriously docked points for only playing in 5 games. Counting only the X highest scores (i.e. the top 8 out of 10) will help balance this discrepancy out somewhat. You should decide whether missing games should be punished this severely before choosing to use Pokerstars' formula.
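The 22.36% figure is consistent with points that scale as the square root of field size over rank (a hedged reconstruction; any buy-in multiplier in the actual Pokerstars formula would cancel out of the ratio):

```python
import math

# If points are proportional to sqrt(players / rank), then in a 20-player
# game last place earns sqrt(20/20) / sqrt(20/1) of first place's points.
def participation_share(players):
    first = math.sqrt(players / 1)
    last = math.sqrt(players / players)
    return last / first

print(round(participation_share(20) * 100, 2))  # 22.36
```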

PocketFives
PocketFives awards 16.5% of first place's points to last place in a 20-player game. While not as high as Pokerstars, it is the reason why Eli is also demoted, to 16th place. Other than that, the results are comparable to Dr. Neau's and bpbenda's results.

WSOP & Cardplayer
I'm grouping these two together because they work according to the same principle. Their graphs aren't gradual, but stepped curves because of the use of multiplier tables. It's the difference in multipliers that accounts for the different results in rankings.

Dr. Neau & bpbenda
The results of these two are very close together. The differences in the middle positions are the results of bpbenda awarding a higher percentage of first place to last place, thereby reducing the spread and punishing the players for missed games slightly more than Dr. Neau.

Category 2: Points based on Finish, Number of players, and Buy-in.

Only systems that include the buy-in in their calculation are listed here. This means that my reworked formula replaces bpbenda's, and the Linear points, Dr. Neau's LN points, and TexRex' points are removed from the comparisons.

upload_2015-5-27_4-29-39.png


The reasons for the discrepancies listed in Category 1 apply here as well.

Phil Ivey edges slightly ahead of Sammy Farha because he won two $40 tournaments, whereas Sammy won a $30 and a $40 tournament.

The differences are:
Dr. Neau: Ivey's gone from -0.92% to +0.91%
bpbenda: Ivey's gone from -0.34% to +0.47%
Pokerstars: Ivey's gone from -0.25% to +0.18%
PocketFives: Ivey's gone from -0.33% to +1.17%

The rankings for the top 2 have remained unchanged in the WSOP and Cardplayer results.

Category 3: Points based on Finish, Number of players, Buy-in, and Rebuys.

The final shuffling of formulas has taken place. Aside from Dr. Neau's formula (which originally incorporated rebuys), only PocketFives (which incorporates the prize pool, which increases with rebuys) and my reworked formula for bpbenda remain; my reworked formula for Pokerstars replaces the original Pokerstars formula, and my reworked variation of Dr. Neau's formula is added.

upload_2015-5-27_7-49-24.png


Here Kara Scott has jumped to 3rd from 5th and 4th respectively in Dr. Neau's and bpbenda's reworked formulas because she achieved her scores with only 1 rebuy.

Sammy had a big enough lead over Kara to not lose a position with his extra rebuy.

Phil Ivey extends his lead over Sammy because he never rebought.

Bertrand Grospellier and Tom Dwan lose places to Mike Matusow because they rebought in two $40 games over 7 and 8 games played respectively, whereas Mike rebought in a $30 and a $40 game over 10 games.

Because of the exaggerated peaks in my variation of Dr. Neau's formula, Eli Elezra is bumped from 9th to 11th compared to Dr. Neau's original formula. Because of the reduced impact in bpbenda's and Pokerstars' reworked formulas his final position remains pretty much unchanged.

In PocketFives' formula, points aren't deducted when you rebuy; rebuys only increase the prize pool, which adds to everyone's scores. There's a slight shift in the rankings because rebuy tournaments had a bigger prize pool than freezeout tournaments with the same buy-in.
 
This strategy wouldn't apply if there was a minimum number of games the player had to have played in order to qualify, but I can't recall seeing any mention of that, and TexRex' 2015 sheet doesn't contain any such provisions.

Bloody, in our system, a player must play in 7 of 12 games to be eligible for any annual awards. We didn't put that in the spreadsheet itself. A good example of that is our player who chopped the Jan. 2014 tournament. Because of his death, that was the only tournament he played in. He's first in all 5 averages, but not in the totals. But he wasn't eligible for any award because he didn't attend enough games.

The risk of a strategy like the leader just skipping the last game was also seen in December 2014. Going into that game, I think we had 9 or 10 players who were still in the running for awards, and a couple of people who could have passed the leader if he had sat out. The leader won the December tournament, running away with the award.

Added -- It looks like your 10 games have all the elements we measure except KO's. Would you mind sending me the spreadsheet you used for the 10 games? I'll run with mine except for KO's. I'll have to ignore re-buys since we don't have them.

It's probably different with re-buys because of the possibility that someone who re-buys could win. It would also make quickly verifying the KO's calculation more difficult because it could no longer be determined by taking total entrants and subtracting 1 per event and 1 per chop. For those reasons, I'm not really sure you can compare my formula to a re-buy formula on an apples to apples basis. But the info is still interesting.
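The freezeout shortcut mentioned here (total KO's = total entrants, minus one per event, minus one per chop) checks out against the four-event sample; a quick sketch:

```python
# Verifying total knockouts for freezeouts: every entrant except the winner
# is knocked out, and each chop leaves one extra player standing.
def total_kos(field_sizes, chops=0):
    events = len(field_sizes)
    return sum(field_sizes) - events - chops

print(total_kos([12, 15, 17, 14]))  # 54, matching the sample's total
```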
 
7 out of 12 sounds like a reasonable number to me and it's good that you implemented this rule.

The purpose of any league is to keep players coming back to improve their score. Setting a minimum number of games that must be played for the score to count is excellent for scoring systems that average the scores over the games played.

For cumulative scoring systems handing out participation points, and/or counting only the best X number of scores towards the final score are excellent options.

Counting only the best X scores can offset a bad run, lets players improve their score even after they've already played X games, and allows players to miss games without facing impossible odds to score enough points to finish near the top or win the season.
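For cumulative systems, counting only the best X scores is a one-liner; a small sketch with illustrative scores:

```python
# Count only each player's best X game scores toward the season total.
def best_x_total(scores, x):
    return sum(sorted(scores, reverse=True)[:x])

# Illustrative 10-game season; the two 0.0 games are dropped.
season = [12.0, 7.5, 0.0, 9.1, 3.3, 11.0, 0.0, 6.2, 8.8, 4.4]
print(round(best_x_total(season, 8), 1))  # best 8 of 10 games: 62.3
```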

Even for averaging scoring systems like yours this is a great option to keep players coming back, even when they already are atop the leaderboard.
 
