How would you rate these players?

TexRex

Our 2015 season has ended. I’m going to post the raw data for our top 10 eligible performers and how they did in each of the 11 games they might have played in. By eligible, I mean players who attended at least 7 games, enough to qualify for one of our 4 annual awards. From this group we will choose the top overall performer, the second-best performer, and the best female performer over the course of the season. I’m interested in how you would rate these 10 players and why you would rate them that way. The top 2 awards should go to the two players who performed the best overall. Obviously there can be differences of opinion, which is why I’m interested in the why.

We do give a Ladies Player of the Year. We had 2 women eligible (Carol and Mindy), and while I think there is no real debate about that choice, I’d like you to rate them too.

Our fourth award goes to the player who KO’d the most players throughout the year, and it wasn’t even close this year. I’ve provided everyone’s KOs, but you can tell me whether you would count them and, if so, how you would weigh them.

Why I’m interested in this
Every year I evaluate our rating system to see what tweaks I think might improve it. This was the third year of our league, if you can really call it a league.

There are no league fees and the winners of awards get certificates indicating their prowess. Beyond the money they win at individual tournaments, we have nothing that benefits these players beyond the awards. They don’t get reduced Main Event fees, more chips for that game, or anything else. It’s all about bragging rights.

Basic facts of our season
The number of entrants ranged from 12 to 23. We normally have 10 at our final table, though we have one month with only 9 scheduled (in honor of a deceased player) and one month where two players were knocked out (KO’d) the hand before the final table, leaving only 9, so both of those months had only 9 at the final table. Players finishing #11 or lower are not counted in order, so I don’t have those stats. In months where only 9 made the final table, I don’t have stats for those finishing #10 or lower.

Buy-ins for every tournament are identical ($20), and all tournaments are freeze out tournaments (no re-buys or add-ons).

We count only performances where a player made the final table, except we count all KO’s.

One game this year was canceled. Each other game was won by a different player – there were no repeat winners.

Overall, 48 players made 176 appearances, and we averaged exactly 16 players per game. The 10 players under consideration appeared a total of 87 times with a minimum of 7 appearances to be eligible for an award.

The numbers going down the left side are the places where these players finished. If there is no number, players are listed alphabetically. The number to the right of the month is the number of entrants. The number to the right of a name is how many players they knocked out (KO'd) in that tournament.

Jan (12)
1 -
2 Ronald 1
3 Len 2
4 Rob 1
5 -
6 -
7 Carol 2
8 -
9 Brandon 1
10 -
Dan 0
Kevin 0
Mindy 0

Feb (12)
1 Rob 1
2 Mindy 2
3 Dan 3
4 Ronald 0
5 -
6 Kevin 0
7 -
8 -
9 Peter 0
10 Carol 0
Len 1

Mar (14)
1 Dan 5
2 Peter 0
3 -
4 -
5 Kevin 0
6 -
7 -
8 Ronald 0
9 -
10 -
Carol 0
Len 0
Mindy 0
Rob 0

Apr (17)
1 -
2 -
3 Peter 1
4 -
5 Kevin 1
6 -
7 -
8 Ronald 0
9 -
10 -
Dan 0
Len 0
Mindy 0

Jun (15)
1 Len 2
2 -
3 -
4 Brandon 1
5 -
6 Dan 4
7 -
8 Tim 0
9 Rob 0
10 -
Kevin 0
Ronald 0

Jul (22)
1 -
2 Carol 3
3 -
4 -
5 Rob 0
6 -
7 -
8 -
9 -
10 Dan 0
Brandon 0
Kevin 0
Len 0
Ronald 0
Tim 0

Aug (15)
1 -
2 Rob 2
3 Tim 0
4 Peter 1
5 Mindy 2
6 Len 2
7 Ronald 0
8 -
9 -
10 Brandon 0
Carol 0
Dan 0

Sept (23)
1 -
2 Peter 2
3 -
4 -
5 Brandon 2
6 Ronald 0
7 Dan 1
8 Tim 0
9 Rob 1
10 -
Carol 1
Kevin 0
Len 0
Mindy 2

Oct (17)
1 -
2 -
3 Dan 8
4 -
5 Brandon 0
6 -
7 Ronald 0
8 Tim 0
9 -
10 Rob 0
Kevin 0
Len 0
Peter 0

Nov (14)
1 -
2 Brandon 2
3 -
4 -
5 Kevin 2
6 Rob 0
7 Ronald 0
8 -
9 -
10 Len 0
Tim 0

Dec (15)
1 -
2 -
3 Tim 0
4 Ronald 1
5 -
6 Peter 2
7 Kevin 1
8 Rob 0
9 -
10 -
Brandon 0
Carol 0
Len 0
Mindy 0
 
This would be easier if the results were summarized. Let's say { name } { # of knockouts } { list of top-ten finishes }

My formula would be something like this:

I only count the top seven results, since seven events qualifies you. Players who made all 11 games already have enough of an edge in getting to drop their bottom four results vs. a player who only came seven times.

Each place starting at 10th gets a point - so 10th = 1, 9th = 2, 8th = 3, up to 1st = 10.

The top four finishes typically get the most money and should get extra points. Let's say 4th = +2, 3rd = +4, 2nd = +7, and 1st = +10. I would structure this to reflect the actual payout schedule - a sharply increasing payout system would get bonus points like I suggested, but a very flat system might only merit a small top-finish bonus. In all cases a single win should be worth more than a bunch of 10th-7th results (i.e., a player who won once and didn't make the final table the other six games is better than someone who had five very weak final table performances).

I would ignore knockouts except a) it does get its own prize and b) as a tie breaker.
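
A minimal sketch of that formula, for anyone who wants to run numbers themselves (the place points and top-four bonuses are exactly as described above; the sample places are hypothetical):

```python
# DrStrange's proposed scoring: place points plus a top-four bonus,
# counting only a player's best seven finishes.

PLACE_POINTS = {p: 11 - p for p in range(1, 11)}  # 1st = 10 ... 10th = 1
TOP_FOUR_BONUS = {1: 10, 2: 7, 3: 4, 4: 2}        # extra points for big finishes

def score(final_table_places):
    """final_table_places: finishing positions (1-10), one per final table made."""
    best_seven = sorted(final_table_places)[:7]   # lowest place number = best finish
    base = sum(PLACE_POINTS[p] for p in best_seven)
    bonus = sum(TOP_FOUR_BONUS.get(p, 0) for p in best_seven)
    return base + bonus

# Hypothetical player: one win, a 5th, and a 9th
print(score([1, 5, 9]))  # (10 + 6 + 2) base + 10 bonus = 28
```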

DrStrange
 
Dr. Strange, I'll pull that together and post it. I thought the raw data might be easier, but I can do the summary. It may not be until tonight or tomorrow night.
 
Dr. Strange, here's a summary of results. The highest in each category is in bold and italics; if two or more were tied, all the top scores are in bold and italics. Players are in alphabetical order, except the two ladies, who are at the bottom.

Name -- # of Games (#) -- Tournament Wins (TW) -- Final Table Appearances (FT) -- Top-7 Finishes (ITP) -- Knockouts (KO) -- Top-7 Places
Brandon -- # 8 -- TW 0 -- FT 6 -- ITP 4 -- KO 7 -- Places 2, 4, 5, 5
Dan -- # 9 -- TW 1 -- FT 6 -- ITP 5 -- KO 21 -- Places 1, 3, 3, 6, 7
Kevin -- # 10 -- TW 0 -- FT 5 -- ITP 5 -- KO 4 -- Places 5, 5, 5, 6, 7
Len -- # 11 -- TW 1 -- FT 6 -- ITP 3 -- KO 8 -- Places 1, 3, 6
Peter -- # 7 -- TW 0 -- FT 6 -- ITP 5 -- KO 6 -- Places 2, 2, 3, 4, 6
Rob -- # 10 -- TW 1 -- FT 9 -- ITP 5 -- KO 9 -- Places 1, 2, 4, 5, 6
Ronald -- # 11 -- TW 0 -- FT 9 -- ITP 7 -- KO 2 -- Places 2, 4, 4, 6, 7, 7, 7
Tim -- # 7 -- TW 0 -- FT 5 -- ITP 2 -- KO 0 -- Places 3, 3
Carol -- # 7 -- TW 0 -- FT 2 -- ITP 2 -- KO 7 -- Places 2, 7
Mindy -- # 7 -- TW 0 -- FT 2 -- ITP 2 -- KO 6 -- Places 2, 5

What I'd really like to see is you rate the players using your scoring system, and explain it. :)
 
OK, Thanks

Brandon gets 9 points for a 2nd, 7 points for a 4th, 6 points x 2 for two fifth-place finishes, and 9 bonus points for top-four finishes (7 for the 2nd and 2 for the 4th) = 37 points
Dan gets 10 (1st) + 8 (3rd) + 8 (3rd) + 5 (6th) + 4 (7th) + 18 (top-four bonus) = 53 points
Kevin gets 6 + 6 + 6 + 5 + 4 + 0 (no top-four finish bonus) = 27 points
Len gets 10 + 8 + 5 + 14 (bonus) = 37 points
Peter gets 9 + 9 + 8 + 7 + 5 + 20 (bonus) = 58 points
Rob gets 10 + 9 + 7 + 6 + 5 + 19 (bonus) = 56 points
Ronald gets 9 + 7 + 7 + 5 + 4 + 4 + 4 + 11 (bonus) = 51 points
Tim gets 8 + 8 + 8 (bonus) = 24 points
Carol gets 9 + 4 + 7 (bonus) = 20 points
Mindy gets 9 + 6 + 7 (bonus) = 22 points

So:
1st is Peter with 58
2nd is Rob with 56
3rd is Dan with 53
4th is Ronald with 51
I note that these top four places are close and subject to reordering with tweaks to the scoring system.
5th is tied, Brandon and Len both with 37 points
7th is Kevin with 27
8th is Tim with 24
9th is Mindy with 22
10th is Carol with 20

If you have the details about payouts, you could just total all the prize money won (top seven finishes only) and award based on who won the most money. Top money winner as league champion would be easy to defend and would make good sense to the players.

I hope the league will agree on these methods in advance next year. The top four are closely matched, and there could be hard feelings about a scoring system made up on the fly picking a "winner". There could be serious problems if the prizes for the top player(s) are substantial.

DrStrange
 
Dr. Strange, there was a system in place at the beginning of the year, so this wasn't done on the fly at all. Our players knew before the first tournament how scoring would work this year. I've not said how our system rated these players. I will announce in early January, before our first game, what tweaks our system will have for next year. I'm trying to get ideas for tweaks I've not already thought of; ultimately, two other players and I will consider the scoring system and decide on the changes. So I'm looking for ideas. I ask because I wanted to see how others would rate these 10 players with the data I have. If I want to suggest any changes, I need to show how they might improve the system we do have.

We award points to the top 7 because at 30 players we pay out 7 places, but at 15 players we only pay 5. Awarding the top 7 maintains consistency in what I call "finish points" whether we pay 3, 4, 5, 6, or 7.

One point -- I'd advise anyone against using money won. It points out to the losers how badly they are doing. For those who play socially, it emphasizes the lack of friendliness in the game, and those who are competitive and losing may see they are in a game over their heads. I think the best player in cash games should be decided by who won the most money over time, but tournament payout structures are all artificially constructed, and all one has to do to change the results is change the payout structure. If that happens partway through the year, it could alter the results. For example, if we paid all the prize money to first place, Len would be the clear winner among these players, since there was more money in the one prize pool he won.

Would you consider the field size at all? For example, Rob's tournament win came with 12 players, Dan's with 14, and Len's with 15. Is there any real doubt that Len winning over 14 others was more difficult than Rob winning over 11 others? If the guy who wins a 10-player tournament gets the same points as the guy who wins a 30-player tournament, even though it's far more difficult to win the 30-player tournament, it seems to me the scoring system is flawed. Am I wrong on that?
 
I have seen scoring systems that give points for every player you beat, so the bigger the field, the more points you get for progressing. In most cases it shouldn't matter all that much unless the players' performances are close, and then it is more a matter of variance than skill.

The key take-away for me is that the top four are clearly better than the rest of the field. But ranking the top four will depend as much on the ranking method(s) as it does on performance.

Same thing with the two top women. They are quite closely matched this year. Yes, Mindy performed "better" than Carol, but the results might well hinge on the outcome of one or two hands. No matter how we parse the data, though, neither woman is going to be one of the top players this year.

I appreciate the issues that arise when money results are public. But at the very least you should see whether rankings based on total money won get close to the results from the scoring system used. I would not want the system to award top prizes to anyone but the top money winners. What I think you really want is a system that closely models the money rankings without using money amounts.

DrStrange
 
Here are some general observations ~
  • Making the FT shows survival skills. Obviously you can't win if you don't first survive, but there is more to winning than just surviving.
  • The only 3 players who won a tournament are also the 3 players with the highest number of KOs. Brandon and Peter, next on the KO list, each had 4 top-5 finishes. Ron and Tim were the only two players in this top 10 with fewer KOs than the league-wide average of 3.375.
  • Based on these 10 players, there is a high correlation between KOs and winning, and a fairly high correlation between KOs and surviving.
Not seen in these numbers, but looking at complete results for all 48 players ~
  • Most KOs in a tournament are by players who make the FT. It's very surprising how much KOing even one player boosts a player's chances of making the FT. Going Jan - Dec, here's a comparison of KOs by FT players to total KOs in that tournament: 11/11; 11/11; 11/13; 14/16; 14/14; 17/18 w/3 unaccounted for; 14/14; 15/22; 15/16; 12/13; 13/14. For the year, only 15 of 162 KOs were by players who didn't make the FT.
  • 61.4% of players made the FT; 90.7% of KOs were by players who made the FT. I've been tracking KOs for 3 years now, and this year was not an anomaly; it was similar to previous years. While I'm not sure what to make of it, I'm convinced that KOs are a skill.
Here are some conclusions I think you would agree with ~ We can safely remove Brandon, Kevin, and Tim from top player consideration. Though Len won the biggest tournament won by anyone in this group, only one male in this group made the FT fewer times, and Tim made his 5 in only 7 tournaments.

I'd remove Len from top player consideration too, but note there is something odd about his results. His high KOs but low number of good performances might indicate getting very unlucky in a short 11-tournament season. It seems like every year we have at least one player who seems solid, with some stats that indicate a good player, but who performs poorly overall. If we gave a "Snakebit" award, Kevin and Len would be in the running. Not seen in these stats, but both were KO'd 3 times where they had a vastly superior hand on the flop but lost when their all-in got bested by runner-runner. Collectively they lost another 5 times on the river. Their bad luck this year became a club joke. A commonly heard comment: "I only went all-in because I've been so lucky against [Kevin/Len]." That aside, this evaluation is based on actual performance, not what might have been.

Let me give you some arguments against those results and have you address them:

Mindy vs. Carol ~ Mindy's #2 finish came w/12 players while Carol's came w/22 players. Carol did her best at the bigger tournament, while Mindy did her best at the smallest. Mindy's #5 came w/15 and Carol's #7 came w/12. Both reached the extra-points slots both times they made the FT. Both were above average in KOs, but Carol had one more in the same number of tournaments.

Against Peter ~ Having Peter as the top player would be like saying the NFL team of the decade didn't win any Super Bowls.
For Peter ~ Tied with Rob for 4 top 5 finishes, and some of those were in bigger events.

Rob vs. Dan ~ Rob made 9 final tables (tied with Ronald for the most) and got into the top 7 five times. Dan made 6 FTs and got into the top 7 five times. These two were the top 2 in KOs, but Dan had the edge there, 21-9.

Ronald ~ Two of his 3 best performances came in the 2 smallest tournaments, and one in a tournament with 15 players (slightly below the 16-player average but within 1 standard deviation, which was 3.6). Like Peter and Brandon, he never won a tournament. He showed excellent survival skills, making the FT 9 of 11 times. Note on Ronald -- only Tim, with 0, had fewer KOs.

***
Big Question: Would any of the above change the way you would evaluate those players?
 
The "best" player is the one who won the most money. Rank them based on money then reverse engineer the scoring system to match as best can be done without capricious rules. Any one of the top four players could be the 'best' player depending on how you decide to score. Since we are playing for money . . . . the biggest winner was best.

Really, money is the true bottom line. If the scoring system gets a result much different from the ranking based on money results, then the system likely needs to be changed.

The reward system(s) strongly affect play. (Rewards are normally money, but a big year-end tournament could dominate decision making, and so could special prizes during the year.) The best players adapt their strategy to maximize the reward. Plus there could be some internal gaming going on if there are rewards for knockouts or for needing to beat specific player(s). Even the scoring system could distort the way players act if the prizes for top scores are large relative to the nightly prizes. (For example, say the top two players get a WSOP entry plus travel costs.)

I am not sold that knockouts are a predictor of success rather than a result of success. We can see there is a degree of correlation but cannot so easily establish which caused the other.

DrStrange
 
Dr. Strange, I've been contemplating some of your ideas. Here's a little about our game for context. You are exactly right about the reward system affecting play. For us, all buy-ins are paid out the night of the tournament. There is no rollover money (though I have tried that). We do have a year-end tournament; I call it our Main Event (ME) because it's patterned after the WSOP Main Event. It's open to only 30 players, though. Those who have attended the most games get the first chance to sign up. Our Main Event offers no reward for past performance: you get in by coming to more games and paying to enter. Everyone at the ME starts with the same chip stack. Like all of our tournaments, it stands completely on its own; it's just a larger buy-in. What I'm trying to do is select the top 2 from what may be very similar performances among players who played at least 7 times. We measure on totals and per-game averages. This was our third year, and every year a different single factor has determined the outcome.

All tournament payouts are artificially constructed, so you could easily match the two (results and money). Money does factor in field size, since a larger field means either a larger payout or more places paid. I don't like dropping a player's lowest scores, and if you are trying to make results closely resemble money, dropping scores can change a net loser into a net winner. I've been toying with the idea, but short of just making money THE way to judge, I've not come up with a way to do it. It would mean the payout, which in our case is designed to maximize the number of players who keep coming, would need to be redesigned to reward what we thought were the best players. See the last point for why I don't think that's practical, though it could be.

So, to your ideas ~

Dropping lowest score(s) -- One thought for our per-game component, which counts as half a player's score, is to divide his totals by 7, since that's the minimum number of games to be eligible. Instead of dropping the lowest score, what this does is reward those who attend more than 7 games, since that half of their score cannot go down by attending more games. It is no longer a true average, but it rewards those who attend the most without dropping their lowest scores. This year we divided all scores by either 7 or the number of games a player attended, whichever was greater.
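
To make the idea concrete, a quick sketch of both versions of that per-game component (the point totals here are hypothetical):

```python
# Proposed per-game component: always divide total points by 7 (the
# eligibility minimum), so attending more games can only raise this half
# of the score -- it is no longer a true average.

def proposed_component(total_points):
    return total_points / 7

def current_component(total_points, games_attended):
    # This year's approach: divide by 7 or games attended, whichever is greater
    return total_points / max(7, games_attended)

# Hypothetical totals for a 10-game player with 90 points:
print(current_component(90, 10))  # 9.0  (a true per-game average)
print(proposed_component(90))     # ~12.86 (rewards the extra attendance)
```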

One issue I see is when two players are close. Say one comes to 9 games and one to 10. The one with 9 has higher averages but slightly lower overall totals, which in our current system gives him an edge. But the extra points the one with 10 gets might push him over the top. Dividing by 7 puts a greater emphasis on totals without throwing out something resembling an average per-game performance. For example, if they both won a tournament, the one with 9 games currently gets a higher per-game score, since his is divided by 9 and the other's by 10.

I floated this idea to another decision maker, and he really seems to like it. Any thoughts on that more specific idea?

KOs -- On KOs, I'm convinced after looking at 3 years of data that it is a skill and not just the result of good play. Some otherwise similar players have vastly different KO totals, even though all other things are pretty close. However, I'm also considering separating KOs from the rest of our scores and only counting them for our bounty hunter award. Since that is a separate award, keeping that score separate doesn't seem crazy to me.

Top Player -- We count tournament wins because winning is the single hardest thing to do, and it would seem that having our "best" player be one who didn't win a single tournament is like saying the Buffalo Bills were the NFL team of the 1990s because they were the only team that went to 4 Super Bowls (and lost all 4).

Sticking close to money results -- Finally, this last one is a bit complex. History: some years ago I read an article in which Mike Caro proposed that the guy who wins a tournament should get all the money because he got all the chips. I assume he was serious (it seemed to be a serious suggestion). I think that's as artificial as any other method, since the tournament format forces higher and higher blinds and continues until there is only one player left. #2 could be the better player but lost because #1 got cards at the right time, not because he necessarily played better.

Applying that to this year, Len wins because he won the biggest tournament any of these players won, even though he had a lackluster performance in the other games. But if only one place pays, even 2nd is lackluster. We both agree on who the top 4 are, and I would have discounted Len because of other issues, but a money system could be devised where he is the clear winner.

I'm not seriously thinking of trying to tie the two together, but do you think Len was close enough this year to legitimately be the champ?

BTW, we are meeting Saturday to discuss possible changes, so I'm interested in additional thoughts and even other ideas.
 
My bias toward taking only the top seven outcomes is to avoid over-rewarding perfect attendance. There is a solid case for rewarding the best attendance, so long as it doesn't kill the action at the end of a season.

Using KOs as a special bounty hunter award looks like a fine idea.

I think it is plausible that the best player could play a season and never win a single event. It might be hard to accomplish that, depending on the scoring system used.

I would never consider a winner-takes-all format; Mr. Caro's opinion doesn't sway me a bit. Along the same line, I wouldn't score best player of the season in a winner-takes-all format. Sure, a single tournament win should give a significant number of points for the achievement, but not so much that a single night would be enough to carry the entire season. I assume the cash payouts each night are spread widely among the players making the final table and would make a suitable proxy for the best-player-of-the-season award. If the cash payout table rewards only 2 or 3 players, then it isn't a good choice for measuring season-long performance.

Len is not champ, he isn't even a contender.

DrStrange
 
Not in order ~

I agree, Len should not be a contender for the top 2 spots, and he wasn't in our system. I was just pointing out that a scoring system could be arranged that makes him the best, and Mike Caro suggested one. Like you, I don't agree with him on that issue. It does illustrate, though, how tournament scoring systems are all artificial. Whoever devises the scoring system does it according to what they think should be rewarded.

There is one other issue we look at, but it doesn't count. Our system rates the top 3 of all players (48 this year) in each of 10 categories. I think of it like the Olympics -- gold, silver, and bronze in each of 10 events. Winning 3 golds is tough, but every year we've had at least one player do that. It seems to be a decent test of validity: the best player should have more truly elite performances, and that's always been the case. But it has the disadvantage that more people score identically if you don't count the other performances, so it doesn't make fine distinctions between close players. Only 16 of 48 would win a medal; 49 total medals because of ties. If 3 or more are tied for the top spot, it doesn't award silver or bronze. If 3 are tied between gold and silver, it doesn't award a bronze. When I say award, it just color-codes those top 3 spots.

One issue I see with taking totals and dividing by 7 for half the score is the possibility that it alters results. (And yes, I get that's just as artificial as any other method.) Suppose two players both attend enough games to qualify, but one attends 1 more game; that player improves his average just by attending. Right now the per-game average and the totals are equally weighted, but dividing by 7 puts more weight on the totals. It doesn't really penalize a player for a poor performance, whereas the current system does by rewarding only elite performances while counting all performances. At least one committee member likes it because it does weigh totals more. And he didn't attend enough to win, so he's not looking out for his own interest.

The divide-by-7 average and making KOs a completely separate category should make for the most discussion. If we make KOs separate, they wouldn't be used as a tie-breaker for anything other than Top Bounty Hunter. Ties are rare. This year, of the 48 players, 2 players tied for 14th, 3 tied for 40th, 3 tied for 43rd, and 2 tied for 46th. There were no ties anywhere near the top. All of those who tied attended only 1 game and performed identically in all respects to another player who attended a single game with the same number of players. The only other two players who attended the exact same mix of games were the two with perfect attendance.

It is possible for someone to win the overall without winning a tournament, but almost impossible if someone else wins 2 tournaments. We weigh tournament wins heavily because winning is the hardest single thing to do. This is the first year we've had no repeat winners. In both prior years, one player was dominant and would have won by almost any measure, but that's not true this year. Results really bounce between 3 players depending on what you measure. The final margin (which hasn't yet been announced) was razor thin. We weigh 10 things, and changing even one of them could affect the outcome.

Even a player with no chance of qualifying for an award still has an incentive to come, because the awards have no real tangible value. The incentive is that seats for the Main Event, while players do sign up for them, are awarded based on attendance. We averaged 16 players, had 48 different players come, and those 48 were basically competing for 23 of the 30 seats. Our Main Event is a combined event with another group, which got 7 seats this year. Someone who came only once didn't make it into the top 30 who wanted to come.

Our cash payouts are 4 places @ 10 players; 5 @ 15; 6 @ 21; and 7 @ 28. The worst percentage paid is 6 places at 27 players (22.2%). It's mostly a linear payout, designed to pay more players and keep it fun. It's not really constructed to determine the best player. We are more interested in keeping attendance up by rewarding more players than in having high payouts. That's fine on a club level, but I judge myself primarily based on money.
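
In code form, those thresholds look like this (the below-10-players case is inferred from the earlier mention that we pay as few as 3 places):

```python
# Places paid by field size, reading "4 @ 10; 5 @ 15; 6 @ 21; 7 @ 28"
# as the field sizes at which each payout level kicks in.

def places_paid(field_size):
    if field_size >= 28:
        return 7
    if field_size >= 21:
        return 6
    if field_size >= 15:
        return 5
    if field_size >= 10:
        return 4
    return 3  # assumed for the smallest games; not spelled out above

print(places_paid(27), round(places_paid(27) / 27, 3))  # 6 places = 22.2% of field
```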

I did look at what would happen if we paid out 7 places regardless of attendance. #7 would not break even until we had 28 players. With fewer, #7 would lose money, but at a slower rate than those who got no payout. #1 would get less for winning at every level. I have about an even mix of people saying pay more to fewer players vs. pay less to more players, but most like the current payout system.
 
Tex, I have not read any of the comments beyond your initial post (intentionally, so as not to influence my rankings in any way). I would find it beneficial if you could fill in the blanks for the finishers in each of the eleven events. You can put them in parentheses if you wish, to distinguish them as not having qualified for awards due to having fewer than seven appearances.

Also, if you have the data (which I assume you do), please provide a list of all 48 participants along with the number of tournaments played and number of knockouts scored for each player.

I'll feed the data into several points formats and report back the results of each.
 
I’m interested in how you would rate these 10 players and why you would rate them that way.

Based on their performance over the course of eleven events, I'd rank your ten players in this order:

1 - Peter, 17.6
6 - Dan, 14.6
7 - Rob, 14.4
8 - Len, 11.9
15 - Carol, 7.8
17 - Tim, 7.4
18 - Brandon, 7.0
19 - Ron, 6.6
20 - Mindy, 4.8
26 - Kevin, 0.0

As you can see, there were a number of other players (not even eligible by your standards) who I feel outperformed many on your 'top ten' list -- which indicates to me a severe problem with the way your system determines high performance.

My top Twenty:
1 Peter (7), 17.6
2 Johnathan (5), 17.4
3 Gary (6), 15.8
4 Nathan (3), 15.7
5 Scott O (5), 15.0
6 Dan B (9), 14.6
7 Rob (10), 14.4
8 Len (11), 11.9
9 Harlin (1), 10.5
10 Randy (1), 10.5

11 EJ (6), 9.4
12 Sheri (5), 9.3
13 Heath (2), 9.0
14 Byron (1), 9.0
15 Carol (7), 7.8
16 Pat (5), 7.7
17 Tim (7), 7.4
18 Brandon (8), 7.0
19 Ron (11), 6.6
20 Mindy (7), 4.8

The points reflected above are based on the top 25% of each field receiving points, with a baseline of 10-6-3-1 at the average attendance point (16 players), using a narrow +/- multiplier range to account for field size differences. Players who perform well receive points; players are not rewarded for attendance or mediocre performances. Players who win large-field events are not overcompensated. Meaningless 'performance' categories are not rewarded in the points system.

Under this system, a total of six players who were not even considered by your reward system performed well enough to finish in the top ten. There is something seriously wrong there.

A few observations:
  • Knockouts are not necessarily a good indicator of player performance. Sometimes this may be the case, but it is more often simply a measurement of aggression and/or luck.
  • In small field sizes, making the final table is not an indicator of high performance, especially when combined with few top three finishes. It is more often a measure of survival or 'hanging on'. This is not the case when hundreds or thousands of players are involved, but in a two- or three-table event, making the final table doesn't necessarily mean good play.
  • Points should not be awarded for finishing in menial positions. The difference in skill for finishing 6th or 7th in a 12 player field is negligible, and neither effort should be awarded compensation for finishing mid-pack.
Tex, I know you like to argue endlessly about how your system does this and that and what-not, and frankly, I'm just not interested. However, if you sincerely believe that your system rewards the best performers, I'll make you a wager:

I'll take my top ten players above vs. your top ten list, and you tell me which list won the most money -- which, in the end, is the prime indicator of performance. I'll buy you dinner if your top ten players won more cash over the eleven events than mine. If my list cashes higher, change your system so that the true high performers are rewarded and recognized.
 
BG, we only had 10 players attend the 7 games required to earn one of the awards. It's a competition to see which of our regulars performed the best, with a minimum of 7 games. Our system rates all players. We did have some (#3-6, in fact) who were in the top 10 but ineligible -- even 6 of the top 10 if we don't divide per-game averages by 7. So it's not true that our system doesn't consider them. It considers and rates every player. I only asked people to rate the 10 who were eligible because I'm really interested in #1, #2, and which of the 2 qualifying female players was better. I know how our system rated them, but I'm interested in any ideas that would improve our system.

There are some things I like about your suggestions. I see your results. So let me ask:
  1. I'd think every league (ours is more of a club, though) has some minimum number of games for awards if they give them. What do you think the required number of entries should be, and why?
  2. What is the "narrow +/- multiplier" for field size differences? If it's not proportional, how will it follow the money?
  3. If you give more to the top 25%, what happens to the scoring at 20 players? 24? 28? You only pointed out the average for 16.
  4. Then what happens at 12? Do you only award 3 spots? One thing we found really skewed the scores was the significant difference when adding one player also added a points or pay slot. That's how we wound up doing 7. I could see 5 being very workable, or even 4. I re-ran ours with only the top 5 places scored. You have to go past the first 5 places before it alters anyone's position, so it would not change the outcome.
  5. Byron came once and finished 1st in a 14-player tournament. Heath came twice, finished 2nd in a 15-player tournament and 9th in the 14-player tournament. They seem to be equal (9.0). Is that an error? The formula, based on what you said, would be (Byron) 10 pts x X = 9.0 with 14 players; (Heath) 6 x Y = 9.0 with 15 players, unless you don't give 10 until there are 15 players and only 6 at 14 players. If you strictly follow the money, Byron won $92 and Heath won $65 - $20 = $45, so Byron won more than 2x as much. If this isn't an error, I don't understand how you got there. Please explain.
  6. Is your system a pure accumulation system, or does it have a per game component?
It seems you would not give participation points at all. I like that a player who just shows up wouldn't beat another player just because he attended one more game. He'd only gain if he accomplished something. It would certainly cut down on record keeping. Scores themselves would be a lot lower.

It looks like you give a lot more credit for 4th than we do. We give more credit for finishing 1st. Part of our (ongoing) discussion is giving more credit for top-5 finishes.

Comments on your observations:

  • Knockouts are not necessarily a good indicator of player performance. Sometimes this may be the case, but it is more often simply a measurement of aggression and/or luck. TR -- I agree it measures aggression. Over a short run, it can measure luck. But for 3 years we've seen the same thing: certain players are much better at this than others with otherwise similar performances. In our system, KOs are primarily a tie-breaker when other factors are close. One discussion point is taking them out of the overall evaluation and making them their own separate category. No decision has been made on that yet -- good arguments both ways. Our issue isn't whether to use KOs, it's how to weigh them.
  • In small field sizes, making the final table is not an indicator of high performance, especially when combined with few top-three finishes. It is more often a measure of survival or 'hanging on'. This is not the case when hundreds or thousands of players are involved, but in a two- or three-table event, making the final table doesn't necessarily mean good play. TR -- I agree with that. Because of the high percentage of players making the FT, it's another factor that really only helps break ties between otherwise close players. I think 61% made the FT, which makes doing so pretty average all by itself. Because the number is so high, it counts very little. If we averaged 20 players, it would indicate those who consistently finish in the top half. If we had the 30 our system was set up for, it would show the ability to survive into the top 1/3, which is a skill. We see this over time as well -- some players who rarely finish near the top consistently make the FT.
  • Points should not be awarded for finishing in menial positions. The difference in skill for finishing 6th or 7th in a 12-player field is negligible, and neither effort should be awarded compensation for finishing mid-pack. TR -- I agree with that. Our system gives 2x the points for 6th as for 7th, but a player who doesn't finish much higher at least once isn't ever going to win. And while I agree those two finishes are likely similar, both players were at the same table when that happened.
Questions we are discussing I'd like feedback on
We have a per-game average component that makes up half of the current evaluation. Cumulative scores are divided by 7, or by the number of games a player attends if higher than 7. We've discussed capping that divisor at 9 or 10, which effectively drops the worst 2-3 games of a player with perfect attendance (normally 12 games). However, we've discovered that wherever we draw that line, it might mean that a player who attended one more game, but did poorly, beats a player with a higher average. If we don't divide by at least 7, the players at the very top will always be those who came once and won. Once a player attends a second game and doesn't win, his average will never approach what a 1-time attendee/winner has, and that makes up half the score.

We've given thought to giving extra points beyond participation points only to the top 5 instead of the top 7. That might mean, though, that a guy might cash at #6 or #7 but get no extra points for it. Thoughts?

We award points specifically for winning a tournament. It is possible for someone to be in our top 2 without winning, but the chances decrease significantly if any player wins 2 tournaments. In prior years, #1 and #2 both won at least 2 tournaments. This year there were no repeat winners. There is a strong feeling that someone who wins 2 tournaments should beat out a guy who doesn't win one. Thoughts?

Thanks.
 
Check out the pocketfives.com scoring formula, or the WSOP POY formula.

There shouldn't be a linear correlation between placement and points. It is much more important to win than it is to move up a spot. That isn't reflected if 1st gets 10 points and 2nd gets 9, while the difference between 6th and 7th is also one point. Think about how tournaments are paid out. Your point scale should have a similar relationship to placements as $ payouts do in MTTs.
 
Rainman, our points are based on a Fibonacci sequence -- 8th and lower = 1; 7th = 2; 6th = 3; 5th = 5; 4th = 8; 3rd = 13; 2nd = 21; and 1st = 34. It's not linear at all.
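
In sketch form (the 34 = 21 + 13 identity is what makes one win worth a 2nd plus a 3rd, which comes up again below):

```python
# Tex's Fibonacci finish points: each place is worth ~1.6x the place below it.
FIB_POINTS = {1: 34, 2: 21, 3: 13, 4: 8, 5: 5, 6: 3, 7: 2}  # 8th and lower = 1

def finish_points(place):
    return FIB_POINTS.get(place, 1)

# Fibonacci property: a win equals a 2nd plus a 3rd (34 = 21 + 13)
print(finish_points(1) == finish_points(2) + finish_points(3))  # True
```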
 
Rainman, our points are based on a Fibonacci sequence -- 8th and lower = 1; 7th = 2; 6th = 3; 5th = 5; 4th = 8; 3rd = 13; 2nd = 21; and 1st = 34. It's not linear at all.

Oops, I missed that. Is field size taken into consideration at all, or are all the tournies about the same size?
 
I'd think every league (ours is more of a club, though) has some minimum number of games for awards if they give them. What do you think the required number of entries should be, and why?
If it is a performance-based league/competition, then I think the results of the players should stand on their own merits - i.e., no minimum required. If Player A plays in 5 events and wins all five, he clearly is one of the top performers in the group... and to exclude him from receiving awards when he has obviously demonstrated high performance is idiotic in my opinion. If it's a social club where performance is not as important as other variables, then use whatever rules make the most sense to ensure that the mediocre players return next season and don't lose interest. But that's not measuring actual and true performance, and trying to combine the two opposite ends of the spectrum just doesn't work.

One of our leagues awards the top eight point finishers with entry into a free-roll Championship Tournament, which is funded by a small rake from the nine regular-season tournament prize pools. The minimum number of attended events required to gain free entry is five (rules specify "greater than 50% attendance"). But it also allows a player who finishes in the top eight with fewer than 5 appearances to make up the difference in rake (added to the CT prize pool) and still play; this appeases the players who made a larger financial commitment to the CT by playing in more regular-season events. Awarded points are structured so that it is rare for a player to amass enough points to finish in the top eight with only a single win (it has never happened in 13 seasons).


What is the "narrow +/- multiplier" for field size differences? If it's not proportional, how will it follow the money?
Your field size multiplier uses increments of +/- 0.1 per player, is based on a normalized field size of 10 players, and ranges from 0.3x (3 players) to 3x (30 players). I do not believe that points for winning a 30 player event should be 3x those of winning a single table of 10 players, and your multiplier seems to excessively reward finishers in larger tournaments and overly penalize finishers in smaller tournaments. In my opinion, the range should be much narrower.

Our multiplier uses increments of +/- 0.05 per player, is based on historical average field size, and when applied to your system (avg = 16) results in a range of 0.35x (3 players) to 1.7x (30 players). More importantly, the range actually used by your events (based on actual attendance numbers) is 0.8x (12 players) to 1.35x (23 players); at 0.55, that span is half the 1.1 span your system actually used (1.2x for 12 players up to 2.3x for 23 players, with 1.6x for an average 16-player field).

The narrower +/- multiplier range still rewards extra points for finishing high in larger fields, but not excessively so.


If you give more to the top 25%, what happens to the scoring at 20 players? 24? 28? You only pointed out the average for 16.
Not quite sure what you are asking. Our system awards points to the top 25% (rounded) of the field, regardless of size.

For example:
12-player field = top 3 are awarded points
16-player field = top 4 are awarded points
20-player field = top 5 are awarded points

Some of our leagues award points to the top 33% of the field, and some award points to the top 33% but money only to the top 25% (in the latter, the money bubble still gets points but no cash). A couple of leagues award menial point amounts to the second 25% (5th-8th in a 16-player field, for example), but those are pretty meaningless in the overall scheme of things.


Then what happens at 12? Do you only award 3 spots? One thing we found really skewed the scores was the significant difference when adding one player also added a points or pay slot. That's how we wound up doing 7. I could see 5 being very workable, or even 4. I re-ran ours with only the top 5 places scored. You have to go past the first 5 places before it alters anyone's position, so it would not change the outcome.
See above. I think using a set number of positions vs. a percentage results in awarding points to players who do not deserve them. For example, it makes no sense to me to award points to seven players in an 8-player field. All you are really doing at that point is rewarding attendance, and it cheapens the points awarded to a 7th-place finisher in a much larger field.


Byron came once and finished 1st in a 14-player tournament. Heath came twice, finished 2nd in a 15-player tournament and 9th in the 14-player tournament. They seem to be equal (9.0). Is that an error? The formula, based on what you said, would be (Byron) 10 pts x X = 9.0 with 14 players; (Heath) 6 x Y = 9.0 with 15 players, unless you don't give 10 until there are 15 players and only 6 at 14 players. If you strictly follow the money, Byron won $92 and Heath won $65 - $20 = $45, so Byron won more than 2x as much. If this isn't an error, I don't understand how you got there. Please explain.
By my manual calculations, Byron should have scored 9.0 points for finishing 1st in a 14-player event (10 x 0.9). Heath would have scored 5.7 points for 2nd in a 15-player event (6 x 0.95) plus 0 points for his 9th-place finish in the 14-player event, for a total of 5.7 points.
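
A minimal sketch of the multiplier math behind those corrections, as I read it from this thread (a linear +/- 0.05 per player around the 16-player average, applied to the 10-6-3-1 baseline; the top-25% cutoff described earlier governs separately who scores at all):

```python
# BG's narrow field-size multiplier: +/- 0.05 per player relative to the
# historical average field size (16 for Tex's league).

AVG_FIELD = 16
BASE_POINTS = {1: 10, 2: 6, 3: 3, 4: 1}  # 10-6-3-1 baseline

def multiplier(field_size, step=0.05, avg=AVG_FIELD):
    return 1 + step * (field_size - avg)

def points(place, field_size):
    # Places outside the baseline table score 0 here; the top-25% rule
    # determines how many places actually score in a given field.
    return round(BASE_POINTS.get(place, 0) * multiplier(field_size), 2)

print(points(1, 14))  # Byron: 10 x 0.90 = 9.0
print(points(2, 15))  # Heath: 6 x 0.95 = 5.7
```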

I cannot explain the reporting error, although my calculations were done late at night using your spreadsheet with substituted numbers in the Scoring Reference table. I did not error-check all of my results, although I did discover a few monthly tabs that were not calculating properly and manually checked/corrected those numbers.

EDIT: Going back just now, I see that the Dec tab was not using the correct multiplier (it still shows your 1.5 multiplier value instead of 0.95, so the previously reported results for Scott O, Heath, Tim, and Ron were all inflated). Corrected total points are:
Scott O - 9.5 pts
Heath - 5.7 pts
Tim - 5.7 pts
Ron - 6.1 pts

The corrections change the order of the top 20 places to this:

1 Peter (7), 17.6
2 Johnathan (5), 17.4
3 Gary (6), 15.8
4 Nathan (3), 15.7
5 Dan B (9), 14.6
6 Rob (10), 14.4
7 Len (11), 11.9
8 Harlin (1), 10.5
8 Randy (1), 10.5
10 Scott O (5), 9.5 (previously 15.0 in error)

11 EJ (6), 9.4
12 Sheri (5), 9.3
13 Byron (1), 9.0
14 Carol (7), 7.8
15 Pat (5), 7.7
16 Brandon (8), 7.0
17 Ron (11), 6.1 (previously 6.6 in error)
18 Tim (7), 5.7 (previously 7.4 in error)
18 Heath (2), 5.7 (previously 9.0 in error)
20 Mindy (7), 4.8

Although it has not been previously mentioned, our scoring system uses the following tie-breakers, in order: points, wins, cash, knockouts, events. The tie-breakers were not applied in either list published here.


Is your system a pure accumulation system, or does it have a per game component?
Strictly total accumulation. When points are awarded only to the top 25% of the field, the need to view results on a points-per-event basis is drastically reduced, and the problem of low event counts producing artificially high points-per-event numbers is eliminated.


It seems you would not give participation points at all. I like that a player who just shows up wouldn't beat another player just because he attended one more game. He'd only gain if he accomplished something. It would certainly cut down on record keeping. Scores themselves would be a lot lower.
Correct on all points. We do have some leagues that are more social in nature (smaller entry fees, etc.) and that do partially reward attendance on a small scale, compared to this system.


It looks like you give a lot more credit for 4th than we do. We give more credit for finishing 1st. Part of our (ongoing) discussion is giving more credit for top-5 finishes.
Your assessment of this may change in light of the corrected numbers above. But our 10-6-3-1 base system awards 4th place points at 1/10 those of first place (and only when applicable based on qualifying field size). Your 34-21-13-8 system actually gives more credit for 4th relative to the points awarded for 1st through 3rd, and 4th would score even in a 4-player field (another example of why a pre-set number of awarded positions is a bad idea).
 
Questions we are discussing I'd like feedback on
We have a per-game average component that makes up half of the current evaluation. Cumulative scores are divided by 7, or by the number of games a player attends if higher than 7. We've discussed capping that divisor at 9 or 10, which effectively drops the worst 2-3 games of a player with perfect attendance (normally 12 games). However, we've discovered that wherever we draw that line, it might mean that a player who attended one more game, but did poorly, beats a player with a higher average. If we don't divide by at least 7, the players at the very top will always be those who came once and won. Once a player attends a second game and doesn't win, his average will never approach what a 1-time attendee/winner has, and that makes up half the score.

We've given thought to giving extra points beyond participation points only to the top 5 instead of the top 7. That might mean, though, that a guy might cash at #6 or #7 but get no extra points for it. Thoughts?

We award points specifically for winning a tournament. It is possible for someone to be in our top 2 without winning, but the chances decrease significantly if any player wins 2 tournaments. In prior years, #1 and #2 both won at least 2 tournaments. This year there were no repeat winners. There is a strong feeling that someone who wins 2 tournaments should beat out a guy who doesn't win one. Thoughts?

I don't think ppa (points per appearance) is a valid indicator in small sample sizes, and it certainly doesn't warrant being half of your current season player evaluations. After 13 seasons and 130+ events, I barely think it is a good indicator even with that much data. I've found that a minimum of 25-30 events is needed to minimize the fluctuations that can occur with an additional first- or last-place finish. It does make for interesting data viewing (and bragging rights, without using cash numbers, which have emotional drawbacks), but I don't think it deserves a place in a short-season evaluation.

As previously stated, I strongly disagree with using either participation points or a fixed number of positions being awarded points. Neither is conducive to a good performance-based points system.

In 13 seasons (139 total events, including Championship Tournaments and special events), we have had 31 different individual tournament winners. 19 of those have multiple tournament wins, and nine players have 5 or more tournament wins.

Looking at those 13 individual regular seasons (9 events each): eighteen times a player has scored two wins in a single season, and there has been a player with 3 wins in eight of the seasons (4 wins one of those times). Only once (season VII) have there been nine different winners with no repeaters. The fewest winners in a season is five (three times), and the average number of winners per season is 6.3. There have been eight different regular-season champions in 13 seasons, with three players having won more than one championship.

One year (season III), the series champion had one win while both 2nd and 3rd place each had two wins - but the champion had several more top-three finishes to accumulate more points (demonstrating better overall performance for that season). In another season (VI), the champion had 2 wins while the runner-up had 3 - again, due to the champion having several more top-three finishes. In the other eleven seasons, the player with the most wins won the championship, although there was a tie for most wins in four of those seasons (so players' non-winning performances made the difference). The highest any player has finished without winning an event during the season is 2nd.
 
I'll start with Rainman ~ Yes, field size does count, and it's proportional. Everyone gets 1 participation point, and that is multiplied by the number of participants divided by 10. I got that formula from either Bluff Magazine or Card Player Magazine. They used a less direct approach, but after playing around with it, I discovered that from 7 or 8 players up to 30, our formula duplicated their results.
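
In code form, that participation formula is simply this (12 and 23 were this year's smallest and largest fields):

```python
# Tex's proportional participation point: 1 point scaled by entrants / 10.

def participation_points(entrants):
    return 1 * (entrants / 10)

print(participation_points(12))  # 1.2 for this year's smallest field
print(participation_points(23))  # 2.3 for this year's largest field
```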

BG ~ Wow! That's actually a great deal of useful information! I'll take it from the top.

Performance-based -- While I like some per game average, we do think 50% counts too much. We did already adjust that, and will likely again. The challenge has always been how to deal with high performers who came only 1 or 2 times. Johnathan is a great example. He came 5 times, did very well in the two largest games, but did nothing in the smaller games. This was his lowest-attendance year, and his performance this year was higher than his historical norm. He's historically done really well at a couple of tournaments and poorly in most others. He's just one example of the problem the NCAA has introduced us to -- a low number of data points. Honestly, I don't think 7 is enough to truly evaluate the best performers, but if we required the 10 I'd be comfortable with, our eligible list would be only 4 players this year. At that point, it's about attendance. Again, we aren't taking anything into the final game from this, so there are both performance and social aspects to it. Our intent has always been to determine the best of the regulars, but obviously with 48 people coming to 11 events, it's pretty open. We have no membership per se.

Top 25% (and I realize it could be 1/3) -- If I'm understanding this correctly, 3 players are awarded points at 12-15 entrants; 4 at 16-19; 5 at 20-23; 6 at 24-27; and 7 at 28-30. Is that correct, or do the breaks occur at slightly different numbers? If so, that translates into 20-25% for 3; 21-25% for 4; 22-25% for 5; 22-25% for 6; 23-25% for 7. That's a fairly narrow range, and I don't think it's unreasonable. Our current system gives points to 7 places, but as the field size increases, the points given increase. I think we are closer to giving points (or extra points) to only 5 or 6 than to a straight percentage, because of the problems that causes. For example, the jump in top points for just 1 more player is more significant for a larger tournament than in our current system, where tournament size is directly proportional to the points. That's the only place we value field size, so while it seems like a big difference, the fact that we weigh other factors offsets how much field size and the high number of points count.

How many points do you give at each field size for the top players? I really need that info to evaluate and compare. I get 10-6-3-1 when 4 places are awarded, but how many for 3, 5, 6, and 7?
The magazine I mentioned scored only the top 5 -- 10-7-5-3-2, and 1 for everyone else.

Corrections -- Thanks for the clarification. That does make sense.

More credit for 4th -- Actually, I think these numbers are deceiving. You give 0 points for 5th, so even though your 1st is 10x the 4th-place points instead of roughly 4x like ours, ours puts a greater emphasis on higher finishes because every player gets something. Once I know how you score at the various field sizes, I can compare this better. Also, if we reduce the number of extra-point places to 6 or 5, it's going to affect all scores. The basic idea of going with the Fibonacci sequence is the relatively close 1.6x value of each finish vs. the next higher one. It practically means that if one guy finishes 1st in a 20-player tournament, he scores as well as a player who enters two 20-player tournaments and finishes 2nd and 3rd (34 = 21 + 13). The first guy winning is equal to the second guy doing pretty well in twice the number of events. However, because field size varies so much -- you can see we had 12 2x, 14 2x, 15 3x, and 17 2x (12 12 14 17 15 22 15 23 17 14 15) -- in 7 of those tournaments it seems you'd award points to only 3 players, to 4 players 2 times, and to 5 players 2 times. That seems to put a big emphasis on field size, as opposed to our proportional system.

Your second post is mostly about history. It's odd that we had no repeat winners this year. In prior years we've had multiple repeat winners. Our field size varied more this year than in years past (16-21 in 2013, 16-27 in 2014, and 12-23 in 2015). And this year, only 4 of the 11 tournaments were as large as the smallest of the previous two years.

While we disagree about whether per-game performance should count, I'm looking to figure out how to weigh the various factors. We also disagree about participation points, but maybe not by much. Participation points really tell us little about our very top players, but they do give us a way to rate all players. Your system would seem to lump everyone who never scored points into one big mass of 0. Considering what we are trying to accomplish -- finding our top 2, plus the best of the sex that didn't produce the top players -- that's not a bad way to do it. It's just not the way we've done it.

BG, I'm looking forward to hearing more. We have to have our system in place to announce by Jan. 14, since it will be announced the night before our first season game. So let me ask this question:
If we are going to weigh per-game performance, in what relation to cumulative would you do it?

I'd appreciate prompt input so I can actually play with it and submit it to the others for consideration.

Ideas from others welcome too!
 
While I like some per game average, we do think 50% counts too much. We did already adjust that, and will likely again. The challenge has always been how to deal with high performers who came only 1 or 2 times.

Honestly, I don't think 7 is enough to truly evaluate the best performers, but if we required the 10 I'd be comfortable with, our eligible list would be only 4 players this year.

Our intent has always been to determine the best of the regulars

If we are going to weigh per-game performance, in what relation to cumulative would you do it?
I think you hit the nail on the head when it comes to using any kind of per-game average to determine performance: your data sample size is simply too small for it to be meaningful or helpful in any way. Nothing wrong with tracking it (or even publishing it, or awarding it on its own), but I would never use it to influence the 'best player' awards unless I had at least 20 or more data points for every player. Even at that data level, I'm not sure it would say anything the straight points don't already show... and if that's the case, why bother with it at all, since it adds nothing except complexity? Sometimes simpler is better.


If I'm understanding this correctly, 3 places are awarded points for fields of 12-15; 4 for 16-19; 5 for 20-23; 6 for 24-27; and 7 for 28-30. Is that correct, or do the breaks occur at slightly different numbers?

How many points do you give at each size for the top players? I really need that info to evaluate and compare. I get 10-6-3-1 for 4 payouts, but how many for 3, 5, 6, and 7?

The numbers for awarded positions are typically rounded up/down. For example, 25% of a 14-player field is 3.5, which rounds up to 4 places, while 25% of a 13-player field is 3.25, which rounds down to 3 places. If 33% were being used, those numbers would be 4.6 (5 places) and 4.3 (4 places), respectively.
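In code, the rounding works out to something like this (a quick sketch; the function name is invented):

Code:
import math

def places_paid(field_size, pct):
    # Number of positions awarded points: pct of the field,
    # rounded to the nearest whole place (x.5 rounds up)
    return math.floor(field_size * pct + 0.5)

print(places_paid(14, 0.25))  # 3.5  -> 4 places
print(places_paid(13, 0.25))  # 3.25 -> 3 places
print(places_paid(14, 0.33))  # 4.62 -> 5 places
print(places_paid(13, 0.33))  # 4.29 -> 4 places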

There are several scoring tables in use (depending on the specific league), but here is one that uses 33%:

positions awarded / field sizes
2 / 6-7
3 / 8-10
4 / 11-13
5 / 14-16
6 / 17-18

The baseline on this table is 12 players at 10/6/3/1, and a +/-0.04 increment multiplier is used to calculate points for other field sizes. Field sizes that award points to fewer than 4 places simply drop the extra points. Field sizes that award points to more than 4 places calculate those at 50% of the next highest awarded point level.

For example, a 14-player field would receive 10.82 / 6.49 / 3.24 / 1.08 / 0.54 points for 1st through 5th. Points are calculated to two decimal places (resulting in very rare ties), but are typically published to only one decimal place for clarity.


The basic idea of going with the Fibonacci sequence is that each finish is worth roughly 1.6x the next lower one. In practice, that means if one guy finishes 1st in a 20-player tournament, he scores as well as a player who enters two 20-player tournaments and finishes 2nd and 3rd. The first guy winning is equal to the second guy doing pretty well in twice the number of events.
This is pretty much the case in our point systems, too. In addition (and more importantly, imo), a 1st and 3rd finish will roughly equal two 2nd place finishes (with the winner always getting a slight advantage).


However, because field size varies so much -- we had fields of 12 twice, 14 twice, 15 three times, and 17 twice (the full list: 12, 12, 14, 17, 15, 22, 15, 23, 17, 14, 15) -- it seems you'd award points to only 3 players in 7 of those tournaments, to 4 players twice, and to 5 players twice. That seems to put a big emphasis on field size as opposed to our proportional system.
I think you are looking at it wrong, and missing the bigger picture. I would argue that awarding points to a set percentage of the field size IS proportional by definition, and that your system is not proportional at all. Awarding points to a smaller set percentage of the field size puts a big emphasis on performance, regardless of field size. Increasing point awards to a larger number of players definitely awards attendance more than performance, which is counter-productive to your goal as stated.

Winning vs finishing second is often more luck-related than most winners would like to admit. Getting to three-handed play or heads-up, and into a position to win, is often a better indicator of skill and performance than actually winning the event. A player who does this twice (in two tries) has demonstrated far more skill and performance than someone who routinely finishes 5th or 6th (in each of 11 tries). A points system designed to evaluate and indicate top performances should reflect and acknowledge that.
 
BG, thanks! I'll start playing with some of these ideas tonight and see how they work out. It might take me a day or two, but I'll post the results. Maybe we could all learn something.
 
BG, is your multiplier for larger fields linear? If it is not, I don't understand how you do it, and I need to in order to experiment. My time crunch on this is getting critical. Could you possibly send me the exact numbers you use from, say, 10-30? Thanks.
 
The multiplier is +/- 0.04 from one field size to the next. An example, where 12 is the baseline:

field, points
10, 9.22 - 5.53 - 2.76 - 0.92
11, 9.60 - 5.76 - 2.88 - 0.96
12, 10.0 - 6.0 - 3.0 - 1.0
13, 10.40 - 6.24 - 3.12 - 1.04
14, 10.82 - 6.49 - 3.24 - 1.08
etc

The cut-off on awarded points is predetermined (22%, 25%, 30%, 33%, 35%, etc.). The lower the percentage, the more performance-based the system. Higher percentages factor in attendance to a larger degree.

Points awarded to positions higher than 4th are calculated as 50% of the next highest point value. An example, based on the points table above:

5th points = 0.5*(4th points) = 0.54 for 14 players, or 0.46 for 10 players
6th points = 0.5*(5th points) = 0.27 for 14 players, or 0.23 for 10 players
etc.
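Putting the pieces together in code, this is my reading of the calculation -- the key assumption (inferred from the sample rows above) is that the step compounds at 4% per player, up or down from 12:

Code:
def points_for_field(field_size, places):
    # 1st-4th at the 12-player baseline
    base = [10.0, 6.0, 3.0, 1.0]
    # the +/-0.04 step, read as compounding 4% per player from 12
    steps = field_size - 12
    mult = 1.04 ** steps if steps >= 0 else 0.96 ** (-steps)
    pts = [round(p * mult, 2) for p in base]
    # positions past 4th are each worth 50% of the next highest
    while len(pts) < places:
        pts.append(round(pts[-1] / 2, 2))
    return pts[:places]

for n in (10, 11, 12, 13, 14):
    print(n, points_for_field(n, 6))
# 14 -> [10.82, 6.49, 3.24, 1.08, 0.54, 0.27], matching the examples above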
 
Thank you! I'll be playing with this over the next couple of days.
 
First, a response to a comment by BG ~ You said the difference between 1st and 2nd is often luck. I agree with that. Generally, I think there's a group of players who, heads up against each other, each have a chance of winning proportional to their chip stacks at any given point. It comes down to who gets the cards at the right time.

In other cases, one player is enough better that I'd consider it an even match if the better player had 40% of the chip stack, or even 25%.

***
After spending many hours playing with this based on our 2015 season, here are my observations, though I'm going to start with how our "system" does things.

Our system rates players in 5 areas in two ways: one by accumulated totals and the other by per game average. The per game average can be measured in its pure form (attend 1 game and that's your average), though we use the version that divides by a minimum of 7 games. That's relevant to some of the results. Someone using our system can then decide how much to weigh each factor. To make weighing easy, each player's score is converted into a three-digit share (like a baseball batting average) of the total that all players earned in that category. Our system also rates every player who played.

By comparing a player to the total in a category, we can tell how far apart two different scores really are. For example, in one measure the top 2 in points were 62.693 and 61.443. If you compare just these 2 numbers, the first is 2.04% higher. However, when converted to 3-digit shares, they come to .068 and .067 -- a gap of only a tenth of a percent of the category total, roughly 20 times closer. The raw comparison appears more decisive than the normalized one.
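For anyone who wants to follow along, here's a rough sketch of both conversions (the helper names and the raw totals in the example are mine, purely for illustration):

Code:
def per_game_average(total_points, games_attended, min_games=7):
    # Per-game average, dividing by at least min_games so a
    # one-off hot night can't dominate the rating
    return total_points / max(games_attended, min_games)

def category_shares(scores):
    # Convert raw category scores into a 3-digit share of the
    # category total, batting-average style
    total = sum(scores.values())
    return {name: round(s / total, 3) for name, s in scores.items()}

# Hypothetical raw point totals, for illustration only:
raw = {"A": 62.693, "B": 61.443, "C": 50.0, "D": 40.0}
print(category_shares(raw))
# {'A': 0.293, 'B': 0.287, 'C': 0.233, 'D': 0.187}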

General Observations
Money won -- Money won seems like an easy way to measure. If you are looking at a player, you ultimately want to know whether he is a winning or losing player. However, when you have a short season (and I think 12 games is a short season), it's not a reliable indicator, for several possible reasons. If you measure gross payout, a player who has one great night and wins the biggest tournament will come out on top, and good payouts in a couple of large events can put a player well ahead of better players who happened to miss the biggest events. If you measure net payout, you eliminate many players from consideration, and in a short season you might write off a good player who simply had a bad year. Net payouts are best measured by a system that heavily emphasizes per game averages; gross payouts are best measured by a system that emphasizes field size.

I honestly think money won is a much better measure if you have the same players playing 20+ times, which doesn't reflect our season at all. For those who maybe do a weekly game and have upwards of 40 data points, I think it's a great measure of the best player. For short seasons, it can be misleading.

Conclusion -- If you want to use money as the evaluation in a short season, just be direct about it, even if you don't call it that: give the player the exact same points for every dollar won. Just be aware that you may see losing players drop out. Also be aware that if your system awards extra chips in a year-end event, the losers are subsidizing the winners twice: not only do they lose, they are punished extra for losing.

Linear systems -- Linear systems reward survival and average finish, but not necessarily winning. For example, Dr. Strange rated Ron (or Ronald -- once BG got the spreadsheet, he used the names from the spreadsheet -- I probably should have done that originally) as the 4th best. Ron did very well making final tables, but was the 2nd biggest loser in our group in terms of net payouts. We thought our system had him overrated, and he was nowhere near our top 4 in it. Linear systems do not distinguish between elite performance and the rest. Dr. Strange said he doesn't consider field size, though that is at odds with trying to measure by money won.

When I said I didn't think there was any real competition for our Ladies POY, that's because we had Carol ahead by so much. However, his linear system rated Mindy ahead of her. I think that reveals a flaw in both a linear system and not awarding anything for field size. All systems I tried that consider field size and a non-linear payout had Carol ahead.

Conclusion -- Linear systems tend to reward someone who finishes just below the bubble in several games more than players who finished above the bubble enough times to make money. I think that is a serious flaw of a linear system: it rewards mediocrity.

Dropping lower scores -- There are effectively at least 2 ways to do this. One is to count all scores and then drop the lowest; if you only drop scores when a person attends a minimum number of games, this favors attendance, and really favors high attendance. The other is to count some scores as 0 and go only by accumulation. Since our system has no 0 for a player who attends, everyone who plays must get something.

We count all scores and don't drop any. Since we have a per game component, some think that does penalize a player for a poor performance. I personally think it just says every game counts one way or the other.

One variation that I and another guy liked initially was doing the per game averages with both a minimum and a maximum divisor. That has the effect of rewarding performance by only counting scores up to a certain number of games. However, as I'll get to, I don't think it works as well as I initially thought. I'm glad I considered it though -- I wouldn't have known if I hadn't tried it.
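As a sketch of the two variations discussed here (the minimum of 7 is ours; the maximum of 10 is just a placeholder, and the clamped version is my reading of the idea):

Code:
def total_dropping_lowest(scores, drop=1):
    # Count every game, then throw out the lowest `drop` scores
    return sum(sorted(scores)[drop:])

def clamped_average(scores, min_games=7, max_games=10):
    # Per-game average with a floor and ceiling on the divisor:
    # only the best max_games scores count, divided by at least min_games
    best = sorted(scores, reverse=True)[:max_games]
    return sum(best) / max(min(len(scores), max_games), min_games)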

Conclusion -- Dropping lowest scores rewards attendance a lot. It doesn't follow the money at all. And depending on the exact scoring system, it can lead to wild variations that take you far from reality, at least in a short season with a relatively small number of players.

Dropping the lowest and highest scores -- No one suggested this in this thread, but I've heard it before, and it has a certain appeal to some. If we were talking about something based on judges' opinions, it might have some merit, but at least for us, there are no such decisions. There might be luck involved, but ultimately, it was decided by the hands played.

Conclusion -- This most rewards the player with the most high finishes, who is hurt less by dropping his highest score than another player would be. In other words, good performance in a bigger game doesn't count as much, and it certainly moves away from money as the measure.
 
I'm breaking the rest of this into two parts. Read both parts before responding.

BG's Ideas

BG provided some real meaty ideas that I could put into our system and see how they work. There are parts I like and parts I don't.

Overall, I liked a lot of his ideas and can compare them.

No participation points -- I couldn't directly test that in our system, but I could compare his results to ours. I'll admit it appeals from a simplicity standpoint: don't do well enough to get in the points/money, you get a 0. It requires little input. But I couldn't test it in my system directly; perhaps BG could have sent me his spreadsheet to use as a base. After playing with his ideas for hours, though, I discovered it doesn't yield significantly different results. In fact, I can duplicate the results and still give participation points.

What I don't like about this is its inability to measure a player against all other players. That's its biggest weakness: it's hard to determine the difference between 2 players when one has a 0, because the one who scored anything is infinitely better. Though I didn't like that, I found there were ways to compare the idea within our system. In terms of figuring out what you want to figure out, either will work. They are just different ways of getting there.

Conclusion -- Ultimately, giving participation points or not isn't outcome determinative in a system. Thus you can achieve the same thing by giving some participation points to everyone or giving finish points only to a certain percent who do well enough.

Participation points -- Our system is simple: take the number of players and divide by 10, which makes a 10-player field the base of 1 point. It's purely linear, so participation points are directly proportional to the number of participants. I took mine from a linear system. BG's system has 12 players as the base for 1 point, and if I understood correctly, it goes up 4% for each additional player. I'm not sure why he chose a base of 12. Both of our starting points are completely arbitrary, so one isn't better than the other -- it's a distinction without a difference. I wondered, and maybe he will address this, whether he chose 12 because that happened to be our smallest tournament. If that wasn't it, I'd love to know why. We started at 10 because it's a full table for us. However, the starting point isn't critical to outcome in either system; you could set the base anywhere and get the same results.

At first glance, I was concerned that this might make bigger tournaments more important than under the linear system. It does, but not nearly as much as I thought. It has the advantage that the ratio between any two adjacent field sizes is the same percentage, whereas a linear system keeps the data points the same distance apart.

The two systems are different, but far more similar than it might first appear. When you isolate that one factor, the differences in results are minute. If you are going to evaluate very large tournaments, BG's system keeps the numbers smaller while still keeping the same ratios. Most of the systems I've seen are linear for this factor; maybe what I like about his is that it isn't, and it makes sense to me.

Conclusion -- More important than where you set the base of 1 is that your system has a logical progression.
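To see how close the two participation curves really are, a quick sketch (again, the 4% compounding reading of BG's system is my inference):

Code:
def linear_participation(field_size):
    # Our system: field size divided by 10, so a 10-player game = 1 point
    return field_size / 10

def compounding_participation(field_size):
    # BG's system as I read it: 1 point at a 12-player base,
    # compounding +/-4% per player above or below that
    steps = field_size - 12
    return 1.04 ** steps if steps >= 0 else 0.96 ** (-steps)

for n in (10, 12, 16, 23, 30):
    print(n, linear_participation(n), round(compounding_participation(n), 3))
# 30 players: 3.0 linear vs about 2.03 compounding --
# the compounding curve grows more slowly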

Finish points -- By finish points, I mean how many points for 1st, 2nd, 3rd, etc. Those are multiplied by participation points to yield the total. We use a Fibonacci sequence, where each number is the sum of the previous 2. For us, that means if a guy finishes 1st, another guy must finish 2nd and 3rd in two tournaments of the same size to equal him in points. It maintains a ratio of each finish being worth roughly 1.6x the next lower one. Here are our payouts for the top 7 -- 2, 3, 5, 8, 13, 21, and 34.

BG's system is 1, 3, 6, 10 as he clearly identified it; I think it would then go to 15, 22/23, and then 33/35, with the ratio leveling out at 1.5 (Excel will let you carry that to many decimal places). In this system, though there is only 1 point of difference between #4 (1) and #5 (0), the difference is mathematically infinite -- you can't multiply 0 by anything and get a higher number. In that sense, it rewards the last points finish far above the guy on the bubble. Going up from the lowest award, each place is worth 3x the one below it, then 2x, then 1.667x, before leveling off at 1.5x. That does make a difference in final results, but at least for our season, both systems put the same 8 people in the top 8. Where a player finished in a particular tournament then matters quite differently based on tournament size. Even though our ratios seem to be higher, it seemed odd that BG's would reward tournament size more; but after thinking it through, because his amount is the same percentage increase every time, whereas ours varies slightly every time, that has to be the case.
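A quick ratio check on both sequences (the BG values past 10 are my guesses from above, so treat those as assumptions):

Code:
# Our Fibonacci finish points (7th place up to 1st) vs. BG's sequence
ours = [2, 3, 5, 8, 13, 21, 34]
bgs = [1, 3, 6, 10, 15, 22.5, 33.75]  # values past 10 are guessed

def adjacent_ratios(points):
    # Ratio of each finish's points to the next lower finish
    return [round(hi / lo, 3) for lo, hi in zip(points, points[1:])]

print(adjacent_ratios(ours))  # [1.5, 1.667, 1.6, 1.625, 1.615, 1.619]
print(adjacent_ratios(bgs))   # [3.0, 2.0, 1.667, 1.5, 1.5, 1.5]

# And the '1st = 2nd + 3rd' property of the Fibonacci payouts:
assert ours[-1] == ours[-2] + ours[-3]  # 34 == 21 + 13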

One thing I like about BG's for very large tournaments is the consistent ratio for all but the very lowest payouts or points, though the Fibonacci sequence does the same thing with a slightly different ratio. The larger the field being paid out, the less you want massive numbers being the issue. For 30 players, there is very little difference.

One thing I did not like about BG's is that whether a 1st equals a 2nd plus a 3rd at the same tournament size depends entirely on how many places are paid at that size. That means you can't just look at tournament size and finish; you have to know how many places are paid, and results can shift at the break points where one more place is added -- sometimes significantly.

Conclusion -- Awarding points differently will lead to different results -- I know that's not shocking at all. We could have a lengthy discussion about whether his or my finish points are better, or whether to pay a set percentage (25% or 33%, rounded) vs. a set number of places. Either will work just fine. How you do the rounding (i.e., when you add another payout place) has little impact. Neither will yield very different results if the points awarded for the top spots are the same. The ratios of payouts for places 1, 2, 3, 4, etc. will have a far greater impact, but even that probably isn't as much as you would imagine.

Cut offs -- By cut offs, I mean the point at which you no longer award points. We've been awarding the top 7 regardless of field size; BG would award only the top 25% or maybe 33%. I couldn't test this directly (because I can't use 0), but I could test it indirectly by giving more for the bottom paying spot and then keeping his ratios. One thing I can test is setting everything below the bottom payout equal to 1. Here's an example with 16 players: BG would award points only to the top 4 as 10, 6, 3, and 1; we would award the top 7 as 34, 21, 13, 8, 5, 3, 2. At these lower payouts, BG's variations in how you do other things are much greater because of those ratios.

I then went in and used our system, but made everyone not in the top 25% or 33% a 1.
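Here's that 16-player example with the floor-of-1 variation, with each finishing position shown as its share of all the points handed out that night (a sketch; it shows how much flatter our payout curve is):

Code:
# 16 players: BG awards the top 25% (4 places); we award the top 7,
# here with everyone below the payouts floored at 1
bg_16 = [10, 6, 3, 1] + [0] * 12
ours_16 = [34, 21, 13, 8, 5, 3, 2] + [1] * 9

for name, pts in (("BG", bg_16), ("ours", ours_16)):
    total = sum(pts)
    print(name, [round(p / total, 3) for p in pts[:7]])
# BG   [0.5, 0.3, 0.15, 0.05, 0.0, 0.0, 0.0]
# ours [0.358, 0.221, 0.137, 0.084, 0.053, 0.032, 0.021]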

I toyed around with BG's rounding vs. our set number of additional payouts. Neither makes much difference.

To me the most important point is that while results vary a little, neither causes significant changes at the top. You have to go down several spots before you see different players on the list. Any two systems can vary a little in order near the top when performances are close; it just depends on what you want to rate highest.

Conclusion -- There is no real difference between systems that don't reward players who fail to perform extremely well. A linear system will reward lesser players, depending on how far down you rate them.
 
More specific comparisons

I identified 4 differences I could apply, classified as BG's vs. mine where I could compare them. I really don't look at it as a competition though. I've identified these above; I'll look at fields of 10-30 players.

BG's participation points (.925 - 2.026) vs. mine (1.0 - 3.0)
BG's payouts (limited by payout size) vs. mine (7 places)
BG's cutoffs (rounded) vs. mine (set at certain points)
BG's point amounts (for 5 places) 15, 10, 6, 3, 1, 0 (all others) vs. mine 13, 8, 5, 3, 2, 1 (all others)

I didn't run every variation. The only one of these 4 factors that made a real difference is the point amounts. A little analysis shows why: if a person who didn't finish high enough gets 0, no number of games attended helps him, not even in a close race. In that sense, the person finishing 5th in the above scenario does better under BG's system than mine, and the person finishing 6th or lower does better in mine. But neither system rewards #4 and #5 enough to make a difference unless that is the deciding factor because all other performances are equal.

BG, if you have the data, you might try two experiments. One is paying a set number of places and comparing it to your percentage-limited number. The other is using payout amounts based on a consistent ratio. Compare both results to what you are getting now; I'd be interested in the results. I found it only makes a difference at the top if you had a player on the bubble -- in mine, he might move up a little.

About Our 2015 Season (and the reason I asked about this)
In the first 2 years of our league, we had one dominant player each year who won on points accumulated (and per game average), tournament wins, and KOs. That player would have won under any linear or non-linear points system we looked at. The winners just dominated those years.

In 2015, we had no truly dominating overall performances. Parity ruled. Though we had one guy who dominated in KOs far beyond what had been done before, that would have made little difference if he hadn't been right in the mix on other areas.

When all scores were put together, our first and second were Dan, then Rob. Rob had a very slight edge on points, and Dan in points per game; when the two were added together, it was a dead tie when rounded to 3 places. Their performances were so similar that we had to look to other criteria to decide the winner, and by every scenario we looked at, Dan had a slight edge. Of our 10 criteria, Rob won on 3 -- final table appearances, average, and points accumulated. The points were offset by the per game average. Dan won big in KOs, and the only way Rob would have won without KOs would be if we included final table appearances. Those two things count the least in our system, so if you took out one, you should take out both. Thus, we don't feel bad about how this came out.

Peter was the only one who warranted consideration who didn't win a tournament, and that's the main reason he didn't finish first even though he had the most accumulated points. He also did very well in the per game average in that category. However, our system had him completely out of the discussion, even after we removed high performing players who didn't attend 7 games. He was the only one with 4 top-4 performances; in those four he finished 2, 2, 3, 4. If you treat his 2 and 3 as equal to a 1, then he had a 2 and a 4 left over; Rob had a 2 and a 4; Dan had a 3 and a 3. We didn't necessarily think Peter should have won, but we thought he should have been right in the thick of it. That was amazingly close considering the differences in the number of games attended.

In particular, I looked at what I referred to as the BG vs. my system regarding these 3 players. When it comes to point accumulation only, Peter was first by all systems, but that's based almost entirely on his high finish in the largest tournament.

We felt Len was overrated, so I found it interesting that BG's results actually put Len slightly higher than we did (by 1 place). I definitely wasn't expecting that! It was easy for me to say Len was overrated: there is no "Len." Len is the name I used for myself. I felt I had a bad year, but still ended up in the top 10; only Dr. Strange's linear system took me out of it. Thus, there are times linear systems get things right. However, I wasn't near the discussion about the top 2, and that's what we were trying to find. When I looked at how few players scored high more than once, though, I really understood why this was a top 10 spot.

We felt we had Ron overrated. We felt BG's system came a little closer, but still overrated him. I haven't found a fix for that other than going by straight money.

We felt we had Kevin overrated. The linear system had him rated much higher than we did. BG's had him lower than ours.

My Takeaways
I like BG's participation points system. It brings the scores based on that factor closer together, so I'm going to recommend we use it even though ours is simpler and the difference in point results is negligible.

I'm going to suggest BG's payouts and cutoffs for consideration. I doubt we will use either, because I can't really show they improve our system or that they would make a material difference.

I'm going to suggest his points for consideration too. I don't see us using them, because past discussions suggest the group likes the consistent ratios, and the lack of participation points makes it hard to evaluate everyone except those who finish high.

We will have some type of per game average; I'm not sure it will change. Given that we divide that score by a minimum of 7 for evaluation purposes, we can say that the 8 highest pure averages overall came from people who attended fewer than 7 times and won a tournament. That seems to confirm that averages are skewed for players with fewer than 7 games. I think this is going to be the toughest decision: if we weight the per game average at less than 50%, it changes who finishes 1st and 2nd -- it was that close. We've always debated what the split should be, with ranges from 50/50 to 70/30 in favor of accumulation.

The very biggest takeaway is tweaking how we weigh the various factors. Making points 75% of the evaluation yielded the following result: 11 of the top 12 players won a tournament (before, all the winners were in the top 15). I toyed around with many ways to weigh the factors.
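The weighting itself is just a blend; something like this sketch (the shares and the 75/25 split are illustrative):

Code:
def overall_score(points_share, per_game_share, points_weight=0.75):
    # Blend the accumulated-points share with the per-game share;
    # we've debated weights anywhere from 50/50 to 70/30
    return points_weight * points_share + (1 - points_weight) * per_game_share

# e.g. shares of .068 accumulated and .061 per game, weighted 75/25:
print(round(overall_score(0.068, 0.061), 3))  # 0.066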

Thanks to Dr. Strange and BG for responding. I might be back next year looking for the same thing, but I definitely feel I got things out of this that will improve our system.
 
