
View Full Version : BC Analyzes Challenge 4

BC
07-30-2004, 01:02 AM
I will be making several of these posts. I'm starting tonight because I just got back from bridge. We had a 50% game. We deserved far worse. I knew there would be a lot of bidding in the last half hour, but...

Anyway.

I started out by trying to analyze each distribution to get its "true" mean and "true" variance. Ury truncated distributions 4-10, which threw their means off. The initial investment was the floor, and the maximum possible cost was the ceiling. So I modeled each of these by looking at %-ile points at

0.00005
0.00015
0.00025

...

0.99995

and treating each distribution as a draw from among 10,000 balls.

The underlying distribution for Contract #

4: Mean ~ 244K, sigma ~ 182K. Truncated Mean ~ 223K, sigma ~ 177K
5: Mean ~ 565K, sigma ~ 352K. Truncated Mean ~ 529K, sigma ~ 313K
6: Mean = 125K, sigma = 30K. Truncated Mean ~ 126K, sigma ~ 28K
7: Mean = 300K, sigma = 100K. Truncated Mean ~ 301K, sigma ~ 94K
8: Mean = 260K, sigma = 260K. Truncated Mean ~ 244K, sigma ~ 205K
9: Mean = 315K, sigma = 315K. Truncated Mean ~ 320K, sigma ~ 277K
10: Mean = 500K, sigma = 75K. Truncated Mean ~ 500K, sigma ~ 75K.
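The percentile-lattice construction described above can be sketched with Python's statistics.NormalDist; contract 7's Normal(300K, 100K) is used, with hypothetical FLOOR/CAP values standing in for the actual initial investment and maximum cost:

```python
from statistics import NormalDist, fmean, pstdev

# Contract 7 is stated above as Normal(mean 300K, sigma 100K); FLOOR and CAP
# here are hypothetical stand-ins for the initial investment and maximum cost
dist = NormalDist(mu=300_000, sigma=100_000)
FLOOR, CAP = 130_000, 600_000

# 10,000 evenly spaced percentile points: 0.00005, 0.00015, ..., 0.99995
lattice = [dist.inv_cdf((2 * i + 1) / 20_000) for i in range(10_000)]

# Truncation: costs below the floor rise to the floor, above the cap fall to it
truncated = [min(max(x, FLOOR), CAP) for x in lattice]

print(f"raw:       mean {fmean(lattice):>9,.0f}, sigma {pstdev(lattice):>9,.0f}")
print(f"truncated: mean {fmean(truncated):>9,.0f}, sigma {pstdev(truncated):>9,.0f}")
```

Treating the 10,000 points as equally likely "balls" then gives the truncated mean and sigma directly, which is how the table above was produced.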

Also, there is value in "Skewness". If you are trying to "beat" another team (and let's face it, the ordering of the results is all that mattered here, not the magnitude), positive skewness is a good thing, because for the same expected value, you win more often. You also win by less and lose by more. I think it's unfair to criticize Cubedbee for "overlooking the risk on 9", since I was advising our team to GO FOR skewness. The three contracts we made money on were the positively skewed ones: their 50%-ile cost is lower than their expected cost. Whatever cliche you want to apply here worked for us.
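The "50%-ile cost below expected cost" point can be illustrated with a lognormal stand-in (not one of the actual contest distributions): positive skew pushes the median below the mean, so the owner beats the expected cost more than half the time:

```python
import math
from statistics import NormalDist

# Hypothetical lognormal cost: log(cost) ~ Normal(mu, sigma)
mu, sigma = 12.0, 0.8
mean_cost = math.exp(mu + sigma ** 2 / 2)   # lognormal mean
median_cost = math.exp(mu)                  # lognormal median

# P(cost < its own mean) = Phi((ln(mean) - mu) / sigma) = Phi(sigma / 2)
p_beat_mean = NormalDist().cdf(sigma / 2)

print(f"median {median_cost:,.0f} < mean {mean_cost:,.0f}")
print(f"cost comes in below its expected value {p_beat_mean:.1%} of the time")
```

The flip side, of course, is the fat right tail: the losses, when they come, are larger.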

Now we all know the knapsack problem, but a quick review:

You have a choice from among n objects to put into a knapsack. Each object has a value to you of v(i) and a weight of w(i). Subject to an upper limit on weight, maximize value.

I constructed a spreadsheet to consider all 1024 possibilities dynamically (actually, just the subset of 503 possibilities that had initial cost less than or equal to 500K), so that it could maximize "Current Bid - expected value" subject to initial cost < 500K, and e-mailed that to a few of my teammates. I also put in a "bang for the buck" measure. The knapsack problem is NP-hard in general, but a heuristic that gets you good results is to consider the subset of objects that can still fit in your knapsack at a given time and select the object for which v(i)/w(i) is the largest. So for the last half hour, when I couldn't be there, I suggested that whenever cash became free, they should go for the best "bang for the buck," and I also produced a list of 181 possible contract/price combinations in descending order of preference (indexed in multiples of 5,000).
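Both approaches can be sketched in a few lines, restricted for brevity to six of the ten contracts using the expected gains and costs quoted later in the thread (the live exercise enumerated all 1024 subsets as bids moved):

```python
from itertools import combinations

BUDGET = 500_000
# (contract id, expected gain, initial cost) -- figures quoted later in the
# thread for six of the ten contracts
contracts = [(3, 187_500, 140_000), (6, 99_000, 80_000),
             (4, 47_000, 50_000), (5, 161_000, 175_000),
             (7, 114_000, 130_000), (8, 21_000, 25_000)]

# Exact: enumerate every subset that fits the budget, keep the best total gain
best_gain, best_set = 0, ()
for r in range(len(contracts) + 1):
    for subset in combinations(contracts, r):
        if sum(cost for _, _, cost in subset) <= BUDGET:
            gain = sum(g for _, g, _ in subset)
            if gain > best_gain:
                best_gain, best_set = gain, subset

# Greedy heuristic: repeatedly take the affordable contract with the best
# "bang for the buck" (gain / cost)
cash, greedy_gain, greedy_ids = BUDGET, 0, []
for cid, gain, cost in sorted(contracts, key=lambda t: t[1] / t[2], reverse=True):
    if cost <= cash:
        cash -= cost
        greedy_gain += gain
        greedy_ids.append(cid)

print("exact:", best_gain, sorted(cid for cid, _, _ in best_set))
print("greedy:", greedy_gain, greedy_ids)
```

On these six contracts the greedy heuristic happens to reach the same total as exhaustive enumeration; in general it only approximates the optimum, which is why the exhaustive sheet was kept as the primary tool.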

More soon...

BC
07-30-2004, 01:20 AM
Okay, anyway...

Based on the final prices in Ury's post, the "bang for buck measure" shows the following deals in order:

Best:

Contract 3 had an expected gain of 187,500 on an initial investment of 140,000, for a relative value (bang for buck) of ~ 1.34

Contract 6 had an expected gain of ~ 99,000 on an initial investment of 80,000, for a relative value of ~ 1.24

Contract 4 had an expected gain of ~ 47,000 on an initial investment of 50,000, for a relative value of ~ 0.93

Contract 5 had an expected gain of ~ 161,000 on an initial investment of 175,000, for a relative value of ~ 0.92

Contract 7 had an expected gain of ~ 114,000 on an initial investment of 130,000, for a relative value of ~ 0.88

Contract 8 had an expected gain of ~ 21,000 on an initial investment of 25,000, for a relative value of ~ 0.86

Contract 2 had an expected gain of 67,500 on an initial investment of 95,000, for a relative value of ~ 0.71

Contract 9 had an expected gain of 62,000 on an initial investment of 90,000, for a relative value of ~ 0.69

Contract 10 had an expected gain of 75,000 on an initial investment of 200,000, for a relative value of ~ 0.38

Contract 1 had an expected gain of 0 on an initial investment of 30,000, for a relative value of ~ 0.00

The best "full knapsack" of contracts until 7:30pm was 1,4,5,7,8,9, based on expected gain. This finally changed: the prices on those contracts were bid down far enough that the combination fell to 87th best of the 503 possibilities. The best "full knapsack" based on the final prices was 1,3,4,5,6,8 (actually, it was a tie with 3,4,5,6,8).

According to my calculations:

Team Green's total mean is ~343K and total sigma is ~425K. We took risks and they seem to have approximately cancelled out. We had 120K of investment undeployed.

Team Blue's total mean is ~354K and total sigma is ~63K. They took little risk, and seem to have gotten unlucky. They had 185K of investment undeployed.

Team Red's total mean is ~ 137K and total sigma is 287K. They had 180K of investment undeployed.

Tomorrow I will try to come up with an estimate of the probabilities of the 6 possible orders of finish.

4sigma
07-30-2004, 05:32 AM
BC,

In order to follow along with the excitement, I confess I built my own very crude spreadsheet. It was not as detailed as yours, as I had rather little at stake, so I spent perhaps 10 minutes building it.

The first order of business was to calculate the mean and variance of each distribution. I did a very crude adjustment for the truncations. Basically I ignored it on all but contracts 4, 5, 8, and 9, and did a crude guesstimate on those, based on the number of σ from the mean to the truncation points and my intuition on how thick the tails of those distributions were. As a result, I was off by about 5K from the true means in a couple of instances.

Next I entered the investment cost for each contract, and left a field to input the current bid on each contract (defaulting to the maximum allowable bid if no bid had been made yet).

Then the ratio (Current bid - Expected cost) / Investment cost gave the "knapsack ratio" which showed which contracts contained the best remaining value.

Based on this, contracts 4, 8, and 9 were the most lucrative, all having an expected profit of more than 3 times their investment cost. Curiously, team :D held these for most of the early game while bidding wars went on for virtually all the remaining contracts.

A comment on skewness. BC, I disagree with your position on this. I would be more inclined to agree if there were only 2 teams. But in a 3-team game, the goal is to avoid coming in last. As we have seen, the outcome of this game was heavily influenced by unlucky ownership of contract #9. If CubedBee had not bought this at the last second, Team :D would be going to tribal council instead of team :x.

By preferring to invest in skewness, you are beating the median more often, but you are also taking a large loss more often. Avoiding the large loss is more important than beating the median, particularly when there are 4 fairly skewed contracts out there, so chances are quite good that at least one of them will come back with a significant loss. If you simply avoid owning all 4 of those contracts, you will probably not go to Tribal Council as long as at least one of the 4 results in a heavy loss. Team :shake: was apparently following a strategy along these lines.

Of course, it's easy to say this in hindsight, given the actual outcomes of the percentiles. It was extremely unlucky that contract 9 came in at the 98th percentile. But I expect that when you model the probability of the 6 different possible results, you will likely find that team :shake: was significantly more likely than team :D to avoid Tribal Council, despite the closeness of means between these two teams. It is entirely possible that Team :D comes in first more often, however.

I'm going on vacation for a week now, and unlikely to check in before next Wednesday. Y'all have fun discussing this....

BC, condolences on your 50% Bridge game. Hopefully you will deserve better next time :D

aces219
07-30-2004, 08:07 AM
Team :shake: was absolutely following an "avoid tribal council" strategy. By avoiding what I deemed excessive risk given our objective, all our contracts made money. It was likely in my mind that 1 of those skewed contracts would result in a large loss. Again with the hindsight.

I wouldn't say that we were particularly unlucky.

My modeling was full of approximations but I decided that further refinement would not add much value, since we were only playing the game once and there was a significant amount of luck involved.

BC
07-30-2004, 08:17 AM
A comment on skewness. BC, I disagree with your position on this. I would be more inclined to agree if there were only 2 teams. But in a 3-team game, the goal is to avoid coming in last. As we have seen, the outcome of this game was heavily influenced by unlucky ownership of contract #9. If CubedBee had not bought this at the last second, Team :D would be going to tribal council instead of team :x.

By preferring to invest in skewness, you are beating the median more often, but you are also taking a large loss more often. Avoiding the large loss is more important than beating the median, particularly when there are 4 fairly skewed contracts out there, so chances are quite good that at least one of them will come back with a significant loss. If you simply avoid owning all 4 of those contracts, you will probably not go to Tribal Council as long as at least one of the 4 results in a heavy loss. Team :shake: was apparently following a strategy along these lines.

You may well be right about your comments regarding skewness, but my intuition says otherwise. I believe taking the convolution of multiple skewed contracts simply leaves you with a less skewed distribution.

That said, our team decided that the last half hour was going to be a desperate rush and that without me, there was little utility in using the spreadsheet for evaluation. You will note that the most "lucrative" contracts at the END were NOT ours, because we managed to price 1,4,5,7,8,9 (theoretically the best "knapsack" at 7:30ET) WAY down WITHOUT trying to compete on 2,3, or 6; Contract 3, in particular, was one of the worst for most of the game, until the very end, when it became the best. Contract 10 stunk the whole game, although it managed to climb to second-to-last when contract 1 went at cost.

That said, however, I believe that you are correct about Team Green coming in last more often than Team Blue because of VARIANCE.
I did not try to price variance into the model and I probably should have.

Perhaps when I run my analysis of the rankings, I can do two different analyses. I can base one on the distributions as given, and another on a variation in which these distributions are averaged with a reflection of themselves about the mean. This would preserve the mean and variance but eliminate the skewness. I suspect that in the former case, where team green took advantage of skewness, we will go to tribal council less often. However, if I run the scenarios, I will share the results either way :wink: I appreciate your insights and am glad to see that someone came to similar conclusions independently of my work - it helps validate my work :)
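The symmetrization idea can be sketched directly: pooling a lattice with its mirror image about the mean preserves the mean and variance while cancelling the skewness pair by pair. The lognormal below is an arbitrary skewed stand-in, not one of the contest distributions:

```python
import math
from statistics import NormalDist, fmean, pstdev

def skewness(xs):
    # standardized third moment of an equal-probability lattice
    m, s = fmean(xs), pstdev(xs)
    return fmean([((x - m) / s) ** 3 for x in xs])

# A positively skewed lattice: 1,000 mid-percentile points of a lognormal
base = NormalDist()
lattice = [math.exp(base.inv_cdf((2 * i + 1) / 2_000)) for i in range(1_000)]

# Reflect each point about the mean and pool: mean and variance survive,
# skewness cancels pair by pair
m = fmean(lattice)
pooled = lattice + [2 * m - x for x in lattice]

print(f"skewness before {skewness(lattice):.2f}, after {skewness(pooled):.2e}")
```

Each pair (x, 2m - x) contributes equal and opposite cubed deviations, so the pooled skewness is exactly zero while the second moment is untouched.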

Contract 9 busting its top is, according to my calculations, approximately a 2.22% event, although obviously events NEAR that level are also pretty lousy.

And you're also right that had we not "lost" contract 9 at the end, we WOULD be going to tribal council. I maintain that this would be more a result of bad luck than bad mathematics.

urysohn
07-30-2004, 08:30 AM
- drew 10,000 random data points
- converted these uniform random variables to random variables distributed according to the published distributions
- took into account current bids

Why in the heck would you then take an extra step to lose all this useful information by summarizing it simply into expected value? Instead, turn "10,000 points" into "10,000 points for each distribution" (a simple change, I'd think, and one you might have already done; you were not clear) and look at each row with a more relevant measure: who wins?
To put it another way, you had built 99% of a Monte Carlo simulator (the ideal solution, in my mind) and then did more work to make it less efficient.
Curious.
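The suggestion amounts to something like the following sketch; the team holdings and distribution parameters here are illustrative placeholders, not the contest's actual positions:

```python
import random
from collections import Counter

random.seed(1)

# team -> list of (price_received, cost_sampler) for the contracts it holds;
# hypothetical holdings and normal cost distributions for illustration
teams = {
    "Red":   [(575_000, lambda: random.gauss(500_000, 75_000))],
    "Green": [(350_000, lambda: random.gauss(300_000, 100_000)),
              (150_000, lambda: random.gauss(125_000, 30_000))],
    "Blue":  [(320_000, lambda: random.gauss(260_000, 260_000))],
}

# Each trial: sample every contract's cost once, score each team's profit,
# and record only the relevant outcome -- who finished last
last_place = Counter()
for _ in range(10_000):
    profits = {team: sum(price - sample() for price, sample in holdings)
               for team, holdings in teams.items()}
    last_place[min(profits, key=profits.get)] += 1

print(last_place.most_common())
```

Each row of samples is reduced straight to the ordinal outcome, which is the point being made: no intermediate collapse to expected value.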

Mind you cubedbee's model, which I'll call "holy crap, I don't have any contracts", might also have been overly simplistic. ;)

BC
07-30-2004, 08:47 AM
It was "10,000 data points for each distribution". They were not random. They were evenly spaced to get a feel for the "true" distribution for each contract. I don't believe in using simulation when there are other more efficient methods available.

I would have had to run over 500 convolutions of multiple distributions to accomplish what you suggest. In this case, simulation would admittedly have been a good way of going about it. But I don't believe in using simulation because, frankly, I'm not that good at it. I'm better at using analysis and reasonable estimation with a fine mesh.

And this certainly was a fine mesh, wasn't it :D

Seriously, though, trying to write a program to do what you suggest in real time? When I expected that all of the bids were going to come in the last 5 minutes? Our team thought that even the "enter the value, sort the scenarios, paste them onto the filtering sheet, sift and sort" procedure wasn't going to work in real time. I think I averaged 20 seconds on the procedure.

Sorry if that wasn't clear.

I am interested to see what you would have done, though - seriously - I can always afford to add another trick to my bag. I am happy to send you the 2Meg zipped file of my work in exchange.

And kudos, BTW, to 4Sigma for estimating the mean using back-of-the-envelope methods. It's always heartening to hear that someone can think without a keyboard.

Gandalf
07-30-2004, 09:21 AM
I am surprised the market was so inefficient. How did team Blue steal two contracts at prices (expected gain / initial investment) above 1? Did Red and Green think Blue would wait and take them back only at the last minute, leaving whoever temporarily placed a lower bid with nothing at the end? Even if that happened, they would have lowered Blue's expected gain.

Knapsack analysis is interesting, but with other teams bidding you were unlikely to end up with a full knapsack.

BC
07-30-2004, 09:51 AM
I looked in "Valuation of Interest-Sensitive Financial Instruments" (otherwise known as "Vasicek should die a slow and painful death") and found the name of what I use to analyze distributions. It's a method I've used for years and devised independently, but nonetheless, it's well-known although not well-described in VISFI. Chapter 8, section 7 briefly mentions "low-discrepancy" methods.

Basically, it amounts to estimating the parameters of a distribution by looking at the 5%, 15%, 25%, 35%, 45%, 55%, 65%, 75%, 85%, and 95% points and using the distribution AS IF each of these individual points had a 1/10 probability of occurring. I used 10,000 points (rather than 10) for each distribution.

Ury, I think I get what you're saying about my model being a bit simplistic. However, it needed to be dynamic - if someone underbid us, I needed to be able to tell within seconds, not hours, what to do next, and I thought I could make up for some of the deficiencies in the model using the "gray matter" processor and my mathematical intuition. To have built a model like you described that would be anything CLOSE to comprehensive and dynamic would probably have taken me 2 months, not 2 hours, and still might not have enabled the fast decision-making required - although it might, after several minutes, tell me what I should have done.

Okay, my programming skills are a bit weak. I admit it.

To model the "ranking" probabilities, I intend:

For the Blue team: Use 1/80-ile, 3/80-ile, 5/80-ile, ..., 79/80-ile for each of the three distributions in the contracts they won. I will use these to generate 64,000 points and call that the "final distribution" of their profits.

For the Red team: Use 1/512, 3/512, 5/512, ..., 511/512-ile for contracts 9 and 10. Contract 1 is irrelevant. I will use these to generate 65,536 points and call that the "final distribution" of their profits.

For the Green team: Use 1/512, 3/512, 5/512, ...511/512-ile for contracts 4 and 5 to generate 65536 points, and similarly join contracts 7 and 8. I will then average the 128th and 129th points of each to use as a 1/512-ile; the 384th and 385th points for the 3/512-ile; and so on to generate 256 points for each convoluted distribution (and isn't this getting convoluted :D ) and then convolve these two distributions to generate 65,536 points and call that the "final distribution" of our profits.

At this point, it will be fairly easy, I think, to consider the set of 64,000 * 65,536 * 65,536 scenarios and divvy them into 6 classes: GRB, GBR, RGB, RBG, BGR, BRG. I also intend to calculate (interpolate) the %-iles for each team by 10% intervals and report these.
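The convolve-then-compress step described above can be sketched as follows; the inputs are normal stand-ins using the truncated contract 4 and 5 parameters from the first post (the real lattices were not normal):

```python
from statistics import NormalDist, fmean, pstdev

def lattice(dist, n):
    # n equal-probability points at the (2i+1)/(2n) percentiles
    return [dist.inv_cdf((2 * i + 1) / (2 * n)) for i in range(n)]

# Normal stand-ins with the truncated contract 4 and 5 parameters from the
# first post (illustrative only; the real lattices were not normal)
a = lattice(NormalDist(223_000, 177_000), 256)
b = lattice(NormalDist(529_000, 313_000), 256)

# Convolve: 256 x 256 = 65,536 equally likely pairwise sums
sums = sorted(x + y for x in a for y in b)

# Compress back to 256 representative points by averaging each
# equal-probability bucket of 256 consecutive sums
combined = [fmean(sums[i * 256:(i + 1) * 256]) for i in range(256)]

print(f"combined mean {fmean(combined):,.0f}, sigma {pstdev(combined):,.0f}")
```

Bucket-averaging preserves the overall mean exactly and shaves only a sliver of variance, which is what makes the repeated 65,536-to-256 compression workable.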

Hopefully, my analytic methods will jibe with everyone's simulations, and we can then judge the true probabilities and determine which team "deserved" to go to tribal council, eliminating the factor of luck (and also the factor of relevance, since even if red doesn't deserve it, they're going anyway).

Ebenezer Kohl
07-30-2004, 09:56 AM
am happy to send you the 2Meg zipped file of my work in exchange.
If we are rounding to the nearest Meg it was 3. :shake:

I had another spreadsheet I developed, which was very simple by comparison, but I kept checking it for reasonableness. BC and patterns in static were both backing me up with the current floor prices I could bid on individual contracts.

My decision process involved checking the provided floor, checking my spreadsheet for a profit margin and remaining surplus, and checking BC's 3 meg spreadsheet. BC and patterns in static kept good information flowing throughout the day. Mine was an automated process and I am sure my delightful personality was the reason I made the bids for team green.

:D

the mole
07-30-2004, 09:56 AM
BC, Isn't that a lot of work to demonstrate that I was right in predicting Blue would not go to Tribal Council?

BC
07-30-2004, 09:59 AM
BC, Isn't that a lot of work to demonstrate that I was right in predicting Blue would not go to Tribal Council?

Yes, mole, it is, but I thought it was important for you to know exactly how likely your prediction was to come true, and besides, you're worth the effort.

:shake:

the mole
07-30-2004, 10:10 AM
BC, Isn't that a lot of work to demonstrate that I was right in predicting Blue would not go to Tribal Council?

Yes, mole, it is, but I thought it was important for you to know exactly how likely your prediction was to come true, and besides, you're worth the effort.

:shake:
Yes it would be, but with your simplifying methodology you're not going to get a very good answer for "exactly". Unless I'm misreading what you plan to do, you plan to model the unconditional distribution of the cost of contract 9, rather than the conditional distribution of the cost given that Team :shake: did not get it.

BC
07-30-2004, 11:11 AM
I started with the easy one, Team Red. They got extremely unlucky. According to my calculations, about 3430-to-1 unlucky. Seriously bad luck.

The %-iles and profits:

10: -$277,127.85
20: -$58,881.72
30: +$68,907.55
40: +$158,752.18
50: +$226,279.65
60: +$277,741.09
70: +$320,488.43
80: +$361,079.17
90: +$409,138.69

Holding the Green and Blue results fixed, Team Red needed to hit a little better than 63rd %-ile, so they had a nearly 2-in-3 chance of going to tribal council and needed a little bit of luck. As it happened, they got a lot of it, and all bad.

07-30-2004, 11:18 AM
I just did a quick-and-dirty simulation with 10,000 iterations (which is not to say I trust M$'s randomness), and only worried about who got sent to the tribal council:

Red: 5,913
Green: 3,277
Blue: 810

Why didn't we have any fun distributions to work with? A Pareto would have been appropriate.

BC
07-30-2004, 11:45 AM
I next went on to Team Blue. I realized that I could combine Contracts 2 and 3 analytically, so I went back to the 256 lattice points. I will now have 65536 possible results for each team to compare. Blue's results were much more conservative, obviously. Given Red's results, they could not go to tribal council. They were guaranteed a profit of at least \$150,000. We definitely should have competed more, on contracts 3 and 6 in particular, in the late stages. C'est la vie.

Their actual result was at about the 18th %-ile, so they were slightly unlucky with their total, but I don't think they're crying right now...

The %-iles and profits:

10: +$271,260.65
20: +$297,482.82
30: +$317,970.92
40: +$336,489.40
50: +$354,266.50
60: +$372,102.26
70: +$390,715.66
80: +$411,271.27
90: +$437,255.86

Edited to remove information provided (more appropriately) in a previous post.

BC
07-30-2004, 11:52 AM

Why don't I, instead of trying to "symmetrize" the skew distributions, just "reverse" or "mirror" them about their means and then compare the probabilities of the 6 possible orderings based on these reversed distributions? In other words, see who does better under negative skewness vs. who does better under positive skewness? Do you think this would produce a reasonable test of which of our hypotheses is better? Actually, I can even run 4 tests; the original one, one in which Green skewness is reversed, one in which Blue skewness is reversed, and one in which both of our skewnesses are reversed. We can see which of the 4 scenarios works best for each team.

Will Durant
07-30-2004, 12:12 PM
Holding the Green and Blue results fixed, Team Red needed to hit a little better than 63rd %-ile, so they had a nearly 2-in-3 chance of going to tribal council and needed a little bit of luck. As it happened, they got a lot of it, and all bad.
Edit problem?

BC
07-30-2004, 12:20 PM
Finally, my own Team Green. I first combined Contracts 4 and 5 and determined 256 lattice points for the combined distribution, and did similarly for contracts 7 and 8. Green actually hit our median result almost on the nose (50.43%, by my estimation). Given Team Red's bad luck, we still could have gone to tribal council, but the odds of doing worse than Team Red's disaster were nearly 260-to-1.

The %-iles and profits:

10: -$242,985.69
20: -$23,035.80
30: +$139,266.49
40: +$274,610.45
50: +$392,904.85
60: +$503,913.92
70: +$612,909.27
80: +$725,448.83
90: +$856,830.02

Next, I'll try to come up with probabilities for the order. I hope the results come out similar to EB's, since that would validate both models, but I recall that there may be a bug in the Visual Basic random number generator. Gandalf, do you remember the details? Bama wanted an estimate of how much capital he needed to be 90% sure of being able to participate in 100 tourneys without running out of money, or something like that...

Gandalf
07-30-2004, 12:38 PM
I recall that there may be a bug in the visual basic random number generator. Gandalf, do you remember the details?As with most (all?) "random number" generators, they are really pseudo-random number generators. They use some kind of seed to start (perhaps a function of the computer's internal clock), and generate a fixed sequence thereafter. Visual Basic's sequence is 17 million something numbers long, so if you need only 3 random numbers per trial (one each to draw a result for Red, Green, Blue) you can run plenty of trials without running into a problem.

For Bama's problem, you would have needed more than 17 million numbers, so you were getting repetition. Avi tried to compensate by breaking the sequence and restarting at a random point. I think in practice that made the problem worse, since by the nature of the process the breaks and restart points weren't really random.
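For reference, modern generators avoid the short-period problem: Python's random module uses the Mersenne Twister, whose period is 2**19937 - 1, and a fixed seed replays an identical stream (which is also why ad hoc mid-run re-seeding is unnecessary there):

```python
import random

gen = random.Random(42)              # fixed seed -> fixed sequence
first_run = [gen.random() for _ in range(5)]

gen.seed(42)                         # restarting from the same seed...
second_run = [gen.random() for _ in range(5)]

print(first_run == second_run)       # ...replays the identical stream
```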

Will Durant
07-30-2004, 12:51 PM
BC,

If you want a better random generator than VB's, you can look at the code in http://astronomy.swin.edu.au/~pbourke/analysis/random/. I don't know if it's worth it for this little exercise, but for more serious questions, it's one of the best algorithms out there. It can be implemented in VB without much difficulty. I've used it for some Monte Carlo simulations.

BC
07-30-2004, 12:53 PM
Thank you, Will and Gandalf. As it happens, I am not using random numbers or a Monte Carlo simulation. I'm just approximating the continuous distribution with evenly-spaced discrete distributions and cranking.

The Cat in the Hat
07-30-2004, 12:58 PM
Wow. Those people who consider us actuaries just analytical nerds obviously don't know what they're talking about.

:lol:

Gandalf
07-30-2004, 01:26 PM
As it happens, I am not using random numbers or a Monte Carlo simulation.
You really threw me for a loop with that statement. So you were asking about random numbers as a possible reason E.Blackadder's result might not be reliable? With 10,000 trials he should be fine.

Gandalf
07-30-2004, 01:29 PM
Wow. Those people who consider us actuaries just analytical nerds obviously don't know what they're talking about.

:lol:Now we understand why Team 8-) 's analysis suggested that 575 was a good price for contract 10.

Will Durant
07-30-2004, 01:32 PM
Wow. Those people who consider us actuaries just analytical nerds obviously don't know what they're talking about.
I resent that characterization as an analytical nerd

Actually I consider myself an equal opportunity nerd:
* actuary / analytical nerd
* physicist
* computer programmer
* math nerd (only got a BS in math, but still)
* gaming nerd (I play Paranoia, Star Fleet Battles, Magic)
* chess nerd
* science fiction nerd

BC
07-30-2004, 01:36 PM
As it happens, I am not using random numbers or a Monte Carlo simulation.
You really threw me for a loop with that statement. So you were asking about random numbers as a possible reason E.Blackadder's result might not be reliable? With 10,000 trials he should be fine.

Agreed, if the numbers are truly random.

Here are my results:

BGR: 14.99%
BRG: 21.82%
GBR: 40.44%
GRB: 11.32%
RBG: 9.94%
RGB: 1.49%

Summary statistics:

Winner of Challenge:
Green: 51.76%
Blue: 36.81%
Red: 11.43%

Loser of Challenge (and winner of next trip to tribal council):

Red: 55.43%
Green: 31.75%
Blue: 12.81%

I'm a little concerned about the difference between my results and EB's.

Blue beats Red 77.25% of the time
Green beats Red 66.75% of the time
Green beats Blue 53.25% of the time

I would happily offer to provide my work for inspection, but it's in a 45Meg file.

Blue did get an incredibly good deal, as it turns out, on those low-risk (3 and 6) contracts.

BC
07-30-2004, 01:39 PM
* gaming nerd (I play Paranoia, Star Fleet Battles, Magic)

I've played Star Fleet Battles. I'm more into online games right now - don't know that many local gamers; don't have much time for it, anyway.

I've heard that Paranoia is quite interesting, but I've never played it. What can you tell me about it?

:horse: :D

BC
07-30-2004, 01:52 PM
Here are my results:
BGR: 14.99%
BRG: 21.82%
GBR: 40.44%
GRB: 11.32%
RBG: 9.94%
RGB: 1.49%

These are the results of approximately 281 trillion (= 2^48 = 65,536^3) equally spaced scenarios in 3(RGB)-space.

Will Durant
07-30-2004, 01:57 PM
I've played Star Fleet Battles. I'm more into online games right now - don't know that many local gamers; don't have much time for it, anyway.
The computerized version Starfleet Command is not bad. And I understand you can play online with it (although I've never done that).

Will Durant
07-30-2004, 01:59 PM
Red wins: 11.43%
Red loses: 55.43%
So Red has both the lowest probability of winning AND the highest probability of going to Council. Seems like Red misplayed it.

Gandalf
07-30-2004, 02:08 PM
Since you have nothing else to do, BC, why don't you answer the questions people really want to know?
1. Did B^3 increase or decrease his chances of going to Tribal Council by winning Contract 9?
2. On the assumption that B^3 had contract 10 for 575, did he increase or decrease his chances of Tribal Council by taking Contract 9? (He concluded, not unreasonably, that he needed a big win if he was taking a bath on #10. Would he have been better off hoping green got destroyed on #9?)

07-30-2004, 02:17 PM
I really wouldn't take my analysis as conclusive. My spreadsheet could easily be wrong in at least three ways: the random number generator; the functions (although they did match the conditions of the contest); or my programming technique.

On the other hand, a 1-terabyte spreadsheet is likely to have some issues, and surely your company will have to at least have someone review it, to make sure you aren't into online gaming or even porn.

Ebenezer Kohl
07-30-2004, 02:32 PM
Blue did get an incredibly good deal, as it turns out, on those low-risk (3 and 6) contracts.
aces219 stole Contract 3 from Red with three minutes remaining. cubedbee chose to go after Green's contracts instead of defending what turned out to be Blue's most profitable contract.

BC
07-30-2004, 02:43 PM
Why would I check out porn at work if I have a high-speed connection at home?

But seriously...

The spreadsheet is actually only about 45 Meg. I used a few tricks to get the results I wanted - basically a product of VLOOKUPs using the "middle" team as a fulcrum of sorts: I could multiply the number of ways they were beaten by one team, given that they got a particular result, by the number of ways they could beat the other team given that they got the same result.
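The fulcrum trick translates directly into code: with each lattice sorted, two binary searches per middle-team value give the pairwise counts. A sketch using small normal stand-ins built from the team totals quoted earlier in the thread (the real lattices had 65,536 points and were skewed):

```python
import bisect
from statistics import NormalDist

def lattice(dist, n):
    # n equal-probability points, sorted, at the (2i+1)/(2n) percentiles
    return sorted(dist.inv_cdf((2 * i + 1) / (2 * n)) for i in range(n))

# Normal stand-ins from the quoted team means/sigmas (illustrative only)
green = lattice(NormalDist(343_000, 425_000), 512)
blue = lattice(NormalDist(354_000, 63_000), 512)
red = lattice(NormalDist(137_000, 287_000), 512)

def p_order(top, mid, bot):
    # P(top > mid > bot): for each equally likely middle value, multiply the
    # count of top-lattice points above it by the count of bot points below it
    total = 0
    for m in mid:
        above = len(top) - bisect.bisect_right(top, m)
        below = bisect.bisect_left(bot, m)
        total += above * below
    return total / (len(top) * len(mid) * len(bot))

orders = {"GBR": p_order(green, blue, red), "BGR": p_order(blue, green, red),
          "GRB": p_order(green, red, blue), "BRG": p_order(blue, red, green),
          "RGB": p_order(red, green, blue), "RBG": p_order(red, blue, green)}
p_all = sum(orders.values())
print(orders, f"sum = {p_all:.6f}")
```

The six ordering probabilities sum to 1 (ties between lattice points are effectively impossible in floating point), and the per-ordering cost is O(n log n) rather than a triple loop over n^3 scenarios.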

Anyway to Gandalf's first question:

If the Green team had been allowed to keep contract #9 at a bid of 383,000:

BGR: 17.50%
BRG: 24.80%
GBR: 57.52%
GRB: 0.12%
RBG: 0.06%
RGB: 0.00(31)%

Summary Statistics:

Winner of Challenge:

Green: 57.64% (better)
Blue: 42.30% (better)
Red: 0.06% (worse)

Loser of Challenge (and winner of next trip to tribal council):

Blue: 0.12% (better)
Green: 24.86% (better)
Red: 75.02% (worse)

Grabbing Contract 9 from us was clearly the mathematically correct move to make. It happened to go disastrously wrong.

Blue beats Red 99.82% of the time (worse for Red)
Green beats Red 75.14% of the time (worse for Red)
Green beats Blue 57.64% of the time (unimportant to Red, better for Green).

So if Red thought that the bid was going to be accepted at 575,000, then grabbing Contract 9 was a better move than NOT grabbing contract 9.

No offense intended, EB, I just don't tend to trust simulations when there's a more direct approximation method available that comes to a different conclusion. Especially since all of these distributions were capped and floored, making EXTREMELY extreme values less critical to analyze.

No aspersions were meant to be cast against your programming technique or your understanding of the functions used.

Gandalf
07-30-2004, 02:51 PM
If they find out you have a spreadsheet to decide how to respond to various moves by aces, they may consider it geeky porn.

BC
07-30-2004, 03:01 PM
If the bid of 575 was going to be ruled as accepted...

If Red gets contract 9:

BGR: 39.88%
BRG: 6.87%
GBR: 53.25%

Green wins the challenge 53.25% of the time; Blue wins the other 46.75%.

Red goes to tribal council 93.13% of the time, Green goes the other 6.87%.

Conversely, if Red lets team Green have contract 9:

BGR: 37.06%
BRG: 5.30%
GBR: 57.64%

Green wins the challenge 57.64% of the time; Blue wins the other 42.36%.

Red goes to tribal council 94.70% of the time, Green goes the other 5.30%.

So in this case, too, between the choices of "get contract 9" and "don't get contract 9", Cubedbee's intuition was correct - he improved his team's probability of avoiding tribal council either way.

With, of course, the caveat that some would philosophically argue that the probability of a knowable or past event is either 0 or 1, and that with 20/20 hindsight Cubedbee made a horrible mistake.

If they find out you have a spreadsheet to decide how to respond to various moves by aces, they may consider it geeky porn.

:lol:

BC
07-30-2004, 03:11 PM
I think at this point I'm done, except for the question 4Sigma had with regard to skewness. Next week I'll probably look at that. Feel free to ask questions (I may answer them), verify or challenge my work, present your own models, and/or let me know how your team planned out the challenge, whatever.

07-30-2004, 03:43 PM
You win.

Red 55,479
Green 32,002
Blue 12,519

Gandalf
07-30-2004, 04:29 PM
Blue did get an incredibly good deal, as it turns out, on those low-risk (3 and 6) contracts.
aces219 stole Contract 3 from Red with three minutes remaining. cubedbee chose to go after Green's contracts instead of defending what turned out to be Blue's most profitable contract.
Great move by Blue. In retrospect, there weren't that many contracts underbid toward the end of the game. I wonder why cubedbee couldn't keep track of the relevant activity?

Ebenezer Kohl
07-30-2004, 04:43 PM
I think a rational decision would have been for cubedbee and aces219 to have fought it out for non-Green contracts after the massive posts were made at the end. However, rational decisions are difficult where I am involved, and perhaps cubedbee saw sorting the massive posts as a challenge he could overcome.

07-30-2004, 04:43 PM
He was playing Bridge with BC.

BC
07-30-2004, 04:45 PM
You win.

Red 55,479
Green 32,002
Blue 12,519

Red: 55.43%
Green: 31.75%
Blue: 12.81%

Assuming my results are correct, we differ by 0.31*sigma for Red, 1.71*sigma for Green, and 2.75*sigma for Blue.

This makes me feel better. I realize my own estimates have a margin for error too, but it shouldn't have been nearly as wide as your last results made it look. I'm glad this confirms we're in the same ballpark. I'd hate to have 180 meg of mathematical porn on my hard drive for no good reason :) Care to check my other results?
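Those 0.31/1.71/2.75 sigma figures are consistent with reading the quoted totals as counts out of 100,000 simulation trials (they sum to exactly 100,000) and taking sigma to be the binomial standard error of each proportion. A quick check, assuming that reading:

```python
from math import sqrt

def sigma_gap(observed_count, model_prob, n=100_000):
    """Number of binomial standard errors between a simulated
    proportion (observed_count / n) and a model probability."""
    se = sqrt(model_prob * (1 - model_prob) / n)
    return abs(observed_count / n - model_prob) / se

# Quoted simulation counts vs. the model's tribal-council probabilities:
for count, p in [(55_479, 0.5543), (32_002, 0.3175), (12_519, 0.1281)]:
    print(round(sigma_gap(count, p), 2))  # prints 0.31, then 1.71, then 2.75
```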

I wonder why cubedbee couldn't keep track of the relevant activity?

I assume this was a rhetorical question...

I actually went through all 800 or so of EK's bids this morning to check that the final results were correct. They were.

the mole
07-30-2004, 04:52 PM
However, rational decisions are difficult where I am involved.

Aces had no trouble making rational decisions where you were involved. Fortunately, when I am involved, she follows her heart. :love:

aces219
07-31-2004, 12:32 PM
I'm quite rational when it comes to mathematics, not so rational in the rest of my life.

BC
08-02-2004, 08:31 PM
A bit after the fact, but according to my Monte Carlo runs, this is how the challenge would have played out under 10,000 random runs...

Winner
Blue 36.7%
Red 10.8%
Green 52.5%

Loser
Blue 13.2%
Red 55.6%
Green 31.2%

My results:

Winner of Challenge:
Green: 51.76%
Blue: 36.81%
Red: 11.43%

Loser of Challenge (and winner of next trip to tribal council):
Blue: 12.81%
Green: 31.75%
Red: 55.43%

Considering that these results are also pretty close to EB's, I think we're pretty close to correct now.

BC
08-03-2004, 12:40 PM
I think at this point I'm done, except for the question 4Sigma had with regard to skewness. Next week I'll probably look at that. Feel free to ask questions (I may answer them), verify or challenge my work, present your own models, and/or let me know how your team planned out the challenge, whatever.

I reversed the skewness of challenges 4, 5, 7, and 8 by replacing X by 2*mu - X for each potential result.
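Reflecting each potential result about its mean is a one-line transform. A sketch, with a toy sample to show that mean and sigma survive while the skewness flips sign:

```python
from statistics import mean, pstdev

def reverse_skew(draws):
    """Reflect each draw about the sample mean: X -> 2*mu - X.
    Mean and sigma are unchanged; skewness changes sign."""
    mu = mean(draws)
    return [2 * mu - x for x in draws]

# Toy positively skewed sample: mostly small values, one big one.
d = [1.0, 1.0, 1.0, 7.0]
r = reverse_skew(d)  # [4.0, 4.0, 4.0, -2.0]
```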

Results given positive skewness (from a prior post):

BGR: 14.99%
BRG: 21.82%
GBR: 40.44%
GRB: 11.32%
RBG: 9.94%
RGB: 1.49%

Summary statistics:

Winner of Challenge:
Green: 51.76%
Blue: 36.81%
Red: 11.43%

Loser of Challenge (and winner of next trip to tribal council):
Blue: 12.81%
Green: 31.75%
Red: 55.43%

Blue beats Red 77.25% of the time
Green beats Red 66.75% of the time
Green beats Blue 53.25% of the time
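Those pairwise numbers fall straight out of the six ordering probabilities: a team beats another in exactly the orders where it appears ahead of the other. A sketch, using the positive-skewness figures quoted above:

```python
def pairwise_beat(order_probs, a, b):
    """P(team a finishes ahead of team b): sum the ordering
    probabilities in which a appears before b."""
    return sum(p for order, p in order_probs.items()
               if order.index(a) < order.index(b))

# Positive-skewness ordering probabilities from the post above.
pos = {"BGR": 0.1499, "BRG": 0.2182, "GBR": 0.4044,
       "GRB": 0.1132, "RBG": 0.0994, "RGB": 0.0149}
print(round(pairwise_beat(pos, "B", "R"), 4))  # prints 0.7725
```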

Results given negative skewness:

BGR: 18.22%
BRG: 25.11%
GBR: 33.91%
GRB: 9.44%
RBG: 11.87%
RGB: 1.45%

Summary statistics:

Winner of Challenge:
Green: 43.35%
Blue: 43.34%
Red: 13.31%

Loser of Challenge (and winner of next trip to tribal council):
Blue: 10.88%
Green: 36.98%
Red: 52.14%