Imagine you and I play the following game: first, I roll a die privately, so neither you nor anyone else on Facebook can see the true result, and I report to you the number I got. Then you roll your die privately (I can’t see the true result) and report the number you got. If we both happen to report the same number – that is, if we get a “double” – Facebook pays us accordingly: if we get a double of 1s, we each earn $1; if we get a double of 6s, we each earn $6; and so on. However, if we report two different numbers, Facebook pays nothing.
As you have probably figured out, it is ridiculously easy to lie in this game: we can report whatever we want, regardless of the dice, without getting caught. So we are guaranteed to get filthy rich if I always report a 6 and you always report the same. This setting resembles many real-life situations in which we are unlikely to get caught for moral transgressions. Do you think you would lie in this way?
Now, imagine that you instead play the same game without me as a partner: you roll the die twice, report the two numbers you got, and get paid if you report a double. Do you think you are more likely to lie in this new version of the game?
If you are like me, you might predict that there will be less lying in the collaborative game and more in the solitary one: when we collaborate, our teammates watch us and we are accountable to them, so we might be less willing to behave immorally. In addition, we might be unwilling to coerce them into being our partners in crime.
To test whether collaboration decreases lying (compared to playing alone), two researchers from the University of Nottingham and Ben-Gurion University had 300 undergraduates play these games. Each student played for 20 rounds, either with a partner or alone. Here is what they found:
If players are honest, they are statistically expected to report doubles on 3-4 of the 20 rounds, on average (the chance of a double on any single round is 1/6, so the expected count is 20/6 ≈ 3.3). In practice, individuals in the solitary game lied a lot: on average, they reported doubles about three times as often as honesty would predict (11 out of 20 rounds).
More shockingly, teams in the collaborative game lied significantly more: they reported doubles nearly five times as often as honesty would predict (15-16 out of 20 rounds!).
The likelihood of a team honestly rolling doubles on all 20 rounds is about one in 3.7 quadrillion (6^20 ≈ 3.7 × 10^15). Yet half of the teams in the collaborative game reported getting doubles every single time.
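These honesty baselines follow from simple probability; here is a quick sketch in Python (my own back-of-the-envelope check, not part of the study):

```python
from fractions import Fraction

ROUNDS = 20
P_DOUBLE = Fraction(1, 6)  # chance the two reported rolls match, if honest

# Expected number of honestly reported doubles over 20 rounds
expected_doubles = ROUNDS * P_DOUBLE
print(float(expected_doubles))  # ≈ 3.33, i.e. 3-4 doubles on average

# Odds against honestly rolling a double on every one of the 20 rounds
odds_all_doubles = 1 / P_DOUBLE ** ROUNDS  # = 6**20
print(float(odds_all_doubles))  # ≈ 3.66e15, a few quadrillion to one
```

Exact fractions avoid any floating-point doubt: 6^20 = 3,656,158,440,062,976, so "one in a few quadrillion" is the right order of magnitude.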
Interestingly, in the collaborative game, dishonesty was not limited to the second player – the first player lied as well: first players reported rolling mostly 5s and 6s, and very few 1s and 2s.
It seems that collaboration increases, rather than decreases, corruption (in that sense, dictators are better than governments…). Perhaps this is because cooperation is a moral obligation in human groups, and when people face a conflict between cooperating and telling the truth (another moral obligation), they favor cooperation, which offsets the perceived cost of lying.
Is it possible to combat this tendency toward collaborative corruption? Apparently, yes. The key tactic is to break the equal gains of the two players by changing the payment scheme of only one of them:
If the first player (me) is paid according to the rules described above, but the second player (you) instead receives a fixed, low payment for doubles (regardless of their value), lying decreases. Teams are still far from fully honest, though.
If the second player is paid on every round, regardless of whether a double is reported, lying decreases even further: teams steal 78% less money. This suggests that organizations can reduce corruption by paying employees a fixed salary regardless of performance.
Most surprisingly, changing the payments of the first player (me) instead of the second player (you) also decreases lying, even though it is the second player who determines whether a double is reported.
MY TWO CENTS: few studies have figures as compelling and illuminating (or effects as jaw-dropping) as this one. I wholeheartedly recommend looking at the gorgeous figures, and even trying to infer what different players were thinking from the detailed reports provided in the Supplementary Materials.
CITATION: Weisel, O., & Shalvi, S. (2015). The collaborative roots of corruption. Proceedings of the National Academy of Sciences, 112(34), 10651-10656.