Interesting paper. Haven't read it all yet (saving it for later), but... are sparsely connected networks really "less efficient", or do highly connected networks end up drowning in noise?
As I understand it, each person has a limit on input and output transmission speeds (reading is faster than writing, with speaking somewhere in between), and communication quality declines with density. So the most efficient network would be the one with as many connections as possible, up to a threshold of acceptable communication quality. Different people can have different speeds and can belong to several networks, each with its own quality threshold.
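A toy model of that tradeoff, just to make the intuition concrete: throughput grows with the number of connections, quality decays with density, and the optimum is the densest configuration that still clears the quality threshold. All the parameters (`alpha`, `capacity`, the threshold) are made up for illustration, not taken from the paper.

```python
def quality(d, alpha=0.08):
    """Hypothetical communication quality, declining linearly with degree d."""
    return max(0.0, 1.0 - alpha * d)

def effective_throughput(d, capacity=10.0, alpha=0.08):
    """Useful information moved per unit time: capped by the node's
    in/out capacity, degraded by the quality loss from density."""
    return min(d, capacity) * quality(d, alpha)

def best_degree(threshold=0.5, capacity=10.0, alpha=0.08, max_d=50):
    """Most connections a node can sustain while quality stays above threshold."""
    feasible = [d for d in range(1, max_d + 1) if quality(d, alpha) >= threshold]
    return max(feasible, key=lambda d: effective_throughput(d, capacity, alpha))
```

Under these particular numbers the optimum sits right at the quality threshold, which matches the "as many connections as possible, up to a threshold" framing: more connections would move more raw data but degrade it past the acceptable point.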
That suggests an ideally efficient structure would be a stack of overlapping networks with different topologies and unequally connected nodes, where each node's connectivity depends on its in/out capacity and on the quality requirements of the networks it belongs to. If we also allow nodes to differ in processing quality and capacity, each node would have a particular combination of networks in which it performs best, maximizing total system performance.
A further problem to solve would be the evolution of these parameters over time, which could require nodes to switch to different combinations of networks and to a different number of connections in each. Different kinds of periodic cycling through configurations could be ideal for distributing information to maximize problem solving, and different types of problems could benefit from different setups of the whole system.
I wonder whether, instead of trying to run a simplified network version of a static problem on MTurk, it wouldn't be better to start with a series of simulations, and only then run a static or evolving problem on MTurk, adapting the setup based on signals from the nodes, the networks, and a fitness function for the whole system.
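The "simulate first, then deploy" idea could look something like this: generate a few candidate topologies, score each with a system-level fitness function, and only take the winner to the expensive human experiment. The topology generator and the fitness function below are stand-ins I invented for the sketch, not anything from the paper.

```python
import random

def make_topology(n, degree, rng):
    """Rough random graph: each node picks `degree` distinct partners;
    duplicate edges are collapsed, so realized degree varies a bit."""
    edges = set()
    for i in range(n):
        for j in rng.sample([k for k in range(n) if k != i], degree):
            edges.add(tuple(sorted((i, j))))
    return edges

def fitness(edges, n, alpha=0.05):
    """Hypothetical whole-system fit function: average degree helps,
    but density imposes a quality penalty, as in the argument above."""
    avg_degree = 2 * len(edges) / n
    return avg_degree * max(0.0, 1.0 - alpha * avg_degree)

def pick_setup(n=20, candidate_degrees=(2, 4, 8, 12), seed=0):
    """Simulate each candidate configuration and return the best scorer."""
    rng = random.Random(seed)
    scored = {d: fitness(make_topology(n, d, rng), n) for d in candidate_degrees}
    return max(scored, key=scored.get)
```

In a real pipeline the fitness function would be fed by the simulation signals mentioned above (per-node load, per-network quality), and for an evolving problem you would re-score and re-select configurations between rounds rather than once up front.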
Interesting.