An unethical optimization principle
royalsocietypublishing.org

"May be necessary to re-think the way AI operates in very large strategy spaces"

The significance of these results is that, if a large number of strategies is tested at random, then unless the distribution of the returns is fat-tailed (as in the cases of the Pareto or t distributions), a responsible regulator or owner should be extremely cautious about allowing AI systems to operate unsupervised in situations with real consequences.

'Unethical Optimization Principle'

If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk.
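The principle can be illustrated with a minimal simulation (my sketch, not the paper's model): assume a small fraction of strategies are unethical and earn a slightly higher expected return, with thin-tailed (Gaussian) returns. An optimizer that simply keeps the highest-return strategies then over-selects the unethical ones. The names and parameter values (`P_UNETHICAL`, `EDGE`) are illustrative assumptions.

```python
import random

random.seed(0)

N = 100_000          # number of randomly sampled strategies
P_UNETHICAL = 0.01   # assumption: 1% of the strategy space is unethical
EDGE = 1.0           # assumption: unethical strategies earn one extra
                     # standard deviation of expected return

def sample_strategy():
    """Return (is_unethical, risk_adjusted_return) for one random strategy."""
    unethical = random.random() < P_UNETHICAL
    # Thin-tailed (Gaussian) returns: the case the principle warns about.
    ret = random.gauss(EDGE if unethical else 0.0, 1.0)
    return unethical, ret

strategies = [sample_strategy() for _ in range(N)]

# A naive optimizer keeps the 100 highest-return strategies it saw.
top = sorted(strategies, key=lambda s: s[1], reverse=True)[:100]
frac_top = sum(u for u, _ in top) / len(top)
frac_all = sum(u for u, _ in strategies) / N

print(f"unethical share overall:    {frac_all:.3f}")
print(f"unethical share in top 100: {frac_top:.3f}")
```

With these assumed parameters, the unethical share among the top strategies comes out several times larger than the 1% base rate, even though the return edge is modest.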

The principle can be used to help regulators, compliance staff and others find problematic strategies that might be hidden in a large strategy space.

Practical advice for regulators and owners of AI systems is to sample the strategy space and observe whether the returns A(s) have a fat-tailed distribution. If not, then the ‘optimal’ strategies are likely to be unethical whatever the value of η.
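The suggested diagnostic can be sketched as follows. One crude indicator of a fat tail is large sample excess kurtosis (near 0 for Gaussian returns, large and positive for Pareto-like returns); the paper does not prescribe a specific test, and a tail-index estimator or QQ plot would be more robust in practice. The `thin` and `fat` samples below are stand-ins for sampled returns A(s).

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: near 0 for Gaussian, large for fat tails."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3.0

random.seed(1)
# Stand-ins for returns sampled from two strategy spaces:
thin = [random.gauss(0.0, 1.0) for _ in range(50_000)]      # thin-tailed
fat = [random.paretovariate(5.0) for _ in range(50_000)]    # fat-tailed

print(f"Gaussian excess kurtosis: {excess_kurtosis(thin):.2f}")
print(f"Pareto   excess kurtosis: {excess_kurtosis(fat):.2f}")
```

A thin-tailed result from such a check is the warning sign: it suggests the top-ranked strategies are disproportionately likely to be unethical ones.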

Optimization can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.
