In this talk, The Long-Term Future of (Artificial) Intelligence, Stuart Russell explains the situation with the Comprehensive Nuclear-Test-Ban Treaty (from 16:40 to 22:54).

The United States, which proposed the treaty, still has not ratified it, supposedly because detection mechanisms are not good enough to catch other nations cheating. This has motivated the UN to develop methods for distinguishing ordinary seismic activity from nuclear explosions.

His point is not to draw an analogy with AI risk, but rather to present the problem of detecting nuclear explosions as a success story for AI. The UN spent over a hundred million dollars on the software alone, and researchers have been attacking the problem for about a hundred years, yet the error rate stood at roughly 30%. Russell and his group got this figure down to about 10% after a few months of work and - you guessed it - probabilistic programming.
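To make the "describe the problem, let inference solve it" idea concrete, here is a minimal Bayesian sketch in Python. It is only a toy illustration of the probabilistic-modelling approach, not Russell's actual monitoring system: the prior, the choice of the mb - Ms magnitude discriminant, and every number below are assumptions invented for the example.

```python
from scipy import stats

# Toy generative model. Every number below is an illustrative assumption,
# not a parameter of any real monitoring system.

# Prior over event type: natural earthquakes vastly outnumber explosions.
PRIOR = {"earthquake": 0.999, "explosion": 0.001}

# A classic seismological discriminant is the body-wave minus surface-wave
# magnitude (mb - Ms); explosions tend to score higher. We model it as a
# Gaussian per event type (means and spreads invented for this sketch).
LIKELIHOOD = {
    "earthquake": stats.norm(loc=-0.5, scale=0.4),
    "explosion":  stats.norm(loc=1.0, scale=0.4),
}

def posterior(mb_minus_ms):
    """P(event type | observed mb - Ms), computed by Bayes' rule."""
    unnormalized = {
        kind: PRIOR[kind] * LIKELIHOOD[kind].pdf(mb_minus_ms)
        for kind in PRIOR
    }
    total = sum(unnormalized.values())
    return {kind: weight / total for kind, weight in unnormalized.items()}

if __name__ == "__main__":
    # Even an explosion-like reading must overcome the strong prior.
    for observed in (-0.5, 0.5, 1.5):
        print(f"mb - Ms = {observed:+.1f}: {posterior(observed)}")
```

Once the generative model is written down, computing the posterior is mechanical; probabilistic-programming systems automate that inference step for far richer models spanning many stations, arrival times, and event parameters.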

In his words:

The point is that what used to require the work of possibly hundreds of people - the UN system cost over a hundred million dollars just to develop the software part, and they've been trying to solve this problem for about a hundred and three years, since the first paper was written on seismic localization. In the space of just a few months from learning about it, we had a system working substantially better than what was available, simply because the field of AI has produced tools that were sufficiently expressive to simply describe the problem and solve it in an almost mechanical fashion.