And there are more nefarious examples, like the 1968 RAND project to reduce fire response times in New York City. There, "faulty data and flawed assumptions" led to the closure of fire stations in Brooklyn, Queens, and the Bronx, their replacement with smaller ones, and an estimated 60,000 fires in the city's impoverished neighborhoods. The coup de grâce was the politicization of the supposedly "scientific" project: clever RAND officials, realizing that residents of well-to-do neighborhoods would never tolerate the cuts their (flawed) simulations recommended in the name of "efficiency," simply placed those neighborhoods outside the scope of the project.
And on and on the story goes. Unintended consequences are simply part and parcel of building causal or predictive models from quantitative data gleaned from messy, complex systems. The real folly in the Pegasus project, and in so many others like it, is not the (basically correct) idea that quantitative analysis can inform strategy, for urban planning or anything else, but the leap to the conclusion that the human element can therefore be eliminated. That latter claim does not follow, and taking it seriously all but guarantees that among the lessons we learn from the "Center for Innovation, Testing, and Evaluation," one of the most important will be that innovation, testing, and evaluation are not enough.