Testing and deployments
One of the things that happens with software shops is that, for the most part, you wind up in a bi-modal situation with testing and deployments.
If, like Amazon, you have a very low bar for a deployment, you wind up in a scenario where testing is a relatively minor task (if it exists at all) that serves as a gating condition for a deployment. If something doesn't work, you can quickly fix the issue and deploy the fix. To put things in perspective, during my time at Amazon I saw interns check in code and trigger a deployment. There isn't a "sign-off process" or anything. You just put stuff out there and fix it if it breaks.
If you break things too often, you get a good talking-to.
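To make the low-bar camp concrete, here is a minimal sketch of a "tests are the only gate" pipeline. The `make test` and `make deploy` commands are hypothetical placeholders, not anything Amazon actually runs; substitute whatever your shop uses.

```python
#!/usr/bin/env python3
"""Sketch of a deploy pipeline where the test suite is the only gate.
The commands below are hypothetical placeholders for illustration."""
import subprocess
import sys


def run(cmd):
    """Run a shell command and return its exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd).returncode


def main():
    # The entire "sign-off process": the test suite either passes or it doesn't.
    if run(["make", "test"]) != 0:
        sys.exit("Tests failed -- deployment blocked.")

    # Tests passed, so the change ships immediately. If it breaks in
    # production, the fix goes back through this same cheap loop.
    if run(["make", "deploy"]) != 0:
        sys.exit("Deploy failed -- fix it and run the pipeline again.")

    print("Deployed.")


if __name__ == "__main__":
    main()
```

The point of the sketch is how little ceremony there is: no approval step, no release board, just a cheap loop you can re-run in minutes when something breaks.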
On the other hand, many software shops have a much higher bar for a deployment. You can spend a man-month or more testing to make sure things work as designed. When the process is that expensive you will, of course, be very hesitant to kick off a deployment.
This doesn't mean bugs don't happen.
Bugs tend to happen anyway, regardless of how rigorous the testing is.
Sure, for things that are really expensive you can set an unbearably high bar. Space launches come to mind: when hundreds of millions or even billions of dollars, and potentially human lives, are on the line, that bar is set super high.
Barring extreme cases like space travel and aviation, you wind up with this bi-stable situation: you can have cheap and low-test, or expensive and high-test. Weirdly, it's hard to move from one to the other because you simply can't conceive of not doing what you're already doing.
"The other camp is simply crazy."