First, many thanks to Isaac Rodriguez whose comments on my previous post elicited a comment from me which I have re-written as a post here. I've also updated my previous post a bit to better state the original point.
Let's say that you develop your software in 30-day iterations. All functionality introduced during an iteration has its tests written during that same iteration, not at the end of a one-year cycle, and all of those tests pass. You take testing very seriously: you've got very high test coverage numbers, you are using decision-based coverage instead of line coverage, and you fix every bug you find.
Let's say that you could release every month but instead you choose to release every year. At the end of this year you release your product, and I guarantee that your customers will find problems. For the sake of simplicity, say they find exactly 12, each one linked to functionality introduced in one of the 12 iterations. This means that the bug in iteration 1's functionality sat undiscovered for 11 months, even though a customer would have found it right away.
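The arithmetic behind this can be sketched with a toy model (the `feedback_delays` helper and its assumptions are mine, not from any real release data): one customer-found bug per iteration, discovered at the first release on or after the iteration that introduced it.

```python
# Toy model of the scenario above: 12 monthly iterations, one
# customer-found bug per iteration, found at the first release
# that ships the iteration's functionality.

def feedback_delays(release_months, num_iterations=12):
    """Months each iteration's bug waits before a release exposes it."""
    delays = []
    for i in range(1, num_iterations + 1):
        # earliest release at or after the iteration introducing the bug
        release = min(m for m in release_months if m >= i)
        delays.append(release - i)
    return delays

yearly = feedback_delays([12])           # one release, at month 12
monthly = feedback_delays(range(1, 13))  # a release every month

print(max(yearly), sum(yearly) / 12)     # 11 5.5
print(max(monthly), sum(monthly) / 12)   # 0 0.0
```

Under these (admittedly contrived) assumptions, releasing yearly means the average bug waits five and a half months for customer feedback, and the worst case waits eleven; releasing monthly drives that delay to zero.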
I'm not saying that you should forgo testing or rely on your customers to be part of your QA department. I'm only saying that despite your best efforts, it is inevitable that some issues will only surface after you release. So keep testing; don't stop. But release as often as you can.
Also in this (contrived) example, your customers would still be exposed to the same number of bugs, just not all at the same time.
Moral to this story: "In addition to keeping your testing standards high, it is better to find, as soon as you possibly can, the problems you are only likely to find by releasing to customers. Therefore, release as often as you can."
Next: Customers Don't Want Frequent Releases