At a high level, you should work to keep the ratio of development to QA effort balanced. The right ratio will vary from project to project, so I'm not commenting on that, but once you have settled on a ratio there are several ways to maintain it. In an Agile project, you may quickly discover that the ratio itself is off and need to adjust it. In a project with long iterations, you might not find out that the ratio is out of whack until it is "too late."
In any case, once you find that you are short on QA resources, you have a couple of options. The first, of course, is to get more QA resources. Failing that, you can use development resources. It may also be that you have only a short-term need for more QA, in which case using development resources might be exactly the right short-term solution.
I think a common rule of thumb, which applies especially in the context of compliance, is that different people do the development and the testing of a particular requirement. That is, if Sue is responsible for issue 12345, then Sue doesn't do the testing of it.
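This rule of thumb is easy to enforce mechanically. As a minimal sketch (the issue data and field names here are hypothetical, not from any particular tracker), a compliance check might scan issue records and flag any where the same person is listed as both developer and tester:

```python
# Hypothetical issue records; "developer" and "tester" are illustrative field names.
issues = [
    {"id": 12345, "developer": "Sue", "tester": "Raj"},
    {"id": 12346, "developer": "Raj", "tester": "Raj"},  # violates the rule
]

def find_violations(issues):
    """Return the ids of issues where the developer also did the testing."""
    return [issue["id"] for issue in issues if issue["developer"] == issue["tester"]]

print(find_violations(issues))  # prints [12346]
```

A check like this could run as part of a release gate, so that a requirement can't be closed out until someone other than its developer has signed off on the testing.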
As to the exact use of development resources, I advocate that developers take on a subset of QA activities, freeing the existing QA resources for higher-leverage work. For instance, developers can create automated unit tests or, more generally, generate automated test cases.
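To make that concrete, here is a minimal sketch of the kind of developer-written automated unit test I have in mind, using Python's standard `unittest` module. The function under test, `normalize_username`, is a hypothetical example, not something from the article:

```python
import unittest

def normalize_username(name):
    """Hypothetical function under development: trim whitespace and lowercase a username."""
    return name.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    """Developer-written unit tests that QA can later review and extend."""

    def test_strips_whitespace(self):
        self.assertEqual(normalize_username("  Sue "), "sue")

    def test_lowercases(self):
        self.assertEqual(normalize_username("SUE"), "sue")

if __name__ == "__main__":
    unittest.main(exit=False)  # run the tests without exiting the interpreter
```

Tests like these run automatically on every build, which is exactly what makes them a good fit for developers: they document intended behavior at the unit level and leave QA free to concentrate on higher-level test cases.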
You can of course set whatever policy makes sense for your organization. Having people write test cases only for other people's code is always a good idea. Beyond that, I would expect QA to take what the developers have done as a starting point: supplementing it with additional tests, reviewing the developers' work, running the official test cycle according to the test plan, and reporting on the results.
Next: Multi Stage Continuous Integration