
Thursday, August 16, 2007

GlaxoSmithKline - Snakes, ladders, and spin?

Browsing through the BMJ, I found this rather fascinating article, which pretty much sums up how GSK works. It's called 'Snakes, ladders and spin' and can be read here

See if it rings any bells for you.

How to make a compelling submission to NICE: tips for sponsoring organisations

Evidence in support of products being assessed by the National Institute for Clinical Excellence can be presented in various ways to make products seem more attractive than they really are.

During health technology appraisals the National Institute for Clinical Excellence (NICE) invites sponsors of the technology to make a submission in support of their product. These submissions can be of variable quality. We examine some of the more dubious techniques that can be used by sponsoring organisations to make their products seem attractive to those making reimbursement decisions.

Although the following "advice" is based on real examples of submissions to NICE, it should be remembered that similar biases can be found in academic, peer reviewed publications and that such practices are not the preserve of industry.

Make your technology look effective

Generating the evidence

Compare your intervention with an inactive control; placebo controlled trials are ideal for circumventing clinically relevant head-to-head comparisons.

If forced to use an active control make sure that the comparator is:

Inadequately dosed;

Toxic; or

Ineffective in the patient group studied (for example, select only those who have failed to respond to standard treatment and then use standard treatment as the comparator).

Do not look at long term outcomes; for modelling purposes, extrapolating from short term data is far more flexible.

If the treatment effect may be short lived, switch all the controls to your intervention at the end of the trial. This makes long term assessment impossible, and you can extrapolate the benefits from the early results. Justify this switch on ethical grounds.

To be taken seriously, you must describe at least one analysis as "intention to treat." Do not over-interpret this requirement or stick slavishly to technical definitions. For example, by defining withdrawal criteria to include patients who find the intervention toxic or ineffective, you can avoid collecting follow up data for patients whose inclusion in an intention to treat analysis would be undesirable.
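To see what is at stake, here is a toy illustration (all numbers invented) of how quietly excluding awkward withdrawals inflates a nominally "intention to treat" response rate:

```python
# Hypothetical trial arm of 100 patients randomised to the sponsor's drug.
randomised = 100
responders = 40
withdrew_toxic_or_ineffective = 30  # follow-up data never collected

# A true intention-to-treat analysis counts everyone randomised.
true_itt = responders / randomised

# The "convenient" version drops patients withdrawn for toxicity
# or lack of effect, since no follow-up data were collected for them.
convenient = responders / (randomised - withdrew_toxic_or_ineffective)

print(f"True ITT response rate:        {true_itt:.0%}")     # 40%
print(f"'ITT' rate after exclusions:   {convenient:.0%}")   # 57%
```

The redefinition turns a 40% response rate into a 57% one without a single extra patient improving.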

Reporting the evidence

Place most emphasis on the outcome in the trial that produces the most significant results.

Do not be unduly upset if no outcome on its own is statistically significant.

Combine different outcomes: by chance you will often find a combination of two or three outcomes that is highly significant. Report these as a clinically "very important combination."

With a little creativity you will almost certainly be able to find a subgroup of patients with especially good results.
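The reason outcome-fishing and subgroup-hunting work so reliably is simple probability: test enough independent outcomes at the conventional 5% significance level and a "significant" result becomes near certain by chance alone. A minimal sketch of that arithmetic:

```python
# Chance of at least one spuriously "significant" result (p < 0.05)
# when testing several independent outcomes on an ineffective treatment.
alpha = 0.05
for n_outcomes in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** n_outcomes
    print(f"{n_outcomes:2d} outcomes: {p_any:.0%} chance of a chance finding")
```

With 20 outcomes (or subgroups) the probability of at least one spurious hit is about 64%, which is why the unadjusted "most significant" result tells a reviewer very little.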

Report mean changes on rating scales as your primary outcome. This way, no one will know how many patients experienced worthwhile clinical improvements.

Always refer to the trials favouring your intervention as the "pivotal" trials.

If your product is a drug and the evidence of efficacy is weak, insist that it is a member of a class of drugs.

Conversely, if the class of drugs has little advantage over alternative treatments, insist that your product is unique within that class.

Presentation and framing are all-important: a 33% decrease may be better expressed as a 50% increase, and expressing results as a relative risk reduction is usually much more compelling than the equivalent absolute risk reduction.
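The arithmetic behind both framing tricks can be made concrete. A small sketch (the event rates are invented for illustration) showing the same trial result as a relative versus an absolute risk reduction, and the same pair of numbers described as a 33% decrease or a 50% increase:

```python
# Hypothetical trial: event rate 4% on control, 2% on the new drug.
control_rate = 0.04
treatment_rate = 0.02

# Relative risk reduction: sounds impressive.
rrr = (control_rate - treatment_rate) / control_rate
# Absolute risk reduction: the same result, far less compelling.
arr = control_rate - treatment_rate

print(f"Relative risk reduction: {rrr:.0%}")   # 50%
print(f"Absolute risk reduction: {arr:.1%}")   # 2.0%
print(f"Number needed to treat:  {1 / arr:.0f}")  # 50

# The same two numbers framed both ways: 2 is a 33% decrease
# from 3, while 3 is a 50% increase on 2.
decrease = (3 - 2) / 3
increase = (3 - 2) / 2
print(f"Decrease: {decrease:.0%}, increase: {increase:.0%}")
```

A "50% relative risk reduction" and "2 fewer events per 100 patients treated" are identical results; only the framing differs.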

Minimise the possibility of independent evaluation of the evidence

Suppress the original protocols for trials ("commercial in confidence" is an established justification for this); this prevents independent reviewers from detecting whether your reported outcomes are results-driven, as they will not be able to verify the primary outcome of the trial.

If some of your trials come out with unfavourable results:

Do not report them;

Delay reporting until after the NICE committee makes its decision; or

If these results have already been reported in journals, minimise their importance—find some minor difference in trial design from the studies that give favourable results and emphasise the clinical importance of this.

Make sure it is not possible to reanalyse your results—there are many ways this can be done:

Present your data as an "integrated" analysis. This also conveniently allows you to add and subtract bits to make your case stronger (such as truncating data at an outcome point that maximises the apparent treatment effect);

Leave out the standard deviations and confidence intervals, especially where these do not reach conventional levels of statistical significance;

Present survival curves without giving information on the hazard ratio, withdrawals, or losses to follow up;

Present results in different formats for each trial to prevent independent pooled analyses.


Overwhelm independent technology assessment reviewers by submitting large volumes of data. Aim for 10 000 pages as a minimum. Include rat, elk, sheep, and in vitro tissue studies where possible—you do not want to be accused of holding back potentially useful information.

Ensure that data are delivered at the last possible moment.

If an independent review team has worked in the area, insist that the work be given to a team that is coming fresh to the subject.

However, if there is plausible evidence to support your technology, ignore the above. Develop a chummy and cooperative relationship with the review team and provide them with everything they need. (The evidence is more likely to impress the appraisal committee when it is presented by independent reviewers and will thus make your case better than you can.)

The article continues in pretty much the same vein throughout. It has GlaxoSmithKline tactics written all over it, wouldn't you say?