Thursday, October 8, 2009

Hormonal contraception makes women less attractive and leads them to choose worse partners, says review paper

A few months ago, I said that hormonal contraception (the pill, Depo, etc.) messes up women's choice of mates, based on results showing that women on it choose differently, and I asked whether that's responsible for higher divorce rates in those couples. We still don't know whether divorce is more likely among women who chose their partners while using hormonal contraception.

But a review paper has come out that solidifies the finding that women choose partners differently on hormonal contraception than off it, and adds that women are less attractive while taking it because it suppresses ovulation, the phase during which women are most attractive to men.

That's another reason to add to the book that just came out saying that women ought to consider other methods besides the pill.

Hormonal contraception is the standard, and evidence probably won't change that, but other equally reliable methods of pregnancy prevention, such as intrauterine contraception (IUC), are currently used by only a minuscule proportion of women. I wonder whether we'll see even a slight shift.

(Conflict of interest disclosure: I have a very nice pen, given to me by the company that makes an IUC device, and I happened to use it this morning, but really that's not why I wrote this. This is just the first time I've ever had one of those legendary conflicts of interest.)

Monday, October 5, 2009

Political compromises on sex ed

I'm an ardent moderate, but I find the administration's compromises counterproductive. Obviously the stimulus compromise didn't work: the bill was watered down, and virtually no Republicans voted for it anyhow.

If the health bill passes, sex education funding will go to $50 million for abstinence-only education, $50 million for evidence-based comprehensive sex education, and $25 million for experimental comprehensive sex education. That's not compromise. It also goes against popular opinion: 52% of even politically "very conservative" parents favor teaching birth control in schools, as do 89% of parents overall. Likewise, most of the public and most physicians favor the public option, but that isn't making it into policy either.

More importantly, it goes against the findings of the Congressionally mandated study showing that abstinence-only sex education doesn't work.

They're not listening to either the public or the researchers they hired.

Friday, October 2, 2009

Why the placebo effect is an effect

There was an interesting article in Wired recently about the placebo effect getting stronger: the pre-post difference from a placebo drug is greater than it was a decade or two ago, and it differs between countries. That is, if you are looking at antidepressants and your outcome measure is a score on the Beck Depression Inventory, which measures how depressed someone is, the score before the drug minus the score after the drug is different now than it was 10 years ago.

One criticism of the article is that the placebo effect cannot be considered an effect unless it is compared with another experimental condition. Since drug trials don't include both patients who receive a placebo and patients who receive nothing, there is no such thing as a placebo effect unless we know what the pre-post difference would have been in the absence of the placebo. Without a nothing arm to compare with, the critic contends, the pre-post difference in the placebo arm of a trial is by definition just the background noise in the trial.

I think he's making a merely semantic point, because a true placebo effect is impossible to measure.

To break the problem down further:

We do not know what the pre-post difference in a nothing arm of a trial would be. In some trials and for some diseases, there would be spontaneous improvement in the patient's condition: in that case, the pre-post difference in the placebo arm might just be the spontaneous improvement that would have happened if nothing were done.

In other trials and for other diseases, there would not be much change in the patient's condition, so the nothing arm would show no difference: in that case, the pre-post difference in the placebo arm would represent a real "effect," and we could say that we have a placebo effect.

The question is which diseases have spontaneous improvement and which don't. There are three ways I can think of to figure this out.

1. A randomized clinical trial in patients who actually have some disease, in which half the patients get a sugar pill and half the patients get nothing. First, no human subjects board would authorize this trial. Second, the study would not measure what we want it to: ethically, patients would have to be told that the two possibilities are a sugar pill and nothing, and the Wired article contends that the placebo "effect" is based on a patient's prior beliefs about a drug's effectiveness, so it's specific to the drug rather than being just the effect of a plain sugar pill.

2. The placebo effect could in theory be measured with matching, were there any subjects to match to. The placebo pre-post difference can be defined in two ways: the pre-post difference of the sugar pill plus the pre-post difference of enrolling in the trial, or just the pre-post difference of the sugar pill alone. I would say it's the former. In that case, where we want to measure the effect of enrolling in a trial and taking a sugar pill, we could match ordinary patients with placebo patients based on their medical records and compare their pre-post differences. Except that the medical records of ordinary patients with a disease exist because those patients are getting some treatment from their doctors. So there's no untreated group to compare the placebo patients to.

3. The one remaining possibility is for each drug trial to divide its control group into two unequal groups: a larger group receiving a sugar pill and a smaller group put on a waiting list for the drug (a sketch of this allocation appears below). The problem is that placebos serve two purposes: one is statistical, and the other is to keep participants in the study and discourage them from taking other treatments. Depending on the condition, a control participant put on a waiting list might leave the trial or take another treatment while waiting. So you might lose a good portion of the nothing arm of the trial.
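As a minimal sketch of what that unequal split might look like (the 80/20 ratio, the function name, and the patient IDs are my own assumptions for illustration, not anything from an actual trial protocol):

    import random

    def allocate_control(patient_ids, p_placebo=0.8, seed=42):
        """Split a trial's control group into a larger sugar-pill arm
        and a smaller waiting-list ("nothing") arm.

        The 80/20 split is an arbitrary assumption for illustration; a
        real trial would choose the ratio to balance statistical power
        in the placebo arm against expected attrition in the
        waiting-list arm.
        """
        rng = random.Random(seed)  # fixed seed so the split is reproducible
        arms = {"placebo": [], "waitlist": []}
        for pid in patient_ids:
            arm = "placebo" if rng.random() < p_placebo else "waitlist"
            arms[arm].append(pid)
        return arms

    arms = allocate_control(range(200))
    print(len(arms["placebo"]), "placebo;", len(arms["waitlist"]), "waitlist")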

Given the impossibility of rigorously measuring what would happen under no treatment, the best we can do is guess which diseases have symptoms that resolve spontaneously and which don't. And that's what we already do when we talk about a placebo effect: we compare the pre-post difference in the placebo arm of a trial with our beliefs about what the pre-post difference would be under no treatment. In that sense, the placebo effect really is an effect. It's just imprecise.

Further, it's reasonable to assume that whatever the pre-post difference under nothing is, it's not going to change over time in any systematic way. So if we could put all the placebo arms of, say, antidepressant trials together and find a trend over time, that's not sampling error; since the no-treatment baseline should be stable, it must be the placebo effect itself changing. And that's exactly what the Wired article is talking about.
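To make that concrete, here is a minimal sketch of such a trend test using SciPy's linregress. All the numbers below are invented for illustration; real data would come from the published trials themselves.

    # Pool the placebo arms of hypothetical antidepressant trials and
    # regress the mean pre-post improvement on trial year.
    from scipy.stats import linregress

    # (trial year, mean pre-post improvement in the placebo arm, e.g.
    #  in Beck Depression Inventory points) -- fabricated data points
    placebo_arms = [
        (1990, 4.1), (1993, 4.5), (1996, 4.8), (1999, 5.4),
        (2002, 5.9), (2005, 6.3), (2008, 6.8),
    ]

    years, diffs = zip(*placebo_arms)
    fit = linregress(years, diffs)

    # A clearly positive slope would be a systematic trend, not sampling
    # error: if the no-treatment baseline is stable over time, the
    # placebo effect itself must be changing.
    print(f"slope = {fit.slope:.3f} points/year, p = {fit.pvalue:.4f}")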

Statisticians protest at G-20 conference for safer data mining



Data miners protest alongside United Steelworkers at the G-20 conference. My favorites: "Repeal Power Laws," "Our Sets. Our Axiom of Choice."

They even got John Oliver from the Daily Show to join in. I can't read the sign he is holding.

Thursday, October 1, 2009

Booty Call journal article

The Journal of Sex Research published an article, "The 'Booty Call': A Compromise Between Men's and Women's Ideal Mating Strategies." Its sample is not even slightly representative (just some Texas undergraduates), but its conclusions are similar to those of the book Hooking Up, which I reviewed here. Hooking Up was a qualitative study of two undergraduate populations that followed subjects in college and for a year or two after graduation.

Here are its conclusions:


With regards to accepting versus rejecting booty call partners, physical attractiveness was considered the most important criteria by both genders. Fourth, whereas men tended to cite other reasons related to sexual access, women tended to cite reasons related to friendship, compatibility, and personality. Fifth, for booty calls that do not progress into long-term relationships, both genders attribute the lack of progression to the man's not wanting a long-term relationship. Taken together, our results suggest that, although booty calls are mostly a sexual relationship whereby physical attractiveness is important, there are elements in which booty calls differ from other casual sexual relationships, such as one-night stands or hookups. In addition, whereas men tend to favor the sexual aspects of booty calls, women tend to favor other, more long-term oriented considerations. These findings are consistent with our overall hypothesis that the booty call may represent a compromise between the short-term, sexual nature of men's ideal relationships and the long-term, commitment ideally favored by women.


As in Hooking Up, the women surveyed have long-term relationships in mind but are willing to settle for short-term ones. Unlike in Hooking Up, none of the women describe an initial stage when they did not want a long-term relationship and just wanted to experiment with hooking up. Perhaps the survey did not ask them about that.