Wednesday, May 26, 2010
USC researchers have released a report on the role of women and girls on children's TV. The research finds that female characters are underrepresented, portrayed as primarily interested in appearance and romance, and sexualized. I was shocked to see how two children's TV characters were given sexualized makeovers. Both characters were originally little girls; now they look older and thinner, show more skin, and Rainbow Brite in particular looks like jailbait.
The right and left wings of the sex education debate don't agree on much, but they do agree that reducing women and girls to sexual and romantic roles is distasteful and inappropriate. I hope that both sides can find some constructive action to counter these disturbing trends.
Monday, May 24, 2010
Anecdote vs. data: soda taxes
I'm constantly amazed by how many people doubt that tobacco taxes and soda taxes work, even people who are educated and pay attention to empirical research in other parts of their lives. The reasoning seems to be that because they don't feel particularly price sensitive --- they don't notice themselves making purchase or quantity decisions based on goods' prices --- they conclude that they aren't price sensitive. Likewise, because they have seen people continue to smoke and buy cigarettes at high prices, they believe few people are smoking less or quitting because of tobacco taxes.
The Campaign for Tobacco-Free Kids put out a new report on this subject, documenting a simultaneous decrease in smoking and increase in revenue from tobacco taxes. The economic research on the subject indicates that soda taxes could be similarly effective.
Going the other way, many people don't perceive the price of produce as a barrier to produce consumption. If they choose to buy more expensive produce (say, by shopping at a pricier store or buying only organic or local), they don't see that those higher prices could decrease their own fruit and vegetable consumption. At most income levels, people will probably buy less if produce costs more, and they would certainly buy more if it were cheaper: Yelp reviews of lower-cost produce shops and supermarkets commonly remark that shoppers can get several huge bags of produce for the cost of one bag at their regular store, and that they do buy more when they shop at these stores (e.g., this one in Chicago).
People who otherwise listen to economic research and use it to guide decisions and opinions in the rest of their lives, and who believe in things they can't see, like germs, atoms, and molecules, somehow don't believe in price elasticity because they can't see it. I wonder whether anyone has researched the impact of belief in price elasticity on economic behavior.
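For readers who prefer numbers, here is a minimal sketch of what price elasticity means in practice. The elasticity values below are illustrative assumptions, not estimates from the report above.

# Minimal sketch of how a price elasticity of demand translates a price
# change into a change in quantity purchased. The elasticities are
# illustrative assumptions, not estimates from any particular study.

def pct_change_in_quantity(pct_change_in_price, elasticity):
    """Approximate percent change in quantity for a given percent change
    in price, using a constant-elasticity approximation."""
    return elasticity * pct_change_in_price

price_increase = 20.0  # suppose a tax raises prices by 20%

for good, elasticity in [("soda (assumed)", -0.8), ("cigarettes (assumed)", -0.4)]:
    change = pct_change_in_quantity(price_increase, elasticity)
    print(f"{good}: {price_increase:.0f}% price increase -> {change:+.0f}% quantity")

The point is that elasticity describes aggregate behavior across all consumers, which is exactly what no individual shopper can see from the checkout line.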
Sunday, May 23, 2010
Limits of randomized experiments
I just got back from the Mid-Atlantic Causal Inference Conference, the leading meeting for statisticians who look not just for associations but for causality. Randomized experiments are the gold standard for causal inference because randomization ensures that, on average, the treatment and comparison groups are similar. Experiments do have limitations, however, which come primarily from their great expense: experiments may need to be small and of short duration, weakening the chance that experimenters can detect an effect. The study described in this article is a perfect example: 22 autistic children were randomized to a gluten-free, casein-free (GFCF) diet for 18 weeks and then given a "challenge" of these foods about 4 weeks into the trial; by the end of the trial, 8 of the subjects had dropped out.
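To make the small-sample point concrete, here is a rough power-calculation sketch. The effect size and the simple two-group comparison are my assumptions for illustration; the actual trial used a more complicated challenge design.

# Rough sketch of how small samples limit statistical power.
# The effect size (Cohen's d) and the two-group t-test design are
# illustrative assumptions, not a reconstruction of the trial's analysis.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5   # assume a "medium" standardized effect of the diet
alpha = 0.05

# 11 per arm if 22 children split evenly; about 7 per arm after 8 dropouts
for n_per_group in (7, 11, 64):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0)
    print(f"n = {n_per_group:3d} per group: power = {power:.2f}")

Under these assumptions, a trial with 7 to 11 children per group has only roughly a 10-20% chance of detecting a real medium-sized effect, while the textbook benchmark of about 64 per group is needed to reach 80% power.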
Judging from internet mailing lists and websites, there are hundreds of parents putting their children on a GFCF diet. The diet is hard to implement: it takes weeks or months of practice to get right, even then an errant crumb can disrupt the progress, and it's unclear how long kids need to be on the diet to see an improvement because determining the starting point is so inexact. A parent can probably remove more than 90% of gluten and casein from their child's diet starting on day 1, but hunting down the remaining 10% to reach full adherence takes a long time. And something like 99.998% adherence may be exactly what's required: the FDA definition of gluten-free is 20 parts per million. Once the GFCF diet is in place, many parents say that it improves their children. Now a randomized trial that started out with 22 participants and lost 8 of them comes into the news under the headline, "Eliminating Wheat, Milk From Diet Doesn't Help Autistic Kids."
An experiment doesn't have the luxury of refining the diet to make sure it's being done correctly, or of figuring out how long the diet needs to continue before there's improvement. An experiment generally fixes the treatment in advance rather than finding it by trial and error, since tuning the intervention to get the best result is, to a certain extent, cheating (i.e., it risks a spuriously significant result that occurred simply by chance).
A good experiment is an invaluable tool for understanding reality, but a so-so experiment is no better than a qualitative study of people on internet websites.
Labels: autoimmune diseases, gluten-free, statistics
Tuesday, May 18, 2010
Chemicals and comparison groups
Current legislation is trying to ban bisphenol A (BPA), a chemical that has been used for 50 years in the linings of food cans, so I decided to look into how much evidence there is that it is dangerous, and whether the potential substitutes are safe. The American Council on Science and Health finds little evidence that BPA is dangerous; they also find no concrete proposals for what might replace it, much less any evidence on the alternatives' safety profiles. Their analysis raises very good points and is worth reading.
Just as in statistics, the important question for any risk analysis is "compared to what?" Nothing is dangerous on an absolute level: risks always have to be weighed against their alternatives. When DDT was banned decades ago, the ban may or may not have helped the eagle population, but malaria rebounded: in Sri Lanka, cases fell from millions to a couple dozen while DDT spraying was in use, then climbed back toward a million after spraying stopped. Malaria still affects hundreds of millions of people around the world, and many of those cases might be prevented if DDT spraying were allowed. The developed world has not had malaria since the 1940s --- if malaria had rebounded here, perhaps we would see pesticides as the life-saving tools that they are --- but we do have a resurgence of bed bugs, even on the Upper East Side, after they had been almost completely eliminated 50 years ago. Maybe the good effects of the DDT ban are worth hundreds of millions of cases of malaria in the developing world and bed bugs in the developed world, but the alternatives always need to be considered. The WHO has backed bringing DDT back for malaria control because it was so useful.
With the current talk of banning BPA, the comparison group is completely missing. Banning a chemical without discussing the alternatives and their risks leaves us with potentially worse alternatives, or no alternatives at all.
Sunday, May 16, 2010
Ambivalence or Planned Parenthood
A lot of evidence finds that a sizable number of young adults are ambivalent about pregnancy: they don't want to get pregnant, but they wouldn't mind if they did. Bill Albert of the National Campaign to Prevent Teen and Unplanned Pregnancy asks whether ambivalence is the right mindset in which to start a family, which reminded me of last week's A Prairie Home Companion.
In honor of Mother's Day, a skit on last week's Prairie Home Companion contrasted today's view of motherhood with an earlier generation's. In today's version, the woman tells her husband that they may need to work on their relationship and openness before they are really completely ready to have children, although they've already been married over a dozen years, and the childbirth is assisted by a midwife, a chanting Tibetan monk, and a dolphin named Sparky. In the earlier generation's version, the woman says, "Gee John, I just got back from the doctor, and guess what?" John says, "Guess we ought to get married then." Her childbirth is attended by a doctor who is also a veterinarian, and she runs back to the potato field to finish harvesting right afterwards.
Planning is certainly best, but there is such a thing as too much planning and waiting. Earlier, not-quite-planned parenthood is difficult, but so are fertility treatments. Parenthood is difficult no matter when it happens, and while it is gratifying, according to the research I'm aware of, people with children are less happy than people without children. May everyone find a middle ground that they can be happy with, and be able to have as many children as they would like.
Friday, May 14, 2010
Fantastic WW2 sex ed film for soldiers
This film shown to World War II soldiers is already better than any official abstinence-only (a-h criteria compliant) education curriculum. The message is: do not have sex, but if you do, use a condom (it shows a condom and explains how to put it on), and do not drink so much that you become careless. It closes with a typewritten message on the screen: "Do not be so weak as to let some ignorant individual persuade you that you must seek sex relations to be a good sport. If you follow his advice, you are only being a fool."
Short and to the point, and an easy substitute for weeks-long curricula. And this was before there was even effective treatment for syphilis or gonorrhea (and before we were aware of chlamydia, herpes, and HPV). Interestingly, the film was made in 1941, so either in the 24 days after the US declared war or before the US entered WW2 at all. Either way, the army clearly anticipated that STIs would be a major problem and knew it had to prevent them as much as possible rather than wait for them to come up. Unfortunately, there is no such common, unifying impetus to prevent STIs today.
Policy proposal: given that some idealize the times before the sexual revolution, states that are reluctant to offer modern comprehensive sex education should limit themselves to material produced by state and federal government bodies prior to the sexual revolution. The vintage government films that I've seen are more practical and factual than the abstinence-only curricula that I've seen.
Labels: abstinence-only, communication, condoms, sex education, STD risk, STIs
Sunday, May 2, 2010
Why doctors need to know Bayes theorem
In graduate school, I was the head TF for several general-audience statistics courses, and my favorite subject was Bayes' theorem, because it implies that many "common sense" policies are in fact dangerous. Whether the concern is a dreaded disease, drug use among ship captains or pilots, or anything else, it's so easy to say, "Just test everyone." But when the condition is rare, even an accurate test produces mostly false positives, so universal testing is often not good policy.
Social psychologist Gerd Gigerenzer's new book describes what happens when doctors are asked to give probabilities to their patients, and they do a horrible job. The question below presents the information exactly the way doctors are taught it: prevalence, sensitivity, and the false positive rate.
The probability that one of these women has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
A prestigious doctor, department chief with 30 years of experience "was visibly nervous while trying to figure out what he would tell the woman. After mulling the numbers over, he finally estimated the woman’s probability of having breast cancer, given that she has a positive mammogram, to be 90 percent. Nervously, he added, ‘Oh, what nonsense. I can’t do this. You should test my daughter; she is studying medicine.’ He knew that his estimate was wrong, but he did not know how to reason better. Despite the fact that he had spent 10 minutes wringing his mind for an answer, he could not figure out how to draw a sound inference from the probabilities."
And he was typical: more than 90% of the doctors were wrong, mostly very wrong.
When the question was posed in terms that are easier for people to understand, nearly all of the doctors got the question right.
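For the record, here is the arithmetic the doctors were being asked to do, as a minimal Python sketch using the numbers from the quoted question.

# Bayes' theorem applied to the mammogram question quoted above:
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)

prevalence = 0.008           # P(cancer) = 0.8 percent
sensitivity = 0.90           # P(positive | cancer)
false_positive_rate = 0.07   # P(positive | no cancer)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(cancer | positive mammogram) = {p_cancer_given_positive:.1%}")
# Prints about 9.4%, nowhere near the 90% the department chief guessed.

The easier framing is presumably natural frequencies, the format Gigerenzer is known for advocating: out of 1,000 women, about 8 have cancer and 7 of them test positive, while about 69 of the 992 without cancer also test positive, so only around 7 of the 76 positive results are true positives.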