In mid-November the international community was still seriously concerned about Ebola and its effects on West Africa. Some prominent figures even called Ebola a threat to international peace. My realist/cynical side figured the calls might simply be an attempt to raise awareness and aid, but I was intrigued by the question, has disease ever led to war?
In my Foreign Policy piece (USIP mirror) I examine the literature on how disease could directly or indirectly lead to war. The short answer is that disease does not lead to war, but depending on the exact effect of an epidemic and the government’s response, disease could lead to other forms of conflict.
Two sections were cut for the final version of the article. First, we cut out the research on whether economic shocks lead to conflict using rainfall as an instrumental variable. This growing body of literature launched by Miguel, Satyanath and Sergenti is fascinating, but trying to explain instrumental variables proved unwieldy in such a compact article.
The other section was a quick data probe inspired by a 2001 report by the US Institute of Peace that discussed how HIV/AIDS could lead to conflict in Sub-Saharan Africa. I pulled UNAIDS figures on the national prevalence of HIV/AIDS in 2001 and UCDP/PRIO data on whether a country experienced civil war from 2002 to 2012. Dividing the countries into quartiles based on the prevalence of HIV/AIDS gives the following figure:
If HIV/AIDS increased the possibility of conflict, we would expect that those with the highest rates of HIV/AIDS would be the most likely to experience conflict. However, I find that those countries with the highest prevalence of HIV/AIDS in 2001 also were the least likely to experience a conflict in the 10 years after. While not particularly rigorous or scientific, this simple data exercise challenges some assumptions and raises some questions.
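For readers who want to replicate the exercise, here is a minimal sketch in R. It assumes a country-level data frame df with an hiv2001 column (UNAIDS adult prevalence in 2001) and an any_conflict indicator for a UCDP/PRIO civil war between 2002 and 2012; these variable names are mine, not the original script’s.

```r
library(dplyr)
library(ggplot2)

df %>%
  mutate(hiv_quartile = ntile(hiv2001, 4)) %>%           # split countries into prevalence quartiles
  group_by(hiv_quartile) %>%
  summarise(share_in_conflict = mean(any_conflict)) %>%  # share of countries with a civil war
  ggplot(aes(factor(hiv_quartile), share_in_conflict)) +
  geom_col() +
  labs(x = "HIV/AIDS prevalence quartile (2001)",
       y = "Share of countries with civil war, 2002-2012")
```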
When the OECD released their 2015 Fragility Report I remember looking at the penta-Venn Diagram of the different states of fragility and wondering why Afghanistan was not fragile in institutions, which was supposed to capture corruption among other governance issues. This question eventually led to a Monkey Cage post on my attempt to replicate their measures of fragility.
The OECD responded in a comment to the initial posting pointing out some problems with my replication while admitting certain errors. However, after a revised replication that incorporates those edits, all up on Github, I still get very different results.
I look forward to seeing OECD’s 2016 report, as they have discussed some interesting revisions to their measure of fragility. Hopefully, along with substantive improvements, they will also improve their methodology, especially its transparency. As the OECD continues to work with this data in order to provide a public good, the greatest good will come from being as public as possible.
I’m happy to introduce UCDPtools, an R package for accessing data from the Uppsala Conflict Data Program (UCDP). UCDPtools includes UCDPindex, which makes it easy to move around the websites and codebooks for the 15 UCDP datasets, and the function getUCDP(), which loads the datasets into R and fixes obvious errors and variable names.
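A hypothetical usage sketch follows; the exact dataset labels and function arguments may differ from the released package, so treat the calls as illustrative rather than documentation.

```r
library(UCDPtools)

UCDPindex                          # browse the 15 UCDP datasets, their websites, and codebooks
acd <- getUCDP("ArmedConflict")    # load the UCDP/PRIO Armed Conflict dataset (label assumed)
str(acd)                           # inspect the cleaned variable names
```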
I hope UCDPtools can become a one-stop shop for tools and tricks for working with the UCDP datasets in R. Future iterations will include the datasets at different units of analysis, for example a year-actor version of “ArmedConflict” with a conflict count. If you have code you would like to contribute, please contact me. Thanks to Jonathan Olmsted for assistance in packaging and Stephen Haptonstahl for making packages seem possible. All bugs are my own.
In exploring the GDELT dataset around disasters, I found an interesting trend around the tragic Typhoon Haiyan. Looking at events geolocated in the Philippines before and after the typhoon, I found a steep rise in the number of optimistic comments, clearly overtaking a rise in the number of pessimistic comments.
This is just a probing of the GDELT data, which already must be used cautiously, so no conclusions should be drawn. It does suggest possible questions that we can ask about politics and psychology around disasters. More broadly, it raises some other ways to use the GDELT dataset.
Oh, and as a quick comparison, there is no clear trend for optimistic / pessimistic comments in India surrounding the destructive but much less deadly Cyclone Phailin.
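For the curious, the figure boils down to counting two CAMEO event types by day. Here is a rough sketch in R, assuming gdelt is a data frame of GDELT 1.0 daily event records; the column names, the FIPS country code “RP” for the Philippines, and the CAMEO codes 012/013 for pessimistic/optimistic comments are my assumptions about the public schema, not the original query.

```r
library(dplyr)

tone_counts <- gdelt %>%
  filter(ActionGeo_CountryCode == "RP",        # events geolocated in the Philippines
         EventCode %in% c("012", "013")) %>%   # pessimistic / optimistic comments
  mutate(date = as.Date(as.character(SQLDATE), format = "%Y%m%d"),
         type = ifelse(EventCode == "013", "optimistic", "pessimistic")) %>%
  count(date, type)                            # daily counts by comment type
```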
I recently attended the PSU GDELT Hackathon where I got a chance to contribute to the R package GDELTtools. The experience inspired me to clean up and share my own explorations of GDELT. My colleague Anna Schrimpf presented a research plan looking at the incentive structure that NGOs like Amnesty International face when choosing which issues to focus on. I found her research agenda fascinating and wondered if it could be applied to different types of conflict.
To check, I took the three UCDP datasets on annual casualties from (1) battles related to civil or interstate war, (2) one-sided government attacks on civilians, and (3) violence between non-state actors, covering 1989 to 2011, and plotted total fatalities against the count of media-reported NGO actions using GDELT. For each country-year, I count the number of actions by Amnesty International, Human Rights Watch, Oxfam, or the Red Cross that target an actor of that country in that year. The resulting plot below contains 2,500 points spread across 170 countries.
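As a rough sketch of how such a country-year count could be assembled from GDELT event records (again assuming GDELT 1.0 columns; the name matching below is a stand-in for the actual actor coding):

```r
library(dplyr)

ngo_pattern <- "AMNESTY|HUMAN RIGHTS WATCH|OXFAM|RED CROSS"

ngo_counts <- gdelt %>%
  filter(grepl(ngo_pattern, Actor1Name),    # action initiated by one of the four NGOs
         !is.na(Actor2CountryCode)) %>%     # targeting an actor tied to a country
  count(Actor2CountryCode, Year, name = "ngo_actions")
```

These counts can then be merged with the UCDP fatality totals by country-year.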
Immediately Rwanda 1994 jumps out as an outlier with over 800,000 deaths and the USA in 2011 with over 1500 NGO actions. Zooming in to the area outlined in purple gives the following plot:
Again we have points along each axis. Along the Y-axis we see the USA and Israel joining more tumultuous countries, about which Ron and Ramos have a nice discussion in Foreign Policy. Across the X-axis we see Sudan, DR Congo, and Ethiopia with many conflict casualties but little NGO targeting.
Any analysis of this data is clearly preliminary and should be approached warily. This is only meant as a probe of GDELT’s potential. That said, I couldn’t resist running a negative binomial regression with the different types of violence as predictors and fixed effects by country and year. My expectations are that, controlling for country and year effects:
(H1) NGO action increases with violence
(H2) NGOs react most to violence against civilians (onesided)
(H3) NGOs are more responsive to government violence (onesided and battle)
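A minimal sketch of that model, assuming a merged country-year data frame dat whose variable names mirror the table below (fatalities per 1,000 by violence type); this is a reconstruction of the specification, not the original script.

```r
library(MASS)

m <- glm.nb(ngo_actions ~ battle1000 + nonstate1000 + oneside1000 +
              factor(country) + factor(year),   # country and year fixed effects
            data = dat)
summary(m)
```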
|              | Estimate | Std. Error | z value | Pr(>\|z\|) |     |
|--------------|----------|------------|---------|------------|-----|
| (Intercept)  | 0.928    | 0.0223     | 4.155   | 3.25e-05   | *** |
| battle1000   | 0.129    | 0.0144     | 8.932   | < 2e-16    | *** |
| nonstate1000 | 0.108    | 0.0597     | 1.812   | 0.070012   | .   |
| oneside1000  | 0.0651   | 0.0122     | 5.335   | 9.57e-08   | *** |
The results shown above suggest support for (H1) and (H3), as non-state violence is the only type not statistically significant at the .01 level. I was surprised at the similar results for battle and one-sided deaths, but a quick look at the literature revealed similar findings for battle deaths (see Hafner-Burton and Ron 2012 and Ron, Ramos, and Rodgers 2005).
My own conclusion from this exercise is that GDELT has a lot of potential and will only get better as it is further tested and developed. I am especially excited about combining GDELT data with other datasets such as the UCDP violence data used here, even if doing so does lose GDELT’s temporal power.
Last month the NFPA released several reports on fire losses in 2012. The report Catastrophic Multi-death Fires in 2012 covers the seventeen incidents that had five or more deaths. These incidents make up .001% of the total fires for the year and 2.9% of the total deaths.
The incident with the single largest loss of life was not any of the usual suspects such as a building collapse or a gas explosion. It was a series of car accidents due to low visibility from a brush fire near a highway. On January 29th, on Interstate 75 near Gainesville, Florida, 25 vehicles were involved in six crashes that left 11 people dead. The helicopter footage of the aftermath is harrowing, but believable given the visibility captured in the photograph (via Daily Mail UK).
The timeline of that day’s events is as fascinating as it is tragic:
1435 hours, January 28th - Brush fire starts, agencies notified
1737 - DOT places Fog/Smoke signs on US 441
2003 - Fire Service (FS) officer contacts Highway Patrol (HP) officer, informs HP that FS is leaving the scene, makes sure HP will continue to monitor. FS states, "We don’t know what it’s gonna do, weather wise, so that thing is pretty close to 441 and 75 and we don’t want any major accidents."
2028 - HP officer returns to fire scene, reports that fire is out with no smoke. Incident considered closed.
2331 - HP notified of six-vehicle crash but not of low visibility.
2354 - HP notified of accident with semi-truck and two vehicles due to smoke.
0010, January 29th - Interstate 75 is shut down to traffic
0326 - Interstate 75 reopens
0350 - DOT maintenance drives I-75, finds the southbound side clear and the northbound side in solid smoke
0401 - HP receives calls of multiple accidents.
0409 - Interstate 75 is permanently shut down.
The Law Enforcement Incident Review highlights many factors that went wrong here, but I was struck by the meteorological dimension. The sudden drop in visibility was due to a temperature inversion, a phenomenon in which a shift in the atmosphere’s temperature layers traps smoke near the ground. After a similar situation in Florida in 2008 led to a 70-vehicle accident with 4 deaths, the highway patrol updated its checklist for smoke/fog incidents to include checking the Low Visibility Occurrence Risk Index and getting a spot forecast from the National Weather Service. In 2012, neither of these things was done.
The Interstate 75 catastrophe is neither the worst nor the most recent fire tragedy in which the weather played an important role. Just four months ago, in what was the deadliest event for firefighters since the 9/11 attacks, 19 firefighters were lost in the summer wildfires in Yarnell, Arizona, when the winds changed. My former Cornell bandmate Owen Shieh, now the Weather and Climate Program Coordinator at the National Disaster Preparedness Center, posted a very detailed analysis of what happened and how it could have been avoided, concluding with the crucial question for emergency decision-making: what else can be done to bridge the gap between the science and the decisions?
Unfortunately, it is not clear whether agencies in Florida are taking the weather itself more seriously. A year after the Interstate 75 incident, the Florida DOT, Forest Service, and Highway Patrol have reportedly collaborated on improvements in inter-department communications, cameras to monitor the highway, signage to warn drivers, sensors to monitor traffic speeds, and increased training for responding to fog-related accidents.
While reading Damon Coppola’s Introduction to International Disaster Management, I was struck by the unequivocal denouncement of cost-benefit analyses of disaster mitigation with respect to human life. In listing three criticisms of the process of determining risk acceptability, number two reads:
Setting a dollar figure (in cost-benefit analyses) on a human life is unethical and unconscionable . . . Because of the controversial nature of placing a value on life, it is rare that a risk assessment study would actually quote a dollar figure for the amount of money that could be saved per human life loss accepted. Post-event studies have calculated the dollar figures spent per life during crisis, but to speculate on how much a company or government is willing to spend to save or risk a life would be extremely unpalatable for most.
The emphasis is not mine. I had two initial reactions. First, setting a dollar figure on human life is common practice in many settings. The EPA is a common example, and it currently has its carefully defined value of a statistical life set at $7.4 million (in 2006 dollars). In an oft-cited article, Viscusi and Aldy review over 100 articles that measure how individuals value mortality risk.
The second thought was to consider who in the emergency management field would benefit or lose from Coppola’s forbidden analysis. My former Princeton colleague Sarah Bush, now an Assistant Professor at Temple University, has studied how NGOs began quantifying the benefits of democracy promotion programs to keep up with the shifting preferences of their donors. In that transition, there were certainly winners and losers among the NGOs.
Two pieces in the last couple of weeks put these questions to a wider audience. First, Peter Singer’s NYT column pitted charity categories against each other, resulting in a thought experiment asking whether you would visit a new museum wing if doing so gave you a 0.1% chance of suffering 15 years of blindness. With this bizarre would-you-rather, Singer advocates for the evidence-based approach to charity known as Effective Altruism.
Second, Professor Chris Blattman strongly criticized charities that eschew impact appraisals. In a recent episode of This American Life featuring Blattman and looking at the best way to give, the vice president of Heifer International responds to a question about an experiment in which one village gets cows and training and another village gets an equivalent amount of cash. She says, “It sounds like an experiment, and we’re not about experiments. These are lives of real people and we have to do what we believe is correct. We can’t make experiments with people’s lives. They’re just– they’re people. It’s too important.”
To Blattman, this is the crux of the matter. He writes, “Let me be blunt: This is the way the Heifers of the world fool themselves. When you give stuff to some people and not to others, you are still experimenting in the world.” Blattman admits that ignoring someone is easier than talking to someone and measuring their outcomes without giving them anything, which may be part of why Heifer feels the latter is immoral. Blattman could not disagree more, saying that in a world with limited resources it is immoral not to take measurements. If in fact a poor family does twice as well with cash as with cattle, then each family Heifer helps with cattle in essence comes at the cost of aid withheld from another, through opportunity costs. Of course we don’t know if this is the case, but the point is that ignorance is not only bliss, it’s cruel.
Both Singer and Blattman are taking part in the discussion about a new charity, GiveDirectly, but they speak to part of what really bothered me about the paragraph in Coppola. Decision-making in disaster preparedness, mitigation, response, and relief with a cost-benefit analysis that considers human life feels like playing god, choosing who lives and who dies. However, avoiding that analysis does not absolve you from making life-and-death decisions; you just make them with even less information. We do not know whether those uninformed actions result in further loss of life, but it seems immoral not to find out.
In the latest Freakonomics episode, “How Many Doctors Does It Take to Start a Healthcare Revolution?”, Jeffrey Brenner, executive director and founder of the Camden Coalition of Healthcare Providers, uses Princeton, NJ as an example of overactive hospital financing. The Coalition is currently working with J-PAL on an RCT of a care management program targeting healthcare-system “super-utilizers” identified by healthcare “hotspotting”. From the transcript:
BRENNER: One of the problems is that we have a giant economic bubble underlying this where we have hospital financing authorities underpinning, that are run by states that help hospitals float bonds. And we have this giant bond market called the hospital bond market that’s considered very secure, very safe, good investment. And you know, that bond market has floated too much hospital capacity and created and brought online too many hospital beds, far more hospital beds than we need in America. So you know, the most dangerous thing in America is an empty hospital bed. In the center of New Jersey, near Princeton, a couple years ago, we built two brand-new hospitals. These are two $1 billion hospitals, 10 miles apart, very close to Princeton. So one is called Capital Health, and the other is Princeton Medical Center. I don’t remember anyone in New Jersey voting to build two brand-new hospitals. But we are all going to be paying for that the rest of our lives. We’ll pay for it in increased rates for health insurance. And, boy, you better worry if you go to one of those emergency rooms, because the chances of being admitted to the hospital when there are empty beds upstairs that they need to fill are going to be much, much higher than when all the beds are full–whether there’s medical necessity or you need it or not. So I’d be very worried if you live in Princeton that there are now two $1 billion hospitals waiting to be filled by you.
The RCT is fascinating, but the episode also interested me personally, since I spent two years volunteering with Princeton First Aid and Rescue on the ambulance, taking patients to the $520 million Princeton Medical Center, opened in 2012.1
I do not recall taking anyone to the Capital Health Medical Center - Hopewell, opened in 2011, but Capital Health Regional Medical Center in Trenton was our go-to trauma center. To get an idea of the layout, the map below shows the Regional Medical Center in red, the two new hospitals in green, and the old Princeton Medical Center in orange.2
Map of past and current hospitals in the Princeton area.
The hospital construction also came up in a 2014 NYT article asking why a retired math professor was billed $5,435 for a procedure at the Princeton medical center when the same procedure cost $1,714 in Boston:
But that cost must cover some expenses in the United States not found in other medical systems. The area around Princeton has had a spate of new hospital building in the past five years. The University Medical Center of Princeton at Plainsboro, which has no connection to Princeton University, cost more than $500 million to build and has a curving atrium decorated with artwork from the hospital’s permanent collection. “It was like a luxurious museum,” Mr. Charlap said.
University Medical Center of Princeton at Plainsboro: curved for your health
I know little about health care finances and have no strong feelings about the hospital construction, which seems like it isn’t even the biggest factor in rising health care costs. I am happy to see that people are thinking about and questioning the status quo.
For my two cents, I felt that I transported many patients who did not really need an emergency room, not out of any financial interest but because of protocols. If a patient wanted to go to the hospital, I wasn’t qualified to refuse him or her by judging the trip unnecessary. Similarly, if a patient did not want to go to the hospital, they could be deemed unqualified to refuse for a number of reasons, including alcohol use. Often it seemed someone was of sound mind even after a couple of drinks, but when there are liability concerns, why risk releasing a not-sober but medically sound student who could then turn around and sue you when he later electrocutes himself on the Dinky, Princeton’s iconic rail car? The rules of when to transport are probably a relatively small factor in the larger world of health care costs, but from my view in the back of the ambulance they would also be worth a rigorous evaluation.
FOOTNOTES:
Technically the University Medical Center of Princeton at Plainsboro. ↩
Home to Dr. House, but the aerial shots actually showed Princeton’s campus center. Everyone lies. ↩
The title is hyperbolic, but it gets to a shortcoming of the UN Security Council’s Resolution 2118 on Syria’s chemical weapons passed last week. It was inspired by reported responses to the resolution.
US Ambassador to the UN Samantha Power - This resolution makes clear there will be consequences for noncompliance.
US Secretary of State John Kerry - Progress would be reported to the Council, he said, stressing that non-compliance would lead to the imposition of Chapter VII actions.
It is not clear whether Ambassador Power and Secretary Kerry actually believe what they are saying, but to be clear, they should not.
The consequence in question is item 21 of the resolution: “decides, in the event of non-compliance with this resolution, including unauthorized transfer of chemical weapons, or any use of chemical weapons by anyone in the Syrian Arab Republic, to impose measures under Chapter VII of the United Nations Charter.”
In reality, this is hardly a consequence. First off, any measures imposed under Chapter VII would require another UNSC resolution, a point that Russian Foreign Minister Sergey Lavrov made clear on a Russian TV interview. Another resolution gives Russia another chance to veto, and an earlier draft of 2118 suggests that Russia’s requirements for action are quite high.
For fun, let’s assume that Russia would be sensitive to criticism if it prohibited a response to another chemical attack after approving 2118. While it is true that the UN’s harshest measures require Chapter VII, the reverse is not true; Chapter VII does not require harsh measures. That is, Russia could put forth a toothless Chapter VII resolution to meet its institutional obligations and still shield Assad.
The Security Council Report wrote about the myths of Chapter VII in 2008, spelling out the range of Chapter VII resolutions. The following line is the most relevant to this situation:
In some cases, the Council invokes Chapter VII (for purely political purposes) but with no intent to impose binding obligations.
Ultimately, if we do see another chemical attack by the regime, the UN would find itself in a position similar to where President Obama was in early September - having its vague threat challenged with little chance of backing it up in a meaningful way. Let us hope that Assad is a man of his word.
In the last two weeks a happenstance agreement on Syria's chemical weapons has changed the discussion from 'what should we do' to 'what just happened'. Here's another attempt to break down the underlying questions and arguments. I do not think we will see again the kind of policy debate we saw around possible strikes, so my review of news and events here has more information than arguments.
What happened?
Sept 9th - John Kerry's rhetorical comment
Sept 13th - Kerry and Russian Foreign Minister Sergey Lavrov
I’ve been tasked with helping students understand the Syria crisis and US policy options. Below is an outline of basic facts, key questions and arguments, and interesting sources. The goal was to lay out many of the smaller debates that (ideally) contribute to any policy decision on Syria. Of course, many points have been simplified, with the infamous Afghanistan PowerPoint in mind.
Syria Timeline:
March 2011 - Protests start
October 2011 - Opposition Syrian National Council Forms
Alternative internet tabloid title: “7 Ways Afghanistan Kicks America’s Butt!”
Unemployment, insecurity, and corruption are the biggest challenges facing Afghanistan, according to the Asia Foundation’s 2014 survey. The fraud allegations and controversial recount of the presidential run-off election might have made the list, but the survey ran before the preliminary vote totals were released.
Even before the election, though, pessimism was on the rise. The survey found that 40% of Afghans believe the country is moving in the wrong direction, a new high since the Asia Foundation began surveying in 2004. If you are keeping up with US politics, however, that number doesn’t look so bad. Throughout 2014, over 60% of Americans have reported that the country is headed down the wrong track.
Of course, when comparing across surveys you have to account for many factors such as different methodologies, sampling limitations, exact phrasing, and cultural significance. We cannot say that America’s track is more wrong than Afghanistan’s, whatever that even means. However, I believe these comparisons should make us think about both the face value comparison (why might Americans be more pessimistic?) and the issues of doing such a comparison in the first place (how is the data shaped by differences in culture, society, history, politics, economics, linguistics, and the survey methodology itself?).
With the goal of promoting additional thinking, I present the seven survey results where Afghanistan outperforms the USA:
1. Right Direction
On the other side of the wrong-track coin, 54.7% of Afghans believe that the country is moving in the right direction. This is a slight dip from 2013, but overall still seems to be on a steady climb since 2008. In the United States, you have to go back to 2009 in the Reuters Poll or 2003 in the NBC News/Wall Street Journal Poll to find that level of optimism (see the graph by the Marist Poll).
2. Property Crime
Security is a major concern in Afghanistan; 16% of respondents report that they or someone in their family had suffered from violence or crime in the past year. Overall, violent acts (beatings, suicide attacks, murder, kidnapping, militant action, etc.) were more prevalent than property crime (racketeering, livestock theft, pick-pocketing, burglary, vehicle theft). Between 6.5% and 13% of respondents reported some type of property crime (the exact figure is unknown because respondents were allowed to report two types). In comparison, the violent crime rate in the US in 2013 was only 2.3%, according to the Bureau of Justice Statistics (BJS). However, the US may have a slightly higher rate of property crime: 13.1% of households.
3. Crime Reporting
The first response to the property crime comparison might be to ask which incidents go unreported. While Americans and Afghans may have different ideas about what constitutes a criminal act, as it stands Afghanistan has a higher reporting rate than the US. The BJS estimated that Americans reported 46% of violent crime and 36% of property crime to police. In Afghanistan, reporting of crime or violence increased this year to 69%.
4. Confidence in the Police
The difference in crime reporting may be related to each country’s confidence in its police. When a 2014 Gallup poll asked Americans how much confidence they have in the police, 53% selected a great deal or quite a lot versus the other choices: some, very little, or none. The Asia Foundation’s survey found 73.2% are confident in the Afghan National Police (ANP). This uses a composite measure of people who agree strongly or agree somewhat with three statements: the ANP (a) is honest and fair, (b) improves security, and (c) is efficient at making arrests. As more direct comparisons, 86% of Afghans strongly or somewhat agreed that the ANP improves security; according to a Pew Poll, 83% of Americans said that police were excellent, good, or only fair at protecting people from crime. Similarly, 88% of Afghans strongly or somewhat agreed that the ANP is honest and fair, while 74% of Americans gave the police a fair or better rating at treating racial and ethnic groups equally.
5. Confidence in the Army
The higher level of confidence in the police may be related to its role in fighting the insurgency. The Afghan National Army (ANA) similarly garners a high level of confidence. Using a composite measure again, 86.5% of Afghans agree strongly or agree somewhat with all three of the following statements: the ANA (a) is honest and fair, (b) improves security, and (c) protects civilians. The most recent 2014 Gallup Poll found 74% of Americans have a great deal or quite a lot of confidence in the US military.
6. Confidence in the Legislature
In 2013, the US Congress’s ratings hit record lows; a Public Policy Polling survey found that Americans had higher opinions of root canals, head lice, traffic jams, cockroaches, and Nickelback. Gallup polls that year found that 10% of Americans had a great deal or quite a lot of confidence in Congress. This number dipped to 7% in 2014. Afghanistan cleanly clears this low bar, with 12% reporting a lot of confidence in parliament as a whole.
7. Fair Elections
Before the announcement of the run-off election results and the subsequent allegations of sheep stuffing and disputes over unity governments, the Afghan people were quite bullish on the elections. When asked about them, 63% responded that they were in general free and fair. If you ask likely American voters, only 40% think elections are fair to voters. Given what has happened since, the US may have taken back the lead on fair-election perceptions, but that’s still not something to brag about.