
The Berkeley Protocol on Open Source Investigations

UC Berkeley’s Human Rights Center is leading the way in the ethical use of open-source methods for investigating war crimes and human rights violations.


Human rights investigations increasingly rely upon open-source intelligence (OSINT) to identify, document, and verify human rights atrocities. These open sources—such as publicly available Facebook posts, YouTube videos, and tweets—provide important information about human rights violations and perpetrators. However, the process of analyzing, verifying, and corroborating these sources to support legal accountability is time-consuming and requires expertise. Additionally, there is currently no international standard for using open-source investigations for legal accountability.

The UC Berkeley Human Rights Center’s forthcoming Berkeley Protocol on Open Source Investigations—the first-ever manual on the effective use of open-source information in international criminal and human rights investigations—seeks to set that standard. Tentatively set to be published in partnership with the United Nations in Fall 2020, the Protocol details measures that should be taken to check the authenticity and reliability of visual content, such as videos and photos, that emerges out of crises around the world. The Protocol will be available in Arabic, Spanish, and French in order to reach a wider audience and to ensure its adaptability to local contexts. The Berkeley Protocol is expected to launch in all four languages in early September, in Berkeley and Geneva.

The Protocol’s development was supported in part by a 2017-2018 Matrix Project Team, which brought together scholars from law, journalism, public policy, and public health, among other fields, for a series of workshops on how courts have successfully used open sources to improve the outcomes of their cases, and on whether a protocol might be necessary or helpful for the field. The project team also hosted workshops in London and The Hague on a variety of topics, including the ethical and legal challenges of using sock puppets (fake online identities) on Facebook.

We interviewed Dr. Alexa Koenig, executive director of the Human Rights Center at Berkeley Law and a pioneer in shaping the Berkeley Protocol. (Note that this interview has been edited for length and clarity.)

What is an “open-source investigation” in the context of international criminal and human rights law?

It is an investigation into any big research question using open-source information, which is available to the public usually by at least one of three means: observation, request, or purchase. What the Human Rights Center (HRC) is pioneering is how to use new and emerging digital technologies to help us understand social phenomena. Our focus is primarily on war crimes and human rights-related issues.

What are some examples of the investigations the HRC has conducted?

A longstanding focus has been the Syrian conflict, as people in Syria are trying to get information out to their broader communities, and to the world more generally. Open-source investigation has changed how digital information shared online can be used to build war crimes cases and to bring attention to people in humanitarian crises.

We have worked to find information online to understand the “who, what, where, when, and how” of different attacks, and to verify content provided to us by our partner organizations, such as the Syrian Archive and Amnesty International. When we verify, we are interrogating the information that is being shared with us to check for its reliability. For example, if we receive a video and someone is claiming it took place in Idlib on a particular date, how do we in fact know that such claims are accurate? We have investigated and published on events ranging from chemical attacks to bombings of hospitals to the destruction of civilian facilities and infrastructure.

Myanmar is another longstanding crisis where we have been trying to support fact-finding. In 2016/17, when we started our Investigations Lab, we began grabbing content from Facebook that was being shared by the military in Myanmar—content that is now characterized as hate speech by human rights lawyers and increasingly recognized by the UN and others to have potentially contributed to genocide. We began collecting this content prior to some of the worst attacks, including those in Summer 2017, and we had information leading up to those attacks (and after). This contributed to a series of stories published by Steve Stecklow of Reuters that ended up winning the Pulitzer Prize.

A lot of the work we do focuses on supporting three constituencies: 1) human rights researchers and activists who are trying to get information out through organizations like Amnesty International, Human Rights Watch, Physicians for Human Rights, and the Syrian Archive; 2) human rights lawyers who are trying to bring information that will strengthen the evidentiary foundations of the legal cases they are bringing forth; and 3) investigative reporters who are trying to report information out to the world about what is happening.

Increasingly, we are beginning to look at how open-source fact-finding can support telling the climate crisis story and the impact of climate change on populations around the world, whether documenting the destruction of forests or the shrinking of lakes and rivers through satellite imagery. The human rights community is realizing that we cannot talk about human rights violations without understanding the role that climate justice plays in human rights phenomena across the globe.

What is the value or impact of having this type of investigation housed in a university environment, and what has been UC Berkeley’s contribution to this field?

What UC Berkeley can offer the world is the extraordinary talent and dedication present amongst our students. When we were considering launching our Investigations Lab in 2015/16, we recognized that a lot of this work is time-consuming, whether coming up with structured research questions, deciding what technologies should be used for different conflict areas, or doing the painstaking work of verification itself. A task as simple as geolocating where a video was captured can take hours, if not days, to complete. Additionally, a lot of our non-profit partners are strapped for cash and human resources. The question became, how can we match the extraordinary international cohorts of students, who are rich in background, culture, and language skills, with those organizations? It also served the pedagogical purpose of training students at Berkeley in the newest methods of fact-finding for human rights and social justice more generally.

What have been the most difficult dilemmas the HRC, or this field as a whole, has grappled with, both ethically and in terms of psycho-social resilience for the investigators?

One of the biggest questions for us has been whether we should be exposing students to graphic material, and what boundaries should be placed around that. One of our advisory board members, who was a war reporter, approached us and said, “This is an incredible program, and I am blown away by what’s happening at Berkeley, but how are you going to keep the students safe?” Ultimately, we were working with students who don’t necessarily have the on-the-ground experience or resources to deal with the psycho-social aspect of this work.

Sam Dubberley, who started the Digital Verification Corps at Amnesty International and was our partner from the outset, had been observing the impact of user-generated content on journalistic communities for a long time, and conducted some of the first research to examine the toll dealing with such raw material was taking even on professionals in this space. We began thinking through safeguards we could put in place to support the students, and I credit the students in the Lab who have helped refine those safeguards. The work the Berkeley students are doing is helping us all understand what works and what doesn’t, and is now informing global practice.

There’s an assumption by many human rights workers that when you are remote from a crisis, you are protected from it. Yes, you may be physically protected from the violence happening on the ground, but when you are dealing with the sheer volume of imagery before you, there’s an intimacy to watching digital content about crises on your laptop for hours on end. You also don’t have the same degree of community you might have when you’re on the ground. As a result of the distance, you are not seeing the immediate impact of your work, and we’ve learned that understanding your impact goes a long way toward giving you the strength and thoughtfulness to keep doing this work.

Lastly, when we were launching our first international training on open-source investigation, there was some reluctance by our partners toward including a resiliency section in our training. The implication was that these are trained and tough war crimes investigators, and discussions around resiliency would be “fluffy.” I remember saying, “over my dead body are we not going to have a training on this.” There’s a tangible role that Berkeley plays in thinking through the psycho-social aspects of this work and preparing a generation to do this work on a long-term basis. What we have seen historically is people burning out, resorting to alcohol or other dangerous coping mechanisms, so we wanted to figure out how the next generation will tackle these issues head on.

It’s not about being “mentally weak,” it’s about thinking through the physical, digital, and psycho-social work together. We really can’t talk about the physical and digital security for ourselves and partners unless we are dealing with the psycho-social security of that entire ecosystem. As soon as one of those three pillars crumbles, the entire scope is compromised and mistakes will be made.

The first protocol on open-source investigations is emerging thanks to your Center’s work. What was the process from which this protocol emerged? What were the key issues or demands that inspired its creation?

We began working in this space back in 2011/12, when we engaged with the International Criminal Court to look at how digital technologies could supplement what witnesses were saying in the courtroom. The courts were struggling to bring cases to fruition because the judges were saying the stories of survivors were not enough to meet the legal requirements to have cases proceed. We began holding a series of workshops with people from other court systems—and people who were experimenting with new technologies for fact-finding—to determine what could be brought into the courtroom to support what survivors were saying was happening in their communities. This included remote-sensing technologies such as satellite imagery, people using big data analytics to look for patterns of conflict, and people who were stretching the boundaries of what could be captured on cell phones.

We worked with Witness, which trains people and communities on how to document crises with phones to produce video evidence that can be successful in a court of law. We’ve also worked closely with groups like the Syrian Archive, which is aggregating content that is shared online, to think through how war crimes investigators might probe that information to find the data that is most helpful in building and strengthening their cases.

After four or five years, we began receiving calls from people on the ground in Iraq and Bangladesh, for example, asking us how they should be storing the information they were collecting to meet forensic standards. There were a lot of amazing researchers, like Lindsay Freeman and Tara Vassefi, who helped think through what exists and what’s missing. What we found, after consulting with more than 150 different people and holding multiple workshops, was that there really wasn’t clear guidance for fact-finding using digital technologies.

What is the goal of the Berkeley Protocol? How can it help address the dilemmas you highlighted?

We began holding a series of workshops to think through thorny questions where there wasn’t a clear understanding about what courts were going to require. We began doing a lot of doctrinal and social-science research on what other people were talking about regarding how digital technologies could help. Lindsay Freeman, whom we brought on in early 2018, began looking at law enforcement, domestic courts, international courts, human rights organizations, and technologists to really find out what was coming together and what was missing.

What we heard over and over was that it would be helpful to have a set of guidelines. The trick was to create a set of global guidelines that would not be so restrictive that people “on the ground” — in low-bandwidth or low-resource environments — would be unable to meet those standards. We aimed for a minimum set of standards for how to do this work and do it really well. We provide the bare minimum that you need to preserve chain of custody around the content so we know how likely it is to have been tampered with since capture, and we provide best practices for when resources are available. We tried to make it less prescriptive, so those doing this work in less privileged positions weren’t excluded.

We also realized that a lot of people did not know how to assess the quality of the investigations that were being pulled together. Often, visual investigations are produced that seem very sexy and convincing, bringing together satellite imagery with chronolocation and geolocation. People who are intimidated by those or do not know how they work are at risk of two things: 1) they are overly convinced by the research without having healthy skepticism; or 2) they are unsure how to evaluate that work, dismiss it completely, and do not consider it a valuable source of information. If the viewer is a judge, we don’t want them to be doing either of those two things, because it will either be overly prejudicial, or all that work and time will be wasted and not weighted into the fact-finding and analysis. We wanted to provide guidance for the lawyers on the ground, as well as the judges, researchers, investigators, journalists, and human rights activists who are increasingly using these methods.
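The chain-of-custody minimum described above can be made concrete with a short sketch. The following Python snippet is purely illustrative and not part of the Protocol itself; the function and field names are invented for this example. The idea is simply to fix a piece of content at the moment of collection by recording a cryptographic hash alongside provenance metadata, so that any later tampering with the bytes can be detected.

```python
import hashlib
import json
from datetime import datetime, timezone

def custody_record(content: bytes, source_url: str, collector: str) -> dict:
    """Build a minimal chain-of-custody entry for a piece of digital content.

    The SHA-256 digest lets anyone later confirm the bytes are unchanged
    since collection; the timestamp and collector field record when and by
    whom the item entered the evidence log.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source_url": source_url,
        "collector": collector,
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: log a downloaded item (content shown inline for brevity)
record = custody_record(b"hello", "https://example.org/video.mp4", "analyst-01")
print(json.dumps(record, indent=2))
```

Verification later is just recomputing the digest over the stored bytes and comparing it to the logged value; a mismatch means the content changed after capture.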

How do you see the Protocol being implemented by organizations and universities who engage with this field of investigation?

I’m hoping it’ll be a resource for people to be able to flip through particular sections. It serves as a helpful guide in providing tips about the best way to carry out any particular portion of an investigation.

The first audience we focused on is legal investigators, as the Protocol helps them understand what judges are likely to want to see — practices that can help ensure the information makes it into a court of law. But we expect it to be helpful for reporters who are trying to grab information and figure out what to download and how to download it, as well as human rights advocacy groups who are trying to get information out, but are open to having the information they collect potentially go to the courts.

One risk with digital information is that it is ephemeral. If content is graphic, social media companies are likely to take it down or make it inaccessible. The question is, how do we ensure that the precious piece of content that somebody may have risked their life to put online is held in a way that can have maximum impact down the line?

The next piece is figuring out how national jurisdictions can adapt and adopt some of the findings and research that went into the Protocol. I am looking forward to having regional workshops so we can get feedback on future iterations of the Protocol, but also to help translate the protocol into local contexts. The spread of smartphones and digital technology is only going to continue, so jurisdictions that aren’t grappling with the questions the Protocol aims to answer will probably be facing them in the near future.

Five years ago, we were told that social media content would never become primary evidence in courtrooms. In the case of Al-Werfalli, a Libyan warlord, the International Criminal Court recently issued its first arrest warrant based primarily on information pulled from Facebook. There’s a dawning realization that this kind of information will have more utility in the future than was previously recognized.

Part of the Protocol focuses on “resiliency,” ensuring the long-term mental health of the researchers. How do you think about resiliency in this context, and why is it featured so prominently?

Resiliency is something everyone thinks is important, but almost no one dares talk about. The attitude for a long time has been, either you can handle this work or you can’t, and if you can’t, then get out. Reacting negatively to egregious material is a very human response, and there are things we can do to prepare ourselves and better respond when we do react. There are some who are very grateful that we are shining a spotlight on some of these issues, while others have rolled their eyes until they hear us talk about it in the context of security, where they see its value.

What’s exciting for UC Berkeley is that the students are helping to bring some weight into this space and reinforce each other in sharing the importance of this work. It’s very hard when you’re new in your job alongside much more experienced colleagues who may not share the same understanding of the value of this work. We are trying to support the students by putting as much out there publicly as we can, so that it echoes what they are saying internally in their fields.

From a sociological perspective, when we talk about social change, we are thinking about having a focus, first, at the individual level so each person has the skills and knowledge—for example, about the value of community to mental wellbeing—to be equipped to do this work. Though we have a long way to go, the more we can drive that cultural change, the better. At the structural level, we are thinking about a template for a resiliency plan that could be shared and adapted by those conducting their investigations in the hope that planning for resiliency will become as important as surveying the digital landscape before an investigation.

To what extent did social science research methodologies and frameworks go into the Protocol?

There are three ways in which social science research went into the Protocol. The first relates to the idea of resiliency. There has been research at Columbia University by lawyers and psychologists interested in tracking people’s reactions and responses to the sheer volume of graphic and sensitive content that they watch. Additional research has been done by individuals such as Michele Grant and Sam Dubberley, who founded the Digital Verification Corps at Amnesty International, on vicarious trauma as it pertains to the field of human rights.

The second way is with respect to questions of discrimination and bias that can enter the field of open-source investigations. Swansea University, alongside researchers from Essex University and UC Berkeley, is involved in a consortium dedicated to understanding how digital content capturing crises can bias which narratives and stories are heard and reported on, particularly with respect to age, gender, and geography. We have been thinking through questions of access with respect to demography — that is, who has access to social media, the internet, and the technologies needed to capture or investigate violations as they emerge.

The third piece relates to sexual and gender-based violence, and understanding how digital investigations can help address these highly sensitive and stigmatized issues without feeding into the stigmatization. An important piece of this is the ethical consideration of which digital information can be shared with courts, for example, without risking, exposing, or exploiting the vulnerability of the victims in these cases.

What limitations, if any, or areas for future discussions and proposals do you see with the Protocol?

One question we debated is to what extent we should make the Protocol tool-agnostic, as opposed to bringing in specific tools of investigation — tools to archive, chronolocate, geolocate, and more broadly verify open-source content. Of course, a lot of people on the ground want to know what tools they should be using to preserve the chain of custody, for example. There are amazing tools out there, like Hunchly, for preserving that content, but protocols that name specific tools and explain how to use them become outdated really fast. The hardest part has been providing guidance based on principles and concepts, rather than on the tools.

As a center, we have always had a deep dedication to the resiliency and ethics piece of this work, but there were questions about whether they should be standalone sections or how we should integrate them into the overall piece. This is an area where feedback from those using the Protocol will be helpful.

Image: From the cover of The New Forensics: Using Open Source Information to Investigate Grave Crimes. Credit: Eliot Higgins, Bellingcat
