% Data Hazards Self-Assessment Template
%
% https://very-good-science.github.io/data-hazards
%
% GUIDANCE FOR USE
%
% The aim is to go through each label and give some reasoning on whether or not it applies.
% If it does apply, you should say what safety precautions you plan to take.
% These do not have to be those listed with the Hazard, and will depend on your project.
% If you use landscape then you can preview the page in landscape on Overleaf:
% Menu > PDF Viewer > Browser and then click the rotate button to change orientation
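% For example, a completed row might look like the following (the reasoning and
% precautions below are illustrative placeholders, not recommendations):
%
% \includegraphics[width=0.18\textwidth]{reinforce-bias.png} &
% Reinforces existing biases &
% Applies: our training data under-represents some demographic groups,
% so outputs may reflect that imbalance. &
% We report per-group performance and document known data gaps alongside
% the released model. \\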
\documentclass[fleqn,10pt]{olplainarticle}
\usepackage{csquotes} % For biblatex formatting
\usepackage{longtable}
\usepackage{lscape} % For landscape orientation
\usepackage[none]{hyphenat} % For not splitting words
\addbibresource{bibliography.bib} % Bibliography file
\graphicspath{ {./images/} } % Image path
\raggedright
\title{Data Hazards Self-Assessment: Project Name}
\author[1]{First Author}
\author[2]{Second Author}
\affil[1]{Address of first author}
\affil[2]{Address of second author}
\begin{document}
%\begin{landscape} % Option in case you would prefer landscape orientation
\maketitle
\thispagestyle{empty}
\section*{Project Overview}
This is a brief introduction to your project.
It might include links to places where more information about the project is available.
You could also reference some other work here \cite{exampleref}.
\section*{Data Hazards Assessment}
\begin{longtable}[l]{p{0.18\textwidth}p{0.18\textwidth}p{0.28\textwidth}p{0.28\textwidth}} % Widths leave room for inter-column space so the table fits \textwidth
\hline &
Hazard & % The Hazard Label being considered
Reasoning & % Your view on whether that Hazard applies to your project
Safety Precautions \\ % Safety precautions you are taking if it applies to your project
\hline
\includegraphics[width=0.18\textwidth]{general-hazard.png} &
General data hazard &
% Data Science is being used in this output, and any negative outcomes of using this work are not the fault of “the algorithm” or “the software”. This hazard applies to all Data Science research outputs.
&
\\
\includegraphics[width=0.18\textwidth]{reinforce-bias.png} &
Reinforces existing biases &
% Reinforces unfair treatment of individuals and groups. This may be due to, for example, input data, algorithm or software design choices, or society at large. Note: this is a hazard in its own right, even if it isn't then used to harm people directly, e.g. by reinforcing stereotypes.
&
\\
\includegraphics[width=0.18\textwidth]{classifies-people.png} &
Ranks or classifies people &
% Ranking and classifications of people are hazards in their own right and should be handled with care. To see why, we can think about what happens when the ranking/classification is inaccurate, when people disagree with how they are ranked/classified, as well as who the ranking/classification is and is not working for, how it can be gamed, and what it is used to justify or explain.
&
\\
\includegraphics[width=0.18\textwidth]{environment.png} &
High environmental cost &
% This hazard is appropriate where methodologies are energy-hungry, data-hungry (requiring more and more computation), or require special hardware that relies on rare materials.
&
\\
\includegraphics[width=0.18\textwidth]{lacks-community.png} &
Lacks community involvement &
% This applies when technology is being produced without input from the community it is supposed to serve.
&
\\
\includegraphics[width=0.18\textwidth]{misuse.png} &
Danger of misuse &
% There is a danger of misusing the algorithm, technology, or data collected as part of this work.
&
\\
\includegraphics[width=0.18\textwidth]{difficult-to-understand.png} &
Difficult to understand &
% There is a danger that the technology is difficult to understand. This could be because the technology itself is hard to interpret (e.g. neural nets), or because of problems with its implementation (e.g. code is not provided, or not documented). Depending on the circumstances of its use, this could mean that incorrect results are hard to identify, or that the technology is inaccessible to people (difficult to implement or use).
&
\\
\includegraphics[width=0.18\textwidth]{direct-harm.png} & % Filename assumed to match this hazard; the original repeated automates-decision-making.png here. Check against your images/ folder.
May cause direct harm &
% The application area of this technology means that it is capable of causing direct physical or psychological harm to someone even if used correctly, e.g. in healthcare or driverless vehicles, where anything short of 100% accuracy may be expected to directly harm someone.
&
\\
\includegraphics[width=0.18\textwidth]{privacy.png} & % Filename assumed to match this hazard; the original repeated automates-decision-making.png here. Check against your images/ folder.
Privacy &
% This technology may risk the privacy of individuals whose data is processed by it.
&
\\
\includegraphics[width=0.18\textwidth]{automates-decision-making.png} &
Automates decision making &
% Automated decision making can be hazardous for a number of reasons, and these will be highly dependent on the field in which it is being applied. We should ask ourselves whose decisions are being automated, what automation can bring to the process, and who benefits from or is harmed by this automation.
&
\\
\includegraphics[width=0.18\textwidth]{lacks-informed-consent.png} & % Filename assumed to match this hazard; the original repeated automates-decision-making.png here. Check against your images/ folder.
Lacks informed consent &
% This hazard applies to datasets or algorithms that use data which has not been provided with the explicit consent of the data owner/creator. This data often lacks other contextual information which can also make it difficult to understand how the dataset may be biased.
&
\\ \bottomrule
\end{longtable}
\vspace{15em}
\printbibliography
%\end{landscape} % Option in case you would prefer landscape orientation
\end{document}