This article is the first featured in our Spring 2019 journal.
By Andreas Sampson Geroski
Edited by: Bryce Liedtke, Nandita Sampath, Mai Sistla
Data privacy has become an increasingly important policy area as personal information becomes a major currency in developing goods and services. With the heightened threat of cybersecurity breaches, abuses of targeted content delivery, and unintended consequences of machine learning applications looming over our increasingly information-based economy, it is crucial for policymakers to consider limits on future uses of personal data. The rapid increase in the use of at-home genetic testing and, more specifically, the potential misuses of genetic data raise concerns over privacy and questions about the potential exploitation of personal information. In addition to discussing the potential misuse of genetic data by private companies and government agencies, this paper suggests policy recommendations to improve individual legal protections.
The Use and Misuse of Genetic Data
How concerned should we be about the data companies collect on us? This question was behind a Guardian and New York Times interview last year with whistle-blower Christopher Wylie, who helped expose the illegal collection of Facebook data and subsequent voter manipulation by political consulting firm Cambridge Analytica during the 2016 election cycle [1]. Cambridge Analytica harvested information from more than 50 million Facebook profiles by promoting online personality tests [2]. Using this data, the firm psychologically profiled users and targeted them with personalized advertisements, providing its clients with a powerful tool with which to influence voters [3]. It grouped users displaying similar online behavior together and preyed on their concerns. As Wylie explained, “we exploited Facebook to harvest millions of people’s profiles and built models to exploit what we knew about them and target their inner demons” [4].
However, it is not just online that users inadvertently supply companies with huge amounts of personal data. Many people are interested in learning about their family ancestry and genetic profile. In response to this demand, companies like 23andMe have developed at-home genetic tests which claim to trace customers’ genetic ancestry and identify disease predispositions [5]. Customers submit a saliva sample, which is tested and compared against genetic variants from people around the world, including those with particular illnesses [6]. Ancestry companies like 23andMe report these analyses to customers, detailing their heritage and susceptibility to specified ailments.
Scientifically, however, genetic testing for ethnicity is largely arbitrary. The technology has been dismissed as “genetic astrology” by genetics expert Professor Mark Thomas [7]. The results depend on company databases, which are perennially updated and refined, underscoring the ongoing ambiguity in their precision [8]. More broadly, nationalities neither change nor define genetics. Being identified as 12.5 percent Senegalese, for example, is less concrete than such a figure suggests: it implies only that one shares some genetic characteristics with current Senegalese residents, and it ignores the fact that people have migrated to Senegal from all over the world in previous generations. Such results also gloss over the findings of the Human Genome Project, which, as Bill Clinton summarized, showed that “in genetic terms, all human beings, regardless of race, are more than 99.9 percent the same” [9].
However, this is not how ancestry companies make their money. The lucrative part of their business is selling the data they collect from genetic tests. As a 23andMe board member explained in 2013, “The long game here is not to make money selling kits. … Once you have the data, [23andMe] does actually become the Google of personalized health care” [10]. As of 2017, 23andMe had sold database access to 13 pharmaceutical firms [11]. Ancestry companies’ ability to collect and sell genetic data is at the heart of their business models — the fewer restrictions on user privacy, the more they can potentially profit. Using this data, pharmaceutical companies are researching, amongst other things, how genetic differences can make drug treatments more effective, and what genetic codes predispose people to diseases like Parkinson’s. The breast cancer drug Herceptin provides an example of how this personalized medicine works: the drug controls the growth of a protein (HER-2), but is only effective in women with specific genetic traits, prevalent in 25-30 percent of breast cancer patients [12]. While this is undeniably a positive development, the current business model for personalized medicine exists in an ethical grey area which some might view as exploitative. Companies are developing applications and profiting from genetic data that customers unwittingly handed over when paying for a wholly different and unrelated service. The failure to properly inform, let alone compensate, users whose genetic data is used for pharmaceutical research and development raises questions about consent and confidentiality. Data privacy is not an abstract concept; in the example of personalized medicine, ownership of genetic data has profound implications, from what drugs are developed to who profits from them.
Genetic Discrimination
Publicly-available genetic data has a variety of potential ramifications. In May 2018, California law enforcement announced they had tracked down the “Golden State Killer” — a notorious serial killer accused of 13 homicides — using a publicly-available genetic database to identify the culprit [13]. Police arrested suspect Joseph James DeAngelo after linking DNA taken at the crime scene to the DNA of a relative who used the do-it-yourself genealogy website GEDmatch [14]. The case shows that when family members make their genetic data public, they effectively make their relatives’ DNA public as well. A network effect occurs, much as logging into Facebook gives companies information about other users and even non-users.
While there can be clear benefits to publicly-available genetic databases, there can also be drawbacks. Predictive policing illustrates genetic data’s potential for abuse. Police forces across the world increasingly use predictive policing, in which data collected on crimes and on people who come into contact with the police is used to predict where, and by whom, future crimes may be committed [15]. As with any model, this method is only as good as the data that goes in; its output will reflect any assumptions or biases in its programming and data collection. In the case of predictive policing, excessive policing of minority communities produces statistically biased crime data, which means that these groups are identified as more likely to commit crimes [16]. This creates a vicious cycle in which minorities are targeted even more because of attributes such as the area they live in or the school they attended, even if they have not committed a crime [17]. ProPublica has extensively documented how people entering the criminal justice system are given “risk-based assessments,” in which software predicts their likelihood of committing a future crime; this score is often used in sentencing or parole decisions [18]. Minority groups consistently receive higher risk assessment scores than their white peers, even when comparing whites who have been in prison to minorities with no criminal records [19]. These biases would become even more entrenched if predictive programs were to start including genetic information collected through swabs taken from individuals going through the criminal justice system. Racial minorities are disproportionately represented in genetic databases because of their disproportionate interactions with the criminal justice system, so they are far more likely to be the victims of a false positive when genetic data is erroneously matched. The use of genetic data in algorithms like those used in predictive policing could therefore result in a greater number of false criminal accusations against minority groups.
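To make this feedback loop concrete, the sketch below is a minimal toy simulation, not a model of any actual department’s software; the neighborhood names and all numbers are illustrative assumptions. Two neighborhoods have an identical true crime rate, but one begins with more recorded arrests due to historical over-policing, and patrols are then allocated in proportion to recorded arrests.

```python
import random

# Toy feedback-loop sketch (illustrative assumptions throughout):
# both neighborhoods share the SAME true crime rate, but "B" starts
# with more *recorded* arrests because it was policed more heavily.
random.seed(0)

TRUE_CRIME_RATE = 0.05                  # identical in A and B
ENCOUNTERS_PER_PATROL = 20              # observations a patrol makes per year
TOTAL_PATROLS = 100
recorded_arrests = {"A": 10, "B": 30}   # initial disparity from past policing

for year in range(1, 11):
    total = sum(recorded_arrests.values())
    for hood in recorded_arrests:
        # "Predictive" allocation: patrols proportional to past recorded arrests.
        patrols = round(TOTAL_PATROLS * recorded_arrests[hood] / total)
        # Crime is only *recorded* where officers are present to observe it.
        trials = patrols * ENCOUNTERS_PER_PATROL
        new_arrests = sum(random.random() < TRUE_CRIME_RATE for _ in range(trials))
        recorded_arrests[hood] += new_arrests
    share_b = recorded_arrests["B"] / sum(recorded_arrests.values())
    print(f"Year {year}: B's share of recorded arrests = {share_b:.0%}")
```

Even though the underlying crime rates are identical, neighborhood B’s roughly 75 percent share of recorded arrests persists year after year: the model’s predictions “validate” themselves, because arrests can only be recorded where officers are sent, and the initial disparity never washes out.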
In Weapons of Math Destruction, Cathy O’Neil details how software now determines health scores, credit reports, and loan approvals, often using suspect measurements and biased or prejudiced assumptions [20]. The problem is not just the widespread use of unaccountable algorithms, but also that scientific legitimacy is not a prerequisite for data being incorporated into them. In Scientific American, journalist Charles Seife explains that 23andMe is now “sifting through its genomic database, which is combined with information that volunteers submit about themselves, to find possible genetic links to people’s traits” [21]. There is a clear link between this and the potential misuse of genetic data in algorithms. Suppose, for instance, that an ancestry company “finds” a genetic variant linked to a predisposition towards violent behavior. The company could present this link between a certain gene and violence as “fact,” much as ancestry companies present identity as a “fact” by reducing it to arbitrary numbers and percentages. Whether someone has this gene could then be incorporated into algorithms used in predictive policing. This is genetic stereotyping: people born with certain genes would be treated more harshly by the criminal justice system, irrespective of their actions. Considerations of how society, law, and policing affect levels of violence could be ignored, and social problems reduced to the level of the individual. Such genetic stereotyping could be used to justify prejudice in determining credit scores, approval for mortgages, and any number of life-changing decisions.
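To see how such genetic stereotyping would operate mechanically, consider a hypothetical risk-scoring function of the kind described above; the weights, the features, and the flagged “violence gene” itself are invented for illustration and correspond to no real system or scientific finding.

```python
def risk_score(prior_arrests: int, age: int, has_flagged_gene: bool) -> float:
    """Hypothetical risk score; every weight here is an illustrative assumption."""
    score = 2.0 * prior_arrests + (1.0 if age < 25 else 0.0)
    if has_flagged_gene:
        # A fixed penalty for an immutable attribute no behavior can change.
        score += 5.0
    return score

# Two people with identical records and behavior receive different scores:
print(risk_score(prior_arrests=0, age=30, has_flagged_gene=False))  # 0.0
print(risk_score(prior_arrests=0, age=30, has_flagged_gene=True))   # 5.0
```

Because the genetic flag is fixed at birth, no change in behavior can ever lower the carrier’s score, which is precisely what distinguishes genetic stereotyping from even the flawed behavior-based inputs discussed above.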
Data-driven profiling is also increasingly used to surveil and track marginalized communities, especially the poor and unhoused. In Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks explains how automated systems, algorithms, and data analytics are increasingly used in social services to assign “risk assessment” scores, which are then used to make decisions about eligibility for services. Often, this requires collecting information through invasive surveys. Sensitive information, such as social security numbers or an individual’s account of surviving sexual assault, can then be stored in databases accessible to police, academic researchers, and public agencies unconnected to the services being sought. While people are asked for their consent, they are sometimes unable to access social services if they do not provide this information. As Eubanks argues, “if the old surveillance was an eye in the sky, the new surveillance is a spider in a digital web, testing each connected strand for suspicious vibrations” [22]. Seen in this context, genetic data could become another component of social service risk assessments, used to automate decisions about the services and housing people can access. Given that our genetic data is intrinsic to us and cannot be changed, it has the potential to lock people out of services entirely, regardless of their actions.
The prospect of genetic information being fed into such algorithms is terrifying, and algorithmic social control of this kind is already taking shape elsewhere in the world. The Chinese government, for example, is piloting various versions of a Social Credit System (SCS) to improve “citizen trustworthiness,” in which citizens are assigned a score based on various “good” or “bad” actions [23]. Citizens can lose points for committing traffic violations, failing to pay taxes or bills on time (regardless of context), or engaging in “undesired” social behaviors like playing video games for too long. They can earn points by saying complimentary things about the government online, performing “heroic” acts, or buying diapers for their infants. In some variations, the actions of one’s friends also affect one’s score [24]. A high score brings perks like discounts on energy bills, access to better insurance plans, and permission to travel abroad without supporting documents. A low score can restrict access to fast internet speeds, bar people from applying for certain jobs, and prevent them from using public transportation. Since the system’s inception, 1.65 million Chinese people have been barred from using trains because of their social credit scores [25]. If genetic data were included in the algorithms used by the SCS, then someone’s genetics could result in systematic discrimination and exclusion in daily life. Governments could associate certain genes with undesirable behaviors and tweak algorithms to ensure that certain groups of people always receive a lower score.
Disappearing Legal Protections
The concerns with the widespread use of genetic data go beyond discrimination. Peter Pitts of the Center for Medicine in the Public Interest explains how:
“one MIT scientist was able to identify the people behind five supposedly anonymous genetic samples randomly selected from a public research database. It took him less than a day. Likewise, a Harvard Medical School professor dug up the identities of over 80% of the samples housed in his school’s genetic database. Privacy protections can be broken. Indeed, no less than Linda Avey, a co-founder of 23andMe, has explicitly admitted that ‘it’s a fallacy to think that genomic data can be fully anonymized’” [26].
In short, anonymity isn’t guaranteed when submitting your genetic data — it could still be used to identify you without your knowledge. It is unlikely that people would be as willing to use genetic tests if they knew the data could be used by a third party to identify them.
With companies such as Adidas, Sears, and Delta having had customer information stolen by hackers [27], there is a real concern that genetic information stored in companies’ databases could likewise be hacked and members of the public subsequently identified from the stolen data. Moreover, as the Cambridge Analytica scandal has shown, companies like Facebook can, either carelessly or deliberately, break their own privacy terms and conditions. The Federal Trade Commission has already warned consumers to consider the privacy implications of home genetic tests, saying “although most tests require just a swab of the cheek, that tiny sample can disclose the biological building blocks of what makes you you” [28]. This is not like having credit card details stolen, where you can be issued a new card and recover stolen money through insurance. Once your genetic information is public, it stays that way.
To an extent, there are legal protections in place for genetic data. In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) allows medical companies to share genetic data as long as it has been anonymized [29]. Current privacy protections also require companies to separate customers’ names and samples before the samples are sent for testing, to receive consent before sharing information with other companies, and to offer consumers the option of having their data discarded after the test. However, this legislation was passed in 1996, before the widespread use of at-home genetic testing [30]. Comparing the privacy policies of different ancestry companies, journalist Lydia Ramsey has shown how vulnerable genetic data can be. Results are often sent for testing to laboratories run by different companies, each of which may have varying privacy standards and policies. Test results can be kept for up to 10 years, and data can be transferred between companies [31]. For instance, Family Tree DNA’s privacy statement notes that if the company were acquired, “personal information, including test results, will, as a matter of course, be one of the transferred assets.” In other words, customers are not in control of who owns their data [32].
These concerns are even more pressing given that, in the United States, legal protections against genetic discrimination are incomplete. In 2008, President Bush signed the Genetic Information Nondiscrimination Act (GINA), which created federal protections against genetic discrimination. It prohibited health insurers and employers from asking for genetic information, and made it illegal to force people to take a genetic test or to exclude people from services based on their genetic information [33]. The law, however, does not cover schools, life insurance, or housing, leading to situations where, for example, “a 36-year-old woman’s life insurance application was denied because her medical records noted she had tested positive for the BRCA1 breast cancer gene” [34]. There is a concern that rental or mortgage applications will start asking for genetic information, with the aim of identifying and rejecting people who may have genetic predispositions to diseases like Alzheimer’s that could raise their healthcare costs and therefore undermine their ability to pay. It is not currently clear whether this type of discrimination is illegal [35]. Most worryingly, “there are virtually no prohibitions on the subsequent uses and disclosures of the [genetic] information,” meaning that not only are people vulnerable to discrimination, but their privacy is at risk [36].
In 2011, California passed a stronger law, CalGINA, which prohibits genetic discrimination across a wider range of areas, including housing. CalGINA also removes the financial cap on penalties for violations, increasing the incentive for compliance [37]. The law works by amending a host of other codes, such as the Education and Elections Codes, to include genetic non-discrimination everywhere existing anti-discrimination laws apply [38]. Despite this, there are ongoing proposals that critics argue would weaken GINA at the federal level. A House committee has approved a bill, H.R. 1313, that would let companies genetically test employees as part of the Affordable Care Act’s workplace wellness programs. These are voluntary health schemes meant to help promote a healthy workforce; employees who participate are often offered lower health insurance premiums [39]. While supporters of the bill argue that the schemes are voluntary and that people remain protected under GINA [40], it potentially puts employees in a catch-22 of having to disclose genetic information to lower their healthcare costs. Employers argue that knowing someone’s predisposition to diseases like Alzheimer’s allows them to tailor wellness programs to help prevent those diseases, but it still leaves employees in a vulnerable position: an employer could fabricate a reason for dismissal if it knows an employee is likely to develop a condition that is expensive to treat.
Those with minimal or no legal protections are the most vulnerable to the misuse of genetic information. The U.S. government already uses DNA testing for visa applications, where tests verify biological relationships to determine whether applicants can join their families in the United States [41]. As already detailed, this genetic information is not always secure, and no provisions are in place to protect people from discrimination if these data were used nefariously in the future. The testing also creates further privacy violations. Rachel Swarns details how these tests have led to family members being denied visas after being found not to be biologically related, sometimes because of affairs or sexual assaults that the mother had never disclosed [42]. The pain and harm these situations cause families who have lived together their whole lives are profound. Resorting to genetic tests to prove family bonds is a blunt and often meaningless tool. Ancestry companies have also recently offered to use genetic testing to help reunite families separated at the U.S.-Mexico border; given the limited legal protections these families already face, however, the use of DNA tests might make a horrible situation even worse [43].
Conclusions and Policy Recommendations
The sensitivities around the use of DNA testing for visa applications show the challenges in creating policies to protect our genetic data; these issues are nuanced, and the costs and benefits of using genetic data are not always clear-cut. Protecting our genetic privacy will require policies that address legal and political concerns in both the private and public sectors. While further debate is needed about the general use of genetic data, however, there are a few policy recommendations that would provide immediate protections at very little cost.
First, GINA should be expanded to cover all forms of genetic discrimination. The many potential abuses of genetic information may well outweigh any benefits it brings: from predictive policing to algorithms determining life insurance claims or mortgages, genetic information is currently neither sensitive nor accurate enough to be used fairly. Expanding GINA to cover all forms of genetic discrimination at the federal level would allow the U.S. government and broader society to fully understand and debate the uses of genetic data before any decisions using this information are made.
Given the myriad ways in which algorithms perpetuate discrimination, a second recommendation is that the use of genetic information in algorithms should be both regulated and transparent. While a Social Credit System is unlikely to be instituted in a democratic country, transparency in the use of algorithms is a concern around the world, and the discussion of how to make algorithms and technology more transparent must include an awareness of the potential future use of genetic data. On issues like predictive policing, public debate about the use of algorithms has lagged behind policy implementation. The potential abuses of genetic data in algorithms are too grave for this lag to continue; future debates and subsequent laws must act to mitigate such abuses.
The root cause of these potential exploitations is the business model of selling our genetic data. A final recommendation is that ancestry companies should be barred from sharing genetic information, and should not be able to sell genetic data or have it transferred as an asset if they are bought out. The current protections safeguarding people’s genetic data, and by extension their anonymity, are inadequate. Even if companies have adequate cybersecurity measures, the data remain vulnerable to abuse when transferred between companies with different privacy policies. The best way to ensure consumer privacy is to stop data from being turned into a commodity.
Expanding GINA to cover all forms of discrimination, enforcing greater transparency around algorithms, and prohibiting the sale of genetic information are a start, but broader policy debates are needed to comprehensively examine how all types of personal data, including genetic data, are collected, used, and stored. How much of our personal data we make available is no longer a personal issue, but a decision with consequences for everyone [44]. A right to privacy is incoherent if others remain free to share one’s genetic data, given that there is nothing truly anonymous about genetic information. Future conversations about the trade-offs between medical advances and privacy are necessary, but we also urgently need more transparency about what data is collected on us and how it is used in the algorithms that increasingly dictate our lives. The Cambridge Analytica scandal serves as a strong reminder of the price we pay for not safeguarding our privacy, and the next abuse of our private data could have even bigger consequences.
Andreas Sampson Geroski is a second-year Master of Public Policy candidate at the Goldman School of Public Policy.
References
- Solon, Olivia, and Emma Graham-Harrison. 2018. The six weeks that brought down Cambridge Analytica. 3 May. Accessed June 17, 2018. https://www.theguardian.com/uk-news/2018/may/03/cambridge-analytica-closing-what-happened-trump-brexit.
- Greenfield, Patrick. 2018. The Cambridge Analytica files: the story so far. 26 March. Accessed July 27, 2018. https://www.theguardian.com/news/2018/mar/26/the-cambridge-analytica-files-the-story-so-far.
- Wong, Julia Carrie. 2018. ‘It might work too well’: the dark art of political advertising online. 19 March. Accessed July 30, 2018. https://www.theguardian.com/technology/2018/mar/19/facebook-political-ads-social-media-history-online-democracy.
- Solon, Olivia, and Emma Graham-Harrison. The six weeks that brought down Cambridge Analytica.
- 23andMe. 2018. “Ancestry Composition.” 23andMe. Accessed October 26, 2018. https://permalinks.23andme.com/pdf/samplereport_ancestrycomp.pdf.
- Ibid.
- Collins, Nick. 2013. DNA ancestry tests branded ‘meaningless’. 7 March. Accessed June 6, 2018. https://www.telegraph.co.uk/news/science/science-news/9912822/DNA-ancestry-tests-branded-meaningless.html.
- Farr, Christina. 2018. 23andMe is getting more specific with its DNA ancestry test, adding 120 new regions. 28 February. Accessed July 17, 2018. https://www.cnbc.com/2018/02/28/23andme-adds-120-new-regions-to-its-ancestry-test.html.
- The White House. 2000. June 2000 White House Event. 26 June. Accessed June 29, 2018. https://www.genome.gov/10001356/june-2000-white-house-event.
- Schulson, Michael. 2017. Spit and Take. 29 December. Accessed July 29, 2018. http://www.slate.com/articles/technology/future_tense/2017/12/direct_to_consumer_genetic_testing_has_tons_of_privacy_issues_why_is_the.html.
- Pitts, Peter. 2017. The Privacy Delusions of Genetic Testing. 15 February. Accessed July 20, 2018. https://www.forbes.com/sites/realspin/2017/02/15/the-privacy-delusions-of-genetic-testing/#7a5482cc1bba.
- Issa, Amalia M. 2007. “Personalized Medicine and the Practice of Medicine in the 21st Century.” McGill Journal of Medicine: 53–57.
- Chavez, Nicole. 2018. DNA that led to Golden State Killer suspect’s arrest was collected from his car while he shopped. 2 June. Accessed July 31, 2018. https://edition.cnn.com/2018/06/02/us/golden-state-killer-unsealed-warrants.
- Lussenhop, Jessica. 2018. Golden State Killer: The end of a 40-year hunt? 29 April. Accessed June 29, 2018. https://www.bbc.co.uk/news/world-us-canada-43915187.
- Ferguson, Andrew G. 2017. “Policing Predictive Policing.” Washington University Law Review: 1109–1189.
- Lapowsky, Issie. 2018. How the LAPD uses data to predict crime. 22 May. Accessed July 30, 2018. https://www.wired.com/story/los-angeles-police-department-predictive-policing.
- Gazzar, Brenda. 2018. Activists File Lawsuit over LAPD’s Predictive Policing Program. 4 February. Accessed July 31, 2018. http://www.govtech.com/public-safety/Activists-File-Lawsuit-Over-LAPDs-Predictive-Policing-Program.html.
- Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. 23 May. Accessed November 14, 2018. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Angwin, Larson, Mattu, and Kirchner. Machine Bias.
- O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown Publishing Group.
- Seife, Charles. 2013. 23andMe is terrifying, but not for the reasons the FDA thinks. 27 November. Accessed June 20, 2018. https://www.scientificamerican.com/article/23andme-is-terrifying-but-not-for-the-reasons-the-fda-thinks.
- Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
- Mistreanu, Simina. 2018. Life Inside China’s Social Credit Laboratory. 3 April. Accessed July 5, 2018. https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory.
- Botsman, Rachel. 2017. Big data meets Big Brother as China moves to rate its citizens. 21 October. Accessed June 17, 2018. https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion.
- Ibid.
- Pitts. The Privacy Delusions of Genetic Testing.
- Green, Dennis, and Mary Hanbury. 2018. If you shopped at these 15 stores in the last year, your data might have been stolen. 20 July. Accessed July 20, 2018. http://uk.businessinsider.com/data-breaches-2018-4/#adidas-1.
- Mukherjee, Sy, and Erika Fry. 2018. Do DNA Testing Companies Like 23andMe Own Your Biological Data? 19 March. Accessed July 31, 2018. http://fortune.com/2018/03/19/dna-testing-23andme-owns-biological-data.
- Pitts. The Privacy Delusions of Genetic Testing.
- Schulson, Michael. Spit and Take.
- Ramsey, Lydia. 2018. Lawmakers are asking DNA-testing companies about their privacy policies – here’s what you should know when taking genetics tests like 23andMe or AncestryDNA. 21 June. Accessed July 4, 2018. http://uk.businessinsider.com/privacy-considerations-for-dna-tests-23andme-ancestry-helix-2017-12.
- Schulson. Spit and Take.
- Coalition for Genetic Fairness. n.d. What Does GINA Mean? Information Guide, Coalition for Genetic Fairness.
- Zhang, Sarah. 2017. The Loopholes in the Law Prohibiting Genetic Discrimination. 13 March. Accessed June 20, 2018. https://www.theatlantic.com/health/archive/2017/03/genetic-discrimination-law-gina/519216.
- Rothstein, Mark A., and Laura Rothstein. 2017. “The Use of Genetic Information in Real Property Transactions.” Real Property, Trust and Estate Law Section, American Bar Association: 1–6.
- Ibid.
- Zhang, Sarah. The Loopholes in the Law Prohibiting Genetic Discrimination.
- Wagner, Jennifer K. 2011. A new law to raise GINA’s floor in California. 7 December. Accessed November 4, 2018. https://theprivacyreport.com/2011/12/07/a-new-law-to-raise-ginas-floor-in-california/.
- Chen, Angela. 2017. A House committee thinks your boss should be able to see your genetic information. 20 March. Accessed July 30, 2018. https://www.theverge.com/2017/3/20/14880400/politics-law-bioethics-genetic-privacy-discrimination-gina-workplace-wellness.
- Committee on Education and the Workforce. 2017. Setting the Record Straight: Q&A on Voluntary Employee Wellness Programs. 16 March. Accessed October 30, 2018. https://edworkforce.house.gov/news/documentsingle.aspx?DocumentID=401456.
- US Department of State. 2018. Information for Parents on U.S. Citizenship and DNA Testing. Accessed October 27, 2018. https://travel.state.gov/content/travel/en/legal/travel-legal-considerations/us-citizenship/US-Citizenship-DNA-Testing.html.
- Swarns, Rachel. 2007. DNA Tests Offer Immigrants Hope or Despair. 10 April. Accessed November 19, 2018. https://www.nytimes.com/2007/04/10/us/10dna.html.
- Molteni, Megan. 2018. Family DNA testing at the border would be an ethical quagmire. 22 June. Accessed July 30, 2018. https://www.wired.com/story/family-dna-testing-at-the-border-would-be-an-ethical-quagmire.
- Koerth-Baker, Maggie. 2018. You can’t opt out of sharing your data, even if you didn’t opt in. 3 May. Accessed May 25, 2018. https://fivethirtyeight.com/features/you-cant-opt-out-of-sharing-your-data-even-if-you-didnt-opt-in.