Younghoon Jeong/Juhyun Oh/Jaimeen Ahn/Jongwon Lee/Jihyung Moon/Sungjoon Park/Alice Oh, "KOLD: Korean Offensive Language Dataset", arXiv:2205.11315, (May 2022).
Youjin Kong, Intersectional Fairness in AI? A Critical Analysis, Feminism, Social Justice, and AI, (2021).
Yolande Strengers et al., Adhering, Steering, and Queering: Treatment of Gender in Natural Language Generation, In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, (2020).
Yolanda A. Rankin et al., Straighten Up and Fly Right: Rethinking Intersectionality in HCI Research, (2019).
Y. T. Cao et al., Toward Gender-Inclusive Coreference Resolution, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, (2020).
Y. Liao et al., Racial mirroring effects on human-agent interaction in psychotherapeutic conversations, In Proceedings of the 25th International Conference on Intelligent User Interfaces, IUI ’20, pages 430–442, Cagliari, Italy, (March 2020).
Y. Kim et al., Anthropomorphism of computers: Is it mindful or mindless?, Computers in Human Behavior, 28(1):241–250, (2012).
Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, (1976).
Sandra Wachter et al., Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI, Computer Law & Security Review 41 (2021).
W. Youyou et al., Computer-based personality judgments are more accurate than those made by humans, Proceedings of the National Academy of Sciences, 112(4):1036–1040, (January 2015).
W. Fedus et al., Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, arXiv:2101.03961 [cs], (January 2021).
Salome Viljoen, A Relational Theory of Data Governance, Yale Law Journal, (November 11, 2020).
Vaswani et al., Attention is all you need, (2017).
United States v. Carroll Towing Co., 159 F.2d 169 (2d Cir. 1947).
Tom Taulli, Artificial Intelligence Basics: A Non-Technical Introduction (Ascent Audio, 2021)
Tolga Bolukbasi et al., Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, (October 28, 2020).
Thiago Dias Oliva et al., Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online, Sexuality & Culture 25, 2 (2021).
The Medical Futurist, The Top 12 Health Chatbots, (August 31, 2021).
Talia B. Gillis et al., Big data and discrimination, The University of Chicago Law Review 86, 2 (2019).
T. W. Bynum, Artificial intelligence, biology, and intentional states in computers and ethics, Metaphilosophy, vol. 16, no. 4, pp. 355–377, (1985).
T. W. Bickmore et al., Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant, Journal of Medical Internet Research, 20(9):e11510, (September 2018).
T. B. Brown, B. Mann et al., Language Models are Few-Shot Learners, arXiv:2005.14165 [cs], (July 2020).
Séverine Dusollier, The 2019 Directive on Copyright in the Digital Single Market: Some progress, a few bad choices, and an overall failed ambition, Common Market Law Review 57, 4 (2020).
Sungkyu Park, The presence of unexpected biases in online fact-checking, Misinformation Review, (January 27, 2021).
Sungjoon Park et al., KLUE: Korean Language Understanding Evaluation, (2021).
Su Lin Blodgett et al., Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English, In Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) Workshop, (2017).
Su Lin Blodgett et al., Language (Technology) is Power: A Critical Survey of Bias in NLP, In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, (2020).
Stuart Russell/Peter Norvig, Artificial Intelligence: A Modern Approach (4th ed., 2020).
Stephen Hawking et al., Stephen Hawking: Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?, Independent, (May 1, 2014).
Stanford University CRFM, On the Opportunities and Risks of Foundation Models, (August 18, 2021).
Stanford HAI, Artificial Intelligence Index Report 2022 (2022).
Solon Barocas/Moritz Hardt/Arvind Narayanan, Fairness and Machine Learning, (June 16, 2021).
Solon Barocas et al., The Problem With Bias: Allocative Versus Representational Harms in Machine Learning, (2017).
Benjamin Sobel, Artificial Intelligence's Fair Use Crisis, Columbia Journal of Law & the Arts, Forthcoming, (September 4, 2017).
Siddhant Garg et al., BAE: BERT-based Adversarial Examples for Text Classification, arXiv preprint arXiv:2004.01970, (2020).
Steven Shavell et al., Rewards versus Intellectual Property Rights, The Journal of Law & Economics, vol. 44, no. 2 (2001), pp. 525–547, https://doi.org/10.1086/322811.
Aylin Caliskan et al., Semantics derived automatically from language corpora contain human-like biases, Science 356, 6334 (2017), 183–186.
Sarah Kreps et al., All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation, (2020).
Sandra Wachter et al., A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI, Oxford Law Blog, (2018).
Samuel Warren/Louis Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193 (1890).
Samuel Gehman et al., RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models, (2020).
S. Zdenek, Just Roll Your Mouse Over Me: Designing Virtual Women for Customer Service on the Web, Technical Communication Quarterly, 16(4):397–430, (August 2007).
S. L. Blodgett et al., Demographic Dialectal Variation in Social Media: A Case Study of African-American English, In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1119–1130, Austin, Texas, (November 2016).
S. Ghaffary, The algorithms that detect hate speech online are biased against black people, Vox, (August 2019).
S. Cave et al., The Whiteness of AI, Philosophy & Technology, 33(4):685–703, (December 2020).
Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513 (2015).
Prateek Joshi, How do Transformers Work in NLP? A Guide to the Latest State-of-the-Art Models (2019).
Pavesich v. New England Life Ins. Co., 122 Ga. 190, 50 S.E. 68, 78 (1905), citing Thomas Cooley, Torts 29 (2d ed. 1888); Griswold v. Connecticut, 381 U.S. 479, 484 (1965).
P. Oosterhoff, Online censors are a barrier to sex education, (2016).
P. Joshi et al., The State and Fate of Linguistic Diversity and Inclusion in the NLP World, arXiv:2004.09095 [cs], (January 2021).
OpenAI (GPT-1), Improving Language Understanding by Generative Pre-Training, (2018).
OpenAI (GPT-2), Language Models are Unsupervised Multitask Learners, (2019).
Omer Tene, LinkedIn v. HiQ and the trans-Atlantic privacy divide, IAPP, (April 22, 2022).
Olga Russakovsky, Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation, ACM Conference on Fairness, Accountability, and Transparency (FAccT), (2022).
OECD, OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, (1980).
O. Keyes, The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):88:1–88:22, (November 2018).
Nithya Sambasivan et al., Everyone wants to do the model work, not the data work: Data Cascades in High-Stakes AI, CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–15, (May 2021).
Nenad Tomasev et al., Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities, (2021).
N. Sambasivan et al., Re-imagining Algorithmic Fairness in India and Beyond, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 315–328, Virtual Event, Canada, (March 2021).
N. Mehrabi et al., A Survey on Bias and Fairness in Machine Learning, arXiv:1908.09635 [cs], (September 2019).
Rainer Mühlhoff, Predictive Privacy: Towards an Applied Ethics of Data Analytics, Ethics and Information Technology, (August 8, 2020).
Mona Sloane, Policy Recommendations: 'A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI', European AI Alliance Futurium, (2018).
Moin Nadeem et al., StereoSet: Measuring stereotypical bias in pretrained language models, (2021).
Michael L. Littman et al., Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report, Stanford University, Stanford, CA, (September 2021).
Meeyoung Cha et al., Prevalence of Misinformation and Factchecks on the COVID-19 Pandemic in 35 Countries: Observational Infodemiology Study, JMIR, Vol 8, No 1 (2021).
Andrew D. Selbst/Julia Powles, Meaningful information and the right to explanation, International Data Privacy Law 7, 4 (2017).
Matt Turek, Machine Common Sense (MCS), DARPA, (2018).
Mark A Lemley et al., Remedies for robots, The University of Chicago Law Review 86, 5 (2019).
M. West et al., I’d blush if I could: closing gender divides in digital skills through education, Technical report, UNESCO, (2019).
M. Webb, The Impact of Artificial Intelligence on the Labor Market, SSRN Scholarly Paper ID 3482150, Social Science Research Network, Rochester, NY, (November 2019).
M. Sap et al., The Risk of Racial Bias in Hate Speech Detection, In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy, (July 2019).
M. R. Hasan et al., Excessive use of online video streaming services: Impact of recommender system use, psychological factors, and motives, Computers in Human Behavior, 80:220–228, (March 2018).
M. Lewis et al., Gender stereotypes are reflected in the distributional structure of 25 languages, Nature Human Behaviour, 4(10):1021–1028, (October 2020).
M. Kosinski et al., Private traits and attributes are predictable from digital records of human behavior, Proceedings of the National Academy of Sciences, 110(15):5802–5805, (April 2013).
Luke Breitfeller et al., Finding Microaggressions in the Wild: A Case for Locating Elusive Phenomena in Social Media Posts, (2019).
Luca Bertuzzi, AI regulation filled with thousands of amendments in the European Parliament, EURACTIV, (June 2, 2022).
L. M. Hampton, Black Feminist Musings on Algorithmic Oppression, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 1–1, (March 2021).
L. Hancox-Li et al., Epistemic values in feature importance methods: Lessons from feminist epistemology, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 817–826, Virtual Event, Canada, (March 2021).
L. H. Hanu et al., How AI Is Learning to Identify Toxic Online Content, Scientific American, (2021).
L. Dixon et al., Measuring and Mitigating Unintended Bias in Text Classification, In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’18, pages 67–73, New Orleans, LA, USA, (December 2018).
Kathleen Creel et al., The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21), (2021).
Kate Crawford/Trevor Paglen, "Excavating AI", excavating.ai, (2019).
Karandeep Singh et al., Misinformation, believability, and vaccine acceptance over 40 countries: Takeaways from the initial phase of the COVID-19 infodemic, PLOS ONE, (February 9, 2022).
Margot E. Kaminski, The Right to Explanation, Explained (June 15, 2018), U of Colorado Law Legal Studies Research Paper No. 18-24, Berkeley Technology Law Journal, Vol. 34, No. 1, (2019).
Kaitlyn Zhou et al., Frequency-based Distortions in Contextualized Word Embeddings, (2021).
K. Quach, Researchers made an OpenAI GPT-3 medical chatbot as an experiment. It told a mock patient to kill themselves, The Register, (October 2020).
K. McKee et al., Understanding Human Impressions of Artificial Intelligence, PsyArXiv, (2021).
K. H. Kwon et al., Unspeaking on Facebook? Testing network effects on self-censorship of political expressions in social network sites, Quality & Quantity, 49(4):1417–1435, (July 2015).
Julien Lauret, Amazon’s sexist AI recruiting tool: how did it go so wrong?, Medium, (August 16, 2019).
Julian Risch et al., Toxic Comment Detection in Online Discussions, Deep Learning-Based Approaches for Sentiment Analysis, (January 25, 2020).
Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016).
Joy Buolamwini et al., Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, In Conference on Fairness, Accountability and Transparency, (2018).
Jon Kleinberg/Sendhil Mullainathan/Manish Raghavan, Inherent Trade-Offs in the Fair Determination of Risk Scores, (2016).
Jon Kleinberg et al., Algorithmic monoculture and social welfare, Proceedings of the National Academy of Sciences 118, 22 (2021).
Jigsaw, Unintended Bias and Identity Terms, (October 2021).
L. Jiang et al., Delphi: Towards Machine Ethics and Norms, arXiv:2110.07574, (2021).
Jeff Larson et al., How We Analyzed the COMPAS Recidivism Algorithm, ProPublica, (May 23, 2016).
James Foulds et al., Bayesian Modeling of Intersectional Fairness: The Variance of Bias, (2020).
Jaimeen Ahn/Hwaran Lee/Jinhwa Kim/Alice Oh, "Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate from the Perspective of DistilBERT", Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), (July 2022)
J. Złotowski et al., Anthropomorphism: Opportunities and Challenges in Human-Robot Interaction, International Journal of Social Robotics, 7(3):347–360, (June 2015).
J. Zhao et al., Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, arXiv:1707.09457 [cs, stat], (July 2017).
J. Y. Kim et al., Intersectional Bias in Hate Speech and Abusive Language Datasets, arXiv:2005.05921 [cs], (May 2020).
J. Welbl et al., Challenges in Detoxifying Language Models, arXiv:2109.07445 [cs], (September 2021).
J. Menasce Horowitz et al., Trends in U.S. income and wealth inequality, Technical report, Pew Research Center, (January 2020).
J. Lambert et al., How Robots Change the World - What automation really means for jobs, productivity and regions, Technical report, Oxford Economics, (2019).
J. Borenstein et al., AI Ethics: A Long History and a Recent Burst of Attention, in Computer, vol. 54, no. 01, pp. 96-102, (2021).
Ian Goodfellow/Yoshua Bengio/Aaron Courville, Deep Learning (MIT Press, 2016).
IPWatchdog, DABUS Gets Its First Patent in South Africa Under Formalities Examination, (July 29, 2021).
IEEE 7000 (Model Process for Addressing Ethical Concerns During System Design), (September 15, 2021).
I. Gabriel et al., The Challenge of Value Alignment: from Fairer Algorithms to AI Safety, arXiv:2101.06060 [cs], (January 2021).
Harry Surden, Artificial Intelligence and Law: An Overview, Georgia State University Law Review 35, (2019).
Haoran Zhang et al., Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings, (2020).
Hans P. Moravec, Mind Children: The Future of Robot and Human Intelligence, (January 2, 1990).
H. Bergen, I’d Blush if I Could: Digital Assistants, Disembodied Cyborgs and the Problem of Gender, Word and Text, A Journal of Literary Studies and Linguistics, VI(01):95–113, (2016).
Google (BERT), Pre-Training of Deep Bidirectional Transformers for Language Understanding, (2018).
G. Park et al., Automatic personality assessment through social media language, Journal of Personality and Social Psychology, 108(6):934–952, (June 2015).
G. I. Winata et al., Language Models are Few-shot Multilingual Learners, arXiv:2109.07684 [cs], (September 2021).
G. Hwang et al., It Sounds Like A Woman: Exploring Gender Stereotypes in South Korean Voice Assistants, In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA ’19, pages 1–6, Glasgow, Scotland, UK, (May 2019).
G. Branwen, GPT-3 Creative Fiction, (June 2020).
Luciano Floridi, Group Privacy: A Defense and an Interpretation, (June 17, 2017).
L. Floridi, Open Data, Data Protection, and Group Privacy, Philos. Technol. 27, 1–3, (2014).
Finale Doshi-Velez et al., Towards a rigorous science of interpretable machine learning, arXiv preprint arXiv:1702.08608 (2017).
Federal Court of Australia, Thaler v Commissioner of Patents [2021] FCA 879, (July 30, 2021).
F. Jaumotte et al., Rising Income Inequality: Technology, or Trade and Financial Globalization?, IMF Economic Review, 61(2):271–309, (June 2013).
Eric Wu et al., How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals, Nature Medicine 27, 4 (2021).
Emily Sheng et al., The Woman Worked as a Babysitter: On Biases in Language Generation, In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, Hong Kong, China, (2019).
Elizabeth Clark et al., All That’s ’Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text, (2021).
Joanna Prisco, Amazon Shuts Down AI Hiring Tool for Being Sexist, Global Citizen, (October 12, 2018).
Lilian Edwards/Michael Veale, Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for, Duke Law and Technology Review 16, 1 (2017), 1–65.
Eduard Fosch Villaronga et al., Humans forget, machines remember: Artificial intelligence and the right to be forgotten, Computer Law & Security Review 34, 2 (2018), 304–313.
EU, Directive 95/46/EC on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data.
E. Strubell et al., Energy and Policy Considerations for Deep Learning in NLP, arXiv:1906.02243 [cs], (June 2019).
E. M. Bender et al., On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 610–623, Virtual Event, Canada, (March 2021).
E. M. Bender et al., Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data, In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online, (July 2020).
E. Dinan et al., Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling, arXiv:2107.03451 [cs], (July 2021).
E. Colleoni et al., Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data, Journal of Communication, 64(2):317–332, (April 2014).
E. Bender, The #BenderRule: On Naming the Languages We Study and Why It Matters, The Gradient, (September 2019).
Finale Doshi-Velez/Mason Kortz, Accountability of AI Under the Law: The Role of Explanation, Berkman Klein Center Working Group on Explanation and the Law, Berkman Klein Center for Internet & Society working paper, (2017).
Di Jin et al., Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment, In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, (2020), 8018–8025.
Debora Nozza et al., HONEST: Measuring Hurtful Sentence Completion in Language Models, In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, (2021).
David Jurgens et al., A Just and Comprehensive Strategy for Using NLP to Address Online Abuse, (2019).
Daniel C Elton, Self-explaining AI as an alternative to interpretable AI, In International Conference on Artificial General Intelligence. Springer, (2020).
Daniel Adiwardana et al., Towards a Human-like Open-Domain Chatbot, arXiv:2001.09977 [cs.CL], (2020).
D. Sravani, L. Kameswari et al., Political Discourse Analysis: A Case Study of Code Mixing and Code Switching in Political Speeches, In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 1–5, Online, (June 2021).
D. Quercia et al., Our Twitter Profiles, Our Selves: Predicting Personality with Twitter, In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, pages 180–185, (October 2011).
D. Preoţiuc-Pietro et al., Beyond Binary Labels: Political Ideology Prediction of Twitter Users, In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 729–740, Vancouver, Canada, (2017).
D. Patterson et al., Carbon Emissions and Large Neural Network Training, arXiv:2104.10350 [cs], (April 2021).
D. Nguyen et al., How Old Do You Think I Am? A Study of Language and Age in Twitter, Proceedings of the International AAAI Conference on Web and Social Media, 7(1):439–448, (2013).
D. Mytton, Data centre water consumption, NPJ Clean Water, 4(1):1–6, (February 2021).
D. Hovy et al., The Social Impact of Natural Language Processing, In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598, Berlin, Germany, (August 2016).
D. Hovy et al., The Importance of Modeling Social Factors of Language: Theory and Practice, In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588–602, Online, (June 2021).
D. Hendrycks et al., Aligning AI With Shared Human Values, arXiv:2008.02275 [cs], (July 2021).
D. F. Mujtaba et al., Ethical Considerations in AI-Based Recruitment, In 2019 IEEE International Symposium on Technology and Society (ISTAS), pages 1–7, (November 2019).
D. Acemoglu et al., Artificial Intelligence, Automation and Work, Working Paper 24196, National Bureau of Economic Research, (January 2018).
Cristina Ruiz, Leading online database to remove 600,000 images after art project reveals its racist bias, The Art Newspaper, (September 23, 2019).
Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens, Department of Health, Education and Welfare, (1973).
Chaofan Chen et al., This looks like that: deep learning for interpretable image recognition, arXiv preprint arXiv:1806.10574 (2018).
Case 215/88 Casa Fleischhandels [1989] ECR 2789, para. 31 (European Court of Justice).
Cade Metz, Can a Machine Learn Morality?, The New York Times, (November 19, 2021).
C. Rosset, Turing-NLG: A 17-billion-parameter language model by Microsoft, (February 2020).
C. Ischen et al., Privacy concerns in chatbot interactions, In International Workshop on Chatbot Research and Design, pages 34–48, Springer, (2019).
C. Ingraham, How rising inequality hurts everyone, even the rich, Washington Post, (February 2018).
C. Du, Chinese AI lab challenges Google, OpenAI with a model of 1.75 trillion parameters, PingWest, (June 2021).
C. Breazeal et al., Infant-like Social Interactions between a Robot and a Human Caregiver, Adaptive Behavior, 8(1):49–74, (January 2000).
C. B. Mann, Can Conversing with a Computer Increase Turnout? Mobilization Using Chatbot Communication, Journal of Experimental Political Science, 8(1):51–62, (2021).
Antonio Ginart et al., Making AI Forget You: Data Deletion in Machine Learning, arXiv:1907.05012 [cs.LG], (2019).
Alice Xiang, Reconciling legal and technical approaches to algorithmic bias, Tennessee Law Review 88, 3 (2021).
Ali Alvi/Paresh Kharya, Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World’s Largest and Most Powerful Generative Language Model, Microsoft Research Blog, (October 11, 2021).
Alan Westin, Privacy and Freedom, IG, (1967).
Daniel E. Ho/Alice Xiang, Affirmative Algorithms: The Legal Grounds for Fairness as Awareness, University of Chicago Law Review Online, (2020).
Abramowicz et al., Prize and Reward Alternatives to Intellectual Property, in Research Handbook on the Economics of Intellectual Property Law (P.S. Menell & B. Depoorter eds., forthcoming 2019).
A. Wang et al., Directional Bias Amplification, arXiv:2102.12594 [cs], (June 2021).
A. S. Miner et al., Smartphone-Based Conversational Agents and Responses to Questions About Mental Health, Interpersonal Violence, and Physical Health, JAMA internal medicine, 176(5):619–625, (May 2016).
A. Rubel et al., Agency Laundering and Algorithmic Decision Systems, In N. G. Taylor, C. Christian-Lamb, M. H. Martin, and B. Nardi, editors, Information in Contemporary Society, Lecture Notes in Computer Science, pages 590–598, Cham, Springer, (2019).
A. Romano, A group of YouTubers is claiming the site systematically demonetizes queer content, Vox, (October 2019).
A. Pardes, The Emotional Chatbots Are Here to Probe Our Feelings, Wired, (January 2018).
A. Lazaridou et al., Pitfalls of Static Language Modelling, arXiv:2102.01951 [cs], (February 2021).
A. Koenecke et al., Racial disparities in automated speech recognition, Proceedings of the National Academy of Sciences, 117(14):7684–7689, (April 2020).
A. Georgieff et al., What happened to jobs at high risk of automation?, Technical Report 255, OECD Publishing, (January 2021).
A. Cercas Curry et al., Conversational Assistants and Gender Stereotypes: Public Perceptions and Desiderata for Voice Personas, In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 72–78, Barcelona, Spain (Online), (December 2020).
A. Abid et al., Persistent Anti-Muslim Bias in Large Language Models, arXiv:2101.05783 [cs], (January 2021).