With Great Power…Comes a Return to Our Information Roots
3 Lessons from the March 2022 NEASIS&T AI & Equity Conference
Sarah Bratt, PhD Candidate, Syracuse University School of Information Studies
Secretary, SIG-Artificial Intelligence (AI), ASIS&T
Artificial Intelligence (AI) is embedded deeply in our lives. To move AI forward equitably, we need not start from scratch; history can serve us. The day-long NEASIS&T event was an energizing reminder that many ASIS&T areas have historical ties to the apparently “bleeding edge” issues of AI: equity and information ethics and policy (SIG-IEP, SIG-SM), bibliometrics and information retrieval (e.g., SIG-MET, SIG-AI), information literacy (SIG-ED), knowledge organization and management (e.g., SIG-CR, SIG-KM), and information behavior, seeking, and use (e.g., SIG-USE). Crosscutting these foundations, I saw three themes, sometimes explicit and sometimes in the background:
- Surveillance capitalism was always in the background
- AI education as a solution was widely endorsed
- Interdisciplinary problem-solving was always implied
In this brief commentary, I summarize the ways AI relates to our information roots, which can help us see AI not as a major break with the past but as a rebranding and re-imagination of ongoing issues in information science and technology studies. Granted, there are issues of scale and technical innovation that make AI a new beast. Nonetheless, as I highlight here, a key takeaway for readers from the NEASIS&T event is the fruitful path forward suggested by speakers and conference attendees’ conversations: bringing special interest group (SIG) expertise together not only to strengthen these foundational areas and apply them to AI, but also to make policy and practice recommendations for AI across industry, academia, and government.
With Great Power: AI and Equity
The NEASIS&T conference “With Great Power: AI and Equity,” held Friday, March 25, 2022, and hosted by Simmons and ASIS&T with co-sponsorship from SIG-AI, brought diverse perspectives to questions at the intersection of artificial intelligence (AI), machine learning (ML), and society. I was struck by the nuanced perspectives gained from deep technical knowledge of AI systems, connected to the most pertinent questions of our increasingly algorithmically mediated experiences, services, and products. The topics were wide-ranging: legal scholars brought perspectives on behavioral data use, while emergency preparedness, privacy, and security researchers brought perspectives on how AI is shaping issues critical to the public sector, from COVID-19 case tracking to semi-automated policing.
Presenters brought issues to the table that directly or indirectly concerned how internet service companies (like Google) and governments (like Russia) capitalize on the behavioral “exhaust” of users. Detailed cases showed not only how these extractive technologies manipulate and misrepresent, but also how they can be used for “good.” The solutions presenters offered took two forms: 1) empirical research showing illustrative cases at the intersection of AI and equity, and 2) drawing from the foundations of information science research, which I highlight here. Below, I pose questions that align with thematic threads mapping to ASIS&T concerns and special interest groups (SIGs): information ethics and policy, bibliometrics and information retrieval, information literacy, and knowledge organization and management. I conclude with our next steps forward, including critiques.
Information Ethics and Policy (SIG-IEP, SIG-SM)
- What happens when things go wrong, e.g., when a self-driving car hits a person? We need to consider “really scary scenarios like smart weapons, killer robots, and delegating decision-making to technology in the first place.”
- Risk and fairness: Can we develop calibrated legal interventions to mitigate the equity risks of AI?
- Foreseeability: What is the likelihood of the harm? Are any protective rights involved? Current laws ban systems that pose “unacceptable risk,” such as job-applicant ranking systems. But many of these laws have proven to be “toothless regulation”: vague and ineffective.
- Can we think about kinds of regulation to protect the end user from being manipulated?
Bibliometrics and Information Retrieval (e.g., SIG-MET, SIG-AI)
- The scraping of behavioral data creates technologies of surveillance, e.g., in health care and bank telling. Who are the workers that tech might displace or supplant? What kind of gig work is behind IR systems and search engines?
- Cui bono? Tech designers still don’t fully understand their technology, and polarized debate tends to oversimplify it into “good/bad,” e.g., self-driving cars. Dr. desJardins says, “it’s just not that simple. Critical race theory says when you take systemic bias and you build systems on top of it, it’s nearly impossible to fight.”
- “In order to do machine learning (ML) you need to have a bias—bias is built in because otherwise every model is the same. Someone who says ‘I’m going to eliminate bias in ML’ doesn’t know very much about ML. You’re guessing about the outcome and trying to model it based on an underlying assumption of how people perform. You say: ‘I believe these things will behave like those things’—the interpretation of that phrase is a bias—and that’s the point of ML.” (See the sketch below.)
“Someone who says ‘I’m going to eliminate bias in ML’ doesn’t know very much about ML.”—Dr. Marie desJardins, Opening Keynote
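To make desJardins’ point concrete, here is a minimal sketch in Python (with invented toy data, nothing from the talk): two models fit the same training data yet extrapolate differently, because each builds in a different assumption, i.e., a bias, about how unseen cases behave.

```python
# A minimal sketch of inductive bias: same data, two built-in assumptions.
# The data and task are invented for illustration only.
import numpy as np

# Toy training data: years of experience -> job-performance score.
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([2.1, 3.9, 6.2, 7.8])

# Bias 1: assume the relationship is linear (least-squares fit).
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# Bias 2: assume new cases behave like their nearest observed neighbor.
def nearest_neighbor(x_new: float) -> float:
    return float(y_train[np.argmin(np.abs(x_train - x_new))])

x_new = 10.0  # far outside the training range
print(f"linear assumption predicts:           {slope * x_new + intercept:.1f}")
print(f"nearest-neighbor assumption predicts: {nearest_neighbor(x_new):.1f}")
# Different predictions from identical data: the assumption itself is the
# "bias," and without one there is no learning at all.
```

Neither model is “unbiased”; the modeling choice is the bias, which is exactly why eliminating bias in ML is not a coherent goal, though choosing biases carefully is.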
Information Literacy (SIG-ED)
- AI and machine learning can both push fake news and detect it, e.g., deepfakes. How can we help people develop literacy about provenance to discern what is true and what is false?
- Misinformation was used to justify the invasion of Ukraine; it destabilizes not just a nation but society. How can librarians, who have been doing information literacy work forever (a sign of how sticky the problem is), address misinformation?
- What experience and skills would you suggest for those interested in working on the data lifecycle development process, an area of great need in the future?
Knowledge Organization and Management (e.g., SIG-CR, SIG-KM)
- Representational bias: How you represent your data set matters, and how we discretize values matters. E.g., when predicting job performance, what are the buckets for age? At what level of abstraction do you describe things?
- Exclusion bias is another kind of representational bias: which features do you NOT include in your model?
- Collection bias: voice recognition systems trained mostly on white male native speakers of English tend to work well only for that group.
- Statistical bias: in learning to predict cancer, cancerous cells are very infrequent, but if they are fast-growing, the cost of missing them is severe. Where do bias and skew arise for rare events? (See the sketch below.)
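As a minimal sketch of that statistical-bias point (synthetic numbers, not data from the talk): when cancerous cells are very rare, a degenerate model that never predicts cancer can score near-perfect accuracy while missing every case that matters.

```python
# Class imbalance: accuracy hides the cost of missing rare events.
# The prevalence figure below is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
labels = rng.random(n) < 0.001  # ~0.1% of cells are cancerous (rare event)

# A degenerate "model" that always predicts "not cancer."
predictions = np.zeros(n, dtype=bool)

accuracy = np.mean(predictions == labels)
recall = np.mean(predictions[labels]) if labels.any() else 0.0

print(f"accuracy: {accuracy:.4f}")                 # ~0.999: looks excellent
print(f"recall on cancerous cells: {recall:.4f}")  # 0.0: misses them all
# Accuracy rewards matching the skew; recall or a cost-sensitive metric
# exposes the bias toward the frequent class.
```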
“Whenever we replace human effort with technology, we create another industrial revolution. There are haves and have-nots. These are societal choices we make when we regulate and deploy systems.”—Dr. Marie desJardins, NEASIS&T Keynote
AI & Equity: Our Next Steps Forward
Overt themes of the day included conceptualizing algorithmic fairness, misinformation, the future of work and automated systems, and STEM education for minorities. Running under the talks and conversations, the elephant in the room, quietly looming but not quite looked at directly, were two themes: surveillance capitalism and data feminism. I have been reading Shoshana Zuboff’s landmark book The Age of Surveillance Capitalism (2018). Zuboff points out that the behavioral “exhaust” of our interactions with digital systems feeds advertising, which Dr. Chirag Shah calls “the new smoking.” Similarly, Dr. Marie desJardins’ keynote brought into the spotlight the ways society is affected by behavioral data, not only through ads but also, in the case of Russia, through a misinformation campaign to justify the invasion of Ukraine. The logic by which AI evaluates the efficacy of ads for shampoo (i.e., click tracking, sales, conversion rates) holds for propaganda: it is marketing, if for a different “product.”
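As a toy illustration of that last point (hypothetical numbers, not data from any talk): the metric is the same function regardless of the “product”; only the meaning of a “conversion” changes.

```python
# The same engagement metric scores a shampoo ad and a propaganda post.
# All figures are invented for illustration.
def conversion_rate(impressions: int, conversions: int) -> float:
    """Fraction of people shown the content who took the desired action."""
    return conversions / impressions

campaigns = [
    ("shampoo ad", 50_000, 1_200),       # conversion = a purchase
    ("propaganda post", 50_000, 1_450),  # conversion = a share or sign-up
]
for name, impressions, conversions in campaigns:
    print(f"{name}: {conversion_rate(impressions, conversions):.3f}")
```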
Two critiques:
1) There was no mention of feminist theory, except allusions to critical race theory and intersectionality. Where’s Data Feminism? Especially given that the title of the conference is With Great Power.
2) The idea to ‘just educate kids so they can become software engineers’ needs tempering with research that interrogates the idea that “empowerment” happens when we teach marginalized people to code; this still puts the burden on the shoulders of minorities to gain programming skills.
Cite this article in APA as: Bratt, S. (2022, April 6). With great power…comes a return to our information roots. Information Matters, Vol. 2, Issue 4. https://informationmatters.org/2022/04/with-great-powercomes-a-return-to-our-information-roots/