When History Trains the Future: AI Bias


By Morgan Hanks, User Experience Manager

I can’t seem to go a day without thinking about the ethics of AI. Maybe it’s my work in the library, maybe it’s a commitment to doing what’s right, but I’m constantly asking myself: how can we use this technology responsibly, thoughtfully and for the greater good? 

For me, the most complex issue with these tools is bias. Bias is baked into these models because bias is baked into our history, which is what we use to train them. That bias has real, lasting impacts. It shows up in financial algorithms that disadvantage minority groups, in predictive policing systems that disproportionately target marginalized communities and in facial recognition tools that misidentify people of color at much higher rates. Right now, AI without bias remains elusive. All hope is not lost, however. Institutions like MIT and IBM are developing a growing number of tools, like IBM’s AI Fairness 360 Toolkit, to help identify and reduce bias and to retrain models for more equitable outcomes.
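
If you’re curious what “identifying and reducing bias” can look like in practice, here is a minimal sketch in Python using the open-source AI Fairness 360 toolkit. The tiny loan-approval table, its column names and its numbers are all invented purely for illustration; this is not how any particular system works, just the general shape of a bias check and one mitigation step.

```python
# A minimal sketch (not a production audit) of measuring and mitigating
# bias with IBM's AI Fairness 360. The "loan approval" data below is
# made up purely for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical historical data: 'approved' is the outcome (1 = approved),
# 'group' is a protected attribute (1 = privileged, 0 = unprivileged).
df = pd.DataFrame({
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [55, 60, 48, 52, 50, 58, 47, 53],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"group": 1}]
unprivileged = [{"group": 0}]

# Step 1: quantify the bias already present in the historical data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact ratio:", metric.disparate_impact())

# Step 2: mitigate it by reweighing examples before any model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    reweighed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After reweighing:", metric_after.statistical_parity_difference())
```

Even in this toy example, the pattern is the one the researchers describe: measure how differently the historical data treats each group, then adjust before training so the model doesn’t simply relearn that history.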

If you’ve ever felt unsure about whether to trust an AI-generated result, you’re not alone. For some, the instinct might be to say, “Then just don’t use it!” But these tools are already widely used. They are reshaping how our world communicates, learns and makes decisions. As an information professional, I don’t believe that avoiding AI tools is a responsible answer.

What’s Working:  

  • When applied carefully, AI can improve daily life. It already powers adaptive technologies that support people with disabilities, makes it easier to organize and personalize information, and creates more tailored learning experiences (like learning a language!). 
  • AI is under intense ethical scrutiny, far more than tools like social media faced in their early days. That scrutiny shows we’re learning to hold technology accountable.

What’s Worrying:  

  • Critics like those behind “The Uselessness of AI Ethics” argue that many ethical guidelines are vague, go unenforced and serve corporate PR agendas more than the public good.

A Librarian’s Take:  

If you tend to think in black and white, the ethics of AI may feel overwhelmingly gray. Even as someone interested in its potential, I often find myself pulled in multiple directions from an ethical standpoint. 

At the library, we see this morally complex terrain as both a chance to better understand the AI-powered world we’re entering and a responsibility to help shape it. Our hope is that KDL can lead the way by:

  1. Curating AI tools that are transparent, inclusive and developed with our communities in mind.  
  2. Empowering AI users in our communities to ask critical questions like, “Where did this information come from?” and “Who benefits from this information?”
  3. Advocating for a seat at the table where AI systems are built, so that they become more equitable and accountable.

Libraries are uniquely positioned to bridge the gap between abstract principles and everyday practice. We can help shape a future that reflects our values by piloting tools, auditing their use and sharing what we learn. We don’t expect perfection, but we do strive for progress through iteration, openness and community-driven use cases.  

I believe libraries, rooted in equity, transparency and public trust, are exactly where these conversations should happen. Our role is to ask hard questions, push for accountability and help ensure these tools serve all of us. 

This post is part of a blog series on AI, where we break down big ideas into simple, practical insights.