A recent report from Virginia Tech highlights potential geographic biases in the AI chatbot ChatGPT, specifically in its ability to provide locally tailored information on environmental justice issues. The finding raises concerns about how unevenly the tool serves communities across different counties. Let’s delve into the study’s findings and examine the implications of these biases.

United States map illustrating the areas where residents can access local-specific information on environmental justice issues (shown in blue) and areas where such information is not available (shown in red). Source: Virginia Tech

Geographic Disparities in Access to Environmental Justice Information

The researchers at Virginia Tech discovered that ChatGPT’s provision of location-specific information on environmental justice issues varied significantly across states and counties. In densely populated states like Delaware and California, a high percentage of residents could access location-specific information, while rural states like Idaho and New Hampshire faced significant limitations, with more than 90% of their populations residing in counties without access to such information.


This disparity reveals a concerning bias favoring areas with larger urban populations, potentially leaving marginalized communities without the crucial information they need to address environmental challenges.


Implications for Environmental Justice

Access to accurate, locally tailored information is crucial for addressing environmental justice issues effectively. By providing less area-specific information to rural regions, ChatGPT inadvertently exacerbates existing disparities. Marginalized communities already face numerous environmental injustices, such as pollution and a lack of resources, and the absence of localized information widens the gap further, hindering their ability to advocate for change and participate in decision-making processes.

Urgent Need for Further Research

The report emphasizes the need for additional research to explore and address the biases present in ChatGPT. Identifying such biases is the first step toward rectifying them and ensuring equitable access to information for all communities. Virginia Tech lecturer Kim rightly calls for continued study to shed light on these geographic biases and work toward eliminating them. It is crucial to refine AI models like ChatGPT so they can provide accurate, localized information to individuals regardless of where they live.



The revelation of geographic biases in ChatGPT’s provision of information on environmental justice issues raises concerns about equitable access to vital resources. By disproportionately favoring densely populated areas and neglecting rural regions, the tool inadvertently perpetuates environmental disparities.


It is crucial for developers and researchers to address these biases and enhance the AI model’s ability to provide locally tailored information to all communities. Only through a concerted effort to rectify them can we ensure that AI technology supports environmental justice and empowers marginalized communities.

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice of this sort, HODL FM strongly recommends contacting a qualified industry professional.