Many people presume that data is data and that bias only enters the equation when you have humans interpreting it.
But the reality is more complicated: data itself is always biased. Sometimes that bias is obvious; other times it's subtle enough to miss unless you're paying close attention.
Kate Crawford is a principal researcher at Microsoft’s New York AI research lab, where they’ve been addressing AI ethics and working to minimize the impact of bias in data for years. In an article, Crawford outlines the problem succinctly: “What is interesting about training datasets is that they will always bear the marks of history, that history will be human, and it will always have the same kind of frailties and biases that humans have.”
But the danger isn’t necessarily bias itself, because there will always be bias. Rather, the danger lies in disregarding bias and not actively working to minimize it. As search marketers, it’s critical that we understand bias and the role it plays in what we do so that we can take steps to guard against it.
Understanding bias in data
It’s important to start with how bias gets into data. Bias is there because data is created by humans, who are making decisions around what and who that data represents.
Technologies like AI and machine learning were designed to augment human capability and have the potential to do a lot of good in the world. But even logical, impersonal algorithms have bias—they were created by humans after all—which means they also have the potential to be very harmful.
Take, for example, AI algorithms being used to predict recidivism—the likelihood of criminals reoffending. These types of tools are used throughout the US criminal justice system.
In an analysis of one such tool, COMPAS, ProPublica compared the predicted and actual recidivism rates of 10,000 criminal defendants in Florida over a two-year period. In addition to finding that 61% of predictions were inaccurate, the study found that black defendants were often predicted to be at a higher risk of recidivism than they actually were, while white defendants were often predicted to be at a lower risk than they actually were. Black defendants were also twice as likely as white defendants to be misclassified as being at a higher risk of violent recidivism.
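To make the kind of comparison ProPublica performed concrete, here is a minimal sketch of measuring how a risk score's mistakes fall on different groups. The data, group labels, and numbers below are entirely invented for illustration; they are not the COMPAS dataset or ProPublica's methodology.

```python
# Toy records: (group, was_predicted_high_risk, actually_reoffended).
# All values are made up to illustrate the metric, not real data.
records = [
    ("A", True,  False), ("A", True,  True),  ("A", True,  False),
    ("A", False, False), ("B", False, True),  ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(group):
    """Share of people in `group` who did NOT reoffend but were
    nevertheless labeled high risk by the model."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
```

Even when overall accuracy looks acceptable, computing the error rate per group (as above) can reveal that one group absorbs far more of the model's false alarms than another — which is exactly the disparity the ProPublica study reported.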
The ethics of AI
This unfortunate example of bias brings us to the critical nature of ethics in AI. For AI to do the most good in the world, it must be grounded in empathy. It must align with our moral values and ethical principles. We must deeply understand the data that we’re using to create the foundation of our system and how the resulting algorithms might impact people and the communities that interact with them.
At the Blacks in AI event, Timnit Gebru, a post-doctoral researcher at Microsoft’s New York lab said, “If we don’t have diversity in our set of researchers, we are at risk of solving a narrow set of problems that a few homogeneous groups of people think are important, and we are at risk of not addressing the problems that are faced by many people in the world.”
Microsoft has been working to reduce bias in AI for years now, since the days when the concept of AI was purely academic. We firmly believe that AI needs to be built in a way that earns trust, and that AI systems must be designed with protections for fairness, inclusiveness, reliability and safety, transparency and accountability, and privacy. To ensure this happens, we must remain vigilant about assessing and continuing to address potential risks, so that AI technologies are developed and deployed responsibly.
Contending with bias in search marketing
As marketers and marketing leaders, this applies to us as well. Our job is to understand the bias that lives within our datasets and to recognize that this bias may be tainting the outcomes from our machine learning models.
While we can’t fully remove bias from data, we can take steps to minimize it. We must ask questions like: Is there enough diversity represented in my dataset? Is there enough diversity among the people doing the research? Is there enough diversity in my customer base, or is it skewed?
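One way to start answering the first question is simply to look at how your dataset is distributed before you train or target anything. This is a minimal sketch under invented assumptions — the field name `age_band` and the sample records are hypothetical, not from any real customer dataset.

```python
from collections import Counter

# Hypothetical customer records; in practice this would be your
# actual dataset, and you would check every attribute you target on.
customers = [
    {"age_band": "18-34"}, {"age_band": "18-34"}, {"age_band": "18-34"},
    {"age_band": "35-54"}, {"age_band": "55+"},
]

# Count how many customers fall into each segment.
counts = Counter(c["age_band"] for c in customers)
total = sum(counts.values())

# Print each segment's share; a heavily lopsided distribution is a
# signal that models trained on this data may underserve some groups.
for band, n in sorted(counts.items()):
    print(f"{band}: {n / total:.0%}")
```

A check this simple won't prove a dataset is representative, but it makes skew visible early, before it propagates into a model or a campaign.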
Think about how you search. When you are searching for something, you might find exactly what you are looking for—but you might be missing out on information you didn’t know existed. Or you might be searching with a skewed perspective.
From a digital marketer’s standpoint, there are numerous ways bias can enter search campaigns. When you create hyper-targeted search campaigns, you risk excluding more diverse audiences.
Let’s say you are trying to market housing designed for people 55 years and older. So, you target your audience to that specific age group. Sure, you might reach your exact audience. But what about people looking for housing for their aging parents? What about people in their 40s and early 50s planning for the future? If you target only people in your exact demographic, you are adding bias to your data.
Similarly, if you only focus on top-of-funnel (TOFU) or bottom-of-funnel (BOFU) activity, then you are also increasing bias within your search data. A healthy search campaign contains a mix of TOFU and BOFU activity to ensure you are reaching consumers throughout multiple stages of the consumer decision journey. In our assisted living example, this would mean digital marketers need to reach customers in all stages, from those just beginning to look to those ready to sign on the dotted line.
Many search marketers are drawn to BOFU campaigns for good reason. Remarketing and brand terms/phrases often deliver higher click-through and conversion rates. But there is a cost. There is always a cost to biased data – in this case, missing out on potential new customers who are unfamiliar with your brand and are just beginning their search. Including broad, top-of-funnel keywords helps to attract net-new customers, minimize bias and create a stronger overall customer portfolio.
Challenge your assumptions and embrace empathy
Ultimately, we must be willing to accept that we all have biases. We must be willing to challenge our assumptions and embrace empathy. We must accept that we don’t know what we don’t know and try to do better. And perhaps most importantly, we must create inclusive, diverse environments where team members are free to share their own unique perspectives, so that our digital marketing campaigns reflect the real world – or maybe an even better world. Hanna Wallach, a senior researcher in Microsoft’s New York research lab, says it best: “This is an opportunity to really think about what values we are reflecting in our systems, and whether they are the values we want to be reflecting in our systems.”
Christi Olson is head of evangelism for search at Microsoft.