This blog post was originally written for a computer science course at Phillips Academy.
Over the past few years, hyper-local neighborhood social media apps like Citizen, Ring Neighbors (owned by Amazon), and Nextdoor have surged in popularity. And while these apps are marketed as making neighborhoods safer and more connected, they have also been met with concern that they amplify racial profiling, especially in upper-class, often gentrified neighborhoods.
The apps function just like any other social network: users can post updates about crime, safety, strangers, or other “suspicious” activity for their neighbors to comment on and share, and with Amazon Neighbors, users can also share surveillance footage from the associated Ring cameras. While not explicitly racist, these vague categorizations—“suspicious” or a “stranger”—make it easy for users to act upon existing racial biases. In one Nextdoor update (see below), a user reported a man with dreads and “3 other less desirables” who were just biking around the neighborhood.
And in another post on Amazon Neighbors (see below), a user shared a photo of a Black man simply walking outside with the caption “I find this odd. I’m not sure if he is photograph my house or my neighbors.”
There exist countless other examples of these apps’ users making unsubstantiated accusations, mostly toward people of color. Though the blatant racial profiling is of course troubling, it is far more worrisome how these apps amplify the individual prejudices of their users. Because Nextdoor, Neighbors, and Citizen are fundamentally social media apps, updates about “suspicious” people of color are easily amplified and distributed. Especially in homogeneous white communities, the ability to report “strangers”—who are more likely to just be people of color minding their business—exacerbates the exclusion of minorities in these neighborhoods. And since all these apps connect directly with local authorities, these often unfounded racial accusations can quickly escalate into encounters between people of color and police, whose enforcement shows well-documented racial disparities.
While racial stereotyping obviously existed prior to these apps, the technology and algorithms behind them clearly amplify existing inequalities by creating a culture where reporting “suspicious” people (of color) is both normal and convenient. So what can we do about it? How can these apps be improved to be more equitable?
Making them less convenient: though it sounds counterintuitive, making neighborhood security apps harder to use might help reduce racial profiling. For instance, in August 2016, Nextdoor rolled out an update that required users to provide more detailed information about a suspect’s appearance in addition to race. This feature, though obviously not foolproof against racist profiling, nevertheless added necessary friction that forced users to at least consider whether they were reporting based solely on race. And, according to the Harvard Business Review, this feature reportedly reduced racial profiling by 75 percent.
Fewer notifications: as quasi-social networks, these apps are designed to keep users on the platforms as long as possible. A consequence of this is that users are often barraged with notifications (see below) that promote a culture of paranoia—making users more suspicious, especially toward people of color, who are more likely to be viewed as suspects of a crime even when race is not mentioned.
Separation from authority: because these neighborhood security apps rely upon users to provide data, they are not always accurate. Yet all of these apps are directly tied to authorities, and in some cases police departments have even sponsored the purchase of neighborhood security systems. Given how inaccurate these apps can be, it simply does not make sense for all “suspicious activity” reported on them to reach the police without any form of regulation or monitoring.
Rebranding: these apps are branded in a way that seems to implicitly promote exclusion. The app Citizen was formerly called “Vigilante,” a direct reference to the culture of unjustified, often racist civilian enforcement that these apps are supposed to combat. Moreover, according to the Ring Neighbors website, the “Neighbors App is the new neighborhood watch.” While not overtly discriminatory, neighborhood watches have historically been linked to racial profiling—Trayvon Martin, for example, was fatally shot by George Zimmerman, a local neighborhood watch coordinator who believed he was protecting his community. For these apps to market themselves as “neighborhood watches” or even as hubs for “vigilantes” seems irresponsible.
Should Neighborhood Security Apps Even Exist?
Ultimately, it seems impossible to completely eradicate racial profiling on these platforms, and there is not even any proof that so-called “smart neighborhoods” reduce crime. The unparalleled level of power given to users who might racially profile is troubling, especially amid dystopian claims that Ring Neighbors might be building a facial recognition database of “suspicious” people. Though some might suggest that it is not the companies’ responsibility to monitor their apps for racial profiling, the potential for these apps to result in incarceration, extrajudicial killings, or outright discrimination is too serious to ignore: these apps need to be better regulated.