With your background in photography, how did you find your passion for digital anthropology? The kind of photography I did was a mixture of photojournalism and portraiture, which are rooted in visual ethnography—you are held to specific standards and have to interpret a series or a situation with in-depth research. With portraiture, there is more of an artistic hand, but an idea of creating truth and showing or illuminating who your subject is in one singular, distilled moment. It’s a lot like conducting user research and finding the user takeaway that is best representative of the user testing session.

You study online harassment. How does a site or app’s design affect its users’ vulnerability to online harassment? Twitter is actually a perfect example. The design of its privacy settings only allows users to exist as either public or private—a very binary setting. But harassment isn’t binary. Harassment is contextual, and it can also be literal. It can range from someone a user knows intimately, to someone she or he has never met, to an entire group of strangers tracking her or his movements online. Does going private actually solve or mitigate harassment?

Drawing specifically from my Gamergate work—having spoken to more than 40 victims of Gamergate and also to participants in Gamergate—it’s an emotional negotiation for victims to go private after being attacked. They feel like they are giving up something to be safe and that they are misusing the product because being on the private setting doesn’t feel like the default experience on Twitter. So going private because you were attacked—not because you want to be private—hurts users. It forces them to think, “Did I lose? Someone forced me to do this.” Additionally, Gamergate would see users going private as a victory—that the victims they were harassing couldn’t handle their questions and thus they were right to attack them.

Additionally, language is so rarely purely public or private. Our social interactions are varied and nonbinary. I’ve presented a lot on this idea of attaching privacy at the tweet level, so instead of just having overarching privacy settings, users can be private but allow specific tweets they select to be public, turn replies off on tweets, etc. Blocking hashtags—which Twitter only started allowing last year—does help, but not in all cases, since not all harassment stems from harassment campaigns.
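
To make the tweet-level idea concrete, here is a minimal sketch in Python, assuming a hypothetical data model rather than Twitter’s actual API: each tweet carries its own visibility and reply setting instead of inheriting one account-wide public/private switch.

```python
# Hypothetical per-tweet privacy model; none of these names come from Twitter's API.
from dataclasses import dataclass
from enum import Enum


class Visibility(Enum):
    PUBLIC = "public"        # anyone can view, even logged-out visitors
    SIGNED_IN = "signed_in"  # only users signed in to the platform
    FOLLOWERS = "followers"  # only approved followers


@dataclass
class TweetPrivacy:
    visibility: Visibility = Visibility.FOLLOWERS
    replies_enabled: bool = True


def can_view(privacy: TweetPrivacy, is_follower: bool, is_signed_in: bool) -> bool:
    """Check a single tweet's own setting rather than one account-wide switch."""
    if privacy.visibility is Visibility.PUBLIC:
        return True
    if privacy.visibility is Visibility.SIGNED_IN:
        return is_signed_in
    return is_follower


# An otherwise-private account marks one announcement public, with replies off.
announcement = TweetPrivacy(visibility=Visibility.PUBLIC, replies_enabled=False)
print(can_view(announcement, is_follower=False, is_signed_in=False))  # True
```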

You’ve spoken about the need to design consent. What does consent look like in digital design? Consent looks nonautonomous, and it looks like agency. So, what is user agency inside of a system? How do we create more scalable conversations? From a practical standpoint, it looks like more privacy settings that users can slightly adjust or tailor. Instead of algorithmically blocking egg accounts [throwaway handles—often created by trolls—that use the default profile picture] on Twitter, what if I, as a user, could say, “No accounts under X age or no accounts with less than X followers”? What if users could say, “Make this tweet only viewable if I’m signed into Twitter” or “Make [my Twitter account] searchable only on Twitter”? It’s a lot like the aforementioned question about privacy. Because so much of what we “say” online is written and then becomes content, the ethos behind designing consent into conversational spaces comes from how private or public users are and how accessible their content is, as well as how much control users have over how their content is accessed and shared.
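
As an illustration of those user-set thresholds, here is a small sketch; the `ContactFilter` fields and default values are hypothetical, not existing Twitter settings, and simply show what letting users tailor their own rules could look like.

```python
# Hypothetical user-defined contact filter; not an existing platform feature.
from dataclasses import dataclass
from datetime import date


@dataclass
class Account:
    created: date
    followers: int
    has_profile_photo: bool


@dataclass
class ContactFilter:
    """Thresholds a user sets for which accounts are allowed to reach them."""
    min_account_age_days: int = 30
    min_followers: int = 10
    require_profile_photo: bool = True

    def allows(self, account: Account, today: date) -> bool:
        age_days = (today - account.created).days
        return (
            age_days >= self.min_account_age_days
            and account.followers >= self.min_followers
            and (account.has_profile_photo or not self.require_profile_photo)
        )


# A brand-new "egg" account with no followers never reaches this user.
my_filter = ContactFilter(min_account_age_days=90, min_followers=25)
egg = Account(created=date(2018, 1, 1), followers=0, has_profile_photo=False)
print(my_filter.allows(egg, today=date(2018, 1, 15)))  # False
```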

Are there any best design practices you would recommend to UX designers who want to mitigate online harassment in their own work? During my design exposition class in New York University’s Interactive Telecommunications Program, writer Clay Shirky would have us work through an idea and present it. Then he would ask us, “Now, what could possibly go wrong?” and we—along with the rest of the class—would have to think of every single way our products could be misused.

Ask yourself: What could possibly go wrong? How can my designs hurt protestors or limit protest? Or how can my design hurt a domestic violence victim? Online protestors need a platform and protection from the state, but domestic violence victims need state intervention and protection from people they know. Now, what about a victim who may not need state intervention, but also may not know her or his abuser at all? How does that change what you design?

Remember that a massive part of communication design is policy: How is your policy dictated through design? What is the pipeline for reporting threatening behavior? What can users do beyond reporting something? Talk to a variety of users of different races, different genders, different age ranges—and those with different threat models. Also, get well acquainted with threat models.

Do you think Microsoft should have done anything differently in light of what happened with chatbot Tay? Tay is a great example of something that seems to work on one platform, but doesn’t work on the platform of implementation. Tay actually would have been a really great chatbot if it had been embedded on a stand-alone website. But it was implemented on Twitter, and all the flaws of that platform weren’t taken into consideration. Social media spaces are such a different kind of space than a product or company site. Tay’s stakeholders should have looked at some examples of social media advertising campaigns going awry. I usually suggest looking at Charmin; it ran a campaign about toilet paper, and people responded about their butts. Which is funny! But think how easily social campaigns can get hijacked. The difference with Tay is that Tay was running autonomously, so a person wasn’t writing the responses. So then, was a person monitoring Tay in real time? If you’re launching anything on a social network, you have to be monitoring it in real time, as anyone who’s worked on social advertising teams will tell you.

So, yes, Microsoft should have done a lot differently. It should have created blacklists of words; it should have had a social team in place to monitor, guide and shift responses away from harmful things; it should have outlined what it considers harmful or offensive and what to do if Tay says something harmful. Bots can accidentally harm, and it’s up to their creators to mitigate that.
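
As a rough illustration of that blocklist-plus-human-review point, here is a minimal sketch; the terms, queue and function names are placeholders rather than anything Microsoft actually built, and a real system would need far more than keyword matching.

```python
# Placeholder guardrail for an autonomous bot: post only clean replies, escalate the rest.
from typing import List, Optional

BLOCKLIST = {"offensive_term_a", "offensive_term_b"}  # placeholder terms


def passes_blocklist(reply: str, blocklist: set = BLOCKLIST) -> bool:
    """Return True only if the candidate reply contains no blocklisted term."""
    lowered = reply.lower()
    return not any(term in lowered for term in blocklist)


def handle_reply(reply: str, review_queue: List[str]) -> Optional[str]:
    """Post automatically only if the reply is clean; otherwise hold it for a human."""
    if passes_blocklist(reply):
        return reply  # the bot would post this
    review_queue.append(reply)  # escalate to the team monitoring in real time
    return None  # nothing is posted without review


queue: List[str] = []
print(handle_reply("hello there", queue))                     # posted as-is
print(handle_reply("this includes offensive_term_a", queue))  # None; held for review
```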

Why did you want to make your VR game Dark Patterns? I’m not sure if you can tell, but I’m a major privacy advocate! And I love speculative design. I was chatting with my friend and collaborator, Mani Nilchiani, who is Iranian. Mani and I are both activists—we protest, run meet-up groups and make tools for activists. We were talking a lot about what it means to exist inside systems when you are marginalized or surveilled. I, myself, have been a victim of harassment from Gamergate. So Mani and I have different threat models, but we understand this similar idea of being watched.

Mani is a fantastic developer who works in product design, and we were curious: What is the future when every device is connected—for ease, for futurism, for comfort—when we live in a surveillance state? And then Trump was elected, so we really amped up what we were exploring: What is the future of product design when we give up safety for comfort? Is there a balance? What would the world look like if we’re always on, and always connected, but always being seen? We’re exploring that. Mani and I wanted to make a game that allows you to do more than just punch Nazis, one that makes you really think about what could happen. Within Dark Patterns, we are introducing ways to fight back against that surveillance system.

What are the ethical questions you grapple with as someone who works in machine learning? What is sustainable harm reduction in design? What are all the ways this could be erroneous or harm someone? When is data too much data, and how do we create helpful systems that don’t surveil users? Often with machine learning, we don’t know what the results will be until we have the results. We have to think about harm before we launch so we can properly run quality assurance and adjust—that’s so helpful. There’s a need for more data to adjust, but we don’t want to create a system that is a panopticon. It’s about figuring out that balance right now, since we’re in such a nascent stage for machine learning product design.
Caroline Sinders, a researcher and artist based between San Francisco and New York, works in machine learning, conversations, violence and emotional data. She is currently an online harassment researcher and designer at the Wikimedia Foundation, and an Eyebeam and BuzzFeed project fellow.