Topic on Research talk:ORES paper

Our role as technologists: Are we just encoding our own ideologies?

EpochFail (talkcontribs)

I asked this question out of due diligence, but it probably warrants a bigger discussion. There are probably degrees to which we do and do not encode our own ideologies.

E.g. I think that machine learning is important for quality control. I have a somewhat techno-centric view of things. I also see value in "efficiency" when it comes to Wikipedia quality control. It's from this standpoint that I saw the technical conversation -- and the barrier that developing machine learning systems presents -- as critical. So, lots of ideology getting encoded there.

On the other hand, by not specifically building user interfaces, we make space -- we "hear to speech" (see http://actsofhope.blogspot.com/2007/08/hearing-to-speech.html). So, maybe we encode our ideologies to an extent, but we do not continue past that extent and instead make space to hear what others want to "say" through their own technological innovation.

I think it is interesting to draw a contrast between this approach and what we see coming out of Facebook/Google/Twitter/etc. and their shrink-wrapped "intelligent" technologies that fully encode a set of values and provide the user with little space to "speak" to their own values.

Staeiou (talkcontribs)

It's an important question to ask and discuss. A lot of the foundational scholarship in the software studies, politics-of-algorithms, and values-in-design literatures involves pointing to systems and saying, "Look! Values! Embedded in design!" Most of those canonical cases are also examples of very problematic values embedded in design. So the literature often comes across as saying that encoding values in design is a bad thing.

I take the position that it is impossible to not encode values into systems. To say that you aren't encoding values into systems is the biggest ideological dupe of them all (and pretty dangerous). Instead, the more responsible move (IMO) is to explicitly say what your values are, give explanations about why you think they are important and valuable, and discuss how you have encoded them into a system. Then others can evaluate your stated values (which they may or may not agree with) and your implementation of your values (which may or may not be properly implemented).

Even though no traditional GUI user interfaces are built as part of the ORES core project, an API is definitely an interface with its own affordances and constraints. But I do think it is interesting to draw a parallel to Facebook, and maybe Twitter in particular: Twitter used to be much more open about third-party clients using its API, and much of the innovation in Twitter came from users (retweets, hashtags). But it has locked down the API heavily in recent years, particularly when a third-party tool goes against what Twitter thinks the user experience of Twitter should be.

So to wrap this up, I guess there are two levels of values in ORES: 1) the values in an open, auditable API built to let anyone create their own interfaces, and 2) the values encoded in this specific implementation of a classifier for article/edit quality. For example, you could have an open API for quality that uses a single classifier trained only on revert data and doesn't treat anons as a kind of protected class.
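To make value (1) concrete: an open scoring API means anyone can build their own interface just by constructing a request for scores. A minimal sketch in Python (the endpoint shape below assumes the public ORES v3 scoring API; the wiki name, model names, and revision ID are illustrative):

```python
# Sketch: how a third party might query an open scoring API like ORES
# and build their own tooling on top of the returned scores.
from urllib.parse import urlencode

BASE = "https://ores.wikimedia.org/v3/scores"

def score_url(wiki, rev_ids, models=("damaging", "goodfaith")):
    """Build a request URL asking the scoring service to score revisions.

    wiki    -- wiki database name, e.g. "enwiki" (illustrative)
    rev_ids -- iterable of revision IDs to score
    models  -- classifier models to apply (names are assumptions)
    """
    query = urlencode({
        "models": "|".join(models),
        "revids": "|".join(str(r) for r in rev_ids),
    })
    return f"{BASE}/{wiki}/?{query}"

# Any client -- a gadget, a bot, a dashboard -- can issue this request
# and interpret the scores according to its own values.
print(score_url("enwiki", [123456]))
```

Because the interface is just an open HTTP endpoint, the interpretation of the scores (thresholds, UI treatment, which edits to flag) is left to the consumer rather than fixed by the service -- which is exactly the "make space to hear" move described above.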

EpochFail (talkcontribs)

I think there's another angle that I want to concern myself with -- an ethical one. I think it's far more ethical for me (the powerful Staff(TM) technologist) to try to enable others rather than just use my loud voice to enact my own visions. Staeiou, I wonder what your thoughts are there? It looks like this fits with value (1), and I agree. But I'd go farther than saying I simply value it: there might be some wrongness/rightness involved in choosing how to use power in this case.
