Paper Jam #1 Recap: Social Transparency in Networked Information Exchange

Context: once a month, I meet with subscribers to the Paper Jam plan of my blog to discuss a paper at the intersection of topics such as computer-supported cooperative work, software engineering, human-computer interaction, and psychology. This post reports on Paper Jam #1.

🎉
This post was co-created with Andrew Borg, Kristen Foster-Marks, Lena, and others. Thank you for being part of Paper Jams!

On January 17, we gathered for the inaugural Paper Jam and dove into Social Transparency in Networked Information Exchange by Stuart et al. (2012). The authors lay out a framework for understanding how identity, content, and interaction transparency affect online interactions and collaboration.

Let's zoom out a bit to understand what is meant by these different kinds of social transparency.

Three Kinds of Social Transparency

People interacting through computers leave more and more traces. We can see people's usernames and often also real names (identity transparency), what edits have been made to source code in a GitHub repository (content transparency), and who was involved in a pull request's review discussion (interaction transparency).

These kinds of social transparency aren't simply switched on or off. Instead, they exist on a spectrum. I can be completely anonymous, use my real name and verify it on a website with my national identity card, or go with something in between – such as just using my first name.

Every system that we build that lets people interact over content will be positioned somewhere along those dimensions, so ideally we make informed and deliberate decisions about this. To do that, we need to understand the effects those decisions will have – and their second-order effects, too.

For example, people behave differently in anonymity than in situations where their real name is visible – with more creativity, but also less trust and lower accountability to social norms. Content transparency can improve coordination but also increase stress. Interaction transparency – seeing what others are doing – can help establish norms, but also lead to herd behavior.

The main contributions of the paper are introducing and defining the distinction between these different kinds of social transparency, and a discussion of the expected effects and second-order effects certain decisions will have.

Many of us felt that the framework provided by this paper is like one of those lenses that, once you've put them on, are hard to take off again – you see these things everywhere.

Especially in our daily work, where we interact with our colleagues mostly through computer screens, chat boxes, code review interfaces, and so on, the ideas and concepts from the paper are very apparent. The language the paper provides now helps us discuss things we've been seeing and feeling for a long time but hadn't been able to articulate clearly.

"The limits of my language mean the limits of my world."

Thirteen Years Later

A few of us had overlooked the publication date of the paper and were surprised to realize that it's from 2012. The issues it discusses seemed as current as ever.

Information overload is still a problem that we all experience in our lives. If anything, it has gotten significantly worse. The paper highlights notification fatigue as a possible effect of too much transparency. GitHub, Google Docs, Notion – the signal-to-noise ratio keeps getting worse, at least in our perception.

It becomes all the more important to deliberately filter out things that aren't helpful. Leave Slack channels you don't need to be in, filter out issues that you won't work on, unfollow accounts and artifacts that don't provide you with enough value.

To be able to do deep work, you have to create the space for it.

Looking more closely, we also found a few instances in which technological progress has already changed human behavior in ways noticeably different from what the paper could have assumed.

On the one hand, AI helps us filter out noise and summarize the things of value.

On the other hand, specifically with regard to AI and identity transparency, we have all had to learn in the past few years that before interacting with an entity on the internet, we need to figure out whether we're talking to an LLM.

The fundamental challenge remains: too much transparency is overwhelming, and we – both the producers and consumers of media – need to evolve to handle this more deliberately.

Judging and Being Judged, in Real Time

From here, we moved our discussion to the other side of transparency – the stress of watching and being watched in online workspaces.

Depending on your relationship with the person, Slack's typing indicator can create immense anxiety. One of us recently switched jobs and got very nervous when their new manager lit up that indicator for ten minutes straight ... without sending anything in the meantime. Of course the resulting message was innocuous, but the stress was real.

The first pull request at a new job can feel like a high-stakes test rather than a contribution. It's clear that you want to ship correct code, but without knowing who is watching you, it's easy to spiral into fretting about code conventions, commit messages, and the PR description far longer than necessary.

And even though social transparency can build trust, too much can lead to self-censorship. People hesitate to report bugs in open-source projects because they fear doing it wrong. Team members second-guess whether to post a question to the team channel because they don't want to add noise.

The question then becomes: how do we design for transparency that empowers rather than intimidates?

At Least Two Sides to Anonymity

Anonymity can generally lead to increased creativity because you don't have to fear repercussions to your social status. You can experiment with behavior outside of established norms, and you might be able to say things that need to be said but that you'd be afraid to say with your identity attached.

Then again, there are bubbles of hate everywhere on the internet nowadays. This was not foreseen by the paper.

And one place where we thought anonymity can be problematic is in one-sided anonymous feedback systems: Alice judges Bob, knowing who Bob is, but Bob isn't able to see who is judging him.

Mechanisms like this can easily lead to dysfunction.

In theory, anonymous feedback allows people to be more candid. In practice, it can also create a lack of accountability and encourage vague, unconstructive criticism.

Without knowing who is giving feedback, people can't ask follow-up questions or understand the full context. It also opens the door for office politics to play out in toxic ways.

Based on our professional experiences, our discussion came down far more in favor of designing feedback processes with transparency built in.

Transparency in Remote Work

As we started discussing how remote work changes the role of transparency, we first tried to look at how transparency works in a physical office. It happens pretty naturally:

  • You see if a coworker is at their desk.
  • You notice when someone looks busy, distracted, or open to a chat.
  • You overhear conversations that give you context.
  • You can see what doors are open or closed.

None of that exists by default in remote work. This means that transparency isn't something that just happens β€” it has to be designed deliberately.

Unlike in a physical space, we don't have physical laws to rely on, so designing our virtual space gives us many more freedoms but also requires us to take on much more responsibility.

This reframing helped us think about transparency not as an all-or-nothing concept but as a design choice. A few examples we discussed:

  • Availability signals: Just because someone is online doesn't mean they're available. We need ways to signal how available we are – whether through status messages, focus mode indicators, or even virtual spaces that mimic the casual interactions of a coffee break.
  • Documentation as a double-edged sword: Documenting processes and structures is essential for transparency, but done wrong it can create noise. One participant mentioned that they felt like they might be over-documenting everything and worried about overwhelming their manager with all that information.
  • Designing for trust: The anxiety of staring at Slack's typing indicator next to your manager's name doesn't exist in an office setting, where people can see body language and intent – or not. Remote work might need deliberate trust-building mechanisms to replace those missing physical cues.

Online collaboration – working together through computers – isn't just about mirroring an office. It's about deliberately designing an artificial environment where people can still thrive.

Who Defines Your Identity?

One of the more fascinating tangents of our call was about who controls digital identity. In Sweden, for example, digital identity is tied to banks. If a bank doesn't give you an account, you can't sign your tax forms, because you have no digital identity.

This led to broader questions:

  • What happens when identity verification is privatized?
  • Should social media platforms or governments be responsible for confirming identity?
  • What rights should people have to control their own digital footprint?

Considering that the paper mostly focuses on how social transparency works, we were happy to see our discussion touch on very current, deep questions like this that still aren't solved – and might be getting more urgent by the day.

Designing Social Transparency, Not Defaulting to It

By the end of our discussion, a central insight had emerged: social transparency isn't inherently good or bad. It's a tool that needs to be designed for its context.

  • Remote work demands active transparency design. We cannot rely on physical presence to provide awareness, so we need intentional systems for availability, visibility, and trust-building.
  • Not all transparency is helpful. Too many notifications can overwhelm. Too much visibility can create stress. We need tools that balance transparency with psychological safety.
  • Anonymity can be both empowering and dysfunctional. It can encourage honesty, but it can also lead to abuse – be it in toxic online forums or in one-sided feedback systems.
  • Trust and identity are more complex than ever. With AI-generated content, bots, and identity manipulation, verifying who we interact with is a growing challenge.

At the heart of it all, we kept coming back to the idea that people shape systems, but systems also shape people.

One participant summed it up: "It's all people in the end – no matter what you do."

Transparency isn't inherently good or bad – it's all about how we design it. We can create digital workspaces that feel natural, supportive, and even better than physical offices – with the language from the paper helping us ponder and make these decisions.

Or, we can just keep staring at Slack's typing indicator, slowly losing our minds. The choice is ours. 🙂


Want to take part in the next Paper Jam? We'd love to have you. Sign up here: