
A Content Model Is Not a Design System

Do you remember when having a great website was enough? Now, people are getting answers from Siri, Google search snippets, and mobile apps, not just our websites. Forward-thinking organizations have adopted an omnichannel content strategy, whose mission is to reach audiences across multiple digital channels and platforms.

But how do you set up a content management system (CMS) to reach your audience now and in the future? I learned the hard way that creating a content model—a definition of content types, attributes, and relationships that let people and systems understand content—with my more familiar design-system thinking would capsize my customer’s omnichannel content strategy. You can avoid that outcome by creating content models that are semantic and that also connect related content. 

I recently had the opportunity to lead the CMS implementation for a Fortune 500 company. The client was excited by the benefits of an omnichannel content strategy, including content reuse, multichannel marketing, and robot delivery—designing content to be intelligible to bots, Google knowledge panels, snippets, and voice user interfaces. 

A content model is a critical foundation for an omnichannel content strategy, and for our content to be understood by multiple systems, the model needed semantic types—types named according to their meaning instead of their presentation. Our goal was to let authors create content and reuse it wherever it was relevant. But as the project proceeded, I realized that supporting content reuse at the scale that my customer needed required the whole team to recognize a new pattern.

Despite our best intentions, we kept drawing from what we were more familiar with: design systems. Unlike web-focused content strategies, an omnichannel content strategy can’t rely on WYSIWYG tools for design and layout. Our tendency to approach the content model with our familiar design-system thinking constantly led us to veer away from one of the primary purposes of a content model: delivering content to audiences on multiple marketing channels.

Two essential principles for an effective content model

We needed to help our designers, developers, and stakeholders understand that we were doing something very different from their prior web projects, where it was natural for everyone to think about content as visual building blocks fitting into layouts. The previous approach was not only more familiar but also more intuitive—at least at first—because it made the designs feel more tangible. We discovered two principles that helped the team understand how a content model differs from the design systems that we were used to:

  1. Content models must define semantics instead of layout.
  2. And content models should connect content that belongs together.

Semantic content models

A semantic content model uses type and attribute names that reflect the meaning of the content, not how it will be displayed. For example, in a nonsemantic model, teams might create types like teasers, media blocks, and cards. Although these types might make it easy to lay out content, they don’t help delivery channels understand the content’s meaning, and it’s that understanding which opens the door to presenting the content in each marketing channel. In contrast, a semantic content model uses type names like product, service, and testimonial so that each delivery channel can understand the content and use it as it sees fit.

When you’re creating a semantic content model, a great place to start is to look over the types and properties defined by Schema.org, a community-driven resource for type definitions that are intelligible to platforms like Google search.
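
To make the contrast concrete, here’s a minimal sketch in TypeScript. The type names are hypothetical and only loosely aligned with Schema.org’s Product type; they aren’t taken from the project described in this article.

```typescript
// Nonsemantic: describes layout, so other channels can't interpret it.
interface Card {
  heading: string;
  body: string;
  imageUrl: string;
}

// Semantic: describes meaning, loosely modeled on Schema.org's Product
// type, so any delivery channel can decide how to present it.
interface Product {
  name: string;
  description: string;
  image: string; // URL of a representative image
  brand?: string;
  offers?: {
    price: number;
    priceCurrency: string; // e.g., "USD"
  };
}

// Example content item (invented for illustration).
const pizzaOven: Product = {
  name: "Crust Deluxe Pizza Oven",
  description: "A countertop oven that reaches 900°F.",
  image: "https://example.com/oven.jpg",
  offers: { price: 499, priceCurrency: "USD" },
};
```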

A semantic content model has several benefits:

  • Even if your team doesn’t care about omnichannel content, a semantic content model decouples content from its presentation so that teams can evolve the website’s design without needing to refactor its content. In this way, content can withstand disruptive website redesigns. 
  • A semantic content model also provides a competitive edge. By adding structured data based on Schema.org’s types and properties, a website can provide hints to help Google understand the content, display it in search snippets or knowledge panels, and use it to answer voice-interface user questions. Potential visitors could discover your content without ever setting foot in your website.
  • Beyond those practical benefits, you’ll also need a semantic content model if you want to deliver omnichannel content. To use the same content in multiple marketing channels, delivery channels need to be able to understand it. For example, if your content model were to provide a list of questions and answers, it could easily be rendered on a frequently asked questions (FAQ) page, but it could also be used in a voice interface or by a bot that answers common questions.
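
As a sketch of that last point, the example below models a FAQ item semantically and reuses the same content in two channels. The type and function names are invented for illustration, not drawn from any particular CMS.

```typescript
// One semantic FAQ type, reused by two delivery channels
// without restructuring the content.
interface FaqItem {
  question: string;
  answer: string;
}

const faq: FaqItem[] = [
  { question: "Do you ship internationally?", answer: "Yes, to over 40 countries." },
];

// Channel 1: render the FAQ page on the web.
function renderFaqHtml(items: FaqItem[]): string {
  return items
    .map((i) => `<h3>${i.question}</h3><p>${i.answer}</p>`)
    .join("\n");
}

// Channel 2: let a voice interface or bot answer a matching question.
function answerQuestion(items: FaqItem[], utterance: string): string | undefined {
  const match = items.find((i) =>
    i.question.toLowerCase().includes(utterance.toLowerCase())
  );
  return match?.answer;
}
```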

For example, using a semantic content model for articles, events, people, and locations lets A List Apart provide cleanly structured data for search engines so that users can read the content on the website, in Google knowledge panels, and even with hypothetical voice interfaces in the future.

Image showing an event in a CMS passing data to a Google knowledge panel, a website, and a voice interface

Content models that connect

After struggling to describe what makes a good content model, I’ve come to realize that the best models are those that are semantic and that also connect related content components (such as a FAQ item’s question and answer pair), instead of slicing up related content across disparate content components. A good content model connects content that should remain together so that multiple delivery channels can use it without needing to first put those pieces back together.

Think about writing an article or essay. An article’s meaning and usefulness depend upon its parts being kept together. Would one of the headings or paragraphs be meaningful on its own, without the context of the full article? On our project, our familiar design-system thinking often led us to want to create content models that would slice content into disparate chunks to fit a web-centric layout, with much the same effect as separating an article from its headline. Because we were slicing content into standalone pieces based on layout, content that belonged together became difficult to manage and nearly impossible for multiple delivery channels to understand.

To illustrate, let’s look at how connecting related content applies in a real-world scenario. The design team for our customer presented a complex layout for a software product page that included multiple tabs and sections. Our instincts were to follow suit with the content model. Shouldn’t we make it as easy and as flexible as possible to add any number of tabs in the future?

Because our design-system instincts were so familiar, it felt like we needed a content type called “tab section” so that multiple tab sections could be added to a page. Each tab section would display various types of content. One tab might provide the software’s overview or its specifications. Another tab might provide a list of resources.

Our inclination to break down the content model into “tab section” pieces would have led to an unnecessarily complex model and a cumbersome editing experience, and it would have created content that additional delivery channels couldn’t understand. For example, how would another system tell which “tab section” referred to a product’s specifications or its resource list? Would it have to resort to counting tab sections and content blocks? This approach would have prevented the tabs from ever being reordered, and it would have required adding logic in every other delivery channel to interpret the design system’s layout. Furthermore, if the customer no longer wanted to display this content in a tab layout, migrating to a new content model to reflect the page redesign would have been tedious.

Illustration showing a data tree flowing into a list of cards (data), flowing into a navigation menu on a website
A content model based on design components is unnecessarily complex, and it’s unintelligible to systems.

We had a breakthrough when we discovered that our customer had a specific purpose in mind for each tab: it would reveal specific information such as the software product’s overview, specifications, related resources, and pricing. When implementation began, our inclination to focus on what was visual and familiar had obscured the intent of the designs. With a little digging, it didn’t take long to realize that the concept of tabs wasn’t relevant to the content model. What mattered was the meaning of the content that they were planning to display in the tabs.

In fact, the customer could have decided to display this content in a different way—without tabs—somewhere else. This realization prompted us to define content types for the software product based on the meaningful attributes that the customer wanted to render on the web. There were obvious semantic attributes like name and description as well as rich attributes like screenshots, software requirements, and feature lists. The software’s product information stayed together because it wasn’t sliced across separate components like “tab sections” that were derived from the content’s presentation. Any delivery channel—including future ones—could understand and present this content.
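
A rough sketch of where we ended up conceptually might look like the following. The attribute names are illustrative, not the customer’s actual model.

```typescript
// Illustrative only: a semantic software-product type with no notion
// of tabs. Each channel decides whether (and how) to lay this out.
interface SoftwareProduct {
  name: string;
  description: string;
  screenshots: string[]; // image URLs
  softwareRequirements: string[];
  featureList: string[];
  pricing?: { plan: string; pricePerMonth: number }[];
}
```

Because the model says nothing about tabs, the web team could still render these attributes in a tabbed layout, while a voice interface or knowledge panel could consume the same record directly.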

Illustration showing a data tree flowing into a formatted list, flowing into a navigation menu on a website
A good content model connects content that belongs together so it can be easily managed and reused.


In this omnichannel marketing project, we discovered that the best way to keep our content model on track was to ensure that it was semantic (with type and attribute names that reflected the meaning of the content) and that it kept content together that belonged together (instead of fragmenting it). These two concepts curtailed our temptation to shape the content model based on the design. So if you’re working on a content model to support an omnichannel content strategy—or even if you just want to make sure that Google and other interfaces understand your content—remember:

  • A design system isn’t a content model. Team members may be tempted to conflate them and to make your content model mirror your design system, so you should protect the semantic value and contextual structure of the content strategy during the entire implementation process. This will let every delivery channel consume the content without needing a magic decoder ring.
  • If your team is struggling to make this transition, you can still reap some of the benefits by using Schema.org–based structured data in your website. Even if additional delivery channels aren’t on the immediate horizon, the benefit to search engine optimization is a compelling reason on its own.
  • Additionally, remind the team that decoupling the content model from the design will let them update the designs more easily because they won’t be held back by the cost of content migrations. They’ll be able to create new designs without the obstacle of compatibility between the design and the content, and they’ll be ready for the next big thing.

By rigorously advocating for these principles, you’ll help your team treat content the way that it deserves—as the most critical asset in your user experience and the best way to connect with your audience.

Design for Safety, An Excerpt

Antiracist economist Kim Crayton says that “intention without strategy is chaos.” We’ve discussed how our biases, assumptions, and inattention toward marginalized and vulnerable groups lead to dangerous and unethical tech—but what, specifically, do we need to do to fix it? The intention to make our tech safer is not enough; we need a strategy.

This chapter will equip you with that plan of action. It covers how to integrate safety principles into your design work in order to create tech that’s safe, how to convince your stakeholders that this work is necessary, and how to respond to the critique that what we actually need is more diversity. (Spoiler: we do, but diversity alone is not the antidote to fixing unethical, unsafe tech.)

The process for inclusive safety

When you are designing for safety, your goals are to:

  • identify ways your product can be used for abuse,
  • design ways to prevent the abuse, and
  • provide support for vulnerable users to reclaim power and control.

The Process for Inclusive Safety is a tool to help you reach those goals (Fig 5.1). It’s a methodology I created in 2018 to capture the various techniques I was using when designing products with safety in mind. Whether you are creating an entirely new product or adding to an existing feature, the Process can help you make your product safe and inclusive. The Process includes five general areas of action:

  • Conducting research
  • Creating archetypes
  • Brainstorming problems
  • Designing solutions
  • Testing for safety
Fig 5.1: Each aspect of the Process for Inclusive Safety can be incorporated into your design process where it makes the most sense for you. The times given are estimates to help you incorporate the stages into your design plan.

The Process is meant to be flexible—it won’t make sense for teams to implement every step in some situations. Use the parts that are relevant to your unique work and context; this is meant to be something you can insert into your existing design practice.

And once you use it, if you have an idea for making it better or simply want to provide context of how it helped your team, please get in touch with me. It’s a living document that I hope will continue to be a useful and realistic tool that technologists can use in their day-to-day work.

If you’re working on a product specifically for a vulnerable group or for survivors of some form of trauma, such as an app for survivors of domestic violence, sexual assault, or drug addiction, be sure to read Chapter 7, which covers that situation explicitly; it needs to be handled a bit differently. The guidelines here are for prioritizing safety when designing a more general product that will have a wide user base (which, we already know from statistics, will include certain groups that should be protected from harm). Chapter 7 is focused on products that are specifically for vulnerable groups and people who have experienced trauma.

Step 1: Conduct research

Design research should include a broad analysis of how your tech might be weaponized for abuse as well as specific insights into the experiences of survivors and perpetrators of that type of abuse. At this stage, you and your team will investigate issues of interpersonal harm and abuse, and explore any other safety, security, or inclusivity issues that might be a concern for your product or service, like data security, racist algorithms, and harassment.

Broad research

Your project should begin with broad, general research into similar products and issues around safety and ethical concerns that have already been reported. For example, a team building a smart home device would do well to understand the multitude of ways that existing smart home devices have been used as tools of abuse. If your product will involve AI, seek to understand the potentials for racism and other issues that have been reported in existing AI products. Nearly all types of technology have some kind of potential or actual harm that’s been reported on in the news or written about by academics. Google Scholar is a useful tool for finding these studies.

Specific research: Survivors

When possible and appropriate, include direct research (surveys and interviews) with people who are experts in the forms of harm you have uncovered. Ideally, you’ll want to interview advocates working in the space of your research first so that you have a more solid understanding of the topic and are better equipped to not retraumatize survivors. If you’ve uncovered possible domestic violence issues, for example, the experts you’ll want to speak with are survivors themselves, as well as workers at domestic violence hotlines, shelters, other related nonprofits, and lawyers.

Especially when interviewing survivors of any kind of trauma, it is important to pay people for their knowledge and lived experiences. Don’t ask survivors to share their trauma for free, as this is exploitative. While some survivors may not want to be paid, you should always make the offer in the initial ask. An alternative to payment is to donate to an organization working against the type of violence that the interviewee experienced. We’ll talk more about how to appropriately interview survivors in Chapter 6.

Specific research: Abusers

It’s unlikely that teams aiming to design for safety will be able to interview self-proclaimed abusers or people who have broken laws around things like hacking. Don’t make this a goal; rather, try to get at this angle in your general research. Aim to understand how abusers or bad actors weaponize technology to use against others, how they cover their tracks, and how they explain or rationalize the abuse.

Step 2: Create archetypes

Once you’ve finished conducting your research, use your insights to create abuser and survivor archetypes. Archetypes are not personas, as they’re not based on real people that you interviewed and surveyed. Instead, they’re based on your research into likely safety issues, much like when we design for accessibility: we don’t need to have found a group of blind or low-vision users in our interview pool to create a design that’s inclusive of them. Instead, we base those designs on existing research into what this group needs. Personas typically represent real users and include many details, while archetypes are broader and can be more generalized.

The abuser archetype is someone who will look at the product as a tool to perform harm (Fig 5.2). They may be trying to harm someone they don’t know through surveillance or anonymous harassment, or they may be trying to control, monitor, abuse, or torment someone they know personally.

Fig 5.2: Harry Oleson, an abuser archetype for a fitness product, is looking for ways to stalk his ex-girlfriend through the fitness apps she uses.

The survivor archetype is someone who is being abused with the product. There are various situations to consider in terms of the archetype’s understanding of the abuse and how to put an end to it: Do they need proof of abuse they already suspect is happening, or are they unaware they’ve been targeted in the first place and need to be alerted (Fig 5.3)?

Fig 5.3: The survivor archetype Lisa Zwaan suspects her husband is weaponizing their home’s IoT devices against her, but in the face of his insistence that she simply doesn’t understand how to use the products, she’s unsure. She needs some kind of proof of the abuse.

You may want to make multiple survivor archetypes to capture a range of different experiences. They may know that the abuse is happening but not be able to stop it, like when an abuser locks them out of IoT devices; or they know it’s happening but don’t know how, such as when a stalker keeps figuring out their location (Fig 5.4). Include as many of these scenarios as you need to in your survivor archetype. You’ll use these later on when you design solutions to help your survivor archetypes achieve their goals of preventing and ending abuse.

Fig 5.4: The survivor archetype Eric Mitchell knows he’s being stalked by his ex-boyfriend Rob but can’t figure out how Rob is learning his location information.

It may be useful for you to create persona-like artifacts for your archetypes, such as the three examples shown. Instead of focusing on the demographic information we often see in personas, focus on their goals. The goals of the abuser will be to carry out the specific abuse you’ve identified, while the goals of the survivor will be to prevent abuse, understand that abuse is happening, make ongoing abuse stop, or regain control over the technology that’s being used for abuse. Later, you’ll brainstorm how to prevent the abuser’s goals and assist the survivor’s goals.

And while the “abuser/survivor” model fits most cases, it doesn’t fit all, so modify it as you need to. For example, if you uncovered an issue with security, such as the ability for someone to hack into a home camera system and talk to children, the malicious hacker would get the abuser archetype and the child’s parents would get the survivor archetype.

Step 3: Brainstorm problems

After creating archetypes, brainstorm novel abuse cases and safety issues. “Novel” means things not found in your research; you’re trying to identify completely new safety issues that are unique to your product or service. The goal with this step is to exhaust every effort of identifying harms your product could cause. You aren’t worrying about how to prevent the harm yet—that comes in the next step.

How could your product be used for any kind of abuse, outside of what you’ve already identified in your research? I recommend setting aside at least a few hours with your team for this process.

If you’re looking for somewhere to start, try doing a Black Mirror brainstorm. This exercise is based on the show Black Mirror, which features stories about the dark possibilities of technology. Try to figure out how your product would be used in an episode of the show—the most wild, awful, out-of-control ways it could be used for harm. When I’ve led Black Mirror brainstorms, participants usually end up having a good deal of fun (which I think is great—it’s okay to have fun when designing for safety!). I recommend time-boxing a Black Mirror brainstorm to half an hour, and then dialing it back and using the rest of the time thinking of more realistic forms of harm.

After you’ve identified as many opportunities for abuse as possible, you may still not feel confident that you’ve uncovered every potential form of harm. A healthy amount of anxiety is normal when you’re doing this kind of work. It’s common for teams designing for safety to worry, “Have we really identified every possible harm? What if we’ve missed something?” If you’ve spent at least four hours coming up with ways your product could be used for harm and have run out of ideas, go to the next step.

It’s impossible to guarantee you’ve thought of everything; instead of aiming for 100 percent assurance, recognize that you’ve taken this time and have done the best you can, and commit to continuing to prioritize safety in the future. Once your product is released, your users may identify new issues that you missed; aim to receive that feedback graciously and course-correct quickly.

Step 4: Design solutions

At this point, you should have a list of ways your product can be used for harm as well as survivor and abuser archetypes describing opposing user goals. The next step is to identify ways to design against the identified abuser’s goals and to support the survivor’s goals. This step is a good one to insert alongside existing parts of your design process where you’re proposing solutions for the various problems your research uncovered.

Some questions to ask yourself to help prevent harm and support your archetypes include:

  • Can you design your product in such a way that the identified harm cannot happen in the first place? If not, what roadblocks can you put up to prevent the harm from happening?
  • How can you make the victim aware that abuse is happening through your product?
  • How can you help the victim understand what they need to do to make the problem stop?
  • Can you identify any types of user activity that would indicate some form of harm or abuse? Could your product help the user access support?

In some products, it’s possible to proactively recognize that harm is happening. For example, a pregnancy app might be modified to allow the user to report that they were the victim of an assault, which could trigger an offer to receive resources for local and national organizations. This sort of proactiveness is not always possible, but it’s worth taking a half hour to discuss if any type of user activity would indicate some form of harm or abuse, and how your product could assist the user in receiving help in a safe manner.

That said, use caution: you don’t want to do anything that could put a user in harm’s way if their devices are being monitored. If you do offer some kind of proactive help, always make it voluntary, and think through other safety issues, such as the need to keep the user in-app in case an abuser is checking their search history. We’ll walk through a good example of this in the next chapter.

Step 5: Test for safety

The final step is to test your prototypes from the point of view of your archetypes: the person who wants to weaponize the product for harm and the victim of the harm who needs to regain control over the technology. Just like any other kind of product testing, at this point you’ll aim to rigorously test out your safety solutions so that you can identify gaps and correct them, validate that your designs will help keep your users safe, and feel more confident releasing your product into the world.

Ideally, safety testing happens along with usability testing. If you’re at a company that doesn’t do usability testing, you might be able to use safety testing to cleverly perform both; a user who goes through your design attempting to weaponize the product against someone else can also be encouraged to point out interactions or other elements of the design that don’t make sense to them.

You’ll want to conduct safety testing on either your final prototype or the actual product if it’s already been released. There’s nothing wrong with testing an existing product that wasn’t designed with safety goals in mind from the outset—“retrofitting” it for safety is a good thing to do.

Remember that testing for safety involves testing from the perspective of both an abuser and a survivor, though it may not always make sense to do both. Likewise, if you made multiple survivor archetypes to capture multiple scenarios, you’ll want to test from the perspective of each one.

As with other sorts of usability testing, you as the designer are most likely too close to the product and its design by this point to be a valuable tester; you know the product too well. Instead of doing it yourself, set up testing as you would with other usability testing: find someone who is not familiar with the product and its design, set the scene, give them a task, encourage them to think out loud, and observe how they attempt to complete it.

Abuser testing

The goal of this testing is to understand how easy it is for someone to weaponize your product for harm. Unlike with usability testing, you want to make it impossible, or at least difficult, for them to achieve their goal. Reference the goals in the abuser archetype you created earlier, and use your product in an attempt to achieve them.

For example, for a fitness app with GPS-enabled location features, we can imagine that the abuser archetype would have the goal of figuring out where his ex-girlfriend now lives. With this goal in mind, you’d try everything possible to figure out the location of another user who has their privacy settings enabled. You might try to see her running routes, view any available information on her profile, view anything available about her location (which she has set to private), and investigate the profiles of any other users somehow connected with her account, such as her followers.

If by the end of this you’ve managed to uncover some of her location data, despite her having set her profile to private, you know now that your product enables stalking. Your next step is to go back to step 4 and figure out how to prevent this from happening. You may need to repeat the process of designing solutions and testing them more than once.

Survivor testing

Survivor testing involves identifying how to give information and power to the survivor. It might not always make sense based on the product or context. Thwarting the attempt of an abuser archetype to stalk someone also satisfies the goal of the survivor archetype to not be stalked, so separate testing wouldn’t be needed from the survivor’s perspective.

However, there are cases where it makes sense. For example, for a smart thermostat, a survivor archetype’s goals would be to understand who or what is making the temperature change when they aren’t doing it themselves. You could test this by looking for the thermostat’s history log and checking for usernames, actions, and times; if you couldn’t find that information, you would have more work to do in step 4.

Another goal might be regaining control of the thermostat once the survivor realizes the abuser is remotely changing its settings. Your test would involve attempting to figure out how to do this: are there instructions that explain how to remove another user and change the password, and are they easy to find? This might again reveal that more work is needed to make it clear to the user how they can regain control of the device or account.

Stress testing

To make your product more inclusive and compassionate, consider adding stress testing. This concept comes from Design for Real Life by Eric Meyer and Sara Wachter-Boettcher. The authors pointed out that personas typically center people who are having a good day—but real users are often anxious, stressed out, having a bad day, or even experiencing tragedy. These are called “stress cases,” and testing your products for users in stress-case situations can help you identify places where your design lacks compassion. Design for Real Life has more details about what it looks like to incorporate stress cases into your design as well as many other great tactics for compassionate design.

Sustainable Web Design, An Excerpt

In the 1950s, many in the elite running community had begun to believe it wasn’t possible to run a mile in less than four minutes. Runners had been attempting it since the late 19th century and were beginning to draw the conclusion that the human body simply wasn’t built for the task. 

But on May 6, 1954, Roger Bannister took everyone by surprise. It was a cold, wet day in Oxford, England—conditions no one expected to lend themselves to record-setting—and yet Bannister did just that, running a mile in 3:59.4 and becoming the first person in the record books to run a mile in under four minutes.

This shift in the benchmark had profound effects; the world now knew that the four-minute mile was possible. Bannister’s record lasted only forty-six days before it was snatched away by Australian runner John Landy. Then a year later, three runners beat the four-minute barrier together in the same race. Since then, over 1,400 runners have officially run a mile in under four minutes; the current record is 3:43.13, held by Moroccan athlete Hicham El Guerrouj.

We achieve far more when we believe that something is possible, and we will believe it’s possible only when we see someone else has already done it—and as with human running speed, so it is with what we believe are the hard limits for how a website needs to perform.

Establishing standards for a sustainable web

In most major industries, the key metrics of environmental performance are fairly well established, such as miles per gallon for cars or energy per square meter for homes. The tools and methods for calculating those metrics are standardized as well, which keeps everyone on the same page when doing environmental assessments. In the world of websites and apps, however, we aren’t held to any particular environmental standards, and only recently have gained the tools and methods we need to even make an environmental assessment.

The primary goal in sustainable web design is to reduce carbon emissions. However, it’s almost impossible to actually measure the amount of CO2 produced by a web product. We can’t measure the fumes coming out of the exhaust pipes on our laptops. The emissions of our websites are far away, out of sight and out of mind, coming out of power stations burning coal and gas. We have no way to trace the electrons from a website or app back to the power station where the electricity is being generated and actually know the exact amount of greenhouse gas produced. So what do we do? 

If we can’t measure the actual carbon emissions, then we need to find what we can measure. The primary factors that could be used as indicators of carbon emissions are:

  1. Data transfer 
  2. Carbon intensity of electricity

Let’s take a look at how we can use these metrics to quantify the energy consumption, and in turn the carbon footprint, of the websites and web apps we create.

Data transfer

Most researchers use kilowatt-hours per gigabyte (kWh/GB) as a metric of energy efficiency when measuring the amount of data transferred over the internet when a website or application is used. This provides a great reference point for energy consumption and carbon emissions. As a rule of thumb, the more data transferred, the more energy used in the data center, telecoms networks, and end user devices.

For web pages, data transfer for a single visit can be most easily estimated by measuring the page weight, meaning the transfer size of the page in kilobytes the first time someone visits the page. It’s fairly easy to measure using the developer tools in any modern web browser. Often your web hosting account will include statistics for the total data transfer of any web application (Fig 2.1).
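
If you want to script the measurement rather than read it off the Network panel, the browser’s Resource Timing and Navigation Timing APIs expose per-resource transfer sizes. Here’s a minimal sketch you could run in a page’s console (written as TypeScript for clarity):

```typescript
// Sum the over-the-wire bytes for the current page view.
// Note: transferSize is 0 for cache hits, so this approximates a
// first, uncached visit only when the browser cache is empty.
const resources = performance.getEntriesByType(
  "resource"
) as PerformanceResourceTiming[];
const navigation = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

const totalBytes =
  navigation.reduce((sum, e) => sum + e.transferSize, 0) +
  resources.reduce((sum, e) => sum + e.transferSize, 0);

console.log(`Page weight: ${(totalBytes / 1024).toFixed(1)} kB`);
```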

Fig 2.1: The Kinsta hosting dashboard displays data transfer alongside traffic volumes. If you divide data transfer by visits, you get the average data per visit, which can be used as a metric of efficiency.

The nice thing about page weight as a metric is that it allows us to compare the efficiency of web pages on a level playing field without confusing the issue with constantly changing traffic volumes. 

There is plenty of scope for reducing page weight. By early 2020, the median page weight was 1.97 MB for setups the HTTP Archive classifies as “desktop” and 1.77 MB for “mobile,” with desktop page weights increasing 36 percent since January 2016 and mobile page weights nearly doubling in the same period (Fig 2.2). Roughly half of this data transfer is image files, making images the single biggest source of carbon emissions on the average website.

History clearly shows us that our web pages can be smaller, if only we set our minds to it. While most technologies become ever more energy efficient, including the underlying technology of the web such as data centers and transmission networks, websites themselves are a technology that becomes less efficient as time goes on.

Fig 2.2: The historical page weight data from HTTP Archive can teach us a lot about what is possible in the future.

You might be familiar with the concept of performance budgeting as a way of focusing a project team on creating faster user experiences. For example, we might specify that the website must load in a maximum of one second on a broadband connection and three seconds on a 3G connection. Much like speed limits while driving, performance budgets are upper limits rather than vague suggestions, so the goal should always be to come in under budget.

Designing for fast performance does often lead to reduced data transfer and emissions, but it isn’t always the case. Web performance is often more about the subjective perception of load times than it is about the true efficiency of the underlying system, whereas page weight and transfer size are more objective measures and more reliable benchmarks for sustainable web design. 

We can set a page weight budget in reference to a benchmark of industry averages, using data from sources like HTTP Archive. We can also benchmark page weight against competitors or the old version of the website we’re replacing. For example, we might set a maximum page weight budget as equal to our most efficient competitor, or we could set the benchmark lower to guarantee we are best in class. 

If we want to take it to the next level, then we could also start looking at the transfer size of our web pages for repeat visitors. Although page weight for the first time someone visits is the easiest thing to measure, and easy to compare on a like-for-like basis, we can learn even more if we start looking at transfer size in other scenarios too. For example, visitors who load the same page multiple times will likely have a high percentage of the files cached in their browser, meaning they don’t need to transfer all of the files on subsequent visits. Likewise, a visitor who navigates to new pages on the same website will likely not need to load the full page each time, as some global assets from areas like the header and footer may already be cached in their browser. Measuring transfer size at this next level of detail can help us learn even more about how we can optimize efficiency for users who regularly visit our pages, and enable us to set page weight budgets for additional scenarios beyond the first visit.

Page weight budgets are easy to track throughout a design and development process. Although they don’t actually tell us carbon emission and energy consumption analytics directly, they give us a clear indication of efficiency relative to other websites. And as transfer size is an effective analog for energy consumption, we can actually use it to estimate energy consumption too.

In summary, reduced data transfer translates to energy efficiency, a key factor to reducing carbon emissions of web products. The more efficient our products, the less electricity they use, and the less fossil fuels need to be burned to produce the electricity to power them. But as we’ll see next, since all web products demand some power, it’s important to consider the source of that electricity, too.

Carbon intensity of electricity

Regardless of energy efficiency, the level of pollution caused by digital products depends on the carbon intensity of the energy being used to power them. Carbon intensity is a term used to define the grams of CO2 produced for every kilowatt-hour of electricity (gCO2/kWh). This varies widely: renewable energy sources and nuclear have an extremely low carbon intensity of less than 10 gCO2/kWh (even when factoring in their construction), whereas fossil fuels have a very high carbon intensity of approximately 200–400 gCO2/kWh.

Most electricity comes from national or state grids, where energy from a variety of different sources is mixed together with varying levels of carbon intensity. The distributed nature of the internet means that a single user of a website or app might be using energy from multiple different grids simultaneously; a website user in Paris uses electricity from the French national grid to power their home internet and devices, but the website’s data center could be in Dallas, USA, pulling electricity from the Texas grid, while the telecoms networks use energy from everywhere between Dallas and Paris.

We don’t have control over the full energy supply of web services, but we do have some control over where we host our projects. With a data center using a significant proportion of the energy of any website, locating the data center in an area with low carbon energy will tangibly reduce its carbon emissions. Danish startup Tomorrow reports and maps this user-contributed data, and a glance at their map shows how, for example, choosing a data center in France will have significantly lower carbon emissions than a data center in the Netherlands (Fig 2.3).

Fig 2.3: Tomorrow’s electricityMap shows live data for the carbon intensity of electricity by country.

That said, we don’t want to locate our servers too far away from our users; it takes energy to transmit data through the telecoms networks, and the further the data travels, the more energy is consumed. Just like food miles, we can think of the distance from the data center to the website’s core user base as “megabyte miles”—and we want it to be as small as possible.

Using the distance itself as a benchmark, we can use website analytics to identify the country, state, or even city where our core user group is located and measure the distance from that location to the data center used by our hosting company. This will be a somewhat fuzzy metric as we don’t know the precise center of mass of our users or the exact location of a data center, but we can at least get a rough idea. 

For example, if a website is hosted in London but the primary user base is on the West Coast of the USA, then we could look up the distance from London to San Francisco, which is 5,300 miles. That’s a long way! We can see that hosting it somewhere in North America, ideally on the West Coast, would significantly reduce the distance and thus the energy used to transmit the data. In addition, locating our servers closer to our visitors helps reduce latency and delivers better user experience, so it’s a win-win.
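
For a rough “megabyte miles” figure, the great-circle distance between your users’ center of mass and the data center is enough. Here’s a quick haversine sketch; the coordinates are approximate and chosen only to mirror the example above:

```typescript
// Great-circle ("as the crow flies") distance via the haversine formula.
function distanceMiles(
  [lat1, lon1]: [number, number],
  [lat2, lon2]: [number, number]
): number {
  const R = 3959; // Earth's mean radius in miles
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// London to San Francisco: roughly 5,350 miles.
console.log(distanceMiles([51.51, -0.13], [37.77, -122.42]).toFixed(0));
```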

Converting it back to carbon emissions

If we combine carbon intensity with a calculation for energy consumption, we can calculate the carbon emissions of our websites and apps. A tool my team created does this by measuring the data transfer over the wire when loading a web page, calculating the amount of electricity associated, and then converting that into a figure for CO2 (Fig 2.4). It also factors in whether or not the web hosting is powered by renewable energy.
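
The arithmetic behind such a tool is straightforward once you pick your conversion factors. The figures in the sketch below are placeholder assumptions, not values from this book; the Energy and Emissions Worksheet mentioned next is the place for properly sourced numbers.

```typescript
// Illustrative back-of-the-envelope conversion:
// data transfer -> energy -> CO2. Both constants are assumptions.
const KWH_PER_GB = 0.8; // assumed system-wide energy intensity
const GRID_GCO2_PER_KWH = 400; // assumed fossil-heavy grid intensity

function gramsCo2PerView(pageWeightMB: number): number {
  const gb = pageWeightMB / 1024;
  return gb * KWH_PER_GB * GRID_GCO2_PER_KWH;
}

// A 2 MB page on this assumed grid: about 0.6 g CO2 per first-time view.
console.log(gramsCo2PerView(2).toFixed(2));
```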

If you want to take it to the next level and tailor the data more accurately to the unique aspects of your project, the Energy and Emissions Worksheet accompanying this book shows you how.

Fig 2.4: The Website Carbon Calculator shows how the Riverford Organic website embodies their commitment to sustainability, being both low carbon and hosted in a data center using renewable energy.

With the ability to calculate carbon emissions for our projects, we could actually take a page weight budget one step further and set carbon budgets as well. CO2 is not a metric commonly used in web projects; we’re more familiar with kilobytes and megabytes, and can fairly easily look at design options and files to assess how big they are. Translating that into carbon adds a layer of abstraction that isn’t as intuitive—but carbon budgets do focus our minds on the primary thing we’re trying to reduce, and support the core objective of sustainable web design: reducing carbon emissions.

Browser Energy

Data transfer might be the simplest and most complete analog for energy consumption in our digital projects, but by giving us one number to represent the energy used in the data center, the telecoms networks, and the end user’s devices, it can’t offer us insights into the efficiency in any specific part of the system.

One part of the system we can look at in more detail is the energy used by end users’ devices. As front-end web technologies become more advanced, the computational load is increasingly moving from the data center to users’ devices, whether they be phones, tablets, laptops, desktops, or even smart TVs. Modern web browsers allow us to implement more complex styling and animation on the fly using CSS and JavaScript. Furthermore, JavaScript libraries such as Angular and React allow us to create applications where the “thinking” work is done partly or entirely in the browser. 

All of these advances are exciting and open up new possibilities for what the web can do to serve society and create positive experiences. However, more computation in the user’s web browser means more energy used by their devices. This has implications not just environmentally, but also for user experience and inclusivity. Applications that put a heavy processing load on the user’s device can inadvertently exclude users with older, slower devices and cause batteries on phones and laptops to drain faster. Furthermore, if we build web applications that require the user to have up-to-date, powerful devices, people throw away old devices much more frequently. This isn’t just bad for the environment, but it puts a disproportionate financial burden on the poorest in society.

Partly because the tools are limited, and partly because there are so many different models of devices, it’s difficult to measure website energy consumption on end users’ devices. One tool we do currently have is the Energy Impact monitor inside the developer console of the Safari browser (Fig 2.5).

Fig 2.5: The Energy Impact meter in Safari (on the right) shows how a website consumes CPU energy.

You know when you load a website and your computer’s cooling fans start spinning so frantically you think it might actually take off? That’s essentially what this tool is measuring. 

It shows us the percentage of CPU used and the duration of CPU usage when loading the web page, and uses these figures to generate an energy impact rating. It doesn’t give us precise data for the amount of electricity used in kilowatts, but the information it does provide can be used to benchmark how efficiently your websites use energy and set targets for improvement.

Voice Content and Usability

We’ve been having conversations for thousands of years. Whether to convey information, conduct transactions, or simply to check in on one another, people have yammered away, chattering and gesticulating, through spoken conversation for countless generations. Only in the last few millennia have we begun to commit our conversations to writing, and only in the last few decades have we begun to outsource them to the computer, a machine that shows much more affinity for written correspondence than for the slangy vagaries of spoken language.

Computers have trouble because, of spoken and written language, speech is the more primordial. To have successful conversations with us, machines must grapple with the messiness of human speech: the disfluencies and pauses, the gestures and body language, and the variations in word choice and spoken dialect that can stymie even the most carefully crafted human-computer interaction. In the human-to-human scenario, spoken language also has the privilege of face-to-face contact, where we can readily interpret nonverbal social cues.

In contrast, written language immediately concretizes as we commit it to record and retains usages long after they become obsolete in spoken communication (the salutation “To whom it may concern,” for example), generating its own fossil record of outdated terms and phrases. Because it tends to be more consistent, polished, and formal, written text is fundamentally much easier for machines to parse and understand.

Spoken language has no such luxury. Besides the nonverbal cues that decorate conversations with emphasis and emotional context, there are also verbal cues and vocal behaviors that modulate conversation in nuanced ways: how something is said, not what. Whether rapid-fire, low-pitched, or high-decibel, whether sarcastic, stilted, or sighing, our spoken language conveys much more than the written word could ever muster. So when it comes to voice interfaces—the machines we conduct spoken conversations with—we face exciting challenges as designers and content strategists.

Voice Interactions

We interact with voice interfaces for a variety of reasons, but according to Michael McTear, Zoraida Callejas, and David Griol in The Conversational Interface, those motivations by and large mirror the reasons we initiate conversations with other people, too (http://bkaprt.com/vcu36/01-01). Generally, we start up a conversation because:

  • we need something done (such as a transaction),
  • we want to know something (information of some sort), or
  • we are social beings and want someone to talk to (conversation for conversation’s sake).

These three categories—which I call transactional, informational, and prosocial—also characterize essentially every voice interaction: a single conversation from beginning to end that realizes some outcome for the user, starting with the voice interface’s first greeting and ending with the user exiting the interface. Note here that a conversation in our human sense—a chat between people that leads to some result and lasts an arbitrary length of time—could encompass multiple transactional, informational, and prosocial voice interactions in succession. In other words, a voice interaction is a conversation, but a conversation is not necessarily a single voice interaction.

Purely prosocial conversations are more gimmicky than captivating in most voice interfaces, because machines don’t yet have the capacity to really want to know how we’re doing and to do the sort of glad-handing humans crave. There’s also ongoing debate as to whether users actually prefer the sort of organic human conversation that begins with a prosocial voice interaction and shifts seamlessly into other types. In fact, in Voice User Interface Design, Michael Cohen, James Giangola, and Jennifer Balogh recommend sticking to users’ expectations by mimicking how they interact with other voice interfaces rather than trying too hard to be human—potentially alienating them in the process (http://bkaprt.com/vcu36/01-01).

That leaves two genres of conversations we can have with one another that a voice interface can easily have with us, too: a transactional voice interaction realizing some outcome (“buy iced tea”) and an informational voice interaction teaching us something new (“discuss a musical”).

Transactional voice interactions

Unless you’re tapping buttons on a food delivery app, you’re generally having a conversation—and therefore a voice interaction—when you order a Hawaiian pizza with extra pineapple. Even when we walk up to the counter and place an order, the conversation quickly pivots from an initial smattering of neighborly small talk to the real mission at hand: ordering a pizza (generously topped with pineapple, as it should be).

Alison: Hey, how’s it going?

Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

Alison: Can I get a Hawaiian pizza with extra pineapple?

Burhan: Sure, what size?

Alison: Large.

Burhan: Anything else?

Alison: No thanks, that’s it.

Burhan: Something to drink?

Alison: I’ll have a bottle of Coke.

Burhan: You got it. That’ll be $13.55 and about fifteen minutes.

Each progressive disclosure in this transactional conversation reveals more and more of the desired outcome of the transaction: a service rendered or a product delivered. Transactional conversations have certain key traits: they’re direct, to the point, and economical. They quickly dispense with pleasantries.

Informational voice interactions

Meanwhile, some conversations are primarily about obtaining information. Though Alison might visit Crust Deluxe with the sole purpose of placing an order, she might not actually want to walk out with a pizza at all. She might be just as interested in whether they serve halal or kosher dishes, gluten-free options, or something else. Here, though we again have a prosocial mini-conversation at the beginning to establish politeness, we’re after much more.

Alison: Hey, how’s it going?

Burhan: Hi, welcome to Crust Deluxe! It’s cold out there. How can I help you?

Alison: Can I ask a few questions?

Burhan: Of course! Go right ahead.

Alison: Do you have any halal options on the menu?

Burhan: Absolutely! We can make any pie halal by request. We also have lots of vegetarian, ovo-lacto, and vegan options. Are you thinking about any other dietary restrictions?

Alison: What about gluten-free pizzas?

Burhan: We can definitely do a gluten-free crust for you, no problem, for both our deep-dish and thin-crust pizzas. Anything else I can answer for you?

Alison: That’s it for now. Good to know. Thanks!

Burhan: Anytime, come back soon!

This is a very different dialogue. Here, the goal is to get a certain set of facts. Informational conversations are investigative quests for the truth—research expeditions to gather data, news, or facts. Voice interactions that are informational might be more long-winded than transactional conversations by necessity. Responses tend to be lengthier, more informative, and carefully communicated so the customer understands the key takeaways.

Voice Interfaces

At their core, voice interfaces employ speech to support users in reaching their goals. But simply because an interface has a voice component doesn’t mean that every user interaction with it is mediated through voice. Because multimodal voice interfaces can lean on visual components like screens as crutches, we’re most concerned in this book with pure voice interfaces, which depend entirely on spoken conversation, lack any visual component whatsoever, and are therefore much more nuanced and challenging to tackle.

Though voice interfaces have long been integral to the imagined future of humanity in science fiction, only recently have those lofty visions become fully realized in genuine voice interfaces.

Interactive voice response (IVR) systems

Though written conversational interfaces have been fixtures of computing for many decades, voice interfaces first emerged in the early 1990s with text-to-speech (TTS) dictation programs that recited written text aloud, as well as speech-enabled in-car systems that gave directions to a user-provided address. With the advent of interactive voice response (IVR) systems, intended as an alternative to overburdened customer service representatives, we became acquainted with the first true voice interfaces that engaged in authentic conversation.

IVR systems allowed organizations to reduce their reliance on call centers but soon became notorious for their clunkiness. Commonplace in the corporate world, these systems were primarily designed as metaphorical switchboards to guide customers to a real phone agent (“Say Reservations to book a flight or check an itinerary”); chances are you will enter a conversation with one when you call an airline or hotel conglomerate. Despite their functional issues and users’ frustration with their inability to speak to an actual human right away, IVR systems proliferated in the early 1990s across a variety of industries (http://bkaprt.com/vcu36/01-02, PDF).

While IVR systems are great for highly repetitive, monotonous conversations that generally don’t veer from a single format, they have a reputation for less scintillating conversation than we’re used to in real life (or even in science fiction).

Screen readers

Parallel to the evolution of IVR systems was the invention of the screen reader, a tool that transcribes visual content into synthesized speech. For Blind or visually impaired website users, it’s the predominant method of interacting with text, multimedia, or form elements. Screen readers represent perhaps the closest equivalent we have today to an out-of-the-box implementation of content delivered through voice.

Among the first screen readers known by that moniker was the Screen Reader for the BBC Micro and NEC Portable developed by the Research Centre for the Education of the Visually Handicapped (RCEVH) at the University of Birmingham in 1986 (http://bkaprt.com/vcu36/01-03). That same year, Jim Thatcher created the first IBM Screen Reader for text-based computers, later recreated for computers with graphical user interfaces (GUIs) (http://bkaprt.com/vcu36/01-04).

With the rapid growth of the web in the 1990s, the demand for accessible tools for websites exploded. Thanks to the introduction of semantic HTML and especially ARIA roles beginning in 2008, screen readers started facilitating speedy interactions with web pages that ostensibly allow disabled users to traverse the page as an aural and temporal space rather than a visual and physical one. In other words, screen readers for the web “provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information,” writes Aaron Gustafson in A List Apart. “At least they do when documents are authored thoughtfully” (http://bkaprt.com/vcu36/01-05).

Though deeply instructive for voice interface designers, there’s one significant problem with screen readers: they’re difficult to use and unremittingly verbose. The visual structures of websites and web navigation don’t translate well to screen readers, sometimes resulting in unwieldy pronouncements that name every manipulable HTML element and announce every formatting change. For many screen reader users, working with web-based interfaces exacts a cognitive toll.

In Wired, accessibility advocate and voice engineer Chris Maury considers why the screen reader experience is ill-suited to users relying on voice:

From the beginning, I hated the way that Screen Readers work. Why are they designed the way they are? It makes no sense to present information visually and then, and only then, translate that into audio. All of the time and energy that goes into creating the perfect user experience for an app is wasted, or even worse, adversely impacting the experience for blind users. (http://bkaprt.com/vcu36/01-06)

In many cases, well-designed voice interfaces can speed users to their destination better than long-winded screen reader monologues. After all, visual interface users have the benefit of darting around the viewport freely to find information, ignoring areas irrelevant to them. Blind users, meanwhile, are obligated to listen to every utterance synthesized into speech and therefore prize brevity and efficiency. Disabled users who have long had no choice but to employ clunky screen readers may find that voice interfaces, particularly more modern voice assistants, offer a more streamlined experience.

Voice assistants

When we think of voice assistants (the subset of voice interfaces now commonplace in living rooms, smart homes, and offices), many of us immediately picture HAL from 2001: A Space Odyssey or hear Majel Barrett’s voice as the omniscient computer in Star Trek. Voice assistants are akin to personal concierges that can answer questions, schedule appointments, conduct searches, and perform other common day-to-day tasks. And they’re rapidly gaining more attention from accessibility advocates for their assistive potential.

Before the earliest IVR systems found success in the enterprise, Apple published a demonstration video in 1987 depicting the Knowledge Navigator, a voice assistant that could transcribe spoken words and recognize human speech to a great degree of accuracy. Then, in 2001, Tim Berners-Lee and others formulated their vision for a Semantic Web “agent” that would perform typical errands like “checking calendars, making appointments, and finding locations” (http://bkaprt.com/vcu36/01-07, behind paywall). It wasn’t until 2011 that Apple’s Siri finally entered the picture, making voice assistants a tangible reality for consumers.

Thanks to the plethora of voice assistants available today, there is considerable variation in how programmable and customizable some voice assistants are compared to others (Fig 1.1). At one extreme, everything except vendor-provided features is locked down; for example, at the time of their release, the core functionality of Apple’s Siri and Microsoft’s Cortana couldn’t be extended beyond their existing capabilities. Even today, it isn’t possible to program Siri to perform arbitrary functions, because there’s no means by which developers can interact with Siri at a low level, apart from predefined categories of tasks like sending messages, hailing rideshares, making restaurant reservations, and certain others.

At the opposite end of the spectrum, voice assistants like Amazon Alexa and Google Home offer a core foundation on which developers can build custom voice interfaces. For this reason, programmable voice assistants that lend themselves to customization and extensibility are becoming increasingly popular among developers who feel stifled by the limitations of Siri and Cortana. Amazon offers the Alexa Skills Kit, a developer framework for building custom voice interfaces for Amazon Alexa, while Google Home offers the ability to program custom Google Assistant actions. Today, users can choose from among thousands of custom-built skills and actions within the Amazon Alexa and Google Assistant ecosystems.

Fig 1.1: Voice assistants like Amazon Alexa and Google Home tend to be more programmable, and thus more flexible, than their counterpart Apple Siri.

As corporations like Amazon, Apple, Microsoft, and Google continue to stake their territory, they’re also selling and open-sourcing an unprecedented array of tools and frameworks for designers and developers that aim to make building voice interfaces as easy as possible, even without code.

Often by necessity, voice assistants like Amazon Alexa tend to be monochannel—they’re tightly coupled to a device and can’t be accessed on a computer or smartphone instead. By contrast, many development platforms like Google’s Dialogflow have introduced omnichannel capabilities so users can build a single conversational interface that then manifests as a voice interface, textual chatbot, and IVR system upon deployment. I don’t prescribe any specific implementation approaches in this design-focused book, but in Chapter 4 we’ll get into some of the implications these variables might have on the way you build out your design artifacts.

Voice Content

Simply put, voice content is content delivered through voice. To preserve what makes human conversation so compelling in the first place, voice content needs to be free-flowing and organic, contextless and concise—everything written content isn’t.

Our world is replete with voice content in various forms: screen readers reciting website content, voice assistants rattling off a weather forecast, and automated phone hotline responses governed by IVR systems. In this book, we’re most concerned with content delivered auditorily—not as an option, but as a necessity.

For many of us, our first foray into informational voice interfaces will be to deliver content to users. There’s only one problem: any content we already have isn’t in any way ready for this new habitat. So how do we make the content trapped on our websites more conversational? And how do we write new copy that lends itself to voice interactions?

Lately, we’ve begun slicing and dicing our content in unprecedented ways. Websites are, in many respects, colossal vaults of what I call macrocontent: lengthy prose that can extend for infinitely scrollable miles in a browser window, like microfilm viewers of newspaper archives. Back in 2002, well before the present-day ubiquity of voice assistants, technologist Anil Dash defined microcontent as permalinked pieces of content that stay legible regardless of environment, such as email or text messages:

A day’s weather forcast [sic], the arrival and departure times for an airplane flight, an abstract from a long publication, or a single instant message can all be examples of microcontent. (http://bkaprt.com/vcu36/01-08)

I’d update Dash’s definition of microcontent to include all examples of bite-sized content that go well beyond written communiqués. After all, today we encounter microcontent in interfaces where a small snippet of copy is displayed alone, unmoored from the browser, like a textbot confirmation of a restaurant reservation. Microcontent offers the best opportunity to gauge how your content can be stretched to the very edges of its capabilities, informing delivery channels both established and novel.

As microcontent, voice content is unique because it’s an example of how content is experienced in time rather than in space. We can glance at a digital sign underground for an instant and know when the next train is arriving, but voice interfaces hold our attention captive for periods of time that we can’t easily escape or skip, something screen reader users are all too familiar with.

Because microcontent is fundamentally made up of isolated blobs with no relation to the channels where they’ll eventually end up, we need to ensure that our microcontent truly performs well as voice content—and that means focusing on the two most important traits of robust voice content: voice content legibility and voice content discoverability.

Fundamentally, the legibility and discoverability of our voice content both have to do with how voice content manifests in perceived time and space.

Designing for the Unexpected

I’m not sure when I first heard Jeffrey Zeldman’s quote about designing for “situations you haven’t imagined,” but it’s something that has stayed with me over the years. How do you create services for situations you can’t imagine? Or design products that work on devices yet to be invented?

Flash, Photoshop, and responsive design

When I first started designing websites, my go-to software was Photoshop. I created a 960px canvas and set about creating a layout that I would later drop content in. The development phase was about attaining pixel-perfect accuracy using fixed widths, fixed heights, and absolute positioning.

Ethan Marcotte’s talk at An Event Apart and subsequent article “Responsive Web Design” in A List Apart in 2010 changed all this. I was sold on responsive design as soon as I heard about it, but I was also terrified. The pixel-perfect designs full of magic numbers that I had previously prided myself on producing were no longer good enough.

The fear wasn’t helped by my first experience with responsive design. My first project was to take an existing fixed-width website and make it responsive. What I learned the hard way was that you can’t just add responsiveness at the end of a project. To create fluid layouts, you need to plan throughout the design phase.

A new way to design

Designing responsive or fluid sites has always been about removing limitations, producing content that can be viewed on any device. It relies on the use of percentage-based layouts, which I initially achieved with native CSS and utility classes:

.column-span-6 {
  width: 49%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

.column-span-4 {
  width: 32%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

.column-span-3 {
  width: 24%;
  float: left;
  margin-right: 0.5%;
  margin-left: 0.5%;
}

Then I moved to Sass so I could take advantage of @include to reuse repeated blocks of code and move back to more semantic markup:

.logo {
  @include colSpan(6);
}

.search {
  @include colSpan(3);
}

.social-share {
  @include colSpan(3);
}

Media queries

The second ingredient for responsive design is media queries. Without them, content would shrink to fit the available space regardless of whether that content remained readable (the exact opposite problem occurred with the introduction of a mobile-first approach).

Wireframes showing three boxes at a large size, and three very narrow boxes at a mobile size
Components becoming too small at mobile breakpoints

Media queries prevented this by allowing us to add breakpoints where the design could adapt. Like most people, I started out with three breakpoints: one for desktop, one for tablets, and one for mobile. Over the years, I added more and more for phablets, wide screens, and so on. 
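As a minimal sketch (the breakpoint value is illustrative, not one of my original breakpoints), a column from the earlier grid might stack on small screens and only float into place once there’s room:

.column-span-4 {
  width: 100%;
}

@media (min-width: 768px) {
  .column-span-4 {
    width: 32%;
    float: left;
    margin-right: 0.5%;
    margin-left: 0.5%;
  }
}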

For years, I happily worked this way and improved both my design and front-end skills in the process. The only problem I encountered was making changes to content: with our Sass grid system in place, there was no way for the site owners to add content without amending the markup—something a small business owner might struggle with. This was because each row in the grid was defined using a div as a container, so adding content meant creating new row markup, which required a level of HTML knowledge.

Row markup was a staple of early responsive design, present in all the widely used frameworks like Bootstrap and Skeleton.

<section class="row">
  <div class="column-span-4">1 of 7</div>
  <div class="column-span-4">2 of 7</div>
  <div class="column-span-4">3 of 7</div>
</section>

<section class="row">
  <div class="column-span-4">4 of 7</div>
  <div class="column-span-4">5 of 7</div>
  <div class="column-span-4">6 of 7</div>
</section>

<section class="row">
  <div class="column-span-4">7 of 7</div>
</section>
Wireframe showing three rows of boxes
Components placed in the rows of a Sass grid

Another problem arose as I moved from a design agency building websites for small- to medium-sized businesses, to larger in-house teams where I worked across a suite of related sites. In those roles I started to work much more with reusable components. 

Our reliance on media queries resulted in components that were tied to common viewport sizes. If the goal of component libraries is reuse, this is a real problem: you can only use these components if the devices you’re designing for correspond to the viewport sizes used in the pattern library—never quite hitting that goal of supporting “devices that don’t yet exist.”

Then there’s the problem of space. Media queries allow components to adapt based on the viewport size, but what if I put a component into a sidebar, like in the figure below?

Wireframes showing different configurations of boxes at three different sizes
Components responding to the viewport width with media queries

Container queries: our savior or a false dawn?

Container queries have long been touted as an improvement upon media queries, but at the time of writing are unsupported in most browsers. There are JavaScript workarounds, but they can create dependency and compatibility issues. The basic theory underlying container queries is that elements should change based on the size of their parent container and not the viewport width, as seen in the following illustrations.

Wireframes showing different configurations of boxes at different sizes
Components responding to their parent container with container queries

One of the biggest arguments in favor of container queries is that they help us create components or design patterns that are truly reusable because they can be picked up and placed anywhere in a layout. This is an important step in moving toward a form of component-based design that works at any size on any device.

In other words, responsive components to replace responsive layouts.

Container queries will help us move from designing pages that respond to the browser or device size to designing components that can be placed in a sidebar or in the main content, and respond accordingly.
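For illustration, here’s a rough sketch of the proposed syntax (still subject to change while the spec matures, and the selectors are hypothetical):

.sidebar,
.main-content {
  container-type: inline-size;
}

@container (min-width: 400px) {
  .card {
    display: flex;
  }
}

The idea is that the .card component lays out according to the space its container gives it, whether that container is a narrow sidebar or the main content area.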

My concern is that we are still using layout to determine when a design needs to adapt. This approach will always be restrictive, as we will still need pre-defined breakpoints. For this reason, my main question with container queries is, How would we decide when to change the CSS used by a component? 

A component library removed from context and real content is probably not the best place for that decision. 

As the diagrams below illustrate, we can use container queries to create designs for specific container widths, but what if I want to change the design based on the image size or ratio?

Wireframes showing different layouts at 600px and 400px
Cards responding to their parent container with container queries
Wireframes showing different configurations of content at the same size
Cards responding based on their own content

In this example, the dimensions of the container are not what should dictate the design; rather, the image is.

It’s hard to say for sure whether container queries will be a success story until we have solid cross-browser support for them. Responsive component libraries would definitely evolve how we design and would improve the possibilities for reuse and design at scale. But maybe we will always need to adjust these components to suit our content.

CSS is changing

Whilst the container query debate rumbles on, there have been numerous advances in CSS that change the way we think about design. The days of fixed-width elements measured in pixels and floated div elements used to cobble layouts together are long gone, consigned to history along with table layouts. Flexbox and CSS Grid have revolutionized layouts for the web. We can now create elements that wrap onto new rows when they run out of space, not when the device changes.

.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fit, 450px);
  gap: 10px;
}

The repeat() function paired with auto-fit or auto-fill allows us to specify how much space each column should use while leaving it up to the browser to decide when to spill the columns onto a new line. Similar things can be achieved with Flexbox, as elements can wrap over multiple rows and “flex” to fill available space. 

.wrapper {
  display: flex;
  flex-wrap: wrap;
  justify-content: space-between;
}

.child {
  flex-basis: 32%;
  margin-bottom: 20px;
}

The biggest benefit of all this is you don’t need to wrap elements in container rows. Without rows, content isn’t tied to page markup in quite the same way, allowing for removals or additions of content without additional development.

A wireframe showing seven boxes in a larger container
A traditional Grid layout without the usual row containers

This is a big step forward when it comes to creating designs that allow for evolving content, but the real game changer for flexible designs is CSS Subgrid. 

Remember the days of crafting perfectly aligned interfaces, only for the customer to add an unbelievably long header almost as soon as they're given CMS access, like the illustration below?

Cards unable to respond to a sibling’s content changes

Subgrid allows elements to respond to adjustments in their own content and in the content of sibling elements, helping us create designs more resilient to change.

Wireframes showing several boxes with the contents aligned across boxes
Cards responding to content in sibling cards
.wrapper {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  grid-template-rows: auto 1fr auto;
  gap: 10px;
}

.sub-grid {
  display: grid;
  grid-row: span 3;
  grid-template-rows: subgrid; /* sets rows to parent grid */
}

CSS Grid allows us to separate layout and content, thereby enabling flexible designs, while Subgrid allows us to create designs that can adapt to suit morphing content. At the time of writing, Subgrid is only supported in Firefox, but the above code can be implemented behind an @supports feature query.
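A minimal sketch of that feature query, applying the subgrid value only in browsers that understand it:

.sub-grid {
  display: grid;
  grid-row: span 3;
}

@supports (grid-template-rows: subgrid) {
  .sub-grid {
    grid-template-rows: subgrid;
  }
}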

Intrinsic layouts 

I’d be remiss not to mention intrinsic layouts, the term coined by Jen Simmons to describe a mixture of new and old CSS features used to create layouts that respond to the available space.

Responsive layouts have flexible columns using percentages. Intrinsic layouts, on the other hand, use the fr unit to create flexible columns that won’t ever shrink so much that they render the content illegible.

fr units is a way to say I want you to distribute the extra space in this way, but...don’t ever make it smaller than the content that’s inside of it.

—Jen Simmons, “Designing Intrinsic Layouts”

Intrinsic layouts can also utilize a mixture of fixed and flexible units, allowing the content to dictate the space it takes up.

A slide from a presentation showing two boxes with max content and one with auto
Slide from “Designing Intrinsic Layouts” by Jen Simmons
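As a minimal sketch of the idea in that slide (the wrapper class is illustrative), content-based and flexible track sizes can sit side by side in one grid:

.wrapper {
  display: grid;
  grid-template-columns: max-content auto max-content;
  gap: 10px;
}

The two max-content columns take only the space their content demands, while the auto column flexes to fill whatever remains.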

What makes intrinsic design stand out is that it not only creates designs that can withstand future devices but also helps scale design without losing flexibility. Components and patterns can be lifted and reused without the prerequisite of having the same breakpoints or the same amount of content as in the previous implementation. 

We can now create designs that adapt to the space they have, the content within them, and the content around them. With an intrinsic approach, we can construct responsive components without depending on container queries.

Another 2010 moment?

This intrinsic approach should in my view be every bit as groundbreaking as responsive web design was ten years ago. For me, it’s another “everything changed” moment. 

But it doesn’t seem to be moving quite as fast; I haven’t yet had that same career-changing moment I had with responsive design, despite the widely shared and brilliant talk that brought it to my attention. 

One reason for that could be that I now work in a large organization, which is quite different from the design agency role I had in 2010. In my agency days, every new project was a clean slate, a chance to try something new. Nowadays, projects use existing tools and frameworks and are often improvements to existing websites with an existing codebase. 

Another could be that I feel more prepared for change now. In 2010 I was new to design in general; the shift was frightening and required a lot of learning. Also, an intrinsic approach isn’t exactly all-new; it’s about using existing skills and existing CSS knowledge in a different way. 

You can’t framework your way out of a content problem

Another reason for the slightly slower adoption of intrinsic design could be the lack of quick-fix framework solutions available to kick-start the change. 

Responsive grid systems were all over the place ten years ago. With a framework like Bootstrap or Skeleton, you had a responsive design template at your fingertips.

Intrinsic design and frameworks do not go hand in hand quite so well because the benefit of having a selection of units is a hindrance when it comes to creating layout templates. The beauty of intrinsic design is combining different units and experimenting with techniques to get the best for your content.

And then there are design tools. We probably all, at some point in our careers, used Photoshop templates for desktop, tablet, and mobile devices to drop designs in and show how the site would look at all three stages.

How do you do that now, with each component responding to content and layouts flexing as and when they need to? This type of design must happen in the browser, which personally I’m a big fan of. 

The debate about “whether designers should code” is another that has rumbled on for years. When designing a digital product, we should, at the very least, design for a best- and worst-case scenario when it comes to content. To do this in a graphics-based software package is far from ideal. In code, we can add longer sentences, more radio buttons, and extra tabs, and watch in real time as the design adapts. Does it still work? Is the design too reliant on the current content?

Personally, I look forward to the day intrinsic design is the standard for design, when a design component can be truly flexible and adapt to both its space and content with no reliance on device or container dimensions.

Content first 

Content is not constant. After all, to design for the unknown or unexpected we need to account for content changes like our earlier Subgrid card example that allowed the cards to respond to adjustments to their own content and the content of sibling elements.

Thankfully, there’s more to CSS than layout, and plenty of properties and values can help us put content first. Subgrid and pseudo-elements like ::first-line and ::first-letter help to separate design from markup so we can create designs that allow for changes.

Instead of old markup hacks like this—

  <span class="first-line">First line of text with different styling</span>...

—we can target content based on where it appears.

.element::first-line {
  font-size: 1.4em;
}

.element::first-letter {
  color: red;
}

Much bigger additions to CSS include logical properties, which change the way we construct designs by using logical dimensions (start and end) instead of physical ones (left and right), and math functions like min(), max(), and clamp(), which also appear in CSS Grid track sizing.

This flexibility allows for directional changes according to content, a common requirement when we need to present content in multiple languages. In the past, this was often achieved with Sass mixins but was often limited to switching from left-to-right to right-to-left orientation.

In the Sass version, directional variables need to be set.

$direction: rtl;
$opposite-direction: ltr;

$start-direction: right;
$end-direction: left;

These variables can be used as values—

body {
  direction: $direction;
  text-align: $start-direction;
}

—or as properties.

margin-#{$end-direction}: 10px;
padding-#{$start-direction}: 10px;

However, now we have native logical properties, removing the reliance on both Sass (or a similar tool) and pre-planning that necessitated using variables throughout a codebase. These properties also start to break apart the tight coupling between a design and strict physical dimensions, creating more flexibility for changes in language and in direction.

margin-inline-end: 10px;
padding-inline-start: 10px;

There are also native start and end values for properties like text-align, which means we can replace text-align: right with text-align: start.

Like the earlier examples, these properties help to build out designs that aren’t constrained to one language; the design will reflect the content’s needs.

Wireframe showing different text alignment options

Fixed and fluid 

We briefly covered the power of combining fixed and fluid widths in intrinsic layouts. The min() and max() functions are a similar concept, allowing you to pair a fixed value with a flexible alternative. 

For min() this means setting a fluid value capped by a fixed maximum.

.element {
  width: min(50%, 300px);
}
Wireframe showing a 300px box inside of an 800px box, and a 200px box inside of a 400px box

The element in the figure above will be 50% of its container as long as that 50% doesn’t exceed 300px.

For max() we can set a flexible value with a fixed minimum.

.element {
  width: max(50%, 300px);
}
Wireframe showing a 400px box inside of an 800px box, and a 300px box inside of a 400px box

Now the element will be 50% of its container as long as that 50% is at least 300px. This means we can set limits but allow content to react to the available space. 

The clamp() function builds on this by allowing us to set a preferred value with a third parameter. Now we can allow the element to shrink or grow if it needs to without getting to a point where it becomes unusable.

.element {
  width: clamp(300px, 50%, 600px);
}
Wireframe showing an 800px box inside of a 1400px box, a 400px box inside of an 800px box, and a 300px box inside of a 400px box

This time, the element’s width will be 50% (the preferred value) of its container but never less than 300px and never more than 600px.

With these techniques, we have a content-first approach to responsive design. We can separate content from markup, meaning the changes users make will not affect the design. We can start to future-proof designs by planning for unexpected changes in language or direction. And we can increase flexibility by setting desired dimensions alongside flexible alternatives, allowing for more or less content to be displayed correctly.

Situation first

Thanks to what we’ve discussed so far, we can cover device flexibility by changing our approach, designing around content and space instead of catering to devices. But what about that last bit of Jeffrey Zeldman’s quote, “...situations you haven’t imagined”?

It’s a very different thing to design for someone seated at a desktop computer as opposed to someone using a mobile phone and moving through a crowded street in glaring sunshine. Situations and environments are hard to plan for or predict because they change as people react to their own unique challenges and tasks.

This is why choice is so important. One size never fits all, so we need to design for multiple scenarios to create equal experiences for all our users.

Thankfully, there is a lot we can do to provide choice.

Responsible design 

“There are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.”

—Chris Ashton, “I Used the Web for a Day on a 50 MB Budget”

One of the biggest assumptions we make is that the people interacting with our designs have a good Wi-Fi connection and a wide-screen monitor. But in the real world, our users may be commuters traveling on trains or other forms of transport, using smaller mobile devices that experience drops in connectivity. There is nothing more frustrating than a web page that won’t load, but there are ways we can help users use less data or deal with sporadic connectivity.

The srcset attribute allows the browser to decide which image file to serve. This means we can create smaller, cropped images to display on mobile devices, in turn using less bandwidth and less data.

<img src="small.jpg"
     srcset="large.jpg 1024w,
             medium.jpg 640w,
             small.jpg 320w"
     alt="Image alt text" />

Preloading, via <link rel="preload">, can also help us think about how and when media is downloaded. It tells the browser about critical assets that need to be downloaded with high priority, improving perceived performance and the user experience. 

<link rel="stylesheet" href="style.css"> <!--Standard stylesheet markup-->
<link rel="preload" href="style.css" as="style"> <!--Preload stylesheet markup-->

There’s also native lazy loading, which indicates assets that should only be downloaded when they are needed.

<img src="image.png" loading="lazy" alt="…">

With srcset, preload, and lazy loading, we can start to tailor a user’s experience based on the situation they find themselves in. What none of this does, however, is allow the user themselves to decide what they want downloaded, as the decision is usually the browser’s to make. 

So how can we put users in control?

The return of media queries 

Media queries have always been about much more than device sizes. They allow content to adapt to different situations, with screen size being just one of them.

We’ve long been able to check for media types like print and speech and features such as hover, resolution, and color. These checks allow us to provide options that suit more than one scenario; it’s less about one-size-fits-all and more about serving adaptable content. 
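As a minimal sketch (the selectors are illustrative), we can adapt to capabilities and media types rather than to widths:

@media (hover: hover) {
  .nav-link:hover {
    text-decoration: underline;
  }
}

@media print {
  .site-nav {
    display: none;
  }
}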

As of this writing, the Media Queries Level 5 spec is still under development. It introduces some really exciting queries that in the future will help us design for multiple other unexpected situations.

For example, there’s a light-level feature that allows you to modify styles if a user is in sunlight or darkness. Paired with custom properties, these features allow us to quickly create designs or themes for specific environments.

@media (light-level: normal) {
  :root {
    --background-color: #fff;
    --text-color: #0b0c0c;
  }
}

@media (light-level: dim) {
  :root {
    --background-color: #efd226;
    --text-color: #0b0c0c;
  }
}

Another key feature of the Level 5 spec is personalization. Instead of creating designs that are the same for everyone, users can choose what works for them. This is achieved by using features like prefers-reduced-data, prefers-color-scheme, and prefers-reduced-motion, the latter two of which already enjoy broad browser support. These features tap into preferences set via the operating system or browser so people don’t have to spend time making each site they visit more usable. 
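A minimal sketch of two of those preferences in action (the values are illustrative):

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: #0b0c0c;
    --text-color: #fff;
  }
}

@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
  }
}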

Media queries like this go beyond choices made by a browser to grant more control to the user.

Expect the unexpected

In the end, the one thing we should always expect is for things to change. Devices in particular change faster than we can keep up, with foldable screens already on the market.

We can’t design the same way we have for this ever-changing landscape, but we can design for content. By putting content first and allowing that content to adapt to whatever space surrounds it, we can create more robust, flexible designs that increase the longevity of our products. 

A lot of the CSS discussed here is about moving away from layouts and putting content at the heart of design. From responsive components to fixed and fluid units, there is so much more we can do to take a more intrinsic approach. Even better, we can test these techniques during the design phase by designing in the browser and watching how our designs adapt in real time.

When it comes to unexpected situations, we need to make sure our products are usable when people need them, whenever and wherever that might be. We can move closer to achieving this by involving users in our design decisions, by creating choice via browsers, and by giving control to our users with user-preference-based media queries. 

Good design for the unexpected should allow for change, provide choice, and give control to those we serve: our users themselves.

Asynchronous Design Critique: Getting Feedback

“Any comment?” is probably one of the worst ways to ask for feedback. It’s vague and open ended, and it doesn’t provide any indication of what we’re looking for. Getting good feedback starts earlier than we might expect: it starts with the request. 

It might seem counterintuitive to start the process of receiving feedback with a question, but that makes sense if we realize that getting feedback can be thought of as a form of design research. In the same way that we wouldn’t do any research without the right questions to get the insights that we need, the best way to ask for feedback is also to craft sharp questions.

Design critique is not a one-shot process. Sure, any good feedback workflow continues until the project is finished, but this is particularly true for design because design work continues iteration after iteration, from a high level to the finest details. Each level needs its own set of questions.

And finally, as with any good research, we need to review what we got back, get to the core of its insights, and take action. Question, iteration, and review. Let’s look at each of those.

The question

Being open to feedback is essential, but we need to be precise about what we’re looking for. Just saying “Any comment?”, “What do you think?”, or “I’d love to get your opinion” at the end of a presentation—whether it’s in person, over video, or through a written post—is likely to get a number of varied opinions or, even worse, to get everyone to follow the direction of the first person who speaks up. And then we get frustrated, because vague questions like those can turn a high-level flows review into people commenting on the borders of buttons instead. That might be a hearty topic in itself, so it might be hard at that point to redirect the team to the subject that you had wanted to focus on.

But how do we get into this situation? It’s a mix of factors. One is that we don’t usually consider asking as a part of the feedback process. Another is how natural it is to just leave the question implied, expecting the others to be on the same page. Another is that in nonprofessional discussions, there’s often no need to be that precise. In short, we tend to underestimate the importance of the questions, so we don’t work on improving them.

The act of asking good questions guides and focuses the critique. It’s also a form of consent: it makes it clear that you’re open to comments and what kind of comments you’d like to get. It puts people in the right mental state, especially in situations when they weren’t expecting to give feedback.

There isn’t a single best way to ask for feedback. It just needs to be specific, and specificity can take many shapes. A model for design critique that I’ve found particularly useful in my coaching is the one of stage versus depth.

A chart showing Depth on one axis and Stage on another axis, with Depth decreasing as Stage increases

“Stage” refers to each of the steps of the process—in our case, the design process. In progressing from user research to the final design, the kind of feedback evolves. But within a single step, one might still review whether some assumptions are correct and whether there’s been a proper translation of the amassed feedback into updated designs as the project has evolved. A starting point for potential questions could derive from the layers of user experience. What do you want to know: Project objectives? User needs? Functionality? Content? Interaction design? Information architecture? UI design? Navigation design? Visual design? Branding?

Here are a few example questions, precise and to the point, that refer to different layers:

  • Functionality: Is automating account creation desirable?
  • Interaction design: Take a look through the updated flow and let me know whether you see any steps or error states that I might’ve missed.
  • Information architecture: We have two competing bits of information on this page. Is the structure effective in communicating them both?
  • UI design: What are your thoughts on the error counter at the top of the page that makes sure that you see the next error, even if the error is out of the viewport? 
  • Navigation design: From research, we identified these second-level navigation items, but once you’re on the page, the list feels too long and hard to navigate. Are there any suggestions to address this?
  • Visual design: Are the sticky notifications in the bottom-right corner visible enough?

The other axis of specificity is about how deep you’d like to go on what’s being presented. For example, we might have introduced a new end-to-end flow, but there was a specific view that you found particularly challenging and you’d like a detailed review of that. This can be especially useful from one iteration to the next where it’s important to highlight the parts that have changed.

There are other things that we can consider when we want to achieve more specific—and more effective—questions.

A simple trick is to remove generic qualifiers from your questions like “good,” “well,” “nice,” “bad,” “okay,” and “cool.” For example, asking, “When the block opens and the buttons appear, is this interaction good?” might look specific, but you can spot the “good” qualifier, and convert it to an even better question: “When the block opens and the buttons appear, is it clear what the next action is?”

Sometimes we actually do want broad feedback. That’s rare, but it can happen. In that sense, you might still make it explicit that you’re looking for a wide range of opinions, whether at a high level or with details. Or maybe just say, “At first glance, what do you think?” so that it’s clear that what you’re asking is open ended but focused on someone’s impression after their first five seconds of looking at it.

Sometimes the project is particularly expansive, and some areas may have already been explored in detail. In these situations, it might be useful to explicitly say that some parts are already locked in and aren’t open to feedback. It’s not something that I’d recommend in general, but I’ve found it useful to avoid falling again into rabbit holes of the sort that might lead to further refinement but aren’t what’s most important right now.

Asking specific questions can completely change the quality of the feedback that you receive. People with less refined critique skills will now be able to offer more actionable feedback, and even expert designers will welcome the clarity and efficiency that comes from focusing only on what’s needed. It can save a lot of time and frustration.

The iteration

Design iterations are probably the most visible part of design work, and they provide a natural checkpoint for feedback. Yet a lot of design tools with inline commenting tend to show changes as a single fluid stream in the same file: they make conversations disappear once they’re resolved, update shared UI components automatically, and always show the latest version—unless these would-be helpful features are manually turned off. The implied goal of these tools seems to be to arrive at one final copy with all discussions closed, probably because they inherited their patterns from how written documents are collaboratively edited. That’s probably not the best way to approach design critiques, though I don’t want to be too prescriptive here: it could work for some teams.

The asynchronous design-critique approach that I find most effective is to create explicit checkpoints for discussion. I’m going to use the term iteration post for this. It refers to a write-up or presentation of the design iteration, followed by a discussion thread of some kind. Any platform that can accommodate this structure will work. By the way, when I refer to a “write-up or presentation,” I’m including video recordings or other media too: as long as it’s asynchronous, it works.

Using iteration posts has many advantages:

  • It creates a rhythm in the design work so that the designer can review feedback from each iteration and prepare for the next.
  • It makes decisions visible for future review, and conversations are likewise always available.
  • It creates a record of how the design changed over time.
  • Depending on the tool, it might also make it easier to collect feedback and act on it.

These posts of course don’t mean that no other feedback approach should be used, just that iteration posts could be the primary rhythm for a remote design team to use. And other feedback approaches (such as live critique, pair designing, or inline comments) can build from there.

I don’t think there’s a standard format for iteration posts. But there are a few high-level elements that make sense to include as a baseline:

  1. The goal
  2. The design
  3. The list of changes
  4. The questions

Each project is likely to have a goal, and hopefully it’s something that’s already been summarized in a single sentence somewhere else, such as the client brief, the product manager’s outline, or the project owner’s request. So this is something that I’d repeat in every iteration post—literally copying and pasting it. The idea is to provide context and to repeat what’s essential to make each iteration post complete so that there’s no need to find information spread across multiple posts. If I want to know about the latest design, the latest iteration post will have all that I need.

This copy-and-paste part introduces another relevant concept: alignment comes from repetition. So having posts that repeat information is actually very effective toward making sure that everyone is on the same page.

The design is then the actual series of information-architecture outlines, diagrams, flows, maps, wireframes, screens, visuals, and any other kind of design work that’s been done. In short, it’s any design artifact. For the final stages of work, I prefer the term blueprint to emphasize that I’ll be showing full flows instead of individual screens to make it easier to understand the bigger picture. 

It can also be useful to label the artifacts with clear titles because that can make it easier to refer to them. Write the post in a way that helps people understand the work. It’s not too different from organizing a good live presentation. 

For an efficient discussion, you should also include a bullet list of the changes from the previous iteration to let people focus on what’s new, which can be especially useful for larger pieces of work where keeping track, iteration after iteration, could become a challenge.

And finally, as noted earlier, it’s essential that you include a list of the questions to drive the design critique in the direction you want. Doing this as a numbered list can also help make it easier to refer to each question by its number.
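To make this concrete, here’s a sketch of what an iteration post might look like (the project and all of its details are invented for illustration):

Iteration 3: Checkout redesign

Goal: Reduce drop-off at the payment step (from the client brief).

Design: Updated flow blueprint and screens for the merged payment form.

Changes since the last iteration:

  • Merged the shipping and billing address forms
  • Moved the error summary above the form

Questions:

  1. Interaction design: Does merging the two forms hide any error states we still need?
  2. Visual design: Is the error summary prominent enough at the top of the page?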

Not all iterations are the same. Earlier iterations don’t need to be as tightly focused—they can be more exploratory and experimental, maybe even breaking some of the design-language guidelines to see what’s possible. Then later, the iterations start settling on a solution and refining it until the design process reaches its end and the feature ships.

I want to highlight that even if these iteration posts are written and conceived as checkpoints, by no means do they need to be exhaustive. A post might be a draft—just a concept to get a conversation going—or it could be a cumulative list of each feature that was added over the course of each iteration until the full picture is done.

Over time, I also started using specific labels for incremental iterations: i1, i2, i3, and so on. This might look like a minor labeling tip, but it can help in multiple ways:

  • Unique—It’s a clear unique marker. Within each project, one can easily say, “This was discussed in i4,” and everyone knows where they can go to review things.
  • Unassuming—It works like versions (such as v1, v2, and v3), but where versions create the impression of something big, exhaustive, and complete, iterations are free to be exploratory, incomplete, partial.
  • Future proof—It resolves the “final” naming problem that you can run into with versions. No more files named “final final complete no-really-its-done.” Within each project, the largest number always represents the latest iteration.

To mark when a design is complete enough to be worked on, even if some bits still need attention and further iterations, the wording release candidate (RC) can be used: “with i8, we reached RC” or “i12 is an RC.”

The review

What usually happens during a design critique is an open discussion, with a back and forth between people that can be very productive. This approach is particularly effective during live, synchronous feedback. But when we work asynchronously, it’s more effective to use a different approach: we can shift to a user-research mindset. Written feedback from teammates, stakeholders, or others can be treated as if it were the result of user interviews and surveys, and we can analyze it accordingly.

This shift has some major benefits that make asynchronous feedback particularly effective, especially around these friction points:

  1. It removes the pressure to reply to everyone.
  2. It reduces the frustration from swoop-by comments.
  3. It lessens our personal stake.

The first friction point is feeling pressure to reply to every single comment. Sometimes we write the iteration post and get replies from our team. It’s just a few of them, it’s easy, and it doesn’t feel like a problem. But other times, some solutions might require more in-depth discussions, and the number of replies can quickly increase, which can create tension between trying to be a good team player by replying to everyone and doing the next design iteration. This might be especially true if the person who’s replying is a stakeholder or someone directly involved in the project who we feel we need to listen to. We need to accept that this pressure is absolutely normal, and it’s human nature to try to accommodate people who we care about. Sometimes replying to all comments can be effective, but if we treat a design critique more like user research, we realize that we don’t have to reply to every comment, and in asynchronous spaces, there are alternatives:

  • One is to let the next iteration speak for itself. When the design evolves and we post a follow-up iteration, that’s the reply. You might tag all the people who were involved in the previous discussion, but even that’s a choice, not a requirement. 
  • Another is to briefly reply to acknowledge each comment, such as “Understood. Thank you,” “Good points—I’ll review,” or “Thanks. I’ll include these in the next iteration.” In some cases, this could also be just a single top-level comment along the lines of “Thanks for all the feedback everyone—the next iteration is coming soon!”
  • Another is to provide a quick summary of the comments before moving on. Depending on your workflow, this can be particularly useful as it can provide a simplified checklist that you can then use for the next iteration.

The second friction point is the swoop-by comment: the kind of feedback that comes from someone outside the project or team who might not be aware of the context, restrictions, decisions, or requirements—or of the previous iterations’ discussions. Swoop-by comments often trigger the simple thought “We’ve already discussed this…”, and it can be frustrating to have to repeat the same reply over and over. On their side, one can hope that swoop-by commenters learn to acknowledge what they’re doing and become more conscious about outlining where they’re coming from.

Let’s begin by acknowledging again that there’s no need to reply to every comment. If, however, replying to a previously litigated point might be useful, a short reply with a link to the previous discussion for extra details is usually enough. Remember, alignment comes from repetition, so it’s okay to repeat things sometimes!

Swoop-by comments can still be useful for two reasons: they might point out something that still isn’t clear, and they have the potential to stand in for the point of view of a user who’s seeing the design for the first time. Sure, you’ll still be frustrated, but that framing might at least help in dealing with it.

The third friction point is the personal stake we could have with the design, which could make us feel defensive if the review were to feel more like a discussion. Treating feedback as user research helps us create a healthy distance between the people giving us feedback and our ego (because yes, even if we don’t want to admit it, it’s there). And ultimately, treating everything in aggregated form allows us to better prioritize our work.

Always remember that while you need to listen to stakeholders, project owners, and specific advice, you don’t have to accept every piece of feedback. You have to analyze it and make a decision that you can justify, but sometimes “no” is the right answer. 

As the designer leading the project, you’re in charge of that decision. Ultimately, everyone has their specialty, and as the designer, you’re the one who has the most knowledge and the most context to make the right decision. And by listening to the feedback that you’ve received, you’re making sure that it’s also the best and most balanced decision.

Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.

Asynchronous Design Critique: Giving Feedback

Feedback, in whichever form it takes, and whatever it may be called, is one of the most effective soft skills that we have at our disposal to collaboratively get our designs to a better place while growing our own skills and perspectives.

Feedback is also one of the most underestimated tools, and often by assuming that we’re already good at it, we settle, forgetting that it’s a skill that can be trained, grown, and improved. Poor feedback can create confusion in projects, bring down morale, and affect trust and team collaboration over the long term. Quality feedback can be a transformative force. 

Practicing our skills is surely a good way to improve, but the learning gets even faster when it’s paired with a good foundation that channels and focuses the practice. What are some foundational aspects of giving good feedback? And how can feedback be adjusted for remote and distributed work environments? 

On the web, we can identify a long tradition of asynchronous feedback: from the early days of open source, code was shared and discussed on mailing lists. Today, developers engage on pull requests, designers comment in their favorite design tools, project managers and scrum masters exchange ideas on tickets, and so on.

Design critique is often the name used for a type of feedback that’s provided to make our work better, collaboratively. So it shares a lot of the principles with feedback in general, but it also has some differences.

The content

The foundation of every good critique is the feedback’s content, so that’s where we need to start. There are many models that you can use to shape your content. The one that I personally like best—because it’s clear and actionable—is this one from Lara Hogan.

An equation: Observation plus impact plus question equals actionable feedback.

While this equation is generally used to give feedback to people, it also fits really well in a design critique because it ultimately answers some of the core questions that we work on: What? Where? Why? How? Imagine that you’re giving some feedback about some design work that spans multiple screens, like an onboarding flow: there are some pages shown, a flow blueprint, and an outline of the decisions made. You spot something that could be improved. If you keep the three elements of the equation in mind, you’ll have a mental model that can help you be more precise and effective.

Here is a comment that could be given as a part of some feedback, and it might look reasonable at a first glance: it seems to superficially fulfill the elements in the equation. But does it?

Not sure about the buttons’ styles and hierarchy—it feels off. Can you change them?

Observation for design feedback doesn’t just mean pointing out which part of the interface your feedback refers to; it also means offering a perspective that’s as specific as possible. Are you providing the user’s perspective? Your expert perspective? A business perspective? The project manager’s perspective? A first-time user’s perspective?

When I see these two buttons, I expect one to go forward and one to go back.

Impact is about the why. Just pointing out a UI element might sometimes be enough if the issue is obvious, but more often than not, you should add an explanation of what you’re pointing out.

When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow.

The question approach is meant to provide open guidance by eliciting the critical thinking in the designer receiving the feedback. Notably, in Lara’s equation she provides a second approach: request, which instead provides guidance toward a specific solution. While that’s a viable option for feedback in general, for design critiques, in my experience, defaulting to the question approach usually reaches the best solutions because designers are generally more comfortable in being given an open space to explore.

The difference between the two can be exemplified with, for the question approach:

When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Would it make sense to unify them?

Or, for the request approach:

When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same pair of forward and back buttons.

At this point, in some situations, it might be useful to add an extra why: an explanation of why you consider the given suggestion to be better.

When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.

Choosing the question approach or the request approach can also at times be a matter of personal preference. A while ago, I was putting a lot of effort into improving my feedback: I did rounds of anonymous feedback, and I reviewed feedback with other people. After a few rounds of this work, a year on, I got a positive response: my feedback came across as effective and grounded. Until I changed teams. To my shock, my next round of feedback from one specific person wasn’t that great. The reason was that I had deliberately avoided being prescriptive in my advice, because the people I had previously worked with preferred the open-ended question format over the request style of suggestions. But on this new team, there was one person who instead preferred specific guidance. So I adapted my feedback for them to include requests.

One comment that I heard come up a few times is that this kind of feedback is quite long, and it doesn’t seem very efficient. No… but also yes. Let’s explore both sides.

No, this style of feedback is actually efficient, because the length here is a byproduct of clarity, and spending time giving this kind of feedback can provide exactly enough information for a good fix. Also, if we zoom out, it can reduce future back-and-forth conversations and misunderstandings, improving the overall efficiency and effectiveness of collaboration beyond the single comment. Imagine that in the example above the feedback had instead been just, “Let’s make sure that all screens have the same two forward and back buttons.” The designer receiving this feedback wouldn’t have much to go by, so they might just apply the change. In later iterations, the interface might change or new features might be introduced, and maybe that change wouldn’t make sense anymore. Without the why, the designer might imagine that the change was about consistency… but what if it wasn’t? There could now be an underlying concern that changing the buttons would be perceived as a regression.

Yes, this style of feedback is not always efficient because the points in some comments don’t always need to be exhaustive, sometimes because certain changes may be obvious (“The font used doesn’t follow our guidelines”) and sometimes because the team may have a lot of internal knowledge such that some of the whys may be implied.

So the equation above isn’t meant to suggest a strict template for feedback but a mnemonic to reflect and improve the practice. Even after years of active work on my critiques, I still from time to time go back to this formula and reflect on whether what I just wrote is effective.

The tone

Well-grounded content is the foundation of feedback, but that’s not really enough. The soft skills of the person who’s providing the critique can multiply the likelihood that the feedback will be well received and understood. Tone alone can make the difference between content that’s rejected or welcomed, and it’s been demonstrated that only positive feedback creates sustained change in people.

Since our goal is to be understood and to have a positive working environment, tone is essential to work on. Over the years, I’ve tried to summarize the required soft skills in a formula that mirrors the one for content: the receptivity equation.

Timing + Attitude + Form = Respectful feedback

Respectful feedback comes across as grounded, solid, and constructive. It’s the kind of feedback that, whether it’s positive or negative, is perceived as useful and fair.

Timing refers to when the feedback happens. To-the-point feedback doesn’t have much hope of being well received if it’s given at the wrong time. Questioning the entire high-level information architecture of a new feature when it’s about to ship might still be relevant if that questioning highlights a major blocker that nobody saw, but it’s way more likely that those concerns will have to wait for a later rework. So in general, attune your feedback to the stage of the project. Early iteration? Late iteration? Polishing work in progress? These all have different needs. The right timing will make it more likely that your feedback will be well received.

Attitude is the equivalent of intent, and in the context of person-to-person feedback, it can be referred to as radical candor. That means checking before we write to see whether what we have in mind will truly help the person and make the project better overall. This might be a hard reflection at times because maybe we don’t want to admit that we don’t really appreciate that person. Hopefully that’s not the case, but that can happen, and that’s okay. Acknowledging and owning that can help you make up for that: how would I write if I really cared about them? How can I avoid being passive aggressive? How can I be more constructive?

Form is especially relevant in diverse and cross-cultural work environments, because having great content, perfect timing, and the right attitude might not come across if the way that we write creates misunderstandings. There might be many reasons for this: sometimes certain words might trigger specific reactions; sometimes nonnative speakers might not understand all the nuances of some sentences; sometimes our brains might just be different and we might perceive the world differently—neurodiversity must be taken into consideration. Whatever the reason, it’s important to review not just what we write but how.

A few years back, I was asking for some feedback on how I give feedback. I received some good advice but also a comment that surprised me. They pointed out that when I wrote “Oh, […],” I made them feel stupid. That wasn’t my intent! I felt really bad, and I realized that I had been providing feedback to them for months—and every time, I might have made them feel stupid. I was horrified… but also thankful. I made a quick fix: I added “oh” to my list of replaced words (your choice between macOS’s text replacement, aText, TextExpander, or others) so that when I typed “oh,” it was instantly deleted.
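
If your tools don’t offer text replacement, the same guardrail can be scripted. Here’s a minimal sketch in TypeScript, assuming a hypothetical list of trigger words that you maintain yourself—it scrubs them from a draft before you post it:

```ts
// Hypothetical list of words I've learned can land badly; adjust to taste.
const triggerWords = ["oh"];

// Remove each trigger word (plus any trailing comma and spaces) from a draft.
function scrubDraft(draft: string): string {
  let result = draft;
  for (const word of triggerWords) {
    const pattern = new RegExp(`\\b${word}\\b,?\\s*`, "gi");
    result = result.replace(pattern, "");
  }
  return result;
}

console.log(scrubDraft("Oh, I think this button should move left."));
// -> "I think this button should move left."
```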

Something to highlight because it’s quite frequent—especially in teams that have a strong group spirit—is that people tend to beat around the bush. It’s important to remember here that a positive attitude doesn’t mean going light on the feedback—it just means that even when you provide hard, difficult, or challenging feedback, you do so in a way that’s respectful and constructive. The nicest thing that you can do for someone is to help them grow.

We have a great advantage in giving feedback in written form: it can be reviewed by another person who isn’t directly involved, which can help to reduce or remove any bias that might be there. I’ve found that the best, most insightful moments for me have happened when I’ve shared a comment and asked someone who I highly trusted, “How does this sound?,” “How can I do it better?,” and even “How would you have written it?”—and I’ve learned a lot by seeing the two versions side by side.

The format

Asynchronous feedback also has a major inherent advantage: we can take more time to refine what we’ve written to make sure that it fulfills two main goals: the clarity of communication and the actionability of the suggestions.

Clarity + Actionability

Let’s imagine that someone shared a design iteration for a project. You are reviewing it and leaving a comment. There are many ways to do this, and of course context matters, but let’s try to think about some elements that may be useful to consider.

In terms of clarity, start by grounding the critique that you’re about to give by providing context. Specifically, this means describing where you’re coming from: do you have a deep knowledge of the project, or is this the first time that you’re seeing it? Are you coming from a high-level perspective, or are you figuring out the details? Are there regressions? Which user’s perspective are you taking when providing your feedback? Is the design iteration at a point where it would be okay to ship this, or are there major things that need to be addressed first?

Providing context is helpful even if you’re sharing feedback within a team that already has some information on the project. And context is absolutely essential when giving cross-team feedback. If I were to review a design that might be indirectly related to my work, and if I had no knowledge about how the project arrived at that point, I would say so, highlighting my take as external.

We often focus on the negatives, trying to outline all the things that could be done better. That’s of course important, but it’s just as important—if not more so—to focus on the positives, especially if you saw progress from the previous iteration. This might seem superfluous, but it’s important to keep in mind that design is a discipline where there are hundreds of possible solutions for every problem. So pointing out that the design solution that was chosen is good and explaining why it’s good has two major benefits: it confirms that the approach taken was solid, and it helps to ground your negative feedback. In the longer term, sharing positive feedback can help prevent regressions on things that are going well because those things will have been highlighted as important. As a bonus, positive feedback can also help reduce impostor syndrome.

There’s one powerful approach that combines both context and a focus on the positives: frame how the design is better than the status quo (compared to a previous iteration, competitors, or benchmarks) and why, and then on that foundation, you can add what could be improved. This is powerful because there’s a big difference between a critique that’s for a design that’s already in good shape and a critique that’s for a design that isn’t quite there yet.

Another way that you can improve your feedback is to depersonalize the feedback: the comments should always be about the work, never about the person who made it. It’s “This button isn’t well aligned” versus “You haven’t aligned this button well.” This is very easy to change in your writing by reviewing it just before sending.

In terms of actionability, one of the best approaches to help the designer who’s reading through your feedback is to split it into bullet points or paragraphs, which are easier to review and analyze one by one. For longer pieces of feedback, you might also consider splitting it into sections or even across multiple comments. Of course, adding screenshots or signifying markers of the specific part of the interface you’re referring to can also be especially useful.

One approach that I’ve personally used effectively in some contexts is to enhance the bullet points with four markers using emojis. A red square 🟥 marks something that I consider blocking; a yellow diamond 🔶 marks something where I could be convinced otherwise, but that seems to me like it should be changed; and a green circle 🟢 marks a detailed, positive confirmation. I also use a blue spiral 🌀 for something that I’m not sure about, an exploration, an open alternative, or just a note. But I’d use this approach only on teams where I’ve already established a good level of trust, because if I had to deliver a lot of red squares, the impact could be quite demoralizing, and I’d reframe how I’d communicate that a bit.

Let’s see how this would work by reusing the example that we used earlier as the first bullet point in this list:

  • 🔶 Navigation—When I see these two buttons, I expect one to go forward and one to go back. But this is the only screen where this happens, as before we just used a single button and an “×” to close. This seems to be breaking the consistency in the flow. Let’s make sure that all screens have the same two forward and back buttons so that users don’t get confused.
  • 🟢 Overall—I think the page is solid, and this is good enough to be our release candidate for a version 1.0.
  • 🟢 Metrics—Good improvement in the buttons on the metrics area; the improved contrast and new focus style make them more accessible.
  • 🟥 Button Style—Using the green accent in this context creates the impression that it’s a positive action because green is usually perceived as a confirmation color. Do we need to explore a different color?
  • 🔶 Tiles—Given the number of items on the page, and the overall page hierarchy, it seems to me that the tiles shouldn’t be using the Subtitle 1 style but the Subtitle 2 style. This will keep the visual hierarchy more consistent.
  • 🌀 Background—Using a light texture works well, but I wonder whether it adds too much noise in this kind of page. What is the thinking in using that?
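
If you track feedback in a tool or script of your own, this marker convention is easy to encode. The following is a hypothetical sketch—the type and function names are mine, not an established API:

```ts
// A hypothetical encoding of the four markers; the severity names are mine.
type Severity = "blocking" | "should-change" | "positive" | "open-note";

const markers: Record<Severity, string> = {
  "blocking": "🟥",
  "should-change": "🔶",
  "positive": "🟢",
  "open-note": "🌀",
};

interface FeedbackItem {
  severity: Severity;
  topic: string;   // e.g., "Navigation"
  comment: string; // observation + impact + question (or request)
}

// Render an item the way the bullet list above is formatted.
function formatItem(item: FeedbackItem): string {
  return `${markers[item.severity]} ${item.topic}—${item.comment}`;
}

console.log(formatItem({
  severity: "should-change",
  topic: "Tiles",
  comment: "Subtitle 1 seems too heavy here; Subtitle 2 would keep the hierarchy consistent.",
}));
```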

What about giving feedback directly in Figma or another design tool that allows in-place feedback? In general, I find these difficult to use because they hide discussions and they’re harder to track, but in the right context, they can be very effective. Just make sure that each of the comments is separate so that it’s easier to match each discussion to a single task, similar to the idea of splitting mentioned above.

One final note: say the obvious. Sometimes we might feel that something is obviously good or obviously wrong, and so we don’t say it. Or sometimes we might have a doubt that we don’t express because the question might sound stupid. Say it—that’s okay. You might have to reword it a little bit to make the reader feel more comfortable, but don’t hold it back. Good feedback is transparent, even when it may be obvious.

There’s another advantage of asynchronous feedback: written feedback automatically tracks decisions. Especially in large projects, “Why did we do this?” could be a question that pops up from time to time, and there’s nothing better than open, transparent discussions that can be reviewed at any time. For this reason, I recommend using software that saves these discussions, without hiding them once they are resolved. 

Content, tone, and format. Each one of these subjects provides a useful model, but working to improve eight areas—observation, impact, question, timing, attitude, form, clarity, and actionability—is a lot of work to take on all at once. One effective approach is to tackle them one by one: first identify the area where you’re weakest (either from your own perspective or from feedback from others) and start there. Then the second, then the third, and so on. At first you’ll have to put in extra time for every piece of feedback that you give, but after a while, it’ll become second nature, and your impact on the work will multiply.

Thanks to Brie Anne Demkiw and Mike Shelton for reviewing the first draft of this article.

That’s Not My Burnout

Are you like me, reading about people fading away as they burn out, and feeling unable to relate? Do you feel like your feelings are invisible to the world because you’re experiencing burnout differently? When burnout starts to push down on us, our core comes through more. Beautiful, peaceful souls get quieter and fade into that distant and distracted burnout we’ve all read about. But some of us, those with fires always burning on the edges of our core, get hotter. In my heart I am fire. When I face burnout I double down, triple down, burning hotter and hotter to try to best the challenge. I don’t fade—I am engulfed in a zealous burnout.

So what on earth is a zealous burnout?

Imagine a woman determined to do it all. She has two amazing children whom she, along with her husband who is also working remotely, is homeschooling during a pandemic. She has a demanding client load at work—all of whom she loves. She gets up early to get some movement in (or often catch up on work), does dinner prep as the kids are eating breakfast, and gets to work while positioning herself near “fourth grade” to listen in as she juggles clients, tasks, and budgets. Sound like a lot? Even with a supportive team both at home and at work, it is. 

Sounds like this woman has too much on her plate and needs self-care. But no, she doesn’t have time for that. In fact, she starts to feel like she’s dropping balls. Not accomplishing enough. There’s not enough of her to be here and there; she is trying to divide her mind in two all the time, all day, every day. She starts to doubt herself. And as those feelings creep in more and more, her internal narrative becomes more and more critical.

Suddenly she KNOWS what she needs to do! She should DO MORE. 

This is a hard and dangerous cycle. Know why? Because once she doesn’t finish that new goal, that narrative will get worse. Suddenly she’s failing. She isn’t doing enough. SHE is not enough. She might fail, she might fail her family...so she’ll find more she should do. She doesn’t sleep as much, move as much, all in the efforts to do more. Caught in this cycle of trying to prove herself to herself, never reaching any goal. Never feeling “enough.” 

So, yeah, that’s what zealous burnout looks like for me. It doesn’t happen overnight in some grand gesture but instead slowly builds over weeks and months. My burnout process looks like speeding up, not losing focus. I speed up and up and up...and then I just stop.

I am the one who could

It’s funny the things that shape us. Through the lens of childhood, I viewed the fears, struggles, and sacrifices of someone who had to make it all work without having enough. I was lucky that my mother was so resourceful and my father supportive; I never went without and even got an extra here or there. 

Growing up, I did not feel shame when my mother paid with food stamps; in fact, I’d have likely taken on any debate on the topic, verbally eviscerating anyone who dared to criticize the disabled woman trying to make sure all our needs were met with so little. As a child, I watched the way the fear of not making those ends meet impacted people I love. As the non-disabled person in my home, I would take on many of the physical tasks because I was “the one who could” make our lives a little easier. I learned early to associate fears or uncertainty with putting more of myself into it—I am the one who can. I learned early that when something frightens me, I can double down and work harder to make it better. I can own the challenge. When people have seen this in me as an adult, I’ve been told I seem fearless, but make no mistake, I’m not. If I seem fearless, it’s because this behavior was forged from other people’s fears. 

And here I am, more than 30 years later, still feeling the urge to mindlessly push myself forward when faced with overwhelming tasks ahead of me, assuming that I am the one who can and therefore should. I find myself driven to prove that I can make things happen if I work longer hours, take on more responsibility, and do more.

I do not see people who struggle financially as failures, because I have seen how strong that tide can be—it pulls you along. I truly get that I have been privileged to be able to avoid many of the challenges that were present in my youth. That said, I am still “the one who can” who feels she should, so if I were faced with not having enough to make ends meet for my own family, I would see myself as having failed. Though I am supported and educated, most of this is due to good fortune. I will, however, allow myself the arrogance of saying that I have been careful with my choices to encourage that luck. My identity stems from the idea that I am “the one who can,” so I feel obligated to do the most. I can choose to stop, and with some quite literal cold water splashed in my face, I’ve made that choice before. But choosing to stop is not my go-to; I move forward, driven by a fear that is so much a part of me that I barely notice it’s there until I’m feeling utterly worn away.

So why all the history? You see, burnout is a fickle thing. I have heard and read a lot about burnout over the years. Burnout is real. Especially now, with COVID, many of us are balancing more than we ever have before—all at once! It’s hard, and the procrastinating, the avoidance, the shutting down impacts so many amazing professionals. There are important articles that relate to what I imagine must be the majority of people out there, but not me. That’s not what my burnout looks like.

The dangerous invisibility of zealous burnout

A lot of work environments see the extra hours, extra effort, and overall focused commitment as an asset (and sometimes that’s all it is). They see someone trying to rise to challenges, not someone stuck in their fear. Many well-meaning organizations have safeguards in place to protect their teams from burnout. But in cases like this, those alarms are not always tripped, and then when the inevitable stop comes, some members of the organization feel surprised and disappointed. And sometimes maybe even betrayed. 

Parents—more so mothers, statistically speaking—are praised as being so on top of it all when they can work, be involved in the after-school activities, practice self-care in the form of diet and exercise, and still meet friends for coffee or wine. During COVID many of us have binged countless streaming episodes showing how it’s so hard for the female protagonist, but she is strong and funny and can do it. It’s a “very special episode” when she breaks down, cries in the bathroom, woefully admits she needs help, and just stops for a bit. Truth is, countless people are hiding their tears or are doom-scrolling to escape. We know that the media is a lie to amuse us, but often the perception that it’s what we should strive for has penetrated much of society.

Women and burnout

I love men. And though I don’t love every man (heads up, I don’t love every woman or nonbinary person either), I think there is a beautiful spectrum of individuals who represent that particular binary gender. 

That said, women are still more often at risk of burnout than their male counterparts, especially in these COVID stressed times. Mothers in the workplace feel the pressure to do all the “mom” things while giving 110%. Mothers not in the workplace feel they need to do more to “justify” their lack of traditional employment. Women who are not mothers often feel the need to do even more because they don’t have that extra pressure at home. It’s vicious and systemic and so a part of our culture that we’re often not even aware of the enormity of the pressures we put on ourselves and each other. 

And there are prices beyond happiness too. Harvard Health Publishing released a study a decade ago that “uncovered strong links between women’s job stress and cardiovascular disease.” The CDC noted, “Heart disease is the leading cause of death for women in the United States, killing 299,578 women in 2017—or about 1 in every 5 female deaths.” 

This relationship between work stress and health, from what I have read, is more dangerous for women than it is for their non-female counterparts.

But what if your burnout isn’t like that either?

That might not be you either. After all, each of us is so different, and how we respond to stressors is too. It’s part of what makes us human. Don’t stress over what burnout looks like; just learn to recognize it in yourself. Here are a few questions I sometimes ask friends if I am concerned about them.

Are you happy? This simple question should be the first thing you ask yourself. Chances are, even if you’re burning out doing all the things you love, as you approach burnout you’ll just stop taking as much joy from it all.

Do you feel empowered to say no? I have observed in myself and others that when someone is burning out, they no longer feel they can say no to things. Even those who don’t “speed up” feel pressure to say yes to not disappoint the people around them.

What are three things you’ve done for yourself? Another observance is that we all tend to stop doing things for ourselves. Anything from skipping showers and eating poorly to avoiding talking to friends. These can be red flags. 

Are you making excuses? Many of us try to disregard feelings of burnout. Over and over I have heard, “It’s just crunch time,” “As soon as I do this one thing, it will all be better,” and “Well I should be able to handle this, so I’ll figure it out.” And it might really be crunch time, a single goal, and/or a skill set you need to learn. That happens—life happens. BUT if this doesn’t stop, be honest with yourself. If you’ve worked more 50-hour weeks since January than not, maybe it’s not crunch time—maybe it’s a bad situation that you’re burning out from.

Do you have a plan to stop feeling this way? If something is truly temporary and you do need to just push through, then it has an exit route with a defined end.

Take the time to listen to yourself as you would a friend. Be honest, allow yourself to be uncomfortable, and break the thought cycles that prevent you from healing. 

So now what?

What I just described is a different path to burnout, but it’s still burnout. There are well-established approaches to working through burnout:

  • Get enough sleep.
  • Eat healthy.
  • Work out.
  • Get outside.
  • Take a break.
  • Overall, practice self-care.

Those are hard for me because they feel like more tasks. If I’m in the burnout cycle, doing any of the above for me feels like a waste. The narrative is that if I’m already failing, why would I take care of myself when I’m dropping all those other balls? People need me, right? 

If you’re deep in the cycle, your inner voice might be pretty awful by now. If you need to, tell yourself you need to take care of the person your people depend on. If your roles are pushing you toward burnout, use them to help make healing easier by justifying the time spent working on you. 

To help remind myself of the airline attendant message about putting the mask on yourself first, I have come up with a few things that I do when I start feeling myself going into a zealous burnout.

Cook an elaborate meal for someone! 

OK, I am a “food-focused” individual so cooking for someone is always my go-to. There are countless tales in my home of someone walking into the kitchen and turning right around and walking out when they noticed I was “chopping angrily.” But it’s more than that, and you should give it a try. Seriously. It’s the perfect go-to if you don’t feel worthy of taking time for yourself—do it for someone else. Most of us work in a digital world, so cooking can fill all of your senses and force you to be in the moment with all the ways you perceive the world. It can break you out of your head and help you gain a better perspective. In my house, I’ve been known to pick a place on the map and cook food that comes from wherever that is (thank you, Pinterest). I love cooking Indian food, as the smells are warm, the bread needs just enough kneading to keep my hands busy, and the process takes real attention for me because it’s not what I was brought up making. And in the end, we all win!

Vent like a foul-mouthed fool

Be careful with this one! 

I have been making an effort to practice more gratitude over the past few years, and I recognize the true benefits of that. That said, sometimes you just gotta let it all out—even the ugly. Hell, I’m a big fan of not sugarcoating our lives, and that sometimes means that to get past the big pile of poop, you’re gonna wanna complain about it a bit. 

When that is what’s needed, turn to a trusted friend and allow yourself some pure verbal diarrhea, saying all the things that are bothering you. You need to trust this friend not to judge, to see your pain, and, most importantly, to tell you to remove your cranium from your own rectal cavity. Seriously, it’s about getting a reality check here! One of the things I admire the most about my husband (though often after the fact) is his ability to break things down to their simplest. “We’re spending our lives together, of course you’re going to disappoint me from time to time, so get over it” has been his way of speaking his dedication, love, and acceptance of me—and I could not be more grateful. It also, of course, has meant that I needed to remove my head from that rectal cavity. So, again, usually those moments are appreciated in hindsight.

Pick up a book! 

There are many books out there that aren’t so much self-help as they are people just like you sharing their stories and how they’ve come to find greater balance. Maybe you’ll find something that speaks to you. Titles that have stood out to me include:

  • Thrive by Arianna Huffington
  • Tools of Titans by Tim Ferriss
  • Girl, Stop Apologizing by Rachel Hollis
  • Dare to Lead by Brené Brown

Or, another tactic I love to employ is to read or listen to a book that has NOTHING to do with my work-life balance. I’ve read the following books and found they helped balance me out because my mind was pondering their interesting topics instead of running in circles:

  • The Drunken Botanist by Amy Stewart
  • Superlife by Darin Olien
  • A Brief History of Everyone Who Ever Lived by Adam Rutherford
  • Gaia’s Garden by Toby Hemenway 

If you’re not into reading, pick up a topic on YouTube or choose a podcast to subscribe to. I’ve watched countless permaculture and gardening topics in addition to how to raise chickens and ducks. For the record, I do not have a particularly large food garden, nor do I own livestock of any kind...yet. I just find the topic interesting, and it has nothing to do with any aspect of my life that needs anything from me.

Forgive yourself 

You are never going to be perfect—hell, it would be boring if you were. It’s OK to be broken and flawed. It’s human to be tired and sad and worried. It’s OK to not do it all. It’s scary to be imperfect, but you cannot be brave if nothing is scary.

This last one is the most important: allow yourself permission to NOT do it all. You never promised to be everything to everyone at all times. We are more powerful than the fears that drive us. 

This is hard. It is hard for me. It’s what’s driven me to write this—that it’s OK to stop. It’s OK to end an unhealthy habit, even one that might benefit those around you. You can still be successful in life.

I recently read that we are all writing our eulogy in how we live. Knowing that your professional accomplishments won’t be mentioned in that speech, what will yours say? What do you want it to say? 

Look, I get that none of these ideas will “fix it,” and that’s not their purpose. None of us are in control of our surroundings, only how we respond to them. These suggestions are to help stop the spiral effect so that you are empowered to address the underlying issues and choose your response. They are things that work for me most of the time. Maybe they’ll work for you.

Does this sound familiar? 

If this sounds familiar, it’s not just you. Don’t let your negative self-talk tell you that you “even burn out wrong.” It’s not wrong. Even if rooted in fear like my own drivers, I believe that this need to do more comes from a place of love, determination, motivation, and other wonderful attributes that make you the amazing person you are. We’re going to be OK, ya know. The lives that unfold before us might never look like that story in our head—that idea of “perfect” or “done” we’re looking for, but that’s OK. Really, when we stop and look around, usually the only eyes that judge us are in the mirror. 

Do you remember that Winnie the Pooh sketch that had Pooh eat so much at Rabbit’s house that his buttocks couldn’t fit through the door? Well, I already associate a lot with Rabbit, so it came as no surprise when he abruptly declared that this was unacceptable. But do you recall what happened next? He put a shelf across poor Pooh’s ankles and decorations on his back, and made the best of the big butt in his kitchen. 

At the end of the day we are resourceful and know that we are able to push ourselves if we need to—even when we are tired to our core or have a big butt of fluff ‘n’ stuff in our room. None of us has to be afraid, as we can manage any obstacle put in front of us. And maybe that means we will need to redefine success to allow space for being uncomfortably human, but that doesn’t really sound so bad either. 

So, wherever you are right now, please breathe. Do what you need to do to get out of your head. Forgive and take care.

Beware the Cut ‘n’ Paste Persona

This Person Does Not Exist is a website that generates human faces with a machine learning algorithm. It takes real portraits and recombines them into fake human faces. We recently scrolled past a LinkedIn post stating that this website could be useful “if you are developing a persona and looking for a photo.” 

We agree: the computer-generated faces could be a great match for personas—but not for the reason you might think. Ironically, the website highlights the core issue of this very common design method: the person(a) does not exist. Like the pictures, personas are artificially made. Information is taken out of natural context and recombined into an isolated snapshot that’s detached from reality. 

But strangely enough, designers use personas to inspire their design for the real world. 

Personas: A step back

Most designers have created, used, or come across personas at least once in their career. In their article “Personas - A Simple Introduction,” the Interaction Design Foundation defines personas as “fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand.” In their most complete expression, personas typically consist of a name, profile picture, quotes, demographics, goals, needs, behavior in relation to a certain service/product, emotions, and motivations (for example, see Creative Companion’s Persona Core Poster). The purpose of personas, as stated by design agency Designit, is “to make the research relatable, [and] easy to communicate, digest, reference, and apply to product and service development.”

The decontextualization of personas

Personas are popular because they make “dry” research data more relatable, more human. However, this method constrains the researcher’s data analysis in such a way that the investigated users are removed from their unique contexts. As a result, personas don’t portray key factors that make you understand their decision-making process or allow you to relate to users’ thoughts and behavior; they lack stories. You understand what the persona did, but you don’t have the background to understand why. You end up with representations of users that are actually less human.

This “decontextualization” we see in personas happens in four ways, which we’ll explain below. 

Personas assume people are static 

Although many companies still try to box in their employees and customers with outdated personality tests (referring to you, Myers-Briggs), here’s a painfully obvious truth: people are not a fixed set of features. You act, think, and feel differently according to the situations you experience. You appear different to different people; you might act friendly to some, rough to others. And you change your mind all the time about decisions you’ve taken. 

Modern psychologists agree that while people generally behave according to certain patterns, it’s actually a combination of background and environment that determines how people act and make decisions. The context—the environment, the influence of other people, your mood, the entire history that led up to a situation—determines the kind of person you are in each specific moment.

In their attempt to simplify reality, personas do not take this variability into account; they present a user as a fixed set of features. Like personality tests, personas snatch people away from real life. Even worse, people are reduced to a label and categorized as “that kind of person” with no means to exercise their innate flexibility. This practice reinforces stereotypes, lowers diversity, and doesn’t reflect reality. 

Personas focus on individuals, not the environment

In the real world, you’re designing for a context, not for an individual. Each person lives in a family, a community, an ecosystem, where there are environmental, political, and social factors you need to consider. A design is never meant for a single user. Rather, you design for one or more particular contexts in which many people might use that product. Personas, however, show the user alone rather than describe how the user relates to the environment. 

Would you always make the same decision over and over again? Maybe you’re a committed vegan but still decide to buy some meat when your relatives are coming over. As they depend on different situations and variables, your decisions—and behavior, opinions, and statements—are not absolute but highly contextual. The persona that “represents” you wouldn’t take into account this dependency, because it doesn’t specify the premises of your decisions. It doesn’t provide a justification of why you act the way you do. Personas enact the well-known bias called fundamental attribution error: explaining others’ behavior too much by their personality and too little by the situation.

As mentioned by the Interaction Design Foundation, personas are usually placed in a scenario that’s a “specific context with a problem they want to or have to solve”—does that mean context actually is considered? Unfortunately, what often happens is that you take a fictional character and based on that fiction determine how this character might deal with a certain situation. This is made worse by the fact that you haven’t even fully investigated and understood the current context of the people your persona seeks to represent; so how could you possibly understand how they would act in new situations? 

Personas are meaningless averages

As mentioned in Shlomo Goltz’s introductory article on Smashing Magazine, “a persona is depicted as a specific person but is not a real individual; rather, it is synthesized from observations of many people.” A well-known critique of this aspect of personas is that the average person does not exist, as in the famous example of the US Air Force designing planes based on the averages of 140 of its pilots’ physical dimensions—and not a single pilot actually fitting within that average seat.

The same limitation applies to mental aspects of people. Have you ever heard a famous person say, “They took what I said out of context! They used my words, but I didn’t mean it like that.” The celebrity’s statement was reported literally, but the reporter failed to explain the context around the statement and didn’t describe the non-verbal expressions. As a result, the intended meaning was lost. You do the same when you create personas: you collect somebody’s statement (or goal, or need, or emotion), of which the meaning can only be understood if you provide its own specific context, yet report it as an isolated finding. 

But personas go a step further, extracting a decontextualized finding and joining it with another decontextualized finding from somebody else. The resulting set of findings often does not make sense: it’s unclear, or even contradictory, because it lacks the underlying reasons for why and how each finding arose. It lacks meaning. And the persona doesn’t give you the full background of the person(s) to uncover this meaning: you would need to dive into the raw data for each single persona item to find it. What, then, is the usefulness of the persona?

Composite image of a man composed of many different photos

The relatability of personas is deceiving

To a certain extent, designers realize that a persona is a lifeless average. To overcome this, designers invent and add “relatable” details to personas to make them resemble real individuals. Nothing captures the absurdity of this better than a sentence by the Interaction Design Foundation: “Add a few fictional personal details to make the persona a realistic character.” In other words, you add non-realism in an attempt to create more realism. You deliberately obscure the fact that “John Doe” is an abstract representation of research findings; but wouldn’t it be much more responsible to emphasize that John is only an abstraction? If something is artificial, let’s present it as such.

It’s the finishing touch of a persona’s decontextualization: after having assumed that people’s personalities are fixed, dismissed the importance of their environment, and hidden meaning by joining isolated, non-generalizable findings, designers invent new context to create (their own) meaning. In doing so, as with everything they create, they introduce a host of biases. As phrased by Designit, as designers we can “contextualize [the persona] based on our reality and experience. We create connections that are familiar to us.” This practice reinforces stereotypes, doesn’t reflect real-world diversity, and gets further away from people’s actual reality with every detail added. 

To do good design research, we should report the reality “as-is” and make it relatable for our audience, so everyone can use their own empathy and develop their own interpretation and emotional response.

Dynamic Selves: The alternative to personas

If we shouldn’t use personas, what should we do instead? 

Designit has proposed using Mindsets instead of personas. Each Mindset is a “spectrum of attitudes and emotional responses that different people have within the same context or life experience.” It challenges designers to not get fixated on a single user’s way of being. Unfortunately, while being a step in the right direction, this proposal doesn’t take into account that people are part of an environment that determines their personality, their behavior, and, yes, their mindset. Therefore, Mindsets are also not absolute but change in regard to the situation. The question remains, what determines a certain Mindset?

Another alternative comes from Margaret P., author of the article “Kill Your Personas,” who has argued for replacing personas with persona spectrums that consist of a range of user abilities. For example, a visual impairment could be permanent (blindness), temporary (recovery from eye surgery), or situational (screen glare). Persona spectrums are highly useful for more inclusive and context-based design, as they’re based on the understanding that the context is the pattern, not the personality. Their limitation, however, is that they have a very functional take on users that misses the relatability of a real person taken from within a spectrum. 

In developing an alternative to personas, we aim to transform the standard design process to be context-based. Contexts are generalizable and have patterns that we can identify, just like we tried to do previously with people. So how do we identify these patterns? How do we ensure truly context-based design? 

Understand real individuals in multiple contexts

Nothing is more relatable and inspiring than reality. Therefore, we have to understand real individuals in their multi-faceted contexts, and use this understanding to fuel our design. We refer to this approach as Dynamic Selves.

Let’s take a look at what the approach looks like, based on an example of how one of us applied it in a recent project that researched habits of Italians around energy consumption. We drafted a design research plan aimed at investigating people’s attitudes toward energy consumption and sustainable behavior, with a focus on smart thermostats. 

1. Choose the right sample

When we argue against personas, we’re often challenged with quotes such as “Where are you going to find a single person that encapsulates all the information from one of these advanced personas[?]” The answer is simple: you don’t have to. You don’t need to have information about many people for your insights to be deep and meaningful. 

In qualitative research, validity does not derive from quantity but from accurate sampling. You select the people that best represent the “population” you’re designing for. If this sample is chosen well, and you have understood the sampled people in sufficient depth, you’re able to infer how the rest of the population thinks and behaves. There’s no need to study seven Susans and five Yuriys; one of each will do. 

Similarly, you don’t need to understand Susan in fifteen different contexts. Once you’ve seen her in a couple of diverse situations, you’ve understood the scheme of Susan’s response to different contexts. Not Susan as an atomic being but Susan in relation to the surrounding environment: how she might act, feel, and think in different situations. 

Given that each person is representative of a part of the total population you’re researching, it becomes clear why each should be represented as an individual, as each already is an abstraction of a larger group of individuals in similar contexts. You don’t want abstractions of abstractions! These selected people need to be understood and shown in their full expression, remaining in their microcosmos—and if you want to identify patterns you can focus on identifying patterns in contexts.

Yet the question remains: how do you select a representative sample? First of all, you have to consider the target audience of the product or service you’re designing: it might be useful to look at the company’s goals and strategy, the current customer base, and/or a possible future target audience.

In our example project, we were designing an application for those who own a smart thermostat. In the future, everyone could have a smart thermostat in their house. Right now, though, only early adopters own one. To build a significant sample, we needed to understand why these early adopters had become early adopters. We therefore recruited by asking people why they had a smart thermostat and how they got it. There were those who had chosen to buy it, those who had been influenced by others to buy it, and those who had found it in their house. So we selected representatives of these three situations, from different age groups and geographical locations, with an equal balance of tech-savvy and non-tech-savvy participants.

2. Conduct your research

After having chosen and recruited your sample, conduct your research using ethnographic methodologies. This will make your qualitative data rich with anecdotes and examples. In our example project, given COVID-19 restrictions, we converted an in-house ethnographic research effort into remote family interviews, conducted from home and accompanied by diary studies.

To gain an in-depth understanding of attitudes and decision-making trade-offs, the research focus was not limited to the interviewee alone but deliberately included the whole family. Each interviewee would tell a story that would then become much more lively and precise with the corrections or additional details coming from wives, husbands, children, or sometimes even pets. We also focused on the relationships with other meaningful people (such as colleagues or distant family) and all the behaviors that resulted from those relationships. This wide research focus allowed us to shape a vivid mental image of dynamic situations with multiple actors. 

It’s essential that the scope of the research remains broad enough to be able to include all possible actors. Therefore, it normally works best to define broad research areas with macro questions. Interviews are best set up in a semi-structured way, where follow-up questions will dive into topics mentioned spontaneously by the interviewee. This open-minded “plan to be surprised” will yield the most insightful findings. When we asked one of our participants how his family regulated the house temperature, he replied, “My wife has not installed the thermostat’s app—she uses WhatsApp instead. If she wants to turn on the heater and she is not home, she will text me. I am her thermostat.”

3. Analysis: Create the Dynamic Selves

During the research analysis, you start representing each individual with multiple Dynamic Selves, each “Self” representing one of the contexts you have investigated. The core of each Dynamic Self is a quote, which comes supported by a photo and a few relevant demographics that illustrate the wider context. The research findings themselves will show which demographics are relevant to show. In our case, as our research focused on families and their lifestyle to understand their needs for thermal regulation, the important demographics were family type, number and nature of houses owned, economic status, and technological maturity. (We also included the individual’s name and age, but they’re optional—we included them to ease the stakeholders’ transition from personas and be able to connect multiple actions and contexts to the same person).

Three cards, each showing a different lifestyle photo, a quote that correlates to that dynamic self's attitude about technology, and some basic demographic info
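
For teams that store research artifacts digitally, a Dynamic Self card could also be modeled as a small data structure. This is a sketch based on the fields described above; the field names, optional fields, and example values are assumptions, not a prescribed schema:

```ts
// One "Self" per investigated context, per participant.
interface DynamicSelf {
  context: string;     // the situation this Self captures
  quote: string;       // the verbatim quote at the core of the card
  photo: string;       // path or URL to a contextual (or evocative) photo
  demographics: {      // only the demographics the findings showed to be relevant
    familyType?: string;
    housesOwned?: number;
    economicStatus?: string;
    technologicalMaturity?: string;
  };
  name?: string;       // optional: eases the transition from personas
  age?: number;        // optional: links multiple cards to one person
}

// Each participant is represented by multiple cards, one per context.
const participantCards: DynamicSelf[] = [
  {
    context: "regulating the house temperature remotely",
    quote: "(verbatim quote from the interview)",
    photo: "photos/participant-context-1.jpg",
    demographics: { familyType: "couple", technologicalMaturity: "early adopter" },
  },
];
```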

To capture exact quotes, interviews need to be video-recorded and notes need to be taken verbatim as much as possible. This is essential to the truthfulness of the several Selves of each participant. In the case of real-life ethnographic research, photos of the context and anonymized actors are essential to build realistic Selves. Ideally, these photos should come directly from field research, but an evocative and representative image will work, too, as long as it’s realistic and depicts meaningful actions that you associate with your participants. For example, one of our interviewees told us about his mountain home where he used to spend every weekend with his family. Therefore, we portrayed him hiking with his little daughter. 

At the end of the research analysis, we displayed all of the Selves’ “cards” on a single canvas, categorized by activities. Each card displayed a situation, represented by a quote and a unique photo. All participants had multiple cards about themselves.

A collection of many cards representing many dynamic self personas

4. Identify design opportunities

Once you have collected all main quotes from the interview transcripts and diaries, and laid them all down as Self cards, you will see patterns emerge. These patterns will highlight the opportunity areas for new product creation, new functionalities, and new services—for new design. 

In our example project, there was a particularly interesting insight around the concept of humidity. We realized that people don’t know what humidity is and why it is important to monitor it for health: an environment that’s too dry or too wet can cause respiratory problems or worsen existing ones. This highlighted a big opportunity for our client to educate users on this concept and become a health advisor.

Benefits of Dynamic Selves

When you use the Dynamic Selves approach in your research, you start to notice unique social relations, the peculiar situations real people face and the actions that follow, and the changing environments that surround them. In our thermostat project, we came to know one of the participants, Davide, as a boyfriend, dog-lover, and tech enthusiast.

Davide is an individual we might have once reduced to a persona called “tech enthusiast.” But we can have tech enthusiasts who have families or are single, who are rich or poor. Their motivations and priorities when deciding to purchase a new thermostat can be completely different depending on these frames.

Once you have understood Davide in multiple situations, and for each situation have understood in sufficient depth the underlying reasons for his behavior, you’re able to generalize how he would act in another situation. You can use your understanding of him to infer what he would think and do in the contexts (or scenarios) that you design for.

A comparison. On one side, three people are fused into one to create a persona; in the second, the three people exist as separate dynamic selves.

The Dynamic Selves approach aims to dismiss the conflicted dual purpose of personas—to summarize and empathize at the same time—by separating your research summary from the people you’re seeking to empathize with. This is important because our empathy for people is affected by scale: the bigger the group, the harder it is to feel empathy for others. We feel the strongest empathy for individuals we can personally relate to.  

If you take a real person as inspiration for your design, you no longer need to create an artificial character. No more inventing details to make the character more “realistic,” no more unnecessary additional bias. It’s simply how this person is in real life. In fact, in our experience, personas quickly become nothing more than a name in our priority guides and prototype screens, as we all know that these characters don’t really exist. 

Another powerful benefit of the Dynamic Selves approach is that it raises the stakes of your work: if you mess up your design, someone real, a person you and the team know and have met, is going to feel the consequences. It might stop you from taking shortcuts and will remind you to conduct daily checks on your designs.

And finally, real people in their specific contexts are a better basis for anecdotal storytelling and therefore are more effective in persuasion. Documentation of real research is essential in achieving this result. It adds weight and urgency behind your design arguments: “When I met Alessandra, the conditions of her workplace struck me. Noise, bad ergonomics, lack of light, you name it. If we go for this functionality, I’m afraid we’re going to add complexity to her life.”


Designit mentioned in their article on Mindsets that “design thinking tools offer a shortcut to deal with reality’s complexities, but this process of simplification can sometimes flatten out people’s lives into a few general characteristics.” Unfortunately, personas have been culprits in a crime of oversimplification. They are unsuited to represent the complex nature of our users’ decision-making processes and don’t account for the fact that humans are immersed in contexts. 

Design needs simplification but not generalization. You have to look at the research elements that stand out: the sentences that captured your attention, the images that struck you, the sounds that linger. Portray those, use them to describe the person in their multiple contexts. Both insights and people come with a context; they cannot be cut from that context because it would remove meaning. 

It’s high time for design to move away from fiction, and embrace reality—in its messy, surprising, and unquantifiable beauty—as our guide and inspiration.

Immersive Content Strategy

Beyond the severe toll of the coronavirus pandemic, perhaps no other disruption has transformed user experiences quite like how the tethers to our formerly web-biased era of content have frayed. We’re transitioning to a new world of remote work and digital content. We’re also experimenting with unprecedented content channels that, not too long ago, elicited chuckles at the watercooler, like voice interfaces, digital signage, augmented reality, and virtual reality.

Many factors are responsible. Perhaps it’s because we yearn for immersive spaces that temporarily resurrect the Before Times, or maybe it’s due to the boredom and tedium of our now-cemented stuck-at-home routines. But aural user experiences slinging voice content, and immersive user experiences unlocking new forms of interacting with formerly web-bound content, are no longer figments of science fiction. They’re fast becoming a reality in the here and now.

The idea of immersive experiences is all the rage these days, and content strategists and designers are now seriously examining this still-amorphous trend. Immersive experiences embrace concepts like geolocation, digital signage, and extended reality (XR). XR encompasses augmented reality (AR) and virtual reality (VR) as well as their fusion: mixed reality (MR). Sales of immersive equipment like gaming and VR headsets have skyrocketed during the pandemic, and content strategists are increasingly attuned to the kaleidoscope of devices and interfaces users now interact with on a daily basis to acquire information.

Immersive user experiences are becoming commonplace, and, more importantly, new tools and frameworks are emerging for designers and developers looking to get their hands dirty. But that doesn’t mean our content is ready for prime time in settings unbound from the web like physical spaces, digital signage, or extended reality. Recasting your fixed web content in more immersive ways will enable more than just newfangled user experiences; it’ll prepare you for flexibility in an unpredictable future as well.

Agnostic content for immersive experiences

These days, we interact with content through a slew of devices. It’s no longer the case that we navigate information on a single desktop computer screen. In my upcoming book Voice Content and Usability (A Book Apart, coming June 2021), I draw a distinction between what I call macrocontent—the unwieldy long-form copy plastered across browser viewports—and Anil Dash’s definition of microcontent: the kind of brisk, contextless bursts of content that we find nowadays on Apple Watches, Samsung TVs, and Amazon Alexas.

Today, content also has to be ready for contextless situations—not only in truncated form when we struggle to make out tiny text on our smartwatches or scroll through new television series on Roku but also in places it’s never ended up before. As the twenty-first century continues apace, our clients and our teams are beginning to come to terms with the fact that the way copy is consumed in just a few decades will bear no resemblance whatsoever to the prosaic browsers and even smartphones of today.

What do we mean by immersive content?

Immersive experiences are those that, according to Forrester, blur “the boundaries between the human, digital, physical, and virtual realms” to facilitate smarter, more interactive user experiences. But what do we mean by immersive content? I define immersive content as content that plays in the sandbox of physical and virtual space—copy and media that are situationally or locationally aware rather than rooted in a static, unmoving computer screen.

Whether a space is real or virtual, immersive content (or spatial content) will be a key way in which our customers and users deal with information in the coming years. Unlike voice content, which deals with time and sound, immersive content works with space and sight. Immersive content operates not along the axis of links and page changes but rather along situational changes, as the following figure illustrates.

In this illustration, each rectangle represents different displays that appear based on situational changes such as movement in space or adjustment of perspective that result in the delivery of different content from the previous context. One of these, such as the rightmost display, can be a web-enabled content display with links to other content presented in the same display. This illustration thus demonstrates two forms of navigation: traditional link navigation and immersive situational navigation.
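
To make the contrast between the two navigation forms concrete, here’s a minimal sketch of them as data types. The type names and triggers are illustrative only, not an established API:

```ts
// Traditional navigation: the user follows a link to new content.
type LinkNavigation = {
  kind: "link";
  href: string;
};

// Situational navigation: content changes because the user's situation
// changed—movement in space, a shift in perspective, and so on.
type SituationalNavigation = {
  kind: "situational";
  trigger: "movement" | "perspective" | "proximity";
};

type Navigation = LinkNavigation | SituationalNavigation;

// A display can respond to either form of navigation.
function nextDisplay(nav: Navigation): string {
  return nav.kind === "link"
    ? `Load the content at ${nav.href}`
    : `Re-render the content for the new ${nav.trigger} context`;
}

console.log(nextDisplay({ kind: "situational", trigger: "movement" }));
```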

Acknowledging the actual or imagined surroundings of where we are as human beings will have vast implications for content strategy, omnichannel marketing, usability testing, and accessibility. Before we dig deeper, let’s define a few clear categories of immersive content:

  • Digital signage content. Though it may seem a misnomer, digital signage is one of the most widespread examples of immersive content already in use today. For example, you may have seen it used to display a guide of stores at a mall or to aid wayfinding in an airport. While still largely bound to flat screens, it’s an example of content in space.
  • Locational content. Locational content involves copy that is delivered to a user on a personal device based on their current location in the world or within an identified physical space. Most often mediated through Bluetooth low-energy (BLE) beacon technology or GPS location services, it’s an example of content at a point in space.
  • Augmented reality content. Unlike locational content, which doesn’t usually adjust itself seamlessly based on how users move in real-world space, AR content is now common in museums and other environments—typically as overlays that are superimposed over actual physical surroundings and adjust dynamically according to the user’s position and perspective. It’s content projected into real-world space.
  • Virtual reality content. Like AR content, VR content is dependent on its imagined surroundings in terms of how it displays, but it’s part of a nonexistent space that is fully immersive, an example of content projected into virtual space.
  • Navigable content. Long a gimmicky playground for designers and developers interested in pushing the envelope, navigable content is copy that users can move across and sift through as if it were a physical space itself: true content as space.

The following illustration depicts these types of immersive content in their typical habitats.

Digital signage content typically appears to everyone within a space. Locational content is delivered via personal devices. AR is content projected into the real world through an overlay, while VR creates an immersive virtual environment. Finally, navigable content is content as the space itself.

Why auditing immersive content is important

Alongside conversational and voice content, immersive content is a compelling example of breaking content out of the limiting box where it has long lived: the browser viewport, the computer screen, and the 8.5”x11” or broadsheet borders of print media. For centuries, our written copy has been affixed to the staid standards of whatever bookbinders, newspaper printing presses, and screen manufacturers decided. Today, however, for the first time, we’re surmounting those arbitrary barriers and situating content in contexts that challenge all the assumptions we’ve made since the era of Gutenberg—and, arguably, since clay tablets, papyrus manuscripts, and ancient scrolls.

Today, it’s never been more pressing to implement an omnichannel content strategy that centers the reality our customers increasingly live in: a world in which information can end up on any device, even if it has no tether to a clickable or scrollable setting. One of the most important elements of such a future-proof content strategy is an omnichannel content audit that evaluates your content from a variety of standpoints so you can manage and plan it effectively. These audits generally consist of several steps:

  • Write a questionnaire. Each content item needs to be examined from the perspective of each channel through a series of channel-relevant questions, like whether content is legible or discoverable on every conduit through which it travels.
  • Settle on the criteria. No questionnaire is complete for a content audit without evaluation criteria that measure how the content performs and recommendation criteria that determine necessary steps to improve its efficacy.
  • Discuss with stakeholders. At the end of any content audit, it’s important to leaf through the results and any recommendations in a frank discussion with stakeholders, including content strategists, editors, designers, and others.

In my previous article for A List Apart, I shared the work we did on a conversational content audit for Ask GeorgiaGov, the first (but now decommissioned) Alexa skill for residents of the state of Georgia. Such a content audit is just one facet of the multidimensional omnichannel content strategy you’ll need to consider. Nonetheless, there are a few things all content audits share in terms of foundational evaluation criteria across all content delivery channels:

  • Content legibility. Is the content readable or easily consumable from a variety of vantage points and perspectives? In the case of immersive content, this can include examining verbosity tolerance (how long content can be before users zone out, a big factor in digital signage) and phantom references (like links and calls to action that make sense on the web but not on a VR headset).
  • Content discoverability. It’s no longer guaranteed in immersive content experiences that every piece of content can be accessed from other content items, and content loses almost all of its context when displayed unmoored from other content in digital signs or AR overlays. For discoverability’s sake, avoid relegating content to unreachable siloes, whether content is inaccessible due to physical conditions (like walls or other obstacles) or technical ones (like a finicky VR headset).

Like voice content, immersive content requires ample attention to the ways in which users approach and interact with content in physical and virtual spaces. And as I write in Voice Content and Usability, it’s also the case that cross-channel interactions can influence how we work with copy and media. After all, how often do subway and rail commuters glance up while scrolling through service advisories on their smartphones to consult a potentially more up-to-date alert on a digital sign?

Digital signage content: Content in space

Signage has long been a fixture of how we find our way through physical spaces, ever since the earliest roads crisscrossed civilizations. Today, digital signs are becoming ubiquitous across shopping centers, university campuses, and especially transit systems, with the New York City subway recently introducing countdown clocks that display service advisories on a ticker along the bottom of the screen, just below train arrival times.

Digital signs can deliver critical content at important times, such as during emergencies, without the limitations imposed by the static nature of analog signs. News tickers on digital signs, for instance, can stretch for however long they need to, though succinctness is still highly prized. But digital signage’s rich potential to deliver immersive content also presents challenges when it comes to content modeling and governance.

Are news items delivered to digital signs simply teaser or summary versions of full articles? Without a fully functional and configurable digital sign in your office, how will you preview them in context before they go live? To solve this problem for the New York City subway, the Metropolitan Transportation Authority (MTA) manages all digital signage content across all signs within a central Drupal content management system (CMS), which synthesizes data such as train arrival times from real-time feeds and transit messages administered in the CMS for arbitrary delivery to any platform across the network.

How to present content items in digital signs also poses problems. As the following figure illustrates, do you overtake the entire screen at the risk of obscuring other information, do you leave it in a ticker that may be ignored, or do you use both depending on the priority or urgency of the content you’re presenting?

On the left are examples of digital signage where informational messages obscure important data. On the right are examples of digital signage where informational messages are constricted to a small scrolling ticker at the bottom of the screen.

While some digital signs have the benefit of touch screens and entire digital kiosks, many are tasked with providing key information in as little space as possible, where users don’t have the luxury of manipulating the interface to customize the content they wish to view. The New York City subway makes a deliberate choice to let urgent alerts spill across the entire screen: in the interest of information that is relevant to all passengers (including those who need captions for loudspeaker announcements), it limits the sign’s usefulness for riders who simply need to know when the next train is arriving.

Auditing for digital signage content

Because digital signs value brevity and efficiency, digital signage content often isn’t the main focus of what’s displayed. Digital signs on the São Paulo metro, for instance, juggle service alerts, breaking news, and health advisories. For this reason, auditing digital signage content for legibility and discoverability is key to ensuring users can interact with it gracefully, regardless of how often it appears, how highly prioritized it is, or what it covers.

When it comes to legibility, ask yourself these questions and consider the digital sign content you’re authoring based on these concerns:

  • Font size and typography. Many digital signs use sans-serif typefaces, which are easier to read from a distance, and many also employ uppercase for all text, especially in tickers. Consider which typefaces advance rather than obscure legibility, even when the digital sign content overtakes the entire screen.
  • Angles and perspective. Is your digital sign content readily readable from various angles and various vantage points? Does the reflectivity of the screen impact your content’s legibility when standing just below the sign? How does your content look when it’s displayed to a user craning their neck and peering at it askew?
  • Color contrast and lighting. Digital signs are no longer just fixtures of subterranean worlds; they’re above-ground and in well-lit spaces too. Color contrast and lighting strongly influence how legible your digital sign content can be.

As for discoverability, digital signs present challenges of both physical discoverability (can the sign itself be easily found and consulted?) and content discoverability (how long does a reader have to stare at the sign before the content they need shows up?):

  • Physical discoverability. Are signs placed in prominent locations where users will come across them? The MTA was criticized for the poor placement of many of its digital countdown clocks in the New York City subway, something that can block a user from ever accessing content they need.
  • Content discoverability. Because digital signs can only display so much content at once, even if there’s a large amount of copy to deliver eventually, users of digital signs may need to wait too long for their desired content to appear, or the content they seek may be too deprioritized for it to show up while they’re looking at the sign.

Both legibility and discoverability of digital sign content require thorough approaches when authoring, designing, and implementing content for digital signs.

Usability and accessibility in digital signage content

In addition to audits, in any physical environment, immersive content on digital signs requires a careful and bespoke approach to consider not only how content will be consumed on the sign itself but also all the ways in which users move around and refer to digital signage as they consult it for information. After all, our content is no longer couched in a web page or recited by a screen reader, both objects we can control ourselves; instead, it’s flashed and displayed on flat screens and kiosks in physical spaces. 

Consider how the digital sign and the content it presents appear to people who use mobility aids such as wheelchairs or walkers. Is the surrounding physical environment accessible enough so that wheelchair users can easily read and discover the content they seek on a digital sign, which may be positioned too high for a seated reader? By the same token, can colorblind and dyslexic people read the chosen typeface in the color scheme it’s rendered in? Is there an aural equivalent of the content for Blind people navigating your digital signage, in close proximity to the sign itself, serving as synchronized captions?

Locational content: Content at a point in space

Unlike digital signage content, which is copy or media displayed in a space, locational (or geolocational) content is copy or media delivered to a device—usually a phone or watch—based on a point in space (if precise location is acquired through GPS location services) or a swath of space (typically driven by Bluetooth Low Energy beacons that have certain ranges). For smartphone and smartwatch users, GPS location services can often provide a relatively accurate sense of where a person is, while Bluetooth Low Energy (BLE) beacons can triangulate their position based on devices that have Bluetooth enabled.

Examples of locational content might include links to more detailed information online, coupons, and sales relevant to merchandise or objects near the person viewing them.

Though BLE beacons remain a fairly finicky and untested realm of spatial technology, they’ve quickly gained traction in large shopping centers and public spaces such as airports where users agree to receive content relevant to their current location, most often in the form of push notifications that whisk users away into a separate view with more comprehensive information. But because these tiny chunks of copy are often tightly contained and contextless, teams designing for locational content need to focus on how users interact with their devices as they move through physical spaces.
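To make this concrete, here’s a minimal sketch of GPS-driven locational content delivery using the browser’s Geolocation API. The endpoint URL and response shape are hypothetical stand-ins, and a BLE beacon implementation would look quite different:

navigator.geolocation.watchPosition(async ({ coords }) => {
  // Ask a (hypothetical) CMS endpoint for content near the user.
  const response = await fetch(
    `https://cms.example.com/nearby?lat=${coords.latitude}&lng=${coords.longitude}`
  );
  const items = await response.json();
  // Surface the returned items as notifications or in-app content here.
}, (error) => console.error(error), { enableHighAccuracy: true });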

Auditing for locational content

Fortunately, because locational content is often delivered to the same visual devices that we use on a regular basis—smartphones, smartwatches, and tablets—auditing for content legibility can embrace many of the same principles we employ to evaluate other content. For discoverability, some of the most important considerations include:

  • Locational discoverability. BLE beacons are notorious for their imprecision, though they continue to improve in quality. GPS location, too, can be an inaccurate measure of where someone is at any given time. The last thing you want your customers to experience is an incorrect triangulation of where they are, leading to embarrassing mistakes and bewilderment when unexpected content travels down the wire.
  • Proximity. Because of the relative lack of precision when it comes to BLE beacons and GPS location services, placing content items too close together in a coordinate map may trigger too many notifications or resource deliveries to a user, thus overwhelming them, or a certain content item may inadvertently supersede another because they’re spaced too closely together.

As push notifications and location sharing become more common, locational content is rapidly becoming an important way to funnel users toward somewhat longer-form content that might otherwise go unnoticed when a customer is in a brick-and-mortar store.

Usability and accessibility in locational content

Because locational content requires users to move around physical spaces and trigger triangulation, consider how different types of users will move and also whether unforeseen issues can arise. For example, researchers in Japan found that users who walk while staring at their phones are highly disruptive to the flow and movement of those around them. Is your locational content possibly creating a situation where users bump into others, or worse, get into accidents? For instance, writing copy that’s quick and to the point or preventing notifications from being prematurely dismissed could allow users to ignore their devices until they have time to safely glance at them.

Limited mobility and cognitive disabilities can place many disabled users of locational content at a deep disadvantage. While gamification may encourage users to seek as many items of locational content as possible in a given span of time for promotional purposes, consider whether it excludes wheelchair users or people who encounter obstacles when switching between contexts rapidly. There are good use cases for locational content, but what’s compelling for some users might be confounding for others.

AR and VR content: Content projected into space

Augmented reality, once the stuff of science fiction holograms and futuristic cityscapes, is becoming more available to the masses thanks to wearable AR devices, high-performing smartphones and tablets, and innovation in machine vision capabilities, though the utopian future of true “holographic” content remains as yet unrealized. Meanwhile, virtual reality has seen incredible growth over the pandemic as homebound users—by interacting with copy and media in fictional worlds—increasingly seek escapist ways to access content normally spread across flat screens.

While AR and VR content is still in its infancy, the vast majority is currently couched in overlays that are superimposed over real-world environments or objects and can be opaque (occupying some of a device’s field of vision) or semi-transparent (creating an eerie, shimmery film on which text or media is displayed). Thanks to advancements in machine vision, these content overlays can track the motion of perceived objects in the physical or virtual world, bamboozling us into thinking these overlays are traveling in our fields of vision just like the things we see around us do.

Formerly restricted to realms like museums, expensive video games, and gimmicky prototypes, AR and VR content is now becoming much more popular among companies that are interested in more immersive experiences capable of delivering content alongside objects in real-life brick-and-mortar environments, as well as virtual or imagined landscapes, like fully immersive brand experiences that transport customers to a pop-up store in their living room.

To demonstrate this, my former team at Acquia Labs built an experimental proof of concept that examines how VR content can be administered within a CMS and a pilot project for grocery stores that explores what can happen when product information is displayed as AR content next to consumer goods in supermarket aisles. The following illustration shows, in the context of this latter experiment, how a smartphone camera interacts with a machine vision service and a Drupal CMS to acquire information to render alongside the item.

A diagram depicting how someone might look at a physical object through their phone, and AR tools can connect to a CMS to download and display relevant information about the object virtually beside it.
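As a rough illustration of that flow, the client-side logic might look something like the following sketch. Every URL and field name here is a hypothetical stand-in, not the actual Acquia Labs implementation:

async function fetchContentForObject(videoFrameBlob) {
  // 1. Ask a machine vision service to identify the object in view.
  const visionResponse = await fetch("https://vision.example.com/identify", {
    method: "POST",
    body: videoFrameBlob,
  });
  const { label } = await visionResponse.json(); // e.g., "breakfast-cereal"

  // 2. Query the CMS for structured content associated with that label.
  const cmsResponse = await fetch(
    `https://cms.example.com/api/products?sku=${encodeURIComponent(label)}`
  );

  // 3. Hand the content to the AR layer to render beside the object.
  return cmsResponse.json();
}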

Auditing for AR and VR content

Because AR and VR content, unlike other forms of immersive content, fundamentally plays in the same sandbox as the real world (or an imaginary one), legibility and discoverability can become challenging. The potential risks for AR and VR content are in many regards a fusion of the problems found in both digital signage and locational content, encompassing both physical placement and visual perspective, especially when it comes to legibility:

  • Content visibility. Is the AR or VR overlay too transparent to comfortably read the copy or view the image contained therein, or is it so opaque that it obscures its surroundings? AR and VR content must coexist gracefully with its exterior, and the two must enhance rather than obfuscate each other. Does the way your content is delivered compromise a user’s feeling of immersion in the environment behind it?
  • Content perspective. Unless you’re limited to a smartphone or similar handheld device, many AR and VR overlays, especially in immersive headsets, don’t display content or media as an immobile rectangular box, as it defeats the purpose of the illusion and can be jarring to users as they adjust their field of vision, breaking them out of the fantasy you’re hoping to create. For this reason, your AR or VR experience must not only dictate how environments and objects are angled and lit but also how the content associated with them is perceived. Is your content readable from various angles and points in the AR view or VR world?

When it comes to discoverability of your AR and VR content, issues like accuracy in machine vision and triangulation of your user’s location and orientation become much more important:

  • Machine vision. Most relevantly for AR content, if your copy or media is predicated on machine vision that perceives an object by identifying it according to certain characteristics, how accurate is that prediction? Does some content go undiscovered because certain objects go undetected in your AR-enabled device?
  • Location accuracy. If your content relies on the user’s current location and orientation in relation to some point in space, as is common in both AR and VR content use cases, how accurately do devices dictate correct delivery at just the right time and place? Are the ranges within which content is accessible too limited, leading to flashes of content as you take a step to the left or right? Are there locations that simply can’t be reached, leading to forever-siloed copy or media?

Due to the intersection of technical considerations and design concerns, AR and VR content, like voice content and indeed other forms of immersive content, requires a concerted effort across multiple teams to ensure resources are delivered not just legibly but also discoverably.

Usability and accessibility in AR and VR content

Out of all the forms of immersive content we’ve covered so far, AR and VR content is possibly the medium that demands the most assiduously crafted solutions in accessibility testing and usability testing. Because AR and VR content, especially in headsets or wearable devices, requires motion through real or imagined space, its impact on accessibility cannot be overstated. Adding a third dimension—and arguably, a fourth: time—to our perception of content requires attention not only to how content is accessed but also all the other elements that comprise a fully immersive visual experience.

VR headsets commonly induce virtual reality motion sickness in many individuals. Poorly implemented transitions between states in quick succession, where content is visible, then invisible, then visible again, can lead to epileptic seizures if not built with the utmost care. Finally, users moving quickly through spaces may inadvertently trigger vertigo in themselves or even collide with hazardous objects, resulting in potentially serious injuries. There’s a reason we aren’t wearing headsets outside carefully secured environments.

Navigable content: Content as space

This is only the beginning of immersive content. Increasingly, we’re also toying with ideas that seemed harebrained even a few decades ago, like navigable content—copy and media that can be traversed as if the content itself were a navigable space. Imagine zooming in and out of tracts of text and stepping across glyphs like hopping between islands in a Super Mario game. Ambitious designers and developers are exploring this emerging concept of navigable content in exciting ways, both in and out of AR and VR. In many ways, truly navigable content is the endgame of how virtual reality presents information.

Imagining an encyclopedia that we can browse like the classic 1990s opening sequence of the BBC’s Eyewitness television episodes is no longer as far-fetched as we think. Consider, for instance, Robby Leonardi’s interactive résumé, which invites you to play a character as you learn about his career, or Bruno Simon’s ambitious portfolio, where you drive an animated truck around his website. For navigable content, the risks and rewards for user experience and accessibility remain largely unexplored, just like the hazy fringes of the infinite maps VR worlds make possible.


The story of immersive content is in its early stages. As newly emerging channels for content see greater adoption, requiring us to relay resources like text and media to never-before-seen destinations like digital signage, location-enabled devices, and AR and VR overlays, the demands on our content strategy and design approaches will become both fascinating and frustrating. As seemingly fantastical new interfaces continue to emerge over the horizon, we’ll need an omnichannel content strategy to guide our own journeys as creatives and to orient the voyages of our users into the immersive.

Content audits and effective content strategies aren’t just the domain of staid websites and boxy mobile or tablet interfaces—or even aurally rooted voice interfaces. They’re a key component of our increasingly digitized spaces, too, cornerstones of immersive experiences that beckon us to consume content where we are at any moment, unmoored from a workstation or a handheld. Because it lacks long-standing motifs of the web like context and clickable links, immersive content invites us to revisit our content with a fresh perspective. How will immersive content reinvent how we deliver information like the web did only a few decades ago, like voice has done in the past ten years?

Only the test of time, and the allure of immersion, will tell.

Do You Need to Localize Your Website?

Global markets give you access to new customers. All you need to do is inform potential buyers about your product or service. 

Your website is a good place to introduce your product or service outside your locale. Localizing your web content sounds like the right way to reach out to the global market. Localization will bridge the language barriers, or the wider scope of differing cultures. 

Before we move on further with the discussion, let’s focus on the definition of “localization.” 

What is localization?

According to the Cambridge Dictionary, localization (as a marketing term) is “the process of making a product or service more suitable for a particular country, area, etc.,” while translation is “something that is translated, or the process of translating something, from one language to another.” 

In practice, the difference can be a little blurred. While it’s true that localization includes both language and non-language aspects, most cultural adjustments in the localization process are done through the language. Hence, the two terms are often interchangeable. 

Good translators will not simply find an equivalent of a word in another language. They will actively research their materials and have an in-depth understanding of the languages they work in.

Depending on the situation, they may or may not convert measurement units and date formats. Technical guide books may need accurate unit conversion, but changing “Fahrenheit 451” to “Celsius 233” would be simply awkward. A good translator will suggest what to change and what to leave as it is. 

Some people call this conversion process “localization.” The truth is, unit conversion had become a part of translation long before the word “localization” was used to describe the process. 

When we talk about linguistic versus non-linguistic aspects of a medium, and view them as separate entities, localization and translation may look different. However, when we look at the whole process of translating the message, seeing both elements as translatable items, the terms are interchangeable. 

In this article, the terms “localization” and “translation” will be used interchangeably. We are going to discuss how to use a website as a communication tool to gain a new market in different cultures. 

Localization: who is it for?

Good localization is not cheap, so it would be wise to ask yourself several questions beforehand: 

  • Who is your audience?
  • What kind of culture do they live in?
  • What kind of problems may arise during the localization process? 

I will explain the details below. 

Who is your ideal audience?

Knowing your target audience should be at the top of your business plan. 

For some, localization is not needed because they live in the same region and speak the same language as their target market. For example, daycare services, local coffee shops, and restaurants. 

In some cases, people who live in the same region may speak different languages. In a bilingual society, you may want to cater to speakers of both languages as a sign of respect. In a multilingual society, aim to translate to the lingua franca and/or the language used by the majority. It makes people feel seen, and it can create a positive image for your brand. 

Sometimes, website translation is required by law. In Quebec, for instance, where French is spoken as the provincial language, you’ll need to include a French version of your website. You may also want to check other types of linguistic experiences you need to provide.

If your target market lives across the sea and speaks a different language, you may not have any choice but to localize. However, if those people can speak your language, consider other aspects (cultural and/or legal) to make an informed decision on whether to translate.

Although there are many benefits of website translation, you don’t always have to do it now, especially when your budget is tight or could be spent on something more urgent. It’s better to postpone than to have a badly translated website. The price of cheap translation is costly.

If you’re legally required to launch a bilingual website but you don’t have the budget, you may want to check if you can be exempted. If you are not exempted, hire volunteers or seek government support, if possible. 

Unless required otherwise by law, there is nothing wrong with using your current language in your product or service. You can maintain the already-formed relationship by focusing on what you have in common: the same interest. 

Understanding cultural and linguistic intricacies

Say, for example, you have a coding tutorial website. Your current audience is IT professionals—mostly college graduates. You see an opportunity to expand to India. 

Localization is unlikely to be needed in this case, as most Indian engineers have a good grasp of English. So, instead of doing a web translation project, you can use your money to improve or develop a new product or service for your Indian audience. Maybe you want to set up a workshop or a meetup in India. Or a bootcamp retreat in the country. 

You can achieve this by focusing on the similarities you have with your audience. 

The same rule applies to other countries where English language is commonly used by IT professionals. In the developing world, where English is rarely used, some self-taught programmers become “good hackers” to earn some money. You may wonder how, despite their lack of English skill, they can learn programming.

There’s an explanation for it. 

There are two types of language skills: passive (listening, reading) and active (speaking, writing). Passive language skills are usually learned first. Active language skills are developed later. You learn to speak by listening, and learn to write by reading. You go through this process as a child and, again, when you learn a new language as an adult. (This is not to confuse language acquisition with language learning, but to note that the process is relatively the same.) 

As most free IT course materials are available online in English, some programmers may have to adapt and study English (passively) as they go. They may not be considered “fluent” in a formal way, but it doesn’t mean they lack the ability to grasp the language. They may not be able to speak or write perfectly, but they can understand technical texts. 

In short, passive and active language skills can grow at different speeds. This fact leads you to a new potential audience: those who can understand English, but only passively. 

If your product is in a text format, translation won’t be necessary for this type of audience. If it’s an audio or video format, you may need to add subtitles, since native English speakers speak in so many different accents and at various speeds. Captioning will also help the hard of hearing. It may be required by regional or national accessibility legislation too. And it’s the right thing to do.

One might argue that if these people can understand English, they will understand the text better in their native tongue. 

Well, if all the programs you’re using or referring to are available in their native language version, it may not be a problem. But in reality, this is often not the case. 

Linguistic consistency helps programmers work faster. And this alone should trump the presumed ease that comes with translation. 

Some problems with localization 

I was once involved in a global internet company’s localization project in Indonesia. 

Indonesian SMEs mostly speak Indonesian, since they mainly serve the domestic market. So it was the right decision to target Indonesian SMEs in the Indonesian language. 

The company had the budget to target Indonesia’s market of 58 million SMEs, and there weren’t too many competitors yet. I think the localization plan was justified. But even with this generally well-thought-out plan, there were some problems in its execution. 

The materials were filled with jargon and annoying outlinks. You could not simply read an instruction through to the end, because after a few words, you would be confronted with a “smart term.” To understand this smart term, you would have to follow a link that would take you to a separate page that was supposed to explain everything, but on that page you would find more smart terms that you’d need to click. At this point, the scent of information would have grown cold, and you’d likely have forgotten what you were reading or why. 

Small business owners are among the busiest folks you can find. They do almost everything by themselves. They would not waste their time trying to read pages of instructions that throw them right and left. 

Language-wise, the instructions could have been simplified. Design-wise, a hover/focus pop-up containing a brief definition or description could have been used to explain special terms. 

I agree pop-ups can be distracting, but in terms of ease, for this use case, they would have worked far better than outlinks. There are some ways to improve hover/focus pop-ups to make them more readable. 

However, if the content of those pop-ups (definition, description, etc.) cannot be brief, it is wiser to write it down as a separate paragraph. 

In my client’s case, they could have started each instruction by describing the definitions of those special terms. Those definitions ought to be written on one page so as to reduce the amount of time spent on clicking and returning to the intended page. This solution can also be applied when a definition is too long to be put inside a hover/focus bubble. 

The text problem, in my client’s case, came with the source language. It was later transferred to the target language thanks to localization. They could have solved the problem at the source language level, but I think it would have been too late at that point. 

Transcreation, i.e., “taking a concept in one language and completely recreating it in another language,” doesn’t solve a problem like this because the issue is more technical than linguistic. Translators would still have to adjust their work to the given environment. They’d still have to retain all the links and translate all jargon-laden content. 

The company should have hired a local writer to rewrite the content in the target language. It would have worked better. They didn’t take this route for a reason: namely, those “smart terms” were used as keywords. So as much as we hated them, we had to keep them there.  

How to prepare a web localization project

Let’s say you have considered everything. You’ve learned about your target audience, how your product will solve their problem, and that you have the budget to reach out to them. Naturally, you want to reach them now before your competitors do. 

Now you can proceed with your web localization project plan. 

One thing I want to repeat is that localization will transfer any errors you have in your original content to the translated pages. So you’ll need to do some content pre-checks before starting a web translation project. It will be cheaper to fix the problems before the translation project commences. 

Pre-localization checks should include assessing the text you intend to translate. Ask someone outside the team to read the text and give their feedback. It’s even better if that someone represents the target audience. 

Then make corrections, if need be. Use as little jargon as possible. Let readers focus on one article with no interruption. 

Some companies like to coin new terms to create keywords that will lead people to their sites. This can be a smart move, and it is arguably good for search engine optimization. But if you want to build rapport with your audience, you must make your message clear and understandable. Clear communication, not the invention of new words, should be your priority. 

Following this course of action might mean sacrificing keywords for clarity, but it also promises a lower bounce rate since visitors will stay longer on your site. After all, people are more likely to read your writing to the end if they are not frustrated by difficult terms.

Once your text is ready, you can start your localization project. You can hire a language agency or build your own team. 

If you have a lot of content, it may be wise to outsource your project to a language agency. Doing so can save you time and money. An outside specialist consultancy will have the technology and skills to work on various types of localization projects. They can also translate your website to different languages at once. 

As an alternative, you might directly hire freelance editors and translators to work on your project. Depending on many factors, this might end up less or more expensive than hiring an agency. 

Make sure that the translators you hire, whether directly or through an agency, have relevant experience. If your text is about marketing, for instance, the translators and editors must be experts in this field. This is to make sure they can get your message across. 

Most translation tools used today can retain sentence formatting, links, and HTML code, so you don’t need to worry about these. 

Focus on the message you want to carry to your target audience. Be sensitive about cultural remarks and be careful about any potential misunderstanding caused by your translation. Consult with your language team about certain phrases that may become problematic when translated. Pick your words carefully. Choose the right expressions. 

If you localize a website, be sure to provide customer service support in the target language too. This allows you to reply to customers immediately, rather than having to wait for a translator to become involved.

In summary, don’t be hasty when doing a web localization/translation project. There are a lot of things to consider beforehand. A well-prepared plan will yield a better result. A good-quality translation will not only bridge the language gap but can also build trust and solidify your brand image in the mind of your target audience.

Human-Readable JavaScript: A Tale of Two Experts

Everyone wants to be an expert. But what does that even mean? Over the years I’ve seen two types of people who are referred to as “experts.” Expert 1 is someone who knows every tool in the language and makes sure to use every bit of it, whether it helps or not. Expert 2 also knows every piece of syntax, but they’re pickier about what they employ to solve problems, considering a number of factors, both code-related and not. 

Can you take a guess at which expert we want working on our team? If you said Expert 2, you’d be right. They’re a developer focused on delivering readable code—lines of JavaScript others can understand and maintain. Someone who can make the complex simple. But “readable” is rarely definitive—in fact, it’s largely based on the eyes of the beholder. So where does that leave us? What should experts aim for when writing readable code? Are there clear right and wrong choices? The answer is, it depends.

The obvious choice

In order to improve developer experience, TC39 has been adding lots of new features to ECMAScript in recent years, including many proven patterns borrowed from other languages. One such addition, added in ES2019, is Array.prototype.flat(). It takes an argument of depth or Infinity, and flattens an array. If no argument is given, the depth defaults to 1.

Prior to this addition, we needed the following syntax to flatten an array to a single level.

let arr = [1, 2, [3, 4]];

[].concat.apply([], arr);
// [1, 2, 3, 4]

With the addition of flat(), that same functionality can be expressed using a single, descriptive function.

arr.flat();
// [1, 2, 3, 4]

Is the second line of code more readable? The answer is emphatically yes. In fact, both experts would agree.

Not every developer is going to be aware that flat() exists. But they don’t need to because flat() is a descriptive verb that conveys the meaning of what is happening. It’s a lot more intuitive than concat.apply().

This is the rare case where there is a definitive answer to the question of whether new syntax is better than old. Both experts, each of whom is familiar with the two syntax options, will choose the second. They’ll choose the shorter, clearer, more easily maintained line of code.

But choices and trade-offs aren’t always so decisive.

The gut check

The wonder of JavaScript is that it’s incredibly versatile. There is a reason it’s all over the web. Whether you think that’s a good or bad thing is another story.

But with that versatility comes the paradox of choice. You can write the same code in many different ways. How do you determine which way is “right”? You can’t even begin to make a decision unless you understand the available options and their limitations.

Let’s use functional programming with map() as the example. I’ll walk through various iterations that all yield the same result.

This is the tersest version of our map() examples. It uses the fewest characters, all fitting into one line. This is our baseline.

const arr = [1, 2, 3];
let multipliedByTwo = arr.map(el => el * 2);
// multipliedByTwo is [2, 4, 6]

This next example adds only two characters: parentheses. Is anything lost? How about gained? Does it make a difference that a function with more than one parameter will always need to use the parentheses? I’d argue that it does. There is little to no detriment in adding them here, and it improves consistency when you inevitably write a function with multiple parameters. In fact, when I wrote this, Prettier enforced that constraint; it didn’t want me to create an arrow function without the parentheses.

let multipliedByTwo = arr.map((el) => el * 2);

Let’s take it a step further. We’ve added curly braces and a return. Now this is starting to look more like a traditional function definition. Right now, it may seem like overkill to have a keyword as long as the function logic. Yet, if the function is more than one line, this extra syntax is again required. Do we presume that we will not have any other functions that go beyond a single line? That seems dubious.

let multipliedByTwo = arr.map((el) => {
  return el * 2;
});

Next we’ve removed the arrow function altogether. We’re using the same syntax as before, but we’ve swapped the arrow syntax out for the function keyword. This is interesting because there is no scenario in which this syntax won’t work; no number of parameters or lines will cause problems, so consistency is on our side. It’s more verbose than our initial definition, but is that a bad thing? How does this hit a new coder, or someone who is well versed in something other than JavaScript? Is someone who knows JavaScript well going to be frustrated by this syntax in comparison?

let multipliedByTwo = arr.map(function(el) {
  return el * 2;
});

Finally we get to the last option: passing just the function. And timesTwo can be written using any syntax we like. Again, there is no scenario in which passing the function name causes a problem. But step back for a moment and think about whether or not this could be confusing. If you’re new to this codebase, is it clear that timesTwo is a function and not an object? Sure, map() is there to give you a hint, but it’s not unreasonable to miss that detail. How about the location of where timesTwo is declared and initialized? Is it easy to find? Is it clear what it’s doing and how it’s affecting this result? All of these are important considerations.

const timesTwo = (el) => el * 2;
let multipliedByTwo = arr.map(timesTwo);

As you can see, there is no obvious answer here. But making the right choice for your codebase means understanding all the options and their limitations. And knowing that consistency requires parentheses and curly braces and return keywords.

There are a number of questions you have to ask yourself when writing code. Questions of performance are typically the most common. But when you’re looking at code that is functionally identical, your determination should be based on humans—how humans consume code.

Maybe newer isn’t always better

So far we’ve found a clear-cut example of where both experts would reach for the newest syntax, even if it’s not universally known. We’ve also looked at an example that poses a lot of questions but not as many answers.

Now it’s time to dive into code that I’ve written before...and removed. This is code that made me the first expert, using a little-known piece of syntax to solve a problem to the detriment of my colleagues and the maintainability of our codebase.

Destructuring assignment lets you unpack values from objects (or arrays). It typically looks something like this.

const {node} = exampleObject;

It initializes a variable and assigns it a value all in one line. But it doesn’t have to.

let node
;({node} = exampleObject)

The last line of code assigns a variable to a value using destructuring, but the variable declaration takes place one line before it. It’s not an uncommon thing to want to do, but many people don’t realize you can do it.

But look at that code closely. It forces an awkward semicolon for code that doesn’t use semicolons to terminate lines. It wraps the command in parentheses and adds the curly braces; it’s entirely unclear what this is doing. It’s not easy to read, and, as an expert, it shouldn’t be in code that I write.

let node
node = exampleObject.node

This code solves the problem. It works, it’s clear what it does, and my colleagues will understand it without having to look it up. With the destructuring syntax, just because I can doesn’t mean I should.

Code isn’t everything

As we’ve seen, the Expert 2 solution is rarely obvious based on code alone; yet there are still clear distinctions between which code each expert would write. That’s because code is for machines to read and humans to interpret. So there are non-code factors to consider!

The syntax choices you make for a team of JavaScript developers are different from those you should make for a team of polyglots who aren’t steeped in the minutiae. 

Let’s take spread vs. concat() as an example.

Spread was added to ECMAScript a few years ago, and it’s enjoyed wide adoption. It’s sort of a utility syntax in that it can do a lot of different things. One of them is concatenating a number of arrays.

const arr1 = [1, 2, 3];
const arr2 = [9, 11, 13];
const nums = [...arr1, ...arr2];

As powerful as spread is, it isn’t a very intuitive symbol. So unless you already know what it does, it’s not super helpful. While both experts may safely assume a team of JavaScript specialists are familiar with this syntax, Expert 2 will probably question whether that’s true of a team of polyglot programmers. Instead, Expert 2 may select the concat() method, as it’s a descriptive verb that you can probably understand from the context of the code.

This code snippet gives us the same nums result as the spread example above.

const arr1 = [1, 2, 3];
const arr2 = [9, 11, 13];
const nums = arr1.concat(arr2);

And that’s but one example of how human factors influence code choices. A codebase that’s touched by a lot of different teams, for example, may have to hold more stringent standards that don’t necessarily keep up with the latest and greatest syntax. Then you move beyond the main source code and consider other factors in your tooling chain that make life easier, or harder, for the humans who work on that code. There is code that can be structured in a way that’s hostile to testing. There is code that backs you into a corner for future scaling or feature addition. There is code that’s less performant, doesn’t handle different browsers, or isn’t accessible. All of these factor into the recommendations Expert 2 makes.

Expert 2 also considers the impact of naming. But let’s be honest, even they can’t get that right most of the time.


Experts don’t prove themselves by using every piece of the spec; they prove themselves by knowing the spec well enough to deploy syntax judiciously and make well-reasoned decisions. This is how experts become multipliers—how they make new experts.

So what does this mean for those of us who consider ourselves experts or aspiring experts? It means that writing code involves asking yourself a lot of questions. It means considering your developer audience in a real way. The best code you can write is code that accomplishes something complex, but is inherently understood by those who examine your codebase.

And no, it’s not easy. And there often isn’t a clear-cut answer. But it’s something you should consider with every function you write.

Now THAT’S What I Call Service Worker!

The Service Worker API is the Dremel of the web platform. It offers incredibly broad utility while also yielding resiliency and better performance. If you’ve not used Service Worker yet—and you couldn’t be blamed if so, as it hasn’t seen wide adoption as of 2020—it goes something like this:

  1. On the initial visit to a website, the browser registers what amounts to a client-side proxy powered by a comparably paltry amount of JavaScript that—like a Web Worker—runs on its own thread.
  2. After the Service Worker’s registration, you can intercept requests and decide how to respond to them in the Service Worker’s fetch() event.

What you decide to do with requests you intercept is a) your call and b) depends on your website. You can rewrite requests, precache static assets during install, provide offline functionality, and—as will be our eventual focus—deliver smaller HTML payloads and better performance for repeat visitors.
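In code, both steps are small. Here’s a minimal sketch, assuming the Service Worker script lives at /sw.js:

// In the page’s JavaScript: register the Service Worker once the page
// has loaded, so registration doesn’t compete with more critical work.
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker.register("/sw.js");
  });
}

// In sw.js: intercept requests and decide how to respond to them.
// This pass-through handler simply forwards requests to the network.
self.addEventListener("fetch", event => {
  event.respondWith(fetch(event.request));
});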

Getting out of the woods

Weekly Timber is a client of mine that provides logging services in central Wisconsin. For them, a fast website is vital. Their business is located in Waushara County, and like many rural stretches in the United States, network quality and reliability aren’t great.

Figure 1. A wireless coverage map of Waushara County, Wisconsin. The tan areas of the map indicate downlink speeds between 3 and 9.99 Mbps. Red areas are even slower, while the pale and dark blue areas are faster.

Wisconsin has farmland for days, but it also has plenty of forests. When you need a company that cuts logs, Google is probably your first stop. How fast a given logging company’s website is might be enough to get you looking elsewhere if you’re left waiting too long on a crappy network connection.

I initially didn’t believe a Service Worker was necessary for Weekly Timber’s website. After all, if things were plenty fast to start with, why complicate things? On the other hand, knowing that my client services not just Waushara County, but much of central Wisconsin, even a barebones Service Worker could be the kind of progressive enhancement that adds resilience in the places it might be needed most.

The first Service Worker I wrote for my client’s website—which I’ll refer to henceforth as the “standard” Service Worker—used three well-documented caching strategies (a minimal sketch follows the list):

  1. Precache CSS and JavaScript assets for all pages when the Service Worker is installed, which happens after the window’s load event fires.
  2. Serve static assets out of CacheStorage if available. If a static asset isn’t in CacheStorage, retrieve it from the network, then cache it for future visits.
  3. For HTML assets, hit the network first and place the HTML response into CacheStorage. If the network is unavailable the next time the visitor arrives, serve the cached markup from CacheStorage.
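Here’s a minimal sketch of strategies 2 and 3 in the Service Worker’s fetch() event. It reuses the placeholder cache name from the install code shown later in this article, and real code would need more error handling:

self.addEventListener("fetch", event => {
  const request = event.request;
  const cacheName = "fancy_cache_name_here";

  if (request.mode === "navigate") {
    // Strategy 3: network-first for HTML, falling back to CacheStorage.
    event.respondWith(
      fetch(request).then(response => {
        const copy = response.clone();
        caches.open(cacheName).then(cache => cache.put(request, copy));
        return response;
      }).catch(() => caches.match(request))
    );
  } else {
    // Strategy 2: cache-first for static assets, caching on a miss.
    event.respondWith(
      caches.match(request).then(cached => {
        return cached || fetch(request).then(response => {
          const copy = response.clone();
          caches.open(cacheName).then(cache => cache.put(request, copy));
          return response;
        });
      })
    );
  }
});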

These are neither new nor special strategies, but they provide two benefits:

  • Offline capability, which is handy when network conditions are spotty.
  • A performance boost for loading static assets.

That performance boost translated to a 42% and 48% decrease in the median time to First Contentful Paint (FCP) and Largest Contentful Paint (LCP), respectively. Better yet, these insights are based on Real User Monitoring (RUM). That means these gains aren’t just theoretical, but a real improvement for real people.

Figure 2. A breakdown of request/response timings depicted in Chrome’s developer tools. The request is for a static asset from CacheStorage. Because the Service Worker doesn’t need to access the network, it takes about 23 milliseconds to “download” the asset from CacheStorage.

This performance boost is from bypassing the network entirely for static assets already in CacheStorage—particularly render-blocking stylesheets. A similar benefit is realized when we rely on the HTTP cache, only the FCP and LCP improvements I just described are in comparison to pages loaded with a primed HTTP cache but without an installed Service Worker.

If you’re wondering why CacheStorage and the HTTP cache aren’t equal, it’s because the HTTP cache—at least in some cases—may still involve a trip to the server to verify asset freshness. Cache-Control’s immutable flag gets around this, but immutable doesn’t have great support yet. A long max-age value works, too, but the combination of Service Worker API and CacheStorage gives you a lot more flexibility.
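For instance, a response header like the following tells the browser it can reuse an asset for up to a year without ever revalidating it:

Cache-Control: max-age=31536000, immutable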

Details aside, the takeaway is that the simplest and most well-established Service Worker caching practices can improve performance, potentially more than what well-configured Cache-Control headers can provide. Even so, Service Worker is an incredible technology with far more possibilities. It’s possible to go farther, and I’ll show you how.

A better, faster Service Worker

The web loves itself some “innovation,” which is a word we equally love to throw around. To me, true innovation isn’t when we create new frameworks or patterns solely for the benefit of developers, but whether those inventions benefit people who end up using whatever it is we slap up on the web. The priority of constituencies is a thing we ought to respect. Users above all else, always.

The Service Worker API’s innovation space is considerable. How you work within that space can have a big effect on how the web is experienced. Things like navigation preload and ReadableStream have taken Service Worker from great to killer. We can do the following with these new capabilities, respectively:

  • Reduce Service Worker latency by parallelizing Service Worker startup time and navigation requests.
  • Stream content in from CacheStorage and the network.

Moreover, we’re going to combine these capabilities and pull out one more trick: precache header and footer partials, then combine them with content partials from the network. This not only reduces how much data we download from the network, but it also improves perceptual performance for repeat visits. That’s innovation that helps everyone.

Grizzled, I turn to you and say “let’s do this.”

Laying the groundwork

If the idea of combining precached header and footer partials with network content on the fly seems like a Single Page Application (SPA), you’re not far off. Like an SPA, you’ll need to apply the “app shell” model to your website. Only instead of a client-side router plowing content into one piece of minimal markup, you have to think of your website as three separate parts:

  • The header.
  • The content.
  • The footer.

For my client’s website, that looks like this:

A screenshot of the Weekly Timber website color coded to delineate each partial that makes up the page. The header is color coded as blue, the footer as red, and the main content in between as yellow.
Figure 3. A color coding of the Weekly Timber website’s different partials. The Footer and Header partials are stored in CacheStorage, while the Content partial is retrieved from the network unless the user is offline.

The thing to remember here is that the individual partials don’t have to be valid markup in the sense that all tags need to be closed within each partial. The only thing that counts is that the combination of these partials must be valid markup.

To start, you’ll need to precache separate header and footer partials when the Service Worker is installed. For my client’s website, these partials are served from the /partial-header and /partial-footer pathnames:

self.addEventListener("install", event => {
  const cacheName = "fancy_cache_name_here";
  const precachedAssets = [
    "/partial-header",  // The header partial
    "/partial-footer",  // The footer partial
    // Other assets worth precaching

  event.waitUntil(caches.open(cacheName).then(cache => {
    return cache.addAll(precachedAssets);
  }).then(() => {
    return self.skipWaiting();

Every page must be fetchable as a content partial minus the header and footer, as well as a full page with the header and footer. This is key because the initial visit to a page won’t be controlled by a Service Worker. Once the Service Worker takes over, then you serve content partials and assemble them into complete responses with the header and footer partials from CacheStorage.

If your site is static, this means generating a whole other mess of markup partials that you can rewrite requests to in the Service Worker’s fetch() event. If your website has a back end—as is the case with my client—you can use an HTTP request header to instruct the server to deliver full pages or content partials.
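
For example, if a static site generator emitted a partial for every page under a hypothetical /partials/ path, the rewrite in the fetch() event might look like this sketch:

self.addEventListener("fetch", event => {
  const { request } = event;

  if (request.mode === "navigate") {
    const pageURL = new URL(request.url);

    // Rewrite the request for /about/ to its partial at /partials/about/
    const partialRequest = new Request(`/partials${pageURL.pathname}`);

    // partialRequest then stands in for the network request when the
    // full response is assembled later on
  }
});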

The hard part is putting all the pieces together—but we’ll do just that.

Putting it all together

Writing even a basic Service Worker can be challenging, but things get real complicated real fast when assembling multiple responses into one. One reason for this is that in order to avoid the Service Worker startup penalty, we’ll need to set up navigation preload.

Implementing navigation preload

Navigation preload addresses the problem of Service Worker startup time, which delays navigation requests to the network. The last thing you want to do with a Service Worker is hold up the show.

Navigation preload must be explicitly enabled. Once enabled, the Service Worker won’t hold up navigation requests during startup. Navigation preload is enabled in the Service Worker’s activate event:

self.addEventListener("activate", event => {
  const cacheName = "fancy_cache_name_here";
  const preloadAvailable = "navigationPreload" in self.registration;

  event.waitUntil(caches.keys().then(keys => {
    return Promise.all([
      keys.filter(key => {
        return key !== cacheName;
      }).map(key => {
        return caches.delete(key);
      preloadAvailable ? self.registration.navigationPreload.enable() : true

Because navigation preload isn’t supported everywhere, we have to do the usual feature check, which we store in the above example in the preloadAvailable variable.

Additionally, we need to use Promise.all() to resolve multiple asynchronous operations before the Service Worker activates. This includes pruning those old caches, as well as waiting for both clients.claim() (which tells the Service Worker to assert control immediately rather than waiting until the next navigation) and navigation preload to be enabled.

A ternary operator is used to enable navigation preload in supporting browsers and avoid throwing errors in browsers that don’t. If preloadAvailable is true, we enable navigation preload. If it isn’t, we pass a Boolean that won’t affect how Promise.all() resolves.

With navigation preload enabled, we need to write code in our Service Worker’s fetch() event handler to make use of the preloaded response:

self.addEventListener("fetch", event => {
  const { request } = event;

  // Static asset handling code omitted for brevity
  // ...

  // Check if this is a request for a document
  if (request.mode === "navigate") {
    const networkContent = Promise.resolve(event.preloadResponse).then(response => {
      if (response) {
        addResponseToCache(request, response.clone());

        return response;

      return fetch(request.url, {
        headers: {
          "X-Content-Mode": "partial"
      }).then(response => {
        addResponseToCache(request, response.clone());

        return response;
    }).catch(() => {
      return caches.match(request.url);

    // More to come...

Though this isn’t the entirety of the Service Worker’s fetch() event code, there’s a lot that needs explaining:

  1. The preloaded response is available in event.preloadResponse. However, as Jake Archibald notes, the value of event.preloadResponse will be undefined in browsers that don’t support navigation preload. Therefore, we must pass event.preloadResponse to Promise.resolve() to avoid compatibility issues.
  2. We adapt in the resulting then callback. If event.preloadResponse is supported, we use the preloaded response and add it to CacheStorage via an addResponseToCache() helper function. If not, we send a fetch() request to the network to get the content partial using a custom X-Content-Mode header with a value of partial.
  3. Should the network be unavailable, we fall back to the most recently accessed content partial in CacheStorage.
  4. The response—regardless of where it was procured from—is then assigned to a variable named networkContent that we use later.

How the content partial is retrieved is tricky. With navigation preload enabled, a special Service-Worker-Navigation-Preload header with a value of true is added to navigation requests. We then work with that header on the back end to ensure the response is a content partial rather than the full page markup.

However, because navigation preload isn’t available in all browsers, we send a different header in those scenarios. In Weekly Timber’s case, we fall back to a custom X-Content-Mode header. In my client’s PHP back end, I’ve created some handy constants:


// Is this a navigation preload request?
define("NAVIGATION_PRELOAD", isset($_SERVER["HTTP_SERVICE_WORKER_NAVIGATION_PRELOAD"]) && stristr($_SERVER["HTTP_SERVICE_WORKER_NAVIGATION_PRELOAD"], "true") !== false);

// Is this an explicit request for a content partial?
define("PARTIAL_MODE", isset($_SERVER["HTTP_X_CONTENT_MODE"]) && stristr($_SERVER["HTTP_X_CONTENT_MODE"], "partial") !== false);

// If either is true, this is a request for a content partial
define("USE_PARTIAL", NAVIGATION_PRELOAD === true || PARTIAL_MODE === true);


From there, the USE_PARTIAL constant is used to adapt the response, wrapping the header and footer includes so they only render for full-page requests (the include filenames here are illustrative):

if (USE_PARTIAL === false) {
  require_once("partial-header.php");
}

// The content partial is always rendered here...

if (USE_PARTIAL === false) {
  require_once("partial-footer.php");
}

The thing to be hip to here is that you should specify a Vary header for HTML responses to take the Service-Worker-Navigation-Preload header (and, in this case, the X-Content-Mode header) into account for HTTP caching purposes—assuming you’re caching HTML at all, which may not be the case for you.
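
That Vary header would look something like this:

Vary: Service-Worker-Navigation-Preload, X-Content-Mode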

With our handling of navigation preloads complete, we can then move on to the work of streaming content partials from the network and stitching them together with the header and footer partials from CacheStorage into a single response that the Service Worker will provide.

Streaming partial content and stitching together responses

While the header and footer partials will be available almost instantaneously because they’ve been in CacheStorage since the Service Worker’s installation, it’s the content partial we retrieve from the network that will be the bottleneck. It’s therefore vital that we stream responses so we can start pushing markup to the browser as quickly as possible. ReadableStream can do this for us.

This ReadableStream business is a mind-bender. Anyone who tells you it’s “easy” is whispering sweet nothings to you. It’s hard. After I wrote my own function to merge streamed responses and messed up a critical step—which ended up not improving page performance, mind you—I modified Jake Archibald’s mergeResponses() function to suit my needs:

async function mergeResponses (responsePromises) {
  const readers = responsePromises.map(responsePromise => {
    return Promise.resolve(responsePromise).then(response => {
      return response.body.getReader();
    });
  });

  let doneResolve,
      doneReject;

  const done = new Promise((resolve, reject) => {
    doneResolve = resolve;
    doneReject = reject;
  });

  const readable = new ReadableStream({
    async pull (controller) {
      const reader = await readers[0];

      try {
        const { done, value } = await reader.read();

        if (done) {
          // This response is fully streamed; discard its reader
          readers.shift();

          if (!readers[0]) {
            controller.close();
            doneResolve();

            return;
          }

          return this.pull(controller);
        }

        controller.enqueue(value);
      } catch (err) {
        doneReject(err);
        throw err;
      }
    },
    cancel () {
      doneResolve();
    }
  });

  const headers = new Headers();
  headers.append("Content-Type", "text/html");

  return {
    done,
    response: new Response(readable, {
      headers
    })
  };
}

As usual, there’s a lot going on:

  1. mergeResponses() accepts an argument named responsePromises, which is an array of Response objects returned from either a navigation preload, fetch(), or caches.match(). Assuming the network is available, this will always contain three responses: two from caches.match() and (hopefully) one from the network.
  2. Before we can stream the responses in the responsePromises array, we must map responsePromises to an array containing one reader for each response. Each reader is used later in a ReadableStream() constructor to stream each response’s contents.
  3. A promise named done is created. In it, we assign the promise’s resolve() and reject() functions to the external variables doneResolve and doneReject, respectively. These will be used in the ReadableStream() to signal whether the stream is finished or has hit a snag.
  4. The new ReadableStream() instance is created with a name of readable. As responses stream in from CacheStorage and the network, their contents will be appended to readable.
  5. The stream’s pull() method streams the contents of the first response in the array. If the stream isn’t canceled somehow, the reader for each response is discarded by calling the readers array’s shift() method when the response is fully streamed. This repeats until there are no more readers to process.
  6. When all is done, the merged stream of responses is packaged into a single response, which we return with a Content-Type header value of text/html.

This is much simpler if you use TransformStream, but depending on when you read this, that may not be an option for every browser. For now, we’ll have to stick with this approach.
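
If you’re curious, a TransformStream-based version might look something like this sketch; this is an assumption on my part, not the code running on my client’s site:

async function mergeResponsesViaTransform (responsePromises) {
  const { readable, writable } = new TransformStream();

  const done = (async () => {
    for (const responsePromise of responsePromises) {
      const response = await responsePromise;

      // preventClose keeps the writable side open for the next response
      await response.body.pipeTo(writable, { preventClose: true });
    }

    await writable.getWriter().close();
  })();

  return {
    done,
    response: new Response(readable, {
      headers: { "Content-Type": "text/html" }
    })
  };
}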

Now let’s revisit the Service Worker’s fetch() event from earlier, and apply the mergeResponses() function:

self.addEventListener("fetch", event => {
  const { request } = event;

  // Static asset handling code omitted for brevity
  // ...

  // Check if this is a request for a document
  if (request.mode === "navigate") {
    // Navigation preload/fetch() fallback code omitted.
    // ...

    const { done, response } = await mergeResponses([


At the end of the fetch() event handler, we pass the header and footer partials from CacheStorage to the mergeResponses() function, and pass the result to the fetch() event’s respondWith() method, which serves the merged response on behalf of the Service Worker.

Are the results worth the hassle?

This is a lot of stuff to do, and it’s complicated! You might mess something up, or maybe your website’s architecture isn’t well-suited to this exact approach. So it’s important to ask: are the performance benefits worth the work? In my view? Yes! The synthetic performance gains aren’t bad at all:

A bar graph comparing First Contentful Paint and Largest Contentful Paint performance for the Weekly Timber website for scenarios in which there is no service worker, a "standard" service worker, and a streaming service worker that stitches together content partials from CacheStorage and the network. The first two scenarios are basically the same, while the streaming service worker delivers measurably better performance for both FCP and LCP—especially for FCP!
Figure 4. A bar chart of median FCP and LCP synthetic performance data across various Service Worker types for the Weekly Timber website.

Synthetic tests don’t measure performance for anything except the specific device and internet connection they’re performed on. Even so, these tests were conducted on a staging version of my client’s website with a low-end Nokia 2 Android phone on a throttled “Fast 3G” connection in Chrome’s developer tools. Each category was tested ten times on the homepage. The takeaways here are:

  • No Service Worker at all is slightly faster than the “standard” Service Worker, which uses simpler caching patterns than the streaming variant. Like, ever so slightly faster. This may be due to the delay introduced by Service Worker startup; however, the RUM data I’ll go over shortly tells a different story.
  • Both LCP and FCP are tightly coupled in scenarios where there’s no Service Worker or when the “standard” Service Worker is used. This is because the content of the page is pretty simple and the CSS is fairly small. The Largest Contentful Paint is usually the opening paragraph on a page.
  • However, the streaming Service Worker decouples FCP and LCP because the header content partial streams in right away from CacheStorage.
  • Both FCP and LCP are lower in the streaming Service Worker than in other cases.

A bar chart comparing the RUM median FCP and LCP performance of no service worker, a "standard" service worker, and a streaming service worker. Both the "standard" and streaming service worker offer better FCP and LCP performance over no service worker, but the streaming service worker excels at FCP performance, while only being slightly slower at LCP than the "standard" service worker.
Figure 5. A bar chart of median FCP and LCP RUM performance data across various Service Worker types for the Weekly Timber website.

The benefits of the streaming Service Worker for real users are pronounced. For FCP, we see a 79% improvement over no Service Worker at all, and a 63% improvement over the “standard” Service Worker. The benefits for LCP are more subtle. Compared to no Service Worker at all, we realize a 41% improvement in LCP—which is incredible! However, compared to the “standard” Service Worker, LCP is a touch slower.

Because the long tail of performance is important, let’s look at the 95th percentile of FCP and LCP performance:

A bar chart comparing the RUM median FCP and LCP performance of no service worker, a "standard" service worker, and a streaming service worker. Both the "standard" and streaming service workers are faster than no service worker at all, but the streaming service worker beats out the "standard" service worker for both FCP and LCP.
Figure 6. A bar chart of 95th percentile FCP and LCP RUM performance data across various Service Worker types for the Weekly Timber website.

The 95th percentile of RUM data is a great place to assess the slowest experiences. In this case, we see that the streaming Service Worker confers a 40% and 51% improvement in FCP and LCP, respectively, over no Service Worker at all. Compared to the “standard” Service Worker, we see a reduction in FCP and LCP by 19% and 43%, respectively. If these results seem a bit squirrely compared to synthetic metrics, remember: that’s RUM data for you! You never know who’s going to visit your website on which device on what network.

While both FCP and LCP are boosted by the myriad benefits of streaming, navigation preload (in Chrome’s case), and sending less markup by stitching together partials from both CacheStorage and the network, FCP is the clear winner. Perceptually speaking, the benefit is pronounced, as this video would suggest:

Figure 7. Three WebPageTest videos of a repeat view of the Weekly Timber homepage. On the left is the page not controlled by a Service Worker, with a primed HTTP cache. On the right is the same page controlled by a streaming Service Worker, with CacheStorage primed.

Now ask yourself this: If this is the kind of improvement we can expect on such a small and simple website, what might we expect on a website with larger header and footer markup payloads?

Caveats and conclusions

Are there trade-offs with this on the development side? Oh yeah.

As Philip Walton has noted, a cached header partial means the document title must be updated in JavaScript on each navigation by changing the value of document.title. It also means you’ll need to update the navigation state in JavaScript to reflect the current page if that’s something you do on your website. Note that this shouldn’t cause indexing issues, as Googlebot crawls pages with an unprimed cache.
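
As a minimal sketch, if each content partial carried its title in a data attribute (a convention invented here for illustration), the fix could be as small as:

// After the content partial is swapped into the DOM
const content = document.querySelector("main");

if (content && content.dataset.pageTitle) {
  document.title = content.dataset.pageTitle;
}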

There may also be some challenges on sites with authentication. For example, if your site’s header displays the current authenticated user on log in, you may have to update the header partial markup provided by CacheStorage in JavaScript on each navigation to reflect who is authenticated. You may be able to do this by storing basic user data in localStorage and updating the UI from there.
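
A rough sketch of that idea, with a hypothetical .account-link element in the header and a user record in localStorage:

// Rehydrate the cached header's signed-in state on each navigation
const user = JSON.parse(localStorage.getItem("user") || "null");
const accountLink = document.querySelector(".account-link");

if (user && accountLink) {
  accountLink.textContent = `Signed in as ${user.name}`;
}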

There are certainly other challenges, but it’ll be up to you to weigh the user-facing benefits versus the development costs. In my opinion, this approach has broad applicability in applications such as blogs, marketing websites, news websites, ecommerce, and other typical use cases.

All in all, though, it’s akin to the performance improvements and efficiency gains that you’d get from an SPA. The difference is that you’re not replacing time-tested navigation mechanisms and grappling with all the messiness that entails, but enhancing them. That’s the part I think is really important to consider in a world where client-side routing is all the rage.

“What about Workbox?” you might ask—and you’d be right to. Workbox simplifies a lot when it comes to using the Service Worker API, and you’re not wrong to reach for it. Personally, I prefer to work as close to the metal as I can so I can gain a better understanding of what lies beneath abstractions like Workbox. Even so, Service Worker is hard. Use Workbox if it suits you. As far as frameworks go, its abstraction cost is very low.

Regardless of the approach you take, I think there’s incredible utility and power in using the Service Worker API to reduce the amount of markup you send. It benefits my client and all the people who use their website. Because of Service Worker and the innovation around its use, my client’s website is faster in the far-flung parts of Wisconsin. That’s something I feel good about.

Special thanks to Jake Archibald for his valuable editorial advice, which, to put it mildly, considerably improved the quality of this article.

Keeping Your Design Mind New and Fresh

“Only a fool knows everything.”

African Proverb

Since March 2020, most of us have been working from home, and the days blend into each other and look the same. This is not the first time I have experienced this type of feeling. 

My commute — New York to New Jersey — is what folks in my area call the reverse commute. While going to the office, my days began to look the same: riding the subway to a bus to a shuttle to get to my job. Have you ever arrived at a destination and not even realized how you got there? This is how I began to experience the world every day. I stopped paying attention to my surroundings.

Because I worked a lot, the only time I would take off was for the holidays. During this time, I was a consultant and was coming to the end of an existing contract. For six years straight, I did this, until I decided to take six weeks off work to travel to Europe and visit places I had not seen before.

A family friend let me stay with her in Munich, Germany; I did not speak German, and so began my adventure. I was in a new place, where I did not know anyone, and I got lost every single day. My eyes were opened to the fact that every day is an opportunity. It just took me going on a trip and traveling halfway around the world to realize it. There are new things to experience each and every day.

When I returned to the U.S. and went back to work, I made a conscious decision to make each day different. Sometimes I would walk a new route. Some days I would take another train. Each change meant I saw something new: new clothing, new buildings, and new faces. It really impacted the way I viewed myself in the world.

But what do you do when you cannot travel? Seeing a situation with new eyes takes practice, and you can still create the opportunity to see something by not taking your surroundings for granted.

How do we do this? For me, I adopted a new philosophy of being WOQE: watching, observing, questioning, and exploring.

Two people sit on a bench, one in a suit with arms crossed and the other wearing a backpack while looking through a camera. The letters WOQE surround them.


Watching

Let go of assumptions to open up your mind. This takes looking at yourself and understanding your beliefs.

When I am looking to design something, I always have to tell myself that I am not the user. I don’t know where they come from, and I don’t know their reason for making the decisions they do. I begin the work to understand where they are coming from. It all starts with why.


Observing

View the situation from different angles. Architects think about the details of a building and look at different viewpoints and perspectives (e.g., outside the building, different sides of the building, and so on).

How can you apply this approach to your designs? Here’s an example. I sketched something once as part of an augmented reality experience. Using my mobile device, I was able to walk around the sketch and see it from all sides, including the top and bottom. As a UX Designer, I have had to view items from both a user’s perspective and the business’ perspective. If I am giving a talk at a conference, I look at the talk from an audience perspective and my own.


Questioning

Use the “5 Why Technique” to get to the root of the problem. This involves asking “why” 5 times.

You know how kids keep asking “why” when you answer a question from them? This approach is how you can get to the root of problems. For example, a friend of mine who is blind expressed interest in playing a popular augmented reality game. This intrigued me and I used a whiteboard as I worked through the 5 Whys with my friend. Here is the process we took:

“Why can’t someone who is blind play Pokémon Go?” I asked.

“Because the game is visual and requires someone to see what is on the screen.”

“Why is the game only a visual perspective?”

“Because this is the way it was designed.”

“Why was it designed this way?”

“Because frequently designers are creating for themselves and may not think about who they might be excluding.”

“Why are designers excluding people?”

“Because they were never taught to include them.”

“Why were they never taught?”

“Design programs often do not include an inclusive and accessible curriculum.”

This may not be a scientific way of approaching a problem, but it is a starting point. My friend could not play this augmented reality game because designers were not taught to make this game for someone who is blind. After this exercise, I was able to work with a group of students who worked with my friend to create an augmented reality concept and ultimately a game using audio and haptic feedback.

It all started with why.


Exploring

Collaborate with others to learn from them and to teach them what you know. Let your friends and colleagues know what you are working on, and perhaps talk it through with them.

When I was a freelance designer, I worked on my own and found it challenging when I would get stuck on a design. I searched online and found a group of designers who would come and share their work with each other for feedback. Through this group, I was able to get some insightful comments on my designs and explain some of my decisions. I began to collaborate with the folks in the group and found it very helpful. When talking to clients, this made me feel more confident explaining my designs because I had already been through the process with my online group.

With all of our days blending into each other in this pandemic, we as designers have an unprecedented opportunity to really shake things up. Furthermore, we are problem solvers. As you move forward with your design practice, consider being WOQE to design with a fresh mind.

How to Get a Dysfunctional Team Back on Track

Maybe you’ve been part of a team that you’ve seen slowly slide into a rut. You didn’t notice it happen, but you’re now not shipping anything, no one’s talking to each other, and the management’s Eye of Sauron has cast its gaze upon you.

Maybe you’ve just joined a team that’s in the doldrums.

Maybe the people who used to oil the wheels that kept everyone together have moved on and you’re having to face facts—you all hate each other.

However you’ve ended up in this situation, the fact is that you’re now here and it’s up to someone to do something about it. And that person might be you.

You’re not alone

The first thing to understand is that you’re not the only person to ever encounter problems. Things like this happen all the time at work, but there are simple steps you can take and habits you can form to ease the situation and even dig yourself (and your team) out of the hole. I’ll share some techniques that have helped me, and maybe they can work for you, too.

So let me tell you a story about a hot mess I found myself in and how we turned it around. Names and details have been changed to protect the innocent.

It always starts out great

An engineer called Jen was working with me on a new feature on our product that lets people create new meal recipes themselves. I was the Project Manager. We were working in six-week cycles.

She had to rely on an API that was managed by Tom (who was in another team) to allow her to get and set the new recipe information on a central database. Before we kicked off, everyone knew the overall objective and everyone was all smiles and ready to go.

The system architecture was a legacy mishmash of different parts of local databases and API endpoints. And, no prizes for guessing what’s coming next, the API documentation was like Swiss cheese.

Two weeks into a six-week cycle, Jen hit Tom up with a list of her dream API calls that she wanted to use to build her feature. She asked him to confirm or deny they would work—or even if they existed at all—because once she started digging into the docs, it wasn’t clear to her if the API could support her plans.

However, Tom had form for sticking his head in the sand when it came to requests he didn’t like. He went to ground and didn’t respond. Tom’s manager, Frankie, was stretched too thin and wasn’t paying attention to this until I was persistently asking about it, in increasingly fraught tones.

In the meantime, Jen tried to do as much as she could. Every day she built a bit more based on her as-yet unapproved design, hoping it would all work out.

With two weeks left to go, Tom eventually responded with a short answer—which boiled down to “The API doesn’t support these calls and I don’t see why I should build something that does. Why don’t you get the data from the other part of the system? And by the way, if I’m forced to do this, it will take at least six weeks.”

And as we know, six weeks into two weeks doesn’t go. Problem.

How did we sort it?

Step 1 — Accept

When things go south, what do you do?

Accept it.

Acknowledge whatever has happened to get you into this predicament. Take some notes about it to use in team appraisals and retrospectives. Take a long hard look at yourself, too.

Write a concise, impersonal summary of where you are. Try not to write it from your point of view. Imagine that you’re in your boss’ seat and just give them the facts as they are. Don’t dress things up to make them sound better. Don’t over-exaggerate the bad. Leave the emotions to the side.

When you can see your situation clearly, you’ll make better decisions.

Now, pointing out the importance of taking some time to cool down and gather your thoughts seems obvious, but it’s based on the study of some of the most basic circuitry in our brains. Daniel Goleman’s 1995 book, Emotional Intelligence: Why It Can Matter More Than IQ, introduces the concept of emotional hijacking: the idea that the part of our brain that deals with emotion—the limbic system—can biologically interrupt rational thinking when it is overstimulated. For instance, experiments show that the angrier men get, the poorer the decisions they make at the casino. And another study found that people in a negative emotional state are more likely to deviate from logical norms. To put it another way, if you’re pissed off, you can’t think straight.

So when you are facing up to the facts, avoid the temptation to keep it off-the-record and only discuss it on the telephone or in person with your colleagues. There’s nothing to be scared of by writing it down. If it turns out that you’re wrong about something, you can always admit it and update your notes. If you don’t write it down, then there’s always scope for misunderstanding or misremembering in future.

In our case, we summarized how we’d ended up at that juncture; the salient points were:

  • I hadn’t checked to ensure we had scoped it properly before committing to the work. It wasn’t a surprise that the API coverage was patchy, but I turned a blind eye because we were excited about the new feature.
  • Jen should have looked for the hard problem first rather than do a couple of weeks’ worth of nice, easy work around the edges. That’s why we lost two weeks off the top.
  • Tom and Frankie’s communication was poor. The reasons for that don’t form part of this discussion, but something wasn’t right in that team.

And that’s step one.

Step 2 — Rejoice

Few people like to make mistakes, but everyone will make one at some point in their life. Big ones, small ones, important ones, silly ones—we all do it. Don’t beat yourself up.

A Venn diagram with one circle showing the set of people who make mistakes. In a smaller circle completely inside the first is the set of people who think they don't make mistakes.

At the start of my career, I worked on a team whose manager had a very high opinion of himself. He was good, but what I learned from him was that he spread that confidence around the team. If something was looking shaky, he insisted that if we could “smell smoke,” he had to be the first to know so he could do something about it. If we made a mistake, there was no hiding from it. We learned how to face up to it and accept responsibility, but what was more important was learning from him the feeling that we were the best people to fix it.

There was no holding of grudges. What was done, was done. It was all about putting it behind us.

He would tell us that we were only in this team because he had handpicked us because we were the best and he only wanted the best around him. Now, that might all have been manipulative nonsense, but it worked.

The only thing you can control is what you do now, so try not to fret about what happened in the past or get anxious about what might happen in the future.

With that in mind, once you’ve written the summary of your sticky situation, set it aside!

I’ll let you in on a secret. No one else is interested in how you got here. They might be asking you about it (probably because they are scared that someone will ask them), but they’re always going to be more interested in how you’re going to sort the problem out.

So don’t waste time pointing fingers. Don’t prepare slide decks to throw someone under the bus. Tag that advice with a more general “don’t be an asshole” rule.

If you’re getting consistent heat about the past, it’s because you’re not doing a good enough job filling the bandwidth with a solid, robust, and realistic plan for getting out of the mess.

So focus on the future.

Sometimes it’s not easy to do that, but remember that none of this is permanent. Trust in the fact that if you pull it together, you’ll be in a much more powerful position to decide what to do next.

Maybe the team will hold together with a new culture or, if it is irretrievably broken, once you’re out of the hole then you can do something about it and switch teams or even switch jobs. But be the person who sorted it out, or at the very least, be part of the gang who sorted it out. That will be obvious to outsiders and makes for a much better interview question response.

In our story with Jen, we had a short ten-minute call with everyone involved on the line. We read out the summary and asked if anyone had anything to add.

Tom spoke up and said that he never gets time to update the API documentation because he always has to work on emergencies. We added that to our summary:

  • Tom has an ongoing time management problem. He doesn’t have enough time allocated to maintain and improve the API documentation.

After that was added, everyone agreed that the summary was accurate.

I explained that the worst thing that could now happen was that we had to report back to the wider business that we’d messed up and couldn’t hit our deadline.

If we did that, we’d lose face. There would be real financial consequences. It would show up on our appraisals. It wouldn’t be good. It wouldn’t be the end of the world, but it wasn’t something that we wanted. Everyone probably knew all that already, but there’s a power in saying it out loud. Suddenly, it doesn’t seem so scary.

Jen spoke up to say that she was new here and really didn’t want to start out like this. There was some murmuring in general support. I wrapped up that part of the discussion.

I purposefully didn’t enter into a discussion about the solution yet. We had all come together to admit the circumstances we were in. We’d done that. It was enough for now.

Step 3 — Move on

Stepping back for a second, as the person who is going to lead the team out of the wilderness, you may want to start getting in everyone’s face. You’ll be tempted to rely on your unlimited reserves of personal charm or enthusiasm to vibe everyone up. Resist the urge! Don’t do it!

Your job is to give people the space to let them do their best work.

I learned this the hard way. I’m lucky enough that I can bounce back quickly, but when someone is under pressure, funnily enough, a super-positive person who wants to throw the curtains open and talk about what a wonderful day it is might not be the most motivational person to be around. I’ve unwittingly walked into some short-tempered conversations that way.

Don’t micromanage. In fact, scrap all of your management tricks. Your job is to listen to what people are telling you—even if they’re telling you things by not talking.

Reframe the current problem. Break it up into manageable chunks.

The first task to add to your list of things to do is simply to “Decide what we’re going to do about [the thing].”

It’s likely that there’s a nasty old JIRA ticket that everyone has been avoiding or has been bounced back and forth between different team members. Set that aside. There’s too much emotional content invested in that ticket now.

Create a new task that’s entirely centered on making a decision. Now, break it down into subtasks for each member of the team, like “Submit a proposal for what to do next.” Put your own suggestions in the mix but do your best to dissociate yourself from them.

Once you start getting some suggestions back and can tick those tasks off the list, you start to generate positive momentum. Nurture that.

If a plan emerges, champion it. Be wary of naysayers. Challenge them respectfully with “How do you think we should…?” questions. If they have a better idea, champion that instead; if they don’t respond at all, then gently suggest “Maybe we should go with this if no one else has a better idea.”

Avoid words like “need,” “just,” “one,” or “small.” Basically, anything that imposes a view of other people’s work. It seems trivial, but try to see it from the other side.

Saying, “I just need you to change that one small thing” hits the morale-killing jackpot. It unthinkingly diminishes someone else’s efforts. An engineer or a designer could reasonably react by thinking “What do you know about how to do this?!” Your job is to help everyone drop their guard and feel safe enough to contribute.

Instead, try “We’re all looking at you here because you’re good at this and this is a nasty problem. Maybe you know a way to make this part work?”

More often than not, people want to help.

So I asked Jen, Tom, and Frankie to submit their proposals for a way through the mess.

It wasn’t straightforward. Just because we’d all agreed how we got here didn’t just magically make all the problems disappear. Tom was still digging his heels in about not wanting to write more code, and kept pushing back on Jen.

There was a certain amount of back and forth. Although, with some constant reminders that we should maybe focus on what will move us forward, we eventually settled on a plan.

Like most compromises, it wasn’t pretty or simple. Jen was going to have to rely on using the local database for a certain amount of the lower-priority features. Tom was going to have to create some additional API functions and would end up with some unnecessary traffic that might create too much load on the API.

And even with the compromise, Tom wouldn’t be finished in time. He’d need another couple of weeks.

But it was a plan!

N.B. Estimating is a whole other subject that I won’t cover here. Check out the Shape Up process for some great advice on that.

Step 4 — Spread the word

Once you’ve got a plan, commit to it and tell everyone affected what’s going on.

When communicating with people who are depending on you, take the last line of your email, which usually contains the summary or the “ask,” and put it at the top. When your recipient reads the message, the opener is the meat. Good news or bad news, that’s what they’re interested in. They’ll read on if they want more.

If it’s bad news, set someone up for it with a simple “I’m sorry to say I’ve got bad news” before you break it to them. No matter who they are, kindly framing the conversation will help them digest it.

When discussing it with the team, put the plan somewhere everyone can see it. Transparency is key.

Don’t pull any moves—like publishing deadline dates to the team that are two weeks earlier than the date you’ve told the business. Teams aren’t stupid. They’ll know that’s what you do.

Publish the new deadlines in a place where everyone on the team can see them, and say we’re aiming for this date but we’re telling the business that we’ll definitely be done by that date.

In our case, I posted an update to the rest of the business as part of our normal weekly reporting cycle to announce we’d hit a bump that was going to affect our end date.

Here’s an extract:

Hi everyone,

Here’s the update for the week. I’m afraid there’s a bit of bad news to start but there is some good news too.


We uncovered a misunderstanding between Jen and Tom this week. The outcome is that Tom has more API work to do than he anticipated. This affects the delivery date and means we’re now planning to finish 10 working days later on November 22.

Expected completion date: CHANGED
Original estimate: November 8
Current estimate: November 22


We successfully released version 1.3 of the app into the App Store 🎉.

And so on...

That post was available for everyone within the team to see. Everyone knew what was to be done and what the target was.

I had to field some questions from above, but I was ready with my summary of what went wrong and what we’d all agreed to do as a course of action. All I had to do was refer to it. Then I could focus on sharing the plan.

And all manner of things shall be well

Now, I’d like to say that we then had tea and scones every day for the next month and it was all rather spiffing. But that would be a lie.

There was some more wailing and gnashing of teeth, but we all got through it and—even though we tried to finish early but failed—we did manage to finish by the November 22 date.

And then, after a bit of a tidy up, we all moved on to the next project, a bit older and a bit wiser. I hope that helps you if you’re in a similar scenario. Send me a tweet or email me at liam.nugent@hey.com with any questions or comments. I’d love to hear about your techniques and advice.

The Future of Web Software Is HTML-over-WebSockets

The future of web-based software architectures is already taking form, and this time it’s server-rendered (again). Papa’s got a brand new bag: HTML-over-WebSockets and broadcast everything all the time.

The dual approach of marrying a Single Page App with an API service has left many dev teams mired in endless JSON wrangling and state discrepancy bugs across two layers. This costs dev time, slows release cycles, and saps the bandwidth for innovation.

But a new WebSockets-driven approach is catching web developers’ attention. One that reaffirms the promises of classic server-rendered frameworks: fast prototyping, server-side state management, solid rendering performance, rapid feature development, and straightforward SEO. One that enables multi-user collaboration and reactive, responsive designs without building two separate apps. The end result is a single-repo application that feels to users just as responsive as a client-side all-JavaScript affair, but with straightforward templating and far fewer loading spinners, and no state misalignments, since state only lives in one place. All of this sets us up for a considerably easier (and faster!) development path. 

Reclaiming all of that time spent addressing architecture difficulties grants you a pool of surplus hours that you can use to do awesome things. Spend your dev budget, and your company’s salary budget, happily building full-stack features yourself, and innovating on things that benefit your company and customers.

And in my opinion, there’s no better app framework for reclaiming tedious development time than Ruby on Rails. Take another look at the underappreciated Stimulus. Beef up the View in your MVC with ViewComponents. Add in the CableReady and StimulusReflex libraries for that Reactive Rails (as it has been dubbed) new car smell, and you’re off to the races. But we’ll get back to Rails in a bit...

This all started with web frameworks...

Web frameworks burst onto the scene around 2005 amidst a sea of mostly figure-it-out-for-yourself scripting language libraries glued together and thrown onto hand-maintained Apache servers. This new architecture promised developers a more holistic approach that wrapped up all the fiddly stuff in no-touch conventions, freeing developers to focus on programming ergonomics, code readability, and fast-to-market features. All a developer had to do was learn the framework’s core language, get up to speed on the framework itself and its conventions, and then start churning out sophisticated web apps while their friends were still writing XML configuration files for all those other approaches.

Despite the early criticisms that always plague new approaches, these server-rendered frameworks became tools of choice, especially for fast-moving startups—strapped for resources—that needed an attractive, feature-rich app up yesterday.

But then the JavaScript everything notion took hold...

As the web development world pushed deeper into the 2010s, the tides began to turn, and server-rendered frameworks took something of a backseat to the Single Page Application, wholly built in JavaScript and run entirely on the client’s computer. At many companies, the “server” became relegated to hosting an API data service only, with most of the business logic and all of the HTML rendering happening on the client, courtesy of the big ol’ package of JavaScript that visitors were forced to download when they first hit the site.

This is where things started to get ugly.

Fast-forward to 2020 and the web isn’t getting any faster, as we were promised it would with SPAs. Shoving megabytes of JavaScript down an iPhone 4’s throat doesn’t make for a great user experience. And if you thought building a professional web app took serious resources, what about building a web app and an API service and a communication layer between them? Do we really believe that every one of our users is going to have a device capable of digesting 100 kB of JSON and rendering a complicated HTML table faster than a server-side app could on even a mid-grade server?

Developing and hosting these JavaScript-forward apps didn’t get any cheaper either. In many cases we’re now doing twice the work, and maybe even paying twice the developers, to achieve the same results we had before with server-side app development.

In 2005, app frameworks blew everyone’s minds with “build a blog app in 15 minutes” videos. Fifteen years later, doing the same thing with an SPA approach can require two codebases, a JSON serialization layer, and dozens of spinners all over the place so we can still claim a 50ms First Contentful Paint. Meanwhile, the user watches some blank gray boxes, hoping for HTML to finally render from all the JSON their browser is requesting and digesting. 

How did we get here? This is not my beautiful house! Were we smart in giving up all of that server-rendered developer happiness and doubling down on staff and the time to implement in order to chase the promise of providing our users some fancier user interfaces?

Well. Yes. Sort of.

We’re not building web software for us. We’re building it for them. The users of our software have expectations of how it’s going to work for them. We have to meet them where they are. Our users are no longer excited about full-page refreshes and ugly Rube Goldberg-ian multi-form workflows. The SPA approach was the next logical leap from piles of unorganized spaghetti JavaScript living on the server. The problem, though: it was a 5% improvement, not a 500% improvement. 

Is 5% better worth twice the work? What about the developer cost?

Bedazzling the web app certainly makes things fancier from the user’s perspective. Done well, it can make the app feel slicker and more interactive, and it opens up a wealth of new non-native interaction elements. Canonizing those elements as components was the next natural evolution. Gone are the days of thinking through how an entire HTML document could be mutated to give the illusion of the user interacting with an atomic widget on the page—now, that can be implemented directly, and we can think about our UX in terms of component breakdowns. But, alas, the costs begin to bite us almost immediately.

Go ahead, write that slick little rating stars component. Add some cool animations, make the mouseover and click area feel good, give some endorphin-generating feedback when a selection is made. But now what? In a real app, we need to persist that change, right? The database has to be changed to reflect this new state, and the app in front of the user’s eyes needs to reflect that new reality too. 

In the old days, we’d give the user a couple star GIFs, each a link that hit the same server endpoint with a different param value. Server-side, we’d save that change to the database, then send back a whole new HTML page for their browser to re-render; maybe we’d even get fancy and use AJAX to do it behind the scenes, obviating the need for the full HTML and render. Let’s say the former costs x in developer time and salary (and we won’t even talk about lost opportunity cost for features rolled out too late for the market). In that case, the fancy AJAX-based approach costs x + n (you know, some “extra JavaScript sprinkles”), but the cost of lots and lots of n grows as our app becomes more and more of a JavaScript spaghetti sprinkles mess.

Over in the SPA world, we’re now writing JavaScript in the client-side app and using JSX or Handlebars templates to render the component, then code to persist that change to the front-end data store, then a PUT request to the API, where we’re also writing an API endpoint to handle the request, a JSON serializer (probably with its own pseudo-template) to package up our successful response, and then front-end code to ensure we re-render the component (and some branching logic to maybe rollback and re-render the client-side state change if the backend failed on us). This costs a lot more than even x + n in developer time and salary. And if you’ve split your team into “front-end” and “back-end” people, you might as well go ahead and double that cost (both time and money) for many non-trivial components where you need two different people to finish the implementation. Sure, the SPA mitigates some of the ever-growing spaghetti problem, but at what cost for a business racing to be relevant in the market or get something important out to the people who need it?

One of the other arguments we hear in support of the SPA is the reduction in cost of cyber infrastructure. As if pushing that hosting burden onto the client (without their consent, for the most part, but that’s another topic) is somehow saving us on our cloud bills. But that’s ridiculous. For any non-trivial application, you’re still paying for a server to host the API and maybe another for the database, not to mention load balancers, DNS, etc. And here’s the thing: none of that cost even comes close to what a software company pays its developers! Seriously, think about it. I’ve yet to work at any business where our technical infrastructure was anything more than a fraction of our salary overhead. And good developers expect raises. Cloud servers generally just get cheaper over time.

If you want to be efficient with your money—especially as a cash-strapped startup—you don’t need to cheap out on cloud servers; you need to get more features faster out of your existing high-performance team.

In the old, old days, before the web frameworks, you’d pay a developer for six weeks to finally unveil…the log-in page. Cue the sad trombone. Then frameworks made that log-in page an hour of work, total, and people were launching web startups overnight. The trumpets sound! Now, with our SPA approach, we’re back to a bunch of extra work. It’s costing us more money because we’re writing two apps at once. There’s that trombone again...

We’re paying a lot of money for that 5% user experience improvement.

But what if we could take the best client-side JavaScript ideas and libraries from that 5% improvement and reconnect them with the developer ergonomics and salary savings of a single codebase? What if components and organized JavaScript could all live in one rock-solid app framework optimized for server-side rendering? What if there is a path to a 500% jump?

Sound impossible? It’s not. I’ve seen it, like C-beams glittering in the dark near the Tannhäuser Gate. I’ve built that 500% app, in my free time, with my kids running around behind me barking like dogs. Push broadcasts to logged-in users. Instant updates to the client-side DOM in milliseconds. JavaScript-driven 3D animations that interact with real-time chat windows. All in a single codebase, running on the same server hardware I’d use for a “classic” server-rendered app (and maybe I can even scale that hardware down since I’m rendering HTML fragments more often than full-page documents). No separate front-end app. Clean, componentized JavaScript and server-side code, married like peanut butter and jelly. It’s real, I tell you!

Socket to me! (Get it? Get it? Ah, nevermind...)

The WebSocket protocol was finalized in 2011, and browser support ramped up throughout the 2010s; today, WebSockets are fully supported in all modern browsers. With the help of a small bit of client-side JavaScript, you get a full-duplex socket connection between browser and server. Data can pass both ways, and can be pushed from either side at any time, no user-initiated request needed.
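
That small bit of client-side JavaScript can be as simple as this (the endpoint URL is illustrative):

// Open a persistent, full-duplex connection to the server
const socket = new WebSocket("wss://example.com/socket");

socket.addEventListener("open", () => {
  socket.send("Hello from the browser!");
});

// The server can push messages at any time, no request required
socket.addEventListener("message", event => {
  console.log("Server says:", event.data);
});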

Like the game industry’s ever-expanding moves into cloud-based gaming, the future of web apps is not going to be about pushing even heavier obligations onto the user/client, but rather the opposite: let the client act as a thin terminal that renders the state of things for the human. WebSockets provide the communication layer, seamless and fast; a direct shot from the server to the human.

But this wasn’t terribly easy for many developers to grok at first. I sure didn’t. And the benefits weren’t exactly clear either. After years (decades, even) of wrapping our heads around the HTTP request cycle, to which all server-handled features must conform, adopting this WebSocket tech layer required a lot of head scratching. As with many clever new technologies or protocols, we needed a higher-level abstraction that provided something really effective for getting a new feature in front of a user, fast.

Enter HTML-over-WebSockets...

Want a hyper-responsive datalist typeahead that is perfectly synced with the database? On every keystroke, send a query down the WebSocket and get back precisely the changed set of option tags, nothing more, nothing less.
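
A sketch of the client-side half of that typeahead, with illustrative element IDs and endpoint:

const input = document.querySelector("#search");
const options = document.querySelector("#search-options");
const socket = new WebSocket("wss://example.com/socket");

// Send each keystroke down the socket as plain text
input.addEventListener("input", () => {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(input.value);
  }
});

// The server answers with exactly the changed set of option tags
socket.addEventListener("message", event => {
  options.innerHTML = event.data;
});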

How about client-side validations? Easy. On every input change, round up the form values and send ’em down the WebSocket. Let your server framework validate and send back changes to the HTML of the form, including any errors that need to be rendered. No need for JSON or complicated error objects.

User presence indicators? Dead simple. Just check who has an active socket connection.

What about multi-user chat? Or document collaboration? In classic frameworks and SPAs, these are the features we put off because of their difficulty and the code acrobatics needed to keep everyone’s states aligned. With HTML-over-the-wire, we’re just pushing tiny bits of HTML based on one user’s changes to every other user currently subscribed to the channel. They’ll see exactly the same thing as if they hit refresh and asked the server for the entire HTML page anew. And you can get those bits to every user in under 30ms.
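
And the receiving end of that broadcast needs almost nothing, as this sketch (with illustrative selectors and endpoint) suggests:

const chatLog = document.querySelector("#chat-log");
const socket = new WebSocket("wss://example.com/chat");

// Each broadcast is a tiny bit of HTML; append it and everyone's in sync
socket.addEventListener("message", event => {
  chatLog.insertAdjacentHTML("beforeend", event.data);
});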

We’re not throwing away the promise of components either. Where this WebSockets-based approach can be seen as a thick server/thin client, so too can our components. It’s fractal, baby! Make that component do delightful things for the user with smart JavaScript, and then just ask the server for updated HTML, and mutate the DOM. No need for a client-side data store to manage the component’s state since it’ll render itself to look exactly like what the server knows it should look like now. The HTML comes from the server, so no need for JSX or Handlebars or <insert other JavaScript templating library here>. The server is always in control: rendering the initial component’s appearance and updating it in response to any state change, all through the socket. 

And there’s nothing saying you have to use those socket channels to send only HTML. Send a tiny bit of text, and have the client do something smart. Send a chat message from one user to every other user, and have their individual clients render that message in whatever app theme they’re currently using. Imagine the possibilities!

But it’s complex/expensive/requires a bunch of new infrastructure, right?

Nope. Prominent open-source web servers support it natively, generally without needing any kind of extra configuration or setup. Many server-side frameworks will automatically ship the JS code to the client for native support in communicating over the socket. In Rails, for example, setting up your app to use WebSockets is as easy as configuring the built-in ActionCable and then deploying as usual on the same hardware you would have used otherwise. Anecdotally, the typical single Rails server process seems to be perfectly happy supporting nearly 4,000 active connections. And you can easily swap in the excellent AnyCable to bump that up to around 10,000+ connections per node by not relying on the built-in Ruby WebSocket server. Again, this is on the usual hardware you’d be running your web server on in the first place. You don’t need to set up any extra hardware or increase your cloud infrastructure.

This new approach is quickly appearing as extensions, libraries, or alternative configurations in a variety of languages and web frameworks, from Django’s Sockpuppet to Phoenix’s LiveView and beyond. Seriously, go dig around for WebSockets-based libraries for your favorite app framework and then step into a new way of thinking about your app architectures. Build something amazing and marvel at the glorious HTML bits zipping along on the socket, like jet fighters passing in the night. It’s more than a new technical approach; it’s a new mindset, and maybe even a new wellspring of key app features that will drive your startup success.

But I’d be remiss if I didn’t highlight for the reader my contender for Best Framework in a Leading Role. Sure, any app framework can adopt this approach, but for my money, there’s a strong case to be made that the vanguard could be Ruby on Rails. 

So we come back around to Rails, 15 years on from its launch...

Set up a Rails 6 app with the latest versions of Turbolinks, Stimulus, StimulusReflex, CableReady, and GitHub’s ViewComponent gem, and you can be working with Reactive Rails in a way that simultaneously feels like building a classic Rails app and like building a modern, componentized SPA, in a single codebase, with all the benefits of server-side rendering, HTML fragment caching, easy SEO, rock-solid security, and the like. You’ll suddenly find your toolbelt bursting with straightforward tools to solve previously daunting challenges.
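
A hypothetical Gemfile for that stack might start like this (StimulusReflex pulls in CableReady as a dependency, and Stimulus itself lands on the JavaScript side via your package manager):

    # Gemfile (hypothetical starting point)
    gem "rails", "~> 6.1"
    gem "turbolinks"
    gem "stimulus_reflex"
    gem "view_component"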

Oh, and with Turbolinks, you also get wrappers allowing for hybrid native/HTML UIs in the same codebase. Use a quick deploy solution like Heroku or Hatchbox, and one developer can build a responsive, reactive, multi-platform app in their spare time. Just see this Twitter clone if you don’t believe me. 

OK, that all sounds exciting, but why Rails specifically? Isn’t it old and boring? You already said any framework can benefit from this new WebSocket, DOM-morphing approach, right? 

Sure. But where Rails has always shined is in its ability to make rapid prototyping, well…rapid, and in its deep ecosystem of well-polished gems. Rails also hasn’t stopped pushing the envelope forward, with the latest version 6.1.3 of the framework boasting a ton of cool features. 

If you’ve got a small, resource-strapped team, Rails (and Ruby outside of the framework) still serves as a potent force multiplier that lets you punch way above your weight, which probably explains the $92 billion in revenue it’s helped drive over the years. With this new approach, there’s a ton more weight behind that punch. While your competitors are fiddling with their JSON serializers and struggling to optimize away all the loading spinners, you’re rolling out a new multi-user collaborative feature every week…or every day.

You win. Your fellow developers win. Your business wins. And, most importantly, your users win.

That’s what Rails promised from the day it was released to the community. That’s why Rails spawned so many imitators in other languages, and why it saw such explosive growth in the startup world for years. And that same old rapid prototyping spirit, married to this new HTML-over-the-wire approach, positions Rails for a powerful resurgence. 

Ruby luminary and author of The Ruby Way, Obie Fernandez, seems to think so.

Heck, even Russ Hanneman thinks this approach with StimulusReflex is the new hotness.

And the good folks over at Basecamp (creators of Rails in the first place) dropped their own take on the concept, Hotwire, just in time for the 2020 holidays, so your options for tackling this new and exciting technique continue to expand.

Don’t call it a comeback, because Rails has been here for years. With this new architectural approach, brimming with HTML-over-WebSockets and full-duplex JavaScript interactions, Rails becomes something new, something beautiful, something that demands attention (again). 

Reactive Rails, with StimulusReflex and friends, is a must-look for anyone exhausted from toiling with JSON endpoints or JSX, and I’m super excited to see the new crop of apps that it enables.

Designing Inclusive Content Models

In the 1920s, Robert Moses designed a system of parkways surrounding New York City. His designs, which included overpasses too low for public buses, have become an often-cited example of exclusionary design and are argued by biographer Robert A. Caro to represent a purposeful barrier between the city’s Black and Puerto Rican residents and nearby beaches. 

Regardless of the details of Moses’s parkway project, it’s a particularly memorable reminder of the political power of design and the ways that choices can exclude various groups based on abilities and resources. The growing interest in inclusive design highlights questions of who can participate, and in relation to the web, this has often meant a focus on accessibility and user experience, as well as on questions related to team diversity and governance. 

But principles of inclusive design should also play a role early in the design and development process, during content modeling. Modeling defines what content objects consist of and, by extension, who will be able to create them. So if web professionals are interested in inclusion, we need to go beyond asking who can access content and also think about how the design of content can install barriers that make it difficult for some people to participate in creation. 

Currently, content models are primarily seen as mirrors that reflect inherent structures in the world. But if the world is biased or exclusionary, this means our content models will be too. Instead, we need to approach content modeling as an opportunity to filter out harmful structures and create systems in which more people can participate in making the web. Content models designed for inclusivity welcome a variety of voices and can ultimately increase products’ diversity and reach.

Content models as mirrors

Content models are tools for describing the objects that will make up a project, their attributes, and the possible relations between them. A content model for an art museum, for example, would typically describe, among other things, artists (including attributes such as name, nationality, and perhaps styles or schools), and artists could then be associated with artworks, exhibitions, etc. (The content model would also likely include objects like blog posts, but in this article we’re interested in how we model and represent objects that are “out there” in the real world, rather than content objects like articles and quizzes that live natively on websites and in apps.)
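
Sketched in ActiveRecord style (the class and attribute names are hypothetical), that museum model might look like this:

    class Artist < ApplicationRecord
      # attributes: name, nationality, school
      has_many :artworks
      has_and_belongs_to_many :exhibitions
    end

    class Artwork < ApplicationRecord
      belongs_to :artist
    end

    class Exhibition < ApplicationRecord
      has_and_belongs_to_many :artists
    end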

The common wisdom when designing content models is to go out and research the project’s subject domain by talking with subject matter experts and project stakeholders. As Mike Atherton and Carrie Hane describe the process in Designing Connected Content, talking with the people who know the most about a subject domain (like art in the museum example above) helps to reveal an “inherent” structure, and discovering or revealing that structure ensures that your content is complete and comprehensible.

Additional research might go on to investigate how a project’s end users understand a domain, but Atherton and Hane describe this stage as mostly about terminology and level of detail. End users might use a different word than experts do or care less about the nuanced distinctions between Fauvism and neo-Expressionism, but ultimately, everybody is talking about the same thing. A good content model is just a mirror that reflects the structure you find.  

Cracks in the mirrors

The mirror approach works well in many cases, but there are times when the structures that subject matter experts perceive as inherent are actually the products of biased systems that quietly exclude. Like machine learning algorithms trained on past school admissions or hiring decisions, existing structures tend to work for some people and harm others. Rather than recreating these structures, content modelers should consider ways to improve them. 

A basic example is LinkedIn’s choice to require users to specify a company when creating a new work experience. Modeling experience in this way is obvious to HR managers, recruiters, and most people who participate in conventional career paths, but it assumes that valuable experience is only obtained through companies, and could potentially discourage people from entering other types of experiences that would allow them to represent alternative career paths and shape their own stories.

Figure 1. LinkedIn’s current model for experience includes Company as a required attribute.

These kinds of mismatches between required content attributes and people’s experiences either create explicit barriers (“I can’t participate because I don’t know how to fill in this field”) or increase the labor required to participate (“It’s not obvious what I should put here, so I’ll have to spend time thinking of a workaround”). 

Setting fields that might not apply to everyone as optional is one inclusive solution, as is increasing the available options for responses requiring a selection. However, while gender-inclusive choices provide a more inclusive way to handle form inputs, it’s also worth considering whether business objectives would be met just as well by providing open text inputs that allow users to describe themselves in their own terms. 
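
In model terms, the inclusive variant is often a one-line change. A hypothetical sketch (Experience and Organization are invented names):

    class Experience < ApplicationRecord
      validates :title, presence: true
      # Optional, so self-directed and community work fit the model too.
      belongs_to :organization, optional: true
      # The description field is open text, letting people frame the
      # experience in their own terms.
    end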

Instead of LinkedIn’s highly prescribed content, for example, Twitter bios’ lack of structure lets people describe themselves in more inclusive ways. Some people use the space to list formal credentials, while others provide alternate forms of identification (e.g., mother, cyclist, or coffee enthusiast) or jokes. Because the content is unstructured, there are fewer expectations about its use, taking pressure off those who don’t have formal credentials and giving more flexibility to those who do. 

Browsing the Twitter bios of designers, for example, reveals a range of identification strategies, from listing credentials and affiliations to providing broad descriptions. 

Figure 2. Veerle Pieters’s Twitter bio uses credentials, affiliations, and personal interests. 
Figure 3. Jason Santa Maria’s Twitter bio uses a broad description. 
Figure 4. Erik Spiekermann’s Twitter bio uses a single word.

In addition to considering where structured content might exclude, content modelers should also consider how length guidelines can implicitly create barriers for content creators. In the following section, we look at a project in which we chose to reduce the length of contributor bios as a way to ensure that our content model didn’t leave anyone out. 

Live in America

Live in America is a performing arts festival scheduled to take place in October 2021 in Bentonville, Arkansas. The goal of the project is to survey the diversity of live performance from across the United States, its territories, and Mexico, and bring together groups of artists that represent distinct local traditions. Groups of performers will come from Alabama, Las Vegas, Detroit, and the border city of El Paso–Juárez. Indigenous performers from Albuquerque are scheduled to put on a queer powwow. Performers from Puerto Rico will organize a cabaret. 

An important part of the festival’s mission is that many of the performers involved aren’t integrated into the world of large art institutions, with their substantial fiscal resources and social connections. Indeed, the project’s purpose is to locate and showcase examples of live performance that fly under curators’ radars and that, as a result of their lack of exposure, reveal what makes different communities truly unique. 

As we began to think about content modeling for the festival’s website, these goals had two immediate consequences:

First, the idea of exploring the subject domain of live performance doesn’t exactly work for this project because the experts we might have approached would have told us about a version of the performing arts world that festival organizers were specifically trying to avoid. Experts’ mental models of performers, for example, might include attributes like residencies, fellowships and grants, curricula vitae and awards, artist statements and long, detailed bios. All of these attributes might be perceived as inherent or natural within one homogeneous community—but outside that community they’re not only a sign of misalignment, they represent barriers to participation.

Second, the purposeful diversity of festival participants meant that locating a shared mental model wasn’t the goal. Festival organizers want to preserve the diversity of the communities involved, not bring them all together or show how they’re the same. It’s important that people in Las Vegas think about performance differently than people in Alabama and that they structure their projects and working relationships in distinct ways. 

Content modeling for Live in America involved defining what a community is, what a project is, and how these are related. But one of the most interesting challenges we faced was how to model a person—what attributes would stand in for the people that would make the event possible. 

It was important that we model participants in a way that preserved and highlighted diversity and also in a way that included everyone—that let everyone take part in their own way and that didn’t overburden some people or ask them to experience undue anxiety or perform extra work to make themselves fit within a model of performance that didn’t match their own. 

Designing an inclusive content model for Live in America meant thinking hard about what a bio would look like. Some participants come from the institutionalized art world, where bios are long and detailed and often engage in intricate and esoteric forms of credentialing. Other participants create art but don’t have the same resources. Others are just people who were chosen to speak for and about their communities: writers, chefs, teachers, and musicians. 

The point of the project is to highlight both performance that has not been recognized and the people who have not been recognized for making it. Asking for a written form that has historically been built around institutional recognition would only highlight the hierarchies that festival organizers want to leave behind.

The first time we brought up the idea of limiting bios to five words, our immediate response was, “Can we get away with that?” Would some artists balk at not being allowed the space to list their awards? It’s a ridiculously simple idea, but it also gets at the heart of content modeling: what are the things and how do we describe them? What are the formats and limitations that we put on the content that would be submitted to us? What are we asking of the people who will write the content? How can we configure the rules so that everyone can participate?
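
If you did want to enforce such a rule at the model layer, a hypothetical sketch could be as small as this:

    class Participant < ApplicationRecord
      validate :bio_is_five_words_or_fewer

      private

      def bio_is_five_words_or_fewer
        errors.add(:bio, "must be five words or fewer") if bio.to_s.split.size > 5
      end
    end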

Five-word bios place everyone on the same ground. They ask everyone to create something new but also manageable. They’re comparable. They set well-known artists next to small-town poets, and let them play together. They let in diverse languages, but keep out the historical structures that set people apart. They’re also fun:

  • Byron F. Aspaas of Albuquerque is “Diné. Táchii'nii nishłį́ Tódichii'nii bashishchiin.”
  • Danny R.W. Baskin of Northwest Arkansas is “Baroque AF but eating well.”
  • Brandi Dobney of New Orleans is “Small boobs, big dreams.”
  • Imani Mixon of Detroit is “best dresser, dream catcher, storyteller.”
  • Erika P. Rodríguez of Puerto Rico is “Anti-Colonialist Photographer. Caribeña. ♡ Ice Cream.”
  • David Dorado Romo of El Paso–Juárez is “Fronterizo historian wordsmith saxophonist glossolalian.”
  • Mikayla Whitmore of Las Vegas is “hold the mayo, thank you.”
  • Mary Zeno of Alabama is “a down home folk poet.”

Modeling for inclusion

We tend to think of inclusive design in terms of removing barriers to access, but content modeling also has an important role to play in ensuring that the web is a place where there are fewer barriers to creating content, especially for people with diverse and underrepresented backgrounds. This might involve rethinking the use of structured content or asking how length guidelines might create burdens for some people. But regardless of the tactics, designing inclusive content models begins by acknowledging the political work that these models perform and asking whom they include or exclude from participation. 

All modeling is, after all, the creation of a world. Modelers establish what things exist and how they relate to each other. They make some things impossible and others so difficult that they might as well be. They let some people in and keep others out. Like overpasses that prevent public buses from reaching the beach, exclusionary models can quietly shape the landscape of the web, exacerbating the existing lack of diversity and making it harder for those who are already underrepresented to gain entry.

As discussions of inclusive design continue to gain momentum, content modeling should play a role precisely because of the world-building that is core to the process. If we’re building worlds, we should build worlds that let in as many people as possible. To do this, our discussions of content modeling need to include an expanded range of metaphors that go beyond just mirroring what we find in the world. We should also, when needed, filter out structures that are harmful or exclusionary. We should create spaces that ask the same of everyone and that use the generativity of everyone’s responses to create web products that emerge out of more diverse voices.

The Never-Ending Job of Selling Design Systems

I’m willing to bet that you probably didn’t start your web career because you wanted to be a politician or a salesperson. But here’s the cold, hard truth, friend: if you want to work on design systems, you don’t have a choice. Someone has to pay for your time, and that means someone has to sell what you do to an audience that speaks value in an entirely different language. 

It’s not exactly easy to connect the benefits of a design system directly to revenue. With an ecomm site, you can add a feature and measure the impact. With other conversion-based digital experiences, if your work is good, your customers will convert more. But because a design system is (usually) an internal tool, it’s just harder to connect those dots. 

This article boils down the methods I’ve put into practice convincing executives not just to fund the initial push of design system work, but to keep funding it. I’ll share how I’ve adjusted the language I use to describe common design system benefits, allowing me to more clearly communicate with decision makers.

Know your audience

In my experience, design systems can be owned by information technology teams, marketing and communications departments, or (best case scenario) cross-disciplinary teams that bring many specialists together. The first thing you need to do is determine where the system lives, as in which department owns and cares for it. 

If it’s part of IT, for example, you need to think like a CIO or an IT Director and speak to their objectives and values. These leaders are typically more internally focused; they’ll filter the value of the design system in terms of the employees of the company. In contrast, if the system belongs to Marketing, put on your CMO or Marketing Director hat. Marketing teams are often externally focused; they think in terms of B2B audiences and end users. 

The way organizations structure the ownership of a design system can be more complex, but let’s use these two paths (internal vs external) as frameworks for building a persuasive case for those owners.

Internal-orientation motivators

Based on the research we’ve done since 2018, there are three very specific internal motivators for having a design system:

  • Efficiency
  • Onboarding
  • Scale.

Efficiency benefit

Design systems allow for the rapid prototyping of new ideas using existing, production-ready components. They allow teams to reuse design and code, and they allow individuals to focus their creative energy on new problems instead of wasting it on old ones. Executives and decision-makers may abstractly understand all that, but you need to be able to tell them what it will take to realize the efficiency benefit. 

There’s a theoretical maximum to how productive a team can be. When you talk about a design system creating more efficiency in your processes, you’re really talking about raising the ceiling on that max. As happens with so many things in life, though, that comes with a trade-off. Early on, while a team is actually building the system, they won’t be as productive on the rest of their work.

The efficiency curve looks like this:

Figure 1. With Productivity on the y-axis and Time on the x-axis, the Design System Efficiency Curve dips down at the start as the team ramps up on the system, but eventually surpasses standard productivity once the system is in place.

If you’re talking to an executive, it’s important to acknowledge this dip in productivity. 

Spend some time working out these specific calculations for your organization. For example, you might need four team members for three months to reach a point where the system will save everyone on the team approximately two hours per week. You’re candidly acknowledging the necessary investment while demonstrating the eventual benefits. And make sure to mention that the productivity benefits will continue indefinitely! The math will almost always end up on your side. 
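
A back-of-the-envelope version of that pitch, with every number invented for illustration:

    build_cost_hours = 4 * 12 * 40     # 4 people, ~12 weeks, 40 hrs/week: 1,920 hours
    weekly_savings   = 20 * 2          # 20 teammates saving 2 hrs/week: 40 hours
    build_cost_hours / weekly_savings  # => 48 weeks to break even

Under those assumptions, the system pays for itself in under a year, and every week after that is pure gain.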

Another critical point to raise is that simply having a design system has a cumulative effect on the efficiency of your teams. Since the system is an internal tool that can be used 1) across multiple products or experiences, 2) by many teams throughout the organization, and 3) in many phases of the product design and development process, you are gaining efficiencies on many levels. 

The team working on in-store kiosks can build their interface with a well-tested set of components. Your UX people can use the system to prototype and test with production-ready code. The people responsible for grooming the backlog know there is a stable pattern library upon which they are building new features or fixing old ones. Anyone looking for answers to what, why, or how your organization designs and builds products will find those answers in the living system.

The efficiency at each of these (and many other) decision points is how we can raise the ceiling on our total possible efficiency. How this plays out is very different in each organization. I’m here to tell you that part of the work is thinking about how a design system will impact every part of your process—not just design or development.

What to measure

Action: Measure the cost of productivity with and without a design system.

If you aren’t already, start measuring how productive your team is now. The easiest way to do this is to break your team’s work down into measurable cycles. Once you have a rough idea of how much you can get done in a cycle of work, you’ll be able to compare your efficiency before the system was in place with your efficiency after. This kind of measurable benefit will speak volumes to your executive team.

Onboarding benefits

Growth is expensive. When you hire a new team member, you don’t just supply a salary and benefits. You need a computer, a desk, a chair, accounts to all the software/services…the list goes on. And all these expenses hit before your new employee is a fully contributing member of the team. You won’t start to recoup your investment for a few months, at least. 

Design systems can reduce the time it takes your new hire to become a productive contributor. Once you have a healthy design system in place, you’re able to provide an employee with a clearly defined and effective toolset that is well-documented and can be applied across multiple initiatives. More specifically, assigning new hires to start out working on the design system team will allow them to quickly learn how your organization designs and builds digital products.

Figure 2. A Model for Onboarding. As you bring people into your organization from your hiring pool, consider having them start on your design system team and then rotate out onto other teams. As you grow, folks who haven’t had a turn on the system team can rotate in as well.

On the left in Fig. 2, you have a pool of potential employees. As you hire individuals, you can bring them into the design system team, where they’ll gain a deep understanding of how your organization builds digital products. Once they’re up to speed, you can seamlessly move them to another product, discipline, or feature-based team where they’ll take this knowledge and hit the ground running. Additionally, your organization can benefit from having all team members (even those who have been around for a while) periodically work a rotation with the design system team. This continuously spreads the design system expertise around the organization and makes it part of the fabric of how you work.

And don’t think this approach is only valuable for designers or developers. A healthy design system team comprises people from many disciplines. In addition to team member rotation, building in time to mentor folks from many different disciplines can prove tremendously valuable in the long run. A highly functional design system team can serve as an ideal model of workflow and can educate many team members dispersed throughout the organization about how to approach their work.

Believe me, executives’ eyes will light up when you share how a design system can ensure high productivity in record time. As a caution, though, rotating people in and out of any team too often can leave them feeling exhausted and can make it hard for them to be productive. Remember, you have the flexibility to scale this to a level that makes sense for your team. Be smart and use this approach as it works in your context.

What to measure

Action: Measure the time it takes for teams to become productive.

As new people are added, a team typically returns to the “forming” stage of Tuckman’s stages of group development. This is part of the reason that growth is expensive. But with a design system in place and a healthy culture, you can reduce the time it takes the team to get back to “performing.”

Scale benefits

Traditionally, you have to hire more people to scale productivity. A design system enables a team to accomplish more with less. Reusability is a major reason teams choose to work in a more systematic way. Small teams with an effective system can design, build, and maintain hundreds of sites each year. They’d never come close without a design system to work with. 

UXPin has a design system guide that starts by acknowledging something that most of us ignore:

Scaling design through hiring, without putting standards in place, is a myth. With every new hire, new ideas for color palettes, typography and patterns appear in the product, growing the inconsistency and increasing the maintenance cost. Every new hire increases the design entropy.

A well-executed system allows a team to scale while keeping design entropy at bay.

What to measure

Action: Compare the number of people on your team to the amount of work they are accomplishing.

Adding people to a team doesn’t necessarily mean they’ll get more work done faster. This is well documented in classics of software literature like Fred Brooks’s The Mythical Man-Month. Eventually, you will have to investigate changing other factors (besides just adding more people) to increase productivity. A good design system can be one of those factors, increasing the productivity of the team members you already have. It’s this change in productivity over scale that you need to measure and compare in order to prove value for this benefit.

External-orientation motivators

Let’s shift to thinking about the benefits that a design system offers to end-users. The four primary external motivators are:

  • Consistency
  • Trust
  • Accessibility
  • Usability.

Consistency and Trust benefits

Consistency is widely assumed to be the primary benefit of a design system. We identify dozens of button designs, color variations, and inconsistent typefaces in hopes of convincing higher-ups to allow us to build a system to bring it all in line. After working on design systems for the last five or six years, I can say with confidence that a design system will not make your product more consistent. 

You see, we web designers and developers are very scrappy. We can create the most inconsistent experiences within even the most rigid systems. It’s not the system itself that creates consistency; it’s the culture of an organization. It’s all of the unspoken expectations—the filters through which we make decisions—that give us the confidence to pause and ask if the work we’re doing fits culturally with the product we’re building. A good CMO knows this, and they won’t buy the oversimplified idea that a design system will solve the rampant inconsistencies in our work. 

Because of this, these executives often have a different (and easier to measure) question: “Does it convert?” This perspective and line of conversation is not an ideal approach. Believe me, we can create experiences that convert but are not good for our users or our brands. Given this, a conversation with your CMO might go better if you shift the language to talk about trust instead.

With inconsistent experiences, your users subconsciously lose trust in your brand. They’ve been conditioned to expect a certain kind of user experience, and that’s what they should be given, even across multiple websites or products. Vanessa Mitchell wrote about why brand trust is more vital to survival now than it’s ever been:

“Brand trust as an ‘insurance policy’ against future issues is not a new concept. Most organizations know trust bestowed by the consumer can not only make or break a business, it can also ensure you survive a problem in the future. But few achieve brand trust adequately, preferring to pay lip service rather than delve into what it really means: Authentically caring about customers and their needs.”

When your customer is using your product to accomplish a very specific task, that one task is the only thing that matters to them. Creating a consistent experience that works for everyone and allows them to accomplish their goals is building trust. CMOs need to understand how design systems empower trusted relationships so those relationships contribute to your bottom line.

What to measure

Action: Measure the engagement of your customers.

Customer engagement can be measured with web analytics platforms. What you’re looking for will vary depending on the context for your organization, but trends in things like time on site, visit frequency, subscription rates, and bounce rates will give you meaningful data to work with. It’s also very common to track customer engagement with metrics like Net Promoter Score (NPS) by asking simple questions of customers repeatedly over time. There are so many ways to structure tests of the usability of your work, so I’d encourage you to loop in the UX team to help you find tests that will demonstrate the user engagement success of the design system effort.

Accessibility benefits

Accessibility can be a tremendous benefit of a design system. Do the work properly the first time, then allow that beautifully accessible component to serve your customers each time it is used. Certainly, it’s not a fail-safe measure—there is still integration-level testing to ensure component accessibility translates to the larger experience—but ensuring the accessibility of individual components will result in more accessible experiences. And integrating good accessibility practices into your system means more folks within your organization are aligned with this important work. 

You might find at first that marketers aren’t all that interested in accessibility, but they should be. Did you know that there were 814 web-accessibility-related lawsuits (just in the US!) in 2017? Did you know that there were almost 2,300 in 2018? That’s a 181% increase. This must be a priority. First, because it’s the right thing to do. Second, because it’s important to the sustainability of the business. A design system can help you address this issue, and it can help you maintain compliance as you grow. This is the kind of message that resonates with leadership.

What to Measure

Action: Measure your compliance with accessibility guidelines over time.

Many organizations have a regular cadence of accessibility audits across their digital properties. While some of this can be automated, there’s always a manual aspect needed to truly evaluate the accessibility of a site or application. Tracking how often regressions occur in the properties served by your design system can be a great way to demonstrate the value that system is bringing to the organization.

Usability benefits

As with so many aspects of a design system, usability benefits come from repetition. Design system pros often hope to focus energy on solving a usability challenge only once before moving on to the next problem. This absolutely is a benefit of a well-constructed system. It’s also very true that “familiarity breeds usability.” Your customers will learn to use your products and begin to subconsciously rely on that familiarity with the experience to lower their cognitive load. This should be just as important to our executive leadership as it is to those of us who are practitioners. 

You can also reframe this benefit in the context of conversion. Helping our users accomplish their goals is helping them convert. They are there to use your product. So make it easy to do, and they’ll do it more. This is what businesses need and what executives want to see—improving the business by helping customers. As mentioned above, we want to make sure we’re doing this in healthy ways for both our users and our brands.

What to Measure

Action: This might be the easiest one—measure conversion!

Running usability studies will help to validate and measure the success of your work with the system, which many organizations are already doing. Your goal should be to validate that components are usable, which will allow you to build a culture of user-centered design. Setting the bar for what it takes to evolve the system—such as requiring that changes are tested with real users—introduces this idea into the core of all your processes, where it should be.

Sell investment, not cost

Knowing how and which internal and external motivators to touch on during conversations is significant, but there’s one last thing I’d like to mention, and it has to do with your way of thinking. A major factor in many of these conversations lies simply in how we frame things: move the conversation about the cost of building a design system into a conversation about the present and residual benefits of the investment you’re making. It’s easy to view the time and effort required to build a system as an investment in ultimately delivering high-quality digital products. But leadership will be more willing to consider realistic budgets and timelines if you talk about it as a long-term investment with benefits on multiple levels throughout the business. This also leaves you with the ability to regularly remind them that this product will never be done—it will require ongoing funding and support.

A design system project will not succeed if you don’t convince others that it’s the right thing to do. Successful, sustainable design systems start with the people, so you have to begin by building consensus. Building a design system means you’re asking everyone to change how they work—everyone has to be on board.

This concept of collaboration is so core to the work of design systems that it led all of us here at Sparkbox to look for opportunities to better understand how teams around the world are designing, building, and using a more systematic approach to digital product design. For the last three years, we’ve been gathering and sharing data in the form of the Design Systems Survey and the Design System Calendar. If you are considering a design system for your organization, or if you work with a design system team, the survey and calendar may be helpful in your quest to build better products.

Navigating the Awkward: A Framework for Design Conversations

We’ve all been there. A client or coworker shows us this amazing thing they (and maybe their entire team) have worked on for hours or weeks. They are so proud of it. It’s new or maybe it just looks new. They may or may not ask you what you think—but you’re there to experience it. And your brain quietly screams.

As an experienced designer, you often have an intuitive reaction and can quickly spot bad designs; they may be visually incongruent, poorly structured, confusing, lack social awareness, or look like they are trying too hard.

If your initial response is so negative that it slips through into your expression or voice or body language, it can completely sabotage any possibility of buy-in. And, far more seriously, it can ruin the relationship of trust and collaboration you’re building with that person. 

Reflecting on my own successes and failures—and the experiences of others—I’ve put together a conversational framework for navigating these all-too-frequent design interactions, whether you’re an in-house designer, a consultant, or an agency employee. 

Be a relationship steward

“Getting things done” is often accomplished at the expense of relationships and sustainable design solutions. As in, the “We need to manage this situation” approach (emphasis on the “manage”) quite often looks more immediately effective on paper than the “We need to be productive while stewarding this project for this partner” mindset.  

The thing is, a design stewardship approach to working with clients and partners is a better bet; thinking beyond buy-in, proving your point, or getting your own way pays off both in the immediate situation and over the long term, for both sides.

I’ve had plenty of those “design conversations gone wrong” over the years, and have noticed a common set of whys and hows behind the scenes. To help me consciously factor them in and stay focused, I’ve developed this simple conversational framework:

Element 1: Move from selling to helping.

Element 2: Question your triggers and explore the problem.

Element 3: Map the problem to the client’s values.

Element 4: Formulate questions for the client based on values.

Element 5: Listen and be prepared to challenge your assumptions.

Element 6: Reflect back on the problem and share recommendations with the client.

We’re going to explore all that below, but here’s a quick-reference version of the conversational framework that you can glance back at as we go.

Healthy self-talk

When confronted with a bad design, there are some common reactions a designer might have—what we often catch ourselves saying in our head (hopefully!) or directly to our clients. (I need to preface by saying I borrowed some of these from a viral “Hi, I’m a ...you might know me from my greatest hits...” thread on Twitter.)

  • You are not your users!
  • Blindly following another organization’s best practices is not going to guarantee successful conversion for your business.
  • Have they ever heard there’s such a thing as Calls to Action?
  • Really, you couldn’t have bothered to tell the user ahead of time how many steps this process involves?
  • No, a chatbot won’t magically fix your horrible content!
  • Is this clipart?!!
  • Don’t use your org chart for navigation...not even on your intranet.
  • You can’t mix apples and oranges.
  • Views do not equal engagement metrics!
  • Stop celebrating outputs instead of outcomes!
  • Diversity is more than just white women.
  • You’re talking about implementation details, but I still don’t even know what problem we’re trying to solve.
  • Not another FAQ!
  • Does accessibility mean anything to these folks?
  • We don’t need 15 unique designs for this button. There is a style guide for that!
  • Good luck with your SEO efforts; keyword stuffing won’t get you ranking!
  • Can we start designing experiences instead of pages and features?

I am sure you can relate. While there’s nothing inherently wrong about these statements—and there are times when it is worth being upfront and saying them as-is—we also know they might be ineffective, or worse yet, perceived as confrontational. 

Someone worked hard on this. They put a lot of thought into it. They love it. They want this to be the solution. 

So, how can we avoid defensiveness? How do we engage the other person in a meaningful conversation that comes from a place of empathy instead of arrogant expertise? 

In describing “How Shifting Your Mindset Can Ignite Transformation,” Keith Yamashita points out that “each of us comes into the world curious, open, wanting to bond and wanting to have great connections with other people,” yet “our training, societal norms, school, and early jobs beat all of that out of us.” Self-awareness and inner reflection are essential to helping us reconnect with other humans. Practicing mindfulness is a great way to develop and enhance these skills.

It’s not me, it’s you (Element 1)

The first step to getting your message across is shifting your position from “How do I share my perspective?” to “How can I help my clients, partners, or coworkers improve their current product?”

Make room for the needs of others and create some distance from your ego. In particular, try to refrain from saying what you find so intuitive, and delay offering your opinion.

Blair Enns, who writes about the importance of being a vulnerable expert, says it beautifully (emphasis is mine):

  • You can be slick or the client can be slick. It’s better if it’s the client.
  • You can fumble and be awkward in the conversation or the client can fumble and be awkward. It’s better you are the awkward one.
  • You can have all the answers to the client’s questions or the client can have all the answers to your questions. It’s better to ask the questions. (Nobody has all the answers.)
  • Those who are not trained in selling often think of the cliches and think they must be seen to be in control, to have the answers, to have the polish. The opposite however is better. You can still be the expert by showing vulnerability. You don’t need to manufacture answers you do not have. It’s okay to say “let me think about that.”

Allowing others to be in the spotlight may take some practice and requires you to be self-aware. When you find yourself triggered and itching to comment or to disagree with something, try the following exercise:

  1. Pause.
  2. Acknowledge that you are frustrated and want to jump in.
  3. Invite yourself to be curious about the trigger instead of judging yourself or others. 

The more you practice this kind of self-awareness, the more you’ll notice your triggers and change how you respond to them. This quick mental exercise gives you the space to make an intentional choice. For similar practical strategies, take a look at “How to Turn Empathy into Your Secret Strength.”

Winning the moment isn’t a win (Element 2)

One potential trigger may be rooted in your mindset: are you more focused on trying to get “buy-in,” or on building positive, lasting relationships to support ongoing collaboration and stewardship?

To make that shift, you need to first ask yourself some questions to get to the bottom of what your impulse is trying to communicate. You then need to do some slow thinking and identify a question that will engage your partner in a conversation.

Here’s a hypothetical situation to explore what this might look like.

You’re shown a very clunky, centralized system designed so users can register for recreational activities around the city. The client wants your team to create a chatbot to support it. 

Your internal reaction: “Instead of pages and features, can we start designing experiences?”

Analyzing your reaction:

  • When we focus on pages and features like chatbot solutions, we typically aren’t seeing the whole picture.
  • Organizations can get distracted by a shiny opportunity or single perceived problem in a product, but these can frequently overshadow where real impact can be made.
  • The 80/20 Pareto principle has a strong pull for many organizations.
  • Organizations want solutions that take minimal perceived time and effort.
  • Organizations want to save money/go with the cheaper option.

So what?

As a result, organizations risk prioritizing what seems to be the easy thing at the expense of other, more user-friendly and profitable solutions. 

This example is simplistic, but notice that by asking a few sets of questions, we were able to move from a reactive statement to a reason why something may not be working—a reason that’s a lot less emotional and more factual. You could use a modified 5 Whys approach like this, or some other questioning method that suits the situation. 

If you dissect our example more closely, you’ll see that unlike the initial reaction, which speaks more to design elements like pages and features, we are now talking about more broadly relatable topics across business lines, such as cost savings or risk assessment. Structuring your conversation around topics most familiar to the other person and reflecting their core values can help us be more successful in improving their product.

Ask with values in mind, close with opportunities (Element 3)

I recently attended an excellent event on “Speaking Truth to Power,” presented by the Canada School of Public Service. The keynote speaker, Taki Sarantakis, shared his strategies for how to be an effective expert and advisor, such as:

  • Be credible and build trust
  • Have humility and empathy
  • Make sure that the person you are advising understands that the advice they do not want to hear is for their benefit.

He also broke down a few concepts that could be a barrier to implementing this advice. If we see ourselves as “speaking truth to power,” we are likely making a value judgement. We believe and project to others that we have all truth and no power, while the person on the other end has all the power and no truth. It’s an arrogant position that weakens our ability to make any productive progress. Framing our interactions as a battle will likely result in a lose-lose situation.  

Sarantakis then presented an example conversation that is rooted in credibility and humility and comes from a place of care. He underscored that any advice you choose to share absolutely has to come from a place of concern for the person making the final decision, and not from a desire to show off or to be on record as having said it. It roughly looks like this:

  • Here is what you need to know...
  • You know X, but you may not know Y and Z.
  • I know this is something you may not want to hear, but I need to say it because it is important that you know this.

As part of the panel discussion that followed the keynote, Kym Shumsky, who has lots of experience advising senior leaders, reinforced Sarantakis’s points by stating that valuing truth, knowledge, and accuracy over relationship-building can be detrimental. Thinking back on my personal experiences, I fully agree. 

So how do we build trust, credibility, and share from a place of care? Steve Bryant, Head of Content at Article Group, has some thought-provoking words on this in his article “Make relationships, not things”:

Relationships are based on trust. Trust takes time and honesty. You can’t just create a pile of content and be done with it. You can’t “thing” your way to people trusting you.

Which is to say: the question isn’t what content to create.

The question isn’t how to create that content.

The question is why do you care about the people you’re creating the content for? What makes them special? What kind of relationship do you want to have?

How do you want them to feel?

Translating core values into specific needs (Element 4)

Going back to the exercises we just explored and what we think could be the source of the problem, it’s time to start moving backward from the core values to specific design characteristics that need to be addressed.  

You should always open the conversation with a set of questions. Beginning with questions allows you to set aside the expert hat, be curious, and let the client share their experiences. It shows them you care and are there to listen generously.

Build rapport, be present, and be there to listen (Element 5)

Erika Hall offers timeless advice about the need to build rapport and understand our partners in her article “Everyday Empathy”:

And as social science shows, trying to bridge the gap with facts will never change anyone’s mind. The key is to value — truly value — and reflect the perspective of the people you want to influence. [...] Attention is a gift beyond measure.

A great bit of advice on “being present” rather than “presenting” on a topic is offered by Blair Enns (author of The Win Without Pitching Manifesto) in the episode “Replacing Presentations with Conversations.” Being present also means being vulnerable and open to discovering something new that might change your initial reaction. 

And then be prepared to truly listen, not convince. Sarah Richards points out how important it is to understand the different mental models that partners bring to the table and work together to form new ones to accomplish common goals:

How many times have you said you are going to talk to someone who is blocking you? Now count how many times have you said you are going to listen to someone who is blocking you? When we have someone in our organisation who disagrees with us, we go to see if we can convince someone that our way of thinking, our way of doing things, is the best way of doing it.

Here is what a conversation relating to “Can we start designing experiences instead of pages and features?” might look like, if we follow this approach:

You: What do you hope to accomplish with a chatbot?

Partner: We want people to get answers to their questions as quickly as possible, so they can register and pay for local recreation activities of their choice faster. We live in a beautiful city and it’s a pity when residents and visitors can’t take advantage of everything it has to offer. 

You: What have you heard from the people who experienced barriers to quickly registering and paying?

Partner: They complain that they can’t easily find activities in community centers closest to them or that there is no way for them to see all current and upcoming classes around the city at a glance, or that additional information about different activities is not provided within the system and they often have to look up events or class instructors separately to find more information on other websites. They also are not able to browse all activities by type of recreation, like “nature” activities, which might include hiking, city tours, birdwatching, garden events, and festivals. They often do not know what terminology to use to search for events and activities, so they say it is difficult to find things they already do not know about.

You: How do you think this makes them feel?

Partner: They say this frustrates them, as information on other websites might differ from the information in our system and they end up wasting their time guessing which one is correct and up-to-date. They then end up having to call the community center or organization providing an event for more information, to figure out if it is a good fit before registering and paying, which significantly delays the process. 

I think you see where this is going. 

Here are a few more follow-up questions:

  • Have you tried registering for an activity using the system? How did you feel/what did you experience? 
  • What would you like people using your system to feel/experience?
  • You’ve mentioned a number of barriers that people experience. How well do you think a chatbot will be able to remove these barriers now?
  • What are some of the risks you foresee in trying to solve these problems?

At this point, if you hear something that makes you pause and question your assumptions, ask further questions and consider going back to the drawing board. Maybe you need to ask yourself: What are my lenses?

Respond with care and invite collaboration (Element 6)

If what you’ve heard confirms your assumptions, you could offer a few concise, summative statements and a recommendation. Whatever you say needs to integrate the vocabulary used by the client (mirroring), to show them that you were listening and critically reflecting on the situation. 

Let’s see how that might look:

“Based on what you’ve shared, it seems that you want to make it quick and easy for anyone in the city to discover, decide on, and pay for a local recreation activity. The experience of the people using the system is very important to you, as you want them to enjoy the city they live in, as well as support the vibrancy of the city economically by registering and paying for local activities.

If we want to help people enjoy and experience the city through events and activities, we need to make it simple and frictionless for them. The barriers they experience cannot be solved with a chatbot solution because the information people are looking for is often missing and not integrated into the current system in a meaningful way. So the chatbot would not give them the answers they need, creating further frustration. 

Adding a chatbot also creates an extra layer of complexity. It does not solve the underlying cause of frustration stemming from lack of relevant and integrated information. Instead, it leaves the current experience broken and creates yet another place people need to go to for possible answers.

It would also be a huge risk and time investment to design a chatbot, as your current content is not structured in a way that would let us extract useful information.

Given your time and resource constraints, I would suggest we explore some other solutions together.” 

Framing and reinforcing the conversation

To recap, here are the six essential elements of the conversational framework:

Element 1: Mentally move from how you can share and sell your perspective to how you can help your partner.

Element 2: Ask yourself probing questions to better understand your reaction to the “bad design” trigger and what is at the core of the problem.

Element 3: Map the core of the problem to value(s) you can use to begin the conversation with a partner.

Element 4: Use value(s) identified to formulate and ask questions.

Element 5: Get ready to truly listen to your partner and be prepared to challenge your assumptions.

Element 6: Review your responses to probing questions and identify recommendations you can share back with the partner.

This conversational framework starts with us as individuals, forces us to critically deconstruct our own reactions, then asks us to reframe what we find from a perspective of what matters and is known to our clients. It reminds us that we should learn something in the process by having intentional yet open conversations.

Future of design leadership is stewardship

The work we do in the web industry touches people—so we need to be people. We need to be human, build trust, and sustain relationships with our clients and partners. If we aren’t doing a good job there, can we really claim it’s not impinging on our designs and end users?

Our growth as web professionals can’t be limited to technical expertise; design leadership is stewardship. It’s rooted in “listen, then respond,” in learning how to pause, create space, and get to the root of the problem in a productive and respectful way. We need to learn how to intercept our reactions, so that we can shift how we approach triggering situations, stay still and listen, and open up conversations rife with possibilities rather than shutting them down. Guide clients toward better design choices by meeting them in the moment and partnering with them.

In design work, being a steward does not mean that you should push to get your way. Neither does it mean you should indulge clients and create broken or unethical products. Rather, it proposes an attuned way of approaching potentially contentious conversations to arrive at a solid, ethical design. It is about framing the conversation positively and ushering it as a steward, rather than stalling discussion by being the gatekeeper. 


The Web is obese

In 1994, there were 3,000 websites. In 2019, there were an estimated 1.7 billion, roughly one website for every four and a half people on the planet. Not only has the number of websites exploded, the weight of each page has also skyrocketed. Between 2003 and 2019, the average webpage weight grew from about 100 KB to about 4 MB. The results?

“In our analysis of 5.2 million pages,” Brian Dean reported for Backlinko in October 2019, “the average time it takes to fully load a webpage is 10.3 seconds on desktop and 27.3 seconds on mobile.” In 2013, Radware calculated that the average load time for a webpage on mobile was 4.3 seconds.

Study after study shows that people absolutely hate slow webpages. In 2018, Google research found that 53% of mobile site visitors left a page that took longer than three seconds to load. A 2015 study by Radware found that “a site that loads in 3 seconds experiences 22% fewer page views, a 50% higher bounce rate, and 22% fewer conversions than a site that loads in 1 second, while a site that loads in 5 seconds experiences 35% fewer page views, a 105% higher bounce rate, and 38% fewer conversions.”

The causes of webpage bloat? Images and videos are mainly to blame. By 2022, it’s estimated that online videos will make up more than 82% of all consumer Internet traffic—15 times more than in 2017. However, from the code to the content, everything about Web design has become super-bloated and super-polluting. Consider that if a typical webpage that weighs 4 MB is downloaded 600,000 times, one tree will need to be planted in order to deal with the resulting pollution.

They say a picture paints a thousand words. Well, 1,000 words of text takes up roughly two A4 (210 mm wide and 297 mm long) pages and weighs about 6 KB. You’d place about four images that are 9 cm × 16 cm on two A4 pages. Let’s say these images are well optimized and weigh 40 KB each. (A poorly optimized image could weigh several megabytes.) Even with such high optimization, two A4 pages of images will weigh around 160 KB. That’s 27 times more than the two A4 pages of text. A 30-second video, on the other hand, could easily weigh 3 MB.

Videos create massively more pollution than text. Text is the ultimate compression technique. It is by far the most environmentally friendly way to communicate. If you want to save the planet, use more text. Think about digital weight.
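If you want to play with that arithmetic yourself, here’s a quick back-of-the-envelope sketch in Python. The numbers are the illustrative estimates from the paragraph above, not measurements.

```python
# Back-of-the-envelope comparison using the figures above.
# All sizes are illustrative estimates, in kilobytes.
text_kb = 6             # ~1,000 words of text (two A4 pages)
images_kb = 4 * 40      # four well-optimized images at ~40 KB each
video_kb = 3 * 1024     # one ~3 MB 30-second video

print(f"images vs. text: {images_kb / text_kb:.0f}x heavier")  # ~27x
print(f"video vs. text: {video_kb / text_kb:.0f}x heavier")    # ~512x
```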

From an energy point of view, it’s not simply about page weight. Some pages may have very heavy processing demands once they are downloaded. Other pages, particularly those that are ad-driven, will download with lots of third-party websites hanging off them, either feeding them content or demanding to be fed data, often personal data about the site’s visitors. It’s a kind of Trojan Horse: you think you’re accessing one website or app, but then all these other third parties start accessing you. According to Trent Walton, the top 50 most visited websites had an average of 22 third-party websites hanging off them. The New York Times had 64, while the Washington Post had 63. All these third-party websites create pollution and invade privacy.

There is a tremendous amount of out-of-date content on websites. I have worked with hundreds of websites where we had to delete up to 90% of the pages before we started seeing improvements. Poorly written, out-of-date code is also a major problem. By cleaning up its JavaScript code, Wikipedia estimated that it saved 4.3 terabytes a day of data bandwidth for its visitors. Saving those terabytes meant almost 700 fewer trees needed to be planted to deal with the yearly pollution that traffic would otherwise have caused.

If you want to help save the planet, reduce digital weight. Clean up your website. Before you add an image, make sure it does something useful and that it’s as optimized as it can be. Every time you add code, make sure it does something useful and that it’s the leanest code possible. Always be on the lookout for waste images, waste code, waste content. Get into the habit of removing something every time you add something.
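As one concrete example of what “as optimized as it can be” might mean in practice, here is a minimal Python sketch using the Pillow library; the filenames and target dimensions are illustrative assumptions, not a prescription.

```python
# A minimal image-optimization pass using Pillow (pip install Pillow).
# The filenames and target dimensions are illustrative assumptions.
from PIL import Image

img = Image.open("hero-original.jpg")

# Don't ship more pixels than the largest size the image is displayed at.
# thumbnail() resizes in place and preserves the aspect ratio.
img.thumbnail((1600, 1600))

# Lossy JPEG at quality 80 is usually hard to tell apart from quality 95
# but far smaller; optimize=True makes the encoder try harder still.
img.save("hero-optimized.jpg", quality=80, optimize=True)
```

Resizing down to the actual display size typically saves far more bytes than fiddling with encoder settings, which is why the sketch does that first.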

Publishing is an addiction. Giving a website to an organization is like giving a pub to an alcoholic. You remember the saying, “There’s a book inside everyone”? Well, the Web let the book out. It’s happy days for a while as we all publish, publish, publish. Then…

“Hi, I’m Gerry. I have a 5,000-page website.”

“Hi, Gerry.”

“I used to have a 500-page website, but I had no self-control. It was one more page, one more page… What harm could one more page do?”

Redesign is rehab for websites. Every two to three years some manager either gets bored with the design or some other manager meets a customer who tells them how horrible it is to find anything on the website. The design team rounds up a new bunch of fake images and fake content for the top-level pages, while carefully avoiding the heaving mess at the lower levels. After the launch, everyone is happy for a while (except the customers, of course), because in many organizations what is important is to be seen to be doing and launching things, rather than to do something useful.

If you must do something, do something useful. That often means not doing, removing, minimizing, cleaning up.

Beware the tiny tasks. We’ve used the Top Tasks method to identify what matters and what doesn’t matter to people, whether they’re buying a car, choosing a university, looking after their health, buying some sort of technology product, or whatever. In any environment we’ve carried it out in—and we’ve done it more than 500 times—there are no more than 100 things that could potentially matter.

In a health environment, these might include symptoms, treatment, prevention, costs, waiting times, etc. When buying a car, they might include price, engine type, warranties, service costs, etc. We’ve carried out Top Tasks surveys in some 40 countries and 30 languages, with upwards of 400,000 people voting. In every single survey the same pattern emerges. Let’s say there are 100 potential tasks. People vote on the tasks that are most important to them. When the results come in, we find that the top five tasks get the first 25% of the vote, while the bottom 50 tasks get the final 25%. In other words, the top five tasks get as much of the vote as the bottom 50 combined. It’s the same pattern in Norway, New Zealand, Israel, the USA, Canada, the UK, Brazil, wherever.
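Here is a small Python sketch of that head-versus-tail calculation. The vote counts are invented to mirror the pattern just described; they are not data from any real survey.

```python
# Sketch: what share of the vote do the head and tail of a Top Tasks
# league table take? The vote counts below are invented to mirror the
# pattern described above; they are not real survey data.
def vote_shares(votes, head=5, tail=50):
    ranked = sorted(votes, reverse=True)
    total = sum(ranked)
    return sum(ranked[:head]) / total, sum(ranked[-tail:]) / total

votes = [260, 230, 200, 170, 140] + [44] * 45 + [20] * 50  # 100 tasks
top5, bottom50 = vote_shares(votes)
print(f"top 5 tasks: {top5:.0%} of the vote")       # ~25%
print(f"bottom 50 tasks: {bottom50:.0%} of the vote")  # ~25%
```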

The bottom 50 are what I call the tiny tasks. When a tiny task goes to sleep at night it dreams of being a top task. These tiny tasks—the true waste generators—are highly ambitious and enthusiastic. They will do everything they can to draw attention to themselves, and one of the best ways of doing that is to produce lots of content, design, code.

Once we get the Top Tasks results, we sometimes analyze how much organizational effort is going into each task. Invariably, there is an inverse relationship between the importance of the task to the customer and the effort that the organization is making in relation to these tasks. The more important it is to the customer, the less is being done; the less important it is to the customer, the more is being done.

Beware of focusing too much energy, time and resources on the tiny tasks. Reducing the tiny tasks is the number one way you can reduce the number of pages and features. Save the planet. Delete the tiny tasks.

A plague of useless images

I was giving a talk at an international government digital conference once, and I asked people to send me examples of where digital government was working well. One suggestion was for a website in a language I don’t speak. When I visited it, I saw one of those typical big images that you see on so many websites. I thought to myself: I’m going to try and understand this website based on its images.

The big image was of a well-dressed, middle-aged woman walking down the street while talking on her phone. I put on my Sherlock Holmes hat. Hmm… Something to do with telecommunications, perhaps? Why would they choose a woman instead of a man, or a group of women and men? She’s married, I deduced by looking at the ring on her finger. What is that telling me? And what about her age? Why isn’t she younger or older? And why is she alone? Questions, questions, but I’m no Sherlock Holmes. I couldn’t figure out anything useful from this image.

I scrolled down the page. Ah, three more images. The first one is a cartoon-like image of a family on vacation. Hmm… The next one is of two men and one woman in a room. One of them has reached their hand out and placed it on something, but I can’t see what that something is, because the other two have placed their hands on top of that hand. It’s a type of pledge or something, a secret society, perhaps? Two of them are smiling and the third is trying to smile. What could that mean? And then the final picture is of a middle-aged man staring into the camera, neither smiling nor unsmiling, with a somewhat kind, thoughtful look. What is happening?

I must admit that after examining all the visual evidence I had absolutely no clue what this government website was about. So, I translated it. It was about the employment conditions and legal status of government employees. Now, why didn’t I deduce that from the images?

The Web is smothering us in useless images that create lots of pollution. These clichéd, stock images communicate absolutely nothing of value, interest or use. They are one of the worst forms of digital pollution and waste, as they cause page bloat, making it slower for pages to download, while pumping out wholly unnecessary pollution. They take up space on the page, forcing more useful content out of sight, making people scroll for no good reason.

Interpublic is a very large global advertising agency. As with all advertising agencies, they stress how “creative” they are, which means they love huge, meaningless, happy-clappy polluting images. When I tested their homepage, it emitted almost 8 grams of CO2 as it downloaded, putting Interpublic in the worst 10% of website polluters, according to the Website Carbon Calculator. (For comparison, the Google homepage emits 0.23 grams.) One single image on its homepage weighed 3.2 MB. That image could easily have been 10 times smaller while losing nothing in visual appeal. The Interpublic website is like a filthy, rusty 25-year-old diesel truck, belching fumes as it trundles down the Web.
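Estimates like the Website Carbon Calculator’s boil down to converting bytes transferred into energy and then into CO2. Here is a rough Python sketch of that conversion; the two constants are commonly cited sustainable-web-design figures, not the calculator’s exact model, and such figures change over time.

```python
# Rough bytes-to-CO2 estimate. The constants are assumptions taken from
# commonly cited sustainable-web-design figures, not the Website Carbon
# Calculator's exact model, and they are revised over time.
KWH_PER_GB = 0.81         # assumed energy cost per gigabyte transferred
GRAMS_CO2_PER_KWH = 442   # assumed global average grid intensity

def co2_grams_per_view(page_mb):
    """Estimated grams of CO2 emitted by one download of a page."""
    return (page_mb / 1024) * KWH_PER_GB * GRAMS_CO2_PER_KWH

print(f"{co2_grams_per_view(4):.1f} g CO2 per view of a 4 MB page")
```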

Instead of optimizing images so that they’ll download faster, the opposite is often happening. High-resolution images are a major cost to the environment. Move from a 4K image to an 8K one, for example, and the file size doesn’t double; it roughly trebles. (An 8K frame contains four times the pixels of a 4K one, so even with compression the growth is steep.) I saved an image at 4K and it was 6.9 MB. At 8K it was 18 MB.

Digital “progress” and “innovation” often means an increasing stress on the environment. Everything is more. Everything is higher. Everything is faster. And everything is exponentially more demanding of the environment. Digital is greedy for energy and the more it grows the greedier it gets. We need digital innovation that reduces environmental stress, that reduces the digital footprint. We need digital designers who think about the weight of every design decision they make.

We must start by trying to use the option that damages the environment least, and that is text. Don’t assume that images are automatically more powerful than text. Sometimes, text does the job better.

  • In a test with an insurance company, a promotion for a retirement product was judged less accurate when it included an image of a face than when it used text alone.
  • An initiative by the UK government to get people to sign up to become potential organ donors tested eight approaches. The approaches that used images were least effective. Text-only worked best.


“Hello. Is that the Department of Useless Images?”

“Yes.”

“We have this contact form and we need a useless image for it.”

“How about a family cavorting in a field of spring flowers with butterflies dancing in the background?”

“Perfect.”

There are indeed many situations where images are genuinely useful, particularly when it comes to helping people better understand how a product works or looks. Airbnb, for example, found that its growth only began to accelerate after it invested in getting quality images of the rental properties on offer.

If you need to use images, optimize them and consider using real ones of real people doing real things.

They say a picture paints a thousand words, but sometimes it’s a thousand words of crap.
