Organization & Role
Heap is a digital analytics platform. As their founding documentation strategist, I’m responsible for the user experience, information architecture, content strategy, and all content within the Help Center and developer/API docs.
Project
I was hired to select a new CMS, oversee the CMS migration, and manage the CMS going forward. As part of this project, I also revised all existing content to have a unified voice per our in-house writing guide.
Note: this was a 16-month project broken up into multiple mini-projects. To keep this high-level, there’s a lot I left out, including style guide creation, granular copy decisions, budget discussions, and more. For more info on this project, reach out via the contact info on my homepage.
Problem
To better understand the team’s reasons for wanting to migrate to a new CMS, I completed a series of 1:1s with folks on Success, Support, Engineering, Product, Design, and Sales to ask about their pain points with the current docs. I whittled this feedback down to five focus areas.
#1 Site layout implies docs are strictly technical
Though most of the docs were written for non-technical audiences, the CMS, ReadMe, had a developer-docs UX. This led non-technical readers to assume the content wasn’t meant for them, so they contacted support instead, increasing the team’s ticket load.
Above is a screenshot of ReadMe’s Docs as an example of a typical ReadMe site. Though I don’t have screenshots of the old Heap docs, this is fairly similar to what it looked like at the time.
#2 Lack of support for a 2-tier taxonomy
The old site featured a long, text-heavy left navigation bar listing 150 articles divided into 10 categories (which required lots of scrolling). Many of the category and doc titles were too ambiguous to be useful to the wayward searcher, such as “Events” and “Data Format”. Imagine that left nav with 150 links crammed into 10 categories 😱
There was easily enough content to warrant a 2-tier architecture with main categories and subcategories, but ReadMe did not support this type of structure, so I decided to revise the information architecture as part of the CMS migration.
#3 A sub-par search experience
The search UX was frustrating: even queries that exactly matched a doc’s title sometimes failed to return that doc in the results. This led users to assume there was no answer (even when there was) and reach out to support, increasing the ticket load with questions the docs already answered.
#4 Lack of custom branding features
ReadMe only allowed for minimal custom branding, so the docs looked significantly off-brand compared to the website and in-app UX. Other CMSs offered features out of the box, such as a custom landing page and a better flow for contacting support, that ReadMe lacked.
#5 Inconsistencies in voice and tone
Because the docs had been written by multiple people, there were inconsistencies in voice, tone, terminology, and formatting. Many examples and screenshots were out of date, which frustrated customers and decreased their trust in the product.
Approach
With these pain points gathered, it was clear there was a lot of work to do. I divided up the project into the phases listed below.
Gathering Stakeholders
To make sure I was continuously getting feedback throughout the process, I asked team members to be stakeholders. Each stakeholder represented their team, giving feedback to make sure the result would meet that team’s needs.
I opted for eight stakeholders, including:
- One Product Manager
- One Designer
- One Engineer
- One Customer Education Specialist
- One Professional Services Specialist
- Two Support Engineers
I decided who to work with based on their area of focus, interest in the project, and bandwidth. To make it easier to collaborate, I shared most of my work via tools they already used, including Google Drive, Dropbox, and Confluence.
Benchmarking Success
My next decision was how to measure the “success” of the project. I started by writing out what the opposite of each of the pain points would be:
- An approachable UX for all types of visitors (technical and non-technical).
- A flexible 2-tier taxonomy designed to grow with our product.
- An intuitive, useful search experience with increased clicks on results.
- Branding features to align with the product and marketing website.
- All content rewritten to be in a consistent tone of voice.
Based on this, I established the following baseline success metrics:
- 👎 Decrease in ‘no’ replies to the ‘was this doc helpful’ prompt in each doc
- 🎫 Decrease in customer write-ins about docs
- 📈 Increase in engagement with docs that were previously less discoverable
CMS Evaluation
For the CMS evaluation, I developed a grading rubric based on the requirements shared with me by the team. I divided these requirements into three categories with weighted point values (a scoring sketch follows the list below):
- Must have (3 points): a CMS would be instantly disqualified if it did not include this feature.
- Like to have (2 points): a CMS would lose points if it did not have this feature.
- Nice to have (1 point): enhancement features that we could ultimately live without.
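For illustration, here’s a minimal sketch of how that weighted scoring could be computed. The feature names and candidate below are hypothetical placeholders, not the actual rubric entries:

```python
# Minimal sketch of the weighted rubric scoring. Feature names are
# illustrative placeholders, not the actual rubric requirements.
WEIGHTS = {"must": 3, "like": 2, "nice": 1}

REQUIREMENTS = [
    ("2-tier taxonomy", "must"),
    ("Custom branding/theming", "must"),
    ("Reliable full-text search", "like"),
    ("Custom landing page", "nice"),
]

def score_cms(has_feature: dict[str, bool]) -> int | None:
    """Return a weighted score, or None if a must-have is missing (instant DQ)."""
    total = 0
    for feature, tier in REQUIREMENTS:
        if has_feature.get(feature, False):
            total += WEIGHTS[tier]
        elif tier == "must":
            return None  # instant disqualification
    return total

# A hypothetical candidate missing only a nice-to-have scores 3 + 3 + 2 = 8.
print(score_cms({
    "2-tier taxonomy": True,
    "Custom branding/theming": True,
    "Reliable full-text search": True,
    "Custom landing page": False,
}))
```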
Pictured below is the top section of the rubric along with a few CMSs we considered. You can see a clean copy of the full CMS Evaluation Rubric in Drive for a complete list of requirements.
Next, I generated a list of potential CMSs based on popular recommendations in professional spaces. I also used search autocomplete: I entered a CMS name along with “vs.” (e.g. “Contentful vs.”) and let Google fill in the rest to surface the top competitors.
I wound up with a list of ~18 CMSs. To narrow it down, I put them through an ‘instant disqualification’ test, spending 1-2 minutes scanning each option for missing ‘must haves’. Any option missing one went under a ‘Disqualified’ header with a note about why it didn’t make the cut.
Some stakeholders were particularly invested in certain CMSs, e.g. ones that integrated easily with their toolset. Maintaining notes on which were disqualified, and why, helped justify my reasoning when those CMSs came up later on.
This whittled the list down to the top five options, each of which had all of the must-haves. I completed the full rubric for each one, then totaled up the points. Below is a screenshot of the bottom half of the completed rubric showing the scoring system with points added.
As I eliminated options, I added a high-level summary of reasons for their disqualification to make it easy for reviewers to understand.
The CMS we chose, WordPress, came out on top with 84 points and the best price. With all of this data in hand, I announced my decision, then moved on to the information architecture phase.
Information Architecture
I started off high-level by outlining a site map consisting of:
- Top and bottom navigation elements
- Homepage sections
- 2-tier information architecture
Since I’m not a designer, I mocked up low-fidelity screens based on competitor and best-in-class Help Centers. The mock below is based on Slack’s Help Center with content modified via the DOM. This gave stakeholders (including our design vendor) a sense of the overall look and feel of the new site.
Next, I mapped out the docs architecture. The left sidebar allowed for a bird’s-eye view of the taxonomy, with a header for each category and a list of the docs that would live beneath it. I also left comments explaining the content decisions made along the way.
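To give a concrete sense of the shape (these names are made up, not Heap’s actual taxonomy), a 2-tier docs architecture can be modeled as categories containing subcategories containing docs:

```python
# Illustrative 2-tier taxonomy: category -> subcategory -> doc titles.
# All names here are hypothetical placeholders.
taxonomy = {
    "Getting Started": {
        "Installation": ["Install on web", "Install on iOS"],
        "First Steps": ["Define your first event"],
    },
    "Analysis": {
        "Charts": ["Build a funnel", "Build a journey map"],
        "Examples": ["Measure sign-up conversion"],
    },
}

# Print a bird's-eye view, similar to the sidebar outline described above.
for category, subcategories in taxonomy.items():
    print(category)
    for subcategory, docs in subcategories.items():
        print(f"  {subcategory}: {', '.join(docs)}")
```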
Once complete, I had stakeholders review and leave feedback sequentially. I started with Product, Design, and Eng for their product and design expertise, then brought in Customer Education, Professional Services, and Solutions Engineering for their perspective working directly with customers.
User Testing
Once the team review was complete, I conducted a series of user tests with non-Heap people in my professional network who most aligned with our customer persona, including:
- A Marketer at a SaaS business
- A Product Manager at a B2B company
- A Customer Success leader at an eCommerce startup
Since my department did not have budget for testing, I offered each participant an IOU: in exchange, I’d take part in one of their user research studies.
This was my first time conducting user testing, and there wasn’t a testing process on another team for me to align with. To prepare, I absorbed best practices by speed-reading books and watching videos on the subject. Based on what I learned, I opted to conduct the following types of tests:
🌳 A tree test, where participants were given a task and prompted to click through the architecture to select the option they believed was most likely to contain the information they needed.
🃏 An open card sort, where participants were presented with a set of cards containing doc titles and asked to sort them into groups of their own creation. In the image below, I focused on how our analysis examples docs would be grouped.
🔗 A hybrid card sort with several predefined groups along with the option to create new groups.
I ran these tests with 10 participants, the maximum allowed on Optimal Workshop’s free plan, who shared their screens while I recorded. In addition to the non-Heap participants, I recruited a dozen new Heap hires across various teams to complete the tests on their own time.
The tests were eye-opening; we identified (and solved) several information architecture problems, including a ground-up rewrite of the entire Analysis Examples section and new titles for docs whose previous titles didn’t match their contents.
I shared my findings and suggested changes with the stakeholder group for feedback, and received unanimous approval based on the test results.
Content Revision & Migration
With the architecture finalized, I set about migrating all 200+ docs from ReadMe to WordPress. Based on past migrations, I estimated the work would take around 70 hours, so I blocked off most of a 3-week period for it.
To stay organized, I set up a spreadsheet mapping the titles and locations of the current docs to their new titles and locations in the new Help Center. I also used this sheet to track my progress through the migration, marking ‘Yes’ next to docs that had been moved.
I also added a function to the top of the sheet calculating the percentage of migrated docs (the equivalent logic is sketched below).
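In the sheet this was a single formula; the same logic in Python, assuming a hypothetical CSV export with a ‘Migrated’ column of Yes/No values, would look like:

```python
# Sketch of the progress calculation, assuming a CSV export of the tracking
# sheet with a "Migrated" column containing "Yes"/"No" (hypothetical layout).
import csv

with open("migration_tracker.csv", newline="") as f:
    rows = list(csv.DictReader(f))

migrated = sum(1 for row in rows if row["Migrated"].strip().lower() == "yes")
print(f"Migrated {migrated}/{len(rows)} docs ({migrated / len(rows):.0%})")
```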
I wound up completing the migration at exactly the 3-week mark, thanks to the blocked-off time frame, many cups of tea, and all of the lofi hiphop playlists on YouTube.
Pre-Launch Planning
Based on average site traffic, I planned launch day for a Monday morning at 6am PST. This would give me the whole day for urgent same-day fixes, and the rest of the week for additional site updates.
As the day approached, I set up a list of pre-launch, launch (right at 6am), and post-launch tasks. Here’s a sample of what that list looked like:
PRE-LAUNCH
- ✅ Various final content improvements
- ✅ Double-check that all unlisted docs in ReadMe are hidden in WordPress
- ✅ Finalize which top/trending articles should be listed on the homepage
- ✅ Coordinate the SSL certificate and DNS changes for the URL redirect
DAY OF LAUNCH (6am PST)
- ✅ Set up redirects in ReadMe using https (verified with the sketch below)
- ✅ Delete all ReadMe articles to activate the redirects
- ✅ Check that the Nicereply (our thumbs up/thumbs down) survey works
- ✅ Update top navigation links across heap.io and help.heap.io
- ✅ Announce launch in Slack! 🚀
This list acted as a living project management tool the day of the launch, with me and my launch teammates checking off items one-by-one.
For quick coordination, I also created a dedicated #help-center-launch channel and added everyone involved (an on-call engineer, a designer, and all stakeholders) for rapid-fire discussion.
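As a safety net for the redirect step, a quick script along these lines can confirm that each old URL lands on the right new page. The URLs below are hypothetical placeholders, and the sketch assumes the third-party requests package:

```python
# Spot-check that old ReadMe URLs redirect to the new WordPress docs.
# URLs here are hypothetical placeholders; requires `pip install requests`.
import requests

REDIRECTS = {
    "https://docs.heap.io/docs/example-old-doc": "https://help.heap.io/example-new-doc",
}

for old_url, expected in REDIRECTS.items():
    resp = requests.get(old_url, allow_redirects=True, timeout=10)
    status = "OK" if resp.url.rstrip("/") == expected.rstrip("/") else "MISMATCH"
    print(f"{status}: {old_url} -> {resp.url}")
```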
Launch Day Challenges
It wouldn’t be a true launch without some snags, would it? Though the day went smoothly for the most part, I encountered the following hiccups:
🧊 Cached results in ReadMe despite changes being pushed live
For mysterious reasons, ReadMe’s results stayed cached for about 18 hours for Heap employees. Luckily, I quickly confirmed this wasn’t the case for non-Heap viewers, who saw the updates as soon as they were pushed. Accordingly, no action was needed; we just had to wait for the cache to expire.
❌ Accidentally deleted content in ReadMe (with no ‘restore’ function)
Another limitation of ReadMe (at the time) was that when you deleted a doc, it was gone forever. During the migration, I erroneously removed one doc that was meant to stay. The slow horror that washed over me as I frantically navigated the site – and ReadMe’s docs – looking for a way to restore the deleted doc, only to find none, is indescribable.
Fortunately, thanks to the caching issue referenced above, I was able to retrieve and restore the doc via the cached version, with only the revision history lost.
Outcome
Remember the ReadMe docs site shown above? The one that looked like this?
The official new Heap Help Center homepage, on the day of launch, looked like this:
Results
The new Help Center was praised by customers and colleagues, with one customer sharing they were “obsessed” with the new design. Over the next few quarters, I confirmed the success of the project via the metrics that I benchmarked at the beginning:
- 👍 I observed a decrease in thumbs-down clicks to the ‘was this doc helpful’ prompt in each doc!
- 📥 The support team observed a decrease in write-ins related to docs!
- 📊 Pageview metrics showed a spike in views for docs that had previously received fewer views!
Feel free to explore the current version of the Heap Help Center for yourself at help.heap.io.
Note: One piece of post-launch feedback was that our developer docs now felt outdated. To address this, I did a dev docs revamp, which I’ve written about in Revamping a developer documentation hub.