Note from the Editor: This article is part I in a series dedicated to helping you increase your online experiment velocity. Stay tuned for future instalments.
Applying a lesson from Henry Ford to your experimentation program
If someone gave you $5,000 and a pile of scrap material and asked you to build a car, how would you do it?
Back in 1901, this was the question Ford Motor Company was facing. Ford’s response was to hire a few people to produce each vehicle by hand over the course of several days. The company would hire someone, teach them how to build a car, and put them on a team.
Simple logic tells us that the process that works to build one car will work to build 10. And 100. And 1,000.
But if someone asked you to produce 1,000 cars, would you continue to take the same approach? Of course not.
Ford discovered that, at scale, hiring specialists in different areas and leveraging automation allowed him to produce vehicles much more efficiently and profitably. The assembly line was born.
At a small scale, automation and specialization don’t make any sense. In fact, you could argue that it would take far longer to produce a single car if you first had to set up extensive infrastructure.
But when producing 1,000+ cars, it is crazy to have two or three people building each car from start to finish. This would be a huge waste of human effort.
One of the many things that made Henry Ford a success was his ability to step back from his tasks and evaluate his process, or “machine”, at a high level. He recognized that producing hundreds of cars required a different system than producing one.
You should view your experimentation program in the same way. While it is important to iron out your processes when you are running 10 experiments, you’ll need to re-evaluate system requirements to scale to hundreds of experiments. At Widerfunnel, we call this system your “Experimentation Operating System”.
How to use this article
This article will be most useful if you already have a testing foundation and are running 10 or more experiments per year. This means you have organizational buy-in and experimentation technology in place, as well as basic resources and processes.
Reaching 10 experiments per year is a major milestone for any testing organization. But many companies have their sights set on something more: Moving from dozens of experiments to hundreds.
At Widerfunnel, we often see experimentation programs fail because an organization is trying to run hundreds of experiments using the same strategies and processes they used to launch 10.
This series is a practical guide meant to address the key questions that organizations face when scaling their production from tens of experiments per year to hundreds.
Determining how to structure your experimentation program: Centralized or decentralized?
At its core, the question of whether to centralize or decentralize your experimentation program is about:
- Where knowledge needs to be stored (or where you want it to be stored), and
- How resources and budget are allocated.
By nature, every single-person testing program begins as a centralized model: All of the knowledge is owned by one individual, and the program is funded by a single budget.
At a certain point—usually around the 20 to 40 experiments per year mark—organizations have to decide whether to continue to use an (upgraded) centralized structure, or whether transitioning to a decentralized structure makes more sense.
Neither is right or wrong, and both have pros and cons depending on the needs of your organization and the staff and skills available to you.
Why you may want to centralize your experimentation program
Centralization is most effective when:
- It is difficult for your organization to recruit elite experimentation talent, and you don’t have many experienced experimenters in-house,
- You are at an organization that is not naturally wired for experimentation, or that has a history of aversion to failure and mistakes,
- You require strict control over process, or
- You have lower website traffic levels, limiting the number of experiments you will eventually be able to run
Centralization means that results and insights are owned by a central team that has oversight across multiple teams or business units; this enables the recycling of ideas. Companies that want to take maximum advantage of the knowledge and insights gained through experimentation will benefit from the standardization offered in a centralized structure.
More often than not, centralization is the better option for a company launching between 10 and 100 experiments per year. The critical transition point depends on when experimentation moves from a single team to many teams. This often creates communication challenges that bring into question whether centralization makes the most sense.
Centralization can function smoothly until an organization reaches 200 to 500 experiments per year. At this stage, an organization is often required to decentralize in order to continue to scale efficiently. Centralization can still work beyond this point but, in most cases, it becomes inefficient.
The drawbacks of centralization
Centralization does have certain drawbacks. One of the biggest is cultural.
When experimentation is owned by a central team, individual product owners don’t have the freedom or ability to run experiments. These individuals may feel powerless or undercut by a central testing authority that is running experiments on experiences for which they are responsible.
A successful centralized model requires top-down authority: An individual with expertise in experiment design and prioritization, who can guide a less experienced tester or product owner. There must be clear, constructive communication with the individual owners of the experiences where tests are being launched.
Effective centralized teams are usually considered “services to the business”; their job is to enable and regulate experimentation throughout the organization.
The benefits of centralization
Arguably, the biggest benefit to kicking off your experimentation efforts with a centralized structure is that it requires less high-end experimentation talent. As many organizations know, it can be exceedingly difficult to find even one experimentation expert. Recently, one large insurance company told us they had been searching for an Experimentation Strategist in the San Francisco area for upwards of 6 months before finally taking down the posting entirely.
While a decentralized model requires experimentation expertise in several different areas of the business, a centralized model can serve several business units with a single experimentation team or person. A centralized model allows one individual or team to own the majority of the knowledge around experimentation, and to share that knowledge via unified standards and training where possible.
This is usually beneficial in the long run because it is easier to transition to a decentralized model when different teams are all using similar processes with minimal divergence. In contrast, decentralized teams will struggle to unify under a central model once each business unit has adopted its own operating system.
Why you may want to decentralize your experimentation program
Decentralization is most effective in organizations where:
- Website traffic levels are extremely high,
- The organization is spread across various geographies,
- There are distinctly different product verticals (that do not overlap), and
- There is an organizational culture that grants employees the freedom to make mistakes publicly
For larger, global organizations, some form of decentralization usually becomes inevitable. Inefficiencies begin to occur when a central team simply does not have the breadth of knowledge necessary to perform experimentation effectively and efficiently.
For example, if an organization runs both an e-commerce business and a software business, a centralized team is unlikely to understand both businesses deeply, making it less effective in each. Another example is an enterprise dispersed across multiple geographies where specific market knowledge is required. Of course, there are exceptions, especially if the organization is effective at sharing knowledge and combating siloing.
For these reasons, we rarely see decentralization in smaller companies; these organizations are not restricted by issues that become exposed at large scale. Even within large organizations, it is still very common for experimentation programs to start as centralized during the Initiating Phase of maturity before converting to a hybrid and eventually to a decentralized model.
This “training wheels” approach allows companies to safely roll out experimentation with oversight, rather than being exposed to the risk of allowing people all over the organization to experiment freely (as they would in a decentralized model).
Structuring your experimentation program using a hybrid model
Some organizations prefer to operate using a hybrid system.
In this model, a centralized team owns the more strategic tasks—such as experimentation strategy, analysis, and experiment design—while engineering and design resources are provided by the business unit that is requesting the test. This central body is often referred to as a “testing council” or “steering committee” and supports decentralized production teams through coaching and training.
This model can be as effective as a centralized model: Departments do not have to fight over the prioritization backlog, since each owns its production resources, but they are still able to leverage the centralized system for more intensive tasks.
Drawbacks to a hybrid experimentation model
There are two drawbacks to this method, though.
First, accountability can become fragmented. It is difficult to hold a testing team accountable for certain results if they do not have autonomy over the resources needed to produce them. As priorities shift for individual business units, experiments are often the first to be scaled back. This drawback can be overcome with extremely clear program KPIs and responsibilities, but this can be difficult to achieve.
Second, if production resources are not up to a certain standard, results may not be a good reflection of the work of either team.
Overall considerations when it comes to structuring your program
When deciding how to structure your organization’s experimentation program, you will want to weigh the following factors:
1) Talent available in your company or region. A large talent pool can weigh in favor of decentralization, while a smaller talent pool may point to centralization.
2) Whether depth of knowledge in individual business units is required. More depth makes a case for decentralization, while less suggests centralization.
3) How budget is allocated. If budgets are highly fragmented, it may be easier to operate in a decentralized model, which allows each business unit to commit as much as they would like to testing. In highly political organizations, it is common for business units to fight over resources, which can lead to suboptimal tests being prioritized based on power and popularity rather than overall organizational impact. However, if experimentation is a top-down initiative coming from an individual or group with overarching budget control, a centralized team may make more sense.
4) Possible experiment velocity based on website traffic. If your ceiling is 10,000+ experiments, you should consider a decentralized model more seriously than if you only have the capacity to run 40 experiments per year. The lower your website traffic volume, the more centralization makes sense.
5) The existing culture of your organization. Many “fail-fast” startup cultures can more easily adopt a decentralized model, since experimentation comes naturally to them. Large organizations with an aversion to failure may find more success starting with a centralized model for hand-holding as the culture develops.
It is important to remember that these are just three of the most common structures for an experimentation program, sitting on a spectrum from centralized to decentralized. While it is true that different solutions work better in some organizations than others, these models often correlate with maturity and scale.
Smaller, lower-maturity organizations tend to start with a centralized model (often because they outsource), and larger global enterprises tend to decentralize due to sheer size. As companies grow, it is risky to flip between centralized and decentralized directly, so the hybrid model offers a useful transitional stage to bridge skill gaps along the way.
Watch: Here’s an interview with experimentation pioneer and Widerfunnel founder, Chris Goward, and article author, Michael St. Laurent, discussing how to increase experimentation velocity in your testing program:
How you structure your experimentation program is just one piece of increasing your experiment velocity. Over the coming months, we will dig into other components. Look for the tagline “Going from 10 to 100 experiments” for future instalments.