And boy, was it a doozy. We collected and analyzed more than 4 million data points across four cities and more than 20 business categories to understand how the local algorithm functions across industries. The project spanned 8 months, 10 team members, and many, many hours. And I’m extremely proud of the team who put this together (seriously, go check it out).
When we first kicked off this study, we thought it would be a relatively straightforward process. Get data, analyze data, write up insights, publish. How hard could it be?
Turns out it was much harder than we expected! So, I figured I’d share our lessons learned with you all so that when you take on creating your own industry study, you can manage your own expectations better than we managed ours.
Lesson #1: It will take longer than you expect. ⏳
We were… ambitious with our first project timeline. Originally we planned to launch this thing within 12 weeks. I think we could have hit that publish date if this had been the only thing the team focused on for those 12 weeks, but balancing this with client work (which always takes priority) proved to be challenging for everyone involved. We all put in a lot of late nights to pull this off, and in the end this took about twice as long as we expected it to.
My biggest piece of advice is to schedule yourself blocks of time to really get into the flow when working on a project like this. I’m a fan of Cal Newport’s “deep work” concept, and practicing it kept us moving along. This kind of analysis requires a different level of focus than responding to emails or working on a content audit; allow yourself room in your schedule to get into that headspace and make progress.
Lesson #2: The people doing the analysis should also write it up. ✍️

It seems obvious, but let me explain our missteps. Here’s what we tried first:
- We had an analyst (who is an SEO) comb through the Power BI file and pull out correlating factors, city-specific and industry-specific insights, and data points that stood out as outliers.
- Then we planned for our writers (who are also SEOs) to interpret those insights and write up the analysis.
- Then we planned to pass it off to our copy editors and designers to pretty it up, and our marketing team to promote.
- Along the way, we let our marketing team and some industry publications know this was in the works so we could tee up promotion opportunities. In retrospect, this was premature.
In theory, this seemed like an efficient way to manage this process and avoid unnecessary hang-ups in the publishing process.
In practice, this resulted in repeated analysis efforts, meetings to review findings, and long email threads trying to explain things to people who weren’t included in those meetings. The writers had a difficult time writing a cohesive explanation for the insights the analyst pulled without having done the analysis themselves. We somehow managed to pull in designers both too early and too late in the creative process, which led to some confusion about the visual needs for the asset.
We ended up roughly following this process, which we’ll repeat for our next study:
- We assigned each person a question they were responsible for answering.
- Each person conducted their own analysis and wrote up their own findings.
- The Local team peer-reviewed each section and opened it up to the wider Seer team to poke holes in it.
- We met to discuss each section and the challenges we were facing.
- We pulled all sections together and copyedited for clarity and consistency.
- After the copy was finalized, we handed it off to designers to create visual assets to assist in telling the story.
- We kept our marketing and social media promotion team in the loop with timelines so they could support the launch. When the time finally came, they had assets queued up so we could fully launch with a coordinated effort.
Lesson #3: You will (and should!) question your methodology and findings along the way, but don’t let that prevent you from publishing. 💭
This one honestly caught us off guard. It was a beast to push through, and every person who worked on this experienced some element of self-doubt throughout the project. We’d find something unexpected in our dataset and immediately come up with ways to invalidate it, despite what the data was showing us. Did we pull the right data? Did we pull enough data? What if our methodology was flawed?
But we’d developed our methodology and flipped it inside out and sideways plenty of times before we even requested data. We’d read multiple industry studies that were organized in similar ways. We’re all experienced SEOs who are confident in our analysis chops, so why did we find ourselves questioning our findings so much?
I think it’s because we’re all experienced analysts that we questioned our findings. We’ve learned over the years to follow data, question assumptions, and evaluate everything from as many angles as possible. Those are important qualities in an analyst that translate to well-thought-out digital marketing strategies. But, they really got in our way when we took on this project.
Here’s my advice for pushing through that doubt:

- Trust yourself and trust your data.
- Front-load your questioning and poke holes in your methodology early on so you can move forward with your analysis confidently.
- Peer review early and often.
Don’t let analysis paralysis get the best of you! There will always be more ways to look at the data and different ways to interpret it. But if you try to answer every single question completely, you’ll never hit that publish button. It’s better to put something out there than to hold it back trying to make it perfect.
Also, your first draft doesn’t have to be your final one. You can always publish revisions or follow-up studies from the same dataset.
Reflecting on the whole journey, this was a valuable exercise in coordinating resources, estimating timelines, conducting analysis, and overcoming perfectionism. Hopefully, these lessons can help some of you launch your own study!
Want to learn more about how Seer approaches big data? Sign up for the Seer newsletter to stay up to date on what we’re working on!