Step 0: Write Good Code, Test as You Build
The best way to save yourself time testing is to do it right the first time (duh). Write standards-compliant code. Use a validator to check for egregious errors. Check a site like caniuse.com to see which browsers support which HTML5 and CSS3 features. Don’t fall into the trap of developing in one single browser. We like to develop in Firefox, but we’re constantly trying the site in Chrome and the latest IE at a minimum.
Step 1: What Browsers Do Your Users Use?
As I illustrated in my last post, our users’ browser usage is quite a bit different from global web usage. We looked at 5 months of data. Really, you want as much data as you can get, but you also want the most recent data you can get, so again there’s a tradeoff. For us, a single month doesn’t bring enough traffic to confidently make big decisions. Five months worked out to a number of visitors where I felt confident in what we were looking at.
If you’re a new site, just pick the most popular browsers – I like StatCounter Global Stats as a data source – and adjust based on how you think users will use your site (for instance, if mobile will be a key component of usage, bump up the importance of Safari on iOS and Chrome on Android/iOS).
Step 2: What Browsers Are You Going To Support?
Here’s the browser usage chart from my last post:
Green are browsers that we fully support, yellow are ones that we partially support, and red are ones that we’re not making any effort to support – we’ll glance at them quickly, and if they work, great; if they don’t, we’re not losing sleep over it. I somewhat arbitrarily decided to cut off our testing at browsers used by 1% or more of our visitors. It was less about the 1% number itself and more about that number looking to be the dividing line between “browsers that a lot of people use” and “browsers that most people don’t use regularly.”
Step 3: Get Real Copies of Each Browser/OS You Need
This is where testing can get a bit expensive, but in my opinion it’s worth it. You have to pick up an iPhone to truly experience a site on an iPhone. Simply shrinking your browser or using an emulator just isn’t the same. Even using the developer tools in IE 10 to change the browser mode to IE 8 doesn’t display exactly the same as a real copy of IE 8. You just need the real thing.
Thankfully among the six of us we already had everything we needed so we didn’t have to buy anything new. Here was our list:
- A Windows 7 or 8 machine for IE 10, Chrome, Firefox, Opera, and a VirtualBox virtual machine running Ubuntu for the Linux browsers
- A Macbook Air for Safari, Chrome, and Firefox on OS X
- An iPhone 5
- A retina iPad
- An iPhone or iPod touch with the older 320×480 screen resolution
- An Android phone running Ice Cream Sandwich or greater
- Spoon.net to run full versions of IE 7, IE 8, IE 9, as well as Firefox < 4 natively in Windows
Step 4: Come Up With a Testing Plan
How you approach this really depends on your site. I can tell you what we did for LockerPulse and Detailed Image, which are on pretty different ends of the spectrum.
LockerPulse was easy. We all use it daily, and we all have different devices, so we just gave everyone access to our development site and instructed different people to use different browsers. This everyday use over a period of months was incredibly effective at flushing out bugs and display issues.
Detailed Image was not so easy. None of us make purchases on it daily, plus we only had about a week for testing instead of the months we had for LP (with LP, we did the front-end marketing site last to allow for extra testing time). So we made a document with every type of common action a customer would perform – adding products to their cart, checking out, contacting customer service, and about a hundred more things – and then instructed everyone to test as many combinations of those things as they could. We assigned each person a different desktop and mobile browser to use during testing. We also compiled a list of the craziest things we’ve seen customers do and tested those as well.
Separately I tested each browser myself for the most important functionality, including some of the “unsupported” browsers like Opera and the Linux browsers just to ensure they weren’t horrible.
Throughout the entire week we got bombarded with things to fix, some small, some large. Bobby in particular was killing it, catching bugs that had been around since at least 2009 on the old site and that we’d never found! If you’re doing it right, the process should be hectic and chaotic (and a little demoralizing). No site of any scale displays properly everywhere right off the bat, nor is it bug-free, so it’s a matter of testing as many common things as you can so that you catch as much of the major stuff as possible.
At a certain point – usually a few days before launch – we “freeze” development, test the important stuff again, and then come up with a launch-testing plan (usually a scaled-back version of that original document). On launch day we run through those tests, which in this case included placing a few actual orders, prior to launching the site to the public. We do this by only allowing our IP address access to the site. Everyone else sees a 503 page that says we’ll be back soon. A good tutorial for doing this can be found on SEOmoz.
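If your site runs on Apache, one common way to do that IP-based lockdown is a few mod_rewrite rules in .htaccess. This is a hedged sketch rather than the exact rules from that tutorial – the IP address and the maintenance file name are placeholders:

```apacheconf
# Serve a 503 "we'll be back soon" page to everyone except our own IP
ErrorDocument 503 /maintenance.html
RewriteEngine On
# Let our office IP through (203.0.113.42 is a placeholder address)
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.42$
# Don't rewrite requests for the maintenance page itself (avoids a loop)
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
# Everyone else gets a 503 status and the maintenance page
RewriteRule ^ - [R=503,L]
```

Returning a real 503 (rather than a 200 on a “down for maintenance” page) also tells search engines the outage is temporary, which matters on launch day.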
Step 5: Figure Out What’s Worth Fixing
We like to fix bugs as we find them, which sometimes leads us to find more issues – that adds to the chaos, but it also keeps things moving forward. The hard part usually isn’t fixing the bugs, it’s figuring out what to fix in the time you have and what to let slide.
The things we always fix:
- Functionality issues – for instance, things that could prevent someone from checking out. I’ll stay up all night, or push back a launch date, before releasing something with a known bug like that.
- Major display inconsistencies across supported browsers
- Display issues that could lower conversion rate – let’s say an iPhone user can’t see an error message for an invalid coupon because it displays off screen. That’s a big deal.
The things we’ll usually let slide:
- Minor display inconsistencies across browsers – if a gradient doesn’t look the same, or there is slightly different spacing on a non-critical page, we’ll note it and possibly fix it in the future.
- Feature preferences – if we disagree about exactly how a feature will convert best, but it works currently, we’ll leave it and make a note to test it in the future.
- Bugs that won’t affect anyone – if someone buys 300 boxes worth of stuff that includes 87 buffers and 20 buckets, we don’t really worry if our shipping system doesn’t quote them perfectly or our box size system doesn’t pick the right boxes. Some extremes are so extreme that there’s almost a 0% chance they’ll ever happen.
Step 6: Test the Most Common Resolutions
If you’ve gotten this far, you’ve likely tested the site in the most common resolutions used by your users. Still, we like to gather some data and make sure we’ve hit everything. Here was our resolution breakdown:
What should have stood out to me on this list was 1024×768. We assumed these were likely all iPad visitors in landscape view. However, when I dug deeper it turned out that the largest share (44%) of those visitors were on Internet Explorer. I thought the days of people using 1024×768 CRT monitors were long gone, but I was wrong! Unfortunately we didn’t discover this until a few days after launch, when we received a few nasty emails from these customers complaining about seeing the “tablet” view in the responsive design. We quickly tweaked a few things about that view to make it more computer-friendly, but had we looked at the numbers more closely before launching we could have avoided those complaints.
Lesson learned: next time we’ll look much deeper at the browser/OS breakdowns for all of the popular resolutions.
Step 7: Test Common Browser Settings
People love to mess with their browser settings. We try to test the most common ones to ensure the site still looks right and functions properly. The most common things we’ve seen people do are change the default zoom and/or font size. If you’re using a lot of percents and ems (as you tend to do in responsive design), this has the potential to throw off your intended display. As we’re developing I like to zoom way in and way out just to ensure nothing noticeably bad is happening. At the end of our testing, I’ll change the zoom and font-size settings in Chrome and test the site.
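Em- and percent-based rules are relative to the user’s settings by design, which is exactly why a bumped default font size can throw a layout off. A tiny illustrative sketch (the class name and product are made up):

```html
<!-- Illustrative only: with the browser default of 16px, the heading
     renders at 24px (1.5em). If a user raises their default font size
     to 20px, the same rule yields 30px, and the 50%-wide box now has
     to fit noticeably larger text - exactly the kind of shift that
     zooming way in and out during development helps you catch. -->
<style>
  .product-box { width: 50%; padding: 1em; }
  .product-box h2 { font-size: 1.5em; }
</style>
<div class="product-box">
  <h2>Example Product Name</h2>
</div>
```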
Bonus: Supporting Internet Explorer
I was dreading supporting IE 8 and IE 7, but the numbers dictated that we had to. It actually wasn’t nearly as bad as I thought it would be. We didn’t even look at them until just before the testing phase. We were able to get them 90% of the way there by doing two quick things:
- Included HTML5Shiv in a conditional comment targeting Internet Explorer versions below 9
- Since these versions don’t support CSS media queries, we copied over the styles for the 1024×768 view to a stylesheet for IE 7/8 and again included it with a conditional comment
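In markup, those two fixes look roughly like this – the file paths are placeholders, and html5shiv can also be loaded from a CDN:

```html
<!-- Hedged sketch of the two IE fallbacks above. Browsers other than
     IE < 9 treat the whole block as an ordinary HTML comment. -->
<!--[if lt IE 9]>
  <!-- html5shiv lets old IE recognize and style HTML5 elements -->
  <script src="/js/html5shiv.js"></script>
  <!-- IE 7/8 ignore @media queries, so serve them a fixed stylesheet
       containing the copied-over 1024-wide styles instead -->
  <link rel="stylesheet" href="/css/ie-old.css">
<![endif]-->
```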
After that, I spent a few hours cleaning up things using that IE stylesheet and that was it. IE 8 matches the desired site better than IE 7, but both are fully functional.
So How Did We Do?
We’re only a month in, but I’d say we did pretty well. Sales are up this past month. Conversion rates, especially on mobile, are up. Aside from those few complaints from the 1024×768 CRT crowd, and a few people who don’t love the aesthetics, we haven’t heard anything negative from our customers. Most importantly, we haven’t heard that anyone can’t check out.
It was, all in all, the most successful launch I’ve ever been a part of. Which is why all of this crazy browser and usability testing is totally worth it.