Blog - Articles in the ‘Methodology’ Category


Testing Desktop & Mobile in a Single Usability Session

May 8th, 2013
by sabrina.shimada


In the past year we have seen an increasing demand for multi-device usability testing. Multi-device usability testing involves asking users to perform tasks on more than one device in a single session; devices may include desktop computers, tablets, and mobile phones. Testing multiple devices in a single session has been useful when testing Responsive Web Designs, as well as testing the fluidity of a brand’s experience from desktop websites to mobile and tablet websites, and/or mobile apps.

One benefit of conducting testing across devices in one session, versus testing designs on a single device per session, is that the user can shed light on the overall experience and consistency across devices. How fluid is the experience? How easy is it for the user to transition from desktop to mobile and vice versa? Not only does testing multiple devices with one user provide insight into the overall experience, it can also reveal how and why users gravitate to one device for certain tasks over another.

What we have learned from conducting these tests is that preparation and flexibility play key roles in making sure this type of testing goes smoothly. Below are a few tips to help you prepare for and conduct multi-device usability testing:

Recruiting

  • Recruit to reflect your real audience: If 90% of those using your mobile app are iPhone users, then testing should focus on iPhones vs. recruiting an even mix of iPhone and Android users.
  • Forget the Gadget Lab: We have seen greater success in this type of testing (and mobile-only testing) by having users bring in their own devices. Not only is the user more confident navigating the device, but there are often added insights: with the user’s own device, you may get to see what apps the user has downloaded and how they have organized the information on their phone. Just be sure to clearly specify the types of mobile devices, model, and version of software / operating system (e.g., Froyo, Gingerbread, etc.) required for testing during the screening process.
  • Get the OK to Download: If users will need to download anything on their phone or tablet, it is best to ask this during the screening process to avoid any trouble down the road.

Discussion Guide

  • Counterbalance Starting Device: Typically we will start an equal number of participants on each device (e.g., N=12 / 6 start on desktop, 6 start on mobile), unless the team is focused on the usability of one more than the other. We also make sure that we have an even mix of participant types (if research has segments) for each starting device. For example, if we have a total of 12 participants, with 6 Prospects and 6 Current Users, then 3 Prospects and 3 Current Users will start on desktop and 3 Prospects and 3 Current Users will start on mobile.
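The counterbalancing described above can be sketched as a simple assignment routine. This is a hypothetical illustration in Python, not part of any eVOC tooling; the segment names come from the example in the bullet:

```python
def assign_starting_device(participants, devices=("desktop", "mobile")):
    # participants: list of (participant_id, segment) tuples.
    # Alternate the starting device within each segment so each
    # segment is split evenly across starting devices.
    counters = {}   # segment -> how many participants assigned so far
    schedule = {}
    for pid, segment in participants:
        i = counters.get(segment, 0)
        schedule[pid] = devices[i % len(devices)]
        counters[segment] = i + 1
    return schedule

# 12 participants: 6 Prospects and 6 Current Users, as in the example above
participants = [("P%d" % n, "Prospect") for n in range(1, 7)] + \
               [("C%d" % n, "Current User") for n in range(1, 7)]
schedule = assign_starting_device(participants)
# Result: 3 Prospects and 3 Current Users start on each device
```

In practice the participant order within each segment would also be randomized before assignment, but the alternation above is what guarantees the even split.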
  • Repeating Tasks is OK: It is okay to repeat tasks across devices; we have found that usability issues are still uncovered. Users will have expectations based on their initial experience, but this happens in real life, too! It is nice to add some variety to the tasks if possible, but do not stress if prototypes only allow for the same tasks to be performed on the various devices. By shuffling the starting device, the team will still be able to get initial impressions on the experience from a portion of the users. In fact, if you are looking for an apples-to-apples comparison across devices, it is important that you have the user perform the exact same task on each device.

Set Up

  • Wi-Fi Ready: We typically ask users to join the Wi-Fi network on their phones or tablets prior to starting the session to save time. If you are conducting labs at a facility, it’s helpful to ask the front desk to share the Wi-Fi information with participants upon arrival.
  • Bring a Back-Up Device: Yes, we did say to forget the gadget lab; however, it does not hurt to have a back-up device for unpredictable technical difficulties. Typically, we bring the most popular mobile / tablet device used to access the site/app, in case the user’s device fails during the test.
  • Device Hot Spot: To ensure that the respondents’ actions on their mobile device are captured, it is best to designate an area for the camera to target. Typically, we tape off the area where the users should keep their device in order for it to be successfully captured on camera.
  • Capture All Angles: Many of our clients view sessions remotely, and it is a much more engaging experience when all angles of testing are captured: the respondent’s face at all times, plus the desktop screen, mobile phone, or tablet. To do this, we typically have multiple cameras in the room, controlled by our video technician in the backroom. The video technician is on site during the sessions to switch between cameras so that all angles are captured at the appropriate times.

Conducting the Usability Session

  • Keep it Real: One thing to avoid in user testing is forcing the participant to complete tasks on a device he or she is not familiar with or would not use in real life.
  • Ask Wrap-Up Questions: One of the biggest benefits to multi-device usability testing is the ability to understand the overall experience across devices, so don’t forget to ask questions about this! Some questions you might ask are, “How does the desktop experience compare to the mobile experience?” or “Are there any tasks you prefer to do on one device vs. another?”

That sums up our tips for preparing and conducting multi-device usability testing. As each research project is unique, there are always ways to refine and adjust the methodology to ensure that the research objectives for your project will be met. If you have any questions, please do not hesitate to reach out to us at sales@evocinsights.com.


Survey Questionnaire Templates

May 16th, 2011
by claudette.levine


Questions Every UX Survey Should Ask

Survey Template Example Questions

We are often asked for example survey questionnaire templates for evaluating the effectiveness of a website, and while each survey deserves its own unique set of questions and considerations, there are certain questions we feel are standard and important. Having consistent questions across your surveys not only makes it easier on you as a researcher, but also serves as a basis for establishing benchmarks. Over time, you can create your own library of benchmarks and understand what a high-performing site looks like versus a low-performing one.

First, a few things to think about when designing your questions.

1. How will people be coming to your survey? In other words, if it is an intercept and you want to learn who is coming to your site, then your questions will need to establish a clear understanding of who they are, how familiar they are with the site, and what they hope to accomplish. If it is a targeted survey that is emailed to respondents (e.g., customers or prospects who are specifically contacted for their feedback), then your introduction and profiling information need not be as extensive.

2. How much time/patience do they have? Remember, time is of the essence and everyone multi-tasks these days. Intercept surveys should not be longer than 20 questions, or 5-10 minutes in total (including entry and exit questions). Emailed invitations for surveys can be longer - up to 60 questions, or 30 minutes in length. And if your audience is more sophisticated (e.g., business decision makers or physicians), then you should reduce the amount of time required to complete the survey. Your industry and brand loyalty may also impact willingness to participate, so it is helpful to run a test with a small sample of users to understand your incidence of completes.

3. Are you offering any incentive for completion? This may make respondents more engaged and willing to answer more questions; however, if you ask too many questions, you may end up with the extremes of respondents (e.g., the most loyal and the least satisfied users), which can skew your results.
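To put the incidence point in concrete terms, the number of invitations you need can be estimated from your target completes, incidence, and completion rate. This is a back-of-the-envelope sketch; the rates below are hypothetical:

```python
import math

def invites_needed(target_completes, incidence, completion_rate):
    # Invitations required = completes / (share who qualify * share who finish)
    return math.ceil(target_completes / (incidence * completion_rate))

# Hypothetical: 300 completes needed, 40% of invitees qualify,
# and 60% of qualifiers finish the survey
n_invites = invites_needed(300, 0.40, 0.60)  # 1250 invitations
```

Running a small pilot first, as suggested above, gives you real values for the incidence and completion rates before you commit to a full send.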

The survey template example questions include:

  • What is your familiarity with this site?
  • How did you first hear about this site?
  • Overall, how satisfied were you with this site today?
  • How easy or difficult was it to use this site today?
  • Which of the following frustrations, if any, did you encounter on the site today?
  • Which of the following words would you use to describe this site?
  • Based on your experience, how likely are you to do the following?
  • What is your gender?
  • Which of the following best describes your age?
  • What is your highest level of education?

Click here to download the Survey Template Example Questions.



Methodology Spotlight: Competitive Research

January 14th, 2010
by Stacey Crisler


Keeping up with the competition

Competitive Research offers insights into how to turn new visitors into loyal customers and keep your current customers invested in their relationship with you. It will provide an understanding of the current usability and appeal of your site in direct comparison to your leading competitor(s). By determining what your customers and potential customers think about you and your competition, you can position your site for success, converting new visitors and retaining your loyal customers. In this article, we will describe two Competitive Research methodologies and provide examples of the types of insights that can be gained from this type of evaluation.

Goals and Objectives

Typically, Competitive Research is conducted to meet the following goals:

  • Measure site performance against key competitor sites
  • Benchmark your site against industry leaders
  • Understand major differentiators that drive purchases on your site versus the competition
  • Determine which features to showcase to gain a competitive advantage
  • Identify short-term and long-term recommendations for site improvements

While Competitive Web Research can be conducted in all industries, the most common are:

  • Travel
  • Retail
  • Mobile
  • Financial
  • Automotive
  • Consumer Products
  • Pharmaceutical
  • Technology

Detailed Methodology

Target Audience

There are two key target audiences for Competitive Research: your customers and prospective customers. Depending on your research goals, it may make sense to select one audience or the other or a combination of both. The first audience to evaluate is your customers. By testing both your website and your competitor’s with your current customer base, you can understand answers to key questions like:

  • What is important to your customers?
  • What are you doing that pleases them?
  • What are your competitors doing better than you that could lure your customers away?

The second audience is prospective customers, including your competitors’ customers. Testing your site with this audience will provide insights into:

  • How do these potential customers see your site?
  • What draws their attention and their spending to other sites?
  • What would you need to do to entice them to switch to using your site?

A combination of these two audiences may be used to assess both retention and acquisition. A minimum of 100 users per segment is recommended with 200 users viewing each site. This will provide statistically significant results that inform key business decisions with as little margin of error as possible.
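As a rough sanity check on those sample sizes, the worst-case margin of error for any percentage metric can be computed directly. This is a sketch using the standard formula for a proportion at roughly 95% confidence, with p = 0.5 as the conservative assumption:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Worst-case margin of error for a proportion at ~95% confidence
    return z * math.sqrt(p * (1 - p) / n)

moe_100 = margin_of_error(100)  # ~0.098 -> roughly +/-10 points per segment
moe_200 = margin_of_error(200)  # ~0.069 -> roughly +/-7 points per site
```

So with 200 users per site, a reported 60% satisfaction is really 60% plus or minus about 7 points, which is tight enough to separate sites that differ meaningfully.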

Study Design

There are two possible designs for Competitive Research: Within Subjects (Sequential Monadic) and Between Subjects (Monadic).

Within Subjects: A Within Subjects design means all of the participants in your study will see all of the sites in your competitive set. This is also known as sequential monadic testing in traditional market research. The benefits of this design include the ability to determine a head-to-head preference and greater cost-effectiveness, since it typically requires fewer participants. One drawback is that participants will not be able to perform as many tasks on each site as they would if they were reviewing only one site. This means Within Subjects studies are ideal for tasks that participants are likely to perform on multiple sites in succession, such as shopping for a particular item or researching mobile phone service plans. The design emulates participants’ natural behavior of moving from one site to another to accomplish a particular task, comparing what they find on each site along the way. In this design, participants are taken to the sites in a random order to mitigate bias and are asked to perform the same task on each site. After completing the task on a site, participants answer follow-up questions about their experience before being taken to the next site. After completing the task on all of the sites - generally up to 3 or 4 - participants are asked about their preference between the sites, on which site they would be most likely to take action, and why.

Between Subjects: In a Between Subjects design, participants are divided between the competitive sites and are asked to perform multiple tasks on a single site with no knowledge of the competitive nature of the study. This is also known as a monadic test design in traditional market research. Participants are asked the same questions on each site after completing the same tasks on each site, allowing for a direct comparison of the data in the analysis phase. This type of competitive research is beneficial if there are 4-5 tasks you would like users to perform on a site - too many to ask participants to complete on multiple sites. Between Subjects is also useful if visiting multiple sites would not be part of the natural process of the behavior being studied. For example, a financial institution may want to evaluate the account management tools they are offering clients against what the competition is doing. It is unlikely that the majority of participants move through multiple banking accounts in sequence looking at these tools. So, in order to replicate the natural experience, it would make more sense to have the bank’s customers evaluate their tools and to recruit customers of the competition to evaluate the performance of the tools in their accounts. The drawback to a Between Subjects design is the inability to ask about a direct preference.

In both designs, detailed questions will be used to probe what participants like about a site, what they do not like, and what would cause them to take action. Both quantitative and qualitative questions in combination with tracked behavioral data will provide a complete picture of how participants are using the sites and the differences in site experience. As this data is collected in the same manner for each of the sites evaluated, the site and brand experience can be compared apples to apples across the competitive set. This provides a clear picture of what is and is not working on your site, as well as the competitors’ sites, informing your strategy to attract and retain customers.

Analysis and Insights

After a Competitive Research project has finished fielding, data analysis begins by comparing the experiences across the various Websites. Differences in the quantitative data are tested for statistical significance and the behavioral and qualitative data are analyzed to understand how participants use each site and the “why’s” of how they feel about each experience. Bringing all this data together for each site evaluated, we can identify what is driving conversion and loyalty on each website to determine where the gaps and opportunities are for your site amongst the competition. Additionally, data can be segmented by current customers and new customers to determine if the groups are looking for different things in a site to help determine how to best serve both of these audiences.
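The significance testing described above typically boils down to comparing proportions between sites, for example with a pooled two-proportion z-test. This is a minimal sketch; the counts are invented for illustration and are not from any eVOC study:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    # Pooled two-proportion z-test, e.g. for a top-box satisfaction metric
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 140/200 satisfied on your site vs. 120/200 on a competitor
z = two_proportion_z(140, 200, 120, 200)
significant = abs(z) > 1.96  # significant at the 95% confidence level
```

With 200 users per site, a 70% vs. 60% gap clears the 95% threshold, which is why the sample sizes recommended above matter: the same 10-point gap at 50 users per site would not.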

Analysis - Overall Experience Metrics:

  • How does each site perform on the metrics that are key to a good customer experience?
  • Where does your site outperform others?
  • Where does it underperform? Why?
  • How does it impact how participants would use the sites?

Analysis - Preference:

  • What sites do participants prefer before their experience?
  • What about afterwards?
  • What sites are they likely to use?
  • What tools / features / content drive preference and response to calls-to-action?

Analysis - Best Practices:

  • What are the best features of competitive sites?
  • What features and functionality work?
  • What functionality has the competition implemented well?
  • What did the participants dislike?

Based on this analysis, recommendations will be made on what is working well on your site that distinguishes you from the competition and that should be retained, improved or highlighted. Additionally, recommendations will be made to address gaps in content, functionality or branding between your site and those of the competition. Best practices and key features and functionality will be identified and suggestions will be made on how to integrate these into your site. The overall result will be a stronger web presence that takes advantage of your site’s key strengths, acts on opportunities to convert new customers, and retains the loyalty of existing customers - all while establishing you ahead of the competition.

Many clients already track the competition through partners like Comscore, Compete or Hitwise. However, these firms offer aggregated data that provides only a cursory view of the competitive landscape. eVOC’s research approach goes deeper than traffic data by directly comparing specific processes and pages to provide actionable recommendations. What is also unique about eVOC’s approach is that participants truly interact with the websites as they normally would, rather than answering survey questions after the fact. This interactive surveying technique allows us to understand users’ intentions and combine attitudinal data of what people say with behavioral data of what they actually do, providing a crystal clear picture of how to beat the competition.

Examples of Competitive Research

  • Assisted a retail client in improving its checkout process by reducing abandonment and improving the shopping process to better meet the needs of its most loyal customers
  • Helped a pharmaceutical company understand how to integrate the content from two merging companies into a single site combining the best of what both companies had to offer
  • Helped a travel company determine how to improve its product display pages to increase conversion on its site and reduce the loss of business to competitors

For more information on Competitive Research, download our Competitive Research Overview or contact us with questions!


Understanding The Online Decision Process: Open Web Research

July 18th, 2009
by Stacey Crisler


Open Web Research allows you to observe your target customers as they naturally explore the Web and search for information about your product or service. As you ask them key questions along the way, you gain insight into their motivations, behaviors, likes and dislikes on the Web. This information can be used to inform branding, marketing and search strategies, as well as provide insight into your competitors and identify best practices in your space. In this article, we will describe how Open Web Research works and provide examples of the types of insights gained from this type of survey.

Goals and Objectives

Typically, Open Web Research is conducted to meet the following goals:

  • Identify target audience expectations and motivations when conducting online research for a particular topic or product
  • Learn how users search or browse the Internet for the information - where they start, what keywords they enter, how they navigate from one domain to the next, which sites they prefer, which features / functionality and resources they find useful, high dwell pages, etc.
  • Assess the strengths and weaknesses of competitive websites and identify how to most effectively differentiate oneself
  • Determine what features / functionality and content are most effective at attracting and retaining site visitors
  • Identify areas of unmet needs among target audience across a competitive landscape

While Open Web Research can be conducted in most industries, the most common are:

  • Pharmaceutical
  • Retail
  • Consumer Products
  • Automotive
  • Travel
  • Technology
  • Mobile
  • Financial

Detailed Methodology

Target Audience

Typically, the target audience for Open Web Research falls in one of two categories. It either mirrors your current customer base because you want to understand more about them in order to increase your market penetration, or you may have developed a profile of a customer you do not yet have, but would like to acquire. Sample sizes for this type of study are larger than many online evaluations in order to gain statistically significant data on the sites visited by users. Recommended sample size is a minimum of 300 users per major segment.

Study Design

The design of an Open Web Research study differs from most typical online studies. While you may include some basic introductory questions to understand the demographics or profile of a user, you want to minimize the number of questions about users’ awareness and usage of sites. This ensures that you have not biased them when they naturally explore the Web during the main task of the study: the Open Web task.

This task is designed to understand how users use the Internet to research or find information about a company, product or service. Users begin the Open Web task on a blank Web page and are given instructions on what the intent is of their search. For example, “Use the Web to research the symptoms and treatments for Asthma.”

While users complete their task, they are asked questions that help uncover their motivations and experience. First, users are probed about their motivation to begin their research at a particular site. Was it in their favorites? Was it top of mind? Do they need to turn to a search engine to determine where to go?

As users move between subsequent sites, they will be asked to assess their experience on the site they left: What did they like and dislike? How would they rate their satisfaction with the site and what it offered? Would they return to the site or take other action? And what are they hoping to find on the next site they visit? This detailed questioning paired with the behavioral data provides a complete picture of each user’s Web experience - we can understand not only where they went, but why they went there and what they thought of each site they visited.

Following the Open Web task, users are often taken to the client site if they did not visit it during their natural exploration. They are then instructed to conduct the same task that they completed during the Open Web task, but this time on the client site. The same questions about likes and dislikes, satisfaction and calls-to-action are asked to provide a detailed comparison between the client site and the others visited within the Open Web task.

Analysis and Insights

Following the fielding of an Open Web Research project, intensive data analysis begins to first tie the behavioral data to the comments and ratings given on each site visited, and then to determine how users browse the Internet in their search for information. The final step is to understand how all the sites visited perform in terms of satisfaction, ease of use, calls-to-action, etc.

Based on this analysis, recommendations will be made on how to attract users to the client site, such as search engine optimization, partnerships, branding, etc. Additionally, recommendations as to the usability and content of the client site will be made in order to assist in retaining current customers, while also gaining potential customers through new acquisition activities. The result is a stronger Web presence, targeted marketing efforts, improved site content to drive call-to-action and an understanding of how to better position your site among the online competition.

Example of Recommendations / Impact

  • Helped a pharmaceutical company understand what content and functionality are most critical in driving prescription requests and drug compliance
  • Guided a travel company to understand how to position itself in search results and whom to partner with
  • Determined how branding was impacting a retailer’s market share and how to reposition the brand to retain customers

For more information on Open Web Research, download our Open Web Research Overview or contact us with questions!


Methodology Spotlight: The Benefits of Adding Eye Tracking to Usability Labs

January 19th, 2009
by Phil Scarampi


Usability Labs have long been considered a speedy, effective method of uncovering fundamental usability barriers on Websites. They are especially useful before launching a site or conducting an in-depth quantitative study. Here is a quick refresher on how Usability Labs work:

  1. Target users of a Website are invited to a research facility to participate in a usability study
  2. A moderator (from eVOC) conducts one-on-one interviews with the participants, asking them to complete tasks on the site(s) and answer questions about their experience
  3. Sessions are audio and video recorded, while clients observe the sessions from a separate room through a one-way mirror

What Sets Usability Labs Apart
Usability Labs are so effective because:

  1. All variables are accounted for
    • Internet connection speed, monitor resolution, and physical environment are the same for everyone
    • The moderator can control exactly which pages users explore
  2. Users’ facial expressions, physical reactions, and site behavior can all be observed
  3. The discussion guide is flexible
    • The moderator can probe on key areas on the fly
    • More specific, user-driven recommendations can be collected
  4. Usability interviews take 30-90 minutes, which exposes participants to more content than a survey would

It’s not surprising that many of our clients choose Usability; in fact, it’s unbeatable for evaluating a site’s information architecture, nomenclature, and overall navigability.

But what about effectiveness of navigation, ease of use, and information flow? Sure, Usability findings are great at illuminating these areas. But an advanced technology called Eye Tracking is now enabling us to uncover insights that, when combined with Usability, paint the most complete picture of a Website’s performance. We have found that Eye Tracking allows us to make discoveries about Websites that we never could have made doing Usability alone.

How Eye Tracking Works
So how does it work? Using a special Eye Tracking monitor that tracks participants’ eye movements, we can log exactly where users are looking as they browse each page of a Website. The data can then be visually quantified.

What Eye Tracking Tells Us
The Eye Tracking software also allows us to capture usage data, such as how long it takes users to click or complete tasks, and what percentage of users take a particular path. Here are just a few examples of the insights we can glean when we complement Usability with Eye Tracking. We can:

  • Determine what areas users overlook (e.g., advertising) and complement with usability findings to explain why
  • Calculate how long it takes users to complete tasks in different navigation designs and choose which one performs best
  • Discover if users tend to notice certain page areas early or late, and provide recommendations on prioritizing navigation and content
  • Study what paths users take as they browse a site and determine their effectiveness, as well as key differences between user segments
  • Learn how users scan information and find out the most effective content layouts

We strongly encourage any company that is planning to conduct Usability Labs to consider adding Eye Tracking, particularly to test visibility of key areas, navigation, messaging, and/or ads. To learn more, go to http://www.evocinsights.com/services_labbased.html#eyetracking.

