We specialize in optimizing the user experience (UX) through data-driven insights. Our research services are designed to elevate the usability, functionality, and overall satisfaction of digital products. By conducting usability tests and in-depth UX evaluations, we help businesses uncover the challenges their users face, ensuring smoother interactions and higher conversion rates. Whether it’s analyzing how users navigate a prototype or live website, or pinpointing where and why users struggle, our moderated tests reveal crucial pain points that can lead to a 135% increase in usability and a 100% improvement in conversion rates.

We also offer field user testing for long-term, real-world insights, ensuring your product thrives in actual usage scenarios. Our methods go beyond surface-level interactions, providing rich, contextual feedback from your target users. From card sorting and tree testing for navigation optimization to heuristic evaluations by our UX experts, we tailor our services to meet the specific needs of your digital product. Let us help you make informed design decisions that will not only enhance user satisfaction but also drive business growth.

A usability test (AKA moderated user test, user experience test, usability lab, conversion lab) is the optimization tool for usability, conversion, interaction, and the general user experience of digital products and advertising media. Our moderators observe 6-20 users from the target group as they solve given use cases on prototypes or live websites and apps, thinking aloud as they use the test material. Afterwards, various aspects of the experience are explored in an in-depth interview.

TYPICAL QUESTIONS

  • How is a new prototype or feature received?
  • Where are users having usage problems?
  • Why do so many users abort at a certain point in the ordering process?
  • Why do so many customers contact our customer service?
  • How can we improve the conversion rate of a landing page and the entire user journey, including advertising media?
  • Do the content's tonality and presentation fit the target group?

HOW DOES IT HELP?

  • A better understanding of how target audiences use a product or website
  • An average usability increase of 135%*
  • An average conversion rate improvement of 100%*
  • Efficient use of resources (UX conception, UX design, programming) by focusing on what is important to users
  • Reduced support efforts
  • Maximized ad impact through a higher CVR

LIMITATIONS

  • The method focuses on finding all usability problems and does so very well. It is not suitable for producing representative results.
  • It is not well suited to questions of taste, e.g., graphic design or the acceptance of features.

WHEN TO USE?

  • As soon as lo-fi prototypes are available, a first usability test can be run. It can also be done with more refined prototypes or with live apps and websites.

COMBINE WITH

  • A representative survey to obtain design or functionality preferences. Card sorting before functional layouts and prototypes are built, to create a sustainable foundation for good usability.

VARIANTS AND OPTIONS

  • On-site (with a one-way mirror) or remote, with phones, desktops, or TVs. An unmoderated version can help gather data from bigger samples on live websites and apps.

Field user testing covers the UX and usability of products in real-world use, in context, and over a longer time. Selected representatives of the target group are accompanied as they use digital and physical products over an extended period. We regularly gather feedback on various use cases and scenarios through self-reports (e.g., text, voice, videos, photos) and interviews.

TYPICAL QUESTIONS

  • What problems are encountered in longer, real-world use?
  • Which usage patterns emerge after extended real-life usage?
  • Which settings do users choose for my product?

HOW DOES IT HELP?

  • Information about the behavior and experiences of users in real-world contexts
  • Organic and contextual behavioral insights
  • Longer-term usage can provide deeper insights than one-off observations.

LIMITATIONS

  • The effort and duration are relatively high compared to in-lab tests.

WHEN TO USE?

  • For more complex products (e.g., apps, consumer electronics) that are used intensively, over a longer time, and in special contexts. This approach requires a working product, at least at beta level.

COMBINE WITH

  • In-lab testing before rolling out the field test, in order to eliminate the major usability problems and tech bugs.

VARIANTS AND OPTIONS

  • Duration (a few weeks up to several months), one-off vs. continuous. The way feedback is gathered can vary (e.g., spontaneous vs. solicited reports from participants), as can the format (e.g., voice message, video call, etc.).

With card sorting (or an “open card sort”), users’ mental models can be observed and used as input for sitemaps and functional layouts.

In an online survey, 100-200 participants sort content elements into groups as they see fit. A subsequent cluster analysis reveals how the target group hierarchically relates the content elements. With an appropriate grouping of content, usage effectiveness and efficiency can be optimized even before interaction design begins.
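
For illustration, here is a minimal sketch of the kind of cluster analysis behind an open card sort, assuming each participant’s sort is recorded as a mapping from card to self-chosen group. The cards, group labels, sample data, and libraries (Python with NumPy/SciPy) are our own illustrative choices, not a fixed part of the method; real studies use far more participants and cards.

    # Minimal sketch: hierarchical clustering of open card-sort data.
    # Cards, group labels, and participant data below are illustrative.
    from itertools import combinations
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    cards = ["Returns", "Shipping", "Invoices", "Warranty", "Contact"]
    sorts = [  # one dict per participant: card -> self-chosen group
        {"Returns": "service", "Shipping": "orders", "Invoices": "orders",
         "Warranty": "service", "Contact": "service"},
        {"Returns": "orders", "Shipping": "orders", "Invoices": "billing",
         "Warranty": "service", "Contact": "service"},
    ]

    # Co-occurrence: how often each pair of cards lands in the same group.
    n = len(cards)
    co = np.zeros((n, n))
    for sort in sorts:
        for i, j in combinations(range(n), 2):
            if sort[cards[i]] == sort[cards[j]]:
                co[i, j] += 1
                co[j, i] += 1

    # Turn similarity into distance and cluster with average linkage.
    dist = 1 - co / len(sorts)
    Z = linkage(squareform(dist, checks=False), method="average")
    print(fcluster(Z, t=2, criterion="maxclust"))  # candidate content groups

The resulting clusters suggest which elements the target group perceives as belonging together, which then informs the sitemap.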

TYPICAL QUESTIONS

  • Which elements should be grouped together in a sitemap or on a page?
  • Which elements should be highlighted?
  • Where do we need cross-references?

HOW DOES IT HELP?

  • Improved findability of content, and thus greater usage depth and breadth
  • Fewer abandoned searches
  • Lays the foundation for good usability

LIMITATIONS

  • Content elements (i.e., content, functionality) for a website or app must be known

WHEN TO USE?

  • When content and function ideas are available and can be described

COMBINE WITH

  • A tree test after a sitemap has been defined. A usability test when functional layouts are available.

VARIANTS AND OPTIONS

  • An online card sort can usually handle 50-60 content elements. If there are more elements to sort, a larger sample can be used in a special setting.

The tree test (AKA reverse card sorting, structure test, navigation test, sitemap test) determines whether a navigation structure fits the mental model of the target group and how it can be improved. For this validation, we invite a sample of 200-300 participants from the target group and present them with a series of use cases, such as “try to find a certain product on the website.” Participants then solve the tasks in a mock-up navigation structure, where we measure the percentage of correct solutions and identify where participants ran into dead ends. With this knowledge, the navigation structure can be optimized, boosting conversion rates and user experience.
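
As a rough illustration, tree-test results boil down to simple per-task success metrics. The sketch below assumes each attempt is logged as a (participant, task, node reached, correct node) record; the data, node paths, and field names are hypothetical.

    # Minimal sketch of tree-test scoring. Each record is assumed to be
    # (participant, task, node_reached, correct_node); data is hypothetical.
    from collections import defaultdict

    results = [
        ("p1", "find drill bits", "products/accessories/bits", "products/accessories/bits"),
        ("p2", "find drill bits", "services/repair", "products/accessories/bits"),
        ("p3", "find drill bits", "products/accessories/bits", "products/accessories/bits"),
    ]

    per_task = defaultdict(lambda: {"ok": 0, "total": 0, "dead_ends": defaultdict(int)})
    for _, task, reached, correct in results:
        stats = per_task[task]
        stats["total"] += 1
        if reached == correct:
            stats["ok"] += 1
        else:
            stats["dead_ends"][reached] += 1  # where participants got lost

    for task, s in per_task.items():
        rate = 100 * s["ok"] / s["total"]
        print(f"{task}: {rate:.0f}% correct, dead ends: {dict(s['dead_ends'])}")

The per-task success rate shows which parts of the structure work, and the dead-end tally shows where labels mislead participants.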

TYPICAL QUESTIONS

  • Do potential users find certain items on the website?
  • Is the navigation structure intuitive enough?
  • How can I optimize conversion, usability and overall UX of my website?

HOW DOES IT HELP?

  • Optimize findability of content and functions on a website before the launch
  • Optimize orientation on a website
  • Build the foundation for good usability, UX, interaction rates, and conversions

LIMITATIONS

  • The tool examines existing navigation models, either from a live website or a concept model. If no such model exists, an open card sorting might be a better alternative, as it generates a theory-free structure in a bottom-up process.
  • The tool is mostly useful for rather complex websites that need a multi-level navigation structure.

WHEN TO USE?

  • As soon as a navigation concept and the main use cases are available. You only need the concept in text form (e.g., an Excel table). No working navigation mechanism is needed.

COMBINE WITH

  • An open card sorting

VARIANTS AND OPTIONS

  • Pure tree testing with only the navigation concept vs. tree testing with a more sophisticated click dummy, which includes a navigation solution (e.g., a navigation bar, content navigation)

A heuristic evaluation optimizes UX and usability through expert evaluation and cognitive walkthrough. Our UX and usability experts go through the essential use cases of a website or app and look for opportunities to boost conversion and user acceptance. We use ISO norms, Nielsen’s usability heuristics, and our long-standing experience to find quick wins and long-term fixes.

TYPICAL QUESTIONS

  • How can I quickly optimize the UX and usability of an app or website?
  • Which bugs and shortcomings can be solved before I run a regular user test?

HOW DOES IT HELP?

  • Quick and useful optimization suggestions
  • Optimization before a user test
  • Can be done with any kind of test material, e.g., running apps, wireframes, interaction designs

LIMITATIONS

  • This approach covers formal usability criteria and hypothetical UX issues. As it doesn’t involve real users, target-group-specific aspects cannot be covered.

WHEN TO USE?

  • Any time

COMBINE WITH

  • A user test after the optimizations have been implemented

VARIANTS AND OPTIONS

  • An evaluation can be done with a formative focus (i.e., looking for means to improve) or a summative focus (i.e., evaluating performance against a norm or against competitors)

‘Special agents’, mysteriously disguised as regular shoppers, measure the CX quality level. Mystery shopping or mystery calls (AKA mystery research, secret shopping, CX audit) are used to measure the quality of retail and service, work performance, compliance with regulations, or the placement of products and promotional materials at the POS. The entire shopping experience is in focus, and the individual elements at the POS that influence it are closely observed and documented by our testers (e.g., by video, photo, or checklist).

TYPICAL QUESTIONS

  • How are my products placed in the store?
  • How is the shopping experience (both online and offline)?
  • What is the customer experience like during contact with customer support?
  • Is the entire assortment available at the POS, and what does the POS look like?
  • How visible are promotional materials?
  • What is the tonality of the customer-facing staff?

HOW DOES IT HELP?

  • Assessment of the shopping and support experience
  • Optimization options for service quality and product presentation
  • Serves as a basis for strategy optimization in sales.

LIMITATIONS

  • As no real customers are involved, this method focuses on defined service levels. For an open-ended exploration, additional customer involvement is a good idea.

WHEN TO USE?

  • When customer-facing touchpoints (call centers, POS locations, fulfilment processes, etc.) are in place and running.

COMBINE WITH

  • A customer survey

VARIANTS AND OPTIONS

  • Use a sample of instructed real customers instead of expert observers

A customer survey is an easy way to get a feel for customer experience and satisfaction after a product or campaign launch. A sample of customers or users is invited to participate in a short online survey. Invitations can be sent through email, social media, newsletters, etc.

TYPICAL QUESTIONS

  • How does the new app feature resonate?
  • How do we fare against our competition?
  • Which problems and features should we address next?

HOW DOES IT HELP?

  • It provides quick, reliable, and efficient insight into customer satisfaction and experiences
  • It helps prioritize the next steps and releases
  • It helps refine the CX and UX
  • You can get feedback even before users give it on other channels

LIMITATIONS

  • We need a way to invite the participants, e.g. in-app alerts, social media, banners, newsletters, emails etc.

WHEN TO USE?

  • When a product or campaign has gone live and customers had enough time to experience it.

COMBINE WITH

  • Usage data, campaign reporting, app store ratings

User feedback analysis (AKA review monitoring, app store reporting, review analysis) is the analysis of spontaneous customer feedback on different online channels. Almost all available products and services receive spontaneous reviews and feedback through channels like app stores, Amazon reviews, and social media. We systematically gather and sort this feedback to extract the main pros, cons, and fields of action. This is a great feedback tool in the run & refine phase, provided the feedback is condensed and made actionable.
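
To illustrate the sorting step, here is a minimal sketch that buckets reviews into themes by keyword and splits them into pros and cons by star rating. The themes, keywords, and sample reviews are made up, and a real analysis would use more robust text analytics than substring matching.

    # Minimal sketch: bucketing spontaneous reviews into themes by keyword.
    # Themes, keywords, and reviews are made up for illustration.
    themes = {
        "stability": ["crash", "slow", "lag"],
        "usability": ["confusing", "intuitive", "easy"],
        "pricing": ["expensive", "price", "subscription"],
    }
    reviews = [  # (star rating, review text)
        (2, "App keeps crashing and feels slow."),
        (5, "Really intuitive and easy to use."),
        (1, "Way too expensive for what it offers."),
    ]

    tally = {t: {"pros": 0, "cons": 0} for t in themes}
    for stars, text in reviews:
        text = text.lower()
        for theme, keywords in themes.items():
            if any(k in text for k in keywords):
                tally[theme]["pros" if stars >= 4 else "cons"] += 1

    for theme, c in tally.items():
        print(f"{theme}: {c['pros']} pros, {c['cons']} cons")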

TYPICAL QUESTIONS

  • We know our ratings in the app stores – but what do they actually mean?
  • What do customers say about our products?
  • What should we improve?

HOW DOES IT HELP?

  • Get fast and candid feedback on CX and product performance without having to run a bespoke survey
  • Get fields of action for further optimization

LIMITATIONS

  • Reviews are given spontaneously by customers, so the highly satisfied and the highly dissatisfied are over-represented. A representative survey helps prioritize the fields of action.

WHEN TO USE?

  • When a product or service has been out in the market for a while

COMBINE WITH

  • A representative customer survey, so topics can be prioritized properly

VARIANTS AND OPTIONS

  • Selection of relevant channels

Case: Hilti

Agile user research for the relaunch of the global online presence of Hilti

Hilti AG had set itself the goal of redesigning its global website. The website was not only to become simpler and more attractive from the users’ perspective and to represent the new brand image; the focus was also on new topics such as editorial content, personalisation, SEO alignment, and responsiveness. Our mission was to support the development process with accompanying user research and to enrich conceptual decisions with quick, user-centered recommendations for action. We did this with a combination of classic user experience testing and fast iterations.

To build up a knowledge base about the future users of the site and to identify promising solution approaches as well as optimisation needs, we began with a user experience test of the old website. Over the following six months, we supported the development of the new concept with eight FAST iterations. At the end of the project, the finished complete concept of the new online presence was validated in a final user experience test.

The agile development of the new website concept called for individual areas to be developed separately. To test each development stage on its own, focused prototypes were created, allowing each FAST iteration to concentrate on the ad hoc questions relevant at the time.

To deliver the insights from the user tests at the right time, the iterative tests were integrated into the development team’s existing sprint plan, which allowed five working days from the definition of the research questions to the delivery of results.

Meeting these requirements took careful planning and close collaboration between all participants. The project profited, on the one hand, from clearly communicated deadlines and the predefined scheduling of the test dates and, on the other, from the high degree of dedication of Hilti’s project team, which consistently observed the interviews.

This way, the test results could be implemented immediately, without delays, and the development was supported optimally.

“The quick study results from Facit Digital helped us make prompt and well-reasoned decisions in the users’ interest during the relaunch of our global website. By integrating the tests into our sprint planning, our development continued without interruption and we could implement the results immediately.”

Evelyn Lampert, Global Project Manager, Corporate E-Business, Hilti AG

Let's get in touch!
Michael Wörmann
Managing Partner