
Unmoderated Test

By Crowd team
9 articles

Creating Your First Test

Create new projects and gather instant user insights with Crowd. Follow these steps to set up your first unmoderated test.

Steps to Create an Unmoderated Test

1. Initiate Test Creation: Click the "Unmoderated Test" icon to begin. Alternatively, click the "Create New" button on your dashboard and select "Unmoderated Test" from the dropdown menu.
2. Choose a Template or Start from Scratch: Select from various pre-populated templates, or choose "Start from Scratch" to create a test tailored to your needs.
3. Enter Test Details: After selecting "Start from Scratch," enter the necessary test details, including the project name. Specify the devices on which you want participants to access the test: "All devices," "Desktop only," or "Mobile only."
4. Customize the Welcome Screen: Enter your project title and description. This welcome message is the first thing testers see, so provide clear instructions or a message for them to follow.
5. Configure the "Get Started" Button: Customize the start button with a clear label so testers know exactly what action they are taking when they click it.
6. Add Test Blocks: Click "Add New Block" to choose a test type. You can combine multiple test types, such as Simple Survey, Design Survey, Website Evaluation, Preference Test, Card Sorting, Five Second Test, and Prototype Evaluation, by clicking "Add New Block" again.
7. Customize the "Thank You" Screen: Personalize the final screen testers see after completing your test. Make it resonate with your participants and leave a positive impression.
8. Reorder Sections: Use the reorder icon to rearrange your sections as needed.
9. Preview Your Test: Click the preview button to see your test from a tester's perspective.
10. Publish Your Test: Once you're satisfied, click the "Publish" button. When the modal pops up, click "Go Live" to publish your test, or select "Continue Editing" to make further adjustments.

For a visual walkthrough, you can watch the Creating Your First Test demo video.

Last updated on Jul 10, 2024

Unmoderated Test: Simple Survey

The Simple Survey feature on Crowd provides users with valuable insights into their target audience's opinions, preferences, and behaviors. By including demographic questions in the survey, Crowd users can gain a better understanding of who their customers are, what they want, and why they use their product or service. This feature also helps identify multiple subgroups of users who interact with the product differently; tailoring the user testing process to each subgroup ensures that the feedback received is relevant and actionable. Overall, the Simple Survey feature helps Crowd users make data-driven decisions, improve their products or services, and ultimately increase customer satisfaction and retention.

Steps to Create a Simple Survey

1. Select the Simple Survey Test: Start by selecting the "Simple Survey" test from the list of unmoderated test options.
2. Select Answer Type: Choose the preferred answer type for each question. Options include short text, long text, multiple choice, Yes or No, checkbox, verbal response, and linear scale. Select the answer type that best fits the question to gather accurate and relevant responses.
3. Set Question Requirements: To make a question compulsory, toggle on the required button. When enabled, testers must answer the question to proceed; when left off, testers can skip the question.
4. Duplicate Questions: Click the duplicate button to create a copy of a question.
5. Delete Questions: Click the delete button to remove a question.
6. Duplicate Entire Sections: To duplicate an entire section or block, click the duplicate icon in the top right corner of the block.
7. Delete Entire Sections: Click the delete icon to remove an entire block or section.
8. Add New Questions: Click the "Add New Question" button to include additional questions. Multiple questions can be added within one section.
9. Enable Conditional Logic: Toggle on conditional logic to display specific questions to users based on predefined conditions that you set (see the sketch below).
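Conditional logic simply means that a follow-up question is shown only when an earlier answer meets a condition you define. As a rough illustration of the idea (not Crowd's internal implementation), the Python sketch below shows a survey where one question appears only if a prior Yes/No answer is "Yes"; the question texts and rule format are made up for the example.

```python
# Illustrative sketch of survey conditional logic (not Crowd's implementation).
# A rule shows a follow-up question only when a prior answer matches a condition.

questions = {
    "q1": "Do you use our mobile app?",           # Yes / No
    "q2": "What do you mainly use the app for?",  # only relevant if q1 == "Yes"
    "q3": "How satisfied are you overall?",       # always shown
}

# Each rule: show `question` only if `depends_on` was answered with `equals`.
rules = [
    {"question": "q2", "depends_on": "q1", "equals": "Yes"},
]

def visible_questions(answers: dict) -> list:
    """Return the question ids that should be shown, given answers so far."""
    shown = []
    for qid in questions:
        rule = next((r for r in rules if r["question"] == qid), None)
        if rule is None or answers.get(rule["depends_on"]) == rule["equals"]:
            shown.append(qid)
    return shown

print(visible_questions({"q1": "No"}))   # ['q1', 'q3'] -- q2 is skipped
print(visible_questions({"q1": "Yes"}))  # ['q1', 'q2', 'q3']
```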

Last updated on Jun 26, 2024

Unmoderated Test: Prototype Evaluation

Prototype Evaluation is essential for Crowd users, allowing them to receive feedback on their designs or concepts at an early stage of development. By testing a prototype, users can gain valuable insights into how their target audience perceives their product and identify potential issues before investing significant time and resources into the development process. This test type supports an iterative and user-centered design approach, enabling users to gather feedback on different versions of their prototypes as they refine their ideas. Crowd's platform provides a convenient and accessible way to share prototypes with contributors and receive feedback efficiently, making the testing process more streamlined and cost-effective. In summary, prototype testing helps ensure a successful launch by incorporating user feedback early and often in the development process.

Use Cases for Prototype Evaluation
- Testing Usability: Evaluate the usability of a design or concept that is not yet finished.
- Early Feedback: Receive user feedback early in the development process to ensure a successful site or app launch.
- Comparing Prototypes: Test a range of prototypes to determine the best approach for further testing.

Steps to Create a Prototype Evaluation

1. Select Prototype Evaluation: Choose "Prototype Evaluation" from the list of unmoderated test types.
2. Add Your Prototype Link: Enter your Figma prototype link into the provided field and click the "Add" button.
3. Choose Task-Based or Non-Task-Based: Select "Task-Based" if you want respondents to perform specific tasks within your prototype, or "Non-Task-Based" if you do not.
4. Add Instructions: Provide optional instructions for your testers to follow.
5. Define Specific Tasks: Add the specific tasks you want your testers to perform, and select the start screen, which is the first screen your testers will see.
6. Set Success Screen: Choose the success screen, which indicates to testers that they have successfully completed the task.
7. Notification Option: Select the notify button if you want your testers to be informed when they have successfully completed the task.
8. Add New Tasks: Click "Add New Task" to include multiple tasks for testers to perform on the prototype.
9. Add Follow-Up Questions: Click the "Add New Question" button to include any follow-up questions.
10. Preview Your Test: Once you've configured your prototype evaluation, click the preview button to review your test from the tester's perspective before publishing.

For a visual walkthrough, you can watch the Prototype Evaluation demo video.

Last updated on Jun 26, 2024

Web evaluation

Key points:
- Why web evaluation matters.
- Key criteria for web evaluation.
- Evaluating website content.
- Tools and techniques for web evaluation.
- Common red flags.

Why Web Evaluation Matters
In today's digital age, the internet is flooded with information, making it vital to distinguish credible sources from unreliable ones. Effective web evaluation helps you make informed decisions, whether you're conducting research, fact-checking, or seeking trustworthy information.

The Importance of Critical Thinking
Critical thinking is the foundation of web evaluation. It involves analyzing information, questioning sources, and using evidence to form reasoned judgments. Sharpening your critical thinking skills will enhance your ability to evaluate websites effectively.

Key Criteria for Web Evaluation
- Authority and Credibility: Determine the website's authorship and credentials. Reliable sources often include information about the author, organization, or institution behind the content.
- Accuracy and Reliability: Assess the accuracy of the information presented. Look for citations, references, and corroborating evidence to verify claims.
- Purpose and Objectivity: Consider the website's purpose. Is it informational, promotional, or biased? Objectivity is key to reliable information.
- Relevance and Timeliness: Evaluate the website's relevance to your needs, and check that the information is up to date.

Evaluating Website Content
- Assessing Textual Information: Scrutinize the quality of the written content. Look for spelling and grammar errors, clear organization, and a logical flow of information.
- Examining Visual Content: Evaluate images and graphics for authenticity and relevance, and check for proper attribution of visual elements.
- Evaluating Multimedia and Interactive Elements: Assess the quality and relevance of videos, interactive tools, and other multimedia content. Ensure they enhance understanding and engagement.

Tools and Techniques for Web Evaluation
- Browser Extensions and Plugins: Use browser extensions and plugins designed for fact-checking, content blocking, and evaluating website trustworthiness.
- Fact-Checking Websites: Cross-reference information with reputable fact-checking websites to verify accuracy and authenticity.
- User Reviews and Ratings: Consider user-generated reviews and ratings to gauge the reliability and user experience of websites.

Common Red Flags
- Misleading Information: Watch out for sensationalism, biased reporting, and exaggerated claims that may indicate unreliable sources.
- Poor Design and Usability: Evaluate the website's design, navigation, and usability. A poorly designed site may reflect unprofessionalism.
- Privacy and Security Concerns: Exercise caution with websites that request excessive personal information or lack secure connections (https://). Protect your privacy and data.
- Broken Links and Outdated Content: Broken links and outdated information can signal neglect or an unreliable source. (A small automated check is sketched below.)
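Two of the red flags above, missing HTTPS and broken links, are easy to check automatically. The following Python sketch (an illustration, not a Crowd feature) uses the third-party requests library to flag non-HTTPS URLs and links that return an error status; the URLs are placeholders.

```python
# Illustrative link check for two red flags: missing HTTPS and broken links.
# Not a Crowd feature -- a small standalone script using the `requests` library.
import requests

urls = [
    "https://example.com/",         # placeholder URLs for the example
    "http://example.com/insecure",
]

for url in urls:
    if not url.startswith("https://"):
        print(f"[no HTTPS]    {url}")
    try:
        # HEAD is usually enough to detect broken links without downloading the page.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            print(f"[broken {response.status_code}]  {url}")
        else:
            print(f"[ok {response.status_code}]      {url}")
    except requests.RequestException as exc:
        print(f"[unreachable] {url}: {exc}")
```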

Last updated on Jun 26, 2024

Preference testing

Key points:
- What is preference testing?
- When to use preference tests.
- Types of preference tests.
- Preparing for preference testing.
- Creating and administering preference tests.
- Analyzing and interpreting test results.
- Best practices for preference testing.

What is Preference Testing?
Preference testing is a research method that assesses people's preferences among two or more alternatives. It aims to determine which option is preferred and why, shedding light on user opinions, tastes, and choices.

When to Use Preference Tests
Preference tests are valuable when you need to:
- Choose between design alternatives, products, or concepts.
- Understand user perceptions and preferences.
- Optimize marketing strategies based on consumer choices.

Types of Preference Testing
Common types of preference testing include paired comparison tests, ranking tests, and rating scale tests. The choice of test depends on your research goals and the nature of the preference you want to evaluate.

Preparing for Preference Testing
- Defining Your Research Goals: Clearly articulate what you want to achieve with the preference test, and determine the specific choices or preferences you aim to evaluate.
- Identifying Target Participants: Identify the target audience or demographic group that reflects the preferences you want to understand. Ensure your participant pool is representative of this group.
- Selecting Testing Materials: Choose the items or stimuli you will present to participants during the preference test. These can be product designs, advertisements, concepts, or any alternatives you want to compare.

Creating and Administering Preference Tests
- Designing Test Scenarios or Stimuli: Develop clear, unbiased, and relevant scenarios or stimuli that participants will evaluate. Ensure that the options presented are distinct and comparable.
- Choosing the Right Testing Method: Select the appropriate preference testing method, whether it's a paired comparison test, ranking test, or another variant. Consider the best approach for your research goals.
- Conducting Tests with Participants: Administer the preference test to participants, clearly explaining the purpose and instructions. Use randomization to avoid order bias, and consider counterbalancing if necessary.
- Collecting and Recording Data: Gather data on participants' preferences, including their rankings, ratings, or selected choices. Ensure consistent and accurate data collection procedures.

Analyzing and Interpreting Results
- Quantitative Analysis: Use quantitative data to compare preferences among options. Calculate scores, rankings, or ratings to identify the most preferred alternative (a small worked example appears at the end of this article).
- Qualitative Insights: Complement quantitative data with qualitative insights. Analyze comments and open-ended questions to understand the reasons behind preferences.
- Comparing Preferences: Compare preferences across different demographic groups or subgroups, if applicable, to uncover variations in taste and choice.
- Reporting and Presenting Findings: Create a comprehensive report that summarizes the preference test findings, including data analysis, visual representations, and insights. Share results with stakeholders and decision-makers.

Best Practices for Preference Testing
- Objective Test Design: Design tests that are unbiased and objective, avoiding leading questions or scenarios that may influence preferences.
- Target Audience Representation: Ensure your participant group accurately reflects the preferences you are researching. Consider factors like age, gender, location, or other demographics.
- Data Collection and Analysis: Adhere to rigorous data collection and analysis methods to maintain the integrity of your preference test results.
- Ethical Considerations: Obtain informed consent from participants, respect their privacy, and ensure that the test is conducted ethically.

Preference testing is a valuable tool for decision-making in various fields, including product design, marketing, and user experience optimization. By following these guidelines and best practices from Crowd, you can conduct preference tests that yield valuable insights and inform choices that align with user preferences and expectations.
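As a concrete example of the quantitative analysis step, the sketch below tallies paired-comparison results in plain Python and reports how often each option was preferred. The option names, response data, and win-rate scoring are illustrative assumptions, not output from Crowd.

```python
# Illustrative paired-comparison tally (example data, not Crowd output).
from collections import Counter

# Each response records which option a participant preferred in a head-to-head pair:
# (option 1, option 2, preferred option).
responses = [
    ("Design A", "Design B", "Design A"),
    ("Design A", "Design B", "Design A"),
    ("Design A", "Design C", "Design C"),
    ("Design B", "Design C", "Design C"),
    ("Design B", "Design C", "Design B"),
]

wins = Counter(preferred for _, _, preferred in responses)
appearances = Counter()
for a, b, _ in responses:
    appearances[a] += 1
    appearances[b] += 1

# Win rate = share of comparisons in which an option was chosen.
for option in sorted(appearances):
    rate = wins[option] / appearances[option]
    print(f"{option}: preferred in {wins[option]}/{appearances[option]} comparisons ({rate:.0%})")
```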

Last updated on Jun 26, 2024

Design surveys

Key points:
- What is a design survey?
- The importance of design surveys.
- When to use design surveys.
- Preparing for a design survey.
- Creating and administering design surveys.
- Analyzing and interpreting survey data.
- Best practices for design surveys.

What is a Design Survey?
A design survey is a process of gathering user feedback and opinions on various design-related aspects of a product or service. It aims to collect insights that can help in improving the user experience, aesthetics, and overall design of a product.

The Importance of Design Surveys
Design surveys offer several benefits, including:
- Identifying design issues and areas for improvement.
- Gathering user preferences and expectations.
- Enhancing the aesthetics and user appeal of a product.
- Providing data-driven insights for design decisions.

When to Use Design Surveys
Design surveys are valuable at different stages of product development, from initial design concepts to post-launch evaluations. They are especially useful when you want to understand user perceptions, preferences, and pain points related to design.

Preparing for a Design Survey
- Defining Survey Objectives: Clearly outline the objectives of your design survey. Determine which design aspects you want to evaluate and what insights you hope to gain from the survey.
- Identifying the Target Audience: Understand your target audience and their expectations, and tailor the survey to align with their preferences and needs.
- Designing Survey Questions: Craft survey questions carefully to ensure they are clear, relevant, and free from bias. Questions should address the design aspects you want to evaluate and gather valuable insights.

Creating and Administering Design Surveys
- Selecting the Right Survey Tool: Sign up on Crowdapp.io to create and distribute your design survey.
- Crafting Clear and Relevant Questions: Write survey questions in plain language and avoid jargon or complex terminology. Ensure that questions are relevant to the design aspects you aim to evaluate.
- Designing the Survey Layout: Create an appealing and user-friendly survey layout. Organize questions logically, use consistent formatting, and provide clear instructions for participants.
- Distributing and Collecting Responses: Distribute the survey to your target audience through the chosen channel (email, social media, website, etc.). Set a deadline for responses and monitor data collection.

Analyzing and Interpreting Survey Data
- Quantitative Data Analysis: Use quantitative data to assess the responses to closed-ended questions, such as ratings and multiple-choice questions. Analyze trends and correlations (a small worked example appears at the end of this article).
- Qualitative Insights: For open-ended questions, conduct qualitative analysis to extract valuable insights from participant comments. Identify recurring themes and patterns.
- Identifying Design Issues and Trends: Combine quantitative and qualitative data to identify design issues, trends, and areas for improvement. Categorize feedback and prioritize changes.
- Reporting and Presenting Findings: Create a comprehensive report summarizing the survey results, including data analysis, visual representations, and insights. Share these findings with relevant stakeholders.

Best Practices for Design Surveys
- Continuous Iteration: Design surveys are not a one-time effort. Regularly assess the design to ensure it remains user-friendly and appealing.
- User-Centered Design: Prioritize the needs and expectations of your users in design decisions. User feedback is a valuable source of insights.
- Usability and Accessibility: Ensure that your design is not only visually appealing but also usable and accessible to a wide range of users, regardless of their abilities.
- Ethical Considerations: Respect ethical guidelines during design surveys. Protect participant privacy, ensure data security, and obtain informed consent, especially for sensitive design aspects.

Design surveys are a fundamental part of improving the user experience and aesthetics of products and services. By following these guidelines and best practices from Crowd, you can conduct design surveys that yield valuable insights and lead to design improvements that resonate with your users' preferences and expectations.
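To make the quantitative data analysis step concrete, the sketch below computes per-question mean ratings and a simple rating distribution for closed-ended design-survey questions. The questions, the 1-5 scale, and the sample data are illustrative assumptions, not a Crowd export format.

```python
# Illustrative analysis of closed-ended design-survey ratings (example data only).
from collections import Counter
from statistics import mean

# Ratings on a 1-5 linear scale, keyed by question; names and values are assumed.
ratings = {
    "Visual appeal":   [5, 4, 4, 3, 5, 4],
    "Ease of reading": [3, 2, 4, 3, 3, 2],
    "Layout clarity":  [4, 5, 4, 4, 3, 5],
}

for question, scores in ratings.items():
    distribution = Counter(scores)
    summary = ", ".join(f"{star}/5 x{distribution[star]}" for star in sorted(distribution))
    print(f"{question}: mean {mean(scores):.2f} ({summary})")

# Questions with the lowest means are the first candidates for design changes.
lowest = min(ratings, key=lambda q: mean(ratings[q]))
print(f"Lowest-rated aspect: {lowest}")
```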

Last updated on Jun 26, 2024

Card sorting tests

Key points:
- What is card sorting?
- Why card sorting matters.
- Planning your card sorting project.
- Conducting card sorting sessions.
- Interpreting and implementing card sorting results.
- Best practices for card sorting.

What is Card Sorting?
Card sorting is a user-centered design method used to understand how users categorize and group information. Participants are asked to organize content items (usually represented on physical or digital cards) into meaningful categories, helping to inform the structure and organization of a website or application.

Why Card Sorting Matters
Card sorting is essential for:
- Improving information architecture and navigation.
- Enhancing the user experience and findability of content.
- Ensuring that the structure aligns with user mental models.

Crowd offers one type of card sorting:
- Closed Card Sorting: Participants sort cards into predefined categories.

Planning Your Card Sorting Project
- Defining Objectives and Scope: Determine the specific objectives of your card sorting project. What aspect of your website or product's structure do you want to improve? Clearly define the scope of your card sorting study.
- Identifying Your Target Audience: Identify the target audience or user group for whom you are optimizing the information structure. Ensure that participants represent this group accurately.
- Preparing the Materials: Create the cards or digital representations of content items that participants will sort. Ensure these items are clear and accurately represent your website's content.

Conducting Card Sorting Sessions
- Selecting Participants: Recruit participants who are representative of your target audience. Aim for a diverse group to ensure a wide range of perspectives.
- Gathering and Analyzing Results: Administer the card sorting sessions, and collect participants' feedback and sorting data. Analyze the data to identify trends, clusters, and patterns (a small analysis sketch appears at the end of this article).

Interpreting and Implementing Card Sorting Results
- Analyzing the Data: Analyze the results to identify trends and common groupings. Understand how participants perceive the organization of your content.
- Revising Your Information Architecture: Based on the findings, revise the information architecture of your website or product to align with user preferences and expectations.
- Testing the Revised Structure: Implement the changes to your website or product, and conduct usability testing to validate the revised information structure.

Best Practices for Card Sorting
- Clear Communication: Communicate the purpose of the card sorting study to participants, and provide them with instructions on how to perform the task.
- Iterative Approach: Card sorting is often most effective when conducted iteratively. Continue to refine your information structure based on ongoing feedback.
- Collaboration: Involve stakeholders and team members in the card sorting process to ensure that design decisions are well-informed and aligned with the project's objectives.
- Ethical Considerations: Respect ethical guidelines during card sorting, including obtaining informed consent and ensuring that participant data is handled securely and privately.

By following these guidelines and best practices from Crowd, you can conduct card sorting studies that provide valuable insights and result in a more user-centered and navigable information architecture.
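A common way to analyze closed card sorting data is a placement matrix: for each card, the share of participants who placed it in each predefined category. High agreement suggests the category fits; low agreement suggests the labels need rethinking. The sketch below builds a simple version in plain Python; the cards, categories, and sort data are illustrative assumptions, not Crowd's export format.

```python
# Illustrative placement analysis for a closed card sort (example data only).
from collections import Counter, defaultdict

# Each participant's sort maps a card to the predefined category they chose.
sorts = [
    {"Pricing": "Plans", "Refund policy": "Support", "Contact us": "Support"},
    {"Pricing": "Plans", "Refund policy": "Plans",   "Contact us": "Support"},
    {"Pricing": "Plans", "Refund policy": "Support", "Contact us": "Support"},
]

placements = defaultdict(Counter)
for sort in sorts:
    for card, category in sort.items():
        placements[card][category] += 1

participants = len(sorts)
for card, counts in placements.items():
    top_category, top_count = counts.most_common(1)[0]
    agreement = top_count / participants
    # Cards with low agreement are candidates for renaming or regrouping.
    print(f"{card}: most often placed in '{top_category}' ({agreement:.0%} agreement)")
```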

Last updated on Jun 26, 2024

5-second tests

Key points:
- What is a 5-second test?
- Why 5-second tests matter.
- Types of 5-second tests.
- Planning your 5-second test session.
- Conducting 5-second test sessions.
- Interpreting and implementing 5-second test results.
- Best practices for 5-second tests.

What is a 5-Second Test?
A 5-second test is a rapid usability test in which participants view a visual design (such as a web page or a landing page) for just 5 seconds. Afterward, they are asked questions about what they remember, what caught their attention, and what they found confusing.

Why 5-Second Tests Matter
5-second tests are crucial for:
- Quickly evaluating the clarity of designs.
- Identifying key elements that capture users' attention.
- Improving the visual hierarchy of information.
- Ensuring that designs effectively communicate their intended message.

Types of 5-Second Tests
There are different types of 5-second tests:
- First Impression Test: Participants share their initial impressions after viewing a design for 5 seconds.
- Five-Second Click Test: Participants indicate where they would click if they were to take a specific action on the design.
- Memory Test: Participants recall specific details or elements from the design they saw for 5 seconds.

Planning Your 5-Second Test Session
- Defining Test Objectives: Determine the specific objectives of your 5-second test. What aspect of your design do you want to evaluate? Define the scope and focus of the test.
- Selecting the Target Audience: Identify your target audience or user group. Ensure that the participants represent the users you're designing for.
- Preparing Test Materials: Prepare the design for testing. Ensure that the materials accurately represent the design you want to evaluate.

Conducting 5-Second Test Sessions
- Running Test Sessions: Administer the 5-second test sessions, where participants view the design for 5 seconds. Provide clear instructions and ensure that the timing is strict. Record participants' responses and feedback.
- Collecting and Analyzing Data: Collect data, which may include participants' initial impressions, what they remember, what they found confusing, or where they would click. Analyze the data to identify patterns and insights (a small recall tally is sketched at the end of this article).
- Identifying Insights: Based on the data and feedback, identify the strengths and weaknesses of the design. Determine which elements were effective in capturing users' attention and what needs improvement.

Interpreting and Implementing 5-Second Test Results
- Analyzing Participants' Feedback: Analyze participants' feedback to understand what worked well in the design and what aspects need improvement. Look for recurring themes or issues.
- Iterative Design Improvements: Implement design improvements based on the insights gained from the 5-second test. Focus on making key elements more prominent, clear, and effective.
- Assessing Visual Hierarchy and Messaging: Consider the visual hierarchy of your design and the clarity of your messaging. Ensure that the most important information is readily noticeable within the first 5 seconds.

Best Practices for 5-Second Tests
- Clear Instructions: Provide clear instructions to participants, explaining the purpose of the test and what you want them to focus on during the 5 seconds.
- Objective Evaluation Criteria: Define objective evaluation criteria in advance, making it easier to assess the effectiveness of the design elements.
- Regular Testing: Conduct 5-second tests regularly, especially during the design and iteration phases. This iterative approach helps to continually improve design clarity and effectiveness.
- Ethical Considerations: Respect ethical guidelines during 5-second tests, including obtaining informed consent and ensuring that participant data is handled securely and privately.

5-second tests are a valuable method for quickly evaluating and enhancing the effectiveness of visual designs. By following these guidelines and best practices from Crowd, you can conduct 5-second tests that provide valuable insights and lead to design improvements that make a powerful and immediate impact on users.
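As an example of the data-analysis step, the sketch below tallies which design elements participants recalled after their 5-second view and reports recall rates. The element names and responses are illustrative assumptions, not Crowd output.

```python
# Illustrative recall tally for a 5-second test (example data only).
from collections import Counter

# Each list holds the design elements one participant remembered seeing.
recalled = [
    ["logo", "headline"],
    ["headline", "signup button"],
    ["headline"],
    ["logo", "hero image"],
    ["headline", "signup button"],
]

counts = Counter(element for answers in recalled for element in answers)
total = len(recalled)

for element, count in counts.most_common():
    print(f"{element}: recalled by {count}/{total} participants ({count / total:.0%})")
# Elements with low recall may need more visual prominence in the design.
```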

Last updated on Jun 26, 2024

Unmoderated Test: Understanding Your Test Results

Once your test has been published and testers have started participating, you can analyze your test results. Follow these steps to review and interpret your data:

1. Access Test Results: Click on the "Unmoderated Test" icon on your Crowd dashboard, then click the "View Responses" button on the unmoderated test home page.
2. View Responses: Click on "Responses" to see your test results.
3. Analyze Metrics: Click the metrics button to view detailed results (a small sketch of how such metrics are computed appears at the end of this article). Here you can see:
- Number of Participant Sessions: Total interactions or engagements with the test.
- Responses Submitted: Number of participants who have successfully completed the test.
- Completion Rate: Percentage of users who finished the assessment.
- Average Duration: Typical time participants spend answering the questions.
- Completion vs. Abandonment Rate: Insights into user engagement and test performance.
- Time Graph: Visual representation of how response times for various interactions change over the course of the test.
4. Filter Responses: Analyze test responses based on criteria such as country, duration, date, recruitment method (hire, shareable link, email invite, web prompt), device type, and browser. Click "Add Filter" to filter your results accordingly.
5. Share Results: To create a shareable link for stakeholders, click the "Share Result" button. This allows others to access and review the relevant information.
6. Review Participant Sessions: Click on a participant session and then click the "Play" button to view that participant's journey. Observe their interactions, actions, clicks, scrolls, and any hesitations.
7. AI Analysis: Crowd offers AI Analysis to provide an in-depth summary of your tests and surveys. Click the AI Analysis button to access this feature. Crowd AI provides key insights, themes, sentiments, recommendations, action items, and limitations. For more context, interact with Crowd AI by clicking the "Ask AI" button and typing your questions to get specific answers and insights.

You can watch the Understanding your test results demo video for a visual walkthrough.
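To illustrate what metrics such as completion rate, abandonment rate, and average duration represent, the sketch below computes them from a small list of participant sessions. The session fields and sample values are assumptions for the example, not Crowd's actual data model.

```python
# Illustrative computation of test metrics from session records (example data only).
from statistics import mean

# Each session: whether the participant submitted the test and how long they spent.
sessions = [
    {"completed": True,  "duration_seconds": 312},
    {"completed": True,  "duration_seconds": 275},
    {"completed": False, "duration_seconds": 94},
    {"completed": True,  "duration_seconds": 401},
    {"completed": False, "duration_seconds": 38},
]

total_sessions = len(sessions)
completed = [s for s in sessions if s["completed"]]

completion_rate = len(completed) / total_sessions
abandonment_rate = 1 - completion_rate
average_duration = mean(s["duration_seconds"] for s in completed)

print(f"Participant sessions: {total_sessions}")
print(f"Responses submitted:  {len(completed)}")
print(f"Completion rate:      {completion_rate:.0%}")
print(f"Abandonment rate:     {abandonment_rate:.0%}")
print(f"Average duration:     {average_duration:.0f} s (completed sessions)")
```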

Last updated on Jun 26, 2024