AI Design

B2B Enterprise

SaaS

Shipped

Scaling Questionnaire Creation through an Intentional AI Interaction Framework


ROLE

Product Designer

TEAM

+2 Product Designers | 2 Product Managers | Development Team | CEO

TIMELINE

4 Weeks

WHAT I DID

User Research | Wireframing | Prototyping | Stakeholder Management | User Testing


TL;DR

Problem

Designing technical interview questionnaires was a manual bottleneck that delayed hiring, forced customers to rely on FlairX admins, and hindered scalability.

Solution

An AI-powered, human-in-the-loop workspace that automates JD-to-Question generation while maintaining control through a "Review & Approve" interaction model.

7x Creation Efficiency

Reduced the end-to-end drafting process from 30 minutes of manual labor to under 3 minutes of AI-assisted review.

73% Increased Adoption

Successfully transitioned users from "Shadow Workflows" into our native platform by reducing friction.

CONTEXT

Moving from an Internal Service to Customer Self-Service to Scale

FlairX is a B2B SaaS platform for AI and expert-vetted technical interviews. Previously, Internal Admins manually drafted questionnaires, creating a massive operational bottleneck.


To scale, we shifted from a concierge service to an AI-powered framework, empowering customers to generate high-quality interview questionnaires themselves in minutes.

PROBLEM + GOAL

Two Hurdles to Scalability

Speed

Reduce questionnaire drafting from 30 minutes to instant generation, enabling the platform to scale from 3 to 100+ daily requests.

Adoption

Ownership of questionnaire creation shifted from internal admins to customers, but adoption was hindered by a tedious post-creation process that needed a drastic redesign for self-service usability.

DISCOVERIES

Identifying Gaps Across Manual Creation, AI, and Competitive Tools

User Interviews | Contextual Inquiry | Hands-On Experience | Stakeholder & Customer Reviews | Persona Creation

DESIGNING FOR AI

Ideation War Room with the CEO, PMs, Designers, and Dev Team

Outcome 1: Advocating for Human-in-the-Loop Flow

Users should feel in control rather than ambushed by automation.

"One-Click Automation" was a trap that would lead to generic questionnaires. To bridge this, a "Human-in-the-Loop" flow let users set parameters, then accept, reject, or iterate on the AI’s output.

IMPACT

Increased customer satisfaction by ensuring the AI hit the mark faster, reaching the final version with fewer revisions.
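As a rough illustration of the accept/reject/iterate cycle described above (names and structure are hypothetical, not the actual FlairX implementation), the core of a human-in-the-loop review state could look like:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    status: str = "pending"  # pending -> accepted | rejected

@dataclass
class Questionnaire:
    questions: list[Question] = field(default_factory=list)

    def review(self, index: int, decision: str) -> None:
        # Human-in-the-loop: every AI suggestion needs an explicit user decision.
        if decision not in ("accepted", "rejected"):
            raise ValueError("decision must be 'accepted' or 'rejected'")
        self.questions[index].status = decision

    def pending(self) -> list[Question]:
        # Rejected questions would be regenerated; pending ones still await review.
        return [q for q in self.questions if q.status == "pending"]
```

The key design point is that no question reaches the final questionnaire without an explicit "accepted" decision, which is what keeps the user in control rather than ambushed by automation.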

Outcome 2: Dropping Underused Features

I audited the manual flow to separate high-value assets from "dead weight" features that would hinder the AI experience.

FEATURE | RESOLUTION | REASONING
Master Templates | Dropped | AI-generated questionnaires made the efficiency value of Master Templates null.
Objectives | Dropped | This field was left blank in 99% of questionnaires; the objective was always to assess the candidate’s skills.
Answer Guides (previously Ideal Answers) | Changed | The AI needs a guide to gauge what a right answer looks like. Furthermore, a single "ideal answer" negates that a question can have multiple right answers.
Question Bank | Kept | A fail-safe of reliable, expert-vetted interview questions in case AI outputs fell short.


AI INTERACTION

Choosing the Right Interaction Model

In-Line AI Editing

Canva Magic Write | Google Docs

Rejected

Led to a cluttered UI with tricky multi-level editing interactions. The freeform nature left the information too ambiguous and unstructured for the system to process effectively.

Chat + Workspace Editing

ChatGPT Canvas | Gemini Canvas

Rejected

Conversational "back-and-forth" adds friction and "prompt fatigue". The lack of fixed controls introduces too much ambiguity regarding the next steps.

AI Side Panel

Grammarly AI

Selected

Ensures a cleaner workspace and easier multi-level editing. It allows the user to manually refine the final content while the system preserves the information structure.


Automating Requirement Extraction

"Isn’t this already in the JD? I don’t have time to retype this."

To eliminate the friction of a 25-field setup, I replaced manual entry with an automated "Skill Extraction" step. By parsing the job description instantly, the AI identifies core requirements for a quick human review.

IMPACT

Increased adoption rates by removing manual data entry and reducing late-stage macro iterations by securing alignment early in the flow.
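The extraction step above can be sketched in miniature. This is purely illustrative, assuming a simple keyword-vocabulary approach; the real system would use a far richer extractor, and the vocabulary and function names here are hypothetical:

```python
import re

# Hypothetical skill vocabulary; a production extractor would be ML-driven.
SKILL_VOCAB = {"python", "react", "sql", "kubernetes", "java"}

def extract_skills(job_description: str) -> list[str]:
    """Return vocabulary skills mentioned in the JD, queued for human review."""
    tokens = set(re.findall(r"[a-zA-Z+#]+", job_description.lower()))
    return sorted(tokens & SKILL_VOCAB)

jd = "We need a backend engineer strong in Python and SQL, with Kubernetes exposure."
print(extract_skills(jd))  # ['kubernetes', 'python', 'sql']
```

The point of the pattern is that the AI pre-fills the requirements and the human only confirms or corrects them, instead of retyping what the JD already says.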

Shifted to Single Scroll Navigation to Adapt to New User Behavior

Users shifted from a "creation" mindset to a rapid "review" mode. While giving each skill its own page matched the slow, one-by-one manual workflow, it created too much friction for AI outputs. Moving to a single-scroll workspace allowed users to audit the entire generated interview with minimal clicks.

IMPACT

Significantly reduced time on task due to fewer clicks and reduced context switching fatigue.

BEFORE: TABS UNDER TABS

AFTER: SINGLE SCROLL


BUSINESS PIVOT

Adapting to a Last-Minute Shift to a Self-Service Model

As the business requirement moved from internal admin drafting to customer self-service, I adapted the interface to support a high-speed "Review and Approve" mindset.

1

Unified Interview Creation

Integrated question generation directly into the job posting flow to prevent task fragmentation.

2

Job Description Parsing

Reduced manual setup by parsing uploaded job description files, eliminating redundant data entry.

3

Asynchronous Progress

Added a "Skip to Add Candidates" option to let users proceed with logistics while the AI generates questions.

IMPACT

Reduced form fatigue (25 fields → 8 fields), increasing adoption and reducing waiting time during question generation.
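The "Skip to Add Candidates" behavior above amounts to running generation in the background while the user continues with logistics. A minimal sketch of that flow, with hypothetical names standing in for the actual generation call:

```python
import threading
import time

def generate_questions(done: threading.Event) -> None:
    # Stand-in for the AI generation call; runs off the main flow.
    time.sleep(0.1)
    done.set()  # fires the "generation complete" notification

done = threading.Event()
threading.Thread(target=generate_questions, args=(done,)).start()

# "Skip to Add Candidates": the user proceeds with logistics immediately,
# without blocking on question generation.
candidates = ["candidate@example.com"]

done.wait()  # the completion notification arrives once generation finishes
print(f"questions ready; {len(candidates)} candidate(s) already added")
```

This is the same trade the UI makes: generation latency is hidden behind a task the user needed to do anyway, so perceived waiting time drops even when actual generation time does not.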

FINAL DESIGN

End-to-End Flow in Action

Notable Design Nuances

AI REASONING TO BUILD TRUST

SKIP WHILE GENERATING

COMPLETED GENERATION NOTIFICATION

EDITABLE ANSWER GUIDE

QUESTION BANK

REFLECTION

What I Learned

Clarity beats visual cleanliness

Explicit controls outperform elegant but hidden interactions.

AI UX patterns don't always transfer

Mental models matter more than visual familiarity (our questionnaire ≠ a document).

Break complexity into steps

Sequential generation is more predictable than all-at-once.

Work shown with permission from respective companies.

© 2024 Hritika.