Review & Evaluation Process

Complete guide to analyzing candidate submissions and making informed hiring decisions.

Accessing Submissions

All completed sessions appear in your dashboard's Submissions section, ready for review.

Submitted Sessions

Sessions that candidates submitted manually before their time expired.

Auto-Submitted

Sessions that were submitted automatically when the time limit expired.

Team Reviews

Submissions with comments and scores from multiple team members.

Understanding the Review Interface

The review page gives you complete insight into candidate performance, with the relevant data organized into tabs.

Review Tabs Overview

Chat Messages

Complete AI conversation history with timestamps and interaction patterns.

Candidate Notes

Final notes and work submitted by the candidate.

AI Assessment

Automated analysis of AI utilization patterns and strategic thinking.

Team Comments

Collaborative feedback and scores from your team members.

Analyzing Chat Interactions

The chat log is your primary source of insight into how candidates think and work with AI.

What to Look For

🟢 Strong Indicators

  • Specific prompts: Detailed, context-rich questions with clear requirements
  • Iterative refinement: Building on AI responses to improve output
  • Strategic breakdown: Decomposing complex tasks into manageable parts
  • Quality validation: Questioning and improving AI-generated content

🔴 Warning Signs

  • Vague requests: Generic questions without context or specificity
  • Copy-paste dependency: Accepting AI output without modification
  • No iteration: Single-exchange interactions with no follow-up
  • Inappropriate scope: Asking AI to do everything rather than collaborating with it

Interaction Patterns to Analyze

Problem Exploration

Does the candidate ask clarifying questions and explore the problem space before jumping to solutions?

Progressive Refinement

Look for conversations that build complexity over time, with each prompt improving on the last.

Context Building

Strong candidates provide background information and constraints to help AI give better responses.

Using AI Assessment Reports

AI-generated assessments provide structured analysis but should supplement, not replace, your judgment.

How to Interpret AI Assessments

✅ AI Assessment Strengths

  • Identifies conversation patterns and interaction frequency
  • Counts specific behaviors (questions asked, iterations made)
  • Provides objective analysis free from human bias
  • Highlights evidence from actual chat interactions

⚠️ AI Assessment Limitations

  • Cannot evaluate domain expertise or technical knowledge
  • May miss cultural nuances or communication styles
  • Doesn't understand business context or role requirements
  • Should be combined with human judgment and team input

Assessment Report Sections

Executive Summary

High-level overview of candidate performance and key strengths.

Behavioral Signals

Analysis across five dimensions of AI utilization effectiveness.

Evidence & Examples

Specific quotes and interactions that support the analysis.

Follow-up Questions

Suggested interview questions to explore findings further.

Team Collaboration

Multiple team members can review the same submission and add their insights.

Collaborative Review Process

1. Individual Review

Each team member reviews the submission independently and adds their comments and scores.

2. Team Discussion

Review team comments to understand different perspectives and identify consensus or disagreements.

3. Final Decision

Combine insights from AI assessment and team feedback to make informed hiring decisions.

Benefits of Team Reviews

  • Reduces individual bias and blind spots
  • Captures different perspectives on candidate strengths
  • Creates more comprehensive evaluation documentation
  • Improves team alignment on hiring standards
  • Provides multiple data points for difficult decisions

Making Hiring Decisions

Combine all available information to make well-informed hiring decisions.

Decision Framework

Quantitative Data

AI assessment scores, interaction counts, time utilization

Qualitative Analysis

Chat quality, strategic thinking, problem-solving approach

Team Consensus

Multiple reviewer perspectives, collaborative discussion
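
To make the framework concrete, here is a minimal TypeScript sketch of one way to combine individual reviewer scores and flag disagreements worth discussing before the final decision. It is illustrative only: the interfaces, field names, and threshold are hypothetical and not part of the platform's API.

```typescript
// Hypothetical types for illustration -- these are not the platform's data model.
interface ReviewerScore {
  reviewer: string;
  score: number; // e.g. a 1-5 rubric score
}

interface SubmissionSummary {
  averageTeamScore: number;
  scoreSpread: number;      // max score minus min score across reviewers
  needsDiscussion: boolean; // true when reviewers disagree strongly
}

// Summarize team scores and flag wide spreads for discussion.
function summarizeReviews(
  scores: ReviewerScore[],
  disagreementThreshold = 2
): SubmissionSummary {
  const values = scores.map((s) => s.score);
  const averageTeamScore =
    values.reduce((sum, v) => sum + v, 0) / values.length;
  const scoreSpread = Math.max(...values) - Math.min(...values);
  return {
    averageTeamScore,
    scoreSpread,
    // A wide spread is a signal to discuss the submission as a team
    // before recording a final decision.
    needsDiscussion: scoreSpread >= disagreementThreshold,
  };
}

// Example: three reviewers scoring the same submission on a 1-5 scale.
const summary = summarizeReviews([
  { reviewer: "alice", score: 4 },
  { reviewer: "bob", score: 2 },
  { reviewer: "carol", score: 4 },
]);
console.log(summary); // { averageTeamScore: 3.33..., scoreSpread: 2, needsDiscussion: true }
```

However you aggregate, keep the quantitative summary alongside the qualitative notes: the numbers flag where to look, while the chat log and team comments explain why.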

Questions to Consider

  • Did they demonstrate strategic thinking about the problem?
  • How effectively did they use AI as a collaborative tool?
  • Would their approach scale to real-world scenarios?
  • Do they show awareness of AI limitations and risks?
  • How do they compare to your team's standards?

Documentation Tips

  • Record specific examples from chat logs
  • Note areas where the candidate excelled or struggled
  • Document team consensus and any disagreements
  • Include follow-up interview topics
  • Save the decision rationale for future reference

Ready to Start Reviewing?

Apply these evaluation techniques to make better hiring decisions.