
Frequently Asked Questions

Common questions from recruiters using Evaluait for AI-based hiring.

General Questions

How do I know if a candidate is good at using AI?

Look for these key indicators in their AI interactions:

  • Specific prompts: They ask detailed, context-rich questions
  • Iterative approach: They refine prompts based on AI responses
  • Strategic thinking: They break down complex problems into steps
  • Quality control: They validate and improve AI-generated content

What makes a good assignment for AI evaluation?

Effective assignments have these characteristics:

  • Open-ended: No single "correct" answer
  • AI-dependent: Requires AI assistance to complete effectively
  • Realistic: Mirrors actual work scenarios
  • Time-bounded: 60-90 minutes for most tasks
  • Measurable: Clear success criteria

How long should assessment sessions be?

Recommended session lengths by assignment type:

  • Design tasks: 45-60 minutes
  • Analysis tasks: 60-75 minutes
  • Implementation tasks: 75-90 minutes
  • Research tasks: 60-90 minutes

Most tasks work well with 60 minutes as a default.

Team Management

How do I add team members to my organization?

Team members can join through our request-based system:

  1. Share your organization name with the new team member
  2. They create an account and request to join your organization
  3. You review and approve their request in the Team Management page
  4. Once approved, they can access your assignments and submissions

Can multiple recruiters review the same submission?

Yes! Evaluait supports collaborative evaluation:

  • Multiple team members can add comments to submissions
  • Each recruiter can score candidates independently
  • Comments are timestamped and attributed to team members
  • Great for reducing bias and getting diverse perspectives

Assessment & Evaluation

What should I look for in AI chat logs?

Key patterns that indicate strong AI utilization:

  • Progressive refinement: Prompts become more specific over time
  • Context building: They give the AI relevant background information
  • Critical evaluation: They question and improve AI responses
  • Strategic decomposition: They break complex tasks into manageable parts
  • Appropriate scope: They ask questions that match the AI's capabilities

How accurate are the AI-generated assessments?

AI assessments provide helpful insights but should supplement, not replace, human judgment:

  • Good at: Identifying patterns, counting interactions, detecting strategies
  • Less reliable for: Assessing creativity, domain expertise, cultural fit
  • Best practice: Use AI assessments as a starting point for team discussions
  • Recommendation: Always review the actual chat logs yourself

What are red flags I should watch for?

Warning signs in candidate AI interactions:

  • Copy-paste dependency: Direct copying without understanding
  • Vague prompts: Asking overly generic questions
  • No iteration: Accepting first AI response without refinement
  • Inappropriate scope: Expecting the AI to do everything instead of collaborating with it strategically
  • No validation: Not checking or improving AI-generated content

Technical Questions

What happens if a candidate's session expires?

When time expires, the session is handled automatically:

  • The session auto-submits with the candidate's current notes and chat history
  • An "AUTO-SUBMITTED" marker indicates that time ran out
  • All chat interactions are still captured and reviewable
  • You can evaluate everything the candidate completed before time expired

Can candidates see each other's sessions or assignments?

No, sessions are completely isolated:

  • Each session has a unique access code
  • Candidates can only access their specific session
  • No candidate data is shared between sessions
  • Only your organization's recruiters can view submissions

Still Have Questions?

Can't find what you're looking for? We're here to help.