HR Recruitment Screening Pipeline
1. The Problem
A typical open role generates 80–250 applications. Reading 200 CVs, shortlisting 20 candidates, scheduling first-round interviews, and conducting initial screening calls consumes 15–20 hours of HR or hiring-manager time before a single meaningful conversation has taken place. Multiply that across several simultaneous open roles and recruitment becomes a full-time job in itself. More critically, manual screening is biased toward familiarity and pattern-matching rather than potential, which means the best candidates are regularly screened out by the least effective part of the process.
2. Integration & Webhook Setup
Follow the exact steps below to configure and deploy this automation inside your OpenClaw workspace.
1. Create a new agent named `recruitment-pipeline`. Connect your Applicant Tracking System (Greenhouse, Lever, Workable, and SmartRecruiters are all supported) and your async video platform (HireVue or Spark Hire via API).
2. Build your `SCORING_CRITERIA` for each role type in your organisation: define the must-have qualifications, preferred qualifications, and behavioural signals that predict success in your specific environment. Weight each criterion; must-haves carry 3× the weight of preferred criteria.
3. Configure your `SCREENING_QUESTIONS` bank: build 10–15 role-specific async video questions per job family. The agent selects the 4 most relevant questions for each candidate based on their CV profile, giving you personalised screening at scale.
4. Set your `BIAS_MITIGATION` parameters: configure the agent to assess candidates against structured criteria only, with name and photo fields excluded from the initial scoring pass. The scored shortlist is presented with candidate identifiers only; personal details become visible only after the hiring manager confirms they want to proceed with a candidate.
5. Configure your rejection communication template: the automated rejection should be warm, specific, and prompt. Candidates who receive a personalised rejection within 48 hours report significantly better employer brand perception than those who wait weeks for a generic response.
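As a reference point for step 2, a single role-family entry in `scoring_criteria.json` might be shaped like the sketch below. The playbook does not specify the file's schema, so treat every field name here as an assumption; the only fixed rule is that must-haves carry 3× the weight of preferred criteria.

```json
{
  "role_family": "software_engineering",
  "must_have": [
    { "criterion": "5+ years backend development experience", "weight": 3 },
    { "criterion": "Production experience with a major cloud platform", "weight": 3 }
  ],
  "preferred": [
    { "criterion": "Experience mentoring junior engineers", "weight": 1 }
  ],
  "behavioural_signals": [
    { "criterion": "Evidence of ownership beyond assigned scope", "weight": 1 }
  ]
}
```

Whatever schema you settle on, keeping the weight explicit on each criterion (rather than implied by its list) makes the calibration run in Step 4 easier to audit.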
3. The Context Payload (context.json)
Save this file as: .openclaw/agents/hr-recruitment-screening-pipeline/context.json
{
  "automation_id": "24",
  "title": "HR Recruitment Screening Pipeline",
  "level": 3,
  "tier": "Enterprise Playbook",
  "setup_time": "1 week",
  "estimated_api_cost": "~$40–$90/mo",
  "client_price_range": "$2,000–$10,000",
  "agents": [
    {
      "role": "orchestrator",
      "model": "claude-3-5-sonnet",
      "temperature": 0.2,
      "max_tokens": 4096
    }
  ],
  "memory": "session",
  "output_format": "structured_json",
  "human_review_gate": true,
  "documentation_standard": "required"
}
4. Execution Commands
Run these commands from your openclaw-workshop/ directory to validate, test, and schedule this automation. Commands are taken directly from The OpenClaw Income Engine, Appendix G.
# ── STEP 1: Register application webhook ──
$ openclaw webhook create recruitment-pipeline --trigger application.created
$
# ── STEP 2: Load scoring criteria for all role families ──
$ openclaw config load-criteria recruitment-pipeline --file ./scoring_criteria.json
$
# ── STEP 3: Load screening question bank ──
$ openclaw config load-questions recruitment-pipeline --file ./screening_questions.json
$
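For reference, one job-family entry in the `screening_questions.json` loaded above might look like the sketch below. The schema and field names are assumptions; the playbook only fixes that each job family holds 10–15 questions and that the agent picks the 4 most relevant per candidate.

```json
{
  "job_family": "customer_success",
  "questions": [
    {
      "id": "cs-01",
      "text": "Walk us through a time you turned around an unhappy customer.",
      "tags": ["conflict", "empathy"]
    },
    {
      "id": "cs-02",
      "text": "How do you prioritise when three accounts escalate at once?",
      "tags": ["prioritisation"]
    }
  ]
}
```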
# ── STEP 4: Run calibration on 10 past applications (test mode) ──
$ openclaw run recruitment-pipeline --calibrate --input ./past_applications/ --limit 10
# Review scoring accuracy vs actual hiring outcomes before going live.
$
# ── STEP 5: Activate webhook and go live ──
$ openclaw webhook activate recruitment-pipeline
$
# ── Score a single application on demand ──
$ openclaw run recruitment-pipeline --application-id <ats_application_id>
$
# ── View shortlist for an active role ──
$ openclaw run recruitment-pipeline --role-id <ats_job_id> --shortlist
$
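Because the context payload sets `output_format` to `structured_json` and identities stay masked until explicitly released, the shortlist output might resemble the following sketch. All field names and the `CAND-0147` identifier are illustrative assumptions, not a documented response format.

```json
{
  "role_id": "<ats_job_id>",
  "shortlist": [
    {
      "candidate_ref": "CAND-0147",
      "score": 86,
      "must_have_coverage": "5/5",
      "identity_released": false
    }
  ]
}
```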
# ── Release personal details for a shortlisted candidate ──
$ openclaw run recruitment-pipeline --application-id <id> --reveal-identity
$
# ── Generate bias audit report for last 30 days ──
$ openclaw run recruitment-pipeline --bias-audit --days 30
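The shape of the bias audit report is not documented in this playbook; a plausible structured-JSON sketch, with every field an assumption, might be:

```json
{
  "window_days": 30,
  "applications_scored": 412,
  "identity_masked_rate": 1.0,
  "funnel": {
    "screened": 412,
    "shortlisted": 38,
    "identity_revealed": 21
  }
}
```

Whatever the real format, reviewing this report monthly alongside actual hiring outcomes is the practical check that the `BIAS_MITIGATION` settings from the setup steps are working as intended.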