An autonomous AI job-search agent that discovers job postings via RSS, scores each one against my professional profile using GPT-4o-mini, drafts tailored applications for high-scoring matches, and runs three times daily on a schedule, fully without human intervention.
Atlas is an autonomous AI pipeline that automates the job search for software engineers. It discovers job postings, evaluates each one against Samuel's professional profile using an LLM scoring system, drafts tailored applications for strong matches, and sends results via email, all without a human trigger. It runs on a schedule three times per day on an Oracle Cloud VPS.
The name Atlas reflects its purpose: carrying the operational weight of job searching so the engineer can focus on building.
Technical job searching is high-friction and unsystematic. Atlas solves this by making job discovery and triage continuous and automatic.
Atlas subscribes to RSS feeds from job boards and company career pages. On each run (3x daily), it fetches new postings from all feeds and deduplicates against already-processed postings stored in the database.
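The deduplication step can be sketched as a pure function over entries already parsed by feedparser (field names follow feedparser's entry shape; the in-memory ID set stands in for the IDs loaded from PostgreSQL at the start of a run):

```python
def dedupe_postings(entries, seen_ids):
    """Keep only entries whose ID has not been processed before.

    `entries` are feed items as parsed by feedparser; `seen_ids` is the
    set of IDs already stored in the database, loaded before the run.
    """
    fresh = []
    for entry in entries:
        # Feeds without a GUID fall back to the posting's link as the ID.
        uid = entry.get("id") or entry.get("link")
        if uid and uid not in seen_ids:
            seen_ids.add(uid)
            fresh.append(entry)
    return fresh
```

New IDs are added to the set as they are encountered, so a posting syndicated by two feeds in the same run is still processed only once.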
Before any LLM call, each raw posting passes through a hard pre-filter. This layer rejects approximately 60-70% of raw postings, directly reducing API cost and latency.
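A minimal sketch of such a keyword gate, assuming the filter checks for relevant technologies and screens out clearly mismatched seniority levels (the keyword sets below are illustrative, not Atlas's production rules):

```python
import re

# Illustrative criteria; the real rule set lives in Atlas's configuration.
REQUIRED_KEYWORDS = {"flutter", "python", "ai"}
BLOCKED_TITLE_WORDS = {"staff", "principal", "director", "intern"}

def passes_prefilter(posting: dict) -> bool:
    """Cheap keyword gate that runs before any LLM call."""
    text = f"{posting.get('title', '')} {posting.get('summary', '')}".lower()
    words = set(re.findall(r"[a-z+#]+", text))
    if not words & REQUIRED_KEYWORDS:
        return False  # no relevant tech mentioned: drop without an API call
    title_words = set(re.findall(r"[a-z]+", posting.get("title", "").lower()))
    if title_words & BLOCKED_TITLE_WORDS:
        return False  # seniority clearly out of range
    return True
```

Matching on whole words (rather than substrings) avoids false positives such as "ai" inside "maintain".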
For postings that pass the pre-filter, the scoring prompt runs. Its structure follows the RCTF framework.
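A sketch of how the prompt could be assembled, assuming RCTF stands for Role / Context / Task / Format; the section wording and the JSON output contract are illustrative, not Atlas's production prompt:

```python
def build_scoring_prompt(posting: dict, profile: str) -> str:
    """Assemble the scoring prompt for GPT-4o-mini.

    RCTF is taken here to mean Role / Context / Task / Format (an
    assumption); section contents are illustrative.
    """
    return "\n\n".join([
        "# Role\nYou are a technical recruiter scoring job postings for one specific candidate.",
        f"# Context\nCandidate profile:\n{profile}",
        f"# Task\nScore this posting from 1-10 for fit:\n{posting['title']}\n{posting['summary']}",
        '# Format\nReply with JSON only: {"score": <int 1-10>, "application_angle": "<one sentence>"}',
    ])
```

Forcing a strict JSON reply keeps the downstream parsing step trivial and makes a malformed response easy to detect and retry.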
For scores ≥ 6, Atlas drafts a tailored application using the application_angle from the scoring output, emails the draft to Samuel, and stores the posting in the database with status "applied".

Token optimization note: only postings scored ≥ 6 trigger an application draft prompt (a second LLM call); postings scored 5 are stored as-is. The expensive application drafting step therefore runs only on the minority of high-confidence matches.
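The gating logic above reduces to a small triage function; the handling of scores below 5 is an assumption (the text only specifies 5 and ≥ 6):

```python
DRAFT_THRESHOLD = 6  # minimum score that justifies a second LLM call

def triage(scored_posting: dict) -> str:
    """Decide the next pipeline step from the scoring result.

    Only "draft_application" costs a second GPT-4o-mini call; scores
    below 5 are assumed discarded (not specified in the write-up).
    """
    score = scored_posting["score"]
    if score >= DRAFT_THRESHOLD:
        return "draft_application"  # draft, email, store as "applied"
    if score == 5:
        return "store_only"         # kept for reference, no draft
    return "discard"
```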
For scores ≥ 8, the application draft is built around the application_angle identified during scoring.

Atlas runs on an Oracle Cloud VPS as a Python script scheduled via cron (3x daily), with results stored in a PostgreSQL database.
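The 3x-daily schedule maps to a single crontab line on the VPS; the run times, script path, and log path below are placeholders, not the actual deployment values:

```shell
# Run the Atlas pipeline three times a day (07:00, 13:00, 19:00 are
# assumed times); install with `crontab -e` on the VPS.
0 7,13,19 * * * /usr/bin/python3 /opt/atlas/run_pipeline.py >> /var/log/atlas.log 2>&1
```

Redirecting stdout and stderr to a log file is the simplest way to keep a trace of unattended runs for later debugging.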
Atlas is in active development. The core pipeline is built and has been tested manually; specific bugs are being resolved before stable scheduled deployment.
Target: fully scheduled, stable autonomous operation within the next few weeks.
Python, OpenAI API (GPT-4o-mini), RSS parsing (feedparser), PostgreSQL, Oracle Cloud VPS, cron scheduling, SMTP email delivery
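The SMTP delivery step can be sketched with the standard library; the sender address and SMTP host below are placeholders, and the real deployment would presumably use authenticated SMTP:

```python
import smtplib
from email.message import EmailMessage

def build_draft_email(posting_title: str, draft: str, to_addr: str) -> EmailMessage:
    """Wrap an application draft in an email message."""
    msg = EmailMessage()
    msg["Subject"] = f"Atlas draft: {posting_title}"
    msg["From"] = "atlas@example.com"  # placeholder sender
    msg["To"] = to_addr
    msg.set_content(draft)
    return msg

def send_draft(msg: EmailMessage, host: str = "localhost") -> None:
    # Connection details are assumptions; production would add auth/TLS.
    with smtplib.SMTP(host) as server:
        server.send_message(msg)
```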
The scoring rubric required calibration. Initial runs produced too many scores in the 6-7 range — the model was being generous. Tightening the context to include explicit criteria ("a score of 8 requires: Flutter as a primary requirement, Python or AI in the stack, remote-friendly, mid-level experience range") improved score distribution significantly. Prompt precision matters as much as prompt completeness.
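The tightened criteria can be expressed as an explicit rubric block appended to the prompt's context; the wording below adapts the criteria quoted above, and the exact production text is an assumption:

```python
# Explicit per-score criteria anchor the model and narrow the 6-7 band.
RUBRIC = """Score 8 or higher only if ALL of the following hold:
- Flutter is a primary requirement
- Python or AI appears in the stack
- Remote-friendly
- Mid-level experience range
Score 6-7: most, but not all, criteria hold.
Score 5 or below: weak or no fit."""

def rubric_section() -> str:
    """Render the rubric as a prompt section (illustrative formatting)."""
    return "# Scoring criteria\n" + RUBRIC
```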