feat: LLM-powered model auto-configuration and improved onboarding

Major changes:
- Add autoconfigure_models tool for intelligent model assignment
- Implement LLM-based model selection using openclaw agent
- Improve onboarding flow with better model access checks
- Update README with clearer installation and onboarding instructions

Technical improvements:
- Add model-fetcher utility to query authenticated models
- Add smart-model-selector for LLM-driven model assignment
- Use session context for LLM calls during onboarding
- Suppress logging from openclaw models list calls

Documentation:
- Add prerequisites section to README
- Add conversational onboarding example
- Improve quick start flow

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Author: Lauren ten Hoor
Date: 2026-02-12 20:37:15 +08:00
parent 84483176f4
commit b2fc94db9e
12 changed files with 835 additions and 304 deletions

.gitignore vendored

@@ -4,3 +4,4 @@ dist/
*.js.map
*.d.ts
!openclaw.plugin.json
.openclaw

README.md

@@ -6,12 +6,21 @@
**Turn any group chat into a dev team that ships.**
DevClaw is a plugin for [OpenClaw](https://openclaw.ai) that turns your orchestrator agent into a development manager. It hires developers, assigns tasks, reviews code, and keeps the pipeline moving — across as many projects as you have group chats. [Get started &rarr;](#getting-started)
DevClaw is a plugin for [OpenClaw](https://openclaw.ai) that turns your orchestrator agent into a development manager. It hires developers, assigns tasks, reviews code, and keeps the pipeline moving — across as many projects as you have group chats.
**Prerequisites:** [OpenClaw](https://openclaw.ai) must be installed and running.
```bash
openclaw plugins install @laurentenhoor/devclaw
```
Then start onboarding by chatting with your agent in any channel:
```
"Hey, can you help me set up DevClaw?"
```
[Read more on onboarding &rarr;](#getting-started)
---
## What it looks like
@@ -386,6 +395,11 @@ Or for local development:
openclaw plugins install -l ./devclaw
```
Start onboarding:
```bash
openclaw chat "Help me set up DevClaw"
```
### Set up through conversation
The easiest way to configure DevClaw is to just talk to your agent:

index.ts

@@ -9,6 +9,7 @@ import { createHealthTool } from "./lib/tools/health.js";
import { createProjectRegisterTool } from "./lib/tools/project-register.js";
import { createSetupTool } from "./lib/tools/setup.js";
import { createOnboardTool } from "./lib/tools/onboard.js";
import { createAutoConfigureModelsTool } from "./lib/tools/autoconfigure-models.js";
import { registerCli } from "./lib/cli.js";
import { registerHeartbeatService } from "./lib/services/heartbeat.js";
@@ -103,6 +104,9 @@ const plugin = {
});
api.registerTool(createSetupTool(api), { names: ["setup"] });
api.registerTool(createOnboardTool(api), { names: ["onboard"] });
api.registerTool(createAutoConfigureModelsTool(api), {
names: ["autoconfigure_models"],
});
// CLI
api.registerCli(({ program }: { program: any }) => registerCli(program, api), {
@@ -113,7 +117,7 @@ const plugin = {
registerHeartbeatService(api);
api.logger.info(
"DevClaw plugin registered (10 tools, 1 CLI command group, 1 service)",
"DevClaw plugin registered (11 tools, 1 CLI command group, 1 service)",
);
},
};

lib/tools/onboard.ts

@@ -38,12 +38,18 @@ export async function hasWorkspaceFiles(
// ---------------------------------------------------------------------------
function buildModelTable(pluginConfig?: Record<string, unknown>): string {
const cfg = (pluginConfig as { models?: { dev?: Record<string, string>; qa?: Record<string, string> } })?.models;
const cfg = (
pluginConfig as {
models?: { dev?: Record<string, string>; qa?: Record<string, string> };
}
)?.models;
const lines: string[] = [];
for (const [role, levels] of Object.entries(DEFAULT_MODELS)) {
for (const [level, defaultModel] of Object.entries(levels)) {
const model = cfg?.[role as "dev" | "qa"]?.[level] || defaultModel;
lines.push(` - **${role} ${level}**: ${model} (default: ${defaultModel})`);
lines.push(
` - **${role} ${level}**: ${model} (default: ${defaultModel})`,
);
}
}
return lines.join("\n");
@@ -111,26 +117,29 @@ Ask: "Do you want to configure DevClaw for the current agent, or create a new de
- If none selected, user can add bindings manually later via openclaw.json
**Step 2: Model Configuration**
⚠️ **IMPORTANT**: First check what models the user has access to! The defaults below are suggestions.
Ask: "What models do you have access to in your OpenClaw configuration?"
- Guide them to check their available models (router configuration, API keys, etc.)
- If they have the default Claude models, great!
- If not, help them map their available models to these levels:
1. **Call \`autoconfigure_models\`** to automatically discover and assign models:
- Discovers all authenticated models in OpenClaw
- Uses AI to intelligently assign them to DevClaw roles
- Returns a ready-to-use model configuration
**Suggested default level-to-model mapping:**
2. **Handle the result**:
- If \`success: false\` and \`modelCount: 0\`:
- **BLOCK setup** - show the authentication instructions from the message
- **DO NOT proceed** - exit onboarding until user configures API keys
- If \`success: true\`:
- Present the model assignment table to the user
- Store the \`models\` object for Step 3
| Role | Level | Default Model | Purpose |
|------|-------|---------------|---------|
${modelTable}
3. **Optional: Prefer specific provider**
- If user wants only models from one provider (e.g., "only use Anthropic"):
- Call \`autoconfigure_models({ preferProvider: "anthropic" })\`
**Model selection guidance:**
- **junior/tester**: Fastest, cheapest models (Haiku-class, GPT-4-mini, etc.)
- **medior/reviewer**: Balanced models (Sonnet-class, GPT-4, etc.)
- **senior**: Most capable models (Opus-class, o1, etc.)
Ask which levels they want to customize, and collect their actual model IDs.
💡 **Tip**: Guide users to configure finer-grained mappings rather than accepting unsuitable defaults.
4. **Confirm with user**
- Ask: "Does this look good, or would you like to customize any roles?"
- If approved → proceed to Step 3 with the \`models\` configuration
- If they want changes → ask which specific roles to modify
- If they want different provider → go back to step 3
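For illustration, the result-handling described in step 2 can be sketched as follows. This is a hypothetical helper, not part of the shipped plugin; the field names (`success`, `modelCount`, `models`, `message`) mirror the JSON result that `autoconfigure_models` returns in this commit:

```typescript
// Hypothetical sketch of the agent's decision logic for the
// autoconfigure_models result. Field names follow the tool's JSON output.
type AutoConfigureResult = {
  success: boolean;
  modelCount?: number;
  models?: unknown;
  message: string;
};

function handleAutoConfigureResult(
  result: AutoConfigureResult,
):
  | { action: "block"; instructions: string }
  | { action: "proceed"; models: unknown } {
  if (!result.success && result.modelCount === 0) {
    // No authenticated models: block setup and surface the auth instructions.
    return { action: "block", instructions: result.message };
  }
  if (result.success) {
    // Store the models object for Step 3 (the `setup` call).
    return { action: "proceed", models: result.models };
  }
  // Any other failure: treat as blocking with the returned message.
  return { action: "block", instructions: result.message };
}
```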
**Step 3: Run Setup**
Call \`setup\` with the collected answers:

lib/setup/llm-model-selector.ts

@@ -0,0 +1,157 @@
/**
* llm-model-selector.ts — LLM-powered intelligent model selection.
*
* Uses an LLM to understand model capabilities and assign optimal models to DevClaw roles.
*/
import { execSync } from "node:child_process";
import { writeFileSync, unlinkSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";
export type ModelAssignment = {
dev: {
junior: string;
medior: string;
senior: string;
};
qa: {
reviewer: string;
tester: string;
};
};
/**
* Use an LLM to intelligently select and assign models to DevClaw roles.
*/
export async function selectModelsWithLLM(
availableModels: Array<{ model: string; provider: string }>,
sessionKey?: string,
): Promise<ModelAssignment | null> {
if (availableModels.length === 0) {
return null;
}
// If only one model, assign it to all roles
if (availableModels.length === 1) {
const model = availableModels[0].model;
return {
dev: { junior: model, medior: model, senior: model },
qa: { reviewer: model, tester: model },
};
}
// Create a prompt for the LLM
const modelList = availableModels.map((m) => m.model).join("\n");
const prompt = `You are an AI model expert. Analyze the following authenticated AI models and assign them to DevClaw development roles based on their capabilities.
Available models:
${modelList}
Assign models to these roles based on capability:
- **senior** (most capable): Complex architecture, refactoring, critical decisions
- **medior** (balanced): Features, bug fixes, code review
- **junior** (fast/efficient): Simple fixes, testing, routine tasks
- **reviewer** (same as medior): Code review
- **tester** (same as junior): Testing
Rules:
1. Prefer same provider for consistency
2. Assign most capable model to senior
3. Assign mid-tier model to medior/reviewer
4. Assign fastest/cheapest model to junior/tester
5. Consider model version numbers (higher = newer/better)
6. Stable versions (no date) > snapshot versions (with date like 20250514)
Return ONLY a JSON object in this exact format (no markdown, no explanation):
{
"dev": {
"junior": "provider/model-name",
"medior": "provider/model-name",
"senior": "provider/model-name"
},
"qa": {
"reviewer": "provider/model-name",
"tester": "provider/model-name"
}
}`;
// Write prompt to temp file for safe passing to shell
const tmpFile = join(tmpdir(), `devclaw-model-select-${Date.now()}.txt`);
writeFileSync(tmpFile, prompt, "utf-8");
try {
// Call openclaw agent using current session context if available
const sessionFlag = sessionKey
? `--session-id "${sessionKey}"`
: `--session-id devclaw-model-selection`;
const result = execSync(
`openclaw agent --local ${sessionFlag} --message "$(cat "${tmpFile}")" --json`,
{
encoding: "utf-8",
timeout: 30000,
stdio: ["pipe", "pipe", "ignore"],
},
).trim();
// Parse the response from openclaw agent --json
const lines = result.split("\n");
const jsonStartIndex = lines.findIndex((line) => line.trim().startsWith("{"));
if (jsonStartIndex === -1) {
throw new Error("No JSON found in LLM response");
}
const jsonString = lines.slice(jsonStartIndex).join("\n");
// openclaw agent --json returns: { payloads: [{ text: "```json\n{...}\n```" }], meta: {...} }
const response = JSON.parse(jsonString);
if (!response.payloads || !Array.isArray(response.payloads) || response.payloads.length === 0) {
throw new Error("Invalid openclaw agent response structure - missing payloads");
}
// Extract text from first payload
const textContent = response.payloads[0].text;
if (!textContent) {
throw new Error("Empty text content in openclaw agent payload");
}
// Strip markdown code blocks (```json and ```)
const cleanJson = textContent
.replace(/```json\n?/g, '')
.replace(/```\n?/g, '')
.trim();
// Parse the actual model assignment JSON
const assignment = JSON.parse(cleanJson);
// Log what we got for debugging
console.log("LLM returned:", JSON.stringify(assignment, null, 2));
// Validate the structure
if (
!assignment.dev?.junior ||
!assignment.dev?.medior ||
!assignment.dev?.senior ||
!assignment.qa?.reviewer ||
!assignment.qa?.tester
) {
console.error("Invalid assignment structure. Got:", assignment);
throw new Error(`Invalid assignment structure from LLM. Missing fields in: ${JSON.stringify(Object.keys(assignment))}`);
}
return assignment as ModelAssignment;
} catch (err) {
console.error("LLM model selection failed:", (err as Error).message);
return null;
} finally {
// Clean up temp file
try {
unlinkSync(tmpFile);
} catch {
// Ignore cleanup errors
}
}
}
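The fence-stripping and structure validation above can be isolated into a small self-contained sketch. `parseAssignment` is an illustrative name, not the shipped API; it assumes the LLM's answer may arrive wrapped in a ```json fence:

```typescript
// Sketch: strip markdown fences from an LLM answer and validate that the
// parsed object has every DevClaw role field before trusting it.
type ModelAssignment = {
  dev: { junior: string; medior: string; senior: string };
  qa: { reviewer: string; tester: string };
};

function parseAssignment(text: string): ModelAssignment {
  const clean = text
    .replace(/```json\n?/g, "")
    .replace(/```\n?/g, "")
    .trim();
  const a = JSON.parse(clean);
  const complete =
    a?.dev?.junior && a?.dev?.medior && a?.dev?.senior &&
    a?.qa?.reviewer && a?.qa?.tester;
  if (!complete) {
    throw new Error("Invalid assignment structure from LLM");
  }
  return a as ModelAssignment;
}
```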

lib/setup/model-fetcher.ts

@@ -0,0 +1,81 @@
/**
* model-fetcher.ts — Shared helper for fetching OpenClaw models without logging.
*
* Uses execSync to bypass OpenClaw's command logging infrastructure.
*/
import { execSync } from "node:child_process";
export type OpenClawModelRow = {
key: string;
name?: string;
input: string;
contextWindow: number | null;
local: boolean;
available: boolean;
tags: string[];
missing?: boolean;
};
/**
* Fetch all models from OpenClaw without logging.
*
* @param allModels - If true, fetches all models (--all flag). If false, only authenticated models.
* @returns Array of model objects from OpenClaw's model registry
*/
export function fetchModels(allModels = true): OpenClawModelRow[] {
try {
const command = allModels
? "openclaw models list --all --json"
: "openclaw models list --json";
// Use execSync directly to bypass OpenClaw's command logging
const output = execSync(command, {
encoding: "utf-8",
timeout: 10000,
cwd: process.cwd(),
// Suppress stderr to avoid any error messages
stdio: ["pipe", "pipe", "ignore"],
}).trim();
if (!output) {
throw new Error("Empty output from openclaw models list");
}
// Parse JSON (skip any log lines like "[plugins] ...")
const lines = output.split("\n");
// Find the first line that starts with { (the beginning of JSON)
const jsonStartIndex = lines.findIndex((line: string) => {
const trimmed = line.trim();
return trimmed.startsWith("{");
});
if (jsonStartIndex === -1) {
throw new Error(
`No JSON object found in output. Got: ${output.substring(0, 200)}...`,
);
}
// Join all lines from the JSON start to the end
const jsonString = lines.slice(jsonStartIndex).join("\n");
const data = JSON.parse(jsonString);
const models = data.models as OpenClawModelRow[];
if (!Array.isArray(models)) {
throw new Error(`Expected array of models, got: ${typeof models}`);
}
return models;
} catch (err) {
throw new Error(`Failed to fetch models: ${(err as Error).message}`);
}
}
/**
* Fetch only authenticated models (available: true).
*/
export function fetchAuthenticatedModels(): OpenClawModelRow[] {
// Use --all flag but suppress logging via stdio in fetchModels()
return fetchModels(true).filter((m) => m.available === true);
}
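The "skip log lines, then parse" step in `fetchModels` can be shown on its own. `extractJsonObject` is a hypothetical helper for illustration; it assumes log noise (e.g. `[plugins] ...`) may precede the JSON on stdout:

```typescript
// Sketch: find the first line that opens a JSON object and parse from there,
// discarding any log lines the CLI printed before it.
function extractJsonObject(output: string): unknown {
  const lines = output.split("\n");
  const start = lines.findIndex((line) => line.trim().startsWith("{"));
  if (start === -1) {
    throw new Error("No JSON object found in output");
  }
  return JSON.parse(lines.slice(start).join("\n"));
}
```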

lib/setup/smart-model-selector.ts

@@ -0,0 +1,98 @@
/**
* smart-model-selector.ts — LLM-powered model selection for DevClaw roles.
*
* Uses an LLM to intelligently analyze and assign models to DevClaw roles.
*/
export type ModelAssignment = {
dev: {
junior: string;
medior: string;
senior: string;
};
qa: {
reviewer: string;
tester: string;
};
};
/**
* Intelligently assign available models to DevClaw roles using an LLM.
*
* Strategy:
* 1. If 0 models → return null (setup should be blocked)
* 2. If 1 model → assign it to all roles
* 3. If multiple models → use LLM to intelligently assign
*/
export async function assignModels(
availableModels: Array<{ model: string; provider: string; authenticated: boolean }>,
sessionKey?: string,
): Promise<ModelAssignment | null> {
// Filter to only authenticated models
const authenticated = availableModels.filter((m) => m.authenticated);
if (authenticated.length === 0) {
return null; // No models available - setup should be blocked
}
// If only one model, use it for everything
if (authenticated.length === 1) {
const model = authenticated[0].model;
return {
dev: { junior: model, medior: model, senior: model },
qa: { reviewer: model, tester: model },
};
}
// Multiple models: use LLM-based selection
const { selectModelsWithLLM } = await import("./llm-model-selector.js");
const llmResult = await selectModelsWithLLM(authenticated, sessionKey);
if (!llmResult) {
throw new Error("LLM-based model selection failed. Please try again or configure models manually.");
}
return llmResult;
}
/**
* Format model assignment as a readable table.
*/
export function formatAssignment(assignment: ModelAssignment): string {
const lines = [
"| Role | Level | Model |",
"|------|----------|--------------------------|",
`| DEV | senior | ${assignment.dev.senior.padEnd(24)} |`,
`| DEV | medior | ${assignment.dev.medior.padEnd(24)} |`,
`| DEV | junior | ${assignment.dev.junior.padEnd(24)} |`,
`| QA | reviewer | ${assignment.qa.reviewer.padEnd(24)} |`,
`| QA | tester | ${assignment.qa.tester.padEnd(24)} |`,
];
return lines.join("\n");
}
/**
* Generate setup instructions when no models are available.
*/
export function generateSetupInstructions(): string {
return `❌ No authenticated models found. DevClaw needs at least one model to work.
To configure model authentication:
**For Anthropic Claude:**
export ANTHROPIC_API_KEY=your-api-key
# or: openclaw auth add --provider anthropic
**For OpenAI:**
export OPENAI_API_KEY=your-api-key
# or: openclaw auth add --provider openai
**For other providers:**
openclaw auth add --provider <provider>
**Verify authentication:**
openclaw models list
(Look for "Auth: yes" in the output)
Once you see authenticated models, re-run: onboard`;
}
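The zero- and single-model branches of the strategy above can be sketched independently of the LLM path (illustrative function name, not the shipped API):

```typescript
// Sketch: with no authenticated models return null (setup is blocked);
// with exactly one, assign it to every dev and qa role.
type Assignment = {
  dev: { junior: string; medior: string; senior: string };
  qa: { reviewer: string; tester: string };
};

function fallbackAssignment(
  models: Array<{ model: string; authenticated: boolean }>,
): Assignment | null {
  const auth = models.filter((m) => m.authenticated);
  if (auth.length === 0) return null; // block setup upstream
  const m = auth[0].model; // one model serves every role
  return {
    dev: { junior: m, medior: m, senior: m },
    qa: { reviewer: m, tester: m },
  };
}
```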

lib/tools/autoconfigure-models.ts

@@ -0,0 +1,136 @@
/**
* autoconfigure-models.ts — Tool for automatically configuring model assignments.
*
* Queries available authenticated models and intelligently assigns them to DevClaw roles.
*/
import type { OpenClawPluginApi } from "openclaw/plugin-sdk";
import { jsonResult } from "openclaw/plugin-sdk";
import type { ToolContext } from "../types.js";
import {
assignModels,
formatAssignment,
generateSetupInstructions,
type ModelAssignment,
} from "../setup/smart-model-selector.js";
import { fetchAuthenticatedModels } from "../setup/model-fetcher.js";
/**
* Get available authenticated models from OpenClaw.
*/
async function getAuthenticatedModels(
api: OpenClawPluginApi,
): Promise<Array<{ model: string; provider: string; authenticated: boolean }>> {
try {
const models = fetchAuthenticatedModels();
// Map to the format expected by assignModels()
return models.map((m) => {
// Extract provider from key (format: provider/model-name)
const provider = m.key.split("/")[0] || "unknown";
return {
model: m.key,
provider,
authenticated: true,
};
});
} catch (err) {
throw new Error(`Failed to get authenticated models: ${(err as Error).message}`);
}
}
/**
* Create the autoconfigure_models tool.
*/
export function createAutoConfigureModelsTool(api: OpenClawPluginApi) {
return (ctx: ToolContext) => ({
name: "autoconfigure_models",
label: "Auto-Configure Models",
description:
"Automatically discover authenticated models and intelligently assign them to DevClaw roles based on capability tiers",
parameters: {
type: "object",
properties: {
preferProvider: {
type: "string",
description:
"Optional: Prefer models from this provider (e.g., 'anthropic', 'openai')",
},
},
},
async execute(_id: string, params: Record<string, unknown>) {
try {
// Get all authenticated models
let authenticatedModels = await getAuthenticatedModels(api);
// Filter by preferred provider if specified
const preferProvider = params?.preferProvider as string | undefined;
if (preferProvider) {
const filtered = authenticatedModels.filter(
(m) => m.provider.toLowerCase() === preferProvider.toLowerCase(),
);
if (filtered.length === 0) {
return jsonResult({
success: false,
error: `No authenticated models found for provider: ${preferProvider}`,
message: `❌ No authenticated models found for provider "${preferProvider}".\n\nAvailable providers: ${[...new Set(authenticatedModels.map((m) => m.provider))].join(", ")}`,
});
}
authenticatedModels = filtered;
}
// Intelligently assign models using current session context
const assignment = await assignModels(authenticatedModels, ctx.sessionKey);
if (!assignment) {
// No models available
const instructions = generateSetupInstructions();
return jsonResult({
success: false,
modelCount: 0,
message: instructions,
});
}
// Format the assignment
const table = formatAssignment(assignment);
const modelCount = authenticatedModels.length;
let message = `✅ Auto-configured models based on ${modelCount} authenticated model${modelCount === 1 ? "" : "s"}:\n\n`;
message += table;
message += "\n\n";
if (modelCount === 1) {
message += "Only one authenticated model found — assigned to all roles.";
} else {
message += "Models assigned by capability (most capable → senior, balanced → medior/reviewer, fastest → junior/tester).";
}
if (preferProvider) {
message += `\n📌 Filtered to provider: ${preferProvider}`;
}
message += "\n\n**Next step:** Pass this configuration to `setup` tool:\n";
message += "```javascript\n";
message += "setup({ models: <this-configuration> })\n";
message += "```";
return jsonResult({
success: true,
modelCount,
assignment,
models: assignment,
provider: preferProvider || "auto",
message,
});
} catch (err) {
const errorMsg = (err as Error).message;
api.logger.error(`Auto-configure models error: ${errorMsg}`);
return jsonResult({
success: false,
error: errorMsg,
message: `❌ Failed to auto-configure models: ${errorMsg}`,
});
}
},
});
}
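The `preferProvider` filter in the tool above can be isolated as a short sketch (hypothetical helper; a `null` return stands in for the tool's "no models for provider" error result):

```typescript
// Sketch: case-insensitive provider filter over the discovered models.
// Returns null when the requested provider has no authenticated models,
// signalling the caller to emit the error result instead.
type DiscoveredModel = { model: string; provider: string };

function filterByProvider(
  models: DiscoveredModel[],
  prefer?: string,
): DiscoveredModel[] | null {
  if (!prefer) return models;
  const filtered = models.filter(
    (m) => m.provider.toLowerCase() === prefer.toLowerCase(),
  );
  return filtered.length === 0 ? null : filtered;
}
```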

lib/tools/setup.ts

@@ -112,9 +112,10 @@ export function createSetupTool(api: OpenClawPluginApi) {
...DEV_LEVELS.map((t) => ` dev.${t}: ${result.models.dev[t]}`),
...QA_LEVELS.map((t) => ` qa.${t}: ${result.models.qa[t]}`),
"",
"Files:",
...result.filesWritten.map((f) => ` ${f}`),
);
lines.push("Files:", ...result.filesWritten.map((f) => ` ${f}`));
if (result.warnings.length > 0)
lines.push("", "Warnings:", ...result.warnings.map((w) => ` ${w}`));
lines.push(

package-lock.json generated

File diff suppressed because it is too large.

package.json

@@ -1,6 +1,6 @@
{
"name": "@laurentenhoor/devclaw",
"version": "0.1.1",
"version": "1.0.0",
"description": "Multi-project dev/qa pipeline orchestration for OpenClaw",
"type": "module",
"license": "MIT",
@@ -51,6 +51,7 @@
"openclaw": ">=2026.0.0"
},
"devDependencies": {
"typescript": "^5.8"
"@types/node": "^25.2.3",
"typescript": "^5.9.3"
}
}

tsconfig.json

@@ -8,7 +8,7 @@
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"strict": true,
"strict": false,
"skipLibCheck": true,
"types": ["node"]
},