AI runners for engineering teams

Forkline gives engineering teams AI runners that recover CI, complete repository tasks, open PRs, and keep humans in control.

Example run from the product demo:

Issue #142 "Upgrade Frontend", opened by Software-leader (labels: frontend, priority)

Execution plan for the front-end update:

  • Update Angular version: upgrade dependencies safely and resolve breaking changes.
  • Add dark theme: apply a consistent dark palette across main screens.
  • Fix GitHub issues: address reported UI bugs and polish edge-case behavior.

Est. 6-8 min across 3 parallel runners:

  • runner-01 · Update Angular version · package.json · ng test --watch=false
  • runner-02 · Add dark theme · src/styles/theme.scss · npm run test:theme
  • runner-03 · Fix GitHub issues · src/app/app.component.ts · npm run test:issues

Result: PR #143 "Upgrade Frontend", opened by forkline-bot on branch forkline/update-front-end (4 commits, 9 files, +214 -28). GitHub Actions checks (Build, Prod deploy) ready; deploy verification pending merge. Closes #142.

How Forkline Works

Runner capacity for the work already in your backlog

Connect repositories once, turn scoped tasks into runner work, and keep technical leads in control of plans, policies, and review.

Ticket-driven runner execution workflow

01. Ticket-Driven Execution

Turn Repository Tasks Into Reviewed PRs

"Start from tracked work and keep humans in the approval loop."

  • Start from Git: turn issues and repo tasks into runner work.
  • Approve the plan: review scope before runners execute.
  • Review the PR: get branches or PRs with validation context.

Elastic capacity and parallel runners

02. Elastic Capacity

Parallel Runner Capacity On Demand

"Run multiple scoped tasks at once without coordination overhead."

  • Usage-based economics: pay for runner-hours, not elapsed project time.
  • Parallel execution: assign runners to independent tasks.
  • Faster feedback: recover CI and open reviewable PRs in parallel.

Model flexibility and governance

03. Model Sovereignty

Bring Your Own Models and Policies

"Use your model investment while preserving cost and data control."

  • Bring your own keys: use the providers your team trusts.
  • Avoid lock-in: switch providers without redesigning workflows.
  • Preserve boundaries: keep runs isolated and repositories private.

Connectors

Connect your tools

Forkline integrates with the AI providers and Git platforms your team already uses.

OpenAI

General-purpose models for chat, code, and vision

GitHub Copilot

GitHub-backed coding assistant provider

Anthropic

High-quality reasoning and coding models

Google

Gemini models for multimodal and long-context tasks

OpenRouter

Unified gateway to many model providers

Vercel AI Gateway

Route requests through Vercel's AI gateway

For Technical Leads and Operators

One operating surface for AI runner execution

Launch work, inspect execution, and understand what every runner is doing from a single control surface.

  • ✓ Connect the repository and git workflows your team already uses
  • ✓ Delegate work without losing visibility into scope, status, or ownership
  • ✓ Inspect execution history, reasoning, and validation before approving output
  • ✓ Increase delivery throughput without introducing additional coordination overhead

Technical operator workspace

Automation

Automate the routine. Focus on what matters.

Set policies once, review important outputs, and let runners handle repeatable engineering work.

CI Auto-Fix

Failed build? Forkline analyzes logs, proposes a fix path, and returns changes for review.

Merge Conflicts

Runners can resolve merge conflicts and return a branch or PR for approval.

Auto PR Review

Every PR can get review feedback on code quality, security, and team policies.

Bot Mentions

Mention @forkline in supported issues or PRs to request scoped runner work.

Supported today: GitHub, GitLab, Gitea, and Forgejo workflows.
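For the bot-mention workflow above, a request might look like the following PR comment (the task and phrasing are illustrative; only the @forkline mention itself is described by the source):

```markdown
@forkline the theme toggle fails its unit test after the dark-theme change.
Please investigate, fix the failing test, and open a PR against main.
```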

Pricing

Simple, transparent pricing

Pay only for the compute time you use. Bring your own LLM keys with zero markup.

Subscription Plans

Daily hours that reset every 24 hours. Best for consistent usage.

Basic · 2 CPU / 4GB · $10/mo

Daily hours by runner size:
  • Basic (1× speed): 8h/day
  • Advanced (2× speed): 4h/day
  • Pro (4× speed): 2h/day

Advanced (★ Most Popular) · 4 CPU / 8GB · $20/mo

Daily hours by runner size:
  • Basic (1× speed): 16h/day
  • Advanced (2× speed): 8h/day
  • Pro (4× speed): 4h/day

Pro · 8 CPU / 16GB · $40/mo

Daily hours by runner size:
  • Basic (1× speed): 32h/day
  • Advanced (2× speed): 16h/day
  • Pro (4× speed): 8h/day
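The allotments above follow a simple pattern, sketched here (an inference from the listed numbers, not an official formula): each plan grants a fixed daily budget of 1×-speed runner-hours, and faster runners draw it down in proportion to their speed multiplier.

```python
# Inferred pricing pattern: a plan's daily budget, expressed in 1x-speed
# runner-hours, divided by the chosen runner's speed multiplier.
# These constants mirror the table; the formula itself is an assumption.
PLAN_BASE_HOURS = {"Basic": 8, "Advanced": 16, "Pro": 32}  # 1x-speed hours/day
RUNNER_SPEED = {"Basic": 1, "Advanced": 2, "Pro": 4}       # speed multipliers

def daily_hours(plan: str, runner_size: str) -> float:
    """Daily hours available for a given runner size under a plan."""
    return PLAN_BASE_HOURS[plan] / RUNNER_SPEED[runner_size]
```

For example, on the Advanced plan a 4×-speed Pro runner gets 16 / 4 = 4 hours per day, matching the table.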
Zero AI markup

Use your own API keys. We never add fees on top of your LLM costs.

Any model, any time

Switch between Claude, GPT, Gemini, or local models with no migration.

Daily refresh

Subscription hours reset every 24 hours. Fresh capacity each day.

Enterprise

Need Enterprise Features?

Dedicated support, custom integrations, team training, and volume discounts for organizations ready to scale their AI-assisted development.

  • Dedicated support with SLAs
  • Custom connector development
  • Team training programs
  • Volume discounts