
Cursor Tutorial 2026: Composer 2, Agents Window, and a 30-Minute Workflow

A practical Cursor tutorial for 2026 covering installation, codebase indexing, Rules, Agent workflows, pricing checks, and common mistakes.

Cursor Tutorial 2026: Use Composer 2 and the Agents Window in a Real Repo

If you are searching for a Cursor tutorial, you probably do not just want a tour of buttons. You want to know whether Cursor can understand a real codebase, edit multiple files, run tests, and still leave you in control. This guide is based on public information checked on April 30, 2026, and shows how to turn Cursor into a practical workflow instead of a vague AI promise.

The short answer: Cursor is best for developers, technical founders, and product builders who already have a runnable project and can review diffs. It is not a magic replacement for product judgment, architecture, or code review. Use it for tasks that can be verified in 30 minutes to two hours: fixing a bug, adding a small feature, refactoring a component, or writing missing tests.

Quick verdict: who should use Cursor?

| User type | Fit | Recommended workflow |
| --- | --- | --- |
| Developers with a runnable repo | Good fit | Use Ask to inspect the code, then Agent for small scoped changes and tests. |
| Technical founders | Good fit | Break requests into reviewable PR-sized tasks and ask for a plan first. |
| Product, ops, or design builders | Cautious fit | Useful for copy, UI, and internal-tool prototypes, but someone must review the diff. |
| Anyone expecting a full product from one prompt | Poor fit | Learn the project, tests, and deployment path before delegating work. |

What changed in Cursor 3.0 and Composer 2?

Cursor 3.0 moves the product further toward agent-based development. Cursor’s changelog describes an Agents Window that can run multiple agents across local environments, worktrees, cloud environments, and remote SSH. In practice, this makes Cursor less like a chat box inside an editor and more like an AI coding workbench where you can experiment, compare, and keep changes isolated.

Composer 2 is Cursor’s coding model, released in March 2026. Cursor says the standard model is available in the editor, priced at $0.50 per million input tokens and $2.50 per million output tokens, with a fast variant at $1.50 per million input tokens and $7.50 per million output tokens. Cursor reports benchmark results on CursorBench, Terminal-Bench 2.0, and SWE-bench Multilingual. Treat those numbers as useful vendor-reported context, not proof that the model will succeed in your repo. Your own tests and reviews remain the real benchmark.
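At those listed rates, a rough per-session cost is easy to estimate. The token counts below are hypothetical placeholders for illustration, not measured usage:

```shell
# Back-of-envelope cost for Composer 2 (standard) at the listed rates.
# Token counts are assumptions; substitute your own session's usage.
input_tokens=3000000    # assumed tokens sent as context
output_tokens=500000    # assumed tokens generated
total=$(awk "BEGIN { printf \"%.2f\", ($input_tokens / 1000000) * 0.50 + ($output_tokens / 1000000) * 2.50 }")
echo "Estimated session cost: \$$total"   # 1.50 input + 1.25 output = 2.75
```

Even a heavy session stays in the low single-digit dollar range at these rates; the bigger cost driver is usually how much repo context gets pulled into each request.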

Install Cursor and index your codebase

Start with a project you can already run locally.

  1. Install Cursor from the official download page and sign in.
  2. Open a repository with a README, clear scripts, and tests if possible.
  3. Wait for codebase indexing to finish. Cursor’s Codebase Indexing docs explain that it computes embeddings for files and incrementally indexes new files.
  4. Exclude build outputs, logs, large data files, and irrelevant generated files with .gitignore, .cursorignore, or .cursorindexingignore.
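As a starting point, a minimal .cursorignore might exclude the usual noise. The patterns below are illustrative; match them to what your repo actually generates:

```
# .cursorignore — keep indexing focused on source code
dist/
build/
coverage/
*.log
data/*.csv
```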

Indexing quality matters. If you ask about your authentication flow before indexing finishes, Cursor may only see the active file. Once indexing is complete, it has a better chance of finding entry points, services, and tests across the repo.

Write a task prompt Cursor can actually execute

Treat Cursor like a junior teammate with terminal access, not a wish-granting machine. A good prompt includes the goal, scope, acceptance criteria, context, and workflow:

Goal: Add a status filter to the order list.
Scope: Only change files under apps/web/src/orders. Do not modify the database schema.
Acceptance: npm test -- orders passes and existing styling is preserved.
Context: The status enum is in packages/shared/order.ts.
Workflow: Read the code and propose a plan first. Wait for my approval before editing.

Use Ask mode first when you are unfamiliar with a codebase. Move to Agent when you know the scope and want Cursor to edit files, run commands, and fix errors. Because Agent can take action, your prompt should also define what it must not touch.

A 30-minute workflow for a real code change

Use this workflow for your first serious Cursor session:

| Step | You do | Cursor does | Acceptance check |
| --- | --- | --- | --- |
| 1 | Open the repo and check indexing | Reads project context | It can name the entry points and test commands |
| 2 | Ask about the relevant module | Finds files and call paths | You confirm the scope is correct |
| 3 | Provide a small task prompt | Proposes a plan | Plan matches your acceptance criteria |
| 4 | Let Agent execute | Edits files and runs commands | Minimal tests pass |
| 5 | Review the diff | Explains changes and risks | No out-of-scope edits |

If the result is poor, avoid piling vague follow-up prompts into the same thread. Narrow the task, add missing acceptance criteria, or run a second attempt in an isolated worktree and compare the diff.
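The isolated second attempt can be sketched with plain git worktrees. The throwaway demo repo below is only for illustration; in practice you would run the worktree commands in your real project:

```shell
# Demo: keep a second agent attempt isolated in its own worktree.
set -e
repo="$(mktemp -d)/demo"
git init -q "$repo"
git -C "$repo" config user.email "demo@example.com"
git -C "$repo" config user.name "demo"
git -C "$repo" commit --allow-empty -qm "baseline"

# The second attempt lives in a sibling directory on its own branch:
git -C "$repo" worktree add "$repo-attempt2" -b cursor-attempt-2

# Let the agent edit files under "$repo-attempt2", then compare:
git -C "$repo-attempt2" diff HEAD --stat
git -C "$repo" worktree list
```

When you have picked a winner, `git worktree remove` cleans up the extra checkout, and the losing branch can simply be deleted.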

Use Cursor Rules or AGENTS.md for durable instructions

Cursor Rules turn project conventions into reusable instructions. Cursor’s docs say Project Rules live in .cursor/rules and can be version-controlled. For simpler setups, a root AGENTS.md file can define plain-English project instructions.

Example rule:

---
description: React component rules
globs: apps/web/**/*.tsx
alwaysApply: false
---

- Use TypeScript function components for new UI.
- Reuse existing Tailwind tokens before adding styles.
- Check mobile layout when changing UI.
- Do not add a new state-management library unless explicitly requested.

Rules are especially useful for code style, folder boundaries, test commands, security constraints, and “do not do this” instructions. They reduce repeated prompting and help Cursor behave more consistently across sessions.
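For the simpler AGENTS.md route, a starting point could look like the sketch below. The contents are illustrative; write down whatever conventions your team actually enforces:

```
# AGENTS.md

- Run tests with `npm test` before finishing a task.
- New UI code uses TypeScript function components under apps/web/src.
- Never edit generated files or the database schema without explicit approval.
- Propose a plan and wait for approval before multi-file edits.
```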

Pricing, privacy, and team adoption notes

As of April 30, 2026, Cursor’s official pricing page lists Hobby as free, Pro at $20/month, Pro+ at $60/month, Ultra at $200/month, Teams at $40/user/month, and Enterprise as custom. Pricing and usage policies can change, so recheck the official page before publishing or recommending a plan.

For teams, the adoption question is not only “Which plan is cheapest?” It is also whether Cursor is allowed to access the repository, what privacy settings are required, how reviews happen, and which tasks are safe for agents. Start with low-risk repos, document your rules, and measure whether Cursor shortens verified development cycles rather than just generating more code.

Checklist and next steps from POPMARS

Want practical AI coding workflows instead of hype? Subscribe to POPMARS for hands-on guides to Cursor, Windsurf, Claude Code, and open-source agent stacks. Planned next reads include a Cursor vs Windsurf vs Claude Code comparison, a Cursor Rules template, and an AI coding agent review checklist.

Cursor is useful when it compresses the loop from “understand the repo” to “make a verified change.” The winning workflow is not blind automation. It is clear task design, constrained execution, and careful review.

Sources and freshness notes

| Source | Use in article | Freshness / risk note |
| --- | --- | --- |
| Cursor Composer 2 official blog | Composer 2 availability, token pricing, vendor-reported benchmark table, and model context. | Vendor source; benchmark claims are framed as Cursor-reported, not independent proof. |
| Cursor 3.0 changelog and Chinese changelog | Agents Window, parallel agents, worktrees, cloud / remote SSH, /best-of-n, and Chinese terminology. | Official product source; checked again on 2026-04-30. |
| Cursor Codebase Indexing docs | Embeddings-based indexing, automatic / incremental indexing, settings path, and ignore files. | Official docs; exact UI labels can change. |
| Cursor Rules docs | .cursor/rules, User Rules, Project Rules, AGENTS.md, and durable instruction patterns. | Official docs; note details can change as Cursor evolves. |
| Cursor Chat / Agent overview | Agent capability framing: coding tasks, terminal commands, and edits. | Official docs; used for capability boundaries, not success guarantees. |
| Cursor pricing page | Hobby, Pro, Pro+, Ultra, Teams, and Enterprise pricing snapshot. | Pricing can change; rechecked on 2026-04-30. |
| Composer 2 Technical Report | Public paper record for Composer 2 training and benchmark context. | Authored by Cursor researchers; not neutral user evidence. |
| Terminal-Bench 2.0 Leaderboard | Context for Terminal-Bench as a terminal-agent benchmark. | Independent benchmark site; leaderboard values can update. |
| SWE-bench Multilingual | Context for multilingual software-engineering benchmark claims. | Benchmark page can update; do not overstate ranking claims. |