AI-Native Development Workflow
Complete the value stream: Transform backlog items into deployed, monitored production code using AI assistance. From spec generation to progressive deployment, learn how to ship faster while maintaining quality.
The Gap in Traditional Workflows: Most teams stop their value stream mapping at the backlog. But a story in the backlog is inventory, not value. Value is only delivered when working code reaches users in production.
This workflow completes the value stream by showing how to use AI tools to ship features faster while maintaining quality. With AI assistance, you can move faster at every stage: writing specs, implementing code, generating tests, reviewing PRs, and rolling out safely.
🎯 Value Stream Completion
This workflow closes the loop: Research → Backlog → Code → Production → Analytics → Research. By measuring production impact and feeding it back to analytics, you create a continuous improvement cycle.
Convert backlog stories into detailed technical specifications using AI
Take a user story from your backlog and use AI to generate a detailed technical specification including: API contracts, data models, component structure, edge cases, and acceptance criteria.
I have this user story from our backlog: [PASTE YOUR USER STORY HERE]

Context about our system:
- Tech stack: [e.g., Next.js, PostgreSQL, Prisma]
- Architecture: [e.g., React components, API routes, database layer]
- Related features: [Any existing features this touches]

Please generate a detailed technical specification including:
1. **Feature Summary** (2-3 sentences)
2. **API Contract** (endpoints, request/response schemas)
3. **Data Model** (database schema changes, if any)
4. **Component Structure** (React components, props, state)
5. **Implementation Steps** (ordered list of what to build)
6. **Edge Cases** (what could go wrong, error handling)
7. **Acceptance Criteria** (how we know it's done)
8. **Testing Strategy** (unit, integration, E2E tests needed)

Be specific with types, naming conventions, and make sure everything is implementable.
User Story: “As a user, I want to export my dashboard data to CSV so I can analyze it in Excel”
AI-Generated Spec:
**Feature Summary**
Add a "Export to CSV" button to all dashboard views that generates a downloadable CSV file containing the current dashboard data with proper formatting and column headers.
**API Contract**
POST /api/dashboards/{dashboardId}/export
Request: { dashboardId: string, filters: object }
Response: { downloadUrl: string, expiresAt: timestamp }
**Data Model**
No schema changes needed. Uses existing dashboard data.
**Component Structure**
- ExportButton component (props: dashboardId, currentFilters)
- useExportDashboard hook (handles API call + download)
- Toast notification for success/error states
**Implementation Steps**
1. Create ExportButton.tsx component with loading states
2. Add useExportDashboard hook with API call logic
3. Implement /api/dashboards/[id]/export route handler
4. Add CSV generation utility (convert JSON to CSV)
5. Integrate button into DashboardHeader component
6. Add tests for CSV generation logic
**Edge Cases**
- Empty dashboard (show "No data to export")
- Large datasets (stream CSV, don't load all in memory)
- Special characters in data (proper CSV escaping)
- Rate limiting (max 5 exports per minute)
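The rate-limiting edge case above could be enforced with a small sliding-window limiter. A sketch only: the class and method names are assumptions, and a production system would typically use a shared store such as Redis rather than in-process memory.

```typescript
// Sliding-window rate limiter for the "max 5 exports per minute" rule.
// Timestamps older than the window are dropped on each check, so memory
// per key stays bounded by the limit.
class RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit = 5, private windowMs = 60_000) {}

  // Returns true if the request is allowed; false means respond with 429.
  allow(key: string, now: number = Date.now()): boolean {
    const recent = (this.hits.get(key) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

The export route handler would call `allow(userId)` before starting CSV generation.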
**Acceptance Criteria**
✓ Button appears on all dashboard views
✓ Click generates CSV with correct data
✓ CSV has proper headers and formatting
✓ Toast shows success/error messages
✓ Works with filtered dashboard data
✓ Large datasets don't crash browser
**Testing Strategy**
- Unit: CSV generation utility with edge cases
- Integration: API route with mocked dashboard data
- E2E: Full export flow in Cypress/Playwright
💡 Pro Tip: Use Codebase Context
Use Cursor or Cody to generate specs with your actual codebase context. They can reference existing components, API patterns, and database schemas to create more accurate specs.
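Step 4 of the spec above calls for a JSON-to-CSV utility. A minimal sketch, assuming the name generateCSV and RFC 4180-style quoting (fields containing commas, quotes, or newlines are wrapped in double quotes, with embedded quotes doubled):

```typescript
type Row = Record<string, unknown>;

// Quote a field only when it contains a comma, quote, or newline.
function escapeField(value: unknown): string {
  const s = String(value ?? '');
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// Column order comes from the keys of the first row; an empty input
// produces an empty string, matching the "empty dashboard" edge case.
export function generateCSV(rows: Row[]): string {
  if (rows.length === 0) return '';
  const headers = Object.keys(rows[0]);
  const lines = [
    headers.map(escapeField).join(','),
    ...rows.map((row) => headers.map((h) => escapeField(row[h])).join(',')),
  ];
  return lines.join('\n');
}
```

Note this builds the whole string in memory; the "large datasets" edge case would need a streaming variant.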
Write code 3-5x faster using AI pair programming and code generation
- Best for: multi-file edits, codebase chat, complex refactoring ($20/month)
- Best for: line-by-line suggestions, GitHub users, PR summaries ($10/month)
- Best for: autonomous tasks, debugging, terminal operations (API usage pricing)
Start with Spec
Paste your AI-generated spec into a comment or use it as context in Cursor chat
Generate Scaffolding
Use AI to create files, components, API routes, database schemas
Implement Logic
Write comments describing what you need, let AI generate the implementation
Review & Refine
Don't blindly accept AI suggestions. Review, test, and refine the generated code
Handle Edge Cases
Use AI to implement error handling, loading states, and edge cases from your spec
Cursor Chat Conversation:
You:
I need to implement the CSV export feature. Here's the spec: [paste spec]. Start by creating the ExportButton component in components/dashboard/ExportButton.tsx
Cursor AI:
I'll create the ExportButton component with TypeScript, loading states, and error handling. Here's the implementation...
// components/dashboard/ExportButton.tsx
import { useExportDashboard } from '@/hooks/useExportDashboard';

interface ExportButtonProps {
  dashboardId: string;
  currentFilters: object;
}

export function ExportButton({ dashboardId, currentFilters }: ExportButtonProps) {
  const { exportDashboard, isExporting } = useExportDashboard();

  const handleExport = async () => {
    try {
      await exportDashboard(dashboardId, currentFilters);
    } catch (error) {
      console.error('Export failed:', error);
    }
  };

  return (
    <button
      onClick={handleExport}
      disabled={isExporting}
      className="px-4 py-2 bg-green-600 text-white rounded..."
    >
      {isExporting ? 'Exporting...' : 'Export to CSV'}
    </button>
  );
}
You:
Now create the useExportDashboard hook in hooks/useExportDashboard.ts
Cursor AI:
[Generates the custom hook with API call, error handling, and download logic...]
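The framework-agnostic core of that hook might look like the sketch below. The endpoint and response shape come from the spec's API contract; the function name and the injected `fetchImpl` parameter are assumptions made so the logic can be unit-tested without a browser.

```typescript
// Response shape from the spec: POST /api/dashboards/{id}/export
interface ExportResult {
  downloadUrl: string;
  expiresAt: number;
}

// Minimal shape of fetch that this logic needs.
type FetchLike = (url: string, init?: object) => Promise<{
  ok: boolean;
  status: number;
  json(): Promise<any>;
}>;

// Requests an export and returns the signed download URL. The React hook
// would wrap this with loading state and trigger the browser download.
async function requestExport(
  dashboardId: string,
  filters: object,
  fetchImpl: FetchLike = (globalThis as any).fetch
): Promise<ExportResult> {
  const res = await fetchImpl(`/api/dashboards/${encodeURIComponent(dashboardId)}/export`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ dashboardId, filters }),
  });
  if (!res.ok) throw new Error(`Export failed with status ${res.status}`);
  return (await res.json()) as ExportResult;
}
```

Injecting `fetchImpl` keeps the hook's logic testable with a mocked transport, which pays off in the testing step below.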
⚠️ Warning: AI is Not Always Right
AI can generate code quickly, but it's not perfect. Always review the generated code, run your tests, and verify the edge cases from your spec before merging.
Achieve better test coverage faster with AI-generated unit, integration, and E2E tests
- Specialized in test generation: analyzes your code to suggest test cases and edge cases, and generates full test suites (free for individuals)
- Can generate tests inline: type a comment describing the test and it will complete it ($10/month)
Test individual functions and components in isolation
Example: Testing CSV generation utility
describe('generateCSV', () => {
  it('should generate CSV with headers', () => {
    const data = [{ name: 'John', age: 30 }];
    const csv = generateCSV(data);
    expect(csv).toContain('name,age');
    expect(csv).toContain('John,30');
  });

  it('should escape special characters', () => {
    const data = [{ name: "O'Brien", note: 'Hello, world' }];
    const csv = generateCSV(data);
    expect(csv).toContain('"Hello, world"'); // comma-containing fields are quoted
    expect(csv).toContain("O'Brien"); // apostrophes need no escaping
  });

  it('should handle empty data', () => {
    const csv = generateCSV([]);
    expect(csv).toBe('');
  });
});
Test API endpoints with mocked dependencies
Example: Testing export API route
describe('POST /api/dashboards/[id]/export', () => {
  it('should return download URL for valid dashboard', async () => {
    const response = await request(app)
      .post('/api/dashboards/123/export')
      .send({ filters: {} })
      .expect(200);
    expect(response.body).toHaveProperty('downloadUrl');
    expect(response.body).toHaveProperty('expiresAt');
  });

  it('should return 404 for non-existent dashboard', async () => {
    await request(app)
      .post('/api/dashboards/999/export')
      .send({ filters: {} })
      .expect(404);
  });

  it('should handle rate limiting', async () => {
    // Make 6 requests quickly
    for (let i = 0; i < 6; i++) {
      const response = await request(app)
        .post('/api/dashboards/123/export')
        .send({ filters: {} });
      if (i < 5) expect(response.status).toBe(200);
      else expect(response.status).toBe(429); // Too many requests
    }
  });
});
Test the full user flow in the browser
Example: E2E export flow
test('user can export dashboard to CSV', async ({ page }) => {
  // Login and navigate to dashboard
  await page.goto('/dashboard/sales');

  // Click export button
  await page.click('button:has-text("Export to CSV")');

  // Wait for download
  const download = await page.waitForEvent('download');
  const path = await download.path();

  // Verify CSV content
  const content = await fs.readFile(path, 'utf-8');
  expect(content).toContain('Product,Revenue,Date');
  expect(content.split('\n').length).toBeGreaterThan(1);

  // Verify success toast
  await expect(page.locator('.toast-success')).toContainText('Export complete');
});
Open your code file in VSCode/JetBrains
Select the function or component you want to test
Right-click → Codium AI → “Generate Tests”
Codium analyzes your code and suggests test cases
Review suggested test cases
Codium will show: happy path, edge cases, error scenarios
Accept and customize
Codium generates the test code. Review and modify as needed
Run tests and iterate
Fix any failing tests and add additional cases if needed
💡 Pro Tip: TDD with AI
Use AI to generate tests BEFORE writing implementation. Paste your spec into Codium AI and ask it to generate test cases. This gives you a test suite to validate against as you build.
Get automated, consistent code reviews and quality gates before merge
- Automated PR reviews with inline comments, security checks, and learning from your team's patterns (from $12/user/month)
- Built into GitHub; provides PR summaries and code suggestions during review ($19/user/month on the Business plan)
Install CodeRabbit GitHub App
Visit coderabbit.ai and connect your GitHub repo
Configure review rules
Add .coderabbit.yaml to your repo with custom rules
Open a PR and wait for review
CodeRabbit will automatically review within 1-2 minutes
🤖 CodeRabbit Review
Reviewed 5 files with 247 additions and 23 deletions
📝 Summary
This PR implements CSV export functionality for dashboards. The implementation looks solid overall. I've identified a few potential improvements around error handling and performance for large datasets.
🔴 Security Issue - HIGH
api/export.ts:45
User input is directly interpolated into file path without sanitization. This could allow path traversal attacks.
- const filePath = `/tmp/${dashboardId}.csv`;
+ const filePath = `/tmp/${sanitizePath(dashboardId)}.csv`;
⚠️ Performance - MEDIUM
utils/generateCSV.ts:12
Loading all rows into memory could cause issues with large datasets (>10k rows). Consider streaming the CSV generation.
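One way to act on that suggestion is to emit CSV lines lazily with a generator, so memory use stays constant regardless of row count. A sketch only; the function name is an assumption:

```typescript
type Row = Record<string, unknown>;

// Quote a field only when it contains a comma, quote, or newline.
function escapeField(value: unknown): string {
  const s = String(value ?? '');
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// Yields the header line, then one CSV line per row, consuming the
// input lazily. A route handler can pipe these chunks straight into
// the HTTP response instead of building one giant string.
export function* streamCSVLines(rows: Iterable<Row>): Generator<string> {
  let headers: string[] | null = null;
  for (const row of rows) {
    if (headers === null) {
      headers = Object.keys(row);
      yield headers.map(escapeField).join(',');
    }
    yield headers.map((h) => escapeField(row[h])).join(',');
  }
}
```

Because the input is an `Iterable`, the same code works over an array, a paginated database cursor, or any other lazy source.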
✅ Great Work
Set up automated quality gates in your GitHub Actions workflow:
.github/workflows/ci.yml
name: CI/CD Pipeline

on:
  pull_request:
  push:
    branches: [main]

jobs:
  quality-gates:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Run type check
        run: npm run type-check
      - name: Run unit tests
        run: npm run test:unit
      - name: Run integration tests
        run: npm run test:integration
      - name: Build application
        run: npm run build
      - name: Run E2E tests
        run: npm run test:e2e
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage/coverage-final.json

  deploy:
    needs: quality-gates
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        run: npm run deploy
💡 Pro Tip: Learn from AI Reviews
CodeRabbit learns from your team's review patterns. When you approve or reject its suggestions, it adapts to your team's standards. Over time, reviews become more accurate and aligned with your coding style.
Roll out features gradually with feature flags and monitoring
Instead of deploying to 100% of users at once, gradually roll out features to catch issues early and minimize blast radius. This is how companies like Facebook, Netflix, and Stripe deploy safely.
Deploy to your team first. Catch obvious bugs internally.
Roll out to early adopters. Monitor metrics closely.
If metrics look good, deploy to everyone.
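Staged percentages like these rely on each user getting a stable in-or-out decision that only flips to "in" as the percentage grows. A toy sketch of that bucketing idea (this is NOT LaunchDarkly's actual algorithm; the hash and names are illustrative assumptions):

```typescript
// Hash a user key into a stable bucket from 0 to 99.
function bucket(userKey: string): number {
  let h = 0;
  for (let i = 0; i < userKey.length; i++) {
    h = (h * 31 + userKey.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return h % 100;
}

// A user is in the rollout when their bucket falls below the percentage.
// Raising the percentage only ever adds users; nobody flips back out.
function inRollout(userKey: string, percentage: number): boolean {
  return bucket(userKey) < percentage;
}
```

The monotonic property is what makes gradual rollouts safe: moving from 5% to 25% expands the exposed cohort without churning existing users in and out of the feature.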
Install LaunchDarkly SDK
npm install launchdarkly-react-client-sdk
Wrap your app with LaunchDarkly Provider
import { withLDProvider } from 'launchdarkly-react-client-sdk';

function App() {
  return <YourApp />;
}

export default withLDProvider({
  clientSideID: 'your-client-id',
  user: {
    key: user.id,
    email: user.email,
    custom: { plan: user.plan }
  }
})(App);
Use feature flags in your components
import { useFlags } from 'launchdarkly-react-client-sdk';

function DashboardHeader() {
  const { csvExport } = useFlags();

  return (
    <div>
      <h1>Dashboard</h1>
      {csvExport && <ExportButton />}
    </div>
  );
}
Configure rollout in LaunchDarkly dashboard
Create targeting rules for a gradual rollout, mirroring the stages above: your internal team first, then a small percentage of early adopters, then everyone.
Watch these metrics as you roll out your feature:
Monitor error rates for new feature vs. baseline
🚨 If error rate > 1%, pause rollout
Watch page load time, API latency, database query time
⚠️ If p95 latency > 2x baseline, investigate
Track how many users are actually using the new feature
✓ Target: >10% of exposed users try the feature
Measure impact on key metrics (retention, engagement, revenue)
📊 Compare cohort with feature vs. without
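The first two guardrails above are mechanical enough to automate. A hedged sketch of that check as a pure function; the thresholds (1% error rate, 2x baseline p95 latency) come from the text, while the names and types are assumptions:

```typescript
interface RolloutMetrics {
  errorRate: number;          // fraction, e.g. 0.012 means 1.2%
  p95LatencyMs: number;       // current p95 latency for the feature cohort
  baselineP95LatencyMs: number;
}

type RolloutAction = 'continue' | 'pause' | 'investigate';

// Evaluate the guardrails in severity order: a high error rate pauses
// the rollout outright; a latency regression flags it for investigation.
function evaluateRollout(m: RolloutMetrics): RolloutAction {
  if (m.errorRate > 0.01) return 'pause';
  if (m.p95LatencyMs > 2 * m.baselineP95LatencyMs) return 'investigate';
  return 'continue';
}
```

Wired into a monitoring job, a 'pause' result could flip the feature flag off automatically instead of waiting for a human.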
Have a plan to roll back if things go wrong:
🚨 When to Roll Back Immediately
Rollback is instant with feature flags: toggle the flag off in the dashboard and the feature disappears for all users, with no redeploy required.
💡 Pro Tip: Integrate with Experimentation
Use feature flags as your A/B testing infrastructure. LaunchDarkly and Split.io can automatically measure the impact of features on key metrics, turning every rollout into an experiment.
Feed production data back to analytics to complete the value stream
You've shipped the feature to production. But the value stream isn't complete until you measure its impact and feed that learning back into your decision-making process.
🔄 The Complete Value Stream Loop
Track how many users are using your new feature
// Track CSV export usage in your analytics
amplitude.track('CSV_Export_Clicked', {
  dashboard_id: dashboardId,
  dashboard_type: 'sales',
  user_plan: user.plan,
  row_count: data.length
});
Compare users with the feature vs. without
In your Product Analytics dashboard, compare retention, engagement, and key conversion metrics for users exposed to the feature against those who weren't.
Talk to users who are (and aren't) using the feature
Follow-up user interviews help you understand why users adopt the feature, or don't.
Use insights from these interviews to inform your next backlog items
Update the original backlog item with outcomes
Example: Linear/Jira comment on shipped story
✅ Shipped to 100% on Dec 15, 2025
Impact (30 days post-launch):
Next steps: Created follow-up story for Excel export
This workflow completes your value stream by measuring cycle time from idea to value delivery:
❌ Before AI Tools
✅ With AI Tools
🚀 65% faster with AI
🎯 Value Stream Success
You've completed the full value stream! You can now measure cycle time from backlog item to production impact, end to end.