A practical guide to introducing an AI-first workflow in production frontends
When our product at SiftHub went live, AI wasn’t deeply embedded in our frontend workflow. We had a production-grade React codebase, established patterns, design tokens, CI/CD pipelines, and real customers. Stability mattered more than experimentation.
Then AI tooling matured.
Instead of rebuilding everything around AI, we adopted what we now call an AI-first frontend approach on top of a live production system. This post is a technical breakdown of how we use AI today, what works, what doesn’t, and how you can introduce AI into an already shipped product without disrupting velocity or quality.
No fluff. Just practical workflow.
Why AI-first doesn’t mean AI-dependent
Let’s clarify something upfront.
An AI-first approach does not mean:
- Replacing engineering judgment
- Letting AI architect your system
- Blindly accepting generated code
- Reducing ownership
It means: AI becomes the first collaborator in implementation, not the final authority.
As frontend engineers, our responsibilities haven’t changed. We still own architecture, DX, performance, accessibility, maintainability, and system coherence. AI simply compresses the time between idea and usable code.
The core setup: Indexed codebase + context-aware AI
The real shift happened when we started using Cursor IDE.
The key differentiator is this: The entire codebase is indexed.
That changes everything.
Instead of prompting in isolation, the AI:
- Knows our folder structure
- Understands existing hooks and utilities
- Sees design tokens and shared components
- Understands our state management patterns
- Reads previous feature implementations
This transforms prompts from generic to system-aware.
Without indexed context, AI is a toy.
With indexed context, AI becomes a serious engineering accelerator.
Our feature development workflow with AI
Here’s how we typically build a new frontend feature today.
Step 1: Provide the Figma context
The first input we provide to AI is the Figma design link along with the behavioral expectations of the UI.
At this stage we describe:
- Component behavior
- Loading states
- Error states
- Empty states
- Responsiveness rules
- Accessibility expectations
- Interaction patterns
AI cannot reliably infer intent purely from the visual design, so we describe what the interface should do, not just what it looks like.
For example, instead of saying: “Build this UI from the Figma,”
we give context like: “Build a filter panel matching this Figma. It should support keyboard navigation, persist selected filters in URL params, and use our existing Drawer interaction pattern.”
Clear behavioral instructions reduce ambiguity and improve the quality of the generated implementation.
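As an illustration, a behavior like “persist selected filters in URL params” can be pinned down with two small framework-free helpers. This is a sketch, not our actual implementation, and the `Filters` shape is an assumption chosen for illustration:

```typescript
// Assumed filter shape for this sketch: each filter key maps to selected values.
type Filters = Record<string, string[]>;

// Serialize filters into a query string, repeating a key for multiple values.
function filtersToSearch(filters: Filters): string {
  const params = new URLSearchParams();
  for (const [key, values] of Object.entries(filters)) {
    for (const value of values) params.append(key, value);
  }
  return params.toString();
}

// Parse a query string back into the same filter shape.
function searchToFilters(search: string): Filters {
  const filters: Filters = {};
  for (const [key, value] of new URLSearchParams(search)) {
    (filters[key] ??= []).push(value);
  }
  return filters;
}
```

Spelling out behavior at this level of precision gives the AI an unambiguous target, instead of leaving it to guess a serialization scheme.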
Experimenting with Figma Code Connect
One area we are currently experimenting with is Figma Code Connect.
The idea behind Code Connect is to pair Figma components directly with components in the codebase. When a designer uses a component in Figma, it is mapped to the equivalent component in the frontend component library.
This has a significant implication for AI-assisted development.
Instead of explicitly instructing the AI to use specific components from the design system, the mapping already exists between:
Figma Component → Code Component
For example:
Figma Button → <Button />
Figma Modal → <Modal />
Figma Select → <Select />
If this mapping becomes reliable enough, the AI workflow simplifies considerably because we no longer need to explicitly instruct:
- which component library to use
- which component abstraction to follow
- how the design tokens should map
The design system becomes self-describing through the design tooling.
We are still experimenting with this approach, but if it works well at scale, it could eliminate an entire class of prompting instructions currently required when generating UI code with AI.
For now, however, we still provide explicit instructions alongside the Figma link to ensure the generated code aligns with the conventions of the codebase.
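As an illustration, a Code Connect mapping file pairs a code component with its Figma counterpart roughly like this. The file URL and property names below are placeholders, and the exact import path and API may differ by version, so treat this as a sketch and check Figma’s Code Connect docs:

```typescript
// Button.figma.tsx — illustrative Code Connect mapping (placeholder URL and props)
import figma from "@figma/code-connect";
import { Button } from "./Button";

figma.connect(Button, "https://www.figma.com/file/<FILE_KEY>?node-id=<NODE_ID>", {
  props: {
    // Map Figma component properties to code props.
    label: figma.string("Label"),
    variant: figma.enum("Variant", { Primary: "primary", Secondary: "secondary" }),
  },
  // The example Figma shows developers (and, potentially, AI) for this component.
  example: ({ label, variant }) => <Button variant={variant}>{label}</Button>,
});
```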
Step 2: Provide frontend rules and conventions
This is critical.
AI needs to follow your system rules. We explicitly mention:
- Naming conventions
- Folder structure expectations
- State management approach (local vs global)
- Data fetching abstraction
- Error handling pattern
- Form validation library
- Styling approach (CSS modules, Tailwind, styled-components, etc.)
- Testing expectations
Example:
Follow the pattern used in FeatureXContainer. Keep UI components dumb. Place data hooks inside /hooks. Avoid inline styles. Use our useQueryWrapper abstraction.
If you don’t define constraints, AI will invent its own.
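The post’s actual `useQueryWrapper` isn’t shown here, but conventions like “data fetching abstraction” and “error handling pattern” are easiest for AI to follow when they reduce to something this concrete. A minimal, framework-free sketch of the kind of error normalization such a wrapper might enforce:

```typescript
// Normalized result shape every feature is expected to consume.
type QueryResult<T> =
  | { status: "success"; data: T }
  | { status: "error"; message: string };

// Run any fetcher and collapse thrown errors into the normalized shape,
// so UI code never branches on raw exceptions.
async function runQuery<T>(fetcher: () => Promise<T>): Promise<QueryResult<T>> {
  try {
    return { status: "success", data: await fetcher() };
  } catch (err) {
    const message = err instanceof Error ? err.message : "Unknown error";
    return { status: "error", message };
  }
}
```

With a convention like this named in the prompt, generated components tend to handle failures the same way the rest of the codebase does.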
Step 3: Reference similar feature code
This step dramatically improves output quality.
We say:
This feature is similar to BulkUploadFeature. Use it as reference for structure and API interaction.
Because the codebase is indexed, the AI reads the existing feature and aligns accordingly.
This reduces:
- Architecture drift
- Style inconsistency
- Reinvention of utilities
- State duplication
AI becomes a pattern replicator.
Step 4: Define file & folder structure
Sometimes we let AI create structure. Sometimes we specify it manually.
Example:
/features/AdvancedFilters/
- AdvancedFiltersContainer.tsx
- AdvancedFiltersView.tsx
- useAdvancedFilters.ts
- types.ts
If the feature is complex, we prefer controlling the structure upfront. This reduces refactoring later.
AI is great at implementation, but structure is still an engineer’s job.
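The container/view/hook split above works because the feature’s logic can live outside the view layer entirely. As a sketch (these names are illustrative, not our real code), the state transitions a hypothetical `useAdvancedFilters` hook wraps can be a pure reducer:

```typescript
// State and actions for a simple multi-select filter feature.
type FilterState = { selected: string[] };

type FilterAction =
  | { type: "toggle"; id: string }
  | { type: "clear" };

// Pure reducer: trivially unit-testable, framework-free, and easy for AI
// to extend because every transition is explicit.
function filtersReducer(state: FilterState, action: FilterAction): FilterState {
  switch (action.type) {
    case "toggle":
      return state.selected.includes(action.id)
        ? { selected: state.selected.filter((id) => id !== action.id) }
        : { selected: [...state.selected, action.id] };
    case "clear":
      return { selected: [] };
  }
}
```

The hook then becomes a thin `useReducer` wrapper, and the view stays dumb, which is exactly the constraint we state in prompts.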
Step 5: Provide a mini PRD
For complex features, we give a lightweight PRD inside the prompt:
- Business goal
- User persona
- API contracts
- Performance constraints
- Edge cases
- Success criteria
Example:
This feature should allow enterprise users to configure role-based filters.
API latency may range from 500 ms to 2 s.
Must not block initial page render.
Filter state must be shareable via URL.
AI performs much better when business context is included.
What AI handles well
In my experience, AI excels at:
1. Boilerplate
- Component scaffolding
- Hook structures
- Types
- API wrappers
- Basic validation
2. Pattern replication
- Matching an existing feature style
- Reusing design tokens
- Following established architectural decisions
3. Incremental refactors
- Extracting reusable hooks
- Converting logic to memoized patterns
- Improving type definitions
4. Test skeletons
- Jest test scaffolding
- Mocking structure
- Basic coverage generation
This is where velocity improves significantly.
Where cleanup is always required
AI output is rarely production-ready.
Common issues we fix:
- Over-engineered abstractions
- Unnecessary re-renders
- Missing memoization
- Improper dependency arrays
- Edge-case blind spots
- Accessibility oversights
- Incorrect assumptions about API shape
- Slight deviations from naming standards
Handling complex features: multi-turn strategy
For complex features, a single prompt rarely works.
Instead, we use iterative refinement.
Turn 1: Generate structure and main flow
Turn 2: Improve state handling
Turn 3: Optimize performance
Turn 4: Add edge-case handling
Turn 5: Clean up naming and consistency
Think of it like pair programming with an infinitely patient junior engineer.
The trick is this: Each turn should narrow scope.
Bad: Improve this.
Good: Refactor this hook to prevent unnecessary re-renders when filters haven’t changed.
Precision drives quality.
What we were missing initially
When we first adopted this workflow, we underestimated a few things.
1. Architecture still requires intentionality
AI cannot define system boundaries.
If you don’t decide:
- Where state lives
- How data flows
- What is reusable vs feature-specific
- How to maintain scalability
AI will create accidental complexity.
2. You must define guardrails
Without guardrails:
- Naming drifts
- Folder structure mutates
- Shared components duplicate
- Styles diverge
Create a documented frontend playbook. Then enforce it in prompts.
3. Performance awareness is still human
AI does not inherently optimize for:
- Bundle size
- Tree shaking
- Suspense boundaries
- Lazy loading
- Rendering cost
You must ask for it.
How the industry is evolving
Across engineering teams, the pattern emerging is:
- AI for scaffolding
- Engineers for system design
- AI for iteration
- Engineers for final polish
The most effective teams are not replacing developers.
They are:
- Reducing cognitive load
- Compressing repetitive work
- Increasing experimentation velocity
- Shortening feature delivery cycles
AI-first frontend is becoming less about writing code and more about: Designing systems that AI can implement correctly.
Practical advice if your product is already in production
If you’re working on a live B2B SaaS product, here’s how to introduce AI safely.
1. Start with non-critical features
Don’t begin with core authentication or billing logic.
Start with:
- Dashboard enhancements
- Internal tools
- UI-only improvements
Build confidence.
2. Enforce code reviews strictly
AI-generated code must go through the same PR standards.
No shortcuts.
If anything, be stricter.
3. Avoid letting AI decide architecture
Use AI to:
- Fill implementation gaps
- Translate intent into code
- Speed up refactoring
But architecture remains human-led.
4. Invest in prompt quality
The better your prompt:
- The less cleanup
- The fewer refactors
- The higher architectural alignment
Prompt writing is now an engineering skill.
The real shift: From writing code to designing intent
Before AI, a frontend engineer’s workflow was:
Think → Code → Refactor → Optimize
Now it is:
Think → Specify → Review → Refine
AI compresses the “Code” step.
The bottleneck moves to:
- Clarity of thought
- Architectural decisions
- System-level consistency
This is actually a positive shift for engineers.
Final thoughts
Adopting an AI-first frontend approach after your product is already in production is not risky if done intentionally.
Here’s the summary of our workflow:
- Use an indexed AI IDE
- Provide Figma context
- Define frontend rules clearly
- Reference similar features
- Specify folder structure
- Include business context (mini PRD)
- Use multi-turn refinement
- Enforce strict review
- Keep architecture human-owned
AI will handle boilerplate. You handle systems.
The teams that win won’t be the ones writing the most code. They’ll be the ones designing the clearest intent.
If you’re already shipping a production SaaS product, the question isn’t whether to adopt AI. It’s whether you’ll do it deliberately — or let it shape your codebase accidentally.