AI Automation · ≈ 24 min read

Claude Code for GTM Teams: Governed AI Content Production with n8n Automation

Build a governed AI content system with brand compliance checks, approval workflows, and deterministic outputs across all GTM channels.


yfxmarketer

January 15, 2026

Most AI content tools produce inconsistent outputs with no audit trail. Marketing teams waste hours fixing brand violations, chasing approvals over Slack, and manually checking SEO requirements. The result: 40% of AI-generated content requires significant rework before publishing.

Claude Code with n8n solves this by building governance into the generation process. Every piece of content runs through validation checks before output. Approval workflows route content to the right stakeholders automatically. Brand compliance scores flag violations before they reach production. One marketing ops team reduced content rework from 40% to 8% after implementing this system.

TL;DR

Claude Code with n8n creates a governed content production system for GTM teams. Custom skills enforce brand voice, SEO requirements, and channel specifications with validation scores. n8n workflows route content through approval gates based on content type and risk level. Every output includes metadata for audit trails. The system integrates with your existing stack: GA4 for performance baselines, Semrush for keyword validation, HubSpot for publishing, Slack for approvals.

Key Takeaways

  • Governance skills validate every output against brand guidelines with pass/fail scores before delivery
  • Approval workflows in n8n route content to stakeholders based on content type, channel, and compliance score
  • Deterministic templates produce consistent outputs by constraining AI decisions to predefined options
  • Audit metadata tracks who requested content, which skills applied, validation scores, and approval status
  • Integration with GA4 and Semrush validates SEO decisions against actual performance data
  • Role-based access controls limit who creates, approves, and publishes different content types
  • Rollback workflows restore previous content versions when post-publish issues arise

What Does Implementation Actually Cost and How Long Does It Take?

Marketing teams need real numbers before committing resources. This section breaks down exact costs, time investment, and expected returns based on teams running this system in production.

Tool Costs (Monthly)

| Tool | Plan Needed | Monthly Cost | Purpose |
|------|-------------|--------------|---------|
| Claude Pro | Pro plan | $20 | Claude Code access with skills |
| n8n | Starter | $20 | Workflow automation (2,500 executions) |
| n8n | Pro | $50 | Higher volume (10,000 executions) |
| Slack | Free/Pro | $0-8/user | Approval notifications |
| GA4 | Free | $0 | Performance baselines |
| Semrush | Pro | $130 | Keyword validation |

Minimum viable setup: $40/month (Claude Pro + n8n Starter)

Full production setup: $200/month (Claude Pro + n8n Pro + Semrush Pro)

Implementation Timeline

| Phase | Tasks | Time Required | Team Members |
|-------|-------|---------------|--------------|
| Week 1 | Install Claude Code, configure n8n MCP, create project structure | 4 hours | 1 marketing ops |
| Week 2 | Build brand voice skill, create 2 validation skills | 6 hours | 1 marketing ops + 1 brand manager |
| Week 3 | Create deterministic templates for top 3 content types | 4 hours | 1 marketing ops |
| Week 4 | Build approval workflow in n8n, connect Slack | 6 hours | 1 marketing ops |
| Week 5 | Calibrate validation thresholds, test with real content | 4 hours | 1 marketing ops + 1 reviewer |
| Week 6 | Go live with pilot team, collect feedback | Ongoing | Full team |

Total implementation time: 24 hours over 6 weeks

Time to first governed content: Week 3 (basic validation running)

Expected ROI Timeline

| Metric | Baseline | Week 4 | Week 8 | Week 12 |
|--------|----------|--------|--------|---------|
| Content rework rate | 40% | 30% | 18% | 10% |
| Hours per content piece | 3.5 | 2.8 | 2.0 | 1.5 |
| Time to publish | 5 days | 4 days | 2.5 days | 1.5 days |
| Brand violations caught post-publish | 12% | 6% | 2% | <1% |

Break-even point: Week 6-8 for most teams (when time savings exceed setup investment)

Action item: Calculate your baseline metrics now. Track content rework rate, hours per piece, and time to publish for 2 weeks before implementation. This proves ROI to leadership.

What Should You Implement First for Quick Wins?

Start with high-impact, low-effort governance components. These three quick wins deliver measurable improvement within the first week.

Quick Win 1: Brand Banned Phrases Validator (2 Hours)

Create a simple validation that catches your top 10 brand violations. This single check eliminates 60% of common rework.

Create file /skills/validation/banned-phrases/SKILL.md:

---
name: Banned Phrases Check
description: Quick check for prohibited phrases before full validation.
triggers:
  - check phrases
  - banned words
  - quick validate
---

## Banned Phrases List

Instant rejection if found:

1. "In today's [anything]"
2. "It's important to note"
3. "Unlock the power"
4. "Game-changing"
5. "Best-in-class"
6. "Synergy"
7. "Leverage" (when not about actual leverage)
8. "Utilize" (use "use" instead)
9. "Revolutionize"
10. "Seamless" (without specific evidence)

## Check Process

1. Scan content for exact phrase matches
2. Scan for partial matches (e.g., "game-changer" catches "game-changing")
3. Return list of violations with line numbers
4. Suggest replacement for each violation

## Output

{
  "check": "banned_phrases",
  "status": "PASS|FAIL",
  "violations_found": [
    {
      "phrase": "[matched phrase]",
      "location": "[line or paragraph reference]",
      "suggested_replacement": "[alternative]"
    }
  ],
  "violation_count": [number]
}

FAIL if violation_count > 0

Impact: Catches 60% of brand rework issues. Takes 2 hours to implement.
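If you want to run the same scan outside Claude Code, for example in an n8n Code node, a minimal Python sketch looks like this. The phrase list and suggested replacements are illustrative placeholders, not your actual brand guidelines:

# Illustrative subset of the banned list; replace with your own guidelines.
BANNED = {
    "in today's": "name the specific situation instead",
    "it's important to note": "delete it and state the point directly",
    "game-chang": "describe the concrete outcome",  # stem catches "game-changing" and "game-changer"
    "utilize": "use 'use'",
    "best-in-class": "cite a specific proof point",
}

def check_banned_phrases(content: str) -> dict:
    violations = []
    for line_no, line in enumerate(content.splitlines(), start=1):
        lowered = line.lower()
        for phrase, suggestion in BANNED.items():
            if phrase in lowered:
                violations.append({
                    "phrase": phrase,
                    "location": f"line {line_no}",
                    "suggested_replacement": suggestion,
                })
    return {
        "check": "banned_phrases",
        "status": "FAIL" if violations else "PASS",
        "violations_found": violations,
        "violation_count": len(violations),
    }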

Quick Win 2: Paragraph Length Auto-Check (1 Hour)

AEO optimization requires paragraphs under 80 words. This check runs automatically and flags violations.

Add to any generation prompt:

After generating content, perform automatic paragraph check:

For each paragraph in the output:
1. Count words
2. If word count > 80: Flag as violation
3. If word count > 100: Split into two paragraphs automatically

Output paragraph report:
- Total paragraphs: [count]
- Paragraphs over 80 words: [count]
- Auto-split paragraphs: [count]
- Longest paragraph: [word count]

If any paragraph exceeds 80 words after auto-split, return content for manual review.

Impact: Ensures AEO compliance automatically. Zero manual checking required.
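A minimal Python version of the same check, assuming paragraphs are separated by blank lines:

def paragraph_report(content: str, limit: int = 80) -> dict:
    # Paragraphs are assumed to be separated by blank lines.
    paragraphs = [p.strip() for p in content.split("\n\n") if p.strip()]
    word_counts = [len(p.split()) for p in paragraphs]
    over_limit = [c for c in word_counts if c > limit]
    return {
        "total_paragraphs": len(paragraphs),
        "paragraphs_over_80_words": len(over_limit),
        "longest_paragraph": max(word_counts, default=0),
        "needs_manual_review": bool(over_limit),
    }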

Quick Win 3: Request Intake Form in Slack (3 Hours)

Replace ad-hoc content requests with structured Slack workflow. Captures required information upfront, reducing back-and-forth by 70%.

Create Slack Workflow with these fields:

  1. Content type (dropdown): Landing page, Email, Social post, Ad copy, Blog post
  2. Campaign name (text): Links to brief folder
  3. Primary keyword (text): Required for SEO validation
  4. Audience segment (dropdown): Awareness, Consideration, Decision, Expansion
  5. Conversion goal (dropdown): Email signup, Content download, Demo request, Purchase
  6. Deadline (date): Triggers priority routing
  7. Special requirements (text): Compliance needs, specific proof elements

Slack workflow posts to n8n webhook, which creates the request file and starts the pipeline.

Impact: Eliminates 70% of clarification messages. Structured input enables deterministic generation.
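The payload the Slack workflow hands to n8n simply mirrors the form fields above. A sketch of that handoff in Python, with a hypothetical webhook URL:

import requests

# Hypothetical webhook URL; use the production URL of your own n8n Webhook node.
N8N_WEBHOOK_URL = "https://n8n.example.com/webhook/content-request"

def submit_content_request(form: dict) -> None:
    # Forward a structured Slack form submission to the n8n intake webhook.
    payload = {
        "content_type": form["content_type"],         # e.g. "landing_page"
        "campaign_name": form["campaign_name"],
        "primary_keyword": form["primary_keyword"],
        "audience_segment": form["audience_segment"],
        "conversion_goal": form["conversion_goal"],
        "deadline": form["deadline"],                  # ISO date string
        "special_requirements": form.get("special_requirements", ""),
    }
    response = requests.post(N8N_WEBHOOK_URL, json=payload, timeout=10)
    response.raise_for_status()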

Action item: Implement Quick Win 1 (banned phrases validator) today. Test it against your last 5 published content pieces. Count how many would have been flagged.

What Are the Ready-to-Use Prompts for Common Marketing Tasks?

These prompts work immediately after basic setup. Copy, paste, customize the variables, and run. Each prompt includes governance checkpoints built in.

Product Launch Landing Page (Full Governed Prompt)

SYSTEM: You are a governed content generator for product launch landing pages.

<request_metadata>
Requester: {{REQUESTER_EMAIL}}
Campaign: {{CAMPAIGN_NAME}}
Request date: {{TODAY_DATE}}
Content type: landing_page
Risk level: medium
</request_metadata>

<product_details>
Product name: {{PRODUCT_NAME}}
Primary benefit: {{PRIMARY_BENEFIT}}
Secondary benefits:
1. {{BENEFIT_2}}
2. {{BENEFIT_3}}
3. {{BENEFIT_4}}
Target audience: {{AUDIENCE_DESCRIPTION}}
Pricing: {{PRICING_INFO}}
</product_details>

<seo_requirements>
Primary keyword: {{PRIMARY_KEYWORD}}
Secondary keywords: {{SECONDARY_KW_1}}, {{SECONDARY_KW_2}}, {{SECONDARY_KW_3}}
Target search volume: {{SEARCH_VOLUME}}
Competitor URLs to beat: {{COMPETITOR_1}}, {{COMPETITOR_2}}
</seo_requirements>

<constraints>
Headline pattern: Select from [Benefit-first, Problem-solution, Social-proof-led]
Tone: Select from [Confident-direct, Curious-engaging, Urgent-action]
Proof elements required: {{PROOF_ELEMENTS}}
Word count: 1000-1400 words
CTA: {{PRIMARY_CTA}}
</constraints>

GENERATION PROCESS:

Step 1 - Pre-generation validation:
- Verify primary keyword viability (volume > 100, difficulty < 70)
- Pull baseline CVR from similar landing pages
- Confirm all required fields populated
- Log: "PRE_VALIDATION | status | keyword_viability | baseline_cvr"

Step 2 - Generate content:
- Apply brand-voice skill
- Apply landing-page-format skill
- Generate with explicit constraint selections
- Log: "GENERATION | skills_applied | constraint_selections"

Step 3 - Post-generation validation:
- Run banned-phrases check
- Run paragraph-length check
- Run keyword-placement check
- Calculate brand compliance score
- Calculate SEO compliance score
- Log: "VALIDATION | brand_score | seo_score | violations"

Step 4 - Output with metadata:
- Content in markdown format
- Validation scores summary
- Constraint selections made
- Approval routing recommendation

OUTPUT FORMAT:

---
content_id: [generate UUID]
content_type: landing_page
campaign: {{CAMPAIGN_NAME}}
generated_at: [timestamp]
validation_scores:
  brand: [score]
  seo: [score]
  overall: [score]
approval_routing: [auto|brand_review|full_review]
---

[GENERATED CONTENT HERE]

---
## Validation Report
[Detailed validation output]

## Approval Recommendation
Based on scores, this content [requires/does not require] manual review.
Recommended approvers: [list based on scores]
---

NEVER:
- Skip validation steps
- Output content without metadata header
- Use banned phrases from brand guidelines
- Exceed 80 words per paragraph
- Generate without logging each step

Weekly Email Newsletter (Full Governed Prompt)

SYSTEM: You are a governed content generator for weekly email newsletters.

<request_metadata>
Requester: {{REQUESTER_EMAIL}}
Newsletter: {{NEWSLETTER_NAME}}
Send date: {{SEND_DATE}}
Audience segment: {{SEGMENT_NAME}}
List size: {{LIST_SIZE}}
</request_metadata>

<newsletter_structure>
Sections required:
1. Hero story (300-400 words): {{HERO_TOPIC}}
2. Quick hits (3 items, 50 words each): {{QUICK_HIT_TOPICS}}
3. Resource spotlight (100 words): {{RESOURCE_URL}}
4. CTA block: {{CTA_GOAL}}
</newsletter_structure>

<performance_baseline>
Previous 4 newsletters average:
- Open rate: {{AVG_OPEN_RATE}}%
- Click rate: {{AVG_CLICK_RATE}}%
- Top performing subject line pattern: {{TOP_PATTERN}}
</performance_baseline>

<constraints>
Subject line: Generate 3 options following {{TOP_PATTERN}} pattern
Preview text: 40-90 characters, complements subject line
Tone: {{NEWSLETTER_TONE}}
Personalization tokens: {{AVAILABLE_TOKENS}}
</constraints>

GENERATION PROCESS:

Step 1 - Baseline check:
- Compare requested send date to optimal send times
- Verify audience segment exists and size is accurate
- Log baseline metrics for post-send comparison

Step 2 - Subject line generation:
- Generate 3 subject lines following top performing pattern
- Each subject line 30-50 characters
- Include personalization token in at least one option
- Score each against historical open rate predictors

Step 3 - Content generation:
- Hero story: Hook in first sentence, value by paragraph 2, CTA by paragraph 4
- Quick hits: One stat or insight per item, link in each
- Resource spotlight: Benefit-first description, clear next step
- CTA block: Single focused action, urgency element if appropriate

Step 4 - Validation:
- Spam word check (avoid: free, guarantee, act now, limited time)
- Link validation (all URLs properly formatted)
- Personalization token validation (tokens exist in system)
- Mobile preview check (content readable at 400px width)

OUTPUT FORMAT:

---
content_id: [UUID]
content_type: email_newsletter
newsletter: {{NEWSLETTER_NAME}}
send_date: {{SEND_DATE}}
segment: {{SEGMENT_NAME}}
---

## Subject Line Options

1. [Subject line 1] - Predicted open rate: [X]%
2. [Subject line 2] - Predicted open rate: [X]%
3. [Subject line 3] - Predicted open rate: [X]%

## Preview Text Options

1. [Preview text for subject 1]
2. [Preview text for subject 2]
3. [Preview text for subject 3]

## Newsletter Content

[HERO STORY]

[QUICK HITS]

[RESOURCE SPOTLIGHT]

[CTA BLOCK]

---
## Validation Report
- Spam score: [X]/100 (threshold: <20)
- Links validated: [X]/[X]
- Personalization tokens: [valid/invalid]
- Mobile readability: [pass/fail]
---

LinkedIn Thought Leadership Post (Full Governed Prompt)

SYSTEM: You are a governed content generator for LinkedIn thought leadership posts.

<request_metadata>
Author: {{AUTHOR_NAME}}
Author title: {{AUTHOR_TITLE}}
Company: {{COMPANY_NAME}}
Post date: {{TARGET_DATE}}
</request_metadata>

<content_brief>
Topic: {{TOPIC}}
Key insight: {{MAIN_INSIGHT}}
Supporting data: {{DATA_POINT}}
Contrarian angle: {{CONTRARIAN_ELEMENT}}
Call to action: {{CTA_TYPE}}
</content_brief>

<linkedin_constraints>
Character limit: 3000 (target: 1200-1500 for optimal engagement)
Structure: Hook → Context → Insight → Evidence → CTA
Formatting: Line breaks every 1-2 sentences, no hashtags in body
Engagement target: {{ENGAGEMENT_BENCHMARK}} based on author's average
</linkedin_constraints>

<brand_constraints>
Voice: {{AUTHOR_VOICE_DESCRIPTION}}
Topics allowed: {{APPROVED_TOPICS}}
Topics prohibited: {{PROHIBITED_TOPICS}}
Competitor mentions: {{COMPETITOR_POLICY}}
</brand_constraints>

GENERATION PROCESS:

Step 1 - Topic validation:
- Verify topic is in approved list
- Check for prohibited topic overlap
- Confirm no competitor mention violations
- Log: "TOPIC_VALIDATION | approved | [yes/no]"

Step 2 - Hook generation:
- Generate 3 hook options (first 2 lines)
- Each hook must create curiosity gap or state contrarian position
- No questions as hooks (statements perform 23% better on LinkedIn)
- Select strongest hook based on scroll-stop potential

Step 3 - Body generation:
- Context: Why this matters now (2-3 lines)
- Insight: The non-obvious observation (3-4 lines)
- Evidence: Data point or example (2-3 lines)
- Implication: What reader should think/do differently (2-3 lines)

Step 4 - CTA generation:
- Match CTA type to post goal
- Engagement CTA: Question inviting comments
- Traffic CTA: Link with clear benefit
- Lead CTA: Offer with value exchange

Step 5 - Validation:
- Character count check
- Line break formatting check
- Brand voice alignment score
- Hashtag placement check (end only, max 3)

OUTPUT FORMAT:

---
content_id: [UUID]
content_type: linkedin_post
author: {{AUTHOR_NAME}}
target_date: {{TARGET_DATE}}
character_count: [count]
---

## Hook Options

Option 1 (Recommended):
[Hook text]
Scroll-stop score: [X]/10

Option 2:
[Hook text]
Scroll-stop score: [X]/10

Option 3:
[Hook text]
Scroll-stop score: [X]/10

## Full Post (with recommended hook)

[COMPLETE POST TEXT WITH LINE BREAKS]

---
## Validation Report
- Character count: [X]/3000
- Line break frequency: [X] breaks
- Brand voice score: [X]/100
- Hashtags: [list] (placed at end: [yes/no])
- Estimated engagement: [X] based on similar posts
---

## Posting Checklist
- [ ] Author reviewed and approved voice
- [ ] Links tested and tracking parameters added
- [ ] Image/document attached if required
- [ ] Scheduled for optimal time: {{OPTIMAL_TIME}}
Google Ads Responsive Search Ad Copy (Full Governed Prompt)

SYSTEM: You are a governed content generator for Google Ads copy.

<request_metadata>
Campaign: {{CAMPAIGN_NAME}}
Ad group: {{AD_GROUP_NAME}}
Requester: {{REQUESTER_EMAIL}}
Launch date: {{LAUNCH_DATE}}
</request_metadata>

<targeting>
Primary keyword: {{PRIMARY_KEYWORD}}
Match type: {{MATCH_TYPE}}
Audience: {{AUDIENCE_DESCRIPTION}}
Landing page: {{LANDING_PAGE_URL}}
</targeting>

<ad_requirements>
Headlines needed: 15 (Google recommends 15 for RSA)
Descriptions needed: 4
Character limits:
- Headlines: 30 characters max
- Descriptions: 90 characters max
</ad_requirements>

<performance_baseline>
Top performing headlines from account:
1. "{{TOP_HEADLINE_1}}" - CTR: {{CTR_1}}%
2. "{{TOP_HEADLINE_2}}" - CTR: {{CTR_2}}%
3. "{{TOP_HEADLINE_3}}" - CTR: {{CTR_3}}%

Top performing descriptions:
1. "{{TOP_DESC_1}}" - CTR: {{CTR_DESC_1}}%
</performance_baseline>

<constraints>
Required headline types:
- 3x Keyword-focused (include primary keyword)
- 3x Benefit-focused (outcome language)
- 3x Feature-focused (specific capabilities)
- 3x Social proof (numbers, awards, ratings)
- 3x CTA-focused (action language)

Required description types:
- 1x Comprehensive (benefit + feature + CTA)
- 1x Urgency-focused (time or scarcity element)
- 1x Trust-focused (proof elements)
- 1x Differentiator-focused (vs competitors)

Prohibited:
- Exclamation marks (max 1 per ad)
- ALL CAPS words
- Trademarked competitor terms
- Unsubstantiated superlatives ("best", "fastest" without proof)
</constraints>

GENERATION PROCESS:

Step 1 - Keyword analysis:
- Verify primary keyword fits in 30 char headline
- Identify keyword variants for natural inclusion
- Check landing page alignment with keyword intent

Step 2 - Headline generation:
- Generate 3 headlines per required type (15 total)
- Verify each headline ≤ 30 characters
- Score each against top performing patterns
- Flag any approaching character limit

Step 3 - Description generation:
- Generate 1 description per required type (4 total)
- Verify each description ≤ 90 characters
- Ensure CTA present in at least 2 descriptions
- Include keyword in at least 2 descriptions

Step 4 - Validation:
- Character count verification (hard fail if over)
- Prohibited element check
- Trademark scan
- Landing page message match score

Step 5 - Pin recommendations:
- Recommend headlines for position 1 (most important)
- Recommend headlines for position 2
- Note headlines that work in any position

OUTPUT FORMAT:

---
content_id: [UUID]
content_type: google_ads_rsa
campaign: {{CAMPAIGN_NAME}}
ad_group: {{AD_GROUP_NAME}}
primary_keyword: {{PRIMARY_KEYWORD}}
---

## Headlines (15 required)

### Keyword-Focused (3)
| # | Headline | Chars | Pattern Match |
|---|----------|-------|---------------|
| 1 | [headline] | [XX]/30 | [similarity to top performer] |
| 2 | [headline] | [XX]/30 | |
| 3 | [headline] | [XX]/30 | |

### Benefit-Focused (3)
| # | Headline | Chars | Pattern Match |
|---|----------|-------|---------------|
| 4 | [headline] | [XX]/30 | |
| 5 | [headline] | [XX]/30 | |
| 6 | [headline] | [XX]/30 | |

### Feature-Focused (3)
| # | Headline | Chars | Pattern Match |
|---|----------|-------|---------------|
| 7 | [headline] | [XX]/30 | |
| 8 | [headline] | [XX]/30 | |
| 9 | [headline] | [XX]/30 | |

### Social Proof (3)
| # | Headline | Chars | Pattern Match |
|---|----------|-------|---------------|
| 10 | [headline] | [XX]/30 | |
| 11 | [headline] | [XX]/30 | |
| 12 | [headline] | [XX]/30 | |

### CTA-Focused (3)
| # | Headline | Chars | Pattern Match |
|---|----------|-------|---------------|
| 13 | [headline] | [XX]/30 | |
| 14 | [headline] | [XX]/30 | |
| 15 | [headline] | [XX]/30 | |

## Descriptions (4 required)

| # | Type | Description | Chars |
|---|------|-------------|-------|
| 1 | Comprehensive | [description] | [XX]/90 |
| 2 | Urgency | [description] | [XX]/90 |
| 3 | Trust | [description] | [XX]/90 |
| 4 | Differentiator | [description] | [XX]/90 |

## Pin Recommendations

Position 1 (always show): Headlines #[X], #[X], #[X]
Position 2: Headlines #[X], #[X]
Any position: All others

---
## Validation Report
- All headlines ≤ 30 chars: [pass/fail]
- All descriptions ≤ 90 chars: [pass/fail]
- Prohibited elements found: [none/list]
- Keyword inclusion: [X]/15 headlines, [X]/4 descriptions
- Landing page match score: [X]/100
---

Action item: Pick the prompt matching your most common content request. Run it with real campaign data this week. Measure time-to-first-draft compared to your current process.

Why Do Marketing Teams Need Governed AI Content Systems?

Ungoverned AI content creates three problems: brand inconsistency, compliance risk, and unmeasurable quality. Marketing teams using ChatGPT or basic Claude integrations report 35-45% of outputs require significant editing. Legal teams flag 12% of AI content for compliance issues. Brand managers spend 6+ hours weekly fixing tone violations.

Governed systems solve these problems at the source. Validation runs before output, not after. Approval gates catch issues before publication. Audit trails prove compliance for legal review. The investment in governance infrastructure pays back through reduced rework and faster time-to-publish.

The Cost of Ungoverned AI Content

Calculate your current rework cost with this formula:

Monthly AI outputs × Rework rate × Hours per rework × Hourly cost = Monthly rework cost

Example: 200 outputs × 40% rework × 1.5 hours × $75/hour = $9,000/month in rework

A governed system targeting 10% rework rate saves $6,750/month in this scenario. The infrastructure investment typically pays back within 60-90 days for teams producing 100+ content pieces monthly.
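The same calculation as a quick script, using the example numbers above:

def monthly_rework_cost(outputs: int, rework_rate: float,
                        hours_per_rework: float, hourly_cost: float) -> float:
    return outputs * rework_rate * hours_per_rework * hourly_cost

current = monthly_rework_cost(200, 0.40, 1.5, 75)   # $9,000
governed = monthly_rework_cost(200, 0.10, 1.5, 75)  # $2,250
print(f"Monthly savings: ${current - governed:,.0f}")  # Monthly savings: $6,750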

What Governance Means for Marketing Content

Governance is not bureaucracy. Governance is automated quality control. The system handles checks that humans forget or skip under deadline pressure. Every output meets minimum standards without manual review of routine items.

Governance components for marketing content:

  • Brand validation: Automated checks against voice guidelines, banned phrases, tone requirements
  • SEO validation: Keyword placement verification, meta tag length checks, heading structure validation
  • Compliance validation: Disclosure requirements, claim substantiation, regulatory language
  • Approval routing: Automatic assignment based on content type, risk level, and publish destination
  • Audit logging: Timestamp, requester, skills applied, validation scores, approver, publish status

Action item: Calculate your current monthly rework cost using the formula above. This number justifies your governance infrastructure investment.

What Does the Governed Project Architecture Look Like?

A governed GTM content system requires specific folder structures, validation skills, and workflow configurations. The architecture separates content generation from validation from approval from publishing. Each stage has defined inputs, outputs, and quality gates.

Complete Governed Folder Structure

gtm-governed-content/

├── claude.md                              # Master system prompt with governance rules
├── mcp.json                               # n8n MCP server connection

├── config/                                # Governance configuration
│   ├── roles.json                         # Role definitions and permissions
│   ├── approval-matrix.json               # Content type → approver mapping
│   ├── risk-levels.json                   # Risk classification rules
│   └── compliance-rules.json              # Industry-specific requirements

├── skills/                                # Generation and validation skills
│   │
│   ├── generation/                        # Content creation skills
│   │   ├── brand-voice/SKILL.md
│   │   ├── landing-page/SKILL.md
│   │   ├── email-sequence/SKILL.md
│   │   ├── social-post/SKILL.md
│   │   └── ad-copy/SKILL.md
│   │
│   ├── validation/                        # Quality gate skills
│   │   ├── brand-compliance/SKILL.md      # Voice, tone, banned phrases
│   │   ├── seo-compliance/SKILL.md        # Keywords, meta, structure
│   │   ├── legal-compliance/SKILL.md      # Disclosures, claims, regulations
│   │   └── channel-compliance/SKILL.md    # Platform-specific requirements
│   │
│   └── data/                              # Data integration skills
│       ├── ga4-baseline/SKILL.md          # Historical performance context
│       ├── semrush-keywords/SKILL.md      # Keyword validation
│       └── hubspot-context/SKILL.md       # CRM data for personalization

├── templates/                             # Deterministic prompt templates
│   ├── landing-page-request.md            # Structured input template
│   ├── email-sequence-request.md
│   ├── social-campaign-request.md
│   └── content-brief-request.md

├── workflows/                             # n8n workflow definitions
│   ├── content-generation.json            # Main generation workflow
│   ├── validation-pipeline.json           # Multi-stage validation
│   ├── approval-routing.json              # Stakeholder approval flow
│   ├── publishing-pipeline.json           # Channel-specific publishing
│   └── rollback-workflow.json             # Version restoration

├── briefs/                                # Campaign input documents
│   └── [campaign-name]/
│       ├── brief.md                       # Campaign requirements
│       ├── keywords.csv                   # Target keywords from Semrush
│       └── baseline.csv                   # Performance baseline from GA4

├── outputs/                               # Generated content with metadata
│   └── [campaign-name]/
│       └── [content-type]/
│           ├── content.md                 # Generated content
│           ├── validation.json            # Validation scores and flags
│           ├── metadata.json              # Audit trail data
│           └── versions/                  # Version history
│               ├── v1.md
│               └── v2.md

└── logs/                                  # Audit logs
    ├── generation.log                     # Content creation events
    ├── validation.log                     # Validation results
    ├── approval.log                       # Approval decisions
    └── publishing.log                     # Publish events

How the Governance Flow Works

Every content request follows this pipeline:

Request → Validation → Generation → Quality Gate → Approval → Publishing
   │           │            │             │            │           │
   ▼           ▼            ▼             ▼            ▼           ▼
Brief      Check        Apply         Score        Route       Push to
received   inputs       skills        output       to owner    channel
           against                    against
           config                     rules

Stage 1 - Request Validation: System checks the request against config/roles.json. Does this user have permission to request this content type? Are required fields populated? Is the brief complete?

Stage 2 - Generation: Claude applies generation skills based on content type. Skills constrain outputs to brand guidelines, format requirements, and channel specifications.

Stage 3 - Quality Gate: Validation skills score the output. Brand compliance, SEO compliance, legal compliance, channel compliance. Each returns a score and specific flags.

Stage 4 - Approval Routing: n8n workflow reads validation scores and routes to approvers based on approval-matrix.json. High-risk content requires senior approval. Low-risk content auto-approves if scores pass thresholds.

Stage 5 - Publishing: Approved content pushes to destination channels via n8n integrations. HubSpot for email, LinkedIn API for social, CMS webhook for landing pages.

Action item: Map your current content approval process. Identify which decisions follow consistent rules (can be automated) versus which require human judgment (need approval routing).

How Do You Build Deterministic Content Templates?

Deterministic templates constrain AI decisions to predefined options. Instead of asking Claude to “write in our brand voice,” you provide explicit choices Claude selects from. This eliminates variation and makes outputs predictable.

The Problem with Open-Ended Prompts

Open-ended prompt: “Write a landing page headline for our new product.”

This produces unpredictable outputs because Claude makes unconstrained decisions about:

  • Headline length
  • Tone intensity
  • Benefit vs feature focus
  • Audience assumptions
  • Power words usage

Each generation produces different results. Quality varies. Brand consistency suffers.

The Deterministic Alternative

Constrained prompt with explicit options:

SYSTEM: You are a headline generator selecting from predefined patterns.

<product>
Name: {{PRODUCT_NAME}}
Primary benefit: {{PRIMARY_BENEFIT}}
Target audience: {{AUDIENCE}}
</product>

<constraints>
Length: Select from [6-8 words, 8-10 words, 10-12 words]
Pattern: Select from [Benefit-first, Problem-solution, How-to, Number-based]
Tone: Select from [Confident-direct, Curious-questioning, Urgent-action]
Power words: Select maximum 1 from [proven, guaranteed, instant, exclusive, free]
</constraints>

Generate exactly 3 headlines. For each headline:
1. State the length option selected
2. State the pattern selected
3. State the tone selected
4. State the power word selected (or "none")
5. Provide the headline

Output format:
HEADLINE 1:
- Length: [selection]
- Pattern: [selection]
- Tone: [selection]
- Power word: [selection]
- Text: [headline]

This prompt produces consistent, traceable outputs. You know exactly which constraints Claude applied. Reviewers verify selections match brand guidelines. Audit trails capture the decision path.
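The governance value comes from rejecting any selection that falls outside the predefined options before the prompt is ever sent. A minimal Python guard for that, using the constraint options from the prompt above:

ALLOWED = {
    "length": ["6-8 words", "8-10 words", "10-12 words"],
    "pattern": ["Benefit-first", "Problem-solution", "How-to", "Number-based"],
    "tone": ["Confident-direct", "Curious-questioning", "Urgent-action"],
    "power_word": ["proven", "guaranteed", "instant", "exclusive", "free", "none"],
}

def validate_selections(selections: dict) -> dict:
    # Reject anything outside the predefined options before building the prompt.
    for field, value in selections.items():
        if value not in ALLOWED.get(field, []):
            raise ValueError(f"{field}={value!r} is not an allowed option")
    return selections

selections = validate_selections({
    "length": "8-10 words",
    "pattern": "Benefit-first",
    "tone": "Confident-direct",
    "power_word": "none",
})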

Landing Page Request Template

Create this file at /templates/landing-page-request.md:

---
template: Landing Page Request
version: 2.0
required_approval: marketing_manager
risk_level: medium
---

# Landing Page Content Request

## Requester Information
- Name: {{REQUESTER_NAME}}
- Role: {{REQUESTER_ROLE}}
- Date: {{REQUEST_DATE}}
- Campaign: {{CAMPAIGN_NAME}}

## Target Specifications

### Primary Keyword
- Keyword: {{PRIMARY_KEYWORD}}
- Search volume: {{SEARCH_VOLUME}}
- Keyword difficulty: {{DIFFICULTY}}
- Source: Semrush export dated {{SEMRUSH_DATE}}

### Audience Segment
Select one:
- [ ] New prospects (awareness stage)
- [ ] Known leads (consideration stage)
- [ ] Sales qualified (decision stage)
- [ ] Existing customers (expansion stage)

### Conversion Goal
Select one:
- [ ] Email signup (low friction)
- [ ] Content download (medium friction)
- [ ] Demo request (high friction)
- [ ] Purchase (highest friction)

## Content Constraints

### Headline Pattern
Select one:
- [ ] Benefit-first: Lead with outcome
- [ ] Problem-solution: State pain, offer fix
- [ ] How-to: Instructional framing
- [ ] Social proof: Lead with results/numbers

### Tone Intensity
Select one:
- [ ] Conservative: No urgency, factual
- [ ] Moderate: Light urgency, confident
- [ ] Aggressive: Strong urgency, bold claims

### Proof Elements Required
Select all that apply:
- [ ] Customer logos
- [ ] Testimonial quotes
- [ ] Case study metrics
- [ ] Industry awards
- [ ] Security certifications

### Compliance Requirements
Select all that apply:
- [ ] GDPR consent language
- [ ] CCPA disclosure
- [ ] Industry-specific disclaimers
- [ ] Competitor comparison rules

## Baseline Performance Data

Reference: /briefs/{{CAMPAIGN_NAME}}/baseline.csv

Top performing pages for similar keywords:
- Page 1: {{TOP_PAGE_1}} - {{CONVERSION_RATE_1}}% CVR
- Page 2: {{TOP_PAGE_2}} - {{CONVERSION_RATE_2}}% CVR
- Page 3: {{TOP_PAGE_3}} - {{CONVERSION_RATE_3}}% CVR

Target conversion rate: {{TARGET_CVR}}% (based on {{BASELINE_SOURCE}})

## Approval Chain
1. Brand review: {{BRAND_REVIEWER}}
2. SEO review: {{SEO_REVIEWER}}
3. Legal review (if compliance boxes checked): {{LEGAL_REVIEWER}}
4. Final approval: {{FINAL_APPROVER}}

This template makes every request deterministic. Requesters select from predefined options. Reviewers verify selections. Claude generates within explicit constraints. Audit logs capture all selections.

Action item: Convert your most common content request into a deterministic template with explicit selection options. Test it with three different requesters to verify consistent understanding.

How Do You Build Validation Skills That Score Outputs?

Validation skills check generated content against defined rules. Each skill returns a compliance score (0-100), specific violations found, and severity ratings. Scores determine approval routing and publish eligibility.

Brand Compliance Validation Skill

Create this file at /skills/validation/brand-compliance/SKILL.md:

---
name: Brand Compliance Validator
description: Scores content against brand voice guidelines. Returns compliance score and violations.
triggers:
  - validate brand
  - check brand compliance
  - brand score
version: 1.0
---

## Purpose

Score content against brand voice guidelines. Flag violations. Calculate compliance score.

## Scoring Rules

### Voice Attributes (40 points)

Check for required voice attributes. Deduct points for violations:

- Direct sentences (10 pts): First sentence states main point. Deduct 2 pts per violation.
- Active voice (10 pts): No passive constructions. Deduct 1 pt per passive sentence.
- Specific claims (10 pts): Numbers over vague quantities. Deduct 2 pts per vague claim.
- Confident tone (10 pts): No hedging language. Deduct 2 pts per hedge word.

### Banned Phrases (30 points)

Deduct 5 points per banned phrase found:

Critical violations (instant fail if found):
- "In today's digital landscape"
- "It's important to note"
- "Unlock the power of"
- "Game-changing solution"
- "Revolutionize your"
- "Best-in-class"

Major violations (5 pts each):
- "Leverage" (use: "use")
- "Utilize" (use: "use")
- "Synergy" (use: "combined benefit")
- "Paradigm" (use: specific description)
- "Innovative" (use: specific feature)

### Structure Compliance (30 points)

- Paragraphs under 80 words (10 pts): Deduct 2 pts per violation
- Keyword in first 10 words (10 pts): Deduct 5 pts if missing in opening
- Front-loaded answers (10 pts): Deduct 2 pts per buried lede

## Output Format

Return validation as JSON:

{
  "skill": "brand-compliance",
  "version": "1.0",
  "timestamp": "[ISO timestamp]",
  "score": [0-100],
  "status": "[PASS|WARN|FAIL]",
  "thresholds": {
    "pass": 85,
    "warn": 70,
    "fail": 69
  },
  "breakdown": {
    "voice_attributes": {
      "score": [0-40],
      "violations": ["list of specific violations"]
    },
    "banned_phrases": {
      "score": [0-30],
      "violations": ["list of phrases found"]
    },
    "structure": {
      "score": [0-30],
      "violations": ["list of structure issues"]
    }
  },
  "critical_violations": ["list of instant-fail items found"],
  "recommended_fixes": ["specific fix instructions"]
}

## Decision Rules

- Score >= 85: PASS - Proceed to next validation
- Score 70-84: WARN - Flag for brand reviewer, may proceed
- Score < 70: FAIL - Return to requester with fix instructions
- Any critical violation: FAIL regardless of score
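Outside the skill file, the same roll-up and decision rules can be sanity-checked in a short script. This sketch assumes the three section scores (40/30/30 points) have already been computed:

def brand_status(voice: int, banned: int, structure: int, critical_violations: list) -> dict:
    # Roll the three section scores (40/30/30 points) into a PASS/WARN/FAIL status.
    score = voice + banned + structure
    if critical_violations:
        status = "FAIL"  # instant fail regardless of score
    elif score >= 85:
        status = "PASS"
    elif score >= 70:
        status = "WARN"
    else:
        status = "FAIL"
    return {"skill": "brand-compliance", "score": score,
            "status": status, "critical_violations": critical_violations}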

SEO Compliance Validation Skill

Create this file at /skills/validation/seo-compliance/SKILL.md:

---
name: SEO Compliance Validator
description: Validates content against SEO requirements using Semrush baseline data.
triggers:
  - validate seo
  - check seo compliance
  - seo score
version: 1.0
---

## Purpose

Score content against SEO requirements. Validate keyword usage. Check technical elements.

## Required Inputs

- Primary keyword from request template
- Semrush keyword data from /briefs/[campaign]/keywords.csv
- Target page type (landing page, blog post, product page)

## Scoring Rules

### Keyword Optimization (40 points)

- Primary keyword in H1 (10 pts): Exact or close variant required
- Primary keyword in first 100 words (10 pts): Must appear naturally
- Primary keyword density 1-2% (10 pts): Deduct 5 pts if under 0.5% or over 3%
- Secondary keywords present (10 pts): At least 2 of provided secondary keywords

### Technical SEO (35 points)

- Meta title 50-60 characters (10 pts): Deduct all if outside range
- Meta title contains keyword (5 pts): Primary keyword required
- Meta description 150-160 characters (10 pts): Deduct all if outside range
- Meta description contains keyword (5 pts): Primary keyword required
- Single H1 tag (5 pts): Deduct all if multiple H1s

### Content Structure (25 points)

- H2 headings present (5 pts): Minimum 3 H2s for landing pages
- H2s as questions (5 pts): At least 50% question format for AEO
- Internal link opportunities noted (5 pts): Minimum 3 suggestions
- Content length appropriate (10 pts): 
  - Landing page: 800-1500 words
  - Blog post: 1500-2500 words
  - Product page: 500-1000 words

## Output Format

Return validation as JSON:

{
  "skill": "seo-compliance",
  "version": "1.0",
  "timestamp": "[ISO timestamp]",
  "score": [0-100],
  "status": "[PASS|WARN|FAIL]",
  "keyword_data": {
    "primary": "[keyword]",
    "volume": [number],
    "difficulty": [number],
    "current_rank": [number or null]
  },
  "breakdown": {
    "keyword_optimization": {
      "score": [0-40],
      "violations": []
    },
    "technical_seo": {
      "score": [0-35],
      "violations": []
    },
    "content_structure": {
      "score": [0-25],
      "violations": []
    }
  },
  "meta_tags": {
    "title": "[generated title]",
    "title_length": [number],
    "description": "[generated description]",
    "description_length": [number]
  },
  "recommended_fixes": []
}

## Decision Rules

- Score >= 80: PASS - SEO requirements met
- Score 65-79: WARN - Flag for SEO reviewer
- Score < 65: FAIL - Return for optimization

Running Validation Pipeline

Create a prompt template that runs all validators in sequence:

SYSTEM: You are a content validation pipeline.

<content>
{{GENERATED_CONTENT}}
</content>

<request_metadata>
{{REQUEST_TEMPLATE_DATA}}
</request_metadata>

Run validation pipeline in this order:
1. brand-compliance validation
2. seo-compliance validation
3. legal-compliance validation (if compliance boxes checked in request)
4. channel-compliance validation

For each validator:
- Apply the validation skill
- Generate the JSON output
- Calculate the score

After all validations complete, generate pipeline summary:

{
  "pipeline_id": "[unique ID]",
  "timestamp": "[ISO timestamp]",
  "content_id": "[reference to content]",
  "validators_run": ["list of validators"],
  "scores": {
    "brand": [score],
    "seo": [score],
    "legal": [score or null],
    "channel": [score]
  },
  "overall_score": [weighted average],
  "status": "[PASS|WARN|FAIL]",
  "blocking_issues": ["list of fails"],
  "warnings": ["list of warns"],
  "approval_routing": {
    "required_approvers": ["based on scores and content type"],
    "auto_approved": [true/false],
    "reason": "[explanation]"
  }
}

Output the pipeline summary JSON followed by consolidated fix recommendations.
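The pipeline summary leaves the weighting of the overall score to you. One possible roll-up in Python; the weights and thresholds here are illustrative assumptions, not values fixed by the system:

def pipeline_summary(scores: dict) -> dict:
    # Illustrative weights; adjust to your own priorities.
    weights = {"brand": 0.4, "seo": 0.3, "legal": 0.2, "channel": 0.1}
    ran = {k: v for k, v in scores.items() if v is not None}
    total_weight = sum(weights[k] for k in ran)
    overall = round(sum(v * weights[k] for k, v in ran.items()) / total_weight, 1)
    # Illustrative roll-up: any validator below 70 blocks, below 85 warns.
    if any(v < 70 for v in ran.values()):
        status = "FAIL"
    elif any(v < 85 for v in ran.values()):
        status = "WARN"
    else:
        status = "PASS"
    return {"scores": scores, "overall_score": overall, "status": status}

print(pipeline_summary({"brand": 87, "seo": 92, "legal": None, "channel": 90}))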

Action item: Create the brand-compliance validation skill first. Test it against 10 pieces of existing content to calibrate scoring thresholds for your brand.

How Do You Build Approval Workflows in n8n?

n8n workflows route content through approval gates based on validation scores, content type, and risk level. Approvers receive Slack notifications with content previews and one-click approve/reject actions.

Approval Matrix Configuration

Create this file at /config/approval-matrix.json:

{
  "version": "1.0",
  "default_approvers": {
    "brand": "brand-manager@company.com",
    "seo": "seo-lead@company.com",
    "legal": "legal@company.com",
    "final": "marketing-director@company.com"
  },
  "content_types": {
    "landing_page": {
      "risk_level": "medium",
      "required_validations": ["brand", "seo"],
      "conditional_validations": {
        "legal": "when compliance_boxes_checked"
      },
      "approval_rules": {
        "auto_approve_threshold": 90,
        "single_approver_threshold": 80,
        "multi_approver_threshold": 70,
        "reject_threshold": 69
      },
      "approvers_by_score": {
        "90-100": [],
        "80-89": ["brand"],
        "70-79": ["brand", "seo"],
        "0-69": ["brand", "seo", "final"]
      }
    },
    "email_campaign": {
      "risk_level": "medium",
      "required_validations": ["brand", "legal"],
      "approval_rules": {
        "auto_approve_threshold": 85,
        "single_approver_threshold": 75,
        "multi_approver_threshold": 65,
        "reject_threshold": 64
      },
      "approvers_by_score": {
        "85-100": [],
        "75-84": ["brand"],
        "65-74": ["brand", "legal"],
        "0-64": ["brand", "legal", "final"]
      }
    },
    "social_post": {
      "risk_level": "low",
      "required_validations": ["brand"],
      "approval_rules": {
        "auto_approve_threshold": 80,
        "single_approver_threshold": 70,
        "multi_approver_threshold": 0,
        "reject_threshold": 59
      },
      "approvers_by_score": {
        "80-100": [],
        "70-79": ["brand"],
        "60-69": ["brand", "social-manager"],
        "0-59": ["brand", "final"]
      }
    },
    "paid_ad": {
      "risk_level": "high",
      "required_validations": ["brand", "legal", "channel"],
      "approval_rules": {
        "auto_approve_threshold": 95,
        "single_approver_threshold": 85,
        "multi_approver_threshold": 75,
        "reject_threshold": 74
      },
      "approvers_by_score": {
        "95-100": [],
        "85-94": ["brand"],
        "75-84": ["brand", "legal"],
        "0-74": ["brand", "legal", "final"]
      }
    }
  },
  "escalation_rules": {
    "approval_timeout_hours": 24,
    "escalation_path": ["direct_manager", "marketing_director", "cmo"],
    "auto_reject_after_hours": 72
  }
}
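An n8n Code node (or any script) can resolve approvers directly from this file. A minimal Python sketch using the score bands above:

import json

def required_approvers(content_type: str, overall_score: int,
                       matrix_path: str = "config/approval-matrix.json") -> list:
    # Resolve approver roles from the score bands in approval-matrix.json.
    with open(matrix_path) as f:
        matrix = json.load(f)
    bands = matrix["content_types"][content_type]["approvers_by_score"]
    for band, approvers in bands.items():
        low, high = (int(x) for x in band.split("-"))
        if low <= overall_score <= high:
            return approvers  # an empty list means auto-approve
    return ["final"]  # defensive fallback; should not be reached

print(required_approvers("landing_page", 82))  # ["brand"] per the matrix above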

n8n Approval Workflow Prompt

Use this prompt to have Claude create the approval workflow in n8n:

SYSTEM: You are an n8n workflow automation expert building an approval routing system.

<workflow_requirements>
Name: Content Approval Router
Trigger: Webhook receiving validation pipeline output

Input payload structure:
{
  "pipeline_id": "string",
  "content_id": "string",
  "content_type": "landing_page|email_campaign|social_post|paid_ad",
  "requester_email": "string",
  "scores": {
    "brand": "number",
    "seo": "number",
    "legal": "number|null",
    "channel": "number"
  },
  "overall_score": "number",
  "status": "PASS|WARN|FAIL",
  "content_preview": "string (first 500 chars)",
  "full_content_url": "string"
}

Workflow logic:
1. Parse incoming payload
2. Load approval matrix from config
3. Determine required approvers based on content_type and overall_score
4. For each required approver:
   - Send Slack message with content preview
   - Include Approve/Reject buttons (Slack interactive message)
   - Store pending approval in tracking database
5. Wait for responses or timeout
6. On all approvals: trigger publishing workflow
7. On any rejection: notify requester with feedback
8. On timeout: escalate per escalation_rules

Slack message format:
- Header: [APPROVAL REQUIRED] {content_type} - Score: {overall_score}
- Body: Preview of content (first 500 chars)
- Scores breakdown: Brand: X, SEO: X, Legal: X
- Buttons: [Approve] [Reject] [View Full Content]
- Footer: Requester: {email} | Auto-escalates in {hours}h

Integrations needed:
- Slack (for notifications and interactive buttons)
- Webhook response handling
- Database node (for approval tracking state)
</workflow_requirements>

MUST:
1. Search templates for approval or routing workflows first
2. Use the webhook processing workflow pattern
3. Configure Slack interactive messages correctly
4. Handle timeout and escalation logic
5. Log all approval events for audit trail
6. Include error handling for failed notifications

Create this workflow in my n8n instance. Return the workflow link and list of required credentials.

Approval Tracking Database Schema

Store approval state in your database (or n8n’s built-in storage):

{
  "approval_records": {
    "pipeline_id": "string (primary key)",
    "content_id": "string",
    "content_type": "string",
    "requester_email": "string",
    "request_timestamp": "ISO datetime",
    "overall_score": "number",
    "required_approvers": ["array of emails"],
    "approvals_received": [
      {
        "approver_email": "string",
        "decision": "approved|rejected",
        "timestamp": "ISO datetime",
        "comments": "string|null"
      }
    ],
    "status": "pending|approved|rejected|escalated|expired",
    "final_decision_timestamp": "ISO datetime|null",
    "published": "boolean",
    "publish_timestamp": "ISO datetime|null",
    "publish_destinations": ["array of channel names"]
  }
}

Action item: Create your approval-matrix.json with content types, thresholds, and approver assignments specific to your team structure.

How Do You Integrate GA4 and Semrush for Data-Driven Decisions?

Governance requires decisions based on data, not assumptions. GA4 provides performance baselines. Semrush provides keyword validation. Claude queries both before generating content to ground decisions in historical performance.

GA4 Baseline Skill

Create this file at /skills/data/ga4-baseline/SKILL.md:

---
name: GA4 Performance Baseline
description: Pulls historical performance data to inform content decisions.
triggers:
  - get baseline
  - performance data
  - ga4 metrics
  - historical performance
version: 1.0
---

## Purpose

Query GA4 for historical performance data. Provide baselines for conversion rate targets, engagement benchmarks, and content format effectiveness.

## Required Data Points

For landing pages:
- Conversion rate by page template
- Average time on page by content length
- Bounce rate by traffic source
- Top performing headlines (from page titles)

For email campaigns:
- Open rate by subject line pattern
- Click rate by CTA placement
- Unsubscribe rate by content type
- Best send times by audience segment

For social posts:
- Engagement rate by post format
- Click-through rate by platform
- Best posting times by day of week
- Top performing content themes

## Query Templates

Landing page baseline query:
- Metric: Conversion rate
- Dimension: Page path
- Filter: Pages matching /landing/ or /lp/
- Date range: Last 90 days
- Sort: Conversion rate descending
- Limit: Top 20 pages

Content performance query:
- Metrics: Sessions, engagement rate, conversions
- Dimensions: Page title, content group
- Date range: Last 90 days
- Filter: Organic traffic only
- Purpose: Identify top performing content patterns

## Output Format

Return baseline data as structured JSON:

{
  "data_source": "ga4",
  "property_id": "[GA4 property]",
  "query_date": "[ISO timestamp]",
  "date_range": {
    "start": "[date]",
    "end": "[date]"
  },
  "baseline_metrics": {
    "landing_pages": {
      "average_cvr": [percentage],
      "top_quartile_cvr": [percentage],
      "median_time_on_page": [seconds],
      "average_bounce_rate": [percentage]
    },
    "top_performers": [
      {
        "page_path": "[path]",
        "page_title": "[title]",
        "sessions": [number],
        "cvr": [percentage],
        "pattern_identified": "[headline pattern or content type]"
      }
    ]
  },
  "recommendations": {
    "target_cvr": "[recommended target based on top quartile]",
    "headline_patterns": ["patterns from top performers"],
    "content_length": "[recommended range based on engagement]"
  }
}

## Usage in Generation

When generating landing pages:
1. Pull baseline before generation
2. Set target CVR based on top_quartile_cvr
3. Apply headline_patterns to headline generation constraints
4. Reference top_performers as examples in generation prompt
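If you work from a CSV export rather than the GA4 API, a short script can produce the baseline numbers the skill needs. This sketch assumes the export has sessions and conversions columns per page, and the campaign path is a placeholder; adjust both to your setup:

import csv
import statistics

def landing_page_baseline(path: str = "briefs/q1-launch/baseline.csv") -> dict:
    # Assumes 'sessions' and 'conversions' columns per page; adjust names to your export.
    cvrs = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sessions = float(row["sessions"])
            if sessions > 0:
                cvrs.append(100 * float(row["conversions"]) / sessions)
    q1, median, q3 = statistics.quantiles(cvrs, n=4)
    return {
        "average_cvr": round(statistics.mean(cvrs), 2),
        "top_quartile_cvr": round(q3, 2),
        "target_cvr": round(q3, 2),  # per the skill's recommendation above
    }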

Semrush Keyword Validation Skill

Create this file at /skills/data/semrush-keywords/SKILL.md:

---
name: Semrush Keyword Validator
description: Validates keyword targets against Semrush data before content generation.
triggers:
  - validate keyword
  - keyword data
  - semrush check
  - search volume
version: 1.0
---

## Purpose

Validate keyword selections against Semrush data. Prevent content generation for unviable keywords. Provide competitive context for realistic targeting.

## Validation Checks

### Viability Check
- Search volume >= 100/month (or client-specific threshold)
- Keyword difficulty <= 70 (or client-specific threshold)
- Not a branded competitor term
- Commercial intent present (for landing pages)

### Competitive Check
- Current ranking position (if tracking)
- Top 10 competitor analysis
- Content gap identification
- SERP feature opportunities

### Related Keywords
- Secondary keyword suggestions
- Long-tail variations
- Question-based variants for AEO

## Required Inputs

From request template:
- Primary keyword
- Content type (determines intent requirements)
- Target audience (determines keyword variations)

From Semrush export or API:
- Search volume
- Keyword difficulty
- CPC (as proxy for commercial intent)
- SERP features present
- Current ranking (if tracked)

## Output Format

{
  "keyword": "[primary keyword]",
  "validation_status": "APPROVED|CAUTION|REJECTED",
  "metrics": {
    "search_volume": [monthly searches],
    "keyword_difficulty": [0-100],
    "cpc": [dollar amount],
    "commercial_intent": "high|medium|low",
    "current_rank": [position or null]
  },
  "viability_score": [0-100],
  "viability_factors": {
    "volume_check": "pass|fail",
    "difficulty_check": "pass|fail",
    "intent_check": "pass|fail"
  },
  "serp_analysis": {
    "features_present": ["featured_snippet", "people_also_ask", "local_pack"],
    "opportunity": "[which features are targetable]"
  },
  "recommendations": {
    "proceed": true|false,
    "alternative_keywords": ["if rejected, suggest alternatives"],
    "secondary_keywords": ["for approved, suggest secondaries"],
    "content_angle": "[based on SERP analysis]"
  }
}

## Decision Rules

- Viability score >= 70: APPROVED - Proceed with generation
- Viability score 50-69: CAUTION - Proceed with adjusted expectations
- Viability score < 50: REJECTED - Suggest alternatives, do not generate

## Integration with Generation

Before generating landing page content:
1. Run keyword validation
2. If REJECTED: Return alternatives to requester, halt generation
3. If CAUTION: Include risk note in validation output
4. If APPROVED: Pass keyword data to generation skill
5. Include secondary_keywords in generation constraints
6. Use content_angle recommendation in positioning
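A scripted version of the viability check, working from a Semrush export row. The pass/fail thresholds mirror the skill above; the point weighting is an illustrative assumption, not a Semrush formula:

def keyword_viability(volume: int, difficulty: int, cpc: float) -> dict:
    # Pass/fail checks mirror the skill above; point weights are illustrative.
    checks = {
        "volume_check": "pass" if volume >= 100 else "fail",
        "difficulty_check": "pass" if difficulty <= 70 else "fail",
        "intent_check": "pass" if cpc >= 1.00 else "fail",  # CPC as a rough intent proxy
    }
    weights = {"volume_check": 40, "difficulty_check": 35, "intent_check": 25}
    score = sum(weights[k] for k, v in checks.items() if v == "pass")
    if score >= 70:
        status = "APPROVED"
    elif score >= 50:
        status = "CAUTION"
    else:
        status = "REJECTED"
    return {"viability_score": score, "viability_factors": checks,
            "validation_status": status}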

Data-Driven Generation Prompt

Combine data skills with generation:

SYSTEM: You are a data-driven content generator.

<request>
{{LANDING_PAGE_REQUEST_TEMPLATE}}
</request>

Before generating content, execute data validation:

STEP 1: Keyword Validation
- Apply semrush-keywords skill
- Validate primary keyword viability
- If REJECTED: Stop and return alternatives
- If APPROVED: Continue with keyword data

STEP 2: Performance Baseline
- Apply ga4-baseline skill
- Pull conversion rate benchmarks
- Identify top performing patterns
- Set target metrics based on top quartile

STEP 3: Generate Content
- Apply brand-voice generation skill
- Apply landing-page generation skill
- Constrain headlines to patterns from baseline
- Target CVR based on GA4 top quartile

STEP 4: Validate Output
- Run brand-compliance validation
- Run seo-compliance validation
- Score against baseline expectations

Output structure:
1. Data validation summary (keyword viability, baseline metrics)
2. Generated content with metadata
3. Validation scores
4. Confidence assessment (how likely to hit target CVR based on baseline comparison)

Action item: Export your GA4 landing page performance data for the last 90 days. Identify the conversion rate of your top 10 pages. This becomes your baseline target.

How Do You Build Audit Trails for Compliance?

Audit trails prove governance compliance. Every content piece tracks its full lifecycle: who requested it, which constraints applied, what scores it received, who approved it, when it published. Legal and compliance teams require this documentation.

Metadata Schema for Every Output

Every generated content file includes a companion metadata.json:

{
  "content_id": "uuid-v4",
  "created_at": "ISO timestamp",
  "updated_at": "ISO timestamp",
  "version": 1,
  
  "request": {
    "requester_email": "string",
    "requester_role": "string",
    "request_timestamp": "ISO timestamp",
    "template_used": "landing-page-request.md",
    "template_version": "2.0",
    "brief_reference": "/briefs/campaign-name/brief.md"
  },
  
  "constraints_applied": {
    "headline_pattern": "benefit-first",
    "tone_intensity": "moderate",
    "proof_elements": ["customer_logos", "testimonials"],
    "compliance_requirements": ["gdpr_consent"]
  },
  
  "data_inputs": {
    "keyword": {
      "primary": "string",
      "validation_status": "APPROVED",
      "viability_score": 82
    },
    "baseline": {
      "source": "ga4",
      "query_date": "ISO timestamp",
      "target_cvr": "3.2%",
      "benchmark_pages": ["list of reference pages"]
    }
  },
  
  "generation": {
    "skills_applied": ["brand-voice", "landing-page"],
    "model": "claude-3-opus",
    "generation_timestamp": "ISO timestamp",
    "token_count": 2847
  },
  
  "validation": {
    "pipeline_id": "uuid",
    "timestamp": "ISO timestamp",
    "scores": {
      "brand_compliance": 87,
      "seo_compliance": 92,
      "legal_compliance": 95,
      "overall": 91
    },
    "status": "PASS",
    "violations_flagged": [],
    "fixes_applied": []
  },
  
  "approval": {
    "required_approvers": ["brand-manager@company.com"],
    "approvals": [
      {
        "approver": "brand-manager@company.com",
        "decision": "approved",
        "timestamp": "ISO timestamp",
        "comments": null
      }
    ],
    "final_status": "approved",
    "approval_timestamp": "ISO timestamp"
  },
  
  "publishing": {
    "published": true,
    "publish_timestamp": "ISO timestamp",
    "destinations": [
      {
        "channel": "website",
        "url": "https://company.com/landing/page-slug",
        "cms": "hubspot"
      }
    ],
    "version_published": 1
  },
  
  "performance": {
    "last_updated": "ISO timestamp",
    "metrics": {
      "pageviews": null,
      "conversions": null,
      "cvr": null
    }
  }
}

Audit Log Format

Append events to /logs/generation.log:

[2025-01-15T10:23:45Z] REQUEST | content_id=abc123 | requester=john@company.com | type=landing_page | campaign=q1-launch
[2025-01-15T10:23:47Z] VALIDATION | content_id=abc123 | keyword=primary-keyword | status=APPROVED | viability=82
[2025-01-15T10:23:48Z] BASELINE | content_id=abc123 | source=ga4 | target_cvr=3.2% | benchmark_pages=5
[2025-01-15T10:23:52Z] GENERATION | content_id=abc123 | skills=brand-voice,landing-page | tokens=2847
[2025-01-15T10:23:55Z] VALIDATION | content_id=abc123 | pipeline=def456 | brand=87 | seo=92 | overall=91 | status=PASS
[2025-01-15T10:24:00Z] APPROVAL_REQUESTED | content_id=abc123 | approvers=brand-manager@company.com
[2025-01-15T14:35:22Z] APPROVAL_RECEIVED | content_id=abc123 | approver=brand-manager@company.com | decision=approved
[2025-01-15T14:35:25Z] PUBLISHED | content_id=abc123 | destination=hubspot | url=https://company.com/landing/page-slug
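A small helper keeps log lines in this exact pipe-delimited format. A minimal Python sketch:

from datetime import datetime, timezone

def log_event(event: str, **fields: str) -> None:
    # Append a pipe-delimited audit event in the format shown above.
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    parts = [f"[{timestamp}] {event}"] + [f"{k}={v}" for k, v in fields.items()]
    with open("logs/generation.log", "a") as f:
        f.write(" | ".join(parts) + "\n")

log_event("GENERATION", content_id="abc123",
          skills="brand-voice,landing-page", tokens="2847")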

Compliance Report Generation

Create a skill to generate compliance reports:

SYSTEM: You are a compliance report generator.

<date_range>
Start: {{START_DATE}}
End: {{END_DATE}}
</date_range>

<report_type>
{{REPORT_TYPE: weekly|monthly|quarterly}}
</report_type>

Generate compliance report including:

1. VOLUME SUMMARY
- Total content pieces generated
- Breakdown by content type
- Breakdown by requester

2. VALIDATION METRICS
- Average brand compliance score
- Average SEO compliance score
- Average legal compliance score
- Rejection rate (content failing validation)
- Most common violations

3. APPROVAL METRICS
- Auto-approval rate
- Average time to approval
- Escalation rate
- Rejection rate at approval stage

4. PUBLISHING METRICS
- Content published vs generated ratio
- Average time from request to publish
- Rollback incidents

5. GOVERNANCE HEALTH
- Skills coverage (% of requests using all required validations)
- Audit trail completeness
- Data integration uptime

Output: Formatted report suitable for compliance review meeting.

Action item: Define your metadata schema based on what your legal and compliance teams require. Start logging immediately, even before full workflow implementation.

How Do You Measure If Governance Is Working?

Track these metrics weekly to prove governance ROI and identify system improvements. Build a simple dashboard in Google Sheets or your BI tool.

Governance Health Dashboard Metrics

Efficiency Metrics (Weekly)

| Metric | How to Calculate | Target | Red Flag |
|--------|------------------|--------|----------|
| Content velocity | Pieces published / pieces requested | >85% | <60% |
| Time to publish | Average days from request to live | <3 days | >5 days |
| First-pass approval rate | Approved on first submission / total submissions | >70% | <50% |
| Auto-approval rate | Auto-approved / total approved | >40% | <20% |

Quality Metrics (Weekly)

| Metric | How to Calculate | Target | Red Flag |
| --- | --- | --- | --- |
| Average brand score | Sum of brand scores / content count | >85 | <75 |
| Average SEO score | Sum of SEO scores / content count | >80 | <70 |
| Post-publish edit rate | Pieces edited after publish / total published | <5% | >15% |
| Brand violation incidents | Violations caught post-publish | 0 | >2 |

Governance Metrics (Weekly)

| Metric | How to Calculate | Target | Red Flag |
| --- | --- | --- | --- |
| Validation coverage | Requests with full validation / total requests | 100% | <95% |
| Audit completeness | Records with full metadata / total records | 100% | <98% |
| Approval SLA compliance | Approved within SLA / total requiring approval | >90% | <75% |
| Escalation rate | Escalated approvals / total approvals | <10% | >25% |
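
If you compute these ratios in a script rather than a spreadsheet, a minimal Python sketch of the math follows. The count names are placeholders for whatever your logs or BI tool expose:

```python
def governance_dashboard(counts: dict) -> dict:
    """Return the weekly ratio metrics as percentages (None when there is no data)."""
    def ratio(numerator: str, denominator: str):
        total = counts.get(denominator, 0)
        return round(100 * counts.get(numerator, 0) / total, 1) if total else None

    return {
        "content_velocity_pct": ratio("pieces_published", "pieces_requested"),
        "first_pass_approval_pct": ratio("approved_first_submission", "total_submissions"),
        "auto_approval_pct": ratio("auto_approved", "total_approved"),
        "validation_coverage_pct": ratio("fully_validated_requests", "total_requests"),
        "audit_completeness_pct": ratio("complete_metadata_records", "total_records"),
        "approval_sla_pct": ratio("approved_within_sla", "total_requiring_approval"),
        "escalation_rate_pct": ratio("escalated_approvals", "total_approvals"),
    }

# Example:
# governance_dashboard({"pieces_published": 17, "pieces_requested": 20, "total_approvals": 18})
```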

Weekly Governance Review Prompt

Run this prompt every Monday to generate your governance report:

SYSTEM: You are a governance metrics analyst.

<date_range>
Week: {{WEEK_START}} to {{WEEK_END}}
</date_range>

<log_sources>
Generation log: /logs/generation.log
Validation log: /logs/validation.log
Approval log: /logs/approval.log
Publishing log: /logs/publishing.log
</log_sources>

Generate weekly governance report:

1. VOLUME SUMMARY
- Total requests received
- By content type breakdown
- By requester breakdown
- Completion rate (published/requested)

2. EFFICIENCY METRICS
- Average time to publish
- First-pass approval rate
- Auto-approval rate
- Bottleneck identification (where content stalls)

3. QUALITY METRICS
- Average brand compliance score
- Average SEO compliance score
- Score trends (improving/declining/stable)
- Top 3 most common violations this week

4. GOVERNANCE HEALTH
- Validation coverage percentage
- Audit completeness percentage
- Approval SLA compliance
- Escalation incidents

5. ACTION ITEMS
- Skills that need threshold adjustment
- Approvers with slowest response times
- Content types with lowest first-pass rate
- Recommended process improvements

Output: Executive summary (3 paragraphs) followed by detailed metrics tables.
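
If you would rather hand Claude only the relevant week of events instead of the full logs, a minimal sketch that slices log lines by their leading timestamp (the `week_slice` helper is illustrative):

```python
from datetime import date, datetime
from pathlib import Path

LOG_FILES = ["/logs/generation.log", "/logs/validation.log",
             "/logs/approval.log", "/logs/publishing.log"]

def week_slice(week_start: date, week_end: date) -> list:
    """Keep log lines whose leading [ISO timestamp] falls inside the week."""
    selected = []
    for path in LOG_FILES:
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            try:
                # Lines start with "[YYYY-MM-DDTHH:MM:SSZ]"
                stamp = datetime.strptime(line[1:20], "%Y-%m-%dT%H:%M:%S").date()
            except ValueError:
                continue  # skip lines that don't start with a timestamp
            if week_start <= stamp <= week_end:
                selected.append(line)
    return selected

# Example:
# week_slice(date(2025, 1, 13), date(2025, 1, 19))
```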

ROI Calculation Template

Update monthly to prove ongoing value:

MONTHLY GOVERNANCE ROI

Content Volume
- Pieces generated this month: [X]
- Pieces published this month: [X]
- Publication rate: [X]%

Time Savings
- Previous avg hours per piece: [X]
- Current avg hours per piece: [X]
- Hours saved per piece: [X]
- Total hours saved: [X] × [pieces] = [X] hours
- Value at $[X]/hour: $[X]

Rework Reduction
- Previous rework rate: [X]%
- Current rework rate: [X]%
- Rework reduction: [X] percentage points
- Pieces avoided rework: [X]
- Hours saved on rework: [X] × 1.5 hrs = [X] hours
- Value: $[X]

Risk Reduction
- Brand violations prevented: [X]
- Compliance issues caught: [X]
- Estimated cost per violation: $[X]
- Risk mitigation value: $[X]

TOTAL MONTHLY VALUE: $[X]
Monthly system cost: $[X]
NET ROI: $[X]
ROI percentage: [X]%
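
The same arithmetic as a minimal Python sketch, if you prefer to compute it from tracked numbers rather than fill the template by hand. Every input is a placeholder you replace with your own figures:

```python
def monthly_roi(pieces_published: int, hours_saved_per_piece: float, hourly_rate: float,
                previous_rework_rate: float, current_rework_rate: float,
                risk_mitigation_value: float, system_cost: float,
                rework_hours_per_piece: float = 1.5) -> dict:
    """Mirror the template: time savings + rework reduction + risk value, minus cost."""
    time_savings_value = pieces_published * hours_saved_per_piece * hourly_rate
    pieces_avoided_rework = pieces_published * (previous_rework_rate - current_rework_rate)
    rework_value = pieces_avoided_rework * rework_hours_per_piece * hourly_rate
    total_value = time_savings_value + rework_value + risk_mitigation_value
    net = total_value - system_cost
    return {
        "total_monthly_value": round(total_value, 2),
        "net_roi": round(net, 2),
        "roi_pct": round(100 * net / system_cost, 1) if system_cost else None,
    }

# Example (all numbers are placeholders):
# monthly_roi(pieces_published=30, hours_saved_per_piece=2, hourly_rate=75,
#             previous_rework_rate=0.40, current_rework_rate=0.08,
#             risk_mitigation_value=0, system_cost=90)
```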

Action item: Set up your governance dashboard before going live. Establish baseline metrics in week 1. Review weekly for the first month, then shift to bi-weekly.

What Breaks and How Do You Fix It? (Marketing-Specific Troubleshooting)

These are the actual issues marketing teams encounter, with exact solutions.

Problem: Validation Scores Inconsistent Day to Day

Symptom: Same content type scores 85 on Monday, 72 on Wednesday with similar quality.

Cause: Skill file being interpreted differently based on conversation context.

Fix: Add explicit anchoring examples to your validation skill:

## Calibration Examples

Score 90 example (PASS):
"[paste example of 90-score content]"

Score 75 example (WARN):
"[paste example of 75-score content]"

Score 60 example (FAIL):
"[paste example of 60-score content]"

When scoring new content, compare against these calibration examples before assigning score.

Problem: Approval Workflow Stuck in n8n

Symptom: Content validated but never routes to approvers. n8n execution shows success but Slack messages don’t send.

Cause: Usually Slack token permissions or channel ID mismatch.

Fix checklist:

  1. Verify Slack bot has chat:write permission
  2. Verify bot is added to the approval channel
  3. Check channel ID is correct (not channel name)
  4. Test the Slack node independently with a hardcoded message (see the sketch after this list)
  5. Check n8n execution logs for silent failures
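
For step 4, a minimal sketch that posts a hardcoded message straight to Slack's chat.postMessage endpoint so you can isolate token and channel problems outside n8n. The token and channel ID are placeholders:

```python
import requests  # pip install requests

SLACK_TOKEN = "xoxb-your-bot-token"   # must have the chat:write scope
CHANNEL_ID = "C0123456789"            # channel ID, not the channel name

response = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
    json={"channel": CHANNEL_ID, "text": "Governance workflow test message"},
    timeout=10,
)
body = response.json()
# Slack returns ok=false with an error code such as "not_in_channel",
# "channel_not_found", or "missing_scope" when the checklist items above fail.
print(body.get("ok"), body.get("error"))
```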

Problem: Brand Voice Skill Not Applying Consistently

Symptom: Some outputs follow brand voice, others ignore it completely.

Cause: Trigger words not matching, or skill not loading due to path issues.

Fix:

  1. Add more trigger variations to skill frontmatter:
triggers:
  - write
  - create
  - generate
  - draft
  - content
  - copy
  - landing page
  - email
  - social
  - ad
  2. Explicitly reference skill in prompt:
Apply the brand-voice skill from /skills/generation/brand-voice/SKILL.md
  3. Verify skill file path matches project structure exactly

Problem: Keywords Failing Viability Check When They Shouldn’t

Symptom: Valid keywords rejected with low viability scores.

Cause: Thresholds set for competitive markets don’t work for niche terms.

Fix: Create keyword tier system in your Semrush skill:

## Keyword Tiers

Tier 1 (Head terms):
- Volume threshold: >1000
- Difficulty threshold: <60
- Viability pass: 70+

Tier 2 (Body terms):
- Volume threshold: >200
- Difficulty threshold: <70
- Viability pass: 60+

Tier 3 (Long-tail):
- Volume threshold: >50
- Difficulty threshold: <80
- Viability pass: 50+

Determine tier based on keyword length and specificity before applying thresholds.
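
A minimal Python sketch of the tier logic, using word count as a rough proxy for specificity. The heuristic and thresholds mirror the tiers above; tune both for your market:

```python
KEYWORD_TIERS = {
    1: {"min_volume": 1000, "max_difficulty": 60, "viability_pass": 70},
    2: {"min_volume": 200,  "max_difficulty": 70, "viability_pass": 60},
    3: {"min_volume": 50,   "max_difficulty": 80, "viability_pass": 50},
}

def keyword_tier(keyword: str) -> int:
    """Rough tier assignment: head terms are short, long-tail terms are specific."""
    words = len(keyword.split())
    if words <= 1:
        return 1  # head term
    if words <= 3:
        return 2  # body term
    return 3      # long-tail

def passes_viability(keyword: str, volume: int, difficulty: int, viability: int) -> bool:
    """Apply the tier-appropriate thresholds instead of a single global cutoff."""
    tier = KEYWORD_TIERS[keyword_tier(keyword)]
    return (volume >= tier["min_volume"]
            and difficulty <= tier["max_difficulty"]
            and viability >= tier["viability_pass"])
```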

Problem: Generated Content Too Generic Despite Constraints

Symptom: Content meets validation scores but lacks specificity. All landing pages sound the same.

Cause: Constraints too broad, not enough unique inputs per request.

Fix: Add required specificity fields to request template:

## Required Specificity Inputs

Customer quote to include (verbatim): {{CUSTOMER_QUOTE}}
Specific metric to feature: {{SPECIFIC_METRIC}}
Competitor differentiator: {{VS_COMPETITOR}}
Unique mechanism/process name: {{MECHANISM_NAME}}
Industry-specific term to use: {{INDUSTRY_TERM}}

GENERATION RULE: Content MUST include all 5 specificity inputs. 
Validation FAILS if any are missing from output.
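
A minimal sketch of that validation rule, assuming the five template fields above are passed in as a dict (the helper name is illustrative, and the check is an exact substring match):

```python
def check_specificity(output_text: str, inputs: dict) -> dict:
    """FAIL if any of the five required specificity inputs is absent from the output."""
    required = ["CUSTOMER_QUOTE", "SPECIFIC_METRIC", "VS_COMPETITOR",
                "MECHANISM_NAME", "INDUSTRY_TERM"]
    missing = [name for name in required
               if not inputs.get(name) or inputs[name] not in output_text]
    return {"status": "FAIL" if missing else "PASS", "missing_inputs": missing}

# Example:
# check_specificity(draft, {"CUSTOMER_QUOTE": "We cut onboarding time in half", ...})
```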

Problem: Audit Logs Growing Too Large

Symptom: Log files exceed 100MB, slowing down queries and storage.

Cause: Logging too much detail, no rotation policy.

Fix:

  1. Implement log rotation:
# Add to weekly maintenance workflow
mv /logs/generation.log /logs/archive/generation-$(date +%Y%m%d).log
touch /logs/generation.log
  2. Log summary events, not full content:
# Instead of logging full content
[GENERATION] content_id=X | full_content="..." (bad)

# Log reference only
[GENERATION] content_id=X | word_count=1247 | sections=7 (good)
  3. Keep detailed records in metadata.json files, not logs

Problem: Team Bypassing Governance System

Symptom: Content published without validation records. Team using ChatGPT directly for “quick” requests.

Cause: Governance adds friction without visible value. Process feels slower than shortcuts.

Fix:

  1. Make non-governed publishing impossible (remove direct CMS access)
  2. Create “express lane” for low-risk content with auto-approval
  3. Show time-saved metrics in weekly reports
  4. Gamify compliance (dashboard showing team scores)
  5. Have leadership publish governance metrics in all-hands

Express lane configuration:

{
  "express_lane": {
    "eligible_content_types": ["social_post", "internal_email"],
    "eligible_requesters": ["senior_content_manager", "marketing_director"],
    "auto_approve_threshold": 75,
    "max_monthly_express": 20
  }
}
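
A minimal Python sketch of the eligibility check this config implies. The request field names and the monthly usage counter are assumptions about how you track requests:

```python
def express_lane_eligible(request: dict, config: dict, used_this_month: int) -> bool:
    """True when a request qualifies for auto-approval under the express lane config."""
    lane = config["express_lane"]
    return (request["content_type"] in lane["eligible_content_types"]
            and request["requester_role"] in lane["eligible_requesters"]
            and request["validation_score"] >= lane["auto_approve_threshold"]
            and used_this_month < lane["max_monthly_express"])

# Example:
# express_lane_eligible(
#     {"content_type": "social_post", "requester_role": "marketing_director",
#      "validation_score": 81},
#     config, used_this_month=12)
```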

Action item: Review this troubleshooting list with your team. Identify which problems you’re most likely to encounter based on your current workflow. Build preventive measures into your initial setup.

Final Takeaways

Governance transforms AI content from unpredictable output into a reliable production system. Validation skills score every piece against brand, SEO, and compliance requirements before delivery. The investment in governance infrastructure pays back through reduced rework and faster publishing.

Deterministic templates eliminate variation by constraining AI decisions to predefined options. Requesters select from explicit choices. Reviewers verify selections. Audit trails capture the decision path. Every output traces back to specific constraints.

Approval workflows route content based on validation scores and risk levels. High-scoring content auto-approves. Lower scores require human review. Escalation paths prevent bottlenecks. Slack integrations enable one-click decisions.

Data integration grounds decisions in performance reality. GA4 baselines set realistic targets. Semrush validation prevents wasted effort on unviable keywords. Historical patterns inform generation constraints.

Audit trails prove compliance for legal review. Every content piece tracks its full lifecycle from request through publication. Metadata schemas capture constraints, scores, approvals, and publish events. Logs enable forensic review when issues arise.
