
Building a GitHub Projects Showcase with TDD

Chris · Amiable Dev · 5 min read

Building a portfolio page that showcases GitHub projects sounds straightforward until you consider the edge cases: What happens when the GitHub API is down? What if rate limits are exceeded? How do you test React components that depend on external data?

This post walks through how test-driven development (TDD) helped us build a robust /projects page with 64 automated tests, ensuring 100% deployment reliability and preventing CI failures when upstream APIs go down.

The Problem

We wanted to create a curated portfolio page that:

  • Displays selected GitHub projects (not everything)
  • Shows live statistics like stars and recent activity
  • Provides context about each project's impact and technologies
  • Works reliably even when GitHub's API is unavailable

The key constraint: the build must never fail. If GitHub is down during deployment, visitors should still see the site with cached data.

The Cache Strategy

Before diving into tests, here's how we handle the "cold start" problem in CI/CD:

We commit src/data/projects.cache.json to the repository. When the build runs:

  1. First attempt: Fetch fresh data from GitHub API
  2. On success: Write to both projects.generated.json and update the cache
  3. On failure: Fall back to the committed cache file

This means even a fresh container in GitHub Actions has fallback data available.

Choosing the Test Stack

For a Docusaurus React project, we chose:

  • Vitest - Fast, Vite-native test runner with Jest compatibility
  • React Testing Library - Testing components from a user's perspective
  • MSW (Mock Service Worker) - Intercepting HTTP requests via msw/node (patching Node's http/https modules)
  • Zod - Runtime schema validation

MSW intercepts outbound HTTP requests in the Node environment, allowing us to exercise the real @octokit/rest client against mock responses without changing production code.
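A typical wiring for this looks like the setup file below. This is a sketch: the file path, the handler's response body, and the `beforeAll`/`afterEach`/`afterAll` globals (provided by Vitest) are assumptions, though the `msw` imports match its v2 API.

```javascript
// __tests__/setup.js — hypothetical msw/node test harness
import { setupServer } from 'msw/node';
import { http, HttpResponse } from 'msw';

// Default handler: a healthy GitHub API response (fields are illustrative)
export const server = setupServer(
  http.get('https://api.github.com/repos/:owner/:repo', ({ params }) =>
    HttpResponse.json({
      stargazers_count: 42,
      language: 'TypeScript',
      pushed_at: '2024-01-01T00:00:00Z',
      full_name: `${params.owner}/${params.repo}`,
    })
  )
);

// Fail loudly on any request we didn't mock, so tests can't silently hit the network
beforeAll(() => server.listen({ onUnhandledRequest: 'error' }));
afterEach(() => server.resetHandlers()); // Undo per-test overrides like the 500 handlers below
afterAll(() => server.close());
```

Individual tests then call `server.use(...)` to override these defaults, which is how the failure scenarios later in this post are simulated.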

Phase 1: Data Schemas First

Before writing any components, we started with data validation. Zod schemas ensure that configuration files and API responses match expected shapes:

```ts
// src/data/schemas.ts
import { z } from 'zod';

export const ProjectConfigSchema = z.object({
  // Simplified validation for standard owner/repo formats
  repo: z.string().regex(/^[\w-]+\/[\w.-]+$/),
  category: z.enum(['active', 'maintained', 'historical']),
  featured: z.boolean().optional().default(false),
  skills: z.array(z.string()),
  title: z.string().min(1),
  description: z.string().min(1),
  // ... more fields
});
```

Our schema tests covered validation of:

  • Valid project configurations
  • Invalid repo formats
  • Missing required fields
  • Enriched data with GitHub statistics

Result: 20 schema tests passing before touching any UI code.

Phase 2: Build Script with Graceful Fallback

The prebuild script is the most critical piece—it fetches GitHub data and must handle failures gracefully. Here's the core fallback logic:

```js
// scripts/fetch-projects.js
async function fetchGitHubData(repo) {
  const [owner, repoName] = repo.split('/');
  try {
    const { data } = await octokit.repos.get({ owner, repo: repoName });
    return {
      stars: data.stargazers_count,
      language: data.language || 'Unknown',
      lastUpdated: data.pushed_at,
      fetchedAt: new Date().toISOString(),
    };
  } catch (error) {
    console.warn(`Warning: Failed to fetch ${repo}, using cache`);
    return null; // Triggers fallback to cached data
  }
}

// In enrichAndWriteProjects:
if (apiData) {
  enrichedProjects.push({ ...project, ...apiData });
} else {
  const cached = cachedData.find((p) => p.repo === project.repo);
  enrichedProjects.push({
    ...project,
    ...cached,
    dataStale: true, // Signal to UI
  });
}
```

TDD guided us to test these scenarios:

```js
it('should fetch repository data from GitHub API', async () => {
  const result = await fetchGitHubData('amiable-dev/stentorosaur');
  expect(result?.stars).toBe(42);
});

it('should return null on GitHub API failure', async () => {
  server.use(
    http.get('https://api.github.com/repos/:owner/:repo', () => {
      return new HttpResponse(null, { status: 500 });
    })
  );
  const result = await fetchGitHubData('amiable-dev/stentorosaur');
  expect(result).toBeNull();
});

it('should never throw - build should always succeed', async () => {
  server.use(
    http.get('https://api.github.com/*', () => {
      return new HttpResponse(null, { status: 500 });
    })
  );
  await expect(enrichAndWriteProjects(config)).resolves.not.toThrow();
});
```

By writing these tests first, we discovered edge cases like rate limiting (403 with X-RateLimit-Remaining: 0) before they could break production deployments.
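The rate-limit case is easy to detect because GitHub pairs the 403 status with that header. A check along these lines lets the fetch treat rate limiting like any other failure (the helper name `isRateLimited` is ours, not part of Octokit; Octokit does expose `status` and `response.headers` on its thrown request errors):

```javascript
// Hypothetical helper: detect GitHub's rate-limit signal.
// Node and Octokit normalize response header names to lowercase.
function isRateLimited(status, headers) {
  return status === 403 && headers['x-ratelimit-remaining'] === '0';
}

console.log(isRateLimited(403, { 'x-ratelimit-remaining': '0' }));  // true
console.log(isRateLimited(403, { 'x-ratelimit-remaining': '42' })); // false — a plain 403
console.log(isRateLimited(500, {}));                                // false
```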

Phase 3: React Components

Testing React components focused on user-visible behavior, not implementation details:

```js
it('should render project title', () => {
  render(<ProjectCard project={mockProject} />);
  expect(screen.getByText('Stentorosaur')).toBeInTheDocument();
});

it('should display star count with icon', () => {
  render(<ProjectCard project={mockProject} />);
  expect(screen.getByText('42')).toBeInTheDocument();
  expect(screen.getByLabelText(/stars/i)).toBeInTheDocument();
});

it('should render stale data indicator when dataStale is true', () => {
  render(<ProjectCard project={mockStaleProject} />);
  expect(screen.getByTitle(/data may be outdated/i)).toBeInTheDocument();
});
```

The stale data indicator shows users when they're viewing cached data:

```tsx
// In ProjectCard.tsx
{dataStale && (
  <span className={styles.stale} title="Data may be outdated">
  </span>
)}
```

Lessons Learned

1. MSW in Node Environments

In Node.js tests, MSW intercepts via msw/node by patching http/https modules (not a Service Worker). This means your actual Octokit client code runs unchanged.

2. CSS Modules Need Different Assertions

When testing with CSS Modules, class names are hashed. Instead of:

```js
expect(card).toHaveClass('featured'); // Fails with CSS Modules
```

Use regex matching:

```js
expect(card.className).toMatch(/featured/); // Works
```

3. Aliases for Docusaurus Themes

Testing Docusaurus pages requires mocking @theme/Layout. Configure aliases in vitest.config.ts:

```ts
// vitest.config.ts
resolve: {
  alias: {
    '@theme': path.resolve(__dirname, './__tests__/mocks/@theme'),
  },
},
```

4. Integration Tests Catch What Unit Tests Miss

Our integration tests validate the full build pipeline:

  • Generated JSON file exists and is valid
  • Configuration file has required fields
  • Schema validation passes on real data

These catch issues like file path problems that unit tests miss.

Final Stats

| Metric | Value |
| --- | --- |
| Total tests | 64 |
| Schema tests | 20 |
| Component tests | 30 |
| Integration tests | 6 |
| Build script tests | 8 |
| Total run time (incl. startup) | ~1.3s |

The investment in testing paid off immediately: we caught issues with duplicate text elements, CSS module class name assertions, and missing mock modules—all before deploying.

Try It Yourself

Check out ADR-002 for the full architecture decision, or browse the /projects page to see the result.

The test infrastructure is reusable for future features:

  • MSW handlers can mock any API
  • Component test patterns apply to new UI
  • Schema validation ensures data integrity

TDD isn't just about preventing bugs—it's about building confidence that your code works as expected, even when external dependencies fail.


This post was drafted with AI assistance for research and structure. The implementation, testing, and technical decisions were made by the human author.