
Building Beers Radio: A Craft Beer Podcast Aggregator

Justin Hunter

Using Pinata's Hot Swaps

I recently wrote on my personal blog about the experience of buying random domains and how sometimes you just don’t have a plan for them. One of those random domains was just begging for something podcast-related. The domain, beersradio.com, had been sitting in my domain registrar account for more than a year, but after Pinata’s Hot Swaps dropped, I knew exactly what I was going to build for it.

There are a plethora of craft beer podcasts out there, so I thought it would be cool if this site was an aggregator of these podcasts and allowed people to see the most recent episodes, browse by podcast title, and more. In most cases, I would have reached for a database to store the podcast records, but I wanted to keep this lightweight. This was just a random domain I bought, after all. So, I turned to Hot Swaps.

Hot Swaps allows developers to store files of any kind and then update the content without changing the URL. In my case, the file would be a JSON object representing all the podcast episodes pulled from RSS, and I would use Hot Swaps to keep that JSON updated on a daily basis without having to constantly remap to the newest storage URL. I could have used a public storage option and simply overwritten the content, but with that approach I would lose the history, and auditability is important to me.
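To make that concrete: from the consumer’s point of view, the URL never changes; only the content behind it does. Here’s a minimal sketch of the idea (the gateway domain and CID are placeholders, not real values):

// Hypothetical gateway URL built from a placeholder gateway and the original CID
const url = "https://example-gateway.mypinata.cloud/ipfs/YOUR_ORIGINAL_CID";

// Before a swap, this returns the old JSON; after a swap, the exact same
// URL returns the new JSON. No client-side remapping required.
const episodes = await fetch(url).then((res) => res.json());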

Let’s dive into how I built Beers Radio’s podcast aggregator API using Hot Swaps.

What I Used

Remember, this is a very simple app, so the tech stack is also very simple. It consists of:

  • Next.js
  • Pinata

This app makes use of Pinata’s Hot Swaps, which is a paid feature that you have to enable. For a guide on enabling the Hot Swaps plugin, check out these docs.

Next.js is a great solution for building any app, small or large, but I chose it specifically for the combined developer experience of writing frontend and backend code. We’re only going to focus on the backend code in this article, and Next.js serverless functions are a breeze to work with.

Pinata made uploading and retrieving my JSON data so painless. It was actually the fastest part of the entire build as you’ll probably see.

Let’s get into the code.

The Cron Job

In order to pull in the latest RSS feeds of podcasts, we need a cron job that runs daily. If you’re hosting the app on Vercel, you can define the cron job endpoint and automate its schedule by adding a vercel.json file to the root of your project. It should look like this:

{
  "crons": [
    {
      "path": "/api/cron",
      "schedule": "0 0 * * *"
    }
  ]
}

This tells Vercel to hit the /api/cron route once per day at midnight UTC. Let’s create that route. In the app folder, add an api folder. Inside that, create a cron folder and add a route.ts file. Inside that file, add the following:

import { NextRequest, NextResponse } from 'next/server';
import { extract } from '@extractus/feed-extractor'
import { FeedSchema } from '@/types';
import { PinataSDK } from "pinata-web3";

const pinata = new PinataSDK({
  pinataJwt: process.env.PINATA_JWT,
  pinataGateway: process.env.NEXT_PUBLIC_GATEWAY_URL,
});

const feeds = [
  // Fill this array with all the podcast RSS feeds you want to include
  { podcast: "Steal This Beer", url: "https://stealthisbeer.squarespace.com/episodes?format=rss" },
]

export async function GET(request: NextRequest) {
    try {
      const results: FeedSchema[] = []
      for(const feed of feeds) {
        let result: any = await extract(feed.url)
        // Keep only the ten most recent episodes from each feed
        result.entries = result.entries.slice(0, 10)
        results.push(result)
      }      
      //  Send it to Pinata
      const { IpfsHash } = await pinata.upload.json(results)      
      //  Swap it like it's hot
      await pinata.gateways.swapCid({
        cid: process.env.NEXT_PUBLIC_ORIGINAL_CID!,
        swapCid: IpfsHash
      })
      return NextResponse.json(results);
    } catch (error) {
      console.log(error)
      return NextResponse.json({ error: "Server error" }, { status: 500 });
    }
}

You’ll notice that we’re using a library called @extractus/feed-extractor along with Pinata’s SDK, so go ahead and install both by running the following in your terminal:

npm i @extractus/feed-extractor pinata-web3

To configure the Pinata SDK, we need to pass in an API key JWT and a Gateway URL. You can get both by logging into your Pinata account. Generate an API key and save the JWT. Then go to the Gateways page to get your Gateway URL. Back in your project, create a .env.local file and add the following:

PINATA_JWT=YOUR PINATA JWT
NEXT_PUBLIC_GATEWAY_URL=YOUR PINATA GATEWAY
NEXT_PUBLIC_ORIGINAL_CID=THE ORIGINAL FILE CID FOR YOUR PODCAST DATA (we'll get to this)

Now, let’s get back to our cron job API route and see what’s going on. We are looping through all the podcast RSS feed URLs and fetching the content from those feeds:

const results: FeedSchema[] = []
for(const feed of feeds) {
  let result: any = await extract(feed.url)
  // Keep only the ten most recent episodes from each feed
  result.entries = result.entries.slice(0, 10)
  results.push(result)
}      

We store the result of each podcast feed in the results array, which is typed as FeedSchema[]. I keep all my types in a types.ts file in the src directory of my project. It looks like this:

export type Entry = {
  id: string,
  title: string,
  link: string,
  description: string,
  published: string, 
  podcastTitle?: string,
  podcastLink?: string
}

export type FeedSchema = {
  title: string,
  link: string,
  description: string,
  generator: string,
  language: string,
  published: string,
  entries: Entry[]
}
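The optional podcastTitle and podcastLink fields are there so each episode can be labeled with its parent show. As a hedged sketch (this helper isn’t part of the cron route above, just one way you might use those fields), here’s how the results could be flattened into a single newest-first episode list:

import { Entry, FeedSchema } from "@/types";

// Tag every entry with its parent podcast, then sort newest-first
function flattenFeeds(results: FeedSchema[]): Entry[] {
  return results
    .flatMap((feed) =>
      feed.entries.map((entry) => ({
        ...entry,
        podcastTitle: feed.title,
        podcastLink: feed.link,
      }))
    )
    .sort((a, b) => +new Date(b.published) - +new Date(a.published));
}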

Once we have the results array, we need to upload the JSON to Pinata. This is simple with the Pinata SDK:

const { IpfsHash } = await pinata.upload.json(results) 

From the response, we destructure the content identifier (IpfsHash). We need to pass that into the Hot Swaps plugin. But to use Hot Swaps, we have to have an original file. For the sake of our app, the original file can be literally anything, but it’s easiest if it’s an empty array. In your project’s root, create a file called og.json containing an empty array ([]). Upload that file to Pinata using the web interface. Once you’ve done that, copy the resulting CID and assign it to the NEXT_PUBLIC_ORIGINAL_CID variable in your .env.local file.
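If you’d rather not use the web interface, a one-time script with the same SDK works too. This is just a sketch; it prints the CID so you can copy it into .env.local:

import { PinataSDK } from "pinata-web3";

const pinata = new PinataSDK({
  pinataJwt: process.env.PINATA_JWT,
  pinataGateway: process.env.NEXT_PUBLIC_GATEWAY_URL,
});

// Upload an empty array as the original file and print its CID
const { IpfsHash } = await pinata.upload.json([]);
console.log(`NEXT_PUBLIC_ORIGINAL_CID=${IpfsHash}`);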

Now, back in the cron job API route, you can swap your new results with the original CID like this:

await pinata.gateways.swapCid({
  cid: process.env.NEXT_PUBLIC_ORIGINAL_CID!,
  swapCid: IpfsHash
})

You only ever have to know the original CID and can always update the file that gets returned. This is especially important for the podcast aggregator app because the frontend will use that CID to load all the podcast data. That’s it!

Every time the cron job runs, it gets the latest episodes of each podcast from RSS, then it updates the JSON stored on Pinata. Simple.
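One caveat worth mentioning: the /api/cron route is publicly reachable, so anyone who finds it could trigger extra uploads. If you set a CRON_SECRET environment variable in your Vercel project, Vercel includes it as a bearer token on cron invocations, so a guard like this sketch at the top of the handler is a reasonable safeguard:

// Sketch: reject callers that don't present Vercel's cron secret.
// Assumes a CRON_SECRET env var is configured in the Vercel project.
const authHeader = request.headers.get("authorization");
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
  return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
}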

While we’re not building the UI in this tutorial, if you’re curious how to load the data for your podcast results, it’s a one-liner:

const data: any = await pinata.gateways.get(process.env.NEXT_PUBLIC_ORIGINAL_CID!)

Because this uses Hot Swaps, you can always use that original CID and never have to worry about mapping the new file CID in a database. This drastically simplifies the app, which was perfect for my use case.
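To round things out, here’s a hedged sketch of a server component that renders the data. It assumes the pinata instance and types from earlier, and it reads the content from the response’s data property (adjust if your SDK version returns the content differently):

// app/page.tsx (sketch): list the ten most recent episodes across all feeds
export default async function Home() {
  const res: any = await pinata.gateways.get(process.env.NEXT_PUBLIC_ORIGINAL_CID!);
  const feeds = res.data as FeedSchema[];
  const episodes = feeds
    .flatMap((f) => f.entries.map((e) => ({ ...e, podcastTitle: f.title })))
    .sort((a, b) => +new Date(b.published) - +new Date(a.published))
    .slice(0, 10);
  return (
    <ul>
      {episodes.map((e) => (
        <li key={e.id}>
          {e.podcastTitle}: <a href={e.link}>{e.title}</a>
        </li>
      ))}
    </ul>
  );
}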

Conclusion

As developers, we often reach for a database immediately when building an app, but you might not need to. With Hot Swaps, you can serve dynamic data in your app without a database. And it’s as simple as storing files.

Happy Building!
