How To Build a Persistent CRUD App Using SQLite and Deno
The beauty of SQLite is in its simplicity. It’s a local SQL database with many of the same benefits you would expect from PostgreSQL or MySQL, but it’s lightweight, portable, and doesn’t require a hosting provider. However, one of SQLite’s drawbacks is data persistence: since it’s a local database, if your server fails, you lose your database. Fortunately, there are numerous backup and syncing solutions. Today, we’re going to explore the simplest of those solutions by using Pinata.
First, let’s get a quick lesson on SQLite’s strengths and weaknesses.
What is SQLite?
SQLite is a lightweight, serverless, self-contained database engine that is widely used for embedded systems, mobile apps, and small to medium-sized applications. Unlike more complex database systems like MySQL or PostgreSQL, SQLite does not require a separate server process to run, as it stores the entire database in a single file on disk.
Key features of SQLite include:
- Serverless: No need to manage a separate database server; the database is integrated directly into the application.
- Zero configuration: Requires minimal setup and can run without complex configuration or management tasks.
- Single database file: All data, including tables, schemas, and indexes, are stored in a single file, making it easy to transport or back up.
- Lightweight: SQLite is small in size, which makes it ideal for applications with limited resources, such as mobile or embedded devices.
- ACID-compliant: SQLite supports transactions with Atomicity, Consistency, Isolation, and Durability, ensuring reliable data handling.
- Cross-platform: Works on many operating systems including macOS, Windows, Linux, and mobile platforms like Android and iOS.
SQLite is commonly used in web browsers, operating system components, mobile apps, and small-scale applications due to its simplicity and portability. However, it may not be suitable for large-scale applications requiring high concurrent access or complex query optimizations.
Downsides of SQLite
While SQLite has many advantages, it also comes with some limitations that may make it unsuitable for certain use cases. Here are the main downsides:
- Not suitable for large-scale applications: SQLite is designed for smaller databases (a few gigabytes at most), making it unsuitable for enterprise-level or high-volume databases that require complex queries, distributed storage, or clustering.
- Database file size: Although SQLite supports up to terabytes of data, performance can degrade when the database file grows too large, especially with frequent writes.
- Limited features compared to other DBMS: SQLite lacks some advanced features found in traditional database management systems like stored procedures, user-defined functions, or advanced indexing capabilities (e.g., partial indexes or expression indexes).
- No user management: SQLite does not have built-in user authentication or role-based access control, meaning it relies on the underlying file system permissions for security.
- Less robust crash recovery: While SQLite is ACID-compliant and can handle transactions reliably, it may not be as robust as other databases in crash recovery. Under heavy concurrent writes or certain crash scenarios, data corruption risks may be higher compared to other DBMS.
- Backup limitations: Backing up a live, in-use SQLite database can be challenging. Although SQLite supports database file backups, it's not as seamless or robust as snapshot-based backup systems in server-based databases.
Because of these limitations, SQLite has very specific uses. However, the beauty is that almost every app starts out as a great use case for SQLite. You may find that you need to migrate in the future or your app grows to a size you’re comfortable with and SQLite continues to perform well. Both are good scenarios.
We’re going to specifically focus on the last downside point regarding backups in today’s tutorial. We’ll build a CRUD (create, read, update, delete) API using SQLite and simple file storage from Pinata. The file storage is beneficial for providing redundancy for the SQLite database but also for adding additional functionality to your API long-term (after all, almost every app needs file storage).
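For context, SQLite itself ships one backup primitive worth knowing about: since version 3.27, the VACUUM INTO statement writes a consistent copy of a live database to a new file. Here’s a minimal sketch using the Deno SQLite driver we’ll set up later; whether it works depends on your driver build, so treat it as an assumption rather than a guarantee:
// Copies the live database into snapshot.db (requires SQLite 3.27+;
// support depends on how your driver was built)
db.execute("VACUUM INTO 'snapshot.db'");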
What you’ll need
To build our server, we’re going to make use of a few tools. I’ll outline each of them below.
- Deno, a JavaScript runtime and alternative to Node.js
- npm (installed when you install Node.js)
- A text editor
- A free Pinata account
Getting Started
You’ll need to install Deno if you don’t already have it on your machine. You can follow the guide here. You’ll know it’s installed when you can run the following command in your terminal and get a response:
deno --version
Now, let’s initialize our project. Run the following command from the folder where you keep your development projects:
deno init sqlite-crud
This will initialize the project. You can change into the project directory and run the sample project with the following:
cd sqlite-crud && deno run dev
The sample app will execute and print results in the terminal. This command also includes a watcher with hot-reloading, so you can make changes to your project and it will handle those changes without stopping the process.
We’re ready to get into the good stuff and start building the API to handle our CRUD operations.
Building the API
Let’s open the project in the text editor of your choosing. Since Deno has built-in TypeScript support, you’ll notice each file has a .ts extension. Another cool thing about Deno that you’ll see next is that dependencies are automatically installed.
Let’s see that in action by creating a web server, which is what we’ll use to interact with our CRUD API. In the main.ts file, replace everything with:
import { DB } from "https://deno.land/x/sqlite/mod.ts";

type Book = {
  title?: string;
  author?: string;
};

const db = new DB("books.db");

db.execute(`
  CREATE TABLE IF NOT EXISTS books (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT,
    author TEXT,
    UNIQUE(title, author)
  )
`);

const insertBookQuery = db.prepareQuery(
  "INSERT INTO books (title, author) VALUES (:title, :author)",
);
Deno.serve(async (req) => {
  const url = new URL(req.url);

  if (url.pathname === "/books") {
    if (req.method === "POST") {
      const body: Book = await req.json();
      try {
        insertBookQuery.execute({
          title: body.title,
          author: body.author,
        });
        return new Response("Success", {
          status: 200,
        });
      } catch (error) {
        console.log(error);
        return new Response("Server error", {
          status: 500,
        });
      }
    }
  }

  return new Response("Hello, world", {
    status: 200,
    headers: {
      "content-type": "text/plain; charset=utf-8",
    },
  });
});
You’ll see the SQLite dependency gets installed automatically and just works. This version of SQLite is optimized for Deno. You can read about it here. Let’s look at the rest of the file.
We’re defining our database. Since SQLite is just a local file, the books.db path will point to a local copy of the SQLite database. If this file is ever not found, that’s when we will want to fetch from a remote backup source (i.e., Pinata). We’ll save that for the end of the tutorial. Our database has a books table with just three columns: id, title, and author. We are also ensuring the same book cannot be added more than once with the UNIQUE constraint tied to title and author.
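To see the constraint in action, here’s a quick illustration (not part of the app code) using the module’s query method with positional parameters; the second insert throws a constraint error:
db.query("INSERT INTO books (title, author) VALUES (?, ?)", ["Dune", "Frank Herbert"]);
try {
  // Violates UNIQUE(title, author), so this throws
  db.query("INSERT INTO books (title, author) VALUES (?, ?)", ["Dune", "Frank Herbert"]);
} catch (error) {
  console.log("Duplicate rejected:", error);
}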
Next, we prepare the insert query. Note, this doesn’t execute it. You’ll need to call .execute as you’ll see in our server code.
Deno has a built-in web server, so we’re making use of that. We have simple routes, so we don’t need a routing library. Instead, we’ll make use of the req.url and its pathname to determine our routes. For POST requests to the /books route, we want to insert a new book.
Let’s add a GET route so we can test our server and see if the API functions. Below your insert query, add this:
const getBooksQuery = db.prepareQuery(
  "SELECT * FROM books",
);
Then, in the server code, add an if statement for the GET method on the books route:
if(req.method === "GET") {
const result = getBooksQuery.all();
result.forEach(r => {
return {
id: r[0],
title: r[1],
author: r[2]
}
})
return Response.json(result.map(item => ({
id: item[0],
title: item[1],
author: item[2]
})))
}
There are a lot of ways to get JSON output from SQLite, but for the sake of time, we’re manually mapping the array of arrays we get back, as you can see above.
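If you’d rather skip the manual mapping, the x/sqlite module also exposes an allEntries() method on prepared queries that returns rows as objects keyed by column name. A sketch of what the GET branch could look like with it, using the same getBooksQuery as above:
if (req.method === "GET") {
  // Each row comes back as { id, title, author }, no index juggling needed
  return Response.json(getBooksQuery.allEntries());
}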
Ok, now we’re ready to test. If your server isn’t already running, you can start it with:
deno run dev
Then make a POST request from your terminal using cURL like this:
curl --location 'http://localhost:8000/books' \
--header 'Content-Type: application/json' \
--data '{
    "title": "Fahrenheit 451",
    "author": "Ray Bradbury"
}'
You should see a response of “Success”. You can now make another cURL request to get the books in the DB like this:
curl --location --request GET 'http://localhost:8000/books'
This should give you an array of book objects (in this case with just the one book).
Ok, we have the CREATE and READ parts of the CRUD API working. Let’s build out the UPDATE and DELETE parts. We’ll start with our UPDATE handler.
We need to handle a path parameter in the request URL to get our book ID because we’ll be making PUT requests to ${API_URL}/books/${BOOK_ID}, so we need to add an additional if statement in the server code. We already have an if statement checking if url.pathname === "/books", so we need to add this:
if(url.pathname.includes("/books/")) {
}
Before we write the handler, let’s write our database query functions. There are a bunch of ways to do this, but for clarity, we’re going to write three queries:
- updateBookTitleQuery
- updateBookAuthorQuery
- updateBookAuthorAndTitleQuery
The end user can then pass in a request body that includes both the title and author, or just one of those properties, and we’ll handle it. Below your existing database queries, add these three new queries:
const updateBookTitleQuery = db.prepareQuery(
  "UPDATE books SET title = :title WHERE id = :id",
);
const updateBookAuthorQuery = db.prepareQuery(
  "UPDATE books SET author = :author WHERE id = :id",
);
const updateBookAuthorAndTitleQuery = db.prepareQuery(
  "UPDATE books SET (author, title) = (:author, :title) WHERE id = :id",
);
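As an aside, you could collapse these three into one statement with SQLite’s COALESCE function, which keeps the existing column value whenever the corresponding parameter is null. A sketch of that alternative (this tutorial sticks with the three explicit queries):
const updateBookQuery = db.prepareQuery(
  "UPDATE books SET title = COALESCE(:title, title), author = COALESCE(:author, author) WHERE id = :id",
);

// Passing null for a field leaves that column unchanged
updateBookQuery.execute({ title: null, author: "Ray Bradbury", id: 1 });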
Now, back in our server handler, let’s add the following logic:
if(url.pathname.includes("/books/")) {
if(req.method === "PUT") {
try {
const pathParts = url.pathname.split("/");
const bookId = pathParts[pathParts.length - 1];
const id = parseInt(bookId, 10);
const body: Book = await req.json()
if(body.title && body.author) {
updateBookAuthorAndTitleQuery.execute({
author: body.author,
title: body.title,
id: id,
})
} else if(body.title) {
updateBookTitleQuery.execute({
title: body.title,
id: id,
})
} else if(body.author) {
updateBookAuthorQuery.execute({
author: body.author,
id: id,
})
}
return new Response("Success", {
status: 200
})
} catch (error) {
console.log(error);
return new Response("Server error", {
status: 500
})
}
}
}
We are parsing the path to get the book ID, then conditionally executing the correct query depending on whether the body includes both the title and author or just one of those properties. You can now test this endpoint by executing this cURL request that makes me the author of one of the greatest books in history:
curl -X PUT http://localhost:8000/books/1 \
-H "Content-Type: application/json" \
-d '{"author": "Justin Hunter"}'
You should see “Success” print in the terminal. Let’s run the GET cURL command to make sure our book has been updated:
curl --location --request GET 'http://localhost:8000/books'
The response should show our one book with my name listed as the author. Perfect! Don’t change a thing, I’ll take the credit for that book.
Let’s finish up the API routes with our DELETE handler. We will be executing this handler inside the same if statement since we need a book id. So add the following conditional statement:
if(req.method === "DELETE") {
}
Now, up top with your other queries, add this new delete query:
const deleteBookQuery = db.prepareQuery(
  "DELETE FROM books WHERE id = :id",
);
Back in your DELETE handler, update it to look like this:
if (req.method === "DELETE") {
try {
const pathParts = url.pathname.split("/");
const bookId = pathParts[pathParts.length - 1];
const id = parseInt(bookId, 10);
deleteBookQuery.execute({
id: id
})
return new Response("Success", {
status: 200
})
} catch (error) {
console.log(error);
return new Response("Server error", {
status: 500
})
}
}
We can now test this by running another cURL command like so:
curl --location --request DELETE 'http://localhost:8000/books/1'
You should see the word “Success” print out. And once again, you can use our GET cURL to test:
curl --location --request GET 'http://localhost:8000/books'
The result should now be an empty array, which means it worked. We now have all of the CRUD functionality in place. We just need to make this database robust and redundant. That’s where Pinata comes in.
Database Snapshots
Our database works great on our server. It’s fast because it’s stored on disk. But if our server crashes, we might lose everything. So, it’s important to have a backup and redundancy plan when using SQLite. There are services that will help you with this, but we’re going to make it easy and use object storage through Pinata.
Let’s get our API key and gateway URL. The gateway is the CDN through which you can load content, and it comes with all kinds of great extras that may not matter for this project but could be helpful in others.
To get your API key, log into Pinata and click the API Keys tab. Generate a new Admin key and save the JWT portion of the key you get back. We’re going to put that in an environment variables file, but first we want to make sure we don’t accidentally commit our secrets to a code repository. So, create a .gitignore file at the root of your project. In that file, add the following:
.env
Now, you can create the .env file in the root of your project. Let’s add two variables inside that file like this:
PINATA_JWT=Your JWT
PINATA_GATEWAY_URL=Your gateway url
Go ahead and paste your JWT in as an environment variable. Now, let’s get your Gateway URL. To do that, go back to the Pinata web app and click the Gateways tab. You’ll see the URL for your Gateway there. Copy it and add it to your .env file.
Now, let’s restart the Deno server to make sure the .env file is picked up. Then we can use those variables in our server code. At the top of your main.ts file, below the existing import statement, add the following:
import "jsr:@std/dotenv/load";
import { PinataSDK } from "npm:pinata";
const pinata = new PinataSDK({
  pinataJwt: Deno.env.get("PINATA_JWT"),
  pinataGateway: Deno.env.get("PINATA_GATEWAY_URL"),
});
We are importing the package that loads variables from a .env file, and we are importing the Pinata SDK. Deno makes it easy to import npm modules without having to run the npm install command. We’re grabbing the current version of Pinata’s SDK with this import, and you’ll see it installed automatically. We then configure the Pinata SDK for use in the app.
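As an optional guard, you can fail fast when the variables didn’t load; this is a small sketch, not something the tutorial requires:
if (!Deno.env.get("PINATA_JWT") || !Deno.env.get("PINATA_GATEWAY_URL")) {
  // Most likely cause: the .env file is missing or not being loaded
  throw new Error("Missing PINATA_JWT or PINATA_GATEWAY_URL");
}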
Ok, now we need to set up a service that will periodically back up the database. We’ll use a cron job for this and you can choose the frequency of snapshotting. For the tutorial, we’ll create a daily snapshot.
We’re going to use Deno.cron for this, which isn’t quite production-ready and will need to be run with the --unstable flag. To do this, open your deno.json file and update the dev task to look like this:
"dev": "deno run --unstable --watch main.ts"
Restart your server, and now we can use the cron service built into Deno. Add this to the bottom of your main.ts file, below the server code:
const snapShotDB = async () => {
};

Deno.cron("Snapshot DB", "0 0 * * *", () => {
  snapShotDB();
});
This will run the snapShotDB function every day at midnight. Adjust the cron frequency to your needs; while testing, a schedule like "* * * * *" (every minute) gives much faster feedback. We’ll build out the function, but first, we need to create a private Group in Pinata. We can do this with the following function:
const createGroup = async () => {
  const group = await pinata.groups.create({
    name: "booksDbSnapshots",
    isPublic: false,
  });
  return group.id;
};
Then, back in our snapShotDB function, add the following:
const snapShotDB = async () => {
  try {
    const groupId = await createGroup();
    const dbFile = await Deno.readFile("books.db");
    const file = new File([dbFile], "books.db", { type: "application/octet-stream" });
    await pinata.upload.file(file).group(groupId);
  } catch (error) {
    console.log(error);
  }
};
The groupId will always be the same as long as the group name is the same. We use that to then upload the database file to the group. This means not only will we have the most recent snapshot of the DB, but we’ll have historical snapshots as well.
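If you want that history to be easier to browse, one option is to embed a timestamp in each snapshot’s file name. This is a hypothetical variation, and the restore logic we’ll write below simply grabs the most recent file either way:
// Hypothetical variation: timestamped names like books-2024-01-01T00:00:00.000Z.db
const file = new File([dbFile], `books-${new Date().toISOString()}.db`, {
  type: "application/octet-stream",
});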
Now that we have snapshots in place, we just need to dynamically load from a snapshot if the database table is ever not available or can’t be loaded. Let’s make that happen.
Up towards the top of your file, you’ll see the following code:
const db = new DB("books.db");
db.execute(
CREATE TABLE IF NOT EXISTS books (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT,
author TEXT,
UNIQUE(title, author)
)
);
This works well, but we don’t actually want to immediately create the table if it doesn’t exist. Instead, we want to fetch the snapshot remotely and if we can’t fetch a snapshot, we want to create the table.
Let’s replace the above code with this:
const restoreDB = async () => {
  try {
    // Find the snapshot group by name, then grab the most recent file in it
    const data = await pinata.groups
      .list()
      .name("booksDbSnapshots");
    const group = data.groups[0];
    const groupFiles = await pinata.files.list().group(group.id);
    const mostRecentSnapshot = groupFiles.files[0];
    const response: any = await pinata.gateways.get(mostRecentSnapshot.cid);
    if (!response.data) {
      throw new Error(`Failed to fetch SQLite file`);
    }
    // Write the snapshot contents back to the local database file
    await Deno.writeFile("books.db", new Uint8Array(await response.data.arrayBuffer()));
  } catch (error) {
    console.log(error);
    console.log("Creating table because snapshot failed");
    db.execute(`
      CREATE TABLE books (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        title TEXT,
        author TEXT,
        UNIQUE(title, author)
      )
    `);
    console.log("'books' table created.");
  }
};

const db = new DB("books.db");

const tableExists = db.query(`
  SELECT name
  FROM sqlite_master
  WHERE type = 'table' AND name = 'books'
`);

// Take action based on whether the table exists
if (tableExists.length > 0) {
  console.log("The 'books' table already exists.");
  // Perform other actions, like altering the table or inserting data, etc.
} else {
  console.log("The 'books' table does not exist. Checking for snapshot...");
  await restoreDB();
}
Our restoreDB function gets the Group ID where our snapshots are stored. It then lists those files and grabs the most recent one. The function then loads the content of that file and writes it to the local disk. If the function fails, we execute the CREATE TABLE SQL command instead.
Below that, we have a conditional check to see if the table exists locally. If it does, we don’t do anything. If it doesn’t, we call the restoreDB function.
And just like that, you have full snapshot support for your SQLite database. All with a service you can use for more than just snapshotting databases. Want to store images as part of your app? Great! You already have Pinata up and running. This solution is powerful and flexible.
Conclusion
Building an app with SQLite is fast and performant. However, it can be prone to data loss if you don’t have a backup strategy. Fortunately, it’s easy to snapshot your entire database with Pinata in just a few lines of code.
Even better, you can restore your database with a simple command. Using Pinata to power your snapshots gives you an extensible solution that can be leveraged as your app needs grow and you need more and more file storage support. If you’re ready to start, sign up now.
Happy building!