Want to record voice memos, automatically transcribe them (a.k.a. speech to text), summarize them, and send the transcripts and summaries to Notion?
This tutorial will teach you exactly how to do that.
Here’s a 14-minute voice note I took recently, in which I brain-dump some thoughts on a video idea:
After I stopped recording, my transcript and summary showed up in Notion in just 90 seconds. The cost? $0.09. That’s 9 cents for a near-perfect transcription of 14 minutes of audio, plus a summary.
You can see the entire thing here.
I’m not just getting a summary and transcript, either.
I’ve also set the automation up so that ChatGPT creates some useful lists:
- Main points
- Action items
- Stories, examples, and citations
- Follow-up questions
- Potential arguments against my transcript
- Related topics
In short, I can now brain-dump an idea as a voice note on my phone, upload it to Google Drive or Dropbox, and quickly get a fleshed-out document with a transcript and a ton of other useful information.
In this tutorial, I’ll show you how to set up the same workflow for yourself.
Specifically, we’ll create a speech-to-text automation that:
- Lets you quickly record a voice note and upload it to Google Drive, Dropbox, or OneDrive
- Uses OpenAI’s Whisper model to convert the audio into a near-perfect transcription
- Summarizes the transcript and pulls out key points using ChatGPT
- Sends the transcript, summary, and points directly to your Notion workspace
This is one of the most powerful and seemingly magical workflows I’ve ever built. It feels like I have a superpower now.
The best part is that once you’ve uploaded your audio file, it’s completely hands-off.
Want to use this yourself? You’re in luck – I’ve made the workflow public and extremely easy to set up. In the next sections, I’ll show you how to get everything running in less than 10 minutes.
And if you never want to miss when I post new Notion tutorials like this one, you should join my free Notion Tips newsletter:
Tutorial Overview
Here’s a quick look at how this automation will work.
When you take a voice recording, you’ll upload it to a cloud storage app like Dropbox, Google Drive, or Microsoft OneDrive (this tutorial will show you how to use all three).
Once your audio file gets uploaded, our automation will trigger. Your recording will be transcribed by Whisper and summarized by the ChatGPT API.
Finally, the automation will package up the transcript and summary, and then it’ll send them to a new page in your Notion workspace using the Notion API.
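To give you a concrete idea of the Notion side of this, here’s roughly the request body that gets sent to the Notion API’s pages.create endpoint. This is a simplified sketch, not the workflow’s actual code – the database ID is a placeholder, "Name" is just the default title property, and the real workflow builds a much richer page body:

```javascript
// Sketch of a pages.create request body for the Notion API (simplified).
// The database ID and property names here are placeholders.
function buildNotionPage(databaseId, title, summary, transcript) {
  return {
    parent: { database_id: databaseId },
    properties: {
      // "Name" is the default title property on a new Notion database
      Name: { title: [{ text: { content: title } }] },
    },
    children: [
      { object: "block", type: "heading_1",
        heading_1: { rich_text: [{ text: { content: "Summary" } }] } },
      { object: "block", type: "paragraph",
        paragraph: { rich_text: [{ text: { content: summary } }] } },
      { object: "block", type: "heading_1",
        heading_1: { rich_text: [{ text: { content: "Transcript" } }] } },
      { object: "block", type: "paragraph",
        paragraph: { rich_text: [{ text: { content: transcript } }] } },
    ],
  };
}
```

In the real workflow, this body is passed to the official @notionhq/client library, which handles authentication and the HTTP call for you.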
First, I should note that we’ll be building and deploying this automation on Pipedream, which is an automation-builder that is similar to Make.com and Zapier. It’s also my favorite of those platforms.
Here’s a look at how our workflow will run, step by step:
- When a new audio file is uploaded to Dropbox, Google Drive, or OneDrive, the automation is triggered.
- The audio is downloaded into your Pipedream account’s temporary storage.
- We get the duration of the audio.
- The audio is fully transcribed using OpenAI’s Whisper speech recognition model.
- We send the transcript to ChatGPT to get a summary, title, and some useful lists (action items, follow-up questions, etc.)
- The transcript and ChatGPT response are formatted and checked for errors.
- We send everything to a new page in Notion.
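The step list above maps to code roughly like this. Note that this is a simplified sketch: the step functions are stubs standing in for the real Pipedream actions and API calls, and the actual workflow’s function names and error handling differ:

```javascript
// Simplified sketch of the workflow's overall flow. Each function on
// `steps` is a stub for a real Pipedream action or API call.
async function runWorkflow(file, steps) {
  const audioPath = await steps.downloadToTmp(file);          // save to /tmp
  const durationSeconds = await steps.getDuration(audioPath); // audio length
  const transcript = await steps.transcribeWithWhisper(audioPath);
  const ai = await steps.summarizeWithChatGPT(transcript);    // summary, title, lists
  const page = steps.formatForNotion({ transcript, durationSeconds, ...ai });
  return steps.sendToNotion(page);                            // create the Notion page
}
```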
Here’s a visualization of this workflow:
10-Minute Setup Guide
I’ve built this automation in Pipedream, an automation-builder platform that allows me to share my automations.
In the Copy My Workflow step below, you’ll find links that will copy and automatically set up the automation for you. It works a lot like a Notion template, and should only take about 10 minutes to fully set up.
As a primer, here’s everything you’ll need to set up the workflow.
- A Pipedream account (free)
- An OpenAI account (pay-as-you-go)
- A cloud storage account – we’ll cover Google Drive, Dropbox, and OneDrive here. (all have free tiers)
- A Notion account (free)
You’ll also need an OpenAI API key, but I’d encourage you to create that later in the tutorial when it becomes relevant. You’ll only be able to see the key once on your OpenAI dashboard.
This workflow is free to set up, and extremely cheap to run. The only costs come from pay-as-you-go OpenAI usage, which averages out to about $0.40/hour of audio. Learn more in the Cost Information section.
Choose Your Notion Database
This workflow will work with any Notion database. You can even use a completely new database with only the default Name property.
However, this workflow works especially well with my Ultimate Brain template, which comes with the best note-taking system you’ll find in a Notion template.
Ultimate Brain is my all-in-one productivity template for Notion, and it combines tasks, notes, projects, goal-tracking, daily planning, journaling, and more to form a complete second brain in Notion.
You can get it here:
Want to turn Notion into a complete productivity system? Ultimate Brain includes all the features from Ultimate Tasks - and combines them with notes, goals, and advanced project management features.
Alternatively, I’ve created a simple Notes template that you can use along with this tutorial. Grab it here.
That simple template has a couple of useful properties baked in:
- AI Cost
- Duration
The Duration property is a formula, which takes in a number of seconds from the Duration (Seconds) property and formats it as a timestamp – e.g. 00:00:00.
If you’d like to add these properties to your own Notes database, check out the toggle below.
If you want to store and display the duration of your recording, as well as the combined cost for transcription and summarization, add the following properties to your notes database:
| Property Name | Property Type |
| --- | --- |
| AI Cost | Number |
| Duration (Seconds) | Number |
| Duration | Formula |
Then add the following formula in the Duration property’s formula editor:
if(floor(prop("Duration (Seconds)") / 3600) < 10, "0", "") + format(floor(prop("Duration (Seconds)") / 3600)) + ":" + if(floor(prop("Duration (Seconds)") % 3600 / 60) < 10, "0", "") + format(floor(prop("Duration (Seconds)") % 3600 / 60)) + ":" + if(floor(prop("Duration (Seconds)") % 3600 % 60) < 10, "0", "") + format(floor(prop("Duration (Seconds)") % 3600 % 60))
This will take the number of seconds in the audio file and show it in hours:minutes:seconds format (e.g. 00:00:00).
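If you’d like to sanity-check that formula, here’s the same logic as a small JavaScript function (a hypothetical helper, not part of the workflow):

```javascript
// Mirrors the Notion Duration formula: converts a number of seconds
// into an hours:minutes:seconds string, zero-padding each part.
function secondsToTimestamp(totalSeconds) {
  const pad = (n) => String(n).padStart(2, "0");
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = Math.floor(totalSeconds % 60);
  return `${pad(hours)}:${pad(minutes)}:${pad(seconds)}`;
}

// secondsToTimestamp(3661) → "01:01:01"
```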
With these properties set, we’ll be able to calculate the full cost and duration in our Pipedream workflow and send them to Notion.
If you’re unable to add or edit properties in your database, make sure the database isn’t locked:
And if you want to learn more about Notion formulas, check out my comprehensive formula guide:
And if you want to create your own Notes database and need a refresher on Notion databases, you can check out this guide:
Copy My Workflow
I’ve created a version of this workflow for each of the major cloud storage apps: Google Drive, Dropbox, and Microsoft OneDrive.
Click the workflow link for the app you want to upload your audio files to, and it’ll copy my pre-built workflow into your Pipedream account.
Upload mp3 or m4a files to Google Drive, and this workflow will automatically transcribe and summarize them, then send the results to Notion.
Upload mp3 or m4a files to Dropbox, and this workflow will automatically transcribe and summarize them, then send the results to Notion.
Upload mp3 or m4a files to Microsoft OneDrive, and this workflow will automatically transcribe and summarize them, then send the results to Notion.
The workflow links above are my referral links.
You don’t need a paid account to run this workflow at all. In fact, I spent hours optimizing it so that the Free tier can handle it.
As an added bonus, my referral link bumps the Free tier’s connected-account limit from 3 apps to 5 apps. So you can connect more apps and try more workflows without needing to upgrade 🙂
Why Pipedream?
Pipedream is hands-down my favorite automation platform.
I love it, use it every day, and constantly talk about it online. I use it to build all of my automations. Reasons I love it:
- It is the only platform I know of that allows for npm imports. So not only can you write JavaScript code, but you can use any npm package.
- The free tier is incredibly generous. You can do so much more on Pipedream for free than on Make, Zapier, or other traditional no-code platforms. It’s not even in the same league.
- The team is incredibly responsive and helpful.
- As a creator, I can share workflows with my audience, removing 95% of the setup for you.
It’s a product I am extremely proud to be an affiliate for; in fact, I was the one who asked them to build an affiliate program.
I wanted a way to justify making more tutorials, and to be able to share workflows like this one (which took over 100 hours to build and is actively updated) for free.
Again, you can use this workflow on the free plan – the only cost comes from ChatGPT usage, which is extremely cheap.
But if you do choose to upgrade, thank you. 🙂
Create a Project and Workflow
From here, I’ll use the Google Drive version of the workflow as the example in this setup guide. The Dropbox and OneDrive versions are nearly identical. I’ll note their minor differences where applicable.
Next, click Create Project and Continue:
Click Create Workflow:
Set Up the Trigger
Connect your Google account (or Dropbox/OneDrive account).
Choose the Folder option from Optional Fields, and choose a folder for your audio files. When you upload new audio files to this folder, your automation will run.
For the Dropbox version, set your folder in the Path field.
Click Create Source.
Upload a Test Audio File
Next, you’ll need to upload an audio file (mp3 or m4a) to your chosen cloud storage folder. This will create a Test Event, which you’ll be able to use to finish setting up and testing the automation.
Here’s a sample file you can download, then upload to your folder. Click here to download it.
Alternatively, you can record a voice note now!
Most audio-recording apps will let you upload files to your cloud storage app once you hit the Share button on a recorded file.
If you are using Dropbox and have an iOS device, you can also use the RecUp app:
It is currently the only app I can find that will automatically upload files to Dropbox after you finish recording. I haven’t been able to find any app that will do this for Google Drive or OneDrive.
Select a Test Event
Once Pipedream detects your uploaded file, select it as the test event.
Hit Continue.
Download the File to Temp Storage
This step is only required for the Google Drive and Microsoft OneDrive versions of the workflow. The Dropbox version does this step automatically, so you won’t see it in the workflow builder.
In the google_drive_download action, connect your Google account once again. (This action is called onedrive_download in the OneDrive version.)
Hit Test. This step simply downloads your file into the workflow’s temporary storage (the /tmp directory).
Hit Continue to move on to the next step.
Connect Your Notion and OpenAI Accounts
Continue on to the notion_voice_notes action. This action contains all the behind-the-scenes code that runs the automation. You can read the code in the project’s GitHub repo.
Connect your Notion account, making sure to give Pipedream access to your target notes database.
Create an OpenAI API key and enter it in the OpenAI (ChatGPT) Account field.
You can generate a new key here – note that you’ll need an OpenAI account to do so.
You can create an API key from your API Keys page in your OpenAI user settings.
Note that you won’t be able to see it again after generating it, so be sure to copy it and paste it into Pipedream right away.
You can add your billing details and upgrade to a paid account from your Billing Overview page.
As of this writing, OpenAI gives you an “approved usage limit,” which is a monthly cap on spending. Mine is currently $120/mo.
You can also choose to set your own Hard Limit if you want to ensure you don’t go over a certain amount of spend each month.
This automation is very inexpensive to run, so even setting it to $10 would likely be adequate.
By default, this workflow costs around $0.40 per hour of uploaded audio to run. Learn more in the Cost Information section.
Choose Your Settings and Test
Leave the {{steps}} field as-is. Choose your Summary Options, set your Notes Database, then set your other options as desired.
Reminder: If you want to pair this workflow with the best note-taking system for Notion, check out Ultimate Brain. It includes a Journal feature, a Quick Capture page, a My Day dashboard for planning your day, and much more:
Want to turn Notion into a complete productivity system? Ultimate Brain includes all the features from Ultimate Tasks - and combines them with notes, goals, and advanced project management features.
Once you’re happy with your settings, hit Test.
If you get a “File not found…” error here, scroll back up to the previous step and hit Test there once again. This can happen during the setup process, as Pipedream clears the /tmp/ directory pretty quickly. Once you Deploy the workflow, this won’t be an issue.
After the test finishes, you should see a Success message and some information about the run. Below that info, find the Deploy button and click it. Your workflow is now live!
Congrats! Your workflow should now be active. Now, whenever you upload an audio file to the folder you chose, your workflow will run automatically.
Updating Your Workflow
I’m able to ship new versions of the notion_voice_notes
step in this workflow when I fix bugs and add improvements.
These new versions won’t automatically apply to your copy of the workflow, but you can easily update your workflow to use them with just a click.
To check for new updates, first find your workflow in your Pipedream dashboard and open it.
Once you’re in the workflow editor, refresh the page. You need to do this in order to see the Update button.
Afterwards, you should see a red Update button on the notion_voice_notes
action if there’s an update available.
Cost Information
Transcribing audio and working with the ChatGPT API are both extremely cost-effective, but they’re not free.
You can see the pricing for all of OpenAI’s models on their pricing page, but here’s a quick breakdown of the current (April 10, 2023) prices for the models we’ll be using. This workflow always uses Whisper for transcription, and defaults to gpt-3.5-turbo for its chat model. Since you can choose other chat models, I’ve included their current pricing here as well.
| Model | Price |
| --- | --- |
| Whisper | $0.006 / minute (rounded to the nearest second) |
| Chat (gpt-3.5-turbo) | $0.0015 / 1,000 prompt tokens; $0.002 / 1,000 completion tokens |
| Chat (gpt-3.5-turbo-16k) | $0.003 / 1,000 prompt tokens; $0.004 / 1,000 completion tokens |
| Chat (gpt-4) | $0.03 / 1,000 prompt tokens; $0.06 / 1,000 completion tokens |
If you’re curious, a token is a fragment of a word. In general, 1,000 tokens is equivalent to 750 words.
You can get an accurate token count using OpenAI’s Tokenizer tool.
Here’s how the above pricing breaks down for the 14-minute audio file I shared in the intro to this tutorial:
| Model | Price |
| --- | --- |
| Whisper (transcription) | $0.084 |
| Chat (summarization) | $0.01 |
| Total cost | $0.094 |
Given this, we can set a general rule of thumb:
You’ll pay roughly $0.10 per 15 minutes of audio, or $0.40 per hour.
If you wanted to cap your spend at $10/mo, you’d get roughly 25 hours of audio transcription and summarization.
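That rule of thumb is easy to reproduce with the April 2023 prices from the table above. Here’s a rough estimator (the token counts in the comment are hypothetical – actual chat costs vary with transcript length and your chosen summary options):

```javascript
// Rough cost estimator using the April 2023 prices listed above.
const WHISPER_PER_MINUTE = 0.006;       // billed per second of audio
const GPT35_PROMPT_PER_1K = 0.0015;     // gpt-3.5-turbo prompt tokens
const GPT35_COMPLETION_PER_1K = 0.002;  // gpt-3.5-turbo completion tokens

function estimateCost(audioSeconds, promptTokens, completionTokens) {
  const whisper = (audioSeconds / 60) * WHISPER_PER_MINUTE;
  const chat =
    (promptTokens / 1000) * GPT35_PROMPT_PER_1K +
    (completionTokens / 1000) * GPT35_COMPLETION_PER_1K;
  return { whisper, chat, total: whisper + chat };
}

// A 14-minute (840-second) recording costs exactly $0.084 to transcribe;
// with a few thousand chat tokens on top, the total lands near $0.09.
```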
As you’ve no doubt noticed, transcription is by far the largest part of the cost here. As I mention in the privacy section below, Whisper is actually an open-source model, and there are already apps you can get that will run Whisper on your phone or local computer:
- Hello Transcribe (iOS), Whisperboard (iOS), Aiko (iOS), and WhisperMemos (iOS) are all currently free
- MacWhisper (MacOS) is also free, though you’ll have to pay €16 for the Pro license to get access to the most accurate models
- Speech-Translate (Windows) is similarly free. I haven’t tested this one.
This means that you could easily transcribe audio on your own local device, cutting out a large portion of the already low price of this automation.
Personally, I prefer using OpenAI’s Whisper API, as it makes the automation much more seamless and hands-off.
Privacy Information
TL;DR on this section: When you interact with OpenAI’s services, your data isn’t 100% private. Keep this in mind when uploading audio recordings.
I’d like to mention this up-front before we get too far into the tutorial:
This automation should not be used for data that you want to be sure is 100% private.
Since this automation utilizes both ChatGPT and OpenAI’s hosted version of Whisper, any audio you send to it should not be assumed to be private.
On March 1, 2023, OpenAI CEO Sam Altman stated that data submitted via the OpenAI API is not used for training models:
“data submitted to the OpenAI API is not used for training, and we have a new 30-day retention policy and are open to less on a case-by-case basis. we’ve also removed our pre-launch review and made our terms of service and usage policies more developer-friendly.”
However, Gizmodo’s article covering this change noted:
“The company retains API data for 30 days to identify any ‘abuse and misuse,’ and both OpenAI and contractors have access to that data during that time.”
Personally, I am fine with this. I’m using the automation primarily to brain-dump ideas that become public content, but I wouldn’t use it for confidential or extremely personal thoughts.
It should be noted that data submitted via the actual ChatGPT website is used to train models. We’re not using the ChatGPT site for this tutorial, but you may want to be aware of that fact if you use it for other purposes (as I do).
OpenAI has released the Whisper speech recognition model under an open-source license, and it’s actually possible to run it entirely on your own local device.
There are even apps that already do this, including:
- Hello Transcribe (iOS), Whisperboard (iOS) and Aiko (iOS)
- MacWhisper (MacOS)
If you were inclined, you could also deploy Whisper to your own server within a web app, and build your own APIs to handle this entire workflow (sans ChatGPT call) on it.
Getting summaries and action items from your transcripts in a privacy-friendly way is a bit harder.
One model, BLOOM, is an open-source model that is similar to GPT-3. You can learn more about it in HuggingFace’s announcement post.
I have not deeply investigated the feasibility of practical use or privacy implications of BLOOM (or any other open model), but I’m mentioning it here in case you want to explore further.
There is also a project called BlindAI, which seeks to improve user privacy while interacting with AI models. Again, I do not have a deep understanding of this project or how well it works, but I’m including it here for reference.
Why Not Use Notion AI?
Notion AI is a powerful suite of generative AI tools baked right into Notion. I covered a lot of what you can do with it in this post:
I’ll also be making a lot more content around Notion AI in the future.
However, it’s not the right tool for this particular workflow. The reason is that we cannot currently create new Notion pages via the API that have a database template applied.
If we could, then we could add Notion’s new AI Buttons feature to a database template, and use them to add our AI-generated summary/lists to the page.
Until we’re able to do that, we need to work directly with the ChatGPT API to get this information and send it to our Notion page.
That said, you could easily choose to send only the transcript from Whisper to Notion. From there, you could use Notion AI to summarize the text or create lists at your discretion.
Common Errors and Issues
As with any multi-step workflow (especially those involving code), these workflows can sometimes run into unexpected errors.
In this section, I’ve collected some of the most common errors, along with their likely fixes.
Before digging into the actual errors, I want to give you a few tips for debugging any errors you run into.
First, open up the toggle in your error message. If you’re working with a code step, the first line inside the toggle will usually tell you what line the error is occurring on.
Second, ask for help in the Pipedream community.
Finally, if you think you’ve encountered a true bug, please open an Issue in this workflow’s GitHub repo.
Notion Database Not Showing Up
If you’ve connected your Notion account, but your desired notes database isn’t showing up in the Notes Database field, here’s the fix.
- Navigate to your desired Notion database (go to the actual database, not a page that contains it or a linked view of it)
- Click the ••• menu in the top-right corner.
- Under Add Connections, find and add Pipedream.
If you’re using Ultimate Brain, this screenshot shows the location of the All Notes database: Ultimate Brain → Archive → All Notes [UB].
Once that’s done, head back to Pipedream and reload the Notes Database field; your database should now show up.
If you’re working with a different database, here’s a trick for making sure you actually navigate to the database itself.
Open any page within that database (e.g. an existing note), then make sure it’s open as a full page (not Side Peek or center modal).
In the breadcrumbs, you’ll see a link to the database directly to the left of the current page’s title. Click that, and you’ll find yourself at the database.
Error: “Failed to read audio file metadata…”
This error can happen with the Google Drive and Microsoft OneDrive versions when you’re building your workflow, but will likely never happen after the workflow has been deployed.
It happens because these versions have an intermediate step that downloads the audio file to temp storage. Pipedream clears the /tmp directory quite quickly in order to keep it open for new files, so if you take a while to set up the notion_voice_notes step, the audio file can get cleared.
If you run into this error while building, simply test the file download step again.
This will get a new file into /tmp storage, and from there you should be able to keep building and testing the workflow successfully.
Error: Timeout or Out of Memory
This workflow shouldn’t run into Out of Memory errors, but if it does, you can increase the workflow’s memory in the Settings menu for the workflow. Note that this will increase credit usage. You can learn how Pipedream’s credit model works here.
Timeout errors can happen more frequently, and they happen for two reasons:
- Your audio file is very long
- Whisper and/or ChatGPT is under heavy load and is responding slowly
By default, I’ve set my shared workflows to time out after 300 seconds (5 minutes), which is the max timeout setting on Pipedream’s free plan.
I’ve added a ton of optimizations to my code in order to make it handle long files even within that timeout window. I’ve even successfully tested it with a 4-hour file!
That said, if you need longer timeouts, you can raise them to 12.5 minutes by upgrading to one of Pipedream’s paid plans.
Aside from that, you can tweak the following settings to get the workflow to handle longer files without timing out:
- Choose fewer Summary Options. The more you choose, the more ChatGPT has to write and the longer it’ll take to finish.
- Set Summary Density lower.
- Set Summary Verbosity lower.
- Set Audio File Chunk Size lower.
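For context on the last setting: the workflow splits large audio files into chunks before sending them to Whisper, so the chunk size controls how many requests get made. As a sketch of the idea (a hypothetical helper, not the workflow’s actual code):

```javascript
// Rough sketch: how many Whisper requests a file of a given size produces.
// Smaller chunks mean more requests, each of which returns faster.
function chunkCount(fileSizeMB, chunkSizeMB) {
  if (chunkSizeMB <= 0) throw new Error("chunk size must be positive");
  return Math.ceil(fileSizeMB / chunkSizeMB);
}

// chunkCount(60, 24) → 3 (a 60 MB file split into 24 MB chunks)
```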
Additionally, ensure you’ve added billing details in your OpenAI account, and have generated a new API key after doing so. Keys generated during the trial period have restricted rate limits that counteract the optimizations I’ve made in my code.
Error: “Error Occured While Sending Chunks to OpenAI”
This error often happens if you’re using OpenAI trial credit that has either expired or can’t cover the size of the request you’re making.
The fix for this error is almost always the following:
- Add valid billing details to your OpenAI account
- Create a brand new API key after adding billing details
- Replace your old API key in the notion_voice_notes step with the new key in Pipedream.
This is by far the most common error people run into with this workflow – it happens because OpenAI is not clear about the fact that API keys created before you add billing info become invalid after you add that info!
FAQ: Where’s the Previous Tutorial?
I originally released this tutorial in April 2023 with this video:
The version of the workflow shown in this older video was far less capable and customizable, more error-prone, and required tons of manual setup.
Therefore, I highly recommend using the new versions I’ve shared above. They take care of nearly everything for you.
If you want to access the old tutorial, however, you can find an archived version here.
Working with Non-English Languages
If you want to use this workflow with a language besides English, you’re in luck! I’ve built several translation features directly into this workflow.
Both Whisper and ChatGPT can work in other languages – though ChatGPT in particular will have varying degrees of usefulness depending on what language you pick.
In this section, I’ll cover the languages you can work with, and show you how to tweak your workflow in order to use a specific language.
Here’s the short version of how to do it; all of these options live in the notion_voice_notes action. Note that Summary Language and Add Translation will only show up after you set Enable Advanced Options to True.
- Set the Transcript Language option if you want all audio files to be translated into your selected language.
- Leave Transcript Language blank if you want Whisper to transcribe the audio file in its original language.
- Set the Summary Language option to your language, and ChatGPT will write your Summary and chosen summary lists in that language.
- Set the Add Translation option if you want ChatGPT to translate the full transcript into your selected Summary Language. You can choose to have the script keep or discard the original-language transcript.
If you often upload audio files in different languages, I recommend keeping Transcript Language blank, then setting Add Translation. If you choose a Transcript Language, Whisper will always attempt to transcribe any audio file into that language.
What Languages Does This Workflow Support?
I’ve added translation support for all of the languages that the Whisper model officially supports.
The Whisper model can currently work with more than 50 languages. According to the Whisper API FAQ page, those languages include:
Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.
If the language in your audio file is among these choices, then this entire workflow will work for it!
However, keep in mind that ChatGPT does not have a list of officially-supported languages. Instead, it was simply trained on a large corpus of data that included many languages.
Languages that are more prevalent in the training data (and that have more users providing feedback) will work better than less-prevalent languages.
How to Set the Translation Options
In the notion_voice_notes
action, you’ll find a Transcript Language option. If you set a language here, Whisper will translate audio files into that language. It will not return a transcript in the file’s original language.
You can leave this blank (which is the default), and Whisper will transcribe the file in its original language.
Note the Enable Advanced Options field in that screenshot as well. Set it to True in order to access the next two settings.
The Summary Language option will let you set a language for your chosen Summary Options. Even if the transcript is in its original language, ChatGPT will write the summary and summary lists in the language chosen here.
The Add Translation option will let you use ChatGPT to translate the full transcript. There are three options:
- Translate and Keep Original – ChatGPT will translate the transcript into your chosen Summary Language, and this script will also include the original-language transcript in the Notion page.
- Translate Only – ChatGPT will translate the transcript into your chosen Summary Language, but will not include the original transcript in the Notion page.
- Don’t Translate – ChatGPT will not translate the transcript, and will only include the original transcript in the Notion page. This option is the same as simply leaving the property blank, and is only included to reduce potential user confusion.
The Add Translation property’s chosen option will only be considered if the transcript’s language differs from your chosen Summary Language.
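Putting those rules together, the script’s decision can be sketched as a small function. This is a simplified sketch for illustration – the real step’s option values and internal logic may differ:

```javascript
// Decides what happens to the transcript, per the options described above.
// Translation is only considered when the transcript's language differs
// from the chosen Summary Language.
function translationPlan(addTranslation, transcriptLang, summaryLang) {
  if (!summaryLang || transcriptLang === summaryLang) {
    return { translate: false, keepOriginal: true };
  }
  switch (addTranslation) {
    case "Translate and Keep Original":
      return { translate: true, keepOriginal: true };
    case "Translate Only":
      return { translate: true, keepOriginal: false };
    default: // "Don't Translate" or option left blank
      return { translate: false, keepOriginal: true };
  }
}
```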
Wrap-Up and More Resources
Congrats! You’ve now got a totally hands-off automation that will turn your voice notes into well-formatted Notion pages containing a summary, transcript, and useful lists.
If you’re curious, I built this automation using the Notion API and a lot of JavaScript. Building it has been an intensely rewarding learning experience.
If you want to learn the Notion API as well so you can realize your own ideas, start with my Notion API Beginner’s Guide:
This is a truly comprehensive introduction to coding with the API, and even features a 2-hour video tutorial. And it’s 100% free.
You can also find other no-code tutorials at my Notion Automations hub:
You might find these guides helpful as well:
I’d also recommend checking out the Pipedream docs if you want to work more with the platform.
Support My Work
This workflow and tutorial took well over 150 hours to research, test, debug, and write. I’ve been working on it continually for several months, and my initial testing started well over a year ago.
If you’d like to support my work, the best way is to share it. So if you enjoyed this tutorial, please share it on Twitter, LinkedIn, or with a friend 🙂
I’ll also note that this automation works extremely well with Ultimate Brain, my all-in-one productivity template for Notion.
If you want a complete, done-for-you second brain in Notion, give it a shot:
Want to turn Notion into a complete productivity system? Ultimate Brain includes all the features from Ultimate Tasks - and combines them with notes, goals, and advanced project management features.
Finally, if you want to get notified when I release new tutorials (like this one) and templates, join my free Notion Tips newsletter: