YouTube AI Summary
Author: codebuff
Summarize YouTube content with AI using a local LLM.
This Python script converts YouTube video transcripts into well-structured summarized posts using a local LLM. It automates the process of fetching video details, extracting transcripts, and summarizing the content. You can find the project on GitHub.
Features
- Fetches YouTube video title and transcript.
- Summarizes the transcript.
- Saves the generated content as a Markdown (`.md`) file.
- Supports local use with Ollama's AI reasoning models, such as `deepseek-r1`.
Prerequisites
- Python 3.8+
- Ollama installed locally.
- YouTube Transcript API: `youtube-transcript-api`
- PyTubeFix: `pytubefix` (I had some issues with the `pytube` module, so this fork is used instead.)
- Ollama Python SDK: `ollama`
Installation
- Clone this repository or copy the script.
- Install the required dependencies:
pip install youtube-transcript-api pytubefix ollama
- Install and configure Ollama to run locally.
Usage
To run the script, execute the following command in your terminal:
python script_name.py <YouTube_URL>
Example:
python script_name.py https://www.youtube.com/watch?v=dQw4w9WgXcQ
How It Works
- The script fetches the title and transcript of the specified YouTube video.
- The transcript is passed to an AI model (like `deepseek-r1`) running locally via Ollama.
- The AI summarizes the transcript into a structured and formatted post.
- The generated post is saved as a Markdown file using the video title as the filename (see the sketch below).
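Below is a minimal sketch of that pipeline. The helper names `video_id_from_url` and `fetch_title_and_transcript` are illustrative rather than taken from the actual script (only `summarize_text` is named later in this post), the prompt wording is made up, and the calls assume `pytubefix`'s `YouTube(url).title` and the classic `YouTubeTranscriptApi.get_transcript()` interface; newer 1.x releases of `youtube-transcript-api` use `YouTubeTranscriptApi().fetch()` instead.

```python
import sys
from urllib.parse import parse_qs, urlparse

from ollama import chat
from pytubefix import YouTube
from youtube_transcript_api import YouTubeTranscriptApi


def video_id_from_url(url: str) -> str:
    # Handles the common https://www.youtube.com/watch?v=<id> form.
    return parse_qs(urlparse(url).query)["v"][0]


def fetch_title_and_transcript(url: str):
    # Title comes from pytubefix; the transcript segments come from youtube-transcript-api.
    title = YouTube(url).title
    segments = YouTubeTranscriptApi.get_transcript(video_id_from_url(url))
    transcript = " ".join(segment["text"] for segment in segments)
    return title, transcript


def summarize_text(transcript: str) -> str:
    # Ask the local model (served by Ollama) to rewrite the transcript as a structured post.
    prompt = f"Summarize this YouTube transcript as a structured blog post:\n\n{transcript}"
    response = chat(model="deepseek-r1:7B", messages=[{"role": "user", "content": prompt}])
    return response.message.content


if __name__ == "__main__":
    video_url = sys.argv[1]
    title, transcript = fetch_title_and_transcript(video_url)
    print(summarize_text(transcript))
```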
Example Output
If the video title is "My Sample Video", the script creates a file called `My Sample Video.md` containing the blog post.
Local AI Model Usage
This script is designed to work with local AI models using Ollama. Ensure you have a model like `deepseek-r1` available locally by downloading it via Ollama:
ollama pull deepseek-r1:7B
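Once the download finishes, `ollama list` should show the model among your locally available models.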
Ollama Model Configuration
The model is specified in the `summarize_text()` function:
```python
from ollama import ChatResponse, chat

# Send the prompt to the locally running model via the Ollama Python SDK.
response: ChatResponse = chat(
    model="deepseek-r1:7B",
    messages=[
        {"role": "user", "content": prompt},
    ],
)
```
You can adjust the model based on your preferences.
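The summary text then has to be read out of the response object. How it is accessed depends on the SDK version; a hedged sketch:

```python
# Recent ollama-python releases return a typed object with attribute access.
summary = response.message.content
# Older releases return a plain dict instead:
# summary = response["message"]["content"]
```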
Error Handling
If any errors occur, the script logs the details and exits gracefully. Ensure you provide a valid YouTube URL and have internet access to fetch transcripts.
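The script's exact error handling is not reproduced here; one way the "log the details and exit gracefully" behaviour could look, reusing the hypothetical helpers from the earlier sketch, is:

```python
import logging
import sys

logging.basicConfig(level=logging.INFO)

try:
    title, transcript = fetch_title_and_transcript(video_url)  # hypothetical helper from the sketch above
    post = summarize_text(transcript)
except Exception as exc:
    # Log the failure (missing transcript, unreachable model, bad URL, ...) and exit with a non-zero status.
    logging.error("Failed to process video: %s", exc)
    sys.exit(1)
```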
Troubleshooting
- No transcript available: Some videos may not have transcripts enabled.
- Model issues: Ensure the specified AI model is correctly installed and available in Ollama.
- File not saving: Check if the title contains invalid filename characters; a small sanitization helper (sketched below) can strip or replace them.
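As an illustration, a hypothetical `safe_filename()` helper (not part of the original script) could replace characters that are invalid on common filesystems before the Markdown file is written:

```python
import re


def safe_filename(title: str) -> str:
    """Replace characters that are not allowed in filenames on common platforms."""
    return re.sub(r'[\\/*?:"<>|]', "_", title).strip()


# Write the generated post using the sanitized video title as the filename.
with open(f"{safe_filename(title)}.md", "w", encoding="utf-8") as f:
    f.write(post)
```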
Contributing
Feel free to submit issues or pull requests to enhance this project.