I’m testing AI tools and sharing what works. This week I looked at NotebookLM.
I have a lot of recruiters in my network, so I used CV screening as the example. But the workflow applies to anyone dealing with multiple documents. Customer feedback, competitive research, vendor proposals, meeting notes. Same principle.
Most AI tools answer from their training data. That’s a big source of hallucinations: the model fills gaps with plausible-sounding information from that data, whether it’s relevant to your question or not.
NotebookLM only looks at what you upload. No external datasets. Just your documents.
I created 10 fake resumes and a job description for a Digital Marketing Manager role. Varied qualification levels. Some with 8+ years of experience, others fresh graduates.
Uploaded everything to notebooklm.google.com. Free tool, just need a Google account.
Then I asked it to rank the candidates.
First attempt was a mess. Got paragraphs of analysis for each person. Almost more work than reading the CVs manually.
So I changed the prompt: “Rank these 10 candidates from most to least qualified. Create a simple table with 3 columns only: Name, Rank, Top 2 Strengths. Keep each strength to one line.”
That worked. Clean table. Easy to scan. Actually useful.
Go to notebooklm.google.com, create a notebook, upload your documents. You can connect Google Drive, add websites, YouTube videos, whatever.
Then ask questions. It answers based only on those documents.
The key is being specific. Tell it exactly what format you want, how many items, how long each should be.
I tested a few more questions:
“What are David Williams’ top 2 strengths and top 2 weaknesses for this role? Answer in 4 bullet points maximum.”
“Create 3 interview questions for Sarah Mitchell. List only the questions, no explanations.”
Each time it cited specific sections from the resumes. No guessing. No filler.
The CV example is easy to understand, but this scales to other stuff:
Upload customer feedback from different sources, ask what the top complaints are. Upload competitor materials, get side-by-side comparisons. Upload vendor proposals, see which offers the best value. Upload research papers, find common findings or contradictions.
More documents means more time saved.
Be specific with format. Say how many columns or bullet points you want. Set length limits. Request tables when you want scannable information.
Include context. For recruiters, upload the job description with the resumes. For other use cases, include requirements documents or evaluation criteria.
Iterate your prompts. If you get too much information, add constraints. Ask for shorter answers.
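Those three tips boil down to one pattern: put your documents into the context, then pin down the output format. If you ever want to script the same idea outside NotebookLM (which is a web tool, not an API you call from code), here’s a minimal Python sketch of that pattern. The file paths and the ask_llm() call are hypothetical placeholders for whatever model client you use, not anything specific to NotebookLM.

```python
from pathlib import Path

def build_screening_prompt(job_description: str, resumes: dict[str, str]) -> str:
    """Assemble a grounded, format-constrained prompt from your documents."""
    # Context first: the job description plus every resume, clearly labelled,
    # so the model has to answer from these documents.
    sources = [f"JOB DESCRIPTION:\n{job_description}"]
    for name, text in resumes.items():
        sources.append(f"RESUME ({name}):\n{text}")

    # Then one tight instruction: exact format, column count, length limits.
    instruction = (
        "Using ONLY the documents above, rank these candidates from most to "
        "least qualified. Create a simple table with 3 columns only: "
        "Name, Rank, Top 2 Strengths. Keep each strength to one line."
    )
    return "\n\n".join(sources) + "\n\n" + instruction

# Hypothetical usage -- the folder layout and ask_llm() are placeholders.
job_description = Path("job_description.txt").read_text()
resumes = {p.stem: p.read_text() for p in Path("resumes").glob("*.txt")}
prompt = build_screening_prompt(job_description, resumes)
# answer = ask_llm(prompt)  # swap in your model client of choice
# print(answer)
```

The point isn’t the code, it’s the shape of the prompt: sources first, then one specific instruction with an explicit format.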
NotebookLM does one thing well. It analyzes your documents without making stuff up.
For 10 documents, maybe you save an hour. For 50 or 100, you save a lot more. And you reduce the risk of missing details when skimming manually.
It’s free. It works now. Worth testing if you deal with lots of documents.
More AI tool experiments coming this week.
Try it: notebooklm.google.com
Disclaimer: This post was written using Claude.ai from Anthropic, based on the transcript of this video.