MongoDB Query Debugging: How to Find and Fix Slow Queries
Slow MongoDB queries silently kill app performance. Learn how to find them with the database profiler and explain(), interpret execution stats, and fix the most common culprits.

The Silent Performance Killer
Your app is slow. Not crash-slow, just... sluggish. Pages take 3 seconds instead of 300ms. API responses that used to be snappy now feel like they're thinking about it. Users notice. They don't file bug reports, they just stop using the feature.
Nine times out of ten, the problem is a slow MongoDB query. And the frustrating part is that MongoDB won't tell you about it unless you know where to look. There's no red warning banner. No error in your logs by default. Just a query that examines 2 million documents to return 15 results, and nobody knows until the database server starts sweating.
Let's fix that.
Step 1: Finding the Slow Queries
Before you can fix anything, you need to know which queries are actually slow. MongoDB gives you three ways to find them.
The Database Profiler
MongoDB has a built-in profiler that records slow operations. By default it's off, which is why most people never see it.
Turn it on:
// Profile operations slower than 100ms
db.setProfilingLevel(1, { slowms: 100 })
// Profile ALL operations (careful in production)
db.setProfilingLevel(2)
// Check current profiling status
db.getProfilingStatus()
Once enabled, slow operations get logged to the system.profile capped collection. Query it like any other collection:
// Find the 5 slowest queries in the last hour
db.system.profile.find({
  ts: { $gt: new Date(Date.now() - 3600000) }
}).sort({ millis: -1 }).limit(5)
Each profile entry tells you the namespace, the query shape, how long it took, how many documents it examined, and whether it used an index. This is your starting point.
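To make that triage concrete, here is a small helper sketch in plain JavaScript. It ranks an array of profile-style documents by duration and computes the examined-to-returned ratio; the field names (ns, millis, docsExamined, nreturned) match what system.profile entries contain, but the function itself is a hypothetical illustration, not part of MongoDB:

```javascript
// Rank profiler entries by duration and compute the scan ratio
// (docsExamined / nreturned), which should ideally be close to 1.
function worstOffenders(profileDocs, topN = 5) {
  return profileDocs
    .slice() // don't mutate the input
    .sort((a, b) => b.millis - a.millis)
    .slice(0, topN)
    .map((p) => ({
      ns: p.ns,
      millis: p.millis,
      scanRatio: p.nreturned > 0 ? p.docsExamined / p.nreturned : Infinity,
    }));
}

// Example with two fabricated profile entries
const sample = [
  { ns: "shop.orders", millis: 2340, docsExamined: 1850000, nreturned: 15 },
  { ns: "shop.users", millis: 12, docsExamined: 15, nreturned: 15 },
];
console.log(worstOffenders(sample, 1));
```

In a real session you would feed it the result of db.system.profile.find(...).toArray() and go straight to the entries with the worst ratios.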
db.currentOp()
The profiler shows you what already happened. db.currentOp() shows you what's happening right now. If your database is actively choking, this is where you look:
// Find operations running longer than 5 seconds
db.currentOp({
  "active": true,
  "secs_running": { $gt: 5 }
})
This is especially useful during incidents. You can see exactly which query is holding things up and kill it with db.killOp(opId) if needed.
mongod Logs
MongoDB logs slow operations by default (threshold: 100ms). If you have access to your server logs, grep for COMMAND entries with high durationMillis values. On Atlas, the Performance Advisor does this automatically and even suggests indexes, but it's limited to what it can observe passively.
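Since MongoDB 4.4 the server writes structured JSON log lines, so slow-query entries can be filtered programmatically instead of grepped by eye. The sketch below assumes the structured format (one JSON object per line, with msg set to "Slow query" and the duration under attr.durationMillis); older plain-text logs would need different parsing:

```javascript
// Filter structured mongod log text down to slow-query entries
// above a threshold. One JSON object per line is assumed.
function slowQueriesFromLog(logText, thresholdMs = 100) {
  return logText
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter((e) => e.msg === "Slow query" && e.attr.durationMillis >= thresholdMs)
    .map((e) => ({ ns: e.attr.ns, durationMillis: e.attr.durationMillis }));
}

// Two fabricated log lines: one slow query, one unrelated event
const log = [
  '{"t":{"$date":"2024-01-01T00:00:00Z"},"msg":"Slow query","attr":{"ns":"shop.orders","durationMillis":2340}}',
  '{"t":{"$date":"2024-01-01T00:00:01Z"},"msg":"Connection accepted","attr":{}}',
].join("\n");
console.log(slowQueriesFromLog(log));
```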
Step 2: Understanding explain()
You've found a slow query. Now you need to understand why it's slow. This is where explain() comes in, and where most people's eyes glaze over because the output is a wall of JSON.
It doesn't have to be that complicated.
The Three Verbosity Modes
// What the query planner chose (cheapest)
db.orders.find({ status: "pending" }).explain("queryPlanner")
// Actual execution stats (most useful)
db.orders.find({ status: "pending" }).explain("executionStats")
// Stats for ALL candidate plans (deep debugging)
db.orders.find({ status: "pending" }).explain("allPlansExecution")
For 90% of debugging, executionStats is what you want. It runs the query and tells you exactly what happened.
Reading the Output
Here's an executionStats output, stripped down to what matters:
{
  "executionStats": {
    "executionSuccess": true,
    "nReturned": 15,
    "executionTimeMillis": 2340,
    "totalKeysExamined": 0,
    "totalDocsExamined": 1850000,
    "executionStages": {
      "stage": "COLLSCAN",
      "filter": {
        "status": { "$eq": "pending" }
      },
      "nReturned": 15,
      "docsExamined": 1850000
    }
  }
}
Three numbers tell the whole story:
- nReturned: 15 — the query found 15 matching documents
- totalDocsExamined: 1,850,000 — but it had to look at 1.85 million documents to find them
- executionTimeMillis: 2,340 — which took 2.3 seconds
That ratio is the red flag. totalDocsExamined / nReturned should be as close to 1:1 as possible. A ratio of 123,000:1 means the query is doing a full collection scan (COLLSCAN), reading every single document and checking if it matches the filter.
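That check is mechanical enough to script. Here is a hypothetical triage helper that takes an explain("executionStats") result and flags the two red flags discussed above, a COLLSCAN stage and a high examined-to-returned ratio (the 10x threshold is an arbitrary assumption, not a MongoDB rule):

```javascript
// Flag a query plan as suspicious if it did a collection scan or
// examined far more documents than it returned.
function triageExplain(explainResult, maxRatio = 10) {
  const stats = explainResult.executionStats;
  const ratio =
    stats.nReturned > 0
      ? stats.totalDocsExamined / stats.nReturned
      : stats.totalDocsExamined;
  const collscan = stats.executionStages.stage === "COLLSCAN";
  return { collscan, ratio, suspicious: collscan || ratio > maxRatio };
}

// The stripped-down output from above, as input
const explainResult = {
  executionStats: {
    nReturned: 15,
    executionTimeMillis: 2340,
    totalKeysExamined: 0,
    totalDocsExamined: 1850000,
    executionStages: { stage: "COLLSCAN" },
  },
};
console.log(triageExplain(explainResult));
```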
The fix here is obvious: create an index on status. After that, the same explain output would look like:
{
  "executionStats": {
    "nReturned": 15,
    "executionTimeMillis": 2,
    "totalKeysExamined": 15,
    "totalDocsExamined": 15,
    "executionStages": {
      "stage": "FETCH",
      "inputStage": {
        "stage": "IXSCAN",
        "indexName": "status_1",
        "keysExamined": 15
      }
    }
  }
}
From 2,340ms to 2ms. From scanning 1.85 million documents to exactly 15. That's the difference an index makes.
Step 3: The Usual Suspects
Most slow MongoDB queries fall into a handful of patterns. Once you've seen them a few times, you can spot them in explain output almost instantly.
Collection Scans (COLLSCAN)
The most common offender. If you see "stage": "COLLSCAN" in your explain output, the query isn't using any index. Every document in the collection gets examined. This is fine for a collection with 100 documents. For a collection with 10 million, it's a disaster.
Fix: Create an index that covers your query's filter fields. For compound queries, index field order matters. The ESR rule (Equality, Sort, Range) is a good starting point for compound index design.
Missing Compound Indexes
Having an index on { userId: 1 } doesn't help a query that filters on { userId: "abc", status: "active", createdAt: { $gte: lastWeek } }. MongoDB might use the single-field index and then scan through all matching userId documents to check the other conditions.
Fix: Create a compound index that covers the full query pattern:
db.orders.createIndex({ userId: 1, status: 1, createdAt: -1 })
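The ESR ordering can be sketched as a tiny helper: given which fields a query uses for equality, sorting, and range filters, emit the compound index spec in that order. This is a simplified illustration of the rule, not an official MongoDB utility, and it assumes ascending direction for equality and range fields:

```javascript
// ESR (Equality, Sort, Range) sketch: build a compound index spec.
// Sort directions are preserved; a field used for both sort and range
// (like createdAt here) is kept once, in the sort position.
function esrIndex({ equality = [], sort = {}, range = [] }) {
  const spec = {};
  for (const f of equality) spec[f] = 1;
  for (const [f, dir] of Object.entries(sort)) spec[f] = dir;
  for (const f of range) if (!(f in spec)) spec[f] = 1;
  return spec;
}

// find({ userId, status, createdAt: { $gte: ... } }).sort({ createdAt: -1 })
const spec = esrIndex({
  equality: ["userId", "status"],
  sort: { createdAt: -1 },
  range: ["createdAt"],
});
console.log(spec); // { userId: 1, status: 1, createdAt: -1 }
```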
No Limit on Large Result Sets
This one sneaks past code reviews constantly:
// Fetching ALL orders for a user? Hope they don't have 500,000 of them.
db.orders.find({ userId: "abc123" })
Even with a perfect index, returning a massive result set takes time and memory. Always paginate or limit:
db.orders.find({ userId: "abc123" })
  .sort({ createdAt: -1 })
  .limit(50)
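For deep pagination, skip() gets progressively slower because MongoDB still walks the skipped entries. A common alternative is keyset pagination: build the next page's filter from the last document you already returned. Here is a sketch of that filter builder, assuming pages sorted by createdAt descending with _id as a tiebreaker (the function name and shape are illustrative):

```javascript
// Keyset ("cursor") pagination: derive the next page's filter from the
// last document of the previous page instead of using skip().
function nextPageFilter(baseFilter, lastDoc) {
  if (!lastDoc) return baseFilter; // first page: no cursor yet
  return {
    ...baseFilter,
    $or: [
      { createdAt: { $lt: lastDoc.createdAt } },
      // tiebreaker for documents sharing the same createdAt
      { createdAt: lastDoc.createdAt, _id: { $lt: lastDoc._id } },
    ],
  };
}

const last = { _id: 42, createdAt: new Date("2024-01-15") };
console.log(nextPageFilter({ userId: "abc123" }, last));
```

The resulting filter plugs straight into find() with the same sort and limit as before, and it stays index-friendly no matter how deep the user pages.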
Fetching Fields You Don't Need
By default, MongoDB returns the entire document. If your documents are 20KB each and you only need three fields, you're transferring 20x more data than necessary over the network.
// Bad: returns entire 20KB document
db.users.find({ active: true })
// Good: returns only what you need
db.users.find({ active: true }, { name: 1, email: 1, _id: 0 })
Projections also enable covered queries. If every field in the filter and projection lives in the index (and _id is either excluded or part of the index), MongoDB can answer the query from the index alone, without ever touching the documents themselves.
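A quick sanity check for the projection side of that rule can be scripted. This simplified, hypothetical helper only verifies that the projected fields are in the index keys and that _id is handled; a real covered query additionally requires the filter fields to be in the index and no array fields involved:

```javascript
// Could this projection be served from the index alone?
// Simplified: checks projected fields against index keys, plus the
// _id rule (must be excluded or itself indexed).
function isCovered(indexKeys, projection) {
  const indexed = new Set(Object.keys(indexKeys));
  const fieldsOk = Object.entries(projection).every(([field, included]) =>
    included === 0 ? true : indexed.has(field)
  );
  return fieldsOk && (projection._id === 0 || indexed.has("_id"));
}

const idx = { active: 1, name: 1, email: 1 };
console.log(isCovered(idx, { name: 1, email: 1, _id: 0 }));   // true
console.log(isCovered(idx, { name: 1, address: 1, _id: 0 })); // false: address not indexed
```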
Expensive $lookup Pipelines
$lookup is MongoDB's version of a join, and it can be brutally slow if the foreign collection isn't properly indexed. Each document in your pipeline triggers a lookup query against the foreign collection. No index on the foreign key? That's a collection scan per document.
// This runs a query against "products" for every order
db.orders.aggregate([
  { $match: { status: "shipped" } },
  { $lookup: {
      from: "products",
      localField: "productId",
      foreignField: "_id",
      as: "product"
  }}
])
Fix: Make sure the foreignField is indexed. For _id it's automatic, but for any other field, you need to create it yourself.
Step 4: Practical Fixes
Create Targeted Indexes
Don't just throw indexes at every field. Each index costs write performance and storage. Look at your actual query patterns, check which queries are slow, and create indexes that serve them.
// Check existing indexes first
db.orders.getIndexes()
// Create only what you need
db.orders.createIndex(
  { userId: 1, status: 1, createdAt: -1 },
  { name: "user_status_date" }
)
Use explain() after creating the index to verify the query actually uses it. MongoDB's query planner sometimes makes surprising choices.
Rewrite Inefficient Queries
Sometimes the fix isn't an index but a different query shape. Regex queries with leading wildcards (/.*search.*/) can't use indexes. $ne and $nin operators are inefficient by nature. $exists: false queries have to scan documents to confirm a field is missing.
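The regex case has a common rewrite: if users are really doing prefix searches, anchor the pattern at the start of the string, because an anchored prefix regex can walk an index on the field while a leading wildcard cannot. A sketch, including the input escaping you need when the search term comes from users:

```javascript
// Build an index-friendly anchored prefix regex from raw user input.
// Escaping prevents characters like "." or "*" from being treated as
// regex syntax.
function prefixRegex(userInput) {
  const escaped = userInput.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped);
}

const re = prefixRegex("wid");
console.log(re.test("widget"));     // true: "widget" starts with "wid"
console.log(re.test("megawidget")); // false: "wid" is not a prefix
```

If users genuinely need substring or fuzzy matching, an anchored regex won't cut it; that's the point to reach for a text index or a search engine rather than fight the query planner.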
Consider whether your query can be restructured. Can you filter on a different field first to reduce the working set? Can you denormalize data to avoid a $lookup? Can you precompute a value instead of calculating it in an aggregation?
Monitor After Fixing
After deploying index changes, watch the profiler output. Confirm that the queries you targeted are now fast. Check that you haven't accidentally slowed down write operations with too many indexes. Production data has a way of behaving differently than your test data.
Making This Easier with Visual Tools
Let's be honest: parsing explain() JSON output in a terminal is not fun. The output is deeply nested, the field names are terse, and comparing two query plans requires scrolling through walls of text.
This is where a GUI tool pays for itself. Mingo lets you run queries, see results, and inspect execution plans visually, without memorizing shell syntax or parsing raw JSON. You can explore collections, spot performance issues, and test index changes all in one place.
Whether you use a GUI or the shell, the core process is the same: profile, explain, fix, verify. But being able to visually see that a query is doing a COLLSCAN across 2 million documents, rather than reading nested JSON and doing mental arithmetic, makes the debugging loop significantly faster.
The Checklist
When you hit a slow query in production, here's the sequence:
- Find it — Enable the profiler or check db.currentOp() to identify the problematic query
- Explain it — Run the query with .explain("executionStats") and look at totalDocsExamined vs nReturned
- Diagnose it — Is it a COLLSCAN? Missing compound index? Unbounded result set? Expensive $lookup?
- Fix it — Create the right index, add a limit, use projections, or restructure the query
- Verify it — Run explain again and confirm the numbers improved
MongoDB query debugging isn't glamorous work, but it's some of the highest-leverage work you can do. A single missing index can be the difference between a 50ms response and a 5-second timeout. Your users won't know you fixed it, which is exactly the point.