30 minutes • Identify inefficiencies and transform your grant operations
Most grant teams work efficiently on individual tasks. Writers write well. Researchers research thoroughly. Compliance reviewers catch errors. But when you examine the entire workflow, inefficiencies emerge that cost thousands of hours annually and result in missed funding opportunities.
A typical grant workflow might look fine on the surface but hide significant waste: people waiting for approvals while other work could progress. Data entered multiple times into different systems. The same research repeated for similar funders. Bottlenecks where one person's availability blocks the entire team. Finding and eliminating these inefficiencies is one of the highest-impact investments grant operations can make.
The most reliable bottleneck identification method is tracking actual time. For one month, track every grant proposal from discovery through submission. Document: discovery date, when screening decision is made, when research starts/finishes, when writing starts/finishes, when review starts/finishes, when submitted. Calculate total cycle time and where time actually gets spent.
Most organizations find that actual work time is 30-40% of total cycle time. The remaining 60-70% is waiting time: waiting for approvals, waiting for research to finish, waiting for writer availability, waiting for reviewer feedback. These waiting periods are your bottlenecks.
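The month of tracking described above reduces to a few lines of arithmetic. Here is a minimal sketch in Python; the stage names and dates are illustrative assumptions, not a prescribed schema:

```python
from datetime import date

# Hypothetical stage timestamps for one proposal (names are assumptions).
proposal = {
    "discovered":     date(2024, 3, 1),
    "research_start": date(2024, 3, 12),
    "research_end":   date(2024, 3, 17),
    "writing_start":  date(2024, 3, 26),
    "writing_end":    date(2024, 4, 5),
    "review_start":   date(2024, 4, 16),
    "review_end":     date(2024, 4, 19),
    "submitted":      date(2024, 4, 22),
}

# Active work happens inside the start/end pairs; everything else is waiting.
active_spans = [("research_start", "research_end"),
                ("writing_start", "writing_end"),
                ("review_start", "review_end")]

total_days = (proposal["submitted"] - proposal["discovered"]).days
active_days = sum((proposal[end] - proposal[start]).days
                  for start, end in active_spans)
waiting_days = total_days - active_days

print(f"Cycle time:  {total_days} days")
print(f"Active work: {active_days} days ({active_days / total_days:.0%})")
print(f"Waiting:     {waiting_days} days ({waiting_days / total_days:.0%})")
```

Run across a month of proposals, the same split per stage shows exactly where the waiting accumulates.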
Map your workflow as dependencies. What must happen before each task can start? Create a chart showing these relationships. Usually, certain tasks create queues: if Task A must complete before Tasks B, C, and D can start, and Task A takes longer than expected, B, C, and D all pile up waiting. These are critical bottlenecks.
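One way to find the critical chain in such a dependency chart is to compute the longest path through it: the earliest a task can finish is the latest finish among its prerequisites plus its own duration. The tasks, durations, and dependencies below are hypothetical:

```python
from functools import lru_cache

# Hypothetical task durations (days) and prerequisites; names are invented.
duration = {"screen": 2, "research": 7, "strategy": 4,
            "draft": 10, "budget": 3, "review": 5}
depends_on = {"screen": [], "research": ["screen"], "strategy": ["screen"],
              "draft": ["research", "strategy"], "budget": ["strategy"],
              "review": ["draft", "budget"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # A task starts when its slowest prerequisite finishes.
    start = max((earliest_finish(d) for d in depends_on[task]), default=0)
    return start + duration[task]

critical_path_days = earliest_finish("review")
print(critical_path_days)
```

Tasks with many downstream dependents that sit on this longest path are the ones where a delay piles up queues behind it.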
Example: All proposals must be reviewed by the grants director before submission. If the director is reviewing previous proposals, new ones wait. The director becomes the bottleneck. Understanding this allows you to redesign the workflow to remove the constraint.
Track how your team's time is actually spent. Are grant writers spending 20% of time writing (their core skill) and 80% on administrative tasks? Are researchers finding information or entering data? If talented people spend significant time on tasks that could be automated or delegated, that's inefficiency.
Effective utilization doesn't mean 100% billable time on grants. It means people spending time on what they do best. Writers should write. Researchers should research. Administrative work should be minimal.
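A minimal sketch of a utilization check, assuming a self-reported weekly time log; the task categories and hours below are made up for illustration:

```python
# Hypothetical weekly time log (hours) for one grant writer.
time_log = {"writing": 8, "data_entry": 12, "meetings": 10,
            "formatting": 6, "editing": 4}

# The tasks only this role can do well (an assumption about the role).
core_work = {"writing", "editing"}

core_hours = sum(h for task, h in time_log.items() if task in core_work)
total_hours = sum(time_log.values())
utilization = core_hours / total_hours
print(f"Core-skill utilization: {utilization:.0%}")
```

A writer at 30% core-skill utilization is the 20/80 split described above: most of the week goes to work someone (or something) else could do.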
Typically, 20% of your bottlenecks account for 80% of delays. Find that critical 20%. Eliminate it. You'll gain disproportionate benefit. Don't try to optimize everything simultaneously. Focus ruthlessly on the biggest impediments.
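The 80/20 cut can be made mechanically from your waiting-time data: rank bottlenecks by delay and walk down the list until 80% of the total is explained. The bottleneck names and day counts below are invented for illustration:

```python
# Hypothetical waiting-time totals (days lost per quarter) by bottleneck.
delays = {"director approval": 55, "writer availability": 27,
          "data re-entry": 8, "reviewer feedback": 5,
          "research backlog": 3, "tool outages": 2}

total = sum(delays.values())
ranked = sorted(delays.items(), key=lambda kv: kv[1], reverse=True)

# Take bottlenecks in order of impact until 80% of delay is covered.
critical, running = [], 0
for name, days in ranked:
    critical.append(name)
    running += days
    if running / total >= 0.8:
        break

print(critical)
```

In this invented dataset, two of six bottlenecks account for over 80% of the delay; those two are where the effort goes first.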
When proposals must be individually approved by leadership before moving forward, approvals become bottlenecks. If your director reviews one proposal per day and you're generating three, proposals queue for approval.
Solutions: Implement parallel approvals (the director reviews completed sections while writers draft the next). Create approval templates and checklists so the director can approve quickly. Delegate approval authority for lower-risk decisions. Establish service-level agreements (approvals within 48 hours). Consider risk-based approval: low-risk grants get streamlined review, high-risk grants get thorough review.
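The queueing arithmetic behind the one-review-per-day example is easy to sketch. The rates below are assumptions, and the model is deliberately crude (steady, deterministic rates per working day):

```python
# Back-of-envelope approval queue growth under assumed daily rates.
arrivals_per_day = 3   # proposals becoming ready for approval
reviews_per_day = 1    # the director's throughput

def queue_after(days, arrivals, reviews, start=0):
    """Backlog length after `days`, assuming constant daily rates."""
    backlog = start
    for _ in range(days):
        backlog = max(0, backlog + arrivals - reviews)
    return backlog

print(queue_after(5, arrivals_per_day, reviews_per_day))      # one week as-is
print(queue_after(5, arrivals_per_day, reviews_per_day + 2))  # after delegating
```

Whenever arrivals exceed throughput, the backlog grows without bound; delegation or risk-based routing works because it raises throughput past the arrival rate rather than asking the reviewer to hurry.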
If only one person can write compelling proposals, or only one researcher can find niche funding, that person becomes a bottleneck. When they're unavailable, work stops.
Solutions: Document expertise so others can step in. Create templates that others can follow. Pair experts with developing staff for knowledge transfer. Cross-train team members. Build systems (like AI prompts) that replicate expertise. Invest in skills development so bottlenecks don't exist at the human level.
When a tool fails or doesn't integrate with others, work gets stuck. Manual data entry between disconnected systems slows everything. Unreliable automation that requires constant checking defeats the purpose.
Solutions: Invest in reliable tools. Set up integrations. Build redundancy so a single tool failure doesn't stop operations. Monitor tool performance and escalate issues immediately. Have manual workarounds ready if technology fails.
Sometimes grants cluster: three deadlines in one week when normally you handle one per month. Your capacity isn't built for spikes. Work gets delayed because people are overwhelmed.
Solutions: Plan for spikes by building capacity buffer into your team. Use flexible staffing (contractors, consultants) for surge periods. Create simplified processes for high-volume periods. Prioritize grants strategically rather than trying to do everything. Consider declining some opportunities to protect quality on priority grants.
Track these metrics to measure efficiency: Cycle time (discovery to submission). Proposal quality (acceptance rate, funding rate). Cost per proposal (total staff hours times a loaded hourly rate, divided by proposals submitted). Win rate compared to benchmarks. Time to decision (how long from opportunity discovery to go/no-go decision). Staff capacity utilization (percentage of time on high-value work).
These metrics reveal what's working and what's not. If cycle time increased 40% but quality stayed the same, something is broken. If you're winning at your historical rate but spending 50% more time, efficiency has decreased. Metrics guide improvement efforts.
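The cost and win-rate formulas above, applied to hypothetical quarterly figures (the loaded hourly rate in particular is an assumption you'd replace with your own):

```python
# Hypothetical quarterly figures.
submitted = 12
funded = 4
staff_hours = 600
loaded_rate = 50  # assumed fully-loaded staff cost per hour, in dollars

win_rate = funded / submitted
hours_per_proposal = staff_hours / submitted
cost_per_proposal = hours_per_proposal * loaded_rate

print(f"Win rate: {win_rate:.0%}")
print(f"Hours per proposal: {hours_per_proposal:.0f}")
print(f"Cost per proposal: ${cost_per_proposal:,.0f}")
```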
What's "good" efficiency? It depends on your grant type, organization size, and complexity. But you can benchmark: leading organizations typically cycle grants in 6-8 weeks, win funding on 25-35% of submitted proposals, and spend 40-60 hours per submitted grant. Use these as reference points to gauge whether you're operating efficiently.
Create a simple dashboard tracking your key metrics monthly: total cycle time, average waiting time at each stage, proposal quality scores, funding rate, cost per proposal, staff hours per proposal. Review monthly and identify trends. When metrics trend negatively, investigate why. Treat efficiency like any other business metric that requires monitoring and management.
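A minimal version of such a monthly review can be automated: flag any metric that has trended the wrong way over the last three months. The metric names and values below are illustrative:

```python
# Last three monthly readings per metric (oldest first); values are invented.
history = {
    "cycle_time_weeks":   [9, 10, 12],         # lower is better
    "funding_rate":       [0.30, 0.28, 0.22],  # higher is better
    "hours_per_proposal": [48, 50, 47],        # lower is better
}
higher_is_better = {"funding_rate"}

def flag_trends(history):
    """Return metrics whose latest value is worse than three months ago."""
    flags = []
    for metric, values in history.items():
        if metric in higher_is_better:
            worsening = values[-1] < values[0]
        else:
            worsening = values[-1] > values[0]
        if worsening:
            flags.append(metric)
    return flags

print(flag_trends(history))
```

Each flagged metric becomes an investigation item for that month's review; the point is a mechanical trigger, not a diagnosis.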
Complex processes slow everything. Review your workflow and eliminate unnecessary steps. Do you really need four levels of approval? Could two suffice with clear delegation? Do all proposals need the same review process or could you streamline lower-risk applications?
Simplification must be balanced against quality. You're not cutting corners; you're removing non-essential work. Each step should provide clear value. Steps that exist purely for habit or historical reasons should go.
Sequential work (one step completes, next begins) is often slow. Parallel work (multiple steps happen simultaneously) is faster. Can research and strategy development happen in parallel? Can sections of a proposal draft simultaneously rather than sequentially?
Parallelization requires clear communication and strong coordination. But the speed improvements are significant. If sequential work takes eight weeks, parallel might take five. The challenge is ensuring quality doesn't suffer when less time is spent coordinating between stages.
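The sequential-versus-parallel comparison can be sketched as follows, with invented task durations; each parallel group assumes its tasks can genuinely run at the same time:

```python
# Task durations in weeks (illustrative).
tasks = {"research": 2, "strategy": 2, "draft": 2, "budget": 1, "review": 1}

# Sequential: every task waits for the previous one.
sequential_weeks = sum(tasks.values())

# Parallel: research + strategy together, then draft + budget, then review.
parallel_groups = [["research", "strategy"], ["draft", "budget"], ["review"]]
parallel_weeks = sum(max(tasks[t] for t in group) for group in parallel_groups)

print(sequential_weeks, parallel_weeks)  # 8 weeks sequential vs 5 parallel
```

Each parallel group takes as long as its slowest task, which is why the gains come from pairing tasks of similar length that don't depend on each other.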
Tasks that don't require human judgment should be automated. Data entry, monitoring, initial research screening, compliance checking—these are candidates for automation. Tasks requiring human judgment should be delegated to the most skilled available person, not concentrated at the top.
This frees high-value people to focus on strategy and relationship-building rather than administrative work. It also develops staff skills through increased responsibility.
Reinventing the wheel for each proposal wastes time. Create templates for common elements: organizational background sections, evaluation approaches, budget narratives. Create standard processes for common decisions: how do we evaluate funder fit? What's our decision framework?
Standardization sounds like it reduces quality, but it often improves it. With templates, people focus on customization and improvement rather than creating from scratch. With standard processes, decisions are more consistent.
Analyzing bottlenecks is step one. Implementing solutions is the real challenge. Create a roadmap: what's the single biggest bottleneck? What's the solution? How long will implementation take? What's the expected improvement? Start there. Once you've addressed the biggest bottleneck, move to the next.
Quick wins should be prioritized: changes that are simple to implement but produce significant improvement. These build momentum and team confidence in optimization efforts.
Workflow changes affect people's daily work. If you're eliminating an approval step, the approver needs to understand why. If you're automating a task, the person doing it needs training on the new approach. Communicate clearly: what's changing, why, what's the benefit, how will it affect each person's work.
After implementing an optimization, measure the results. Did cycle time actually decrease? Did staff hours per proposal drop? If results fall short of expectations, investigate why and adjust the approach. Optimization is iterative: you refine over time based on actual results.
One nonprofit analyzed its grant workflow and found approvals consumed 6 weeks of the 12-week cycle. By implementing risk-based approval (low-risk grants approved by program staff, high-risk by the director), parallel approvals that overlapped review with writing, and automation for routine sign-offs, they cut approval time to 3 weeks and moved most of it off the critical path. Total cycle time dropped from 12 weeks to 6, a 50% improvement with no loss of quality.
Once you've optimized your workflow, sustaining improvements requires discipline. Monitor metrics continuously. When metrics degrade, address them immediately rather than letting problems accumulate. Regularly review processes to ensure they're still optimal. As tools, team, and capacity change, optimization must adapt.
Next, we'll explore managing data across multiple platforms and ensuring consistency throughout your workflows.