How to Run a Weekly Performance Review for Your LoL Team
Most esports coaches review their players. Few do it consistently. Even fewer do it in a way that actually changes how the team plays next week.
The problem isn't motivation. It's process. Without a repeatable framework, weekly reviews become ad hoc conversations based on whatever the coach remembers from the last few games. Sometimes it's useful. Most of the time it's a vague "we need to be better in teamfights" that everyone nods along to and nobody acts on.
A structured weekly performance review fixes this. It turns scattered impressions into a clear picture, backed by data, that the whole staff can align around. Here's how to build one from scratch.
Why weekly, not daily or monthly
Daily reviews are too noisy. A player can have two bad games on a Tuesday and look like they're in crisis. By Thursday they've bounced back and the "issue" was just variance. Daily data is useful for spotting acute tilt or sudden drops, but it's not the right cadence for structured evaluation.
Monthly reviews are too slow. If a player's CS/min has been declining for three weeks and you only catch it at the end of the month, you've wasted 21 days of practice time where targeted coaching could have helped. By the time you identify the issue, the player might have internalized bad habits.
Weekly is the sweet spot. Seven days gives you enough games to filter out variance (most active players will have 15–30+ games in a week across SoloQ and scrims) while keeping the feedback loop tight enough to course-correct quickly.
The 4 phases of a weekly review
A good weekly review follows four phases. Skip any of them and the review loses its impact.
Phase 1: Data pull (15 minutes)
Before you sit down to analyze anything, pull the numbers. This is not the time for opinions. It's the time for facts.
For each player on your roster, pull:
- Win rate (7-day window, filtered by SoloQ and scrims separately)
- CS/min trend (7-day vs. 30-day average)
- KDA trend (7-day vs. 30-day average)
- Damage per minute
- Kill participation
- Vision score (especially for supports and junglers)
- Number of games played (activity level matters)
For the team as a whole:
- Team win rate in scrims vs. SoloQ
- Average game duration (are games getting shorter or longer?)
- Total games played across the roster (is anyone underplaying?)
The goal of Phase 1 is to have all the numbers in front of you before you start interpreting them. Don't cherry-pick. Pull everything, then look for patterns.
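If your match history lives in an export or a spreadsheet, this pull is easy to script. Here's a minimal sketch in Python, assuming a flat list of per-game records; the `Match` fields and the `pull_player` helper are placeholders for illustration, not any tool's real API:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical per-game record; field names are placeholders, not a real API.
@dataclass
class Match:
    player: str
    day: date
    queue: str          # "soloq" or "scrim"
    win: bool
    cs_per_min: float
    kda: float
    dpm: float

def avg(values):
    values = list(values)
    return round(sum(values) / len(values), 2) if values else None

def pull_player(matches, player, today):
    """Phase 1 numbers for one player: 7-day window vs. 30-day baseline."""
    def window(days, queue=None):
        cutoff = today - timedelta(days=days)
        return [m for m in matches
                if m.player == player and m.day >= cutoff
                and (queue is None or m.queue == queue)]

    last7, last30 = window(7), window(30)
    return {
        "games_7d": {q: len(window(7, q)) for q in ("soloq", "scrim")},
        "winrate_7d": {q: avg(m.win for m in window(7, q)) for q in ("soloq", "scrim")},
        "cs_per_min": {"7d": avg(m.cs_per_min for m in last7), "30d": avg(m.cs_per_min for m in last30)},
        "kda": {"7d": avg(m.kda for m in last7), "30d": avg(m.kda for m in last30)},
        "dpm": {"7d": avg(m.dpm for m in last7), "30d": avg(m.dpm for m in last30)},
    }
```

The same windowed averages, aggregated across the roster instead of per player, give you the team-level numbers.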
Phase 2: Individual analysis (30 minutes)
Go through each player one by one. For each player, answer three questions:
1. What improved this week?
Look for metrics that trended up compared to the previous week or the 30-day average. Even small improvements matter. A player who went from 7.2 to 7.5 CS/min is moving in the right direction. Acknowledge it. Positive reinforcement on specific metrics is far more motivating than generic praise.
2. What declined or stagnated?
Look for metrics that dropped or flatlined. A KDA drop of 0.5+ or a CS/min drop of 0.3+ over a week is worth investigating. But don't jump to conclusions. Check the context first:
- Did they play new champions? Learning curves cause temporary stat dips.
- Did they face harder opponents? Ranking up means tougher matchups.
- Were games shorter than usual? Some metrics compress in fast games.
- Did their role in the team comp change? A player asked to play weak-side will naturally have lower numbers.
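As a sketch of how those thresholds turn into a repeatable check: the context flags below are things the coach records by hand, not anything detected automatically, and all names are illustrative.

```python
KDA_DROP = 0.5      # week-over-week KDA drop worth investigating
CS_DROP = 0.3       # week-over-week CS/min drop worth investigating

# Context factors the coach notes by hand before drawing conclusions.
CONTEXT_FACTORS = {"new_champions", "harder_opponents", "shorter_games", "weak_side_role"}

def flag_declines(this_week, last_week, context):
    """Return metrics past the decline thresholds, plus context to check first."""
    flags = []
    if last_week["kda"] - this_week["kda"] >= KDA_DROP:
        flags.append("kda")
    if last_week["cs_per_min"] - this_week["cs_per_min"] >= CS_DROP:
        flags.append("cs_per_min")
    return {"declined": flags, "check_first": sorted(context & CONTEXT_FACTORS)}

print(flag_declines({"kda": 3.0, "cs_per_min": 7.6},
                    {"kda": 3.6, "cs_per_min": 8.1},
                    {"new_champions"}))
# {'declined': ['kda', 'cs_per_min'], 'check_first': ['new_champions']}
```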
3. What's the one thing to focus on next week?
This is the most important question. Not five things. Not a general "play better." One specific, actionable focus area.
Bad example: "You need to improve your laning."
Good example: "Your CS at 10 minutes has been averaging 72 this week. Let's get that to 80 by focusing on last-hitting under tower in the first practice block every day."
One clear target gives the player something to measure themselves against. Multiple targets dilute focus and make progress harder to track.
Phase 3: Team-level patterns (15 minutes)
After individual reviews, zoom out. Look for patterns that affect the whole team:
Win rate divergence between SoloQ and scrims
If your team's SoloQ win rate is strong but scrim win rate is weak, the players are individually skilled but the team coordination isn't translating. Focus practice time on team-level execution: draft strategy, objective sequencing, teamfight triggers.
If the reverse is true (good scrims, bad SoloQ), it's less concerning for competitive purposes. But it could indicate that players aren't taking SoloQ seriously, which affects their individual mechanics over time.
Activity imbalances
If one player played 40 SoloQ games this week and another played 12, that matters. The 12-game player might be burned out, tilted, or dealing with something outside the game. It's worth a check-in before it becomes a performance issue.
Role-based outliers
If your ADC's damage per minute is consistently below your top laner's, something is structurally off: the draft, how the team plays around its carries, or the ADC's own performance. Cross-role comparisons can reveal issues that individual reviews miss.
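All three of these pattern checks reduce to simple comparisons once the Phase 1 numbers are in hand. A rough sketch; the thresholds are illustrative defaults, not values from any ruleset:

```python
def winrate_divergence(soloq_wr, scrim_wr, gap=0.10):
    """Flag when SoloQ and scrim win rates diverge by more than `gap`."""
    if soloq_wr - scrim_wr > gap:
        return "coordination gap: strong individuals, weak team execution"
    if scrim_wr - soloq_wr > gap:
        return "check SoloQ engagement: scrims fine, ladder play lagging"
    return None

def activity_outliers(games_by_player, ratio=0.5):
    """Players whose weekly game count falls below `ratio` x the roster median."""
    counts = sorted(games_by_player.values())
    median = counts[len(counts) // 2]
    return [p for p, n in games_by_player.items() if n < median * ratio]

def carry_dpm_inversion(dpm_by_role):
    """True when the ADC's damage per minute is below the top laner's."""
    return dpm_by_role["adc"] < dpm_by_role["top"]

print(winrate_divergence(0.62, 0.45))   # coordination gap: ...
print(activity_outliers({"top": 35, "jgl": 28, "mid": 40, "adc": 31, "sup": 12}))  # ['sup']
print(carry_dpm_inversion({"adc": 480, "top": 545}))  # True
```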
Phase 4: Action items (10 minutes)
Every review should end with a short list of concrete actions for the next week. Not observations. Actions.
Good action items:
- "Mid player: 10 minutes of CS practice tool every day before scrims, targeting 85 CS at 10 minutes"
- "Review last 3 scrim losses as a team on Wednesday, with a focus on objective setup specifically"
- "Jungler and support: run 3 games of duo SoloQ this week to work on river control timing"
- "Full team: track vision score this week. Target: supports above 2.0/min, junglers above 1.2/min"
Bad action items:
- "Play better"
- "Stop dying so much"
- "We need to improve macro"
- "Everyone watch more VODs"
The difference is specificity and measurability. After next week's review, you should be able to look at the data and say whether the action item was completed or not.
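One way to enforce that measurability is to store each action item with a metric, a target, and a direction, then test it against next week's data pull. A sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    owner: str       # player name or "team"
    metric: str      # key into next week's data pull
    target: float
    direction: str   # "gte" (at least) or "lte" (at most)

    def completed(self, next_week_value):
        if self.direction == "gte":
            return next_week_value >= self.target
        return next_week_value <= self.target

items = [
    ActionItem("mid", "cs_at_10", 85, "gte"),
    ActionItem("support", "vision_per_min", 2.0, "gte"),
    ActionItem("jungler", "vision_per_min", 1.2, "gte"),
]

next_week = {"mid": {"cs_at_10": 82},
             "support": {"vision_per_min": 2.3},
             "jungler": {"vision_per_min": 1.1}}

for item in items:
    done = item.completed(next_week[item.owner][item.metric])
    print(f"{item.owner} / {item.metric}: {'done' if done else 'missed'}")
```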
The review document: what it looks like
Keep it simple. A weekly review doesn't need a 10-page report. One page per player, one page for the team. Here's a template:
Player review page
Player: [Name]
Role: [Role]
Games this week: [SoloQ: X | Scrims: X]
Key metrics (7-day):
- Win rate: XX% (SoloQ) / XX% (Scrims)
- CS/min: X.X (↑/↓/→ vs. last week)
- KDA: X.X (↑/↓/→ vs. last week)
- DPM: XXX (↑/↓/→ vs. last week)
- KP: XX%
- Vision: X.X/min
What improved: [1 sentence]
What to watch: [1 sentence]
Focus for next week: [1 specific action]
Team review page
Team: [Name]
Week of: [Date range]
Team win rate: XX% SoloQ / XX% Scrims
Avg game duration: XX min
Total roster games: XXX
Patterns:
- [Pattern 1]
- [Pattern 2]
Action items for next week:
1. [Action]
2. [Action]
3. [Action]
That's it. If your review doc is longer than this, you're over-engineering it. The point is to drive action, not to write a thesis.
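If you want the player page filled in automatically, it's a few lines of string formatting over the Phase 1 pull. A minimal sketch; the metric keys are assumptions carried over from the earlier examples:

```python
def arrow(now, before):
    """Trend marker comparing this week's value to last week's."""
    if now is None or before is None:
        return "→"
    return "↑" if now > before else "↓" if now < before else "→"

def player_page(name, role, m, prev):
    return "\n".join([
        f"Player: {name}",
        f"Role: {role}",
        f"Games this week: [SoloQ: {m['games_soloq']} | Scrims: {m['games_scrim']}]",
        "Key metrics (7-day):",
        f"- Win rate: {m['wr_soloq']:.0%} (SoloQ) / {m['wr_scrim']:.0%} (Scrims)",
        f"- CS/min: {m['cs_per_min']:.1f} ({arrow(m['cs_per_min'], prev['cs_per_min'])} vs. last week)",
        f"- KDA: {m['kda']:.1f} ({arrow(m['kda'], prev['kda'])} vs. last week)",
        f"- DPM: {m['dpm']:.0f} ({arrow(m['dpm'], prev['dpm'])} vs. last week)",
        f"- KP: {m['kp']:.0%}",
        f"- Vision: {m['vision_per_min']:.1f}/min",
    ])

print(player_page("Ava", "Mid",
    {"games_soloq": 24, "games_scrim": 10, "wr_soloq": 0.58, "wr_scrim": 0.50,
     "cs_per_min": 7.6, "kda": 3.1, "dpm": 512, "kp": 0.61, "vision_per_min": 0.9},
    {"cs_per_min": 8.1, "kda": 3.6, "dpm": 498}))
```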
Common mistakes coaches make in weekly reviews
Reviewing without data
"I feel like you've been playing worse" is not coaching. It's a vibe check. Players will either agree out of politeness or push back because the feedback isn't grounded in anything concrete. Always lead with numbers, then add context.
Only focusing on problems
If every weekly review is a list of things that went wrong, players will start dreading them. And they'll stop being honest about their own struggles because they expect criticism. Start with what improved. Then address areas to work on. The ratio matters. This isn't about being soft; it's about keeping players engaged in the process long-term.
Comparing players to each other
"Your CS/min is worse than [teammate]'s" creates resentment, not improvement. Compare players to their own past performance. "Your CS/min dropped from 8.1 to 7.6 this week" is about the player. "You're the worst farmer on the team" is about the hierarchy. One is useful. The other is destructive.
Giving too many action items
Three focus areas per player maximum. One is ideal. When a player has seven things to work on, they work on zero of them. Prioritize ruthlessly. What's the single change that would have the biggest impact on this player's performance right now?
Not following up
The most common failure. You do a great review on Monday, assign focus areas, and then never mention them again until next Monday. Mid-week check-ins don't need to be formal. A quick "how's the CS practice going?" in Discord is enough. The player needs to know that the focus area matters beyond the review meeting.
Adjusting the framework by team level
Amateur teams (no coaching staff)
If you're the team captain running reviews yourself, simplify. Skip the individual write-ups. Pull up the team dashboard, spend 20 minutes going through the key metrics together on a Discord call, and agree on 1-2 team-level actions for the week. That's more than 90% of amateur teams do.
Semi-pro teams (coach + small staff)
The full 4-phase framework works here. The head coach runs the review, shares findings with any assistant coaches or analysts, and delivers individual feedback in 1-on-1s. Keep the team-level review as a group meeting.
Organizations with multiple rosters
Each roster gets its own review cycle with its own coach. But the GM or director of performance should get a summary from each coach. The team review page is enough. This lets management spot org-wide patterns (e.g., all three rosters have declining scrim win rates this week) without micromanaging individual players.
When to deviate from the weekly cadence
Weekly reviews are the backbone, but some situations warrant breaking the rhythm:
- Before a tournament: switch to daily check-ins focused on scrim performance. The weekly cadence is too slow when you're days away from competition.
- After a roster change: the first 2-3 weeks with a new player need more frequent reviews as the team adjusts. Consider doing a mid-week mini-review in addition to the weekly one.
- During a break: if the team is on a scheduled break with reduced play, stretch to every two weeks. Don't force a review when there isn't enough data to make it meaningful.
- After a major loss: don't rush a performance review the day after a tournament elimination. Emotions run high. Wait 48 hours, pull the data, and do the review with a clear head.
Make it a habit, not an event
The first weekly review will feel awkward. The second will feel forced. By the fourth or fifth, it becomes part of how the team operates. The players start checking their own stats before the review because they know it's coming. The coaching staff starts spotting issues faster because they have a baseline to compare against.
That's the goal. Not a one-time analysis, but a rhythm that makes your team a little sharper every week.
With VictoryView, the data pull that used to take 45 minutes of tab-switching and spreadsheet work takes about 2 minutes. Every player's metrics are already centralized, trended over time, and filterable by queue type. You open the dashboard, you see the numbers, and you spend your time on analysis and coaching, not data entry.
Set up your roster, import your first week of matches, and run your first review. The team that reviews consistently is the team that improves consistently.