What to expect, what they’re really testing, and what a strong answer looks like — scored.
Developer and team collaboration tools (Jira, Confluence, Trello), enterprise software, and the integration ecosystem. Atlassian PMs must understand software development workflows, the needs of engineering teams vs. project managers, and how to build products that work across diverse team structures.
The question below was asked by Atlassian interviewers. The answer is graded on the five dimensions real PM interviewers use: structure, specificity, reasoning, decision quality, and delivery.
“How would you improve Jira's sprint planning experience for remote engineering teams?”
Sprint planning has two distinct pain points for remote teams: async coordination (team members in different timezones can't plan together in real time) and estimation accuracy (story point estimates are inconsistent when engineers aren't in the same room).
I'd focus on estimation accuracy because it has a clearer product solution. Async coordination is primarily a process problem; estimation accuracy is a product problem.
The core issue: when teams estimate asynchronously in Jira, estimates are anchored to whoever responds first (anchoring bias). The engineer who comments first on a ticket with 'I think this is a 3-pointer' will cause everyone else to gravitate toward 3. This makes async estimation less accurate than synchronous planning poker.
Proposed feature: blind async estimation. When a sprint planning session is initiated in Jira, engineers are shown each ticket and asked to submit their story point estimate privately. Estimates are hidden until everyone on the sprint team has responded (or a 24-hour window expires). Only then are estimates revealed and the median shown as the suggested estimate.
This is the async equivalent of planning poker's simultaneous card reveal, which eliminates anchoring.
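The reveal rule (hold estimates until everyone has responded or the 24-hour window expires, then surface the median) can be sketched in a few lines. This is a minimal illustrative model, not real Jira functionality; the `BlindEstimationRound` class and its method names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from statistics import median

REVEAL_WINDOW = timedelta(hours=24)  # the 24-hour expiry from the proposal


@dataclass
class BlindEstimationRound:
    """One ticket's blind estimation round (hypothetical model, not a Jira API)."""
    ticket_key: str
    team: set                     # account IDs expected to estimate
    opened_at: datetime
    estimates: dict = field(default_factory=dict)  # account ID -> points, kept hidden

    def submit(self, member: str, points: int) -> None:
        # Estimates stay private until reveal; nothing is shown to teammates.
        self.estimates[member] = points

    def can_reveal(self, now: datetime) -> bool:
        # Reveal once everyone has responded, or when the window expires.
        everyone_in = self.team <= set(self.estimates)
        expired = now - self.opened_at >= REVEAL_WINDOW
        return everyone_in or expired

    def reveal(self, now: datetime):
        if not self.can_reveal(now):
            raise RuntimeError("estimates are still hidden")
        # Median of submitted points becomes the suggested estimate.
        suggested = median(self.estimates.values())
        return dict(self.estimates), suggested
```

The key design choice is that `can_reveal` is the only gate: no read path exists before it returns true, which is what removes the anchoring opportunity.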
Success metric: estimate variance within a team — the standard deviation of story point estimates across team members per ticket. Lower variance after implementation indicates the team is converging on estimates more consistently. Secondary: sprint velocity accuracy — do teams complete what they plan? This is the downstream outcome that better estimation should improve.
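The leading metric above is concrete enough to compute. A minimal sketch, assuming per-ticket estimates arrive as a mapping of ticket key to the list of each member's story points (a hypothetical data shape, not Jira's):

```python
from statistics import mean, pstdev


def estimate_variance(per_ticket_estimates: dict) -> float:
    """Mean per-ticket standard deviation of story points across team members.

    per_ticket_estimates: {ticket_key: [points from each member]} (assumed shape).
    A lower value after launch suggests the team is converging more consistently.
    """
    # Only tickets with at least two estimates have a meaningful spread.
    spreads = [pstdev(points)
               for points in per_ticket_estimates.values()
               if len(points) >= 2]
    return mean(spreads) if spreads else 0.0
```

For example, a team that estimates every ticket identically scores 0.0, so the metric moves in the right direction as blind estimation narrows disagreement.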
Structure: Two-pain-point frame, focuses on the product-solvable one, diagnoses the root cause (anchoring bias), and proposes a direct solution.
Specificity: Names anchoring bias specifically, the blind reveal mechanic, the 24-hour window, and estimate variance as the metric.
Reasoning: The planning poker analogy is correct and the anchoring bias mechanism is well explained.
Decision quality: Correctly identifies the product-solvable pain and focuses there; estimate variance is the right leading indicator.
Delivery: Tight and efficient; the anchoring bias explanation is appropriately brief.
Near-top-tier answer. Naming anchoring bias as the mechanism behind inconsistent async estimates is a non-obvious, correct insight that demonstrates both PM and behavioral economics literacy. The blind reveal mechanic follows logically from the diagnosis. The estimate variance metric is the right leading indicator. The one gap: the answer doesn't address whether Jira has the data infrastructure to show who has and hasn't submitted estimates yet — the UX of 'waiting for all responses' needs design consideration.
Add one sentence on the UX of the waiting state — how does a PM running the sprint see who still needs to estimate, and how do they nudge people without revealing estimates early?
Paste any Atlassian PM interview question and your answer. Get scored on the same five dimensions — instantly, free, no signup.
Grade my answer free → First grade is free. No account needed.