What to expect, what they’re really testing, and what a strong answer looks like — scored.
Growth, engagement metrics, social graph, and monetization. Meta PMs are expected to reason about scale (billions of users), think about network effects, and be comfortable with ambiguous trade-offs between engagement and revenue.
The question below was asked by Meta interviewers. The answer is graded on the five dimensions real PM interviewers use: structure, specificity, reasoning, decision quality, and delivery.
“How would you improve Facebook's News Feed?”
Before jumping to solutions, I want to clarify: are we optimizing for engagement, time well spent, or ad revenue? I'll assume 'improvement' means increasing meaningful engagement, i.e. content that users interact with and report as valuable, without increasing passive scrolling time, since meaningful social interaction is the outcome Meta has publicly said it optimizes for.
I'd start by identifying the key user segments. News Feed serves very different users: people who check in daily for updates from close friends, people who use it as a news source, and light users who only open it when notified. The improvements will differ by segment.
For the highest-impact segment — daily social users — the biggest friction I'd target is stale content. The feed often shows posts from days ago because the algorithm over-indexes on 'likes' as an engagement signal, which favors viral public content over timely content from close connections. My proposed change: introduce a 'Close Friends freshness' ranking boost that surfaces content from the top 15 people a user interacts with, posted in the last 24 hours, even if that content has low overall engagement. Hypothesis: users who see timely content from close connections will initiate more conversations (comments, DMs), which is a higher-quality signal than passive likes.
I'd measure success with three metrics: comment rate on close-friend posts (primary), DM thread initiations sourced from Feed (secondary), and daily active user retention at 7 days (guardrail — I don't want to cannibalize retention). I'd run an A/B test for 4 weeks, targeting the top engagement quartile first to avoid introducing noise from low-engagement users.
Tradeoffs: this ranking change could reduce ad impressions if it surfaces fewer viral posts, so I'd want the monetization team's sign-off before shipping broadly.
Structure: Clarifies the goal, segments users, proposes a specific change, defines metrics, and names tradeoffs. A clean PM framework.
Specificity: Names concrete metrics (comment rate, DM initiations), a testable hypothesis, and a measurable test window.
Reasoning: Explains why the algorithm over-indexes on likes and why close-friend content is a better signal, not just what to build.
Decision quality: Commits to one clear feature hypothesis rather than listing five options, and acknowledges the monetization tradeoff.
Delivery: Slightly long but justified by depth; could trim the setup paragraph without losing the core argument.
This answer is strong because it doesn't fall into the classic trap of listing ten random feature ideas. Instead it picks one hypothesis, ties it to a named user segment, and defines success metrics before proposing the test. The monetization tradeoff at the end shows the business awareness that separates senior candidates. The one weakness is delivery: the answer runs a bit long and could cut the setup paragraph without losing anything.
Cut the setup paragraph and open directly with the user segment and the core hypothesis to tighten delivery.
Paste any Meta PM interview question and your answer. Get scored on the same five dimensions — instantly, free, no signup.