What to expect, what they’re really testing, and what a strong answer looks like — scored.
OpenAI PM interviews test reasoning about AI safety, developer experience, enterprise adoption, and product-safety trade-offs. OpenAI PMs must reason about how AI capabilities can be misused, think carefully about model behavior and guardrails, and understand both API-first developers and end users of consumer products like ChatGPT.
The question below was asked by OpenAI interviewers. The answer is graded on the five dimensions real PM interviewers use: structure, specificity, reasoning, decision quality, and delivery.
“How would you improve ChatGPT for users who use it daily for work?”
Daily work users have fundamentally different needs than casual users — they're running the same types of prompts repeatedly (drafting emails, reviewing docs, debugging code) and they need ChatGPT to know context that persists across sessions. The biggest friction for this segment is having to re-establish context every conversation.
I'd focus on two improvements:
First, persistent context profiles. Users should be able to set a 'work context' that persists: their role, their team's writing style, their preferred output format, and recurring projects they work on. When they start a new conversation, ChatGPT has this context without them pasting it in. This is partially addressed by custom instructions, but the current implementation is too free-form and not structured around project context.
Second, conversation templates. Recurring use cases (weekly status update, code review, meeting summary) should be templatable — users define the structure once and invoke it with a slash command. This reduces the cost of setting up the prompt each time and makes output more consistent.
I'd prioritize persistent context over templates because it has a higher impact-to-effort ratio. Templates benefit power users; context persistence benefits everyone who uses ChatGPT for work.
Success metric: session-start time for daily active users (the time between opening ChatGPT and sending a first meaningful message, measured by prompt length and specificity). If context persistence is working, users should be giving more specific, work-relevant prompts faster — not spending time on setup. Guardrail: privacy — any persistent context must be clearly visible, editable, and deletable by the user.
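The session-start-time metric described above could be computed from event logs. A minimal sketch in Python, assuming hypothetical session records with `opened_at`, `first_prompt_at`, and `first_prompt` fields (illustrative names, not OpenAI's actual telemetry schema):

```python
from datetime import datetime

# Hypothetical session records; field names are illustrative only.
sessions = [
    {"opened_at": datetime(2024, 5, 1, 9, 0, 0),
     "first_prompt_at": datetime(2024, 5, 1, 9, 2, 30),
     "first_prompt": "Draft the weekly status update for the infra team"},
    {"opened_at": datetime(2024, 5, 1, 14, 0, 0),
     "first_prompt_at": datetime(2024, 5, 1, 14, 0, 40),
     "first_prompt": "Review this PR description for clarity"},
]

def session_start_seconds(session):
    """Seconds from opening ChatGPT to sending the first message."""
    return (session["first_prompt_at"] - session["opened_at"]).total_seconds()

def mean_start_time(sessions):
    """Average session-start time across a cohort of sessions."""
    return sum(session_start_seconds(s) for s in sessions) / len(sessions)

print(mean_start_time(sessions))  # prints 95.0 for the sample data
```

In practice the answer also proposes weighting this by prompt length and specificity, which would require a separate classifier or heuristic over `first_prompt`; the sketch shows only the timing component.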
Structure: Identifies the daily-user pain (context re-establishment), proposes two solutions, and prioritizes between them with clear reasoning.
Specificity: Names specific context fields (role, writing style, output format), slash-command templates, and a session-start-time metric.
Reasoning: The "context persistence benefits everyone; templates benefit power users" prioritization logic is clean.
Decision quality: Commits to a priority order with a clear rationale; the privacy guardrail shows awareness of the sensitivity of persistent context.
Delivery: Well-paced; the privacy guardrail at the end is appropriately brief.
This is a solid answer because it correctly identifies context re-establishment as the daily-user pain rather than proposing random feature additions. The persistent context vs. template prioritization logic is well-reasoned. The success metric (session-start time measured by prompt specificity) is non-obvious and creative. The weakness: the answer doesn't acknowledge that OpenAI is already building Memory and custom instructions — the interviewer will ask how this differs from what's already shipped.
Explicitly differentiate persistent context profiles from OpenAI's existing Memory feature — for example, noting that Memory is retroactively built from conversations, while context profiles are user-configured proactively for specific work contexts.
Paste any OpenAI PM interview question and your answer. Get scored on the same five dimensions — instantly, free, no signup.
Grade my answer free → First grade is free. No account needed.