Diagnostic: Full-Stack Survey
Overview
Before building a study plan, we needed to locate the knowledge edges. This session covered eleven rapid-fire questions spanning React performance, async patterns, JavaScript internals, algorithms, distributed systems, browser rendering, TypeScript/component API design, Postgres, auth debugging, and streaming responses.
Results: strong first-principles reasoning and good production instincts throughout. The recurring gaps are precise vocabulary (knowing the concept but not the name), and diagnostic speed (generating hypotheses faster than a narrowing strategy).
Questions
Q1 — React component re-renders
Question: A React component re-renders too often and causes performance problems. How do you diagnose and fix it?
Response summary: Question the premise first — do we have Performance recordings? Then identify what’s changing. Named three root causes: large unstable objects (depend on a smaller subset), inline functions/components (extract them), and expensive calculations (cache with useMemo).
Grade: A−
The instinct to validate the bug report before diving in is exactly principal-level behavior. Three common causes are correct.
Gap: React.memo as a tool to prevent re-renders when props haven’t changed wasn’t mentioned, and the React DevTools Profiler wasn’t named explicitly as the diagnostic tool.
Q2 — Search-as-you-type
Question: You’re building a search input that calls an API as the user types. What problems do you anticipate, and how do you handle them?
Response summary: Debounce; use input/change not keydown to support paste; empty-state UX; aria-live for screen readers; hybrid full-text + semantic search; offset vs cursor-based pagination; caching — use GET not POST, set Cache-Control, decompose Cookie into Vary-able headers.
Grade: A+
Most candidates stop at debounce. The GET vs POST caching point and the Vary header / cookie decomposition approach are rare and show real production experience.
Gap: Race conditions — what if response #2 arrives before response #1? The fix is AbortController to cancel the stale request, or comparing a request ID to ignore it. Classic frontend interview topic.
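A minimal sketch of the request-ID variant (search, onInput, and the delays are hypothetical stand-ins simulating network latency — not any real API):

```typescript
// Request-ID pattern: each keystroke bumps a counter, and a response
// is rendered only if it belongs to the latest request.
let latestRequestId = 0
let rendered = ''

// Hypothetical stand-in for the real API call; delayMs simulates latency.
async function search(query: string, delayMs: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, delayMs))
  return `results for ${query}`
}

async function onInput(query: string, delayMs: number): Promise<void> {
  const id = ++latestRequestId
  const result = await search(query, delayMs)
  if (id !== latestRequestId) return // a newer request superseded this one
  rendered = result
}

// "ca" is typed first but its response is slow; "cat" must still win.
await Promise.all([onInput('ca', 50), onInput('cat', 10)])
console.log(rendered) // "results for cat"
```

The AbortController variant additionally saves bandwidth by cancelling the stale request; the counter variant merely ignores its response.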
Q3 — == vs === in JavaScript
Question: Explain the difference between == and === — but assume I already know the basics. Teach me something I don’t know.
Response summary: == does type coercion; 0, false, null, undefined, '', [] are equivalent under ==. Practical application: == null as shorthand for === null || === undefined. Switch statements use ===.
Grade: B+
The == null trick is the right lead — the genuinely useful non-obvious thing. Good choice.
Gaps: [] is not == null — that list conflated “falsy” with == null, which are different. The coercion rules are more chaotic than “empty things are equivalent” ([] == false is true, {} == false is false). The correct takeaway is: the rules are too surprising to memorize, so always use === except for the == null trick.
What would push this higher: Object.is() — identical to === except on two cases: Object.is(NaN, NaN) is true (where === gives false) and Object.is(+0, -0) is false (where === gives true). React uses it internally for state comparison.
Q4 — Parallel API fetches
Question: You need to fetch data from three independent APIs before rendering a page. How do you do it, and what can go wrong?
Response summary: Sensitive API keys → fetch server-side. If there are data dependencies, serial fetches; otherwise parallel with Promise.all. Progressive loading with skeleton screens to improve perceived latency (with layout-shift caveat). API gateway to decouple at scale.
Grade: A
Server-side-for-credentials shows security awareness. The skeleton + layout shift caveat is exactly the UX/perf nuance that distinguishes principal-level thinking.
Gaps: AbortController cancels in-flight requests, but Promise.all already fails fast — don’t conflate the two. AbortController is for preventing wasted bandwidth after a failure, not for implementing fail-fast behavior.
Promise.allSettled wasn’t mentioned — often the better choice when partial success is acceptable. Also worth knowing:
- Promise.race — first settled wins; its niche use is timeouts, now mostly superseded by AbortSignal.timeout()
- Promise.any — first successful wins; useful for CDN mirror redundancy
Insight from follow-up: Promise.all(A, B.then(cache), C) failing fast doesn’t prevent B’s side effect — B is already in-flight when A rejects. Failing fast means stopping to observe the result, not stopping the work.
Q5 — Flat array to tree
Question: Given a flat array where each item has an id and optional parentId, write a function that builds a tree. What’s the time complexity?
Response summary: Build a Map<id, entry> in O(n). Link entries in a second O(n) pass. O(n) time, O(n) space.
Grade: A
Correct and clean. The two-pass HashMap approach is the canonical O(n) solution.
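A sketch of that two-pass approach (the Item and TreeNode shapes are hypothetical; here, entries with a missing parent fall back to the roots — the forest default debated in the follow-up):

```typescript
interface Item { id: string; parentId?: string }
interface TreeNode extends Item { children: TreeNode[] }

function buildTree(items: Item[]): TreeNode[] {
  // Pass 1: index every node by id — O(n).
  const byId = new Map<string, TreeNode>()
  for (const item of items) byId.set(item.id, { ...item, children: [] })

  // Pass 2: link each node to its parent — O(n).
  // Parentless (and orphaned) nodes become roots, yielding a forest.
  const roots: TreeNode[] = []
  for (const node of byId.values()) {
    const parent = node.parentId ? byId.get(node.parentId) : undefined
    if (parent) parent.children.push(node)
    else roots.push(node)
  }
  return roots
}

const tree = buildTree([
  { id: 'a' },
  { id: 'b', parentId: 'a' },
  { id: 'c', parentId: 'a' },
])
console.log(tree.length)              // 1
console.log(tree[0].children.length)  // 2
```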
Follow-up: missing parentId: If it’s a specific use case, throw or return a forest. If it’s a generic reusable algorithm, add an onParentNotFound callback — callers get control without requiring them to always pass a callback. The default behavior (when no callback is provided) should be explicit: silently dropping orphans risks silent data loss; attaching them to the root changes the tree shape. Name the default.
Better framing: Even a generic algorithm can have an opinionated default (return a forest) plus an optional strict mode (throw on orphans).
Q6 — Collaborative document editing
Question: Two users edit the same paragraph simultaneously. What happens and how do you handle it?
Response summary: The canonical solutions are CRDTs and consensus algorithms like Raft (Google Docs uses operational transformation at the level of per-keystroke operations). Without the team or experience to build this, two paths:
- Buy instead of build — use a library or service if it’s not the core value prop
- Reduce scope — mutex-lock paragraphs, or build on simpler infrastructure like pubsub
Grade: A
Self-awareness about knowledge boundaries is a principal-level trait that builds trust. Buy-vs-build judgment shows business thinking alongside technical thinking.
Gap: The mutex approach needs a user-facing tradeoff named explicitly: user B sees a disabled state or spinner while user A holds the lock. Always think about the person at the keyboard.
What would sharpen this: “I don’t have experience to own this” lands better with a path forward — spike on Yjs for a week, bring in a consultant, or prototype the mutex version with a migration plan.
Context on the options:
- Yjs — production-battle-tested CRDT library powering Notion. First-class integrations with ProseMirror, Tiptap, CodeMirror. A working spike with a WebSocket provider takes an afternoon.
- Automerge — more academically rigorous (Ink & Switch). Automerge 2.0 rewrote the core in Rust/WASM, closing the performance gap.
The interview-ready version: “I’d prototype with Yjs — it has good library integrations and I could have a working spike in a day.”
Q7 — Browser rendering performance
Question: A page feels janky — animations stutter, scrolling isn’t smooth. How do you diagnose it?
Response summary: If reproducible locally, take a Performance profile and look at the flame chart. If not reproducible locally, check telemetry (Sentry, Honeycomb) for patterns. Likely culprits: mouse event handlers, data processing >16ms after a network response.
Grade: Hit the knowledge edge early.
Follow-up: reflow vs repaint vs composite
These are the three layers of the browser rendering pipeline:
- Reflow (layout) — recalculates geometry (which pixels does this element occupy?). Triggered by changes to width, height, margin, font-size, DOM insertions, etc. Expensive because it cascades — one element can force its siblings, parents, and descendants to re-layout.
- Repaint — recalculates appearance without geometry (color, background, box-shadow, visibility). Cheaper, but still CPU work.
- Composite-only — transform and opacity skip both reflow and repaint. Handled entirely by the GPU compositor on a separate layer.
Interview-ready rule: Prefer transform and opacity for animations — they’re compositor-only. Avoid animating width, height, top, left, or anything that triggers layout.
Q8 — Polymorphic design system components
Question: A team wants your Button component to accept a to prop so it renders as a router link. How do you design the API?
Response summary: First ask: is this a button acting as a link, or a link styled as a button? If it’s navigational, prefer a Link component with a button CSS class rather than modifying Button. If the behavior is truly button-like (e.g. micro-frontend loading), use the as prop pattern common in many component systems — pass the router link component as the root element. Watch for breaking changes: any prop you extract from Button’s prop bag rather than forwarding to the root element becomes a backwards-incompatible surface.
Grade: A
Links-vs-buttons instinct is exactly right. The as prop pattern is the correct answer. Breaking-change warning shows library authorship experience.
Gap — TypeScript polymorphic components: The as prop pattern loses type safety with a naive implementation. as?: React.ElementType doesn’t infer the props of whatever as is set to. The correct solution is a generic type parameter:
type ButtonProps<T extends React.ElementType> = {
as?: T
} & React.ComponentPropsWithoutRef<T>
This infers the props of T, making to required (and typed) when as={NavLink}.
Q9 — Slow Postgres query
Question: A query that used to be fast on a table with millions of rows is now slow. How do you diagnose and fix it?
Response summary: First verify the exact same query was fast before — rule out a recent query change. If unchanged: lost index, plan flip, or need vacuum.
- Plan flip: Happens when summary statistics become disconnected from reality. The cost-based optimizer chooses plan A over B based on row count estimates; when the estimates are stale, it makes the wrong choice.
- Vacuum: Defragments the table so more real rows fit per page. MVCC means Postgres never overwrites rows — every UPDATE creates a new row version and leaves the old one behind as a dead tuple. Vacuum reclaims that space. Without it, queries slow down scanning dead tuples.
Grade: B+
Plan flip explanation is solid. Vacuum answer missed the more important point: MVCC cleanup (dead tuples) is the primary reason vacuum matters for query speed.
Correction on index vacuum: Vacuum does process indexes to remove dead index entries. What you can’t reclaim without REINDEX is index bloat.
The missed tool: EXPLAIN ANALYZE is the canonical first step — it shows actual vs estimated row counts (stale statistics show up here as a large gap between the two) and reveals whether you’re doing sequential scans vs index scans. This is arguably the most practical thing in all of Postgres debugging.
Q10 — Unexpected logouts
Question: Users report being logged out unexpectedly. How do you diagnose it?
Response summary: How are sessions stored — cookies, localStorage? If cookies: what domain, path, are other cookies overwriting the session? What’s validating sessions — session ID in Redis, or cryptographic (JWT)? If JWT: is the JWKS key store becoming invalid, or is there a network issue reaching it?
Grade: B+
Systematic coverage of storage mechanisms, session backends, and cryptographic validation. The JWKS network trouble angle is sharp and shows real production debugging experience.
Gaps:
Two common cookie culprits weren’t named:
- SameSite attribute — a common source of “works in dev, breaks in prod” logout bugs, especially after a domain change or in an iframe
- Secure flag — cookie silently dropped on HTTP; catches people in mixed-content situations
On JWT: clock skew between issuer and validator (if the clocks drift more than a few seconds apart, exp validation fails) is a classic prod issue worth having ready.
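The usual mitigation is validating exp with a leeway window. A minimal sketch — isExpired and the 30-second leeway are illustrative, not any specific library's API (real libraries such as jose expose a similar clockTolerance option):

```typescript
// exp is a JWT expiry in Unix seconds; now is in milliseconds.
// The leeway absorbs small clock drift between issuer and validator.
function isExpired(expSeconds: number, nowMs: number, leewaySeconds = 30): boolean {
  return nowMs / 1000 > expSeconds + leewaySeconds
}

const exp = 1_700_000_000
console.log(isExpired(exp, (exp - 5) * 1000))  // false — not yet expired
console.log(isExpired(exp, (exp + 10) * 1000)) // false — within the leeway
console.log(isExpired(exp, (exp + 60) * 1000)) // true — past the leeway
```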
Bigger gap: Asking users for patterns before diving into the stack. Is it happening after a specific action? After backgrounding a tab? On mobile only? On a specific browser? Qualitative signal often cuts diagnostic time in half.
Q11 — Streaming CSV export
Question: Users want to export 100,000 rows to CSV. The naive implementation times out. How do you design it?
Response summary: At 100k rows, timeouts are likely because we’re buffering the entire CSV before sending. Fix: HTTP streaming — write rows to the response as they come from the database. If that’s still not enough, or for orders-of-magnitude more data: background job with a job ID, status polling or SSE, deliver the result from S3 or another blob store.
Grade: A−
HTTP streaming as the first move is correct and often overlooked. The background job + blob store escalation is right.
Deeper dive: HTTP streaming in Node/Express
res.setHeader('Content-Type', 'text/csv')
res.setHeader('Content-Disposition', 'attachment; filename="export.csv"')
// Node/Express handles Transfer-Encoding: chunked automatically
res.write(csvChunk)
res.end()
Key points:
- No Content-Length — you don’t know it upfront; its absence tells the client the response is chunked
- Transfer-Encoding: chunked, not Content-Encoding — Content-Encoding is for compression (gzip, br); Transfer-Encoding describes how bytes are framed in transit. (HTTP/2 drops Transfer-Encoding entirely; its native framing handles streaming.)
- Backpressure: res.write() returns false when the buffer is full. Don’t poll with setTimeout — listen for the drain event and pause/resume the database cursor:
const ok = res.write(chunk)
if (!ok) {
dbCursor.pause()
res.once('drain', () => dbCursor.resume())
}
In practice, you rarely write this manually. pipe handles chunking, backpressure, and closing automatically (though it does not propagate errors between streams):
dbCursor.pipe(csvTransformStream).pipe(res)
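Because .pipe() doesn’t propagate errors across stages, modern Node code prefers stream.pipeline, which destroys every stream and surfaces the first error. A self-contained sketch — the in-memory rows and CSV transform are stand-ins for the real database cursor and formatter:

```typescript
import { Readable, Transform, Writable } from 'node:stream'
import { pipeline } from 'node:stream/promises'

// Stand-in for a database cursor stream.
const rows = Readable.from([
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Lin' },
])

// Stand-in for the CSV formatter: one object in, one CSV line out.
const toCsv = new Transform({
  objectMode: true,
  transform(row, _enc, cb) {
    cb(null, `${row.id},${row.name}\n`)
  },
})

// Collects output in memory here; in the real route this would be `res`.
const chunks: string[] = []
const sink = new Writable({
  write(chunk, _enc, cb) {
    chunks.push(chunk.toString())
    cb()
  },
})

// pipeline wires up backpressure and, unlike .pipe(), destroys all
// streams and rejects the promise if any stage errors.
await pipeline(rows, toCsv, sink)
console.log(chunks.join('')) // "1,Ada\n2,Lin\n"
```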
UX gap: If the user closes the tab mid-export, does the job continue? If they refresh, can they resume? These matter at scale — the background job pattern handles both.
Deep Dive: Undo/Redo Mechanics
Discussed during Q5 follow-up on tree-building.
The algorithm
An undo/redo stack isn’t just an array with a pointer — the truncation rule on new input is critical:
["cat", "cats", "catso"] @ index 2
User types “n”:
- Wrong:
["cat", "cats", "catso", "catson"] @ 3— now redo has two possible futures - Correct: truncate above pointer, push new state →
["cat", "cats", "catson"] @ 2
User hits undo: ["cat", "cats", "catson"] @ 1 (value becomes “cats”)
User types “x”: truncate above pointer, push → ["cat", "catsx"] @ 1
Without truncation, you get branching history — a different (much harder) feature.
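A minimal sketch of the pointer-plus-truncation structure — a value-snapshot stack, not the command pattern, and the class name is illustrative:

```typescript
class UndoStack<T> {
  private states: T[]
  private index = 0
  constructor(initial: T) { this.states = [initial] }

  get current(): T { return this.states[this.index] }

  push(state: T): void {
    // Truncate any redo states above the pointer, then append.
    this.states = this.states.slice(0, this.index + 1)
    this.states.push(state)
    this.index++
  }

  undo(): T {
    if (this.index > 0) this.index--
    return this.current
  }

  redo(): T {
    if (this.index < this.states.length - 1) this.index++
    return this.current
  }
}

const s = new UndoStack('cat')
s.push('cats')
s.push('catso')
s.undo()          // value back to 'cats'
s.push('catsn')   // truncates the abandoned 'catso'
console.log(s.current) // "catsn"
console.log(s.undo())  // "cats"
```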
Edge cases interviewers expect
- Stack size limit — do you cap it? At what size? Drop from the bottom?
- Keystroke grouping — push on every keystroke and undo feels broken (“why did that only delete one character?”). Most editors group keystrokes within ~500ms into one undo entry.
- Server integration — when do you persist? What if the server rejects a value (validation error)? Does that affect the undo stack?
- Command pattern vocabulary — entries as { do, undo } pairs rather than value snapshots; overkill for a simple input, but the name is worth knowing.
Identified Gaps
| Area | Gap | Priority |
|---|---|---|
| Communication | Precise vocabulary: reflow/repaint/composite, Transfer-Encoding vs Content-Encoding, MVCC, JWKS | High |
| Communication | Lead with the conclusion; signpost your reasoning | High |
| Communication | Name the diagnostic tool, not just the hypothesis | High |
| React | React.memo, React DevTools Profiler by name | Medium |
| Async | Race conditions in search; AbortController pattern | Medium |
| JS | Object.is() and its connection to React state comparison | Low |
| Postgres | EXPLAIN ANALYZE as first diagnostic step | Medium |
| Auth | SameSite, Secure cookie flags; clock skew in JWT | Medium |
| Rendering | Reflow vs repaint vs composite vocabulary | Medium |
| Node streams | drain event and backpressure pattern | Low |
Overall Strengths
- First-principles reasoning — consistently reached the right answer through reasoning even without pattern recognition
- Productive pushback — challenged the premise on Q1, caught a misstatement on Q5, and refined Q4 around side effects; interviewers at the principal level look for this
- Production instincts — caching headers, Vary, skeleton screens with layout-shift caveat, API gateway — all show real-world experience, not just textbook knowledge
- Buy-vs-build judgment — naming “prove the product first” as a valid strategy shows business thinking