Anila Alexander
is in pursuit of
the next great problem.

I research the things we're still figuring out: how humans adapt to AI, how tools shape behavior, how products earn trust. I have spent eight years tackling those problems at Meta, Google, and The New York Times. Outside of industry, I have taught product, data, and UX as an instructor at The New School's Eugene Lang College of Liberal Arts and at General Assembly.

Work
2025–Present
Meta
Research and Strategy
Defining long-term product strategy for next-generation VR / AR / XR platforms.
XR Platforms · Hardware · Roadmap Strategy
2021–2025
Google
UX Research and Program Lead
Led two multi-year 0→1 generative AI research programs from early exploration through production launch.
Generative AI · HITL · Mixed Methods · Patent · 0→1 · Google I/O 2024
2018–2021
The New York Times
Product Strategy & Research
Led vision, strategy, and development of mobile (iOS/Android), internal tools, and web products.
Mobile · Product Strategy · Mixed Methods · Experimental Design · 0→1
2016–2020
Teaching
Instructor
The New School, Eugene Lang College of Liberal Arts — Data Visualization Instructor (2016–2019)
General Assembly — Lead Product Management Instructor (2020)
Education
Hunter College
M.S. Applied Digital Sociology
Pratt Institute
UX/UI Mobile Design Certificate
New York University
B.A. Journalism and Politics

© 2026 Anila Alexander

Resume

Curriculum Vitae

UX Researcher · Product Strategist · 8+ years

I have spent eight years in pursuit of hard problems across research, product, and strategy. My background does not fit neatly into one lane because the work rarely does. At The New York Times I was a product manager who ran research. At Google I was a researcher who drove product direction. At Meta I do both simultaneously. I go where the problem is and do what it needs.

I specialize in generative AI, human-in-the-loop systems, and emerging technologies. I have co-invented a patent, launched products at Google I/O, built research programs from scratch, and taught what I know along the way.

Meta · 2025–Present
UX Research & Strategy
  • Define long-term product strategy for next-generation VR/AR/XR platforms, shaping roadmap priorities and investment decisions for developers and creators.
  • Leverage AI-powered literature reviews and competitive analyses to enable rapid prioritization of high-impact developer tools.
Google · 2021–2025
UX Research & Program Lead, Generative AI
  • Co-invented and patented a human-in-the-loop GenAI solution (US20250209307A1), improving system reliability, quality, and user trust in AI-assisted workflows.
  • Owned research strategy for a GenAI model visualization product launched at Google I/O 2024, driving a 25% increase in adoption.
The New York Times · 2018–2021
Product Strategy & UX Research
  • Led product strategy and development of an Elections tab that served 1M+ weekly active users during the 2020 U.S. presidential election.
  • Led 0→1 development and launch of a newsroom CMS, introducing new content formats for live news.
The New School, Eugene Lang College of Liberal Arts · 2016–2019
Instructor
  • Taught 90+ college students web development and data visualization with a focus on HTML, CSS, and JavaScript.
  • Wrote all curriculum materials including syllabus, presentation decks, in-class labs, and sample projects.
Peter G. Peterson Foundation · 2017–2018
Product Strategy & UX Research
  • Led product strategy, end-to-end development, and release of a suite of digital products in the nonprofit space.
  • Led redesign of HTML email newsletters driving a 20% increase in CTR. Launched five micro-sites for pgpf.org and fiscalsummit.org.
General Assembly · 2020
Lead Product Management Instructor
  • Taught 30+ adult learners the product development lifecycle, UX design, agile methodologies, and experimentation.
  • Held a consistent 4+ student rating.
Zelis · 2015–2016
Product & UX Strategy
  • Led development and release of custom configurations for a SaaS healthcare search solution.
  • Launched four client websites for national healthcare companies.
Education
  • Hunter College — M.S. Applied Digital Sociology
  • Pratt Institute — UX/UI Mobile Design Certificate
  • New York University — B.A. Journalism and Politics
Get in touch →
Meta · Sept 2025–Present

XR Developer Tools Research

Lead UX Researcher · XR / VR / MR · Developer Experience · Simulation & Performance Tools

When I joined Meta, the simulation and performance tools space had limited prior research coverage. A major hardware launch (Phoenix) was coming in 2026, and the teams building developer tools needed to make high-stakes roadmap decisions with thin evidence.

I did not wait to be assigned problems. I proactively mapped the landscape, identified the gaps, and built a research foundation from scratch across three connected studies. Each one informed the next. The arc was deliberate: understand the terrain, situate Meta in the competitive landscape, then validate the newest tool at the moment it launched.

The result: a direct line from my research to the H1 2026 DPP roadmap, XR Simulator 2.0 shipping with known issues documented and tracked, and 13 product features prioritized for fast follow.

Performance Tools in Developer Workflows

Literature Review · Strategic UXR

Both new and experienced developers use profiling tools to assess and optimize performance of VR, MR, and 2D applications. Key pain points included inconsistent terminology, steep learning curves, and a lack of actionable guidance to resolve issues. Research in this area was limited across Meta, so I started by building the knowledge base.

Synthesized approximately 15 internal research reports, Workplace posts, and quantitative sources, including H1 2025 DevX tracker scores, CSAT and perceived-reliability metrics, and usage analysis across developer segments. Used AI to accelerate the literature review, rapidly surfacing patterns across fragmented internal sources and synthesizing qualitative and quantitative signals into a single coherent narrative for stakeholders.

Key findings
  • Performance is continuous, not a one-time fix. Developers address issues both proactively and reactively throughout the build lifecycle, averaging 20 to 30 minutes per profiling session, excluding setup.
  • Tool satisfaction and reliability vary widely. OVR Metrics leads at 77% CSAT and 85% perceived reliability. Perfetto sits at 53%. RenderDoc %Bad is at 40% against a 20% EoY target, signaling significant UX debt.
  • Developer experience determines tool needs. New and generalist developers rely on lightweight tools like OVR Metrics. Experienced and specialized developers need RenderDoc and Perfetto, but these carry the highest onboarding barriers.
  • Performance Analyzer drives app approval rates. Only 3% of organizations used it before submission, yet those organizations saw 35% higher approval rates, signaling strong untapped value.
Recommendations
  • Improve onboarding by clearly communicating which tools are available, their value, and when to use them in the build process.
  • Standardize terminology and acronyms across all performance tools, especially for new developers.
  • Integrate AI-driven suggestions and automated root-cause identification to help developers diagnose and resolve issues faster.
  • Enhance tools to provide actionable guidance, not just flagging, particularly for developers building for Phoenix.

Simulation Tools as Strategic Assets

Literature Review · Competitive Analysis · Social Listening

Simulation tools like XR Simulator and the upcoming Spatial Simulator are foundational to Meta's developer ecosystem. With Phoenix launching in 2026, these tools needed to support a new wave of developers building MR and 2D apps. Adoption lagged, satisfaction was declining, and the competitive landscape was shifting fast. The team needed a clear picture of where Meta stood and where to invest.

Three-method approach: a literature review of approximately 15 internal Workplace posts and research reports; a competitive analysis comparing Meta XR Simulator against Apple visionOS Simulator and Android XR Emulator across technical capabilities, developer experience, and product-market fit; and social listening via Vocal, synthesizing developer feedback from Reddit, YouTube, and Meta Community forums. Used AI to run and synthesize the social listening prompts at scale, enabling rapid competitive intelligence across a large volume of unstructured developer feedback that would have taken significantly longer to analyze manually.

Key findings
  • Simulation tools attract 3D developers, but mobile developers remain hard to convert. 47% of game developers globally focus on 3D development and are well positioned to upskill for XR. Mobile developers (16.6M worldwide) prioritize platforms with large user bases and strong monetization.
  • XR Simulator drives publish rates, but adoption lags. XRSim has a +0.1% effect on app publishing, yet only 40% adoption within 6 months, below Link's 50%. Poor discoverability, a steep learning curve, and weak documentation are the primary barriers. WATM@14 sits at 18.27% against a 24% target.
  • Spatial Simulator is a major relief for Android developers. Early dogfooding feedback described dramatically faster iteration speeds: no more don/doff, no dead batteries, development anywhere on the go. Primary research is planned for H1 2026.
  • Competitors set a baseline expectation. Apple leads on polish and Xcode integration. Android leads on flexibility. Meta leads on MR and multiplayer testing but faces the most usability challenges of the three platforms.
Recommendations
  • Invest in tailored onboarding and migration guides for Android and mobile developers moving to Spatial Simulator.
  • Overhaul documentation with guides, videos, and sample projects, and improve XR Simulator discoverability on the Dev Center.
  • Market XR Simulator's MR and multiplayer testing capabilities as differentiators for Phoenix 2026.
  • Redesign onboarding to deliver immediate value and reduce friction, addressing the low WATM@14 retention rate.

XR Simulator 2.0 Usability Study

Evaluative UXR · Usability Testing

XR Simulator 2.0 was launching in December 2025. The team needed to assess the value of the new version, identify fast follows for the product roadmap, and mitigate risks for gaze and hands inputs ahead of Phoenix. This was the first primary research on the tool. I designed and executed the study independently, running sessions the week of launch.

Seven remote 60-minute sessions with experienced VR/AR developers who had used XR Simulator 1.0 to build at least one app. Five of the seven downloaded the 2.0 build when it became available on December 4, 2025. Sessions ran December 5 to 9. Visual stimuli included the publicly available 2.0 build, Figma prototypes, and concepts for activation toggle and panel behavior. Used AI to support rapid synthesis of session notes across seven participants, identifying themes and feature request patterns faster than manual affinity mapping alone, enabling the report to be delivered within days of the final session.

Key findings
  • 7 of 7 participants praised the new UI. The standalone app and clearer panel organization were described as less clunky, more intuitive, and accessible enough that "anybody can use this." 4 of 7 noted improved stability and fewer crashes compared to 1.0.
  • Versioning and discoverability remain blockers. 3 of 7 participants were confused by two coexisting versions (v81 and v83) and the split between Unity package manager and standalone website installation, creating friction at the moment of adoption.
  • In-tool documentation is valued but incomplete. Participants appreciated quick-access help docs but hit dead ends with advanced features like synthetic environments. 2 of 7 encountered dead-end documentation links.
  • Gaze and eye tracking readiness is low. Most participants had no hands-on experience with gaze inputs due to Quest 3/3S hardware limitations and EU privacy and regulatory concerns, creating uncertainty about Phoenix readiness.
  • Hand simulation is the top feature request. 6 of 7 participants requested custom, recordable, and uploadable hand gestures. 4 of 7 needed complex multi-hand interactions their apps required but could not simulate. Right and left hand role customization was flagged as critical for inclusivity.
Impact
  • XR Simulator 2.0 launched with research-informed fast follows
  • 13 product features identified for the roadmap
  • 3 usability bugs surfaced and tracked
Google · Jun 2021–May 2025

Generative AI Research Programs

Lead UX Researcher · Generative AI · Human-in-the-Loop · Mixed Methods · Q4 2021–May 2025

GenAI CX Support Solution

Generative AI · Strategic UXR · HITL · Q4 2021–Q4 2023

Google needed to scale its customer support agent workforce while maintaining high CSAT scores. The solution: a human-in-the-loop generative AI system where a support agent monitors a conversational AI chatbot speaking to a customer, intervening when the chatbot hallucinates or goes off-topic.

The core research question was: what is the role of a customer support agent as Google shifts from manual → assistive → supervised 1:1 support experiences?

Over two years I ran an iterative mixed-methods program spanning 25 studies with external customers and Google customer support representatives — from Beta through Full Launch.

Methods
  • Prototype testing, in-depth interviews, shadowing, focus groups, surveys, and qualitative analysis of chat transcripts — pre-Beta through MVP.
  • Service design and NASA-TLX cognitive load assessments to understand agent workload in multi-chat scenarios.
  • Post-launch: surveys, focus groups, shadowing, and ethnographic research conducted in Manila with frontline support agents.
Key findings
  • Advertiser needs: Customers arrive resolution-focused after self-help has failed. They expect short, precise conversations — "no lollygagging" — and want agents who read the room, not robotic scripted responses.
  • AI suggestion relevance is everything: Agents found AI suggestions frequently out of context with the customer's issue. Content relevance — shaped by job role, linguistic tone, conversation flow, and customer sentiment — became the key determinant of adoption.
  • Fear-based mental models block adoption: Agents operated under fixed, fear-based mental models — afraid of being penalized by QA auditors for mistakes the AI made. The research reframed the design challenge: shift agents from fixed → growth mindsets.
  • Cognitive overload in multi-chat: High cognitive load in real-time multi-chat scenarios created a "poverty of attention." Context switching, mental model mismatches, and information density all contributed to agent overwhelm.
Impact
  • 25% of agent messages automated
  • 2.7M 1:1 chats using automated messages by Q3 2023
  • $22M annual cost reduction

Model Explorer Visualization Tool

Non-LLMs · Dev Tools · Strategic UXR · Q4 2023–May 2025

ML researchers and engineers need to identify and debug architecture, quality, and performance issues in large models — especially for on-device deployments, where conversion and optimization can significantly alter a model from its original state. Existing tools could not render large models and had usability problems that made them a poor fit for modern model architectures.

The core research question: what role does visualization play in the model development and deployment process, and how does it fit into a larger tooling ecosystem for on-device ML developers?

Methods
  • Contextual inquiry with 1P and 3P on-device ML developers to understand workflows, pain points, and existing tool usage.
  • Usability testing of the prototype visualization tool, iterated ahead of a hard pre-I/O deadline.
  • Literature review, workshops, and design sprints to align cross-functional teams on product direction.
Key findings
  • Four core ODML stages: Developers follow Build → Adapt → Optimize → Release to launch models on device. Each stage has distinct tasks, evaluation targets, and tools, and developers frequently revisit earlier stages as they learn quality vs. performance trade-offs.
  • Visualization matters in 3 of 4 stages: Visualization plays a critical role in Build, Adapt, and Optimize — helping developers understand model architecture, identify debug areas, and communicate progress across teams.
  • The role shifts by stage: In Build, visualization tracks architectural changes across model versions. In Adapt, it helps developers onboard to new models post-quantization. In Optimize, it surfaces performance bottlenecks and supports inter-team alignment.
  • Top requested features: Side-by-side model graph comparison (currently requiring multiple browser tabs and mental snapshots), richer per-node metrics, and tighter Colab integration to move seamlessly from development to debugging.
Impact
  • 25% adoption increase at Google I/O 2024
  • 934+ GitHub stars at launch
  • 109K+ views of the @GoogleAI launch tweet

Launched at Google I/O 2024. Covered by VentureBeat and Hacker News. Over 61.8k unique GitHub site visitors and 11k developer site visits in the weeks following launch.

The New York Times · Jul 2018–Jun 2021

Mobile Products & Elections Tab

Product Manager · iOS / Android / Web · 0→1

Led the vision, strategy, and development of user-facing mobile and web products at The New York Times. Spearheaded the Elections tab — a flagship 0→1 feature that acquired over 1 million weekly active users.

  • Launched the Elections tab for iOS and Android, acquiring 1M+ weekly active users.
  • Enhanced the story page experience, resulting in a 4–7% increase in CTR.
  • Partnered with Data Science on A/B testing strategies, leading to a 10% lift in user engagement.
  • Scaled adoption of a new CMS across the newsroom, empowering journalists to cover breaking news faster.
Impact
  • 1M+ weekly active users
  • 10% lift in engagement
  • 4–7% CTR increase
Thinking

Essays & Notes

Published on After Legibility ↗

Subscribe to get new posts delivered to your inbox.

Subscribe on Substack →
Miscellaneous

Other things

Patent — US20250209307A1
Human-in-the-Loop Generative AI Solution · Google · 2025
LinkedIn
linkedin.com/in/anilaalexander
After Legibility — Substack
afterlegibility.substack.com
Speaking & Collaboration
Open to research talks, advisory, and hard problems
Contact

Let's talk about
the next problem.

Open to research collaborations, speaking invitations, advisory conversations, and connecting with teams working on hard problems in AI, immersive tech, or human-centered systems.

Connect on LinkedIn →