An Under-the-Radar AI Play: Why Marvell Technology (MRVL) Could Be the Next Big Winner in AI
Introduction
Artificial Intelligence (AI) has rapidly evolved from a niche research discipline into the centerpiece of modern technological innovation. From generative AI models powering natural language interfaces to machine learning algorithms optimizing logistics and healthcare, AI is no longer a futuristic concept—it is today’s reality. While household names like NVIDIA, Microsoft, and Alphabet dominate the conversation, Bank of America (BofA) recently highlighted an unexpected star: Marvell Technology (MRVL). According to BofA, Marvell represents an under-the-radar AI play that has already doubled in value over the past six months, with significant room for further upside.
Editor’s Note (Updated September 2025): Since BofA’s earlier bullish positioning, the bank downgraded MRVL to Neutral following Q2 FY26 results and a slightly below‑consensus Q3 guide. Even so, BofA and other analysts continue to frame Marvell as a core AI-infrastructure lever with multi‑year custom‑silicon and optics tailwinds. This article digs into the technology, customers, and numbers so technically savvy readers can judge the long‑term thesis.
Marvell Technology (MRVL): A Semiconductor Powerhouse in the AI Era
At its core, Marvell Technology is a semiconductor company that designs and develops chips critical to data infrastructure. Unlike NVIDIA—which is best known for its GPUs used in AI training—Marvell specializes in chips that power networking, cloud storage, and custom AI accelerators. This positioning places Marvell at the intersection of AI compute and AI infrastructure, giving it a differentiated edge.
Why Marvell Is Under‑the‑Radar
Not a Household Name: Unlike NVIDIA or AMD, Marvell is less recognized by retail investors, despite its deep partnerships with hyperscale cloud providers.
Focus on Data Infrastructure: The company’s role in enabling faster, more efficient data movement is less glamorous than GPUs, but it is just as essential for scaling AI workloads.
Custom Silicon Solutions: Marvell designs custom silicon for large customers, including AI data centers. This provides long‑term contracts and customer lock‑in.
Fresh Financial Snapshot (FY26 year‑to‑date)
Q1 FY26 (reported May 29, 2025): Revenue $1.895B, non‑GAAP EPS $0.62; strong sequential and YoY growth led by data center.
Q2 FY26 (reported Aug 28, 2025): Record revenue $2.006B (+58% YoY), non‑GAAP EPS $0.67; operating cash flow $462M.
Q3 FY26 guidance (midpoint): Revenue $2.06B, non‑GAAP EPS $0.74; sequential growth expected to continue.
Revenue mix: Data center is trending toward roughly 75% of total revenue as AI optics and custom ASICs ramp.
(See attached charts: revenue trend, EPS trend, and revenue mix.)
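The growth arithmetic implied by the figures above can be sanity-checked in a few lines. This is a quick sketch using only the revenue and EPS numbers reported in this article; it computes sequential (quarter-over-quarter) changes, not the YoY figures the company reports.

```python
# Sanity-check the sequential growth implied by the reported figures above.
q1_rev, q2_rev, q3_guide = 1.895, 2.006, 2.06  # revenue in $B, from the article
q1_eps, q2_eps = 0.62, 0.67                    # non-GAAP EPS, from the article

def pct_change(new, old):
    """Percentage change from old to new."""
    return (new / old - 1) * 100

print(f"Q1->Q2 revenue growth: {pct_change(q2_rev, q1_rev):.1f}%")       # ~5.9%
print(f"Q2->Q3 guided revenue growth: {pct_change(q3_guide, q2_rev):.1f}%")  # ~2.7%
print(f"Q1->Q2 EPS growth: {pct_change(q2_eps, q1_eps):.1f}%")           # ~8.1%
```

Note that the guide implies decelerating sequential growth, which is consistent with the "slightly below-consensus" framing in the downgrade discussion below.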
The Technical Case for Marvell’s AI Chips
Scaling modern AI is as much a data‑movement problem as a compute problem. Marvell’s portfolio maps directly to the pain points of hyperscale AI clusters:
Electro‑Optics & Optical DSPs (PAM4, 800G/400G/200G): Coherent/linear optics and DSPs reduce power per bit and increase reach at rack‑to‑rack scale—vital for GPU pod fabrics. Marvell’s Inphi acquisition gave it leadership in this domain, now a growth engine as AI workloads push optical transceiver demand.
Custom AI ASICs: Application‑specific accelerators for inference/training adjacencies (networking offload, compression, security) with performance‑per‑watt advantages over general‑purpose compute for targeted tasks. These ASICs enable hyperscalers to avoid bottlenecks and lower TCO.
High‑Speed Ethernet (200/400/800G): Merchant Ethernet NICs/switch silicon and PHYs underpin scale‑out AI clusters, complementing proprietary interconnects like NVIDIA’s NVLink. Hyperscalers increasingly mix merchant Ethernet with proprietary fabrics for cost and scale reasons, putting Marvell in a sweet spot.
DPUs/XPUs & Offload Engines: Packet processing, storage, and security offloads free up GPU cycles and reduce TCO at the system level. This offloading is critical as clusters scale to tens of thousands of nodes.
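The data-movement pressure these products address can be made concrete with a back-of-envelope estimate of all-reduce traffic during distributed training. The model size, GPU count, and link rate below are illustrative assumptions, not figures from Marvell or any vendor; the ring all-reduce volume formula (each node moves roughly twice the payload) is standard.

```python
# Back-of-envelope: per-GPU traffic for one ring all-reduce of gradients.
# All numbers are illustrative assumptions, not vendor specifications.
params = 70e9            # hypothetical 70B-parameter model
bytes_per_grad = 2       # fp16/bf16 gradients
n_gpus = 1024            # assumed cluster size

grad_bytes = params * bytes_per_grad
# Ring all-reduce: each GPU sends/receives ~2*(N-1)/N of the payload.
per_gpu_bytes = 2 * (n_gpus - 1) / n_gpus * grad_bytes

link_gbps = 800                          # one 800G optical link
link_bytes_per_s = link_gbps * 1e9 / 8   # convert Gb/s to bytes/s
seconds = per_gpu_bytes / link_bytes_per_s
print(f"~{per_gpu_bytes/1e9:.0f} GB per GPU per step; "
      f"~{seconds:.1f} s over a single 800G link")
```

Even at 800G, a single gradient exchange takes seconds under these assumptions, which is why link rate, link count, and compute/communication overlap dominate cluster design.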
System‑Level Impact
Throughput: Optical interconnects ensure each GPU/accelerator in a cluster can exchange gradients and parameters efficiently, accelerating training convergence.
Latency: By reducing tail latency across network hops, Marvell’s solutions minimize stragglers during distributed training, improving overall utilization.
Power Efficiency: Lower picojoules/bit for optics directly translates to reduced power per token processed, a key metric for generative AI economics.
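The picojoules-per-bit metric scales linearly into cluster-level watts, which is why DSP efficiency matters at hyperscale. The pJ/bit values and link count below are hypothetical, chosen only to show the magnitude of the effect.

```python
# How picojoules-per-bit translates to cluster-level optics power.
# The pJ/bit figures and link count are hypothetical, for scale only.
pj_per_bit_old, pj_per_bit_new = 15, 5    # assumed legacy vs. improved DSP
link_gbps = 800
n_links = 50_000                          # assumed links in a large cluster

def optics_watts(pj_per_bit, gbps, links):
    # watts = (J/bit) * (bits/s): pJ -> J is 1e-12, Gb/s -> b/s is 1e9
    return pj_per_bit * 1e-12 * gbps * 1e9 * links

old_w = optics_watts(pj_per_bit_old, link_gbps, n_links)
new_w = optics_watts(pj_per_bit_new, link_gbps, n_links)
print(f"optics power: {old_w/1e6:.1f} MW -> {new_w/1e6:.1f} MW")
```

Under these assumptions a 3x pJ/bit improvement frees roughly 0.4 MW of facility power, budget that can go to additional accelerators instead.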
Concrete Use Cases
LLM Training Clusters: Tens of thousands of GPUs training large language models rely on Marvell’s 800G optics to maintain synchronization across pods.
Inference at Scale: Custom ASICs designed with hyperscalers accelerate recommendation models and search inference workloads with higher efficiency than GPUs.
Hybrid Cloud AI: Enterprises deploying AI across hybrid environments depend on Marvell’s Ethernet and security solutions for consistent performance and secure data flow.
Why it matters: Tokens/sec and tokens/J are gated by fabric bandwidth, tail latency, and memory movement. Marvell’s optics and Ethernet stack mitigates those bottlenecks at cluster scale, improving utilization of pricey accelerators.
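A toy roofline-style model makes the gating argument explicit: achieved token throughput is the minimum of what the accelerators can compute and what the fabric can feed. Every input value here is an illustrative assumption.

```python
# Toy roofline: achieved tokens/s is the lesser of the compute-bound
# and fabric-bound rates. All inputs are illustrative assumptions.
def tokens_per_sec(compute_tps, bytes_per_token, fabric_bytes_per_s):
    fabric_tps = fabric_bytes_per_s / bytes_per_token
    return min(compute_tps, fabric_tps)

compute_tps = 50_000        # assumed accelerator throughput in isolation
bytes_per_token = 4e6       # assumed activation/KV traffic per token
slow_fabric = 100e9         # ~800G-class aggregate, bytes/s
fast_fabric = 400e9         # 4x the fabric headroom

print(tokens_per_sec(compute_tps, bytes_per_token, slow_fabric))  # fabric-bound
print(tokens_per_sec(compute_tps, bytes_per_token, fast_fabric))  # compute-bound
```

In the slow-fabric case the cluster delivers half its compute potential; quadrupling fabric bandwidth moves the bottleneck back to compute. This is the mechanism behind the "utilization of pricey accelerators" point above.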
Why BofA (Earlier) Saw More Room to Run—and What’s Changed
Earlier stance: BofA previously framed Marvell as an under‑the‑radar AI infrastructure compounder, citing multi‑year visibility in custom silicon and electro‑optics.
What’s new (Q2 FY26): Following in‑line prints and a slightly below‑consensus revenue outlook, BofA downgraded MRVL to Neutral, flagging near‑term visibility questions even as multi‑year AI drivers remain intact. The long‑term thesis hinges on:
AI Revenue Inflection: AI‑linked revenue (custom silicon + optics) is set to remain the primary growth vector into FY27.
Hyperscaler Engagement: >50 active custom AI chip/design opportunities across 10+ customers; pipeline breadth supports durability.
Diversified End‑Markets: Recovery in enterprise networking and carrier infra adds ballast.
Relative Valuation: Despite a large AI premium across semis, MRVL’s multiple still prices in execution risk—creating upside if optics/ASIC ramps beat guide.
The Broader BofA AI Stock List (Context)
Beyond MRVL, BofA has highlighted several under‑the‑radar or non‑Mega‑Cap AI beneficiaries that round out the AI stack:
Datadog (DDOG) – Observability platform for AI‑era microservices; BofA has called it a top pick in software as agentic/AI‑ops expand.
Seagate (STX) – High‑capacity HDD with HAMR roadmap; essential for cold/warm AI data tiers.
Kyndryl (KD) – AI‑enabled infrastructure services and modernization deals; improving margin structure.
JFrog (FROG) – Binary management/DevOps backbone for AI‑accelerated release trains.
Why this matters for MRVL readers: These names complement MRVL along the pipeline: data generation → storage → movement → compute → observability. The structural demand for data movement (MRVL’s sweet spot) scales with the adoption of tools like DDOG and data platforms that STX underpins.
The Competitive Landscape: Marvell vs. NVIDIA vs. AMD
While NVIDIA remains the undisputed leader in GPUs for AI, Marvell operates in complementary segments:
NVIDIA dominates AI training compute.
AMD competes with GPUs and accelerators for AI.
Marvell dominates AI networking, custom ASICs, and optical interconnects.
Rather than competing head-on with NVIDIA, Marvell enables NVIDIA-powered clusters to scale more efficiently—a symbiotic relationship.
Risks to the Bull Case
Guide‑to‑guide volatility: Recent quarters show that in‑line prints without upside can trigger sharp drawdowns in AI‑heavy semis.
Customer concentration: Hyperscaler dependency increases deal timing and competitive risks (e.g., merchant vs. semi-custom alternatives).
Optics cycle dynamics: Supply constraints, linear vs. coherent mix, and price/bit trajectories can swing gross margin.
Competitive set: Broadcom, Intel, and select Asian ASIC/optics vendors target the same TAM.
Conclusion: Marvell as the Unsung Hero of the AI Boom
Marvell Technology won’t replace the GPU incumbents—but it amplifies them. If the world keeps scaling model sizes and context windows, optics, Ethernet, and custom offload become kingmakers. The latest quarter/guidance injected caution, yet the multi‑year fabric + ASIC story remains intact. For technologists and AI builders, MRVL is a bellwether for whether the industry can keep lowering dollars per token via smarter I/O and interconnect.
Appendix: Fresh Numbers & Visuals
Reported: Q1 FY26 revenue $1.895B, EPS $0.62; Q2 FY26 revenue $2.006B, EPS $0.67; Guide: Q3 FY26 revenue $2.06B, EPS $0.74.
Data center revenue mix: roughly 75%.
Pipeline: >50 active custom‑AI design engagements across 10+ customers.
See the attached charts (revenue trend, EPS trend, revenue mix).