

The Increasingly Important Role of Optical in Communication, Datacenter, and Now AI/ML Interconnect Networks, and the Related IEEE ComSoc Activities


Robert Schober
President
2024-2025


Loukas Paraschis


Vijay Vusirikala


Elaine Wong

Optical networks, systems, and interconnects have been foundational to global communication and data-center infrastructures. They are also one of the most active areas of research and development for the new Artificial Intelligence/Machine Learning (AI/ML) computational cluster interconnect networks. In this month’s column, Loukas Paraschis, Vijay Vusirikala, and Elaine Wong summarize recent related industry and academic efforts, with particular emphasis on related activities of the IEEE Communications Society (ComSoc).

Loukas Paraschis has over 25 years of industry experience, primarily in optical networking and interconnects, and the related technology and infrastructure evolution of data centers, the cloud, and the Internet. He is currently vice president at Cspeed, a start-up optimizing optical interconnects for AI/ML clusters. Until August 2024, Loukas was vice president in the connectivity products group at Alphawave Semi, which he joined in 2022 through the acquisition of Banias Labs, an optical DSP start-up. Before Banias, Loukas spent 5 years at Infinera as senior director and then CTO of cloud transport system engineering for Internet and content providers, and 15 years at Cisco, where he worked on many generations of optical and routing systems and wireline transport networks. Loukas has co-authored more than 100 peer-reviewed publications, invited or tutorial presentations, and book chapters, holds 10 patents, and has served in IEEE and OSA leadership positions, including the OFC and JOCN steering and long-range planning committees and as an IEEE Photonics Society Distinguished Lecturer (2009). He is an Optica (OSA) Fellow (elected in 2011). He studied at Stanford University (Ph.D. 1999, MS 1998), where he worked at the Ginzton, Information Systems, and Networking laboratories. He was born and raised in Athens, Greece, where he completed his undergraduate studies.

Vijay Vusirikala received a Ph.D. in optical communications from the University of Maryland, College Park, and a BSEE from IIT Madras. Vijay has published extensively and holds 17 patents. He is an Optica Fellow and serves on the OFC Long Range Planning Committee. He is a distinguished lead for AI Systems and Networks at Arista, focusing on networking solutions for hyperscalers and large AI clusters. Previously, he was Vice President of Global Network Engineering at an AI infrastructure startup, covering network architecture, engineering, and operations. Before that, he spent 14+ years at Google in senior technical and organizational leadership roles, leading Google’s network technology teams. He also serves on the board of Telescent, an optical switching company. Before Google, he held senior marketing, business development, and architecture roles at Infinera, Motorola, and Sycamore Networks, covering core, metro, and access products.

Elaine Wong received her Ph.D. (2002) in Electrical Engineering from the University of Melbourne, Australia. She is a Redmond Barry Distinguished Professor and is currently Pro Vice-Chancellor (People & Equity) of the University. Her current research interests include low-latency communication networks and prescriptive analytics to facilitate human-to-machine applications over the Tactile Internet. Elaine currently serves on the ARC College of Experts and the IEEE Technical Activities Board Committee on Diversity, Equity and Inclusion, and is Chair of the IEEE Communications Society Optical Networking Technical Committee. She is an IEEE ComSoc Distinguished Lecturer (2025–2026) and General Chair of the Optical Fiber Communication Conference (OFC) 2025. She has previously served on the IEEE Photonics Society Board of Governors and numerous editorial boards, including the IEEE/Optica Journal of Optical Communications and Networking, the IEEE/Optica Journal of Lightwave Technology, and IEEE Network. She is a Fellow of Optica.

Introduction

As some readers of this column may recall, the significant growth of optical networks, systems, and photonic interconnects started about 30 years ago, coinciding with and benefiting from the very substantial expansion of the Internet infrastructure at the time. Optical networking experienced another equally important wave of innovation and growth, motivated by the massive build-out of the global data-center infrastructure to serve the extensive adoption of cloud-based service delivery models, starting about 15 years ago. In the last 2 years, another, potentially even more significant, wave of growth for optical R&D has been motivated by the network interconnect needs of the new computational data-center infrastructure dedicated to the thus-far exponential scaling of AI/ML training and inference, and the drive towards artificial general intelligence.

Members of ComSoc have been at the forefront of many of the optical innovations that have catalyzed the growth of communication and data-center networks and systems. There are many related noteworthy ComSoc activities, including journals, notably the Journal of Optical Communications and Networking (JOCN) and the Journal of Lightwave Technology (JLT), both co-sponsored with Optica and the IEEE Photonics Society, as well as conferences, most notably the Optical Fiber Communication Conference and Exhibition (OFC), alongside the IEEE International Conference on Communications (ICC) and the IEEE Global Communications Conference (Globecom), two flagship conferences of ComSoc. OFC, which is co-sponsored by Optica, the IEEE Photonics Society, and ComSoc, takes place in California every March and is the premier global event for optical networks, systems, and interconnects. It has captured the related research and development, in both industry and academia, most extensively: from the early days of optical fiber communication time-division multiplexing (TDM) networks, through the subsequent introduction of wavelength-division multiplexing (WDM) systems and networks, initially in long-haul and undersea telecommunication infrastructure and then in metropolitan and access infrastructure. It has also been at the forefront of adopting optics in inter-data-center interconnects and in the evolving debates about the optimum network architectures for integrating WDM with routers and switches. OFC has also become “the place to be” for intra-data-center interconnect photonic systems and networks, and more recently for the most extensive efforts around the role of optics in AI infrastructure, as well as the “orthogonal” topic of the role of AI in advancing optical network planning and operations. OFC has also recently expanded to include new research efforts in quantum networks and satellite networks, and through OFCnet, it has showcased advanced optical networking research.

Many ComSoc members, including members of the ComSoc Optical Networking Technical Committee (ONTC), join forces with Optica and Photonics Society members to drive these topics and activities. ComSoc members participate in OFC sessions, program committees, and its Steering and Long Range Planning (LRP) committees, including the authors of this article, Vincent Chan, the ComSoc past president-elect, and Steve Alexander, who, while Ciena CTO, volunteered much of his time to lead many OFC efforts and, as chair of the LRP committee until 2024, sponsored many of the recent initiatives.

These important optical innovations would easily deserve a full-length article to be covered in sufficient detail. In the rest of this article, we focus on three complementary themes that are noteworthy for ComSoc members: A) the Digital Signal Processing (DSP) innovations, especially DSP for coherent communications, which, leveraging CMOS scaling, have allowed optical system capacities to grow as fast as (or occasionally faster than) “Moore’s law,” enabling today’s state-of-the-art Dense WDM (DWDM) systems to operate very close to the fiber channel Shannon limit; B) the increasing value of AI in optical network planning and operational optimization, particularly as DWDM networks employ more fibers to scale beyond fiber channel capacity limits; and C) the new generation of photonic interconnects optimized for the AI/ML computational infrastructure, which requires significant new R&D advances and a new optical ecosystem aimed at much tighter integration, lower power, and consequently minimal use of DSP.

Innovations in Coherent DWDM Networks Scaling Close to the Fiber Shannon Limit

The introduction of DSP in optical communications has been one of the most essential innovations (and OFC topics) for over 15 years. A few different DSP solutions have been considered in optical networks over the years; in one of the earliest such examples (20 years ago), Maximum Likelihood Sequence Estimation was briefly explored to mitigate fiber dispersion in 10 Gb/s WDM systems over links of hundreds of km, instead of using dispersion compensating fiber. It was, however, the ubiquitous adoption of DSP for coherent modulation, combined with powerful Forward Error Correction (FEC), both very similar to concepts employed previously in wireless communications, that enabled more than 10x scaling in DWDM system capacity while gracefully mitigating the system impairments of fiber links extending to thousands of kilometers. For example, at OFC 2019, a state-of-the-art coherent DWDM system deployment reported 6.21 b/s/Hz in the 6,644 km MAREA undersea cable (between Virginia and Bilbao) based on 42 WDM channels, each employing 16QAM and up to 25% soft-decision FEC at 16.8 GBd. The same OFC 2019 paper also reported 4.46 b/s/Hz for the 13,210 km loopback link by employing 30 WDM channels, each at 22.3 GBd and 8QAM. This summer, the current state-of-the-art WDM system deployed in the same MAREA subsea cable reported 7 b/s/Hz, employing 1.3 Tb/s per WDM channel.

As these results showcase, using DSP and FEC, coherent WDM systems enabled optical fiber networks to operate increasingly close to the fiber capacity Shannon limit. As importantly, these innovations enabled WDM networks to scale cost-effectively thanks to impressively successful CMOS economics, often collectively referred to as “Moore’s law.” Of course, even after accounting for the benefits of the increasingly smaller geometries of each new generation of CMOS manufacturing, using increasingly complex DSP and higher-GBd channels has inevitably increased system power. As a result, high-performance coherent WDM systems typically consume tens of pJ/bit. Power, however, has yet to become a prohibitive constraint in DWDM deployments. Therefore, coherent DWDM quickly became the technology of choice to scale undersea, continental, and later metropolitan networks, and even inter-data-center interconnects (DCI) of more than 10 kilometers.
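To put these spectral-efficiency figures in context, the short sketch below (purely illustrative) evaluates the dual-polarization Shannon limit, 2·log2(1+SNR), for a few assumed received SNR values; the SNR values are not taken from any specific deployment, but they show how close the quoted 6.21 and 7 b/s/Hz results sit to the ideal bound under realistic long-haul conditions.

```python
import math

def shannon_se_dual_pol(snr_db: float) -> float:
    """Dual-polarization Shannon spectral-efficiency limit in b/s/Hz
    for an ideal linear AWGN channel at the given SNR (dB)."""
    snr = 10 ** (snr_db / 10)
    return 2 * math.log2(1 + snr)

# Illustrative (assumed) received SNR values for a long subsea link.
for snr_db in (8, 10, 12):
    print(f"SNR {snr_db:>2} dB -> Shannon limit {shannon_se_dual_pol(snr_db):.2f} b/s/Hz")

# Compare with the deployed-system figures quoted above (6.21 and 7 b/s/Hz)
# to see how small the remaining gap is at SNRs of this order.
```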

A few years after coherent DSP technology became pervasive in WDM fiber networks, and motivated by the same CMOS “Moore’s law” economics, intra-data-center interconnects, typically referred to as “datacom” optics, started employing CMOS DSP for PAM4 (instead of NRZ) modulation and increasingly stronger FEC techniques to scale per-channel capacity (which in datacom is usually also the fiber capacity, since only one channel is used per fiber) initially to 50 Gb/s, currently mostly 100 Gb/s, and by next year 200 Gb/s. Datacom optics links range from below 10 km down to a few meters (below which copper interconnects have been the preferred physical channel). However, unlike WDM systems, datacom links are deployed in much greater physical proximity and in 10-100x higher yearly channel volumes than WDM, making power efficiency much more critical. Therefore, starting at OFC 2023, the power efficiency of PAM4 DSP in intra-DCI optics, currently around 10 pJ/bit, has been significantly debated, especially for the AI/ML-dedicated networks in data-center infrastructure, as we will elaborate in section 4.
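The relationship between payload rate, modulation format, and FEC overhead that drives these datacom lane rates can be captured in a few lines. The sketch below is a simplified illustration; the ~5.8% overhead figure approximates the common RS(544,514) FEC and should be treated as an assumption rather than a specification.

```python
def required_baud_gbd(payload_gbps: float, bits_per_symbol: int,
                      fec_overhead: float) -> float:
    """Symbol rate (GBd) needed to carry `payload_gbps` of data when each
    symbol carries `bits_per_symbol` bits and the FEC adds `fec_overhead`
    (e.g. 0.058 for ~5.8% overhead)."""
    return payload_gbps * (1 + fec_overhead) / bits_per_symbol

# Illustrative overhead close to RS(544,514) FEC (~5.8%); treat as an assumption.
OH = 0.058
for payload in (100, 200):
    nrz  = required_baud_gbd(payload, bits_per_symbol=1, fec_overhead=OH)  # NRZ
    pam4 = required_baud_gbd(payload, bits_per_symbol=2, fec_overhead=OH)  # PAM4
    print(f"{payload}G lane: NRZ ~{nrz:.1f} GBd, PAM4 ~{pam4:.1f} GBd")
```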

Even in coherent WDM systems, DSP power efficiency has become an increasingly active area of R&D, especially as coherent WDM technology expanded its applicability to use cases with tighter power budgets or shorter links. The use of coherent optical links in inter-satellite constellation networks is an excellent example. Another good example is the potential use of coherent WDM for the next generation of broadband access and wireless backhaul networks. The most important use case for lower-power coherent DSP, however, has been the integration of coherent WDM in pluggable form factors (e.g., the Octal Small Form-factor Pluggable, or OSFP), starting with the extensive DCI deployments of 400G ZR 4-5 years ago. As coherent DSP pluggable power approaches 10 pJ/bit, coherent is also expected to compete with PAM4 datacom optics for data-center links of 1-10 km, because the reach of the future generation of 400G PAM4 optical links is not expected to scale easily beyond 1 km. Thus, in the last 5 years, two separate classes of coherent DSP ASIC designs have been in development: one optimized for the high-performance systems needed for undersea and continental networks, and the other optimized for power. Note that the lower-power coherent WDM pluggable designs have also enabled the integration of WDM into switches and routers, motivating a long-sought wider evolution, particularly in telecom networks, to a converged “routing+optical” architecture.
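Since energy-per-bit figures recur throughout this discussion, a quick conversion to module power helps build intuition. The snippet below is a back-of-the-envelope sketch; the 30 pJ/bit entry for high-performance coherent is an assumed illustrative value within the “tens of pJ/bit” range quoted above, not a product specification.

```python
def module_power_w(data_rate_gbps: float, energy_pj_per_bit: float) -> float:
    """Power in watts = (bits/s) * (J/bit); 1 Gb/s at 1 pJ/bit is 1 mW."""
    return data_rate_gbps * 1e9 * energy_pj_per_bit * 1e-12

# Energy efficiencies quoted in the text or assumed for illustration.
examples = [
    ("400G coherent pluggable @ ~10 pJ/bit", 400, 10),
    ("800G coherent pluggable @ ~10 pJ/bit", 800, 10),
    ("High-performance coherent @ ~30 pJ/bit (assumed)", 800, 30),
]
for label, rate, pj in examples:
    print(f"{label}: ~{module_power_w(rate, pj):.1f} W")
```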

This fundamental shift underway in network architecture is collapsing the long-standing separation between the IP routing and optical transport layers. Known as IP-over-DWDM (IPoDWDM) or coherent routing, this evolution involves integrating compact, high-performance coherent pluggable optics directly into the ports of routers and switches. This eliminates the need for separate, power-intensive transponder systems and the short-reach, single-wavelength, typically called “grey,” optics that connect them, dramatically simplifying network design. The IPoDWDM convergence was catalyzed by the maturation of standardized, interoperable optics, first at the 400G generation and now with 800G coherent pluggable optics, which overcame historical barriers like reduced port density and vendor lock-in. The result is a substantial reduction in the network’s total cost of ownership, power consumption, and footprint.

The momentum of coherent routing extends beyond simple data-center interconnects at campus and metro reaches, as higher-performance coherent pluggable optics enable longer reach (often extending beyond 1000 km) and greater flexibility for more complex metro and regional networks. This continued innovation is accelerated by the unprecedented bandwidth demands of Artificial Intelligence (AI) workloads, which make the network a critical performance bottleneck. In response, the industry is also on an aggressive roadmap toward 1.6T coherent pluggable optics. While this rapid evolution presents significant power and thermal management challenges for router platforms, the move toward a converged, high-speed architecture is now widely adopted for building the scalable and sustainable backbone networks required for the AI era.

Scaling the global terrestrial fiber infrastructure has motivated optical innovations in many other areas beyond coherent WDM. Scaling optical networks that operate multiple fibers has become increasingly important, especially since advanced WDM systems already operate very close to the fiber channel Shannon limit; e.g., the recent 7 b/s/Hz announcement in the MAREA subsea cable is just 12% higher than the spectral efficiency achieved 6 years earlier! Hence, one of the most important new areas of innovation seeks to maximize the simplicity of network operations by using “AI in optical networks” to operate multiple fibers. This is summarized in the next section.

The Emerging Value of AI/ML in Optical Networks

Optical communication networks, including those that serve as optical transport for mobile networks, are evolving to support unprecedented growth in traffic demand driven by cloud-scale services, 5G and potentially 6G deployments, emerging AI-centric workloads, and immersive digital experiences. Innovations that embed real-time monitoring and end-to-end intelligence alongside processing capability at the network edge have given optical networks adaptive, autonomous, and self-evolving capabilities, from topology design and traffic engineering through to fault localization, self-healing, and AI-aware energy-efficient control.

Conventionally, optical network design, a multi-dimensional optimization problem of topology selection, routing, and wavelength assignment, is tackled through heuristic or integer linear programming-based approaches that are computationally intensive and limited in scalability. AI-driven methodologies such as graph neural networks enable data-driven design by learning from historical traffic matrices, fiber topology constraints, and transponder performance telemetry to generate near-optimal configurations in orders of magnitude less time. Generative AI, through intent-driven design, further extends this capability by synthesizing future demand scenarios and proposing topology blueprints aligned with long-term growth trajectories. Moreover, machine learning approaches such as recurrent neural networks, long short-term memory architectures, and temporal convolutional networks are harnessed to accurately forecast traffic dynamics at fine temporal granularity, enabling proactive resource orchestration, e.g., dynamic wavelength provisioning, bandwidth slicing, and adaptive modulation format selection. More recently, agentic AI has been explored to further ensure high throughput and network utilization by introducing distributed, autonomous decision-making entities capable of negotiating cross-layer and multi-domain resource allocation without centralized control.
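As a concrete, if highly simplified, illustration of the traffic-forecasting component described above, the following PyTorch sketch trains a small LSTM on synthetic diurnal traffic; the model size, window length, and data are all illustrative choices, not a reference to any published design.

```python
import math
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """Minimal LSTM forecaster: given the last `window` traffic samples of a
    link, predict the next sample. Purely illustrative; real systems would use
    richer features (traffic matrices, calendar effects, optical telemetry)."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next (normalized) sample

# Synthetic diurnal-looking traffic trace, for demonstration only.
t = torch.arange(0, 2000, dtype=torch.float32)
traffic = 0.5 + 0.4 * torch.sin(2 * math.pi * t / 288) + 0.05 * torch.randn_like(t)

window = 48
X = torch.stack([traffic[i:i + window] for i in range(len(traffic) - window)]).unsqueeze(-1)
y = traffic[window:].unsqueeze(-1)

model, loss_fn = TrafficLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                     # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE {loss.item():.4f}")
```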

AI-driven fault detection and localization identify failures such as amplifier degradation (soft failures) and component failures (hard failures) with high accuracy, improving network reliability. At the same time, predictive maintenance prevents outages by forecasting location-specific degradations in advance. Beyond detection, reinforcement learning approaches automate network recovery by rerouting traffic, reallocating wavelengths, and adapting modulation formats. More recently, agentic AI has been investigated to enhance this capability through coordinated, cross-layer, and even cross-domain service restoration, paving the way for self-healing optical infrastructures that improve resilience and maintain strict service-level agreements.
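A minimal example of the soft-failure detection idea, stripped of any learning machinery, is a rolling statistical test on link telemetry. The sketch below applies a z-score to hypothetical pre-FEC BER samples; the telemetry name, window, and threshold are illustrative assumptions.

```python
import numpy as np

def soft_failure_alerts(pre_fec_ber: np.ndarray, window: int = 60, z_thresh: float = 4.0):
    """Flag samples whose pre-FEC BER deviates strongly from the recent baseline;
    a very simple stand-in for the learned detectors described above.
    Telemetry name, window, and threshold are illustrative choices."""
    log_ber = np.log10(np.clip(pre_fec_ber, 1e-15, None))
    alerts = []
    for i in range(window, len(log_ber)):
        baseline = log_ber[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if (log_ber[i] - mu) / sigma > z_thresh:
            alerts.append(i)
    return alerts

# Synthetic example: a stable link that slowly degrades near the end.
ber = np.full(500, 1e-6) * np.random.lognormal(0, 0.05, 500)
ber[450:] *= np.logspace(0, 2, 50)        # gradual two-decade degradation
print("first alert at sample:", soft_failure_alerts(ber)[:1])
```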

Additionally, by integrating physics-based optical models with AI-driven simulations, digital twin networks deliver a virtual testbed for network assurance and validation. Such “testbeds” allow operators to safely assess capacity upgrades, model cascading failures, and optimize load-balancing strategies without impacting live traffic. By simulating network scenarios and conditions, from the introduction of new services to traffic surges and network failures, they improve network planning cycles, support the testing of network recovery frameworks, and reduce operational risks, to name a few benefits.

AI also transforms optical network security from static, rule-based protection into an adaptive, self-learning approach. Generative adversarial networks have been proposed to strengthen resilience by simulating new attack vectors, keeping detection frameworks ahead of evolving threats. Building on this, agentic AI introduces an autonomous response layer, automatically and dynamically isolating compromised network segments, rerouting critical services, and initiating mitigation protocols. The result is reduced response times with service continuity maintained.

In addition, safety and security are important R&D areas related to AI-based optical networking. AI algorithms make errors. Most of these errors are self-corrected by the algorithms as the network evolves, but some may drive the network into a bad state, even requiring a cold-start reset. The problem is severe in large-scale networks, where the number of decisions per second can be very large. Hence, judicious guardrails must be inserted to limit the damage an inaccurate decision can cause. Also, AI will never have seen black swan events and may be ill-prepared to respond to network evolutions and zero-day attacks that have never occurred before. AI security R&D is thus critical for the future health of AI-enabled optical networks. Digital twins will exercise many possible network evolutions; still, some rare events will be difficult for digital twins to simulate, so companion analytic methods must be used to augment what digital twins cannot provide.
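One simple way to picture such guardrails is a validation layer that checks every AI-proposed action against hard limits and a rate budget before it reaches the network. The sketch below uses a hypothetical action schema and illustrative bounds; a production system would also consult a digital twin and keep rollback state.

```python
from dataclasses import dataclass

@dataclass
class ReconfigAction:
    """Hypothetical schema for an AI-proposed network action."""
    link_id: str
    new_modulation: str          # e.g. "QPSK", "16QAM"
    power_dbm: float

ALLOWED_MODULATIONS = {"QPSK", "8QAM", "16QAM"}
POWER_LIMITS_DBM = (-10.0, 3.0)          # illustrative per-channel launch-power bounds
MAX_ACTIONS_PER_MINUTE = 20              # rate limit to bound the blast radius

def apply_with_guardrails(action: ReconfigAction, actions_this_minute: int) -> bool:
    """Return True only if the proposed action passes static safety checks."""
    if actions_this_minute >= MAX_ACTIONS_PER_MINUTE:
        return False
    if action.new_modulation not in ALLOWED_MODULATIONS:
        return False
    if not (POWER_LIMITS_DBM[0] <= action.power_dbm <= POWER_LIMITS_DBM[1]):
        return False
    return True

print(apply_with_guardrails(ReconfigAction("span-12", "16QAM", 1.0), actions_this_minute=3))   # True
print(apply_with_guardrails(ReconfigAction("span-12", "64QAM", 9.0), actions_this_minute=3))   # False
```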

Finally, leveraging predictive analytics, ranging from time-series forecasting to graph neural networks, AI-driven energy-aware frameworks can anticipate spatio-temporal traffic variations and proactively reconfigure lightpaths, consolidate flows, and selectively deactivate underutilized network components to achieve energy-efficiency gains without compromising quality of service. Reinforcement learning can be leveraged to learn optimal policies for transitioning hardware into low-power states, carefully balancing energy gains against wake-up latency and service continuity. AI can also dynamically orchestrate optical bypass at the multilayer level to minimize electronic switching overhead, while coordinated IP-optical control enables end-to-end energy optimization validated through the high-fidelity digital twins discussed above. Energy efficiency is a particularly critical requirement for the new AI/ML infrastructure. So, complementary to the role of “AI in optical networks,” power-optimized “optics in AI” has grown into another very active area of optical R&D, as summarized in the next section.
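The core trade-off behind these learned sleep policies can be stated as a simple break-even rule, sketched below with illustrative parameter values; a reinforcement-learning agent effectively learns when such a rule holds under uncertainty.

```python
def should_sleep(expected_idle_s: float, p_active_w: float, p_sleep_w: float,
                 wakeup_energy_j: float, wakeup_latency_s: float,
                 latency_budget_s: float) -> bool:
    """Break-even rule: sleep only if the expected energy saved over the idle
    period exceeds the wake-up cost, and the wake-up latency fits the service
    budget. All parameter values in the example below are illustrative."""
    energy_saved = expected_idle_s * (p_active_w - p_sleep_w)
    return energy_saved > wakeup_energy_j and wakeup_latency_s <= latency_budget_s

# Example: a line-card port predicted to stay idle for 30 s.
print(should_sleep(expected_idle_s=30, p_active_w=12, p_sleep_w=2,
                   wakeup_energy_j=50, wakeup_latency_s=0.5, latency_budget_s=1.0))  # True
```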

New Optical Innovations in the AI/ML Computational Infrastructure Are Needed

The wide-ranging promise of AI (which the previous section summarized specifically for optical networks) has motivated an unprecedented amount of new investment in AI infrastructure. This year, hundreds of billions of dollars are being invested globally in a new computational infrastructure dedicated to AI/ML compute clusters. This infrastructure has already been deployed in multiple data centers, each with tens of thousands of xPU accelerators (currently mostly GPUs), and has thus come to dominate data-center network capex. Moreover, the annual growth rate of the AI infrastructure has averaged 4-8x. So, the AI infrastructure is expected to soon scale to multiple multi-GW data centers, each with hundreds of thousands of xPUs, requiring the related “backend” networks to scale much faster than any previous data-center infrastructure through a combination of faster networking systems and interconnects. This requires significant improvements in key performance metrics of the AI/ML interconnects, notably energy efficiency (from the current 20 pJ/bit of 100G PAM4 datacom optics to 5 pJ/bit or less), reliability, latency, integration density, and cost.

Energy efficiency is paramount, as interconnects are expected to grow from the current 1% to 5-10% of the total data-center power budget unless new approaches are adopted. Three primary strategies are being employed to enhance energy efficiency (a rough power calculation follows the list below):

  • Prioritize copper cables whenever feasible, due to their higher reliability and lower cost compared to optics. However, copper’s reach is limited, typically to 1-2 meters for passive cables at 200 gigabits per lane. This can extend to 3-4 meters with active “re-timed” cables, which, however, also increase the interconnect power by 8-10 pJ/bit. Retiming also increases the link latency (often by > 100 ns at 200 Gb/s lanes).

  • Utilize linear interface optics, which eliminate the use of DSP in Linear Pluggable Optics (LPO), Near Package Optics (NPO), and Co-Packaged Optics (CPO), to reduce power (often by 60%) and latency, and increase reliability.
  • Adopt network-level efficiencies whenever feasible, such as deploying workloads on optimized accelerators, distributing inference closer to the network edge, or scheduling compute-intensive training to align with renewable energy from the smart grid.
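As a rough illustration of why the energy-per-bit targets above matter at cluster scale, the sketch below converts pJ/bit into total interconnect power for an assumed (not real) cluster of 100,000 links at 800 Gb/s each.

```python
def interconnect_power_mw(num_links: int, gbps_per_link: float, pj_per_bit: float) -> float:
    """Total optical-interconnect power in megawatts for a cluster with
    `num_links` links, each carrying `gbps_per_link`, at `pj_per_bit`."""
    watts = num_links * gbps_per_link * 1e9 * pj_per_bit * 1e-12
    return watts / 1e6

# Assumed cluster size and link rate, purely for illustration.
links, rate = 100_000, 800
for label, pj in [("DSP-based PAM4 optics (~20 pJ/bit)", 20),
                  ("Target linear-drive optics (~5 pJ/bit)", 5)]:
    print(f"{label}: ~{interconnect_power_mw(links, rate, pj):.1f} MW")
```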

At the same time, ensuring reliability in hyperscale environments requires a shift in focus from individual component longevity to architecting for system-wide resilience. Even in today’s average-sized clusters with tens of thousands of links, failures are statistically certain, making proactive management essential. Reliability has often been a significant concern for traditional datacom photonics, and failures manifest in several disruptive forms:

  • Hard failures: These involve a complete link loss, such as a fiber cut, or a laser failure, and are typically managed through redundant network paths where possible, or expedient replacement of failed modules.

  • Soft failures: These are more insidious and involve a gradual degradation of signal quality due to a few causes, notably component aging. They silently increase error rates and latency well before a link completely fails. The most damaging soft failures are link flaps, which occur when a marginal link rapidly cycles between “up” and “down” states. A common cause of link flaps is dust on the fiber connectors, which causes back reflections and multi-path interference, leading to uncorrectable bit errors and triggering repeated network-wide reconvergence events that can halt synchronized communication libraries (like NCCL) and disturb entire multi-week training jobs.

Consequently, managing these operational realities through advanced automated monitoring has become as crucial as the underlying optical technology.
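A basic building block of such monitoring is flap detection, i.e., counting up/down transitions within a time window. The sketch below shows the idea with illustrative thresholds; real systems also correlate flaps with optical power, FEC statistics, and connector-cleanliness indicators.

```python
from collections import deque

def is_flapping(events: deque, now: float, window_s: float = 60.0,
                max_transitions: int = 4) -> bool:
    """Return True if a link has changed state more than `max_transitions`
    times within the last `window_s` seconds. Thresholds are illustrative."""
    while events and now - events[0] > window_s:
        events.popleft()                  # drop transitions outside the window
    return len(events) > max_transitions

# Example: timestamps (seconds) at which a marginal link changed up/down state.
transitions = deque([10.0, 12.5, 13.1, 40.0, 55.2, 58.9])
print(is_flapping(transitions, now=60.0))   # True: 6 transitions within the last minute
```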

While these enhancements in power efficiency and reliability have enabled the available generation of datacom optical interconnects to be employed in the initial deployments of AI/ML infrastructure, it has also become clear that a new generation of optical interconnects optimized specifically for AI is needed as soon as possible. The most important, currently unmet requirements arise in the interconnects of the AI “scale-up” network, which enables a cluster of xPUs to be in the same “coherent memory” domain. Currently, in such “scale-up” networks (e.g., NVL72 is the most prominent), the interconnects typically extend 1-2 meters within a single rack, limited by the reach of passive copper cables. However, as the number of xPUs and the capacity of each xPU grow, these “scale-up” networks would benefit from much longer links. Power management is again the critical driver; the typical data-center rack of CPUs servicing traditional cloud services consumes 10-15 kW, while a fully populated rack with the latest AI scale-up domain (NVL72 with GB200 GPUs) consumes 120 kW, and it has already been announced that the upgrade to the 2027 generation of (Rubin+) GPUs would raise the rack power to 600 kW. So, while liquid cooling has already been adopted in the new AI deployments, the ability to extend the AI scale-up domain across multiple racks, beyond the 1-2 m copper interconnect reach limitations, is widely accepted as the best way to scale the AI clusters, as soon as efficient optical interconnects optimized for the AI infrastructure become available.
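The rack-power figures above translate directly into how many racks a given facility can host, which is one way to see why extending the scale-up domain across racks matters. The arithmetic below assumes a 100 MW facility and a 1.3x cooling/distribution overhead factor, both illustrative.

```python
def racks_supported(facility_mw: float, rack_kw: float, overhead: float = 1.3) -> int:
    """Racks a facility can host, assuming a cooling/distribution overhead
    factor (PUE-like; 1.3 here is an assumption, not a measured value)."""
    return int(facility_mw * 1_000 / (rack_kw * overhead))

# Rack classes quoted in the text: cloud CPU rack, NVL72-class rack, announced 2027-class rack.
for rack_kw in (15, 120, 600):
    print(f"{rack_kw:>3} kW racks in a 100 MW facility: ~{racks_supported(100, rack_kw):,}")
```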

The recent push for LPO, NPO, and CPO (discussed at the beginning of this section) has already clarified that minimal use of DSP is an essential requirement in the new generation of optical AI interconnects. Unlike in coherent DWDM systems, in the AI cluster the power (and cost) savings usually outweigh the value of DSP. Moreover, smaller geometries in new generations of CMOS manufacturing have diminishing benefits in power efficiency (especially for the analog circuitry). Therefore, many emerging technologies aim to enable a new ecosystem that can achieve or exceed the performance of traditional PAM4 datacom optics with minimal DSP use. One promising approach minimizes the need for DSP by operating channels at a lower rate with NRZ modulation and instead multiplexing more optical channels in a single fiber, e.g., 4 or 8 channels each at 50 Gb/s. Using such lower-rate NRZ schemes could also significantly simplify the complexity of the modulator. In the most extreme example, an interconnect of up to a few meters can employ many fibers, each with a microLED at around 10 Gb/s. Copper interconnects have also remained an active R&D area, and even THz RF over waveguides, extending beyond the reach of copper, is being investigated using DSP and THz amplifiers instead of optics. Complementary to these new optical interconnect system designs for AI, significant related R&D efforts also occur in tighter integration, notably aiming to extend CMOS innovations in 3D packaging to silicon photonics, as well as in new higher-speed modulators scaling beyond 100 GHz, advanced laser designs like comb lasers and quantum-dot lasers, or hollow-core fiber, all of which are noteworthy but less relevant to the ComSoc focus of this article.
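The lane-count trade-off behind these low-DSP approaches is simple arithmetic: lower-rate, simpler channels require more parallel lanes per port. The sketch below compares a few options for an 800 Gb/s port; the configurations are illustrative and ignore FEC and encoding overheads.

```python
# Rough lane-count arithmetic for an 800 Gb/s port under different channel
# formats (illustrative; ignores FEC overhead and encoding details).
PORT_GBPS = 800
options = [
    ("200G PAM4 lanes (DSP-based)", 200, 2),   # (label, per-lane Gb/s, bits/symbol)
    ("100G PAM4 lanes",             100, 2),
    ("50G NRZ lanes (DSP-light)",    50, 1),
    ("10G NRZ microLED lanes",       10, 1),
]
for label, lane_gbps, bits_per_sym in options:
    lanes = PORT_GBPS // lane_gbps
    baud = lane_gbps / bits_per_sym
    print(f"{label}: {lanes} lanes at ~{baud:.0f} GBd each")
```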

A more relevant topic for ComSoc members is a new frontier of networking research exploring the potential overall performance benefits of jointly scheduling the interconnection network and the computing. This longer-term research explores how a large data-analytic job requiring distributed computing can benefit from optimized routing in the interconnection network, whose stages behave as tandem queues. This is a complicated problem for the job scheduler; still, it could offer a high payoff when networking becomes a sizable part of the overall delay, raising the prospect of agile optical routing and scheduling, along with the challenge of matching the time scale of network reconfigurations to hardware and software switching speeds.
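A toy tandem-queue model makes the underlying point concrete: as the number of sequential network stages grows, their delay becomes a larger share of each distributed-computation step. The simulation below uses assumed compute and network service times purely for illustration.

```python
import random

def tandem_job_latency(compute_ms: float, network_ms_mean: float, hops: int,
                       trials: int = 10_000) -> float:
    """Toy tandem-queue model: a distributed job step waits for compute and
    then for `hops` sequential network stages with exponential service times.
    Purely illustrative of why network delay matters to the job scheduler."""
    total = 0.0
    for _ in range(trials):
        delay = compute_ms
        for _ in range(hops):
            delay += random.expovariate(1.0 / network_ms_mean)
        total += delay
    return total / trials

random.seed(0)
for hops in (1, 2, 4):
    avg = tandem_job_latency(compute_ms=5.0, network_ms_mean=1.0, hops=hops)
    print(f"{hops} network hop(s): mean step latency ~{avg:.2f} ms")
```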

Given the size limitations of this article, it would not be possible to include all the interesting R&D topics and potential optical innovations motivated by the needs of the new AI/ML computational infrastructure build-outs. Hopefully, this section offers a useful summary of this exciting new domain and motivation to learn more.

Concluding Summary

We would like to close this article on a somewhat personal note. When we started our professional careers, some 20-25 years ago, optical innovations successfully enabled the Internet to scale from a few kb/s of dial-up modem connectivity to the Zettabyte global fiber infrastructure. Then, a few years later, the proliferation of massively scaled data centers, motivated by the broad adoption of “cloud” service delivery models, gave rise to an equally strong wave of growth in optical DCI (both between and inside data centers) as part of the impressive build-out of the global data-center infrastructure. Yet today may be an even more exciting time to be in optical, as the need to scale the ML/AI compute infrastructure has created even more opportunities for optical innovation. At the same time, AI has been improving the planning and operational efficiency of optical networks. In this sense, we feel very fortunate to have been able to participate in this journey. Most importantly, we would like to acknowledge and recognize an excellent community of colleagues from ComSoc, the Photonics Society, and Optica who have made this journey exceptional.

We look forward to continuing to share the experience at future events, and especially at OFC in March 2026.

The authors acknowledge many colleagues, especially Steve Alexander and Vincent W. S. Chan, for their insightful suggestions related to this article.