On Tuesday, Arista Networks (NYSE:ANET) discussed first-quarter financial results during its earnings call. The full transcript is provided below.
View the webcast at https://events.q4inc.com/attendee/274051223
Summary
Arista Networks Inc reported Q1 2026 revenue of $2.71 billion, a 35.1% year-over-year increase, surpassing guidance of $2.6 billion, driven by strong demand in AI and specialty providers.
The company anticipates full-year revenue growth of 27.7%, reaching $11.5 billion, with AI-related revenue targets revised to $3.5 billion amid strong demand.
Despite robust demand, supply chain constraints, particularly in wafers and chips, are a significant challenge, impacting lead times and potentially gross margins.
Arista is leading in AI networking strategy with innovations such as its AI fabric use cases (scale up, scale out, and scale across) and the new XPO optics form factor, which has garnered significant industry support.
Deferred revenue is increasing, reflecting new product qualifications and customer readiness, with the expectation of recognition over multiple quarters.
Gross margin for Q1 2026 was reported at 62.4%, influenced by customer mix and rising supply chain costs.
Arista's AI-focused initiatives and strategic wins in various sectors, including cloud and insurance, highlight its diversified growth strategy.
The company is actively working on expanding its scale-up capabilities for 2027, while scale-out and scale-across segments are key revenue drivers for 2026.
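As a quick sanity check of the growth figures above, the reported growth rates can be inverted to back out the implied prior-period revenue. This is an illustrative sketch only; the dollar figures come from the summary, and the "implied" values are derived arithmetic, not numbers Arista reported.

```python
# Back-calculate implied prior-period revenue from the reported growth rates.
# Reported inputs are from the call summary; "implied" outputs are derived.

q1_2026_revenue = 2.71   # $B, reported Q1 2026 revenue
q1_yoy_growth = 0.351    # 35.1% year-over-year growth, reported

implied_q1_2025 = q1_2026_revenue / (1 + q1_yoy_growth)
print(f"Implied Q1 2025 revenue: ${implied_q1_2025:.2f}B")   # ≈ $2.01B

fy_2026_target = 11.5    # $B, raised full-year 2026 outlook
fy_growth = 0.277        # 27.7% forecast growth

implied_fy_2025 = fy_2026_target / (1 + fy_growth)
print(f"Implied FY 2025 revenue: ${implied_fy_2025:.2f}B")   # ≈ $9.01B
```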
Full Transcript
OPERATOR
Welcome to the first quarter 2026 Arista Networks Inc financial results earnings conference call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question and answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by zero. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section of the Arista Networks Inc website. I will now turn the call over to Mr. Rudolph Araujo, Arista's Head of Investor Advocacy.
Rudolph Araujo (Head of Investor Advocacy)
Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jaishree Ulal, Arista Networks' Chairperson and Chief Executive Officer, and Chantal Brightoff, Arista Networks Inc's Chief Financial Officer. This afternoon Arista Networks issued a press release announcing its fiscal first quarter results for the period ending March 31, 2026. If you want a copy of this release, you can find it on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2026 fiscal year; our longer-term business model and financial outlooks for 2026 and beyond; and our total addressable market and strategy for addressing these market opportunities, including AI, inventory management, lead times and product innovation. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q1 results and our guidance for Q2 2026 is based on non-GAAP results and excludes stock-based compensation expense, intangible asset amortization, gains and losses on strategic investments, and the income tax effect of these non-GAAP exclusions, including the recognition of excess tax benefits associated with stock-based awards. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jaishree.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Thank you, Rudy, and welcome everyone to our first quarter 2026 earnings call. Arista experienced significant velocity in all our sectors in Q1 and is now commanding the number one market share in high-speed switching in the greater-than-10-gigabit Ethernet category. With that, we have overtaken many incumbent vendors for 2025, according to major market analysts. Our cloud and AI networking strategy for diverse AI accelerators continues to gain traction. Unlike typical workloads, AI workflow patterns can be long-lived elephant flows or short-lived, and simply not predictable. This implies careful attention to performance, where a flow can cause burstiness for a long duration of milliseconds, and the intensity of a flow can determine the line-rate throughput. The shifting traffic patterns, with massive flows synchronized into all-to-all or all-reduce operations, or bursts of collective communication, are all important for AI training and inference applications. I would like to take a moment to review our three AI fabric use cases. In scale-up mode, we have familiar technologies such as NVLink and PCIe that have enabled vertical scaling of single compute nodes or racks. The advent of the ESUN (Ethernet for Scale-Up Networking) specifications allows computing power to be increased or decreased in a flexible manner, with Ethernet automatically adapting to workload demands. Scale-up will be a new entry for Arista in 2027 and beyond, where we will be working closely with our customers to build AI racks with very fast interconnects for co-packaged copper (CPC) or co-packaged optics (CPO), as well as supporting collectives and memory acceleration. Scale-out, or horizontal scaling, involves adding more machines to a leaf-spine fabric, moving workloads across multiple servers or nodes, or even connecting other elements like storage or CPUs. As you scale up or out with massive data sets, bottlenecks can be resolved with collectives, protocol acceleration and L2/L3 cluster load balancing, all at wire rate.
The system must deliver consistent performance without degradation as more nodes participate. Arista is a shining example here, with greater than 100 cumulative customers to date in 800 Gigabit Ethernet deployments, and we expect the addition of 1.6 terabit, at production scale, in 2027. Scale-across distributes AI across clouds, as the AI accelerators in one location may need to be distributed to achieve the appropriate bandwidth capacity at the optimal power. As workloads become more complex and more distributed, the bisection bandwidth must scale smoothly to avoid bottlenecks and preserve performance. This demands sophisticated traffic engineering, deep routing, encryption properties and integrated optics, based on the Arista EOS stack and using Arista's flagship 7800R3 or R4 series. The 7800 has established itself as the premier scale-across choice in this category. You can see, with Arista's accelerated networking strategy, that these three types of AI fabrics are critical to the deployment of diverse accelerators and frontier models. Traditional static network topologies, with hotspot jitter that slows down job completion time or increases time to first token for inference, are not the way to go. Arista's EtherLink portfolio addresses both the synchronous flows for massive training and the low latency for concurrent swarms of real-time inference in this era of trillions of tokens, terabits of performance and terawatts of power. In 2024, you may recall, we discussed four Ethernet-based AI training deployments, and of course since then we've expanded to countless others. The fourth customer from that group has officially moved from InfiniBand to Ethernet at production scale over the last two years. The high-speed Ethernet AI leaf-spine, with flexible air- or liquid-cooled infrastructure, overcomes the physical constraints of power and space for AI workloads. It results in a low-latency distributed AI supercomputer fabric across global regions.
What is clear to me, and to us, is that our networking prowess in data, control and management, and multi-planar orchestration, is not only central to our AI switching performance but also important for high-speed optics transmission. At the recent Optical Fiber Conference, Arista unveiled its extended pluggable optics (XPO) form factor, designed specifically for optics innovations at high speed and now endorsed by greater than 100 vendors. Salient features include record-breaking throughput, delivering 12.8 terabits per pluggable module; unprecedented rack density, achieving 204.8 terabits per OCP rack unit; an integrated cold plate capable of cooling up to 400 watts of power per module; and universality and flexibility across a range of pluggable optics and copper, as well as linear, half-retimed or retimed interfaces. A special kudos to Andy Bechtolsheim, Arista's chief architect, for driving from OSFP ten years ago to this next-generation XPO, bringing structural improvements in power, footprint and cost reductions. Our enterprise business experienced strong results in Q1 2026, both in data center and campus. Our VeloCloud acquisition is also integrating well into our branch and campus strategy, bringing more distributed enterprise use cases and a new channel motion with managed service providers (MSPs). To share some recent wins, let us hear now from Todd Nightingale and Ken Duda, our co-Presidents, to delineate our Arista 2.0 centers-of-data strategy. Over to you.
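The XPO density figures quoted in these remarks are internally consistent, which a one-line arithmetic sketch makes visible. This is an editorial illustration using only the per-module and per-rack-unit throughput numbers stated on the call; the implied module count is derived, not something Arista stated.

```python
# Check the XPO density figures quoted on the call.
per_module_tbps = 12.8       # Tb/s per pluggable XPO module, as stated
per_rack_unit_tbps = 204.8   # Tb/s per OCP rack unit, as stated

# Dividing the two gives the implied number of modules per rack unit.
modules_per_ru = per_rack_unit_tbps / per_module_tbps
print(f"Implied XPO modules per OCP rack unit: {modules_per_ru:.0f}")  # 16
```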
Ken Duda (Co-President)
Thanks, Jaishree. Arista is diversifying its business with new customer acquisitions covering a broad set of use cases, all unified by Arista's EOS stack and its ability to modernize enterprise infrastructure operating models. Our first highlighted win is a neocloud AI network. The customer was constrained by an incumbent white-box architecture that simply could not keep pace with the massive scale-out requirements of AI. Arista Networks Inc was selected as a commercially proven and reliable scale-out architecture, with the unmatched stability of EOS and the ability to connect AMD MI-series XPUs. Arista Networks Inc's AI leaf and spine EtherLink products were deployed at 800 gigabits to provide the incredible performance modern AI networks require. The AI fabric was tuned using Arista's cluster load balancing to scale out to thousands of XPUs, minimizing hotspots and congestion. On the software side, the customer leveraged AVD, Arista's Validated Design framework, to automate network provisioning, which not only reduces the total cost of ownership but also provides an easy path to reliable network deployment at scale, where without AVD automation a small mistake can cost precious days of debugging time. This was a strategic neocloud win with large potential for upside growth, in an area where we are seeing enormous opportunity and velocity in both neocloud and sovereign cloud customers. Our next win is in the service provider sector, with a leading regional fiber-to-the-home provider serving hundreds of thousands of subscribers. As subscriber bandwidth demands have surged, this customer realized their legacy routing architecture was too rigid, too brittle and too costly to scale. They needed a solution which would modernize their next-generation backbone and Internet peering edge. Arista Networks Inc won this upgrade by proving an automation-first approach with a modern operating model, driving operational savings and increased subscriber reliability.
On the hardware side, we deployed popular 7280 routing platforms using EOS's FLX capabilities, which unlock deep buffering, a rich control-plane software stack and full Internet route scale. On the software side, Arista Networks Inc's AVD framework again automates router provisioning to reduce the time it takes to turn up services while also reducing errors. Here we saw great results from our technology partnership with Palo Alto Networks, ensuring the routing edge integrated securely and seamlessly with the overarching security architecture, and here our core value proposition of lower operating costs and greater reliability drove a competitive win. Now I'll hand it off to Todd.
Todd Nightingale (Co-President)
Thanks, Ken. Our third win is in the insurance services sector. Following a year of strategic collaboration, the customer wanted to modernize their infrastructure with a streamlined, automated foundation capable of delivering granular real-time insights to secure and monitor critical applications. Here, observability was truly the key. Arista secured this comprehensive win after executing a flawless proof of concept, proving our architecture significantly exceeded operational standards to achieve deep network observability. The customer deployed our R3 series in filter and delivery roles in our monitoring fabric, DMF. Additionally, they deployed campus switches to radically simplify out-of-band management. Leveraging the rich telemetry capabilities of EOS, the customer unlocked advanced features like VXLAN header stripping and transitioned to a fully automated, declarative operational model. Our final win is within the manufacturing sector, where we're seeing amazing momentum. Here we have a customer operating more than 100 factory sites globally, servicing consumer, healthcare, aerospace, defense and AI infrastructure customers. This was a true mission-critical use case, and their legacy campus network had become the bottleneck for achieving real 24-by-7 production.
Shifting traffic patterns, manual provisioning and, importantly, a lack of visibility and forensics into microbursts and drops were keeping them from achieving their goals. Arista won an extensive bake-off against two established vendors, both of whom proposed campus designs that could not match what Arista delivered: a universal leaf-spine campus based on open standards, running a single EOS binary across campus, data center and WAN. The Cognitive Campus solution leveraged a 100-gig campus spine, high-powered PoE leaves and Arista Wi-Fi 7. CloudVision drove provisioning, configuration and life cycle end to end, with consistent tooling across the network infrastructure. Here it really was Arista's modern operating model that drove differentiation in the engagement: hitless production upgrades, a latency analyzer for microburst visibility and true packet-drop forensics. The teams were able to significantly reduce production-impacting maintenance windows and expose events that had previously caused line interruption. In all four of these examples, Arista's support team stood out to customers for its best-in-class service, well known for troubleshooting issues with customers long after Arista gear is no longer suspected to be at fault. Arista Networks Inc's modern operating model also played a key role, especially the AVD tooling that Ken mentioned for architecture validation and deployment. We're excited about the momentum across the entire enterprise business, and especially the diversification that it brings to Arista Networks Inc. Thanks, Jaishree.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Thank you, Todd. Thank you, Ken. It was so fantastic to hear of happy customer outcomes. We had another fitting example of that at our Innovate 2026 event, held in March here in the headquarters facility. The energy and enthusiasm of the greater than 250 customers who attended was truly infectious and inspiring. I want to especially give a shout-out to Ashwin Kohli and Divya Wagner's teams, who have already improved our outstanding net promoter score from 87 to 89, translating to a 94% customer approval rating. This, together with among the lowest security vulnerabilities in the tech industry, enhances our ability to better cope with the many risks that AI is creating. As I look ahead at the year, our Arista Networks Inc 2.0 momentum continues to march on and resonate. Our demand is actually the best I have ever seen in my Arista tenure. The supply, however, is a slightly different and opposite tale. We are experiencing industry-wide shortages across the board, be it wafers, silicon chips, CPUs, optics and of course the memory that I referred to last quarter, coupled with elevated costs to procure these. Clearly our demand is outstripping our supply this year. While we hope the supply chain will ease in the next year or two, the Arista operations team has been diligently engaging with our vendors, strengthening supply agreements and entering multi-year purchase commitments. We anticipate gross margin pressure due to mix and the trade-offs we are making to pay more to assure supply continuity to our customers. Nevertheless, it gives us confidence to increase our forecasted growth slightly to 27.7%, aiming now for $11.5 billion for 2026. We also increase our AI target to $3.5 billion this year, thereby more than doubling our AI sales annually. And with that good news, over to you, Chantal, for the financial details.
Chantal Brightoff (Chief Financial Officer)
Thank you, Jaishree. I continue to be impressed by our company's ability to deliver such a breadth and depth of networking innovation. It is a core tenet that underpins our strong financial return to shareholders. Turning to Q1 to detail our most recent financial outcomes: to start off, total revenues in Q1 were $2.71 billion, up 35.1% year over year and above our guidance of $2.6 billion. Growth was seen across the customer sectors, led by our AI and specialty provider customers within the quarter. International revenues for the quarter came in at $418.9 million, or 15.5% of total revenue, down from 21.2% last quarter. This quarter-over-quarter decrease was primarily influenced by Americas-based sales to our large global customers. The overall gross margin in Q1 was 62.4%, within the guidance range of 62 to 63% and down from 63.4% in the prior quarter. This quarter-over-quarter decrease is due to the lower mix of sales to our enterprise customers in the quarter. Operating expenses for the quarter were $396.8 million, or 14.6% of revenue, down slightly from last quarter at $397.1 million. Our R&D spending came in strong at $271.5 million, or 10% of revenue, despite a slight sequential decrease due to the timing of new product introduction costs; Arista continues to demonstrate its committed focus on networking innovation. Sales and marketing expense was $103.5 million, or 3.8% of revenue, down from 4% last quarter, representative of the highly efficient Arista go-to-market methodology. Our G&A costs came in at $21.8 million, or 0.8% of revenue, down from $26.3 million last quarter, reflecting our strong base cost productivity within a pure-play networking business model. Our operating income for the quarter was $1.29 billion, or 47.8% of revenue. Let me pause here to thank the greater Arista team for all of their efforts and the resulting excellent execution in a dynamic environment.
Other income and expense for the quarter was a favorable $110.8 million, and our effective tax rate was 21.1%. Overall, this resulted in net income for the quarter of $1.11 billion, or 40.9% of revenue. Our diluted share count was 1.27 billion shares, resulting in diluted earnings per share for the quarter of $0.87, up 31.8% from the prior year. Now turning to the balance sheet: cash, cash equivalents and marketable securities ended the quarter at approximately $12.35 billion. In the quarter, we did not repurchase our common stock. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remains available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price and other factors. Now turning to operating cash performance for the quarter: we generated approximately $1.69 billion of cash from operations in the period, the strongest in the history of Arista. This was driven by a robust earnings performance coupled with an increase in deferred revenue. DSOs came in at 64 days, down from 70 days in Q4, due to the linearity of shipments within the quarter. Our inventory turns improved slightly, landing at 1.7 versus 1.5 in the prior quarter. We ended the quarter with $2.38 billion in inventory, up from $2.25 billion last quarter. This marginal increase is a calculated investment in the mix of raw materials to fulfill our growing demand. Our purchase commitments at the end of the quarter were $8.9 billion, up from $6.8 billion at the end of Q4. As mentioned in prior quarters, this expected activity mostly represents purchases of chips related to new products and AI deployments. We will continue to have some variability in future quarters as a reflection of the combination of demand for our new products, component variability and the lead times from our key suppliers. This could also result in quarters of elevated inventory balances ahead of the deployments.
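The per-share and margin figures reported in this section hang together arithmetically. The following is an editorial cross-check using only numbers stated on the call; small differences from the reported percentages are rounding, since the company computes ratios from unrounded figures.

```python
# Cross-check Q1 FY2026 figures reported on the call.
revenue = 2.71            # $B, reported
gross_margin = 0.624      # 62.4%, reported
operating_income = 1.29   # $B, reported (47.8% of revenue per the call)
net_income = 1.11         # $B, reported
diluted_shares = 1.27     # billions of shares, reported

# Diluted EPS = net income / diluted share count.
eps = net_income / diluted_shares
print(f"Diluted EPS: ${eps:.2f}")            # $0.87, matching the call

# Operating margin from rounded inputs lands near the reported 47.8%.
op_margin = operating_income / revenue
print(f"Operating margin: {op_margin:.1%}")  # ≈ 47.6% from rounded inputs

# Implied gross profit for the quarter.
gross_profit = revenue * gross_margin
print(f"Gross profit: ${gross_profit:.2f}B")  # ≈ $1.69B
```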
Our total deferred revenue balance was $6.2 billion, up from $5.37 billion in the prior quarter. The majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $643 million versus last quarter. We remain in a period of ramping our new products, winning new customers and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 54 days, down from 66 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $54.5 million. We continue the construction work to build expanded facilities in Santa Clara; in Q1 we incurred approximately $40 million in CapEx related to this program and estimate it will reach $180 million in 2026. These Q1 results have provided a strong start to our fiscal year 2026. As Jaishree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 27.7% revenue growth, delivering approximately $11.5 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI fabrics goal from $3.25 billion to $3.5 billion. I would like to take this opportunity to remind the audience that the timing and outcome of customer projects with acceptance terms can create quarterly and sequential dynamics that do not follow prior-year trends. For gross margin, we reiterate the range for the fiscal year of 62 to 64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon.
Given this challenging supply backdrop, I am proud of our sourcing team's execution, which strongly contributes to the gross margin outlook holding in our guidance range. We feel confident that we can source the necessary supply to meet our customers' needs. Our operating margin outlook remains at approximately 46% for the fiscal year, with the tax rate expected at 21.5%. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory and cash flow from operations due to the timing of component receipts on purchase commitments. More specifically, our guidance for the second quarter, now with the added quarterly metric of diluted earnings per share, is as follows: revenues of approximately $2.8 billion, gross margin between 62 and 63%, operating margin between 46 and 47%, and diluted earnings per share of approximately $0.88 with approximately 1.27 billion diluted shares. Our effective tax rate is expected to be approximately 21.5%. In closing, we are optimistic about the fiscal year ahead. The industry has many times demonstrated the pattern of landing on Ethernet as the winning technology, and that is where Arista shines best. We appreciate our customers' choice of working with us to achieve their business outcomes. Now, Rudy, back to you for Q&A.
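The Q2 guidance metrics can be combined into implied dollar figures. This is back-of-envelope editorial math from the guided numbers and range midpoints; the "implied" outputs are derived, not guided, values.

```python
# Implied Q2 FY2026 dollar figures from the guidance given on the call.
revenue_guide = 2.8       # $B, guided
gross_margin_mid = 0.625  # midpoint of the guided 62-63% range
op_margin_mid = 0.465     # midpoint of the guided 46-47% range
eps_guide = 0.88          # $, guided diluted EPS
diluted_shares = 1.27     # billions of shares, guided

# EPS times share count gives the implied net income.
implied_net_income = eps_guide * diluted_shares
print(f"Implied net income: ${implied_net_income:.2f}B")       # ≈ $1.12B

implied_op_income = revenue_guide * op_margin_mid
print(f"Implied operating income: ${implied_op_income:.2f}B")  # ≈ $1.30B

implied_gross_profit = revenue_guide * gross_margin_mid
print(f"Implied gross profit: ${implied_gross_profit:.2f}B")   # ≈ $1.75B
```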
Rudolph Araujo (Head of Investor Advocacy)
Thank you, Chantal. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to one question. Your line will be placed on mute after your question. Thank you for your understanding. Regina, please take it away.
Regina (Moderator)
We will now begin the Q&A portion of the Arista earnings call. To ask a question during this time, simply press star and then the number one on your telephone keypad. If you'd like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Simon Leopold with Raymond James. Please go ahead.
Simon Leopold (Equity Analyst at Raymond James)
Great, thank you very much for taking the question. I wanted to explore your commentary around the scale-across opportunity in particular, and I guess what I'm trying to get a better sense of is: how much revenue, if any, did that contribute last year, how material is it to the $3.5 billion forecast you're giving this year, and how should that trend longer term? Thank you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Sure, Simon. I think last year on scale-across we were just beginning, so those were small numbers, and the majority of the numbers were really scale-out. That's sort of our heritage and that's where we excel. If I were to anticipate how it would be this year: again, scale-up is virtually zero and non-existent, because it really only comes into play after the ESUN spec, so consider that more a 2027-2028 kind of number. So I think the number will really be shared between scale-across and scale-out. I don't know if I can say it's 50/50 or 70/30 or 60/40, but scale-across will definitely contribute at least a third of our AI number.
Regina (Moderator)
Our next question will come from the line of George Nader with Wolfe Research. Please go ahead.
George Nader
Hi guys, thanks very much. Maybe just continuing the discussion on scale-up: we are starting to see rack design wins. One of your competitors in the original design manufacturer (ODM) space, I think, has a couple of designs that they've announced, at least. And I know you're kind of pointing towards Ethernet Scale-Up Networking (ESUN) as being a key catalyst in generating business there. But can you talk a little bit about where you are in terms of design progress with customers? Anything you can tell us there would be great. In fact, I think a few quarters ago you said you had five to seven scale-up rack designs that you were at least working on. I'm just wondering if you can update that. Thanks a lot.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, that's correct, George. I think there is no doubt in our minds that we will have a number of racks, a number of scale-up use cases, in 2027. Maybe some of them will be in early trials, but the majority of them are looking at really starting with 1.6T, and 1.6T chips will really happen in 2027. There may be a handful that try some experimental stuff at 800 gig, but we continue to see at least five to seven rack opportunities, and some of them are multiple racks with the same customer. We're actively designing with them. There's a huge amount of liquid-cooling design, with very dense cabling options, acceleration of collectives and memory features we have to work on for low latency. So I definitely feel we're in an active engineering phase with Ken Duda and Hugh's teams this year. But unlike the ODMs, I think we're held to a higher bar, and we have to just make sure that this thing is production worthy and adheres to the ESUN specification. So I would say today's scale-up is mostly limited to NVLink from Nvidia and maybe some PCI switching, but the majority of Ethernet scale-up will only really happen in 2027 and 2028.
Regina (Moderator)
Our next question will come from the line of Antoine with New Street Research. Please go ahead.
Antoine
Hi, thank you very much for taking my question. So with demand outstripping supply, I'm wondering: how much does your current supply allow you to grow this year? Next, is the updated revenue growth guide of roughly 28% a good reflection of how much supply you've secured for this year, and what could that number look like next year, based on how much supply you think you can get as of today?
Jaishree Ulal (Chairperson and Chief Executive Officer)
Antoine, I think the supply chain problem, and Todd, maybe you can add to this, is not a one- or two-quarter phenomenon. We now think it's a one- or two-year phenomenon. You know, at first we thought it was memory; now it's all the wafer fabrication facilities. Every chip is challenged. And you can see how Chantal has leaned in with purchase commitments for multiple years. So while we will continue to improve it, this is a reflection of not just demand, but how much we can ship this year. And as we continue to ship this year, we can give you better visibility on next year. But I can just tell you we see multi-year demand, and we are going to do everything, including hurting our gross margins, to supply to that demand this year and next year, because we certainly don't want to keep GPUs idle and AI infrastructures underutilized because Arista didn't supply the network. So can the number get better this year? I think this reflects our best attempt at a good number. Did we start out at 20%, 25% growth? Yeah. So we started out at 20, we went to 25, and now we're at 27.7. Could we improve toward the tail end of the year? We'll see. But the amount of decommits we're seeing doesn't feel good, so we think a lot of this will continue into next year and keep us constrained for the next couple of years.
Regina (Moderator)
Our next question will come from the line of Erin Rakers with Wells Fargo. Please go ahead.
Erin Rakers (Equity Analyst at Wells Fargo)
Yeah, thanks for taking the question. You know, Jaishree, last quarter you alluded to engagements with other hyperscale cloud titan customers. I think you also pointed to maybe having one or two new 10 percent customers this year. I'm curious where we stand today. Any updated thoughts on adding one or two new customers at 10 percent plus? And maybe qualitatively, just talk about the engagements you're having beyond your two big cloud titans across the hyperscale vertical. Thank you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, absolutely. First of all, the two big ones, we never take them for granted. Microsoft and Meta, they're our all-time favorites. They've been 10-percent-and-greater customers for over a decade, and the partnership could never be stronger and continues to get better, both in cloud and in AI. In terms of the new entrants, we still expect at least one, maybe two. And maybe I should caveat this by saying that certainly in demand we see one or two; we shall see, Todd, how we do on shipments, to see if we can achieve the greater than 10 percent. The two of them have very interesting characteristics. They exhibit what I would call the three use cases I just alluded to, scale-up, scale-out and scale-across, where we really have a notion of creating a fabric. So far we've been working with them a lot on the front end, and now we get to complement that on the back end, definitely for scale-out and scale-across, and maybe even a little bit of scale-up in some of these use cases. The other thing we're seeing with a lot of these use cases is the lack of power, and the ability and demand to distribute and get to a more multi-tenant scale-across is very high in these two use cases. A third common thread we're seeing across them: much as we all talk about ODMs and white boxes, they deeply appreciate EOS, the features, the reliability and the observability, and just the fact that we have a robust, highly scalable layer 2/layer 3 stack commands a lot of superior advantages. So I believe the diversity of these cloud titans is largely due to the fact that we have great hardware and software combined. Ken, do you want to say a few words on that?
Ken Duda (Co-President)
It's just been an incredible journey to live through this and see the level of infrastructure build out we're getting, and how well positioned our hardware and software roadmaps are to address these ever evolving, more advanced use cases. It's a blast to get to work on this stuff.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Okay, it's always fun when your job is a blast. So Ben Bolen, I still see one, I think one, maybe two 10% customers. And Todd Nightingale, hopefully we can ship it. Oh, sorry, Aaron Rakers.
Regina (Moderator)
And our next question will come from the line of Ben Reitzes with Melius Research.
Ben Reitzes (Equity Analyst at Melius Research)
Please go ahead. Oh, there you go. Jaishree, here I am. So yeah, I wanted to ask around the constraints. Are you able to say what the number was in the quarter and what it's taking away in terms of the $2.8 billion guide? Is it safe to say things would have been 100 million or 200 million higher for both? And then, if you don't mind, can you touch on why the gross margin should go back up to 63 percent? What is it that you guys are doing that gives us confidence that it can actually expand a tad from here?
Chantal Brightoff (Chief Financial Officer)
Yeah, I don't think the commentary about demand outstripping supply is a Q1, Q2 issue. We're talking about looking ahead to Q3 2026, Q4 2026 and into next year. So I don't think there's something outside of what we've guided or what we've delivered in the first half. In the sense of the margin, the margin's a mix of things, right? All the team members are executing in full force, and I think the supply chain is doing everything they can to ensure that we have the best supply at the best price, and we've incorporated that. The only chance for margin expansion would be due to the mix of customers. So I think that's the opportunity as we look to see what we can deliver in the second half.
Todd Nightingale (Co-President)
The teams are also doing everything they can to make sure we control our costs, especially on the manufacturing side. And that includes bringing on secondary providers, qualifying new components, et cetera, to make our supply chain more resilient and more cost effective in the long run.
Jaishree Ulal (Chairperson and Chief Executive Officer)
And one thing to clarify on gross margin percentages is that we view this as a partnership with our customers. So while we did consider and have raised prices a little bit, unlike our competitors we haven't done two price increases, and we haven't done major price increases. And the price increases really come into play once our backlog starts to reduce. Right. So you won't see the impact of that yet. Our gross margin percentages are a strong function of costs going up; we're still eating a lot of those costs and, you know, giving our customers the benefit and promise of the pricing we said we would give them.
Regina (Moderator)
Our next question will come from the line of Michael Ng with Goldman Sachs.
Michael Ng (Equity Analyst at Goldman Sachs)
Please go ahead. Hey, good afternoon. Thanks for the question. I was just wondering if you could talk about whether or not Arista is seeing networking attach opportunities for customers that are using TPU or TPU-like architectures. And then, anything you can comment on as it relates to growing neocloud traction? Is that something that you think may be a little bit underappreciated by the analyst community? Thank you very much.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, Michael, you're absolutely right. I'll take your second question first. It's easy to talk about the titans because their numbers are so ginormous. Right. But the neoclouds are a very important sector, because they don't always have the staff to do everything they want to do, and they really lean on Arista's design expertise, EOS expertise, you know, network design configurations. We can provide them a family of 22 products we have in AI. So yes, I would agree with you, it's underappreciated. And the neocloud sector was very strong this quarter, if I recall, Chantal, for us in the specialty and cloud providers. What was the other question you had? Oh yeah, the TPU. So in general we are seeing diverse accelerators. Last time I spoke about the AMD accelerators. This time I will definitely give a nod to the TPUs (Tensor Processing Units), because particularly in scale across use cases we're seeing multi tenants connecting to different AI accelerators, including TPUs. So I think the diversity of accelerators is creating tremendous multi accelerator opportunity and multi protocol features that we can provide in our network.
Regina (Moderator)
Our next question will come from the line of Sean O'Laughlin with TD Cowen. Please go ahead.
Sean O'Laughlin
Great, thanks. Congrats on the results and thanks for letting me join in on the fun here. Jaishree, I wanted to get your thoughts on. You know we've been talking a lot about agentic AI and the demands that it's placing on maybe some of the more general purpose infrastructure that has been maybe in the background over the last couple of years. You've talked in the past about a 2 to 1 pressure, you know, on front end networking created by backend. First, I guess is that still the correct way to think about it? And second, you know, as agentic workflows become more common, is there any additional demand from your perspective, having a single image EOS platform on the front and the back end? Or is the front and back end still pretty siloed?
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah. Well, first of all, Sean, welcome to your first call. It will be fun. Join the fun. So agentic artificial intelligence (AI), it's kind of a buzzword, but let me break down how we see it. The biggest killer application we see in agentic AI right now is still training, and indeed it's going to move to more distributed inference. And we'd also like to see agentic AI move into a lot of enterprise use cases, all of which we're seeing, by the way. I would say large, medium, small: the largest killer agentic AI application is training, the medium is inference, and the small is obviously enterprise. In terms of back-end versus front-end, we are now seeing way more backend activity, particularly with our large AI titans and cloud titans, because there is just so much scale they need to prepare for the billions of parameters and tokens. So much so that I think they might come back and refresh the front-end, but they're almost ignoring it right now in favor of the backend. Having said that, though, by virtue of the backend deployments, I don't know if we see a 2:1 to the front-end anymore, but we at least see a 1:1. And the 1:1 can be wide area, CPU and storage; those are probably the three common use cases. Not all the customers are up and lifting everything and doing all three, although we've had cases where some of them did an upgrade of the front-end before they went into the back-end. But usually they will have to come back to that, because the minute you put that kind of performance pressure and scale on the back-end, you almost have to do something in the front-end. But at the moment I would say it's more one to one. And at the moment I'd also say that scale across in the back-end has become a bigger use case than we imagined this time last year.
Ken Duda (Co-President)
The other thing I'd like to mention here is just how good it feels to have the same set of products, the same common operating system, management suite and operating model across the front end and back end. This lowers cost for the customer and simplifies their design process to get that leverage. And we're one of the few vendors, I think the only one, who can do that. Yes, absolutely. Good point, Ken.
Regina (Moderator)
Our next question will come from the line of Meta Marshall with Morgan Stanley.
Meta Marshall (Equity Analyst at Morgan Stanley)
Please go ahead. Great, thanks. Appreciate the question. Maybe just a question on XPO monetization, or just how it helps you continue to gain share with customers, or just mind share with customers, by being so front footed with the technology. Thanks.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, thank you, Meta. I think as you know we're not a classic optics vendor, but almost always when we are selling our switches they have to connect to something, and usually it's some form of copper or optics. So Andy's innovation with OSFP, I remember this super well, when everybody was saying, oh no, no, we can just use QSFP (Quad Small Form-factor Pluggable), has proven to be not only a contribution for Arista but really for the industry at large. And that's still how we see it with XPO as well. You know, while the industry has been talking a lot about co-packaged optics, these are still science experiments, and they're very proprietary, with individual vendors doing their own thing. We may embrace open Co-Packaged Optics (CPO) a few years from now, but we think XPO has a 10 year run, especially at 1.6T and 3.2T, where you need liquid cooling and you need that kind of capacity. So, you know, all those scale up racks we're talking about wouldn't be possible without XPO or CPC or any one of those technologies. So just as the last decade was greatly influenced by OSFP, the next decade will be greatly influenced by XPO. And remember, 99% of the optical market today that we connect to is all pluggable optics. So this is a very crucial invention and innovation, not just for Arista but for the industry at large.
Todd Nightingale (Co-President)
I think this is a great example of how Arista enables an ecosystem and then we profit as that ecosystem grows. What XPO unlocks is a standard, interoperable, multi vendor way to get to four times the network density in liquid cooling, which is absolutely critical for these AI use cases. Without it, you've got this huge bottleneck at the front panel and all the extra rack space required to get through OSFPs. So we're really enabling the future growth of our industry this way, which benefits us and others as well.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, it's stunning to me. I remember when I first talked to Andy and Vijay, they said, oh, we think we'll get about 20 signatures. And then it was 40, and now it's north of 100. So it tells me the whole consortium is coming together on things like Ethernet, IP and the standardization of optics.
Regina (Moderator)
Our next question will come from the line of Tal Liani with Bank of America.
Tal Liani (Equity Analyst at Bank of America)
Please go ahead. Hi guys, can you hear me? Yes, we can hear you. Hello. I promised myself to be nice today, so I have a good question for you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
I promise to be nice too.
Tal Liani (Equity Analyst at Bank of America)
Deferred revenues. Deferred revenues doubled in the last year. If I combine short term and long term, it went up 826 million. It went up significantly in the last four quarters. What needs to happen, what are the conditions, for deferred revenues to be recognized over the next few quarters? Is it about data centers going live and traffic going into data centers, or what are the sources for the deferred revenue increase? Thanks.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Right, right, Tal. So I really do like you, so I'm going to be nice to you not because I have to, but because I like to. I think if you remember 10 years ago, Tal, we had a similar phenomenon, where in the cloud the whole leaf spine design was brand new. Nobody really knew how to build it or monetize it, and we were building some of the world's largest networks for Microsoft Azure, et cetera. Right? We had new products, they had new designs; they had traditionally done the access aggregation core and were now moving to this flat topology, and we had some fairly lengthy qualification cycles. So I would say there's a customer aspect to it and a product aspect to it. The customer aspect is that they need to have the space, they need to have the facilities, they need to have their GPUs (back then it used to be CPUs), and they've got to have their rack and stack. In many cases, by the way, we're running into examples where they literally need to manually install the cables, and that takes several months. Right. Thousands of people have to do that. So there's certainly a customer acceptance piece of it, which starts with being ready. There's also a new product piece. Many of these new products in the Arista Etherlink family, particularly for AI, are brand new: brand new chips, brand new software. The familiarity with it, particularly in the back end for scale out and scale across, is new to them. So there's a level of testing and of making sure it works with the rest of their ecosystem, including the front end. That is super important, and Arista bears a huge responsibility for that as well. So all this to tell you, Tal, that the length of time to qualify, which used to be two to four quarters, has extended to more like six to even eight quarters. So it's gotten much longer. Chantal, you want to add something? Yeah.
Chantal Brightoff (Chief Financial Officer)
The only other thing I'd add, thank you, Jaishree, is that we do recognize some of it every quarter. So it's not like it's one balance that's just aging and growing, Tal. We recognize things every quarter; things come in and things are recognized to the P&L. So I just wanted to make sure you understand that.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, it's not piling up. Some things go in and some things come out. Does that make sense, Tal? What? You're on mute. No, no, they mute him after his question. Oh, they do? Okay. All right.
Regina (Moderator)
Our next question will come from the line of Amit Daryanani with Evercore. Please go ahead.
Amit Daryanani (Equity Analyst at Evercore)
Yep. Thanks for taking my question. You know, I guess, Jaishree, you folks have positioned XPO as the next OSFP. And I would love to understand, from the OFC demos to potential deployments in '27, how do you see it changing the optics architecture within AI clusters? And then, maybe specifically for Arista, does that change the growth profile of your content per AI rack or cluster as we go forward? Thank you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, thank you, Amit. I think you should look at XPO (Extended Pluggable Optics) as a partner to OSFP. At 400 gig and 800 gig, you'll be fine with OSFP. And as we go to higher speeds in '27, '28 or even beyond, you know, OSFP will run out of steam and this will be the new connector of choice. So the migration to higher speeds equals the migration to XPO, particularly for scale out and scale across. Within a rack and for scale up, there are still a number of choices. I think within short distances of 2 to 3 meters you're still going to see a lot of co-packaged copper, and I think XPO, in terms of density, will be another alternative. But I don't rule out open Co-Packaged Optics there as well if they're really looking to maximize density in a minimum amount of space. So I think XPO will be particularly prevalent in scale out and scale across, and will be one of the choices in scale up.
Regina (Moderator)
Our next question comes from the line of Ryan Koontz with Needham.
Jeff Hobson
Please go ahead. Hi, this is Jeff Hobson on for Ryan. I appreciate you taking the question on scale across. It seems like that would be a really good fit for all of Arista's capabilities, and I know you mentioned it would maybe be around a third of revenue this year, but is this something where scale across could even be larger than scale out over the next couple of years? Thank you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Hi Ryan, or rather Jeff. I think the answer to that lies in how well we do with both and what form factors are used for both. The majority of Scale Across today is a very premier, valuable, heavy duty routing platform, the 7800. So if we do lots of that, it could get well beyond the 30%. But some of them may do it with fixed boxes too, or fixed switches, and choose to add a lot of cable, in which case it wouldn't go well above that. So we don't know what we don't know. But I would agree with you that Scale Across is by far the most significant and differentiated opportunity, one that really highlights Arista's prowess in both platforms and software.
Regina (Moderator)
Our next question comes from the line of Samik Chatterjee with JP Morgan.
Samik Chatterjee
Please go ahead. Hi, thanks for taking my question, Jaishree. Maybe slightly related to the last question here. Just trying to think about this: you said most of the cloud revenue near term is going to be scale out and scale across as we wait for scale up to ramp. How are you thinking about your market share when it comes to scale out versus scale across? In the early days of scale across, what are you seeing in terms of market share? And are customer decisions in scale across being led by the incumbent in scale out, or is it a different decision altogether in terms of how they're designing vendors in for scale across? Thank you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Good question, Samik. You're making me think. So I would say if it's a greenfield deployment, then they tend to think of it together, because they're not only building the sites but also thinking of the interconnect across them, and therefore market share is generally strong in both. In some cases where Arista has not been a historical participant within the data center, we now have an opportunity to offer the scale across multi tenant piece even in a non-greenfield situation; let's say in a brownfield where they've now got disparate data centers or AI clusters that we have to bring in. So once again, I think Arista is a really fitting example to be in scale across for both those use cases, but has the additional opportunity in a brand new data center to be in all use cases, if that makes sense. So it's giving us a chance to participate with different types of accelerators and different types of models, because people aren't getting the power and they're having to distribute the data centers, and as a result of distribution you need more traffic engineering, routing, multi tenancy. So I would say scale across is the common denominator in all our use cases, and scale up and scale out may be nice options in brand new greenfields.
Regina (Moderator)
Our next question comes from the line of Carl Ackerman with BNP Paribas.
Carl Ackerman (Equity Analyst at BNP Paribas)
Please go ahead. Yes, thank you. Jaishree, you are doing more networking design today than ever. Does that change your ability to monetize your services, to capture more of the value that you're adding to these applications? And I guess, as you address that, given the large mix of services revenue within deferred, could services revenue accelerate faster and represent perhaps 25 or 30% of sales going forward? Thank you.
Jaishree Ulal (Chairperson and Chief Executive Officer)
I don't think so, Carl. I think we're a product company, and the majority of our revenue generation, and of the interest in Arista Networks Inc as a company for all the designs we're doing, comes from our product heritage. It's not like we charge for these services. In fact, we work closely with our partners as well; we will recommend network designs, we will support services, and we are certainly the gold standard for worldwide support. But I don't expect services as a fraction of our revenue to go up. I continue to see us as a product led company.
Regina (Moderator)
Our next question comes from the line of Matt Nicknam with Truist.
Matt Nicknam
Please go ahead. Hey, thanks so much for taking the question. I just wanted to go back to gross margin. So I know we were sort of in that 62-ish range; it dipped about 170 bps year on year. And I want to dig into whether it was primarily mix related, or, you know, maybe if you can quantify how significant the memory and cost related impacts were. Any color you can provide. Thanks.
Chantal Brightoff (Chief Financial Officer)
Yeah, I think it's a great question. I would say that even if you look at the prior quarter or the prior year, the majority of the difference is the mix of customers. And just to clarify, you know, our larger customers have lower gross margin accretion, so that mix is the primary driver. The secondary driver, although not as significant, would be things like how deferred is moving, tariffs, or the memory costs or the silicon costs, depending on the quarter. So that's secondary, but the primary driver is the mix of the customer segments.
Regina (Moderator)
Our next question comes from the line of David Vogt with UBS. Please go ahead.
Andrew
Thanks. Hi, this is Andrew on for David. You know, from a high level, with almost $2.4 billion of inventory and, you know, almost two years of COGS in purchase commitments, how should we think about the supply constraints, and where are that inventory and those purchase commitments not sufficient to meet demand? Where are the holes in your inventory? I wouldn't say we have holes in our inventory, but we have surging demand, especially on the newest platforms, which of course is driving our need for the most modern silicon from our providers, and it's driving a need for an expanded amount of memory, even more than we were expecting before the year began. So that's driving us to be a buyer in the market. Luckily we've got pretty good spending power. We're a very reliable partner in these scenarios, and so we partner closely with these vendors. But there's no doubt that the newest platforms we're delivering, especially in the AI space, are driving needs in the high end of our portfolio.
Todd Nightingale (Co-President)
Yeah, and just to add to that, David, the real hole is lead times. We are experiencing such significant wafer fab shortages that we're not getting the chips in time. So more than a hole, I would just say our purchase commitments are multi-year, because we're having to deal with forecasts that are out multiple years so that we get the chips in time; the lead time on these chips is so long. So I think that's the biggest hole: lead times.
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, we are experiencing 52 week lead times pretty reliably with reservation needs beyond that. And our customers certainly do not want to wait that long.
Regina (Moderator)
Our next question comes from the line of James Fish with Piper Sandler.
James Fish (Equity Analyst at Piper Sandler)
Please go ahead. Hey guys. Maybe for you: the guide raise was primarily all on AI. Are you guys prioritizing those shipments, or what's giving the hesitancy around sort of the non-AI business and campus at this point and leaving that roughly flat still? And Jaishree, just for you, as we think about the mix here on gross margin, what are you seeing in terms of BlueBox adoption now? And are you seeing any sort of net pull-in of demand, just given, you know, you have a lot of smart customers here and they're very much aware of the supply chain constraints. Thanks, guys.
Chantal Brightoff (Chief Financial Officer)
Yeah, thank you. I'll start with mine first, in the order of your questions. I don't think we're saying that because we're raising the revenue and attributing the raise to AI, we're not excited about all the other customer segments. I think you heard both Jaishree and me talk about how we're very happy with the way the year started and what we're seeing across all three customer segments. We're very happy with what we're seeing in enterprise, which I wouldn't say is quite AI yet, so let's count that as the non-AI bucket you referred to. So wait and see. We're in Q1, reporting Q1; we'll see how the year goes. But we're very confident across all three that we're seeing strong demand. So I would leave it in the sense of let's see where we get to in our future quarter guides.
Jaishree Ulal (Chairperson and Chief Executive Officer)
And I would agree with that. Just to remind everybody, we've raised now from the 10.5 billion, or whatever we said last September, to 11.5 billion. And yes, a high degree of that is AI, but we have aggressive commitments on the campus to get to a 1.25 billion quarter and to continue to service and grow our data center and cloud just as well. So all three are growing, but certainly AI is taking the news headline. Regarding BlueBox adoption, one of the customer use cases you heard about from Ken Duda moved from white box to BlueBox. Their desire right now is to move to BlueBoxes: number one, it works; number two, it scales; number three, it actually does the job for us with AMD accelerators. They were very pleased with the diagnostics capability, the platform SDK, where we literally rewrite every piece of software and bit-twiddle the Broadcom chips very, very well, and the Extensible Operating System (EOS) features. Down the road they may use some open NOSes as well. But that would be a really good example of a BlueBox that has EOS today and may go down to other NOSes. And we continue to see that particularly in the neoclouds. We've always seen a bit of it in the cloud and AI titans because they know how to work with openness. So we've had that hybrid strategy always, but we're certainly seeing more of it in the neoclouds now.
Rudolph Araujo (Head of Investor Advocacy)
Regina, we have time for one last question.
Regina (Moderator)
Our final question will come from the line of Ben Bolen with Cleveland Research. Please go ahead.
Ben Bolen (Equity Analyst at Cleveland Research)
Good afternoon everyone. Thank you for taking the question. Jaishree, you referenced inference a little bit earlier and said it's kind of a smaller use case right now. I'm interested in your thoughts on where you think enterprise is in terms of its ability to consume inference and create agents, and how that develops over time, and where you think the front-end networks and edge networks are today in their ability to support those use cases. Basically, do we get a sustained investment period, where what you're seeing now bleeds over and becomes much more significant in enterprise, and how long lasting might that be?
Jaishree Ulal (Chairperson and Chief Executive Officer)
Yeah, Ben, I tend to agree with your thesis that while today we are in a training fever, it's going to move to a more distributed generative AI paradigm with inference-based use cases, which means you don't always need the GPU; you're going to have high end CPUs, a smaller set of parameters and tokens to manage, and specific agentic artificial intelligence (AI) use cases and applications. We're seeing very, very early trials and stages, nothing super big yet. I mean, they're not in the hundreds of thousands of GPUs like you see with the AI titans, but we're frequently seeing our customers in certain high tech sectors want to deploy clusters of a thousand, a few thousand, definitely not in the hundreds of thousands. And they tend to be, exactly as you said, not training but more inference-based, more agentic AI edge inference as well. So I think we'll see more of that. This is the calm before the storm, if you will. And as AI gets more distributed, I think it doesn't need GPUs alone; it's going to need more high performance compute. Many of these feel to us like high performance computing (HPC) use cases that are getting revived for AI. So I agree with your thesis, Ben. I think it's going to take a couple of years to fully happen.
OPERATOR
This concludes Arista Networks Inc's first quarter 2026 earnings call. We have a presentation posted that provides additional information on our results which you can access on the investor section of our website. Thank you for joining us today and for your interest in Arista. Thank you for joining. Ladies and gentlemen, this concludes today's call. You may now disconnect.
Disclaimer: This transcript is provided for informational purposes only. While we strive for accuracy, there may be errors or omissions in this automated transcription. For official company statements and financial information, please refer to the company's SEC filings and official press releases. Corporate participants' and analysts' statements reflect their views as of the date of this call and are subject to change without notice.