Pat: Yeah, one of the things. A little bit of my own personal story: I started at Intel so young, I went through puberty there. I grew up at the feet of Grove, Noyce, and Moore. Thirty years with the company, then I took an 11-year vacation to EMC and VMware, and by the way, I was outside of Intel almost to the day the same amount of time Steve Jobs was out of Apple. It's like the death of the vision. I wanted to be the CEO of Intel, but when I was pushed out of the company 13 years ago, they killed a project that would have changed the shape of AI. I had a project underway, called Larrabee at the time, to do high-throughput computing on the x86 architecture. They killed it after I left. That left Nvidia with an essentially uncontested position in AI hardware, throughput computing versus scalar computing. Nobody put pressure on them. And Jensen, by the way, he's a really good guy. I know him super well. He worked super hard at owning throughput computing, primarily for graphics initially, and then he got extraordinarily lucky. You know, they didn't even want to support their first AI project. Right. Big-flops machines were needed for the big data sets people were looking at. Intel did nothing in the space for 15 years, and now as I come back, I have a passion: okay, we're going to start showing up in that space. Our strategy here is, number one, democratize AI, because today you see the collision of high-performance computing and AI in these 100-billion, soon to be trillion-parameter models that take 30, 60, 90 days to train. How many of those can you do? It is the ultimate in high-performance computing: weather modeling, nuclear simulation, high-end training. You can't afford it. So what are we going to do? We want to democratize AI. We want to make it available to every application developer. You can put it under your desk. It's in your laptop. Our next-generation client products will have 20 TOPS in a standard, mediocre PC.
That's 20 TOPS, more than was possible on a high-end supercomputer about 15 years ago. So democratize it. Also open up the software stack around industry standards and get away from proprietary technologies. We're driving a technology called SYCL, an open, standardized parallel C++, so that we can eliminate proprietary technologies like CUDA. And we're beginning to offer our high-end compute chips to compete in that high-end training space as well, and building it into our standard Xeon processors, to truly make AI available for any and all. Because it's the most important active workload today. And it's evolving like crazy. I helped to create IEEE P854, the floating-point standard, at 64 and 80 bits. Now we're talking about FP8. I mean, it's like, what?! Such trivial low-end data types are suitable because instead of big vectors, let's create little things that are guessing at the right answer, and AI is just sort of a probabilistic guess for most things. So it's changing rapidly: sparse matrices, different algorithmic models, breakthroughs happening rapidly. And if we think about AI, you look at something like ChatGPT, incredible technology, but it's on the simplest data set, text. We're going to start getting to some hard data sets going forward, which are going to be orders of magnitude more challenging than the ones we're solving today with AI. So I think we have at least a decade, probably two decades, of sheer innovation in front of us, and all of it's going to need extraordinary amounts of data, extraordinary amounts of networking, and big honking compute. So we're going to build lots of fabs so we can build lots of compute.