Video Transcript
CONNOR KRUKOSKY: Actually, it's funny. I've had people come up to me and be like, have you heard of that kid with the mainframe? I'm like, hi, that's me. [ MUSIC ]

CONNOR'S FATHER: Several years ago, Connor started collecting vintage computers. He was bringing home Teletype machines, big old tape drives.

KRUKOSKY: You know, with these types of things, it sounds like, shouldn't that be in a museum? And it's like, maybe, but it's in my basement. I've just always loved to know what makes something tick, and it just turned toward computers. I guess I could have become a doctor and cut people open, but it turned out to be computers that I want to rip open instead. I've got things here, like this is a PC, an IBM PC XT in the box.

FATHER: I think it was either the seventh or eighth grade science fair. While most of the kids were doing the volcano-type projects and growing plants, Connor built a hydrogen gas generator and was actually generating hydrogen. In fact, he told me not to play with it when it was in the garage, and I did, and almost blew it up.

KRUKOSKY: Once we moved into this house and I kind of finagled the deal to get the basement, I started filling it with stuff.

FATHER: When he gets his mind set on wanting something, he's pretty convincing about it, which is kind of where we got with the mainframe.

KRUKOSKY: And of course, we have the mainframe, the thing that everybody talks about and everybody knows me for specifically.

FATHER: He saw an ad for an IBM mainframe that was about 10 years old, I think, and he decided that he wanted this thing. And we actually tried to talk him out of it a little bit in the beginning. He brought this thing home and it was just huge.

KRUKOSKY: We got it off the trailer, kind of got it over to the side of the basement, and realized it wasn't going to fit under the deck. We had to rig up a chain fall under the deck to lower it down into the basement well and then send it through the door.
Flip the big red switch, and it will come up slowly. And I hooked it up and I flipped the switch and it came up and everything worked fine.

FATHER: Then he spent the next three months basically living like a hermit in our basement, learning how to get it up and working.

KRUKOSKY: It's 12, 13 years old, and computers today are barely catching up to it. So, it's interesting to think that it was that far ahead at the time.

FATHER: Somewhere along the way, people started getting interested that this 18-year-old kid had a full-sized IBM mainframe in his basement. Somebody wanted him to come and speak in Texas about his experience with this mainframe, and we were kind of like, wow, you know, that was one of our first inklings that this might actually be something that was good for him.

KRUKOSKY: It's been a couple of months of quite the adventure, and that's what this entire presentation is about. You know, I gave a talk and somebody from IBM -- [Mark Ensonn] -- was like, you know, this needs to be on YouTube in two weeks. And at first a few thousand views came in and I'm like, wow, that's getting a lot of views. And then it was like 100,000, 200,000, 300,000. I'm like, uh, what's going on here?

FATHER: After his speech in Texas, I know IBM approached him and asked him to come up for a tour.

KRUKOSKY: I was like, hey, can I get a tour of the Poughkeepsie plant? And they were like, wait, you know what Poughkeepsie is? And I'm like, yes, that's where the mainframe was created. That's where the original 360 was built. You know, I know a lot about the history of these machines, and I don't think they really expected that. And they were like, yes, sure, we'll give you a tour of Poughkeepsie. And, you know, now I work there. I've never really known what I wanted to do. But collecting vintage computers, IBM's always kind of stood out on top.
You know, it's this company that's kind of done everything, and they kind of always made the standard -- the PC, the mainframe itself. I've always been like, that's the company that I would like to work at. I even have things like bus and tag cables. The idea behind these is that this is how they used to transport data to and from, say, tape drives or DASD hooked to the mainframe. People have the misconception that, oh, you'll never get a job doing A because it's so niche. Mainframes are pretty niche, but there are also a lot of job openings coming up now. If you want job security, get into mainframes. You can make a lot more noise if I do the full power-up sequence. My parents could have shot me for getting this thing and, you know, I wouldn't be here, I wouldn't have a job, I wouldn't have a house.

FATHER: He said, for all the hemming and hawing we did about it and asking him not to bring this great big thing in, it was a major payoff. I mean, it was a fantastic opportunity for Connor.

CONNOR'S MOTHER: You never know.

KRUKOSKY: And that's it.

CRUTCHER: Hello, everyone, and thank you so much for tuning in today. We have a very special Facebook Live IBM session for you today. I'm your host, Troy Crutcher. I work here at IBM on the IBM Z Academic Initiative team, and I'm the program manager for the Master the Mainframe contest. We're going to get into all that later; we've got a lot of great content for you today. But for right now, I'd like to introduce my buddy, Connor Krukosky. You all probably know him as the 18-year-old who put a mainframe in his basement and is actually working here at IBM now full time on the hottest new machine, our z14. So, Connor, if you could introduce yourself for us.

KRUKOSKY: Hey, everybody. You probably already know me. My name's Connor Krukosky. I am that crazy kid who got a mainframe and put it in my parents' basement. They may or may not be watching. Hello.
So, yes, today we're going to show you the z14. We are here on the Poughkeepsie test floor. This is where we test a machine from the first hardware we get all the way to ship, and we even test customer problems here. So, this is it. This is ground zero, as you might be able to see from some of the background. Today we're going to show you one of our machines in the thermal chambers. These machines are basically giant refrigerators. We test machines from below freezing to well above any operating range to make sure that they will always work no matter what kind of failure they may see in the field. So, with that, I think we could take a walk over and see the machine we're going to be looking at today.

CRUTCHER: Yes, let's do it.

KRUKOSKY: So, as you can see here, these are the big doors; this is the thermal chamber. This machine is standing by itself. This is the z14. This is a very large machine, as you can see -- I'm about six feet tall. It's a very large set of racks, bolted together; this is effectively one unit. And this is pretty much a maximally configured machine. We've got four CEC drawers and we have five I/O drawers, one of which you can see above the CEC drawers; the other four are over here behind this panel, which I'll talk about in a minute. And above the I/O drawers we've got our power units.

CRUTCHER: So, Connor, we forgot to mention we're taking live questions. I've got them right in front of my face. So, as we're going through this...

KRUKOSKY: Absolutely.

CRUTCHER: ...don't make us do all the work, everybody.

KRUKOSKY: Yes. So, I'll start with this: this machine supports up to 170 user-configurable processors and up to 32 terabytes of RAM. It supports pervasive encryption, which is a new technology that we've released and has kind of become the standard for security. It will encrypt everything from your DASD all the way to the processor, and then from the processor to even the memory and back out again to the customer.
So, you have encryption from Point A to Point B. We've got up to 160 I/O cards, which is an incredible amount of bandwidth. And that's what the mainframe has always been best at. So, to start, CEC drawers: we've got four of them, six CPs per drawer and one SC. CP is Central Processor; SC is...actually, I don't remember the acronym for that one, but effectively it's L4 cache.

CRUTCHER: So, Connor, as we're throwing out all these acronyms today -- you mentioned CEC; what does CEC stand for?

KRUKOSKY: The CEC is the...you know, I've really never learned that acronym while I'm here. There are a lot of them I have learned; that is not one of them. A Central something complex. And I know there are probably some IBMers watching, shaming me right now in the other room. But that's okay. It is the central processing unit, effectively. I believe we used to call it the CPC, which is the Central Processing Complex. So, each drawer has six CPs, up to 10 cores per CP. The SC chip is effectively, like I said, L4 cache. And with a fully configured machine, you have 960 megs of it -- almost a gig of L4 cache. In the front you'll see these cables; these are just loops. It's PCI Express. This connects from our CEC drawers over to our I/O [cages]. We run industry-standard PCI Express between our I/O and our CEC drawers. And so you have up to 24 processors, 10 cores each; and of course, we have some that we use for redundancy, so the max a customer will have is 170. Do we happen to have any questions yet?

CRUTCHER: We do. Since we've already discussed quite a bit of stuff on this side of the machine, we actually have a question: what is better about the 14; or, is there stuff that the 14 can do that a 13 couldn't?

KRUKOSKY: So, I believe the big difference was the pervasive encryption. It is, of course, faster. We've got more cores. The last machine was eight cores per chip; now it's 10.
It went from 161 supported CPs for the customer up to 170. It's a faster chip. I can't remember exactly what the z13 speed was, but this runs at 5.2 gigahertz now. We went from 10 terabytes max supported memory to 32, so over triple the amount of memory. And we have new supported I/O cards, which I'll get into in a bit. So, first off, this is what an I/O drawer looks like. This doesn't have anything plugged into it, so it's actually visible for us. These are our PCI Express inputs from the CEC drawer. These fan out to the four I/O cards on the left, and the same is mirrored on the right. If you look at the back of this, it looks similar to the front. There are actually 32 cards per drawer; you may notice there are only 16 out front -- there are another 16 in the back. The way this works is you've got one PCI Express x16 coming in the front and one coming in the back. And if one fails, the one in the front or back will take over and pick up the workload. So, there's a lot of redundancy in this machine.

CRUTCHER: So, I do have an interesting question here, and I don't know if we can answer this. But how many smartphones would this machine replace?

KRUKOSKY: If it's I/O, probably almost an infinite amount, because a smartphone has some wireless, which is slower, and USB 2 or 3 -- USB 3 can go up to like five gigabit. Each of these cards here has two 16-gigabit ports, so that's 32 gigabit per card. And actually there's a new, very interesting card, way faster than those, that we'll talk about in a bit. So, a lot. Each phone nowadays maybe has four cores, and we've got 170. So, you're replacing a lot of phones. I'm not going to do the math in my head, but...

CRUTCHER: A lot.

KRUKOSKY: A lot.

CRUTCHER: So, we do have another question before we move on. Where do the processors come from?

KRUKOSKY: Well, we develop the processors in-house. It's not like we buy from, you know, like in x86 land, where you'd buy from Intel or AMD.
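The math Connor declines to do in his head can be sketched out. This is purely illustrative arithmetic using the figures quoted in the conversation (two 16-gigabit ports per card, up to 160 cards, roughly 5 gigabit for a USB 3 link); it's a rough comparison, not an official IBM spec:

```python
# Back-of-the-envelope I/O comparison using the figures quoted above.
GBIT_PER_CARD = 2 * 16   # two 16 Gbit/s ports per I/O card
MAX_CARDS = 160          # maximum I/O cards quoted for the z14
USB3_GBIT = 5            # nominal USB 3.0 link speed

total_gbit = GBIT_PER_CARD * MAX_CARDS       # aggregate card bandwidth
usb3_equivalents = total_gbit // USB3_GBIT   # how many USB 3 links that equals

print(total_gbit, usb3_equivalents)  # 5120 1024
```

So a fully loaded machine has on the order of five terabits per second of card bandwidth, roughly a thousand USB 3 links' worth, which is the point being gestured at.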
We make our chips in-house; we design them from the beginning. You know, our design teams do their [INAUDIBLE]. I work in development, on some of the code that deals with bringing up the chips, initializing them. So, I work kind of right in the middle. We develop our silicon in-house and we get the chips. And they're big chips; I should have actually gotten one to show. But big chips, and they're just impressive for what they are. So, some may notice these are actually 24-inch racks. Some may know what that means; some may not. Standard servers are 19-inch racks, like up here. This is what we call our SE, which stands for Support Element. This is what brings the machine up, initializes everything, and kind of controls the machine -- it just tells everything what to do. And this is the screen and keyboard that connects to it so you can do local support, but you could also remote into it. So, this over here is what we call the BPU, the Bulk Power Unit. You'll see we've got AC coming in here -- I think we run 480 volts, three-phase, here. And you'll see a bunch of repeating units. These are each an AC-to-DC converter. For a lower-configured machine, you get fewer of these; for a higher-configured machine, you have more, for the capacity of what you're running. And these are the cables that come out of it. You get a few hundred volts DC going into all the other parts of the system, which then bring it down to the voltages they need. So, we'll come look over here; we can see a bit more I/O. This is really kind of the business end. If you swipe a credit card, it's going to come in somewhere here and make its way over to the processor. So, there are a lot of interesting parts here. First, I'll start by saying these are our PCI Express link cables. You can see we've got these two drawers hooked up here.
And there are a few interesting cards here. We've got some [flag cons] littered about, I think, but we've got some OSA cards. OSA is our Ethernet card, effectively. We've got 10-gigabit, gigabit fiber, and gigabit copper, and you can use those for anything: if you're running Linux -- and yes, the mainframe does run Linux; that's what I ran on mine -- you can use it for an SSH session, VNC, whatever; or you could use it for a 3270 terminal -- TN3270 -- which is for z/OS, z/VM. So, you've got OSAs, and then we've got these here; these are our zHyperLinks. This is one of our newest cards, and a very interesting one. So, FICON connects to the DASD through a switch or something, right? It's good bandwidth, 16 gigabit. But FICON is a very advanced protocol. It's kind of heavy. So, you've got good bandwidth and good throughput, but it's probably not that good on latency.

CRUTCHER: So, I do have a lot of people asking the same question, which is awesome. I know you guys do this a lot here. When you take this machine completely down, how long does it take to bring it all back up?

KRUKOSKY: Okay. That is a good question. Actually, I'll run you through what we do. So, this is the big red switch. Everybody likes the big red switch. We call it the EPO switch -- Emergency Power Off -- because generally it's not the on/off switch; once you turn this machine on, you never shut it down. This is for emergencies only. That's why it's red. So, you turn that on and it comes up: it starts, you hear it spin up fans and test fans and do some things while the SE comes up. That boots, and once you get a log-in screen there, that probably takes five, 10 minutes. Then you have to do a power-on, which can take about 20 minutes.
What that will do is -- once you come up cold from EPO, you have standby voltages everywhere, where everything can talk to each other but nothing can do any activity; like, you can't bring up the processor yet. So, then you do a power-on, which is like pressing a soft power switch. And it talks to everything and says, okay, we're going to give you power here. Give you power, are you okay? Yes. And so on through everything. That takes about 20 minutes. Then we do something called a power-on reset, a POR -- or "IML the machine," we sometimes call it, for Initial Machine Load. And that tells everything, okay, we want you to initialize your links; we want to bring the processors up, give you millicode, make sure that everything is good and happy and can talk to each other. That takes about another 20 minutes. Once that's finished, then we can IPL our operating system -- IPL is our way of saying boot an operating system; it stands for Initial Program Load. And it really does depend on the machine. So, you're looking at anywhere from half an hour for a very small machine up to an hour, maybe an hour and a half, for the whole process. But of course, you bring one of these machines up, they never go off. Unless you press the red switch; then it will go off.

CRUTCHER: Don't press the red switch. Got it.

KRUKOSKY: Yes, because, you know, these machines are very recoverable. One thing you may notice here is this is water cooled. Now, there are two options for the water cooling. But every z14, I believe, comes with the CEC drawers water cooled. So, we've got our hoses on the left and right. Cold comes in and hot goes out. Then we come down -- there's a little reservoir at the top here -- and then we come down to our radiators. There are three of them down here; that's for redundancy.
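The cold-start sequence Connor walks through can be laid out as a simple ordered checklist. The stage names and rough timings below are taken from his description (SE boot, power-on, POR/IML, then IPL); the exact figures vary by configuration, so treat this as a sketch rather than a spec:

```python
# Rough sketch of the z14 cold-start stages described above.
# Stage names and approximate durations come from the conversation.
POWER_UP_STAGES = [
    ("SE boot",   10),  # Support Element comes up; fans spin up and test
    ("Power-on",  20),  # soft power: standby voltages -> full power everywhere
    ("POR / IML", 20),  # initialize links, load millicode, bring up processors
    ("IPL",       10),  # Initial Program Load: boot the operating system
]

def total_minutes(stages):
    """Sum the approximate stage times for a full cold start."""
    return sum(minutes for _, minutes in stages)

print(total_minutes(POWER_UP_STAGES))  # 60
```

That sums to about an hour end to end, consistent with the "half hour up to an hour, maybe an hour and a half" range quoted.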
You can have one or two fail and still be operating while the IBM field service tech comes and replaces them. Basically, these are very dense radiators -- imagine the radiator in front of your car, but much denser; it requires a lot of air pressure to push air through. But that's how we keep our machines cool in this configuration. Otherwise, we have the heat exchange block down here, and if your data center has water coming in and out, you can water cool that way, too. So, you don't have to have radiators blowing hot air onto your data center; instead, it will transfer the heat into a pipe and pump it out of the building, where you can chill it and bring it back in again.

CRUTCHER: So, Connor, before we move on, why don't we take this one, because I've gotten a lot of questions about what exactly this machine is used for and what this big fridge is used for.

KRUKOSKY: Right. Okay. So, this is what we call one of our characterization machines. There are four chambers like this; this is just one of them. This is a giant refrigerator; we can bring it from below zero up to as high as you want, effectively. What this is used for is we put in chips and cards and stuff we want to test under extreme temperature to make sure they keep working -- for example, if this machine is sitting on a data floor and the AC fails, it can get very, very hot in there. And there have been incidents where the chamber decided to quit on us and it got very hot in here -- upwards of 60 C or something ridiculous; actually, I think it even got up over 100 C in the chamber. You open the doors and it feels like a sauna. But the machine still works. We don't usually test that crazy, but it still works. That's what we do with this machine, and that's why there's a lot of test wraps and things that you really wouldn't see in the field. Usually you'd have stuff hooked up here. So, it's a test machine.
It's a development machine. We don't have the doors on here because otherwise they would get in the way even more. So, yes, as I was saying over here: the zHyperLinks. That's an interesting card. What they're designed to do is go straight to a specific DASD unit -- say a flash-based DASD unit with some SSDs -- very fast. This is basically a very low latency, very fast connection. You go and [talk] to one DASD with it. You also have FICON going to that DASD for slower communications if you need it, but also for controlling it and telling it, hey, we want this data. So, this is almost a data-specific line -- we request data and it comes speeding down this line into our CEC, so it's a very, very fast link. It's not comparable at all to Fibre Channel or FICON. Which, actually, going back to FICON: they do support Fibre Channel storage as well. So, if you are running Linux or even z/VM, you can use Fibre Channel storage, which is significantly cheaper, and use that for IPLing, say, Linux. Happen to have any other odds and ends, questions, before I continue?

CRUTCHER: Keep on going.

KRUKOSKY: Okay. So, some may notice some of these cables are a little different than the others, a little more shiny and braided. These are A-bus cables. These connect the CEC drawers so that they can talk to each other, because otherwise these are kind of independent units. So, these are the very, very high speed links from one drawer to the other. And that's the other thing the SC does: it offers up those connections. The SC talks to the CPs and sends out three of these cables, connecting the four drawers -- each drawer connects to the other three. So, if one drawer wants to talk to a processor in another drawer, it's one cable hop to that processor. You may notice some interesting little things, like these, what they call light bars, because they're just bars with lights.
If a field tech needs to come out and replace something, we can light up the light and say, this card needs to be replaced. So, we guarantee we're replacing the right card, because if you pull the wrong card, that wouldn't be very good. Yes, so that's another interesting feature. So, say this was powered on and a card fails. What we can do is power off the slot. If you've got it configured properly -- say you've got two cards handling your data to your DASD, shown as one logical card -- all the data will fail over to the other, so we're only going through one card. And then, once the slot's powered off, you can pull the card and replace it while the machine's operating; no one would ever know. By the way, this is what one of our I/O cards looks like. Very mundane. Not much going on. Some vents at the bottom and top; the air flow goes in through the bottom and out the top. Your I/O connections are in front, and what connects at the back are power and data. So, it's just a nice little I/O card, very easy to install and uninstall. I mean, you can have a field swap done -- after going through all the processes to make sure you do it right -- in a matter of half an hour. And you can even do things like swap processors while the machine's running, which is something some people coming from other fields may not have even thought about. So, say you've got a processor that failed. You've got spare processors, so if a processor fails while you're running, you'll fail over to some spares. You'll still run; you'll never notice. But then this machine will call out to IBM and say, we have a processor that failed; we've got to get another one. So, a service tech, on site or off site, will come, and what they'll do is go through the process: bring down a drawer, power it off. You'll lose all the processors in that drawer, but they'll all move over to spares on the others.
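The failover-then-hot-swap idea described above can be sketched as a toy model: two cards present themselves as one logical path, and when one fails, the survivors absorb its share of the traffic while the failed card's slot is powered off and the card is replaced. The function and card names here are illustrative inventions, not IBM interfaces:

```python
# Toy model of redundant I/O paths: when a card fails, traffic is
# redistributed evenly across the surviving cards on the same path.
def redistribute(cards, failed):
    """Return each surviving card's share of the traffic after one failure."""
    survivors = [c for c in cards if c != failed]
    if not survivors:
        raise RuntimeError("no surviving cards: the path is down")
    return {c: 1.0 / len(survivors) for c in survivors}

# Two cards serving one logical path; card_A fails:
shares = redistribute(["card_A", "card_B"], failed="card_A")
print(shares)  # {'card_B': 1.0}
```

With the workload carried entirely by `card_B`, the slot holding `card_A` can be powered off and the card swapped with no visible interruption, which is the procedure Connor describes.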
And then you disconnect everything. You pull the drawer out after powering it off. You can swap a processor, swap memory, swap an SC chip, reinstall it; and then it will bring it all back up and move everything back over onto that drawer, and you're back up and running as if nothing ever happened. It is fantastically amazing that you can do that with machines like this, and this is really how we get 24/7/365 uptime with these machines, with 99.999 percent uptime. The other thing is the memory. I've worked with the memory team and they're very proud of RAIM, which may sound familiar if you know RAID, which is random access...random access...oh, man. Why is that escaping my mind right now? Anyway, most people probably know what RAID is, and RAIM is basically the same thing but for memory. It's redundant. So, if memory fails, you're still operating. You just have to replace it before more fails. If you have enough of it fail, you're toast, but that's pretty much never happened. Any other questions?

CRUTCHER: So, yes, I did see something about running Linux.

KRUKOSKY: Right. So, we've supported Linux since pretty much the first Z we released in early 2000, which is the generation right before my machine. And since then we've gotten better and better support, and we test it. I've tested Linux on this machine before it was released. So, you can run Linux, and we offer something called IFLs, which are basically CPs set aside just for Linux, and it's cheaper that way. And, yes, it's very nice to be able to run Linux. Say you primarily run z/OS but you need a front end for something, and you're migrating over from something that was running Linux -- you put it right on this machine. So, there's that. You can run it independently. You can order a machine that's only set up for Linux, so you don't have CPs and can't run z/OS; you'll only run Linux.
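The RAIM memory redundancy mentioned above works on the same principle as RAID parity, applied to memory channels: an extra channel holds redundancy information so a failed channel can be reconstructed from the survivors. The sketch below shows the generic XOR-parity version of that idea; it is an illustration of the principle, not IBM's actual RAIM implementation:

```python
# Generic XOR parity, the principle behind RAID-/RAIM-style redundancy:
# the parity channel is the XOR of all data channels, so any single
# lost channel can be rebuilt from the others plus parity.
def parity(channels):
    """XOR the channels together element-wise to form a parity channel."""
    out = channels[0][:]
    for ch in channels[1:]:
        out = [a ^ b for a, b in zip(out, ch)]
    return out

def rebuild(surviving, parity_channel):
    """Recover one lost channel from the survivors plus the parity channel."""
    return parity(surviving + [parity_channel])

data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # three data "channels"
p = parity(data)
# Lose channel 1 ([4, 5, 6]) and rebuild it from the rest:
recovered = rebuild([data[0], data[2]], p)
print(recovered)  # [4, 5, 6]
```

This is why a memory failure leaves the machine operating: the lost data is recomputed on the fly, and only a second overlapping failure (before the part is replaced) would cause trouble, just as Connor says.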
And of course, it's cheaper that way, because z/OS and z/VM are...you know, that's what big banks are running -- and it doesn't mean you need to be a big bank to run it -- and they may have features that some smaller companies don't need. So, yes.

CRUTCHER: Awesome. And I do want to thank our followers right now for answering our questions about some of our acronyms. That's awesome.

KRUKOSKY: Yes. Thank you. So, you might be able to see there are some blank spots up here and over here. This is where we would put a battery backup unit. So, this machine can have basically giant UPSes that go from here back -- this rack is about five or six feet deep. So, you've got big batteries up there. Now, say there was a power outage or a brownout at the data center. That would allow the machine to run for about 20 minutes while you either spin down your workload or wait for power to come back, because a lot of buildings have generators and such, but if you lose power suddenly you have a dropout for a few minutes. So, that's a feature some customers like to have if they don't already have UPSes in the building for that type of infrastructure. So, another card that can go in the front of these: the output here is a PCI Express x16 link that goes from the front of the CEC out to the I/O drawer. Another card you can put in here is the coupling card. Coupling cards are something specific to the mainframe; they allow you to interconnect machines so they can talk to each other as if they're on a LAN, but over a higher speed, lower level interconnect, because they know it will never be out on a public link -- which doesn't mean it's insecure. That allows two operating systems, like z/OS, to connect and talk to each other. And actually, another z/OS- and z/VM-specific mainframe card is the [RoCE] card, which is effectively an Ethernet card that allows z/OS to talk to the network very well. And it's better than OSA.
You can use OSA, but this has specific features beyond it. It's just kind of an interesting card. We, of course, have other cards, like encryption -- Cryptos -- and we also have compression. So, compressing data, encrypting data: there are quite a few different I/O cards you can have in a machine like this.

CRUTCHER: So, I have my own question, actually. Is there anything in the z14 -- because I know you've worked on it since the second you started here, and you just weren't allowed to talk about it because a lot of stuff was very hidden -- but since the announcement of the new machine, is there anything in here specifically that you did, different from the other machines, where you can say, look, I did that; I'm Connor, I put this part in here?

KRUKOSKY: Well, I haven't been here for enough of the development process. It's not like I made a decision saying we should have this.

CRUTCHER: Right.

KRUKOSKY: But I had a lot of fun working on, let's see...I have fun with everything, you know? But, oh man, a chip running 5.2 gigahertz was one thing. I believe the last chip ran at about 4.8 or five, so a significant jump for this generation, like a 15 or 20 percent increase in performance. Oh, and also SMT. I believe they've added some features -- more support for SMT, which is Simultaneous Multithreading. So, you know, I had the most fun with the core. Seeing them bring up the chip, having issues at that speed, and working through getting it to work was kind of the most fun I've had here. So, yes.

CRUTCHER: So, we have another question. Can we have virtual machines on this zSeries?

KRUKOSKY: Oh, that's a great topic. Okay. So, VM support. People from the x86 world might be used to VMware. VMware is a pretty good hypervisor; I think it's kind of the industry standard for x86. You may have heard me say z/VM a few times; that's the operating system -- it stands for Z Virtual Machine, effectively. VM, that's what it is. We also have what are called LPARs.
So, when you configure a machine in the IOCDS, where you set up the I/O, you also set up what we call LPARs -- Logical Partition something-or-other. Basically, imagine if you could set up in your BIOS, saying, I want two virtual machines running here and here -- except it's in the BIOS. That's effectively what LPARs are: very low level virtualization. Nowadays -- you know, it used to support running the whole machine as one unit, but nowadays you can't do that. If you want to, you can apply basically all the resources to one zone, but nobody really does that, so we've kind of dropped support for it. So, LPARs are very efficient, and z/VM is actually based on pretty much the same code. From the stories I've heard, VM was developed first and then they turned it into LPAR, so it's very hand-tuned code. From what I've heard, on z13 people have been able to go 14 layers down with z/VM with it still being usable. Think about that. You've got VM running under LPAR, then VM running under VM, VM running under VM, 14 times, and it's still usable. It's amazingly efficient, because here we develop the software and the hardware, so we have the chip working together with that software to make it as efficient as possible -- whereas VMware develops the software and Intel develops the hardware. So, it's amazing how efficient that is, and that's one of the huge selling points of the mainframe: how efficient the virtualization is. And I think, with how technology has grown to be so complex, a lot of companies are running more and more virtual machines. That comes with the advent of Docker and such: finding more efficient ways to do multiple things on one machine that are secure and separated. So, z/VM effectively lets you virtualize things without the need for Docker. Not to say that we don't support Docker. Docker runs under Linux on Z.
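Why does per-layer efficiency matter so much for nesting? Overhead compounds: if each virtualization layer retains only some fraction of the performance of the layer above it, that fraction gets raised to the power of the nesting depth. The 14-layer figure comes from the conversation; the 95 percent efficiency number below is invented purely to show the compounding effect, not a measured value for z/VM:

```python
# Illustrative only: how per-layer virtualization overhead compounds
# with nesting depth. `eff` is the (hypothetical) fraction of the parent
# layer's performance that each nested layer retains.
def remaining_performance(eff, layers):
    """Fraction of bare-metal performance left after `layers` of nesting."""
    return eff ** layers

# Even at a hypothetical 95% per layer, 14 layers leave roughly half:
print(round(remaining_performance(0.95, 14), 2))  # 0.49
```

The takeaway is that a stack 14 VMs deep is only usable when the per-layer cost is tiny, which is the point being made about co-designing the hypervisor with the chip.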
And you know, something else I know we've talked about: blockchain. Blockchain is a thing on Z, with z/OS, I believe, and Linux. So, yes, the virtualization on this machine is something you should read up on if you haven't and if you're interested in that type of thing.

CRUTCHER: Absolutely. And we're also posting links about all these different technologies -- blockchain and Linux -- in the comment feed, so definitely check out those education pages.

KRUKOSKY: Absolutely, yes. So, c'mon, there have to be some questions in there.

CRUTCHER: Yes. Can you talk more about the cooling system?

KRUKOSKY: Oh, okay. So, the way the cooling system works is each CP and SC has a water block on it. You've got one hose going into the input of each block and one going out. It's very simple: cool water going in, coming out, and there's just a pump down there that cycles it through. With the radiator setup, it pumps the water through the radiator and a big fan blows across it. And actually, we do have safety features. Say all your pumps fail, and now you've basically got an insulator sitting on top of your CPU. The CPUs will slow down to the point where they sit at a high but still safe temperature; the machine will slow down, but it will not hurt itself and it will not stop your application from running. Other machines might panic and shut themselves down, or start beeping really loudly and cook themselves to death. With this machine, there's a lot of recovery. I would say half of the work on this machine is getting all that recovery to work flawlessly and making sure the machine will always be up. And that's one of the reasons a lot of larger customers -- you know, banks -- rely on this technology: they know they can rely on it and it will always work. And, I mean, it goes to show: my machine is from the 2004-to-2008 generation; mine was made in 2005. I took it home.
You can tell it sat out in a warehouse for a while; it has a little bit of rust on it. I transported it with no static protection, in the back of my truck, on carpet. You know, threw it all together, dragged the rack through mud, and it still works once I put it back together. CRUTCHER: And your parents haven't made you get it out yet? KRUKOSKY: No. I'm sure...they want it out. They want it out. But...yes. It will be up here in Poughkeepsie in my house down the road some day. CRUTCHER: That's awesome. Yes, it looks like people are just making comments, keep your questions coming. KRUKOSKY: Yes, please do. CRUTCHER: Definitely something that we wanted to talk about is not necessarily getting your hands on a physical machine, because that's rare. That's very rare. KRUKOSKY: No, totally do that if you have the resources, from friends... CRUTCHER: If you have the resources, yes. KRUKOSKY: ...and the concrete floor to put it on and the power to feed it. Do that. I highly encourage you. One thing we do ask is, on older machines, you know, make sure the SEs haven't pulled the hard drive, because then it's the coolest hot rod you can own without an engine. That's what it becomes, a giant paperweight. CRUTCHER: Because I had a couple of people ask, how can I get my own mainframe, so. KRUKOSKY: You know, this machine probably weighs almost two tons, three tons. My machine weighed about a ton, so it's not for the faint of heart, but I'm not saying don't do it. CRUTCHER: All right. KRUKOSKY: But you can get access to mainframes. CRUTCHER: Exactly. Yes, let's talk about some of those. KRUKOSKY: So, Marist, right down the road, like two miles up the road, they have a machine dedicated to running Linux, and online you can sign up and get like a month or two of access to your own Linux zone by itself. You get like two cores and like 32 gigs of RAM or something.
You get...I can't remember exactly; I played with it like a year ago. But you know, you can do anything you want and it's yours. CRUTCHER: And it's yours. KRUKOSKY: I ran a Minecraft server on mine. You can do literally anything on it if you want. It's fun. I would encourage you to go try that if you know Linux at all, because it's the same thing; it's just, you know, you're not going to be able to install everything, because x86 binaries, yada-yada. As for other ones, IBM's been starting to roll out more and more coding things. If you want to get into the coding side of things, like COBOL and stuff, they have trial stuff online. I'm sure the links will be in there. I'm looking at the guy who should be posting the links; you should be posting the links. But yes, you can try that out, Spark, you know, different features of the mainframe. I think you can even play around with blockchain if you want. There's a lot of things you can try out. And honestly, Linux is the one I'd tell you to try out first if you're not familiar with any of the IBM-specific operating systems, because 3270 -- I won't say it's old or archaic; I like it, I think it's a very nice interface -- but it's going to be odd to people who haven't used it before. And actually, a lot of the examples, they've been trying to make it so you don't have to use 3270. So, if you want to try programming but you're scared of that interface, I think with a lot of them you don't have to use the 3270. So you're encouraged, go try it; you don't really have anything to be afraid of. CRUTCHER: Yes, so we actually have a developerWorks website, I believe it's developer.ibm.com, which has amazing new developer journeys where you can go out and test. We've got like five Z-specific journeys out there now where you can actually get your hands on and play with this new technology. It's fantastic and we're trying to build more and more every day.
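The "x86 binaries, yada-yada" point is about CPU architecture: a Linux on Z zone reports itself as `s390x`, so a prebuilt `x86_64` executable won't start there, and you install `s390x` packages or rebuild from source instead. A minimal sketch of that check -- the helper function here is hypothetical, just to show the idea:

```python
import platform

def binary_can_run(binary_arch: str, host_arch: str) -> bool:
    """Hypothetical helper: a native executable only runs when its
    architecture matches the host's (ignoring emulation layers)."""
    return binary_arch == host_arch

# A Linux on Z guest reports "s390x"; a typical PC reports "x86_64".
print(f"This host is {platform.machine()}")

# An x86_64 build won't start on a Linux on Z zone...
print(binary_can_run("x86_64", "s390x"))   # False
# ...but an s390x build of the same program will.
print(binary_can_run("s390x", "s390x"))    # True
```

This is also why Minecraft works fine on such a zone: the Java edition runs on any architecture with a JVM, sidestepping the native-binary mismatch.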
KRUKOSKY: And since you run it, you should talk about Master the Mainframe. CRUTCHER: So, yes, I told you all we were going to come back to it. So, I'm the program manager for the Master the Mainframe contest. I know a lot of people on here have actually heard about this contest. We run it specifically for students. So, we reach high school students, college students, Ph.D.-level students globally; it's actually a global thing. You can go to masterthemainframe.com and see it. And we offer it with no experience necessary, and it's really just about getting hands on the mainframe for the newest talent that's about to come out into the world, because we're about to have a real talent gap here. So, we're looking to fill that up with all these high schoolers and college kids. It's also a really great way to do that. They can win prizes. So, it's a really great thing for them to do. What a lot of people don't realize, though, is that you can actually do the Master the Mainframe contest and not be a student. So, we're going to send out that link too, which we call the Master the Mainframe Learning System. KRUKOSKY: Yes, actually, I had somebody on Twitter -- IBM said to them, you know, oh, try Master the Mainframe, and they said, but I'm not a student. So, yes, I didn't know that; I kind of figured you didn't have to be, but you heard it from the man himself: you don't have to be a student to do it. CRUTCHER: Yes, so anybody that's watching, following, you can definitely... KRUKOSKY: And also, you know, I can attest, I looked at Master the Mainframe. I haven't done it myself because I was too busy with my own machine, but I looked at the beginning stuff and it was very simple, I was like, okay, this is easy. And the stuff at the end, I was like, okay, umm, I shouldn't have skipped the reading. You know, it does get complex, and you go at your own pace.
It's not like you've got to sit and listen; there are videos and reading, it's kind of a mix. And yes, you get access to a mainframe, a 3270 connection, et cetera. I think we're running out of time here. But yes, any last things you have to say about Master the Mainframe? I know I kind of cut you off. CRUTCHER: No, everyone go and try it. You actually get open badges for completing parts two and three, so that counts toward your credentials. That's my plug. KRUKOSKY: So, yes, I think we're going to be wrapping up here. Any final questions, actually? CRUTCHER: No, I think... KRUKOSKY: Just, you know, give it a minute, see if anybody rolls in. CRUTCHER: No, I think we're good. Yes, I just want to thank everyone for tuning in today. I want to thank Connor very much for walking us through his love of the machine, and we'll be following up with comments on here. KRUKOSKY: All right. Have a nice day, everybody. Hopefully we can do this again in the future.