StormForge Fireside Chat Podcast | Episode 3

Brian Gracely (@bgracely) from Red Hat speaks with us about the history of OpenShift and how both it and its team have evolved to address the needs of the community. We talk about the duality of Kubernetes in both an enterprise and upstream sense, and how to solve for the pain of companies that are NOT Google.

NOAH: Hi everyone and welcome to today’s episode of the StormForge Fireside Podcast. This week in our lead up to KubeCon, we’re talking all about pain points and we’re talking with a variety of people representing a variety of backgrounds. I, as always, am your host, Noah Abrahams, Open Source Advocate here at StormForge, and with me so graciously today is Brian Gracely. Brian, why don’t you introduce yourself?

BRIAN: Hey Noah. Thanks for having me. My name is Brian Gracely. I'm Senior Director of Product Strategy for Red Hat's cloud platform, specifically OpenShift and what we do around Kubernetes.

NOAH: Awesome. We've got a handful of different perspectives coming along this week, and I like that you're representing a vendor perspective, something we represent as well. We like vendors. And vendors bring products, and products solve pain points. So, let's start at the beginning. You've been with OpenShift for a long time, right?

BRIAN: Yeah, so I joined Red Hat right when we started shipping OpenShift as a Kubernetes platform. Previously it had been non-Kubernetes. So yeah, I’ve sort of lived through everything from… I think we actually shipped the first version of OpenShift even before Kubernetes 1.0 went GA. So yeah, I’ve seen a lot of the pain points.

NOAH: So what were those pain points? What were people looking for that drove this product's evolution, that drove the development from the beginning?

BRIAN: Yeah, wow. I'll try and keep it fairly succinct. When we first got started, you could put this at around 2015, we were somewhere between two camps. There was the "PaaS is going to be the way that developers build applications" camp, very opinionated, hide a lot of the infrastructure, and there were a lot of folks in that camp, especially if you were from the application side of the world. And then you had people who were enamored with Docker and being able to get, you know, the low-level container stuff, consistency from desktop into production, and who kind of wanted to tinker. When Kubernetes first came out, you sort of had a mashing together of both those things. So the thing that we've always been trying to fix, the thing that's been the biggest pain point, is: do we pay more attention to and give more capabilities to the operations people, the platform, the people who run the thing that runs containers? Or do we hide a lot of that and give more capabilities to the developers, because ultimately applications run the business? You can extrapolate that a lot of ways, but that's always been the biggest pain point Kubernetes has been trying to find the right balance between.

NOAH: So, you’re on the product strategy track that’s really focused on that sort of high level need, correct?

BRIAN: Yep.

NOAH: So, how do you gather that understanding? How do you wrap your heads around where the project's going to grow and what the pain points are that customers are working through? I suppose this is sort of an "explain product management to me" kind of question, but what does your process look like when you're talking to customers and trying to figure out: what do they need, where does it hurt, and how do you make it better for them?

BRIAN: Yeah it’s a great question because there’s a lot of pieces to it. I’ve explained it to people in a bunch of different ways because it kind of depends like you said where you come from, what your perspective is. So on one hand, there’s the basic pain point of I have this set of open source projects and communities building things, and if you don’t have any perspective on what you’re building towards, you’re just sort of building things. So like one of the… on the project side one of the pain points that we’ve all tried to sort of overcome at least from the earliest days to where we are now was when Kubernetes first got started, it was very Google-centric, right? I mean it came sort of as a variation off of Borg. When Craig and Joe and all those guys were building it, their perspective was I’d like to take this thing I’ve done at Google for a long time with Google skills, with Google scale problems, and bring those into the world. That’s an awesome concept and we’re glad that they did it, but one of the biggest things that we had to do especially Red Hat as one of the earliest contributors was go hey, that’s not what American Express, or Ford Motor, or Pfizer Pharmaceutical… They don’t look like Google. So what they need is things that are capable of dealing with the non-cloud native application. They need to deal with persistent storage. They need to deal with multi-tenancy or role-based access control. So in the earliest days, one of the things we had to bring to the projects was this just sort of perspective of things that weren’t Google, right? The second thing we have to do effective and this concept is, you know, there’s so much that comes out so fast out of whether it’s Kubernetes project early on or all the things in the cloud native computing foundation, is as you talk to customers or end users, whatever you call them, is them going what is all this stuff and do I need it, right? So service mesh comes out or you know something in the security side of things comes out and they go, do I need that thing? What is it, you know, what does it look like? Does it look like my existing security? Is this a developer thing? So there’s the whole thing of kind of explaining to people, this is why this project got started. There’s a problem. You may not have it yet, but as you grow your environment you’re going to have this problem and this will help fix it, right. So there’s an education aspect that we take out into the market, and then there’s an aspect of it that is customers, end users, groups coming back to us and going here’s… now that we have some education, now that we’ve used it somewhat, here’s how we’re using it and this is where we’d like to see it go. This is where we’ve seen a lot of things evolve, right. So we saw people come back to the project initially and say hey, we’d like to deploy this on top of VMware, on top of AWS, on top of Azure. We’d like you guys to build it such that it’s aware of those things, right. I don’t want to have to know about all those. Oh, cool. We are doing a bunch of stuff with stateless applications, but the world isn’t all stateless. We need you guys to figure out how to do stateful applications, like make it so that I can get to persistent discs and stuff like that, and so there’s always been a good flow of information that comes back, and what we’re always trying to figure out is how many of those things can actually be solved with what exists? 
Sometimes the answer is, oh, no, we can solve that problem if you look at it slightly differently, and other times you have to go, yeah, that's a huge problem, I can see why 25 companies all have the same issue. Let's figure out how to get that into a working group, how it gets built, let engineers look at how to *inaudible*, and then we can figure out whether you commercialize it, put it in the open source project, whatever that might be. So there are always multiple lanes coming and going between the perspective of what's possible and what people are actually using it for, and we're trying to find the right balance between solving it today and solving it two years down the road.

NOAH: Okay, that brings up an interesting question in my mind, because you've got this dichotomy now. You're contributing to this larger enterprise-class product, bringing it to, you know, the Amexes of the world, but you've also got whatever goes back upstream to individual contributors, who might work for a large company, might be students, but who are all typically operating as individuals, from my experience in the Kubernetes community. When you're talking about how problems have been solved, say you introduce a new feature into the OpenShift platform, what does it look like to communicate back to the world at large, and to handle that dichotomy, when you say we've solved this pain point, we've solved this problem?

BRIAN: Yeah, it's another good question, and it's interesting how the whole thing works together. So let me give you some examples. You're right that in these upstream communities, the people doing the actual work are individuals, and sometimes they work for different companies and so forth. But there's also an aspect where, yes, a feature can get put into the project, but when things are fairly significant, a security framework, a new class of deployments, and so forth, there's some thought process that goes on from a little more of a group perspective, right? Red Hat can't just jam a bunch of things in. Google can't just jam things in. There are people who look at this and go, hey, remember, we're all going to end up living with this software. So I'll give you two or three examples of how that works in real life.

So in one case, when Kubernetes first came out, it had this idea that all storage *inaudible*, right? Everything was just going to use whatever was on the local disk on that local machine. We came back and said, hey, the enterprise doesn't typically work like that. It typically has these big storage pools, NFS and whatever. Google may not care about that, because for GKE it doesn't matter, but we're going to contribute some stuff that will allow a NetApp filer, or EMC storage, or whatever down the road, to come in. So in that case they look at it and go: okay, you're willing to do the work, you're willing to put that in and test it, great, we'll accept it. That's your path for pretty well-known problems: somebody's willing to do the work, it gets into the project, it's useful for everybody. But in some cases the larger group for Kubernetes, some of the steering groups, may look at something that comes in and go, you know, we're not sure that thought process is mature enough, we're not sure we're ready to prioritize that capability.
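
The enterprise storage plumbing Brian describes here surfaced in Kubernetes as PersistentVolumes and PersistentVolumeClaims. As a minimal sketch of the end result (the claim name and storage class below are hypothetical), an application requests durable storage by size and access mode, and the cluster maps it to whatever backend, NFS, SAN, or cloud disk, the administrator has wired up:

```yaml
# Minimal sketch: request 10Gi of durable storage without naming a backend.
# "enterprise-nfs" is a hypothetical StorageClass an admin would define.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce          # one node mounts it read-write
  storageClassName: enterprise-nfs
  resources:
    requests:
      storage: 10Gi
```

A pod then mounts the claim by name, so the application never has to know whether a NetApp filer or a cloud disk sits behind it.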

So I'll give you a second example. When we took it out into the broader set of customers, one thing they all came back with was: hey, look, we're a regulated industry, we have controls over security. And we said, okay, that's a role-based access control type of problem, and we created the concept of what became RBAC. The Kubernetes community at the time said, we're not ready for it. We know it's a big problem, but we don't know that it's been completely thought through, enough corner cases and edge cases, so we're not willing to accept what you did. This is where vendors and communities have to figure out, okay, what do you do? In our case, we took the work we did, which we thought was very good and complete at the time, and included it in OpenShift, but we still made the code readily available; it just wasn't mainstream. Now, what was interesting is that we would come back every new release and say, hey, I think this should get included, and it took a little while, and then at some point something else happened in the marketplace, some customer asked for it, and the folks at CoreOS said, hey, we want to go run with this. So now you had a second party within the community saying this is important, and they actually were the ones who drove that implementation into the market. They made some tweaks to make it a little simpler, a little more full-featured. So those things happen sometimes: a vendor tries to do something, the timing isn't right, whatever.

Then a third example, and I won't go too deep into the details. One of the things we did in OpenShift, again having worked with these big, gnarly enterprise companies, came from them saying: we have this problem where people download stuff off the internet. They pull things from Docker Hub. They're great for demos, but they all run containers as root, and we'd like you to figure out a way to prevent that from happening. So we did it. We were able to leverage some stuff down at the Linux level, SELinux in RHEL, and we came back to the community and said, hey, we think this is a thing, you should implement it. And they said, no, we're going to think about it a little more. So we have this product that doesn't let you run as root, while most Kubernetes implementations allow it, and the community later built its own generic feature, whose name escapes me, that essentially did the same thing, and it just got deprecated. That's a situation where the Kubernetes community said, no, we don't want to take this implementation, we think maybe it's too vendor-specific, fair enough, we're going to implement a generic thing. But the timing was such that so many people were used to just getting stuff to work that going back and making things more restrictive became problematic. They couldn't harness that beast, and they ended up having to deprecate it and look at something new. So there's always this give and take between what somebody can build, whether it fits the community's perspective, and whether somebody puts it in a product. It doesn't always mean something is right or wrong; there are multiple layers to the dynamics of how all that works, and those are just a couple of examples.
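
For context, the deprecated upstream feature Brian is reaching for is PodSecurityPolicy, which was retired in favor of the simpler Pod Security admission controls; OpenShift's own mechanism is SecurityContextConstraints. The portable, per-workload way to express the same "no root" rule is a pod securityContext. A minimal sketch (the pod and image names are placeholders):

```yaml
# Minimal sketch: the kubelet refuses to start this container if its
# image resolves to UID 0. Pod and image names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: no-root-demo
spec:
  containers:
    - name: app
      image: registry.example.com/demo-app:latest
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
```

Cluster-wide enforcement of rules like this is exactly what PodSecurityPolicy attempted, and what Pod Security admission now handles with namespace labels.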

NOAH: Okay, that’s some great insight. Thank you for that.

BRIAN: Sure.

NOAH: I want to turn it on its ear now a little bit, because you gave us some great insight into what it's like to bring it to market, but I also want to look at what it's like for you evaluating internal pain points. If you're looking at, say, the platform needing a new feature, or you're deciding on other technologies, like when Ansible was introduced before it was a Red Hat product, how do you decide on technologies to integrate into the product when you have pain points that you're looking at internally that you need to fix…

BRIAN: Yeah.

NOAH: …and really what are the questions that you ask yourself?

BRIAN: Right, yeah. It's a great challenge because it's the same sort of dynamic, right? In our case within Red Hat, we have a handful of technologies that we've commercialized. You mentioned Ansible. We've got a Linux distribution and some other things. And when Kubernetes first got started, for example, and even still really, there wasn't one installer; there were two dozen installers. There wasn't a built-in monitoring thing; Prometheus didn't exist yet, and people weren't using Grafana as the norm. So in the early days of OpenShift we said, okay, do we have something we can use? Because we know that's a hole, but there isn't really a thing that fits, that's built for Kubernetes. So we've gone through a couple of iterations of things we had internally, you know, products that we used for other things like virtual machines, that we retrofitted to Kubernetes, and in some cases we've kept those around.

So for example, we had a technology called OpenShift Templates that kind of preceded Helm. A number of customers used them, and they did similar types of things. What we typically do, once an area shifts over to a new standard, Helm or Operators for packaging, Prometheus for monitoring, is adopt the thing that's most widely used. The challenge for us is always how heavily our customers have already used the previous thing. We don't want to pull the plug on them or leave them in a bad spot. So we have to support some things that aren't commonplace for a little while, bring the new things in as quickly as possible, run them in parallel for a while, and help people move to the new stuff as quickly as possible, because then our customers get the benefit of: I'm not just using Prometheus, I get all the plugins that go with Prometheus, I can find skill sets that work in that space, I can use all the community Helm charts, or whatever it is. But yeah, we've gone through it a number of times. We have to ask ourselves: what's the engineering cost of making a change? How much does it impact our customers? If they're using something very heavily, do we have to build migration tools to shift them over? But it's been really commonplace, because if you think about the evolution of Kubernetes, a lot has happened in the last five, six, seven years, and if you weren't flexible enough, you were going to get yourself, and ultimately your customers, into a bad spot.
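
As a sketch of the payoff of standardizing on community tooling: recent OpenShift releases expose a HelmChartRepository custom resource (assumed here in its helm.openshift.io/v1beta1 form) that points the cluster at a public chart repository, so customers can consume community charts, such as the prometheus-community charts, directly:

```yaml
# Hedged sketch: register a community Helm chart repository with the
# cluster so its charts appear alongside the vendor's own content.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: prometheus-community
spec:
  connectionConfig:
    url: https://prometheus-community.github.io/helm-charts
```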

NOAH: Okay, so that’s what your thought process is leading in. What about once you’ve made some sort of decision? What questions do you ask yourself when something is ready to be released? Ready to go out the door? Is there a separate set of questions where you’re saying okay, we made a decision to get ourselves into this, now before we give it to them, are there additional evaluators that go along with that?

BRIAN: Yeah, I think we do a couple of things. We try to balance: does it make good technical sense, and does it make good business sense? In the early days we were really just trying to keep up with what was going on; we were trying to build something. Even though OpenShift is part of a much larger company now, we were very much like a startup then. We were sub-100 people. Revenues were sort of choppy and so forth. So we were always trying to find that right balance.

I think as we got a little bigger and our scope got a little broader, we started asking: did we learn anything from before, and will this help us grow as we go forward? A great example of that is that the earlier version of OpenShift, OpenShift 3, was very much one big piece of software, if you will. One of the things we got when we acquired CoreOS was the technology now called Operators, which has become really widespread, and that let us make the platform modular. It solved a lot of problems for us. It made it easier to upgrade things, easier to bring in third-party tools, easier to test things. But it was a massive, massive architectural change, and so you had to go, are we sure we want to do this? Then one of the things we had to implement, and lots of customers do this all the time, was telemetry capabilities within each piece of it, to start understanding who's using this, how frequently they're using it, and in what patterns. So we now have automated *inaudible* to collect that information, and we have a little more visibility into how our customers use the products. We're always looking for feedback loops that aren't just anecdotal conversations. We're looking for raw data, for trends that might be coming along, and again, trying to find that balance between what's good business and what's the right technical path forward with the information we have in front of us.
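
The modularity Brian describes shows up in how components are installed on an Operator-based platform: each add-on is declared as a Subscription to the Operator Lifecycle Manager, which then handles installation and upgrades. A minimal sketch (the operator name and channel are illustrative, not a specific Red Hat product):

```yaml
# Hedged sketch: subscribe the cluster to an operator from a catalog.
# OLM installs it and keeps it updated along the chosen channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator          # hypothetical operator
  namespace: openshift-operators
spec:
  channel: stable                 # illustrative update channel
  name: example-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```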

NOAH: Yeah, I feel like with the speed of evolution… As an anecdote, I have a standing tweet at the top of my Twitter feed that says the biggest problem Kubernetes has right now is sort of like trying to catch up on your DVR when there are 8 new hours of content every day. We have that same phenomenon with all the new projects and products that come along. Is there an additional emphasis, I don't want to say on throwing good money after bad, but on the "no, no, we're going to have to reevaluate" happening at a much higher, much more frequent rate than it normally would with any other set of products?

BRIAN: I think there is. I'll give you a simple example of something we have to deal with as a company, and a lot of our customers deal with the same thing. For us, Red Hat Enterprise Linux is this product that people expect to have a 5-year, 10-year life cycle. There are still people on RHEL 5 and RHEL 6, even though we're well into RHEL 8, but that's what works for them and that's a value to them. In the Kubernetes world, new releases come out every three or four months, and we support them for shorter periods of time, two or three years. Just that difference is something we have to think through, because we often put those two technologies together, and our customers go, are you the 10-year company or the 3-year company? But it's the same challenge companies themselves have. We see this all over the place: their central IT group is incentivized to not make mistakes, not have outages, go slow, be steady, and so the idea of touching infrastructure software every three or four months, they're like, that's insane, right? I want to touch it maybe every two or three years. But then they have groups in their Center of Excellence or their advanced technology groups that want the latest features every three months. So even within organizations dealing with Kubernetes, there are some that want it to be really stable and some that are like, give me the bleeding edge all the time, I can't get the latest Istio release fast enough. That yin and yang kind of summarizes what I think everybody in the Kubernetes community is dealing with. Should we go fast? Should we be stable? Somewhere in between? It's not one answer for everybody.

NOAH: Alright. Well, this has been great. I think we’ve got a lot of good insight about what it’s like to work both as a vendor, but specifically as a vendor in this really fast-moving Kubernetes environment.

As I said earlier, part of this is the lead-up to KubeCon. We're gonna have a booth. I know Red Hat's gonna have a bigger booth, because Red Hat is always in contention for that, like, diamond sponsor spot, right? Is there anything you can tell us about what Red Hat's going to bring this year?

BRIAN: Yeah, this year is going to be really strange, because usually we build the booth for maximum interaction, right? Lots of demos, lots of experts there just talking about code; the people who wrote the code are always there. This year it's going to be interesting because we don't know exactly how many people are going to show up, but our expectation is lots of demos. No vendor fluff. The people we send are typically contributors to the project, and our folks who have roles similar to yours are out talking to developers every day. So, you know, if you stop by, the expectation is you're going to get hands-on with something, we're going to show you stuff that's live, and we want you to come to us with your problems so we can help solve some things. It's very engineering- and technically-driven, other than probably giving away some swag. That's always our focus, and we'll be focusing a lot on GitOps, on multi-cluster management, multi-cloud deployments, and stuff like that. That's the trend we've been seeing a lot lately.

NOAH: Fantastic. Well come visit Red Hat and StormForge both at KubeCon! Now we get to the fun portion of this where we go through just a handful of Rapid Fire questions and we try and get some fun responses.

BRIAN: Yeah!

NOAH: Ready? Let’s do it. Okay. Yes or no – pineapple on pizza?

BRIAN: No.

NOAH: Favorite piece of technology, any technology?

BRIAN: Probably the iPhone just because it’s so much a part of my life and it kind of is… it’s so good at what it does it’s sort of hidden, but that and just the fact that I can stream every movie on the planet anytime I want is pretty cool.

NOAH: Awesome. Favorite open source project, though I think if you say Kubernetes you’re cheating.

BRIAN: Yeah, I’ve been following a lot of what’s going on with the database space. I’m sort of fascinated that we went from decades and decades of nothing but SQL to kind of this massive explosion of everything from Kafka to Mongo to all of those. That stuff’s fascinating because it’s like every time I learn about a new one, it’s this totally different use case that I never thought of. So everything in that open source database space is interesting to me.

NOAH: Cool. Favorite hobby?

BRIAN: I’m kind of a tinkerer, kind of home repair, those types of things. I’ve got an old cabin that I like working on, so anything where I get to kind of fix old stuff is my new hobby.

NOAH: Cool, very hands-on. I like it. Favorite place to vacation?

BRIAN: I’ve been lucky enough to go a lot of places. Anywhere I get to be outside. So whether you’re in the ocean in Hawaii, or the North Carolina beaches, or up in the mountains, go to Colorado or something. Anywhere it’s like outside, get away from the tech stuff, I love it, whether it’s hot or cold.

NOAH: Love it, great answers. That’s it. That’s all we got. So thank you for joining us, Brian. This has been a wonderful conversation and to everyone who’s watching, tune in tomorrow for our next episode, and hopefully we will see you all at both of these booths at KubeCon, either in person or virtual. We will see you there.

BRIAN: Thanks, Noah.

NOAH: Bye!
