In episode 51 of The Secure Developer, Guy Podjarny talks to Adrian Ludwig, Chief Information Security Officer at Atlassian. Adrian has both a marketing and a tech background, and we speak to him about his transition between the two seemingly unrelated fields through his work at the NSA, Adobe, Nest, and Android, and how both sides inform his approach to security at Atlassian. We then get into the nitty-gritty of how Atlassian thinks about security, and the operations and technologies it has in place to achieve that goal. Join us for a fascinating behind-the-scenes look into the cogs that make Atlassian work.
Adrian Ludwig is the Chief Information Security Officer (CISO) at Atlassian. Adrian joined the company in May 2018 and is responsible for Atlassian's security team and practices. Prior to joining the company, Adrian held a number of leadership positions, including building out the security capabilities at Nest, Macromedia, Adobe, and Android (Google).
Security is a vital feature of a platform’s architecture for the service provider as well as the consumer, and it helps to have a leader who can see the big picture. Our guest for today is Adrian Ludwig, Chief Information Security Officer at Atlassian. We talk about how Atlassian has transitioned from being an on-premises provider to a cloud provider, and the benefits of aligning microservices with security boundaries in its systems. Our conversation also covers the other systems Atlassian uses to maintain its software and delegate to teams. We speak about the roles of security engineers embedded in product teams, and how time zones are used strategically to speed up turnaround time. You’ll also hear how Atlassian uses bug bounties as a way of gauging its level of security investment, and different strategies for dealing with backlogs. Toward the end of our conversation, Adrian touches on the concept of consumer versus enterprise-grade security, and why it is necessary to build systems that reduce the risk of human error rather than the other way round.
[0:01:17.3] Guy Podjarny: Hello everyone, welcome back to the show. Thanks for tuning back in. Today, we have a great guest who both delivers and sort of helps customers with making them secure, but also very much keeps their own systems secure: Adrian Ludwig, who is the Chief Information Security Officer at Atlassian.
Welcome to the show, Adrian. Thanks for coming on.
[0:01:32.1] Adrian Ludwig: Yeah, thank you, I’m happy to be here.
[0:01:33.9] Guy Podjarny: Adrian, we have a bunch of things to talk about today, including kind of org structures, talking about security for developers, and some maybe slightly contrarian views you kind of mentioned to me. Before we dig into that, you have a really interesting path into security, or maybe into security and out and back in again.
Tell us a little bit about the role you do today but also just kind of the path that you took to get there.
[0:01:56.7] Adrian Ludwig: Sure, my entrance to security was a little bit of an accident. I was actually in my high school guidance counselor’s office and we got a letter saying that the NSA was willing to pay for people to go to college and paying for college for me was a big deal so I applied to a program and they ended up funding my education, got a bachelor’s degree in math and started learning about computers which were – I won’t say new at that time but they certainly weren’t a common thing for someone to have at home or to even be thinking about.
So I ended up working at the National Security Agency for basically almost a decade doing a variety of different things, from cryptography to learning about very early computer exploitation and thinking through how we could protect our – things that the government needed to protect, as well as how we could potentially get access to the information that was being held in other people’s computer systems. Really, really interesting time and a lot of very smart people just kind of figuring it out in time.
Which for me has been the path all along. I like working on hard problems. I like learning about new things. From NSA I then went and did consulting for a few years, staying in the security space, and I got a little bit frustrated. Consulting is hard, right? You keep giving people recommendations and a lot of times they’re the same recommendations, and I eventually said, “You know what? I can’t keep doing this, I need to go help somebody to kind of do it right,” and so I ended up at Macromedia, as the company was called then, and eventually at Adobe through acquisitions and corporate change.
I helped them build up their security team for a while. At one point, really enjoying Adobe, which is great at helping designers and developers build stuff, I ended up in a marketing role, when Flash was moving towards being on mobile devices, helping figure out what that product set would look like, you know, what features it needed, and then how to communicate that to developers and what that would look like. I dallied around in marketing for a while and then came back to security for a bit.
[0:03:47.6] Guy Podjarny: What was that, what marketing role did you have? It wasn’t a security-minded marketing role?
[0:03:52.6] Adrian Ludwig: No, not at all, it was – I think a lot of folks in security are a little bit frustrated because it’s hard to see the light at the end of the tunnel and so for me, one way to get light at the end of the tunnel was to just look down a different tunnel.
[0:04:04.9] Guy Podjarny: That works as well, you know?
[0:04:07.0] Adrian Ludwig: Yeah, I went and did just straight product marketing, right? I knew how to talk with developers, I knew how to write code, which is not something that’s common in somebody that’s in a marketing role and we were trying to figure that space out and so I just ended up working in that area for a while.
It was really great actually, like everybody around me was an optimist, you know, people didn’t wear black. It was a big shift but it was cool.
[0:04:29.3] Guy Podjarny: I remember in my career, I made the transition from security to performance and back again. So not quite marketing, but sort of in the world of performance, and it felt like that. It felt like you go to Velocity and come back and everybody wants to sing kumbaya and, you know, kind of make the world better, and you come back from Black Hat and you kind of want to curl up in a corner and cry, you know? It’s much more adversarial and all that. Sometimes you need a little bit of that breath of fresh air.
[0:04:52.1] Adrian Ludwig: Yeah, I often got, you know, “Why are you so dour?” is how people would describe it in the marketing space but then on the security side, it’s like, “Why are you such an optimist?” Like, “Okay. Really, I’m a centrist and the world is just you know, split in lots of little different fiefdoms.”
[0:05:05.5] Guy Podjarny: Indeed. You did this work in marketing and then you got back into security. Was that still within Adobe or?
[0:05:10.2] Adrian Ludwig: No, I left and I got a call from Google at the time. Rich Cannings is his name, I think he’s at Facebook now, and he said, “You know, Android is coming up strong and we need to figure out security and make sure that we do it right.” That for me was really a seminal moment, because it was clear that mobile was an opportunity to completely reset how we thought about security.
A brand new operating system, and vendors of those operating systems, both Google and Apple, who were really knowledgeable about and committed to security. For me, it seemed like this was a great opportunity to actually do it right. That turns out to be the case. I just saw a couple of days ago, Google posted that they’re now paying almost a million and a half, I think it’s a million and a half, for certain types of vulnerabilities in Android.
Which is insane. When I think back 20 years ago, it would take a few hours for someone to find vulnerabilities in the operating systems of that day. They certainly weren’t worth a million dollars. Now they really are, and they’re not even paying out those bounties; they’re there, but nobody can find the stuff. That’s kind of why I came to Atlassian as well. I think the shift to cloud is a similarly fundamental shift for companies. They’ve been burdened by legacy operating systems and legacy infrastructure and datacenters, which makes it really hard to keep things up to date, and I’m very optimistic that, whether it be Amazon or Google or Microsoft or whatever cloud provider, they’re doing away with a lot of the complexity of managing your infrastructure.
At the application level too: Atlassian certainly, but the same is true for Gmail and the same is true for Box. All of these different apps are taking away a lot of the heavy lifting that used to have to be done by every single company in the world. It gets done instead by companies that are well capitalized and really thinking and caring about security.
[0:06:55.4] Guy Podjarny: I love the perspective, you know? Sort of thinking about cloud as just an opportunity to rethink a lot of what we do. In Android though, we started, I guess, from the bare bones, you know? Sort of, you know, you built the mobile application, it was consumer-oriented, and I guess there are apps that get built, and maybe I’m downplaying mobile development a little bit, but there are certain limits to the complexity that can run on a mobile device.
In the world of cloud, it's probably not entirely true.
[0:07:21.8] Adrian Ludwig: I’d be willing to bet the average mobile app is more complicated than the average Windows NT4 app was back in the day.
[0:07:28.0] Guy Podjarny: Yeah, no, for sure. I’m not comparing it to the Windows apps, I’m actually comparing it to the cloud. It’s compared to an application that might be deployed as SaaS, you know, in the cloud, right? Or sort of an application that’s there that has that many more kind of moving parts. I guess, how do you see it when you think about cloud and you think about what it allows – is your thinking more as a consumer? Like when I use – I just want to go on that thread of Bitbucket Cloud, you know?
Do you help me be secure more than ever before? Or is it about how Bitbucket Cloud secures itself, or how a SaaS application secures itself, that is the change?
[0:08:05.0] Adrian Ludwig: Yeah, I mean it’s yes. Everywhere I look, I see opportunity and I guess now I see why people think I’m really optimistic. You know, let’s take user authentication. Building a really good risk engine to analyze your user session to determine whether or not the use of these credentials is something that’s potentially been a compromised use of those credentials, it’s not something that every company is able to do when they’re thinking about authentication, right?
But it is something that we have an expectation that every cloud provider is going to do. There’s like a shift in what we think is possible there. I’m going to give you another example. Actually, Google just wrote up about this, which is exciting because they do a good job of branding some of their security things.
They wrote about something they’re calling BeyondProd, which, to boil it down, is basically thinking about microservices in your cloud architecture as security boundaries, which is sort of what we did with Android as well, which was taking each service running on Android and really hardening it, making – sorry, each application is really hardened, and those are all valid security boundaries.
And then, lateral movement within that operating system or between applications becomes really difficult, and in the cloud, what that means is, we might have a bug inside of a media handling library, inside of any one of our applications, whether at Atlassian or anywhere else, but that shouldn’t lead to compromise of all of the rest of your service. The reason that we have chosen to decompose our app into these microservices is for performance, it’s for scalability, it’s for reliability. But it also almost accidentally introduced the possibility of having really strong security boundaries in lots of different places, which makes what used to be a catastrophic vulnerability (everything was critical back in the day) now maybe a much smaller-scale one.
That’s not something you can do in an application that’s running on a single Java server on top of Linux. That’s only possible because the application is decomposed in order to get scalability in the cloud. That’s something that we’re taking advantage of, we’ve been working on it for years now and it’s a core part of how we’re thinking about security within our applications.
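The microservices-as-security-boundaries idea can be sketched as an explicit service-to-service call policy, where lateral movement fails unless it is allowed by name. This is a hypothetical illustration; the service names and the policy map are invented, not Atlassian’s actual topology:

```python
# Hypothetical sketch: each microservice is a security boundary, and
# calls between services must be explicitly allowed by policy.
# Service names and the policy itself are illustrative only.

ALLOWED_CALLS = {
    "media-handler": set(),                      # handles risky input, calls nothing
    "issue-service": {"media-handler", "auth"},
    "auth": set(),
}

def authorize_call(caller: str, callee: str) -> bool:
    """Permit a call only if the policy explicitly allows caller -> callee."""
    return callee in ALLOWED_CALLS.get(caller, set())
```

With a policy like this, a bug in the media handling library compromises only the media-handler service: `authorize_call("media-handler", "auth")` is `False`, so the blast radius of the vulnerability stays small.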
[0:10:23.2] Guy Podjarny: I think it’s a really interesting perspective, so allow me to spend a few more minutes on decomposing it. I really like, and very much relate to, this idea that when you have microservices or, you know, serverless, kind of an extreme version of that, you can think of each one of those as a boundary, not just of functionality but also of security.
There are a lot of kind of counter-statements, a little bit, about the complexity that that entails, that it’s kind of hard to see the forest for the trees a little bit, right? What’s the app, how does data flow through it? What are the views or practices you’re seeing as a way to kind of help navigate securing each of those perimeters, while also thinking about the whole and not falling into that complexity trap?
[0:11:07.7] Adrian Ludwig: Yeah, I love your forest and trees analogy. It’s trite, it’s probably even overused, but I think it actually, maybe even unintentionally, gets at the way that the world, nature, has become more secure: smaller, individually evolving creatures, right? That all interact with one another, as opposed to the homogenous, exactly-the-same, all subject to [inaudible 0:11:36] sort of style of architecture that we’ve proposed in the past.
I actually think that with that forest and trees thing, the key is that a healthy forest is one that’s diverse, it’s complicated, and it’s one that humans haven’t really been able to completely understand. But it’s healthy and it’s good to go.
Yeah, we worry about complexity, we try to minimize it as much as we can, but at the core, we see a lot of value in having more granular security scopes and being able to defend each component, as opposed to trying to secure the entire thing in one fell swoop.
[0:12:04.4] Guy Podjarny: Sounds super healthy to me, you know? To an extent, as you build those components – we could go down that rat hole of, like, viruses that go through forests and things like that. I’ll try to avoid the temptation.
We kind of veered a little bit from sort of your journey right now to this cloud; we might come back to it a bit as we get into specific aspects of what you’re working on. Maybe let’s take a bit of a step away. So, you know, you’ve done this journey, and it included that sort of marketing piece in between, which is unusual. How much do you find those marketing skills or, you know, the muscles you built during marketing –
How much, if at all, do they help you in your security work, now that you’re kind of back on the dark side?
[0:12:41.7] Adrian Ludwig: I mean, there are lots of different aspects of it that are helpful. For sure, I think keeping some perspective is helpful from time to time. The world is getting better.
[0:12:51.0] Guy Podjarny: Staying an optimist.
[0:12:52.7] Adrian Ludwig: Yeah, that makes a big difference. I also think there’s a temptation sometimes for people that come from a technical background to really want to worry about the details and the details are important but the details are not so important that they should preclude thinking about the big picture. A lot of marketing, ultimately, is like, “What’s the big picture? I only have five seconds to sell somebody. How do I boil it down to that?” That’s turned out to be an important skill I think that any executive has that any leader has in a company. That’s how you get people onboard. That’s how you sell security, you make it simple and you make people feel like it’s important.
[0:13:30.9] Guy Podjarny: Yeah, fully agreed. I mean, I think at the end of the day, you know, it’s kind of how you rally people, right? Whether you’re rallying your own organization, or your own security group, or you’re getting other people in.
Let’s indeed kind of veer into that org. What does your – you run this group, what does security at Atlassian look like?
[0:13:49.3] Adrian Ludwig: Sure, in some ways, we’re pretty traditional. We have a product security team, we have a security intelligence team, one of those is basically running our security development lifecycle, the other is monitoring our infrastructure. My team, broadly, thinks about all types of security. We’re a big enough company at this point, we have almost 4,000 folks on several different sites and you know, we have developers all over the world that are pushing code all over the world so we have to think about what the corporate environment looks like. We’re responsible for that as well.
We’re a hybrid, both a product security team and a corporate security team, because they’re really connected in fundamental ways when you’re a cloud provider. That’s what we look like. We’ve grown a lot over the last year and we’ll continue to grow. Atlassian is going through a transition from being primarily a provider of on-premises software to now being primarily a provider of cloud software, and we think the future is all about cloud.
Which means that we are hosting data that is not our data, it’s customer data, and we take very seriously making sure that the best possible protections are in place. I also like to think that us adding 10 or 15 or 20 more people to do analytics and to be thoughtful about it scales incredibly well across the overall set of customers that we have.
Most of our customers have zero security people looking out for their data. Probably 1% of our customers, if that, have hopes of having a security team as big or as capable as what we have at Atlassian. It’s a very efficient way for us to add security protections to help all of our customers in their own cloud.
[0:15:23.5] Guy Podjarny: I love the mission. What are sort of the key tenets in how you split up the team? And maybe let’s hone in a little bit on the dev engagement, given the context of this podcast.
[0:15:36.2] Adrian Ludwig: Yeah, corp sec – I’ll sort of throw away the ones that are least relevant. We have responsibility for things like making sure that the laptops that are connecting to our environment, or all the computers that are connecting to our environment, are known hardware, are the ones that we’re expecting. It’s tied into how we do authentication into all of our tools.
We have zero trust, where we’re authenticating the hardware and the user. We’re working on making sure that we’re using hardware keys, U2F sort of two-factor authentication, to connect to all of our services. That includes the development pipeline, but it also includes email and Slack and all the other corporate infrastructure.
Security intelligence: we have a single unified logging infrastructure across all of our corporate applications, which all have their logs flow into that infrastructure, as well as across all the applications that we develop, right? When you use the Atlassian cloud or use Jira, events that are being created there flow into a unified logging infrastructure, across all of our different apps and across our corporate apps as well, so we can correlate events that take place across all of that.
That team is responsible both for making sure the logging infrastructure’s working and for writing detections on that logging infrastructure – seeing anomalies, whether those are localized or whether they’re global. Then they also do incident response, the follow-on investigation after alerts are created and triggered. We’ve actually done some talks about how that process works.
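As a rough illustration of the kind of detection that can be written on top of a unified logging infrastructure, the sketch below correlates events across apps by actor and flags anyone whose events arrive from more than a threshold number of distinct source IPs. The event shape and the threshold are assumptions made for the example, not Atlassian’s real schema:

```python
from collections import defaultdict

def flag_anomalous_actors(events, max_ips=2):
    """Flag actors whose correlated events span more than max_ips source IPs."""
    ips_by_actor = defaultdict(set)
    for event in events:
        ips_by_actor[event["actor"]].add(event["source_ip"])
    return sorted(actor for actor, ips in ips_by_actor.items() if len(ips) > max_ips)

# Events from corporate apps and product apps land in the same log stream,
# so one detection can correlate across both.
events = [
    {"actor": "alice", "source_ip": "10.0.0.1", "app": "jira"},
    {"actor": "alice", "source_ip": "10.0.0.1", "app": "email"},
    {"actor": "bob", "source_ip": "10.0.0.2", "app": "jira"},
    {"actor": "bob", "source_ip": "198.51.100.7", "app": "confluence"},
    {"actor": "bob", "source_ip": "203.0.113.9", "app": "email"},
]
# flag_anomalous_actors(events) → ["bob"]
```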
A lot of the workflow for our detection creation is tied into Jira because that’s how we track stuff. That’s how most companies —
[0:17:04.0] Guy Podjarny: Most of the world does as well.
[0:17:08.1] Adrian Ludwig: The product security team manages our secure development lifecycle. We have – I’ll just make up a number – 30 or so discrete product teams across the company: Jira, Jira Software, Jira Service Desk, Confluence, Bitbucket, Statuspage. All these different teams. Each of those areas of the company has what we call embedded engineers, which are members of the security team that are focused as consultants on that particular product area; they’re familiar with everything that’s going on there.
New features that are coming up. They are attuned to potential risks around both the architecture of the product but also processes. You know, “Hey, you have a release that’s coming up soon, are you going to be able to fix the security issues that have been found within that release, or do we need to adjust the number of engineers that are working on that timeline?” Or things like that.
We have what we call security scorecards for each of the different product areas, and on a monthly basis we review, with the head of engineering, the head of product, and the security team, how things are going, what’s coming up next, what they should be working on. We also have what we call programs, things like vulnerability management, things like threat modeling, et cetera, which are cross-cutting functions that are used by every one of the different product areas, and those are bundled up, if you will, in the security scorecard. So you can see, are these security programs being used effectively and getting the results that we’re looking for?
We’ve been investing a lot recently in finding vulnerabilities, in everything from making sure that our scanning infrastructure scales out really well, to making sure that our scanning of source code works really well and that we’re able to find things earlier and earlier in the process.
And coupling that with a Jira-based workflow for making sure that all the findings get fixed as quickly as possible.
[0:18:58.0] Guy Podjarny: I’ve got a whole bunch of things I want to unpack out of there. So let me hone in for a second on the embedding process in those teams. You’ve got how many – like, loosely speaking, how many engineers are embedded in total? It sounds like it’s sort of 50-60 type engineers.
[0:19:15.6] Adrian Ludwig: Yeah, currently we don’t have that – that’s probably the end game. We are not at that scale yet; we are at about half that in terms of the number of embedded engineers.
[0:19:23.0] Guy Podjarny: Are you aiming for a certain ratio? So they come in and they embed in the team, which I love, and I think several other guests mentioned it. It seems to be a constructive, actually-works type of model.
[0:19:34.3] Adrian Ludwig: You know, ratios are one of those things that drive me nuts. We have basic premises that we are using to think about it right now. One of the premises is every team should have at least one person that knows what they are doing – that is not a surprise. I don’t think one is enough, so we actually aim to have two. So every product area should have two engineers within the security team, so you have redundancy if somebody goes on vacation, if somebody is out, whatever.
And then it also gives the security team sort of a knowledgeable foil that you can have interactions with within the team because we are constantly thinking about how do people rotate through their interactions with different product areas. For the most part, security engineers like to be working on a variety of different things. So we actually have our embedded engineers work on at least two different products. You almost always end up with two engineers per product and each security engineer has at least two different products. So you sort of end up with this overlap interaction with a variety of different people.
In terms of long term, you know, we have some macro-level ratios; we think security shouldn’t be growing more slowly than the rest of the org, certainly not at this point. Do we think that it is 5% of engineering or 8% of engineering or 10% of engineering? Don’t know. Right-sizing is probably going to be determined by results.
If we start to see the amount that we are paying for vulnerabilities inside of our bug bounties going up, and not more issues being found, that is a positive indicator that we’ve had a good level of investment. If we see a lot more stuff coming in and we’re not able to increase our bounty rates, that is probably an indication that we are underfunding preventative measures.
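That heuristic can be written down as a tiny decision rule: rising payouts with flat report volume suggests bugs are getting scarce and the investment is working, while rising volume suggests underfunded prevention. This is just a sketch of the reasoning, with an invented data shape:

```python
def bounty_signal(prev, curr):
    """Read bug-bounty trends as a rough security-investment indicator.

    prev/curr: dicts with "avg_payout" and "reports" for two periods.
    """
    payout_up = curr["avg_payout"] > prev["avg_payout"]
    volume_up = curr["reports"] > prev["reports"]
    if payout_up and not volume_up:
        return "healthy"        # paying more per bug, but not finding more bugs
    if volume_up:
        return "underinvested"  # more issues shipping; fund prevention upstream
    return "steady"

# bounty_signal({"avg_payout": 3000, "reports": 40},
#               {"avg_payout": 5000, "reports": 38}) → "healthy"
```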
[0:21:05.4] Guy Podjarny: I love that. I was going to go back to that process, but that is too interesting not to touch on. So you are using bug bounty rates, or sort of cadences, as a kind of high-level, end-of-the-day, bottom-line measure of how well you are doing security?
[0:21:19.7] Adrian Ludwig: Yeah, and that is a very clear indicator. It is a trailing indicator, because it means you actually shipped product that has an issue in it. We would love to move further upstream, and we are; there are a bunch of intermediary metrics that we look at as well, but as a worst-case, end-of-the-day metric, yeah. If you shipped it, that is an indication, and it is sort of independent. Most other metrics that I have seen around quality of the secure development lifecycle, you can win those metrics by just having no engineers in your secure development lifecycle, right? You found nothing, you must be doing it right. You can’t win by –
[0:21:53.2] Guy Podjarny: By doing nothing. I love that as a bottom-line indicator. There was a whole interesting panel in one of the recent DevSecCons talking about how it is a bittersweet moment when you find a whole bunch of vulnerabilities, because it ruins your metrics, and it should be the other way around. But I love that as a bottom-line metric. It might be trailing, but it definitely beats breaches; the number of breaches is far worse. It is more powerful, but hopefully you don’t have enough of those to warrant metrics.
Back a little bit to the org though. So you’ve got those – still a substantial number of them. Do you further group them by product lines, or otherwise? In the security organization, how do these 30-odd people report in?
[0:22:33.5] Adrian Ludwig: Yeah, so the security org is bigger than that. The 30-odd people are doing prod-sec and are tied into embedding. Corp-sec, I guess, is one grouping, right?
[0:22:42.3] Guy Podjarny: I meant specifically those 30 people that are in prod-sec that are embedded – how are they further organized? It sounds like you have a lot of these answers very well baked, and I’m hoping to share those with the rest of the audience.
[0:22:56.6] Adrian Ludwig: I don’t think we have a perfect answer to that. You want to have some level of coding – sort of technology deployment similarity tends to have patterns in terms of the types of things that you worry about from a risk standpoint. So at Atlassian, we have what we call our server products, which are the on-premises versions of our products. Those tend to be distinct from our cloud ones; they’ve actually got a different code base at this point. They forked just a couple of years ago and they’re diverging in terms of the code base.
So we do tend to have embedded engineers specialize in server versus cloud – not that they are in competition with one another, but that distinction is an important one. Geography is a huge one, honestly. I know it sounds simple. Atlassian has folks in Sydney, Australia. We have teams in the Bay Area, teams in Austin, Texas. We have a development center in Gdańsk. We have a development center in Bangalore.
And so, you know, having a personal relationship is super important for being able to be efficient about getting stuff done. Being in the same time zone is super important in terms of being able to get something done; if your turnaround cycle on a bug is always going to be 48 hours, you will fundamentally be slower. So we have done a fair amount of geographic load balancing. You know, “Where is the dev team for that particular product based?” is a key thing for us to think about.
[0:24:15.4] Guy Podjarny: Yeah. Well, those are actually really great criteria for thinking about how you group those people together. So the other thing you said is that those prod-sec people – or you said ‘we’ as a group – work with the PMs to prioritize security work. When you think about vulnerabilities, one of the common challenges is that you accumulate a backlog, and it is one thing to stop creating them, and then you have a pen test or you run some of those tools.
And you come up with a lot of items. I guess – is there some methodology, or thoughts, or alignment in the business about how you tackle that backlog working with dev, and how do security and dev interact on that?
[0:24:56.5] Adrian Ludwig: Your ‘backlog’ – so, there’s a variety of backlogs. When we treat vulnerabilities – I’ll define vulnerabilities to make sure we are all on the same page: you have a well-defined security model, and there are some exceptions to that security model, ones that would be a surprise to you and to your customer. Which is different from, we haven’t turned on encryption yet, but everybody knows that and we should do it sometime in the future.
So we have a backlog of those types of features, like encryption, which we do have now, just to be clear, but we didn’t before. That is a backlog that’s very long, very deep; it will take a long time for us to get through it. On the vulnerability side, we have very strict rules, and they are public, in terms of how quickly we have to resolve things. It is based on CVSS scores. There is an internal number, there is an external number. I believe the external ones are four, eight, and 12 weeks in terms of how quickly we will resolve, based on critical, high, medium, and low severity.
So the backlog is exactly as deep as that. There is nothing in there that is older than that 12-week duration without the team having to take action on it, and we have notifications that take place as things get closer, and we go through escalation. At any given time there are probably a couple of issues that are either about to breach that SLA or that have breached it, but even those get dealt with really quickly after that point.
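A severity-driven SLA like the one described can be sketched as a simple mapping from CVSS-derived severity to a resolution deadline. The week counts below are placeholders for illustration, not Atlassian’s published numbers:

```python
from datetime import date, timedelta

# Placeholder resolution windows per severity; the real published SLA differs.
SLA_WEEKS = {"critical": 2, "high": 4, "medium": 8, "low": 12}

def sla_deadline(found_on: date, severity: str) -> date:
    """Date by which an issue of this severity must be resolved."""
    return found_on + timedelta(weeks=SLA_WEEKS[severity])

def is_breached(found_on: date, severity: str, today: date) -> bool:
    """True once an unresolved issue has passed its deadline."""
    return today > sla_deadline(found_on, severity)

# A high-severity issue found on Jan 1 breaches after 4 weeks:
# is_breached(date(2020, 1, 1), "high", date(2020, 2, 15)) → True
```

Escalation notifications as the deadline approaches would just be another comparison against `sla_deadline` with a warning offset.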
We have to be a little bit sensitive, as we introduce new scanning tools, not to completely overwhelm the team. So one thing that we do do is, if we are introducing a scanning tool that’s finding a new class of issue, we will trial it with one area, and then we usually go through a ramping process where we first warn about critical-level issues; then, once those are cleaned up and we know that they are being ingested and managed on a reasonable basis, we’ll warn about high, and then about low.
So we don’t ever have a large backlog, or such a large backlog that the team feels overwhelmed, and that is something that we manage in how we introduce those new tools, so that it’s not too disruptive. But ultimately, it makes a huge difference that we have an executive team and a CTO that say security is number one. So, do you have enough time to fix your 100 bugs? Absolutely.
Or do you have enough people to fix your 100 bugs? Absolutely. There is never a question about that. The question is what that means you won’t get done. So that’s it. That is a huge factor for us.
[0:27:15.3] Guy Podjarny: Yeah, for sure, and that’s fundamental to getting things done. I actually really love the org structure, and I am tempted to ask you more questions, but before we started here, we talked a little bit about a different view. So, to step away from org: we talked about cloud and how we can rethink security, and you used the term ‘consumerization of security’ and talked a little bit about how it hasn’t happened in the developer world. I guess, what is your view about this, the journey of security as a whole and what is happening to it?
[0:27:50.2] Adrian Ludwig: Yeah, when I first started working on Android, I kept having people say, “We need enterprise-grade security.” I am like, “That is the last thing we need.” What we really need is security that is as good as what we would expect for consumers, because it tends to be the case that you don’t expect the person using it, the consumer, to be thoughtful about security, and so you have to design your system in a way that still protects them.
Somehow, banks have built fairly good ways to protect money, even though the average person out there in the world doesn’t think about how to protect the PIN for their banking card and a bunch of those kinds of things. So yeah, I am a pretty big believer in simplifying security, acknowledging that developers, for example, don’t care about security much more than the average human, and the average human doesn’t care about security until it is too late, and you have to design things that way.
So the defaults have to be secure. If it is possible to build something insecurely, it needs to be harder to do that than to build it secure by default. Those are things that we are constantly thinking about.
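As a toy illustration of that principle (the API here is invented for the example, not an Atlassian interface): the safe behavior is the default, and the insecure path requires an explicit, keyword-only opt-out that is loud and hard to hit by accident:

```python
import warnings

def create_bucket(name: str, *, allow_public_access: bool = False) -> dict:
    """Hypothetical storage-bucket factory: private unless explicitly opened.

    The insecure option is keyword-only, defaults to the safe value, and
    warns loudly when enabled, so doing the wrong thing takes extra effort.
    """
    if allow_public_access:
        warnings.warn(f"bucket {name!r} is PUBLIC; this is rarely intended")
    return {"name": name, "public": allow_public_access}
```

Calling `create_bucket("logs")` yields a private bucket; making one public requires spelling out `allow_public_access=True` and accepting a warning, which is exactly the asymmetry Adrian is describing.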
[0:28:54.6] Guy Podjarny: So I think the mental model where consumer security is actually the top tier, versus enterprise security being the top tier, is a fascinating one, but a little bit challenging on the dev side. Like, a typical developer doesn’t have as much power, or, the other way around, the typical consumer doesn’t have as much power as the developer as it relates to the work. Plus, they are not building the ATM, right? They are just using it. I guess, how realistic is this? Do you have a view on the thresholds, or is it just aspirational, or is it something you actually get to, given also the power we put in developers’ hands?
[0:29:28.8] Adrian Ludwig: I think the core of it is, every time a security event takes place, do you assume that documentation and training of the developer will solve the problem, or do you say, “This is a systemic mistake, and I am never going to fix people, but I can fix the environment that they’re in, and change something about the environment the developer is in, to make it less likely that this will happen”? I think that’s how consumer apps work, right?
You fix it because you can’t change people, and so that is what we have to do, and we get to say, “Hey, there is a buffer overflow; that library can’t be available anymore,” or, “They are not updating their software, so they can’t turn off updates.” So, one by one by one, right? The classic one, I think, in enterprise is phishing. “I know, we’ll train all the people not to click on links.” Really?
[0:30:25.3] Guy Podjarny: Yeah, that is not going to happen.
[0:30:26.5] Adrian Ludwig: You’ll probably be clicking on links. So we have to do something different. We have to go to two-factor, and specifically two-factor where the second factor actually validates where it is being deployed, which is what U2F does, right? U2F checks that the credential is being used on that website. It is a fundamental shift. It is one where the creators of that technology realized that they couldn’t fix the people, but the pieces were all there to fix the technology.
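The property Adrian is pointing at is origin binding: in U2F (and its successor, WebAuthn), the authenticator signs over a hash of the relying party’s identifier, so an assertion captured or minted on a phishing domain fails verification at the real site. A highly simplified sketch of just that one check follows; the real protocols also involve challenges, counters, and public-key signatures:

```python
import hashlib

def rp_id_hash(rp_id: str) -> bytes:
    """SHA-256 hash of the relying party identifier, as included in the
    data the authenticator signs."""
    return hashlib.sha256(rp_id.encode("utf-8")).digest()

def origin_matches(asserted_hash: bytes, expected_rp_id: str) -> bool:
    """Server-side check: the assertion must be bound to *this* site.

    A phishing page at a lookalike domain produces a different hash, so
    its assertions are rejected regardless of what the user clicked.
    """
    return asserted_hash == rp_id_hash(expected_rp_id)
```

The user never participates in this comparison, which is the point: the protocol removes the human judgment call that phishing training tries, and fails, to instill.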
[0:30:55.4] Guy Podjarny: I think that is a really good mental model to try to build towards. As we move things around, I guess we do have examples of this; the Salesforce AppExchange might be a good one, a system that has been really robust, despite being powerful, because they pretty much disallow anything they think is risky. You have to make do.
[0:31:15.4] Adrian Ludwig: Disallow, check for, and make sure responsibility is very well understood by those developers. There are a lot of elements to that, yeah.
[0:31:23.3] Guy Podjarny: Yeah, exactly. So I’d like to bring it back a little bit to this notion of cloud. We have this philosophy, and we have the team structure; I want to talk a little bit about cloud security. I will posit that in the world of cloud, above and beyond microservices, what also happened is that a lot of responsibility moved out of a more central IT stack. Yes, the infrastructure itself went to the IaaS vendor, but a lot of the rest went from a central IT team into the hands of the developers, who might be provisioning containers or network configurations and the like.
So, looking at the simplification, this might go the other way around in terms of giving you fewer ways to shoot yourself in the foot. How do you tackle that? Do you have a dedicated cloud security team? Especially as you are talking about moving, or have now moved, to being primarily cloud. What is your approach to cloud security as a whole?
[0:32:18.0] Adrian Ludwig: Yeah, it’s a fun one. I always came from a product world: I worked on Flash, I worked on Android, I worked on browsers, those kinds of things. So I never had to deal with the non-cloud world, but I hear it’s really hard to secure, because you can’t find it. There is a server under somebody’s desk; who is managing that? Who is keeping it up to date? One of the things that we get in cloud is a pretty good understanding of what we have.
You know, Atlassian has fairly consistent, pretty well-managed corporate and product infrastructure, because it is all in cloud, and so we have a pretty good inventory of it and a good understanding of what’s there, and that is quite different from other places. It originates from developers: they say, “I need a new instance,” and boom, they have a new instance. But we have done quite a bit to add platform-as-a-service capabilities and make it easier for developers to add new things by using that platform, and there is a bunch of security services tied into that platform as a service that give us the ability to do monitoring and to have visibility into what is going on in that environment.
But the reason that developers use that is because they also get a bunch of benefits, right? They get tools for debugging, they get tools for tracing. They get a whole bunch of scaling infrastructure that they wouldn’t otherwise have.
And so you have to couple those security benefits with the developer benefits and then it is just harder for them to use non-platform stuff is what it comes down to.
[0:33:46.1] Guy Podjarny: So it is that paved-path approach of sorts. You give them these options, but you allow them to go off-road, you allow them to take the other paths; they just need to jump through more hoops?
[0:33:58.4] Adrian Ludwig: They can.
[0:34:00.0] Guy Podjarny: I am mixing my analogies here.
[0:34:01.6] Adrian Ludwig: Oh yeah, okay. Yeah, we don’t prohibit it. We do have constraints: if you want to have customer data inside your service, then it has to meet these minimum requirements around security, privacy, compliance, etcetera. And sure, you can provide for yourself all the infrastructure that is being provided by the platform; by all means, knock yourself out. But it turns out that it is actually pretty hard to do that, and so we tend to see experiments, we tend to see people building totally new things, but not so much a completely new infrastructure integrating into our existing apps.
[0:34:33.1] Guy Podjarny: So I have a whole bunch more questions, but I think we are already going a little bit long with the episode. So I will ask you what I like to ask every guest on the show: if you have one bit of advice, and it could be a pet peeve, something that annoys you that people do, or what you think is the best eureka moment, for teams looking to level up their security, what would that be?
[0:34:55.5] Adrian Ludwig: Yeah, I think for a long time, security teams were seen as blockers, and what I have started to see now is security teams that really don’t want to be seen as a blocker, and so they end up being an enabler. By that I mean the person that would give you alcohol at a party even though they knew you’d had too much to drink (or pick your substance of choice). I think that finding that fine line, and making sure you don’t fall into either of those two traps, is really hard to do.
But it’s something that you’ve got to find. I would say, look for that line where somebody will come to you and ask for help, because they know they need help. That’s what I see time and time again: engineers that are building product know they don’t know security; they want you to help them.
Now, they’ll push back. They’ll say, “Do I have to fix that?” The answer is, yeah, you do. You’ve got to fix it, and you’ve got to do it now, and then we’ve got to talk about how to avoid it in the future, and I’m here to help you do that, but I’m not here to say, “It’s okay.” That’s a tough thing to do, but if you do it well, all of a sudden, security becomes a lot less painful. That’s what I aim for: trying to find that balance.
[0:36:07.3] Guy Podjarny: That’s great advice. I’d just add that a lot of your thoughts on the concepts, and how you model them and how you explain them, imply that maybe another bit of advice is to spend a couple of months in marketing. I think that might help. Adrian, this was a pleasure; thanks a lot for coming on the show.
[0:36:21.5] Adrian Ludwig: Thanks, I had a great time chatting with you, thank you.
[0:36:24.0] Guy Podjarny: Thanks, everybody, for tuning in, and I hope you join us for the next one.
[END OF INTERVIEW]