Moving from Legacy Dev Tools to Transformative DevOps

Hear about the benefits of upgrading to CloudBees Enterprise Jenkins

Josh Galde: Good morning, everyone, and welcome to Apexon’s webinar, titled “Moving From Legacy Development Tools to Transformative DevOps With Enterprise Jenkins.” My name is Josh Galde, and I am Director of Marketing here at Apexon and will be your host today.

Today’s presenters include Sanil Pillai, Director of Apexon Labs here at Apexon. He is an experienced engineering leader in global enterprise applications. He has built and managed offshore and onsite engineering teams, managed mobile projects for Fortune 500 clients, and has deep technical and functional expertise in mobile and enterprise applications.

We’re also joined by Kari Price. Kari is Director of Partner Marketing at CloudBees. She is focused on cultivating mutually beneficial go-to-market relationships with CloudBees’ strategic technology alliance partners and global business partners to help joint customers uncover their competitive advantage. Prior to CloudBees, Kari spent seven years at Red Hat, where she drove global marketing efforts for Red Hat’s partnerships with SAP, Symantec, and Cloudera, and served as Solutions Portfolio Manager for the Red Hat Services solutions engineering team.

A couple of housekeeping items I want to cover. No. 1, this webcast is being recorded and will be distributed to you via email, along with the slides, so you can share it with your internal teams or watch it again later. Second, your line is currently muted. If you have questions, please feel free to use the Q&A Panel, which you can find in the center part of your screen. If you click on the Q&A Panel, you should be able to enter a question, and we will respond to them throughout the broadcast. We will do our best to keep this webinar to our 45-minute time allotment, and we should have no problem doing that. At this time, I will go ahead and turn it over to Sanil, and he will take it from here.

Sanil Pillai: Thank you, Josh. Good morning, good afternoon to everyone. It’s my pleasure to be presenting this today with Kari. Apexon provides services across the entire application life cycle, right from design to development and, of course, DevOps. We are a CloudBees Platinum Partner, and we are also a founding member of the DevOps Express Initiative.

Kari Price: CloudBees helps enterprise organizations automate software development and delivery. We start with Jenkins, the most trusted and widely adopted CI/CD platform, and we add enterprise-grade security, scalability, manageability, and support to really help you continually deliver and improve the software that fuels your business. I’ll dive into Jenkins and the CloudBees Jenkins Platform a little more in this session, but for now, I’d like to pass it back to Sanil.

Sanil Pillai: Thank you, Kari. Alright, there you go. So what are we talking about today? Today, essentially, we want to take you through a journey of what is involved in moving from legacy DevOps to a more modern DevOps tool chain. The idea is to share insights from that migration journey, because it is challenging, but there are strategies to mitigate the challenges and best practices that we are very happy to share with you.

So we’ll talk about some of the existing legacy tools and their attributes, what the attributes of a modern tool set should be, the motivation for moving to a modern tool set, and some strategies for doing that. And we’ll cap it with some important aspects of Enterprise Jenkins. Okay? So, let’s get started.

What does the typical legacy tool chain look like? What you see on the screen is essentially a snapshot of some of the legacy tools in the tool chain. It’s by no means the entire set of tools, because there are so many, but they’re quite representative of the different areas of DevOps. One of the things you would notice is that there is a tool for every aspect of DevOps, but a legacy tool chain historically evolved to be heavier in one area versus another.

For example, you will see a lot of build management tools built around scripting languages, because that is where the genesis of DevOps really started in many companies. Over a period of time these scripts proliferate and expand into different areas of DevOps, right? So, you would have engineers who write scripts to help with build management, scripts to help with deployment, and scripts to help with monitoring.

A typical legacy tool chain is heavy in one area versus another, mostly around build management tools, with the different aspects of DevOps managed through these customized tool sets. You’ll also see messaging and notifications that are not really geared towards the real-time, actionable feedback you get from a modern tool chain, which I’ll talk about later. They are more passive in nature, and the reaction to those notifications happens in an offline channel, for example email. You will also see cloud providers like AWS and Rackspace here. When we mention AWS in the context of legacy tools, it’s more about how the deployments are used than about the modern deployment models you would want to run with. You will typically see very little containerization, so deployments are heavily tied to the underlying hardware.

So, those are some aspects of a legacy tool chain. It is really characterized by being heavy in certain areas of DevOps, with a lot of customized scripts that have to be put together to achieve what teams want to achieve with DevOps.

Given this snapshot, what are the main issues with legacy tools, and what are the motivations to transition? One of the important aspects of DevOps is that it’s about speed to market. You really want to get your products to market faster and reduce your cycle time. What that means is that the underlying DevOps infrastructure, the plumbing layer, needs to be pretty nimble, fast, and low maintenance, so that the main processes can run as fast as possible. What happens with legacy tool chains is that a lot of effort is spent in maintaining them, enhancing them, and just keeping the lights on as part of DevOps.

Essentially, when you put these tools together, because they were designed for an era when tool chains were not prevalent, you have to write a lot of customization to make them work together, right? You have to make them talk to each other and make sure your DevOps process can move from one stage to the next. All that added customization leads to a lot of maintenance, so you have to keep spending effort to make sure the tools stay enhanced and optimized for the purposes you want to use them for.

The other attribute you will see is that even when they are built for platforms that are open, the tools themselves are not open, so you’re stuck with proprietary tools and locked in from that angle. And that also restricts the kind of support you might get from the community to enhance them.

When these factors come together, what you end up with is a tool chain that doesn’t support some of the key CI/CD capabilities you really need, like coded pipelines, for example. You want customization to be built in, configured, and deployed easily, and that’s not really available. So you spend a lot of time maintaining something that could be streamlined with a modern set of tools, and that itself becomes the main motivation to look at modern tool chains and the migration paths to them.

We’ve looked at a few attributes of legacy tool chains, but what about a modern tool chain? In a modern tool chain, one important aspect is CI/CD (continuous integration and continuous delivery), and one of the important ideals is that these layers are connected together, with every stage within CI/CD able to connect to the others. So an important aspect would be truly seamless integration between the different tools in the tool chain.

What that really means is that you have open tools available. Take Jenkins, for that matter: it is an open platform with a huge community around it, which means you can not only deploy it on the platform of your choice, but you also get a lot of support from extensible plugins that can be added on to Jenkins to expand its capabilities. So being able to integrate very well and being able to support a plugin architecture is very important.

The other important aspect of a modern tool chain is the power to customize it through code. Coded pipelines enable you to orchestrate your DevOps processes, your build processes, and your deployment processes in a way that really suits your organization, without having to worry about the restrictions that a tool might impose on you. Having the ability to write pipelines in a domain-specific language, or even in Groovy scripts, for that matter, gives you a strength of integration that can really harness the power of a good DevOps chain and optimize it for your purposes.
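
As a small illustration of what such a coded pipeline can look like, here is a minimal declarative Jenkinsfile sketch; the repository URL, stage names, and shell commands are hypothetical placeholders, not something from the webinar itself.

```groovy
// Minimal declarative Jenkinsfile sketch of a coded pipeline.
// Repository URL, stage names, and commands are hypothetical placeholders.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://example.com/acme/sample-app.git', branch: 'main'
            }
        }
        stage('Build & Unit Test') {
            steps {
                sh './gradlew clean test assemble'   // any build tool could be called here
            }
        }
        stage('Deploy to Staging') {
            steps {
                sh './scripts/deploy.sh staging'     // placeholder deployment script
            }
        }
    }
}
```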

A good tool chain also needs the right interfaces to connect to external systems. For example, you may want to invoke your CI/CD pipeline from a third-party process: on a trigger from that process, you want the pipeline to be invoked. So you want the right kind of REST API available for your tool chain to be invoked from that angle. If not, then a manual process has to be built, and that really defeats the purpose of DevOps to some extent.
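
For instance, Jenkins exposes a remote build-trigger endpoint that an external process can call. Below is a rough Groovy sketch of such an invocation; the server URL, job name, trigger token, and credentials are placeholders, and the job is assumed to have its “Trigger builds remotely” token configured.

```groovy
// Hypothetical sketch: an external Groovy/Java process invoking a Jenkins job
// through Jenkins' remote build-trigger endpoint. URL, job name, token, and
// credentials below are placeholders.
def triggerUrl = 'https://jenkins.example.com/job/deploy-service/build?token=TRIGGER_TOKEN'
def conn = new URL(triggerUrl).openConnection()
conn.requestMethod = 'POST'
def basicAuth = 'ci-user:API_TOKEN'.bytes.encodeBase64().toString()
conn.setRequestProperty('Authorization', "Basic ${basicAuth}")
// Jenkins responds with 201 Created once the build has been queued.
println "Trigger response: ${conn.responseCode}"
```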

Then there is enterprise-grade scale. We’re talking about multiple pipelines and multiple jobs to be run for multiple products, and what that essentially means is that you need a scalable architecture. You’re going to run multiple masters and multiple agents, and you need the orchestration capability to manage them, right from the environment level down to the job, in a very seamless manner. That also means you really need enterprise-grade security and access control built in and supported, so that you can scale easily along with your organization’s needs.

And then, last but not least, it’s important to be able to use shared libraries from third parties that you can enhance your jobs with, and eventually to have a good dashboard view of your entire orchestration. Many times with older legacy tool chains you have pipelines running and only a one-off view of each pipeline; to get the orchestrated view, either you have to write a lot of scripts to make it happen, or you have to look at each piece individually. That really doesn’t help the case for a good, streamlined CI/CD. So one important aspect is to have that kind of visibility across the entire pipeline in a dashboard, which goes by different names, such as a stage view.

So that’s, in summary, the attributes of a modern tool chain, and that again is the motivation to move towards it. Now let’s look at the typical tools in a modern tool chain. I’m pretty sure a lot of these names will be familiar to those on the webinar. One of the important attributes of these tools is that a lot of them are open source, with a really strong community following behind them. They are very purpose-built for specific areas, and also built in a way that they integrate well with other tools and with an orchestrator like Jenkins or CloudBees Enterprise Jenkins.

You will also notice that the tools in the modern tool chain are best-of-breed. They have been time-tested in their individual capacity. For example, Artifactory and Nexus are very strong artifact repositories that have been in use outside the realm of DevOps as well, but when they are orchestrated or integrated into a DevOps chain with Jenkins, it really enhances their value.

So you have an open architecture, a stable architecture, containerization, repositories built in, open-source tools for static analysis, code coverage, etc. You have open-source build tools, and when you bring them all together and orchestrate them with an open-source tool like Jenkins, the power is unparalleled. Easy integration, strong community support, and the ability to be orchestrated with tools like Jenkins work very well for a really scalable pipeline architecture.

You would also see notifications. I talked about email and pager notifications on a previous slide. Modern notifications go into tools like HipChat and Slack, so teams react to notifications coming from the pipeline really fast, right in the channel, and feed back in. That really gives you the power to iterate very quickly and support your agile development methodology, okay?
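
As a small illustrative sketch, a pipeline can push its results straight into a chat channel. The example below assumes the Slack Notification plugin is installed and configured; the channel name and build step are placeholders.

```groovy
// Hypothetical sketch: pushing pipeline results into a chat channel, assuming
// the Slack Notification plugin is installed and configured.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'
            }
        }
    }
    post {
        success {
            slackSend channel: '#ci-alerts', color: 'good',
                      message: "Build passed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
        failure {
            slackSend channel: '#ci-alerts', color: 'danger',
                      message: "Build FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
        }
    }
}
```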

So, moving on: we talked about legacy tools, we talked about modern tool chains, and we talked about the reasons to migrate and the main motivations for doing that. Now it’s important to look at the typical migration challenges an organization faces in managing the transition.

There are two main categories of challenges, if you will. One of them, of course, is the organizational challenge, whether in terms of culture, structure, or philosophy of development, that needs to be tackled. The other part is the technical challenge of how you manage the migration from one whole set of tools to a new set of tools that probably have no interconnection with the old ones.

On the organizational front, it is important to take the right approach and not a big-bang approach. That means always looking at it from the perspective of asking, “Okay, as an organization, why do we want to move to a modern tool chain? What are the business drivers behind that?” From there, you look at what kinds of projects are going to be right for a process like this. Then you start working with a couple of projects that are trailblazers in your move to the modern tool chain, build out pilots, and develop the right skill set around them. Essentially, you’re building a small team that becomes a mini center of excellence around the DevOps practices you want to establish, and that gives you the confidence and the visibility into the implications of taking on a new tool chain.

Once you have built that out from an organizational perspective, you also have to look at whether your processes span multiple departments. If they do, what are the communication channels between those departments today? Are they automated, and are they transparent and collaborative in nature? If not, those are the areas that need to be tackled, because they are important aspects of how DevOps will behave at run time. The challenges you see from an organizational culture and structure perspective will manifest themselves in some of the DevOps problems you might see as you run these pipelines.

Then, apart from the organizational perspective, taking on a big migration project has to be justified in terms of ROI. Shorter cycle times are often a big driver for why it’s important, and Kari will touch upon some of these aspects in her slides later on.

Talking about technical challenges: one of them, obviously, is that you have one whole set of tools and you’re moving to a new set, which means you have to keep the process as minimally disruptive as possible while you’re building out the new tool chain. So there will be times during the migration path when you actually have a mix of both sets of tool chains before you end up with a complete cut-over, and it’s important to keep the processes running so that development cycles are not impacted.

You also have to keep things stable, because stability is important, especially when you’re in the middle of the migration path and issues come up. One of the big challenges is to figure out whether an issue is caused by the migration strategy, the older tool chain, or the newer tool chain, and that complicates the problem.

In terms of technical challenges, it’s important to look at: What are the processes today? What would the migration path toward the new tool chain be? And then, how do you reduce the manual work as much as possible and automate it? So you end up building a lot of small automation pieces, which aren’t present today, to support your final DevOps tool chain migration strategy.

The branch by abstraction pattern is a very popular pattern here. The idea is that when you’re moving a lot of things at once, you create abstractions, or proxies, in front of the pieces being moved, so that the dependent components talk to those proxies while the actual implementations are changed behind the scenes. That allows you to move pieces and components of the system without breaking down the entire system. In a lot of the migrations we help with, we follow the branch by abstraction pattern to make the transition really streamlined.
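
A rough Groovy sketch of the idea, with hypothetical class names and an environment-variable toggle, might look like this:

```groovy
// Hypothetical sketch of branch by abstraction: callers depend only on the
// abstraction, while the legacy and new implementations are swapped behind it.
interface ArtifactStore {
    void publish(String name, File artifact)
}

class LegacyScriptStore implements ArtifactStore {
    void publish(String name, File artifact) {
        // existing behavior, e.g. shelling out to a homegrown upload script
        ["./scripts/upload.sh", name, artifact.path].execute().waitFor()
    }
}

class NexusStore implements ArtifactStore {
    void publish(String name, File artifact) {
        // new tool chain behavior, e.g. pushing to a Nexus or Artifactory repository
        println "Uploading ${name} to the new repository (placeholder)"
    }
}

// A simple toggle decides which implementation is live; callers never change.
ArtifactStore store = (System.getenv('USE_NEW_TOOLCHAIN') == 'true') ?
        new NexusStore() : new LegacyScriptStore()
store.publish('sample-app-1.0.jar', new File('build/libs/sample-app-1.0.jar'))
```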

So those are the migration challenges and how you deal with them. Now, there are some specific migration best practices, or strategies, to mitigate these challenges. There are several, but the key ones are as follows. To start with, we can talk about Model Demo Projects. What really is a Model Demo Project? This is where you take existing projects, maybe across multiple platforms or across multiple disciplines, identify representative ones, and build the DevOps or CI/CD tool chain around them.

What that gives you is a good representation of how the different business units or different technologies in your organization would look in the new tool chain structure, and you get a lot of the data points you need before you roll it out to the larger organization. This has worked very well when you’re dealing with disparate business units or disparate product lines that use different technologies and may have their own customized existing tool chains.

So the first question is, “What does that mean for a move toward a very streamlined DevOps tool chain?” You can pick model demo projects from individual technologies or business units and try to create tool chains for them.

The other strategy is the Greenfield Strategy. This is where you start with a whole new project and visibly state, “This is going to be our next-generation showcase project in terms of the right methodology, and also in terms of the DevOps tool chain.” Because you’re really starting from scratch, you have a lot of leeway in deciding what tools you need to put in and what the timelines around that should be, and you can tweak and play around a lot with the Greenfield Strategy.

This really works well when you have a lot of interconnected processes that exist today, and you may not have the leeway to go and change one of them without disturbing all the others. In that case, it’s best to start with the greenfield project, have that as a showcase proof of concept, and then start to open it up to the larger organization.

The third key is more about DevOps Culture. This is where we take a step back and say, “Okay, let’s look at our organizational structure and our culture.” Then we try to work on that and align the right people, the right teams, and the right philosophies around DevOps first, so that your best practices are in place before you really embark on a Greenfield or Model Demo Projects strategy.

This really helps when you’re a large organization and you have multiple different philosophies of DevOps going on; it’s important to have a DevOps culture laid across the entire organization. Agile project management is a very important part to mention here, because a lot of times you may have projects that are not agile in nature, and when you try to move them to the new set of DevOps tool chains, you’ll find that the two can be somewhat orthogonal. So it’s important to identify the projects that are agile, or are clearly moving to agile, and make them the initial showcases for implementing your CI/CD strategy.

And last but not least, it’s important to have a Clearly Defined Migration Path, which means that you cannot just abruptly decide to embark on a tool chain strategy. You have to define the migration process, where you want to go, and the steps that will lead you there, and then the other pieces will fall into place. With that, I’ll hand it over to Kari to take it from here.

Kari Price: Thank you, Sanil. So, just as you have a lot of options in terms of the strategy you’ll take for modernizing your CI/CD tool chain, you can see that there is quite a vast ecosystem of tools to review and look at as you’re doing that. I’m going to focus today on Jenkins and on the CloudBees Jenkins Platform, which, based on the functionality we deliver, we see as the hub for continuous delivery. But as you can see, there are a lot of tools, and I think this is where working with a company like Apexon can be highly beneficial to you in determining which set of modern tools is best for your organization.

With that, I will dive in a little bit on Jenkins open source, as well as the CloudBees Jenkins Platform. Ten years ago, continuous integration entered the scene when our now-CTO, Kohsuke Kawaguchi, created Jenkins. He created it to address the speed problem with automation. Jenkins allowed code to be put into a framework that automates repetitive tasks, like commits, builds, and unit tests. Since that time, Jenkins has grown to become the de facto standard for continuous integration, with millions of users and projects across the world, and it has become a core technology within the application development ecosystem.

As we talk to organizations about their use of Jenkins, we’ve begun to see a pattern emerge as Jenkins usage has grown within an organization or within organizations. See if any of these sound familiar to you.

First of all, the popularity of Jenkins as an open-source software package means that more and more teams are downloading and using Jenkins. This can cause what we call “Jenkins sprawl,” where an enterprise organization can somewhat lose control over who’s using it, what the developers are doing with it, and how many resources it’s consuming. One of the ways the CloudBees Jenkins Platform can help is by managing Jenkins sprawl with a component called Operations Center, which provides that manageability aspect.

In addition, Jenkins has more than 1,000 plugins readily available in the open-source community. That makes it hard to control who’s using which plugins. There’s a governance factor that needs to be considered when you start looking at deploying Jenkins more broadly across your organization, because developers may be using various plugins and, unbeknownst to you, there may be technical problems, or really technical conflicts, between plugins that cause inefficient use of resources. So, with the CloudBees Jenkins Platform, we have our custom Plugin Update Center that can help support that.

Security is an area you may want to have more control over: who’s accessing Jenkins, and what they’re doing with it. The CloudBees Jenkins Platform provides enterprise-level features around role-based access control and folders for managing that.

In terms of downtime, we recently surveyed Jenkins users, and we found that 92 percent of them said that Jenkins is mission critical, so you want to ensure that you don’t have downtime. Additional features, such as high availability and cluster operations, are offered in the CloudBees Jenkins Platform to deliver the type of enterprise-scale capabilities that you need.

You may be looking to push more workloads to Jenkins, have it run faster, make sure its performance is optimized, and really run Jenkins at massive scale. This is another area where the enterprise-scale version delivered by the CloudBees Jenkins Platform can help, with monitoring and visualization, as well as technical support.

One of the biggest challenges that we hear about is that technical support does not exist with the open-source version of Jenkins. What that means is that if you’re running mission-critical, enterprise-scale Jenkins and you need someone to help resolve an issue or deliver a bug fix, you’ll probably get more enterprise-grade support working with CloudBees than you would with an open-source solution. So support is really critical at that point. These are just some of the ways that the CloudBees Jenkins Platform can amplify open-source Jenkins.

Let’s take a look at where Jenkins plays and where more emphasis needs to be placed as organizations consider taking that journey to enterprise-continuous delivery. Clearly, Jenkins is a solid, safe, and widely adopted foundation to build upon. Since this is open source, you’re going to need to consider how to support and maintain your large-scale implementation, and support is really the bedrock to consider under your Jenkins foundation.

You’ll also need to adopt security plugins and practices to ensure that the right user access and audit controls are in place to meet your compliance requirements. And as projects multiply and, as I mentioned earlier, potentially sprawl, the ability to scale and operate the many components requires more focus and resources. So high availability and redundancy become critical to maintaining the velocity of the delivery pipelines.

And, of course, with so much activity and operational effort, the need to manage the overall system and gain visibility and control of the operations also becomes very critical. But once that is in place, you have a solid and flexible platform for large-scale, complex continuous delivery.

So, Jenkins is not your typical open-source project, but what it does share with other open-source tools is a lack of certain core capabilities that most larger organizations require as systems become critical to business continuity. As usage grows, teams will typically need support: a place where they can go to ask questions and get help with plugins. They’ll need security to manage team access as well as to maintain compliance and traceability. And you’ll be looking at scalability as usage spreads from individuals to teams to larger groups; scalability issues can arise, and Jenkins was not initially designed for this, so it needs particular support and attention.

And then finally, manageability. Especially with complex pipelines and multiple user groups and projects, visibility and manageability become critical to getting the most out of your Jenkins. So these are some of the gaps that have been identified, and that’s really where the CloudBees Jenkins Platform comes into play. We take that strong core foundation that Jenkins delivers, and we add on those enterprise features.

How do you know if an enterprise-scale solution from CloudBees is the right approach for you, or if open source will meet your needs? Here are just a couple of questions you can ask yourself: Are you looking to increase developer productivity by leveraging advanced developer features? Are you looking to get access to Jenkins support experts for problem resolution? Are you looking to run Jenkins operations at scale to support a large number of developers or teams? Are you looking to monitor usage and share resources across your Jenkins implementation?

Those are some of the places where you can look. And this really gives you the overview: you can see that we’ve got the foundational support, we have the added enterprise features, we’ve got the cloud and container support, and then, of course, we have the option of deploying either on-premise or in the cloud, based on your needs.

As I mentioned earlier, we add these features on top of open-source Jenkins, and I just want to share some of the features you’ll get that are not necessarily available in open-source Jenkins. These are things such as reusable job templates, automatic failover to recover from master failures, Docker agents, the ability to monitor GitHub pull requests, role-based access control, and end-to-end CD pipelines with Jenkins Pipeline. These are just some of the features that the enterprise edition will provide for you.
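
As one small illustrative sketch of those capabilities, a declarative pipeline can run its build inside a Docker agent. This assumes the Docker Pipeline plugin and a Docker-capable node; the image name and build command are placeholders.

```groovy
// Hypothetical sketch: running build steps inside a Docker agent, assuming the
// Docker Pipeline plugin is installed and the node can launch containers.
pipeline {
    agent {
        docker { image 'maven:3-jdk-8' }   // placeholder build image
    }
    stages {
        stage('Build & Test') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}
```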

This isn’t a chart that I’m necessarily intending for you to read completely here on the screen, but what it does represent is some data that we mined from a couple of our customers’ success stories. If you take a look at it, you can see that there are some strong impacts across the delivery cycle, sometimes with dramatic results. You can see that there are huge reductions in person hours, in some cases an elimination of time and effort entirely, and even a reduction in bug identification and fix time.

But what we’ve highlighted in yellow are some specific delivery tasks around deployment time, development time, and build-related tasks. And we did some quick math, and we came up with some pretty impressive results. Just from this set of customers, this small set of customers, we identified that these customers experience a 6x faster delivery, which is pretty impressive, across the board. This is an ongoing study, and we will absolutely be continuing to monitor this, but it just gives you a snapshot of when you’re deploying CloudBees Jenkins Platform, what the benefit could be to your organization.

So, to dive into a specific case study, I want to share with you a case study for a customer, Orbitz. Basically what Orbitz did is it came to us and it wanted to shorten the delivery times for more than 180 applications that power 13 of their different websites. They worked with CloudBees to really refine their software delivery processes, and they implemented both open-source Jenkins but also the CloudBees solution for continuous delivery.

And really, they walked away with three key results. One, they were able to cut their release cycles by more than 75 percent; two, they were able to better focus their teams on higher-value tasks rather than the more redundant or mundane tasks I mentioned earlier; and three, they were able to improve their user experience by increasing the amount of testing they could deliver, thanks to the automation the platform provided. We have a YouTube video on this case study, so I encourage you to go to the CloudBees website and watch that video to hear a little bit more about what we were able to do for Orbitz. It’s a very good story.

In conclusion, again, Jenkins and the CloudBees Jenkins Platform really serve as the hub of the continuous delivery ecosystem. Jenkins has more than 1.2 million users and, as I mentioned earlier, 1,200-plus plugins available. As you can see, we integrate with a large spread of technologies to really help you move towards modernizing your CI/CD tool chain. And again, this is a great opportunity to work with a partner like Apexon to help identify which of those tools, and which strategy, are best for you to migrate to and achieve that modern CI/CD tool chain. So, with that, I’ll pass it back to Sanil.

Sanil Pillai: Thank you, Kari. So, to recap: when we talk about migration and a new set of tool chains, I strongly believe it is important that we don’t look at just feature parity. It is not enough to say, “Okay, I’m going to get what I was getting from my older set of tool chains with the new set.” We have to make sure we can harness the new tool chain completely to get the best value out of it, and that is where something like Enterprise Jenkins and open-source Jenkins come into play.

So, looking at the migration strategy: once you’ve defined it and you see that you’re going to have multiple pipelines, multiple jobs, etc., it is important to look at it and ask, “Okay, am I getting the right dashboard and analytics capabilities that I need? Am I getting enough support for my plugins? Am I getting enough scalability?” And depending on where you are, as Kari mentioned, you could start with open source, and then at some point there is the possibility to say, “Now this needs an Enterprise Jenkins strategy to take it further and scale it across my entire organization.”

So, recapping: migration from one tool or one set of tools to another has a definite strategy that can be implemented, and it is important to keep the end goal in mind, understand the attributes of Enterprise Jenkins as well as open-source Jenkins, and then define your migration accordingly. This slide really recaps the most important attributes of Enterprise Jenkins that could be relevant for your pipelines. With that, Josh, I’ll hand it over to you.

Josh Galde: Great. Thanks, Sanil, and thanks, Kari, for walking us through that. I really appreciate it. At this time, we’ll go ahead and take some questions. Go to the top center part of your screen to open the Q&A window, and feel free to enter a question or two for Kari or Sanil about our presentation today. While we’re waiting to get some questions in here, I did want to mention that Apexon is offering a free assessment, an opportunity for you to meet with one of our Jenkins experts. We’ll be sending out an email to you shortly after this webinar with a link where you can contact someone directly. So keep an eye out for that in your email.

With that, I’ll go ahead and – it looks like we’re getting some questions in. The first one is for Sanil. And it is, “Does the transformation need an organizational structure change?” Do they need to change their structure?

Sanil Pillai: Right. That’s a good question, and a very relevant question. More than an organizational change, I think what is really important is to form the right kind of body, one that can develop the practice within a group and then take it across the organization. So it is important to start from that strategy and have a body that represents multiple different groups within the organization, so it is representative of the problems that exist, and then define a DevOps strategy according to that. I think that is the right approach to take. So no, it does not necessarily need an organizational structure change.

Josh Galde: Okay, great. Thanks for that. And there’s a question here for Kari. Someone has asked how CloudBees differs from Jenkins, how this is different from the open source.

Kari Price: That’s a good question. So, again, CloudBees is really built upon Jenkins. What you’re going to get when you implement a CloudBees Jenkins Platform solution is that supported, enterprise version of Jenkins. You’re going to get those core enterprise plugins around manageability, security, and scalability. You’re also going to get a fully QA-tested solution and verified plugins, even from third-party providers. And then, finally, the security updates and patches; that’s another core piece, especially if you’re looking to run Jenkins in a very mission-critical deployment. And then, of course, the support: the gold- and platinum-level support that you can also secure with the CloudBees Jenkins Platform.

Josh Galde: Super. Great. So, this next question’s for Sanil, and the question is, “Do we have to be an agile organization before adopting these newer tools? Is that a requirement?”

Sanil Pillai: So, it is definitely recommended that the projects DevOps is implemented on are agile projects, because the real value of DevOps and CI/CD with the newer tool chains is really speed to market and faster feedback, and these are important attributes of agile, too. Since they go together, I would say that even if the entire organization is not agile, for the projects you embark on with the CI/CD tool chains, it definitely helps for those to be agile in nature.

Josh Galde: Okay. Good to know. This one is for Kari. It’s specific to CloudBees. “Is CloudBees a forked version of Jenkins?”

Kari Price: Yeah, that’s a good question. In some open-source situations, that is the case. But in terms of the CloudBees Jenkins Platform and Jenkins, no, the CloudBees Jenkins Platform is not a forked version of Jenkins. It is actually built upon the latest releases of Jenkins, which are integrated into the CloudBees Jenkins Platform. Hopefully that answers your question, and for any additional questions, we welcome you to visit www.cloudbees.com to learn a little bit more.

Josh Galde: Great, thank you. And I think our last question – we have time for this last question, so we’ll go ahead and do that and end on time. This is for Sanil, and this question is, “Our biggest challenge is to change the tires while the car is running at 100 miles an hour. What is your take on that?” That really seems to be a common concern.

Sanil Pillai: Absolutely, absolutely, yeah. We hear that a lot, and that’s a very relevant concern, too. Our strategy and methodology have always been: define, manage, and then phase in. What that means is that you always have to define the end goals you want to get to and all the different stages that need to happen in the transition, and then manage the phase-in process one step at a time.

That challenge will never go away, but the way to handle it is to pick the pieces that you can transition out and keep a hybrid mode on for a while, until you completely cut over. That needs the right methodology defined right at the beginning, and then very close, diligent management of the process to make it happen. It’s all about change management. Once you have the right change agents in place to manage it, it becomes really a manageability issue at the end. So define, manage, and then phase in. I think that should be the philosophy to manage the problem.

Josh Galde: Okay, great. Well, that concludes our webinar. Thank you so much for joining us today. You will receive an email in the next 24 hours with the slide presentation and a link to this webcast replay that you can share amongst your teams. As I mentioned earlier, Apexon is offering a free, no-obligation assessment with one of our Jenkins experts, and there will be a link in that email for you to click on to access that. If you have any questions, feel free to contact us at info@apexon.com, or call us at 1-408-727-1100 to talk directly to a representative. Thanks again to Kari and Sanil for their time today, and that concludes this webinar. Thank you and have a nice day.