
Software Quality at Speed in Agile and DevOps Environments

Overcome common quality challenges faced by most developers

Andrew Seger: Hello and thank you to everyone for attending today’s webinar, Software Quality at Speed in Agile and DevOps Environments. My name is Andrew Seger from 451 Research and I will be your moderator today. Presenting on today’s webinar first will be Nancy Gohring, Senior Analyst at 451 Research. Following Nancy will be Dharminder Dewan, Agile QA Manager at Infostretch.

I just wanted to go over a few housekeeping items before we get started. After the two presentations, we will have a brief Q&A period. To submit a question, just click on the Question button within the webinar interface. We’ll try to get to as many questions as possible; if we run out of time, we will answer the remaining questions after the webinar. The presentation itself will also be made available to all attendees.

We also ask you to provide any feedback that you might have on today’s presentation. Also, over the course of the webinar, we will have some polling questions. They will be posted on the interface; simply click the answer you most agree with. Perfect. With that, let’s get started. Nancy, please take it away.

Nancy Gohring: Great, thank you so much. Thanks everyone for joining us today. I’m Nancy Gohring. I’m an analyst here at 451 Research and I follow application and infrastructure performance, which means that I follow APM and infrastructure monitoring as well as performance testing. I’m going to talk today about how the need for speed and agility is shaping testing. I’ll start by talking a bit about what’s driving this need for speed and agility, how that need is in part driving the IT transformations that we’re seeing across the board, and in turn how that’s impacting the ways that organizations are approaching testing.

To start out, I’m going to talk a little bit about the idea of software eating the world, how central software has become to many types of businesses, and in turn the pressure that that’s putting on these businesses to move quickly. These are just some big-name examples that lots of you are likely to recognize. For instance, John Deere, a large manufacturer of farm equipment, said last year that it expects to have more software engineers than mechanical engineers within the next five years. That’s a pretty clear indication that this company is shifting resources towards software more so than its traditional business.

It’s not necessarily that they’re not going to be making the same type of farm equipment that they were. It’s just that software is becoming increasingly important, both to drive that equipment and to create new products and services that are going to be valuable for their customers. GE recently said that they’re transforming from a heavy industrial company to a digital industrial company. Again, a clear sign that they’re shifting towards software. Royal Bank of Canada: I heard this gentleman on stage at a conference recently saying that they’re basically an IT company with a bank sign on the front door. Clearly, IT is very central to making this financial services business run.

Then, my final example is Ford. Many of you may have seen that Ford made a $182 million investment in Pivotal last year. That alone is pretty remarkable. This is an automobile manufacturer, a very old, traditional company that has been around for ages, making a pretty huge investment in a software company. One of the reasons the company’s CEO gave for investing in Pivotal was that it would help speed Ford’s software development.

Clearly, software is becoming increasingly important to all types of business. That’s why these businesses are feeling pressure to move faster and to continually update their software, both to meet customer needs and to meet competitive pressure. We’re seeing this reflected across the board. This is a survey question that we asked IT decision-makers: “What’s the most important goal for your organization’s IT environment over the next 12 months?”

You can see, the topmost important goal is to respond faster to business needs; 36.4% said that. Number two, reducing cost. It’s not a big surprise. The third most important goal is improving reliability and availability, which obviously gets to the topic at hand here, which is testing. It turns out that, at least in forward-thinking organizations that are adopting modern technologies and practices, it’s working. They are in fact able to release software at a much faster cadence than they used to. The caveat to this slide is that it was a relatively small survey; we only talked to 200 people, so don’t feel bad if you’re looking at this slide and releasing at a slower pace.

This was specifically geared towards folks who are developing in the cloud. We know that they are already advanced, because one of the main drivers of cloud adoption is speed to market. This is already an advanced group of people. It goes to show that if you adopt modern technology and practices you can, in fact, speed your time to market with software development.

Among these folks, only 1% said that they were still releasing annually. The biggest bucket was monthly, at 34%. Then a fair number of folks said that they are pushing out software updates weekly and daily. How are they getting there? I mentioned cloud, but there’s a bunch of new technologies: containers and microservices can also help with speeding time to market. DevOps adoption is also something that we’re seeing folks pursue as a way to get to market faster with their software development.

There is a big buzz here; lots of people are talking about DevOps, as you may have experienced. The definition of DevOps varies pretty widely depending on who you ask. I think of it along a few pillars. It ends up involving a bunch of organizational changes. Cultural changes, in terms of more empathy for people in your organization who have different jobs and are in different roles. There’s a lot of process change. There is a lot of automation. A lot of changes to the way that software is developed and infrastructure is managed. We’re seeing really good interest in DevOps.

Here we asked, how would you describe your organization’s use of DevOps? 41% of the IT decision-makers we spoke with said that they’re either in broad implementation or initial implementation of production apps in a DevOps organization. Again, that’s 41%, a pretty good adoption number. It depends on how you define DevOps, but there’s clear interest here. If you say that you’re adopting DevOps, likely you’re doing something: you’re thinking about it, you’re transforming in a way that you’re hoping is going to help you speed time to market.

An additional 24.5% said that they’re thinking about it. They’re in discovery and evaluation, or doing testing in a pilot. That seems like pretty good interest in DevOps. We think of DevOps as a piece of the broader IT transformation that we’re seeing underway. This is a question we’ve asked, a very simple question: is your organization currently undergoing an IT transformation initiative? We’re seeing growth in the number of people that are saying yes.

Last year we asked, and 53% said yes, they’re doing some IT transformation. This year that went up to 58%. DevOps is part of this. There’s a whole bunch of organizational changes going on in IT organizations for a number of reasons, including to move faster. Part of this IT transformation is tweaking roles within organizations. Here the question was, which of the following characterizes the layout of your IT technical teams?

This is a question that we’ve only just begun asking, but we’re thinking that there’s a progression here, where the specialist orientation gives way over time to DevOps and more generalist-focused IT teams.

Again, there’s experimentation going on. There are a lot of changes going on within development, IT, and operations organizations in order to better meet competitive pressure and customer needs. That brings us to our first poll question. The question is, what are the major pain points for your QA team? Take a second to look at the answers here and click the one that best fits. Do you have long testing cycles? Do you feel like you’re putting best practices into action? Do you have the right set of tools? What are your big pain points? We’ll be interested to see what stage the audience is at. Do take a second to vote, and then I’m going to move on.

Part of this experimentation that we’re seeing in terms of IT transformation and DevOps is, as I’m hearing from a lot of vendors and also organizations, this idea of shift left, where a bunch of different functions move into the responsibility of the DevOps shop. Some of you have maybe heard some of these mash-up titles; there’s DevSecOps, and I recently heard DevDBA. The problem here is that this is happening in so many areas of specialization that I think it’s not really reasonable.

There’s this idea that you can have these generalists in a DevOps shop who are expert developers, but are also security pros, QA pros, DBAs, and network admins, all rolled up into one person. At the end of the day, things are going to fall through the cracks, right? One person can’t be an expert in all of those areas. What I’m seeing is some pretty interesting experimentation within organizations in terms of ways to allow all of these concerns to be covered. Make sure that you have security policies in place.

Make sure that you have QA experts who can ensure that software is being properly tested before it’s pushed, etcetera, without trying to unreasonably ask one person to do it all. I’m going to talk through some of the models that I’m seeing, specifically with an eye towards how this is happening with the testing function. I’m going to talk a little bit about what the traditional testing organization looks like, then talk about three new models that I’m seeing companies play around with. I’m thinking of them as embedded, hybrid, and DevTestOps. I’m going to talk through each of these briefly.

In a traditional model, you have a development shop and then you have a totally separate QA team. What happens is, developers build their software and then hand it off to the QA team. The QA team runs a bunch of tests, usually finds some problems, and ships the code back to the developers, who work on fixing the problems and then again ship the code back to QA. This process goes on and on. The issue with this is that testing becomes the gate, and it often slows down this process of shipping code. We’ve established here that speed is pretty key these days.

The worst-case scenario is that this becomes so frustrating to the development shop that they decide to skip testing altogether, and for obvious reasons that’s not really a great scenario. One organizational model that I’m seeing is what I think of as embedded. In this model, there are test or QA experts who live within the application team. That expert works very closely with the developers and operations staff in the DevOps team to design appropriate tests for the applications being developed, and to ensure that those tests are being run throughout the development process so as not to slow down production of code: executing and designing tests, and just ensuring that testing happens.

The result that I’ve seen in this model is that you’re going to ship a really high-quality app. This is a really good model. This isn’t really a downside, but something to think about: this is a model that’s going to work for very large organizations that have lots of application development teams and can afford to hire QA people to live in each of those teams. There is an organizational-size factor going on here that enables this type of model.

Another model that I’ve seen folks discuss is what I think of as a hybrid model. There’s still a QA team, a separate team, but the individual experts on that team are assigned to different application development teams, the DevOps teams, where they consult with them. If a new team is brought up and wants to build a new app, the test expert comes in and helps that team understand what type of testing is appropriate for that application and how to integrate the testing within their CI/CD tools and within their cycle. The QA team chooses the testing tools, because they are the experts, and then they can mentor the team members on how to use them and how to best incorporate testing into their release cycle.

The result of this model that I’ve seen is a really high-quality app; there’s thorough testing. A minor downside that I can think of here is that there are a few management headaches if you’re having to assign and reassign QA team members to different development teams. That’s pretty minor; I don’t think that’s a big thing. I think this is a model that can work really well for companies of pretty much all sizes. Then finally, there’s this idea of DevTestOps, and this is the idea that one person on the DevOps team is designated as the testing pro.

The downside to this is that, when I’ve seen this happen, typically the person assigned as the testing pro doesn’t necessarily have a lot of experience. They’re not a QA expert by trade; they’re just a designated person. The good news here is that at least testing is happening. I’m seeing more and more testing tools that are designed with this role in mind. These tools are designed to help suggest the appropriate types of tests, help the user build and run those tests, and help them figure out what types of tools are best to integrate with.

This is becoming an easier proposition, even if there is a downside, which is that you’re potentially not going to have the most sophisticated testing, because this particular person doesn’t necessarily have a QA background. Those are some models that I’m seeing folks play with, and do so successfully. If you are an organization that’s thinking about how to update your testing processes, ensure that you’re testing things properly and pushing quality code, and thinking about ways that you can transform your organization, you have some things to think about.

Think about the tools that you’re choosing. I’ve seen that it’s a good idea to choose tools that are designed both for a QA pro as well as more of a generalist in the DevOps shop. One reason for that is that if those two roles can collaborate more closely, they will be able to move faster. It makes sense for them to be able to share tools in a way that the QA pro can design a lot of the tests and then the DevOps folks are looking at those tests and initiating them. Both types of roles are able to use the tool and really work together.

I would also say, think a lot about how to integrate your testing tools with the other tools that you’re using. It makes a lot of sense to integrate testing tools with automation tools that automatically kick off tests as code is being pushed. It makes sense to integrate with monitoring tools, so that when tests are run, the monitoring tool can help identify the root of a problem that may surface. It makes sense to integrate with a ticketing tool that can insert a ticket when a test surfaces an issue, so that the folks responding to issues can prioritize fixing the problems the testing tool surfaced. So: integrate, integrate, integrate.
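To make the testing-to-ticketing integration concrete, here is a minimal sketch in Python. Everything here is hypothetical: `TestResult`, `file_tickets`, and `fake_create_ticket` are made-up names, and the toy in-memory tracker stands in for a real tracker API (Jira, ServiceNow, etc.).

```python
# Sketch: wire test results into a ticketing system so failures become
# prioritized work items. Names and the "tracker" are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestResult:
    name: str
    passed: bool
    detail: str = ""

def file_tickets(results: List[TestResult],
                 create_ticket: Callable[[str, str], str]) -> List[str]:
    """Open one ticket per failing test and return the new ticket ids."""
    tickets = []
    for r in results:
        if not r.passed:
            tickets.append(create_ticket(f"Test failure: {r.name}", r.detail))
    return tickets

# Toy in-memory "tracker" standing in for a real tracker integration.
_store: list = []
def fake_create_ticket(title: str, body: str) -> str:
    _store.append((title, body))
    return f"TCK-{len(_store)}"

results = [TestResult("login_flow", True),
           TestResult("checkout_total", False, "expected 42.00, got 41.99")]
print(file_tickets(results, fake_create_ticket))  # one ticket for the one failure
```

The point of the pluggable `create_ticket` callable is exactly the integration argument above: the test runner doesn’t care which tracker sits behind it.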

At the same time, know that adopting all these tools won’t get you all the way there. There are a lot of other factors at play; just buying a bunch of tools isn’t going to turn you into a DevOps shop that appropriately does testing. Especially when you’re thinking about different ways to organize around testing, think about your corporate culture. Some cultures are going to be more conducive to, say, an embedded or a hybrid model. Others might be more conducive to a DevTestOps role. Think about your corporate culture and what makes sense for you.

Finally, account for headcount, because some of these models, as I mentioned earlier, will work better for larger enterprises, and others may work well in a more startup type of environment. I just wanted to close with this slide, since I was talking about integrate, integrate, integrate. This is a graphic that a company called XebiaLabs put together; they call it the Periodic Table of DevOps Tools.

It is really just a snapshot of all the different kinds of tools that DevOps shops are employing these days to help them move faster; that orange grouping in the center is the testing tools. My point here is that if a team is using all these tools and they’re working independently, that actually begins to slow things down. As you’re choosing all these different tools, as you’re trying to transform your IT organization, move faster, and ensure you are doing things like testing and building security into your processes, think a lot about how those tools integrate, because that is going to make your life a lot easier. With that, I’m going to hand over to Dharminder. Thank you very much.

Dharminder Dewan: Thank you, Nancy. Thank you so much. I think that was great from a research perspective; a lot of things that I also learned. Good morning everybody, good afternoon to all our attendees who are from the east coast, and to anybody who’s joining from remote sites, good evening. What I’m going to talk about today, in general, is how, as we saw from the research perspective, testing is shifting from the traditional model to the new model, how it’s moving from past to present. What I really see is the same in any organization that we go to, any company that I have worked with.

There is a very simple goal from a product perspective: they want to deploy a quality product, they want to deploy faster, and with less cost. Lately, we talk a lot about process efficiency, because that is how you have to reduce cost. Gone are the days when cost reduction was basically about letting people go, when having fewer people was seen as the effective way of reducing cost. I think it’s becoming more and more about efficiency, getting more done with less. I think that is really the key. As you see here, there is a clear demarcation in terms of how testing has shifted from past to present.

What Nancy was talking about earlier also was that in the traditional model there was a hand-off. It was like the compartments of a train: one compartment is done with their work and then hands over, through a chain connection, to the next compartment. Now, what we are seeing is that there is an overlap. Testing is not just the job of a QA engineer. It is being embedded into the build, as well as upstream into the traditional development teams. What we see is a lot more focus on unit testing than before; in the past, not a lot of focus was put on unit testing. In general, in the past what we saw was duplication.

There was no optimization, and even the resources sometimes were not used efficiently. What we see now is integrated testing, which is optimized for setup, cleanup, and sequencing. In the past, over almost two decades plus, I personally have worked on a lot of models, starting from Waterfall and moving towards Agile. We have seen everything, and as a services company, we see different challenges across different customers when we work with them.

What we see now is digital. Digital, digital. Digital is the word right now. That is basically transforming the SDLC. What does digital really mean? As we see it, in the near future the traditional testing paradigm, with the specialist doing the testing, will reduce further down, and testing will spread across the generalists that Nancy mentioned.

It would be part of the development domain. It would be part of the ops domain. When we say digital, we are talking about test automation. We are talking about nightly builds. We are talking about building and testing every check-in, faster bug discovery, faster bug fixing, and optimization for setup, cleanup, and sequencing. Again, I just want to emphasize this, because the idea here is that the more automation we do, the more efficient we can be. One of the requirements from any organization is to make sure that we are deploying faster, deploying quicker, and that the time to market is much, much faster.

What that really means from our perspective is the shift left. What we are doing is moving from a QA environment to a QE environment; we are moving from quality assurance to quality engineering. If you see here, compared to a typical quality model, in the shift-left quality model the peaking has changed.

Peaking, what that means is the attention to quality. Attention to quality is given more towards the beginning part of the SDLC rather than later on. I worked in traditional organizations where we always used to struggle with unit testing. We always used to struggle with finding the majority of the bugs in the earlier part of the cycle. I would say 75% to 80% of the problems were found during the testing cycle, well after the development was done and the code freeze was done. That’s when, historically, the bugs used to be figured out.

That’s when the resolution used to happen for those bugs. It was not very cost-effective. Whereas now, what we see, as with the hybrid and the DevOps models, is all of this testing being done as part of the development cycle itself. In the Agile model, it’s done within the sprint. What we are seeing as a company is that whatever features are being developed as part of that sprint are not okay until the quality gate is passed at the sprint level. The expectation, from a process perspective and from the product perspective, is that the product which comes out of an Agile environment at every sprint level should be a deployable product.
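A sprint-level quality gate can be sketched very minimally: run every named check, and block the merge if any fail. In a real pipeline this would be a CI stage (a Jenkins job, for example); the function names and checks below are made up for illustration.

```python
# Sketch of a check-in / sprint quality gate. Illustrative names only:
# a real gate would run the project's actual unit suite in CI.
from typing import Callable, Dict, List

def quality_gate(checks: Dict[str, Callable[[], bool]]) -> List[str]:
    """Run every named check; return the names that failed (empty list = pass)."""
    return [name for name, check in checks.items() if not check()]

def add(a, b):  # the unit under test
    return a + b

checks = {
    "unit: add ints":   lambda: add(2, 3) == 5,
    "unit: add floats": lambda: abs(add(0.1, 0.2) - 0.3) < 1e-9,
}

failed = quality_gate(checks)
print("merge blocked:" if failed else "gate passed", failed)
```

Running the gate on every check-in, rather than at the end of the cycle, is the mechanical core of the shift-left idea described above.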

This is something interesting that we have seen, and of course it leads pretty much to the cost of fixing bugs. The goal of the organization is time to market; the goal of the organization is becoming more efficient. The cost of fixing bugs is very, very critical. If you look at the traditional QA model, I think anybody who is in this QA world would have seen this curve, because it’s been there for ages. We have been discussing for a long time that the sooner we find the bugs, the easier and more cost-efficient it is to fix them.

In the Agile QA model, that’s the first thing everybody experiences: we start seeing the cost of fixing bugs go down. The time to fix bugs in the SDLC is faster. The duration to fix a bug reduces, because the thing we do here is testing after every check-in. Of course, a lot of times when you see this graph, it’s difficult to understand why you’re seeing the shift even though the chart has overlaps and the numbers are the same. The way you should look at this particular graph is that the duration from requirements down to production has reduced in an Agile QA model compared to a traditional QA model.

That’s why there’s a lot of overlap happening within requirements, design, unit testing, functional testing, UAT, and production, and this is much, much faster. What that does for any customer is faster deployment and cost efficiency, and then, of course, the process efficiency is there. What we have seen, as a summary from our company’s observations, is that average release frequency for the majority of companies has gone up by 150%. Just to mention it here: we are working with Fortune 1000 companies.

Those are the companies where we are seeing this shift. We are seeing these enhancements, these advancements. Pushing changes to production is two times faster. Of course, they’re using CI/CD, continuous integration and continuous deployment. CI stands for continuous integration, but for me, CI has always been continuous improvement. That’s how I feel. The shift has been from a traditional QA model to this QE model, because every time there’s a retro that we do, a retrospective, we figure out, “Okay, what’s the advantage?”

We look at how we became more efficient. That’s a continuous look back, continuous process improvement. That’s what has developed here. Will it stop here? Absolutely not. I think there’s still a long way we need to go. As we talk about this, I would reflect back 15, 18 years. What I saw even in those traditional models was that the drive to become more efficient was still there. Those of you who have been in the industry for a few decades would know that at that time it was a struggle. Moving testing into development was a huge discussion point. There was pushback from every organization, wanting people to just focus on what they do best, which is being specialists.

Now we talk about generalists; that’s what the future is. We are expecting that a developer should be able to do testing, and a tester should be able to do the building of the product. They should be able to run automation testing. Are we talking about manual tests? Are we talking about automation? There’s a merge happening right now. What that does for us is that the more of a generalist you are, the faster and more intelligent decisions you can make, even from a predictive QA perspective. Test automation, we are seeing, has increased by 200%.

Do we see that only now? I think automation has historically been proven to be very, very effective, because of the integration aspect of it, the new-feature aspect of it, and all the types of testing we are talking about: stress testing, smoke testing, and so much more we need to do. Every time there’s a time crunch; every time there are drop-dead dates. Automation is the key: we can definitely do testing, we can do builds, we can do nightly builds. That’s where the future was seen 15 years back.

We continue to progress, and we see that going forward there will be many more enhancements happening in the same domain, where the future for us is to be more efficient, with faster and faster time to market. That brings us to our poll question number two; Nancy had the first one. The question is, how would you rank your organization’s maturity in testing and QA on a scale of one through five? One being not mature enough, and five being very mature. I’ll give you guys a couple of seconds to put that through. Working with different clients, like we said, the Fortune 1000 companies, what have we seen?

What we have seen in general is that going into an organization, there are a lot of challenges an organization has. For a very large national bank that we work with, we saw that they were struggling with mobile device proliferation: the rapid growth in mobile device usage, the need to have the capability of deploying their products on iPads, iPhones, and smart devices, and the rapid deployment of new apps and architecture. In today’s world, customers keep on requiring new features; they keep on requiring enhancements.

They’re not willing to wait a long time for bug fixes, because some of them are critical and impact usability, and that definitely hurts at the consumer level. It was very critical for them to get this done. Of course, there are complex functionalities. There are a lot of use cases and complex test scenarios that need to be validated, just to make sure that when it goes out the door, customers are not finding those issues and their experience is excellent, because every customer rates any bank, any product, based on their user experience.

Of course, there is optimizing the device testing cost, because it’s all about efficiency, efficiency, efficiency. That’s the key across any organization. What we did, going into this, was look at the regression cycles. When we looked at the regression cycles, again, we are talking about digital here; a lot of optimization went into that organization. After doing our analysis, we were able to reduce the regression cycle from four weeks to one week. How were we able to do that? There are a lot of sequential aspects of the SDLC that we were able to run in parallel.

Then we were able to shift left using automation. The requirement was that we needed to go on multiple devices. That was done for iOS, Android, Amazon, and the Windows app stores. A risk-based device selection methodology was used. We set up a complete test environment for desktop, mobile, native, hybrid, and web apps, as well as the mobile apps. The coverage therefore included functional as well as nonfunctional device testing. The other aspect which I would again like to highlight is simulators. As we know, historically, any company that is into devices, any company that is using hardware, needs to use simulators.
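Risk-based device selection, mentioned above, can be sketched as a greedy coverage calculation: rank devices by usage share and keep the smallest set that covers a target share of real traffic. The fleet and percentages below are invented for illustration; a real methodology would also weigh OS versions, screen sizes, and defect history.

```python
# Sketch of risk-based device selection. The fleet shares are made up,
# expressed in whole percent (integers avoid floating-point surprises).
def select_devices(usage_share, target_pct=80):
    """Greedily pick the highest-share devices until target coverage is met."""
    chosen, covered = [], 0
    for device, share in sorted(usage_share.items(),
                                key=lambda kv: kv[1], reverse=True):
        if covered >= target_pct:
            break
        chosen.append(device)
        covered += share
    return chosen, covered

fleet = {"iPhone 7": 30, "Galaxy S8": 25, "iPad Air": 15,
         "Pixel": 10, "Fire HD": 5, "Lumia": 2}
devices, coverage = select_devices(fleet)
print(devices, coverage)  # four devices already cover 80% of usage
```

The design point is that a small, deliberately chosen subset of devices can cover most real-world usage, which is what makes device-lab testing affordable.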

I used to work with a telecom company, and the biggest challenge for them was getting those devices; device testing could cost a few hundred thousand dollars. The biggest aspect there was simulators, because they give you more predictability. It’s not just the cost. It is about predictability also, because with simulators, every time you start at the same place and you end at the same place. The possibility of your test cases or your execution going faster, without false failures, is way, way higher compared to the traditional testing that we sometimes end up doing with actual devices.

The other case study was with a large retail department store. The challenge these guys had was that they had omnichannel strategies but no CI/CD process in place. Their test environments were very unstable due to frequent releases, because any time a test case changed, the automation required realignment and changes, and with such frequent releases they were not able to manage that.

Going into this customer, we looked at the bunch of tools which were there and analyzed what existing stuff they were using. We started working with Jenkins. The manual test cases which were there, we automated, because the idea here is that automation is the key. The more we are able to automate, the more extensive the testing we are able to do. Plus, we need to make sure that there is focus for the QA engineers: they need to focus on the areas which we figure are more critical.

There is a regression aspect that automation can cover, by making sure those test cases execute on a nightly basis. Then, of course, the CI/CD process was completely automated using Jenkins. There were nightly builds; with every code check-in there was a build ready. Where earlier we were talking about yearly releases from this customer, that went down to four months, and execution time was reduced by 75%. Again, how did we do it? Parallel execution, using more devices, automation, and introducing in-sprint automation.
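Parallel execution, one of the levers behind that reduced execution time, can be sketched in a few lines: independent tests are fanned out across worker threads instead of running one after another. The "tests" here are stand-ins with made-up names; real ones would drive the application under test.

```python
# Sketch: run an independent regression suite in parallel rather than
# sequentially. Test names and outcomes below are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def make_test(name, ok=True):
    """Build a stand-in test; a real one would exercise the product."""
    def run():
        return name, ok
    return run

suite = [make_test("search"), make_test("cart"),
         make_test("checkout", ok=False), make_test("profile")]

# Fan the independent tests out across worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(test) for test in suite]
    results = [f.result() for f in futures]

failures = [name for name, ok in results if not ok]
print(failures)  # only the one failing stand-in test
```

This only pays off when the tests are truly independent, which is why the setup, cleanup, and sequencing optimization mentioned earlier matters so much.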

The idea really is that we don't want to be in a traditional model with strict sequencing, where within the sprint the majority of the time customers end up doing manual testing. The focus on automation isn't there. Why? Because they don't have a mature automation framework. This is something we felt helped them a lot; that was our observation from their perspective. I think that brings us to poll question number three: are you interested in talking to experts from Infostretch about Agile testing? You can just say yes or no at this time. I think I missed one slide.

Okay, sorry, here it is: the third case study. I was wondering where my third case study was. We worked with Scholastic, and the challenge we saw there was that they required a responsive, web-driven application design to handle all the common functions. They had some operational inefficiencies, which included a lack of automation. There was no common QA strategy; QA was done in silos by each product team. The automation framework was also very scattered. There was common product overlap between features, but the automation frameworks were not closely tied together.

What that pretty much meant was that any time there was a change or enhancement, it had to be made in multiple places. Going into this organization, automation was again one of the solutions we provided: we automated their manual test cases and developed more than 1,000 behavior-driven test cases for them. What that does is tie the test scenarios very closely to how a user would actually use the application, because that's really the key; that is what drives the customer experience.
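Behavior-driven test cases like these phrase each scenario in the user's terms, typically as given/when/then. A minimal sketch of that structure in plain Python; the cart scenario and the `sign_in_and_restore` helper are hypothetical illustrations, not Scholastic's actual suite:

```python
def sign_in_and_restore(saved_cart):
    """Stand-in for the application behavior under test."""
    return dict(saved_cart)


def test_returning_user_sees_saved_cart():
    # Given a user who previously added an item to their cart
    cart = {"user": "reader42", "items": ["workbook"]}
    # When the user signs back in
    restored = sign_in_and_restore(cart)
    # Then the saved item is still in the cart
    assert restored["items"] == ["workbook"]


test_returning_user_sees_saved_cart()
print("scenario passed")
```

In practice teams often use a BDD framework (Cucumber, behave, and the like) so the given/when/then text is readable by non-engineers, but the discipline is the same: every test maps to a user-visible behavior.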

Then, of course, we were able to create a standard automation framework with shared services for them. That helped them leverage it across multiple features: faster realignment, faster execution, faster enhancements. We also worked with them on a test data manager for the regression suite. This reduced the time to test functionality and improved time to market for this customer.

So what do we see in the industry? What I've seen is that the points currently driving the SDLC, from the customer's perspective, are time to market and faster delivery and deployment, shipping product almost every sprint in the majority of cases, plus improvement in testing efficiency. That means reduction in analysis time, faster realignment, faster and more automation, use of the digital technology available out there, CI/CD (continuous integration and continuous deployment) and, as I mentioned before, what for me is continuous improvement.

Where we are today is, I think, the work of almost 15 to 20 years of continuous discussion. There used to be forums and meetings we would go to, to share ideas and suggestions, all about how we can get to our goal faster. The customer goal, from what we have seen, is absolutely to deliver a quality product with process efficiency and a faster time to market. That's all I wanted to present to you. This is Dharminder; thank you for attending the webinar. Andy, over to you.

Andrew Seger: Excellent, thank you Dharminder. Now it's time for the question-and-answer period. If you have a question, just post it in the question box and hit the submit button. Our first question is: nowadays testing is performed at every stage of development; even developers perform unit testing. If testing is slowing things down, how does DevOps come into the picture in that domain? I believe that's for you, Dharminder.

Dharminder Dewan: All right, my answer to that question is that I personally don't think testing is slowing down. What I really feel is that the emphasis on testing is still there; everybody understands that we need to value testing, and I think it is valued even more now. With DevOps, what we are doing is again shifting testing left. With DevOps, we're able to do nightly builds, and for every code check-in there's a build that is triggered. What that does is put more emphasis on testing, plus every time there's a failure, the developer can go and fix that issue right away, and then with that check-in there is again a build happening.

Historically, what used to happen was that multiple developers would check in code, and then a build would happen. If there was a failure, an email or notification went out to everybody: "You know what, something has failed." Every developer had to jump in, look at the problem, and figure out whether it was their bug or not. That's the efficiency I see. It's not less focus on the test stage; the time when we do the testing has shifted left, but to me the focus on testing is much greater now than it was before.
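The efficiency described here, notifying only the author of the failing check-in instead of emailing everybody, can be sketched minimally. The `build_and_test` function and the commit records are hypothetical; a real pipeline would get this information from the CI server (e.g. Jenkins) and the version-control system:

```python
def build_and_test(commit):
    """Stand-in for the CI job triggered by a single check-in."""
    return "syntax error" not in commit["diff"]


def notify(author, commit_id):
    return f"Build failed for {commit_id}; notifying {author} only."


commits = [
    {"id": "a1", "author": "priya", "diff": "add feature"},
    {"id": "b2", "author": "sam",   "diff": "syntax error in parser"},
]

# One build per check-in: a failure points straight at one author,
# instead of a batched build failing and paging the whole team.
messages = [
    notify(c["author"], c["id"])
    for c in commits
    if not build_and_test(c)
]
print(messages[0])
```

The design point is attribution: because each build maps to exactly one check-in, the failing change and its owner are known immediately.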

Andrew Seger: Excellent, thank you. I believe this question is for you also, Dharminder. When does the QA/QE team need to be involved as part of the shift left paradigm?

Dharminder Dewan: As we talk about the shift-left paradigm, I can answer this question from a historical perspective; let me quickly go back to the traditional model. Even working in the traditional model, what I felt was that for QA testing to be efficient, the QA/QE engineers need to be involved very early, at the time when the requirements are being discussed, because the test cases need to be written very early. That helps them understand the product and how it will be built. So in this scenario, I feel the time when the requirements are being discussed is the right time to get the QA/QE engineers involved, and they should stay involved during the entire SDLC: during the design phase, as the product is coming together, during your UI reviews. The QA engineers should be involved at every stage, as early as possible, starting, like I said, from the requirements.

Andrew Seger: Excellent, thank you. I believe this question is for Nancy. What are the most common roadblocks you see among customers trying to shift testing left?

Nancy Gohring: I would say that it's an organizational thing, a lot along the lines of what I spoke about in my presentation. It's about figuring out who the right person is to take responsibility for testing and ensuring that it fits into your organization. I think the tooling is very important too: if you're trying to shift left and you're using legacy testing tools that require a ton of experience and a ton of training to use, it's going to be really difficult to incorporate that tooling into a modern development cycle. I'd say those are the biggest challenges. Dharminder, do you have anything to add, in terms of what you see with your customers?

Dharminder Dewan: I think I do, Nancy. One thing I definitely agree with you on is the organizational side. What I've also seen historically, because this is a shift people have been trying to push for the last two decades in different organizations, is that a lot of the time there's pushback, even from the manual QE engineers. They don't want to get involved, and of course any time there's change, people resist it. And there's definitely effort involved in restructuring, because somebody coming from a legacy, traditional model has to put in that effort.

They have to agree to put in the time, and to accept the cost associated with making the change towards shifting left. Of course, there's learning new technology and skill enhancement, because this shift definitely moves the engineers from being specialists towards being generalists; manual QE engineers need to get involved in automation. I think all of these factors play a key role in that pushback, and in whether we are successful.

Nancy Gohring: Yes, and that's a really good point about retooling, too: people have put a ton of investment into the tools they already use, right? It's hard to give that up. That was a really good point.

Andrew Seger: Excellent. Our next question is, how is the Agile model better than the V model?

Dharminder Dewan: Sure. I think the question is really the Agile model versus the waterfall model; that's how I'm understanding it. Right, Andy? Okay. Basically, the reason the Agile model is better than the waterfall model comes down to some of its components. In the Agile model, we're talking about a sprint, say two weeks. In a waterfall model, there is no way in a two-week period that the development team would even get the QA engineer involved, look at the use cases from a customer scenario, or do enough testing to make sure the product that goes out after two weeks is a technically shippable, deployable product. That's one of the shifts I'm seeing in the industry right now.

The expectation now is that after every two weeks there should be a deployable, shippable product, and that just isn't something you can implement in a waterfall model. Plus, I also feel that in the Agile model there's an opportunity for engineers to enhance their skills. In the traditional models I worked in at different companies, there used to be manual test engineers, automation engineers, and integration test engineers, and I struggled for years to tell people that the best thing to do is really to combine manual and automation engineers and enhance their skills. That's number one. Number two is definitely testing being involved as early in the SDLC as possible. In a traditional three-tier architecture, what I've really seen is that when the servers or the frameworks were built, not much testing was done until the actual app was built; once the app was built, that's when the testing happened. What we did was push back and say, "Why do we not do testing around the servers and the framework as well?"

Because even if we couldn't do full functional testing at that point, at least we could do some stability testing and some stress testing, and we saw a lot of value in doing that. That's where I feel that even in the traditional models, there were bits and pieces where we were Agile; but now, with full Agile, it's smooth, more effective, more efficient, and a much faster, shorter cycle.

Andrew Seger: Excellent. That’s all the time that we have for questions today. On behalf of today’s presenters, I’d like to thank everybody for attending today’s webinar and we hope to see you on another webinar soon. Thank you very much and have a nice day.


It's not necessarily that they're not going to be making the same type of farm equipment that they were. It's that software is becoming increasingly important, both to drive that equipment and to create new products and services that will be valuable for their customers. GE recently said that they're transforming from a heavy industrial company to a digital industrial company; again, a clear indication that they're shifting towards software. Royal Bank of Canada: I heard a gentleman from there on stage at a conference recently saying that they're basically an IT company with a bank sign on the front door. Clearly, IT is very central to making that financial services business run.

Then my final example is Ford. Many of you may have seen that Ford made a $182 million investment in Pivotal last year. That alone is pretty remarkable: this is an automobile manufacturer, a very old, traditional company that has been around for ages, making a pretty huge investment in a software company. One of the reasons the company's CEO gave for investing in Pivotal was that it would help Ford's software development.

Clearly, software is becoming increasingly important to all types of business. That's why these businesses are feeling pressure to move faster and to continually update their software, both to meet customer needs and to meet competitive pressure. We're seeing this reflected across the board. This is a survey question that we ask IT decision-makers: "What's the most important goal for your organization's IT environment over the next 12 months?"

You can see that the topmost goal is to respond faster to business needs; 36.4% said that. Number two is reducing cost, which is not a big surprise. The third most important goal is improving reliability and availability, which obviously gets to the topic at hand here, which is testing. It turns out that, at least in forward-thinking organizations that are adopting modern technologies and practices, it's working: they are in fact able to release software at a much faster cadence than they used to. The caveat to this slide is that it was a relatively small survey; we only talked to 200 people, lest you're looking at this slide and feeling bad if you're releasing at a slower pace.

This was specifically geared towards folks who are developing in the cloud. We know they are already advanced, because one of the main drivers of cloud adoption is speed to market. This is already an advanced group of people, and it goes to show that if you adopt modern technology and practices, you can, in fact, speed your time to market with software development.

Among these folks, only 1% said that they were still releasing annually. The biggest bucket was monthly, at 34%. Then a fair number of folks said that they are pushing software updates weekly and even daily. How are they getting there? I mentioned cloud, but there's a bunch of new technologies: containers and microservices can also help speed time to market. DevOps adoption is also something that we're seeing folks pursue as a way to get to market faster with their software development.

There's a lot of buzz here; lots of people are talking about DevOps, as you may have experienced. The definition varies pretty widely depending on who you ask. I think of it along a few pillars. It involves a bunch of organizational changes; cultural changes, in terms of more empathy for people in your organization who have different jobs and are in different roles; a lot of process changes; a lot of automation; and a lot of changes to the way that software is developed and infrastructure is managed. We're seeing really strong interest in DevOps.

Here we asked: how would you describe your organization's use of DevOps? 41% of the IT decision-makers we spoke with said that they're either in broad implementation or in initial implementation of production apps in a DevOps organization. Again, that's 41%, a pretty good adoption number. It depends on how you define DevOps, but there's clear interest here: if you say you're adopting DevOps, you're likely doing something, thinking about it, transforming in a way that you hope will help you speed time to market.

An additional 24.5% said that they're considering it: they're in discovery and evaluation, or doing testing in a pilot. So there seems to be pretty good interest in DevOps. We think of DevOps as one piece of a broader IT transformation that we're seeing underway. This is a very simple question we've asked: is your organization currently undergoing an IT transformation initiative? We're seeing growth in the number of people saying yes.

Last year, when we asked, 53% said yes, that they're doing some IT transformation. This year that went up to 58%. DevOps is part of this; there's a whole bunch of organizational change going on in IT organizations for a number of reasons, including to move faster. Part of this IT transformation is tweaking roles within organizations. Here the question was: which of the following best characterizes the layout of your IT technical teams?

This is a question we've only just begun asking, but we think there's a progression here, where the specialist orientation gives way over time to DevOps and a more generalist-focused IT team.

Again, there's experimentation going on; there's a lot of change within development, IT, and operations organizations in order to better meet competitive pressure and customer needs. That brings us to our first poll question: what are the major pain points for your QA team? Take a second to look at the answers, and click the one that best fits. Do you have long testing cycles? Do you feel like you're putting best practices into action? Do you have the right set of tools, or not? What are your big pain points? We'll be interested to see where the audience is. Take a second to vote, and then I'm going to move on.

Part of this experimentation around IT transformation and DevOps is something I'm hearing a lot of vendors and organizations talk about: this idea of shift left, where a bunch of different functions move into the responsibility of the DevOps shop. Some of you have maybe heard the mashed-up titles; there's DevSecOps, and I recently heard one for DBAs. The problem here is that this is happening in so many areas of specialization that I think it's not really reasonable.

There's this idea that you can have generalists in a DevOps shop who are expert developers but also security pros, QA pros, DBAs, and network admins, all rolled into one person. At the end of the day, things are going to fall through the cracks, right? One person can't be an expert in all of those areas. What I'm seeing is some pretty interesting experimentation within organizations in terms of ways to address all of these concerns: make sure that you have security policies in place.

Make sure that you have QA experts who can ensure software is being properly tested before it's pushed, and so on, without unreasonably asking one person to do it all. I'm going to talk through some of the models I'm seeing, specifically with an eye towards how this is happening with the testing function. I'll describe what the traditional testing organization looks like, then talk about three new models that I'm seeing companies play around with. I'm thinking of them as embedded, hybrid, and DevTestOps. I'm going to talk through each of these briefly.

In a traditional model, you have a development shop and then a totally separate QA team. What happens is, developers build their software and then hand it off to the QA team. The QA team runs a bunch of tests, surely finds some problems, and ships the code back to the developers, who work on fixing the problems and then ship the code back to QA again. This process goes on and on. The issue is that testing becomes the gate, and it often slows down the process of shipping code. And we've established that speed is pretty key these days.

The worst-case scenario is that this becomes so frustrating to the development shop that they decide to skip testing altogether, which for obvious reasons is not a great outcome. One organizational model that I'm seeing, I think of as embedded. In this model, a test or QA expert lives within the application team. That expert works very closely with the developers and operations staff, the DevOps team, to design appropriate tests for the applications being developed, to ensure those tests are run throughout the development process so as not to slow down production code, and generally to design, execute, and ensure that testing happens.

The result I've seen in this model is that you ship a really high-quality app, so this is a good model. One thing to think about, though it isn't really a downside, is that this is a model for very large organizations that have lots of application development teams and can afford to hire QA people to live in each of those teams. There's an organizational-size factor that enables this type of model.

Another model I've seen folks adopt, I think of as the hybrid model. There's still a QA team, a separate team, but the individual experts on that team are assigned to different application development teams, the DevOps teams, where they act as consultants. If a new team is stood up and wants to build a new app, the test expert comes in and helps that team understand what type of testing is appropriate for that application, and how to integrate the testing within their CI/CD tools and their cycle. The QA team chooses the testing tools, because they are the experts, and then they can mentor the team members on how to use them and how to best incorporate testing into their release cycle.

The result of this model, from what I've seen, is also a really high-quality app with thorough testing. A minor downside is that there's a bit of a management headache in assigning and reassigning QA team members to different development teams, but that's pretty minor; I don't think it's a big thing. I think this is a model that can work really well for companies of pretty much all sizes. Then finally, there's the idea of DevTestOps, where one person on the DevOps team is designated as the testing pro.

The downside is that when I've seen this happen, the person designated as the testing pro doesn't necessarily have a lot of experience; they're not a QA expert by trade, just a designated person. The good news is that at least testing is happening, and I'm seeing more and more testing tools designed with this role in mind. These tools help suggest the appropriate types of tests, help the user build and run those tests, and help them figure out which tools are best to integrate with.

This is becoming an easier proposition, even if there's the downside that you're potentially not going to get the most sophisticated testing, because this person doesn't necessarily have a QA background. Those are some models I'm seeing folks play with, and do so successfully. If you're an organization that's thinking about how to update your testing processes, ensure you're testing things properly, and push quality code, and you're thinking about ways to transform your organization, you have some things to consider.

Think about the tools you're choosing. I've seen that it's a good idea to choose tools designed both for a QA pro and for more of a generalist in the DevOps shop. One reason is that if those two roles can collaborate closely, they will be able to move faster. It makes sense for them to share tools in a way that lets the QA pro design a lot of the tests while the DevOps folks look at those tests and initiate them; both roles are able to use the tool and really work together.

I would also say, think a lot about how to integrate your testing tools with the other tools you're using. It makes a lot of sense to integrate testing tools with automation tools that automatically kick off tests as code is pushed. It makes sense to integrate with monitoring tools, so that when tests run, the monitoring tool can help identify the root of any problem that surfaces. It makes sense to integrate with a ticketing tool that can create a ticket when a test surfaces an issue, so that the folks responding can prioritize fixing the problems the testing tool found. So: integrate, integrate, integrate.
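The testing-to-ticketing integration suggested here can be sketched in a few lines. The `create_ticket` function below is hypothetical; a real integration would call a ticketing system's REST API (Jira, ServiceNow, and the like), and `run_suite` stands in for an actual test runner:

```python
def run_suite():
    """Stand-in for a test run; returns (test_name, passed) pairs."""
    return [("test_login", True), ("test_checkout", False)]


tickets = []


def create_ticket(summary, priority="high"):
    """Hypothetical stand-in for a ticketing-system API call."""
    tickets.append({"summary": summary, "priority": priority})


# Integration point: every failing test opens a ticket that the
# responding team can triage and prioritize.
for name, passed in run_suite():
    if not passed:
        create_ticket(f"Test failure: {name}")

print(len(tickets))  # 1
```

The value of the integration is that failures enter the same queue the team already works from, instead of living only in a test report nobody reads.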

At the same time, know that adopting all these tools won't get you all the way there. There are a lot of other factors at play; just buying a bunch of tools isn't going to turn you into a DevOps shop that does testing appropriately. Especially when you're thinking about different ways to organize around testing, think about your corporate culture. Some cultures are going to be more conducive to, say, a hybrid model; others might be more conducive to a DevTestOps role. Think about your corporate culture and what makes sense for you.

Finally, account for headcount, because, as I mentioned earlier, some of these models will work better in larger enterprises and others may work well in a more startup type of environment. And I wanted to close with this slide, since I was talking about integrate, integrate, integrate. This is a graphic that a company, XebiaLabs I believe, put together; they call it the Periodic Table of DevOps Tools.

It's really just a snapshot of all the different kinds of tools DevOps shops are employing these days to help them move faster; that orange grouping in the center is the testing tools. My point here is that if a team is using all these tools and they each work independently, that actually begins to slow things down. So as you're choosing all these different tools, as you're trying to transform your IT organization, move faster, and build things like testing and security into your processes, think a lot about how those tools integrate, because that is going to make your life a lot easier. With that, I'm going to hand over to Dharminder. Thank you very much.

Dharminder Dewan: Thank you, Nancy. Thank you so much. I think that was great from a research perspective; I learned a lot of things as well. Good morning, everybody; good afternoon to our attendees on the east coast; and to anybody joining from remote sites, good evening. What I'm going to talk about today, in general, is how testing, as we saw from the research perspective, is shifting from the traditional model to the new model, how it's moving from past to present. What I really see is the same in any organization we go to, any company I have worked with.

There is a very simple goal from a product perspective: they want to deploy a quality product, deploy faster, and deploy at less cost. Lately, we talk a lot about process efficiency, because that is how you have to reduce cost. Gone are the days when cost reduction basically meant "let's get people out," when having fewer people was the effective way to reduce cost. It's now about becoming more and more efficient, getting more done with less. I think that is really the key. As you see here, there is a clear demarcation in how testing has shifted from past to present.

What Nancy was talking about earlier was that in the traditional model there was a handover. It was like the compartments of a train: one compartment finishes its work and then hands over, through a coupling, to the next compartment. Now, what we are seeing is overlap. Testing is no longer just the job of a QA engineer; it is being embedded into development, as well as upstream into the traditional operations teams. We see a lot more focus on unit testing than there was before; in the past, not a lot of attention was given to unit testing. In general, what we saw in the past was duplication.

There was no optimization, and even the resources were sometimes not used efficiently. What we see now is integrated testing, optimized for setup, cleanup, and sequencing. Over the past two decades plus, I personally have worked on a lot of models, starting from waterfall and moving towards Agile. We have seen everything, and as a services company, we see different challenges across the different customers we work with.

What we see now is digital. Digital, digital; digital is the word right now, and it is basically transforming the SDLC. What does digital really mean? As we saw in the numbers just now, only around 10% follow the testing paradigm where specialists do the testing; in the near future, that percentage will reduce further, and testing will spread across, as Nancy mentioned, the generalists.

It will be part of the development domain and part of the ops domain. When we say digital, we are talking about test automation, nightly builds, building and testing on every check-in, faster bug discovery, faster bug fixing, and optimization of setup, cleanup, and sequencing. Again, I just want to emphasize this, because the idea is that more automation is the only way we can be more efficient. One of the requirements from any organization is to make sure we are deploying faster and quicker, and that time to market is much, much shorter. What that really means, from our perspective, is the shift left: we are moving from a QA environment to a QE environment, from quality assurance to quality engineering. If you compare the shift-left quality model to a typical quality model, as you see here, the peaking has changed.

The peak means the attention to quality. More attention to quality is being given in the beginning part of the SDLC rather than later on. I worked in traditional organizations where we always used to struggle with unit testing, with finding the majority of bugs in the earlier part of the cycle. I would say 75% to 80% of the problems were found during the testing cycle, well after development was done and the code freeze was in place. That's when, historically, the bugs used to be discovered.

That's when the resolution used to happen, and it was not very cost-effective. Now, as we saw with the hybrid and DevOps models, all of this testing is done as part of the development cycle itself. In the Agile model, it's done within the sprint. What we are seeing as a company is that whatever features are developed in a sprint are not considered done until the quality gate is passed at the sprint level. The expectation, from both a process and a product perspective, is that what comes out of an Agile environment at every sprint should be a deployable product.
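The in-sprint quality gate can be pictured with a tiny sketch: the unit test is checked in together with the feature it covers, so the gate can run on every build. `discount()` is a hypothetical feature, not anything from the webinar:

```python
# Shift-left in miniature: the test ships in the same sprint as the
# feature, instead of weeks later in a separate testing phase.

def discount(price, percent):
    """Apply a percentage discount; the feature under development."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount():
    # Checked in together with discount(), so it runs at the sprint gate.
    assert discount(100.0, 10) == 90.0
    assert discount(59.99, 0) == 59.99
    try:
        discount(10.0, 150)
    except ValueError:
        pass  # invalid input is rejected, as the feature spec requires
    else:
        raise AssertionError("expected ValueError for invalid percent")
```

The point is not the arithmetic but the timing: the bug in `discount()` would be caught at check-in, not after a code freeze.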

This is something interesting that we have seen, and it leads right into the cost of fixing bugs. The goal of the organization is time to market; the goal is becoming more efficient, so the cost of fixing bugs is very critical. Anybody in the QA world will have seen this model, this curve, because it's been there for ages. We have been saying for a long time that the sooner we find the bugs, the easier and more cost-efficient they are to fix.

In the Agile QA model — and I think everybody has experienced this — we start seeing the cost of fixing bugs go down. The time to fix bugs in the SDLC is faster; the duration to fix a bug is reduced, because we test after every check-in. When you see this graph, it's sometimes difficult to understand why there is a shift even though the phases overlap and the numbers are the same. The way to read this graph is that the duration from requirements all the way down to production has been reduced in an Agile QA model compared to a traditional QA model.

That's why fixing is faster: there is a lot of overlap happening within requirements, design, unit testing, functional testing, UAT, and production. The whole thing is much, much faster. What that gives any customer is faster deployment, cost efficiency, and process efficiency. As a summary of our company's observations: the average release frequency for the majority of companies has gone up by 150%. I should mention that we are working with Fortune 1000 companies.

Those are the companies where we are seeing the shift, the enhancements, the advancements. Pushing changes to production is two times faster. Of course, they're using CI/CD, continuous integration and continuous deployment. CI stands for continuous integration, but for me, CI has always been continuous improvement. That's how I feel. The shift from the traditional QA model to this QE model happens because of every retrospective we do. In the retro, we figure out, "Okay, what worked?"

"How did we become more efficient?" That continuous looking back is continuous process improvement, and that's what has developed here. Will it stop here? Absolutely not. I think there's still a long way to go. As we talk about this, I think back 15, 18 years. Even in those traditional models, the drive to become more efficient was there. Those of you who have been in the industry for a few decades will know that at that time it was a struggle. Moving testing into development was a huge discussion point. There was pushback from every organization; they wanted to focus on what they did best, which was being specialists.

Now we talk about generalists, and that's the future. We expect that a developer should be able to do testing, and a tester should be able to build the product and run automated tests. Are we talking about manual testing or automation? A merge is happening right now. The more of a generalist you are, the faster and more intelligent your decisions can be, even from a predictive QA perspective. We are seeing test automation increase by 200%.

Did we only see that just now? I think automation has historically proven to be very, very effective, because there is the integration aspect, there's the new-feature aspect; we are talking about soak testing, stress testing, smoke testing. There's so much we need to do, and every time there's a time crunch, every time there are drop-dead dates. Automation is the key that lets us do testing, do builds, do nightly builds. That's where the future was seen 15 years back.
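As a small illustration of the kinds of automated checks mentioned here: a smoke suite is just a handful of fast, critical-path checks run before the slower suites. Everything below is a hypothetical stand-in — the functions mimic real health checks and login flows, they do not call a real system:

```python
# Minimal smoke-suite sketch: cheap checks that gate the nightly build.

def service_is_up():
    # Stand-in for a real health check (e.g. an HTTP GET to /health).
    return True

def login_works(user, password):
    # Stand-in for a real login flow against a test environment.
    return bool(user) and bool(password)

def run_smoke_suite():
    """Run the fast critical-path checks and report pass/fail per check."""
    results = {
        "service_up": service_is_up(),
        "login": login_works("demo-user", "demo-pass"),
    }
    return all(results.values()), results
```

If the smoke suite goes red, the nightly build stops there and the expensive stress and regression runs never start, which is where the time savings come from.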

We continue to progress, and going forward there will be many more enhancements in this same domain, building toward something more efficient and an even faster time to market. That brings us to poll question number two; you have already seen the first one. The question is: how would you rank your organization's maturity in testing and QA on a scale of one through five, one being not mature and five being very mature? I'll give you a couple of seconds to answer. Working with different clients — the Fortune 1000 companies, as I said — what have we seen?

What we have seen in general, going into an organization, is that organizations face a lot of challenges. For a very large national bank that we work with, we saw they were struggling with mobile device proliferation — the rapid growth in mobile device usage — along with the need to deploy their products on iPads, iPhones, and other smart devices, and the rapid deployment of new apps and architecture. In this ever-changing world, customers keep requiring new features and enhancements.

They're not willing to wait long for bug fixes, because some of them are critical and impact usability, and that definitely shows at the consumer level. It was very critical for the bank to get this done. Of course, there are complex functionalities — a lot of use cases with complex test scenarios that need to be validated — to make sure that when the product goes out the door, customers are not finding those issues and their experience is excellent, because every customer rates a bank, or any product, based on their user experience.

And of course, optimizing the device-testing cost, because efficiency is the key across any organization. Going into this, what we did was look at the regression cycles. Again, we are talking about digital here; a lot of optimization went into that organization. After doing our analysis, we were able to reduce the regression cycle from four weeks to one week. How were we able to do that? There were a lot of sequential aspects of the SDLC that we were able to run in parallel.

Then we were able to shift left using automation. The requirement was to support multiple devices, and that was done for iOS, Android, Amazon, and the Windows app stores. A risk-based device selection methodology was used. We set up a complete test environment for desktop, mobile, native, hybrid, and web apps, with coverage for both functional and nonfunctional device testing. The other aspect I would like to highlight is simulators. As we know, any company that is into devices, any company that uses hardware, needs to use simulators.
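The risk-based device selection just mentioned can be pictured in a few lines. This is purely illustrative — the device names, market shares, defect rates, and weighting below are invented, not the methodology actually used at the bank:

```python
# Risk-based device selection sketch: rank devices by a weighted score of
# usage share and historical defect rate, then test only the top N.

DEVICES = [
    # (name, market_share, past_defect_rate) -- all numbers made up
    ("iPhone-8",  0.25, 0.10),
    ("Pixel-3",   0.15, 0.30),
    ("Galaxy-S9", 0.20, 0.15),
    ("Fire-HD",   0.05, 0.35),
]

def risk_score(share, defect_rate, w_share=0.6, w_defects=0.4):
    # Weight how many users a device has against how often it breaks;
    # the weights are illustrative, not tuned.
    return w_share * share + w_defects * defect_rate

def select_devices(devices, top_n=2):
    """Return the top_n device names by descending risk score."""
    ranked = sorted(devices, key=lambda d: risk_score(d[1], d[2]), reverse=True)
    return [name for name, *_ in ranked[:top_n]]
```

The payoff is that the device matrix stays small enough to run every cycle while still covering the devices most likely to surface real-world failures.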

I used to work with a telecom company, and their biggest challenge was getting those devices; device testing cost them a few hundred thousand dollars. The biggest benefit of simulators is predictability, not just cost. With simulators, every run starts at the same place and ends at the same place. The chance of your test execution going faster, without false failures, is much, much higher compared to the traditional testing we sometimes end up doing on actual devices.
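The point about every run starting from the same place can be sketched very simply. The "simulator" below is just a plain dict standing in for a real emulator API, so the example is a toy, not a real device farm:

```python
# Why simulators cut false failures: each test starts from an identical,
# known state, so one test's side effects can never leak into the next.

PRISTINE_STATE = {"apps_installed": [], "locale": "en_US", "storage_used_mb": 0}

def reset_simulator():
    # Hand back a fresh copy of the pristine state (fresh list included).
    return dict(PRISTINE_STATE, apps_installed=[])

def run_test(test_fn):
    sim = reset_simulator()  # identical starting point, every single run
    return test_fn(sim)

def install_app_test(sim):
    sim["apps_installed"].append("demo-app")
    # Passes only if the device started empty -- which reset guarantees.
    return sim["apps_installed"] == ["demo-app"]
```

Run the same test twice on a physical device left in an unknown state and the second run may fail for reasons unrelated to the code; with the reset step, both runs see the same world.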

The other case study was with a large retail department store. The challenge these guys had was that they had Omnichannel strategies but no CI/CD process in place. Their test environments were very unstable due to frequent releases, because with every release the automated test cases needed realignment and changes, and with releases coming so frequently, they were not able to manage that.

Going into this customer, we looked at the tools they had and analyzed what they were already using. We started working with Jenkins. We automated the manual test cases that were there, because the idea is that automation is the key: the more we can automate, the more extensive the testing we can do. Plus, we need to make sure the QA engineers focus on the areas we figure are most critical.

There is a regression aspect that automation can cover, by making sure those test cases execute on a nightly basis. Then the CI/CD process was completely automated using Jenkins: nightly builds, and a build ready for every code check-in. That was done within four months. Where this customer had earlier been on yearly releases, release cycles went down to four months, and execution time was reduced by 75%. How did we do it? Parallel execution, using more devices, automation, and introducing in-sprint automation.
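A minimal sketch of the parallel-execution idea: shard independent suites across workers instead of running them back to back. The suite names and durations below are invented, and real runners (Jenkins parallel stages, pytest-xdist, Selenium Grid) do the same thing at scale:

```python
# Parallel suite execution: four independent suites that each take 0.2s
# run in ~0.2s total with four workers, versus ~0.8s sequentially.
import time
from concurrent.futures import ThreadPoolExecutor

SUITES = {"checkout": 0.2, "search": 0.2, "profile": 0.2, "cart": 0.2}

def run_suite(name, duration):
    time.sleep(duration)  # stand-in for actually executing the suite
    return name, "passed"

def run_serial():
    return dict(run_suite(n, d) for n, d in SUITES.items())

def run_parallel(workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_suite, n, d) for n, d in SUITES.items()]
        return dict(f.result() for f in futures)
```

The caveat baked into the sketch: the suites must be independent — shared databases or shared device state are exactly what the setup/cleanup optimization mentioned earlier exists to untangle.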

The idea really is that we don't want to be in a traditional model with sequencing, where within the sprint the customer ends up doing mostly manual testing. The focus on automation isn't there. Why? Because they don't have a mature automation framework. This is something we felt helped them a lot; that was our observation from their perspective. I think that brings us to poll question number three: are you interested in talking to experts from Infostretch about Agile testing? You can just answer yes or no at this time. I think I missed one slide.

Okay, sorry — this was the third case study; I was wondering where it was. We worked with Scholastic, and the challenge we saw there was that they required a responsive, web-driven application design to handle all the common functions. They had operational inefficiencies, which included a lack of automation. There was no common QA strategy; QA was done in silos by each product team. The automation framework was also very scattered. There was common product overlap between features, but the automation frameworks were not closely tied together.

What that really meant was that any time there was a change, the enhancement had to be made in multiple places. Going into this organization, automation was, again, one of the solutions we provided: we automated their manual test cases and developed more than 1,000 behavior-driven test cases for them. The test scenarios have to be tied very closely to how a user would actually use the application, because that's really the key — that's what drives the customer experience.
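A behavior-driven test case, in miniature, reads in the user's Given/When/Then vocabulary. The shopping-cart domain below is purely illustrative — it is not Scholastic's actual suite, and real BDD tooling (Cucumber, pytest-bdd) separates these steps into feature files:

```python
# BDD style in plain Python: the structure mirrors how a user would
# describe the behavior, not how the code is implemented.

def test_adding_a_book_updates_the_cart_total():
    # Given an empty cart and a book priced at $12
    cart = {"items": [], "total": 0.0}
    book = {"name": "Example Book", "price": 12.0}

    # When the user adds the book to the cart
    cart["items"].append(book)
    cart["total"] += book["price"]

    # Then the cart holds one item and the total is $12
    assert len(cart["items"]) == 1
    assert cart["total"] == 12.0
    return cart
```

Because the test is phrased as user behavior, a product owner can review it for correctness without reading implementation code, which is what makes the 1,000-case suite maintainable across teams.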

Then, of course, we were able to create a standard automation framework for shared services. That helped them leverage it across multiple features, with faster realignment, faster execution, and faster enhancements. We also worked with them on a test data manager for the regression suite. This reduced the time to test functionality and improved time to market for this customer.

Where does that leave us? What I've seen is that the points currently driving this accelerated SDLC, from the customer's perspective, are time to market and faster delivery and deployment — a product shipped almost every sprint in the majority of cases — plus improvement in testing efficiency. That requirement means reduction in analysis, faster realignment, faster and more automation, using the digital technology available out there, and CI/CD — continuous integration, continuous deployment — which, as I mentioned before, for me is continuous improvement.

Where we are today is the result of almost 15 to 20 years of continuous discussion. There used to be forums and meetings we would go to, to share ideas and suggestions, just to figure out how we could get to our goal faster. The customer goal, from what we have seen, is absolutely to deliver a quality product with process efficiency and a faster time to market. That's all I wanted to present to you. This is Dharminder — thank you for attending the webinar. Andy, over to you.

Andrew Seger: Excellent, thank you, Dharminder. Now it's time for the question-and-answer period. If you have a question, just post it in the questions panel and hit the submit button. Our first question is: nowadays testing is performed at every stage of development — even developers perform unit testing. Is testing slowing things down, and how does DevOps come into the picture in that domain? I believe that's for you, Dharminder?

Dharminder Dewan: All right. My answer to that question is that I personally don't think testing is slowing down. What I really feel is that the emphasis on testing is still there; everybody understands that we need to value testing, and I think that's even more true now. With DevOps, what we are doing is again shifting testing left. With DevOps we are able to do nightly builds, and on every code check-in a build is triggered. That adds emphasis on testing, and every time there's a failure, the developer can go and fix that issue right away; then with that check-in, another build happens.

Historically, what used to happen was that multiple developers would check in code and then a single build would happen. If there was a failure, an email or notification went out to everybody: "Something has failed." Every developer needed to jump in, look at the problem, and figure out whether it was their bug or not. That's the efficiency I see; it's not that the focus on testing has decreased. The time when we do the testing has shifted left, but the focus on testing, to me, is much greater now than it was before.
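The triage difference described here can be sketched in a few lines. The commit IDs and the `build_passes` callback are hypothetical stand-ins for a real CI server:

```python
# Batched check-ins vs. a build per check-in: with batching, every commit
# in the batch is a suspect when the build goes red; with per-check-in
# builds, the first failing build names the culprit directly.

def suspects_batched(commits):
    # One build at the end of the day: everyone gets the "it failed" email.
    return list(commits)

def suspects_per_checkin(commits, build_passes):
    # build_passes(prefix) stands in for running a build containing
    # everything up to and including that commit.
    for i in range(len(commits)):
        if not build_passes(commits[: i + 1]):
            return [commits[i]]  # first red build pinpoints the commit
    return []
```

The per-check-in loop is exactly the feedback loop Jenkins-style CI gives for free: each commit is built in isolation from its predecessors' green state, so a failure never needs a room full of developers to investigate.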

Andrew Seger: Excellent, thank you. I believe this question is also for you, Dharminder. When does the QA/QE team need to be involved as part of the shift-left paradigm?

Dharminder Dewan: As we talk about the shift-left paradigm, I can answer this from a historical perspective. Going back to the traditional model: working in it, what I felt even then was that for QA to be efficient, the QA/QE engineers need to be involved very early, at the time the requirements are being discussed, because the test cases need to be written very early. That helps them understand the product and how it will be built. So I feel the right time to get the QA/QE engineers involved is when the requirements are being discussed, and they should stay involved through the entire SDLC — during the design phase, and during UI reviews as the product comes together. The QA engineers should be involved at every stage, as early as possible, starting from the requirements.

Andrew Seger: Excellent, thank you. I believe this question is for Nancy. What are the most common roadblocks you see among customers trying to shift testing left?

Nancy Gohring: I would say that it's an organizational thing, a lot along the lines of what I spoke about in my presentation. It's trying to figure out who's the right person to take responsibility for testing and ensuring that it fits into your organization. Tooling is very important too, though: if you're trying to shift left and you're using legacy testing tools that require a ton of experience and a ton of training to use, it's going to be really difficult to incorporate that tooling into a modern development cycle. I'd say those are the biggest challenges. Dharminder, do you have anything to add, in terms of what you see with your customers?

Dharminder Dewan: I do, Nancy. One thing I definitely agree with you on is the organizational aspect. What I've also seen historically — because this shift has been pushed for the last two decades in different organizations — is that a lot of the time there's pushback even from the manual QE engineers. They don't want to get involved, and of course, any time there's change, people resist it. There's also the effort involved in restructuring, because somebody coming from a legacy, traditional model has to put in real effort.

They have to agree to put in the time, and to accept the cost associated with making the change toward shifting left. And of course there's learning new technology and skill enhancement, because that's what moves engineers from being specialists toward being generalists. Manual QE engineers need to be involved in automation. All of these factors play a key role in that pushback, and in whether we are successful.

Nancy Gohring: Yes, and that was a really good point about retooling: people put a ton of investment into their existing tools, right? It's hard to give that up.

Andrew Seger: Excellent. Our next question is, how is the Agile model better than the V model?

Dharminder Dewan: Sure. I think the question is probably about the Agile model versus the Waterfall model — is that what you're understanding too, Andy? Okay. Basically, the reason the Agile model is better than the Waterfall model comes down to some of its components. In the Agile model, we're talking about a sprint — say, two weeks. In a Waterfall model, there's no way that in a two-week period the development team would even get the QA engineers involved, look at the use cases from a customer scenario, or do enough testing that the product could go out after two weeks as a technically shippable, deployable product. That's one of the shifts I'm seeing in the industry right now.

The expectation now is that after every two weeks there should be a deployable, shippable product, and that can't really be achieved in a Waterfall model. Plus, I also feel that in the Agile model there's an opportunity for engineers to enhance their skills. In the traditional models I worked in at different companies, there used to be manual test engineers, automation engineers, and integration test engineers, and I struggled for years to tell people that the best way is really to combine manual and automation engineering and enhance the skills — that's number one. Number two is definitely getting testing involved as early in the SDLC as possible. In a traditional, let's say three-tier, architecture, what I really saw was that when the servers or the frameworks were built, not much testing was done until the actual app was built. Once the app was built, that's when the testing happened. What we did was push back and say, "Why do we not test around the servers as well as the framework?"

Because even though we can't do full functional testing at that point, at least we can do some stability testing and some stress testing, and we used to see a lot of value in doing that. That's where I feel that even in the traditional models there were bits and pieces where we were Agile; but now, with full Agile, it's smooth, effective, more efficient, and a much faster, shorter cycle.

Andrew Seger: Excellent. That’s all the time that we have for questions today. On behalf of today’s presenters, I’d like to thank everybody for attending today’s webinar and we hope to see you on another webinar soon. Thank you very much and have a nice day.
