Harness the Power of AI to Optimize Testing Processes

A joint presentation with Forrester & Tricentis

Hello and welcome to today’s web seminar, Harness the Power of AI to Optimize Testing Processes, sponsored by Apexon and featuring speakers Diego Lo Giudice from Forrester Research, Sanil Pillai from Apexon, and Wayne Ariola from Tricentis. I’m your moderator today, Beth Dominic. Thanks for joining us.

Before I hand you over to our speakers for their discussion panel, let me explain the console you see in front of you. Every panel can be moved around and resized. At the bottom of your console you’ll also see a widget toolbar; by clicking the handouts widget in the middle, you can download some resources from Apexon. If you have any technical problems, please let us know by typing in the questions-and-answers window on the left. This is also how you can ask questions; there will be a Q&A at the end, but feel free to ask questions throughout the presentation.

Finally, to optimize your bandwidth for the best viewing experience, please close any unnecessary applications. This event is being recorded and will be made available later to watch on demand. Once the recording is ready, I’ll send you an e-mail with instructions about how you can watch it. With that, I’d like to hand it over to Avery Lyford, Chief Customer Officer at Apexon, who will be moderating today’s discussion. Thanks, Avery.

Avery Lyford: Well, thank you. Welcome, everyone, to a very exciting program on AI and how it can impact testing in the enterprise. Let me start out with Diego. You’ve spent a lot of time studying all of this. How do you even define AI? People have talked about it for years, people have been disappointed for years, and now people are really excited and seeing an impact. What changed, and how do you differentiate machine learning from deep learning and some other categories people may have heard of?

Diego Lo Giudice: Thanks. First of all, good afternoon and good morning to everyone around the world, and thanks to Apexon for having me on this interesting panel. It’s a great question; we could sit here for hours defining what AI is and what it’s not. The way we define it at Forrester is that on one side you can look at the pure view of AI. Some people call it general AI, or pure AI, for which the benchmark is that AI is the technology that really strives to mimic human intelligence. That’s not really what we’re going to be talking about today, because we’re pretty far from being able to mimic human intelligence, especially when we look at how fast we can learn and all the inferences we can make.

We’ll talk more about what we call pragmatic AI. It’s narrower in scope, but often even narrow AI can exceed human intelligence. Think about how fast it can get results out of a huge amount of information with intrinsic correlations that we don’t see; of course, we’re talking about data. AI is basically a new, different way of building systems that are capable of learning and capable of dealing with situations that are not necessarily what they were built for. They deal with the unexpected, which is very different from the software programming most of us test today, where we tell the algorithm exactly what to do. We know the recipe and everything.

So, AI is about systems that are able to learn by themselves, a little bit like a child of four or five, who spends the first years of life watching and hearing things and starts distinguishing things and matching things up. That’s how AI is now taking a more and more important role, and the most popular AI technology is machine learning. We mentioned machine learning and deep learning.

Well, machine learning is, I guess, the most popular AI technology. I know there’s a lot of debate about whether machine learning is AI or not, but usually machine learning is classified as a subcategory of AI, one of the important AI technologies. Machine learning has been around for quite some time; it’s nothing new, and it’s used by data scientists to train probabilistic and predictive models. It’s getting better and better. Where it’s getting even better is deep learning, which is a subfield of machine learning with the same probabilistic approach and predictive models.

Deep learning is a special case of machine learning, and it’s based on the concept of neural networks, which superficially resemble the architecture of the brain, although that actually has little to do with how the brain really works.

In deep learning, contrary to traditional machine learning, these models learn hierarchical, abstract concepts layer by layer, and they have more layers than traditional machine learning algorithms. For example, an image recognition classification model will start by learning the simple elements of the form of a face: an input layer might just focus on understanding edges around the nose, eyes, eyebrows, and mouth. Another layer might put these things into relation and look at colors. As you go deeper into the model, the level of abstraction keeps increasing, so that layers higher in the hierarchy can abstract more and more. That’s a powerful way of building representations on top of representations, and that’s what deep learning does.
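As a concrete illustration of that layer-by-layer abstraction, here is a minimal sketch, assuming TensorFlow/Keras and an invented face/not-face classification task (not something presented in the webinar):

```python
# Minimal sketch of a layered image classifier: early layers learn edges,
# deeper layers combine them into parts and whole-object concepts.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    # Early layers learn low-level features such as edges.
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    # Deeper layers combine edges into parts (eyes, nose, mouth).
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    # The final layers abstract parts into a whole-image decision.
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # face / not-face
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```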

Another example, if you will: letters form words, words form sentences, and so on. This representational power is what makes deep learning special. Deep learning is also more automated. In traditional machine learning, data scientists have to engineer the features of the model; in deep learning that is largely automated. It’s a more automated approach, which is also why people describe deep learning as a bit more of an opaque, black-box approach.

So, basically, what these machine learning algorithms do is, as they traverse the data, they learn about that data, and they’re capable of recognizing new situations, new contexts, new data as it becomes available. Machine learning is the parent field of deep learning. In deep learning you’ve got neural networks, but in machine learning more broadly you’ve also got decision trees, random forests, and support vector machines, which are traditional machine learning, the statistical algorithms I was talking about.

The reason these are important is that, for example, we’re seeing insurance companies use deep learning for automatically assessing damages and costs, because deep learning is much more precise than traditional machine learning in recognizing images. Machine learning is used for personalizing customer experiences by learning individual characteristics: our behavior, what we want, what we like. They are also using it to improve customer service, providing proactive service and self-service to customers.

The way we should be looking at AI as humans is that AI can augment human beings. There is a partnership between the two, not a replacement or a conflict. We should look at AI as augmenting individuals to do things faster, perhaps in a more secure way, in a safer way, because of the precision the technology can bring us.

Avery: Diego, thank you. What strikes me is that, for chess, it’s not a computer or an AI that’s best, or a person; it’s the combination of an AI working with a person that is the best chess player right now. So, Sanil, one of the interesting things is the application of all this to software and application delivery. What are your thoughts about how AI could make a difference in that field?

Sanil Pillai: Sure, Avery. If you look at AI, it’s really applicable to areas where there is a lot of data and there are obvious productivity gains to be had; those make good applications for AI. So, if you think about software delivery, there are different aspects of it, and AI has already made inroads in several of them. Look at some of the upstream processes, for example developer productivity: you have code reviews happening today, and there are already tools that assist with reviewing, which really help automate code review and make sure errors are caught before they get further into the development cycle. That’s where AI is coming into play.

Another upstream area we’ve seen is pair programming. There are companies, like Toyota, et cetera, whose tools help you write code based on millions of other pieces of code that have already been written. They really crowdsource a lot of the code and automatically create code for you. That can really enhance your productivity as a developer.

I think for downstream, AI obviously has a lot of applications around the test automation space. The reason is that there are a lot of decision points there, in terms of infrastructure decisions and resource allocation decisions, and there’s a lot of data to make those decisions on. AI fits squarely there to really help optimize those processes. Where you see AI helping with test automation, it could be anything from test case optimization to infrastructure optimization and everything in between.

If you look at the entire DevOps cycle itself, you have a lot of different data being produced through logs. Different tools produce different pieces of data. AI definitely has a big role to play here, because it helps correlate log data coming from different sources and glean information and patterns that would not be obvious to typical analytics tools.

How do you look at data coming from your infrastructure, compare it with the logs coming from your test runs, figure out what kind of infrastructure optimization you need to do, and apply a machine learning model to do that? Those are a few areas where I see AI really helping the software and application delivery process. As Diego mentioned earlier, it is not going to replace the programmers and the experts we need to make it happen. It will definitely augment and aid these experts to make sure everything runs in a much more streamlined fashion.
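As an illustration of that kind of log correlation, here is a minimal sketch, assuming pandas and scikit-learn and using invented per-build metrics rather than the panelists’ actual data:

```python
# Correlate two log sources on a shared key and flag unusual builds with an
# unsupervised model; column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

infra = pd.DataFrame({
    "build_id": [1, 2, 3, 4, 5],
    "cpu_pct":  [41, 39, 44, 95, 40],        # infrastructure metrics per build
    "mem_mb":   [512, 530, 498, 2048, 515],
})
tests = pd.DataFrame({
    "build_id":       [1, 2, 3, 4, 5],
    "test_runtime_s": [310, 305, 318, 1200, 300],  # test-run metrics per build
    "failures":       [2, 1, 3, 27, 2],
})

# Join the two sources, then look for anomalous builds across both feature sets.
merged = infra.merge(tests, on="build_id")
features = merged.drop(columns=["build_id"])
merged["anomaly"] = IsolationForest(contamination=0.2, random_state=0).fit_predict(features)
print(merged[merged["anomaly"] == -1])  # builds worth investigating (likely build 4 here)
```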

Avery: Wayne, you’ve been involved with the world of testing for a long time and have a lot of experience. You’ve seen different technologies come and go. What has started to make AI more relevant now than it has been in the past?

Wayne Ariola: I’m trying to hide the fact that I’m so old. Thanks, Avery.

Avery: [laughs] No, experience. Probably, you’re very young.

Wayne: Thank you. First of all, Apexon, thanks for putting this on. It’s just a very, very hot topic right now, and getting down to the basics of the conversation is critical at this point in time. To your question, though, where we’re at today is that the hype associated with any technology is awesome, because it lets us promote a series of really creative thoughts that allow us to get better. That’s what I’m most excited about.

Whether you call it AI, machine learning, deep learning, neural networks, or any kind of applied technology along the way, what we’re doing as an organization, or as an industry, is getting better. That’s critical today. If you really look at where software testing is in its maturity cycle, there’s a long way to go. A lot of our activities are still primarily manual, and the current testing we have today mostly produces a pass/fail, binary decision. I hope you guys don’t hear the same echo that I hear.

What’s going on today gives us the ability to start improving upon that. What I think is going to happen over the next 18 months is that we’re going to see a lot of incremental improvements in how we handle the data exhaust associated with the testing activity. Whether that’s machine learning, or AI, or pattern matching, what’s ultimately going to happen is that we’re going to improve the efficiency and the feedback of the activity of testing, number one, and try to incrementally improve, and I’m going to steal this from you, Diego, our quality at speed.

Avery: That’s true. It’s an interesting challenge. Actually, let me throw a combo question to all of you: a lot of companies are trying to do digital transformations, so what are the enterprise software testing challenges you’re seeing as people work to make that shift? Any of you want to take that? Maybe start with Diego, since I know you’re coming to us from Italy.

Diego: Digital is transforming the way we test. It’s funny how strong the link between digital and testing is, but it goes through the following reasoning. Basically, what is it that makes a business digital? How can companies become more digital? Well, it’s basically by building better software. It’s all about software. You have to be successful at building software, which will implement the services, products, whatever it is you’re doing as a company. This is true across all industries, from finance to insurance to oil and gas and engineering. Software is eating the world, as someone else said; it’s definitely not my invention.

Therefore, that means everybody has to become excellent at developing and testing. It becomes a primary business process of any organization to be able to build great software, test it, and deliver it at high quality, faster. Now, whether they do it in-house, outsource it, or co-source it, with AI or without AI, with Agile and DevOps, all of these are enablers to make that happen. That’s the connection between digital and testing.

The requirements that all these enablers put on testing, building software faster with Agile, using DevOps, automating more, create pressure that turns into big challenges for testing. As Wayne said, the first issue is that we still do a lot of manual testing. How do we get rid of that? How do we automate more? We’ve been working on automation for years, and we’re getting better at it, but we’re still nowhere close to where we need to be. AI can probably help make that better. We can talk about some case studies later, but we’ve actually already mentioned a few.

The challenge for testers is being part of the fast-moving process that organizations are trying to adopt and not becoming the bottleneck, as we used to be; not letting testing be the afterthought step during development. But it also means doing things smarter; it’s not just about speed. We could automate everything, assuming we become really good at automation, but that has a cost. The other key challenge is how organizations can automate smarter and test smarter, so that they perhaps automate less but cover business risk better.

That requires a bit more intelligent testing. It’s what we call right-sizing the test strategy: how can we use all the data we have to understand how to best organize our strategy around the type of business applications we need to build? Apps are disappearing. They’re becoming digital experiences, which are largely smaller, fine-grained pieces of code or microservices that need to be orchestrated, and that is a completely different way of building applications. That’s a challenge for testing again. How do we test in that type of environment?

We used to test these large, siloed apps, but what about these small, fine-grained, distributed architectures? The lack of skills is another big issue. Testing is also becoming a more technical job: dev testers, these full-stack developers we see. How do we automate and support testers? Also, there are much harder, more sophisticated applications that we’re starting to see, with IoT experiences. And with AI itself, how are we going to test AI systems? Right now, we’re talking about using AI to improve the way we test existing software.

That was one of the key research pieces I did on AI: how do we use AI to improve what we do today? The other question is that clients are actually starting to introduce AI into their organizations; 55% of organizations are either improving what they did last year with AI or expanding it. So AI itself, AI and machine learning, and I’m not going to distinguish between the two given the definitions we gave at the beginning, is being adopted. But what are testers going to do? Are the existing testers going to take on the job of testing AI systems, or is someone else?

So, that’s the whole range of challenges I see.

Wayne: Could I jump in on something there, Avery?

Avery: Sure.

Wayne: Diego, you mentioned one thing, and I think it’s the first critical component of this, which is the idea of the process cadence mismatch that’s currently happening between development and testing.

Today that mismatch is really what’s killing things, because if you look at testing and what’s really being executed today, we still see a series of fairly siloed activities, broken up by the capability of the primary contributor. The developer does what they do best: looking at a particular element of the code, hopefully using static analysis, though we can argue about that later, and constructing unit tests, maybe using a broader, more collaborative method like TDD or BDD, one of the DDs out there, to help close the gap. But today there’s still a series of siloed activities. As you start to branch out and look at the total transaction, or at protecting the end-user experience, you’re obviously moving beyond what could be the primary silo of the developer.

Now, back to your point, Diego. If you look at more distributed architectures, like cloud-native apps and microservices, perhaps the quality activity can be isolated to that asynchronous component of the transaction, yet what we find today is that applications are still fairly complex. This idea of keeping the testing sequence in line with the expected development delivery is really the first step I think we should be focusing on when it comes to AI. Meaning: using AI, and everything that comes under its umbrella in terms of optimization, to close the sequencing gap between when code is altered and the point when we feel a release candidate has an acceptable level of risk.

That’s the biggest gap I see today, and probably the biggest opportunity. The reason I wanted to jump in is that, Diego, you mentioned this process cadence mismatch, and I think it’s really misunderstood when we look at a broad release structure. It’s one of those elements we assume is going to catch up, but there’s really not a ton of activity happening to close that gap today.

Avery: Wayne, thanks. Sanil, what are your thoughts about this?

Sanil: Going back to the point about digital and the challenges, I think there are a couple of reasons why AI seems to be a very good fit. One is that digital really equates to speed, but it is speed with quality. Historically, we tried to get quality by putting in a lot more resources, a lot more infrastructure, maybe sacrificing speed, and calling it done. Those have become table stakes for digital, so you cannot really do that anymore. How do you get high quality with high speed? That again comes back to a lot of automation, as Diego mentioned. You need a lot of automation to sustain that speed.

Now, in order for all of this to work like a well-oiled machine, you need a lot of data analysis happening on the back end and optimizations being done for you without you having to manually intervene, and that is why AI becomes a very big play for digital; that challenge of digital lends itself well to an AI solution. The second aspect is that digital now also equates to a lot of different kinds of interfaces. You have voice, which is pretty big in this space, you have touch interfaces, and you may have interfaces that react to environmental changes, et cetera.

How do you test these? Testing even a simple NLP, a natural language processing engine, is not trivial. You need to use some kind of intelligent engine to do that, and that is why these digital challenges, again, lend themselves well to AI-based testing.
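To make that concrete, here is a minimal sketch of how an NLP intent engine might be tested with an accuracy threshold over paraphrases instead of a single exact-match assertion; classify_intent is a hypothetical stand-in for whatever engine is under test, not something shown by the panel:

```python
# Score the engine across paraphrased utterances and assert a minimum accuracy,
# rather than a brittle pass/fail check on one fixed input.
def classify_intent(utterance: str) -> str:
    # Stand-in for the real NLP engine under test.
    return "check_balance" if "balance" in utterance.lower() else "unknown"

PARAPHRASES = {
    "check_balance": [
        "What's my account balance?",
        "Show me my balance",
        "How much money do I have left?",   # no keyword: the hard case
    ],
}

def test_intent_accuracy(threshold: float) -> None:
    total = correct = 0
    for expected, utterances in PARAPHRASES.items():
        for u in utterances:
            total += 1
            correct += classify_intent(u) == expected
    accuracy = correct / total
    assert accuracy >= threshold, f"intent accuracy {accuracy:.0%} below {threshold:.0%}"

test_intent_accuracy(threshold=0.6)
```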

Avery: This is good. Diego, you’ve been publishing a lot in this whole field. I think it would be really interesting to hear some use cases, some examples where people have applied AI, because there’s always this question of: okay, it’s good in theory, but practically speaking, how can it help me with my problems today? I have a budget today, I have a set of deliveries; how are you going to help me make that easier and cheaper?

Diego: Well, unfortunately, I must say, I know this is all about AI and testing, and although I do consider development and testing an important business case in itself, it’s not where AI is mostly being adopted. What we mostly see from clients is that they’re trying to use AI to improve their own operations. Actually, that’s the top use case: their operations, meaning their business operations. Companies are improving business processes and customer engagement; they’re augmenting their call center folks who, instead of dealing with five or ten clients a day, can support 20, 30, or 40 with an AI system supporting them and giving them more information about their clients, becoming higher-performing rather than just talking about cost-cutting.

Most of that 55% of clients actively adopting AI are building chatbots and introducing voice; it’s about business applications. It’s early days even for using AI in the software development lifecycle itself. One of the research pieces here was a heat map we did about a year ago, asking how AI would influence the software development lifecycle. It’s true that the red-hot area of the heat map was testing. Some of the examples you’ve already heard: test case generation, suggesting test strategy, identifying bugs and focusing testing on areas with the highest potential for defects, recognizing images, user interfaces, and voice, predicting outcomes, increasing test coverage, and so forth. A year ago, people said testing is the area where they plan to use AI.

Most of that population is vendors and system integrators, people who have more technical background in these areas than end-user clients themselves. Across the lifecycle, the heat map said the next important area would be higher up in the lifecycle: ideation, planning, requirements, finding test cases that might be repetitive. Also at the end of the lifecycle; somebody mentioned DevOps a minute ago, but think about all the automation at the release level, including the growing automation of predictive maintenance. CIOs are asking the question: well, it’s great to know that my ATM is not working, but I would like to know that before it actually happens, just as companies do for their airplane engines.

So, AI is starting to be used in the business processes of IT more generally, and in testing. I’m getting inquiries directly from clients that are still at the stage of learning what AI is and what it can do for them. The first step for clients is to understand the technology and what they can do differently with it. In testing, it’s the same. Part of it is going to come from vendors that sell software fueled by AI. Part of it is going to come through the partners they work with, who bring experience in this space, the tools and services providers. And part of it is going to come, I would say, from the pressure on their digital strategy: how fast they need to move with quality. I’ve been writing about quality at speed for quite a number of years now, and AI can probably improve that.

Part of this is also going to come from how much enterprises are using and building AI systems themselves, because at some point we as testers need to have a seat at the table to find out how those systems are really being tested.

Avery: Sanil, one thing that is interesting is that there’s a little bit of "physician, heal thyself" here. You guys are in IT; AI is wonderful, so how can you apply it? Give us some examples. From your experience, what are some quantifiable things people have done in terms of dealing with backlogs, dealing with optimization, or other ways they’ve applied AI in practice to make a difference for IT operations?

Sanil: We’ve seen a couple of areas where AI is definitely able to provide value today. One is, of course, if you look at the entire testing cycle, the elephant in the room typically is: I have a million test cases that I need to deal with, and probably no one’s asking the question, is there a way to bring that down to half a million?

That is an area where we’ve seen AI help, in some of the services engagements we do, where we look at these test cases and figure out what we can do to optimize them and whittle them down to a much more manageable number, which means removing redundancies and duplicates, componentization, and so on. AI is very heavily used in solving this problem, by looking at test cases, figuring out the semantic relationships between them, and identifying efficiencies and duplications.
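A minimal sketch of that kind of duplicate detection, assuming scikit-learn and invented test-case descriptions (not Apexon’s actual tooling):

```python
# Flag near-duplicate test cases by textual similarity so a reviewer can
# decide which ones to merge or retire.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Verify login with valid username and password succeeds",
    "Check that login succeeds with a valid username and password",
    "Verify checkout fails when the cart is empty",
    "Verify user can reset password via email link",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
similarity = cosine_similarity(vectors)

# Pairs above a similarity threshold are candidate duplicates for review.
THRESHOLD = 0.3
for i in range(len(test_cases)):
    for j in range(i + 1, len(test_cases)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible duplicates ({similarity[i, j]:.2f}):")
            print(f"  - {test_cases[i]}")
            print(f"  - {test_cases[j]}")
```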

That has given quantifiable outcomes in terms of a reduced number of test cases to deal with, which impacts all the downstream processes. Those are clear AI benefits we can actually quantify, because the amount of resources you spend, the infrastructure you need, et cetera, are proportional to it. The other area where we have seen quantifiable outcomes is within the testing cycles themselves. The testing and QA infrastructure is a goldmine of data. It’s there; that doesn’t mean it’s easy to harness, but it is a goldmine.

What we’ve seen is basically using this goldmine of data, defects and so on, and applying machine learning models to it to predict what kinds of defects could emerge in your product down the line, and hence what kind of resource allocation you should do and what kind of test infrastructure you need. Not only that, but also doing some prescription: basically saying, hey, out of these 100,000 automated test cases that get run every night, because of what the tool predicts, you just need to run probably 25,000. That is again a great quantifiable outcome that we’ve seen, which delivers value today.
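A minimal sketch of that predict-and-select idea, assuming scikit-learn and invented historical features (not Apexon’s actual model):

```python
# Train a classifier on historical QA data to estimate which tests are likely
# to surface defects, then run only the riskiest subset tonight.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_tests = 1000
# Hypothetical per-test features mined from past runs:
#   churn in covered code, recent failure count, days since last execution.
X = np.column_stack([
    rng.poisson(5, n_tests),
    rng.binomial(30, 0.05, n_tests),
    rng.integers(0, 14, n_tests),
])
# Historical label: did this test find a defect in the last cycle? (synthetic)
y = (X[:, 0] + 3 * X[:, 1] + rng.normal(0, 2, n_tests)) > 10

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score every test and keep only the riskiest quarter for tonight's run.
risk = model.predict_proba(X)[:, 1]
selected = np.argsort(risk)[-n_tests // 4:]
print(f"Running {len(selected)} of {n_tests} tests, covering the riskiest 25%.")
```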

As I mentioned earlier, there is a goldmine of data available. There’s a process to get to it, but there is a clear path to quantifiable outcomes in incremental stages. Those are the two main areas where we see outcomes.

Avery: It’s interesting, because you really picked up on Wayne’s point about data exhaust and being able to use it in a smart way. The other thing I took away is that it’s not AI for the sake of AI, but AI for very focused results; results around, as people mentioned, quality at speed. Now, a reminder to everyone: you can post questions and we’ll start working them in. We’ve had some come in already, and if you’ve got questions, please post them.

Wayne, I want to pick up on your whole notion of quality at speed and how you can use automation for DevOps. What are your thoughts about the key obstacles, and how can you use some of that data exhaust to improve testing?

Wayne: There’s so much that comes to mind when you ask me that question. What we need to do is focus, have an outcome we need to solve for, and prioritize our approach to that problem, whatever sub-segment of the AI paradigm we’re using.

For example, let me look at maybe two or three items here. First of all, today our test suites are bloated. They need to go on a massive diet. If you look at what we have, there’s a lot of duplication and a significant number of false positives, and all of that is sucking time away from what is required of us, in terms of not only the cadence of how we execute tests, but also getting those answers to the right people at the right time.

Let me pause one second and emphasize one piece here. Not only do we need to look at these types of technologies to help us do things better, but we need to go back and ask: at what point in the process can we actually distribute information such that it is actionable? The time horizon we’ve traditionally looked at for a test has been somewhat broad: we’ll just run everything. What we need to do is deliver the right information, just in time, to the right person, so they can make an adjustment that has a real, definitive impact on the quality of that release candidate.

The first thing I would be looking at today is this idea of automated test design, where we’re using business-related rules to avoid the redundant, less cost-effective, or even manual cleanups that are wasting a lot of time. The second piece, very tightly correlated to the automated test design component, would be risk coverage.

Diego mentioned business risk coverage. I’ve run a couple of surveys on this idea of risk and business risk. Today, there’s a huge disconnect between what your CIO would consider a business risk and what your tester or developer considers a business risk. Just looking at the nature of the activities of these two people, the CIO versus the hands-on contributor, the concept of business risk is a massive disconnect today. What can we do to optimize test sets so we can maximize business risk coverage and, given time and resources, execute what is required for the time horizon of the release we’re in? We can do that with machine learning; we can do that with pattern matching today. That is another massive priority for us in terms of optimizing this whole cycle.
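One way such optimization can be approached is a greedy weighted-cover heuristic: keep picking the test that adds the most uncovered business-risk weight per minute until the time budget runs out. A minimal sketch with invented risks, weights, and durations (not Tricentis’s actual algorithm):

```python
# Select tests to maximize business-risk coverage within a time budget.
RISK_WEIGHTS = {"payment": 10, "login": 8, "search": 3, "profile": 2}

# Each test covers some business risks and takes some minutes to run.
TESTS = {
    "t1": {"covers": {"payment", "login"},   "minutes": 12},
    "t2": {"covers": {"payment"},            "minutes": 4},
    "t3": {"covers": {"search", "profile"},  "minutes": 6},
    "t4": {"covers": {"login"},              "minutes": 3},
}

def select_tests(budget_minutes: int) -> list[str]:
    chosen, covered, remaining = [], set(), budget_minutes
    while True:
        # Pick the affordable test with the best uncovered-risk-weight per minute.
        best, best_score = None, 0.0
        for name, t in TESTS.items():
            if name in chosen or t["minutes"] > remaining:
                continue
            gain = sum(RISK_WEIGHTS[r] for r in t["covers"] - covered)
            score = gain / t["minutes"]
            if score > best_score:
                best, best_score = name, score
        if best is None:          # nothing affordable adds new coverage
            return chosen
        chosen.append(best)
        covered |= TESTS[best]["covers"]
        remaining -= TESTS[best]["minutes"]

print(select_tests(budget_minutes=10))  # with these inputs: ['t4', 't2']
```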

I’m going to make one more point and then hand it back to you, Avery, which is that digital transformation and enterprise DevOps have undeniably altered the roles and responsibilities of the tester. We haven’t caught up yet; we’re still in this delta zone in which testing is catching up. And I want to add one thing: AI and machine learning are not going to be pure magic that fills that gap. They’re going to have to be highly focused on the very specific components that are painful to an organization, and whatever those are will be pretty much unique to that particular organization. So you need to prioritize your plan of attack.

Across the board, what I see when I’m out there with clients is, number one, automated test design, in which you can actually reduce the bloat of your test suite, and then, number two, optimizing what you’re actually going to execute for optimal risk coverage. Those are the two things I would focus on.

Avery: Those are really good points, and they’re things we’ve seen: you can use AI to improve your efficiency and still maintain the same level of risk coverage, and in other cases be smarter about how you refactor test cases to make them more maintainable as well. There are a number of different aspects. One question that came in was: are people currently using AI for testing? Well, yes, a couple of us on this call are, and the challenge is how to broaden that so it’s not just impacting one area. In some cases the AI may not be very visible; it will be embedded into processes rather than broken out with a big banner on it saying, "AI: you’ll see better results."

Sanil, one of the things people question, and I’ll take this question that came in, is: how do people assess whether they’re AI-ready? What does it mean to be AI-ready? Is there anything special someone needs to do, and what are the prerequisites? Is it important that you already have automation? What about your past repositories of test results? What other things do people need to be thinking about and doing?

Sanil: Are we ready for the arrival of the robot? That’s the larger question. It’s a great question. AI readiness is something we talk about a lot. One of the basic premises of AI is that you need to have good data. If you ask any data scientist out there today, they will tell you they spend more than 70% or 80% of their time just getting data into a shape they can actually use. That’s the reality, not in test automation alone but in any AI project. Data is very important, and what that means is that your systems need to be automated. If you have a lot of manual processes in place and you say, okay, I’m going to make this AI-ready next week, it’s not going to happen.

You have to take it from where you are, building the right automation processes, because, as I mentioned earlier, you need speed with quality. You need to put in the right automation; that’s how the data you really need for AI comes into play. You get your automation processes in place, and you need to have your data in a form that is at least amenable to an AI solution. What that really means is that you could have a lot of different data coming from a lot of different sources,

but AI needs to be able to correlate that data in some form. So how you set up your automation strategy, and what you put in place to make that happen, is very important.

As a best practice, we always say that even if you have the automation and the data, even if you’re really mature and ready for AI, still start with a small pilot project. There’s always a trailblazer in the organization who wants to start with AI and start playing with it. Start with smaller projects, get them AI-ready, see the benefits coming out of them, and then fan out. Don’t do a full revamp of your existing infrastructure. Those are some of the best practices I think we need to put in place. Going back to the original question, AI readiness really means: do you have the data that AI would need, and are your systems automated enough that you can really leverage it? Otherwise, the bottlenecks are going to be manual processes.

One of the things I also always like to point out is that if you try to build AI solutions on your own, most of the time you’ll hit a skills bottleneck, which means AI readiness for you could also mean having the skills to do that. Building it all yourself is not always the recommended approach; an organization can use other companies providing services or tools to really make it happen. But skills can also become an added issue if you want an ongoing solution.

Avery: One of the things I’m taking away, and I’d love to get the other speakers’ thoughts on this in a second, is that being AI-ready involves in many ways the same prerequisites as being ready for DevOps, for in-sprint automation, and for Agile. Is that true? If you’re doing the right things to get ready for DevOps and Agile, are you doing the right things to get ready for AI?

Diego: Well, I think some of them. The most important thing is change. What’s common to all of these is cultural change, and that’s the biggest challenge. For the rest, yes, you can learn the technicalities behind AI; you don’t need to be a data scientist, and that’s why we talk about AI engineers as well. Even my daughter, who is doing a master’s thesis, is now using TensorFlow. She doesn’t have an AI background; she studied a few things, and here she is building something interesting with a machine-learning algorithm.

Actually, I’ve interviewed lots of start-ups where they just grab a software developer, and in six months’ time, with good training, using the algorithms and getting his hands dirty, he’ll learn how to master some of these technologies. There are tools with APIs; Google, IBM, et cetera are all providing platforms. The technicalities are not always the biggest issue. The biggest issue is really the culture change, I think, as with Agile and DevOps, but AI is also going to be a big cultural shift, because we’re talking about having the robot in your cubicle. How do you collaborate with the robot in your cubicle? That is something people are starting to predict will happen.

That might really be a software robot, not necessarily a physical one. I just wanted to take a step back and summarize the use cases that I’ve formalized in my recent research. Again, it is a new thing, and therefore we’re only starting to see the pickup, but what is the potential? The potential is huge in my opinion; it’s a great opportunity, I totally agree. The categories in testing that I formalized are: first, test case design and generation, which we talked about. I think that’s one of the biggest, because it will make testing smarter; it will reduce the tests that need to be automated and also optimize for better coverage.

Test script and code generation is another area that AI, well, machine learning, let’s say, will help with. Unit test generation is something we’re going to start seeing; Microsoft has already released a unit test generator. Developers can write code and unit tests will be generated from that code, which halves the time it takes developers to build code.

Image and object recognition, of course, is another use case: recognition of images and video, and there’s more of that going on with new apps. There is defect management, or predictive clustering of defects: a smart way of clustering defects and dealing with similar ones in the same way. There is change impact: correlating what a type of change will require from a testing perspective. And there is outcome analysis, which is actually the biggest area.

Machine learning needs data, and for outcome analysis we have lots of data coming from running tests during the software development lifecycle; logs and all these tools are generating data. Using that data to gain insights to improve the way we build and test software is a big one, and a big opportunity, including, of course, for the quality of the software itself.

Test suite assessment and optimization is another, and the last one, very briefly, is test data variation. We need to use a lot of test data in testing, and it can even become a big data problem, especially when we go into the IoT world and some of those use cases. So these are seven or eight use cases where AI can improve the way we test. And, I’d like to be provocative here: it’s you guys around services, and the tool vendors, who also have to make use of AI in the technologies you’re building and the services you’re delivering to clients, to help make this happen.

Avery: You talked about different categories and alluded to the fact that there’s room for AI up front, to change how you do test case generation; then during test execution, to optimize which tests get run and which perhaps don’t, optimizing around risk levels; and then ultimately, as we’ve also seen, in outcome analysis. One customer said to me, "the answer is somewhere in there," meaning: I get all of these test results filling my inbox, but I don’t know which ones actually matter in that huge list of automated results.

Wayne, you’re dealing with all of this too, and you’ve dealt with change; people are trying to evolve their test strategies. What advice do you have for people as they look at how to get their organizations to adopt this, and where have you seen applications where it really makes sense to try some of these new technologies?

Wayne: Let me be pragmatic and just say that, as technologists, we are the worst when it comes to really understanding culture and change. We feel that if it’s a tool or code, we can master it, we can do it. This has been our downfall in terms of really improving, or incrementally improving, process. I’m with you guys: I believe having a partner on this journey is required, because you don’t know what you don’t know, and that’s what’s going to actually seize up the project or stop it.

I can’t emphasize this enough: if you look around your entire organization, most other parts of the organization and their supporting processes get assistance with process redefinition or refinement, and they take a very strict approach to technology enablement. We, as technologists, tend to skip that because we could write it, we could do it ourselves; it’s easy, it’s that simple. It can’t be overlooked, so I can’t emphasize the cultural element of this enough. By the way, I’m really encouraged by the priority that enterprise DevOps has put on culture, and by the very productive changes we’ve seen, so it’s a step in the right direction.

To the second part of your question, in terms of where to go next, I’m split on two things. First of all, the definition of AI, or what we consider AI, is a huge umbrella term for a lot of advanced automation. Like I said at the beginning of our call today, no matter what, it’s moving the needle; the idea of what can happen is moving the needle in a very, very positive direction. However, what I would encourage testing teams in particular to do is look at the biggest pain and start to dissect the barriers that are creating that pain. Is it time? Are the tests you run bloated? Are you able to get the correct test environment anytime, anywhere? Or is test data your major problem?

Then, look at an organization that’s going to assist you in bridging that gap, or series of gaps, in order to meet the expectation in terms of time, the cadence of testing.

By the way, what you’re going to find is that there’s not one answer for any of these things. It’s basically: what is the priority of the organization? Solve that problem. The biggest mistake I see today is, first, cultural, but the second is an over-exuberance, a belief that the black box is going to take care of the problem. It’s not. It requires process, that process requires data, and that data, if you’re talking in the AI world, requires a lot of it in order for you to get to the optimized answers. Start with the problem and define the problem. Don’t just throw the technology at it right now.

Avery: It’s easy advice to forget: you’ve got to start from the problem and work back to the technology, not treat AI as a hammer and run around looking for a nail.

Wayne: There does seem to be that exuberance, right?

Avery: Well, it’s true for everything, right? People get exuberant; it gets them excited. Sanil, one interesting– Go ahead, Diego.

Diego: I was just going to say, we use that hammer and nail analogy quite often, and it applies very much here. But I must say again, and I want to be provocative here: with AI, that really happens.

Avery: Go find the right– Well, if we find the right problem, it has very broad application. That’s the key.

Diego: It’s understanding– Yes. As we just said, it’s understanding what we can actually do with it and the potential it has, right? Because the big confusion is: why can’t I just do this with what I have? What’s different, what can I do differently with it? If you don’t understand the potential of it as an organization, it’s going to be really hard to even endorse it. But that’s an organization that wants to get to the forefront. On the other hand, in some cases a client says: you know what, I have a partner, and I want my partner to do it; this is what I want, go ahead and do it. Whether you’re using AI, Agile, or DevOps, I don’t care; I just want to look at the outcome. Some organizations also think that way.

Avery: Sanil, one of the questions that came in asked for some examples of optimizations. We’ve talked about reducing the number of tests, but what are some other ways you could apply AI to achieve a business outcome? This isn’t about AI per se, but rather some of the business impacts you could have at different stages in the cycle that you’ve seen successfully delivered.

Sanil: Sure. If I start just from the testing process, and I touched on some of this earlier, which is optimizing our test cases, componentization, et cetera: the real business value in doing that is that you’re trying to get speed and agility so as to meet the real demands of the customer. By applying technologies such as model-based testing combined with combinatorial analysis of data, what you’re able to do is get the right quality of testing done with the minimum amount of resources and time spent on it, right? That’s definitely a direct business value you can really get out of it.
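A minimal sketch of the combinatorial (pairwise) idea, generating a small test set that still covers every pair of parameter values; the parameters and values are invented, and this is not Apexon’s actual approach:

```python
# Greedy pairwise test design: cover every pair of parameter values with far
# fewer cases than the full cartesian product.
from itertools import combinations, product

PARAMS = {
    "browser": ["chrome", "firefox", "safari"],
    "os":      ["windows", "macos", "linux"],
    "locale":  ["en", "de", "ja"],
    "payment": ["card", "paypal", "invoice"],
}
names = list(PARAMS)

def pairs_of(candidate):
    """All (parameter, value) pairs exercised together by one test case."""
    assign = dict(zip(names, candidate))
    return {((a, assign[a]), (b, assign[b])) for a, b in combinations(names, 2)}

all_candidates = list(product(*PARAMS.values()))
# Every value pair that must be exercised by at least one test.
uncovered = set().union(*(pairs_of(c) for c in all_candidates))

tests = []
while uncovered:
    # Greedily take the candidate that covers the most still-uncovered pairs.
    best = max(all_candidates, key=lambda c: len(pairs_of(c) & uncovered))
    tests.append(dict(zip(names, best)))
    uncovered -= pairs_of(best)

print(f"{len(tests)} pairwise tests instead of {len(all_candidates)} exhaustive combinations")
```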

The other area where you can really get business value is that, with digital, as I mentioned earlier, you’re looking at a whole new set of interfaces, right? You’re looking at voice, you’re looking at an NLP-based engine powering your interfaces, et cetera. You need to be able to test these in a way that really takes your new customer base into account; you’re reaching new demographics with these solutions. To test these effectively, it’s a case of AI testing AI. The business outcome in those cases is: can my product reach this whole new geography I was aiming for, in a very short period of time, with my existing product? And what is the data I need to build that test?

AI can come into play to test those effectively in a very optimal manner, so that your go-to-market in these new geographies can be much faster than it was before. Especially if you’re looking at new interfaces like voice and chatbots, because you need more intelligence to test them than the typical test scripts you could run and automate, right? Those are a couple of areas where we definitely see business outcomes.

The third area, of course, is that everything we talked about needs a lot of infrastructure to run and execute. Of course it’s moving to the cloud, with on-demand infrastructure that you can spin up on demand, et cetera. It’s not cheap, and it certainly isn’t free. You need to apply AI to make sure that the spend on getting the agility and the kind of output you need is tempered with the right optimization of infrastructure provisioning, so that the business outcome you would otherwise have gotten is not discounted by the extra spend on infrastructure. I would say these are some areas that are definitely very impactful from a business perspective.

Avery: Wayne, any parting advice for people as they’re thinking about how to use AI, and how that might work with low-code/no-code environments? What other thoughts would you want people to have as they’re thinking about this?

Wayne: I always go back to one major theme, which is: set and prioritize your expectations in terms of what you want to achieve. Knock out the problem incrementally; don’t try to solve it all at once. Because what we face today is somewhat overwhelming in terms of all the barriers associated with test automation, and it’s not going to solve itself in a very easy, simple, clean manner.

Focus on it, focus on the process improvements around it, and then apply the technology as needed to improve that. I know this sounds so basic, and I even catch myself sometimes saying, "Wow, these are words of wisdom we’ve applied for years." But it does make sense in terms of balancing the expectations associated with AI in the software development lifecycle these days.

Avery: Diego, as you look at this, any parting words of wisdom for people?

Diego: There’s the pragmatic side of AI and, as I said, the pure, genuinely scary AI that people make us think is just around the corner; I think that’s nowhere close. This pragmatic AI can help us do pragmatic things, whether it’s with machine learning or deep learning or whatever technology. It’s definitely a new addition to the basket of technologies we have to build better software faster. Minimal investment and starting small are important: have someone start to play with and understand the technologies, then match that to your biggest pain, and see whether you can really solve one of those big things that we haven’t been able to improve with traditional testing. Again, you’re not alone in doing that; leverage all the internal and external skills you might need. Don’t be skeptical about it, and don’t think AI is just for improving business operations, which of course it will do. It is also for improving the key business process of building and testing software, and it’s a great technology that’s going to help us there.

It still needs improvement, and we need to remember we’re at the beginning, so I would be very pragmatic and realistic about it. It’s just ahead of us.

Avery Lyford: Look, I thank everyone, and certainly, speaking for Infostretch, I would be glad to continue this conversation individually with anyone who is interested. With that, let me hand it back for our wrap-up, and thank you.

Beth: Great, thank you all very much. That will end our event for today. Thank you, everyone, for attending. Be sure to check out and subscribe to the DTV channel; the link is right there, and you will also be redirected to it as soon as we’re done here. So please check that out and subscribe. I would like to thank all the speakers who participated today, Avery for moderating, and Apexon for sponsoring this web seminar, and of course all of you in the audience who spent this last hour with us. Everybody have a good day, and we hope to see you at a future event.

Diego: Thank you very much. Bye-bye.

Wayne: Thank you.