Predictive QA

When Analytics Meets Software Testing

Sarah-Lynn Brunner: Good morning and welcome to our webinar, Predictive QA — When Analytics Meets Software Testing. My name is Sarah-Lynn Brunner and I will be your host for today. Today’s presenter is Sanket Vaidya. Sanket is a QE Solutions Architect at Infostretch with extensive IT experience in quality engineering, on both the delivery and the business side of IT. While working at Infostretch, Patni and BNP Paribas, he has provided QA solutions for multiple applications and systems. He has led multiple QE teams for mobile, desktop, web and mobile web applications and ensured timely delivery and quality.

Before we begin, let me review the housekeeping items. First, this webcast is being recorded. It will be distributed via email, allowing you to share it with your internal teams or watch it again later. Second, your line is currently muted. Third, please feel free to submit questions during the call by using the Q&A button in the navigation at the top center of your screen. We will answer all questions toward the end of the presentation. Also, during the presentation we encourage you to engage with the poll questions to maximize your experience. We will do our best to keep this webinar to the 45-minute time allotment. At this time, I’d like to turn the presentation over to Sanket Vaidya.

Sanket Vaidya: …as well as the threat of competition. This is what the situation at ground zero looks like. We have a product owner who wants to ship as many stories as possible in the sprint. At the same time, we have a QA manager who faces the humongous challenge of maintaining quality, or delivering the same quality, in a short amount of time. To do this, what a QA manager typically does is plan a number of tasks during the sprint, such as regression testing, functional testing, performance testing, security testing, defect retesting and so on.

To plan this optimally, he needs to make decisions that help him prioritize tasks and distribute effort across them efficiently. To do this, he makes his decisions based on certain traditional norms. For example, he would allocate more time and effort to a module that is more complex, because we tend to believe that a more complex module will have more defects, and more critical defects. In a similar fashion, a manager would allocate more time to a module with 50,000 lines of code than to one with 10,000 lines of code.

However, from actual experience we know that in many cases this is not true. For example, in a recent project I was working on, the search module had a lot of critical defects, while the entire user shopping flow, right from selecting the product, to adding it to the cart, checkout, payment and order confirmation, was seamless; that entire business flow had very few defects. The other thing we do is allocate more time and resources; in other words, we spend more time on testing.

This works many times, but it is not the most efficient way to do things. Ideally, the way to do this would be to take the historical data of past project releases, analyze it, churn through it and get meaningful recommendations out of it. However, considering the short timelines and the actual situation in a project, we all know that this task is next to impossible; it is practically very difficult to do. This is exactly where Predictive QA comes to the rescue. Predictive QA is basically a tool based on machine learning and AI that helps us channelize our QA efforts.

It uses historical data from the defect management or project management tool for our previous releases and gives concrete recommendations to optimize QA efforts. It also predicts the risk of the various modules of the project, as well as how many defects each module is likely to encounter. This is an overview of how the tool works.

In the first step, we take data from the defect management or project management tool. In the next step, the tool does something called automated component tagging as part of a Data Health Check; I’ll come to what this means later. After that we have a User Verification step, which is basically to validate the component tagging done by the tool. At the end it filters the data, because certain data is not usable, and if we used it, it would skew the accuracy of the results.

Now, once it has filtered the data, it churns the data and gives risk predictions for the various modules: which modules have high risk and which have low risk. It also predicts the defect range each module is going to encounter; for example, module X is going to have between five and nine defects during the release.
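To make those two outputs concrete, here is a tiny, self-contained Python sketch that groups made-up historical defects by module and derives a toy risk label and defect range. The data, thresholds and logic are illustrative assumptions only, not the tool’s actual algorithm.

```python
# Toy illustration only: group historical defects by module and derive a
# naive risk label and defect range. Data and thresholds are made up and
# do not reflect the tool's real algorithm.
from collections import defaultdict

issues = [  # (version, module, priority) from past releases
    ("6.0.7", "Search",   "Critical"),
    ("6.0.8", "Search",   "Major"),
    ("7.0.1", "Search",   "Critical"),
    ("7.0.1", "Checkout", "Minor"),
]

by_module = defaultdict(list)
for _version, module, priority in issues:
    by_module[module].append(priority)

for module, priorities in by_module.items():
    criticals = sum(p == "Critical" for p in priorities)
    risk = "high" if criticals >= 2 else "low"                        # toy threshold
    defect_range = (max(len(priorities) - 1, 0), len(priorities) + 2)  # toy range
    print(f"{module}: risk={risk}, expected defects={defect_range}")
```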

In addition to this, it also recommends certain things: for example, what kind of resource would be helpful for testing a module, and what kind of testing is recommended for it. If your testing is automated, there is a further advantage: it generates a configuration file which can help you trigger your automated test run based on the recommendation.

This is how data is imported into the tool. The thing to appreciate here is that you don’t need any technical skills, and you don’t need to do any data formatting or anything of that sort. All you need to do is enter the project URL of JIRA along with your login credentials, and the tool does the importing for you. This is how the filtering works.
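As a rough illustration of what such an import could look like, here is a short Python sketch that pulls issues through JIRA’s standard REST search endpoint. The URL, project key and credentials are placeholders, and this is not a description of the tool’s internal implementation.

```python
# Hypothetical sketch only: pulling issues for selected versions through
# JIRA's REST search endpoint. URL, project key and credentials are
# placeholders; the actual tool performs this import for you.
import requests

JIRA_URL = "https://your-company.atlassian.net"   # placeholder
AUTH = ("user@example.com", "api-token")          # placeholder credentials

def fetch_issues(project_key, versions):
    quoted = ", ".join(f'"{v}"' for v in versions)
    jql = f"project = {project_key} AND fixVersion in ({quoted})"
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={
            "jql": jql,
            "fields": "summary,description,components,priority",
            "maxResults": 100,
        },
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["issues"]

issues = fetch_issues("SHOP", ["6.0.5", "6.0.6", "7.0.1"])
print(len(issues), "issues imported")
```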

Now, once you import the project data, or once you enter the JIRA URL in that import box, it will ask you to pick the previous versions of your project. For example, if you have done six releases of the project in the past, it will ask you to select the versions from which it can pick the data to train the algorithm that makes the predictions. You can select more than one version; you can select four or five, or you can select all six.

Now, once you have selected the version numbers, it takes the components from the component field of JIRA. We talked earlier about how this helps to predict the vulnerability of a module; from the component field, it takes the module name. The algorithm is also intelligent about a common gap: we all know that in certain cases people forget to add components to defects or JIRA tickets. In that case, it analyzes the summary as well as the description and comes up with a recommended component for that ticket.

Once it has made a recommendation, it asks the user to confirm whether it is the correct component. For example, let’s say it recommends the component Login for a ticket; if we think the component should not be Login but Registration, we can change that component tagging from the drop-down.
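Conceptually, suggesting a component from the summary and description is a text-classification problem. The short scikit-learn sketch below illustrates that general idea with made-up tickets; it is not the tool’s actual model or training data.

```python
# Illustrative only: infer a missing component from ticket text using a
# simple TF-IDF + logistic regression classifier trained on tickets that
# already have a component tag. The sample tickets below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tagged_text = [
    "Login fails with valid password",
    "Password reset email never arrives",
    "Registration form rejects valid phone numbers",
    "New user registration page times out",
]
tagged_component = ["Login", "Login", "Registration", "Registration"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tagged_text, tagged_component)

# Suggest a component for an untagged ticket; the user can still override it.
print(model.predict(["Cannot sign in after changing password"])[0])
```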

Now, once it has identified the components, it tries to filter out outliers. Outliers are data points which, if used as training data, would skew the results. For example, let’s say you are taking around 100 observations to train your algorithm, and 95 of those observations have values between 80 and 100. Then there are three observations with values between 500 and 600, and another two observations with values between one and 10.

If we include those five observations, they would skew the prediction and add more deviation to it. What the tool does is identify the versions whose data falls into this outlier category. Once it has identified those versions, we can discard them from the prediction or, if we want, keep them. Once that is done, it is ready to display the results.
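For intuition, a common way to flag such outliers is the interquartile-range rule. The sketch below is a generic illustration of that idea, not the tool’s specific filtering logic.

```python
# Illustrative IQR-based outlier check: flag observations far outside the
# typical spread so they can be reviewed or excluded from training.
import numpy as np

values = np.array([85, 90, 92, 88, 95, 550, 580, 2, 7, 96])  # made-up data

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = values[(values < lower) | (values > upper)]
kept = values[(values >= lower) & (values <= upper)]
print("flagged as outliers:", outliers)   # the 500-600 and 1-10 values
print("kept for training:", kept)
```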

The first thing it predicts is module vulnerability. We’ll see this in the actual demo. All right. Once we have imported the data, this is how the screen looks. The horizontal bar that you see at the bottom has the label Version. These are the version numbers whose data we have used to make predictions for the upcoming release. The upcoming release is 7.0.2, and we have used the data from versions 6.0.5, 6.0.6, 6.0.7, 6.0.8 and so on up to 7.0.1; 7.0.2 is the upcoming version.

If you look over here, you will see a wheel which covers all the modules of your project. The ones highlighted in red are the modules with high risk or high vulnerability, and the ones in green are the modules with low risk or vulnerability. If you are handling a QA team during a release or a sprint, you should focus more effort on the modules highlighted in red.

Now, we have seen the prediction of module risk. Let’s say we have completed the release; then we will have the actual data of the release in our JIRA. To compare how our predictions went against the actual release data, let us move back to some past release.

Let’s say, for example, we move to release 7.0. In release 7.0, this is the prediction for high- and low-risk modules. On the left side, you will see a link to view the actual vulnerability that was encountered after the release, or after our test cycle was completed. You might have noticed that it has changed slightly. How much has it changed? To see that, we navigate to this Compare link. Once you navigate here, you will see that certain modules are highlighted in darker shades than the others; these are the ones for which the predicted vulnerability was correct. If you look at this chart, you will see that there are three modules for which the predicted risk was incorrect, whereas for all the remaining modules the predicted risk was correct.

Now, let us have a look at the prediction of defects. For that, let’s move back to the upcoming release again. There you go; to see the defect prediction, you need to navigate to the Defects tab. What you see is a chart with bars on it. The horizontal axis has the module names, whereas the y-axis has the defect counts. Basically, this tool predicts defects as a range. For example, from the data we have fed it, it has predicted that a module called Test Case has a defect range of 21 to 35. So during the 7.0.2 release, this Test Case module should encounter somewhere between 21 and 35 defects. Similarly, for Test Suites, it has predicted the defect count to be from 29 to 44. There is a horizontal scroll bar right beneath the chart, so you can scroll to see the defect ranges for all the modules.
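As a rough illustration of how a range, rather than a single number, can come out of past release data, here is a small sketch combining a recency-weighted mean with the historical spread. The data, weights and interval rule are assumptions for the example, not the tool’s published method.

```python
# Illustrative sketch: predict a defect *range* for a module from its counts
# in past releases, using a recency-weighted mean plus/minus the spread.
import numpy as np

history = {              # made-up defect counts per past release, oldest first
    "Test Case":  [22, 27, 30, 26, 31],
    "Test Suite": [35, 30, 33, 38, 36],
}

def predict_range(counts, decay=0.7):
    counts = np.array(counts, dtype=float)
    weights = decay ** np.arange(len(counts) - 1, -1, -1)  # newest weighs most
    center = np.average(counts, weights=weights)
    spread = counts.std(ddof=1)
    return int(round(center - spread)), int(round(center + spread))

for module, counts in history.items():
    print(module, predict_range(counts))   # e.g. Test Case (25, 32)
```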

Now, once you have this prediction and once we are done with the actual release, we will have the actual defect counts in JIRA. Again, to see this, let’s move back to some past release; let’s move back to release 7.0. If you look at the chart, you will see the charts are basically the same, with a small triangle-shaped marker right over here. These are the actual counts of defects found during the release.

For example, for the 7.0 release of the Test Case module, the tool predicted a defect range of 24 to 36, whereas the actual defect count found was 28. Similarly, for the Test Suites module, the predicted defect range was around 23 to 34 and the actual count was 30. In the same way, you can see the actual versus predicted comparison for all the modules by using the scroll bar. If you look at this module right here, the actual number of defects found during the release was slightly more than the predicted range.

Another cool thing about this is the recommendations it provides. Let’s move back to the upcoming release, 7.0.2. If you click the arrow on the right side, a small window pops up. It has recommendations per module for the project. Let’s have a look at some of them. For the Test Case module, it has recommended that you cover the module thoroughly: functional testing with high, medium and low priority test cases.

Another thing it recommends is an early regression cycle for this module. Normally we start the regression cycle late in the sprint or late in the release, but in this case the tool recommends starting the regression cycle early. Another recommendation is to allocate QA engineer one. The way to interpret this is that the tool has analyzed the defects and the work done by QA engineer one in JIRA, and based on that work and the defects raised on this module, it has recommended QA engineer one.

We interpret this as: we can assign QA engineer one here, or assign a resource with a similar skill set to this module. Now let’s take another recommendation as an example, say this aggregation screen. In this case, it has recommended functional testing for high and medium priority test cases and, in addition, exploratory testing. Here, if we are running short of time, the low priority test cases for this module are at less risk, and we can skip them if we need to manage time or shift priorities.

Another feature you see is the test configuration button right at the top of this recommendation window. If you open it, you will see the module name in the leftmost column, and in the three rightmost columns you will see High, Medium and Low; those are the priorities of the test cases. In the recommendation we saw that for the Test Case module we should execute high, medium and low priority test cases.

Those are already checked here. In the same way, for the execution screen we should execute high and medium priority test cases, and the low priority test cases were not recommended by the tool, so for low priority the box is unchecked. These checkboxes are editable. Let’s say the tool recommended that we execute only high and medium priority test cases; if we think we should also execute the low priority ones, we can check that box. These fields are editable, and we can override the tool’s recommendation through these checkboxes.

Once you click “start test run”, it triggers the automation scripts that run the selected test cases to get the coverage mentioned in the recommendations. All right, these are basically the screenshots of predicted vulnerability which we already saw in the demo. These are the screenshots for the defect range; we can also see the actual defects via the triangle marker. These are the actual vulnerabilities, these are the comparisons, these are the actual defect counts, and finally the comparison. This is the same recommendation window which we walked through a few minutes back.
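To illustrate how a generated test configuration could drive an automated run, here is a small hypothetical sketch. It assumes tests live under tests/&lt;module&gt; and carry pytest markers named high, medium and low; the configuration format and directory layout are assumptions for the example, not the tool’s actual output.

```python
# Hypothetical example: turn a per-module priority configuration into test
# commands. Assumes tests are tagged with pytest markers named "high",
# "medium" and "low"; the config format itself is made up.
import subprocess

test_config = {
    "test_case":        ["high", "medium", "low"],  # run everything
    "execution_screen": ["high", "medium"],         # skip low priority
}

for module, priorities in test_config.items():
    marker_expr = " or ".join(priorities)           # e.g. "high or medium"
    cmd = ["pytest", f"tests/{module}", "-m", marker_expr]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=False)                # one click -> one command
```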

Now, the most important thing about this algorithm is that it is self-learning. As you feed more and more data to the algorithm, it keeps improving itself and thereby improving its accuracy. Let’s say you feed it data from six versions; if you feed it more data, say 10 or 11 versions, it will be more accurate. Translated into a real-life scenario: say you have predictions based on six releases, and by the time you have done another four releases, ten in total, the tool’s predictions will have improved. It gets better and better as we do more and more releases.

These are the integrations: it works with JIRA, Quality Center, Mantis and Bugzilla. All right, what are the actual benefits we can achieve? First of all, based on the recommendations and predictions given by the tool, we know how to channelize our effort to gain maximum efficiency from our QA activities. We can define scope with more precision, we can build a schedule with a smaller margin of error, and we can figure out what kind of skill set is best suited to testing particular types of modules, so that we gain the maximum from our testing efforts.

Then, if we are automated, we can also get insights for the automation. For example, in the test configuration file we saw, we had the test cases with the priorities the tool recommended to run: for certain modules we were running high, medium and low, all test cases, and for certain modules we were running only high and medium test cases. In real life, it would be very time consuming to configure scripts to run this way and then execute them based on that configuration. This tool gives you the configuration file automatically, and you can trigger the automated test cases with a single click.

Since these are targeted recommendations, they help you optimize test coverage. In other words, they also help you prioritize when there is a time crunch.

That’s all from me guys. Thank you. Any questions?

Sarah-Lynn: At this time we would like to get your questions answered. As a reminder, you can ask a question at any time in the Q&A window at the top center of your screen. The first question we have here is, “Does the module risk shown in the chart take into consideration the total defect count only, or priorities as well?”

Sanket: That’s a good question. It takes into account the number of defects as well as their priority, and not only that. Let’s say, for example, there is a module which has nine minor defects and another module which has three critical defects; it would give more weight to the module with the critical defects. Not only this, it also gives more weight to the most recent data.

Let’s say, for example, you have a module which, five releases back, had a very high number of critical defects, but in the last four releases it has been showing only minor defects. It would show a low vulnerability, so it also takes into account how recent the module’s behavior is. Your recent releases carry more weight than releases much further in the past.
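As a generic illustration of weighting by priority and recency, the sketch below scores a module by giving critical defects a larger weight and discounting older releases exponentially. The weights and decay factor are arbitrary example values, not the tool’s real parameters.

```python
# Illustrative scoring only: severity weights and the recency decay factor
# are made-up values, not the tool's real parameters.
PRIORITY_WEIGHT = {"Critical": 5.0, "Major": 3.0, "Minor": 1.0}
DECAY = 0.5   # each release back in time halves a defect's contribution

def risk_score(defects):
    """defects: list of (releases_ago, priority) for one module."""
    return sum(PRIORITY_WEIGHT[p] * (DECAY ** age) for age, p in defects)

# Module A: many critical defects, but five releases ago.
module_a = [(5, "Critical")] * 6
# Module B: a few recent minor/major defects.
module_b = [(0, "Minor"), (0, "Minor"), (1, "Major")]

print("A:", risk_score(module_a))   # old criticals are heavily discounted
print("B:", risk_score(module_b))   # recent defects dominate the score
```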

Sarah-Lynn: Great. Thank you, Sanket. We do have another question for you here: “How does the tool determine the size of the work plan? Does it have the granularity to map the work plan to modules?”

Sanket: Sorry, I did not get the question.

Sarah-Lynn: It’s, “How does the tool determine the size of the work plan?”

Sanket: Okay. What this tool basically does is give you recommendations based on your activities in the defect management tool. The recommendations you get, the priority for a module, the defect range and the other suggestions, take into consideration a lot of things, like defect density, who has worked on which module, how many defects they have logged and what their defect acceptance ratio is. Based on this, it recommends what priority of test cases you should execute for a particular module.

Sarah-Lynn: Okay, great, thank you, Sanket. Another question for you is, “Is the prediction based on past defects only?”

Sanket: No, it is not based only on defects; it also integrates with the source code repository. It checks how many changes there have been in the past and how many developers have worked on the module, so it takes a lot of complex factors into account in the algorithm, with different weights, to make the prediction.
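For intuition, code-change features such as recent commit volume and the number of distinct developers can be pulled straight from the repository history. The sketch below uses plain git commands and assumes each module maps to a directory in a locally checked-out repository; that mapping and the time window are assumptions for the example.

```python
# Illustrative sketch: extract simple churn features per module from git
# history. Assumes a local checkout and that each module corresponds to a
# directory path; these are assumptions, not the tool's actual integration.
import subprocess

def churn_features(repo_path, module_dir, since="90 days ago"):
    authors = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--pretty=format:%an", "--", module_dir],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {"commits": len(authors), "developers": len(set(authors))}

print(churn_features(".", "src/search"))   # e.g. {'commits': 42, 'developers': 5}
```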

Sarah-Lynn: All right, great. Another question, Sanket, is, “Is there any minimum amount of data required for the tool to make predictions?”

Sanket: Yes, as I mentioned, this tool needs training data to make predictions. We should have data for at least two releases to get a reasonable prediction out of the tool.

Sarah-Lynn: Perfect, and we have one last question for you here: “When you import the data, does the tool import overall JIRA data or project-specific data?”

Sanket: It imports project-specific data. The predictions are based on your project, that is, the project whose URL you enter at the time of import.

Sarah-Lynn: Okay, great. I believe that’s all the questions we are able to answer at this time. I’d like to thank everyone for joining us today.

You’ll receive an email in the next 24 hours with the slide presentation and a link to the webcast replay. If you have any questions, please contact us at info@infostretch.com or call 1 408 727 1100 to speak with a representative. Again, thank you all for joining us, and enjoy the rest of your day.
