Get Smart with Intelligent Testing & Improve Your Digital Transformation Journey

Discover the benefits that Intelligent Testing brings to DevOps and Agile environments

Josh: Good morning everyone, and welcome to our webinar, Get Smart with Intelligent Testing & Improve Your Digital Transformation Journey. My name is Josh, and I will be your host today. Today’s presenters are Andrew Morgan and Siva Anna. Andrew is the director of product marketing at Infostretch. He is an experienced leader in strategic analysis, opportunity assessment and roadmap execution. With his guidance and expertise, he has helped companies expand their digital initiatives to groundbreaking levels. Andrew has over 10 years of experience working with a wide range of companies, including global automotive, pharmaceutical and technology manufacturers. Additionally, he has directed the development of market firsts such as life science applications, customer engagement programs and predictive analytics platforms.

Siva Anna is the VP of quality services here at Infostretch. He has more than 20 years of experience developing and managing QA strategies for Fortune 500 companies. In his nine years with Infostretch, Mr. Anna has been instrumental in leading strategic enterprise engagements, which have resulted in significant value for clients such as Kaiser Permanente, The Body Shop, and minted.com. Before joining Infostretch, Mr. Anna excelled as a technical product manager for both eBay and Cognizant and also served as a QA and automation strategy consultant for a large bank.

Before we begin, let me review some housekeeping items. First, this webcast is being recorded and will be distributed to you via email later, allowing you to share it with your internal teams or watch it again at your leisure. Second, your line is currently muted. Third, please feel free to submit any questions during the call by using the chat function at the bottom of your screen. We’ll answer all questions towards the end of the presentation. We will do our best to keep within the 45-minute time allotment. At this time, I’d like to turn the presentation over to Andrew.

Andrew: Thank you, Josh. Thank you also Siva, and everyone on the call, for joining us today. I just want to run through a quick agenda of what we’ll cover today, to make sure it aligns with everything you’re expecting to see and really covers everything you need to know going forward, so that your testing life cycle and quality processes are capable of incorporating the latest and greatest technologies. We’ll cover why intelligent testing, and what intelligent testing is. Then intelligent testing in action, a great case study we have focusing on all the different stages we can cover as well as implement for different organizations. Then, how we implement intelligent testing.

Why intelligent testing? This gives you some insight into what we’ve seen in the marketplace. As you can see from the statistics above, consumers have really evolved in terms of the number of devices they have.

I know for myself I have a phone, a tablet, two computers and at least three different monitors that I watch different entertainment programs on. Companies that have a streamlined strategy integrating their technology across these different platforms are actually seeing 91% greater year-over-year retention of their customers.

Siva: There’s a mistake. I don’t see anything on the screen.

Andrew: Are we not able to see anything? One second. Good to go now? Okay, perfect. Thank you, everyone, for bearing with the technical difficulties. We’ll make sure to send an RFP to Zoom so we can assist them with their technology. As you can see here, organizations are really adopting this omni-channel strategy to have a seamless experience across all these consumer devices, from shopping on Amazon, to checking out on your desktop, to seeing different curated content on your Amazon Prime channel. It’s about making sure that all these strategies align with what your consumer is expecting and wants to do along their specific journey.

Now, on the flip side, these challenges are putting a lot of pressure on IT departments, on development, on testing, on operations. These are some of the questions that our customers are asking: “Can we shift left? Can we test better? Can we put more priority on agility? Have operational costs and instability been highlighted more frequently during these expansions into newer, more advanced technologies?”

Lastly, we talk about data, as well as the improvement of UX using cloud services. It’s really about adaptive user interfaces. Think about how Uber and Google Maps already have predetermined destinations based on your location and the places you’ve most recently or most commonly visited. I see it all the time when I travel: my favorite destination pops up, “Do you want to go to this hotel? Do you want to go to this restaurant?”

It’s pretty impressive, but think about it from a development and testing standpoint: these UIs need to change in real time based on each specific user’s interactions, patterns, and what they are attempting to do with the application. How do we build that testing suite? How do we automate these tests and procedures to account for those different testing components?

Now, as we go into an example here, let’s look at a great application which has a great UX, and ask what this really means for testing. Think about a pharmaceutical application that uses biometric identification to open the application, uses the camera feature to launch an augmented reality capability on a prescription bottle or prescription pack, and gives the patient the ability to reorder the prescription, submit side effects, or communicate with the doctor.

Well, that’s great for you, and that’s great for the consumer. It’s very user-friendly. However, from the testing side, there are a lot of components at the API and unit level that we really need to address. I’m talking about electronic medical records, shipping, point-of-sale and even the inventory adjusters, as well as doctor authorization. These are all different personas within this lifecycle that need to be able to use this platform effectively, and we have to be able to test and automate the entire process to make sure that it’s working just right for each of them.

As you know, it’s not just about whether it works correctly, but about making sure it works correctly for its intended user. When we have all these different types of data shifting within these repositories, everything from doctor authorizations to the EMR needs to be able to update immediately.

Here are some of the testing challenges we hear when companies try to take these agile initiatives to market. As you can see, time to market has really decreased substantially in the last three years. Even today, we’re seeing that about 60% of enterprises want to deploy a new build daily, while 6% even want to do it hourly. What we’ve really seen, and Siva can maybe help me out a little bit on this if he has any specific examples, is that requirements management, automation, test environments, and test data are big deterrents to getting these new deliverables to market, or to developing and testing within the agile methodology.

As you can see here, we have some business and operational impacts around confidence and quality, as well as the ability to prioritize specific testing suites. As you will see later when we talk about risk-based testing, this all adds up to a higher total cost if we’re not able to adjust accordingly per the information and the requirements, based on the business objectives and development. Anything else, Siva?

Siva: No.

Andrew: Okay. Now, let’s get into the meat of it, what you guys are really here for. When we talk about intelligent testing, it’s really about the strategy of testing and aligning it to what it is we’re trying to achieve. What you’ll see here is that testing has really evolved: addressing quality, addressing agility, addressing efficiency. What we do when we go through the strategy of intelligent testing is map it back to your business requirements and the capabilities needed to deliver the best quality, efficiency and agility within your organization. These are the questions we commonly ask our customers as we’re figuring out how testing can be an enabler within their digital initiatives.

Now, I’m not going to go through each one of these. As you can see, it’s really about understanding what it is we’re trying to achieve, what the business value is, and what the metrics are to make sure that we can be successful in our operations.

In terms of intelligent testing, we talked about starting with those business objectives and that user experience, then moving on to the requirements, use cases and UX expectations. From this component, we’re able to automate some elements like functional and UI test cases. Then there are unit and API tests, which do take a little bit more skill to automate; it’s not really a turnkey solution. This leads straight into our creation phase, where there are two approaches we can take.

At the top, you can see our left-to-right approach, where we have model-based testing within an application, primarily for new applications being brought to market. We then use ATDD and BDD to automate those test cases, and then layer on top of that auto test generation and healing (ATGH) tools like Maple and Functionize. ATGH is just an acronym that we came up with because we were out of space on the diagram; it doesn’t necessarily mean it’s an in-market term or widely adopted. For spatial reasons on this diagram specifically, we wanted a short label rather than a bunch of text.

On the bottom, you see we have our right-to-left approach. As you look at the creation phase above, we have our backlog. That’s for when an application is already within the optimization process, or when it’s already within the life cycle and being pushed live. When we talk about the ability to optimize a test case or a test case repository, we need to be able to look at the entire coverage, the entire capabilities within that test repository. As you look, we have a center ground that we start off with. We actually have our own internal IP platform called [unintelligible 00:13:10]. That’s where we optimize, and we have different tools and accelerators that can leverage artificial intelligence to help you optimize these specific components within the testing cycle.

We use Toe Bot for this backlog analysis and optimization. Then, we do a gap analysis with model-based testing, or we use tools like Hexawise, to determine where we’re at and where we need to add test cases to increase the coverage. Simultaneously, we can then add the same auto test case generation and healing tools on top of it, so that as development is working on specific projects and bringing code and deliverables to market, we’re not slowing them down with our optimization efforts.

This goes on to our continuous testing foundation. Our foundation is really about the fabric of quality: understanding how we need to be set up, from an environment standpoint to an infrastructure standpoint, to be able to leverage all the latest and greatest tools, technologies and web services and get these deliverables to market.

Something that’s very interesting: I was on a call with one of our customers the other day, and they were talking about pulling one of their applications out of their existing architecture, simply because it was an application they wanted to integrate chat features and functionality into. They knew they needed a microservices component to be able to handle the amount of data and the changes that were going to be occurring within that application.

I asked specifically where they were engaging with us. It was around taking this application out of their existing architecture and implementing microservices to have the correct service virtualization, as well as the testing components to make sure their application could not only get up and running faster, but also maintain quality as the development phases increase along the roadmap. Now, once we have all of this, all the capabilities in identifying the test cases and the componentization, we’re able to understand previous test runs: what has passed and what has failed.

Then, pushing that through, we understand where we can actually test everything more efficiently and effectively. We have risk-based testing, and this is about identifying which features and functionalities change most often, and which features and functionalities fail most often. These are the two types of test cases that we want to prioritize early in the execution phase, so that we’re not wasting time running tests that we already know are going to pass.
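To make that prioritization idea concrete, here is a minimal sketch of such a risk score. The weights, field names, and sample data are purely illustrative assumptions, not the actual platform logic:

```python
# Sketch: prioritize test cases by change frequency and historical failure
# rate. Weights and fields are illustrative assumptions, not real tooling.

def risk_score(change_count, fail_count, run_count, w_change=0.5, w_fail=0.5):
    """Combine feature change frequency with observed failure rate."""
    fail_rate = fail_count / run_count if run_count else 1.0  # never run => risky
    return w_change * change_count + w_fail * fail_rate * 10

tests = [
    {"name": "login",     "changes": 8, "fails": 3,  "runs": 50},
    {"name": "reporting", "changes": 1, "fails": 0,  "runs": 50},
    {"name": "checkout",  "changes": 5, "fails": 10, "runs": 40},
]

# Run the riskiest tests first.
ordered = sorted(tests,
                 key=lambda t: risk_score(t["changes"], t["fails"], t["runs"]),
                 reverse=True)
print([t["name"] for t in ordered])  # → ['login', 'checkout', 'reporting']
```

In practice the weights would be tuned per project, and real tools derive the inputs automatically from version control and test-run history.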

Then, in the execution phase, we talk about AUT environment provisioning. Now, this is really where we see a lot of our counterparts discussing performance testing, security testing and functional testing, and there are two components here: omni-channel testing, like I talked about before, which we’re seeing a big uptick in and which is almost a starting point for a lot of our conversations, and then unsolved testing.

Unsolved testing is about unsolved automation test cases. There are a couple of good examples here that I can talk about. One is with the large national bank that we’ll go over in our case study. For that project, they had take-a-picture check deposit, what’s it called, Siva? Automated check deposit. Automated check deposit is a tongue twister for me, I guess. [chuckles]

I’m sure you’ve seen it or used it: you take a picture of a check with your bank application, and it’s automatically deposited into your account. Well, that’s great, but how do we make sure that can actually be automated within the testing cycle? What we actually did was take, I believe, a studio-grade camera and set up a hardware component to take that picture automatically as the cycle was running through.

Once we set up that hardware component to do it without manual intervention, we were able to add a software layer that pulled the image from one of our test libraries to mock the action of taking a picture. Now it’s all done behind the scenes without using the camera at all. Those are some examples of unsolved testing. We also did one for a medical device company that needed to measure the sweat glands on a person; we actually created an entire lab that was able to simulate that activity.
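That software layer is essentially a test double for the camera. A minimal sketch of the idea (the function names and the canned image are hypothetical, not the bank’s actual code) could look like this:

```python
# Sketch: swap the real camera capture for a stub that returns an image
# from a test library, so the deposit flow runs without manual intervention.
# All names here are illustrative assumptions, not the bank's actual code.

def capture_check_image():
    """In production this would drive the device camera."""
    raise RuntimeError("real camera not available during automated runs")

def deposit_check(capture=capture_check_image):
    """The flow under test; `capture` is injectable so tests can stub it."""
    image = capture()                      # take (or mock) the picture
    return {"status": "submitted", "image": image}

def fake_capture():
    """Test double: pull a canned image from the test image library."""
    return b"<bytes of sample_check.png>"

# During automated runs, the fake replaces the camera entirely.
result = deposit_check(capture=fake_capture)
print(result["status"])  # → submitted
```

Making the capture step injectable is one common way to do this; frameworks such as `unittest.mock` achieve the same effect by patching the capture function in place.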

Now that we’re able to automate a lot of these processes, just like with any other process you automate, you have the ability to extract data. Anything you do manually is harder to measure, unless you’re tracking exactly what you’re doing while you’re doing it. When you automate, you have the power of computers to instantaneously track what is being done and what the results are, which gives us the capability to analyze the efficiency and effectiveness of those activities.

As you can see, there are four components within our analyze phase: auto-results analysis, automated test acceptance, dynamic page test profiling, and visual verification. These are all components where we’re actually testing and assuring the quality of the test cases being run, whether they’re passing, and also checking the test code itself as we run through, making sure it’s up to standard so it’s not impacting the results.

Now, this analysis circles back through to provide intelligent test selection along with the risk-based testing. So not only are we prioritizing based on the historical component and our future insights into how the product is developing, we’re also able to test based on what we’re seeing within that automation code, to make sure that we provide the best possible results for our customers.

Then lastly, we have our optimize phase, and this really creates the feedback loop into the business objectives and consumer experience. What we do here is take all this information along with all of the external data shown back up there: log and app analytics, as you can see, and then production and app monitoring. Now, this is about everything that is really happening within the application in real time. What’s happening in the real world, and how do we incorporate that into our development and testing cycle? How do we account for those capabilities to make sure that we’re providing not only the best product for our business goals, but also the best product for our consumers? There are tools out there that are starting to develop more maturity in this area. What they do is track users on specific components and add that journey, so to speak, into the test case repository if it’s not there; and if it is there, they can elevate it as one of those priority test cases, because this is the functionality or pathway that your consumers are using 80% of the time.

We want to make sure that is the critical path we’re testing at the beginning of every process. Then we also add predictive analytics and prescriptive analytics on top of this feedback loop back into intelligent testing. This can predict and prescribe exactly where we are within our testing maturity, give us suggestions, and predict what is most likely to pass or fail, and at what level of confidence, based on the amount of historical data. Again, the more data, the better we’ll be able to predict; it’s just one of those things where, as we train these models, they become more and more accurate as we go.
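As a toy illustration of that "more data, more accurate" point, and not the actual analytics engine, a Laplace-smoothed failure-rate estimate converges toward the observed rate as history accumulates, while staying uncertain for new tests:

```python
# Sketch: estimate the probability that a test fails from its run history,
# with Laplace smoothing so tests with little history remain uncertain.
# Purely illustrative; real engines model far richer features.

def predicted_fail_probability(fails, runs, alpha=1.0):
    """More history => the estimate converges to the observed failure rate."""
    return (fails + alpha) / (runs + 2 * alpha)

# A brand-new test is a coin flip; a long-stable test is near zero.
print(predicted_fail_probability(0, 0))              # → 0.5
print(round(predicted_fail_probability(2, 98), 3))   # → 0.03
```

The same shape of estimate, fed by each new execution, is what makes the predictions "become more and more accurate as we go."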

Now, I’m going to pass it over to Siva. He’s going to talk about these specific components around intelligent testing in action and give some real-world examples from our experience. Siva, if you have any questions, I’m sure you can ask me, but I think you’ve got it. [laughs]

Siva: Hello everyone. Let me go back to the previous slide. I just want to reiterate what Andrew talked about. This is the intelligent testing lifecycle that we see in the industry today, and a lot of our customers do follow it. One quick thing I would like to point out, because some of the case study information I will be sharing in the subsequent slides will address it: any customer can pick and choose the specific area of optimization they want to focus on. In my case study, I’ve also divided this into four sections. One, I talk about identification and creation as one key area; then the fabric, which is the fundamental gluing component for the entire intelligent testing approach, in terms of test data and the environment; plus execution; and optimization. I talk about my case study in these four sections.

I just want to give you the context that you can pick and choose the solution, or the problem you would like to solve, in any of these sections. This is one of the large national banks that we have been working with for almost a decade. I’ve been leading this engagement, and some of the data points that you will see today are from processes implemented over the years. It’s not something that you get done on day one. This is continuous improvement that we see in the processes and the automation that we do within any organization.

To begin with, I want to focus on what we have achieved in the creation and identification phase. You will see that the structure of this slide is divided into three categories, starting with the customer ask. We have been working with them for almost a decade, and we continuously monitor and see how we can improve the overall process; that’s the key thing we documented here. As we hear not only from this customer but from many customers, time to market is the key thing. We do see that the release cycle, from definition to production, has been coming down dramatically.

In this particular case, the requirement was that they wanted to reduce the QA cycle, or the end-to-end release cycle, from eight weeks to three weeks. Agreed, the chunks of deployment are smaller, but imagine that you have to go through the entire cycle, from business requirement documentation, to user stories, to writing the test cases, to automation, and it all needs to happen within that three-week cycle. That’s where some of this intelligent testing comes into big play, because tools have matured and there are products available that make the identification of test cases and test scenarios a lot more automated. It’s not a manual process anymore. We do see quite a few tools available that allow you to define test scenarios automatically by providing the requirements, either in the form of user stories or through any other mechanism that you can pass on to these tools.

The first thing that we continuously do is baseline ourselves, so we can make sure we achieved the improvement we were looking for. In this particular case, the release SLA was one thing, but we also wanted to make sure that we were improving the quality, not compromising it in the process. We baselined the quality by looking at various metrics, in terms of productivity and in terms of the defect-to-production leakage metric. All of that was taken into consideration to identify the outcome the client was looking for. The metrics you’ll see on the right side are the actual data points that we were able to produce and validate with the customer. In terms of overall quality, a 15% improvement is the big number that we achieved in a very short time working with the customer over the recent release cycles.

The primary data point you’ll see here is that in a nine-to-twelve-month period we helped the customer reduce the release cycle from eight weeks to three weeks, and the other metrics you see are the outcome of that initiative. This is more around the identification and creation process, and it includes both the functional and automation scenarios.

Moving on, the core foundation for intelligent testing is all around the fabric, which spans the three or four core areas of automation: test data management, test environment management, and, in the recent past, service virtualization, which we see playing a critical role. This slide is all about those three areas: what the problem was that we saw, and what we achieved as an outcome in this particular section. The key ask from the customer was to see how they could run their test scenarios for any application that is still in development.

Service virtualization was one of the key elements we saw that would help: customers can start developing and automating their test scenarios even while the application is still in the development phase. The other thing we see continuously across large enterprises is test data management. Because of legacy systems, there are a lot of systems of record where you may or may not have a mechanism to provision the data. We see that having a centralized self-service portal, which allows people to request the necessary data and also lets you monitor whether the data is getting used effectively, really helps. We have built such portals for a few of our customers to help them leverage test data more effectively.

Many times we see the challenge that data was provisioned to a team, but there is no mechanism to measure whether that data is being used correctly. Having a portal helps you not only automate the process of provisioning, for both automation and manual functional teams, but also validate the usage of it; any correction that needs to happen can happen automatically through the portal. We have built the same thing for service virtualization, because that’s another key need we continue to see: many customers want a mechanism to enable or disable running their test scenarios against the back-end systems.
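A self-service provisioning portal of the kind described could, in very simplified form, track both the requests and their actual usage. This is only a sketch; the record types and tracking fields are assumptions for illustration:

```python
# Sketch: a tiny in-memory "portal" that provisions test data on request
# and tracks whether each record actually gets used. Illustrative only.
import itertools

class TestDataPortal:
    def __init__(self):
        self._ids = itertools.count(1)
        self.records = {}                  # id -> record metadata

    def provision(self, team, record_type):
        """Self-service request: a team asks for a record of some type."""
        rid = next(self._ids)
        self.records[rid] = {"team": team, "type": record_type, "used": False}
        return rid

    def mark_used(self, rid):
        self.records[rid]["used"] = True

    def unused(self):
        """Audit: data that was provisioned but never consumed."""
        return [rid for rid, r in self.records.items() if not r["used"]]

portal = TestDataPortal()
a = portal.provision("payments", "checking_account")
b = portal.provision("payments", "savings_account")
portal.mark_used(a)
print(portal.unused())   # the savings account was provisioned but never used
```

The `unused()` audit is the piece that answers "is the provisioned data actually being used effectively?"; a real portal would back this with a database and the actual provisioning pipelines.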

That way you don’t get stopped, or become a bottleneck, just because a back-end system is or isn’t available; whether you can run or not depends on the strategy applied in your organization. Moving along, execution is another key area where you have seen a lot of tools and solutions become available. Because of all the agile processes, there is a lot of test execution happening all the time.

Some customers execute on a nightly basis. The execution frequency is much, much higher than it was five years ago. We continue to see this as a problem: as we generate a lot of executions, that directly or indirectly increases the workload of analyzing the results. That’s where a lot of innovation has happened, and analysis has become an automated process that gives you an indication of the repeated failures you’re seeing, so that you can fix only those things.

Also, you can prioritize, and there is a lot of AI-based learning happening. The system, or the back-end analytics engine, can learn the failures and auto-suggest fixes. That is what we talked about with self-healing: it not only detects that there is a problem, but also comes with a recommendation of the few things that can be done to improve it. This is one of the biggest challenges we see in large enterprises, where large executions happen on a very frequent basis.
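The "repeated failures" analysis described above can be approximated very simply by normalizing failure messages into signatures and counting them. Real tools use ML-based clustering; this sketch just strips the variable parts with regexes, which is an illustrative stand-in:

```python
# Sketch: group test failures by a normalized "signature" so repeated
# failures surface first. The regex normalization is a simple stand-in
# for the ML-based clustering that real analytics engines use.
import re
from collections import Counter

def signature(message):
    msg = re.sub(r"\d+", "<N>", message)     # drop numbers (ports, ids, times)
    msg = re.sub(r"'[^']*'", "'<X>'", msg)   # drop quoted values (hosts, ids)
    return msg

failures = [
    "Timeout after 30s connecting to host 'db1'",
    "Timeout after 31s connecting to host 'db2'",
    "Element 'loginBtn' not found",
    "Timeout after 30s connecting to host 'db1'",
]

counts = Counter(signature(m) for m in failures)
top, n = counts.most_common(1)[0]
print(n, top)   # the three timeouts collapse into one repeated signature
```

Once failures are grouped like this, the engine can flag the dominant signature for a single fix instead of forcing engineers to triage each occurrence separately.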

Also, we see that having a centralized, unified framework reduces your test execution analysis effort. Having a centralized framework, or a uniform automation approach across the different layers, helps you simplify and optimize test execution. The last section is about the optimization phase that Andrew talked about. This is where a lot of innovation has happened in the recent past.

So far, the topics we’ve talked about, creation, environment provisioning, and execution, have evolved over time, but optimization is the phase that has seen the most innovation in the last few years, around what we call predictive and prescriptive analytics. In this area, what you see is that customers are demanding high-precision planning, which means leveraging the large amount of data available in the system, whether in the form of JIRA or [unintelligible 00:33:50] or any test management system’s information.

Your trends in the execution cycle, in defects, or in the quality of the application can all be fed into these bots. What we call bots are solutions that learn from that data and start predicting: whether it’s a resourcing need, a time need, or an infrastructure need, they can predict it. They can also prescribe, based on the information being fed into the system.

There’s a lot of learning. This is where all the artificial intelligence and machine learning comes in handy. It helps you increase the overall outcome, by which you can reduce your schedule variance or make better use of resources.

To summarize, this slide shows some of the key numbers I would like to highlight, which we were able to achieve on this particular engagement. Time to market is one of the key things we continuously see, because that’s where the pressure is in the industry today. There’s also maintenance: because multiple automation technologies and multiple solutions are being adopted, there is inherently increased maintenance. We have seen that adopting a single unified framework, as part of the automated or intelligent approach, helps you minimize the overall maintenance cost.

These are the key benefits that we were able to achieve with this customer. We’ll be more than happy to address any more specific questions that you might have. But we continuously see adoption of this bot or intelligent testing approach in chunks; it may not be as a whole. You are able to pick and choose the area you’re interested in, or the help you’re looking for, and these solutions allow you to achieve the outcome you’re looking for. Let me hand it over to Andrew to take it forward and share a bit more information. Andrew?

Andrew: Thank you, Siva. Great insight. We’ll just go over a couple of different slides here. Well, just one, exactly; I forgot I had taken the other one out, because I wanted to give you guys an opportunity to look at what we’re talking about in terms of strategically driving outcomes, and to walk through our maturity model with you specifically.

From the engagement process, what we do is really look at, again, quality and ROI, and understand what it is within your processes that is giving you the most challenges or causing the most headaches, in terms of your higher-ups constantly pressuring you on specific elements: costs, time, efficiency. What is it, really, that’s making your testing more or less effective?

Then, as we go into the testing tool strategy, we make sure to create a strong foundation and build a compelling business case for stakeholder buy-in. Then, as you can see, like I talked about, we assess against standard models to build that roadmap. That roadmap is really about creating that journey.

Not only can you understand where you’re going to be able to provide the most impact and enable the business from a testing perspective, but you can also communicate that upstream, so everyone within your organization and department understands exactly what you’re doing to enable that, whether we’re talking about development, data engineering, or your operations. It’s about how we can then enable these different organizations to be proponents within their departments.

Just to give a few examples, we have some tools and evaluations that we provide for our customers. This is just one example, for model-based testing. What we do is look at the specific challenges that are in the market, and then within that customer’s environment. Then, for the solution, we work out what components are needed to make it most effective for that end result, that outcome.

As you can see here, we then create a user flow showing what we’re actually doing. That way, within this specific component, you know exactly what it is that we’re addressing and what benefit we’re going to provide.

Now, as you saw previously in the intelligent testing life cycle, there are lots of components to intelligent testing, and there are also lots of different tools out there for each one of those components. I’m not going to spend the next two hours going through all the slides that we have around each component and each tool that we use or have evaluated in the market.

But this just gives you an example of how we’re going to present it to you, to showcase what you’ll actually receive as the end result.

Then lastly, this is an example of some of the components that we add. As you see here, this addresses the users within the lifecycle, and the users and components that are needed. This addresses what we need to do to meet those needs; this one is specifically for test data management, and covers how and why the current situation is the way it is. We’re able to clearly explain what needs to be done for each of these users and stakeholders within this specific user flow, and then the impact and benefit they’ll receive. As you can see, we do this not only for your users, but also from the standpoint of how we’re going to implement this within your environment. That part isn’t shown here, but it’s more for enterprise architects or senior developers, around making this solution work right for your environment.

Josh: Great. Thanks, Siva and Andrew. At this time we would like to get your questions answered. It looks like we have a few minutes to do so. You can submit a question in the chat feature at the bottom of your screen. Looks like we're getting a few questions submitted. Our first one here: someone is asking, as a reminder, what does ATGH mean?

Andrew: That’s auto test case generation and healing. It’s just a way to condense that that term for that tool does. Those tools in the market like, Maple and Functionize they run on top of existing websites that are able to adjust test cases as development is actually updating the features and functionalities or UI and UX.

Josh: Another question here: there was a mention of test data management in terms of intelligent testing. What does this mean?

Siva: I’ll answer the question Josh. Test data management in short, TDM means having a defined approach for how do you handle the test error? Like I said in my case study also, this was one of the critical problems that we solved. TDM basically it’s a various levels of management techniques that you can apply. Whether you want to have a mechanism to provision the data or you want to have a mechanism to maintain and monitor the usage of the test data. I think the key thing is also having a self-serving portal is the key thing that would simplify the whole test data management. That’s what we kind of mean here the test data management.

Josh: Got a great question here. Is intelligent testing meant only for newly created test cases? In other words, can any of these techniques be used for existing test case scenarios?

Siva: This is a great question. Let me answer this one. The short answer is yes, you can use intelligent testing both for existing test cases and for newly generated or to-be-created test scenarios. In fact, with some of the customers we are working with today, in an ongoing engagement, we are using a lot of bots. I think Andrew mentioned Turbot: it's basically a test case optimization tool. It allows you to optimize the existing scenarios and componentize them so that you're able to increase the reusability of the test scenarios.

What we have seen is that by going through this process you reduce 25% to 30% of your test scenarios, because of redundancy in the test scenarios or because some of the test cases were not being executed or are out of scope. Bots like Turbot help you optimize. So yes, you can use it for existing scenarios too. It works for both automated scenarios and functional scenarios, so you can use some of these intelligent testing techniques to effectively reduce the overhead that you might have today.
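The redundancy reduction Siva mentions can be illustrated with a toy sketch. This is only the simplest form of the idea a tool like Turbot is described as implementing (real tools use much richer similarity analysis): normalize each scenario's steps and drop exact duplicates. Scenario names and steps are invented for the example.

```python
# Hypothetical sketch of test case optimization: normalize each scenario's
# steps and drop exact duplicates, the simplest kind of redundancy an
# optimization bot can detect in a test repository.

def normalize(steps):
    # Case and whitespace differences should not make two scenarios distinct.
    return tuple(s.strip().lower() for s in steps)

def optimize(scenarios):
    seen, kept = set(), []
    for name, steps in scenarios:
        key = normalize(steps)
        if key not in seen:
            seen.add(key)
            kept.append(name)
    return kept

scenarios = [
    ("login_valid", ["Open app", "Enter credentials", "Submit"]),
    ("login_ok",    ["open app", "enter credentials", "submit"]),   # duplicate
    ("login_bad",   ["Open app", "Enter wrong password", "Submit"]),
]
kept = optimize(scenarios)
```

Even this naive pass shows where the 25% to 30% can come from; fuzzy matching and coverage analysis find the overlaps that exact comparison misses.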

Josh: Got another question here. Someone’s asking about that if you could cover the three key benefits around adopting intelligent test.

Andrew: I’ll take this one. Just to give a quick answer no specific order. The three that we often see is really around helps faster time to the market. It really enables and accelerates what it is you’re trying to achieve from a business standpoint and how many releases you’re trying to make. Secondly, we talk about the third action of dependency and highly trained technical fields. This gives us the capability like we talked about that service virtualization portal to utilize the latest and greatest tools in technologies without needing the latest and greatest skills to be able to make them effective for your organization.

Thirdly, obviously, improved quality. If our products and deliverables aren't up to standard and we aren't improving quality, then our consumers aren't going to enjoy their experience, and they're not going to spend time utilizing what we're offering. Those are the three.

Josh: Another question here. It looks like we have a few minutes to take questions. A question from Robin: what constitutes intelligent testing? I think she means, what is it made up of?

Andrew: It's really about the methodology and best practices, I would say. We've been doing a lot of these components with a lot of our customers, and as we see the digital drive in product development and organizations in general, they are taking in new technologies like Alexa, augmented reality, and wearables. How do we account for the best testing practices within those initiatives of the organization?

What we’ve done across our longevity of being a digital-first company is being able to really understand the best ways to make digital right from the first time, first initiative from the first step. That’s where we’re really focused on. Being able to intelligently look at the test case, test life cycle and understand how do we make it most impactful, most effective. Not just simply receiving the order and making it happen but making sure that we’re actually enabling the success of the business.

Josh: Got another question here for you, Siva, from John. When doing end-to-end testing today, data has to be synced between upstream and downstream applications. What techniques can be used to break this dependency?

Siva: The key thing that comes to my mind is using mocking services, or what we call service virtualization, because that's where you'll be able to at least reduce the dependency. To some extent you can split the dependency between the front-end and the back-end system. That's one technique you can use to make sure that the data sync between the front-end systems and the back-end systems doesn't block your testing.
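As a rough illustration of the service virtualization Siva describes, here is a minimal sketch, assuming the simplest design: a virtual service answers front-end requests from canned responses recorded from (or agreed with) the downstream system, optionally passing unmatched requests through to a live back end. The `VirtualService` class, routes, and payloads are all hypothetical.

```python
# Hypothetical sketch of service virtualization: a virtual back end serves
# canned responses so front-end tests no longer depend on the real
# downstream system being up or having synced data.

CANNED = {  # (method, path) -> recorded downstream response
    ("GET", "/accounts/7"): {"id": 7, "status": "active"},
}

class VirtualService:
    def __init__(self, recordings, live_backend=None):
        self.recordings = recordings
        self.live_backend = live_backend   # None = fully virtualized

    def handle(self, method, path):
        key = (method, path)
        if key in self.recordings:
            return self.recordings[key]     # answer without the real back end
        if self.live_backend:
            return self.live_backend(method, path)
        return {"error": "not virtualized"}

svc = VirtualService(CANNED)
resp = svc.handle("GET", "/accounts/7")
```

The enable/disable switch mentioned later in the case study maps to toggling `live_backend` between a real client and `None`.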

Josh: Last question we have time for: someone is asking, if the application is primarily mobile, does intelligent testing apply to mobile customers as well?

Andrew: Yes, definitely. It's great for mobile, especially with all the features and functionalities in mobile. You talk about Bluetooth, NFC connectivity, the camera, multiple cameras now on the phone and not just on one side. How do you then integrate that with predictive text and the other features and functionality that need to be integrated on the back end? It's a great example of how to make mobile not only work but work to its greatest potential.

Josh: Sorry, we’ve got one more question just slipped in from Mathew. Is your engagement model to come in with the methodology and drive through a particular workstream? I.e. a consulting engagement or is there a particular tool case that coexists with the current test automation and CICD in place as the client site?

Andrew: The best way to answer that is that there is not a specific toolkit, because depending on your environment, your needs, and what you're really trying to achieve, different tools are going to be out there. We have the expertise within our organization to pick the right tools and implement them so they're most effective. We don't utilize just one specific tool.

We don’t want to pigeon hole ourselves to make sure that we can provide you with the best possible outcome. We do have our own internal accelerators like I talked about and Siva mentioned predictive QA, test case optimization bot. Those are all highly dependant on artificial intelligence and machine learning capability. Those require a lot of data and training to be able to make sure that they’re effective, AI isn’t a Turnkey Solution. It never has been, it never will be. Think about Watson, just starting to realize some of its value. We have to still be able to be in there and be able to train the model. The SOW and the engagement may look a little different for those types of engagement, but nevertheless, we can come together and determine the best way to provide you with the best possible outcome.

Josh: I believe that’s all the time we have for questions. Thank you all for submitting your questions. This was great. Be sure to check out and subscribe to DTV, the new digital transformation channel that brings in industry experts. The link is there. You can also use bitly url. Many thanks to our speakers, Andrew and Siva, and thank you, everyone, for joining us today. You will receive an email in the next 24 hours with the slide presentation and link to the webcast replay. If you have any questions, please contact us at info@infostretch.com, or call us at 408-727-1100 to speak with a representative. Thank you all, enjoy the rest of your day. Have a good day. Bye-bye.

Siva: Thank you. Thank you all.

Andrew: Thank you.


Andrew: Thank you, Josh, and thank you, Siva and everyone on the call, for joining us today. I just want to run through a quick agenda of what we'll cover today, to make sure it aligns with everything you're expecting to see and covers everything you need to know going forward, so that your testing life cycle and quality processes are capable of incorporating the latest and greatest technologies. We'll cover why intelligent testing, and what intelligent testing is. Then intelligent testing in action: a great case study focusing on all the different stages that we can cover and implement for different organizations. Then, how we implement intelligent testing.

Why intelligent testing? This gives you some insight into what we've seen within the marketplace. As you can see from the statistics above, consumers have really evolved in terms of the number of devices that they have.

I know for myself I have a phone, a tablet, two computers, and at least three different monitors that I watch different entertainment programs on. Companies that have a streamlined strategy integrating their technology across the different platforms are actually seeing 91% greater year-over-year retention of their customers.

Siva: There’s a mistake. I don’t see anything on the screen.

Andrew: Are we not able to see anything? One second. Good to go now? Okay, perfect. Thank you, everyone, for bearing with the technical difficulties. We'll make sure to send an RFP to Zoom so we can assist them with their technology. As you can see here, organizations are really adopting this omni-channel strategy to have a seamless experience across all these consumer devices: from shopping on Amazon, to checking out on your desktop, to seeing different curated content on your Amazon Prime channel. It's about making sure that all these strategies align with what your consumer is expecting and wants to do along their specific journey.

Now, on the flip side, these challenges are putting a lot of pressure on the IT department: on development, on testing, on operations. These are some of the questions our specific customers are asking: "Can we shift left?" "Can we test better?" "Can we put more priority on agility?" "Have operational costs and instability been highlighted more frequently during these expansions into these newly developed technologies?"

Lastly, we talk about data, as well as the improvement of UX utilizing cloud services. It's really around adaptive user interfaces. Think about Uber and Google Maps: they already have your predetermined destinations based on where you are and where you've most recently or most commonly traveled. I see it all the time when I travel: my favorite destination pops up, "Do you want to go to this hotel? Do you want to go to this restaurant?"

It’s pretty impressive but you think about it from a development and testing standpoint, these UIs need to change in real-time based off of each specific user’s interaction and pattern and what they are attempting to do with the application. How do we build that testing suite? How do we automate these tests and procedures to account for those different testing components?

Now, as we go into an example here, let's look at a great application which has a great UX, but what does this really mean for testing? Think about a pharmaceutical application that uses biometric identification to open the application, uses the camera to launch an augmented reality capability on a prescription bottle or prescription pack, and gives the patient the ability to reorder the prescription, submit side effects, or communicate with the doctor.

Well, that’s great for you. That’s great for the consumer. It’s very user-friendly. However, from testings side, there’s a lot of components at that API and unit level that we really need to address. I’m talking about electronic medical records, shipping, point-of-sale and even the inventory adjusters as well as doctor authorization. These are all different personas within this lifecycle that need to be able to use this platform effectively and we have to be able to test and automate the entire processes to make sure that it’s working just right just for them.

As you know, it’s not just about whether it works correctly but making sure it works correctly for it’s intended user. When we have all the different types of data that are shifting within, again these types of repositories from doctor authorizations to the EMR need to be able to update that immediately.

Here are some of the testing challenges we hear when companies try to take these agile initiatives to market. As you can see, the time to market has decreased substantially in the last three years. Even today, we're seeing about 60% of enterprises wanting to deploy a new build daily, while 6% even want to do it hourly. What we've really seen, and Siva can maybe help me out a little on this if he has any specific examples, is that requirements management, automation, test environments, and test data are big deterrents to getting these new deliverables to market, or to developing and testing within the agile methodology.

As you can see here, we have some business and operational impacts around confidence in quality, as well as the ability to prioritize specific testing suites. As you'll see later when we talk about risk-based testing, this all adds up to a higher total cost if we're not able to adjust accordingly, per the information and requirements, based on the business objectives and development. Anything else, Siva?

Siva: No.

Andrew: Okay. Now, let's get into the meat of it, what you're really here for. When we talk about intelligent testing, it's really about the strategy of testing and aligning it to what we're trying to achieve. What you'll see here is that testing has really evolved: addressing quality, addressing agility, addressing efficiency. As we go through the strategy of intelligent testing, we map it back to your business requirements and the capabilities to deliver the best quality, efficiency, and agility within your organization. These are the questions we commonly ask our customers as we figure out how testing can be an enabler within their digital initiatives.

Now, I’m not going to go through each one of these. As you can see, it’s really about understanding what it is we’re trying to achieve, what the business value is, and what the metrics are to make sure that we can be successful in our operations.

In terms of intelligent testing, we talked about starting with the business objectives and user experience, then moving on to the requirements, use cases, and UX expectations. From this component, we're able to automate some elements like functional and UI test cases. Then there are unit and API tests, which take a bit more skill to automate; it's not really a turnkey solution. This leads straight into our creation phase, where we can take a two-approach methodology.

At the top, you can see our left-to-right approach, where we have model-based testing within an application, primarily for new applications being brought to market. We then use ATD and BDD to automate the test cases, and layer on top of that auto test case generation and healing tools like Maple and Functionize. ATD is just an acronym we came up with because we were out of space; it doesn't necessarily mean it's in the market or adopted widespread. For spatial reasons in this diagram specifically, we wanted to speak to it rather than having a bunch of text.

On the bottom, you see our right-to-left approach. As you look at the creation phase above, we have our backlog. That's when an application is already within the optimization process, or already within the life cycle and being pushed live. When we talk about the ability to optimize a test case or a test case repository, we need to look at the entire coverage, the entire capabilities, within that test repository. We have a common ground that we start off with. We actually have our own internal IP platform called [unintelligible 00:13:10], where we have different tools and accelerators that can leverage artificial intelligence to help you optimize these specific components within the testing cycle.

We use Turbot for this backlog analysis and optimization. Then we do a gap analysis with model-based testing, or we use tools like Hexawise, to determine where we're at and where we need to add test cases to increase coverage. Simultaneously, we can add the same auto test case generation and healing tools on top of it, so that as development is working on specific projects and bringing code and deliverables to market, we're not slowing them down with our optimization efforts.
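The gap analysis step can be sketched in miniature. This is a simplified illustration of the combinatorial idea behind tools like Hexawise, not their actual algorithm: enumerate the parameter-value pairs the test suite should exercise, compare against the pairs existing cases already cover, and report the gaps. The parameter space and cases are invented.

```python
# Hypothetical sketch of coverage gap analysis: list the parameter-value
# pairs no existing test exercises yet (the idea behind pairwise
# combinatorial tools).
from itertools import combinations, product

params = {"browser": ["chrome", "safari"], "os": ["ios", "android"]}
existing = [{"browser": "chrome", "os": "ios"},
            {"browser": "chrome", "os": "android"}]

def uncovered_pairs(params, cases):
    # Every pair of (parameter, value) assignments that should be covered.
    needed = set()
    for (p1, v1s), (p2, v2s) in combinations(sorted(params.items()), 2):
        needed |= {((p1, a), (p2, b)) for a, b in product(v1s, v2s)}
    # Pairs the existing cases already exercise.
    covered = set()
    for case in cases:
        covered |= set(combinations(sorted(case.items()), 2))
    return needed - covered

gaps = uncovered_pairs(params, existing)
```

Each remaining pair points at a test case worth adding; real tools then generate a minimal set of cases covering all gaps at once.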

This leads on to our continuous testing foundation. Our foundation is really the fabric of quality: understanding how we need to be set up, from an environment standpoint to an infrastructure standpoint, to leverage all the latest and greatest tools, technologies, and web services, and get these deliverables to market.

Something that’s very interesting, I was on a call with one of our customers the other day, and they’re talking about pulling one of their application out of their existing architecture, simply because it was an application they wanted to integrate chat features and functionalities. They knew they needed to have a microservices component to be able to take into account the amount of data and changes that we’re going to be occurring within that application.

I asked specifically where they were engaging with us. It was around taking this application out of their existing architecture and implementing microservices, to have the correct service virtualization as well as the testing components to make sure their application could not only get up and running faster but also ensure quality as the development phases increase along the roadmap. Now, once we have all of this, we have the capabilities for identifying the test cases and the componentization, and for understanding previous test runs: what has passed and what has failed.

Then, pushing that through, we understand where we can actually test everything more efficiently and effectively. We have risk-based testing, which is about two things: which features and functionalities change most often, and which features and functionalities fail most often. These are the two types of test cases that we want to prioritize early in the execution phase, so that we're not wasting time running tests that we already know are going to pass.
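The two risk signals Andrew names, change frequency and failure frequency, suggest a very simple ranking sketch. This is an assumed minimal scoring scheme (real risk-based selection would weight and normalize these signals); all test names and counts are illustrative.

```python
# Hypothetical sketch of risk-based test selection: rank test cases by how
# often their feature changes plus how often they have failed, and run the
# riskiest first.

def prioritize(tests):
    """tests: list of (name, change_count, fail_count) tuples."""
    return sorted(tests, key=lambda t: t[1] + t[2], reverse=True)

history = [
    ("checkout_flow", 9, 4),   # churns and fails often: run first
    ("static_footer", 0, 0),   # stable: run last
    ("login_flow", 3, 2),
]
ordered = [name for name, *_ in prioritize(history)]
```

Under a tight time budget, the suite can then be cut from the bottom of this ordering with the least risk to coverage.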

Then, in that execution phase, we talk about AUT environment provisioning. This is really where we see a lot of the counterparts: performance testing, security testing, functional testing. There are two components here: omni-channel testing, like I talked about before, which we're seeing a big uptick in and which is almost a starting point for a lot of our conversations, and then unsolved testing.

Where it’s unsolved automation test cases. There’s a couple of good examples here that I can talk about. One is with the large national bank that will go over in our case study but what we did for our project was they had take a picture, check deposit, what’s it called Siva? Automated check deposit. Automated check deposit is the tongue twister for me, I guess. [chuckles]

What that is I’m sure you guys have seen it or used it. Where you take a picture of a cheque with your bank application, then automatically deposit that into your account. Well, that’s great but how do we make sure that actually can be automated within the cycle. What we actually did was we took a, I believe a studio-grade great camera and set up a hardware component to be able to take that picture automatically as the cycle was running through.

Once we set up that hardware component to do it without manual intervention, we were able to add a software layer that pulled the image from one of our test libraries to mock the action of taking a picture. Now it's all done behind the scenes without using the camera at all. That's one example of unsolved testing. We also did one for a medical device company that needed to measure the sweat glands on a person; we created an entire lab that was able to simulate that activity.
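The software-layer mock Andrew describes boils down to dependency injection: the deposit flow receives a capture function, and the test rig supplies one that reads from a stored image library instead of a camera. This sketch is a generic illustration of that pattern, not the bank's actual code; the function names, image bytes, and OCR stand-in are all hypothetical.

```python
# Hypothetical sketch of the camera workaround: the deposit flow takes a
# capture function, and the test injects one that pulls a stored check
# image instead of driving real camera hardware.

IMAGE_LIBRARY = {"check_100_usd": b"\x89PNG...check-pixels"}  # stand-in bytes

def deposit_check(capture, ocr):
    """Production flow: capture an image, read the amount, deposit it."""
    image = capture()
    return {"amount": ocr(image), "status": "deposited"}

def fake_capture():                      # replaces the camera in the test rig
    return IMAGE_LIBRARY["check_100_usd"]

def fake_ocr(image):                     # stand-in for the real recognizer
    return 100.00 if image else 0.0

result = deposit_check(fake_capture, fake_ocr)
```

The same shape works for any hardware-bound step: once the side effect sits behind a function parameter, the automated suite can run it headlessly on every build.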

Then now that we’re able to automate a lot of these processes just like with any other process you automate. You have the ability to extract data. Anything that you do manually it’s harder to measure unless you’re tracking down exactly what you’re doing when you’re doing it manually. When you automate, now you have the power of computers to be able to instantaneously track what it is that we’re doing. What the results are and then able to provide us the capability to analyze the efficiency and effectiveness of those activities.

As you can see there’s four components we have within our analyze phase now the auto-results-analysis, automated test acceptance, dynamic page test profiling and then visual verification. Now, these are all components where we’re actually testing and assuring the quality of the test cases that are being run. Whether they’re passing as well as the code as we’re running through to making sure that the test code is up to standard so it’s not impacting the results as well.

This analysis then circles back to provide intelligent test selection along with the risk-based testing. So not only are we prioritizing based on the historical component and our future insight into how the product is developing, we're also able to test based on what we're seeing within that automation code, to make sure we provide the best possible results for our customers.

Then, lastly, we have our optimize phase, and this really creates the feedback loop into the business objectives and consumer experience. What we do here is take all this information, along with all of the external data up there: logging, app analytics, as you can see, and then production and app monitoring. This is around everything that is really happening within the application in real time. What's happening in the real world, and how do we incorporate that into our development and testing cycle? How do we account for those capabilities to make sure that we're providing not only the best product for our business goals but also the best product for our consumers?

There are tools out there that are starting to develop more maturity within this element. What they do is track users on specific components and add that journey, per se, into the test case repository, or, if it's already there, elevate it as one of the priority test cases, because this is the functionality or pathway that your consumers are using 80% of the time.
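The journey-elevation idea above can be sketched with a few lines. This assumes the simplest possible analytics feed, a list of page-sequence tuples per session; real production monitoring is far richer, and the 50% threshold here is an arbitrary stand-in for the "80% of traffic" rule of thumb.

```python
# Hypothetical sketch of analytics-driven prioritization: count each user
# journey seen in production logs and flag any path carrying a large share
# of traffic as a priority test case.
from collections import Counter

sessions = [
    ("home", "search", "checkout"),
    ("home", "search", "checkout"),
    ("home", "search", "checkout"),
    ("home", "profile"),
]

def priority_paths(sessions, share=0.5):
    counts = Counter(sessions)
    total = len(sessions)
    return [path for path, n in counts.items() if n / total >= share]

critical = priority_paths(sessions)
```

Paths that clear the threshold are the ones to run first in every cycle, and any that are missing from the test repository become new test cases.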

We want to make sure that is the critical path that we test at the beginning of every process. Then we also add predictive analytics and prescriptive analytics on top of this feedback loop back into intelligent testing. This can predict and prescribe exactly where we are within our testing maturity, give us suggestions, and predict what is most likely to pass or fail, and at what level of confidence, based on the historical data. The more data we have, the more accurately we'll be able to predict, and as we train these models they become more and more accurate as we go.
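As a deliberately simple stand-in for the predictive analytics described here, the sketch below estimates each test's pass probability from its run history, with the run count serving as a crude confidence signal. A real predictive QA model would use far richer features (code churn, author, environment); this frequency estimate and all test names are assumptions for illustration only.

```python
# Hypothetical sketch of predictive analytics on test history: estimate
# each test's probability of passing from its past runs; more runs means
# more confidence in the estimate (here exposed simply as the sample size).

def predict(history):
    """history: dict of test name -> list of past outcomes (True = pass)."""
    out = {}
    for test, runs in history.items():
        passes = sum(runs)
        out[test] = {"p_pass": passes / len(runs), "runs": len(runs)}
    return out

history = {
    "login":    [True] * 19 + [False],     # almost always passes
    "checkout": [True, False, False, True],
}
forecast = predict(history)
```

Tests forecast as near-certain passes are candidates to defer, matching the earlier point about not wasting time on tests we already know will pass.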

Now, I’m going to pass it over to Siva and he’s going to talk about these specific components around intelligent testing in action and get some real-world examples on our expertise and Siva if you have any questions, I’m sure you can ask me but I think you got it [laughs].

Siva: Hello everyone. Let me go back to the previous slide. I just want to reiterate what Andrew talked about. This is the intelligent testing lifecycle that we see in the industry today, and a lot of our customers do follow it. One quick thing I would like to point out, because some of the case study information that I will be sharing in the subsequent slides addresses it: any customer can pick and choose the specific area of optimization they want to focus on. In my case study, I've divided this into four sections. One, I talk about identification and creation as one key area; then the fabric, which is fundamental for the entire intelligent testing to work as a gluing component, in terms of the test data and the environment; plus the execution; and the optimization. I'll cover my case study in these four sections.

I just want to give you the context that you can pick and choose the solution, or the problem you would like to solve, in any of these sections. This is a large national bank that we have been working with for almost a decade. I've been leading this engagement, and some of the data points that you will see today reflect processes implemented over the years. It's not something that gets done on day one; this is continuous improvement that we see in the processes and the automation we do within any organization.

To begin with, I want to focus on what we have achieved in the creation and identification phase. You will see the structure of this slide is divided into three categories, starting with the customer ask. We have been working with them for almost a decade; we continuously monitor and look for ways to improve the overall process, and that's the key thing we documented here. As we hear not only from this customer but from many customers, time to market is the key thing. We do see that the release cycle, from definition to production, has been coming down dramatically.

In this particular case, the requirement was that they wanted to reduce the QA cycle, the end-to-end release cycle, from eight weeks to three weeks. Agreed, each chunk of deployment is smaller, but imagine that you have to go through the entire cycle, from business requirement documentation, to user stories, to writing the test cases, to automation, and it all needs to happen within that three-week cycle. That's where some of this intelligent testing comes into big play, because tools have matured and there are products available that make identification of the test cases and test scenarios a lot more automated. It's not a manual process anymore. We do see quite a few tools that allow you to define test scenarios automatically by feeding in the requirement, either in the form of user stories or through any other mechanism that you can pass on to these tools.

The first thing that we continuously do is baseline ourselves: how do you make sure that you achieved the improvement you're looking for? In this particular case, the release SLA was one thing, but we also wanted to make sure that we were improving the quality, not compromising overall quality in the process. We baseline the quality by looking at various metrics, in terms of productivity or the defect-to-production leakage metric. All of that is taken into consideration to identify the outcome the client was looking for. The metrics you'll see on the right side are the actual data points that we were able to produce and validate with the customer. In terms of overall quality, 15% improvement is the big number that we achieved in a very short time working with the customer in the recent release cycles.
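The defect-to-production leakage metric Siva baselines against has a standard, simple definition: the share of all defects that escaped QA and were found in production. The sketch below uses that common formula with invented counts; the exact definition a given team uses may vary.

```python
# Hypothetical sketch of baselining quality: defect leakage is the share
# of total defects that escaped to production rather than being caught
# in QA.

def defect_leakage(found_in_qa, found_in_prod):
    total = found_in_qa + found_in_prod
    return 0.0 if total == 0 else round(100 * found_in_prod / total, 1)

# Invented counts: baseline vs. after the improvement initiative.
baseline = defect_leakage(found_in_qa=85, found_in_prod=15)   # 15.0%
after    = defect_leakage(found_in_qa=97, found_in_prod=3)    # 3.0%
```

Comparing the metric before and after a release-cycle change is what shows whether shortening the cycle compromised quality.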

The primary data point you'll see here is that in a nine-to-twelve-month period we helped the customer reduce the release cycle from eight weeks to three weeks, and the other metrics you'll see are the outcome of that initiative. This is more around the identification and creation process, and it includes both the functional and automation scenarios.

Moving on, the core foundation for intelligent testing is all around the fabric, which spans across the three or four core areas of automation: test data management, test environment management, and, in the recent past, service virtualization, which we see playing a critical role. This slide is all about these three areas: what was the problem that we saw, and what did we achieve as an outcome in this particular section. The key ask from the customer was to see how they could run their test scenarios for any application that is still in development.

Service virtualization was one of the key elements that we saw would help the customer start developing and automating test scenarios even while the application is in the development phase. The other thing we see continuously across large enterprises is test data management. Because of legacy systems, there are a lot of systems of record that may or may not have a mechanism to provision the data. We see that having a centralized self-service portal allows people to request the necessary data, and also lets you monitor whether the data is getting used effectively or not. We have built portals for a few of our customers that help them leverage test data more effectively.

Many times we see the challenge that data was provisioned to a team, but they may not be using it, or there is no mechanism to measure whether the data is being used correctly. Having a portal helps you not only automate the provisioning process, so that both the automation and the manual functional teams can use it for tests, but also validate the usage, and any correction that needs to happen can happen automatically through the portal. We have built the same thing for service virtualization, because that is another key need we continue to see: many customers want a mechanism to enable or disable running their test scenarios against the back-end systems.
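The provision-then-track-usage idea behind such a portal can be sketched in a few lines. This is an illustrative toy, not the portal Siva describes; every class and method name here is hypothetical:

```python
import datetime

class TestDataPortal:
    """Illustrative self-service test data portal: provisions records to
    teams and tracks whether each provisioned record was actually used."""

    def __init__(self):
        self._pool = {}      # data_id -> record
        self._requests = {}  # data_id -> provisioning metadata
        self._next_id = 0

    def load_pool(self, records):
        for rec in records:
            self._pool[self._next_id] = rec
            self._next_id += 1

    def provision(self, team, predicate):
        """Hand the first unallocated record matching the request to a team."""
        for data_id, rec in self._pool.items():
            if data_id not in self._requests and predicate(rec):
                self._requests[data_id] = {
                    "team": team, "used": False,
                    "at": datetime.datetime.now(datetime.timezone.utc)}
                return data_id, rec
        return None  # pool exhausted -> time to generate or refresh data

    def mark_used(self, data_id):
        self._requests[data_id]["used"] = True

    def usage_report(self):
        """Provisioned-but-unused data: candidates for reclamation."""
        return {data_id: req for data_id, req in self._requests.items()
                if not req["used"]}

portal = TestDataPortal()
portal.load_pool([{"type": "account", "balance": 100},
                  {"type": "account", "balance": 0}])
data_id, rec = portal.provision("mobile-qa", lambda r: r["balance"] > 0)
portal.mark_used(data_id)
portal.provision("web-qa", lambda r: r["balance"] == 0)  # never marked used
```

The `usage_report` at the end is the piece that matters: it surfaces data that was requested but never exercised, which is exactly the correction loop described above.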

You don't want to be stopped, or become a bottleneck, just because a back-end system isn't there; whether you can run or not should depend on the strategy applied in your organization. Moving along, execution is another key area, and you have seen a lot of tools and solutions available for it. With so many agile processes in place, a lot of test execution happens at every check-in.
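The enable/disable mechanism for routing tests to a real versus a virtualized back end reduces to a single switch the whole suite goes through. A minimal sketch, with hypothetical names (a real setup would virtualize HTTP services, not Python objects):

```python
class RealBackend:
    """Stand-in for a live back-end call; in practice an HTTP client."""
    def get_customer(self, customer_id):
        raise ConnectionError("back end not reachable from this environment")

class VirtualBackend:
    """Virtualized service: canned responses recorded or authored up front."""
    def __init__(self, canned):
        self._canned = canned

    def get_customer(self, customer_id):
        return self._canned[customer_id]

def make_backend(virtualize, canned=None):
    """The single toggle: the suite runs whether or not the real
    system of record is up, depending on this flag."""
    return VirtualBackend(canned or {}) if virtualize else RealBackend()

backend = make_backend(virtualize=True,
                       canned={42: {"name": "Test User", "tier": "gold"}})
customer = backend.get_customer(42)
```

Flipping `virtualize` to `False` points the same tests at the live system, which is the strategy decision the paragraph above leaves to the organization.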

Some customers execute on a nightly basis. Execution frequency is much, much higher than it was five years ago, and we continue to see this as a problem: generating that many executions directly or indirectly increases the workload of analyzing the results. That is where a lot of innovation has happened. Analysis has become an automated process that gives you an indication of the repeated failures you see, so that you can fix only those things.

You can also prioritize, and there is a lot of AI-based learning happening: the back-end analytics engine can learn from the failures and auto-suggest fixes. That is what we mean by self-healing, where the system not only detects that there is a problem but also comes with a recommendation, a few things that can be done to improve it. This is one of the biggest challenges we see in large enterprises, where large executions happen on a very frequent basis.
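The "find the repeated failures" step usually starts with normalizing raw failure messages into signatures, then counting. A minimal sketch of that idea (the normalization rules here are illustrative, not what any particular analytics engine does):

```python
import re
from collections import Counter

def signature(failure_message):
    """Normalize a failure message into a stable signature by stripping
    run-specific details (numbers, hex ids, quoted values)."""
    sig = re.sub(r"0x[0-9a-f]+|\d+", "<N>", failure_message.lower())
    sig = re.sub(r"'[^']*'", "'<V>'", sig)
    return sig.strip()

def repeated_failures(failure_log, min_count=2):
    """Group raw failures by signature and surface the repeat offenders,
    most frequent first -- the ones worth fixing first."""
    counts = Counter(signature(msg) for msg in failure_log)
    return [(sig, n) for sig, n in counts.most_common() if n >= min_count]

log = [
    "Timeout after 30s waiting for element 'checkout'",
    "Timeout after 31s waiting for element 'checkout'",
    "AssertionError: expected 200 got 503",
    "Timeout after 29s waiting for element 'checkout'",
]
top = repeated_failures(log)
```

The three timeout failures collapse into one signature with a count of three, while the one-off assertion failure drops below the threshold; that collapsed, ranked list is what turns a nightly wall of red into a short fix list.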

We also see that having a centralized, unified framework reduces your test execution analysis effort. A centralized framework, or a uniform automation approach across the different layers, helps you simplify and optimize test execution. The last section is the optimization phase that Andrew talked about, and this is where a lot of the recent innovation has happened.

So far, the topics we have talked about, creation, environment provisioning, execution, have evolved over time, but optimization is the phase that has seen the most innovation in the last few years, around what we call predictive and prescriptive analytics. In this area, customers are demanding high-precision planning, which means using the large amount of data already available in the system, whether in JIRA, [unintelligible 00:33:50] or any test management system.

Data on your trends in execution cycles, defects, or the quality of the application can all be fed into these bots. What we call bots are solutions that learn from that data and start to predict, whether it's a resourcing need, a time need, or an infrastructure need. They can also prescribe actions based on the information being fed into the system.
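In its simplest form, the "learn from historical data and predict" loop is just a regression over past releases. A deliberately tiny stand-in for what such a bot does, with made-up example data (real solutions use far richer features and models):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x, stdlib only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical history pulled from a test management system:
# (stories changed in the release, tester-days the test cycle took)
history = [(10, 4.0), (20, 7.0), (30, 10.0), (40, 13.0)]
a, b = fit_line([h[0] for h in history], [h[1] for h in history])

def predict_cycle_days(stories_changed):
    """Predict the upcoming test cycle length from planned scope."""
    return a + b * stories_changed
```

Feeding the planned scope of the next release into `predict_cycle_days` gives the kind of resourcing or time estimate described above; prescriptive behavior is the next layer, turning that number into a recommended action.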

There's a lot of learning involved. This is where artificial intelligence and machine learning come in handy: they help you improve the overall outcome, so you can reduce your schedule variance or make better use of your resources.

To summarize, these are some of the key numbers I would like to highlight that we achieved on this particular engagement. Time to market is one of the key things we continuously see, because that's where the pressure is in the industry today. There is also maintenance: with multiple automation technologies and multiple solutions being adopted, there is inherently increased maintenance. We have seen that adopting a single unified framework as part of the intelligent testing approach helps minimize the overall maintenance costs.

These are the key benefits achieved with this customer. We'll be more than happy to address any more specific questions you might have. We are continuously seeing adoption of this bot-driven, intelligent testing approach in chunks, not necessarily as a whole; you can pick and choose the area you are interested in or the help you are looking for, and these solutions will allow you to achieve the outcome you're after. Let me hand it over to Andrew to take it forward and share some more information. Andrew?

Andrew: Thank you, Siva. Great insight. We'll just go over a couple of slides here. Well, just one, actually; I forgot I had taken the other one out, because I wanted to give you an opportunity to look at what we're talking about in terms of strategically driving outcomes and to walk through our maturity model with you specifically.

In the engagement process, what we do is really look again at quality and ROI, and at understanding what it is within your processes that is giving you the most challenges or causing the most headaches: the specific elements your higher-ups are constantly pressuring you on, such as cost, time, and efficiency, and what would really make your testing more effective.

Then, as we go into the testing tool strategy, we make sure to create a strong foundation and build a compelling business case for stakeholder buy-in. Then, as you can see, like I talked about, we assess against standard models to build the roadmap. That roadmap is really about creating the journey.

Not only can you understand where you're going to be able to provide the most impact and enable the business from a testing perspective, but you can also communicate that upstream, so everyone in your organization understands exactly what you're doing to enable it. We talk about development, data engineering, your operations, and about how we're able to enable these different organizations to be proponents within their departments.

What we do, just to give a few examples, is provide some tools and evaluations for our specific customers. This is just one example. For model-based testing, we really look at the specific challenges in the market, and then within that customer's environment. Then, for the solution, we work out which components are needed to make it most effective for the end result, the outcome.

As you can see, we then create a user flow for what we're actually doing. That way, for this specific component, you know exactly what it is we're addressing and what benefit we're going to provide.

Now, as you saw previously in the intelligent testing life cycle, there are lots of components to intelligent testing, and there are also lots of different tools out there for each one of those components. I'm not going to spend the next two hours going through all the slides we have around each component and each tool we have built or evaluated in the market.

But this just gives you an example of how we're going to present it to you, to showcase what you'll actually receive as the end result.

Then lastly, this is an example of some of the components that we add. As you can see, this addresses the users within the life cycle and the components that are needed, and what we need to do to meet those needs. This one is specifically for test data management, and covers how and why the current situation is the way it is. We're able to clearly explain what needs to be done for each of these users and stakeholders within this specific user flow, and the benefit they'll receive. As you can see, we do this not only for your users but also from the standpoint of how we're going to implement it within your environment. That part isn't shown here, but it's more for enterprise architects or senior developers, around making the solution work right in your environment.

Josh: Great. Thanks, Siva and Andrew. At this time we'd like to get your questions answered; it looks like we have a few more minutes to do so. You can submit a question via the chat feature at the bottom of your screen. Looks like we're getting a few submitted. Our first one here: someone's asking, as a reminder, what does ATGH mean?

Andrew: That's auto test case generation and healing. It's just a way to condense what that type of tool does. Tools in the market like Mabl and Functionize run on top of existing websites and are able to adjust test cases as development updates the features, functionality, or UI and UX.
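The "healing" half of ATGH comes down to trying fallback locators when the primary one breaks, then promoting whichever one worked. A toy sketch of that idea; real tools operate against a live DOM, while here the "page" is just a dictionary, and all names are illustrative:

```python
class SelfHealingLocator:
    """Try the primary locator, fall back to alternates, and promote
    whichever one matched so the next run starts from the healed locator."""

    def __init__(self, primary, alternates):
        self.locators = [primary] + list(alternates)
        self.healed = False

    def find(self, page):
        for i, locator in enumerate(self.locators):
            if locator in page:
                if i > 0:  # primary failed, an alternate matched: heal
                    self.locators.insert(0, self.locators.pop(i))
                    self.healed = True
                return page[locator]
        raise LookupError("no locator matched; needs human attention")

# After a UI redesign, the old element id is gone but a CSS match survives.
page_v2 = {"css:#buy-now": "<button>", "text:Buy now": "<button>"}
loc = SelfHealingLocator("id:buy_button",            # broke in the redesign
                         ["css:#buy-now", "text:Buy now"])
element = loc.find(page_v2)
```

After the call, the healed locator list leads with the CSS selector, which is the "comes with a recommendation" behavior Siva described earlier, automated.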

Josh: Another question here: there was a mention of test data management in terms of intelligent testing. What does this mean?

Siva: I'll answer that one, Josh. Test data management, TDM for short, means having a defined approach for how you handle test data. As I said in my case study, this was one of the critical problems we solved. TDM is basically a set of management techniques you can apply at various levels, whether you want a mechanism to provision the data or a mechanism to maintain and monitor the usage of the test data. The key thing is that having a self-service portal simplifies the whole of test data management. That's what we mean here by test data management.

Josh: Got a great question here. Is intelligent testing meant only for newly created test cases? In other words, can any of these techniques be used for existing test case scenarios?

Siva: This is a great question; let me answer it. The short answer is yes: you can use intelligent testing both for the existing test cases you have and for newly created test scenarios. In fact, with some of the customers we are working with today, in an ongoing engagement, we are using a lot of bots. I think Andrew mentioned Turbot; it's basically a test case optimization tool. It allows you to optimize existing scenarios and componentize them so you can increase the reusability of the test scenarios.

What we have seen is that by going through this process you reduce 25% to 30% of your test scenarios, because of redundancy, or because some test cases were not being executed or were out of scope. Bots like Turbot help you optimize. So yes, you can use it for existing scenarios too, and for both automated and functional scenarios, so you can use some of these intelligent testing techniques to effectively reduce the overhead you have today.
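The redundancy detection behind that 25-30% reduction can be approximated by comparing scenarios on step overlap. A minimal sketch of the idea, not of how Turbot itself works; the suite and threshold are made up:

```python
def jaccard(a, b):
    """Step-set overlap between two test scenarios (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def redundant_pairs(scenarios, threshold=0.8):
    """Flag scenario pairs whose steps overlap above the threshold;
    those are candidates to merge or retire."""
    names = list(scenarios)
    pairs = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            if jaccard(scenarios[x], scenarios[y]) >= threshold:
                pairs.append((x, y))
    return pairs

suite = {
    "checkout_visa": ["open cart", "add item", "pay by card", "assert receipt"],
    "checkout_mc":   ["open cart", "add item", "pay by card", "assert receipt"],
    "search_basic":  ["open home", "search term", "assert results"],
}
dupes = redundant_pairs(suite)
```

The two checkout scenarios share every step and get flagged as a pair, while the search scenario survives untouched; applied across a few thousand scenarios, this is where the redundancy savings come from.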

Josh: Got another question here. Someone's asking if you could cover the three key benefits of adopting intelligent testing.

Andrew: I'll take this one; just a quick answer, in no specific order. The first benefit we often see is really around faster time to market: it enables and accelerates what you're trying to achieve from a business standpoint and how many releases you're trying to make. Secondly, there's the reduction of dependency on highly trained technical skills. This gives you the capability, like we talked about with that service virtualization portal, to utilize the latest and greatest tools and technologies without needing the latest and greatest skills to make them effective for your organization.

Thirdly, obviously, improved quality. If our products and deliverables aren't up to standard and we're not improving quality, then our consumers aren't going to enjoy their experience, and they're not going to spend time using what we're offering. Those are the three.

Josh: Another question here; it looks like we have a few minutes left for questions. A question from Robin: what constitutes intelligent testing? I think she means, what is it made up of?

Andrew: It's really about the methodology and best practices, I would say. We've been doing a lot of these components with many of our customers, and as we see the digital drive in product development and in organizations in general, taking in new technologies like Alexa, augmented reality, and wearables, the question is: how do we account for the best testing practices within those initiatives of the organization?

What we've done, across our longevity as a digital-first company, is really understand the best ways to make digital right from the first time, the first initiative, the first step. That's where we're really focused: intelligently looking at the test life cycle and understanding how to make it most impactful and most effective, not simply receiving the order and making it happen, but making sure we're actually enabling the success of the business.

Josh: Got another question here for you, Siva, from John. When doing end-to-end testing today, data has to be synced between upstream and downstream applications. What techniques can be used to break this dependency?

Siva: The key thing that comes to my mind is using mocking services, or what we call service virtualization, because that's where you'll be able to reduce the dependency, at least to some extent, so you can split the dependency between the front-end and back-end systems. That's one technique you can use so that you're not blocked by the data sync between the upstream and downstream systems.
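In test code, this kind of mocking can be as direct as patching the downstream client with canned data, so the upstream flow is exercised without any cross-system sync. A small sketch using Python's standard library mock support; the classes and data are hypothetical:

```python
from unittest.mock import patch

class InventoryClient:
    """Downstream system the order flow depends on."""
    def stock_level(self, sku):
        raise ConnectionError("downstream not reachable from test env")

def can_order(sku, qty, inventory=None):
    """Upstream logic under test: needs a stock figure from downstream."""
    inventory = inventory or InventoryClient()
    return inventory.stock_level(sku) >= qty

# Patch the downstream call with canned data: the upstream flow runs
# without the real system being up or its data being synced.
with patch.object(InventoryClient, "stock_level", return_value=5):
    ok = can_order("SKU-1", qty=3)
    not_ok = can_order("SKU-1", qty=9)
```

A dedicated service virtualization layer does the same thing at the network boundary instead of in process, which is what makes it usable by manual testers and non-Python suites as well.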

Josh: Last question we have time for, it looks like. Someone is asking: if the application is primarily mobile, does intelligent testing apply to mobile customers as well?

Andrew: Yes, definitely. It's great for mobile, especially with all the features and functionalities in mobile. You talk about Bluetooth, connectivity with NFC, you talk about the camera, multiple cameras now on the phone, not just on one side. How do you integrate that with predictive text and the other features and functionality that need to be integrated on the back end? It's a great example of how to make mobile not only work, but work to its greatest potential.

Josh: Sorry, we've got one more question that just slipped in, from Mathew. Is your engagement model to come in with the methodology and drive through a particular workstream, i.e. a consulting engagement, or is there a particular toolkit that coexists with the current test automation and CI/CD in place at the client site?

Andrew: The best way to answer that is that there isn't a specific toolkit, because depending on your environment, your needs, and what you're really trying to achieve, there are different tools out there. We have the expertise within our organization to pick the right tools and implement them so they're most effective. We don't utilize just one specific tool.

We don't want to pigeonhole ourselves; we want to make sure we can provide you with the best possible outcome. We do have our own internal accelerators, like I talked about, and Siva mentioned predictive QA and the test case optimization bot. Those are highly dependent on artificial intelligence and machine learning capabilities, and they require a lot of data and training to be effective. AI isn't a turnkey solution; it never has been and it never will be. Think about Watson, just starting to realize some of its value. We still have to be in there and train the model. The SOW and the engagement may look a little different for those types of engagements, but nevertheless, we can come together and determine the best way to provide you with the best possible outcome.

Josh: I believe that's all the time we have for questions. Thank you all for submitting them; this was great. Be sure to check out and subscribe to DTV, the new digital transformation channel that brings in industry experts. The link is there, and you can also use the bitly URL. Many thanks to our speakers, Andrew and Siva, and thank you, everyone, for joining us today. You will receive an email in the next 24 hours with the slide presentation and a link to the webcast replay. If you have any questions, please contact us at info@infostretch.com, or call us at 408-727-1100 to speak with a representative. Thank you all, and enjoy the rest of your day. Bye-bye.

Siva: Thank you. Thank you all.

Andrew: Thank you.