with Tricentis & Infostretch
Host: Good morning and welcome to our webinar, How to Flip Your Test Automation Pyramid for Agile. My name is Sarah-Lynn Brunner and I will be your host for today. Today's presenters are Thomas Stocker, Product Manager at Tricentis, and Siva Anna, Senior Director of Enterprise QA at Infostretch. A quick background on both: Thomas is currently a product manager at Tricentis, mainly responsible for distributed execution and continuous integration in the automation platform. Prior to his current role, Thomas worked directly on the Tricentis Tosca Testsuite as a Lead Engineer.
Thomas studied Business Informatics at the Vienna University and has over 10 years of experience in software development and over three years' experience in Agile Product Ownership. As Senior Director of Enterprise QA at Infostretch, Siva Anna has more than 20 years of experience developing and managing QA strategies for Fortune 500 companies. In 12 years with Infostretch, Mr. Anna has been instrumental in leading strategic enterprise engagements, which have resulted in significant value for clients such as Kaiser Permanente, The Body Shop and Minted.com.
Before we begin, let me review some housekeeping items. First, this webcast is being recorded and will be distributed via email, so you can share it with your internal teams or watch it again later. Second, your line is currently muted. Third, please feel free to submit any questions during the call by accessing the Q&A button in the navigation at the top center of your screen. We will answer all questions towards the end of the presentation. Lastly, we will do our best to keep this webinar to its 45-minute time allotment. At this time, I'd like to hand the presentation over to our first speaker, Siva.
Siva: Thank you Sarah-Lynn. Good morning everyone. Let me quickly go over the agenda. I'll be sharing this session with Thomas from Tricentis. To begin with, I'll be covering what I see, from my experience working with multiple enterprises, as the quality engineering workflow and the kinds of changes happening in it. The last couple of bullet points will then be covered by Thomas.
Moving along. Before I go into the specifics of today's topic — the test pyramid, how you achieve it and what the benefits are — I want to lay down what I have seen with enterprises and some of the success factors that have become so important in today's quality engineering. I'll touch upon two or three aspects, but what you see on the screen are the top success factors for successful quality engineering. The first one is Shift Left and BDD. Time to market has become so important in any business.
We have seen that Shift Left, or TDD and BDD, has become one of the key elements you need to adopt to make your overall quality engineering successful. There are a lot of tools and technologies available that support or enable this kind of automation, what we call Lead with Automation or In-Sprint Automation. You want to write your test automation up front, before the application or the component that you want to test is even available.
As I continue through the other slides, you will see how we can leverage automation and how you can do it much closer to the code. Test automation, or In-Sprint Automation, is one of the key things. A couple of other things, as you might have seen on the screen: DevOps integration has become one of the key things. DevOps integration means that you want to start your automation very early in the cycle and you want to do automation at all the different layers.
Integration with DevOps and continuous integration is one of the key success factors for your quality engineering. Service virtualization has also become key; Thomas will touch upon how you can leverage service virtualization to enable the layered approach, which is the primary objective of today's session. You will see that service virtualization plays a critical role in enabling the pyramid approach that we recommend for quality engineering. Moving along.
Another key thing that we see working with many enterprises and organizations: the quality engineering workflow has multiple views — the design view, the execution view and the analysis view. From the design view all the way to the analysis view, a lot of enhancement and discovery has happened to ensure you can do the right test design, focusing on different aspects — for example, from the test process all the way to the visual validation you want to do for your application. Applications today have all kinds of testing interfaces, and you want to make sure that each of those interfaces is taken into consideration during design itself.
The important aspect here is that you want to start your testing very early in the cycle and you want to test at the code level as much as possible; you want to minimize the validation you leave for the end-to-end cycle. The execution view is where we continue to see the adoption of cloud and the multi-channel execution that is happening in the industry. Last but not least, the analysis view is where some of the greatest improvements have happened in recent years, in terms of machine learning and analytics. That information is continuously fed back into the design view. It is a continuous cycle: what we design today needs to be optimized again and again as we continue to learn from the execution and analysis tools available in the market.
Just to recap what I talked about in the last couple of slides, DevOps has become one of the key things. Working with Fortune 1000 companies, we see that release frequency is one of the key things they want to increase, because time to market has become a top priority for many organizations. To achieve that, DevOps has become key, and the layered automation approach is one of the enablers of the DevOps movement we are seeing in the industry.
Another concept we have seen in the recent past with many enterprises — and this was also mentioned in the recent World Quality Report — is a big movement from the testing center of excellence to a concept called the center of quality engineering, or test excellence center. What it means is that the centralized responsibility and authority that used to be concentrated in one central organization is slowly shifting away, so that each group can decide how they want to execute and what level of testing they want to do. It doesn't mean the centralized body is going away, but the responsibility is shared, and the centralized body — the test center of excellence — focuses more on providing best practices and acting as the tool and policy authority.
They continuously evaluate various tools and technologies and provide guidance to the teams actually working on the projects. Along with that, they provide what I would call guidance or governance. But at the end of the day, the individual project team is responsible for deciding and enabling the level of automation they want — whether that is automation at each different layer or executed in a more ad hoc way. All that is designed within each test center of excellence. This big shift is happening because of agile: so many enterprises are moving from waterfall to agile, and these concepts are enabling that transformation on the quality engineering side.
This is one of the critical slides, and it is the primary focus of today's topic. It highlights two or three points; let me take them one by one. As you can see in the layered automation pyramid — what we call the right pyramid for quality engineering — it indicates that most of your automation should be closer to the code, which is basically the unit layer. Indirectly, it's saying that you want as much test automation as possible at the unit test level, decreasing as you go from bottom to top. The UI and end-to-end test automation should be somewhat limited, while most automation should be at the unit test, component test and integration test levels.
Although these layers may seem like separate disciplines, a lot of tools and frameworks now enable you to achieve this automation using a single framework. Thomas will talk about how Tricentis achieves this kind of pyramid structure and how the tool enables you to automate at the various layers. The other benefit you get is in test execution: as you're probably aware, unit tests and component tests — what we call headless tests — are much more efficient to execute and less dependent on other components. As you go higher up the pyramid, the dependencies increase and test execution becomes slower. You want to make sure that in your strategy you put more automation towards the bottom of the triangle than towards the top.
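To make the layering concrete, here is a minimal sketch in Python (the business rule and names are invented for illustration): the same rule that would need a browser, a deployed build and test data to verify through the UI can be checked at the unit layer in milliseconds, with no environment dependencies at all.

```python
# Hypothetical business rule: a volume discount on an order total.
def apply_discount(price: float, qty: int) -> float:
    """10% off for orders of 10 or more items."""
    total = price * qty
    return total * 0.9 if qty >= 10 else total

# Unit-layer check: no browser, no backend, no environment setup.
assert apply_discount(5.0, 2) == 10.0                     # no discount
assert abs(apply_discount(5.0, 10) - 45.0) < 1e-9         # 10% applied
```

Verifying the identical rule through the checkout UI would sit at the top of the pyramid: slower, flakier, and dependent on every system underneath.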
I listed some other business benefits here. This pyramid also allows you to do In-Sprint Automation, and you can do root cause analysis as quickly as possible, because when test cases fail at a given layer, you know exactly where the failure is and where the problem could be. It also allows a lot of automated test generation: we've seen many tools available to generate automated test scenarios for unit tests, component tests and integration tests, and now there are tools available to generate them even at the visual test and GUI test level. With this, Sarah-Lynn, I'll hand it over to you.
Sarah-Lynn: Thanks so much Siva. I would like to turn the presentation over to our next speaker, Thomas Stocker.
Thomas: Hi. Thanks Sarah-Lynn, thanks Siva. Siva just presented what the test pyramid should look like, based on a stable foundation, but what we actually see today is that the majority of testing effort still goes into end-to-end testing, making up half of the testing budget or more. Siva already mentioned the drawbacks: such end-to-end tests require a fully functional system landscape, and providing these systems in compatible versions in a testing environment is a huge challenge. What's more, they are very costly and time consuming: compared to tests at the integration level, they are about four times as expensive and take five to ten times as long. And the state of the system under test is checked at a very late stage, which means we find problems very late in the process.
Bottom line: the pyramid as it stands now, on its head, makes for a pretty unstable geometric body. The consequence of these downsides is that we should try to detect errors at the lowest possible test stage. The question now is, how can we invert this upside-down test pyramid to get to that point? I want to share three key points that we at Tricentis think are very important for doing so. I will start with the first, probably most important step from our point of view, which is to design your test cases and determine the ones you truly need. That's a no-brainer, you might say — agreed? But what we see on average is a redundancy of around 60% in test case portfolios, even though the coverage of the business risk rarely exceeds 50%. What we provide is our optimization functionality, which can help you overcome that very issue.
All right, let me quickly reflect on the situation we have to deal with. With an increasing number of tests, we hopefully also increase the level of coverage. That's why we continuously add tests, if they make sense: we improve our product, add new features, and probably introduce new regression tests for them. But at some point we reach a critical limit where adding new tests to our portfolio no longer makes sense, for reasons such as time or budget constraints. The time and budget needed to exhaustively test all possible situations is pretty much infinite, which is why we have to decide what to focus on. As a rule of thumb, there's the 20/80 rule: with around 20% of the right test cases, you can already cover around 80% of your risk. Tricentis Tosca's optimization capabilities now support you in finding exactly the right test cases you really should focus on.
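The budget argument is easy to make concrete. In this sketch (the parameters are invented for illustration), just four modest test parameters already multiply into dozens of combinations, which is exactly why techniques like equivalence classes and risk-based selection matter:

```python
from itertools import product
from math import prod

# Hypothetical parameters of a checkout flow under test.
parameters = {
    "browser":  ["Chrome", "Firefox", "Edge"],
    "locale":   ["en", "de", "fr", "ja"],
    "payment":  ["card", "paypal", "invoice"],
    "customer": ["new", "returning"],
}

# Exhaustive testing means the full cartesian product of all values.
exhaustive = list(product(*parameters.values()))
assert len(exhaustive) == prod(len(v) for v in parameters.values())
print(len(exhaustive))  # 3 * 4 * 3 * 2 = 72 combinations
```

Add a fifth parameter with five values and the suite grows to 360 cases; the combinatorics, not the testers, set the limit, so a selection strategy is unavoidable.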
Tosca test case design is the first thing I want to talk about; that's part of the optimization capabilities. It enables you to put your test cases into a logical structure, with test sheets that allow you to create all combinations of possible test cases required to ensure the desired test coverage, and with equivalence classes that serve as a starting point for reducing the number of test cases. The final test sheet is then assigned to a test case template, so that all required test cases are instantiated for execution with only a single click. I prepared an easy example to explain it. Let's say your system under test is supposed to cook spaghetti al dente. To achieve this, it requires preconditions — you have to have raw spaghetti — and you have to execute a certain set of actions to get your spaghetti al dente, such as boiling water.
Test case design now allows you to describe this process and decide what is really necessary and what has to be verified. For that, we use the concept of the straight-through, which is basically nothing other than a happy path: actions that will definitely lead to the desired result when executed properly. Based on this straight-through, we then vary one parameter at a time until we reach the maximum coverage possible. This all happens in one central place and can afterwards be populated into hundreds of test cases.
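The "vary one parameter at a time" idea can be sketched in a few lines of Python. This is my own illustration of the base-choice technique, not Tosca's internal algorithm, and the spaghetti attributes are invented:

```python
def base_choice_tests(straight_through, variations):
    """Generate test cases from a happy path by varying exactly
    one parameter at a time (base-choice coverage)."""
    yield dict(straight_through)              # the happy path itself
    for attr, values in variations.items():
        for value in values:
            case = dict(straight_through)     # copy the baseline...
            case[attr] = value                # ...and change one thing
            yield case

# Happy path for the spaghetti example from the talk.
happy = {"water": "boiling", "pasta": "raw", "minutes": 9}
# Other equivalence-class representatives per parameter.
variants = {"water": ["cold"], "minutes": [2, 30]}

cases = list(base_choice_tests(happy, variants))
print(len(cases))  # 1 happy path + 3 single-parameter variations = 4
```

Each generated case differs from the baseline in exactly one attribute, so a failure immediately points at the parameter that caused it.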
With this approach, you keep the overall number of test cases low while maxing out your test coverage. This logic helps you be faster and more consistent when building up your test case specification. Of course, it comes with other benefits too, such as the possibility to really focus on business risk instead of simply the number of test cases. What's more, you can achieve a substantial reduction of the overall number of test cases, especially by removing redundant tests. Last but not least, you get heavily reduced maintenance. Why? Because you have everything in a central place — a central test sheet where you maintain your combinations — and with one single click you can instantiate the test cases. Even if something changes, you simply instantiate them once again.
That was the first step: design your test cases. Of course, completely avoiding end-to-end tests is not an option; we know that. But end-to-end test cases can be reduced to a minimum, and that's what we strive for. With simulation, or virtualization, of the connected systems, it's possible to reduce the complexity of complex system landscapes. We can physically decouple them and break them up into standalone systems that still work as if they were fully connected to the landscape. That's what virtualization is basically about. With this approach, tests that previously had to be conducted as end-to-end tests can now be treated as system-internal tests, so to say. Let's have a closer look at the situation we have to deal with. Here is a representation of a daisy-chained network, and a typical enterprise system landscape is pretty similar to that. The target application is not a standalone app you can use for processing data, but rather part of a whole environment, connected to countless additional applications, consuming several services and so on. This leads to the fact that on average more than thirty different systems are required to be up and running for development and testing. Guess what — most of the time they are not. Some may currently be offline, others don't exist yet or are still being developed.
If you're in this situation, it's simply not possible to sustainably run your end-to-end tests, and from a testing perspective you could say: welcome to tester's hell. But of course, there is a cure. The newest member of the Tricentis family, Tricentis Orchestrated Service Virtualization, provides modern technology to decouple such systems for development and test purposes. Customers can decide, at the level of each business scenario, which systems they want to physically integrate in the test and which ones to simulate. So far, many customers have hesitated to adopt the benefits of service virtualization in general. Why? Because in the past it appeared to be a very technical task, requiring programmers to set up the virtualization and run it. We at Tricentis try to overcome that issue: we try to lift service virtualization up to the business level, making business testers who have no programming skills productive.
Here's an example. You have the testers on the left-hand side, the target application in the middle and the required additional systems on the right. You've got several systems your environment has to talk to; Tricentis Orchestrated Service Virtualization now lets you record these systems so you can simulate them afterwards in the absence of the actual system. During your test execution, your application communicates with Orchestrated Service Virtualization only. How will this support you in shifting testing left, you might ask? I would like to explain this with a symphony orchestra example. Let's say you are playing in an orchestra, playing a certain instrument. Before practicing with the whole orchestra, you will probably practice alone at home and mock the rest of the orchestra with a recording, for example. Next time, you meet a few others to practice the direct interaction, and finally you go to the full rehearsal.
The same is true for testing. OSV now enables you to replace part of your orchestra with recordings, so you can focus testing on a certain piece of your application. We often refer to this approach as divide and conquer, so to say. Each team can now cut out dependencies and focus on its product only. There are several development teams producing several parts of your product, and they do their testing within their sprints; the goal is to make them independent from the rest of the environment, at least in certain testing stages. This gives you a tool for isolated component testing earlier in the delivery cycle, and therefore you get early feedback and better quality before you move on to the next stage.
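The record-and-replay idea behind service virtualization can be sketched in a few lines. This is a toy illustration of the concept, not the OSV API; the service and function names are invented:

```python
class RecordedService:
    """Replays responses that were once captured from a real dependency."""
    def __init__(self):
        self._recordings = {}

    def record(self, request, response):
        self._recordings[request] = response

    def call(self, request):
        try:
            return self._recordings[request]
        except KeyError:
            raise LookupError(f"no recording for {request!r}")

def quote_total(service, sku, qty):
    """Application logic under test; it talks only to `service`,
    so a recording can stand in for the live backend."""
    unit_price = service.call(("price", sku))
    return unit_price * qty

# Record once against the real system, then replay in every sprint test.
stub = RecordedService()
stub.record(("price", "SKU-42"), 19.99)
total = quote_total(stub, "SKU-42", 3)   # no live backend needed
```

Because `quote_total` only depends on the `service` interface, the same test runs unchanged against the recording during the sprint and against the real dependency in a later integration stage.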
Let me show you how this could look. Let's say you're still developing your application. Most dependent systems are not yet available, possibly still in development as well, and by using Tosca you can now remove those dependencies and still be able to run your tests against your product.
In the next stage, the integration testing phase, you add one or several environments and test the interaction between them. But your product is already fairly well tested at that stage and has much better quality, which is why at this stage we focus on integration tests only. Then we enter the user acceptance phase, where all components interact with each other. The number of tests you now have to run can be massively reduced, as you already enter this stage with a certain confidence that the quality of your system is acceptable. Finally, there may be an additional production stage, where you run your end-to-end tests against the whole system to make sure your product works as expected in a production-like environment. And that's basically it. What does this mean in numbers? I've got a chart here for you. Following the approach of the upside-down testing pyramid we talked about initially, the distribution of defects discovered per test phase looks something like the black parts of the chart we see here. The majority of defects is found in the integration phase, but system and acceptance tests still reveal a fairly high number of defects. Successfully implementing Tricentis OSV leads to a substantial shift of these numbers by at least one phase to the left. That's important — that's crucial. Discovering the vast majority of the issues at least one phase earlier saves you a lot of money, since the cost of finding defects increases with every test phase. It lets you reduce the average cost of defects by almost 40%, which is really remarkable.
These were the first two items; one is left. We have talked about designing our tests to reduce redundancies and focus on business risk. We have presented the possibility to shift testing left with Tricentis Orchestrated Service Virtualization. One last thing I'd like to propose for successfully flipping the test pyramid: use the closest access to the tested logic. What do I mean by that? If we look at today's distribution of testing approaches, we see that the vast majority of tests are still done manually. The small part that is covered by automated tests still mainly focuses on UI tests. But guess what — this will change. We can see it's already changing.
The reason automation rates are that low is that automation with most tools at the moment requires developers. It does not with Tricentis Tosca. What we expect is that automation rates will increase dramatically, up to 80% or even 90%. What is most significant here is that the majority will be API-based tests, which brings me to the initial statement: API testing uses the closest access to the tested logic. Instead of going through an additional abstraction layer such as the UI, we run our tests directly on the service layer. That's the basic idea.
There are strong arguments why you should focus on API testing instead of UI testing. Let's say this is a typical sprint timeline. As a product manager I can confirm this to be true — I see it on a daily basis: the UI is very likely to change throughout the development process, and only right at the end of the sprint does the UI get its final look and feel. This makes UI testing almost impossible throughout the sprint and shifts it to the very end, with the risk of failing without having a chance to fix our product in time.
APIs, on the other hand, are pretty stable right from the start and change only slightly over time. This makes them the perfect candidate for early and progressive testing. Besides that, API tests can be created way faster than UI tests, as you do not have to take care of UI synchronization or difficult environment setups. What's more, they run in a fraction of the time required for UI tests. Let's have a look at what a test case looks like in Tricentis Tosca.
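What testing "at the service layer" means can be sketched in plain Python. The endpoint handler below is hypothetical (in a real app it would sit behind your web framework's routing), but it shows the shape of an API-level test: exchange payloads and assert on status and body, with no browser and no UI synchronization.

```python
import json

# Hypothetical service-layer handler for creating an order.
def create_order_endpoint(body: bytes):
    """Return (status_code, response_dict) for a JSON request body."""
    payload = json.loads(body)
    if payload.get("qty", 0) <= 0:
        return 400, {"error": "qty must be positive"}
    return 201, {"order_id": 1, "sku": payload["sku"]}

# API-level tests: pure request/response, runnable from day one.
status, resp = create_order_endpoint(b'{"sku": "A1", "qty": 2}')
assert status == 201 and resp["sku"] == "A1"

status, resp = create_order_endpoint(b'{"sku": "A1", "qty": 0}')
assert status == 400
```

Both checks exercise the same validation logic a UI test would, but they are immune to the layout churn Thomas describes during the sprint.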
Here's what a typical UI test looks like in Tricentis Tosca, on the left-hand side, with the corresponding application on the right-hand side. As you can see, they look pretty similar; it should be straightforward for anyone to determine what's really going on within the test case. And here we have a typical API test case in Tricentis Tosca. The actual definition of the web service, on the left-hand side, may look a bit frightening and technical. But if you look at the test case itself, it should still be possible for anyone to get the idea of what is done here.
Now, let's put the API test and the UI test side by side. Don't they look almost the same? Seriously — each is just an abstraction layer in human-readable form between our application and the test. The bottom line is: as a test engineer, you should not shy away from the idea of creating API test cases just because you think it's a technical task that can only be done by developers. Tricentis Tosca provides an easy-to-use interface that allows you to create, run and maintain your test cases; no matter which technology, UI-based or not, they always look basically the same.
We saw in the initial chart that manual testing will almost vanish, but a certain part will still remain. What will that look like? Let's pretend this is our system landscape. Throughout our environment, issues appear here and there, which leads to certain areas of risk that should potentially be covered by testing. That could be performance issues, usability issues, security issues — you name it.
With our automated regression tests, we are mainly doing specification-based testing. What do I mean by that? Basically, we have a certain baseline, or expected behavior, of our application, and we check whether the actual behavior matches our expectations. With this approach we can cover a certain number of risk areas, sure. But as already mentioned, we cannot test all the risks exhaustively — because of time or budget constraints, for example. The gray circle represents this specification-based testing: basically, our automated regression tests running on a regular basis. This leaves us with quite some risky areas uncovered, partially because we don't have enough resources to cover them, but partially also because we simply don't know about these areas. This is where exploratory testing kicks in.
In contrast to specification-based testing, the tester here is equipped with a testing charter, which basically states the area of the application to focus on. All the rest — the path through the app the tester takes, the details he or she focuses on — is up to the tester's creativity and curiosity. That makes exploratory testing the perfect addition to specification-based regression testing, as it covers risks you would otherwise never have on your radar. That's why we at Tricentis call it the agile testing law, which is basically an equation: properly tested equals checked by automated regression tests plus explored through exploratory testing. That's basically it from my side; with that, I want to hand over to Sarah-Lynn again.
Sarah-Lynn: Great, thanks so much Thomas, and thank you Siva as well. At this time we would like to get your questions answered; as a reminder, you can ask a question at any time in the Q&A window at the center of your screen. Pulling up the first question here — Thomas, I believe this one is for you. Does your offering work for microservice or middleware architectures?
Thomas: Very good question. The world is moving forward fast, we see that: we have microservices, we have containers, and cloud-native architecture is on the rise. But classic middleware still has its right to exist, we know that. We are aware of the need to virtualize both architectures, which is why we support the testing and virtualization of all kinds of API gateways and enterprise service bus implementations. So, long story short: yes, we support both.
Sarah-Lynn: Great, thank you. And Siva, here's a question for you. We have automation that needs to be run against multiple test environments; how do you manage the test data for each environment?
Siva: This is also a good question. When automation needs to run against multiple environments, we see it as a constant challenge, because you may have to run the same automation against the development environment, against multiple QA environments, and even, taken to the extent, against the production environment as well. Our recommendation — what we have seen work — is that if you need to run your automation against multiple environments, the framework needs to be capable enough to pick and choose the test data you want to use. There are also test data manager concepts, where tools allow you to dynamically create the data based on the environment you want to run against. The Tricentis Tosca suite, which we use for many of our engagements, supports that, as do other automation tools I have personally used that support a data-driven approach. You want to make sure that the framework you use — or anything you end up developing in house — gives you the option to data-drive the entire automation, so that you can pick and choose the test data you want to use. Or you can even go to the extent of using service virtualization, or concepts like a test data manager, that allow you to create the necessary test data on the fly based on the environment that you have.
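A minimal sketch of the environment-keyed data-driven idea Siva describes (the environment names, URLs and users are invented; in practice this data would live in config files or a test data management tool):

```python
import os

# Hypothetical test data, keyed by target environment.
TEST_DATA = {
    "dev":  {"base_url": "http://dev.example.local",  "user": "dev_user"},
    "qa":   {"base_url": "http://qa.example.local",   "user": "qa_user"},
    "prod": {"base_url": "http://prod.example.local", "user": "readonly"},
}

def data_for(env=None):
    """Resolve the data set for the target environment, defaulting to
    the TEST_ENV variable so CI can switch without code changes."""
    env = env or os.environ.get("TEST_ENV", "dev")
    try:
        return TEST_DATA[env]
    except KeyError:
        raise ValueError(f"no test data defined for environment {env!r}")

# The same test script runs against any environment unchanged:
qa_data = data_for("qa")
```

The key design point is that the scripts never hard-code data; they only ask `data_for()`, so adding an environment means adding one dictionary entry, not touching every test.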
Sarah-Lynn: Perfect, thanks for sharing Siva. And actually another question for you, Siva. What are the challenges in adopting In-Sprint Automation, and how do we overcome them?
Siva: This is an area of ongoing improvement. There are multiple solutions available; again, it varies depending on your particular need, because In-Sprint Automation requires a lot of coordination between the teams you end up working with. For example, in some implementations there is a handshake with the development team: you want certain object properties defined, with a good understanding between the development team and the automation team. When you start writing the automation, you want to make sure you are using those specific properties.
The development team also understands not to change certain properties, because that is one way to achieve In-Sprint Automation, or at least reduce the risk of the test automation scripts going out of sync. The other thing we have seen work well for In-Sprint Automation is to start your automation at the code level as much as possible, like we discussed in today's session, and to share it so that it's used by the development team as well.
That way you increase reusability, and at the same time you validate that the script works in the development environment too, so if there are any failures, you get to know well in advance. Those are two or three techniques, but there are quite a few options available. We'll be happy to share more if someone has a specific interest in this particular problem.
Sarah-Lynn: Great, thanks so much Siva. Thomas, a question for you — actually, a couple. The first: how do I get started with API testing? I understood that Tricentis provides an easy way to create and maintain API-based tests, but where do I get the specification for my tests? I have my user interface in front of me; how do I know which services are underneath?
Thomas: Thanks Sarah-Lynn. Interesting question, sure. We talked about it: so far, API testing has been a developer-only topic, yes, it's true. We try to change that — with Tosca, we try to really shift this to business testing. In a world of DevOps, I would say it's an excellent opportunity to bring dev and test engineers closer to each other.
Bottom line: to start with, get in touch with your developers and ask them for the possible endpoints you want to test against. They will be happy to support you with that, because they get rid of certain tasks when you take over API testing. Besides that, we are already working on an auto-discovery functionality that will make it easier for you to directly extract API tests out of your existing UI test kit. Stay tuned on that.
Sarah-Lynn: All right, great. Thanks Thomas. I believe we have time for one more question; Siva, I think this one's for you. I like the pyramid approach that you showed in your presentation — are there any open source frameworks that support automation at these different layers?
Siva: This is a good question. I've been working with many enterprises and midsize companies as well, and we see a lot of innovation happening in open source technology. In one of the slides, you might have seen that one of the success factors is how we can leverage open source technologies. It comes with its own caveats: certain tools may or may not have the right support.
That's why we recommend, depending on your need, leveraging products like Tosca or other solutions. But to answer your question: yes, there are open source solutions available. In fact, Infostretch also has an open source framework called QMetry Automation Framework, which we'll be happy to share. Many of the frameworks do support the pyramid approach, meaning you can use the same framework for automating everything from the unit tests up to the UI tests.
Across the entire pyramid of layers that you saw, you can use an open source framework like QMetry Automation Framework to automate the various layers. That not only helps you reuse the test assets you build across the layers, but also helps from an overall organization perspective: your learning is reduced to one specific tool set.
In terms of standards and reporting, everything goes through a single framework; that's the benefit you get. And there are quite a few other frameworks available that support the entire pyramid structure you need, alongside the commercial tools as well.
Sarah-Lynn: Thanks so much Siva. I believe that's all the questions we are able to answer at this time. Be sure to check out and subscribe to DTV, a new digital transformation channel that brings in industry experts. Very cool. Many thanks to our speakers, and thank you everyone for joining us today.