
How to Automate Testing for Next-Generation Interfaces

Including Web, Mobile, and Applications like Siri and Alexa

Moderator: Hello and welcome to today's web seminar, How to Automate Testing for Next-Generation Interfaces, Including Web, Mobile, and Applications like Siri and Alexa, sponsored by Infostretch and featuring Manish Mathuria, Founder and CTO of Infostretch, and Sanil Pillai, Director of Infostretch Labs. I'm your host and moderator, Josiah. Thank you for joining us today. Before I hand over to the speakers, let me explain the controls of this console. If you experience any technical issues, please send a note through the question field beneath the panels and we'll be on to help as quickly as possible. Responses can be read in the Q&A panel to the left of the slides, which is also where you can submit questions.

We welcome all comments and inquiries during today's event. Feel free to ask as many questions as you'd like; we'll answer as many as possible during the Q&A session after the presentation. Today's event contains polling questions and a video, which can be seen within the slide view. To ensure the best experience possible, please close any unnecessary applications that may be running in the background. At the bottom of your console is a widget toolbar, which opens panels. The console opens with three on your screen: the speaker, Q&A, and slide panels.

All of these panels can be moved around and resized to your liking. If you accidentally close one, click the respective widget to reopen the panel in its original place. Hover your cursor over these icons and a label will appear identifying each widget. Through these icons you can also download the speaker's slides as well as share the event with friends and colleagues. This event is being recorded and will be made available for on-demand viewing. Once the recording is ready, you'll receive an email with instructions to access the presentation and slides on demand. With that said, I'd like to pass control to our first speaker, Manish.

Manish: Hi. This is Manish from Infostretch. Good morning, good afternoon, and good evening, everyone. Today, we want to throw some light on the impact of next-generation interfaces on our traditional testing and test automation models. As we all know, we are really moving into a hyper-connected world. We want to begin with an emphasis on how these new interfaces are impacting our lives in terms of testing and test automation, and also touch upon how these systems are impacting the whole SDLC as well. Of course, we then want to talk about the challenges involved around test automation. Today we'll mainly cover two aspects of these new-generation interfaces: mobile and bots.

On the bot side, we'll talk about voice as well as text. So let's go into detail on what we mean by a hyper-connected world. We know that in the past, we had a mobile device and there was a cloud behind it where we would transfer information, connected via some sort of Internet. In this hyper-connected world, what we are seeing is that the mobile app is becoming the hub, literally a control panel for many devices. If you think about it, we are now controlling our thermostat at home. We are syncing with our Apple Watch or other smart devices. We are collecting data from many different sensors. We are using the camera like never before, for many different things including QR code scanning and PDF document creation.

So this new generation of connectivity, these hyper-connected apps, brings different challenges in terms of how we have to test and how we can create test automation around it. Now, everyone talks about omni-channel offerings. The majority of enterprises we have seen have gone through mobile 2.0, 3.0, whatever we call it. They still have their traditional web presence, but at the same time they are now thinking: how can I leverage smart watch apps? How can I leverage new-generation technologies like virtual reality? Bots seem to have picked up pretty well as a way to have seamless interaction with customers.

A lot of retailers and new-generation healthcare companies have tried to leverage smart TV applications as well. These are no longer just content-display mechanisms; they are being used interactively in many different use cases. So what we wanted to do today is talk about a subset of the challenges involved in these kinds of use cases and how we test them. Let's look at some of the problem statements here. First, let's ask everyone a question. I wanted to know: how many of you are currently using any voice bot interfaces in your products or services? Many enterprises have embraced them, some are toying with them, and many are going through try-fast, fail-fast ideation.

It would be very interesting to know your thought process around it, and as we are seeing it, the responses are coming in almost equal. So some are trying it, some are not, and some are not yet planning to use it. Eventually, it will come around; it's a very interesting data point we are seeing. As you can see, these new interfaces are in their infant stages at this point, but a lot of enterprises are toying with them. So let's look at some of the challenges we are seeing, which are very simple in nature even though they bring a lot of complexity in terms of testing and test automation. Many of these devices are connected wirelessly, as I'm sure every one of you has experienced. Now, how do you test, in an automated fashion, connect and disconnect scenarios when there's interference from other devices? If the device uses a different protocol, how are you going to test it? How is syncing happening? How is online/offline handover happening? How are interactions being generated? Traditional toolsets do not help in these kinds of scenarios. So how do you test them, and how do you automate them? Another question: many use cases use the mobile phone as a camera, so how do you test them? How do you test the algorithms behind it? How do you test the OCR capability for those use cases?

That is another challenge we are seeing. Now, most of the applications we see are tied to location; in one fashion or another, they try to leverage location-aware data. Take, for example, Uber. How do you test those kinds of scenarios using traditional testing methodologies? How do you simulate different conditions? How do you simulate walking through locations? What is happening is that the testing toolsets are maturing, but the complexity of use cases is maturing faster, and the tools have to catch up. Let's ask one more question: how are you currently testing your voice bot interfaces? It would be very interesting to know how you are doing it.

I'm sure it's not applicable to some of you, but some of you are using them. As you can see, automation is fairly flat, and I can assume automation will stay flat here because the toolsets are practically non-existent or very custom-made, whereas the manual process is still relevant for whoever is using these interfaces. As you can see, the data is trending that way: manual is more common compared to automation. So this is what we try to address in this webinar. Though it looks very simple, date and time are natural things to test, but the complexity of locale, date, and time goes way beyond simple checks in terms of use-case testing and validation.
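To make the locale point concrete, here is a minimal Python sketch of why a single backend timestamp needs several locale-specific expected values in a test suite. The format strings and UTC offsets are illustrative assumptions, not something from the talk.

```python
from datetime import datetime, timezone, timedelta

def render_local(utc_dt, offset_hours, fmt):
    """Render one UTC instant in a target UTC offset with a locale-style format."""
    local = utc_dt.astimezone(timezone(timedelta(hours=offset_hours)))
    return local.strftime(fmt)

# One backend timestamp; two presentations a test suite would need to validate.
stamp = datetime(2017, 3, 1, 18, 30, tzinfo=timezone.utc)
us_view = render_local(stamp, -5, "%m/%d/%Y %I:%M %p")  # US style, UTC-5
eu_view = render_local(stamp, 1, "%d.%m.%Y %H:%M")      # European style, UTC+1
```

Note that the two views disagree on field order, separators, and even the clock convention, which is exactly the kind of variance that simple string comparison in a test will miss unless each locale has its own expected value.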

So in today's webinar, we'll also touch upon how you can do automated testing around that. Now, of course, as you know, the iPhone and others are now doing biometric authentication, which raises the question of how you run automation around that. So that's another area we wanted to target and talk about today. We can think through these scenarios; there are many, many more, and these are just a small subset. To deep dive into them, let me introduce Sanil Pillai, who drives our labs.

Sanil: Thank you, Manish. Hi, everyone. What Manish just covered gives you an idea of how complex mobile applications can really get, especially with the different interface points that mobile apps have today. Not only is it complex from a development perspective, but very soon, once we actually get into testing, we realize it gets equally complex, right? So let's start looking at some of these challenges and the strategies around testing them. We'll start with mobile and then move to bots. So what is really the challenge in mobile test automation? The key issue is essentially around simulation. Simulating the behavior of a simple mobile app is pretty straightforward, and there are known patterns for doing it.

But once you add different sensors around it, like location, accelerometer, etcetera, it gets very complex to figure out what the right simulation strategies and the tools around them should be. Then there are existing tools which are traditionally used for automation. The problem is that although they've been built for mobile phones, they've been built from the perspective of functional automation, and they don't really do well when we need to bring in sensor-level simulation. The third part is that if you want to do simulation around the hardware of a mobile device, developers often end up instrumenting the code, writing test-specific variants of it, and testing against the binaries built around that.

That works up to a point, but beyond that it gets really complex and really unwieldy. And finally, it's a mobile application, so how do you handle interrupts? How do you handle an incoming phone call? How do you handle the device going offline? How do you handle interference, etcetera? All of these put together make testing mobile applications in the context of real usage of new-generation interfaces really complex, and that is why new strategies are so important to help manage them. There are multiple ways of testing this. What you see up here on the screen is a glimpse of the different aspects that need to be tested.

You have everything from something like Touch ID all the way to GPS and BLE. Each aspect of testing for the mobile app has to be done in a very discrete manner. Sometimes they can be combined together; sometimes they really need to be tested individually. All put together, testing this manually, as you can see, is pretty unwieldy. You can get some coverage, but if you want really good coverage across the different aspects, it can get cumbersome and unmanageable. So what we've done, and one of the approaches to solving this, is to use a mobile automation library.

This is a library which resides as part of your mobile application package and does not need any code-level changes. It is a package which can be controlled by a backend interface, as you can see on the right side. At runtime, during automation, you can decide what aspects of the testing you really want to control. For example, you could set a cadence and say, "Okay, my camera needs to turn on and off every five minutes," and capture a specific image which you inject into your testing parameters. All of this can be controlled by this mobile automation package, and the configuration can all be driven from the backend.
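As a sketch of what backend-driven configuration for such a simulation library could look like: the key names, values, and file names below are illustrative assumptions, not a real product API.

```python
import json

# Hypothetical configuration pushed from the backend to the on-device
# simulation library; every field here is an assumption for illustration.
SIM_CONFIG = json.loads("""
{
  "camera":   {"enabled": true, "cadence_seconds": 300, "inject_image": "qr_sample.png"},
  "location": {"enabled": true, "route_file": "warehouse_walk.gpx"},
  "touch_id": {"enabled": false}
}
""")

def active_simulations(config):
    """Return the names of the sensor simulations the backend switched on."""
    return sorted(name for name, opts in config.items() if opts.get("enabled"))
```

The point of the shape is that turning a simulation on or off, or changing its cadence, is a data change on the backend rather than a code change in the app.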

What that allows you to do, first of all, is bake this into the regular automation that you've built for your mobile application. Secondly, it gives you a level of flexibility and configuration so that a single mobile application, when it is used for multiple scenarios, can be tested with the same package without having to make any code-level changes. And what the suite approach really shows is that this is a very modular suite, so you can add more interfaces in the future as they get added to the mobile spectrum. You can imagine that Touch ID was not really relevant until probably the last couple of updates of iOS.

But because this is a modular suite, you can keep adding more functionality. I'll show you in the next slide an example of how this architecture can be leveraged. But really, the way to look at it is that you have to solve your automation strategy in a modular form, in a non-intrusive form, and in a configurable form, and this strategy allows you to achieve all three together. So let's move forward and take a look at an example of one of these at work. But before we even go there, the other important piece to remember is that whatever automation strategy you use for your mobile application testing and all these peripherals, it cannot live on its own.

It needs to be baked into your current CI approach, for example, because you have built out CI for your mobile application, and you cannot be testing these aspects of mobile in a completely different manner. So whatever binary is created needs to be packaged with the framework and built as part of the same CI process you have in place, and the entire thing should be seamless. Your final report should not only include the functional testing you always had, but also the additional testing you want as part of the process.

So what does one example of such automation look like? Let's take the case of location. Location is pretty ubiquitous in applications, and the example here is a field service application where a particular job is assigned to field service personnel. The requirement is that the job only becomes active when the person reaches a particular location: a form of geo-fencing. Now, to test this manually would mean you need a way of really trying different geo-fencing methods in a real location, which can get really complex when you start doing it manually.

And you cannot really spoof locations very easily. So now you have a manual tester actually physically moving between locations to simulate this environment, or trying to use some kind of spoofing mechanism, which may achieve the purpose up to a point but is really not scalable and is very burdensome. By using this mobile automation framework, you can have this entire test as part of your package in a very configurable form. What this looks like at runtime is what I will show you in the next slide. What you see here is an animation of the testing that's happening through automation.

As you can see, the dot represents the location of the person, which is changing, and this is done through the automation mechanism. As soon as the person reaches a particular point, which you can see through the highlight, the next screen comes up automatically, which basically indicates to the automation that it is now time to trigger the use case. So this entire test, changing location from one point to the other and bringing up the next trigger point, which is the screen in this particular case, was all done through automation. If you had to do it manually, it would have taken far more effort, and remember that this is just one flow that we tested.
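The core of that geo-fence flow can be sketched in a few lines: replay a route of simulated GPS fixes and detect the first fix inside the fence, which is where the job screen should trigger. The route, fence center, and radius below are made-up test values.

```python
import math

def distance_m(a, b):
    """Equirectangular approximation: accurate enough for geo-fence radii."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    return math.hypot(x, lat2 - lat1) * 6_371_000

def first_fix_inside(route, center, radius_m):
    """Replay simulated GPS fixes; return the index of the first fix
    inside the fence (where the UI should trigger), else None."""
    for i, fix in enumerate(route):
        if distance_m(fix, center) <= radius_m:
            return i
    return None

# A simulated walk toward a 100 m fence around (37.0, -122.0).
route = [(37.010, -122.0), (37.002, -122.0), (37.0005, -122.0)]
```

An automation harness would inject each fix into the device's location provider and assert that the job screen appears exactly at the step this function reports.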

But you can really scale this out to many different flows in a very easy manner. So this is an example of how a mobile automation framework can help you automate these interfaces. Now, having talked about mobile, which is pretty important, let's move on to another channel which is getting pretty pervasive now, and that is essentially bots. In the next few slides, I'd like to cover the different nuances of why bot testing and automation really are different, what the different challenges are for the user, and what the strategies for testing them would be.

So when you look at bots, just to put some terminology in place, there are two different kinds: a chatbot, also known as a textbot, a text-based bot, which you can see on the left-hand side; and the other is a voice-based bot. In both cases, you have an automated mechanism on the backend which is listening to queries and responding to them. On the frontend, your interface is either through text or through voice. In the next few slides, I will explain some of the nuances between the two and why we need slightly different strategies for testing these bots.

One of the main things that bots introduce, and what has changed, is that the UI has now become blended with the parent host system. What do I mean by parent host system? Bots typically reside in a parent host: it could be a Facebook Messenger bot, a Telegram bot, or a Skype bot. In this case, your normal conversation channels, like Skype, Telegram, and Facebook, become your parent host. The UI for the bot is really the UI of those systems; there is no real specific UI these bots need to build out. For example, a bot residing in Skype is just a contact in Skype.

A bot residing in Facebook is just a contact in Messenger. The testing for these bots is more around flows and less around UI. Then you have UI which is conversational in nature. Conversational UI is different from a regular web-based or mobile UI because it's less rigid and more fluid; it can change on the fly, sometimes based on cognitive parameters, and this needs to be accounted for during your testing process. So what are some of the nuances of bot testing? It is best to look at bot testing from two different aspects.

One is common factors, and the other is specific factors for each bot. Every bot needs to be tested for two things: intent validation and response validation, and I'll talk about what each one is in the next couple of slides. But then there are also specific factors that need to be kept in mind, things which are very unique to the particular bot being tested. So your testing and automation approach needs to be layered. You need a framework layer which tests for the commonalities across all bots, around intent and response, and then your additional effort should go into building automation for the specific factors of each bot. You don't have to reinvent the wheel every time.

By separating it this way, you achieve what is called modular automation, and it also helps you scale your automation very easily. So let's look at intent validation. Intent validation is all about looking at, listening to, or reading a user's question and trying to figure out what the user is trying to say. Natural Language Processing, or NLP, becomes a very big part of intent validation, and hence your testing for intent validation not only needs to look at keywords processed against the expected intent, but also needs to account for the NLP-based processing itself, which is very important.

Some of the questions we need to ask here are: what is the threshold at which NLP should fail? What is the threshold at which NLP should kick in? What are the boundary conditions for your intents that you need to test for? These are some of the aspects that are very important for intent validation, which becomes part of your common automation layer. Next is response validation, and response validation is really very important, because a response in a bot, as you can see in the animation there, can either be a question that the bot asks or an actual answer that the bot gives back.
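Those threshold questions can be made concrete with a toy classifier. The keyword tables and the 0.7 confidence threshold below are illustrative assumptions standing in for a real NLP service; the test is that the system routes to a clarifying question, rather than a wrong intent, whenever confidence falls below the threshold.

```python
# Toy keyword-overlap classifier standing in for a real NLP service.
INTENTS = {
    "book_movie": {"movie", "watch", "show", "ticket"},
    "weather":    {"weather", "forecast", "rain", "sunny"},
}
THRESHOLD = 0.7  # assumed confidence cut-off for illustration

def classify(utterance):
    """Return (best_intent, score) where score is keyword coverage."""
    words = set(utterance.lower().split())
    score, name = max((len(words & keys) / len(keys), name)
                      for name, keys in INTENTS.items())
    return name, score

def route(utterance):
    """Fall back to a clarifying question below the NLP threshold."""
    name, score = classify(utterance)
    return name if score >= THRESHOLD else "ask_clarification"
```

Boundary-condition tests then probe utterances sitting just above and just below the threshold, which is exactly the kind of case a manual tester rarely covers systematically.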

For example, if the user types in "I wanna watch a movie," most intelligent bots will respond back saying, "Okay, what kind of movie do you wanna watch?" That becomes a question, and eventually it ends up in a response: "Hey, this is running near you. You can probably go and catch a show today." So you have that nuance around responses. Response validation automation can be built around your current backend service automation, because if you think about it, responses are basically data coming back from the backend bot intelligence. So you may have existing backend intelligence, built for other purposes, which is going to be re-purposed for the bot.

Response validation should be built on top of it as a best practice, so you can leverage all the backend intelligence that's already there. Responses can also be auto-triggered. For example, you could have a bot which, on your birthday or on specific days, actually sends out messages saying, "Hey, today there's something you'd be interested in. Do you want to take a look?" So you want to build response validation keeping these in mind. Let's look at some of the very specific aspects of responses that need to be tested.

As you can see, response validation and the automation for it are non-monolithic; they're pretty multi-faceted. You could have the same query laid out but get multiple valid responses back: one response might say "Yes," another might say "Okay," and your validation needs to support all of those. You also need to handle errors and typos coming in and process them the right way for these chatbots. Users can put multiple queries in a single sentence, as it says there, because users are used to conversation and they want to mimic that with the bot.

So your chatbot automation cannot be naïve enough to assume it will be rigid, one question at a time; you need to account for multiple queries in a single sentence. You could also have mixed languages eventually; if you have a global bot, you have to support multiple languages. So the challenges of localization-based automation carry forward to bots as well, just accentuated to some extent. And then you also have to think about response time. When you do bot validations during testing, it is not only about the actual content of the responses coming back, but also the latency of the response.

Because in a fluid conversation, those are very important for a great experience. If you look at these aspects, it's pretty apparent that a very manual way of testing your bot will only help you up to a point; after that, it gets really complex and unmanageable, and that's where automation is definitely indispensable. So what approaches could you take for your automation? You could break this up into two different parts. One is where you try to really follow user actions, which we call imitating user actions. The other option is headless testing.
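Pulling the last few points together, a minimal response-validation check has at least two layers: the reply text must be one of the accepted variants for the intent, and it must arrive within a latency budget. The variant table and the 2-second budget below are assumptions for illustration.

```python
# Accepted surface forms per intent; in practice this table would be much
# larger and likely generated, not hand-written.
ACCEPTED = {
    "confirm_booking": {"yes", "okay", "sure, booked!", "done"},
}
MAX_LATENCY_S = 2.0  # assumed conversational latency budget

def validate_response(intent, reply_text, latency_s):
    """A reply passes only if the text is an accepted variant for the
    intent AND it arrived within the latency budget."""
    ok_text = reply_text.strip().lower() in ACCEPTED.get(intent, set())
    return ok_text and latency_s <= MAX_LATENCY_S
```

A harness records the send and receive timestamps for each turn, so the same assertion covers both the content nuance and the responsiveness nuance discussed above.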

The next couple of slides will give you some strategies around why each one is important and where you would use one over the other. So what is imitating user actions? Here, as you can see on the screen, a bot is being tested, and the testing is actually tied to the UI of the bot. The way this strategy works is that you have some kind of data file, like an Excel spreadsheet or a CSV file, that the user uploads to define the flows. It goes into an automation engine, which could be built on top of any framework, which then runs against your actual bot. Now, it is pretty apparent from this approach that it works well when you only have to test one bot on one particular channel.

Because you are tying yourself to the actual user actions, which can differ across channels. For example, this particular screen that you see could look different on Skype versus Messenger. So you are tying your approach to a particular bot platform. What about the other approach, when you have multiple bots? Let's say you're building bots for multiple platforms: you want them on Skype, on Messenger, on Telegram. What is the right approach? You could take a headless approach to testing. Headless testing is a well-known paradigm in regular mobile and web testing, where you remove the variances in UI from the testing approach and really just test the logic.

That is an approach you can use for testing your bots too. You could have a data file, the same as in the previous case, which is a bunch of flows, and you send it through another bot, perhaps, which is connected to your main bot, which goes and processes all your intents and responses and then creates the final output. The biggest advantage of something like this is when you really don't want to tie yourself down to the UI of a particular bot and you want to open it up to more kinds of testing. Between headless and UI, the way to look at it is that if you want to build a scalable automation framework, it is okay to start with UI-based testing for one bot.
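A headless harness can be sketched as a runner that is indifferent to the channel: `transport` is any callable taking an utterance and returning the bot's reply, so the same flow file can drive a Skype adapter, a Messenger adapter, or a direct API call. The `fake_bot` below is a stand-in for a real endpoint, assumed purely for illustration.

```python
def run_flow(transport, flow):
    """flow: list of (utterance, expected_reply) pairs from a data file.
    Returns the list of failing steps; empty means the flow passed."""
    failures = []
    for step, (say, expect) in enumerate(flow):
        got = transport(say)
        if got != expect:
            failures.append((step, say, expect, got))
    return failures

def fake_bot(utterance):
    # Stand-in for a real bot endpoint (an assumption for this sketch).
    replies = {"hi": "hello!", "movie": "what kind of movie?"}
    return replies.get(utterance, "sorry, I didn't get that")
```

Swapping `fake_bot` for a per-channel adapter is the only change needed to reuse the same flows across platforms, which is the scalability argument for going headless.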

Then you can put a framework in place with a mix of UI and headless testing, wherein in the UI piece you're testing some of the finer UI aspects of the bot. I did mention that we are almost getting to zero-UI paradigms for bots, but some bots, as you'll see, do have specific UI elements sent back as responses sometimes. So UI-based testing can cater to those, while your basic flows and logic, the heavy lifting of the bot, can be tested through your headless approach. That can be a good way to combine the two techniques.

Having looked at the different approaches, one of the most important aspects with bots is how you actually create the test flows for a bot. Remember that bot flows are conversational; these are flows that you want to mimic in a very easy way, and it makes sense for a product manager or a business analyst to be able to easily write these flows in a simple Excel sheet, maybe, and then have an automation tool which can ingest them and start processing those files easily. There are multiple ways of doing it. You can either use plain Excel, as shown on the left-hand side, or you can create mind-map diagrams, which can then be converted to JSON format, for example, and fed into the automation tool.

The important thing to remember here is that the time between creating these files for testing and actually testing should be minimized and automated as much as possible. If you have to sit and convert an Excel-created flow into a bot-specific language for your testing, then a lot is lost in translation, and a lot of complexity is introduced into the process. So you need an approach where you can take these files and feed them directly into your automation tool, and it processes them and gives you the results we talked about earlier. These are some of the strategies we've used to help with bot testing for some of our clients.
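The ingestion step can be as simple as parsing the exported sheet straight into flow pairs for the harness. The two-column "utterance,expected" layout below is an assumed convention, not a standard.

```python
import csv
import io

# A conversation flow authored in a spreadsheet and exported as CSV.
FLOW_CSV = """utterance,expected
hi,hello!
movie,what kind of movie?
"""

def load_flow(text):
    """Parse the exported sheet into (utterance, expected_reply) pairs."""
    reader = csv.DictReader(io.StringIO(text))
    return [(row["utterance"], row["expected"]) for row in reader]
```

Because the parser consumes the export format directly, a product manager can edit the sheet and rerun the suite with no hand-translation step in between.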

So far I've talked mostly about text-based bots, chatbots where the interface is text. Now let's talk a little bit about voice-based bots. The reason this is important is that voice-based interfaces are getting pretty common and are going to become part of a lot of products coming out in the future, so having a testing strategy around them is very important. And why is it important to look at voice-based bots separately from regular text-based bots? The next slide covers some of the nuances. If you look at the complexities of testing a voice bot, all the complexities we've talked about for testing text-based bots still exist.

They carry forward, but then there is another layer of complexity to think about, because now we're talking about speech. When you talk about speech, there are variances in speech which cannot be discounted and cannot be allowed to undermine your test automation. Let's start with accent, for example. In our testing process, when we test a voice-based bot for a client, we find that accents are one of the prime causes of bot failures. No matter how much you've trained the NLP in a bot to take care of different accents, a lot of testing needs to happen around that, and hence your testing needs to factor them in.

In the case of a voice bot, you can also say different things that mean the same thing. For example, you could say, "Yes." You could say, "Yeah." You could say, "True," and so on, and it all really means the same intent. In text, it's very easy to process those; you can map multiple utterances to a single intent. In the voice-based case, not only do you need to do that mapping, but you also need to handle the voice side of the interface in an easy manner.
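The utterance-to-intent mapping itself is a small table; for a voice bot the same table is simply fed by the speech-to-text output instead of typed text. The word list below is an illustrative assumption.

```python
# Many surface forms collapse onto one intent label.
AFFIRMATIVES = {"yes", "yeah", "yep", "true", "sure"}

def normalize(utterance):
    """Collapse an utterance (typed, or transcribed from speech) to a
    canonical intent label, ignoring case and trailing punctuation."""
    token = utterance.strip().lower().rstrip("!.?")
    return "AFFIRM" if token in AFFIRMATIVES else "UNKNOWN"
```

In a voice pipeline the interesting test cases are the transcriptions that arrive slightly mangled, which is why the normalization has to be deliberately forgiving.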

We talked about pronunciation already; punctuation is a very interesting aspect of voice-based testing. It's an idiosyncrasy very specific to voice, because of the way things are said. I've got a couple of examples out there which will give you a feel for why punctuation is important in your testing: the intent could be totally different if you don't process it the right way. Another important factor in voice-based testing is background noise and interference. Remember, your interface point in this case is really your speech and the airwaves, and then some kind of device processes it.

Background noise, which is prevalent all throughout, can really damage any kind of voice-based intent processing, and hence your testing needs a lot of conditioning built in to take care of background noise effects. Then you have the effect of distance. For a real voice-based application to work in an effective manner, it needs to be agnostic of movement by the user, and hence your testing around voice-based interfaces needs to account for that. You cannot expect the user to be sitting ten inches from the device and speaking to it that way.

If the speaker is moving around the room and you do not reflect that in your test automation, you are missing out on a very important test condition. So hopefully this particular slide gives you an idea of why voice-based testing adds another interesting layer of complexity, and hence automation of voice-based testing becomes equally complex. Manual testing becomes pretty unmanageable, and as I mentioned earlier, automation really needs to account for these aspects. The next slides will give you an idea of what something like that could look like. In this particular slide, you can see that you have some kind of device under test.

In this case I'm showing an Amazon Echo. You have a test suite which runs test cases in sequence; it could really be the same test suite that was created for a text-based chatbot. But then you have a layer in the middle which is doing TTS, that is, text-to-speech. You need to be able to feed in the kind of speech you need to process: either you have pre-recorded speech that you run through your automation, or you have text that is converted to speech in an automated manner. That becomes a very important layer of testing, because you need to create speech.
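The TTS-in-the-middle layer described above can be sketched as a small harness. This is only an illustrative skeleton, not Infostretch's actual framework: the `StubTTS` and `StubVoiceBot` classes stand in for a real TTS service and a real speech/intent pipeline, and their names are invented for this example.

```python
# Sketch of a text-to-speech feeding layer for voice-bot tests.
# The TTS backend and the bot client here are stand-in stubs; in a real
# setup they would be a cloud TTS service and a device/audio interface.

class StubTTS:
    """Converts an utterance to 'audio' (here, just a tagged payload)."""
    def synthesize(self, text):
        return {"format": "pcm", "transcript": text}

class StubVoiceBot:
    """Pretends to run speech recognition + intent resolution."""
    def hear(self, audio):
        text = audio["transcript"].lower()
        if "weather" in text:
            return "GetWeather"
        if "play" in text:
            return "PlayMusic"
        return "Unknown"

def run_voice_case(tts, bot, utterance, expected_intent):
    """One test-suite step: text -> synthesized speech -> bot -> intent."""
    audio = tts.synthesize(utterance)
    return bot.hear(audio) == expected_intent

tts, bot = StubTTS(), StubVoiceBot()
print(run_voice_case(tts, bot, "What's the weather today?", "GetWeather"))  # True
```

The point of the shape is that the same `run_voice_case` steps could be driven by the test suite already written for a text chatbot, with only the synthesis layer added in front.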

And then the last piece is some kind of voice-bot environment to apply conditioning. What that environment does in this example is let you change the distance from the actual Echo, and let you add different kinds of noise interference. So you are recreating real-life conditions in the environment where your voice bot is being tested. These pieces put together are really an approach to doing intent validation for the bot. And that's what we typically do: if you have built an automation strategy for a chatbot, we then add a layer of conditioning and TTS for the voice bot.
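The two conditioning effects just mentioned, distance and background noise, can be simulated in software before audio is played to the device. The following is a minimal sketch under simplifying assumptions (free-field inverse-distance attenuation, uniform random noise); a real rig would use recorded noise profiles and calibrated playback.

```python
import math, random

def attenuate(samples, distance_m, ref_distance_m=1.0):
    """Simple inverse-distance amplitude attenuation (free-field assumption)."""
    gain = ref_distance_m / max(distance_m, ref_distance_m)
    return [s * gain for s in samples]

def mix_noise(samples, noise, snr_db):
    """Scale noise to the requested signal-to-noise ratio and add it in."""
    sig_pow = sum(s * s for s in samples) / len(samples)
    noise_pow = sum(n * n for n in noise) / len(noise)
    target_noise_pow = sig_pow / (10 ** (snr_db / 10))
    scale = math.sqrt(target_noise_pow / noise_pow)
    return [s + scale * n for s, n in zip(samples, noise)]

# Example: a 'speech' sine wave, heard from 3 m away, with 10 dB SNR noise.
speech = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
noise = [random.uniform(-1, 1) for _ in range(8000)]
conditioned = mix_noise(attenuate(speech, 3.0), noise, snr_db=10)
```

Sweeping `distance_m` and `snr_db` across a grid is what turns one voice test case into the matrix of real-life conditions the speakers describe.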

And these layers work together in a very harmonious fashion; you don't have to rebuild a lot of the automation again. So this is one way in which voice-based interfaces can be tested. Hopefully that gives you an idea and a feel for the complexities around bot testing, both chat and voice, why manual testing is not a scalable strategy, and why your existing testing paradigm also needs to change. With that, we can move to Q&A.

Moderator: All right. Thank you so much. Before we start Q&A, you can ask Manish and Sanil questions by putting them in the field beneath the panels and clicking the submit button. We'll try to get through as many questions as possible, but those we're unable to answer live during Q&A will be answered offline. All right, let's start right here. This person asks: how do you see test automation frameworks evolving to support these newer interaction channels?

Manish: Sure, I can take that. The current test automation frameworks were really built with web, and then of course mobile, in mind. As you saw, a lot of the concepts around test automation do not really change; the primary concepts still remain true. But they need to be augmented with very specific setup and the specific nuances of bots, and that's very important. Then there are scenarios where we really need to build the test automation framework from the ground up. We do both: we augment existing test automation frameworks, but we have also built ground-up strategies for the very specific nuances of bots. So that's how we see it evolving.

Moderator: All right, great. Thanks so much. We have another really good question here. This person asks: can you run automated checks through multiple tiers, for example the server and the phone, to verify a business requirement that touches both tiers?

Manish: Absolutely. There are multiple ways of doing it. You can have automation that tests only one particular layer if you want to isolate a problem, but most end-to-end automation really needs to test across both tiers. For example, if you're running mobile automation for location, as we saw today, it is a combination of multiple things. You have location changes on your phone, but you also have your backend responding to the changed location. What you are really testing in this process is not only how your mobile app responds to a changed location, but also how the content that comes back from the backend responds to the changed location, which is sent as a parameter to your backend web server.

So these strategies can definitely cover your entire stack, all the way from the frontend to the backend.
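The two-tier location check Manish describes can be reduced to a sketch like the one below. Both tiers are stubs invented for illustration (`FakeDevice`, `FakeBackend`); the idea is only that one assertion exercises the phone-side state change and the server-side response together.

```python
# Sketch of a two-tier check: the 'phone' reports a location parameter,
# the 'backend' returns location-dependent content, and one assertion
# covers the whole round trip. Both tiers are stubs for illustration.

class FakeDevice:
    def __init__(self):
        self.location = None
    def set_location(self, lat, lng):
        self.location = (lat, lng)

class FakeBackend:
    CONTENT = {"US": "US storefront", "IN": "India storefront"}
    def content_for(self, lat, lng):
        region = "US" if lng < 0 else "IN"   # toy region rule
        return self.CONTENT[region]

def end_to_end_location_check(device, backend, lat, lng, expected):
    device.set_location(lat, lng)                  # tier 1: phone state
    body = backend.content_for(*device.location)   # tier 2: server response
    return device.location == (lat, lng) and body == expected

print(end_to_end_location_check(FakeDevice(), FakeBackend(),
                                37.77, -122.42, "US storefront"))  # True
```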

Moderator: All right, great. Thanks for clarifying that. We have another question here. This person asks: what are some tools you would recommend for test automation of voice bots?

Sanil: It's a very good question, and the answer is pretty short too: there isn't a single tool out there today that solves the problem. What we've done at Infostretch is build something from the ground up to solve this particular problem. For example, let's say you are using Appium for testing a mobile application. If you try to use it for a voice bot, you'll see that aspects like NLP processing need to be built outside of the existing automation tools. So the short answer is that there isn't a tool out there that does it today; we had to build something from the ground up to solve the problem in a generic fashion.

Moderator: All right, great. This person asks: would you rely more on existing platforms such as Google or Bing voice APIs, or go the ground-up route for voice bot automation?

Sanil: If I understand the question right, there are two parts to it. One part is that for voice bot automation you do end up using some of the backend services the platforms provide. For example, when we do testing on Alexa, we rely on Alexa's intent-processing engine as part of the bot automation framework. So it's best to rely on those frameworks, whether it's Google Home providing the core APIs through its Actions, or Alexa through its Skills. You do have to rely on them.

But you need to build a layer on top, an orchestration layer for the entire voice bot automation, which gives you control over the flows, and over the kinds of intents and triggers that you need to control. So it is a combination of both. You would definitely not want to build something completely from the ground up that avoids relying on these existing voice APIs; that would not be a good strategy. When we build a framework, we make sure the orchestration layer is something we control, but we rely on the underlying platform, because eventually the bot is going to use that platform anyway.

Moderator: All right, great. This person asks: what would be the key differences in the testing approach for a responsive website versus a mobile application?

Sanil: I think the main difference shows up when you look at the mobile application. By the way, a mobile application can also be responsive in nature, because you could have it for tablets and mobile phones too. The main difference is that with a mobile application, you have a lot more control over the UI, because you created it for a particular screen, a particular flow, etcetera. With responsive web applications, you are not only testing different form factors, for example the web on a mobile phone, a tablet, or a desktop; you also have to deal with multiple different browsers on each of those machines. So you are looking at a combinatorial matrix, which makes testing responsive web pretty complex.

You could be using Safari on your mobile phone or IE11 on your desktop, and your application should support all of them. I would say that really increases the complexity of how you test it, and that is one important difference.
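The browser-by-form-factor combinations described above are naturally generated as a parameterized matrix rather than hand-written cases. A minimal sketch, with illustrative browser and form-factor names:

```python
from itertools import product

browsers = ["Safari", "Chrome", "IE11"]
form_factors = ["phone", "tablet", "desktop"]

# Not every pair is valid (e.g., IE11 exists only on desktop), so a
# filter keeps the matrix honest.
def valid(browser, form_factor):
    if browser == "IE11":
        return form_factor == "desktop"
    return True

matrix = [(b, f) for b, f in product(browsers, form_factors) if valid(b, f)]
print(len(matrix))  # 7 valid combinations out of the raw 9
```

Feeding `matrix` into a parameterized test runner is what keeps the responsive-web combinatorics manageable as browsers and devices are added.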

Moderator: All right. Thank you so much. We have another good, broad question here. This person asks: where would you recommend that a testing department which is entirely manual today begin venturing into automation?

Manish: That's another great question. As you saw from my previous slides, manual testing for these bots is going to be really, really complex. So what I would recommend is that these departments and engineers familiarize themselves with the tools that can be used for automating bots, and focus on the aspects of testing which will initially remain manual, while relying more and more on the tools so that the tools do a lot of the repetitive testing. The manual testing should really focus on the flows and the functional aspects.

That becomes a very good combination for testing bots. The short answer is that the focus should move to the flows and functionality of the bot rather than the nitty-gritty of the bot itself; the tools will handle that.

Moderator: All right. Thanks so much. If you have any more questions, please submit them in the Q&A box; we'll wait just a moment for a few more to come in. Before we do that, I want to move over to this slide right here to give you guys a chance to wrap things up, and then we'll have time for one or two more questions.

Sanil: Yeah. Thank you to everyone who attended the webinar today. If you have challenges around your voice bot testing, we would be more than happy to talk to you, help you solve your problem, and show you different strategies that can help you achieve that. You can see the number and the email to reach us at; we are ready to speak with you.

Moderator: All right. Thanks so much. We just had one more question come in, so we'll do this one and maybe one last other one. This person asks: what types of automation tools would you recommend when testing IVR or NLU?

Manish: Okay. In terms of automation it's probably along the same lines as the earlier answer: there is no single tool you can test IVRs with today. And testing IVRs is not very different from testing voice bots. One difference is that IVRs tend to be more rigid and well laid out, compared to voice bots which can be more fluid. There is one more difference, actually: IVRs may have less of a conditioning problem than bots, because with an IVR you are typically on a phone, speaking through it in a fairly controlled environment, while with a bot you could be on the move, or in a noisy room. So the automation approach is to have the flows defined, for example in a spreadsheet-driven engine, and then feed them through the right text-to-speech so they can be processed.

But for IVRs that approach should not change, and there is no single tool that can do it today in an easy fashion. Similarly, if you're looking at Natural Language Processing, there is no easy tool to process those cases today either; it has to be built from the ground up. We have built a framework to help with that processing, just as with the voice-based testing.

Moderator: All right. Thanks so much. That ends our event for today. I'd like to thank our speakers Manish and Sanil for their time, and thank Infostretch for sponsoring this event. Also a special thank-you to the audience for staying the last hour with us. Have a great day. We hope to see you at a future event.

Manish: Hi, this is Manish from Infostretch. Good morning, good afternoon, and good evening, everyone. Today we want to throw some light on the impact of next-generation interfaces on our traditional testing and test automation models. As we all know, we are really moving into a hyper-connected world. We want to begin with an emphasis on how these new interfaces are impacting our lives in terms of testing and test automation, and also touch upon how these systems are impacting the whole SDLC as well. Then we want to talk about the challenges involved in test automation. Today we'll mainly cover two aspects of these new-generation interfaces: mobile and bots.

On the bot side, we'll talk about both voice and text. So let's go into detail on what we mean by a hyper-connected world. In the past, we had a mobile device and a cloud behind it, where we would transfer information, connected via some sort of Internet. In this hyper-connected world, what we are seeing is that the mobile app is becoming the hub, literally a control panel for many devices. Think of it: we are now controlling our thermostats at home. We are syncing with our Apple Watch or other smart devices. We are collecting data from many different sensors. We are using the camera like never before, for many different things including QR code scanning and PDF document creation.

This new-generation connectivity, these hyper-connected apps, bring different challenges in terms of how we have to test and how we can create test automation around them. Now, everyone talks about omni-channel offerings. The majority of the enterprises we have seen have gone through mobile 2.0, 3.0, whatever we call it. They also have a traditional web presence, but at the same time they are now asking: how can I leverage smart-watch apps? How can I leverage new-generation technologies like virtual reality? Bots have picked up pretty well as a way to have seamless interaction with customers.

A lot of retail and new-generation healthcare companies have tried to leverage smart TV applications as well. These are no longer just content-display mechanisms; they are being used interactively in many different use cases. So what we wanted to do today is talk about a few, a subset, of the challenges involved in these kinds of use cases and how to test them. Let's look at some of the problem statements here. First, let's ask one question to everyone: how many of you are currently using any voice bot interfaces in your products or services? Many enterprises have embraced them, some are toying with them, and many are going through try-fast, fail-fast ideation.

It would be very interesting to know your thought process around it, and as we are seeing, the responses are coming in almost equal: some are trying it, some are not, and some are not yet planning to use it. Eventually it will come around; it's a very interesting data point. As you can see, these new interfaces are at an infant stage at this point, but a lot of enterprises are toying with them. So let's look at some of the challenges we are seeing, which are very simple in nature yet bring a lot of complexity in terms of testing and test automation. Many of these devices are connected wirelessly, as I'm sure every one of you knows from using them. How do you test, in an automated fashion, connect and disconnect behavior when there is interference from other devices? If the device uses a different protocol, how are you going to test it? How is the syncing happening? How is the online/offline transition happening? How are the interactions getting generated? Traditional toolsets are not helping in these kinds of scenarios. So how do you test them, and how do you automate them? Another question: many use cases use the mobile phone as a camera, so how do you test those? How do you test the algorithms behind them? How do you test the OCR capability for a given use case?

That is another challenge we are seeing. Most of the applications we see are now tied to location; in some fashion or other, they try to leverage location-aware data. Take, for example, an app like Uber. How do you test those kinds of scenarios using traditional testing methodologies? How do you simulate different conditions? How do you simulate walking through locations? What is happening is that the complexity of the use cases is maturing, and the testing toolsets have to catch up. Let's ask one more question: how are you currently testing your voice bot interfaces? It would be very interesting to know how you are doing it.

I am sure it's not applicable to some of you, but some of you are using them. As you can see, automation is pretty flat, and I can assume automation will stay flat here because the toolsets are practically non-existent or very custom-made, whereas the manual process is still relevant for whoever is using these interfaces. You can see the data trending that way: manual is much more common than automation. This is what we try to address in this webinar. Though it looks very simple, that date and time are natural things to test, the complexity of locale, date, and time goes way beyond that in terms of use-case testing and validation.

So in today's webinar we'll touch upon how you can do automated testing around that as well. Of course, as you know, the iPhone and others now do biometric authentication, which raises the question of how you run automation around that. That's another area we wanted to talk about today. You can think through these scenarios; there are many, many more, and these are just a small subset. To deep dive into them, let me introduce Sanil Pillai, who is driving our labs, to take over and deep dive into these scenarios.

Sanil: Thank you, Manish. Hi, everyone. What Manish just covered gives you an idea of how complex mobile applications can really get, especially with the different interface points mobile apps have today. Not only is it complex from a development perspective; we soon realize, when we actually get into testing, that it gets equally complex there too. So let's start looking at some of these challenges and strategies for testing them. We'll start with mobile and then move to bots. What is really the challenge in mobile test automation? The key issue is essentially simulation. Simulating the behavior of a simple mobile app is pretty straightforward, and there are known patterns for doing it.

But once you add different sensors around it, location, accelerometer, etcetera, it gets very complex to figure out what the right simulation strategies and tools should be. Then there are the existing tools traditionally used for automation. The problem is that although they were built for mobile phones, they were built from the perspective of functional automation, and they don't do well when we need to bring in sensor-level simulation. The third part is that if you want to do simulation around the mobile hardware, developers often get into instrumenting the code, but then they are writing very specific test code, and a lot of the real testing happens only with binaries built around that.

That works up to a point, but beyond that it gets really complex and unwieldy. And finally, it's a mobile application, so how do you handle interrupts? An incoming phone call, things going offline, interference, etcetera? All of these together make testing mobile applications, in the context of real usage with new-generation interfaces, really complex, and that is why new strategies are so important. There are multiple ways of testing this. What you see on the screen is a glimpse of the different aspects that need to be tested.

You have everything from Touch ID all the way to GPS and BLE. Each aspect of testing for the mobile app individually has to be done in a very discrete manner. Sometimes they can be combined; sometimes they really need to be tested individually. All put together, testing manually, as you can see, is pretty hard. You can get some coverage, but if you want really good coverage across the different aspects, it gets cumbersome and unmanageable. So one of the approaches to solving this, and what we've done, is to use a mobile automation library.

This is a library which resides as part of your mobile application package and does not really need any code-level changes. It is a package which can be controlled by a backend interface, as you can see on the right side. At run time, during automation, you can decide which aspects of the testing you want to control. For example, you could set a cadence and say, "Okay, my camera needs to turn on and off every five minutes and capture a specific image which I inject into my test parameters." This can all be controlled by the mobile automation package, and the configuration can be driven entirely from the backend.
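The backend-driven cadence idea can be sketched in miniature. This is not Infostretch's library, just an illustration of the pattern: a module embedded in the test build reads a pushed configuration and fires sensor actions on a schedule (here a tick counter stands in for a real clock).

```python
# Sketch of the configurable simulation idea: a config pushed from the
# backend schedules sensor actions, e.g. 'camera on every 5 ticks'.

class SimulationModule:
    def __init__(self, config):
        # config: {sensor_name: {"every_ticks": n, "action": callable}}
        self.config = config
        self.log = []

    def tick(self, now):
        for sensor, rule in self.config.items():
            if now % rule["every_ticks"] == 0:
                self.log.append((now, sensor, rule["action"]()))

config = {
    "camera": {"every_ticks": 5, "action": lambda: "inject test_image.png"},
    "gps":    {"every_ticks": 2, "action": lambda: "move to next waypoint"},
}
sim = SimulationModule(config)
for t in range(1, 11):
    sim.tick(t)

camera_events = [e for e in sim.log if e[1] == "camera"]
print(len(camera_events))  # camera fires at ticks 5 and 10, so 2 events
```

Because the schedule lives in the config rather than the app code, the same binary can be retargeted to different test scenarios, which is the non-intrusive, configurable property being described.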

What that allows you to do is, first of all, bake this into the regular automation you've built for your mobile application. Secondly, it gives you a level of flexibility and configuration so that a single mobile application used in multiple scenarios can be tested with the same package, without any code-level changes. And what the suite approach really shows is that this is a very modular suite, so you can add more interfaces in the future as they get added to the mobile spectrum. Remember that Touch ID was not really relevant until probably the last couple of updates of iOS.

But since this is a modular suite, you can keep adding more functionality, and in the next slide I'll show you an example of how this can be leveraged. Really, the way to look at it is that you have to solve your automation strategy in a modular form, a non-intrusive form, and a configurable form, and this strategy allows you to achieve all three together. So let's move forward and take a look at an example of one of these at work. Before we go there, the other important piece to remember concerns whatever automation strategy you use for testing all these mobile peripherals.

They need to be baked into your current CI approach, because you have already built out CI for your mobile application, and you cannot be testing these aspects of mobile in a completely different manner. So whatever binary is created needs to be packaged with this framework and built as part of the same CI process you have in place, and the entire thing should be seamless. Your final report should not only include the functional testing you always had, but also the additional testing you want as part of the process.

So what does one example of such automation look like? Let's take the case of location. Location is pretty ubiquitous in applications, and the example here is a field-service application where a particular job is assigned to field-service personnel. The requirement is that the job only becomes active when the person reaches a particular location, a case of geo-fencing. Now, to test this manually would mean you need a way of trying different geo-fencing scenarios at the real location, which gets really complex when you start doing it manually.

And you cannot really spoof locations easily. So now you have a manual tester actually, physically moving between locations to simulate this environment, or using some spoofing mechanism, which may achieve the purpose up to a point but is really not scalable and is very burdensome. By using this mobile automation framework, you can have this entire test as part of your package in a very configurable form. What this looks like at run time is what I'll show you in the next slide. What you see here is an animation of the testing happening through automation.

As you can see, the dot represents the location the person is moving through, and this is done through the automation mechanism. As soon as the person reaches a particular point, which you can see through the highlight, the next screen comes up automatically, which basically indicates to the automation that it is now time to trigger the use case. So this entire test, changing location from one point to the other and bringing up the next trigger point, which is the screen in this case, was all done through automation. And remember that this is just one flow being tested.
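The geo-fencing flow above reduces to a simple check: feed a scripted path of coordinates to the app under test and verify that the "job active" trigger fires exactly when the path enters the fence. A minimal sketch, with made-up coordinates and an equirectangular distance approximation that is fine at fence scale:

```python
import math

def distance_m(a, b):
    """Approximate distance in meters between two (lat, lng) points."""
    lat1, lng1, lat2, lng2 = map(math.radians, (*a, *b))
    x = (lng2 - lng1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000 * math.hypot(x, y)

def first_trigger_index(path, fence_center, radius_m):
    """Index of the first path point inside the fence, else None."""
    for i, point in enumerate(path):
        if distance_m(point, fence_center) <= radius_m:
            return i  # the job becomes active here
    return None

site = (37.7749, -122.4194)
path = [(37.80, -122.50), (37.79, -122.45), (37.776, -122.420)]
print(first_trigger_index(path, site, radius_m=300))  # 2
```

In a real run the path would be injected into the device via the automation library, and the assertion would be that the "job active" screen appears at, and only at, the triggering waypoint.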

And you can scale it out to multiple different flows in a very easy manner. So this is an example of how a mobile automation framework can help you automate testing of your interfaces. Having talked about mobile, which is pretty important, let's move on to another channel which is getting pretty pervasive now, and that is essentially bots. In the next few slides, I'd like to cover the different nuances of why bot testing and automation really are different, what the different challenges are for the user, and what the strategies for testing them would be.

When you look at bots, just to put some terminology in place, there are two different kinds: a chatbot, also known as a textbot, a text-based bot which you can see on the left-hand side; and a voice-based bot. In both cases you have an automated mechanism on the backend which is listening to queries and responding to them. On the frontend, your interface is either through text or through voice. In the next few slides I will explain some of the nuances between the two and why we need slightly different strategies for testing these bots.

One of the main things bots introduce, and what has changed, is that the UI is now blended with the parent host system. What do I mean by parent host system? Bots typically reside in a parent host: it could be a Facebook Messenger bot, a Telegram bot, a Skype bot. In this case, your normal communication channels, Skype, Telegram, Facebook, become your parent host. The UI for the bot is really the UI of those systems; there is no specific UI these bots need to build out. For example, a bot residing in Skype is just a contact in Skype.

A bot residing in Facebook is just a contact in its Messenger. The testing for these bots is more around flows and less around UI. And then you have a UI which is conversational in nature. The conversational UI is different from a regular web-based or mobile UI because it's less rigid and more fluid, and it can change on the fly, sometimes based on cognitive parameters, and this needs to be accounted for during your testing process. So what are some of the nuances of bot testing? It is best to look at bot testing in two different aspects.

One is common factors and the other is specific factors for each bot. Every bot needs to be tested for two things: intent validation and response validation, and I'll talk about what each one is in the next couple of slides. But there are also specific factors that need to be kept in mind, things which are unique to the particular bot being tested. So your testing and automation approach needs to be layered: you need a framework layer which tests for the commonalities across all bots, around intent and response, and then your additional effort should go into building automation for the specific factors of each bot. You don't have to reinvent the wheel every time.
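The layered split described above maps naturally onto a base class for the common intent/response checks and per-bot subclasses for the specific factors. A minimal sketch, with invented names (`BotTestBase`, `PizzaBotTests`) for illustration:

```python
# Common layer: validations every bot shares.
class BotTestBase:
    def check_intent(self, utterance, expected):
        return self.resolve_intent(utterance) == expected

    def check_response(self, intent, accepted):
        return self.respond(intent) in accepted

    # The two hooks below form the bot-specific layer.
    def resolve_intent(self, utterance):
        raise NotImplementedError

    def respond(self, intent):
        raise NotImplementedError

# Specific layer: only this bot's behavior is filled in.
class PizzaBotTests(BotTestBase):
    def resolve_intent(self, utterance):
        return "OrderPizza" if "pizza" in utterance.lower() else "Unknown"

    def respond(self, intent):
        return "What size?" if intent == "OrderPizza" else "Sorry?"

t = PizzaBotTests()
print(t.check_intent("I want a pizza", "OrderPizza"))  # True
```

Adding a second bot means writing only another small subclass; the common validations come along for free, which is the modular-automation property being claimed.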

By separating it this way, you achieve what is called modular automation, and it also helps you scale your automation in a very easy manner. So let's look at intent validation. Intent validation is all about listening to or reading a user's question and trying to figure out what the user is trying to say. Natural Language Processing, or NLP, becomes a very big part of intent validation, and hence your testing for intent validation not only needs to check keywords against the expected intent, but also needs to account for the NLP-based processing itself; that is very important.

Some of the questions we need to ask here are: what is the threshold at which NLP should fail? What is the threshold at which NLP should kick in? What are the boundary conditions for your intents that you need to test for? These are some of the aspects that are very important for intent validation, and they become part of your common automation layer. Next is response validation, which is also very important, because responses in bots, as you can see in the animation there, can either be a question that the bot asks or an actual answer that the bot gives back.
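The threshold questions above can be made concrete in a small check. This is a hypothetical shape for an NLP result, a pair of intent name and confidence; the rule is that a low-confidence hit should route to a fallback rather than count as a pass:

```python
def validate_intent(result, expected_intent, threshold=0.7):
    """result: (intent, confidence) from a (stubbed) NLP engine."""
    intent, confidence = result
    if confidence < threshold:
        # Below threshold the only acceptable behavior is a fallback.
        return intent == "Fallback" or expected_intent == "Fallback"
    return intent == expected_intent

# Boundary conditions worth covering: just under, at, and over threshold.
print(validate_intent(("GetWeather", 0.90), "GetWeather"))  # True
print(validate_intent(("GetWeather", 0.69), "GetWeather"))  # False: too unsure
print(validate_intent(("GetWeather", 0.70), "GetWeather"))  # True: at threshold
```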

For example, if the user types "I wanna watch a movie," most intelligent bots will respond with, "Okay, what kind of movie do you wanna watch?" That becomes a question, and eventually it ends up in an answer: "Hey, this is running near you, you can probably go and catch a show today." So you have that nuance around responses. Response validation automation can be built around your current backend web-service automation, because if you think about it, responses are basically data coming back from the backend bot intelligence. So you may have existing backend intelligence, built for other purposes, that is going to be repurposed for the bot.

response validation should be built on top of it as a best practice, so you can leverage all the backend intelligence that has already been built out. And responses can also be auto-triggered. For example, you could have a bot which, on your birthday or on specific days, actually sends out messages saying, “Hey, today there’s something you’d be interested in. What do you want to look at?” So you want to build response validation keeping those in mind. Let’s look at some of the very specific aspects of responses that really need to be tested.

So as you can see, response validation, and the automation for it, is non-monolithic. It is pretty multi-faceted. You could have the same query laid out but get multiple valid responses back. You could get “Yes,” or you could get a response that says “Okay,” and your validation needs to support all of that. You also need to be able to handle errors and typos coming in and process them for these chatbots in the right way. Users can put multiple queries in a single sentence, as it says there, because users are used to natural conversation and they want to mimic that with the bot.
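A small sketch of this multi-faceted response validation: one query can have several acceptable answers, and minor typos in the user’s input are tolerated. The bot stub, the intent names, and the fuzzy-match cutoff are all illustrative assumptions, not a real framework:

```python
# Sketch of response validation: any answer from an acceptable set passes,
# and a typo'd query is first normalized to the closest known query.
import difflib

ACCEPTABLE = {
    "greet": {"Yes", "Okay", "Sure"},  # several valid replies for one intent
}

def bot_reply(query):
    # Hypothetical bot under test; a real test would hit the actual bot.
    return "Okay"

def normalize(query, known_queries, cutoff=0.8):
    """Map a possibly-typo'd query onto the closest known query."""
    match = difflib.get_close_matches(query.lower(), known_queries, n=1, cutoff=cutoff)
    return match[0] if match else query.lower()

def validate_response(query, intent):
    fixed = normalize(query, ["hello there"])
    return bot_reply(fixed) in ACCEPTABLE[intent]

print(validate_response("helo there", "greet"))  # True despite the typo
```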

So your chatbot automation cannot be naïve enough to be rigid and expect one question at a time. You need to account for multiple queries in a single sentence. And you could eventually have mixed languages: if you have a global bot, you have to support multiple languages. So the challenges of translation-based automation carry forward to bots too, just accentuated to some extent. And then you also have to think about response time. When you do bot validations during testing, it is not only about the actual content of the response that comes back, but also about the latency of the response.

Because in a fluent conversation, those are very important for a great experience. So if you look at these aspects, it’s pretty apparent that a very manual way of testing your bot will only help you to a point. After that, it gets really complex and unmanageable, and so this kind of automation is definitely indispensable. So what are the approaches you could take for your automation? You could break this up into two different parts. One is where you try to really follow the user’s actions, which we call imitating user actions. The other option is headless testing.

And the next couple of slides will give you some strategies around why each one is important and where you could use one over the other. So what is imitating user actions? Here, as you can see on the screen, a bot is being tested and the test is actually tied to the UI of the bot. The way this strategy works is that you have some kind of data file, like an Excel spreadsheet or a CSV file, that the user uploads to define the flows. It goes into an automation engine, which could be built on top of any framework, which then runs against your actual bot. Now, it is pretty apparent from this approach that it works well when you only have to test one bot for one particular channel.
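The data-file-into-engine idea can be sketched like this: conversation flows live in a CSV of utterance/expected-reply pairs that a business user can edit, and the engine replays them against the bot’s UI. `send_via_ui` is a hypothetical hook where a real UI driver would plug in:

```python
# Sketch of the data-driven "imitating user actions" approach.
# send_via_ui is a stand-in for typing into the real bot UI and
# reading back the reply; here it is scripted for illustration.
import csv
import io

FLOW_CSV = """utterance,expected
I wanna watch a movie,What kind of movie do you wanna watch?
Comedy,Here is what is running near you.
"""

def send_via_ui(utterance):
    scripted = {
        "I wanna watch a movie": "What kind of movie do you wanna watch?",
        "Comedy": "Here is what is running near you.",
    }
    return scripted[utterance]

def run_flow(csv_text):
    """Replay each row of the flow file and record pass/fail per step."""
    results = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        results.append(send_via_ui(row["utterance"]) == row["expected"])
    return results

print(run_flow(FLOW_CSV))  # [True, True]
```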

Because you are tying yourself to the actual user actions, which can differ from channel to channel. For example, this particular screen that you see could look different on Skype versus on Messenger. So you are tying your approach to a particular bot platform. What about the other approach, when you have multiple bots? Let’s say you’re building bots for multiple platforms: you want to have it on Skype, on Messenger, on Telegram. What is the right approach to take? You could take a headless approach to testing. Headless testing is a well-known paradigm in regular mobile and web testing, where you want to remove the variances in UI from the testing approach and really test all the logic.

And that is an approach you can use for testing your bots too. You could have a data file, the same as in the previous case, which is a bunch of flows. And you can send it through another bot, say, which is connected to your main bot, which goes and processes all your intents and responses and then produces the final output. The biggest advantage of something like this is when you really don’t want to tie yourself down to the UI of a particular bot and you want to open it up to more kinds of testing. So between headless and UI, the way to look at it is: if you really want to build a scalable automation framework, it is okay to start with UI-based testing for one bot,
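A sketch of the headless variant: skip the channel UI entirely and exercise the bot’s message endpoint directly. `post_message` is a hypothetical stand-in for an HTTP call to the bot backend; nothing here depends on any channel’s UI, so the same flows can run against Skype, Messenger, or Telegram builds:

```python
# Sketch of headless bot testing against the backend, not the UI.
# post_message is a hypothetical stand-in; a real test would POST to the
# bot service and parse the JSON reply.

def post_message(session, text):
    if "movie" in text.lower():
        return {"intent": "find_movie",
                "reply": "What kind of movie do you wanna watch?"}
    return {"intent": "fallback", "reply": "Sorry, I didn't get that."}

def headless_check(text, expected_intent):
    """Validate intent processing without touching any channel UI."""
    result = post_message(session={}, text=text)
    return result["intent"] == expected_intent

print(headless_check("I wanna watch a movie", "find_movie"))  # True
print(headless_check("blargh", "fallback"))                   # True
```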

and then put in place a framework with the right mix of UI and headless testing, where in the UI piece you’re really testing some of the finer UI aspects of the bot. I did mention that we are almost getting to zero-UI paradigms for bots, but some bots, as you’d see, do have specific UI pieces that are sent back as responses sometimes. So UI-based testing can cater to those, while your basic flows and logic, the heavy lifting of the bot, can really be tested through your headless approach. That could be a good way to combine the two types.

So having looked at the different approaches, one of the most important aspects with bots is how you actually create the tests for a bot. Remember, bot flows are conversational in nature. These are flows that you want to mimic in a very easy way, and it makes sense for a product manager or a business analyst to be able to easily write these flows in a simple Excel file, say, and then have an automation tool which can pick them up and start processing those files in a very easy manner. There are multiple ways of doing it. You can either use plain Excel, as shown on the left-hand side, or you can create mind-map diagrams, which can then be converted to JSON format, for example, and fed into the automation tool.
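Turning a spreadsheet-style flow into something the tool can ingest might look like the following. The rows stand in for what an Excel or mind-map export would produce, and the field names are illustrative, not a standard schema:

```python
# Sketch of converting business-authored flow rows into JSON for an
# automation tool. The schema here is an illustrative assumption.
import json

rows = [
    {"step": 2, "utterance": "Comedy",
     "expected": "Here is what is running near you."},
    {"step": 1, "utterance": "I wanna watch a movie",
     "expected": "What kind of movie do you wanna watch?"},
]

def rows_to_flow_json(name, rows):
    """Order the steps and wrap them in a named flow the tool can load."""
    flow = {"flow": name, "steps": sorted(rows, key=lambda r: r["step"])}
    return json.dumps(flow, indent=2)

print(rows_to_flow_json("find_movie", rows))
```

Keeping this conversion automated is exactly the point the speaker makes next: the gap between authoring a flow and running it should be as close to zero as possible.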

The important thing to remember here is that the time between creating these files for testing and actually running the tests should be minimized and automated as much as possible. If you have to sit and convert an Excel-created flow into a bot-specific language for your testing, then you have a lot lost in translation, and a lot of complexity is introduced in the process. So we need an approach where you can take these files, feed them into your automation tool, and have it process them and give you the results that we talked about earlier. These are some of the strategies that we have used to help with bot testing for some of our clients.

So we’ve talked about bots, and I’ve talked mostly about text-based bots and chatbots where the interface is text. Now let’s talk a little bit about voice-based bots. The reason that is important is because voice-based interfaces are getting pretty common, and they are going to become part of a lot of products coming out in the future. So having a testing strategy around them is very important. And why is it important to look at voice-based bots separately from regular text-based bots? The next slide will cover some of the nuances around that. If you look at the complexities of testing voice bots, all the complexities that we’ve talked about for testing text-based bots still exist,

and they carry forward, but then there is another layer of complexity that we need to think about. We are talking about speech here. When you talk about speech, there are variances in speech which cannot be discounted and must not undermine your test automation. Let’s start with accent, for example. In our testing process, when we test a voice-based bot for a client, we find that accents are one of the prime causes of bot failures. No matter how much you’ve trained the NLP in a bot to take care of different accents, a lot of testing needs to happen around that, and hence your testing needs to factor them in.

In the case of voice bots, you could also say different things that mean the same thing. For example, you could say “Yes,” you could say “Yeah,” or “True,” and so on, and it all really means the same intent. In text, it’s very easy to process those: you can map multiple utterances to a single intent. With voice, not only do you need to do that mapping, but you also need to handle the voice aspect of it in a very easy manner.

We talked about pronunciation already; punctuation is a very interesting aspect of voice-based testing. It’s an idiosyncrasy that is very specific to voice, because of the way things are said. I’ve got a couple of examples out there which will give you a feel for why punctuation is important in your testing, because the intent could be totally different if you don’t process it the right way. One of the other important factors in voice-based testing is background noise interference. Remember, your interface in the case of a voice bot is really your speech and the airwaves, and then some kind of device processes it.

Background noise, which is prevalent all throughout, can really damage any kind of voice-based intent processing that is happening. And hence your testing needs a lot of conditioning built in to really take care of background-noise effects. Then you have the effect of distance. For a real voice-based application to work effectively, it needs to be agnostic to movement by the user, and hence your testing around voice-based interfaces needs to account for that. You cannot expect the user to be sitting ten inches from the device and speaking to it that way.

If the speaker is moving around the room and you do not reflect that in your test automation, you are missing a very important condition for testing. So hopefully this particular slide gives you an idea of why voice-based testing adds another interesting layer of complexity, and hence why automating voice-based testing becomes equally complex. Manual testing becomes pretty unmanageable and, as I mentioned earlier, automation really needs to account for these aspects. The next slides will give you an idea of what something like that could look like. In this particular slide, you can see that you have some kind of device into which you are feeding some data.

In this case, I’m showing an Amazon Echo. You have a test suite with test cases running in sequence. The test suite could really be the same test suite that was created for your text-based chatbot. But then you have a layer in the middle which is doing your TTS, your text-to-speech. You really need to be able to feed in the kind of speech you need to process. Either you can have pre-recorded speech that you run through your automation, or you can have text that is converted to speech in an automated manner. That becomes a very important layer of testing, because you need to create speech.
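The shape of that TTS layer can be sketched as follows. A real run would plug an actual TTS engine in here; this stub just synthesizes a placeholder tone with the standard-library `wave` module so the text-in, audio-out pipeline is visible:

```python
# Sketch of the text-to-speech layer: each text test case becomes an
# audio clip before being played at the device. The tone generator is a
# placeholder for a real TTS engine.
import io
import math
import struct
import wave

def synthesize(text, rate=16000, seconds=0.5):
    """Placeholder 'speech': a 440 Hz tone whose only job is to be valid audio."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(rate)
        n = int(rate * seconds)
        frames = b"".join(
            struct.pack("<h", int(8000 * math.sin(2 * math.pi * 440 * i / rate)))
            for i in range(n)
        )
        w.writeframes(frames)
    return buf.getvalue()

clip = synthesize("I wanna watch a movie")
print(len(clip) > 44)  # True: WAV header plus audio frames
```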

And then the last piece is really some kind of voice-bot environment to apply conditioning. What that environment does, and this slide shows an example, is let you change the distance from the actual Echo in this particular case, and let you add different kinds of noise interference. So you are creating a real-life-condition environment in which your voice bot is being tested. These pieces put together really form an approach for doing intent validation around the bot. And as you can see, and this is what we typically do: if you have built an automation strategy for the text bot, you then add a layer of conditioning and TTS for the voice bot.
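A toy sketch of that conditioning layer: mix background noise into a speech clip and attenuate it to simulate the speaker moving away from the device. This operates on plain int16 sample lists for illustration; a real rig would apply these effects on the audio path to the device, or physically with room speakers:

```python
# Sketch of environment conditioning for voice-bot tests: noise plus
# distance attenuation applied to raw 16-bit audio samples.
import random

def condition(samples, noise_level=0.1, distance_factor=1.0):
    """Return samples with noise added and distance attenuation applied."""
    random.seed(0)  # deterministic for the example
    out = []
    for s in samples:
        noisy = s / distance_factor + random.uniform(-1, 1) * noise_level * 32767
        out.append(max(-32768, min(32767, int(noisy))))  # clamp to int16 range
    return out

speech = [1000, -2000, 3000, -4000]
far_and_noisy = condition(speech, noise_level=0.05, distance_factor=3.0)
print(len(far_and_noisy) == len(speech))  # True
```

Running the same test suite at several `noise_level` and `distance_factor` settings is one way to cover the movement and interference conditions the slide describes.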

And the two work together in a very harmonious fashion, so you don’t have to rebuild a lot of the work again. So this is one way in which voice-based interfaces can be tested. With that, hopefully this gives you an idea and a feel for the complexities around bot testing, both chat and voice, why manual testing is definitely not a scalable strategy, and why your existing testing paradigms also need to change. With that, we can move to Q&A.

Moderator: All right. Thank you so much. Now, before we start Q&A, you can ask Manish and Sanil questions by putting them in the field beneath the panels and clicking the submit button. We’ll try to get through as many questions as possible, but those we’re unable to answer live during Q&A will be answered offline. All right, let’s start right here. This person asks: how do you see test automation frameworks evolving to support these newer interaction channels?

Manish: Sure, I can take that. The current test automation frameworks were really built with web, and then of course mobile, in mind. As you saw, a lot of the concepts around test automation do not really change. The primary concepts still hold true, but they need to be augmented with very specific setups, the specific nuances of bots, and that is very important. Then there are scenarios where we really need to build the test automation framework from the ground up. And we do both: we augment existing test automation frameworks, but we have also built ground-up strategies for the very specific nuances of bots. So that’s how we see it evolving.

Moderator: All right, great. Thanks so much. We have another really good question here. This person asks: can you run automated checks through multiple tiers, for example the server and the phone, to verify a business requirement that touches both tiers?

Manish: Absolutely. There are multiple ways of doing it. You can have automation that tests only one particular layer if you want to isolate a problem, but most end-to-end automation really needs to test across all the tiers. For example, if you are running mobile automation for location, as we saw today, it is a combination of multiple things. You have location changes on your phone, but you also have your backend responding to the changed location. And what you are really testing in this process is not only how your mobile app responds to the changed location, but how the content that comes from the backend also responds to the changed location, which is sent back as a parameter to your backend web server.

So these strategies can definitely cover your entire stack, all the way from the frontend to the backend.
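Following the location example, a multi-tier check might be sketched like this. Both fetch functions are hypothetical stand-ins: one for reading what the app shows after a location change, one for calling the backend with the location as a request parameter:

```python
# Sketch of an end-to-end, multi-tier check: the business requirement
# spans both the app tier and the backend tier, and both must agree.

def app_content_for(location):
    # Stand-in for driving the mobile app to a mock location and
    # reading what it displays.
    return f"deals near {location}"

def backend_content_for(location):
    # Stand-in for calling the backend with location as a parameter.
    return f"deals near {location}"

def end_to_end_location_check(location):
    """Verify the app and the backend respond consistently to a location."""
    return app_content_for(location) == backend_content_for(location)

print(end_to_end_location_check("San Jose"))  # True
```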

Moderator: All right, great. Thanks for clarifying that. We have another question here. This person asks: what are some test automation tools that you would recommend for test automation of voice bots?

Sanil: It’s a very good question, and the answer is pretty short too. There isn’t a single tool out there today that solves the problem. What we’ve done at Infostretch is we’ve actually built something from the ground up to solve this particular problem. For example, say you are using Appium for testing a mobile application. If you try to use it for a voice bot, you would see that some of the aspects around NLP processing and so on need to be built outside of the automation tool. So the short answer is that there isn’t a tool out there that does it today; we had to build something ground-up to help solve the problem in a generic fashion.

Moderator: All right, great. This person asks: would you rely more on other services such as the Google or Bing voice APIs, or go the ground-up route for voice bot automation?

Sanil: So if I understand the question right, there are maybe two parts to it. One part is that in terms of voice bot automation, we do end up using some of the backend services provided; for example, when we do testing on Alexa, we rely on the Alexa intent-processing engine as part of the bot automation framework. So it’s best to rely on those frameworks, whether Google Home provides you the core APIs through its Actions, or Alexa through its Skills. You do have to rely on them.

But you need to build a layer on top, an orchestration layer for the entire voice-bot automation, which gives you control over the flows, and also over the kinds of intents and triggers that you need to control. So it is a combination of both. You would definitely not want to build something completely ground-up that does not rely on these existing voice APIs; that would not be a good strategy. When we build a framework, we make sure the orchestration layer is something we control, but we do rely on the underlying framework, because eventually the bot is going to use those frameworks anyway.

Moderator: All right, great. This person here asks: what would be the key differences in the testing approach for a responsive website versus a mobile application?

Sanil: I think the main difference is that when you are looking at a mobile application (and by the way, a mobile application can also be responsive in nature, because you could have it for tablets and mobile phones too), you have a lot more control over the UI, because you have created it for a particular screen, a particular flow, etcetera. In the case of responsive applications, you are not only testing different form factors, for example the web on a mobile phone, a tablet, or a desktop, but you also have to deal with multiple different browsers on each of those machines. So you are looking at a combination matrix that is pretty complex for testing the responsive web.

You could be using Safari on your mobile phone, or IE11 on your desktop, and your application should support all of them. So I would say that really increases the complexity of how you test it, and that is one important difference.

Moderator: All right, thank you so much. We have another good, broad question here. This person asks: where would you recommend that a testing department that is entirely manual now begin venturing into automation?

Manish: That’s another great question. As you saw from my previous slides, manual testing for these bots is going to be really, really complex. So what I would recommend is that these departments and these engineers familiarize themselves with the tools that can be used for automating bots, and then focus on the aspects of testing that will initially remain manual, while relying more and more on the tooling, so that the tool handles a lot of the repetitive testing and the manual testing really focuses on the flows and the functional aspects.

That becomes a very natural combination for testing bots. So the short answer is that the focus should move to the flows and the functionality of the bot rather than the nitty-gritty of the bot itself; the tooling will handle that.

Moderator: All right, thanks so much. If you do have any more questions, please submit them in the Q&A box; we’ll wait just a few moments for more to come in. Before we do that, I want to make sure we get over to this slide right here, just to give you guys a chance to wrap things up, and then we’ll have time for one or two more questions.

Sanil: Yeah. Thank you to everyone who attended the webinar today. If you have challenges around your voice bot testing, we would be more than happy to talk to you, help you solve your problems, and show you different strategies that can help you achieve that. You can see the number and the email addresses to reach us at; we’re ready to speak with you.

Moderator: All right, thanks so much. We just had one more question come in, so we’ll do this one and maybe one last one after it. This person asks: what types of automation applications would you recommend when testing IVR or NLU?

Manish: Okay. In terms of automation, it’s probably along the same lines as my earlier answer: there is no single tool that you can test IVRs with today. And testing IVRs is not very different from testing voice bots. One difference is that IVRs tend to be more rigid and well laid out, compared to voice bots, which can be more fluid. There is one more difference, actually: IVRs may have less of a conditioning problem than bots, because with an IVR you are typically on a phone, speaking through it in a fairly controlled environment, while with a bot you could be on the move, you could be anywhere in a room. But the automation approach is the same as we discussed: flows defined in an Excel spreadsheet are fed into the engine, and we then feed them through the right text-to-speech so they can be processed.

For IVR that should not change, and there is no single tool that can do it today in an easy fashion. Similarly, if you are looking at Natural Language Processing, there is no easy tool to process those today either; it has to be built ground-up. And as I mentioned, we have built frameworks to help with that processing, just as we did for voice-based testing.

Moderator: All right, thanks so much. That ends our event for today. I’d like to thank our speakers, Manish and Sanil, for their time, and Infostretch for sponsoring this event. A special thank-you to the audience for staying the last hour with us. Have a great day. We hope to see you at a future event.
