
Moving from QA to QE – A Case Study

With LifeScan, a Johnson & Johnson Co.

Sarah Lynn Brunner: All right, well, good morning, and welcome to our webinar, Moving from Quality Assurance to Quality Engineering. My name is Sarah Lynn Brunner, and I will be your host today. Today’s presenter is Ed Hein. Ed is the manager of Digital Verification and Validation at LifeScan. In his role he oversees the development and implementation of testing strategy for all of LifeScan’s software products. He is responsible for reporting metrics showing the effectiveness of testing activities, and for coordinating both internal and outsourced test resources to align on overall testing strategy.

He has over 20 years of experience in a variety of software testing and validation roles, including Global Quality Manager at Honeywell and Worldwide Manager of IT and Computer Systems. In addition to his degree from Penn State, Ed also holds three certifications from the American Society for Quality. Before we begin, let me review some housekeeping items. First things first, this webcast is being recorded and will be distributed to you via email, allowing you to share it with your internal teams or watch it again later.

Second, your line is currently muted. Third, please feel free to submit any questions during the call by accessing the Q and A button in the navigation at the top center of your screen. We will answer all questions towards the end of the presentation. Also, during the presentation we encourage you to engage with the poll questions to maximize your experience. We will do our best to keep this webinar to the 45-minute allotment. Now, let me turn it over to Ed.

Ed Hein: Hi Sarah, thank you for that introduction. Hopefully everybody can hear me well. Sarah, can you hear me?

Sarah: Yes, we can hear you just fine.

Ed Hein: Okay, this is just a list of the topics that we’re going to go through, a high-level agenda: a brief introduction to LifeScan, our products, and what we do; some digital testing that we’ve done; an introduction to quality engineering and the shift left; and our journey on that road, on that path from quality assurance to quality engineering. We will take some questions and answers at the end. Have you given me control, Sarah? I saw a remote control thing pop up.

Sarah: Yes, you now have control Ed, thank you.

Ed Hein: There we go. All right, the introduction to LifeScan and our products. We are a Johnson & Johnson company, part of their diabetes care franchise, specifically focused on blood glucose meters. The digital group develops the software that is embedded in the meters themselves, the native mobile apps on Android and iOS platforms, and also a website that stores that data and allows access to outside medical providers. We’ve been committed to improving quality of life for over 20 years in this digital group. Our OneTouch system is the most prescribed blood glucose meter brand in the world, and it’s used by over 5,000,000 people in the US every day.

We’ve been a part of Johnson & Johnson since ’86, and have since moved our headquarters from Milpitas, California to Chesterbrook, Pennsylvania. We’re based just southwest of [unintelligible 00:03:24] and near Philadelphia. We employ more than 3,000 people worldwide. What we’re looking to do is make a transition from a more traditional quality assurance cycle of post-development inspection to a more proactive quality engineering approach, where we develop products with quality built into them as we go along. As you might be aware, the blood glucose monitoring system market is growing at a healthy rate, driven by the unhealthy growth of the diabetes epidemic around the world.

There’s fierce competition from other larger companies like Abbott, Bayer and Roche, and many innovative newcomers in the area of insulin delivery. I should also mention we have a sister company, Animas, which provides insulin pumps for type one diabetics. It’s imperative to accelerate cycle time and get new software products out there to help differentiate our products from the competition. We do feel that that’s one of the things we do very well, and that our OneTouch Reveal mobile apps and websites are a differentiator for our products. We’re presented with a lot of complexities.

Since we are in a regulated environment, submitting to the FDA and other regulatory bodies around the world in the 10 different countries where we’re marketed, we need to maintain strict vigilance in the testing of our product and produce documented final results while at the same time doing that rapid development. We’ve adopted an agile approach with formal testing at the end to provide that documentation. The complexities we’re presented with are not only different OSs; they range from the PCs our websites run on, to various types of browsers (Chrome, IE, Firefox), to different networks.

When you get into the mobile device area, then you start looking at iOS and Android platforms and all the variations among them. They have different operating system versions as well, and we have to decide how to support them. Then we try to target different capabilities within them, and use functions such as Touch ID, that sort of thing, and incorporate those into our products as well.

You can see on this slide we’re just listing the various types of browsers and things we need to support, and it gets more complex all the time. We try to drop older browsers as we advance and incorporate new ones, typically only supporting two or three different operating systems if possible. Sorry, I went too far. Why does it matter?

Well, as cycle time shrinks for these digital apps and customer expectations rise, it gets harder and harder for testing to meet those customer needs. We have a limited amount of time to define what the test requirements are, what the customers’ needs and requirements are, and to develop tests around them, because we can’t afford low confidence in quality when we go to release a product to our customers. We’re constantly challenged to incorporate user feedback, which comes in through our tier two and tier three channels.

We actually do write tickets that go into JIRA and get addressed in future releases, or they go through a committee where it’s considered: which of these should be logged as improvements, and which are actual bugs? We go in and research the bugs. Part of what my test group does is ensure that any tickets reported by our customers, or any tickets found by our onshore or offshore testing teams, manual or automated, are real bugs in the code, and that they get scheduled for repair or inclusion in future point releases. We also want to limit the coverage that has to come from manual testing. There are several basic types of tests that we do.

Ad hoc testing, and exploratory testing, which is a slightly more well-documented form of it. We’ve always been trying to push for more automated testing throughout sprint development, and those features are developed in two to three week sprint cycles. Trying to advance the slide here. Right, what experts suggest is to refocus QA and testing on the customer experience and assure that customer need is met. I just like to think of it like the three amigos: the development group, our testing group, and our customer representatives, which is our product lifecycle management team, comprised of the tier two and tier three interface teams. We work together to try to provide that enhanced experience for our customers.

It means transforming traditional practices, adopting agile and DevOps in quality management. Our last release, our new product line OneTouch Reveal 3.0, which just launched last week, incorporated more automated testing as part of our strategy in sprint development, as I mentioned. Typically you’re running one sprint behind, so if functionality is delivered in sprint one, we have automated tests covering that functionality in sprint two.

Functionality developed in sprint two has automated tests in sprint three. You’re assuring that as you progress, build on those functions, and add more functionality, the stuff you’ve already made still works fine and doesn’t break with the integration of other functions. This has also helped expand our testing skills beyond manual testing and test automation to thinking more strategically, and it allows more time to apply to specific exploratory tests that are more customer-based.

We prioritize testing with predictive analytics and with continuous feedback from customers as well. We think that’s gone pretty well. We’re actually trying to develop tests around some of the weak points and failure points that we’ve observed: where we see more tickets and more breakage, we can expand test coverage in those areas.

Transforming from quality assurance to quality engineering is a big step in any process, and one very important piece of it is this so-called shift left. What is quality engineering? I think this is a good diagram showing the typical quality model, what a lot of standard V-model development cycles look like, where you do all your testing on the back end after a long development period. It tends to take much longer towards the end, because every time you fix problems, as you get further to the right and further through your documentation process, there’s more paperwork, more changes, and more potential to break other things as you move in that direction.

If you can identify problems earlier, they’re cheaper to fix; you can fix them faster, and that’s really the whole goal behind quality engineering: driving the development of a quality product and process while enabling effective testing in parallel with development. As we’re developing, we’re also developing automated tests and doing ad hoc and exploratory testing as we go along. Automated testing is obviously much faster and gives you better bang for the buck; you can run more tests in a shorter period of time, getting more coverage.

It means getting the groups to work together, development, operations and quality as one team, developing and executing automated tests continuously, with an emphasis on executable requirements and acceptance criteria. The requirements may be written such that you can develop tests around them and also target certain areas of the code with those automated tests. There’s immediate feedback from the automated tests, and an emphasis on automation assessment; you can take that feedback right back to the development team. In our particular example, in 3.0, our recent release, we were running a large battery of regression tests overnight, as many as 135 as we got towards the end of the program.

Basically our whole library of automated tests. If we ran those tests overnight US time, the offshore development team would have results within three to five hours of coming in. They would take the failures they observed and determine whether it was an issue with the testing platform, perhaps an error in a test script, or a code problem, an actual breakage in the code. Then the development team would work on fixing that issue within the sprint. We scaled it up as we went along, too.

Moving on, still having trouble advancing the slide. Here we go. I think I got it.

A successful quality engineering process brings the automation much closer to the coding. The traditional model is more like the ice cream cone shown on the left, where you’re doing a little bit of unit testing during development and some integration tests, but the bulk of your testing is done near the end of the project, after you’ve invested much more time and cost and it’s more costly to fix issues. At the top of the ice cream cone, the scoops are the manual tests and a lot of automated GUI tests.

Whereas if you can shift that and make it more like the pyramid on the right, then as you go through development sprints you’re developing these automated unit tests and doing more component integration tests and automated API testing. I do want to stress that we continue to do ad hoc testing as we go along as well. The automated GUI tests can be reduced, and the manual session-based testing can be focused at the end, achieving much faster results and fewer failures. Hopefully you drive those failures out early on, not late in the process.
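
[Editor’s note: as a concrete illustration of the pyramid’s base, here is a minimal sketch, with hypothetical names and assuming JUnit 5, of the kind of fast pure-logic unit test that multiplies easily during sprints, in contrast to the slower GUI checks in the cone’s scoops. It is not LifeScan’s code.]

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class TrendMathTest {

    // Hypothetical pure function: the average of a set of mg/dL readings.
    // Logic like this sits at the base of the pyramid; it needs no device,
    // emulator, or GUI to verify.
    static double average(double[] readings) {
        if (readings.length == 0) return 0.0;
        double sum = 0.0;
        for (double r : readings) sum += r;
        return sum / readings.length;
    }

    @Test
    void averageOfKnownReadings() {
        // Runs in milliseconds on every build, unlike a GUI check that
        // drives the same trend screen through an emulator.
        assertEquals(110.0, average(new double[] {100.0, 110.0, 120.0}), 1e-9);
    }
}
```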

How is quality engineering different from QA? In QA you’re assuring the quality of the process and the product more on the back end, as I mentioned; quality engineering drives development of a quality product and process early on, by focusing on quality from ideation to commercialization. As you’re developing code you’re developing tests hand-in-hand, and the automated tests run faster. You can define, support and implement processes such as BDD/TDD, Agile and Kanban. Of those, we use primarily agile in our new product development.

We sometimes use Kanban on point releases, and by utilizing these quality tools you can make quick decisions and drive quality in, enabling DevOps to work more efficiently. I should note the tools: we started on a SeeTest platform originally, and now we’re using more of a Perfecto and Appium blend with the QMetry Automation Studio (QAS) from Infostretch. Some milestones in quality engineering: first of all, strategizing, building that strong foundation, seeing the return on investment, and getting buy-in from the stakeholders and customers. We engage our marketing people and customer base early on in selecting which tickets will be repaired in a point release and what the interfaces will look like for a new release.
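
[Editor’s note: to make the Perfecto and Appium blend concrete, here is a minimal sketch of how a test typically opens a session against a Perfecto cloud through its Appium endpoint. The host name, token handling, and device selector are illustrative placeholders, not LifeScan’s actual configuration.]

```java
import io.appium.java_client.android.AndroidDriver;
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;

public class PerfectoSessionSketch {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        // Perfecto authenticates cloud sessions with a security token; here
        // it is read from the environment rather than hard-coded.
        caps.setCapability("securityToken", System.getenv("PERFECTO_TOKEN"));
        caps.setCapability("platformName", "Android");
        // Pick any available cradled device matching this model pattern.
        caps.setCapability("model", "Galaxy.*");

        // "<your-cloud>" is a placeholder for the tenant's Perfecto host;
        // replace it before running.
        AndroidDriver driver = new AndroidDriver(
                new URL("https://<your-cloud>.perfectomobile.com/nexperience/perfectomobile/wd/hub"),
                caps);
        try {
            System.out.println("Started session: " + driver.getSessionId());
            // ... drive the app under test here ...
        } finally {
            driver.quit(); // release the device back to the cradle
        }
    }
}
```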

Then you get a jump start by clearing out backlogs: harness automated tests to manage those backlogs and build a risk-free, cost-effective test factory model to clear them out. During execution we want to manage the scale so that things don’t get out of hand, and integrate with test management and CI tools, continuous integration, as we go along. Then, when you have a more mature program, you start gearing up for next-generation challenges, leveraging the latest technologies and devices, building on the system that you have and moving forward.

What automation does is allow you a little more time to focus as you reach that mature stage. Currently LifeScan is probably more in the execution phase, I would say; that’s the milestone area we’re in, still managing and building up at this point, integrating those tests. The benefits of quality engineering: lower cost, since you can reduce headcount and effectively run more tests with fewer resources, particularly by leveraging automation; and faster time-to-market. By eliminating problems earlier, you avoid finding issues at the end and stretching out that final release cycle.

You can deliver more functionality in a shorter period of time by detecting defects earlier. Overall, then, it’s improved user satisfaction. If you’ve done the right thing early on by incorporating user feedback and fixing the major issues and bugs out there in your product, your customer is going to be happier, more engaged, and will stick with your product and continue to be a user of our apps and our website.

Now we’ll talk about the LifeScan journey; I mentioned some of the techniques and tools that we’re using, and how we got there from our existing process. I should mention that I’ve only been with the company about two years, and one important early step was selecting the tool. We had a vendor we were working with, and we do automation both on our website, using Application Lifecycle Management (ALM), HP’s Quality Center tool, and on the mobile side, and we have somewhat different tools and a different focus in each of those areas.

We seem to be much more mature on the web side; it was the mobile side that we really needed to focus on, and that’s what I’m mostly speaking to here. This most recent release of OneTouch Reveal was a mobile candidate for both iOS and Android platforms. Some of the challenges that we saw at LifeScan: again, as I mentioned earlier, we’re in a regulated environment, and some countries are regulated more tightly than others.

In the United States, the FDA exercises enforcement discretion, so you can do 510(k) filings, but you don’t necessarily have to file for every point release. You do have to make sure you have backup available if they come to look at your data and want to inspect; you had better be able to produce testing and quality results per your quality system. Other countries, Canada, Japan, and in Europe, France, have specific requirements. The French require that any data be stored on French soil on a French-based server; unlike our other EU partners, which are all served from one central server in the UK, we had to stand up a separate server.

There are things like that we have to do for our operations group, qualifying those different servers. There were lengthy cycle times leading to delays in the go-to-market plan. We were trying to hold to quarterly releases and get as much functionality as we could into each one. What we found happening was that functionality wound up being cut, and we would slip delivery by a month or two, and if you’re going quarter to quarter, a month or two almost puts you into that next quarterly release.

We were also dealing with the difficulty of integrating the various aspects of the testing process: making the different tools work together and coordinating between the various test types I mentioned, ad hoc, exploratory and automated, to get them all on the same page with no gaps in between. Our process was very manual, predominantly running manual tests with multiple steps. We were trying to embrace automation and looking for the right tool and the right vendor to help us do that. As always, the main concern is the accuracy of the monitors and getting that data into the mobile app for a customer.

Then it’s synced up with the back end on the web, maintaining data integrity and being able to show it in various trend graphs and logbook views. The first step of this process was sending one of my technical test guys out to a conference to evaluate the various tools that were available. As I mentioned, we were using SeeTest with one of our vendors, and that wasn’t working so effectively.

We were looking at Perfecto Mobile, a little mobile cart solution, the HP tool that’s integrated within ALM, and a few other tools that we examined and had demonstrated on-site. It was actually Perfecto that recommended Infostretch as an integrator to help us adopt Perfecto Mobile and develop tests with that tool. Infostretch came out and did a proof of concept along with Perfecto to validate the information flow we have from the meters through the mobile devices.

We took the standard test cases that we had, hundreds of written manual test cases, selected some that could be automated, and went through and started developing automated tests for some of these basic tests and functionalities. We partnered with Perfecto to expand the device coverage. We went from a handful of Android and iOS devices to dozens, because Perfecto Mobile has public cradles that are available. Most of our testing is housed on private cradles with devices that we own.

We got the best of both worlds. One of the solutions we had looked at was a local cart you could plug devices into and run tests on; we have some local cradles, managed through the Perfecto server, that run the same tests. We can basically swap any mobile device we want into those local cradles, iOS or Android, and run tests on them. We’ve increased the number of devices and OSs that we’re covering in our testing.

We can handle as many as four to five builds a day. Test cases are being automated in the sprints, as I mentioned before, as the development team builds the features. We have several types of automated tests that run. The smoke tests are the ones that run every time a build is delivered: we use Jenkins to automatically kick off the smoke tests, which run a small subset, a battery of core functionality.
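
[Editor’s note: one common way to carve out the smoke subset Ed describes is to tag tests and let the CI job pick the group; a post-build Jenkins job might run something like `mvn test -Dgroups=smoke`, while the nightly job runs `-Dgroups=regression`. The sketch below assumes JUnit 5; the test names and helpers are illustrative, not LifeScan’s suite.]

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class OneTouchAppTests {

    @Test
    @Tag("smoke") // small core-functionality battery, run on every build
    void appLaunchesToLoginScreen() {
        assertTrue(launchAppAndSeeLogin());
    }

    @Test
    @Tag("regression") // part of the full overnight battery
    void readingsChartSurvivesRestart() {
        assertTrue(restartAppAndVerifyChart());
    }

    // Hypothetical helpers standing in for the real Appium interactions.
    private boolean launchAppAndSeeLogin() { return true; }
    private boolean restartAppAndVerifyChart() { return true; }
}
```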

Every evening, as I mentioned before, we typically run the full battery of regression tests on whatever the last build of the day was. The output from that is what we use to determine what may have broken in the latest build and what needs to be improved; hopefully the majority of tests pass and all is well. Moving on. Organizational changes: we did have a little shift, in that initially my position reported into the engineering lead, into the same manager as the engineering operations team. Now I report directly to the Director of Digital R&D, and QA has become more tightly integrated with development.

That’s twofold. Infostretch, as our partner, not only helped us develop this library of automated tests, but also helped us develop the automated testing solution I mentioned using Jenkins. The builds our development team puts together are delivered automatically, and the tests pick them up and run automatically as well, so we are working more closely with the development team. Also, Infostretch has more recently started doing development work as well; they are responsible for both the development and the testing of our latest point release. I would stress that we still work with several different test vendors. It’s good to get testing ideas and independent opinions from a bunch of different sources, and I’m still a firm believer in maintaining some sort of independent testing.

Whether it’s internal employees or contract employees that we use on-site from a contract service, we still do our own independent ad hoc testing and test evaluation. In partnership with Infostretch, LifeScan has automated 60% of the test suite as of the 3.0 release. Our goal is closer to 75 to 80%, and we think we can get there by taking a harder look at some of the tests that were run manually in the 3.0 release and converting them to automated tests.

Also, Perfecto is constantly coming out with new functions and features. They have the recording feature they rolled out recently, and they’re telling us they have a Bluetooth emulation feature as well, which will help us emulate our blood glucose meters and allow more end-to-end testing to be run: from an emulated Bluetooth blood glucose meter, through Bluetooth into the devices, and up into the cloud.

Of course we’re still going to run manual tests using actual live meters; I want to stress that. I don’t think we would want to go much beyond 80 or 90% automated. You always have to do some sort of manual testing so that there’s some human evaluation, particularly with our products and the way they run. The manual QA cycles that took five to seven days are now automated and completed in five to six hours, as I mentioned before, with these regression test suites that we run.

I was particularly inspired by our 3.1 release. We had issues with the Android build delivery and it was delivered late, but we were still able to complete the formal testing on time, and that’s mostly because of the automated testing we were able to run. Instead of the planned three or four week cycle we had, we were able to complete it in about a week and a half to two weeks. Now a development cycle that took an entire month may be reduced to less than one week with the help of these tools.

Sarah: All right, now it’s time for Q and A. We’d certainly like to get most of your questions answered here. As a reminder, at any time, feel free to ask a question in the window at the center of your screen. Let me just pull some of those questions up. All right, Ed: what is the difference between shift left and QE? Is QE all about automation?

Ed: The difference between shift left and QE; I think they’re somewhat synonymous. I think shift left is something you do, while quality engineering is building the quality into the product: designing with the test in mind and making sure that your requirements are testable. Then you have a test methodology around that as well. It’s not necessarily all about automation.

As I said, we still do lots of ad hoc testing in the sprints, and we even do manual tests and exploratory testing before we develop the automated tests. Just in effect dry-running the automated tests, you’re building quality in. Hopefully that answers the question.

Sarah: Perfect, what are the key components of a business case for QA to QE?

Ed: Key components of the business case. I think you really need to focus on the speed, the rapid delivery and the lower cost. There’s an initial investment, an investment in the tools, you’re going to pay some money to have folks come in and develop a library of tests for you. Once you’ve overcome that initial investment I think the payback is immediate and ongoing in the cycles that you run and your point releases after that, and maintaining that level of quality as you go through building in new features and fixing bugs on the point releases.

Sarah: Thanks, Ed, looking at another question here. Do you do performance testing for BGM? Did you explore Mobile Lab?

Ed: Performance testing for the blood glucose meters? Yes, we do, in terms of the time it takes to sync and the amount of data that syncs; those are some of the manual tests that we run. We actually stuff our meters with the maximum, about 500 readings, and measure how long that takes to sync up. That’s the performance testing we do. Right now, as I mentioned, we’re not quite all the way through that maturation process; I think we have a lot more performance testing to do on the back end of things, in terms of the mobile app and its speed. I’m sorry, what was the second part of the question, Sarah?
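
[Editor’s note: for readers who want the shape of that sync-time check, here is a hedged sketch assuming JUnit 5. The helper and the five-minute budget are illustrative assumptions, not LifeScan’s actual procedure or acceptance criteria; only the 500-reading maximum comes from the talk.]

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.time.Duration;
import java.time.Instant;
import org.junit.jupiter.api.Test;

class MeterSyncPerformanceSketch {

    // Hypothetical stand-in: load `count` readings onto a meter (or an
    // emulator) and time a full sync into the mobile app.
    private Duration timeSyncOfReadings(int count) {
        Instant start = Instant.now();
        // ... trigger the Bluetooth sync and block until it completes ...
        return Duration.between(start, Instant.now());
    }

    @Test
    void fullMeterSyncsWithinBudget() {
        // 500 readings is the meter maximum mentioned in the talk; the
        // five-minute budget is an illustrative number only.
        Duration elapsed = timeSyncOfReadings(500);
        assertTrue(elapsed.compareTo(Duration.ofMinutes(5)) <= 0,
                "sync of 500 readings took " + elapsed);
    }
}
```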

Sarah: The second question is, did you explore Mobile Lab?

Ed: Mobile Lab. I think that was the cart thing that Ray had looked at; we looked at that solution at the same time we were looking at HP and Perfecto. I think that’s the Mobile Labs product.

Sarah: Yes, great. Can you elaborate more on what the issues were with SeeTest, and was it industry specific?

Ed: Yes, I think part of it, well, some of this was related to the vendor we had that was utilizing SeeTest. They could never get the framework integrated properly with ALM. I think there were concerns about our regulated industry, and about SeeTest not being as efficient. We also didn’t have it integrated with Jenkins to run the automated testing automatically on build delivery.

There were a whole host of concerns that might have been overcome with a better integrator. But in general, the quality of the tests that were developed, which again could have been a function of the integrator and the way it was implemented, was not as efficient and certainly not as smooth as the product we have now. The next question: are you using Selenium, I think it was, and are you exploring or already using Appium Studio?

Ed: We don’t use Appium Studio, but all the automated testing we run through Perfecto communicates with the Appium server inside the Perfecto cloud, and yes, we do use Selenium in our testing as well.

Sarah: Perfect, thank you for answering those. How did you get the buy-in to adopt the automation tools necessary to drive this change?

Ed: Yes, that was a matter of convincing our upper management that we would get this payback. As I said, we were exploring automation with SeeTest, and we had actually seen a demo at another vendor site in India that had demoed Perfecto, and then weighed the cost of the different solutions. HP has an integrated solution with ALM, but between that and Perfecto we decided, because of the features and maturity of the system, that Perfecto was the way we wanted to go. I was able to convince my upper management, and we actually got a bit of a discount on the first year from Perfecto as well. As we learn more, we can balance out the number of cradles and licenses we need and that sort of thing as we move forward.

Sarah: All right, great, I believe we have one last question here. How do you link these releases into the change management cycle? Are changes raised to cover the releases to production?

Ed: Yes, we use change requests. How are they linked into the change management cycle? It’s just part of our change management process: we raise the CR, which is done in Enable and has a whole host of artifacts linked to it, the requirements documents, the test deliverables, and everything from design reviews to trace matrices, all that happy stuff, and we produce and review all of those documents prior to release. I think I’m getting at the root of that question, right? How do we link the point releases into the change management cycle?

Sarah: Thank you, Ed. We actually have another one; we’re going to make this the last one. To what extent is open source technology being used by LifeScan for QE? What are the advantages and disadvantages of open source in QE?

Ed: Well, the open source tools are Appium and Selenium. The advantage, I guess, is that you’re not paying a license fee, obviously. There are some disadvantages, because you need to control the version that you qualified in this regulated application: make sure it’s controlled on a server, and that we don’t upgrade it without doing a re-qualification, or at least considering the risks associated with what that re-qualification might entail.

Sarah: Great, and I believe that’s all the questions we have at this time, so I just want to thank you, Ed. Thanks for taking the time to do the webinar today, and thanks everyone for joining us as well. You will receive an email within the next 24 hours with the slide presentation and a link to the webcast replay. If you have any questions, please contact us at info@infostretch.com or call 1-408-727-1100 to speak with a representative. Thank you again, everyone. Enjoy the rest of your day.

Ed: Thanks, Sarah. Thanks, everybody.
