Andrey Madan

Biography

Andrey Madan is a software testing professional focusing on automated testing tools and methodologies. He is currently a Senior Solution Architect at Parasoft, where he works with customers to identify the best technical and business approaches for efficient testing of heterogeneous applications. Andrey is passionate about testing approaches that satisfy stringent quality expectations. Prior to joining Parasoft, Andrey spent ten years at Medtronic building test environments for life-critical embedded systems, where he led teams that worked with development and testing organizations to deliver solutions spanning all phases of the SDLC. Andrey received bachelor's and master's degrees in Computer Science from Purdue University in West Lafayette, IN.

Are Your Continuous Tests too Fragile for Agile?

With a fragile test suite, the Continuous Testing that’s vital to Agile just isn’t feasible. If you truly want to automate the execution of a broad test suite (unit, component, integration, functional, performance, and security testing) during continuous integration, you need to ensure that your test suite is up to the task. How do you achieve this? This session will provide tips on building tests that are:
• Logically componentized: Tests need to be logically componentized so you can assess the impact of each change. When tests fail and they’re logically correlated to components, it is much easier to establish priority and assign follow-up tasks to the right people.
• Incremental: Tests can be built upon one another without compromising the integrity of either the original or the new test case.
• Repeatable: Tests can be executed over and over again with each incremental build, integration, or release process.
• Deterministic and meaningful: Tests must be clean and deterministic, with pass and fail having unambiguous meanings. Each test should do exactly what you want it to do: no more and no less. Tests should fail only when an actual problem you care about has been detected. Moreover, the failure should be obvious and clearly communicate what went wrong (a minimal sketch of this principle follows the list).
• Maintainable within a process: A test that’s out of sync with the code will either generate incorrect failures (false positives) or overlook real problems (false negatives). An automated process for evolving test artifacts is just as important as the construction of new tests.
• Prescriptive workflow based on results: When a test does fail, it should trigger a process-driven workflow that lets team members know what’s expected and how to proceed. This typically includes a prioritized task list.
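To make the deterministic-and-meaningful principle concrete, here is a minimal sketch in Java with JUnit 5 (an assumed framework; the session itself is tool-agnostic, and the Invoice class is hypothetical). It contrasts a flaky test that races the real system clock with a deterministic one that injects a fixed clock and fails with an unambiguous message.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;

class InvoiceTest {

    // Hypothetical class under test: stamps each invoice with the current time.
    static class Invoice {
        private final Instant issuedAt;
        Invoice(Clock clock) { this.issuedAt = clock.instant(); }
        Instant issuedAt() { return issuedAt; }
    }

    // Anti-pattern (shown commented out): reading the real system clock makes
    // the expected value a moving target, so the test fails intermittently
    // and the failure says nothing about the code under test.
    //
    // @Test
    // void flakyTimestampTest() {
    //     Invoice invoice = new Invoice(Clock.systemUTC());
    //     assertEquals(Instant.now(), invoice.issuedAt()); // races the clock
    // }

    // Deterministic: a fixed clock makes the result identical on every run,
    // and the assertion message makes any failure self-explanatory.
    @Test
    void stampsInvoiceWithInjectedClockInstant() {
        Instant fixedInstant = Instant.parse("2024-01-15T10:00:00Z");
        Invoice invoice = new Invoice(Clock.fixed(fixedInstant, ZoneOffset.UTC));
        assertEquals(fixedInstant, invoice.issuedAt(),
                "invoice should be stamped with the injected clock's instant");
    }
}

Injecting the clock rather than reading it inside Invoice is the design choice that makes the test repeatable; the same pattern applies to any external dependency (randomness, network, file system) that would otherwise make pass and fail ambiguous.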