Veni, vidi, vici, refactor
A few years ago I was part of a small software team. The product was a desktop application for running market simulations. The work was interesting and the team was good.
Our client preferred to give us work as many small projects, all fixed price and fixed scope.
We ran into issues with project quality. With features and budget fixed, quality was the only thing left to cut when we ran into problems.
And this is how our story begins.
One day the client’s directors decided to do something about quality. Project “Quality” was born (not the real name :)).
The client wanted to automate our manual testing. That way we could be sure we did not break existing functionality when we introduced something new.
Our team ran a rough estimate and saw that this would take months. Months away from work that would bring actual business value. We did not have the capacity to do it ourselves, and hiring was not a good option either.
So our client decided to outsource it. Our task was to produce the requirements for the other team.
We wrote the requirements ourselves and stayed closely involved in the whole project, as we would be the ones maintaining the code later.
A few meetings later, an outsourcing firm was chosen. They assembled a small team and got familiar with the tool we had chosen for the job.
Our awesome QA did most of the work of choosing the right UI test automation tool and preparing the test case documents.
We also provided requirements for a server that would run the tests once they were created.
We gave the test automation team a build of our application to work with, along with other helpful documentation and projects they could use.
Soon they were busy developing tests.
Things were going OK, but of course we hit a few issues. One stood out:
Our desktop app worked with projects that could be saved and loaded. It was tedious to open and close the same project again and again, so we implemented the logical fix: when you open the app, it opens the last project you worked on.
We were not paid for this feature and had not communicated it to the client upfront. It was a small task, implemented in an hour, and convenient for the users. We loved it.
The testing team did not like it one bit.
This change was not mentioned in their test documents, and it broke most of their tests, which expected the application to start with a clean state every time it was opened.
The client dropped the feature. I am still disappointed about that, because we sacrificed quality and user experience for the automation team's convenience.
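In hindsight, the conflict could have been resolved without dropping the feature. Here is a minimal sketch of one way to do it, assuming a hypothetical command-line flag and state file (none of these names come from the real product): automated tests pass a flag to get a deterministic clean state, while regular users keep the convenience.

```python
import argparse
import json
from pathlib import Path

# Illustrative sketch only: the flag name, state file location, and JSON
# format are all assumptions, not the real product's implementation.
STATE_FILE = Path.home() / ".market_sim" / "last_project.json"

def resolve_startup_project(argv=None):
    """Return the project path to open on startup, or None for a clean state."""
    parser = argparse.ArgumentParser()
    # Automated tests pass --clean-start to get a deterministic empty state.
    parser.add_argument("--clean-start", action="store_true")
    args = parser.parse_args(argv)

    if args.clean_start or not STATE_FILE.exists():
        return None  # start with no project loaded

    # Regular users get the convenience: reopen the last project they used.
    return json.loads(STATE_FILE.read_text()).get("last_project")

print(resolve_startup_project(["--clean-start"]))  # -> None
```

One small switch like this would have let both the users and the test suite have what they needed.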
In a few months the tests and the build machine were ready. We had our manual test steps automated!
Everything was a little unstable at first, being tested through the UI and all.
We dug into the code (it was not simple record and play) and took ownership.
Our regression test “shields” were now up.
Over the next few weeks we worked on features requested by the business. We spent time maintaining the tests and writing new ones for the features we were implementing.
We knew the initial effort would not be small, but hoped we would quickly gain skill and speed up over time.
This was the time to see whether the tests would pay off.
Here are the issues we found:
- the tests focused on the UI rather than the underlying functionality;
- test runs were slow, so feedback was slow;
- our productivity dropped, while we found almost no real bugs.
I will go into more detail for each of those points.
This was partially our fault: we had prepared the test cases and given them to the automation team.
But it was not only that. The nature of UI tests was also part of the problem.
Let's address the test cases first. In general, they focused on the UI first and on functionality second.
There was enormous complexity hidden in the functionality sitting “behind” that UI. The business domain was complex. Understanding it was hard, and when things are hard, people usually do what is easier and focus their efforts there.
This is why our manual testing focused more on the UI parts of the application. That was easier to work with than the “real” functionality underneath.
What was true for the test cases was true for the UI tests. Testing the UI with them was easy. Testing what happened underneath was much harder and less reliable.
Our automated UI tests thus focused even more on the UI. Few of them tested the underlying functionality (running simulations and so on), and those that did were not focused enough, tested too much, and usually just verified current behavior.
They proved ineffective. Every time we changed the underlying simulation code we had to change what these tests verified. They could not test only the part we changed, and they did not fail only when we had actually broken something. This was a big issue.
This problem was then amplified by the next point.
Slow run times are expected with such a setup. We had a dedicated server running four builds in parallel. It was not enough: a full build took hours to finish.
One of the bottlenecks in our process before Project “Quality” was manual testing. Automating it unfortunately did not make it faster, even with a machine doing much of the work for us.
There was no fast feedback. When tests failed, even after we had found the cause, we had to wait a long time to learn that the fix would not break anything else. And when we did “break” something, it was usually another test rather than an actual regression.
In the next few projects our productivity took a huge hit: what used to take us a week now took a week and a half.
This leads to our last point and conclusion.
Productivity was down. We sacrificed it for confidence in quality.
But did we get higher quality?
We actually found almost no bugs. The automation team found 11 low-priority bugs while creating the tests, which meant our test cases and our software were not aligned. All were in the UI and had low impact. After we took over and maintained the tests ourselves, things did not change: we would find an occasional bug, but most of the time the failure was an issue with the test itself.
In the end we were spending extra time just maintaining the tests, while catching only the occasional small, low-impact bug.
Not a good way to spend thousands of dollars for a few months, in my opinion.
There are a lot of mistakes in this story. If I had to sum it up in one sentence, it would be this:
Instead of searching for the underlying cause of our issues, we put a lot of effort into treating the symptoms.
I wish this was the only occasion the above sentence was true…
We treated quality as a project. Something we could get if we pushed harder and did more work.
Quality is not a project.
We needed to change the way we work, not put more effort in the one area that happened to make the issue visible.
After the project failed we went looking for a better solution.
At the time, we identified three main issues: shallow domain knowledge, testing that happened too late in the process, and quality that was not treated as a shared value.
We ran internal workshops to increase domain knowledge.
We made an effort to shift testing earlier in the process. We created more unit and integration tests. We improved our logging and exposed more of it so QA could take a look.
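To illustrate what “shifting testing earlier” meant in practice, here is a minimal sketch. The simulation function and its numbers are hypothetical, not the real product's code; the point is that a unit test exercises the domain logic directly, runs in milliseconds, and fails only when that logic actually changes.

```python
# Hypothetical example: test the simulation logic itself instead of
# driving the whole UI to verify a simulation result on screen.

def simulate_growth(principal: float, rate: float, periods: int) -> float:
    """Compound a starting value over a number of periods."""
    value = principal
    for _ in range(periods):
        value *= 1 + rate
    return value

def test_simulate_growth():
    # Fails only when the simulation logic changes, unlike a UI test
    # that exercises rendering, window state, and everything in between.
    assert abs(simulate_growth(100.0, 0.1, 2) - 121.0) < 1e-9
    assert simulate_growth(100.0, 0.0, 5) == 100.0

test_simulate_growth()
print("ok")
```

A suite of tests like this would have given us the fast, focused feedback the UI tests never could.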
We embraced quality as a company value. That is a whole other story.
All of this was not only more helpful than any project; it also required less effort and actually reduced our workload and stress instead of increasing them.
Thank you for reading.
If you would like to receive future post updates, please subscribe below.