iCARE – testing and debugging locally and from Africa

During the early development stages, all app testing was done at TELMeD HQ, between the dev team and whoever's input and feedback was required locally. A lot of the feedback we received was verbal via regular Skype meetings, emails, Google Docs and via our project management tool, Redbooth. We used GitHub throughout, but because most of the people we were asking for feedback had never used GitHub before, we didn't really utilise the issues feature.

From my point of view, before sending out a version for testing, I would test using Flash Builder's built-in tools combined with Adobe Scout. Scout in particular became essential to fixing a memory issue about halfway into development. I had an idea what the cause was, but Scout's graph format really helped make it black and white. The issue came about when one day the app loaded on the iPad and the next it didn't; running the app via Scout made it clear what needed to be changed, and the fix resulted in refactoring a lot of code (and this blog post).
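
A minimal, hypothetical sketch of the kind of cleanup that sort of refactor involves (the destroyScene method, onSceneFrame listener and currentScene reference are illustrative names, not the app's real code) looks something like this in ActionScript:

```actionscript
import flash.display.Bitmap;
import flash.display.DisplayObject;
import flash.display.Sprite;
import flash.events.Event;

// Hypothetical scene teardown: remove listeners and dispose of bitmap data
// explicitly so the memory actually drops off Scout's graphs instead of
// lingering until (or beyond) the next garbage collection.
private function destroyScene(scene:Sprite):void
{
    // Listeners added during scene setup keep the scene reachable if left attached
    scene.removeEventListener(Event.ENTER_FRAME, onSceneFrame);

    // Dispose of bitmap data rather than waiting for the garbage collector
    for (var i:int = scene.numChildren - 1; i >= 0; i--)
    {
        var child:DisplayObject = scene.removeChildAt(i);
        if (child is Bitmap)
        {
            Bitmap(child).bitmapData.dispose();
        }
    }

    // Drop our own reference so the scene itself can be collected
    currentScene = null;
}
```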

When we started releasing app builds to more people and across larger distances, we began using Google Sheets to keep track of issues. This was extremely helpful (if a little overwhelming at times, seeing a project be picked apart for issues) as it was the one place where everyone involved could check whether an issue had already been logged and, if not, add it to the sheet. We often worked on it collaboratively. This reduced email traffic but also increased useful communication, in that anyone involved could see what had been fixed in newer app builds and what still required feedback or was in progress.

The biggest challenge was that as the app became more and more complete, certain issues would crop up that felt impossible at the time to re-create and therefore fix. For example, we had an issue where very rarely (and always at the most inconvenient time!) the voiceover audio would simply stop playing from a certain scene. I saw this for myself and we had some feedback about it, but it was so rare and difficult to re-create (it felt random) that I was always reluctant to say "it's resolved!" after making changes to fix it.

Fairly early on in the development stage we began sending links to download and test the app in Tonkililli, Sierra Leone, via an iPad we had shipped out. By far the biggest challenge with gathering feedback, and the thing restricting their ability to install frequent app updates, was (is) their internet connectivity. The bandwidth is extremely low and dropouts occur extremely frequently. This made setting up an app build to stream usage data to Scout on one of our machines impractical. We relied on verbal feedback and a lot of questions via Skype meetings, emails and Google Sheets.

Due to their internet connectivity issues, one of the biggest challenges from my point of view was getting usage data from the devices to our managed server. Will and I had so many conversations about this feature throughout the development of the app and the server side of things that it felt like the most mature part of the codebase towards the end. Whilst usage data was transferring pretty well from Sierra Leone, issues came up such as:

  • Transfers were too slow.
  • Transfers kept being interrupted and therefore had to start again from the very beginning.
  • A failed transfer could break the app.
  • Transfers were time-consuming to administer, as someone had to press a button to start and then quit/reload the app to restart it when it got stuck.
  • The person administering the data transfer didn't know what went wrong, how, why or at what point during the transfer, making it difficult to provide feedback.

Over the iterations, everything above was eventually resolved completely, or at least improved a lot. Below is how we addressed each:

  • Transfers were too slow: Will optimised this massively.
  • Transfers kept being interrupted and had to start again from the very beginning: updated so that the SQLite database on the device removes each event once it gets the OK from the live server, meaning that after an interruption it doesn't have to try to re-send duplicate data (see the sketch after this list).
  • A failed transfer could break the app: improved stability and error checking to prevent the app from becoming locked when a transfer fails.
  • Transfers were time-consuming to administer: improved the functionality to be more automated and to recover by itself in the event of a lost connection or error.
  • The person administering the transfer didn't know what went wrong, how, why or at what point: provided text boxes with more detailed data which update during every event transfer. If an event fails to transfer for whatever reason, the information is shown to the user along with a button to copy the debug text so they can email it to us to look into. This has been a great success in understanding what is an actual error with the app and what is down to internet connectivity.
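
As an illustration of the acknowledge-then-delete approach, here is a minimal ActionScript sketch rather than the production code; the usage_events table, sync.php endpoint and updateDebugText helper are hypothetical names. The sync loop sends one queued event at a time and only deletes it locally once the server has confirmed receipt:

```actionscript
import flash.data.SQLConnection;
import flash.data.SQLStatement;
import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.filesystem.File;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;

private var conn:SQLConnection = new SQLConnection();

private function sendNextEvent():void
{
    // Open the local event queue (stored in the app's SQLite database)
    if (!conn.connected)
    {
        conn.open(File.applicationStorageDirectory.resolvePath("icare.db"));
    }

    // Pick the oldest event that hasn't been acknowledged by the server yet
    var select:SQLStatement = new SQLStatement();
    select.sqlConnection = conn;
    select.text = "SELECT id, payload FROM usage_events ORDER BY id LIMIT 1";
    select.execute();

    var rows:Array = select.getResult().data;
    if (rows == null)
    {
        return; // nothing left to sync
    }

    var eventId:int = rows[0].id;

    var vars:URLVariables = new URLVariables();
    vars.payload = rows[0].payload;

    var request:URLRequest = new URLRequest("https://example.org/icare/sync.php");
    request.method = URLRequestMethod.POST;
    request.data = vars;

    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, function(e:Event):void
    {
        // Server acknowledged the event: delete it locally, then send the next one.
        // After an interruption only unacknowledged events remain, so nothing is re-sent.
        var del:SQLStatement = new SQLStatement();
        del.sqlConnection = conn;
        del.text = "DELETE FROM usage_events WHERE id = :id";
        del.parameters[":id"] = eventId;
        del.execute();
        sendNextEvent();
    });
    loader.addEventListener(IOErrorEvent.IO_ERROR, function(e:IOErrorEvent):void
    {
        // Connection dropped: the event stays queued, the app isn't locked,
        // and the failure is written to the debug text box for the admin to copy.
        updateDebugText("Event " + eventId + " failed: " + e.text);
    });
    loader.load(request);
}
```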

Data synchronisation takes place within the about section of the app. Those administering this for the project in Sierra Leone have been able to email us any issues easily by pressing the copy debug button. Obviously, for any release version of the app made available to the public, the syncing info and copy debug tools won't be there.
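
As a rough sketch of the copy debug button (assuming a debugLog string that the sync screen builds up as it goes, which is an illustrative name), the handler simply places that text on the system clipboard so it can be pasted into an email:

```actionscript
import flash.desktop.Clipboard;
import flash.desktop.ClipboardFormats;
import flash.events.MouseEvent;

// Hypothetical handler for the copy debug button: copies the accumulated
// debug text so it can be pasted straight into an email to us.
private function onCopyDebugPressed(event:MouseEvent):void
{
    Clipboard.generalClipboard.clear();
    Clipboard.generalClipboard.setData(ClipboardFormats.TEXT_FORMAT, debugLog);
}
```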

[Debug example 1]

The data pasted in this image tells us that the data transferred OK, but that it was a duplicate entry, possibly caused by the internet connection dropping out at some point previously.

[Debug example 2]

Here we have all the information we need to tell us that, for this particular event, there was no wifi connection.

Communicating and acquiring feedback from Africa to us in the UK was quite a unique aspect of the project and involved a bit of trial and error along the way. However, by improving things with every iteration, including the processes involved as well as the application itself, it became more autonomous and easier to work with.

