Archive for the ‘software development lifecycle’ Category

Crowdsourcing Validation Rules for Uganda National ID

I am curious about whether it is possible to validate that the Uganda National Identification Number (NIN) is well formed. Note, however, that this does not validate that the NIN actually belongs to the person presenting it or that it is correct.

The rules that I have been able to glean are:

  1. Must be 14 characters long
  2. First character is a letter of the alphabet. C seems to be a common letter – does it stand for citizen?
  3. Second letter is either M or F – male or female
  4. Characters 3 and 4 are digits representing the last two digits of the year of birth. They cannot be after 00, since that would make the person younger than 18
  5. Characters 5, 6 and 7 are digits

How can you help? Which of these rules do not match your NIN? Share any additional patterns to build a repository of rules that can later be mapped to programming-language validations – regular expressions and validation frameworks.
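
To get the ball rolling, below is a minimal Python sketch of what a validator built from the rules above might look like. It only encodes rules 1–5; the trailing pattern for characters 8–14 is a placeholder assumption (alphanumeric) until more patterns are crowdsourced, and the sample NINs are made up.

    import re

    # Draft pattern from the crowdsourced rules so far (rules 1-5 only):
    #   - 14 characters long
    #   - first character a letter (C appears common)
    #   - second character M or F
    #   - characters 3-7 are digits
    # The trailing [A-Z0-9]{7} is an assumption for the undescribed characters 8-14.
    NIN_PATTERN = re.compile(r"[A-Z][MF][0-9]{5}[A-Z0-9]{7}")

    def is_well_formed(nin: str) -> bool:
        """Checks structure only - not that the NIN belongs to the bearer."""
        return NIN_PATTERN.fullmatch(nin.strip().upper()) is not None

    print(is_well_formed("CF85012ABCDEFG"))   # True under these draft rules
    print(is_well_formed("CM9001234"))        # False - too short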


Software Delivery Skills Plan 2018

If you fail to plan, you are planning to fail! ~Benjamin Franklin

A new year is upon me, and looking over the horizon I plan to pursue the following work streams to improve my development skills:

1. Work with a new-age JavaScript framework – Vue.js seems to be all the rage; this also means working with webpack and the new JavaScript build tools

2. Make Docker part of my development workflow – this will be project based

3. Distributed ledger proof of concept – distributed ledgers are all the rage now, but what can be achieved to prove their capability?

4. API First project – this is a separation of the backend REST APIs from the front end, and may be combined with Vue.js to deliver a working project. I will also look to leverage the OpenAPI specification

5. Write a paper for a scientific journal leveraging the health informatics work I have been doing over the last 3 years and present it at a conference.

UgandaEMR Bootcamp – Strengthening the Foundation for a National EMR

This article was originally published on the METS website at http://mets.or.ug/ugandaemr-bootcamp-2017/ with photos courtesy of Nancy Karunganwa (https://twitter.com/Kanandra26)

The week of November 20 – 24, 2017, was an exciting one for the EMR team at METS where we held the first UgandaEMR developer bootcamp, aimed at developing local capability to extend and build upon UgandaEMR. The end-goal was to have 4 new HMIS forms developed to be added to UgandaEMR by the participants, while building their understanding of the platform, how it works, and the software delivery process used by the METS team.

The blog post below is abridged in a Twitter thread at https://twitter.com/ssmusoke/status/933970233261424641

Background

METS is the Monitoring and Evaluation Technical Support program, a 5-year Centers for Disease Control (CDC) funded cooperative agreement started in April 2015, led by Makerere University School of Public Health (MakSPH) as prime with the University of California San Francisco (UCSF) as a sub-grantee. The aim of the program is to support the Uganda Ministry of Health (MoH) with strategic information across multiple areas, one of which is the ability to leverage electronic medical records (EMRs) as the foundation for patient-centered care, starting with public health facilities.

From August 2015 to February 2016, the METS team embarked on updating an OpenMRS distribution, WHO Express, which was over 6 years old and installed in about 350 public health facilities, to the latest released OpenMRS version as well as the latest HMIS tools in ART care. In March 2016, the METS team started upgrades and training on how to use the new version of the EMR in Kabarole, a district in Western Uganda, with 20 sites. In May 2016, the first National Training of Trainers was held in Mbarara with over 150 trainees, followed by an official rebranding of the OpenMRS distribution by MoH to UgandaEMR.

Fast forward to September 2017, when METS released UgandaEMR 2.0.0, an upgrade from the 1.x series released in 2016, using the latest long-term release of the OpenMRS platform (2.0.5) and Java 8, which is seen as the future foundation for the next 5 – 10 years.

The key challenge faced by the METS UgandaEMR team is that new features were not being added as fast as the implementing partners, MoH program areas and end-user facilities needed. This led to the concept of a bootcamp to seed the local tech scene with the capacity to develop upon and extend UgandaEMR, providing a key component of sustainability for the platform by speeding up development of the health tools needed by stakeholders.

The bootcamp, with 20 participants, was split into 5 days following the principles of agile software delivery, with a focus on having usable forms by the end of the camp.

Day 1 – Problem definition and requirements gathering

The bootcamp kicked off with the participants outlining their expectations – most of which focused on learning about UgandaEMR, building on top of it, and learning more about the software delivery process used to build the EMR.

The 4 HMIS tools to be developed in the week were introduced by a doctor from the METS team, building up an understanding of the purpose of the paper tools and how they are used within the context of a public health facility.

A parallel activity during the breaks in the tools discussion was setting up the development environment on participant laptops, which included MySQL 5.5, Java 8, git, Maven, the OpenMRS SDK, IntelliJ Community Edition and Visual Studio Code, to be used on the following days.

At the end of each day, the facilitators and participants had a “retrospective” looking at how the day went and what can be improved going forward.

UI Wireframe Showcase

Day 2 – Wireframing

The key challenge with digitizing paper records is how the paper tools are transformed into digital forms. The day kicked off with an overview of the guidelines for the UgandaEMR form design process at https://metsprogram.gitbooks.io/ugandaemr-technical-guide/content/form-management.html

The teams were handed pencils and paper, and with support from facilitators developed UI wireframes. After 3 hours of back-and-forth the teams were asked to showcase their designs, to provide feedback for each individual team but also to share learnings across all the teams.

Day 3 – Concept Management

OpenMRS Architecture Presentation

OpenMRS Architecture

This was the first day that the facilitators and participants delved into OpenMRS architecture and design, leveraging parts of a presentation, “Health IT and OpenMRS: An Introduction”.

The idea was to introduce concepts and how medical data is coded using concepts. Once the introduction was complete, the teams re-assembled to identify the concepts needed for their forms by searching the locations below:

  • UgandaEMR concept dictionary
  • CIEL – a local instance using the latest released concept dictionary
  • Identifying new custom concepts to be created

The discussion and modeling of the required concepts led to interesting discussions on differing approaches to handling repeated data capture, dependent on the context of usage of the concepts. In certain cases we agreed to disagree about the models, which would be tackled in subsequent iterations.

Day 4 – Coding HTML Forms

This was the long-awaited activity: seeing the forms come to life. Based on prior experience, all laptops already had development environments set up and configured.

The process, copied from the OpenMRS development guide, was as follows:

  • Each team picked a single machine to work on
  • The team created a branch named after the HMIS number of the form being worked on
  • An OpenMRS SDK server named ugandaemr was created
  • The server was configured to watch the development folder so that any changes to the forms would be picked up automatically, following the steps at http://bit.ly/2hWyAun

Once the initial form metadata was entered, and after a quick primer on using HTML form entry tags, the teams worked on their forms, leaving gaps where concepts were not yet defined.

The presentations were electrifying, as the work for the whole week could be visualised. The approach within each team was for a non-developer to do the actual coding of the form, with the support of the rest of the team, so as to slow down the “coding speed” and ensure all team members were active.

At the end of the day a working version of each team’s form was committed to GitHub and a pull request issued for progress monitoring.

Day 5 – Presentation Day

This is usually the most stressful day; however, to make it fun, the teams self-organized the order of presentation within a number of constraints, with opportunities given to team members who had not led any presentations during the week.

Dr Eddie Mukooyo, Ms Evelyn Akello with the best performing team at the bootcamp

This was to give all the team members an opportunity and to force the teams to prepare for the presentations.

A winning team was selected based on their presentation style, understanding of the HTML form being delivered and progress within the constraints.

Key Learnings

  1. The bootcamp had 20 participants, of whom 6 were female, with a range of backgrounds including software engineering and Masters students, software developers, M&E and public health practitioners
  2. The 5 days were not sufficient to complete the forms, so the teams will continue working on their own, with a planned follow-on activity to finalize the forms for inclusion in UgandaEMR
  3. The positive feedback from the participants centered on the practical, collaborative and hands-on nature of the bootcamp, the learning experience of working within their randomly selected teams, and the balance brought to the teams by both technical and non-technical members.

The event as seen by a little eye

Interactive discussions

Smiles and laughter to lighten up the day

Random selection of teams using sticky notes

Dr Eddie Mukooyo making closing remarks at the bootcamp on opportunity and expectations from the cohort of participants

Part of the METS UgandaEMR team with CDC Program Manager just after the first round of final day presentations

Group Photo – The first UgandaEMR Bootcamp participants and facilitators

Software Delivery Project Setup and Engineering Checklist

I have been a part of a number of software delivery projects (note the emphasis on delivery, not development) over the years, and I thought I would share my checklist for projects as well as myths that need to be removed from teams for success.

An interesting quote from a delivery manager I have worked with and admire: “We need to remove the notion that software products are successful because of hero developers, it is teams that consistently produce quality software”.

Given that success rests on teams, and that not everything can be put in place from day 1 of the project, here is what I think.

Must Have

These practices/processes must be in place on day 1, otherwise you run the risk of paying for them later.

  1. Version control process – I have seen many a project with a version control system, but without a process for managing how the developers in the team commit code for features. I personally recommend
    • Trunk based development, master always has the latest version of working code
    • Version branches: when there are major version changes, maintain a separate line for bug fixes. However, this leads to overhead for back-porting (from master to the version branch) or forward-porting (from the version branch to master), so it must be used very carefully
    • Pull Requests for code review and feature tracking – each developer must have their own version of the code, and issue pull requests for code review and merging
    • Each developer works on a feature in its own branch so that work does not slow down during code review cycles
    • Regular developer commits
    • Pull request guides – I like this one from the OpenMRS open source project, where I contribute: https://wiki.openmrs.org/display/docs/Pull+Request+Tips
  2. Unit Testing Framework – pragmatic usage of unit tests for business rules, and multiple paths through code
  3. Automated building of deployment packages – manual builds are error prone and not repeatable
  4. Automated configuration switching between environments (a minimal sketch follows this list)
    • external configuration of databases, web service calls etc
    • separation of development and staging environment configurations
  5. CI pipeline – shows status of builds on code commits to the repository, requires unit testing to be in place
  6. Ticketing and Task Tracking – what features are to be built when, and what is their relationship? Also helps track work across sprints as well as communicate to stakeholders
  7. Security – The Open Web Application Security Project (OWASP) top 10 are a minimum standard to be followed
  8. Architecture decisions:
    • Configuration over customization
    • Pragmatic use of external libraries that solve some part of the problem space
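
As an illustration of item 4 above (automated configuration switching), here is a minimal Python sketch, assuming configuration is driven by environment variables; the variable names (APP_ENV, DB_URL) and the settings themselves are made up for illustration, and the same idea applies whether you use Spring profiles, dotenv files or a dedicated configuration service.

    import os

    # Hypothetical environment variables; the names and defaults are illustrative only.
    ENVIRONMENT = os.environ.get("APP_ENV", "development")

    DEFAULTS = {
        "development": {"db_url": "mysql://localhost/app_dev", "debug": True},
        "staging":     {"db_url": "", "debug": False},
        "production":  {"db_url": "", "debug": False},
    }

    def load_config() -> dict:
        """Select the configuration for the current environment without code changes."""
        config = dict(DEFAULTS[ENVIRONMENT])
        # Values supplied by the environment always win over baked-in defaults,
        # keeping credentials and endpoints out of version control.
        if "DB_URL" in os.environ:
            config["db_url"] = os.environ["DB_URL"]
        return config

    print(load_config())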

Important

These may not be in place at project start, but must remain front of mind and be put in place when the opportunity arises.

  1. Coding styles – at project level or even at different layers
  2. Documentation – usually an afterthought, leading to gaps later due to additional pressures. Once project stability is reached, then it is important for different stakeholders. I love Markdown and the excellent GitBook (http://gitbook.com) editor and toolchain
  3. Integration testing framework – includes UI testing of flows; however, it is usually brittle, so it has to be done in a pragmatic manner for critical and complex paths
  4. Automated deployment of builds to a staging server – this is a great step as it does not break the developer flow for showcases and demonstrations to stakeholders.
  5. Integration, load, security testing – leave out at your peril as it will come to bite you later. Set some assumptions and test them out to your heart’s content in an automated manner

Myths to be quashed in teams

  1. Developers do not write documentation – it is the responsibility of every member of the team to contribute to documentation writing and review
  2. Back end, database and front end developers – large projects may provide the flexibility to isolate developers, however it is important for developers to cut across the “application layers” to reduce rework and enable evolution as knowledge of the product increases.
  3. Testing is a waste of time – a stitch in time saves nine. Pragmatic testing saves time since it provides more confidence in code reducing stress before showcases and production deployments.
  4. Developers should leave testing to QA staff – testing is multi-layered, so developers should play their part to support testing and quality assurance efforts. QA staff have a different mindset which helps poke holes and find gaps in developed software
  5. All the developers must use the same IDEs – the best tool for a developer is the one they know how to use. If the workflow is IDE-specific then the project setup and configuration needs to be looked at to remove this dependency, which will constrain the team later
  6. I can build my code faster and better than a framework out there – advice from my mentor “Each problem you are solving is a special case of a more general problem”, “There is no new problem under the sun”. Building new code to solve a special case may be faster today, but you will pay for it in maintenance and evolution

Looking forward to hearing your thoughts and what I may have missed

TechTip: Dbunit Export from Jetbrains DataGrip

I am an avid test driven development (TDD) advocate nowadays, with a pragmatic slant of course, looking to bullet proof the features that I deliver to ensure that they do what is expected, and work out edge cases.

A big challenge in testing is generating test data, which is needed to set up some integration test workflows. I have been using Jailer (http://jailer.sourceforge.net/) to generate data from existing tables in a DbUnit format, which I can then embed in my test dataset XML files.

This is a challenge due to the mapping of relationships by Jailer (a neat feature, by the way). So while working in DataGrip, the database IDE of choice, we were stuck on how to export different formats when looking at a table. Such a solution would allow us to leverage the available filtering and searching features to nail down the datasets that need to be exported.

On contacting the support team through Twitter (https://twitter.com/0xdbe/status/853900122828222465/photo/1), the recommendation was to modify the existing XML Groovy script to generate DbUnit XML, following the steps at https://www.jetbrains.com/help/datagrip/2017.1/extending-the-datagrip-functionality.html

And well, an hour later, a Groovy script to do just that can be found at https://gist.github.com/ssmusoke/ca4c55b4e52de97acb99a590644a677f

The code was not rendering well here, hence the move to a Gist.

Building and Maintaining Technical Documentation – Markdown with Gitbook Tooling

Documentation, the word that brings cold sweats to techies far and wide and makes product managers roll their eyes, is nevertheless the one essential ingredient in aiding adoption and usage of software tools and services.

A key mantra for software development and delivery is “documentation, documentation, documentation”, while agile purists will take “Working software over comprehensive documentation” to mean no documentation at all.

Obviously, in today’s world the expectation from clients is that software is key and must evolve quickly to meet changing needs measured in weeks, not months. This fast-paced change actually highlights the importance of documentation, but places pressure on it to evolve more rapidly and to be easier to use and understand, while maintaining a trail of the changes made within a rapidly changing environment.

Many formats have come and gone over time: plain text, HTML Help, Windows Compiled HTML Help (CHM), Oracle and Sun Help, Eclipse Help, Flash help, not forgetting PDF and MS Word documents for printed manuals. The common practice was to use a single tool to develop the help, which then compiles into multiple help formats.

Fast forward and the model seems to remain the same; the challenge, however, is what markup language to use to enable generation. In comes Markdown (https://en.wikipedia.org/wiki/Markdown), which aims for readability (like JSON and YAML) so that users do not need to know a markup language, but still provides a way of formatting with simple conventions. Interestingly, at the time of writing even WhatsApp, one of the most popular chat clients, uses Markdown-like formatting.

The formatting challenge has been solved, but now how do you build the content, version control it to keep track of updates, generate the output and share it with the world? The most common tools are:

  1. GitHub pages (https://pages.github.com/) – using a special branch within a GitHub based repository to build online based documentation
  2. GitBook (https://www.gitbook.com/) which provides an excellent editor, hosting, build for generating PDF, online documentation, and mobile format (.epub and .mobi). Just open an account, fire up the editor and you are good to go

However, if you are using private repositories and need to keep content internal-facing, then you need to pay quite a bit for GitBook or jump through multiple hoops. Fortunately, GitBook provides a command line client which can be used in this case to build the documentation, which is then distributed through internal channels.

The steps to setup a local Gitbook environment are:

  1. Install npm
  2. Install Gitbook cli by typing the command below
    $ npm install gitbook-cli -g
  3. Setup a git repository for the project and add the following files:
    • .gitignore – include the _book directory in which the book will be generated
    • book.json – sample below
      {
        "title": "My Book",
        "description": "Description",
        "author": "Author Name",
        "gitbook": ">= 3.0.0",
        "structure": {
          "summary": "SUMMARY.md",
          "readme":"README.md"
        }
      }
    • README.md – the default information about the book
    • SUMMARY.md – the table of contents for the book, whose content files may be in different directories (a docs directory is recommended)
      # Summary
      
      * [Introduction](README.md)
      * [Chapter 1](docs/chapter1.md)
      * [Chapter 2](docs/chapter2.md)
    • package.json – contains build commands for the book
      {
        "scripts": {
          "docs:prepare": "gitbook install",
          "docs:watch": "npm run docs:prepare && gitbook serve"
        }
      }
    • The final project structure looks like the example below


      Example gitbook project structure

  4. You can view the book locally by running the command below which starts up a server running on port 4000

    $ npm run docs:watch

  5. Commit the contents to your git repo

I have to say that I love the GitBook Editor, which works way better than my IDE, so after committing the initial files I fire it up and open the directory containing my project so that I can edit the files from there. Obviously I lose the ability to put high-quality comments on what has changed in the files without jumping back into my IDE or the git command line, but the sacrifice is currently worth it.

Additional Steps to Generate ePub and PDF

  1. Install Calibre (https://calibre-ebook.com/download) which provides the ebook-convert utility
  2. Add tasks to the package.json as below:
    {
      "scripts": {
        "docs:prepare": "gitbook install",
        "docs:watch": "npm run docs:prepare && gitbook serve",
        "docs:generate-epub" : "gitbook epub ./ ./_book/mybook.epub",
        "docs:generate-pdf" : "gitbook pdf ./ ./_book/mybook.pdf",
        "docs:generate" : "gitbook build && npm run docs:generate-epub && npm run docs:generate-pdf"
      }
    }
  3. Generate epub by running the command

    npm run docs:generate-epub

  4. Generate pdf by running the command

    npm run docs:generate-pdf

  5. Generate both epub and pdf by running

    npm run docs:generate

UPDATE: More information on the GitBook Toolchain can be found at https://www.gitbook.com/book/gitbookio/docs-toolchain

UPDATE2: Added steps to generate epub and PDF documents

UPDATE3: Discovered that the process has a name – Documentation Driven Development, which is a pretty interesting concept … https://twitter.com/brnnbrn/status/847197686042312704/photo/1


Alternate Approach to Legal Independent Election Tallying

The Uganda elections are more or less over, with less than 6 hours left for the Uganda Electoral Commission (EC) to announce the results of the presidential elections.

Given all the time on our hands, with no social media, the team at Styx Technology Group designed the following alternative approach to independent electoral vote tallying for future elections that provides inbuilt mechanisms for audit and verification of results.

The primary data sources for the process are:

  1. Official EC list of polling stations and voters per polling station
  2. Photos of the signed election tally sheets from each polling station. To ensure that the photos are not tampered with and provide an audit trail:
    • Each photograph has to be taken with information on the camera, the GPS coordinates of where the photo was taken, and the date and time when it was taken – metadata which many cameras record and share using the Exchangeable Image File Format (EXIF); a minimal verification sketch follows this list
    • Two separate photos of the tally sheets have to be taken by different cameras
    • The camera equipment may be registered beforehand to provide validation of the source of the information
    • The signatures of the returning officers and stamp must be clear and visible in the photo
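
As a sketch of how the EXIF metadata above could be screened at the tallying center, below is a minimal Python example using the Pillow imaging library; the file name, the required fields and the pass/fail rule are assumptions for illustration, not a full chain-of-custody check.

    from PIL import Image, ExifTags

    def audit_photo(path: str) -> dict:
        """Pull out the EXIF fields used to screen a tally-sheet photo."""
        exif = Image.open(path).getexif()
        # Map numeric EXIF tag ids to their readable names
        named = {ExifTags.TAGS.get(tag_id, str(tag_id)): value
                 for tag_id, value in exif.items()}
        gps = exif.get_ifd(0x8825)  # 0x8825 is the GPS information IFD
        return {
            "camera": f"{named.get('Make', '?')} {named.get('Model', '?')}",
            "taken_at": named.get("DateTime"),  # e.g. "2021:01:14 17:05:33"
            "has_gps": bool(gps),
            # First screening rule (assumed): camera model, timestamp and GPS must all be present
            "passes_screen": bool(gps) and "DateTime" in named and "Model" in named,
        }

    print(audit_photo("tally_sheet_station_001.jpg"))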

The architecture for the technology solution is as follows:

  1. Web-based solution accessible via any browser. Due to poor Internet connectivity in many areas of the country, an Android app would be provided to assist in data collection, with data sent once the user gets into an area with Internet access.
  2. The field officers who capture the photos would also be provided with an option of entering the candidate vote tallies.
  3. At the tallying center, candidate vote tallies are entered by data clerks from the photos received. In order to reduce errors the following approach would be used:
    • The clerks are randomly assigned photos as they come in
    • The tally for a station must be entered correctly by two separate data entry clerks, then approved by a supervisor. This process is formally called the two-pass verification method or double data entry (a minimal sketch follows this list).
  4. All correctly entered data is shared with the rest of the world for download and analysis.
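
The two-pass verification described above can be reduced to a very small acceptance check; here is a minimal Python sketch in which the data structures and field names are made up for illustration.

    from dataclasses import dataclass

    @dataclass
    class TallyEntry:
        station_id: str
        clerk_id: str
        votes: dict  # candidate name -> vote count, as read from the photo

    def needs_supervisor(first: TallyEntry, second: TallyEntry) -> bool:
        """A tally is accepted only when two independent entries match exactly;
        anything else (including both passes by the same clerk) is escalated."""
        if first.clerk_id == second.clerk_id:
            return True  # the two passes must come from different clerks
        return first.votes != second.votes

    a = TallyEntry("ST-001", "clerk-07", {"Candidate A": 312, "Candidate B": 154})
    b = TallyEntry("ST-001", "clerk-19", {"Candidate A": 312, "Candidate B": 154})
    print(needs_supervisor(a, b))  # False - matching entries are accepted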

This system is mission-critical, having to be available for the entire vote counting period of 48 hours, so the architecture includes the following paths for data collection:

  1.  Multiple access IP addresses and domains for the website in case some are blocked off
  2. Any data collected via the Android app can be sent via email to a dedicated tallying center address. To ensure that only data from the app is received and that it is not changed in transit, encryption is used (a minimal signing sketch follows this list).
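
The goal stated above is that data from the app arrives unchanged; one lightweight way to get that tamper-evidence, alongside or instead of encrypting the email, is a message authentication code over the payload. Below is a minimal Python sketch using only the standard library; the shared secret and payload fields are made up for illustration, and real key management (per-device secrets, rotation) is out of scope.

    import hashlib
    import hmac
    import json

    # Hypothetical shared secret provisioned into each registered collection app
    SHARED_SECRET = b"replace-with-per-device-secret"

    def sign_payload(payload: dict) -> dict:
        """Attach an HMAC so the tallying center can detect tampering in transit."""
        body = json.dumps(payload, sort_keys=True).encode("utf-8")
        signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        return {"payload": payload, "signature": signature}

    def verify_payload(message: dict) -> bool:
        body = json.dumps(message["payload"], sort_keys=True).encode("utf-8")
        expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, message["signature"])

    message = sign_payload({"station": "ST-001", "Candidate A": 312, "Candidate B": 154})
    print(verify_payload(message))  # True - the unmodified message verifies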

The inspiration came from a quote by Gandhi, “Be the change you wish to see in the world”, disproving the myth that there is no local capability to design and implement such solutions and, most of all, that such solutions have to be complex.

Looking forward to hearing your thoughts and suggestions…
