Archive for the ‘software delivery’ Category

The Case of a Platform Rebuild from Laravel

This post grew out of a discussion with a colleague who reached out to me for advice on whether to rebuild their successful e-commerce platform, whose usage has grown exponentially over the last 18 months.

Advice on Platform Rebuild
Laravel is not Enterprise Ready

My first piece of advice was that 90% of platform rebuilds and re-architectures fail, especially since there are always unseen constraints in the new tooling that slow down the process. My approach is always to advise squeezing as much as you can out of your existing framework, language and platform before starting to look elsewhere.

While this post is Laravel-specific, the ideas are applicable to any framework, programming language or even mix of technologies.

As a Laravel developer who is passionate about architecture and solving scaling challenges, I use this series of questions to drive the decision making:

  1. Are you on the latest PHP 8.1 and Laravel 9?
    • Upgrading to the latest versions of PHP and Laravel pulls in the latest performance improvements in the language stack
    • This also applies to the web server (Apache/Nginx)
    • Is your database on the latest applicable version?
    • Server hardware
      • CPU – the latest generation of processors for the configuration
      • OS patches – remove unused and unneeded services
      • SSD disks for IO bound performance
      • RAM – the more the better
    • Network configuration – eliminate unnecessary round trips from the application to the database (see below too)
  2. Are your database and application well tuned? Think indexes, reducing the size of requests and cutting the number of queries (see the eager-loading sketch after this list)
    • Application tuning is an art that can be taught – for a database-heavy application like this case, the Eloquent Performance Patterns course by Jonathan Reinink is a key resource (https://eloquent-course.reinink.ca/)
    • This also involves removing unused plugins and libraries, and leveraging the framework best practices – Laravel Beyond CRUD (https://spatie.be/products/laravel-beyond-crud) and Front Line PHP (https://spatie.be/products/front-line-php) from the team at Spatie are excellent primers
    • The memory usage per request can easily be shown by the Laravel Debug Bar (https://github.com/barryvdh/laravel-debugbar) during development
    • Checking the size of requests and responses may improve performance – send as little data as required for the operation, and paginate in the database, not the application
    • Add indexes to improve table joins, redesign tables to match current needs, and separate data collection from reporting (denormalization may be needed here)
    • Use specific API requests for cases where the generic default CRUD ones do not perform well
    • Is all that JavaScript necessary in the application pages?
    • Is all that CSS necessary in the application pages?
    • Is the database tuned for the hardware that it is running on?
  3. Are you using framework tools like queues to offload processing into asynchronous operations?
    • Users may experience performance issues, especially in e-commerce setups, when actions that could be asynchronous are added to a synchronous workflow, slowing down responses to end users. For example, sending a confirmation email (which adds 2-5 seconds to the order response) can be done afterwards so that the ordering process completes faster for the customer (see the queued-mail sketch after this list)
    • For faster throughput, extensions like Octane can provide the necessary performance boost
    • Tweaking the web server (Apache/Nginx) to handle more concurrent users
    • Load balancing across more than one webserver can provide quick wins
  4. Increasing your database resources – a bigger server, tuning MySQL
    • Computing power is cheap, so a beefier server or more RAM may do the trick, depending on whether the bottlenecks are CPU-, disk- or memory-bound
    • If a beefier server does not help, load balancing across multiple smaller servers – even separating stateful from stateless requests – could improve performance
  5. Concurrency using Laravel Vapor and Octane?
    • Vapor is a paid service that brings serverless to Laravel applications
    • Octane increases the concurrency of request handling, though application changes may be needed to cater for the constraints Octane places on shared state (see the sketch after this list)
  6. Are you profiling your application with tools like Blackfire or Sentry to find performance bottlenecks?
    • The performance improvement approach is measure, find the bottleneck, tweak to improve, then rinse and repeat
    • Are there errors or failures that are causing the application to slow down?
  7. Have you refactored and cleaned up your data model to match the current reality?
    • As applications evolve there is a need to remove old code and data columns to suit the new reality (see the migration sketch after this list)
    • Refactoring code to match reality removes any unnecessary baggage that is carried along
  8. Is your architecture the simplest that it can be? See https://future.a16z.com/software-development-building-for-99-developers/ – your organization may not need the complexity of a Fortune 500 or FAANG company
  9. There are hidden gems which can also be leveraged to improve your architecture & unearth performance issues
    • Test-Driven Development – even if tests are written after the code, unit test complex algorithms while carrying out end-to-end workflows of critical user paths
    • Design for failure – especially for external services
    • CI/CD – automate deployments to get new features, bug fixes and enhancements into production as fast as possible
    • Set up staging sandboxes to test out ideas and tweaks
    • Monitor your production application service health – Laravel Health (https://spatie.be/docs/laravel-health/v1/introduction) provides an excellent starting point (see the health-check sketch below)
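
To make question 2 concrete, here is a minimal sketch of two common Eloquent fixes: eager loading to avoid N+1 queries, and paginating in the database rather than in the application. The Order and Customer models are hypothetical stand-ins for your own domain.

```php
// Hypothetical Order/Customer models used for illustration only.

// N+1 problem: one query for the orders, plus one extra query
// per order to fetch its customer.
$orders = Order::all();
foreach ($orders as $order) {
    echo $order->customer->name;
}

// Fix: eager load the relationship so only two queries run in total.
$orders = Order::with('customer')->get();

// Paginate in the database (LIMIT/OFFSET), not in the application:
// this fetches only 25 rows instead of the whole table.
$orders = Order::with('customer')->latest()->paginate(25);
```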
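
For question 3, this is a sketch of moving the confirmation email out of the synchronous checkout path. It assumes a hypothetical OrderConfirmation mailable and a configured queue driver with a running worker.

```php
use Illuminate\Support\Facades\Mail;

// Synchronous: the customer waits the extra 2-5 seconds the mail
// server takes before the order response is returned.
Mail::to($order->customer->email)->send(new OrderConfirmation($order));

// Asynchronous: push the email onto the queue and return the order
// response immediately; a queue worker sends the email afterwards.
Mail::to($order->customer->email)->queue(new OrderConfirmation($order));
```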
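
On question 5, the main constraint Octane places on an application is that workers stay in memory between requests, so anything stashed in static or singleton state survives from one request to the next. A hypothetical controller showing the pitfall:

```php
class ReportController
{
    // Pitfall under Octane: this static array outlives the request
    // because the worker process is long-lived, so it grows without
    // bound and leaks data between users.
    private static array $cache = [];

    public function show(string $key)
    {
        // expensive_report() is a stand-in for any costly computation
        return self::$cache[$key] ??= expensive_report($key);
    }
}
```

Under PHP-FPM this is harmless because every request boots a fresh process; under Octane it needs to be replaced with an explicit cache that has an expiry.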
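
For question 7, a sketch of retiring a column that no longer matches reality, using a Laravel 9 anonymous-class migration; the orders table and legacy_status column are hypothetical:

```php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    // Drop a column the application no longer reads or writes,
    // so the schema matches the current domain model.
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropColumn('legacy_status');
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->string('legacy_status')->nullable();
        });
    }
};
```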
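
And for the health monitoring gem in question 9, a sketch of registering checks with Laravel Health, based on the package's documented v1 API (the thresholds are illustrative):

```php
use Spatie\Health\Facades\Health;
use Spatie\Health\Checks\Checks\DatabaseCheck;
use Spatie\Health\Checks\Checks\UsedDiskSpaceCheck;

// Typically registered in a service provider.
Health::checks([
    DatabaseCheck::new(),
    UsedDiskSpaceCheck::new()
        ->warnWhenUsedSpaceIsAbovePercentage(70)
        ->failWhenUsedSpaceIsAbovePercentage(90),
]);
```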

When all else fails and a platform rebuild is necessary, the Strangler Fig Pattern is a great way forward – replace different parts of the application as needed with newer architectural pieces. See Legacy Application Strangulation: Case Studies (https://paulhammant.com/2013/07/14/legacy-application-strangulation-case-studies/) and Strangler Fig Application (https://martinfowler.com/bliki/StranglerFigApplication.html)

What are your thoughts and suggestions? What has worked for you and what pitfalls did you find? Any additional advice?

My Code Review Workflow

My role involves reviewing code written by different members of my team, which is an important process in the software delivery lifecycle. In certain cases, code reviews turn into gatekeeping, which does not deliver the intended value of the process.

The reasons for code reviews

  1. Get a shared understanding of the code submitted – an opportunity to share, learn and collaborate
  2. Find opportunities to improve the code and what it is doing – refactoring
  3. Find edge cases that may not have been covered
  4. Verify that the code does what is expected

I thought I would share my PR review approach, and I hope it may help others here.

  1. Read through the code changes across the files in the GitHub/GitLab or version control UI – this helps me get a sense of files that should not be there, too many files, too few files etc.
  2. Pull the PR locally onto my machine – JetBrains IDEs and VS Code have pull request views and functionality if you are not a CLI guru like yours truly
  3. Run the code and test the features as documented, covering both the happy path and some random abrupt paths

The improvements we are going to add to our projects (still looking for ideas here)

  1. Automated code checking for formatting and linting (see the sketch below)
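
As a starting point, here is a minimal sketch of such a check using PHP-CS-Fixer; the file paths are assumptions, and tools like PHP_CodeSniffer or PHPStan would slot in the same way:

```php
<?php
// .php-cs-fixer.dist.php – run `vendor/bin/php-cs-fixer fix --dry-run`
// in CI to fail the build on formatting drift.
$finder = PhpCsFixer\Finder::create()
    ->in(__DIR__.'/app')
    ->in(__DIR__.'/tests');

return (new PhpCsFixer\Config())
    ->setRules([
        '@PSR12' => true, // baseline coding standard
        'array_syntax' => ['syntax' => 'short'],
    ])
    ->setFinder($finder);
```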

I tend to use Firefox Developer Edition (set to always start in private mode) for web work, to view things in a clean browser. Chrome is the new Internet Explorer 6 (the most painful growth phase of the web).

What is your code review workflow, what tips and tricks have you used to make it smoother and more streamlined?

UPDATE 1: November 4, 2021 – an excellent reference on reviewing pull requests by Chelsea Troy: https://chelseatroy.com/2019/12/18/reviewing-pull-requests/

An Opinionated Approach to OpenMRS Concept Management

One of the key strengths of the OpenMRS platform is the concept dictionary, which allows for the mapping of real-world health care data needs into concepts that provide the questions and answers.

The concept dictionary provides the ability to map real-life concepts to specializations across health care domains such as SNOMED CT (clinical healthcare terminology), LOINC (laboratory observations), ICD-10/11 (disease classifications), RxNORM (normalized names for clinical drugs) and CVX (vaccination codes), in addition to leaders in certain medical fields such as Partners In Health and AMPATH (HIV care and treatment).

However, with great power comes great responsibility: concept dictionary coding can easily get out of hand, with duplicate concepts leading to an inability to extract data for reporting and for improving the efficiency of clinical care, which are key goals of health informatics activities.

This guide is based on my personal experience working across multiple diverse implementations, in a world where Open Concept Lab (OCL) – which would alleviate most of these pains – is not yet in widespread production usage. That experience includes:

  1. Supporting the upgrade, evolution and rollout of UgandaEMR in Uganda from 350 sites in 2016 to over 1000 sites (December 2019) including implementation of 2 major revisions of national Health Management Information Systems tools
  2. Migration of 2 custom OpenMRS implementations in Uganda to align to and build on top of UgandaEMR
  3. Namibia PTracker PMTCT program
  4. Enhancements of the Reference Application

The key principles to this approach are as follows:

  1. The CIEL dictionary is the official source of concepts, and the first place to check for concepts
  2. Custom concepts must be set up in such a way that updates to CIEL or other custom modules used do not overwrite the customizations

Concept Server Setup

The setup involves using the following OpenMRS instances:

  1. CIEL dictionary server running the latest Reference Application version – contains the latest version of the CIEL dictionary and helps with data export when needed. Loading the complete concept dictionary for an implementation is however not recommended, due to slowdowns in concept lookup; for a sense of scale, sample concept counts are:
    • Reference Application 2.9.0 – 446 concepts
    • UgandaEMR – 5,500 concepts
    • CIEL dictionary – 54,000 concepts
  2. Custom Concepts Server – this contains the custom concepts plus the CIEL concepts in use, and is the single source of truth for the project concepts. It needs to run modules similar to what the implementation is running. I would recommend the following additional steps (see the sketch after this list):
    • Set the auto-increment value of the concept table to 5,000,000. This ensures that concepts created locally will never be overwritten by CIEL concepts – the last CIEL numeric ID as of April 2020 is 165900, with room left for retired concepts etc.
    • Set the auto-increment values of the rest of the concept* tables to 10,000,000 (there are usually several rows per concept, so these tables grow faster)
    • Create a custom mapping for your project or implementation, and use that to reference the custom and CIEL concepts that you use in your forms and reports. This adds a layer of redirection and consistency for access
  3. Implementation Development, Staging and Demo Servers – as needed
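
To make the auto-increment steps concrete, here is a minimal sketch using PHP/PDO; the connection details are placeholders, and the list of concept* tables is illustrative rather than exhaustive – any MySQL client can run the same ALTER TABLE statements:

```php
<?php
// Bump auto-increment counters on a fresh Custom Concepts Server
// database so locally created concepts never collide with CIEL IDs.
$pdo = new PDO('mysql:host=localhost;dbname=openmrs', 'openmrs', 'secret');

// Concept IDs start at 5,000,000 – far above the last CIEL ID (~165900)
$pdo->exec('ALTER TABLE concept AUTO_INCREMENT = 5000000');

// The satellite concept* tables start at 10,000,000 since they hold
// several rows per concept (names, descriptions, answers, ...)
foreach (['concept_name', 'concept_description', 'concept_answer', 'concept_set'] as $table) {
    $pdo->exec("ALTER TABLE `$table` AUTO_INCREMENT = 10000000");
}
```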

Moving Concepts Across Servers

There are multiple options for moving concepts from the CIEL server to the Custom Concepts Server, and finally to the implementation servers:

  1. Metadata Sharing and Metadata Deploy modules – used to build metadata packages (zip files)
    • Packages can be manually downloaded from the source servers and uploaded where needed, which faces the challenges of any manual process
    • Alternatively, the source servers can provide a dedicated URL that clients subscribe to for updates as the deploy packages are refreshed, following the pub-sub model
    • Notes: the creation of the metadata packages is manual, and the packages are zip files which are difficult to debug
  2. Download the concept data into CSV or DBUnit-compliant XML – the concept files are then loaded into the implementation using either the Initializer or Data Exchange modules
    • Notes: this requires developers to extract the concept data from the different tables without making mistakes, and is still a manual and error-prone process
  3. Open Concept Lab (OCL) – allows the creation of custom dictionaries and extraction of data through an online web interface
    • Notes: this tool is not yet production ready, but would provide the automation that solves all of these issues

Data Export Tools

The following additional tools can be leveraged for this purpose

  1. DbUnit XML data export plugin for DataGrip – https://ssmusoke.com/2017/04/17/techtip-dbunit-export-from-jetbrains-datagrip/

Perspective: User Requirements for Technology Projects

I was asked by The Medical Concierge Group (TMCG), a digital eHealth service provider, to talk about the handling of user requirements and how to link them to implementation within technology projects.

The key principles are responding to change (agile), continuous learning, capturing business/customer outcomes, and improving communication across different departments and with external stakeholders.

Agile Software Development for Ugandan Context 2019 Edition

Excited to share my thoughts and experiences on agile software delivery for use within Uganda at the Google DevFest in Kampala on October 26, 2019.