While most clients come to us to build apps or digital solutions, some arrive with existing products built by third-party developers. Below, we outline why it’s important to run a code diagnostic on an existing product before making any changes, and why clients request a review of an existing code base in the first place. We’re sharing our approach based on a real case we helped a client with.
Our client case with a web platform
One client came to us with a web platform built by a third-party developer in New York. They wanted to make updates, but did not know how the previous team had built the product. The client did know that the existing version had bugs that were affecting performance. The platform consisted of three interdependent portals (for consumers, professionals, and admins), so a change to one portal could break the whole platform. Oursky performed a standalone code diagnostic service with the goal of giving the client enough information to make informed decisions. The code review for this project took three days.
Why a code review and report is a good first deliverable for new partnerships
It is irresponsible for us as vendors to guess a time and budget for updates or new features when we don’t know what we’re looking at. The estimate will be inaccurate because we don’t know the quality of the code base or its existing bugs. A code assessment tells us what we’re working with, which lets us give realistic estimates for new features. It is like properly inspecting a house and its foundation before committing to any renovations: the findings can range from a light fixture that needs replacing to leaking pipes behind darkened wall patches.
When working with a new client, our first concrete deliverable is a list of bugs and issues. A client can take this report and get a better estimate from any developer or vendor, not just us. In this case, after seeing our report, the client also commissioned Oursky to do development work for both bug fixes and feature additions.
A comprehensive code diagnostic has the two separate components listed below. Depending on the size of the product and the quality of its documentation, the review can take anywhere from a few days to two months for huge projects. We break assessments down into modules to make project estimations clearer.
- Codebase review: assesses whether the code is easy enough to understand and in good enough condition to maintain or build on
- Exploratory test: finds the bugs in the existing product (major, minor, trivial) as a snapshot of its current state, so we understand what needs to be fixed and which bugs existed before any new development work
To begin the review, Oursky needs access to the code, which could be zip files or a full repository on GitHub. The best-case scenario is that we receive a full repository that:
- is complete (and matches the deployed version)
- has clear commit messages
- is easy to set up the environment for
In this case, we received a zip package with the modules for the three web portals and a backend API server. With no commit messages to help us understand the changes made over time, our development team had to work out the logic and what each component did.
The languages used were:
- Frontend: TypeScript
- Backend: Erlang, with some Python scripts
First, we confirmed that we were able to set up a local development environment for both the frontend and backend. As we did not receive instructions for setting up the environment, we also wrote a README for future developers to build and deploy the application. The README is a standard documentation best practice that development teams should include in a handover.
During the assessment, we look at whether the code is easy to trace and read, and whether the application and its modules are organized and structured in a way that supports further development. We then separated our code review into three parts: the frontend, the backend, and the communication between them.
If we do not have direct access to the deployed version, one of the first things to verify is that the deployed (live) version and our local (staging) build are the same. In this client’s case, the frontends differed, though the differences were minor.
When assessing the code, we look at whether the application performs the desired function and also consider whether it has an acceptable UX and UI. For example, pagination may be absent from the code, which is not an error, but we may add a remark if we think it adversely affects the UI. Listing all records on a single page creates a longer wait time because the backend must process a large amount of data to feed to the frontend.
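To illustrate the kind of remark we might make, here is a minimal sketch of offset-based pagination. The `paginate` helper and `Page` shape are hypothetical names for illustration, not code from the client’s platform.

```typescript
// Hypothetical sketch: return one page of records at a time instead of
// sending the whole record set to the frontend in a single response.
interface Page<T> {
  items: T[];
  total: number;
  page: number;
  pageSize: number;
}

function paginate<T>(records: T[], page: number, pageSize: number): Page<T> {
  const start = (page - 1) * pageSize; // pages are 1-indexed
  return {
    items: records.slice(start, start + pageSize),
    total: records.length,
    page,
    pageSize,
  };
}

// 95 dummy records, 20 per page: page 1 has 20 items, page 5 has 15.
const sampleRecords = Array.from({ length: 95 }, (_, i) => ({ id: i + 1 }));
const firstPage = paginate(sampleRecords, 1, 20);
console.log(firstPage.items.length, firstPage.total); // 20 95
```

In a real backend the slicing would happen in the database query (e.g. `LIMIT`/`OFFSET`), so only one page of data is ever loaded and transferred.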
Another factor is how the system architecture affects the frontend responsiveness. For example, having the frontend perform functions like sorting may be acceptable if there are not many records, but is not optimal for scaling.
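A common scalable alternative is to push sorting (and paging) down to the backend by encoding the criteria in the request, so the database does the work. This is a hypothetical sketch; the endpoint path and parameter names are made up for illustration.

```typescript
// Hypothetical sketch: build a list URL that asks the backend to sort,
// instead of fetching everything and sorting in the browser.
function buildListUrl(
  base: string,
  sortField: string,
  order: "asc" | "desc",
  page: number
): string {
  const params = new URLSearchParams({
    sort: sortField,
    order,
    page: String(page),
  });
  return `${base}?${params.toString()}`;
}

console.log(buildListUrl("/api/professionals", "name", "asc", 2));
// → /api/professionals?sort=name&order=asc&page=2
```

Client-side sorting of an already-fetched page is still fine; the point is that the full data set should never need to reach the browser just to be sorted.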
Communication between Frontend and Backend
Communication between frontend and backend depends on the application. If the backend only serves the product’s own frontend, then as long as the two communicate well and their interfaces are consistent across the projects, there isn’t a problem. If the backend needs to expose an API to third parties, a RESTful or GraphQL design with a well-defined interface is needed.
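One lightweight way to keep the interface well defined when both sides use TypeScript is to share the request/response types as a contract. The shapes below (`ApiResponse`, `Professional`) are hypothetical examples, not the client’s actual API.

```typescript
// Hypothetical sketch: a shared response envelope makes the interface
// between frontend and backend explicit and checkable at compile time.
interface ApiError {
  code: string;
  message: string;
}

// Discriminated union: the `ok` flag tells TypeScript which shape applies.
type ApiResponse<T> =
  | { ok: true; data: T }
  | { ok: false; error: ApiError };

interface Professional {
  id: string;
  name: string;
}

function parseProfessional(res: ApiResponse<Professional>): Professional | null {
  return res.ok ? res.data : null;
}
```

With a contract like this, an inconsistent field name or missing property shows up as a type error in the frontend build rather than as a runtime bug.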
In our reports, we note the communication protocol the specific application uses, and generally document an approach only when it is not standard.
We make a note of code that is in good condition and best practices that helped us, such as debug logs that ease code tracing and debugging. We also note any documentation that explains the various components, such as the controllers, websocket handlers, and models. Finally, we flag implementations that have implications for future development: for example, how third-party API integrations are handled, what is used to store user sessions, and whether test cases are written.
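As a concrete example of the kind of tracing aid we like to find, here is a minimal sketch of a tagged debug logger. The `makeLogger` helper and the component name are hypothetical, shown only to illustrate the practice.

```typescript
// Hypothetical sketch: a logger factory that stamps every entry with a
// timestamp and the component it came from, so log output can be traced
// back to the code that produced it.
function makeLogger(component: string): (message: string) => string {
  return (message: string): string => {
    const line = `[${new Date().toISOString()}] [${component}] ${message}`;
    console.log(line);
    return line;
  };
}

const log = makeLogger("websocket-handler");
log("connection opened");
```

Even this much structure makes it far easier for a reviewer (or a future developer) to follow what each component did and when.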
In addition to doing a code review, we also do an exploratory test to see what bugs already exist from a user’s perspective.
The objective of an exploratory test is to find as many bugs and usability issues as possible within a specified period of time. Oursky manages a team of testers who perform exploratory tests across the specified platforms. The reported bugs are clearly documented with screencaps and descriptions, then manually verified and “accepted” by our QA team. Our reports are organized according to the pages in each module we receive. The issues we report are 1) bugs and 2) UX problems, classified as major, minor, or trivial. For this app developed in New York, our testers reported over 100 issues within 24 hours. After two days of manual verification by our QA team, approximately 70 issues were accepted.
The PM then takes the list of defects to the client to discuss major issues that must be addressed before development can begin on new features.
Moving forward after a test
Each code diagnostic case will have different issues to fix before development. For example, some applications may have serious security vulnerabilities, such as passwords stored in plain text that admins can view directly. Others may not have obvious frontend bugs yet, but may require an entirely new database architecture to keep scaling efficiently, such as when an obvious database hotspot will exponentially slow down data insertion or queries. We also recommend best practices that reduce the chance of future errors, such as database migration files so that database changes are applied automatically for the entire development team.
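For the plain-text password example above, the standard fix is to store only a salted hash. This is a minimal sketch using Node’s built-in scrypt; dedicated libraries such as bcrypt or argon2 are common alternatives, and the function names here are our own illustration, not the client’s code.

```typescript
// Hypothetical sketch: never store the password itself. Store a per-user
// random salt plus a slow hash, and compare in constant time on login.
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`; // store this string, not the password
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64).toString("hex");
  // timingSafeEqual avoids leaking information through comparison timing.
  return timingSafeEqual(Buffer.from(hash, "hex"), Buffer.from(candidate, "hex"));
}
```

With this scheme, even an admin with full database access sees only salts and hashes, never the original passwords.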
Oursky then agrees with the client on a list of existing bugs that must be fixed, along with necessary changes that are not bugs but are crucial for synced deployment with continuous integration, for the product’s scalability, and for the security of user data. Confirming this on both sides prevents future delays in the development schedule due to unknown major issues.
A code review may seem like an upfront cost with a new vendor. However, this small, fixed time investment helps estimate and cap the time and cost of working on an existing code base that did not have a proper handover. Oursky believes in working alongside our clients as fellow product owners. After our diagnostic report, a client is better equipped to own their product and can more confidently work with any development team to improve it.
If you found this piece helpful, follow the Oursky Medium publication for more startup/entrepreneurship/project management/app dev/design hacks! 👏
If you have an app idea or product you’d like a code review for, get in touch!