Throughput and Threads

HL7 APIs

In most cases a trust integration engine (TIE) will manage the daily flow of HL7 messages to PKB without the need for further tuning; verifying this will be part of your integration testing in sandbox.

The exception to this is an integration that includes a one-time large bulk load of data via HL7, for example:

  • whole PAS demographics to load patient records.

  • transfer of historical data held against patient records.

A one-time load of HL7 messages should be controlled so that it does not overwhelm the PKB HL7 API endpoint. In this scenario the number of concurrent HL7 messages must be limited to the figures below (a minimal sender sketch follows the list):

  • 1 on sandbox

  • 1 on production. An exception of up to 4 concurrent messages may be granted by agreement with PKB devops, and only after throughput estimation has been conducted on sandbox (see below).
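
To illustrate the default limit of 1, a minimal single-threaded sender sketch is shown below. The host, port and message source are placeholders, and in practice the TIE will usually handle MLLP framing and delivery; the point of the sketch is simply that the next message is only sent once the previous ACK has been received, so concurrency never exceeds 1.

    # Minimal sketch of a single-threaded bulk HL7 sender over MLLP (illustrative only).
    import socket

    MLLP_START = b"\x0b"    # MLLP start-of-block
    MLLP_END = b"\x1c\x0d"  # MLLP end-of-block + carriage return

    def send_serially(host, port, messages):
        """Send HL7 v2 messages one at a time, waiting for each ACK
        before sending the next, so concurrency never exceeds 1."""
        with socket.create_connection((host, port)) as sock:
            for msg in messages:
                sock.sendall(MLLP_START + msg.encode("utf-8") + MLLP_END)
                ack = b""
                while MLLP_END not in ack:  # read until the ACK frame is complete
                    chunk = sock.recv(4096)
                    if not chunk:
                        raise ConnectionError("connection closed before ACK")
                    ack += chunk
                # Optionally parse MSA-1 here and stop or retry on AE/AR.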

Increasing the number of concurrent messages above these figures is unlikely to improve throughput (the number of messages processed per second), but it does risk saturating the PKB HL7 endpoint and eventually causing requests to be dropped.

Bulk upload throughput estimation

The time it takes to process a message depends heavily on its content, so the best way to estimate throughput is to measure against a realistic test feed on the sandbox environment. If up to 4 concurrent threads have been agreed for production, scale the single-threaded measurement by that number to obtain a production estimate.

The throughput should be re-measured when you start sending messages on production in order to check the original estimate, as there will naturally be some variation because the production database holds a significantly larger quantity of data.
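
As a rough illustration of the arithmetic, the sketch below times a sample batch on sandbox and scales the result by the agreed thread count. The send_batch function, sample size and total backlog are placeholders; any real estimate should come from your own representative test feed.

    # Minimal sketch of a bulk-load duration estimate from a sandbox measurement.
    import time

    def estimate_load_time(send_batch, sample_messages, total_messages, agreed_threads=1):
        """Measure single-threaded throughput on sandbox and scale the estimate."""
        start = time.monotonic()
        send_batch(sample_messages)                     # e.g. send_serially from above
        elapsed = time.monotonic() - start
        msgs_per_sec = len(sample_messages) / elapsed   # sandbox rate, 1 thread
        estimated_rate = msgs_per_sec * agreed_threads  # optimistic production rate
        hours = total_messages / estimated_rate / 3600
        print(f"sandbox rate: {msgs_per_sec:.2f} msg/s, "
              f"estimated bulk load time: {hours:.1f} h")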

Proprietary REST APIs

Our guidance is that requests should be executed serially rather than concurrently in typical integrations.

A client may hope to achieve a performance improvement by making concurrent requests to our APIs, but this will not deliver any benefit, for the following reasons (a serial request sketch follows the list):

  1. PKB applies rate limiting to prevent overloading APIs with concurrent requests. Rate limiting is applied per client id.

  2. Once a request is accepted by a PKB API, PKB will internally only serve a set number of requests concurrently. Submitting a large batch of requests in one go therefore provides no performance benefit over sending the same requests serially.
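
A minimal sketch of the recommended serial pattern is shown below; the base URL, paths and bearer token are placeholders rather than real PKB endpoints. Each request completes before the next is issued, and a rate-limited response (HTTP 429) is handled by waiting and retrying the same request.

    # Minimal sketch of serial REST calls with back-off when rate limited (illustrative only).
    import time
    import requests

    def fetch_serially(base_url, paths, token):
        """Issue requests one at a time; on HTTP 429 wait and retry the same request."""
        headers = {"Authorization": f"Bearer {token}"}
        results = []
        for path in paths:
            while True:
                resp = requests.get(f"{base_url}{path}", headers=headers, timeout=30)
                if resp.status_code == 429:                    # rate limited
                    wait = int(resp.headers.get("Retry-After", "5"))
                    time.sleep(wait)
                    continue
                resp.raise_for_status()
                results.append(resp.json())
                break
        return results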

If the integration involves an application that polls for data frequently (e.g. a dashboard), then some concurrent threads may be used, but the client must apply client-side throttling to avoid making too many parallel requests to PKB. This should be discussed with PKB as an exception to the general guidance.

For example, a proposed client-side throttle would (see the polling sketch after this list):

  • limit the number of parallel patient requests to one.

  • for each patient, retrieve the different kinds of data from PKB via up to 4 concurrent queries.

  • only trigger a patient request if the $date-of-last-data-point API indicates there is new data to retrieve.
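
A minimal sketch of this polling pattern is shown below. The has_new_data check and the per-patient query functions are hypothetical stand-ins for calls to the $date-of-last-data-point operation and the relevant data APIs; the pattern itself is one patient at a time, with at most 4 concurrent queries for that patient.

    # Minimal sketch of the throttled polling pattern described above (illustrative only).
    from concurrent.futures import ThreadPoolExecutor

    def poll_patients(patients, has_new_data, queries):
        """Process one patient at a time; for each patient run up to 4
        data queries concurrently, and only if there is new data."""
        for patient_id in patients:                  # patients handled serially
            if not has_new_data(patient_id):         # $date-of-last-data-point check
                continue
            with ThreadPoolExecutor(max_workers=4) as pool:  # client-side throttle
                futures = [pool.submit(query, patient_id) for query in queries]
                results = [f.result() for f in futures]
            # hand results to the dashboard / downstream application here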

 

© Patients Know Best, Ltd. Registered in England and Wales Number: 6517382. VAT Number: GB 944 9739 67.

This API specification and design is licensed under a Creative Commons Attribution 4.0 International License.