open-source, performance

Load testing Apache Thrift

TL;DR: Use massive-attack

We are currently experimenting with integrating Apache Thrift into one of our Finatra-based APIs.

There was a bit of a learning curve involved, as most of our APIs use Akka HTTP and we had not utilised any RPC frameworks before. As part of the prototype, I created a simple Finatra API with two endpoints that returned static responses: one over HTTP, and the other using Thrift. This is quite simple to do once you figure out which plugins to use to generate the code from the Thrift Interface Definition Language (IDL).

It took probably a day to set this up and deploy it to AWS – but then came the realisation that while Thrift might simplify a lot of things, load testing your endpoints is not one of them.

Because of how Thrift works, you need to create a client to get at your data; this is basically just a method call. For example, if you have created a MyService Thrift service in API #1, you would create a client in API #2 (which needs the data provided by MyService) like so:

import com.twitter.finagle.Thrift

lazy val thriftHost = "localhost:9911"

lazy val thriftClient: MyService.MethodPerEndpoint =
  Thrift.client.build[MyService.MethodPerEndpoint](thriftHost)

API #2 can then surface the Thrift data from API #1 as JSON (or any other format) through an HTTP endpoint.
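For example, here is a minimal sketch of what that could look like in a Finatra controller – the class name and route are my own illustration, with the Thrift client from above passed in:

import com.twitter.finagle.http.Request
import com.twitter.finatra.http.Controller

class ProgrammesController(thriftClient: MyService.MethodPerEndpoint) extends Controller {

  get("/programmes") { request: Request =>
    // Finatra serialises the Future's value to JSON for the HTTP response
    thriftClient.programmes()
  }
}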

I then created Gatling load test scenarios for API #2 which load tested two endpoints: the first one powered by API #1's HTTP endpoint, and the second one powered by API #1's Thrift endpoint.
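Roughly, those scenarios looked something like the sketch below (written against current Gatling syntax – the base URL, route paths, and injection profile are assumptions rather than the original simulation):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class Api2Simulation extends Simulation {

  val httpProtocol = http.baseUrl("http://localhost:8080")

  // Endpoint of API #2 backed by API #1's HTTP endpoint
  val viaHttp = scenario("via-http")
    .exec(http("programmes-via-http").get("/programmes-http"))

  // Endpoint of API #2 backed by API #1's Thrift endpoint
  val viaThrift = scenario("via-thrift")
    .exec(http("programmes-via-thrift").get("/programmes-thrift"))

  setUp(
    viaHttp.inject(constantUsersPerSec(50).during(2.minutes)),
    viaThrift.inject(constantUsersPerSec(50).during(2.minutes))
  ).protocols(httpProtocol)
}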

The load tests ran fine, and the Thrift-powered endpoint was faster, as expected. But the problem with load testing this way is that you are really load testing the HTTP endpoints of API #2, not the Thrift endpoint of API #1, and it didn't give me a clear idea of how many requests the Thrift endpoint could handle when accessed from multiple APIs.

The next logical step was to look for load/performance testing tools that were capable of testing Thrift endpoints directly. This proved much more difficult than I expected; strictly speaking there are tools that can do this, but there were three big problems with them:

  1. They were quite complicated to use.
  2. In some cases they had not been updated in years.
  3. Most importantly, they did not give me the information I wanted from a load testing tool, such as how many requests per second (RPS) the Thrift endpoint could handle, and how quickly it responded.

In the process I experimented with Pinterest's Bender and Twitter's Iago, and even tried writing my own JMeter Thrift plugin by following an obscure tweet down the rabbit hole.

Eventually, all of these failed attempts made me think that load testing a Thrift endpoint cannot, and definitely should not, be this difficult. So I started writing my own simple load testing tool and called it simple-load-test. I later renamed it to massive-attack, a name brilliantly suggested by my colleague Michael. A bit of background: in my team we name our APIs after bands, which is incredibly confusing, but fun.

The concept behind massive-attack is quite simple: you can load test any method that returns a Scala (or Twitter) Future, and it will report the response times for that method after calling it the specified number of times or for the specified duration. You can do this as part of your normal unit/integration tests – I might change this later to implement the SBT test interface, but it works perfectly fine when added to Specs2 or ScalaTest scenarios.

For example, to load test a Thrift endpoint, you add the following to your test spec:

"Thrift endpoint" should {
  "provide average response times of less than 40ms" in {

    lazy val thriftHost = "localhost:9911"

    lazy val thriftClient: MyService.MethodPerEndpoint = 
      Thrift.client.build[MyService.MethodPerEndpoint](thriftHost)

    val testProperties = MethodPerformanceProps(
      invocations = 10000,
      duration = 300
    )

    val methodPerformance = new MethodPerformance(testProperties)

    val testResultF: Future[MethodPerformanceResult] =
      methodPerformance.measure(() => thriftClient.programmes())

    val testResult = Await.result(testResultF, futureSupportTimeout)
  
    testResult.averageResponseTime must beLessThanOrEqualTo(40)
  }
}

This will call your Thrift endpoint “programmes” 10,000 times (or for 5 minutes, whichever comes first) and assert that the average response time is no more than 40ms.

You can make assertions based on any of the properties returned as part of the test result. At the moment, the following are supported:

  • Minimum response time (ms)
  • Maximum response time (ms)
  • 95th percentile response time (ms)
  • 99th percentile response time (ms)
  • Average response time (ms)
  • Number of invocations
  • Average request rate (RPS)
  • Minimum request rate (RPS)
  • Maximum request rate (RPS)
  • Number of spikes
  • Percentage of spikes
  • Boundary above which a response is considered a spike

As you can tell, you can test any method this way – even HTTP endpoints:

...
  val httpClient: HttpClient = new HttpClient()
  val httpRequest: httpClient.RequestBuilder =
    httpClient.get("http://0.0.0.0:8080/programmes")

  val testResultF: Future[MethodPerformanceResult] =
    methodPerformance.measure(() => httpRequest.execute())
...

As part of setting the test properties, you can also specify how many threads you want to call your function on – this is useful for HTTP/normal methods, but not so much for Thrift endpoints, as the Thrift client runs on a single thread and calling it from multiple threads causes problems.
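As a rough illustration only – the field name below is an assumption on my part, so check the massive-attack README for the actual property – that would look something like this:

// "threads" is an assumed field name for the thread-count property,
// alongside the invocations/duration properties shown earlier.
val testProperties = MethodPerformanceProps(
  invocations = 10000,
  duration = 300,
  threads = 4
)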

This library still needs a lot of work and fine-tuning, but the first version is now available through Maven – and more improvements will follow soon.

 

open-source, sbt-plugin

Tracing usage of your Scala library

Finding where your code is used across multiple projects in a big code base or organisation can be quite difficult, especially if you have made a change that needs to be propagated to every application that uses the updated client or library.

I have created an open-source SBT plugin that can simplify this process a bit; more details can be found on the sbt-trace page.