Realtime Analytics: Just how many CPUs would you need if your Market Data Service is feeding thousands of ticks per second?

The Problem Statement

OK, here’s the problem statement. You are building the application infrastructure for a group of 100 traders. The process starts with the market data feed. A PriceServer stands between your corporate network and external market data vendors (Bloomberg, Reuters, Morningstar, Xignite, etc.), pumping in price ticks at a rate of 3k per second. CalcServers sitting in the back pick up the ticks, run some calculations, then publish the results to trading/risk-management screens.

The question is:

Just how many CPUs would you need in your server farm if your Market Data Service is feeding thousands of ticks per second?



Different Loading Conditions - Different Tools

The problem we have at hand falls under CASE 3 or CASE 4, where update frequency is high and the permitted distribution latency is low, whether the calculations are short or lengthy.

Consider this hypothetical architecture. No scheduler or load balancer (such as those found in tools like BMC Control-M or Autosys), no caching or persistence support, and minimal logging. The analogy is Space Shuttle construction: eliminate any components or weight you don’t need.

PriceServer, CalcServer and SimpleTradingScreen all feed directly off RabbitMQ. The intention is to minimize latency/overhead along the execution path from PriceServer, to CalcServer, and finally to SimpleTradingScreen.

CalcFarm Architecture

How many processors do we need in our CalcFarm? The maths is actually not that complicated. PriceServer feeds the message bus (RabbitMQ) at a rate of 3k ticks per second. That’s 1/3 ms per update. If the CalcServer farm fails to process a single tick (dequeue from RabbitMQ, run the calc, then publish the result back to RabbitMQ) in 1/3 ms, it lags behind PriceServer. If the CalcServer farm takes 1 ms to process one tick, it’s three times slower than PriceServer. Essentially, every additional 1 ms spent processing a published tick requires three additional CPUs in the CalcServer farm for it to keep pace with PriceServer’s tick rate of 3k ticks/sec.

Imagine your calculations take 100 ms on average to execute. That’s equivalent to 300 additional CPUs, or 37.5 servers with 8 processors each.
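The arithmetic above can be sketched in a few lines. (The repo’s code is C#; this illustrative sketch is Python, and the function name is ours, not from the source.)

```python
def cpus_needed(ticks_per_sec, calc_time_ms):
    """CPUs required to keep pace: each tick consumes calc_time_ms of CPU
    time, so the farm must supply ticks_per_sec * calc_time_ms milliseconds
    of CPU time every second."""
    return ticks_per_sec * calc_time_ms / 1000.0

cpus = cpus_needed(3000, 100)  # 3k ticks/sec at 100 ms per calc -> 300.0 CPUs
servers = cpus / 8             # -> 37.5 eight-processor servers
```

The same function recovers the 1 ms case from earlier: `cpus_needed(3000, 1)` gives 3 CPUs per extra millisecond of calc time.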

What you are doing here is essentially two things:

(a) balancing the data velocity supported by each component along the overall execution chain; and

(b) minimizing latency along the execution path between two points: PriceServer and SimpleTradingScreen.


Source Code? 

Want to experiment with this hands-on? We’ve uploaded the source code to Git. Everything is built in .NET 4.5/C#. Note that you need to install RabbitMQ first.


One aspect worthy of special attention is that serialization/deserialization is a high-traffic area. It happens every time you enqueue to or dequeue from RabbitMQ. The general consensus is that System.Runtime.Serialization.DataContractSerializer and System.Xml.Serialization.XmlSerializer are relatively slow. For demo purposes, we’ve chosen NetSerializer, a free, open-source serializer.
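To see why serializer choice matters at these rates, here is a language-agnostic illustration (Python’s stdlib serializers, not the .NET ones discussed above; the payload fields are made up):

```python
import json
import pickle
import timeit

# A representative tick payload (field names are illustrative).
msg = {"symbol": "IBM", "bid": 188.24, "ask": 188.26, "ts": 1700000000.0}

def roundtrip_us(dumps, loads, n=10000):
    """Average serialize + deserialize cost per message, in microseconds."""
    return timeit.timeit(lambda: loads(dumps(msg)), number=n) / n * 1e6

# At 3k ticks/sec the entire per-tick budget is ~333 us, so even tens of
# microseconds of serializer overhead per hop is a meaningful fraction.
json_cost = roundtrip_us(json.dumps, json.loads)
pickle_cost = roundtrip_us(pickle.dumps, pickle.loads)
```

The exact numbers depend on the machine and payload; the point is that serializer cost is paid on every enqueue and dequeue, so it multiplies directly against the tick rate.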



Starting the CalcServer Farm

STEP 1. RabbitMQ

Download and install RabbitMQ, and make sure the service is running.

STEP 2. Start CalcServer

From command prompt, navigate to \CalcServer\bin\Release\

Simply type CalcServer. There are only three settings in app.config:


    <add key="QueueUrl" value="localhost"/>  – Points at RabbitMQ

    <add key="DetailLog" value="false"/> – For debugging only. If DetailLog=true, every price tick received is logged and printed to the screen. Leave it set to false; otherwise it will slow down CalcServer significantly.

    <add key="MaxThreadPoolSize" value="8"/> – Maximum size of the thread pool. Note that threads and concurrent instances of CalcServer on the same physical machine share the same physical resources, so configure this wisely.


STEP 3. Start PriceServer

From command prompt, navigate to \PriceServer\bin\Release\

Simply type PriceServer. There are only four settings in app.config:


    <add key="QueueUrl" value="localhost"/>

    <add key="DetailLog" value="false"/>

    <add key="PerSecPublishThrottle" value="3000"/>     – Max publish rate

    <add key="MaxCountPublishesCulmulative" value="0"/>  – Max cumulative # of publishes before stopping


STEP 4. Start SimpleTradingScreen

From command prompt, navigate to \SimpleTradingScreen\bin\Release\

Simply type SimpleTradingScreen. There are only two settings in app.config; both have been covered above.


    <add key="QueueUrl" value="localhost"/>

    <add key="DetailLog" value="false"/>



How do we know that it works?

  • Tested on a 2.6 GHz single-processor Intel machine with 4 GB RAM (a very low-end dev test machine, so if it can feed through 3k/sec, so can yours).
  • CalcServer dumps performance statistics into “CalcServerStatistics”.


  1. CountPxUpdateRecv is the number of ticks CalcServer has picked up from RabbitMQ.
  2. CountCalcCompleted is the number of calculations (dequeue + calc + publish result) that have completed.
  3. Gap = CountPxUpdateRecv – CountCalcCompleted

If CalcServer is able to keep pace with PriceServer, what you’d see is the Gap stabilize.

The following result was produced with PerSecPublishThrottle = 3000 (i.e. 3,000 ticks per second).

Try setting this to, for example, 10000. With a single instance of CalcServer running, you will start to see the Gap widening (i.e. the Gap keeps increasing most of the time).
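That behaviour follows directly from the arithmetic earlier in the post. A toy steady-state model (Python for illustration; not the repo’s code) shows both regimes:

```python
def gap_after(seconds, tick_rate, calc_time_sec, cpus):
    """Toy model of the Gap metric: arrivals at tick_rate per second versus
    a farm that completes cpus / calc_time_sec calculations per second.
    Zero means the farm keeps pace; a positive value means the Gap widens
    linearly with time."""
    service_rate = cpus / calc_time_sec
    return max(0.0, (tick_rate - service_rate) * seconds)

# 3000 ticks/sec, 100 ms per calc, 300 CPUs: the farm keeps pace.
keeps_pace = gap_after(60, 3000, 0.1, 300)    # 0.0
# Throttle raised to 10000 with a single 8-core box: the Gap keeps growing.
falls_behind = gap_after(60, 10000, 0.1, 8)   # positive, and doubles if you wait twice as long
```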



We’ve also written a simple NUnit test case.

It simply kicks off CalcServer and PriceServer, lets them run for a few minutes, then examines the CalcServer performance dump file \CalcServer\bin\Release\CalcServerStatistics.log

and asserts that “Average Gap” < 10k.

This is the same as manually examining CalcServerStatistics.log and confirming that the Gap isn’t monotonically increasing, which would indicate that CalcServer can’t keep up with PriceServer.
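That manual check is easy to automate. A minimal sketch (Python; the sample Gap values are hypothetical, and the real log format may differ):

```python
def keeps_up(gaps):
    """True if the Gap samples are NOT strictly increasing throughout,
    i.e. the Gap falls back (or at least plateaus) at some point."""
    return any(b <= a for a, b in zip(gaps, gaps[1:]))

healthy = [120, 95, 140, 110, 130]       # fluctuates around a plateau
falling_behind = [100, 400, 900, 1600]   # grows on every single sample
```

In practice you would parse the Gap column out of CalcServerStatistics.log and feed it to a check like this.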



In addition to the Gap, which is a measurement of throughput disparity, latency can also be measured, simply by adding timestamps to messages:

latency = timestamp (SimpleTradingScreen recv) – timestamp (PriceServer publish)
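A minimal sketch of the timestamp approach (Python; the field names are ours, not from the repo). Note this measures one-way latency, so it is only meaningful when publisher and subscriber run on the same machine or have synchronized clocks:

```python
import time

# PriceServer side: stamp the tick at publish time.
tick = {"symbol": "IBM", "px": 188.25, "pub_ts": time.time()}

# SimpleTradingScreen side: compute latency on receipt.
def latency_ms(msg):
    return (time.time() - msg["pub_ts"]) * 1000.0
```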

Happy Coding!

