The Need for a Lightning-Fast Accounting Engine
--
Accounting and journal entries: critically important, yet rarely what anyone gets excited to talk about. At the very least, it's unlikely to be the first place most users' minds go when considering an all-in-one asset management technology platform. My goal is to explain why it is incredibly important, and why it needs to be incredibly fast.
Each trade will create or impact tax lots, as well as have a number of accounting impacts. It gets complex quickly: fix a trade booked on T-2, and many subsequent accounting decisions turn out to have been made on that errant data.
However, from the front office perspective, transactions and tax lots are about as much information as you need. You can aggregate transactions by security and settlement date to build settlement ladders, and you can compare current market values against the cost basis in tax lots to understand taxable gains/losses. This isn't comprehensive, obviously, but the front office doesn't get into the minutiae of amortization when making trading decisions, for example.
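To make that front-office view concrete, here is a minimal sketch of the two aggregations mentioned above: a settlement ladder built from transactions, and unrealized gains computed from tax lots. The data shapes are assumptions for illustration, not a description of any real platform's model.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Transaction:
    security_id: str
    settle_date: date
    quantity: float            # signed: buys positive, sells negative

@dataclass
class TaxLot:
    security_id: str
    quantity: float
    cost_basis: float          # total cost of the lot

def settlement_ladder(txns: list[Transaction]) -> dict[tuple[str, date], float]:
    """Aggregate transactions by (security, settlement date)."""
    ladder: dict[tuple[str, date], float] = defaultdict(float)
    for t in txns:
        ladder[(t.security_id, t.settle_date)] += t.quantity
    return dict(ladder)

def unrealized_gain(lots: list[TaxLot], prices: dict[str, float]) -> float:
    """Compare current market value against cost basis across tax lots."""
    return sum(lot.quantity * prices[lot.security_id] - lot.cost_basis
               for lot in lots)
```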
Historically this happened because front office systems need to be fast, and accounting calculations produce a lot of data. The compromise was obvious: ignore the journal entries in front office systems and get the speed-up. The trade-off was the intensive syncing that happens throughout the day, and especially at end of day.
With the advent of infinite (or practically infinite) compute resources, there is the possibility to adopt new trade-offs. First, let's consider why speed is important.
The Need For Speed
Start-up & Recovery
Front office systems, above all, need to be up, always. Not having your portfolio information on hand, in a digestible form, wreaks havoc on an organization when markets get rough. When something goes wrong, the system needs to recover immediately. Reducing the amount of data that needs to be reprocessed on failover reduces the time before business can restart.
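One common pattern for keeping recovery fast, offered as an illustration rather than a description of any particular system, is snapshot-plus-replay: persist periodic snapshots of state so that failover only replays the events logged since the last snapshot. The store and log APIs below are assumptions.

```python
def recover(snapshot_store, event_log, apply_event):
    """Rebuild state from the latest snapshot plus the event tail.

    snapshot_store.latest() and event_log.read_from() are hypothetical APIs;
    the point is that only events after last_seq are reprocessed.
    """
    state, last_seq = snapshot_store.latest()
    for event in event_log.read_from(last_seq + 1):
        state = apply_event(state, event)
    return state
```

The more frequently snapshots are taken, the shorter the tail of events to replay, and the sooner business can restart.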
Real-time portfolio positions (and P&L)
Every big trade decision is made with the current portfolio in mind. That view needs to be current as of the latest trading activity and market values. When it isn't available within custom or vendor systems, it is proxied with spreadsheets. Again, this is crucial in volatile markets. For large organizations, it is also needed to avoid trading errors. A couple of examples of where errors can creep in:
- An FX trader hedges a foreign bond position that has just been sold by the principal PM
- A treasuries trader rolls a bond position that has been sold or pledged as collateral
Finally, these updates are required to confirm tasks have been successfully committed into the system: apply a stock split; execute a trade; update market data; and so on. Portfolio managers and traders simply cannot assume these operations will complete eventually; doing so would violate their ever-present responsibility: KNOW YOUR POSITION. When push comes to shove, these are the people holding the bag.
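As a sketch of what confirming the commit can look like, here is a hypothetical position keeper whose operations return the updated position synchronously, so the caller sees the result of a trade or split immediately rather than assuming it will apply eventually. All names here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Portfolio:
    positions: dict[str, float] = field(default_factory=dict)

    def apply_trade(self, security_id: str, quantity: float) -> float:
        """Commit a trade and return the new position as confirmation."""
        self.positions[security_id] = self.positions.get(security_id, 0.0) + quantity
        return self.positions[security_id]

    def apply_split(self, security_id: str, ratio: float) -> float:
        """Apply a stock split and return the adjusted position."""
        self.positions[security_id] = self.positions.get(security_id, 0.0) * ratio
        return self.positions[security_id]

book = Portfolio()
assert book.apply_trade("XYZ", 100.0) == 100.0   # trader sees 100 immediately
assert book.apply_split("XYZ", 2.0) == 200.0     # and 200 after the 2:1 split
```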
What-if Analysis
As regulations and responsibilities have increased, so has the expectation to know the impact of a trade on a portfolio before it is executed. In some jurisdictions (e.g. Japan), trading without understanding this impact violates rules imposed by the local authorities.
- If I trade this bond will I move my portfolio(s) to the desired risk position?
- If I trade this equity will I breach any portfolio constraints?
- If I trade this future will I break my daily exchange limits?
- Which options contract satisfies my portfolio goals at the cheapest price?
- How much tax loss can I harvest?
These are just a few of the questions that need to be asked and answered during the trading day. Again, when they can't be answered by systems, they are proxied with spreadsheets.
Note that some solutions are essentially goal-seeking algorithms rather than simple input/output calculations. As such, speed is a requirement, not just a nice-to-have.
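To illustrate the basic shape of a pre-trade check (assuming positions are a simple map and constraints are predicates; both are simplifications for illustration), apply the hypothetical trade to a copy of the book and report any breaches:

```python
import copy
from typing import Callable

Positions = dict[str, float]

def what_if(positions: Positions, security_id: str, quantity: float,
            constraints: dict[str, Callable[[Positions], bool]]) -> list[str]:
    """Return the names of constraints a hypothetical trade would breach."""
    candidate = copy.deepcopy(positions)    # never mutate the live book
    candidate[security_id] = candidate.get(security_id, 0.0) + quantity
    return [name for name, passes in constraints.items() if not passes(candidate)]

# Example with a hypothetical position-limit constraint.
checks = {"max_XYZ_10k": lambda p: p.get("XYZ", 0.0) <= 10_000}
print(what_if({"XYZ": 9_500.0}, "XYZ", 1_000.0, checks))  # ['max_XYZ_10k']
```

A goal-seeking solver would call a check like this many times in a loop, which is exactly why each evaluation has to be fast.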
The impact of the old trade-off
When journal entries were lopped off the end of transaction processing in exchange for front office responsiveness, a number of sizeable drawbacks were introduced:
- Front office systems implemented rudimentary tax lot processing that didn’t necessarily match the closing methods for each portfolio
- Sub-ledgers needed to be synchronized with general ledgers
- A feedback loop was created between accounting-initiated changes (e.g. re-orgs), and front-office-initiated changes (e.g. trades)
That’s before you add in the usual cast of characters: disjoint market data, reference data, valuation methodologies, and so on. The cost of the trade-off is high: armies of operational staff & technology systems with overlapping capabilities.
The new world
Cloud computing has given us a huge gift. With on-premises hardware, efficiency was achieved by buying servers with ever-bigger capacity, and the rest of the infrastructure tended to be costly and labor-intensive: managing firewalls, procuring hardware, and so forth. Now we have an array of compute options: stateless and serverless (e.g. Lambda), stateful and serverful (e.g. EC2), and a suite of other options in the middle (e.g. Fargate). All of these are simpler than an on-premises setup: you can call an API and have what you want in under 60 seconds, versus the weeks or months of an internal infrastructure team. This unleashes an incredible new option, which I'll break down.
Per-portfolio scaling (at the most extreme)
Setting up a new compute instance is very easy. This allows us to process transactions (whether hypothetical or real) per portfolio. If we want more efficiency, we can reuse instances across multiple portfolios. This gives us effectively infinite ability to scale per portfolio, if our security and market data systems can scale alongside.
Additionally, when scaling at this level, the number of events processed by each instance is much lower, and therefore, with performant code, processing can be very fast.
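As a toy illustration of per-portfolio routing (the scheme here is an assumption; a real platform might use consistent hashing or an orchestrator instead), each portfolio can be deterministically mapped to a compute instance, and shrinking the pool simply reuses instances across portfolios:

```python
import hashlib

def instance_for(portfolio_id: str, num_instances: int) -> int:
    """Deterministically map a portfolio to one of N compute instances."""
    digest = hashlib.sha256(portfolio_id.encode()).hexdigest()
    return int(digest, 16) % num_instances

# One instance per portfolio at the extreme; fewer instances means reuse.
print(instance_for("PORTFOLIO-42", num_instances=8))
```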
Sounds great so far. What are the trade-offs?
Eventual Consistency for global transactions
Whether we like it or not, financial systems are eventually consistent in some form, whether that's integrating a new price update into your market values or performing a global change across 100k portfolios. Organizations have segregated systems, or 'step-by-step' manual processing, to work around scale issues.
Global transactions (e.g. a dividend payment) that impact many portfolios cannot be fully transactional in this new model. Instead, the processor of such operations needs monitoring in place to ensure the processing completes across every portfolio. Many of these operations take place after end of day, when eventual consistency is not problematic.
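As a sketch of what that monitoring might look like (the queue and status-store APIs are hypothetical), a global event can be fanned out as one task per portfolio, with a reconciliation step that reports stragglers:

```python
def apply_globally(portfolio_ids, event_id, payload, queue, status_store):
    """Fan a global transaction (e.g. a dividend) out to every portfolio."""
    for pid in portfolio_ids:
        status_store.mark_pending(pid, event_id)               # assumed API
        queue.enqueue({"portfolio": pid, "payload": payload})  # assumed API

def unfinished(portfolio_ids, event_id, status_store):
    """The monitor's job: list portfolios still awaiting the event."""
    return [pid for pid in portfolio_ids
            if not status_store.is_done(pid, event_id)]        # assumed API
```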
Inter-portfolio locking
This is the big one. When making inter-portfolio transfers, the portfolios involved need to be locked so the transfer commits atomically. For transfers between two portfolios, this locking time can be very short, again, provided your code is performant. Here there is a sensitive decision to be made: allow anyone to lock portfolios and they can create a locking cascade; restrict who can perform the operation and you risk frustrating users. Large-scale portfolio re-organizations may have to be made outside of business hours, for instance.
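Below is a minimal sketch of such a transfer, assuming one lock per portfolio (the data shapes are illustrative). Acquiring the locks in a canonical order is a standard way to prevent two concurrent transfers from deadlocking, and the critical section stays tiny to keep locking time short:

```python
import threading

locks: dict[str, threading.Lock] = {}            # one lock per portfolio id
positions: dict[str, dict[str, float]] = {}      # portfolio -> security -> qty

def transfer(src: str, dst: str, security_id: str, quantity: float) -> None:
    """Atomically move a position between two portfolios."""
    first, second = sorted([src, dst])           # canonical lock order
    with locks[first], locks[second]:
        positions[src][security_id] = positions[src].get(security_id, 0.0) - quantity
        positions[dst][security_id] = positions[dst].get(security_id, 0.0) + quantity

for pid in ("A", "B"):
    locks[pid], positions[pid] = threading.Lock(), {}
positions["A"]["BOND-1"] = 100.0
transfer("A", "B", "BOND-1", 40.0)               # A: 60.0, B: 40.0
```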
Summary
The goal of this article isn't to explain the how; there is intellectual property in how such a system is implemented. The goal is to explain the why. Only when a platform fully commits to this goal can we unlock the next level of efficiencies within asset managers across the globe.
The goal is to process transactions within milliseconds, in order to support the myriad solutions that can be built on top.