A Technical Architect's Guide to the SIF 3 Infrastructure: High Performance
- For everyone:
- Clean encapsulation simplifies logic, accelerating development and improving performance.
- Support for multiple data objects in the same payload moves more data at once in many situations.
- Fewer overhead messages increase throughput, especially in high-latency environments.
- Taken to the max:
- Multiple connections are defined for both synchronous and asynchronous data exchanges.
- Long polling brings real-time responsiveness to new levels.
- eTag and similar support for multiple-object queries helps you retrieve only the data that has changed.
- The same use case, up to 400 times faster.
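The eTag-based change detection mentioned above can be sketched as a conditional GET: the consumer caches the tag from its last response and the provider returns a full payload only when the data has actually changed. This is a minimal illustration with a simulated server; the function and collection names are hypothetical, not the actual SIF 3 API.

```python
# Sketch of eTag-based change detection against a simulated server.
# "fetch_students" and the server dict are illustrative, not SIF-specific.

def fetch_students(server, etag=None):
    """Conditional GET: send the cached eTag; get data only if it changed."""
    if etag is not None and etag == server["etag"]:
        return 304, None, etag            # Not Modified: no body transferred
    return 200, list(server["students"]), server["etag"]

server = {"etag": "v1", "students": ["alice", "bob"]}

status1, body1, etag = fetch_students(server)           # first fetch: full payload
status2, body2, _ = fetch_students(server, etag)        # unchanged: 304, no data

server["students"].append("carol")                      # data changes server-side
server["etag"] = "v2"
status3, body3, etag3 = fetch_students(server, etag)    # changed: fresh payload
```

The payoff is the middle call: when nothing has changed, only a tiny status response crosses the wire instead of the whole collection.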
The rules for web services have changed. In 2005 the landscape seemed to have matured around SOAP, WSDL, and highly dependable asynchronous message flows wherever they were needed. Fast-forward to today and we have REST, API sandboxes, and the occasional timeout is seen as preferable to an always-delayed response. The SIF 3 infrastructure both operates in this world and is designed to make the most of it.
First we set out to have clear encapsulation, a clean delineation between where the infrastructure ends and the data begins. Fortunately REST makes this evident, with a consistent place for headers and another for the body of data. The result is accelerated development and fewer errors, because the logic stays simple. Put another way, it is easier to find the data when it is always in the same place.
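That split can be made concrete in a few lines. In the sketch below, the header names and payload shape are hypothetical, not the actual SIF 3 header set; the point is only that infrastructure concerns live in the headers while the body carries nothing but data.

```python
import json

# Illustrative request builder: infrastructure metadata goes in headers,
# the data object goes alone in the body. Header names are hypothetical.

def build_request(token, message_id, payload):
    headers = {
        "Authorization": f"Bearer {token}",   # who is calling
        "messageId": message_id,              # infrastructure bookkeeping
        "Content-Type": "application/json",   # how to parse the body
    }
    body = json.dumps(payload)                # nothing but data in the body
    return headers, body

headers, body = build_request("abc123", "msg-001", {"student": {"name": "Pat"}})
```

Parsing code can then always look in the same place: headers for routing and bookkeeping, body for the object itself.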
Next we considered our rich history of bundling data for transport. We had certainly gone the one-object-at-a-time route; while simple, it also proved to limit overall performance. Our initial attempt at packages didn't go much better; while performance could be gained, our efforts to set limits sometimes resulted in failure. By the time we designed SIF 3, both capabilities had matured and patterns had emerged for how best to handle this situation. Now pages of responses mirror online search or shopping results, and tunable queues do the same thing for events. Interoperability works best when both sides coordinate.
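The search-style paging pattern can be sketched in a few lines. The fetch function below is simulated rather than a real SIF 3 call, and the parameter names are illustrative; the shape of the loop is the point: the client asks for page after page until an empty page signals the end of the collection.

```python
# Minimal paging sketch: the client walks a collection page by page,
# mirroring search-style results. fetch_page stands in for a real query.

DATA = [f"record-{i}" for i in range(23)]   # simulated provider collection

def fetch_page(page, page_size):
    """Return one page of results; empty list means we are past the end."""
    start = page * page_size
    return DATA[start:start + page_size]

def fetch_all(page_size=10):
    results, page = [], 0
    while True:
        batch = fetch_page(page, page_size)
        if not batch:
            break                 # empty page: the collection is exhausted
        results.extend(batch)
        page += 1
    return results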
Once these things were tightened up we went looking for other inefficiencies and discovered that, with a little thought, we could eliminate many messages from our flow entirely. Fewer overhead messages increase throughput, especially in high-latency situations. Every trip back and forth counts; with SIF 3 you simply make fewer.
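A back-of-the-envelope calculation shows why round trips dominate on high-latency links. The numbers here are illustrative, not SIF benchmarks: moving 10,000 objects over a link with a 100 ms round trip.

```python
# Illustrative latency arithmetic: same data, fewer round trips.
rtt_ms = 100          # round-trip time per request (illustrative)
objects = 10_000
page_size = 500

one_at_a_time_ms = objects * rtt_ms              # one request per object
paged_ms = (objects // page_size) * rtt_ms       # pages of 500 objects
speedup = one_at_a_time_ms // paged_ms           # 500x fewer trips
```

One object per request spends 1,000 seconds on pure latency before a single byte of payload is counted; pages of 500 spend two seconds. The data transferred is identical; only the overhead changes.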
Now the changes above are fundamental and expected to reach everyone. However, if you need more performance, the SIF 3 infrastructure offers further options: multiple connections for increased throughput, and long polling for closer-to-real-time events. Taken together, for a well-selected use case in a low-latency environment, we have seen data flow up to 400 times faster than SIF 2.
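The long-polling idea can be sketched with a queue standing in for the provider's event buffer: instead of re-polling on a short timer, the consumer's request is held open until an event arrives or a timeout elapses. This is a simulation under assumed names, not the SIF 3 event API.

```python
import queue
import threading
import time

# Long-polling sketch: the poll blocks until an event is available or the
# timeout expires. The queue stands in for the provider's event buffer.

events = queue.Queue()

def long_poll(timeout=5.0):
    """Hold the request open until an event arrives or the timeout elapses."""
    try:
        return events.get(timeout=timeout)   # blocks while the queue is empty
    except queue.Empty:
        return None                          # timeout: consumer re-issues the poll

def publisher():
    time.sleep(0.2)                          # event lands while the poll waits
    events.put({"type": "studentUpdated"})

threading.Thread(target=publisher).start()
result = long_poll()                         # returns as soon as the event lands
```

The consumer sees the event a fraction of a second after it is published, yet sends only one request instead of a stream of empty "anything new?" polls, which is exactly the overhead-message reduction described above.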
It is time to start planning your upgrade: http://www.a4l.org/page/Infrastructure