Class BatchedPreprocessor

java.lang.Object
  net.i2p.router.tunnel.TrivialPreprocessor
    net.i2p.router.tunnel.BatchedPreprocessor
All Implemented Interfaces:
TunnelGateway.QueuePreprocessor
Direct Known Subclasses:
BatchedRouterPreprocessor

class BatchedPreprocessor
extends TrivialPreprocessor
Batching preprocessor that will briefly delay the sending of a message if it doesn't fill up a full tunnel message, in which case it queues up an additional flush task. This is a very simple threshold algorithm: as soon as there is enough data for a full tunnel message, it is sent. If after the delay there still isn't enough data, whatever is available is sent and padded.

As explained in the tunnel document, the preprocessor has a lot of potential flexibility in delay, padding, or even reordering. We keep things relatively simple for now. However, much of the efficiency results from the clients selecting the correct MTU in the streaming lib, such that the maximum-size streaming lib message fits in an integral number of tunnel messages. See ConnectionOptions in the streaming lib for details.

Aside from the obvious goals of minimizing delay and padding, we also want to minimize the number of tunnel messages a message occupies, to minimize the impact of a router dropping a tunnel message. So there is some benefit in starting a message in a new tunnel message, especially if it will fit perfectly that way (a 964 or 1956 byte message, for example).

An idea for the future: if we are in the middle of a tunnel message and starting a new I2NP message, and this one won't fit, look to see if we have something that would fit instead by reordering:

  if (allocated > 0 && msg.getFragment() == 0) {
      for (j = i + 1; j < pending.size(); j++) {
          if (it will fit and it is fragment 0) {
              msg = pending.remove(j);
              pending.add(0, msg);
          }
      }
  }
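The core threshold-and-flush behaviour can be sketched roughly as follows. This is a minimal illustration, not the actual BatchedPreprocessor code; FULL_SIZE, offer(), flush(), sendFull(), sendPadded(), and scheduleFlush() are hypothetical stand-ins for the real queue, fragment, and scheduling machinery:

  import java.util.ArrayList;
  import java.util.List;

  /**
   * Sketch of a threshold batcher: accumulate fragments until a full tunnel
   * message's worth of data is available; otherwise schedule a delayed flush
   * that sends (and pads) whatever has accumulated so far.
   */
  class ThresholdBatcherSketch {
      static final int FULL_SIZE = 1024;          // illustrative tunnel message payload size
      private final List<byte[]> pending = new ArrayList<>();
      private int pendingBytes;

      /** Queue a fragment; send immediately once a full message's worth is available. */
      synchronized void offer(byte[] fragment) {
          pending.add(fragment);
          pendingBytes += fragment.length;
          if (pendingBytes >= FULL_SIZE)
              sendFull();                          // threshold reached: send right away
          else
              scheduleFlush();                     // not enough yet: wait briefly for more data
      }

      /** Called when the flush delay expires with the threshold still unmet. */
      synchronized void flush() {
          if (pendingBytes > 0)
              sendPadded();                        // send what we have, padded out to FULL_SIZE
      }

      private void sendFull()   { pending.clear(); pendingBytes = 0; }   // build and send full message(s)
      private void sendPadded() { pending.clear(); pendingBytes = 0; }   // pad the remainder, then send
      private void scheduleFlush() { /* queue a flush task to run after a short delay */ }
  }

The key trade-off shown here is the one described above: sending as soon as the threshold is met keeps delay low, while the delayed flush bounds how long a small message waits at the cost of some padding.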