The issue at hand is this: should capital newly committed to a system be allocated immediately to the system's open trades, in proportion to total system equity, or should it be allocated in some staged fashion? Allocating on a fixed schedule (like dollar cost averaging) is one alternative; allocating only to new trades as they come up is another.
My initial thought on the topic, without digging too deeply, was this: if your system has a positive expectation and cash has a zero return, then any allocation spread across time will have a lower expected return than going all in on day one - the sidelined capital is a lost opportunity cost. The average customer should be happy to have his capital committed entirely on day one. But of course there is no such thing as an average customer - being the guy at the wrong end of the distribution of returns will suck.
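The opportunity-cost argument can be made concrete. Suppose a constant expected monthly return with cash earning zero: capital split into k equal tranches entering in consecutive months compounds for fewer months on average, so expected growth falls as k rises. A minimal sketch (the 1% monthly figure is purely illustrative, not Winton's actual return):

```python
def expected_growth(r, k, horizon=12):
    """Expected growth factor over `horizon` months when capital is split
    into k equal tranches invested in consecutive months, with the
    still-sidelined cash earning zero in the meantime."""
    # Tranche i (0-indexed) enters at the start of month i+1 and so
    # compounds for horizon - i months.
    return sum((1 + r) ** (horizon - i) for i in range(k)) / k

# Illustrative: expected 12-month growth shrinks as tranches increase.
for k in (1, 3, 6, 12):
    print(f"{k:2d} tranches: {expected_growth(0.01, k):.4f}")
```

Every extra tranche strictly lowers the expectation whenever r is positive - which is exactly the "all in on day one" intuition above.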
Why might you not want to go all in? Trend-following systems have an inherent equity give-back at the end of a trade's life. As each day passes, every open trade is closer to that give-back and the odds of experiencing it go up - so why not make sure we enjoy each trade in full by only taking new ones? Perhaps a future losing trade is currently in winning territory; now would be a bad time to enter that position.
Against this there are issues such as the administrative and execution costs of scaling in. As Sluggo points out on the TB forum, there are also issues of size - if only 1/12 of the capital is available this month and a Nat Gas trade comes up, maybe we can't take it. Then it turns out to be this year's 20R trade and our investor is unhappy! The opposite of the give-back can also occur - maybe a future winning trade is currently in a draw-down, and new capital entering now will benefit as the trade progresses.
Let's explore a real system, the Winton Diversified Futures Fund, and see what we can see.
I will use publicly available information from the IASG website. You can easily find the capsule performance table for Winton's Diversified Fund by typing "Winton" into the search box at the top right of the IASG home page.
Experiment 1: Enter Over Increasing Time Horizons
In this experiment I compare putting all the money into the fund in one month versus spreading the allocation over the first 2, 3, ..., 12 months. I also explore applying 0.5, 1, 2, 3 and 4 times leverage to the allocation. I cap leverage at 4 because an LSPM analysis using unconstrained optimal f suggests a maximum leverage of a little over 4. The leverage variation is there to see if, and how much, the decision depends on the variance of monthly returns.
I then create 100,000 simulated sets of 12 monthly returns (sampling with replacement from the most recent 68 months of data) and see what total return I have at the end of twelve months under each scenario. For each scenario I calculate the mean and standard deviation of those total returns.
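The bootstrap just described can be sketched in a few lines. Everything here is illustrative, not the author's actual code: the synthetic normal returns stand in for the real 68-month series (which would come from the published performance table), and the `simulate` helper and its parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 68 most recent monthly returns, in decimal form.
monthly_returns = rng.normal(0.01, 0.03, size=68)

def simulate(returns, n_tranches, leverage, n_sims=100_000, horizon=12):
    """Bootstrap 12-month paths. Tranche i (1/n_tranches of capital)
    enters at month i; idle capital earns zero."""
    # Sample monthly returns with replacement: one path per simulation row.
    paths = rng.choice(returns, size=(n_sims, horizon)) * leverage
    # growth[:, m] = product of (1 + r) from month m to the end, i.e. the
    # growth factor earned by capital that enters at month m.
    growth = np.cumprod(1 + paths[:, ::-1], axis=1)[:, ::-1]
    # Total 12-month outcome: equal-weight average over the tranches' entries.
    total = growth[:, :n_tranches].mean(axis=1)
    return total.mean(), total.std()

for k in (1, 6, 12):
    mean, sd = simulate(monthly_returns, k, leverage=1)
    print(f"{k:2d} tranches: mean {mean:.4f}, sd {sd:.4f}")
```

With a positive expected monthly return, the mean outcome falls and the standard deviation shrinks as the tranche count rises - the pattern described below.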
The solid lines show the average total return after 12 months. The highest, of course, is 4 times leverage; the lowest is 0.5 times. The broken lines are ±1 standard deviation bands. The x-axis shows the number of tranches used.
In general, at all levels of leverage, the expected 12-month return is higher for the "All-In" scenario than for any scheduled allocation, and the more tranches in the allocation, the worse the return. Not surprising, really: the sidelined money has no opportunity to earn a return.
As you might expect, the variability in the return is much greater for the "All-In" scenario, and greater still at higher leverage.
But here's the thing: the -1 standard deviation band is still better for the "All-In" scenario. Since about 84% of a bell-shaped distribution lies above the point one standard deviation below its mean, the "All-In" scenario beats any of the other scenarios across roughly 85% of the distribution of returns. And this takes no account of the fuss and bother of incremental allocation.
Experiment 2: Allocate Tranches Equally Spaced Across 12 Months
This experiment differs from the first only in the scheduling of the allocations. Instead of the tranches being allocated in consecutive months, they are spread evenly across the year: the 2-tranche scenario allocates in months 1 and 7, the 3-tranche scenario in months 1, 5 and 9, and so on.
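Only two examples of the spacing are given (months 1 and 7; months 1, 5 and 9). A hypothetical formula consistent with both - entry months spread evenly across the horizon, always starting in month 1 - is:

```python
def tranche_months(n_tranches, horizon=12):
    """Entry months for n_tranches allocations spread evenly over the
    horizon, always beginning in month 1 (hypothetical schedule)."""
    return [1 + (i * horizon) // n_tranches for i in range(n_tranches)]

print(tranche_months(2))   # [1, 7]
print(tranche_months(3))   # [1, 5, 9]
print(tranche_months(12))  # [1, 2, ..., 12] - identical to Experiment 1
```

Note the 12-tranche case collapses to the consecutive-month schedule of Experiment 1, as it must.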
We still see the same pattern as in Experiment 1: the "All-In" approach outperforms across the majority of expected outcomes.
Looking at Winton's returns in particular, you can see a distinct change in the risk-return profile around 5 - 6 years ago. This gives us the chance to explore further - it's almost a different system! Re-running the experiments using only the first 99 months of available data, we find the same pattern in the average returns - "All-In" is always best. However, the noisier return stream feeds into the variability of the twelve-month returns: somewhat less of the range of the "All-In" approach outperforms the tranched approach - about 0.8 - 0.9 of a standard deviation. Personally, I think going "All-In" is still the percentage play.
I conclude that, with the population of returns exhibited by Winton's Diversified Fund, an investor will on average be better served by allocating all their capital at once. By extension, the manager should deploy all that capital at once, in proportion to the total equity in each position. I suspect this conclusion extends to trend-following systems in general.
If you manage a fund, you can run similar tests on your own system to determine what will, on average, deliver the best outcome for your investors. Returning to the idea that an individual investor's experience can be far from the average: you have to weigh disappointing most investors a little (by initially under-performing your metrics through scaling in) against disappointing a minority of investors a lot!