As I've previously stated, the aim is primarily to reduce risk rather than save money.
Where detectors are in benign environments such as cupboards, and their analogue values don't fluctuate to any significant degree, i.e. there is insufficient stimulus in the normal vicinity of the detector, then the only way to gain assurance that the detector is capable of responding to a fire-like stimulus is to stimulate it. In that case, functional testing is absolutely necessary, but on a per-device basis: some detectors may be functionally tested more regularly than others, based on the variation in their ambient environments and the corresponding fluctuations in analogue values seen at the control panel. Such an approach would not only target resources at those devices for which there is little evidence that they are capable of responding to fire-like stimuli, but should also highlight devices that have been impaired by being covered or otherwise contaminated.
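To illustrate what I mean by targeting devices with little analogue fluctuation, here is a minimal sketch, assuming you can export a history of analogue readings per detector from the panel. The detector IDs, readings and threshold are all invented for the example; real panels expose this data in their own proprietary ways.

    # Illustrative sketch only: ranks detectors for functional testing by how
    # little their analogue values fluctuate. All IDs and readings are
    # hypothetical placeholders.

    from statistics import pstdev

    def rank_for_functional_test(analogue_history, min_samples=30):
        """Return detector IDs ordered from least to most ambient fluctuation.

        analogue_history: dict mapping detector ID -> list of analogue readings
        taken from the control panel over the review period. Detectors with the
        lowest variation give the least evidence that they can respond to
        stimuli, so they come first in the testing queue.
        """
        scores = {}
        for detector_id, readings in analogue_history.items():
            if len(readings) < min_samples:
                # Too little data to judge; treat as highest priority.
                scores[detector_id] = 0.0
            else:
                scores[detector_id] = pstdev(readings)
        return sorted(scores, key=scores.get)

    # Example: the cupboard detector shows almost no variation, so it tops the list.
    history = {
        "cupboard-01": [25, 25, 26, 25] * 10,
        "kitchen-03": [25, 40, 31, 55] * 10,
    }
    print(rank_for_functional_test(history))  # ['cupboard-01', 'kitchen-03']

Treat the ranking as a prompt for earlier functional testing (and for checking whether a device has been covered), not as a pass/fail result in itself.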
The functional test itself is carried out over a 12-month period, and some devices aren't visually checked or otherwise examined between tests; testing a device today doesn't prevent its failure tomorrow. There could be a case for reducing the interval between functional tests. Currently tests are carried out on a 12-month basis; the BS committee agreed on that interval, probably taking account of the reliability and capabilities of the fire detection systems available at the time, which included conventional systems with far less automatic monitoring capability.
Perhaps there is more of a case to be argued for increasing the interval between functional tests, if the outcome of the cost-benefit analysis demonstrates that the cost of testing (both financial and in terms of the risk the testing itself introduces) is disproportionate to the aggregate reduction in risk achieved through functional testing on an annual basis.
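To make that comparison concrete, a crude version of the calculation might look like the following. Every figure and parameter name here is an invented placeholder for illustration, not data from any real system or from the standard.

    # Toy cost-benefit comparison only; all figures are hypothetical.

    def testing_is_proportionate(cost_per_test, risk_cost_of_testing,
                                 risk_reduction_value, tests_per_year=1):
        """Compare the annual cost of functional testing (financial cost plus
        the monetised risk the testing itself introduces) against the monetised
        aggregate reduction in risk it achieves."""
        total_cost = tests_per_year * (cost_per_test + risk_cost_of_testing)
        return total_cost <= risk_reduction_value

    # e.g. if annual testing costs GBP 15 per head plus GBP 5 of induced risk,
    # and the estimated risk reduction is only worth GBP 10, annual testing
    # would be judged disproportionate under this crude model.
    print(testing_is_proportionate(15, 5, 10))  # False

The hard part, of course, is putting defensible numbers on the risk terms; the arithmetic itself is trivial.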