
#FailureWins: Is success or failure a scale?

Is a culture where acceptance of failure is normalised one that will have a positive impact on ADF capability? Robert Vine argues that, at least in a training environment, outcomes should be evaluated on a scale of performance rather than as a binary pass/fail. As part of our #FailureWins series, Vine makes the point that if we want to improve performance and measure capability, we need to measure ourselves against several different scales rather than a single polarised classification.


Is a culture where acceptance of failure is normalised one that will have a positive impact on ADF capability? When you reflect on a career's worth of experience, what stick in your mind are the times when things went wrong, and it is these hard lessons that shape your approach to the future. If the experience of failure is a strong driver for improvement, then fostering such a culture seems logical. But what would be the negative consequences of a force that never succeeds in training? How should we balance the competing benefits of success and failure?


All training events, be they individual, unit-level, joint or coalition, balance the needs to teach people, test equipment, develop tactics and verify capability. Meeting a training objective demonstrates that a standard has been met: it proves that individual performance, system operation and tactical proficiency combine to produce a capability. Failing to meet an objective identifies a deficiency that needs to be resolved.


It is important to measure success or failure to demonstrate whether the ADF is ready for the roles it is directed to perform. In this context, failure is not a negative outcome: it clearly identifies what needs to be fixed, with repeat attempts confirming whether the fix has worked. This methodology encourages improved performance, but what does failure tell us about the capability of the force as it currently stands?


Rather than viewing training outcomes as a binary pass/fail, the ADF must treat training as a scale of performance. If we want to improve performance and measure capability, we need to measure ourselves against a number of different scales. For example:

  • Losses: How much of the force was expended to achieve the mission?

  • Capacity: How much of our resources were used to achieve the mission?

  • Timeliness: How long did it take to achieve the mission?

  • Resilience: How long did it take to recover capability after the mission?

  • Consequences: Did the force achieve the mission without adverse consequences?

  • Adaptability: Did the force adapt to changes in the adversary or operating environment?

  • Integration: Did the force optimise its resources towards the mission?

  • Outcome: Did the force achieve the mission in the manner planned?
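As an illustrative sketch only (not ADF doctrine or any existing assessment system), the dimensions above can be recorded as a multi-axis scorecard rather than collapsed into a single pass/fail bit. The class name, field names and the 0.0–1.0 scale are assumptions made for the example:

```python
from dataclasses import dataclass, fields

@dataclass
class MissionScorecard:
    """Multi-axis mission assessment: each axis scored 0.0 (worst) to 1.0 (best).

    Hypothetical structure mirroring the eight dimensions listed above.
    """
    losses: float        # force preserved (1.0 = no losses)
    capacity: float      # resources remaining after the mission
    timeliness: float    # speed of mission achievement
    resilience: float    # speed of capability recovery afterwards
    consequences: float  # absence of adverse consequences
    adaptability: float  # adaptation to adversary/environment changes
    integration: float   # optimisation of resources toward the mission
    outcome: float       # degree to which the mission went as planned

    def summary(self) -> dict:
        """Report every axis rather than a single pass/fail verdict."""
        return {f.name: getattr(self, f.name) for f in fields(self)}

    def weakest_axes(self, n: int = 2) -> list:
        """Identify the axes most in need of improvement."""
        scores = self.summary()
        return sorted(scores, key=scores.get)[:n]


mission = MissionScorecard(losses=0.9, capacity=0.6, timeliness=0.3,
                           resilience=0.5, consequences=0.8,
                           adaptability=0.4, integration=0.7, outcome=0.5)
print(mission.weakest_axes())  # → ['timeliness', 'adaptability']
```

The point of the sketch is that a force can "fail" the mission overall while still scoring well on several axes, and the weakest axes tell the organisation exactly where to direct improvement.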


This method of capability assessment requires the ADF to exercise in a realistic environment. It can no longer be acceptable to exercise against a limited adversary so that the mission is challenging but still achievable. Logistics must be tested rather than assumed away. Bases must be measured against a genuine threat rather than treated as safe havens. When did we last allow the adversary to use initiative and asymmetric approaches? Without a realistic exercise environment, any measure of success is pointless, because war is a relative game in which the adversary adapts and improves.


To understand our capability relative to an adversary, the ADF must incorporate an independent organisation that sets exercise scenarios mimicking the operational environment. No longer can we develop scenarios as a training aid to demonstrate the performance of the force against the adversary we would like to fight. Instead, we must employ a realistic operational environment that allows us to understand how we will actually perform on operations and provides context for the improvements we need to make to the force.


Similarly, the ADF must carefully consider how individual performance is measured. In a system where performance is measured on a scale, which is valued more highly: someone who achieves the highest score, or someone who achieves the greatest improvement? If success in warfare is a game of continuous improvement relative to the adversary, then we must reward those who are most adept at improving, not those who demonstrate the best initial performance but fail to adapt to the adversary.


Applying these performance metrics would drive a culture of continuous improvement while still demonstrating the performance of the force. One of my experiences on the Air Warfare Instructor Course still comes to mind over ten years later, not just because it was a poor performance (an assessment failure) but because of the personal and organisational improvement that has occurred since.


I was to be the Mission Commander for a mission that required us to adjust all our plans from offensive to defensive operations within just three hours. It became apparent in the first few minutes of the mission that the plan was poor. Despite this acknowledgement, I did not adapt quickly enough, and the mission was a failure. However, my analysis of the reasons for the failure was sound: our planning process took too long and relied too heavily on individual experience rather than tactical procedures. I went on to pass the course, fix the issue by writing the procedures we needed, and train others in how to use them.


Now, I regard this event as one of the most positive in my career. I am proud of my efforts to improve individually and to help improve the organisation. Unfortunately, we rarely give people the opportunity to transfer individual lessons to the organisation. If we are to judge performance on a scale and value improvement, then we need to give people the ability to embed hard-won lessons in the organisation: the time to write new tactics, the authority to change their system configurations, the budget to buy necessary equipment, and rewards for testing new ideas.


Failure can be a strong driver for some individuals to improve performance, but normalising failure must not neglect the ADF's need to know that it is ready to perform its role. A system that measures performance on a scale rather than as a binary pass/fail offers the potential to drive individual and organisational change. Coupled with a system that rewards improvement and provides the broad ability to make changes quickly, it can generate a culture that improves at a greater rate than an adversary. To achieve this, the ADF must train in a realistic environment rather than an idealistic one.


Robert Vine is an Air Battle Manager in the Royal Australian Air Force currently specialising in futures and concepts for Joint Command and Control, and Integrated Air and Missile Defence. The opinions expressed are his alone and do not reflect those of the Royal Australian Air Force, the Australian Defence Force, or the Australian Government.

