- Super Best Case: Get The NPV Upper Bound
To explore the upside potential of a project's NPV, this setup searches the whole solution space with no constraints other than processing capacity, in a global multi-period optimization focused solely on maximizing the project's discounted cash flow.
Because MiningMath optimizes all periods simultaneously, without the need for revenue factors, it has the potential to find higher NPVs than traditional procedures based on LG/Pseudoflow nested pits, which do not account for processing capacities (leading to gap problems), cutoff policy optimization, or the discount rate. Traditionally, these and many other real-life aspects are only accounted for later, through a stepwise process, limiting the project's potential.
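The effect of the discount rate on scheduling can be illustrated with a minimal NPV calculation. This is a sketch only: the cash flows and the 10% discount rate below are hypothetical illustration values, not outputs of the software.

```python
# Minimal NPV sketch: discount each period's cash flow back to present value.
# All cash flows and the discount rate are hypothetical illustration values.

def npv(cashflows, discount_rate):
    """Net present value of per-period cash flows (periods 1, 2, ...)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cashflows, start=1))

# A schedule that brings high-value material forward earns a higher NPV,
# even when the total undiscounted cash is identical.
early = [60.0, 30.0, 10.0]   # $M per year, high value mined first
late  = [10.0, 30.0, 60.0]   # same total cash, value deferred

print(round(npv(early, 0.10), 2))  # -> 86.85
print(round(npv(late, 0.10), 2))   # -> 78.96
```

This is why a global optimization that sees the discount rate directly can prefer a different mining sequence than a procedure that only discounts cash flows after the schedule is fixed.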
This setup serves as a reference to challenge the Best Case obtained by other means, including more recent academic and commercial DBS (Direct Block Scheduling) technologies. The block periods and destinations optimized by MiningMath can be imported back into your preferred mining package for comparison, pushback design, or scheduling. All of this is available for free!
- Processing capacity: 10 Mt per year.
- Stockpiling parameters: on.
- Timeframe: Years (1).
Figure 1: Production constraints
1.1 Advanced experience and refinements
Note that if you have multiple destinations, extra processing routes, or dump routes, they can be added for proper cutoff optimization. In addition, the surfaces obtained here can be used in further steps or imported back into any mining package for pushback design and scheduling.
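The idea behind cutoff optimization with multiple routes can be sketched as choosing, for each parcel of material, the destination that yields the highest value. This is an illustrative simplification only: the grades, price, recoveries, and costs below are hypothetical, and MiningMath makes this decision inside the global optimization rather than block by block.

```python
# Illustrative destination choice: send each parcel to the route with the
# highest value among a plant, a leach pad, and a waste dump.
# All prices, recoveries, and costs are hypothetical illustration values.

PRICE = 1800.0                               # $/oz of metal
PLANT = {"recovery": 0.90, "cost": 25.0}     # high recovery, high $/t cost
LEACH = {"recovery": 0.60, "cost": 8.0}      # low recovery, low $/t cost
DUMP_COST = 2.0                              # $/t to dump as waste

def best_destination(grade_oz_per_t):
    """Return the name of the highest-value destination for this grade."""
    values = {
        "plant": grade_oz_per_t * PRICE * PLANT["recovery"] - PLANT["cost"],
        "leach": grade_oz_per_t * PRICE * LEACH["recovery"] - LEACH["cost"],
        "dump": -DUMP_COST,
    }
    return max(values, key=values.get)

for grade in (0.001, 0.01, 0.05):
    print(grade, best_destination(grade))
# Low grades go to the dump, intermediate grades to the leach pad,
# and high grades to the plant: the cutoffs fall out of the economics.
```

Adding a route (here, the leach pad) changes where the economic cutoffs sit, which is why extra processing or dump routes should be declared before running the optimization.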
Constrained Best Case scenarios can be obtained by adding more constraints, preferably one at a time, to make your assessments more realistic and to evaluate each constraint's financial impact on the project, potential conflicts among constraints, and so on:
If shorter runtimes are needed, the resulting surface obtained in the Constraints Validation step can be used as a Restrict Mining surface for the runs here.
These scenarios might take longer, so the main recommendation is to run them on a powerful machine, in parallel, while other optimizations are performed.