1. Introduction

Optimization run time is a common concern for professionals working with robust models. This page provides context and guidance on improving run times, which is useful for building a big picture of the project's behavior under different assumptions and hypotheses.

2. Runtime Barriers

What affects the runtime?

The runtime depends on a combination of factors and is directly related to the complexity of the deposit. It tends to grow with the number of:

    1. Blocks.

    2. Destinations (especially more than three).

    3. Constraints in use and conflicting goals with the same hierarchy order.

    4. Variables imported.

    5. Period ranges.

    6. Parameters changing over time.

    7. Mines in a multi-mine deposit.

Model size limit

Users are often concerned about the limits of handling models with more than 20M blocks. MiningMath can handle virtually any model size: it has been successfully tested with models of up to 240M blocks without reblocking, a run that took three weeks on a desktop machine with 32 GB of RAM.

Typically, datasets with 5 million blocks take a few hours (on an 8 GB RAM machine). The technology can also execute multiple scenarios in parallel on the same computer. There is no need for special servers with extra RAM for deposits of average size.
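As a rough back-of-the-envelope check of whether a model's raw data fits in RAM, you can multiply blocks by variables by bytes per value. The sketch below is illustrative only: the per-block figures are assumptions, not MiningMath internals, and the optimizer itself typically needs several times more memory than the raw data.

```python
def estimate_model_memory_gb(num_blocks, variables_per_block=20, bytes_per_value=8):
    """Rough RAM estimate for holding a block model's raw data.

    variables_per_block and bytes_per_value are illustrative
    assumptions, not MiningMath specifications.
    """
    return num_blocks * variables_per_block * bytes_per_value / 1024**3

# A 5M-block model under these assumptions needs well under 1 GB
# for raw data alone; the optimization process requires much more.
print(f"{estimate_model_memory_gb(5_000_000):.2f} GB")
```

Estimates like this only bound the data itself; actual peak consumption depends on the constraints and scenario setup described above.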

3. Hardware Improvements


Overall, the main bottleneck for MiningMath is memory consumption. The hardware upgrades that most positively impact the optimization run time are:

    • RAM capacity.

    • RAM frequency.

Cores and threads

MiningMath is a single-thread application, which means:

    • Additional cores and threads do not affect the optimization run time.

    • Processors with higher clock speeds improve the run time.

Additionally, the user can open several instances of MiningMath to run multiple scenarios in parallel. While a single scenario will not run faster, this improves the average number of scenarios tested per unit of time.
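The throughput gain from running several instances is simple arithmetic. The sketch below uses hypothetical timings and assumes an idealized setup: each instance runs scenarios back to back, and the machine has enough RAM so the instances do not slow each other down.

```python
def scenarios_per_day(hours_per_scenario, num_instances):
    """Scenarios completed per day with num_instances running in parallel.

    Idealized: assumes back-to-back runs and no memory contention
    between instances (hypothetical timings for illustration).
    """
    return 24.0 / hours_per_scenario * num_instances

# Hypothetical example: 3-hour scenarios, one vs. four instances.
print(scenarios_per_day(3, 1))  # 8.0 scenarios/day
print(scenarios_per_day(3, 4))  # 32.0 scenarios/day
```

In practice, since memory is the main bottleneck, the realistic number of parallel instances is limited by available RAM rather than by cores.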

4. Strategies to improve the runtime

Use surfaces

The most recommended strategy is to follow the tutorial steps of validating data and constraints, then start using surfaces as a guide to reduce complexity without losing the dilution aspects of your approach.

To get such guidance on a broader view with a reduced runtime, use the Exploratory Analysis to obtain pushbacks and insights into what could be used. The last step is to obtain a detailed Schedule, given the model's complexity. If these approaches still do not offer an acceptable runtime, try to obtain intermediate results by splitting the total production into 2 or 3 periods.


Reblocking

Reblocking is another possible approach, though it is not recommended because increasing the block size loses dilution aspects. For example, if your blocks have dimensions of 5x5x5 and you increase them to 10x10x10, each new block merges 2x2x2 = 8 original blocks, reducing the dataset to one-eighth of its standard size. According to user feedback, a multi-mine project with 32M blocks in the final integrated model, in which the mining complex considered various processing routes and operational constraints involving mine infrastructure within the final pit boundaries, obtained the following runtimes:

    • Quadruple reblocking in each direction took about 4-5 hours for each run.

    • Triple reblocking took 12 hours.

    • Double reblocking took 36 hours.

As the solution became clearer, reblocking was gradually reduced to have more flexibility.
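The block-count reduction from reblocking is simply the product of the merge factors in each direction. A minimal sketch of this arithmetic, using the 32M-block figure from the user feedback above (the function name is illustrative):

```python
def reblocked_count(num_blocks, fx, fy, fz):
    """Approximate block count after merging fx * fy * fz blocks into one."""
    return num_blocks // (fx * fy * fz)

base = 32_000_000  # multi-mine model from the user feedback above
for f in (4, 3, 2):  # quadruple, triple, and double reblocking in each direction
    print(f"{f}x{f}x{f} reblocking -> ~{reblocked_count(base, f, f, f):,} blocks")
```

This shows why quadruple reblocking (a 64x reduction, to about 500,000 blocks) ran so much faster than double reblocking (an 8x reduction, to 4M blocks), and why reducing the reblocking factor as the solution becomes clearer trades runtime for resolution.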