Configuration of a Hypercube job
If you compare the display of input data between a single run in MIRO Base Mode and the configuration of a Hypercube job, you will notice that some of the input widgets have been automatically transformed:
When we set up a run in MIRO Base Mode, we could specify the freight cost as well as the minimum amount of goods to ship. These are zero-dimensional values. The very same scalars can now be specified within a given range:
The sliders we specified in the Base Mode have now become slider ranges. In addition, we can set a step size for each slider range.
In addition, the single dropdown menu for selecting the problem type has been replaced by a multi dropdown menu.
Tip:
Input widgets you defined for your scalars in the Base Mode are automatically expanded in the Hypercube Mode (e.g. a slider in Base Mode is expanded to a slider range in Hypercube Mode and a single-dropdown menu is expanded to a multi-dropdown menu etc.).
The table below gives an overview of the widgets and how they appear in Base Mode and in Hypercube Mode.
Base Mode | Hypercube Mode
Single slider | Slider range with step size selection
Slider range | Slider range with step size and combination-type selection
Single dropdown menu | Multi dropdown menu
Multi dropdown menu | Not supported yet
Checkbox | Multi dropdown menu with options yes and no
Date selector | Date selector
Date range selector | Date range selector
Text input | Text input
Numeric input | Numeric input
Parameter table | Parameter table
The transformation of input widgets has the following implication for the submission of Hypercube jobs: Unless explicitly specified otherwise in the MIRO configuration of the model, each scalar is expanded, and the Cartesian product over all scalars defines a Hypercube job.
In other words: all combinations are generated from the configuration selected by the user, and each of these combinations corresponds to what we call a scenario in Base Mode.
In the example, we set the slider for freight in dollars per case per thousand miles to a range from 75 to 150 with a step size of 5.
Configured in this way, the slider leads to 16 different variants:
- Variant 1: freight = 75
- Variant 2: freight = 80
- Variant 3: freight = 85
[...]
- Variant 16: freight = 150
For the scalar minimum shipment the slider was set to a range between 20 and 200 with a step size of 20. The resulting variants would therefore be:
- Variant 1: minimum shipment = 20
- Variant 2: minimum shipment = 40
- Variant 3: minimum shipment = 60
[...]
- Variant 10: minimum shipment = 200
The third slider, for the scalar parameter beta, is not set to a range but to a single value. Since this scalar is not varied, only one variant results from it. The same is true for the last widget, a multi dropdown menu in which we specify to only consider the MIP version of our model.
The Cartesian product resulting from the combination of all variations is now calculated: it is built from the different variants of the symbols freight in dollars per case per thousand miles (16), minimum shipment (10), beta (1) and Select model type (1), resulting in 16 x 10 x 1 x 1 = 160 individual scenarios.
It becomes clear that the number of scenarios resulting from a particular Hypercube job configuration can increase rapidly with the number of scalars that are expanded. Depending on how computationally intensive the underlying GAMS model is, the scenarios should be configured carefully.
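If you want to double-check the size of a configuration before submitting it, the arithmetic is simple: each slider range contributes (upper - lower)/step + 1 variants, and the counts of all expanded widgets are multiplied. A small, purely illustrative GAMS snippet reproducing the numbers from the example above:

* variants per slider range: (upper - lower)/step + 1
Scalars nFreight, nMinShip, nScenarios;
nFreight   = (150 - 75)/5 + 1;
nMinShip   = (200 - 20)/20 + 1;
nScenarios = nFreight*nMinShip*1*1;
display nFreight, nMinShip, nScenarios;
* yields 16, 10 and 160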
You may want to prevent certain input widgets from being transformed: this can be done in the MIRO Configuration Mode by unchecking the widget-specific option Should element be expanded automatically in Hypercube Mode?.
With this option deactivated, the scalar can still be set during scenario generation, but with a regular slider instead of a slider range.
Multidimensional Symbols:
Unlike scalar values, multidimensional symbols such as sets, parameters and multidimensional singleton sets cannot be varied within a Hypercube job configuration. Their data remains fixed in every scenario created for a job. However, this does not mean that Hypercube Mode can only be used for zero-dimensional symbols. In order to vary the data of multidimensional symbols via the Hypercube job configuration, this must be implemented in the GAMS model itself. For example, you can use a scalar and assign different data to a multidimensional parameter depending on the value of that scalar. If this scalar is then varied during a Hypercube job, the multidimensional parameter varies with it.
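For illustration, here is a minimal sketch in the spirit of the transport example. The scalar demScen, the set j and the demand figures are made up for this example; only demScen would be exposed to MIRO (and thus to the Hypercube configuration), while the parameter b is derived from it inside the model:

$onExternalInput
Scalar demScen 'demand scenario selector (1: base, 2: high)' / 1 /;
$offExternalInput

Set j 'markets' / new-york, chicago, topeka /;
Parameter b(j) 'demand at market j in cases';

* base demand data
b('new-york') = 325;  b('chicago') = 300;  b('topeka') = 275;
* high-demand variant, used whenever a Hypercube scenario sets demScen to 2
b(j)$(demScen = 2) = round(1.15*b(j));

Varying demScen across the scenarios of a Hypercube job then effectively varies the multidimensional parameter b as well.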
Job submission: general
Once you are happy with the setup of your Hypercube job, you can submit it by clicking Submit job. This triggers the expansion of all scenarios resulting from your configuration and sets up your Hypercube job.
Note:
The maximum number of scenarios that can be submitted in one job is limited to 10 000.
Note:
Currently, when executing Hypercube jobs locally (without GAMS Engine), your model must be prepared to work with idir. MIRO creates a local working directory for each scenario and executes your model from this directory, while referring to the main gms file via idir1.
You can debug your model by submitting a Hypercube job, navigating to the directory of your Hypercube job (C:\Users\<username>\.miro\hcube_jobs\<modelname>\1 on Windows, ~/.miro/hcube_jobs/<modelname>/1 on macOS/Linux) and executing the file jobsubmission.gms from GAMS Studio.
MIRO also looks into the database where your previous runs are stored and checks whether a subset of the scenarios you want to submit has already been saved. If it finds results from previous runs, MIRO asks whether it should submit only the scenarios that have not yet been executed, or submit all of them again.
There are a number of reasons why you might want to re-submit certain scenarios. We will elaborate on these reasons in the next section that discusses the technical details of how MIRO sets up a Hypercube job. If you are reading this documentation for the first time, you might want to skip this next advanced (sub)section.
Advanced: How MIRO identifies Hypercube jobs
You might be wondering how MIRO knows whether a job has already been submitted, or rather how a job is identified. Each scenario is characterized by the set of all input datasets that are populated by MIRO. This means that if you change something in your model or your execution environment that is not known to MIRO, this will not be detected as a different scenario. Consider the case where you want to run a certain set of scenarios on your local machine and then on a compute cluster in order to perform some performance analyses. Since all of the input datasets that come from MIRO remain the same, MIRO considers the scenarios identical: no attribute "compute environment" is known to MIRO.
A scenario is identified by a 256-bit SHA-2 hash value that MIRO computes as follows:
For each scenario of a Hypercube job, the corresponding GAMS call is assembled:
gams transport.gms --HCUBE_STATIC_a=8bc030208cf6482932608872c5681310 --HCUBE_SCALAR_beta=0.86 --type="mip"
You probably notice some oddities: all of the inputs that come from MIRO are included in this call, even those that are not GAMS command line parameters. GAMS scalars are prefixed with the keyword HCUBE_SCALAR_; parameters and other multidimensional GAMS symbols with the keyword HCUBE_STATIC_. Furthermore, the value of parameter a looks odd: MIRO hashes the CSV representation of the parameter with a 128-bit MD5 hash function. This way, we make sure that the parameter, and thus the entire GAMS command, changes when the table is modified, while at the same time keeping the call short.
The HCUBE_SCALAR_ keyword instructs the GAMS compiler to assign the specified value not to a compile-time variable but to a GAMS symbol declared in your model.
Command line parameters (GAMS options and double dash parameters) are just appended as in any GAMS call.
Once the command is built, we hash this again with the aforementioned SHA-2 hash function and get an identifier of the form:
7f14dcce0c726d215adae54daa2690b6a86b540cf4e3efaf9f97e8f4f3ab90a5.
We end up with a (relatively) compact identifier of a scenario while also making sure that we have a valid file/directory name without any reserved characters. The obvious disadvantage is that there is no way to derive the input parameters from the hash.
Automated job submission
General
All scenarios of the Hypercube job are calculated automatically on the machine running MIRO.
Job tags: When submitting a Hypercube job automatically, we can optionally specify job tags. Job tags are identifiers attached to all scenarios of a Hypercube job. They can help you identify scenarios of a certain job or a group of jobs with similar attributes.
This means that you can use those tags as part of your database queries in order to easily find certain scenarios that you are interested in. Additionally, job tags can also be used to compare different groups of scenarios that you are interested in via PAVER.
Import results
When we submit the Hypercube job automatically, it will be executed in the background. Without going into too much detail, what happens here is simple:
First, a Hypercube file which contains all scenario configurations is created by MIRO.
This file gets converted into executable GAMS calls, which are then processed. The script which is responsible for this conversion is located in the /resources folder of the MIRO installation directory and can be customized as needed.
More about these steps in the section on manual execution of a Hypercube job.
Once a Hypercube job has been submitted, it is shown in the Import results section, which can be accessed via the sidebar menu. All jobs that are still running or whose results have not yet been imported are listed there.
For each submitted job, the overview shows the owner, the submission date and the (optionally) specified job tag(s). In addition, the current status (scheduled, running or completed) is visible.
The progress of a run can be displayed by clicking on the corresponding Show progress button, and results that have not yet been loaded into the database or are still pending can be discarded.
As soon as a job has finished, the results can be loaded into the database with a click on Import.
If you want to see jobs that you previously imported or discarded, you can do so by clicking on Show history:
Once a job has been imported, the results are stored in the database.
Manual job submission
This option is especially interesting if the calculations are not to be performed on the same machine on which MIRO is running, e.g. on a compute cluster.
Note:
GAMS version 30.2 or later must be installed on the system running the job.
Download zip archive
If we want to execute a Hypercube job manually, we can download a ZIP archive with all needed data by clicking on Submit job → Process job manually.
The archive has the following contents:
- transport.gms
model file
- static folder
This folder contains all non-scalar (thus multidimensional) input data in the form of a GDX file. As explained here, this data remains the same for all scenarios of a Hypercube job.
- hcube.json
Hypercube file. Contains all variable information on the scenarios to be calculated (scenario ID, command line arguments).
- transport.pf
This parameter file contains options which must be set in order to solve the scenarios of a Hypercube job. You should not modify this file.
- hcube_submission.gms
File used in automated job submission to generate a script based on the contents of the hypercube file and all static data, which then can be executed in GAMS.
- In addition, all directories and files that have been assigned to the model by the user (see model assembly file) are also included.
Let's have a brief look at the hcube.json file which results from the configuration shown above:
{
  "jobs": [
    {
      "id": "e505eb77f2fed92ecb6c7609a6873974ea87d69b620a7ca9aa8d6a1a62e7159b",
      "arguments": [
        "--HCUBE_SCALARV_mins=20",
        "--HCUBE_SCALARV_beta=0.97",
        "--HCUBE_SCALARV_f=75",
        "--type=\"mip\""
      ]
    },
    {
      "id": "834cc2ff77d5cddc70f6462263a500ab43d22f084a0d98ff1fd8cfc354c8e6ec",
      "arguments": [
        "--HCUBE_SCALARV_mins=40",
        "--HCUBE_SCALARV_beta=0.97",
        "--HCUBE_SCALARV_f=75",
        "--type=\"mip\""
      ]
    },
    {
      "id": "867432ac65a22a251ee6fca0ce5eb7ba45195538e389406abe7593f21b8255c4",
      "arguments": [
        "--HCUBE_SCALARV_mins=60",
        "--HCUBE_SCALARV_beta=0.97",
        "--HCUBE_SCALARV_f=75",
        "--type=\"mip\""
      ]
    },
    [...]
  ]
}
Each object contains the information needed for exactly one scenario, i.e. each variable scenario configuration is stored in a separate JSON object:
- ID:
Each job has an individual ID in the form of a hash value which is used to (almost) uniquely identify a scenario. To learn more about how this hash is generated, see the section on how MIRO identifies Hypercube jobs above.
- Arguments:
The arguments contain information on all scalar values, i.e. GAMS Scalars, scalar Sets, double-dash parameters and GAMS Options. This is where the individual scenarios differ.
Run the Hypercube job
To be able to calculate all configured scenarios in GAMS, we need to convert the scenario-specific information provided in the file hcube.json together with the static data into an executable GAMS script. Additionally, the results need to be saved in such a way that MIRO can validate and import them. The file hcube_submission.gms is a simple script which is used in the automated job submission for this purpose and which can also be used here.
Note:
In case you need to customize this script e.g. because you want to send the GAMS jobs to some sort of compute server, you will need to adjust this file accordingly. Note that the folder structure of the downloaded ZIP file corresponds to the structure assumed in the file hcube_submission.gms.
By running hcube_submission.gms, we can generate a script based on the contents of the hcube.json file and all static data, which can then be executed in GAMS. This is what the resulting file jobsubmission.gms looks like:
$if dexist 4upload $call rm -r 4upload
$call mkdir 4upload
$call cd 4upload && mkdir tmp0
$if errorlevel 1 $abort problems mkdir tmp0
$call cd 4upload/tmp0 && gams "C:\Users\Robin\Documents\.miro\hcube_jobs\transport\1\transport.gms" --HCUBE_SCALARV_mins=20 --HCUBE_SCALARV_beta=0.97 --HCUBE_SCALARV_f=75 --type="mip" pf="C:\Users\Robin\Documents\.miro\hcube_jobs\transport\1\transport.pf"
$if dexist e505eb77f2fed92ecb6c7609a6873974ea87d69b620a7ca9aa8d6a1a62e7159b $call rm -r e505eb77f2fed92ecb6c7609a6873974ea87d69b620a7ca9aa8d6a1a62e7159b
$call cd 4upload && mv tmp0 e505eb77f2fed92ecb6c7609a6873974ea87d69b620a7ca9aa8d6a1a62e7159b
$onecho > "%jobID%.log"
1/160
$offecho
$call cd 4upload && mkdir tmp1
$if errorlevel 1 $abort problems mkdir tmp1
$call cd 4upload/tmp1 && gams "C:\Users\Robin\Documents\.miro\hcube_jobs\transport\1\transport.gms" --HCUBE_SCALARV_mins=40 --HCUBE_SCALARV_beta=0.97 --HCUBE_SCALARV_f=75 --type="mip" pf="C:\Users\Robin\Documents\.miro\hcube_jobs\transport\1\transport.pf"
$if dexist 834cc2ff77d5cddc70f6462263a500ab43d22f084a0d98ff1fd8cfc354c8e6ec $call rm -r 834cc2ff77d5cddc70f6462263a500ab43d22f084a0d98ff1fd8cfc354c8e6ec
$call cd 4upload && mv tmp1 834cc2ff77d5cddc70f6462263a500ab43d22f084a0d98ff1fd8cfc354c8e6ec
$onecho > "%jobID%.log"
2/160
$offecho
$call cd 4upload && mkdir tmp2
$if errorlevel 1 $abort problems mkdir tmp2
$call cd 4upload/tmp2 && gams "C:\Users\Robin\Documents\.miro\hcube_jobs\transport\1\transport.gms" --HCUBE_SCALARV_mins=60 --HCUBE_SCALARV_beta=0.97 --HCUBE_SCALARV_f=75 --type="mip" pf="C:\Users\Robin\Documents\.miro\hcube_jobs\transport\1\transport.pf"
$if dexist 867432ac65a22a251ee6fca0ce5eb7ba45195538e389406abe7593f21b8255c4 $call rm -r 867432ac65a22a251ee6fca0ce5eb7ba45195538e389406abe7593f21b8255c4
$call cd 4upload && mv tmp2 867432ac65a22a251ee6fca0ce5eb7ba45195538e389406abe7593f21b8255c4
[...]
We can see the commands for running the first three scenarios of the hcube.json file. A separate (temporary) folder is created for each scenario to be calculated. In the main GAMS call, all arguments contained in the hcube.json file that belong to the scenario are listed as command line parameters.
$call [...] gams "[...]\transport.gms" --HCUBE_SCALARV_mins=20 --HCUBE_SCALARV_beta=0.97 --HCUBE_SCALARV_f=75 --type="mip" [...]
GAMS Scalars usually cannot be set via a GAMS call, but are defined in the model. In order to be able to set scalars via the command line, MIRO adds a special prefix: --HCUBE_SCALARV_<scalar-name>=<scalar-value>, e.g. --HCUBE_SCALARV_f=90 for scalar f and a value of 90. To continue this example: if a GAMS model is called with IDCGDXInput (which is done here via the pf file) and the compile-time variable --HCUBE_SCALARV_f=90, a scalar f is searched for in the list of symbols declared in your GAMS model and, if declared, set to the value of the compile-time variable, i.e. 90. The element text of a singleton set is communicated via the prefix --HCUBE_SCALART_.
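On the model side, no special handling is required for this mechanism; the scalar simply has to be declared. As a minimal sketch, assuming the transport example's scalar f is declared as a MIRO input symbol between the $onExternalInput / $offExternalInput tags:

$onExternalInput
* declared default is 90; a scenario started with --HCUBE_SCALARV_f=75
* overrides this default, so the model is solved with f = 75
Scalar f 'freight in dollars per case per thousand miles' / 90 /;
$offExternalInput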
Note:
The GAMS options and double-dash parameters associated with a scenario are stored in MIRO's internal database. However, the information about them is lost when a scenario is exported as a GDX file. We therefore recommend using singleton sets instead of command line parameters. Both the element label and the element text of a set can be migrated to compile-time variables using the Dollar Control Options eval.Set, evalGlobal.Set and evalLocal.Set.
Besides this scenario-specific data, a GDX file containing all static data is included at the end of the $call as part of a pf file:
$call [...] pf="[...]\transport.pf"
execMode=0
IDCGDXOutput="_miro_gdxout_.gdx"
IDCGenerateGDXInput="_miro_gdxin_.gdx"
IDCGDXInput="..\..\static\_miro_gdxin_.gdx"
trace="_sys_trace_transport.trc"
traceopt=3
The pf file also contains the options for creating a trace file. If you are interested in doing performance analysis using PAVER, you need to make sure trace files are generated for your scenarios.
After the calculations, the results of a scenario (input and output data) are stored in a folder named by its ID.
Note:
As already mentioned, the hcube_submission.gms file included in the archive you downloaded provides a template (in this case using Python) to automatically generate the jobsubmission.gms script shown above. Depending on your infrastructure, you can either extend this file or write a new one from scratch. Irrespective of whether the provided script or a custom method is used, you need to make sure that the results you retrieve are structured according to the import rules below in order to import them back into MIRO.
Import rules
At the end of the calculations, the results should be available in the form of a ZIP archive. For MIRO to accept the archive as a valid set of scenarios, the input and output data for each scenario must be available as GDX files. These must be located in a folder that is named after the hash value of the scenario and placed in the root directory of the ZIP file.
There are some files that must exist in order for MIRO to accept a particular scenario. Other datasets are optional and do not have to be included in the ZIP archive. The mandatory files are:
- _miro_gdxin_.gdx
Scenario input data (GDX file)
- _sys_trace_<modelname>.trc
Trace file of the scenario run. This file contains numerous run statistics which are used for the integrated performance analysis tool PAVER. The Hypercube job file already contains the GAMS options for generating the trace file automatically. In case you specified in your MIRO app configuration that you are not interested in trace information (option saveTraceFile set to false), the trace file must not be included in your scenario data. You can find more information about the trace file here.
These files are generated automatically when you use the hcube_submission.gms file. If scenario data does not contain all mandatory files, this scenario will be ignored and thus not be stored in the database.
If results are available, a GDX file with output data, _miro_gdxout_.gdx, is created in addition to the one with input data. This file contains all output parameters declared in the GAMS model between the $onExternalOutput / $offExternalOutput tags. If a model calculation is aborted, this GDX file is not generated automatically.
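As a rough, purely illustrative sketch, a valid results archive for the transport example could therefore be structured as follows (the scenario hashes are the ones from the hcube.json excerpt above):

results.zip
├── e505eb77f2fed92ecb6c7609a6873974ea87d69b620a7ca9aa8d6a1a62e7159b/
│     _miro_gdxin_.gdx          (mandatory: scenario input data)
│     _miro_gdxout_.gdx         (optional: output data, if the run produced results)
│     _sys_trace_transport.trc  (mandatory unless saveTraceFile is set to false)
├── 834cc2ff77d5cddc70f6462263a500ab43d22f084a0d98ff1fd8cfc354c8e6ec/
│     ...
└── ...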
The following figure summarizes the steps required to move from the downloaded archive created by MIRO to a ZIP archive that you can import back into MIRO:
Import results
Once a Hypercube job has finished execution, it can be imported and stored in MIRO's database via the Import results module. In the bottom right corner of the window, click on Manual import. A dialog pops up that lets you choose the ZIP archive with the Hypercube job results. Note that you probably want to assign job tags here as well. With a click on Upload, the archive is evaluated and, if the contained data is valid, stored in the MIRO database.