Mosel parallel and distributed computing examples
Most of the examples in this directory are explained in the whitepaper
'Multiple models and parallel solving with Mosel'.

Parallel computing with mmjobs

Working with multiple models: submodels, coordination, communication, and parallelization (keywords: parallel computing, distributed computing, distributed architecture, in-memory data exchange, shmem, job queue, events, detach submodel, clone submodel)

Type: Programming
Rating: 3 (intermediate)
Description:
The Mosel module mmjobs enables the user to
work with several models concurrently. We show here
a series of examples of basic tasks that typically need
to be performed when working with several models in Mosel:
Parallel computing:
- Running a submodel from another Mosel model: runtestsub.mos (main model
executing testsub.mos)
- Retrieving the termination status of a submodel (a means of coordinating different models):
runsubevnt.mos (main model executing testsub.mos)
- Retrieving user event sent by the submodel:
runsubevnt2.mos (main model executing testsubev.mos)
- Stopping a submodel: runsubwait.mos (main model
executing testsub.mos)
- Compiling to memory: runsubmem.mos (main model
executing testsub.mos)
- Setting runtime parameters: runrtparam.mos (main model
executing rtparams.mos)
- Sequential execution of submodels: runrtparseq.mos
(main model executing rtparams.mos)
- Parallel execution of submodels: runrtparprl.mos
(main model executing rtparams.mos)
- Parallel execution with cloning of submodels: runrtparclone.mos
(main model executing rtparams.mos)
- Job queue for parallel execution of submodels: runrtparqueue.mos
(main model executing rtparams.mos)
- Using the shmem (shared memory) I/O driver for data exchange (bin format): runsubshm.mos
(main model executing testsubshm.mos)
- Using the shmem (shared memory) I/O driver for data exchange (raw format): runsubshmr.mos
(main model executing testsubshmr.mos)
- Using the mempipe (memory pipe) I/O driver for data exchange:
runsubpip.mos (main model executing testsubpip.mos)
- Sharing data between cloned models:
runsubclone.mos (main model executing a copy of itself)
Distributed computing:
- Check for available remote Mosel servers: findservers.mos
- Run a single model on a remote machine: runrtdistr.mos (main model
executing rtparams.mos)
- Run a single model on a remote machine with
configuration options: runrtdistrconf.mos (main model
executing rtparams.mos)
- Running parallel submodels in a distributed architecture: runrtpardistr.mos (main model
executing rtparams3.mos)
- Queuing submodels for parallel execution in a distributed
architecture with one or several models per node: runrtparqueued.mos (main model
executing rtparams3.mos)
- 3-level tree of (parallel) submodels: runrtpartree.mos (main model
executing rtparams2.mos)
- Running a submodel that detaches itself from its parent: runrtdetach.mos (main model
executing rtparams4.mos)
- Using the shmem (shared memory) I/O driver for data exchange (bin format): runsubshmdistr.mos
(main model executing testsubshm.mos)
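The basic pattern shared by these examples, compiling a submodel, starting it, and waiting for its termination event, can be outlined as follows (a minimal sketch using the mmjobs API; error handling is reduced to the compile check):

```mosel
model "Run submodel (sketch)"
 uses "mmjobs"

 declarations
  subMod: Model              ! Handle for the submodel
  ev: Event                  ! Event sent by mmjobs on termination
 end-declarations

 ! Compile the submodel source into a BIM file, then load and start it
 if compile("testsub.mos") = 0 then
  load(subMod, "testsub.bim")
  run(subMod)                ! Submodel execution is asynchronous
  wait                       ! Block until an event arrives
  ev := getnextevent
  if getclass(ev) = EVENT_END then
   writeln("Submodel finished, exit code: ", getexitcode(subMod))
  end-if
 end-if
end-model
```

The examples listed above vary this pattern: compiling to memory, passing runtime parameters to `run`, sending user events back from the submodel, or stopping the submodel before completion.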
File(s): runtestsub.mos, testsub.mos (submodel), runsubevnt.mos, runsubevnt2.mos, testsubev.mos (submodel), runsubwait.mos, runsubmem.mos, runrtparam.mos, runrtparam2.mos, rtparams.mos, runrtparseq.mos, runrtparprl.mos, runrtparclone.mos, runrtparqueue.mos, runsubshm.mos, testsubshm.mos (submodel), runsubshmr.mos, testsubshmr.mos (submodel), runsubpip.mos, testsubpip.mos (submodel), runsubclone.mos

Column generation for a cutting stock problem: solving different models in sequence (keywords: submodel execution, in-memory data exchange)

Type: Cutting stock
Rating: 4 (medium-difficult)
Description:
The first version shows how to implement the column generation algorithm with
two separate models (main model paperp.mos with submodel knapsack.mos, using the
'bin' and 'shmem' I/O drivers; main model paperpr.mos with submodel knapsackr.mos,
using the 'raw' and 'shmem' I/O drivers), illustrating the following features
of Mosel:
- working with multiple models
- executing a sequence of models
- passing data via shared memory
The second version (papersn.mos) works with multiple problems within a single model,
recreating a new problem for each subproblem solving iteration.
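The master/submodel data exchange through shared memory can be sketched as below (master side only; the array names, sizes, and shared-memory labels are illustrative, not those of the example files):

```mosel
model "Column generation master (sketch)"
 uses "mmjobs"

 declarations
  knapMod: Model
  Duals: array(1..10) of real      ! Dual values from the master LP (illustrative size)
  ColSol: array(1..10) of real     ! Column proposed by the knapsack submodel
 end-declarations

 if compile("knapsack.mos") = 0 then
  load(knapMod, "knapsack.bim")

  ! Pass the current duals to the submodel through shared memory ('bin' format)
  initializations to "bin:shmem:duals"
   Duals
  end-initializations

  run(knapMod)
  wait                             ! Wait for the knapsack solve to finish
  dropnextevent

  ! Retrieve the generated column from shared memory
  initializations from "bin:shmem:col"
   ColSol
  end-initializations
 end-if
end-model
```

In the full algorithm this exchange sits inside a loop that re-solves the master LP and stops when the knapsack submodel no longer finds a column with negative reduced cost.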
File(s): paperp.mos, knapsack.mos (submodel), paperpr.mos, knapsackr.mos (submodel), papersn.mos

ELS - Solving several model instances in parallel (keywords: parallel execution of submodels, stopping submodels, branch-and-cut, managed cuts, stopping criteria, callbacks)

Type: Lot sizing
Rating: 5 (difficult)
Description:
In its basic version (els.mos) this model has the following
features:
- adding new constraints and resolving the LP-problem (cut-and-branch)
- basis in- and output
- if statement
- repeat-until statement
- procedure
The second model version (elsc.mos) implements a configurable cutting plane algorithm:
- defining the OPTNODE node callback function,
- defining and adding cuts during the MIP search (branch-and-cut), and
- using run-time parameters to configure the solution algorithm.
User cuts can also be added in the form of managed cuts via the CUTROUND callback function (model version elsmanagedcuts.mos), whereby the Optimizer automatically manages when to load or remove such cuts from node problems.
A model version with callbacks implementing user-defined stopping criteria and extended logging is also presented:
- defining the INTSOL, PRENODE, GAPNOTIFY, and CHECKTIME callbacks (elscb.mos)
- same with a deterministic worklimit/timer (elscbdet.mos)
- SVG graph drawing of the progress of the optimization run (elscb_graph.mos)
A third implementation (main model: runels.mos or runelst.mos, submodel:
elsp.mos; version with SVG graph drawing: runels_graph.mos with elspg.mos) parallelizes the execution of several model
instances, showing the following features:
- parallel execution of submodels
- communication between different models (for bound updates on the objective function)
- sending and receiving events
- stopping submodels
- working with timers (runelst.mos, runels_graph.mos)
The fourth implementation (main model: runelsd.mos, submodel: elsd.mos)
is an extension of the parallel version in which the submodel solves are
distributed to various computing nodes.
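The coordination loop of the parallel version can be sketched as follows (the runtime parameter name, data file names, and the user event class value are illustrative, not necessarily those used by elsp.mos):

```mosel
model "Parallel ELS runs (sketch)"
 uses "mmjobs"

 declarations
  NUMPAR = 2                        ! Number of parallel submodel runs (illustrative)
  RM = 1..NUMPAR
  subMod: array(RM) of Model
  NEWBOUND = 2                      ! User event class for bound updates (illustrative)
  ev: Event
 end-declarations

 if compile("elsp.mos") = 0 then
  forall(i in RM) do
   load(subMod(i), "elsp.bim")
   run(subMod(i), "DATAFILE=els" + i + ".dat")   ! Pass a runtime parameter
  end-do

  repeat
   wait                             ! Wait for the next event from any submodel
   ev := getnextevent
   if getclass(ev) = NEWBOUND then
    writeln("Bound update ", getvalue(ev), " from model ", getfromid(ev))
   end-if
  until getclass(ev) = EVENT_END    ! First submodel to terminate ends the race

  forall(i in RM) stop(subMod(i))   ! Stop any submodels still running
 end-if
end-model
```

This shows the three mechanisms the entry lists: parallel execution, event-based communication of bound updates, and stopping submodels once a termination event is received.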
File(s): runels.mos, runels_graph.mos, runelst.mos, elsp.mos (submodel), elspg.mos (submodel), elsc.mos, elsmanagedcuts.mos, elscb.mos, elscbdet.mos, elscb_graph.mos, elsglobal.mos
Data file(s): els.dat, els4.dat, els5.dat

Dantzig-Wolfe decomposition: combining sequential and parallel solving (keywords: concurrent subproblem solving, coordination via events, in-memory data exchange, mempipe, mempipe notifications, remote mempipe)

Type: Production planning
Rating: 5 (difficult)
Description:
Dantzig-Wolfe decomposition is a method for solving large
LP problems. The model implementation shows the following
features:
- iterative sequence of concurrent solving of a set of
subproblems
- data exchange between several models via shared memory and memory pipe
- coordination of several models via events
Two different implementations are proposed. The first runs on a single node,
executing multiple concurrent solves (standard version: cocoMs.mos and
cocoSubFs.mos; or, using a mempipe notification in place of an event sent by
the submodels: cocoMn.mos and cocoSubFn.mos). The second executes on multiple
nodes (file-based communication: cocoMd.mos and cocoSubFd.mos; or using shared
memory and a memory pipe remotely: cocoMdp.mos and cocoSubFdp.mos). Extending
the single-node model to support a distributed architecture requires only a few
new lines of code to set up the list of nodes and define which node solves
which subproblem.
File(s): coco3.mos, cocoMs.mos, cocoSubFs.mos (submodel), cocoMn.mos, cocoSubFn.mos (submodel)
Data file(s): coco2.dat, coco3.dat

Benders decomposition: sequential solving of several different submodels (keywords: multiple concurrent submodels, events, in-memory data exchange)

Type: Programming
Rating: 5 (difficult)
Description:
Benders decomposition is a method for solving large
MIP problems. The model implementation shows the following
features:
- iterative sequence of concurrent solving of a set of subproblems,
- data exchange between several models via shared memory, and
- coordination of several models via events.
An implementation using a single model is also presented (benders_single.mos).
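For orientation, the scheme implemented here follows the standard textbook form of Benders decomposition for a MIP (generic notation, not the exact notation of the example models): the master fixes the integer variables, the subproblem prices the continuous part, and its dual solution yields a new cut for the master.

```latex
\begin{align*}
&\text{Full problem:} && \min_{x \ge 0,\; y \in \mathbb{Z}^n_+} \; c^T x + d^T y
   \quad \text{s.t.} \quad A x + B y \ge b \\
&\text{Master:}       && \min_{y \in \mathbb{Z}^n_+,\; z} \; d^T y + z
   \quad \text{s.t.} \quad z \ge u_k^T (b - B y) \quad \forall k \\
&\text{Subproblem:}   && \max_{u \ge 0} \; u^T (b - B \bar{y})
   \quad \text{s.t.} \quad A^T u \le c
\end{align*}
```

At each iteration the subproblem is solved for the current master solution \(\bar{y}\); its optimal dual point \(u_k\) (or an unbounded ray) generates the next cut, and the loop stops when master and subproblem bounds meet.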
File(s): benders_main.mos, benders_dual.mos (submodel), benders_cont.mos (submodel), benders_int.mos (submodel), benders_single.mos
Data file(s): bprob12.dat, bprob33.dat

Jobshop scheduling - Generating start solutions via parallel computation (keywords: sequential and parallel submodel execution, loading partial MIP start solutions, model cloning, shared data)

Type: Jobshop scheduling
Rating: 5 (difficult)
Description:
The job shop problem is solved by sequentially solving single-machine sequencing
problems and loading the resulting schedule as an initial integer solution
(jobshopas.mos). A parallel version of the algorithm is also presented: the
single-machine sequencing models are solved concurrently, exchanging data
in memory with the parent model.
- Implementation as a single model (jobshopasc.mos) that clones itself to generate the submodels in order to work with shared
data structures between the parent and its submodels.
- Implementation as a main model (jobshopasp.mos) starting several submodels (jobseq.mos) in parallel. Data exchange via shmem (shared memory blocks from/to which parent model and submodels copy their data).
File(s): jobshopas.mos, jobshopasc.mos, jobshopasp.mos, jobseq.mos (submodel)
Data file(s): mt06.dat, mt10.dat

Outer approximation for quadratic facility location: solving different problem types in sequence (keywords: iterative sequential submodel solving, nonlinear subproblem, MIQP formulation)

Type: Quadratic facility location
Rating: 4 (medium-difficult)
Description:
A quadratic facility location problem is formulated and solved as an MIQP problem (quadfacloc.mos) and via an outer approximation
algorithm that iterates over MIP and QP subproblems (quadfaclocoa.mos).
File(s): quadfacloc.mos, quadfaclocoa.mos

Distributed computing with mmjobs

Working with multiple models: remote connection, coordination, communication, and parallelization (keywords: distributed computing, distributed architecture, job queue, events, detach submodel)

Type: Programming
Rating: 3 (intermediate)
Description:
The Mosel module mmjobs enables the user to
work with several models concurrently. We show here
a series of examples of basic tasks that typically need
to be performed when working with remote models in Mosel:
- Check for available remote Mosel servers: findservers.mos
- Run a single model on a remote machine: runrtdistr.mos (main model
executing rtparams.mos)
- Run a single model on a remote machine with
configuration options: runrtdistrconf.mos (main model
executing rtparams.mos)
- Running parallel submodels in a distributed architecture: runrtpardistr.mos (main model
executing rtparams3.mos)
- Queuing submodels for parallel execution in a distributed
architecture with one or several models per node: runrtparqueued.mos (main model
executing rtparams3.mos)
- 3-level tree of (parallel) submodels: runrtpartree.mos (main model
executing rtparams2.mos)
- Running a submodel that detaches itself from its parent: runrtdetach.mos (main model
executing rtparams4.mos)
- Using the shmem (shared memory) I/O driver for data exchange (bin format): runsubshmdistr.mos
(main model executing testsubshm.mos)
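The remote-execution pattern underlying these examples can be sketched as follows (the host name is a placeholder; the 'rmt:' prefix lets the remote instance read the BIM file from the calling node):

```mosel
model "Remote run (sketch)"
 uses "mmjobs"

 declarations
  inst: Mosel                      ! Handle for a remote Mosel instance
  subMod: Model
 end-declarations

 ! Connect to a remote machine (replace "hostname" by a reachable Mosel server)
 if connect(inst, "hostname") = 0 then
  if compile("rtparams.mos") = 0 then
   load(inst, subMod, "rmt:rtparams.bim")   ! Load the submodel on the remote node
   run(subMod)
   wait                                     ! Wait for remote termination
   dropnextevent
  end-if
  disconnect(inst)
 end-if
end-model
```

Apart from the `connect`/`disconnect` calls and the instance argument to `load`, the code is the same as for local submodels, which is what makes extending the single-node examples to a distributed architecture inexpensive.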
File(s): findservers.mos, runrtdistr.mos, rtparams.mos, runrtdistrconf.mos, runrtpardistr.mos, rtparams3.mos, runrtparqueued.mos, runrtpartree.mos, rtparams2.mos, runrtdetach.mos, rtparams4.mos (submodel), runsubshmdistr.mos, testsubshm.mos (submodel)

Dantzig-Wolfe decomposition: combining sequential and parallel solving (keywords: concurrent subproblem solving, coordination via events, in-memory data exchange, memory pipe)

Type: Production planning
Rating: 5 (difficult)
Description:
Dantzig-Wolfe decomposition is a method for solving large
LP problems. The model implementation shows the following
features:
- iterative sequence of concurrent solving of a set of
subproblems
- data exchange between several models executing remotely, either file-based (cocoMd.mos and cocoSubFd.mos) or using shared memory and a memory pipe remotely (cocoMdp.mos and cocoSubFdp.mos)
- coordination of several models via events
File(s): cocoMd.mos, cocoSubFd.mos (submodel), cocoMdp.mos, cocoSubFdp.mos (submodel)
Data file(s): coco2.dat, coco3.dat

ELS - Solving several model instances in parallel (keywords: parallel execution of submodels, stopping submodels)

Type: Lot sizing
Rating: 5 (difficult)
Description:
This implementation (main model: runelsd.mos, submodel: elsd.mos) extends
the parallel version by distributing the submodel solves to various
computing nodes, showing the following features:
- parallel remote execution of submodels
- communication between different models (for bound updates on the objective function)
- sending and receiving events
- stopping submodels
File(s): runelsd.mos, elsd.mos (submodel)
Data file(s): els.dat, els4.dat, els5.dat

Solving the TSP problem by a series of optimization subproblems (keywords: solution heuristic, parallel submodel execution)

Type: Traveling Salesman Problem
Rating: 5 (difficult)
Description:
This model (tspmain.mos) generates a random TSP instance and divides
the area into a number of predefined squares. The instance is solved by
a two-step algorithm. During the first step, the optimal TSP tour
of each square is calculated by concurrently executing multiple
submodels (tspsub.mos). During the second step, the main model takes
two neighbouring areas, unfixes the variables closest to the common
border, and reoptimizes the resulting subproblem.
Each submodel instance is sent an optimization problem (set of nodes,
coordinates, and possibly previous results). The results are passed
back via a file located in the same place as the parent model, so
no write access to remote instances is required.
Once the result has been displayed, the submodel is restarted
for a new optimization run if any runs remain.
The main model and the submodels may run on any platform.
File(s): tspsub.mos (submodel), tspmain.mos

Distributed computing with XPRD

Basic tasks: remote connection, coordination, communication, and parallelization (keywords: distributed computing, concurrent submodel execution, job queue, events)


ELS - Solving several model instances in parallel (keywords: parallel execution of submodels, stopping submodels)

Type: Lot sizing
Rating: 5 (difficult)
Description:
This implementation (program: runelsd.* starting submodel: elsd.mos) extends
the parallel version of the ELS model by distributing the submodel solves
to various computing nodes, showing the following features:
- parallel remote execution of submodels
- communication between different models (for bound updates on the objective function)
- sending and receiving events
- stopping submodels
File(s): runelsd.c, runelsd.java, elsd.mos (submodel), readelsdem.mos (submodel)
Data file(s): els.dat

Mandelbrot - Java GUI for distributed computation with Mosel (keywords: parallel execution of submodels, Java solution graph)

Type: Programming
Rating: 5 (difficult)
Description:
Calculation of the Mandelbrot function, subdividing the space into squares of
points to be solved by the submodel instances. Graphical representation of
the function values using Java graphing functionality.
File(s): mandelbrot.java, mandelbrotsub.mos (submodel)

Folio - remote execution of optimization models (keywords: submodel execution, events, bindrv binary format, in-memory data I/O)

Type: Portfolio optimization
Rating: 4 (medium-difficult)
Description:
Various XPRD program versions running the memory I/O version of the 'Folio' portfolio optimization example from the 'Getting
Started' guide.
- runfoliodistr.[c|java] (requires: foliomemio.mos, folio10.dat):
run an optimization model and retrieve detailed solution info,
defining a file manager for data exchange in memory
(XPRD version of runfoliodistr.mos)
- distfolio.[c|java] (requires: foliomemio.mos, folio250.dat):
run an optimization model and retrieve detailed solution info,
reading binary solution files
- distfoliopar.[c|java] (requires: foliomemio.mos, folio250.dat):
run several optimization models on different remote Mosel instances
and retrieve detailed solution info, reading binary solution files
(XPRD version of runfoliopardistr.mos)
- distfoliocbioev.[c|java] (requires: foliocbioev.mos, folio250.dat):
retrieve solution info during optimization model run,
coordination via events
File(s): runfoliodistr.c, runfoliodistr.java, foliomemio.mos, distfolio.c, distfolio.java, distfoliopar.c, distfoliopar.java, distfoliocbioev.c, distfoliocbioev.java, foliocbioev.mos (submodel)
Data file(s): folio250.dat, folio10.dat