This talk presents two parallel and distributed techniques that significantly increase the size of the models that can be analysed using Markov modelling. The techniques address both major phases of the analysis, i.e. construction of the Markov chain from a high-level model (state-space generation) and solution of the Markov chain to determine its equilibrium distribution (steady-state solution). The methods attack both the space and time requirements of Markov modelling. Space requirements are reduced through the use of probabilistic and disk-based storage schemes. Time requirements are reduced by exploiting the compute power provided by a distributed-memory parallel computer or a network of workstations. Neither method places any restrictions on the type of model that can be analysed.
Both techniques have been implemented in C++ on a 16-node Fujitsu AP3000 distributed-memory parallel computer, together with the other tools required to build a complete parallel performance analysis pipeline. Results show that the methods are capable of analysing very large models while delivering good speedups and scaling well.