New website with a matrix of computers/models/assessments for benchmarking simulation task timings


I presented a paper at the eSim conference in Hamilton, Ontario
in May 2016 looking at simulation times across a range of computer
types and model types. Because computers, simulation software
and project goals are hardly static, an update via a conference presentation
or journal paper would be out of date before it was presented or published.

So.... I have updated that study and hosted the benchmark tables
and observations on a web page so they can be updated easily. It is based
on fresh versions of ESP-r (V12.7) and EnergyPlus V8.8 in an expanded
test matrix (a minimal timing-harness sketch follows the list) which includes:

a) Nine computer/OS variants (including virtual computers)
b) Two models (a 3 zone, 40 surface lightweight model and a 13 zone,
435 surface high-mass model)
c) Two time-step settings (4 and 20 time-steps per hour)
d) One week, two month, four month and annual assessments
e) Saving performance data hourly and at each time-step
f) Saving lots of performance data (e.g. an annual assessment generating 21GB) or only a subset
g) Un-optimized and optimized versions of the simulation software
h) The impact of different solvers (CTF vs Conduction Finite Difference vs Finite Volume)
i) Resources needed for pre-simulation tasks such as calculating view factors and insolation patterns
j) Post-processing tasks
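
To make the matrix above concrete, here is a minimal sketch (not the scripts behind the
published tables) of how such a timing matrix could be driven: it loops over model,
time-step and assessment-period variants, times each run by wall-clock, and writes the
results to a CSV table. The ./run_simulation command, the model identifiers and the
output file name are placeholders standing in for the real ESP-r or EnergyPlus
invocations and pre-built model variants.

    # Hypothetical timing harness: run_simulation, the model identifiers and
    # timings.csv are placeholders, not names taken from the study itself.
    import csv
    import subprocess
    import time
    from itertools import product

    MODELS = ["3zone_lightweight", "13zone_highmass"]   # placeholder model identifiers
    STEPS_PER_HOUR = [4, 20]                            # time-step settings from the matrix
    PERIODS = ["1week", "2month", "4month", "annual"]   # assessment lengths from the matrix

    def run_case(model, steps, period):
        """Run one simulation case and return elapsed wall-clock seconds."""
        cmd = ["./run_simulation", model, f"--steps={steps}", f"--period={period}"]  # placeholder command
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    with open("timings.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["model", "steps_per_hour", "period", "seconds"])
        for model, steps, period in product(MODELS, STEPS_PER_HOUR, PERIODS):
            writer.writerow([model, steps, period, f"{run_case(model, steps, period):.1f}"])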

It also looks at the impact of different working practices,
such as the order of simulation assessments and data extraction
(e.g. sequential vs parallel assessments), and under what circumstances
assessments become disk-bound.
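
As a rough illustration of the sequential vs parallel comparison (again a sketch under
assumptions, not the workflow used for the published tables), the snippet below times the
same set of assessments run one after another and then four at a time; ./run_simulation
and the model names are placeholders. When each run writes large result files, the
parallel case can become disk-bound and the speed-up falls well short of the number of
workers.

    # Hypothetical sequential-vs-parallel comparison; the command and model
    # names below are placeholders for the real simulator invocation.
    import subprocess
    import time
    from concurrent.futures import ThreadPoolExecutor

    CASES = [["./run_simulation", model]
             for model in ("model_a", "model_b", "model_c", "model_d")]

    def run(cmd):
        subprocess.run(cmd, check=True)

    t0 = time.perf_counter()
    for cmd in CASES:                                # one assessment at a time
        run(cmd)
    sequential = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:  # four assessments at once
        list(pool.map(run, CASES))
    parallel = time.perf_counter() - t0

    print(f"sequential: {sequential:.1f}s   parallel: {parallel:.1f}s")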

Check it out at: <>

I will also be updating the simulation tool comparison website
that Dru Crawley and I have written (now past 400 visits!)
<> with this
benchmarking information in the near future.

Your comments and observations are appreciated!

Regards, Jon Hand, Glasgow Scotland. jon at esru.strath.ac.uk
