current limitations in building energy modeling and simulation

Hello Community,

Again, I am coming to you for input in my efforts to make EnergyPlus run faster.

As Nathaniel Jones suggested, I would like to consider not just the current
bottlenecks in EnergyPlus simulations, which are run from models that
designers generate with the software's limitations and capabilities in mind.
I would also like to know where you would like more detailed, precise or
accurate data from a simulation run, but accept lesser data in the interest
of getting your job finished in a limited amount of time.

To boil it down, I'll ask my question this way: *if you could choose one
area in EnergyPlus to speed up, what would it be? Another way of putting
it: where would you like more detail in your models, but you simplify in
order to reduce simulation execution time?*
Many thanks for your help and Happy New Year --

Geof Sawaya


I'll tell you one. This happens on every energy modeling project. The
architect says "Why are you charging me to compute load calculations,
and you are charging me more to do an energy model? Why can't you get
the loads from the energy model?"

And I always have to say something like "A loads calc model with
sufficient detail to get the room by room loads we need for design is
way too detailed for an energy model. If we use a load calcs model for
energy calcs, the progress would be too slow and the answers would not
be more accurate." And this I know from experience.

It would be nice to enter room by room information and have it spit out
load data for sizing equipment and air flows, but use some different way
of zoning for the energy calcs. Maybe a way to identify rooms with
similar load profiles that would be considered a single room for the
purposes of the energy calculations. Then of course, have a report for
design loads and equipment sizing.

Sounds hard to me, but hey, you asked...

Randall C. Wilkinson, P.E., C.E.M.


Some thoughts, since Randall got me thinking... (and not completely on the topic of execution speed).
Most modeling effort in much of the world currently appears to be spent on new building models. I expect and hope that this will change and that the focus will shift to existing building performance. These, however, are VERY different activities!

For energy models of new or not-yet-built buildings:

1. The most effort is spent on input preparation and output review.

a. Input preparation will be faster for models imported from IFC-compliant BIM tools once that's a common feature for GUIs like DesignBuilder, Simergy and OpenStudio.

b. I suspect that almost all new buildings will use BIM within the next few years, so that part of the problem may disappear.

2. Output review is another matter. I cannot quite imagine how modelers who do not have detailed reporting of individual nodes and energy at each timestep can conduct quality checking. EnergyPlus has this capability; I use it on every project and would (probably) never consider doing otherwise. Nonetheless, I think that the creation of all those variables may consume some processing time. Perhaps that can be improved?

a. A related topic that probably has nothing to do with execution speed is that the detailed output is not very "user friendly". Formatting is primitive, for one thing, and it's only available in text format.

b. There are some 3rd-party graphical review tools, but they seem limited to reviewing a single variable at a time. I am probably going to try using Universal Translator for this purpose shortly. Its main purpose in life is analyzing data logger files, but that's not too different from reviewing an energy model's output. Has anyone got a great, multi-variable solution for output review?
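As a sketch of what multi-variable screening could look like (not a real tool; the column names and plausible-band limits below are invented), one could scan several output columns at once and flag excursions:

```python
# Minimal multi-variable screening sketch: flag timesteps where any
# watched variable leaves a plausible band. A real run would read the
# eplusout.csv that ReadVarsESO produces; here a tiny invented sample
# stands in for it.
import csv, io

sample = """\
Date/Time,Zone Mean Air Temperature [C],Chiller Electric Power [W]
01/01 01:00,21.2,0
01/01 02:00,21.1,0
01/01 03:00,35.7,0
01/01 04:00,21.0,120000
"""

# Plausible bands per variable (assumed values, for illustration only).
limits = {
    "Zone Mean Air Temperature [C]": (10.0, 30.0),
    "Chiller Electric Power [W]": (0.0, 50000.0),
}

def flag_rows(csv_text, limits):
    """Return (timestamp, variable, value) for every out-of-band value."""
    flags = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        for var, (lo, hi) in limits.items():
            val = float(row[var])
            if not lo <= val <= hi:
                flags.append((row["Date/Time"], var, val))
    return flags

for when, var, val in flag_rows(sample, limits):
    print(f"{when}: {var} = {val}")
```

Something along these lines catches gross errors across dozens of variables in one pass, which is most of what early-stage quality checking needs.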

3. I have mentioned in the past on this forum that if you prepare input carefully, and manage run times properly, execution time is not critically important. (e.g., run only Design Days until output looks believable, add complex systems after checking simpler systems, etc.)
I won't ever complain about faster execution time (!), but since Geof has been asked to make it a lot faster, I'm wondering for whom the execution time has been a problem?

For existing buildings that have utility consumption data, the same basic problems present themselves in a very different fashion:

1. Input preparation cannot be imported from BIM. It may need to be hand-created by field investigation because there are no drawings whatsoever.

a. The requirements for precise representation of geometry are not the same. Every tiny reveal or dormer need not be described for the model. Every construction detail and material is not always known. Glass emissivity and filler gas are also not known. This means the geometry and, to some extent, materials can be simplified - perhaps greatly. They normally do not affect the model significantly, especially when compared to all the other variables.

b. All of the schedules, diversity patterns and control settings still need to be part of the input, however, and these are very time-consuming to develop. There is software now available which analyzes utility data and infers a lot of useful information. This is especially powerful if your site has an interval meter (records usage hourly or more frequently). "Infer" is not the same as "know", but it is extremely difficult to get reliable data about building operation, even from people who work at the building! Check out the cool chart of interval data that I've attached and tell me when the people arrive and leave each day :). A Building Management System, properly used, can also help a lot, but too few of them are used to full advantage.

c. In short, input preparation is a big job, but is probably one for a GUI, not the calculation engine.
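To illustrate the inference idea in 1.b, here is a tiny sketch of reading arrival and departure off an hourly interval profile; the kW values and the 25%-of-range threshold are invented assumptions, not measurements:

```python
# Sketch: call an hour "occupied" when its load exceeds the overnight
# baseline by a chosen fraction of the baseline-to-peak range.

def occupied_hours(kw, frac=0.25):
    """Return hour-of-day indices whose load exceeds
    baseline + frac * (peak - baseline)."""
    lo, hi = min(kw), max(kw)
    cutoff = lo + frac * (hi - lo)
    return [h for h, x in enumerate(kw) if x > cutoff]

# Invented weekday profile: flat overnight, ramp at 7:00, drop after 18:00.
day = [40]*7 + [70, 95, 100, 100, 98, 90, 97, 100, 99, 95, 80] + [45]*6
hrs = occupied_hours(day)
print("arrive ~%02d:00, leave ~%02d:00" % (hrs[0], hrs[-1] + 1))
```

Real utility data is noisier than this, of course, which is exactly why "infer" is not the same as "know".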

2. Output review, assuming that you correctly define the energy systems, has a new and different focus. Not only must the model correctly define the actual energy systems, but it must result in the same energy use and energy use pattern as the energy meters.

a. For starters, I am convinced that you must use the actual weather which occurred during the meter billing period. A recent quick comparison suggested that 10-20% of energy use can be affected by using actual vs. TMY weather data. Why live with that uncertainty?

b. I am also convinced that it is well worthwhile to measure key energy consumers so as to eliminate more uncertainty. You'll never know exactly what the infiltration rate is, but it's easy to measure major fan and pump power, for example.

c. That leaves only 100-500 or so remaining variables to tweak in order to get a good match between model-predicted and meter-measured energy! "A good match" includes high correlation (at least) for monthly energy in addition to annual energy. This is not a trivial effort.

d. Assuming that you understand which variables will affect the "match" problem positively, THIS is where execution speed is a VERY BIG DEAL. You may easily run dozens (or hundreds) of full-year simulations before you get a "good match".

e. Because existing building energy models have so many unknowns, it seems to be the case that less calculation rigor is needed. Why spend the time to model a complete energy balance every 15 minutes when you are guessing about the schedule last Spring, the room setpoints, the COP of the 20 year old chiller and the condition of the poorly maintained steam traps? Once again, there are lights on the software horizon (Retroficiency is one, but there are others) which are attempting to use simplified models. These may be all that's needed, but it's early in their development and the "votes" are not all counted.
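To put numbers on "a good match": the NMBE and CV(RMSE) statistics from ASHRAE Guideline 14 are one common yardstick for monthly calibration (roughly, within +/-5% and 15% respectively). A minimal sketch, with twelve invented monthly figures and the degrees-of-freedom adjustment dropped for brevity:

```python
# Goodness-of-fit between metered and modeled monthly energy, in the
# spirit of ASHRAE Guideline 14 (simplified: no n-p adjustment).

def nmbe(measured, modeled):
    """Normalized mean bias error, percent (positive = model under-predicts)."""
    mean = sum(measured) / len(measured)
    return 100 * sum(m - s for m, s in zip(measured, modeled)) / (len(measured) * mean)

def cvrmse(measured, modeled):
    """Coefficient of variation of the RMSE, percent."""
    n = len(measured)
    mean = sum(measured) / n
    rmse = (sum((m - s) ** 2 for m, s in zip(measured, modeled)) / n) ** 0.5
    return 100 * rmse / mean

meter = [52, 48, 45, 40, 38, 55, 62, 60, 46, 41, 44, 50]  # MWh, invented
model = [50, 49, 47, 41, 36, 52, 65, 58, 44, 42, 45, 49]  # MWh, invented
print("NMBE %.1f%%, CV(RMSE) %.1f%%" % (nmbe(meter, model), cvrmse(meter, model)))
```

Recomputing two numbers like these after each tweak is cheap; it's the full-year simulation behind each tweak that makes execution speed the big deal.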

That's all for now. Can you tell that I think the future of energy modeling is in existing buildings? I am certain that some of you are far ahead of me in that area, but I hear very little about it in my corner of the USA. I'd love to hear more!

James V Dirkes II, PE, BEMP, LEED AP



I agree with your sentiment that modeling potential retrofits to existing buildings demands a somewhat different skillset of an analyst than modeling new construction does. This is because, in my opinion, most existing buildings contain operable systems that are in some way 'broken' (i.e., malfunctioning control systems, poorly balanced energy distribution systems, unintended simultaneous heating and cooling, etc.). An analyst/modeler wanting to represent these defects needs a firm grasp of how older analogue equipment and controls (lighting, HVAC, service water) are supposed to operate before one can judge how a system is actually operating. This is not so much of a concern in modeling for new construction, where we assume equipment will be installed, commissioned, etc. to operate much closer to the ideal conditions that quasi-steady-state simulations impose.

Furthermore, for modeling existing 'out of control' systems in buildings (if only with the intent of seeing the ROI for improving them), one also needs a firm grasp of how to 'trick' the particular simulation engine into abstracting the energy consumption effects of the particular defect. For example, in eQUEST, one way to 'model' zero-benefit simultaneous heating and cooling loads is to develop custom schedule-driven process loads to 'hang' from the appropriate fluid loop. While possible, this is harder than it should be and fraught with error. I think our community would benefit if standardized methods for common 'retro-commissioning' measures could be developed and adopted, and that tool vendors could then incorporate interfaces to accomplish those methods.

The good news is that data acquisition costs for supporting these sorts of 'pre-processed' simulation calculations continue to get cheaper and easier. I'd love to hear from others in the forum: for creating accurate models of existing buildings, when a building does not have an EMCS capable of trend configuration, what types of data acquisition hardware are people having success with? For me, the simple stand-alone HOBO U series loggers (temp, humidity, motor status) have provided sufficient short-term data to significantly improve model quality.

To address your question in 2b below, I suggest you take a look at two different (free) data visualization tools that I am aware of:

1) DVIEW (stand-alone executable)

2) Energy Charting and Metrics Tool - ECAM (MS Excel Add-in)

A few comments on each -

DVIEW can produce the types of plots you shared, and much more. It can import a structured CSV file, but is limited to viewing 8760 rows of data (put in as many columns as you want). Because of the 8760-row limitation, it is not useful for sub-hourly simulation results unless those results are resampled to hourly. Having said that, it integrates quite well with eQUEST 'custom' hourly reports, which are quite extensive.

An improved version of ECAM has recently been posted to the web. ECAM is a much more generic and less restrictive data visualization tool, which works best for analyzing hourly or finer data streams. While it can do many things, I've discovered that one of its strengths for simulation 'true-up' is the flexible creation of 'load profiles' from generic data sets. Think of dataset1 as measured, real-world data, and dataset2 as model output data. If the model is driven by an AMY weather file, load profiles for an overlapping period of time (January weekdays, for example) from both streams should visually match. If not, begin the sleuthing process. And, more interestingly, with ECAM the two datasets do not need to be at the same frequency.
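The load-profile comparison can be illustrated with a toy example: average each data stream by hour of day, so streams recorded at different frequencies become directly comparable. All values below are invented:

```python
# Rough illustration of an ECAM-style "load profile": collapse each
# stream to a mean value per hour of day. A 15-minute metered stream
# and an hourly model stream then line up side by side.
from collections import defaultdict

def hourly_profile(readings):
    """readings: iterable of (hour_of_day, value); returns {hour: mean}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, value in readings:
        sums[hour] += value
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

# 15-minute "metered" data vs hourly "model" output for two hours (invented).
metered = [(8, 70), (8, 74), (8, 78), (8, 78),
           (9, 90), (9, 92), (9, 94), (9, 96)]
modeled = [(8, 76), (9, 91)]

meter_prof, model_prof = hourly_profile(metered), hourly_profile(modeled)
for h in sorted(meter_prof):
    print(f"{h:02d}:00  metered {meter_prof[h]:.1f}  modeled {model_prof[h]:.1f}")
```

If the two profiles diverge for a given period (January weekdays, say), that's where the sleuthing starts.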

Let's keep the dialogue moving! I know I am learning.

All the Best,

Chris Balbach, PE, CEM, BEMP, BESA, BEAP, CMVP
