Comparing ASHRAE 90.1 App G Models to Real Buildings


Bldg-Sim Community -

I'm trying to compile a list of reasons why it might be inappropriate to compare Appendix G models to actual consumption data. This comes about because I recently got into a debate with one of my co-workers while looking at the infamous NBI chart/study that shows little correlation between predicted and actual energy use in LEED buildings. I was trying to explain that the Appendix G model's intent is NOT to be compared to actual consumption, as it is a modeling protocol aimed at creating consistent relative comparisons for LEED points.

Here are the reasons thus far that support this notion (that App G models shouldn't be compared to actual data). Does anyone know of any resources out there that expand upon this? Or can you think of other reasons?

- Appendix G does not take into account external shading, which can be critical in urban environments for accurate energy predictions

- Schedules are typically not created with the intent of being predictive. Overall building hours are adhered to, but detailed schedule creation is not usually in the scope of a LEED model (or is it, in your experience?). For instance, a typical plug-load base value during unoccupied hours is 0.3 (a fraction of the peak load), which is a pretty big assumption.

- The App G model uses a TMY weather file, which can vary from the current weather year (I wonder on average by how much?)

- Infiltration values are assumed, unless blower door testing has been done (which is rare for commercial buildings).

- Thermostat values are modeled as consistent across the building, which is rarely the case in an actual operating building
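To put a rough number on the TMY point above, a degree-day comparison is one quick way to see how much a weather-year shift matters. This is a toy Python sketch with made-up constant temperature series (a real check would read hourly dry-bulb from EPW files):

```python
# Toy sketch: compare annual heating degree-days (base 65 F) for a
# "TMY-like" year versus a hypothetical actual year that runs 2 F colder.
# The constant-temperature series are illustrative stand-ins, not EPW data.
def heating_degree_days(hourly_temps_f, base=65.0):
    """Annual HDD from 8760 hourly dry-bulb temperatures (deg F)."""
    return sum(max(base - t, 0.0) for t in hourly_temps_f) / 24.0

tmy_year = [50.0] * 8760      # stand-in TMY dry-bulb series
actual_year = [48.0] * 8760   # stand-in colder actual year

hdd_tmy = heating_degree_days(tmy_year)
hdd_actual = heating_degree_days(actual_year)
pct_diff = 100.0 * (hdd_actual - hdd_tmy) / hdd_tmy  # ~13% more HDD
```

Even a small shift in mean temperature moves the heating driver by double digits, which is why the TMY-versus-actual-year question matters.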

Any additional insight is much appreciated!

Jacob Dunn LEED AP BD+C

ESKEW+DUMEZ+RIPPLE, APC
2014 AIA National Architecture Firm Award

365 Canal Street Suite 3150
New Orleans LA 70130
504.561.8686
eskewdumezripple.com


Jacob,

Happy to hear that other people are thinking the same.

I was thinking about this subject myself recently.

I would add to your list: occupants' behavior, which cannot realistically be included in a design-stage prediction. I would also add commissioning, along with the gap between how controls are simulated and how they actually operate.

Apologies, I haven't worked on LEED projects, but I think the above applies in general to "the performance gap".

Really looking forward to hearing more about this subject; I am not a very experienced engineer yet, but I'm really interested in the "performance" side of buildings.

Best,
Maria-Lida Kounadi



Climate files used in the simulations versus the actual weather.

I'm in the midst of a project that shows a variance in EUI of up to 200% just by changing the climate file for the DOE primary school prototype. Full results to be presented in September.

Add in all the other factors already mentioned, and if your 90.1 model comes anywhere close to the real building, it's far more likely that the 90.1 model was extremely "lucky" than that the model used accurate assumptions.

Regards,

Nathan


Nathan Kegel
Business Development Manager

O: 763.276.9981 | M: 415.420.9314

http://www.iesve.com



Dear Colleagues,
A reminder that Thermal Energy System Specialists, LLC will be hosting a three-day TRNSYS Training Course in Madison, WI on June 2-4, 2015. TRNSYS is a highly flexible modeling tool that has been under continual development since the 1970s and is ideally suited to research and to simulating complex and innovative energy systems.

The course will cover both introductory and advanced topics including HVAC system simulation, whole-building simulation, and stand-alone system development. The process of writing and adding new components to the TRNSYS library will be demonstrated. Given TRNSYS's flexibility, we can usually cover topics of specific interest to course attendees.

Please visit the website for an outline, registration form and more
information:

http://tess-inc.com/trnsys/schedule-2015-summer

For more information about the energy and building simulation software
package, TRNSYS, please visit either http://www.trnsys.com or
http://sel.me.wisc.edu/trnsys.

If you have any questions or concerns, please do not hesitate to contact
me.

Best regards,
David Bradley

--
***************************
David BRADLEY
Principal
Thermal Energy Systems Specialists, LLC
22 North Carroll Street - suite 370
Madison, WI 53703 USA

P:+1.608.274.2577
F:+1.608.278.1475
d.bradley at tess-inc.com

http://www.tess-inc.com
http://www.trnsys.com


The impact of weather depends on how climate-responsive the building is. I recently finished a paper that showed ±5% variation from year-to-year weather in some building types (mostly constant-volume systems) and more than 20% in others. Location and humidity also play a role. One building type that is extremely energy-intensive (restaurant) also had high variation due to weather, driven by outdoor-air preheating/cooling.

Images from extremely hot and extremely cold climates: identical model, actual weather from 1980-2014. 1980 on right, TMY-type files on left.


Dru Crawley (Dru.Crawley at bentley.com)

I agree fully with all of the above comments and would like to add these:

- Even buildings that are commissioned properly will see their performance erode over time. There are hundreds of reasons for this, ranging from poor maintenance to well-intentioned maintenance people not having time to monitor operations well. There is NO BUILDING that operates well for long.
- Buildings often see changes in operation, occupancy and schedule. These are often gradual changes over a period of years, but can be substantial.

James V Dirkes II, PE

I believe Appendix G is not meant to measure the performance of an existing building. The ASHRAE 90.1 Appendix G technical committee is the right source to answer this question; they can clarify the intent of the App G modeling methodology.

Best,
Alamelu
Alamelu Brooks LEED AP (BD+C), HBDP, BEAP, EIT| Senior Associate | +1.443.718.4881 direct | Alamelu.Brooks at icfi.com | icfi.com
ICF INTERNATIONAL | 7125 Thomas Edison Drive, Suite 100, Columbia, MD 21046 USA
Connect with us on social media.


The text of Appendix G states this explicitly in Section G1.2, Note 2:

"Neither the proposed building performance nor the baseline building performance are predictions of actual energy consumption or costs for the proposed design after construction. Actual experience will differ from these calculations due to variations such as occupancy, building operation and maintenance, weather, energy use not covered by this procedure, changes in energy rates between design of the building and occupancy, and the precision of the calculation tool."

Eric Ringold

Dear Jacob,

This is an eternal debate, and there are many reasons for moving away from the use of 90.1 Appendix G to evaluate the performance of building designs. To answer your question directly: a few years ago we worked with Enermodal in Canada on a comparison of design-phase building energy models (BEMs) prepared for LEED Canada certification (slightly different from Appendix G) against calibrated BEMs and measured energy use for 18 buildings. The main findings are quoted below:

Analysis of a Simplified Calibration Procedure for 18 Design-Phase Building Energy Models
H W Samuelson, A Ghorayshi and C F Reinhart
Journal of Building Performance Simulation, DOI: http://dx.doi.org/10.1080/19401493.2014.988752
This paper evaluates the accuracy of 18 design-phase building energy models, built according to LEED Canada protocol, and investigates the effectiveness of model calibration steps to improve simulation predictions with respect to measured energy data. These calibration steps, applied in professional practice, included inputting actual weather data, adding unregulated loads, revising plug loads (often with submetered data), and other simple updates. In sum, the design-phase energy models underpredicted the total measured energy consumption by 36%. Following the calibration steps, this error was reduced to a net 7% underprediction. For the monthly energy use intensity (EUI), the coefficient of variation of the root mean square error improved from 45% to 24%. Revising plug loads made the largest impact in these cases. This step increased the EUI by 15% median (32% mean) in the models. This impact far exceeded that of calibrating the weather data, even in a sensitivity test using extreme weather years.
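For reference, the two calibration statistics quoted in the abstract can be computed as below. This is an illustrative Python sketch with made-up monthly EUI series, not the paper's data; NMBE (normalized mean bias error) captures net over/under-prediction, and CV(RMSE) captures month-to-month scatter:

```python
import math

def nmbe(measured, modeled):
    """Normalized mean bias error, %. Negative = model underpredicts."""
    mean_meas = sum(measured) / len(measured)
    bias = sum(p - m for p, m in zip(modeled, measured))  # predicted - measured
    return 100.0 * bias / (len(measured) * mean_meas)

def cv_rmse(measured, modeled):
    """Coefficient of variation of the RMSE, %."""
    mean_meas = sum(measured) / len(measured)
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(modeled, measured)) / len(measured))
    return 100.0 * rmse / mean_meas

# Illustrative monthly EUI, kBtu/sf-month (NOT Samuelson et al.'s data):
measured = [10, 9, 8, 7, 6, 5, 5, 6, 7, 8, 9, 10]
modeled  = [7, 7, 6, 5, 4, 4, 4, 4, 5, 6, 6, 7]   # underpredicting design model

bias_pct = nmbe(measured, modeled)      # about -28%: net underprediction
scatter_pct = cv_rmse(measured, modeled)  # about 29%: monthly scatter
```

Calibration steps like those in the paper drive both of these numbers down.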

Best,

Christoph
Christoph Reinhart
Associate Professor
Department of Architecture
Massachusetts Institute of Technology
77 Massachusetts Ave, Rm 5-418, Cambridge, MA 02139, USA
t: 617 253 7714, f: 617 253 6152, creinhart at mit.edu
Sustainable Design Lab | DIVA | Daysim | mapdwell | umi
-------------------------------------------------------------------------
Events Modeling Urban Sustainability | DIVA Day 2015


A couple of points to add to the list:

1. Thermal bridging: I only know of one firm that attempts to include the effects of all thermal bridging in the building envelope: slab edges/balconies, window connections, curtain wall transitions, shelf angles, etc.

2. Equipment performance curves: it is difficult and time-consuming to chase down the manufacturer data needed to create performance curves.

Christopher Jones, P.Eng.
Tel: 416.644.4226, Toll Free: 1.888.425.7255 x 527

Chris Jones


There is a section in Appendix G (G1.2) addressing this:

"Neither the proposed building performance nor the baseline building performance are predictions of actual energy consumption or costs for the proposed design after construction. Actual experience will differ from these calculations due to variations such as occupancy, building operation and maintenance, weather, energy use not covered by this procedure, changes in energy rates between design of the building and occupancy, and the precision of the calculation tool."

You may also want to check out articles on building labeling systems that discuss differences between asset (theoretical, modeling-based) and operational (based on actual utility bills) ratings, such as the one attached (see p.4-6). Both have value as long as they are interpreted correctly.

Maria

--

Maria Karpman LEED AP, BEMP, CEM

________________

Karpman Consulting

www.karpmanconsulting.net

Phone 860.430.1909

41C New London Turnpike

Glastonbury, CT 06033


Jacob et al:

Some more points to the mix:

1) Increased infiltration/exfiltration due to seasonal changes in wind patterns, which mostly affect high-rises, is not modeled very often. The changing nature of wind (more or less cold or hot wind coming from a given direction) will affect cooling and heating loads.

2) Not every software can model the thermal mass of concrete or brick.

3) Microclimatic conditions (the urban overshadowing Jacob was mentioning) have to do also with air movement (urban canyon effect), and are not modeled often.

4) Buildings with occupancy/vacancy sensors are usually not modeled with accuracy. App G just gives a 10% LPD reduction for using them (in theory you could model them if you knew the exact patterns of occupancy before construction, which never happens).

5) Daylight harvesting and electrochromic glass are too dynamic to be modeled accurately (unpredictable occupancy again).

However, I do agree with Prof. Reinhart about calibrating design models with as much real time data as possible. That way we could recreate other scenarios (climate change, systems decay, the impact of a change of occupancy, etc.).

Alfonso E. Hernandez, LEED AP
Kirksey | Architecture



Energy models are nearly 100% accurate, as long as you can accurately predict the 3 main variables: human behavior, the weather, and energy prices (a.k.a. the stock market & global economy).

p.s. If you can accurately predict any of those three, how in the heck did you end up making energy models for a living?


Jeurek

Lots of strong points raised so far. I think this "conversation/debate" is one we must all have faced at some point.

I don't have much else to add to what's already been suggested, except to note that many of the critical assumptions of occupant behavior / plug load density / system operations have an additional layer of obfuscation that shouldn't be missed for new-construction calibration: NC buildings' actual energy bills immediately after occupancy are usually NOT representative of "typical" usage (upon which Appendix G models are necessarily/justifiably based). The occupants are settling in, learning to operate, and calibrating/messing with/breaking the building's systems. System commissioning is working the bugs out of actual system operations to match (or improve upon) the original design. I've also seen it take some years to wrestle with the local utility when the wrong rates are applied to the building. Whenever I am tasked with calibrating to a newly occupied building (LEED/App. G or otherwise), it's a high priority to determine to what extent the available utility metering/bills may be skewed by such "growing pains."

With all this said, however, all these "shortcomings" of mis-applying 90.1 Appendix G models to the task of "predicting" actual energy costs help to frame what they are actually very GOOD for. I think it might behoove you to at least outline that purpose in contrast:

- [*insert G1.2 here*]

- Correctly developed 90.1 App. G models make very decent "platforms" from which to develop a calibrated model down the road.

o They have the advantage of a structured set of assumptions, which can be transformed into a checklist for surveying actual operations.

o For developing a calibrated model "from scratch," much of the first stage of development is ready-made with a Proposed Appendix G model. Geometries, construction/window types, LPDs, and mechanical systems are first defined referencing available construction documentation. From there, a brief review of "as-built" documentation (including submittals) should highlight some initial variances for evaluation.

- For the intended/stated purpose, developing a performance rating well before actual operations can be known, 90.1/Appendix G (in combination with the additional requirements of LEED and the rigor of GBCI simulation review) is a fine option. Analogous standards I've worked with do some things better and some things worse, and there will likely always be room for improvement, but for all that we talk of problems/holes in the standard as applied to the "real world" of modern building design, the standard covers quite a bit well enough.

- "Design phase" modeling, inclusive of the Appendix G protocol, is perhaps most critically useful in determining *relative* (not absolute) performance for design alternatives to inform the design process. In my experience, it's a slippery slope to quantify design alternatives in terms other than relative % performance increases. Presenting such results in terms of $$$ savings supports the fallacy that such early results are founded upon something other than a broad range of (probably incorrect) assumptions. This is not a comfortable concept for most to grasp, but energy simulations are sometimes at their most informative when they are "wrong." I have found it a difficult issue to explain to those outside our field, so I usually try to guide conversations around such sticky matters. The broader point I am trying to make in this bullet is that "relative" performance metrics have a distinct value from absolute cost predictions, and relative metrics translate better into informing the design process. Maybe someone else can make the point more eloquently?

I hope that adds to the discussion for some =)!

~Nick

NICK CATON, P.E.
Owner

Caton Energy Consulting
1150 N. 192nd St., #4-202
Shoreline, WA 98133
office: 785.410.3317

www.catonenergy.com


It occurs to me that every energy modeler "worth his / her salt" recognizes
the limitations of a "baseline" model. It's a reference point intended to
compare with another reference. It's a relative, not an absolute,
comparison.

The models which compare predicted versus actual (e.g., utility bill)
performance are a different matter. At that point, you start trying to
gain understanding about why the model differs from reality. Until you
have actual data, however, it's all relative and may bear little
resemblance to reality.

James V Dirkes II, PE

I'd like to add that although 90.1 Appendix G includes scope language, as pointed out by several board members, stating that its purpose is "to make an apples-to-apples comparison for rating," that doesn't mean the model couldn't also reflect the expected energy use of the building, subject to some of the major assumptions.

That Appendix G scope statement shouldn't be a "get out of jail free" card. I don't think people on this list mean it that way, but I can imagine some confusion if an owner uses Google to search for "accuracy of building energy models using 90.1 Appendix G" and this thread comes up.

The NBI chart, in my opinion, is provocative and out of date with current industry practices and standards. Some of the buildings use double the energy of the baseline building! How could this happen? There must have been some significant variances between the modeled equipment and what was actually constructed, and the building may be operating much differently than the original assumptions. 200% energy use doesn't happen from faulty zoning, slightly incorrect setpoints, default equipment curves, or a 0.5 W/sf plug load that was actually 0.75 W/sf. There must be major equipment variances, process loads, and/or extended operating hours.

I look forward to an updated study in this regard based on current modeling practices, USGBC's review process (which has evolved since that study was published), the availability of Appendix G as a resource as opposed to using ECB (which is designed for pass/fail analysis), and USGBC collecting data from buildings going forward. Hopefully such a study is in the works.

I'll suggest that our message to those retaining consultants for modeling services should be:

- The 90.1 comparison for LEED or Green Globes has some constraints in order to make it a model-to-model comparison across the many buildings that are submitted. Appendix G's primary intention is not predicting actual energy use. Energy efficiency investments made to the building based on modeled energy savings are likely to remain valid (or perform even better in terms of payback) if actual energy use comes in higher than the 90.1 model, provided the systems are verified to be operating properly and the extra energy use is coming from higher plug loads and operating hours.

- Yes, the models can provide a predictive estimate of energy use, but the required level of coordination between the owner, design team, contractor, and modeler demands extra effort from all parties. An experienced modeler will have a higher probability of asking the right questions of these parties to close the gap between modeled and actual energy use. The energy model is not able to account for changes that take place after the fact unless information is exchanged and the modeler's scope includes updating the model at that point. This should be emphasized for any core-and-shell building, and for any building that will have high-EUI tenants that aren't known at the time of the energy model's creation.

- In order to verify the cumulative performance of ECMs or investigate higher-than-expected energy use, an M&V process is recommended after the building is occupied, in order to refine the model to correct for building occupancy factors, especially plug loads and operating hours, as well as actual weather. Comparing a pre-occupancy energy model to post-occupancy energy use data includes bias errors that need to be addressed through a calibrated energy model. The calibrated model may (will) provide different results than what was submitted to LEED or Green Globes.
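As a sketch of what "refining the model" is typically checked against, here is a simple acceptance test using the monthly calibration tolerances commonly cited from ASHRAE Guideline 14 (NMBE within ±5%, CV(RMSE) within 15%). The thresholds are parameters, since the appropriate tolerance is ultimately a project decision:

```python
# Hedged sketch: pass/fail check for monthly calibration statistics.
# Default tolerances follow commonly cited ASHRAE Guideline 14 monthly
# criteria; verify against the current edition before relying on them.
def is_calibrated(nmbe_pct, cvrmse_pct, nmbe_tol=5.0, cvrmse_tol=15.0):
    """True if both monthly statistics fall inside the tolerances."""
    return abs(nmbe_pct) <= nmbe_tol and cvrmse_pct <= cvrmse_tol

# Illustrative numbers (not from any specific project):
design_phase_ok = is_calibrated(-7.0, 24.0)   # fails both tolerances
calibrated_ok = is_calibrated(-2.0, 12.0)     # passes both tolerances
```

A check like this makes the "how close is close enough" question explicit rather than implicit.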

This discussion also hinges on what someone considers to be "close": is that within 5%, 10%, or 25%? An expectation needs to be set before the project.

If anyone wants to propose some ASHRAE research to provide better plug load and operating schedule data, let me know! The TC 7.6 research subcommittee (among others) wants your input.

David

David S. Eldridge, Jr., P.E., LEED AP BD+C, BEMP, BEAP, HBDP
Associate

Direct: (847) 316-9224 | Mobile: (773) 490-5038

Grumman/Butkus Associates | 820 Davis Street, Suite 300 | Evanston, IL 60201
Energy Efficiency Consultants and Sustainable Design Engineers

grummanbutkus.com | Blog | Facebook | Twitter


All,

I would agree with all of the factors mentioned that absolutely can result in deviations between actual and modeled building performance. I would also echo the sentiment that following modeling rules shouldn't necessarily preclude us from attempting to better predict actual building performance through the LEED process. Applying careful attention to important details and a healthy dose of experience (bringing together real-life building performance knowledge as it relates to the grey areas, namely schedules, equipment controls, occupant behavior, etc.) can truly help close the gap. At the end of the day, a rating system should be attempting to reward buildings that actually perform well, not theoretically perform well (and as modelers we should take a leading role in making *good* (i.e. fair) assumptions).

My colleague (Janine Vanry) has recently completed research (to be published soon) for her master's thesis at the University of Waterloo (Ontario, Canada), which studied how LEED-certified academic buildings in southwestern Ontario performed in comparison to government energy intensity benchmarks and campus-wide energy intensities, and in general how LEED (modeled) results compare to actual building performance (as measured through M&V). Consistent with Dr. Samuelson et al.'s research findings, the discrepancies between the modeled results and the actual energy intensities showed an under-prediction of anywhere from 2% to 44%.

While energy modeling professionals understand (as is evident from this thread) that there will be differences between the documented EAc1 energy savings and actual building energy usage, this isn't always communicated to and understood by the building owners and the professionals we work with.

Chris

Chris Hadlock

Thank you all for your thoughtful insight on this matter! It's an important debate, both for understanding the capabilities/limitations of our craft as energy modelers, and for communicating our value to the community at large.

To clarify slightly, my question revolved around how the specific modeling protocol of Appendix G could account for the "performance" gap between modeled and actual use. Thus, the fact that buildings aren't operated as the energy model specified, and the lack of building commissioning, while true and important, are not inherent to the intent of Appendix G modeling. The most interesting question is: "If you model a LEED App G model perfectly according to protocol, AND the building was operated according to the modeled schedules, would it predict the right number?" Lots of your responses lent insight into this question; thanks again!

I've revised my list below based on your responses:

Added:
- Plug load values are assumed, which can have a huge impact on overall energy (Thanks Christoph and Chris Hadlock for the insight)
- Insulation values are largely specified without thought to thermal bridging
- HVAC controls simulation is often simplified
- Performance curves are often not simulated due to increased effort and unavailability of performance data from manufacturers

Original:

- Appendix G does not take into account external shading, which can be critical in urban environments for accurate energy predictions

- Schedules are typically not created with the intent of being predictive. Overall building hours are adhered to, but detailed schedule creation is not usually in the scope of a LEED model (or is it, in your experience?). For instance, a typical plug-load base value during unoccupied hours is 0.3 (a fraction of the peak load), which is a pretty big assumption.

- The App G model uses a TMY weather file, which can vary from the current weather year (I wonder on average by how much?)

- Infiltration values are assumed, unless blower door testing has been done (which is rare for commercial buildings).

- Thermostat values are modeled as consistent across the building, which is rarely the case in an actual operating building
Cheers,

Jacob Dunn LEED AP BD+C

ESKEW+DUMEZ+RIPPLE, APC
2014 AIA National Architecture Firm Award

365 Canal Street Suite 3150
New Orleans LA 70130
504.561.8686
eskewdumezripple.com



This is a slightly different question:
How close would you expect two energy models to be, created by two different modelers (using the same program), if you gave them the same plans and information?

(Ask Pablo Picasso, Salvador Dali and Rembrandt to draw a tree.)

I assume most companies don't double up on energy modeling efforts, which would show how consistent or inconsistent energy models are (assuming the energy modelers are experienced and competent).

Somebody who teaches energy modeling may be able to provide insight and good examples.

As far as an energy model matching the actual utility bills... if you have a thousand modelers, making models on a thousand computers for a thousand years...

Jeurek

Just catching up with this thread so I'm sorry if I'm repeating anything that's already been mentioned.

Take a look at the paper from Pam Berkeley et al published last year at SimBuild.
https://www.ashrae.org/membership--conferences/conferences/ashrae-ibpsa-usa-papers

10 experienced energy modelers modeled the same building in a three-hour period. Fascinating results.

There are emerging methods to do more rigorous QC, using a variation on Monte Carlo for energy models to calculate a confidence interval for the model, rather than relying fully on modeler experience and third-party QC. Georgia Tech has integrated this into their version of e+, which I hope makes it into a future official version of e+. It's computationally intense, so cloud computing is probably the right way to do this, which I believe e+ is heading towards.
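The Monte Carlo idea can be sketched in a few lines of Python. The "model" here is a stand-in linear response surface with assumed coefficients and assumed input ranges (plug load, operating hours, infiltration multiplier), not a real simulation engine; a production workflow would dispatch one EnergyPlus run per sample:

```python
import random
import statistics

random.seed(42)

def toy_eui(plug_w_sf, hours_per_wk, infil_mult):
    # Stand-in response surface, NOT a real engine: returns kBtu/sf-yr.
    # Coefficients are invented purely for illustration.
    return 20.0 + 15.0 * plug_w_sf + 0.2 * hours_per_wk + 8.0 * infil_mult

# Sample uncertain inputs over assumed ranges and collect predicted EUIs.
samples = []
for _ in range(5000):
    plug = random.uniform(0.5, 1.0)    # W/sf
    hours = random.uniform(50, 70)     # occupied hr/wk
    infil = random.uniform(0.8, 1.5)   # multiplier on design infiltration
    samples.append(toy_eui(plug, hours, infil))

samples.sort()
lo = samples[int(0.025 * len(samples))]   # empirical 2.5th percentile
hi = samples[int(0.975 * len(samples))]   # empirical 97.5th percentile
mean_eui = statistics.mean(samples)
```

Reporting the (lo, hi) band instead of a single EUI is exactly the confidence-interval framing described above.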

Fred

FRED BETZ PhD, LEED AP BD+C
SENIOR SUSTAINABLE
DESIGN CONSULTANT

AEI | AFFILIATED ENGINEERS, INC.
5802 Research Park Blvd. | Madison, WI 53719

P: 608.236.1175 | F: 608.238.2614
fbetz at aeieng.com | www.aeieng.com


Models are never going to agree, but that doesn't mean they aren't powerful
tools for decision-making. Look at the ASHRAE 140 results for highly
simplified geometries with different engines. Things easily vary by 20%.
Hell, things vary by 20% just between some versions of the same tool. I
didn't see it mentioned that the underlying models just aren't going to
give you the "right" answer.

That's why we need to calibrate. The models generally aren't calculating things from first principles; they use shortcuts everywhere for computation time and ease of specification. That's why it's no less legitimate for me to impose a somewhat goofy setpoint schedule to get things to work out in calibration. Ideally, we'd
calibrate to one set of bills and then run it against a second set of bills
to see if we over-calibrated the models (the same as over-fitting a
regression model). Instead of arguing whether or not the models are right,
we should be going back to the fundamental question of whether or not the
models provide useful decision-making information, i.e. whether or not they
get the first order impacts right for different kinds of design choices in
new construction. Remember that uncertainty around each point and move
ahead making the design decisions you need to make.

There's still plenty of room to develop better analysis practices that will
improve the design decisions we make. I would love to see a study looking
at normalized consumption and key characteristics of 5-year-old commercial
buildings, and then again at 10-year-old commercial buildings. You could do
this with LEED submittals and regress against the ECMs included and other
characteristics. You wouldn't necessarily get to the why of how the
buildings failed to deliver without a lot of extra work, but I think you
could get the what just by looking at bills and data. You could also do
this on the residential side. We have some utility clients who run
commercial or residential new construction programs. Theoretically, we
could get bills and characteristics for a large number of buildings and
regress. I know that would work on the residential side. On the commercial
side, I don't know if there is enough participant data. Others should
consider doing the same. It would be a fascinating study and would make for
a great energy nerd parlor game. Which of these measures just didn't
deliver?

My gut says that failures in fancy control systems are to blame for the
most egregious differences. There are so many failure points -- sensors,
valves, dampers, actuators, not to mention the likelihood of a mistake in
the controls sequence. You need somebody on top of monitoring for each of
those potential failure points and correcting them. My gut also says that
good solid basic design will deliver the savings. Cutting down on
west-facing solar gains will save cooling energy. Installing more efficient
HVAC equipment, given solid ratings information, will also work. More
efficient lighting designs will work. Really fancy stuff will fail some of
the time. And some of the time it will work! And we need to celebrate those
pieces that work!

Justin Spencer

I don't think the primary cause of discrepancy between simulations and measured data is
the algorithms (although they play a part); it's much more the input assumptions. I
also agree that more calibration should be done, not so much to tune an individual model,
which can easily be taken to a meaningless excess, but to gain insight into which
input assumptions or defaults are most questionable and need a revision of the "conventional
wisdom". A case in point: how much shading from drapes, blinds, and other obstructions
should be assumed? For many years I've seen and used a SHADING-FRACTION of 0.60, i.e.,
40% of the incident solar is obstructed, but is that really the best average value?
(This will likely vary by simulation program.)
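The SHADING-FRACTION question lends itself to a quick sensitivity check: since transmitted gain is linear in the assumed fraction, the default moves the answer a lot. The numbers below (incident solar, glazing area, SHGC) are made up purely for illustration:

```python
# Annual solar gain through glazing as a function of the assumed
# interior-shading multiplier (a DOE-2 SHADING-FRACTION style input).
incident_kwh_per_m2 = 900   # hypothetical annual incident solar on the facade
glazing_area_m2 = 200
shgc = 0.40                 # hypothetical solar heat gain coefficient

def solar_gain_kwh(shading_fraction):
    """shading_fraction = fraction of solar transmitted past blinds/drapes."""
    return incident_kwh_per_m2 * glazing_area_m2 * shgc * shading_fraction

base = solar_gain_kwh(0.60)  # the conventional 0.60 assumption
for sf in (0.40, 0.60, 0.80):
    gain = solar_gain_kwh(sf)
    print(f"SHADING-FRACTION {sf:.2f}: {gain:>8,.0f} kWh ({gain / base - 1:+.0%} vs 0.60)")
```

Moving the assumed fraction by ±0.20 swings this gain term by ±33%, which is exactly why calibrating the conventional wisdom matters more than polishing any single model.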

When the issue of how reliable simulations are was raised at a DOE-sponsored workshop in
Jan. 2012 following the ASHRAE conference in San Antonio, and someone put up the
proverbial plot of simulated vs. measured energy for LEED buildings showing a huge cloud, I
suggested that we tackle the calibration process in steps:
1. the original ASHRAE 90.1 model vs. measured data
2. the model from step 1, corrected to as-built and as-operated conditions (including actual weather!), vs. measured data
3. the model from step 2 compared against end-use metered data, plus tuning of the model
Step 1 is what we're doing now. Step 2 is eminently doable with an on-site visit to the actual
building. Step 3 would require much more instrumentation and manpower, and is only doable as a
research project.
What I would like to see is how much the correlations improve going to steps 2 and 3.
My gut feeling is that the standard deviations would be halved each time, i.e., from 40%
to 20% to 10%.

I've actually thought of turning this idea into an ASHRAE work statement (RFP), and might still do it,
time and energy permitting. What I envision is for a contractor to get a healthy number
of ASHRAE 90.1 models (~30-40), possibly ones submitted for LEED, apply the same
methodology to get step 2, and then pick a few (~5) to be monitored for step 3. If there are any
ASHRAE members or friends interested in helping flesh out such a work statement, please send me an e-mail.

Joe (currently Chair ASHRAE TC 4.7 on Energy Calculations)

Joe Huang
White Box Technologies, Inc.
346 Rheem Blvd., Suite 205A
Moraga CA 94556
yjhuang at whiteboxtechnologies.com
http://weather.whiteboxtechnologies.com for simulation-ready weather data
(o) (925)388-0265
(c) (510)928-2683
"building energy simulations at your fingertips"



Fred,

Thank you for the link to the article. It was fascinating. That answered exactly what I was wondering about.

John Eurek PE, LEED AP
Mechanical Engineer,


Thou shalt not model unless thy answer is already known.

Morgan Heater, P.E. | Mechanical Engineer
Ecotope, Inc. | 4056 9th Ave NE | Seattle, WA 98105
206.322.3753| Direct 206.596.4709
www.ecotope.com


Are there any online video courses available for TRNSYS beginners?
And is there any organization in India that organizes such training courses?

V.Ravindra
Research Scholar
School of Energy Science and Engineering
IIT Kharagpur
Kharagpur, West Bengal-721302, India
Ph no-8016634988
