How have you approached...


I have a precedent question for everyone... this is not a "how to"
procedural question for any particular software, but simply an attempt
to feel out what has worked for others in the past:

Here's a quick description of the situation at-hand: a VAV terminal
unit with a "normal" minimum damper position (say, 30%) is tied by relay
to the space occupancy sensor, which also controls the lights. Upon
sensing space vacancy, the minimum airflow damper position is reset to
0% (airflow is permitted to stop, provided thermostat temperature set
point is satisfied). Upon sensing occupancy, the preset minimum damper
position is restored.

For LEED/USGBC-reviewed energy models, can anyone relate success in
modeling this or a similar energy-saving behavior in the proposed model,
distinct from the baseline? If so, what approach did you use to model
this behavior and/or quantify the energy savings?

I was discussing the possibilities with some colleagues and have a few
ideas for approaches that might all be justifiable, ranging from simple
(and quick) to complex (and time-consuming).

There are likely multiple "right" answers here, but I am hoping to
identify some precedent to understand what we can anticipate LEED
reviewers will accept.

Thanks in advance!


Nick Caton



I would do it with schedules. This would be the long method.

I have the same controls and haven't gotten around to modeling them yet,
but I would also make many schedules: a different schedule for occupancy,
lights, equipment, etc. And if this applies to a lot of rooms, you'd likely
make multiple versions, one assuming a room is empty from 9 to 10, another
with a room empty from 2 to 3. (The inverse applies for meeting rooms: if
you have multiple meeting rooms, not all meetings will happen at the same time.)

"Is Freedom a small price to pay to stop Global Warming?"

John Eurek PE, LEED AP


I agree with John; I've done this on a model before and it's painful. Assuming there are 10 hours in a work day, I set up 10 different occupancy and lighting schedules, each essentially identical except for a different hour of zero occupancy/lighting. I then applied these 10 schedules to the different office zones on a floor so that specific areas were vacant from 8-9 am, some from 9-10 am, etc. Then, for my system, I assumed a relatively consistent 10% non-occupied condition and reduced the OA at the system level accordingly. If your non-occupied conditions are more drastic (30%), you can obviously set up your schedules differently.

However, if your primary air is also cooling (and not some sort of DOAS chilled water VAV box), then you will also have to specify 10 separate thermostat schedules (really 20, with heating and cooling) so that the rooms do not accumulate unmet load hours. I'm assuming you go into setback mode when the offices are unoccupied.

However, this was under v2.0, so that was a long time ago and reviewers were not as critical in their reviews.
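A rough sketch of that rotating-vacancy bookkeeping (hypothetical names and a placeholder 0.9 base fraction; not tied to any particular simulation tool):

```python
# Build ten 24-hour fraction profiles for a 10-hour workday, each
# identical except for a different work hour forced to zero. Names
# and the base fraction are illustrative assumptions.

WORK_HOURS = range(8, 18)      # 8:00 through 17:00 (10 hours)
BASE_FRACTION = 0.9            # assumed occupied-hour load fraction

def make_rotating_schedules():
    schedules = {}
    for vacant_hour in WORK_HOURS:
        profile = [0.0] * 24
        for h in WORK_HOURS:
            profile[h] = 0.0 if h == vacant_hour else BASE_FRACTION
        schedules[f"OCC-VAC-{vacant_hour:02d}"] = profile
    return schedules

schedules = make_rotating_schedules()
```

Assigning each schedule to roughly a tenth of the zones leaves about 10% of the floor vacant in any given work hour, which is where the flat system-level OA reduction comes from.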

James Hansen, P.E., LEED AP


Thanks so much for the suggestions!

I have gone down the "extensive, realistic fractional/thermostat
scheduling" path myself before under similar circumstances. Now, facing
a VERY large project, where the amount of explicit scheduling required
by that approach is compounded by the sheer quantity and variety of
occupied spaces benefiting from this relay setup, I am challenged to
brainstorm some means, acceptable to a LEED reviewer, of approximating
the same behavior/savings by simplifying the problem...

Here are some alternative ideas we've come up with so far - I would
very much appreciate others' thoughts on these, or any further related
LEED/USGBC experiences to share as well:

1. 90.1 already prescribes how we quantify the savings of
occupancy sensors for installed lighting (a 10 or 15% flat deduction on
the LPD). Rather than define & justify the quantity and timing of
unoccupied hours between varying space types (murky waters at best), one
could instead reduce the loads incident on the affected systems by the
same percentage. This might be accomplished by applying this 10/15%
deduction to the affected spaces' fractional load schedules (occupancy,
equipment & lighting). One would need to tread carefully to avoid
"double-dipping" on any spaces already claiming an LPD deduction for
occupancy sensors. The baseline model's schedules would remain unaffected
and would be documented alongside the modified ones to illustrate the
adjustment.
2. (Simpler to model, but requiring slightly more documentation):
Let's say a hospital has an annual average of 85% occupancy for all its
patient rooms. Treating every other room normally, select a
representative sampling (considering envelope loads) of 15% of the
patient rooms. Model those selected rooms as "empty" (set people,
lights and equipment loads = 0) but still conditioned to maintain the
thermostat setpoint (against loads incident from the envelope &
neighboring spaces). Apply the 0% minimum turndown behavior to those
"empty" rooms only. The baseline model would receive identical
treatment, except for the 0% turndown behavior. Documentation would
include illustrating which zones were sampled against the others, and
justification for the net annual "occupancy rate" used for each space
type.
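For what it's worth, the flat derating in approach 1 could be sketched like this (the 10% credit and the daily profile are placeholders, not taken from 90.1 itself):

```python
# Hypothetical sketch of approach 1: scale the proposed model's hourly
# load fractions down by the same flat occupancy-sensor credit used
# for the LPD deduction. The baseline schedules stay untouched.

SENSOR_CREDIT = 0.10   # assumed 10% flat deduction (could be 15%)

def derate(profile, credit=SENSOR_CREDIT):
    """Reduce every hourly fraction by the flat credit."""
    return [round(f * (1.0 - credit), 4) for f in profile]

base = [0.0] * 8 + [0.9] * 10 + [0.0] * 6   # illustrative office day
proposed = derate(base)                      # occupied hours: 0.9 -> 0.81
```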

I have mixed feelings - obviously any simplification of the problem has
the potential to under- or overstate the savings that might be found
with a more exhaustive scheduling approach, but it may result in as good
or even a better estimate, provided solid documentation and execution.
Does anyone think the above approaches could work well, or have any
suggestions to refine the strategies?

Thanks again!


Nick Caton

Hi Nick,

To keep it simple, I would just derate the schedules (your 10/15%
reduction, but applied on an hourly basis, i.e. one or more hours moved
to 0%/setback) and apply the RESET-PRIORITY and MIN-RESET-FLOW keywords.
I think averaging it out beforehand and explaining it as part of your
modelling methodology in your report would take less time and be easier
to explain. I would start with a small test model, to make sure you have
the right percentages.
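For reference, a minimal DOE-2.2/eQUEST-style fragment of that derated minimum-flow idea might look like the sketch below. This is a hypothetical sketch only: the names and hourly values are placeholders, and the exact keywords (including RESET-PRIORITY / MIN-RESET-FLOW, which would sit alongside) should be verified against your program version's BDL dictionary.

```
$ Hypothetical sketch: one work hour (9-10 am) derated to a 0% minimum,
$ the rest at the normal 30% minimum, assigned to the zone via MIN-FLOW-SCH.

"MinFlow-Day" = DAY-SCHEDULE-PD
   TYPE   = FRACTION/DESIGN
   VALUES = ( 0.3, &D, &D, &D, &D, &D, &D, &D, &D, 0.0,
              0.3, &D, &D, &D, &D, &D, &D, &D, &D, &D,
              &D, &D, &D, &D )
   ..
"MinFlow-Week" = WEEK-SCHEDULE-PD
   TYPE          = FRACTION/DESIGN
   DAY-SCHEDULES = ( "MinFlow-Day" )
   ..
"MinFlow-Ann" = SCHEDULE-PD
   TYPE           = FRACTION/DESIGN
   MONTH          = 12
   DAY            = 31
   WEEK-SCHEDULES = ( "MinFlow-Week" )
   ..
"Typical Office Zone" = ZONE
   TYPE         = CONDITIONED
   MIN-FLOW-SCH = "MinFlow-Ann"
   ..
```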

Shaun Martin LEED AP


I also agree with using schedules. I think the fact that Std 90 went from
"no schedule changes" to allowing them for DCV means they do too.

I think the best thing you can do to simplify your LEED review is to explain
to the reviewer what you are doing and why. As long as they can follow you,
it makes sense, and it is reasonably conservative (i.e., you err on that
side), it should be acceptable to them.


cmg750



One of my projects has server rooms which combined produce 143 MBH (more heat
than some home furnaces), and the servers run all the time.
I am designing a ground-source heat pump system for the building. The energy
model shows that the geothermal well field is unbalanced by 522,500,000 Btu.
I plan on using a snow melt system to get rid of the extra heat. (Snow melt
is a nice by-product; the goal is to balance the well field's annual load.)
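As a rough sanity check on those figures (a sketch using only the numbers quoted above; the 3,600-hour cold season is my own assumption):

```python
# How much heat must the balancing loop shed per year, and what does
# that imply as an average rejection rate? Server load and imbalance
# are as quoted above; the season length is a placeholder assumption.

server_load_btuh = 143_000            # 143 MBH continuous server heat
annual_imbalance_btu = 522_500_000    # quoted well-field surplus

# Equivalent hours of rejecting the full server load:
hours_equivalent = annual_imbalance_btu / server_load_btuh   # ~3,650 h

# Spread over an assumed 3,600-hour cold season instead:
avg_rejection_btuh = annual_imbalance_btu / 3_600            # ~145,000 Btu/h
```

In other words, the surface loop would need to run at roughly the full server load for much of the cold season, which is worth knowing before sizing it.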

Has anyone designed this kind of system?
I'm not really sure how to model it, and I'm foggy on how to control it
so I get rid of just the right amount of heat.

It is fun to get to design energy-efficient systems and come up with new
ideas, but it is hard to do research on them. (I know snow melt isn't new,
geothermal isn't new, and computer room cooling isn't new, but I haven't
found any articles tying the three together.)

John Eurek PE, LEED AP


I will say I got really excited this past year about the isolated idea
of combining snow melt with geothermal well field design...

For those unfamiliar, well fields in this part of the Midwest (KS/MO/NE)
historically heat up over the long haul, especially with intense
equipment loading like John's situation... fields installed decades ago
commonly end up struggling and failing when the extreme summers roll
around, sooner or later.

The most common pre-emptive/patch solution I've seen to tackle this
trend is to tack on (or set up infrastructure for) an exterior dry
cooler to reject excess heat from the loop.

I pushed hard recently to consider a "snow melt" loop (even if not used
to functionally melt snow) as a better "free" means of keeping a
well field in balance annually, but the real trick is setting it up so
that this extra loop doesn't require active maintenance/monitoring...
Finding a location for horizontal loops like this is also a challenge in
many cases where the land is not readily available or future expansions
are planned. Most clients out there, even those with motivated
maintenance staffing, would really be best served by a "fire and forget"
system that could handle itself decades down the road without requiring
active monitoring/adjusting as the seasons change and temps swing.
Ultimately, I haven't yet come up with any particularly great setup that
I would consider fool-proof over a span of decades, but I'd be very
interested if others have thoughts/details to share on how this might be
achieved.

Conceivably, and to your specific questions John, I'd speculate the
actual controls would at a minimum need to define parameters
establishing a range of "target" loop temperatures for the cooling and
heating seasons (based on first-year loop simulations, to start). The
"melt" loop, routed near the surface (or embedded in concrete where it
might be safe), would be normally closed, but valves would open for
circulation when the outside temperatures* are conducive to pushing the
well field return temperatures in the desired direction. Done cleverly,
you could push the system to permit water below the setpoint during the
cooling season, and above the setpoint during the heating season as well
(this behavior would potentially occur in the swing seasons,
spring/fall). I'm thinking the ideal controls would constantly monitor
for, and take advantage of, free heat rejection and free heat gains
whenever applicable.

* Another point to consider: What temperatures are ideal to monitor for
such a loop? Outside air data may be easily available, but considering
the lag in topsoil temps and depending on what you choose for the actual
loop installation location you might do better to locate a sensor
elsewhere... A loop underneath black asphalt will experience
different temperatures than the same loop embedded in light-colored
hardscape or simply under exposed topsoil.
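The control idea above (open the surface loop only when circulation helps) might reduce to a sketch like the following. All setpoints here are invented placeholders to be tuned from the first-year simulations, and the surface temperature input comes from wherever you decide to locate the sensor:

```python
# Speculative valve logic for the balancing ("melt") loop: circulate
# only when the surface loop would move the well-field return
# temperature toward the seasonal target. Setpoints are placeholders.

COOLING_TARGET_F = 60.0   # pull the loop DOWN toward this before summer
HEATING_TARGET_F = 50.0   # push the loop UP toward this before winter
DEADBAND_F = 3.0          # avoid useless circulation near break-even

def open_balancing_loop(season, loop_return_f, surface_loop_f):
    """Open the surface-loop valves only when circulation would move
    the well-field return temperature toward the seasonal target."""
    if season == "cooling":
        # reject heat only if the surface loop is usefully colder
        return (loop_return_f > COOLING_TARGET_F
                and surface_loop_f < loop_return_f - DEADBAND_F)
    if season == "heating":
        # absorb heat only if the surface loop is usefully warmer
        return (loop_return_f < HEATING_TARGET_F
                and surface_loop_f > loop_return_f + DEADBAND_F)
    return False
```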

Northern climates might find the same setup of interest primarily to
pick up free heat and keep loops in balance, routing the secondary loop
through building envelopes or similar massed structures that would
absorb extra heat in the summer months.

Suffice to say I find it an interesting topic - if anyone has ever come
across a design guide or white paper on this subject I'd also be very
interested to hear about it!


Nick Caton

Check the Oklahoma State Building and Environmental Thermal Systems
Research Group -- there are a number of papers on various topics related
to geothermal systems in hybrid, shallow, and other configurations.

There is also a great research project where snow melt was the primary
purpose of the geothermal system, which is not quite what you guys are
after but might offer some guidance.

David S. Eldridge, Jr., P.E., LEED AP BD+C, BEMP, BEAP, HBDP


I was wondering how the snow melt system would help you during the
summer months, unless you are up in the mountains somewhere. What about
preheating, or just heating, DHW, as that would be a year-round load?

cmg750

Hey Carol,

In a majority of four-season climates (here in Kansas City too), I'd
expect you would not use such a secondary loop in the dead of summer,
when night-time temperatures would not drop to practical/conducive
levels for heat rejection.

But to answer your question from other angles: this heat rejection/heat
gain loop would help in the summer months decades down the road, where
you've avoided overcooling/overheating the ground temperatures over the
years.

It would also help in that, through late spring and early summer, you
would be able to cool through the workday/evening, then use cooler
nighttime/morning temperatures to offload the extra heat built up on a
daily basis, resulting in cooler net loop temperatures day-to-day. This
would ultimately reduce and forestall the maximum temperatures reached
during mid-to-late summer, when you'd want this transfer loop shut off
24/7.

I'll be digging into that OSU link David, thanks for the heads up!


Nick Caton


While I appreciate the relative novelty of a snow melt system for balancing annual ground loop temperatures, there are a couple of ideas that shouldn't be excluded outright when considering server cooling loads: higher cooling temperatures and air-side economizers. The control strategies and equipment for economizers are well established, and their use is becoming more common in datacenters despite concerns about contaminants and humidity control. A conservative supply air temperature is 75°F, with the ASHRAE TC 9.9 recommendations going as high as 80.6°F. Google, Intel and Microsoft have all had good experimental results even at 90°F and above, and with little to no humidity control. I also second Carol's suggestion of heat pump space heating and DHW.
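The economizer decision itself is simple to express; here is a bare sketch using the temperature figures from above (real datacenter sequences would add humidity, enthalpy, and filtration checks):

```python
# Minimal air-side economizer decision sketch. The supply limit is the
# TC 9.9 recommended upper bound cited above; everything else is a
# deliberate simplification.

SUPPLY_LIMIT_F = 80.6   # recommended upper supply-temperature bound

def use_airside_economizer(oat_f, return_air_f):
    """Use outside air for free cooling when it is cooler than the
    return air and within the relaxed supply-temperature limit.
    (Humidity and contaminant checks omitted for brevity.)"""
    return oat_f < return_air_f and oat_f <= SUPPLY_LIMIT_F
```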

The servers might have a different opinion:



Bill Bishop