Connected Thermostats Stakeholder Meeting Insights

 
EPA ENERGY STAR Connected Thermostats
Stakeholder working meeting
Connected Thermostat Field Savings Metric
6/5/2015
 
Agenda
 
Administrative updates and issues
Data call results, discussion
Accuracy of temperature readings
 
Administrative notes
 
New web site refers to Connected Thermostats
Documents from before June 9 can be accessed on the
original web site, which refers to Climate Controls
Some of the references at energystar.gov may take a
week or two to be updated
New email address operational in the next few days:
connectedthermostats@energystar.gov
Any issues?
 
Data call
 
3 organizations have agreed to share submitted data; these three data sets have been shared
Tentative conclusions about some hypotheses are possible
In-depth discussion of hypotheses and conclusions at the next meeting
Additional data may yet come in
 
 
Data call – discussion
 
Are temperature measurements accurate? E.g., are the
differences between vendors real?
Would be surprising if there were a 1°F overall bias for all thermostats,
though there would be random variation
If there is such a bias, that is an argument for comparing thermostats to themselves
However, if differences are due to demographics, the
fairer thing to use is a regional baseline
A single regional number biases which customers vendors want to sell to
Set a deviance bound – if a particular home’s baseline falls outside
of it, default to the regional baseline (see the sketch at the end of this slide)
Is removing outliers ever valid? Warning: it definitely needs to
be done carefully
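
A minimal sketch of the deviance-bound fallback mentioned above (the function name, the 2.0°F bound, and the baseline values are illustrative assumptions, not anything agreed in the meeting):

```python
# Sketch of the deviance-bound fallback: keep the per-home baseline unless it
# deviates too far from the regional one. The 2.0 F bound is an assumed value.
def choose_baseline(per_home_baseline_f, regional_baseline_f, bound_f=2.0):
    if abs(per_home_baseline_f - regional_baseline_f) > bound_f:
        return regional_baseline_f   # outside the deviance bound -> default to regional
    return per_home_baseline_f       # within the bound -> keep the home's own baseline

# Example: a home baseline of 75.5 F against a 70.0 F regional value
# falls back to the regional baseline.
print(choose_baseline(75.5, 70.0))   # -> 70.0
```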
 
Data call – discussion
 
Difference between the 90th percentile and the average indoor temperature is somewhat
indicative of setback behavior (see the sketch at the end of this slide)
Assertion that this difference drives the savings
But if it is driven by demographics alone, then what do we attribute to the
t’stat?
RCT and before-after studies show actual savings
Non-fixed climate zone model, using data from vendors
Baseline based on the demographics of the zip code where the t’stat is located
Locational baseline – much finer scale, based on publicly available
demographic data, e.g. income, 2nd-home percentage, retirees, HDD per year
Are zip codes a good proxy, though? There is a lot of heterogeneity within zip
codes, but are the differences between zip codes larger?
Agreed that it would be closer, but more complex; sample sizes in each zip code may be small
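
A minimal sketch of the 90th-percentile-minus-average comparison referenced on this slide (the data, helper names, and nearest-rank percentile are illustrative assumptions):

```python
# Gap between the 90th-percentile and mean indoor temperature for one home;
# a larger gap is (loosely) indicative of more setback behavior.
import statistics

def percentile(values, p):
    # Nearest-rank percentile; adequate for a rough sketch.
    ordered = sorted(values)
    k = min(len(ordered) - 1, max(0, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

def setback_indicator(indoor_temps_f):
    return percentile(indoor_temps_f, 90) - statistics.mean(indoor_temps_f)

# Example: a home that sets back to 62 F overnight vs. one held near 70 F.
setback_home = [62] * 8 + [70] * 16   # 24 hourly readings
steady_home = [70] * 24
print(setback_indicator(setback_home), setback_indicator(steady_home))  # ~2.7 vs 0.0
```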
 
Data call – discussion
 
RBSA data that is not smart t’stat data: is the size of the set point
effect large compared to what we see in a data set
without smart t’stats?
Fraunhofer study – very little setback occurs
Compare the difference between the average and 90th percentile, and between
the average and 10th percentile, for the RBSA data (or the metric) to see whether
the control group is very different from what we are seeing from these three
excellent t’stats
Initial analysis of RBSA data does show a difference between
average indoor temp and temperature preference
How directly comparable is this to the data from this data call?
 
Data call – discussion
 
Where can we find indoor temp datasets?
Alarm companies with data? With remote-access tstats in their package?
[alarm.com, ADT Pulse, Schlage, iControl to make connections]
Homes with data loggers from utilities?
DR companies? (Other than those working on EE?)
RBSA shows setback behavior across the board – similar to data set 1
Comparison of the Pacific NW to all of the cold climate zone is not valid
Looked at differences between zip codes within the same state: 4–5°F
differences in average
RBSA data: 68.8°F average indoor temp, 71.6°F 90th-percentile indoor temp;
filtered for the heating season (not easy to describe, but meant to be
comparable to the run time method); std. errors around 3°F for each
(see the arithmetic note below)
Some marine and some cold climate zone; 50%–65% in the marine zone
Assertion that this data shows that climate zone is too coarse a
comparison.
May also want to try to analyze infrequently occupied homes differently.
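
For reference, simple arithmetic on the RBSA figures quoted above (not an additional finding): 71.6°F - 68.8°F = 2.8°F between the 90th-percentile and average heating-season indoor temperatures, with each figure carrying a standard error of roughly 3°F.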
 
Data call – discussion
 
Tstats will look better or worse depending on their
customers’ ability to choose energy-saving schedules
Fundamental issue is to try to correct for that – zip code
level data would be an attempt to do that
Seriously easier if we can mostly use census data
Derived products available that are close to demographics at
the zip code level
None of this is relevant if we can rely entirely on per-
home baseline – main problem with that is not rewarding
induced differences in preference temperature
Differences in outdoor temps (and maybe set points) may
be largely based on different geographic clustering
 
Action items
 
Can we establish why data set 3 has a much larger std.
error?
Yellow cells in the spreadsheet show anomalies – please
check your sets
Also check indoor temp data against RBSA data with
more regional accuracy
Can BPA give us results broken down within each climate
zone? Really small sample sizes – probably not worth it
May be worth looking for additional indoor-temperature-only
data sets
 
Temperature measurement accuracy
 
Starting point: NEMA DC-3 test method and requirements
Static temp accuracy ±1°F after 1 hour soak
Droop <1.5°F, measured as difference of cut-in temp when room
temp ramps up and when it ramps down
Droop is the effect of internal heating in the thermostat; it
has two undesirable effects:
Displayed/reported room temp is warmer than actual
There is a “droop” in the cut-in point, so the room is cooler than
desired
How much accuracy is needed to make comparisons
between vendors accurate?
Do static temperature accuracy and droop adequately
capture measurement accuracy in the field?
 
From NEMA DC3, Annex A-2013:
A.13.3 Room Temperature Droop – Heating Operation
• For the 20% duty cycle, ramp rate shall be 8°F (4.4°C)/hour up and 2°F (1.1°C)/hour down.
• For the 80% duty cycle, ramp rate shall be 2°F (1.1°C)/hour up and 8°F (4.4°C)/hour down.
The effective value of room temperature droop is the difference between the cut-in points at the 20% duty cycle and the 80% duty cycle.
A.13.4 Room Temperature Droop – Cooling Operation
• For the 20% duty cycle, ramp rate shall be 2°F (1.1°C)/hour up and 8°F (4.4°C)/hour down.
• For the 80% duty cycle, ramp rate shall be 8°F (4.4°C)/hour up and 2°F (1.1°C)/hour down.
The effective value of room temperature droop is the difference between the cut-in points at the 20% duty cycle and the 80% duty cycle.
A.14 STATIC TEMPERATURE ACCURACY
The static temperature measurement accuracy shall be determined by holding the unit
in a temperature chamber for one hour at 70°F (21°C), and comparing the display
value to the chamber setpoint [70°F (21°C)].
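
A minimal sketch of how the A.13 and A.14 quantities quoted above could be computed from bench measurements (the function names and sample readings are illustrative assumptions; only the limits come from the earlier slide):

```python
# Droop and static-accuracy quantities as described in the quoted annex text.
def room_temp_droop(cut_in_20pct_f, cut_in_80pct_f):
    # Effective droop: difference between cut-in points at the 20% and 80%
    # duty cycles (A.13.3 heating / A.13.4 cooling).
    return abs(cut_in_20pct_f - cut_in_80pct_f)

def static_accuracy_error(displayed_f, chamber_setpoint_f=70.0):
    # Static accuracy: displayed value vs. the 70 F chamber setpoint after
    # the one-hour soak (A.14).
    return displayed_f - chamber_setpoint_f

# Hypothetical bench readings, checked against the limits cited earlier
# (droop < 1.5 F, static accuracy within +/-1 F):
droop = room_temp_droop(cut_in_20pct_f=69.2, cut_in_80pct_f=68.1)
error = static_accuracy_error(displayed_f=70.6)
print(droop < 1.5, abs(error) <= 1.0)   # -> True True
```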
 
Temperature accuracy – discussion
 
Absolute accuracy is not necessary – it’s the systematic differences
that are important. But vendors are best equipped to
correct for those
 
Running parking lot
 
Zoned systems?  Usually not integrated.  Multiple
systems in one home?  Ask for statistics about how
common this is.
Definition of a “product” – e.g. enrollment in peak control
service makes it a different product
Verification and gaming the system?
Does the customer base bias the metric results, aside
from the qualities of the products?
Add on today’s parking lot items…
 
Contact Information
 
Abigail Daken
EPA ENERGY STAR Program
202-343-9375
daken.abigail@epa.gov
 
Doug Frazee
ICF International
443-333-9267
dfrazee@icfi.com