2024-03-28T19:57:48Z http://www.research-collection.ethz.ch/oai/request
oai:www.research-collection.ethz.ch:20.500.11850/52 (2022-03-28T07:09:27Z; com_20.500.11850_15, col_20.500.11850_16)
Cirrus clouds and their geoengineering potential
Gasparini, Blaž
Lohmann, Ulrike
Peter, Thomas
Leisner, Thomas
http://hdl.handle.net/20.500.11850/52
info:doi:10.3929/ethz-b-000000052
climate modelling; cirrus clouds; geoengineering; aerosol-cloud interactions; cloud radiative effects; climate engineering
info:eu-repo/classification/ddc/500
Natural sciences
This thesis evaluates the option of modifying cirrus clouds to counteract part of anthropogenic global warming, an approach known as cirrus seeding. The feasibility of cirrus seeding is assessed with the general circulation model ECHAM6-HAM with coupled aerosol-cloud interactions. The warming effect of cirrus clouds on climate led part of the research community to the idea of artificially decreasing their frequency and optical thickness by seeding them with ice nucleating particles. Cirrus seeding relies on the competition between two distinct cirrus formation pathways: homogeneous nucleation of soluble aerosols and heterogeneous ice crystal nucleation with the help of solid ice nucleating particles. Seeding attempts to transform optically thicker, longer-lived homogeneously nucleated cirrus clouds into thinner, shorter-lived heterogeneously nucleated cirrus clouds.
The effectiveness of cirrus seeding depends on the correct representation of cirrus clouds in climate models. We evaluated the ECHAM6-HAM model against CALIPSO satellite data and found that the model reproduces the cirrus cloud occurrence fraction and ice water content well, but overestimates their extinction. Most importantly, we found that a large fraction of cirrus clouds formed in environments with liquid water, which cannot be modified by cirrus seeding. Moreover, modelled in situ cirrus formed by heterogeneous ice nucleation on dust aerosols over most of the world, limiting the cirrus geoengineering potential. Homogeneously nucleated cirrus were dominant only in the tropical tropopause layer and over mountain regions.
Seeding cirrus clouds with ice nucleating particles of radii below 10 μm did not lead to a significant cooling effect on climate: the radiative warming from increasing cirrus cloud cover and decreasing ice crystal radius neutralized the cooling gained from the decrease in ice crystal number. However, when seeding with larger ice nucleating particles, cirrus cloud cover, ice crystal number concentration, and ice water content decreased due to increased ice crystal sedimentation velocity, leading to a cooling of up to 0.7 °C globally. In addition, an idealized cirrus geoengineering setup with increased ice crystal sedimentation velocities at temperatures colder than -35 °C could fully counteract the temperature increase caused by a 1.5 x CO2 concentration. Furthermore, seeding with ice nucleating particles larger than 10 μm could counteract up to 55% of the temperature and precipitation damage caused by a 1.5 x CO2 increase, without causing negative side effects in any of the analysed regions. Setting aside its problematic engineering details and large modelling uncertainties, cirrus geoengineering was thus found to be an attractive method to counteract part of the anthropogenic warming.
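The key lever in the seeding result above, namely that larger ice nucleating particles produce faster-falling crystals, can be illustrated with a Stokes-regime settling estimate. This is a hedged back-of-the-envelope sketch, not the ECHAM6-HAM parameterization; the constants (air viscosity, ice density) are standard textbook values.

```python
def stokes_settling_velocity(radius_m, rho_ice=917.0, mu_air=1.7e-5, g=9.81):
    """Stokes-regime terminal fall speed of a small ice sphere (m/s).

    Valid only for small particles (low Reynolds number); the air
    density is neglected against the ice density.
    """
    return 2.0 * rho_ice * g * radius_m**2 / (9.0 * mu_air)

# Velocity scales with radius squared: a 10x larger seeded crystal
# falls ~100x faster, shortening the cirrus lifetime.
v_small = stokes_settling_velocity(1e-6)   # ~1 um crystal
v_large = stokes_settling_velocity(10e-6)  # ~10 um crystal
print(f"{v_small:.2e} m/s vs {v_large:.2e} m/s (ratio {v_large / v_small:.0f})")
```

The quadratic dependence on radius is why the seeded-particle size threshold near 10 μm matters so much for the sign of the net effect.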
ETH Zurich
2016
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/109 (2022-03-28T07:09:29Z; com_20.500.11850_15, col_20.500.11850_17)
Nonlinear Holography and Projection Moiré
Tatasciore, Philippe
http://hdl.handle.net/20.500.11850/109
info:doi:10.3929/ethz-b-000000109
ETH Zurich
1998-11
info:eu-repo/semantics/other
Habilitation Thesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/194 (2023-02-06T09:21:16Z; com_20.500.11850_15, col_20.500.11850_16)
Iron and Arsenic Cycling in Organic Freshwater Flocs
Thomas Arrigo, Laurel K.
Kretzschmar, Ruben (ORCID: 0000-0003-2587-2430)
Mikutta, Christian
Pfeiffer, Stefan
http://hdl.handle.net/20.500.11850/194
info:doi:10.3929/ethz-b-000000194
info:eu-repo/classification/ddc/333.7
info:eu-repo/classification/ddc/550
Natural resources, energy and environment
Earth sciences
ETH Zurich
2017
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/135 (2022-03-28T07:09:33Z; com_20.500.11850_15, col_20.500.11850_16)
Void safety
Kogtenkov, Alexander
Meyer, Bertrand
Furia, Carlo A.
Mazzara, Manuel
Meijer, Erik
Thiele, Lothar
http://hdl.handle.net/20.500.11850/135
info:doi:10.3929/ethz-b-000000135
void safety; null safety; static analysis; object initialization; operational semantics equivalence; Eiffel; certified attachment pattern; object-oriented language safety; source code safety benchmark; big-step operational semantics; formal methods; software modularity; null reference; null pointer dereferencing
info:eu-repo/classification/ddc/004
Data processing, computer science
Null pointer dereferencing is a well-known issue in object-oriented programming, and can be avoided by adding special validity rules to the programming language. However, introducing a single rule is not enough: the whole language infrastructure has to be considered. The resulting guarantees are called void safety.
The thesis reviews, in detail, the engineering solutions and migration efforts that enabled the transition from classic to void-safe code in multiple libraries and projects with millions of lines of code. Experience with the small details of the implementation can be an invaluable source of insight for researchers looking into making a language void safe.
The void safety rules can be divided into three major categories. The first is the extension of a regular type system with attached (non-null) and detachable (possibly-null) types. Generic programming opens the door to different interpretations of these types. The thesis defines base void safety properties for formal generic types and specifies void-safety-aware conformance rules.
The second category of rules ensures that newly created objects reach a stable state that maintains the type system guarantees. The thesis proposes two solutions for this object initialization issue and compares them to previous work. It formalizes the rules in the Isabelle/HOL proof assistant and establishes some of their properties. To ensure safety at the end of the object life cycle, it also specifies validity rules for finalizers. A number of examples demonstrate that the proposed solutions are of practical use and do not suffer from the limited expressiveness caused by the lack of additional annotations describing intermediate object states.
The third category of void safety rules covers the practical need to bridge the gap between attached and detachable types. The thesis proposes formal void safety rules for local variables in the context of an object-oriented language that do not require any marks to distinguish between attached and detachable types. It demonstrates the advantages of the annotation-free approach with benchmarks based on open-source code, and discusses implementation decisions and how they are reflected in the formal model.
The thesis concludes with a machine-checkable soundness proof for the rules involving local variables using the Isabelle/HOL proof assistant.
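The attached/detachable distinction described above has a close analogue in optional types with flow-sensitive narrowing. The sketch below is a hedged illustration in Python rather than Eiffel, the language the thesis targets: a None check on a local variable plays the role of a certified attachment pattern, after which the value may be dereferenced safely.

```python
from typing import Optional

class Account:
    def __init__(self, balance: int) -> None:
        self.balance = balance

def balance_of(acc: Optional[Account]) -> int:
    # 'acc' is detachable (possibly None); dereferencing it directly
    # would be rejected by a void-safe type checker.
    if acc is not None:
        # Inside this branch 'acc' is attached (provably non-None),
        # so the field access is safe: this is the role a certified
        # attachment pattern plays for local variables.
        return acc.balance
    return 0

print(balance_of(Account(42)), balance_of(None))
```

The class name and fields here are invented for illustration; the point is only that the validity rule is enforced per branch of the control flow, not per declaration.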
ETH Zurich
2017-01-31
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://creativecommons.org/licenses/by-nc-sa/4.0/
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
oai:www.research-collection.ethz.ch:20.500.11850/218 (2022-03-28T07:09:36Z; com_20.500.11850_15, col_20.500.11850_16)
The role of surface sublimation in the summer mass balance of glaciers in the subtropical semiarid Andes
Ayala Ramos, Alvaro Ignacio
Burlando, Paolo
Pellicciotti, Francesca
Pomeroy, John
McPhee, James
http://hdl.handle.net/20.500.11850/218
info:doi:10.3929/ethz-b-000000218
Glaciers; Snow; Semiarid Andes; Hydrology; Surface energy balance; Surface sublimation
info:eu-repo/classification/ddc/550
Earth sciences
The role of glaciers as storages of water resources is of primary importance in arid and semiarid mountain regions characterised by pronounced seasonality and long melting periods without significant precipitation. A robust understanding of the physical processes controlling the energy and mass balance of glaciers in these areas is thus crucial for the estimation of water resources and their seasonal variability, long-term trends and future changes.
It has been suggested that glaciers in dry regions are strongly affected by mass and energy losses associated with surface sublimation, but few studies quantify its influence at the glacier or catchment scale.
The main objective of this thesis is to understand and quantify the role of surface sublimation in the energy and mass balance of glaciers in dry environments, by means of point-scale and distributed physically-based energy and mass balance models. Complementary to this objective, this thesis also addresses three relevant topics in semiarid catchments related to the hydrological role of debris-free glaciers and the modelling of glacier ablation.
These topics are the runoff contribution of debris-free glaciers in comparison to that of the seasonal snowpack and debris-covered glaciers, the use of temperature-index models under sublimation-favourable conditions and the spatial distribution of near-surface air temperature over mountain glaciers during the ablation season.
As the study region, the subtropical semiarid Andes of North-Central Chile (29-34°S) is selected. This region is characterised by elevations up to almost 7000 m a.s.l., a dry environment, a large number of debris-free and debris-covered glaciers and a strong dependence of ecosystems and human activities on fresh water resources originating from snow and ice melt. In this region, it has been suggested that distributed, physically based models at the glacier and catchment scales can bridge the gap between regional studies based on remote sensing products and specific studies focused on point-scale process understanding. The presented analyses are conducted on seven glaciers for which a unique dataset of meteorological, glaciological and hydrological variables has been collected in the period 2008-2015. The study sites are grouped into two clusters, one in North Chile (29-30°S), where climatic conditions are particularly dry, and another in Central Chile (32-34°S), where the climate is more Mediterranean.
To achieve the proposed objectives, the collected field data are analysed and complemented with remote sensing products to force several hydrological, melt and energy balance models at different spatial scales. The main results of these analyses can be summarised as follows: i) At the integrated glacier scale, surface sublimation accounts for 6.6% of total ablation during a 2-month summer period in a selected case study on the Juncal Norte Glacier (33°S). Surface sublimation is negligible in comparison to melt at low-elevation, low-albedo sites, but dominates at high-elevation, wind-exposed sites, where it represents most of the total ablation. Negative latent heat fluxes, associated with sublimation, are among the largest sinks in the glacier energy balance and consistently reduce the energy available for melt. ii) Despite remarkably different spatial and temporal mass balance patterns, the total annual contribution to runoff of low-elevation debris-covered glaciers is similar to that of high-elevation debris-free glaciers. iii) At low-elevation sites, the performance of an enhanced temperature-index model is good compared to that of an energy balance model, and its parameters are transferable from one glacier to another and from season to season. However, its performance and parameter transferability tend to decrease with elevation, as energy losses modify the diurnal cycle of surface temperature and lower the correlation between melt and index variables. iv) During warm periods, the most relevant controls on near-surface air temperature over glaciers are off-glacier lapse rates and the advection of cold air by katabatic winds and of warm air by up-valley winds. A new air temperature distribution model including these processes has been developed and tested with positive results.
In relation to the main research question of this thesis, it is concluded that neglecting surface sublimation in the glacier mass balance has relatively small consequences for the simulation of one ablation season, but it might have large cumulative effects in long-term simulations, especially on glaciers in very dry environments, such as the Tapado and Guanaco glaciers in North Chile. Physically based (or physically oriented) distributed models, such as those presented in this study, have the potential to shed light on the dominant processes and the energy and mass fluxes in glacierised catchments, and on their spatial and temporal patterns. However, much work must still be devoted to i) developing model components that reproduce still poorly known processes, such as turbulent fluxes over penitente fields or ablation processes on debris-covered glaciers, ii) generating appropriate meteorological forcing fields, and iii) obtaining the corresponding on-site model validation data. The obtained results and developed methods are likely to be relevant for the scientific community of glaciologists and hydrologists, and for communities, decision-makers, water managers and engineers interested in glaciers and water resources in arid and semiarid regions, particularly those in the central regions of Chile and Argentina.
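The enhanced temperature-index model mentioned in result iii) combines an air-temperature term with a shortwave-radiation term. The sketch below follows a widely used formulation of this model class, M = TF*T + SRF*(1-albedo)*I for temperatures above a threshold; the parameter values are illustrative assumptions, not the calibrated values from the thesis.

```python
def enhanced_temperature_index_melt(temp_c, albedo, swin_wm2,
                                    tf=0.05, srf=0.0094, t_threshold=1.0):
    """Melt rate (mm w.e. per time step) from an enhanced
    temperature-index model.

    temp_c   : near-surface air temperature (deg C)
    albedo   : surface albedo (0-1)
    swin_wm2 : incoming shortwave radiation (W m^-2)
    tf, srf  : temperature and shortwave radiation factors
               (illustrative, not the thesis's calibrated values)
    """
    if temp_c <= t_threshold:
        return 0.0  # below the temperature threshold, no melt
    return tf * temp_c + srf * (1.0 - albedo) * swin_wm2

# Warm, low-albedo site vs a cold, high-albedo site: the second case
# produces no melt even under stronger radiation.
print(enhanced_temperature_index_melt(5.0, 0.3, 600.0))
print(enhanced_temperature_index_melt(0.5, 0.8, 900.0))
```

The shortwave term is what makes the model "enhanced" over a classical degree-day model, and it is also why parameter transferability degrades where energy losses (such as sublimation) decouple melt from air temperature.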
ETH Zurich
2017
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/225 (2022-03-28T07:09:37Z; com_20.500.11850_15, col_20.500.11850_16)
The Relationship between Knowledge Absorption and Innovation Outcomes
Seliger, Florian (ORCID: 0000-0002-6277-8235)
Egger, Peter
Wörter, Martin (ORCID: 0000-0003-4467-9134)
http://hdl.handle.net/20.500.11850/225
info:doi:10.3929/ethz-b-000000225
info:eu-repo/classification/ddc/330
info:eu-repo/classification/ddc/510
Economics
Mathematics
ETH Zurich
2017-06
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/236 (2022-03-28T07:09:39Z; com_20.500.11850_15, col_20.500.11850_16)
Leveraging Data Analytics Towards Activity-based Energy Efficiency in Households
Cao, Hông-Ân
Mattern, Friedemann
Nunes, Nuno Jardim
Bach Pedersen, Torben
http://hdl.handle.net/20.500.11850/236
info:doi:10.3929/ethz-b-000000236
info:eu-repo/classification/ddc/333.7
Natural resources, energy and environment
Aiming for sustainable development means reconsidering access to energy sources in industrialized countries, which, unlike emerging and newly developed countries, are not faced with contingency scenarios, so as to allow equal access to energy for all and to limit environmental degradation. The global penetration of renewable energy sources replacing fossil fuel and nuclear power plants means adjusting to stochastic energy production. The expected yield depends on very different weather and landscape conditions and represents a challenge for countries with continuous access to energy sources, where energy is often considered a public utility. Tracking wastage and improving the scheduling of the processes that consume energy would allow us to match energy demand and supply. This is particularly crucial during peak times, when meeting the high demand requires ramping up mostly unclean additional power plants or introduces power system instability.
The digitalization of the energy sector started with the roll-out of smart meters, which record electricity consumption at a finer granularity and are meant to replace the biannual or yearly dispatch of utility companies' employees to read the meter. Considerable research effort has been directed at analyzing aggregated loads from these smart meters or at developing algorithms for disaggregating households' total electricity consumption to isolate single appliances' traces. However, less focus has been placed on assessing the potential of sub-metered data for improving the energy efficiency of households. This was primarily linked to the fact that the necessary datasets were not widely available, owing to the difficulty and cost of instrumenting households to acquire consumption data from appliances. The objective of this thesis is to investigate how to leverage and improve existing disaggregated datasets to develop data-driven techniques that improve the energy efficiency of residential homes.
Starting from smart meter data, we segmented households into groups with similar electricity consumption patterns based on their peak consumption, to identify harmful consumption patterns from the perspective of utility companies, which could then launch targeted mitigation campaigns. However, improving energy efficiency in the residential sector requires changing individuals' relationship towards their electricity consumption. These behaviors are closely related to the activities that are carried out throughout the day, which can be supported by the usage of consumer electronics such as appliances. Therefore, we turned to analyzing the behaviors inside households that triggered the usage of electricity by studying a large disaggregated dataset, and developed learning techniques to extract activity patterns. We first addressed the challenge of distinguishing when appliances are actively used by households' residents from when they are off or idle and incurring standby consumption, by developing GMMthresh, an automatic thresholding method that is agnostic of the appliance's type, brand and model and instead relies on the statistical distribution of its power consumption.
Because existing datasets lack the event-based and activity labels needed to validate our learning technique, we leveraged crowdsourcing concepts to produce an expert-annotated dataset that enriches the existing ones, through our Collaborative Annotation Framework for Energy Datasets (CAFED). We conducted two in-depth studies to quantify the performance of regular users against expert users in labeling energy data on CAFED, and provided analysis tools and methods that generalize to crowdsourcing systems for improving the quality of workers' contributions. Using the expert-annotated labels, we validated GMMthresh. We then developed a method for learning temporal association rules that identify activities involving the usage of appliances within households. Our pipeline includes our thresholding algorithm and a novel search algorithm that determines time windows for the association rules efficiently and in a data-driven manner.
The contributions of this thesis rely on exploiting energy data and developing novel techniques for identifying activity patterns and their scheduling, which could become part of an ambient intelligence system that smartens existing homes. The methods we developed are not restricted to energy research: they can be applied to other sensor data, for example from inertial sensors, which also require machine learning algorithms to filter background noise from actual movement. Similarly, our work on the crowdsourcing of time series opens new perspectives for extending the range of data that can be annotated by the crowd and provides design insights and mitigation techniques for improving the quality of labeling on collaborative platforms. Finally, our temporal association rule mining framework is not limited to energy time series, but can be applied to search for temporal windows and to understand the scheduling of any time series dataset.
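GMMthresh itself derives an on/off threshold from the statistical distribution of an appliance's power readings; the exact mixture-fitting procedure is specific to the thesis. As a hedged stand-in, the sketch below uses a simple two-cluster 1-D k-means to separate standby from active power and places the threshold midway between the cluster centres.

```python
def power_threshold(readings, iterations=50):
    """Split 1-D power readings (watts) into standby vs active use.

    A two-cluster 1-D k-means stands in for the mixture fit: the
    threshold is the midpoint between the two cluster means.
    """
    lo, hi = min(readings), max(readings)
    for _ in range(iterations):
        low_cluster = [x for x in readings if abs(x - lo) <= abs(x - hi)]
        high_cluster = [x for x in readings if abs(x - lo) > abs(x - hi)]
        lo = sum(low_cluster) / len(low_cluster)
        hi = sum(high_cluster) / len(high_cluster)
    return (lo + hi) / 2.0

# Synthetic trace: standby draw near 2 W, active cycles near 800 W.
trace = [2.1, 1.9, 2.0, 2.2, 790.0, 810.0, 805.0, 2.0, 798.0]
thr = power_threshold(trace)
active = [w for w in trace if w > thr]
print(round(thr, 1), len(active))
```

Like GMMthresh, this needs no knowledge of the appliance's type, brand or model; only the shape of its consumption distribution matters.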
ETH Zurich
2017-06
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/241 (2022-03-28T07:09:41Z; com_20.500.11850_15, col_20.500.11850_16)
Search for High-Mass Diphoton Resonances in Proton-Proton Collisions at 13 TeV and Radiation Studies for Calorimetry at the High-Luminosity LHC
Quittnat, Milena Eleonore
Dissertori, Günther
Wallny, Rainer
http://hdl.handle.net/20.500.11850/241
info:doi:10.3929/ethz-b-000000241
Particle Physics; LHC Physics; Exotica; CMS Experiment; Diphoton Resonances; Calorimetry; Electromagnetic Calorimeter; Inorganic Scintillator; FLUKA; HL-LHC Upgrade; Radiation Studies
info:eu-repo/classification/ddc/530
Physics
In this dissertation, two different topics are addressed, both part of the main areas of research of modern high-energy physics experiments at the Large Hadron Collider (LHC): a search for new physics and the development of new detectors.
The first part of this dissertation presents the search for high-mass diphoton resonances
in proton-proton collisions at a center-of-mass energy of 13 TeV with the Compact
Muon Solenoid (CMS) experiment. Particular attention is paid to the assessment of the
background. The results are interpreted in terms of spin-0 and spin-2 resonances with
masses between 0.5 and 4.5 TeV and widths, relative to the mass, between 1.4x10^(-4) and
5.6x10^(-2). Limits are set on scalar resonances produced through gluon-gluon fusion,
and on Randall–Sundrum gravitons. Two results are presented, both following the same
search strategy, but one employing a dataset of 3.3 1/fb, the other 16.2 1/fb. Both are
statistically combined with results obtained by the CMS collaboration at 8 TeV with
19.7 1/fb. For the combination with the dataset of 3.3 1/fb, a modest excess of events
compatible with a narrow resonance with a mass of about 750 GeV and a global significance
of 1.6 standard deviations is observed. This excess could not be confirmed with
the larger dataset of 16.2 1/fb. The production of Randall-Sundrum gravitons is excluded at leading order at 95% CLs up to 3.85 and 4.45 TeV for coupling parameters of 0.1 and 0.2, respectively.
These are the most stringent limits on Randall-Sundrum graviton production
to date.
For the High-Luminosity LHC (HL-LHC), the forward electromagnetic calorimeter
(ECAL) of CMS has to be replaced. A sampling calorimeter, using an inorganic scintillator
as an active medium, was one suitable option. In the second part of this dissertation,
Monte-Carlo simulations with the particle-physics toolkit FLUKA determine aspects of
the behavior of such a sampling calorimeter in the radiation environment of the upgraded
CMS detector at the HL-LHC. Measurements performed for LYSO, YSO and cerium
fluoride crystals, exposed to a proton fluence of up to 5x10^14 cm^(-2), are compared to
dedicated FLUKA simulations. The main drivers of their residual dose are determined.
It is found that LYSO and cerium fluoride crystals show similar levels of residual dose
as lead-tungstate. Based on these results, an extrapolation to the behavior of the sampling
calorimeter, located in the CMS detector, is performed. Characteristic parameters
such as the induced ambient dose, fluence spectra for different particle types and the
residual nuclei are studied, and the suitability of these materials for a future calorimeter
is surveyed. Particular attention is given to the creation of isotopes in an LYSO-tungsten
calorimeter that might contribute a prohibitive background to the measured signal.
The harsh radiation environment in CMS induces high levels of radioactivity in the exposed
materials. The radiological hazard to personnel during the upgrade of the electronic
system of the central part of ECAL for the HL-LHC is quantified.
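A diphoton resonance search like the one above reconstructs, for each event, the invariant mass of the photon pair, m = sqrt(2*E1*E2*(1 - cos(theta))) for massless photons, and looks for a localized excess over the smoothly falling background. The sketch below computes this mass from two photon energies and their opening angle; the example values are invented for illustration.

```python
import math

def diphoton_mass(e1, e2, opening_angle_rad):
    """Invariant mass (in the same units as the energies) of two
    massless photons with energies e1, e2 and the given opening angle."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle_rad)))

# Two back-to-back 375 GeV photons reconstruct to a 750 GeV resonance
# candidate, the mass region of the modest excess described above.
m = diphoton_mass(375.0, 375.0, math.pi)
print(round(m, 1))  # 750.0
```

Collinear photons (zero opening angle) give zero invariant mass, which is why the search is sensitive to the angular as well as the energy resolution of the electromagnetic calorimeter.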
ETH Zurich
2017
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/249 (2022-03-28T07:09:43Z; com_20.500.11850_15, col_20.500.11850_16)
Physical Understanding of Solar Irradiance in Ultraviolet and Radio Wavelengths
Tagirov, Rinat
Carollo, C. Marcella
Schmutz, Werner
Unruh, Yvonne C.
http://hdl.handle.net/20.500.11850/249
info:doi:10.3929/ethz-b-000000249
Radiative transfer; numerical methods; lambda-iteration; approximate lambda-operators; line formation; opacity; solar atmosphere; center-to-limb variation; eclipses; solar variability modelling; solar radio emission
info:eu-repo/classification/ddc/5
Science
Understanding solar and stellar brightness variability plays a crucial role in studies of the solar-stellar and solar-terrestrial connections. In particular, modeling solar brightness variations is important for understanding the role of the Sun in climate variability, while studying stellar variability allows one to better constrain the physics of stellar activity.
Since the launch of the NIMBUS-7 mission in 1978 the solar brightness has been continuously monitored and has been found to vary on all time scales on which it has ever been measured. Although the number of solar brightness datasets has been increasing in recent years, the observations alone do not provide sufficient means to understand the influence of solar radiation on climate, due to large uncertainties and gaps in the available datasets. This calls for the development of solar brightness modeling to complement the observational data.
The physics-based modeling of solar and stellar brightness variations relies on the spectra of magnetic features and the surrounding quiet regions in the solar and stellar atmospheres. These spectra are provided by radiative transfer codes. Currently, the radiative transfer codes oriented towards such modeling represent stellar atmospheres as one-dimensional structures.
Developing 1D radiative transfer codes is a challenging task due to the Non-Local Thermodynamic Equilibrium (NLTE) coupling of matter and radiation. As a first approximation one can assume full coupling, i.e. Local Thermodynamic Equilibrium (LTE), but by now substantial evidence has been acquired pointing to the inadequacy of this approximation. Hence, one of the focal points of solar and stellar physics today is the development of NLTE radiative transfer codes, which is the main goal of this thesis.
In Chapter 2 we present the NLTE Spectral SYnthesis (NESSY) code. The code was originally designed for modeling the spectra of hot stars with expanding atmospheres. This purpose predisposed the numerical scheme of the code to work in a way that is not efficient for spectrum synthesis under solar-like conditions, in which the expansion is much less pronounced. The aim of our work was to adjust the code so that it can handle both expanding and non-expanding cases. Such an adjustment was a significant step towards making NESSY efficiently applicable to the synthesis of spectra emerging from all kinds of stars. It required a complete change of the algorithms of the code responsible for the NLTE calculations. The new version of the code is very well suited for spectral synthesis over broad spectral ranges, which is required for modeling solar and stellar brightness variations. In the following chapters we demonstrate the capabilities of the code in the exemplary case of the Sun.
In Chapter 3 we apply the code to modeling the center-to-limb variations (CLVs) of solar brightness, which are a powerful diagnostic tool for constraining models of the solar atmosphere. They are also important for modeling solar brightness variations, especially on the time scale of solar rotation. We compare the CLVs modeled with NESSY to those derived from measurements of solar brightness variations during eclipses observed with the PREMOS instrument onboard the PICARD mission. We use the light curves of the three solar eclipses measured by the radiometers of PREMOS to derive CLVs in the UV, visible and IR parts of the solar spectrum. We show that in the visible and IR the modeled CLVs agree well with those derived from the eclipse observations, which proves that NESSY can reproduce not only the full-disk solar spectrum, but also the distribution of brightness across the solar disk. In the UV, the derived CLVs allow us to constrain the source of the so-called "missing opacities" and to make a step toward the resolution of this well-known problem, which arises from the lack of laboratory measurements of the lines constituting the UV solar spectrum in the 160 nm - 320 nm range.
Chapter 4 is devoted to the first ever physics-based reconstruction of solar brightness variability in the radio. This reconstruction became possible thanks to the changes to the code described in Chapter 2. NLTE effects are important for the formation of radiation at radio wavelengths, and therefore with NESSY we could for the first time consistently model the brightness variability of the Sun in the radio. We can model the variability from the UV (where the LTE assumption also fails) to the IR and in the radio without any empirical corrections. In Chapter 4 we take advantage of this capability and show how radio wavelengths can be used to reconstruct the variability of the entire solar spectrum from UV to IR.
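The keywords above mention lambda-iteration and approximate lambda-operators, the standard machinery behind NLTE codes such as NESSY. The toy below is a hedged scalar caricature, not the code's actual scheme: the radiation field is taken as J = lam * S with a constant 0 < lam < 1 standing in for the full lambda operator, and the two-level-atom source function S = (1 - eps) * J + eps * B is solved both by ordinary lambda iteration and by the accelerated (operator-split) update, which for this scalar case converges in a single step when the approximate operator is exact.

```python
def lambda_iteration(lam, eps, b, steps):
    """Ordinary lambda iteration for a scalar two-level-atom toy:
    J = lam * S and S = (1 - eps) * J + eps * B."""
    s = b  # start from the LTE value S = B
    for _ in range(steps):
        s = (1.0 - eps) * lam * s + eps * b
    return s

def accelerated_step(lam, eps, b, s, lam_approx):
    """One accelerated lambda iteration (ALI) step: the local part of
    the operator (lam_approx) is inverted, the rest is lagged."""
    rhs = (1.0 - eps) * (lam - lam_approx) * s + eps * b
    return rhs / (1.0 - (1.0 - eps) * lam_approx)

lam, eps, b = 0.99, 1e-3, 1.0
exact = eps * b / (1.0 - (1.0 - eps) * lam)       # fixed point
slow = lambda_iteration(lam, eps, b, 100)          # still far off
fast = accelerated_step(lam, eps, b, b, lam)       # exact in 1 step
print(exact, slow, fast)
```

For strongly scattering lines (lam close to 1, eps small) ordinary lambda iteration stalls, which is exactly why approximate-operator acceleration is indispensable in practical NLTE codes.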
ETH Zurich
2017
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted
oai:www.research-collection.ethz.ch:20.500.11850/269 (2022-03-28T07:09:44Z; com_20.500.11850_15, col_20.500.11850_16)
Modeling The Seismic Resilience Of Electric Power Supply Systems
Sun, Li
Stojadinovic, Bozidar (ORCID: 0000-0002-1713-1977)
Sansavini, Giovanni
Bruneau, Michel
http://hdl.handle.net/20.500.11850/269
info:doi:10.3929/ethz-b-000000269
info:eu-repo/classification/ddc/621.3
Electric engineering
The modern community is an organically assembled system of people, organizations, and infrastructures, as well as patterned interdependencies and interactions. The functioning of modern communities relies on the continuous production and distribution of essential goods and services, accomplished by large-scale, man-made, networked systems called infrastructures. Such infrastructures are termed critical if their incapacity or malfunction could have a devastating impact on the health, security, and social well-being of community inhabitants. As exemplified by many recent occurrences, critical infrastructure systems in diverse communities across the spectrum of wealth have not been sufficiently robust and have not recovered quickly enough after severe natural disasters, with long-lasting physical damage and technical failures causing significant hardships and economic losses. Against this backdrop, it is imperative to comprehensively investigate, understand and model the disaster resilience of critical community infrastructure systems.
Among such critical infrastructure systems, the Electric Power Supply System (EPSS) stands at the core of a modern community. Among many natural hazards, the earthquake hazard stands out as potentially the most devastating and the most difficult to predict. Therefore, this thesis is focused on modeling and assessment of seismic resilience of EPSS and the community it serves.
The study begins with a review and examination of the merits and drawbacks of current civil infrastructure seismic resilience modeling and assessment frameworks. An important common shortcoming is their focus solely on the supply capacity of the infrastructure systems. To overcome this shortcoming, a measure of EPSS-Community system functionality and seismic resilience is formulated by comparing the service supply provided by the EPSS to the Community with the service demand generated by the Community. The supply/demand approach to quantifying the seismic resilience of an EPSS-Community system is demonstrated using a virtual EPSS-Community system. A direct measure of the seismic resilience of the EPSS-Community system, the gap between electric power supply and demand, is proposed in this thesis. This measure is tracked from the time an earthquake occurs until the EPSS-Community system has recovered, yielding instantaneous and cumulative measures of resilience. One such instantaneous measure, the percentage of people without power (PPwoP) at any time after an earthquake, can serve as a societal measure of EPSS-Community system systemic resilience.
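The supply/demand measure described above can be illustrated with a minimal sketch. All function names, values, and the proportional mapping from unmet demand to unserved customers are illustrative assumptions, not taken from the thesis:

```python
# Hedged sketch of an instantaneous (PPwoP) and a cumulative
# supply/demand resilience measure. Numbers are hypothetical.

def ppwop(supply, demand):
    """Percentage of people without power at one time step, assuming
    unmet demand maps proportionally to unserved customers."""
    unmet = max(demand - supply, 0.0)
    return 100.0 * unmet / demand

# Illustrative post-earthquake trajectory (MW), one value per day.
demand_mw = [100.0] * 6
supply_mw = [30.0, 45.0, 60.0, 80.0, 95.0, 100.0]  # recovery path

# Instantaneous measure: PPwoP tracked over the recovery window.
trajectory = [ppwop(s, d) for s, d in zip(supply_mw, demand_mw)]

# Cumulative measure: total area of the supply/demand gap (MW-days).
cumulative_gap = sum(max(d - s, 0.0) for s, d in zip(supply_mw, demand_mw))

print(trajectory)      # PPwoP per day, falling to zero at full recovery
print(cumulative_gap)  # unserved demand accumulated over the window
```

In this toy trajectory the instantaneous measure falls from 70% to 0% as supply recovers, while the cumulative gap summarizes the whole recovery path in a single number.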
While the robustness of the EPSS-Community system is crucial for reducing the immediate impact of an earthquake, the post-earthquake recovery process is critical to the seismic resilience of the EPSS-Community system. This recovery process is case-specific, given the unique physical vulnerability characteristics of each EPSS and Community, and dynamic, given the interactions among different infrastructure systems, community sectors, and the political and economic governance structures put in place after the disaster. An agent-based model is developed in this thesis to capture these dynamic characteristics of the EPSS-Community system seismic recovery process. Two individual agents, the EPSS Operator and the Administrator, are specified using a set of parameters that define their individual behavior and interactions. The effect of the agent parameters and their interactions is identified in simulations of the seismic recovery process of a virtual EPSS-Community system using the supply/demand approach.
The post-earthquake restoration of a modern EPSS is contingent upon the post-earthquake serviceability of other critical infrastructure systems, in particular the serviceability of the transportation systems (TS) of the community. To investigate this interdependency among the community infrastructure systems, the virtual EPSS-Community system is expanded to include a transportation system, and a third agent, the TS Operator, is added to the model. The case studies demonstrate that the interplay among the different agents, as well as the interdependency between the civil infrastructure systems, determines the recovery path of the integrated EPSS-TS-Community system.
The community resources available for post-earthquake recovery are finite. A network-theoretical model is used to gauge the impact of the quantity of available repair resources and work crews on the seismic recovery of the EPSS-TS system. The case study simulation results clearly indicate that the rate of EPSS-TS system recovery is affected by the amount of available resources, but, importantly, also that an optimal distribution of the available resources between the EPSS and the TS can significantly reduce the system recovery time and, thus, increase its seismic resilience.
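The effect of distributing a finite crew budget between the two systems can be sketched with a toy model. The work amounts, the parallel-repair assumption, and the max-governed recovery time are illustrative simplifications, not the network-theoretical model of the thesis:

```python
# Toy illustration of repair-crew allocation between EPSS and TS.
# Work amounts are hypothetical crew-days of required repairs.

EPSS_WORK = 120.0   # crew-days of EPSS repairs
TS_WORK = 60.0      # crew-days of TS repairs
TOTAL_CREWS = 12    # fixed crew budget to split between the two systems

def recovery_time(epss_crews):
    """Days until both systems are repaired, assuming the two systems
    are worked on in parallel and the slower one governs."""
    ts_crews = TOTAL_CREWS - epss_crews
    return max(EPSS_WORK / epss_crews, TS_WORK / ts_crews)

# Brute-force search over all feasible splits (at least one crew each).
best_split = min(range(1, TOTAL_CREWS), key=recovery_time)
print(best_split, recovery_time(best_split))
```

Even this crude model shows the point made above: a poor split (e.g. nine crews on the EPSS) leaves the TS lagging and stretches the total recovery, while the balanced optimum finishes both systems simultaneously.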
The presented scientific findings lay the foundation for a comprehensive and integrated resilience assessment of the EPSS-Community system based on the proposed agent-based network-theoretical supply/demand framework. The model can be generalized within this framework by including all community infrastructure systems and refining their interactions, in order to investigate the interdependencies among the infrastructure systems and to optimize community governance actions. Including dynamic models of post-disaster community and infrastructure behavior, such as population movement, restructuring of the infrastructure, and the effects on the production and consumption of goods and services, would make it possible to examine how the disaster resilience of the integrated critical infrastructure systems shapes the long-term socio-economic development of communities.
ETH Zurich
2017
info:eu-repo/semantics/doctoralThesis
application/pdf
en
info:eu-repo/semantics/openAccess
http://rightsstatements.org/page/InC-NC/1.0/
In Copyright - Non-Commercial Use Permitted