Talk Abstracts/Slides

Spring 2017

  1. Scalable methods for optimal control of systems governed by PDEs with random coefficient fields [pdf]
    by Prof. Omar Ghattas
    Friday, April 21, 2017
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: We present a method for optimal control of systems governed by partial differential equations (PDEs) with uncertain parameter fields. We consider an objective function that involves the mean and variance of the control objective. Conventional numerical methods for optimization under uncertainty are prohibitive when applied to this problem; for example, sampling the (discretized infinite-dimensional) parameter space to approximate the mean and variance would require the solution of an enormous number of PDEs at each optimization iteration. To make the optimal control problem tractable, we invoke a quadratic Taylor series approximation of the control objective with respect to the uncertain parameter field. This yields explicit expressions for the mean and variance of the control objective in terms of its gradient and Hessian with respect to the uncertain parameter. The risk-averse optimal control problem is then formulated as a PDE-constrained optimization problem whose constraints are the forward and adjoint PDEs that define these gradients and Hessians.
    The expressions for the mean and variance of the control objective under the quadratic approximation involve the trace of the (preconditioned) Hessian, and are thus prohibitive to evaluate for (discretized) infinite-dimensional parameter fields. To overcome this difficulty, we employ a randomized eigensolver to extract the dominant eigenvalues of the decaying spectrum. The resulting objective functional can now be readily differentiated using adjoint methods along with eigenvalue sensitivity analysis to obtain its gradient with respect to the controls. Along with the quadratic approximation and truncated spectral decomposition, this ensures that the cost of computing the risk-averse objective and its gradient with respect to the control--measured in the number of PDE solves--is independent of the (discretized) parameter and control dimensions, leading to an efficient quasi-Newton method for solving the optimal control problem. Finally, the quadratic approximation can be employed as a control variate for accurate evaluation of the objective at greatly reduced cost relative to sampling the original objective. Several applications with high-dimensional uncertain parameter spaces will be presented.
    This work is joint with Alen Alexanderian (NCSU), Peng Chen (UT Austin), Noemi Petra (UC Merced), Georg Stadler (NYU), and Umberto Villa (UT Austin).
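    A minimal sketch of the quadratic approximation behind the mean and variance expressions, assuming for illustration a Gaussian law m ~ N(m_0, C) for the uncertain parameter (the notation here is ours, not necessarily the speaker's):
    \[ Q(m) \approx Q(m_0) + \langle g, m - m_0 \rangle + \tfrac{1}{2} \langle m - m_0, H (m - m_0) \rangle, \]
    \[ \mathbb{E}[Q] \approx Q(m_0) + \tfrac{1}{2} \operatorname{tr}(HC), \qquad \operatorname{Var}[Q] \approx \langle g, C g \rangle + \tfrac{1}{2} \operatorname{tr}\big((HC)^2\big), \]
    where g and H are the gradient and Hessian of the control objective Q with respect to the parameter, evaluated at the mean m_0. The traces of HC and (HC)^2 are the quantities that the randomized eigensolver approximates through the dominant eigenvalues of the (preconditioned) Hessian.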
  2. Optimal power plant operation
    by Dr. Sergey Kolos
    Friday, April 7, 2017
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: The talk presents an example of the problems that quantitative analysts work on in energy markets. We describe a popular contract in power markets, the power plant tolling agreement. We first discuss how power plant constraints are reflected in the contract specification and what the methods and challenges are in pricing such contracts. We then review the contribution of last year's BP intern to techniques for accelerating the valuation of this contract.
  3. A triphasic constrained mixture model of engineered tissue formation under in vitro dynamic mechanical conditioning
    by Prof. Michael Sacks
    Friday, March 24, 2017
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: While it has become axiomatic that mechanical signals promote in vitro engineered tissue formation, the underlying mechanisms remain largely unknown. Moreover, efforts to date to determine parameters for optimal extracellular matrix (ECM) development have been largely empirical. In the present work, we propose a two-pronged approach involving novel theoretical developments coupled with key experimental data to develop a better mechanistic understanding of the growth and development of dense connective tissue under mechanical stimuli. To describe cellular proliferation and ECM synthesis, which occur at rates of days to weeks, we employ mixture theory to model the construct constituents as a nutrient-cell-ECM triphasic system, their transport, and their biochemical reactions. Dynamic conditioning protocols with frequencies around 1 Hz are described with multi-scale methods to couple the dissimilar time scales. Enhancement of nutrient transport due to pore fluid advection is upscaled into the growth model, and the spatially dependent ECM distribution describes the evolving poroelastic characteristics of the scaffold-engineered tissue construct. Simulation results compare favorably to the existing experimental data and, most importantly, distinguish between static and dynamic conditioning regimes. The theoretical framework for mechanically conditioned tissue engineering (TE) permits not only the formulation of novel and better-informed mechanistic hypotheses describing the phenomena underlying TE growth and development, but also the exploration and optimization of conditioning protocols in a rational manner.
  4. Modeling Hurricane Storm Surge and Proposed Mitigation Systems in the Houston, TX Region
    by Prof. Clint Dawson
    Friday, March 10, 2017
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: Since the 2005-2008 hurricane seasons, there has been extensive effort to understand and predict the impacts of hurricane-induced flooding on low-lying regions in the Gulf of Mexico. After Hurricane Ike in 2008, which made landfall at Galveston, TX, the Severe Storms Prediction, Education and Evacuation from Disasters (SSPEED) Center was formed, including a consortium of major universities in Texas. The focus of the center was to study the socio-economic/environmental/structural impact and possible mitigation of hurricane storm surge in the Houston-Galveston region. As part of this effort, our group at the University of Texas performed extensive studies using the Advanced Circulation (ADCIRC) modeling framework. In this talk, we will discuss this framework, the validation of this model for historical hurricanes, and the application of the model to the study of potential storm mitigation systems, including structural and non-structural options. We will also discuss a number of other issues that arise in these types of studies, including political, environmental and socio-economic considerations.

Fall 2016

  1. Towards Predictive Oncology through Multi-scale Imaging and Multi-scale Modeling
    by Thomas Yankeelov (Professor, BioMedical Engineering, Dell Medical School, UT Austin)
    Friday, October 21, 2016
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: The ability to identify, early in the course of therapy, cancer patients who are not responding to a given therapeutic regimen is highly significant. In addition to limiting patients’ exposure to the toxicities associated with unsuccessful therapies, it would allow patients the opportunity to switch to a potentially more efficacious treatment. In this presentation, we will discuss ongoing efforts to use data available from advanced imaging technologies to initialize and constrain predictive biophysical and biomathematical models of tumor growth and treatment response.
  2. Discovering Causality in Data using Entropy [pdf]
    by Alex Dimakis (Associate Professor, Electrical and Computer Engineering, UT Austin)
    Friday, October 7, 2016
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: Causality has been studied under several frameworks in statistics and artificial intelligence. We will briefly survey Pearl's Structural Equation Model and explain how interventions can be used to discover causality. We will also present a novel information-theoretic framework for discovering causal directions from observational data when interventions are not possible. The starting point is conditional independence in joint probability distributions; no prior knowledge of causal inference is required for this lecture.
  3. Isogeometric Analysis: Past, Present, Future [pdf]
    by Thomas J.R. Hughes
    Friday, September 23, 2016
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: October 1, 2015 marked the tenth anniversary of the appearance of the first paper [1] describing my vision of how to address a major problem in Computer Aided Engineering (CAE). The motivation was as follows: Designs are encapsulated in Computer Aided Design (CAD) systems. Simulation is performed in Finite Element Analysis (FEA) programs. FEA requires the conversion of CAD descriptions to analysis-suitable formats from which finite element meshes can be developed. The conversion process involves many steps, is tedious and labor intensive, and is the major bottleneck in the engineering design-through-analysis process, accounting for more than 80% of overall analysis time; it remains an enormous impediment to the efficiency of the overall engineering product development cycle. The approach taken in [1] was given the pithy name Isogeometric Analysis. Since its inception it has become a focus of research within both the fields of FEA and CAD and is rapidly becoming a mainstream analysis methodology and a new paradigm for geometric design [2]. The key concept in the technical approach is a new paradigm for FEA, based on rich geometric descriptions originating in CAD, resulting in a single geometric model that serves as a basis for both design and analysis. In this talk I will describe areas in which progress has been made in developing improved Computational Mechanics methodologies to efficiently solve vexing problems that have been at the very least difficult, if not impossible, within traditional FEA. I will also describe current areas of intense activity and areas where problems remain open, representing both challenges and opportunities for future research (see, e.g., [3]).
    REFERENCES
    [1] T.J.R. Hughes, J.A. Cottrell and Y. Bazilevs, Isogeometric Analysis: CAD, Finite Elements, NURBS, Exact Geometry and Mesh Refinement, Computer Methods in Applied Mechanics and Engineering, 194, (2005) 4135-4195.
    [2] J.A. Cottrell, T.J.R. Hughes and Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA, Wiley, Chichester, U.K., 2009.
    [3] Isogeometric Analysis Special Issue (eds. T.J.R. Hughes, J.T. Oden and M. Papadrakakis), Computer Methods in Applied Mechanics and Engineering, 284, (1 February 2015), 1-1182.
  4. A fresh look at the Bayesian theorem from information theory [pdf]
    by Tan Bui (Aerospace Engineering and Engineering Mechanics, ICES, UT Austin)
    Friday, September 9, 2016
    Host: Amir Gholaminejad (on behalf of Prof. Babuska)
    Abstract: We construct a convex optimization problem whose first-order optimality condition is exactly the Bayes formula and whose unique solution is precisely the posterior distribution. In fact, the solution of our optimization problem includes the usual Bayes posterior as a special case and is therefore more general. We provide the construction, and hence a generalized Bayes formula, for both finite- and infinite-dimensional settings. We shall show that our posterior distribution, with the Bayes posterior as a special case, is optimal in the sense that it is the unique minimizer of an objective function. We provide a detailed and constructive derivation of the objective function using information theory and optimization techniques. In particular, the objective is a compromise between two quantities: 1) the relative entropy between the posterior and the prior, and 2) the mean squared error between the computer model and the observation data. As shall be shown, our posterior minimizes these two quantities simultaneously.
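    A hedged sketch of the variational characterization described above, in our notation and assuming a Gaussian noise model so that the misfit is a weighted mean squared error:
    \[ \pi_{\mathrm{post}} = \arg\min_{q} \Big\{ D_{\mathrm{KL}}(q \,\|\, \pi_{\mathrm{prior}}) + \mathbb{E}_{q}\big[ \tfrac{1}{2} \| F(m) - d \|^2_{\Gamma_{\mathrm{noise}}^{-1}} \big] \Big\}, \]
    where F is the computer model (parameter-to-observable map) and d the observation data. Setting the first variation of this convex functional to zero gives q(m) proportional to \pi_{\mathrm{prior}}(m) \exp(-\tfrac{1}{2} \| F(m) - d \|^2_{\Gamma_{\mathrm{noise}}^{-1}}), which is the usual Bayes formula.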

Spring 2015

    1. Bayesian Inversion for Large Scale Antarctic Ice Sheet Flow [pdf]
      by Omar Ghattas
      ICES, UT Austin on Friday, February 13, 2015
      Host: Hamidreza Arabshahi (on behalf of Prof. Babuska)
      Abstract: The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modeling ice sheet flow is the uncertain spatially-varying Robin boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping incompressible shear-thinning fluid, can be used to infer this uncertain basal boundary condition. We cast this ill-posed inverse problem in the framework of Bayesian inference, which allows us to infer not only the basal sliding parameters, but also the associated uncertainty. To overcome the prohibitive nature of Bayesian methods for large-scale inverse problems, we exploit the fact that, despite the large size of the observational data, they typically provide only sparse information on the model parameters. We show results for Bayesian inversion of the basal sliding parameter field for the full Antarctic continent and demonstrate that the work required to solve the inverse problem, measured in the number of forward (and adjoint) ice sheet model solves, is independent of the parameter and data dimensions.
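      One common way to make the low information content of the data concrete (a sketch under a linearized, Gaussian approximation; not necessarily the exact formulation used in the talk) is to write the posterior covariance as a low-rank update of the prior covariance,
      \[ \Gamma_{\mathrm{post}} \approx \big( H_{\mathrm{misfit}} + \Gamma_{\mathrm{prior}}^{-1} \big)^{-1} = \Gamma_{\mathrm{prior}} - \Gamma_{\mathrm{prior}}^{1/2} V_r \operatorname{diag}\!\Big( \frac{\lambda_i}{1 + \lambda_i} \Big) V_r^{T} \Gamma_{\mathrm{prior}}^{1/2} + \text{(remainder)}, \]
      where (\lambda_i, v_i) are the dominant eigenpairs of the prior-preconditioned data-misfit Hessian \Gamma_{\mathrm{prior}}^{1/2} H_{\mathrm{misfit}} \Gamma_{\mathrm{prior}}^{1/2}. Each eigenpair costs a fixed number of forward/adjoint solves, and the number of significant eigenvalues reflects the information content of the data rather than the parameter dimension, which is why the overall cost is dimension independent.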
    2. Parallel hierarchical algorithms for volume integral equations
      by George Biros
      ICES, UT Austin on Friday, January 30, 2015
      Host: Hamidreza Arabshahi (on behalf of Prof. Babuska)
      Abstract: I will consider the problem of evaluating volume integrals with weakly singular kernels in three dimensions. The evaluation of such volume integrals is a well understood problem. Volume integral equations can be used for solving boundary value problems, for example the Laplace, Stokes, and Helmholtz problems. Despite the significance of such methods, there exist no scalable, efficient implementations, and as a result their use by non-experts is somewhat limited. I will discuss the formulation, numerical challenges, and scalability of algorithms for volume potentials and present a new open-source library for such problems. I will compare its performance to other state-of-the-art codes and conclude with an example from computational fluid mechanics.
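      For concreteness, a representative example of the volume potentials in question (standard notation, assumed rather than taken from the talk) is the Newtonian potential for the Laplace problem in three dimensions,
      \[ u(x) = \int_{\Omega} \frac{f(y)}{4 \pi |x - y|} \, dy, \]
      which gives a particular solution of -\Delta u = f and whose kernel is weakly singular at x = y. Hierarchical schemes (tree codes and fast-multipole-type methods) evaluate such integrals at all target points in near-linear time instead of the quadratic cost of direct summation, which is what makes large 3D problems tractable.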

Fall 2014

      1. Bayesian parameter estimation in predictive engineering [pdf]
        by Damon McDougall
        ICES, UT Austin on Friday, December 5, 2014
        Abstract: Often one possesses a model: a discretised partial differential equation designed to capture some physical phenomenon. The parameters in this model may not be known precisely for a particular scenario of interest in which a prediction is required. A good example is fluid dynamics: experiments can be executed at low Reynolds number, and the parameters in a CFD model can be calibrated against the experiments to obtain parameter estimates with associated uncertainties. The model can then be run at a high Reynolds number and, using the parametric uncertainty, provide a prediction of a quantity of interest with uncertainty. We will give an overview of Bayesian parameter estimation in engineering-type scenarios, and we will explain how to do it. No prior knowledge of statistics is needed. Basic knowledge of optimisation would be helpful.
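        A bare-bones sketch of this workflow in generic notation (ours, not the speaker's): with model parameters \theta, calibration data d, and a prediction quantity of interest q,
        \[ \pi(\theta \mid d) \propto \pi(d \mid \theta)\, \pi(\theta), \qquad p(q \mid d) = \int p(q \mid \theta)\, \pi(\theta \mid d)\, d\theta, \]
        i.e., the low-Reynolds-number experiments update the prior on \theta, and the resulting posterior is pushed forward through the high-Reynolds-number model so that the prediction of q carries the parametric uncertainty rather than being a single point value.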
      2. Sparsity-aware sampling theorems and applications [pdf]
        by Rachel Ward
        Math Department/ICES, UT Austin on Friday, November 21, 2014
      3. Global Jacobian Mortar Algorithms for Multiphase Flow in Porous Media [pdf]
        by Ben Ganis
        Center for Subsurface Modeling, ICES, UT Austin on Friday, November 7, 2014
        Abstract: In this work, we consider a fully-implicit formulation for two-phase flow in a porous medium with capillarity, gravity, and compressibility in three dimensions. The method is implicit in time and uses the multiscale mortar mixed finite element method for a spatial discretization in a non-overlapping domain decomposition context. The interface conditions between subdomains are enforced in terms of Lagrange multiplier variables defined on a mortar space. There are two novelties in our approach: first, we linearize the coupled system of subdomain and mortar variables simultaneously to form a global Jacobian and eliminate variables by taking Schur complements; and second, we adapt a two-stage preconditioning strategy to solve the resulting formulation. This algorithm is shown to be more efficient and robust compared to a previous algorithm that relied on two separate nested linearizations of subdomain and interface variables. We also examine various upwinding methods for accurate integration of phase mobility terms near subdomain interfaces. Numerical tests illustrate the computational benefits of this scheme.
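        A schematic of the Schur-complement elimination mentioned above, in generic block notation (ours; the actual discrete operators are more involved):
        \[ \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} u \\ \lambda \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix} \;\Longrightarrow\; \big( D - C A^{-1} B \big)\, \lambda = g - C A^{-1} f, \]
        where u collects the subdomain unknowns and \lambda the mortar (interface Lagrange multiplier) unknowns. Because A is block diagonal over subdomains, applying the Schur complement requires only independent subdomain solves, and u is recovered afterwards from A u = f - B \lambda.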
      4. Manipulating and predicting viral evolution
        by Claus Wilke
        Professor, Integrative Biology, The University of Texas at Austin on Friday, October 24, 2014
        Abstract: Viruses are among the most rapidly evolving organisms. They frequently adapt to new hosts (leading to emerging diseases), they evolve resistance to treatments or immune response, and they can recover quickly when attenuated with deleterious but not lethal mutations. In this talk, I will discuss several computational approaches to predict how viruses may evolve, to manipulate them such that their adaptive potential is limited, and to identify important sites in viral proteins. I will specifically talk about host-range shifts in New-World arenaviruses and about designing attenuated viruses by codon-deoptimization.
      5. Predictability of Coarse-Grained Models of Atomic Systems in the Presence of Uncertainty [pdf]
        by J. Tinsley Oden
        Director, ICES, The University of Texas at Austin on Friday, October 10, 2014
        Abstract: For over half a century, the analysis of large systems of atoms in problems ranging from polymer chemistry, materials science, nanomanufacturing, and biological systems to medicine, drug design, and atomic physics has resorted to the use of so-called coarse-grained models, in which groups of atoms are aggregated into superatoms, beads, or molecules to reduce the size and complexity of the system. But are such coarse-grained models valid? Can they be used to predict important features of the system? Do they properly account for uncertainties in the data, the parameters, or the model selection? We explore these questions in this lecture and provide some suggested answers, all within a Bayesian framework.
      6. Modeling & Simulation Behind Every Day Products
        by Thomas Lange
        Affiliation: Procter & Gamble on Wednesday, October 1, 2014
        Abstract: What you don't see DOES make life a little better. One example is the modeling & simulation behind products you use every day. We take consumer products we use every day for granted. We take a shower, use a fresh towel, shampoo and condition our hair, wash our skin, maybe shave, use some SPF-15 facial moisturizer, brush our teeth, rinse our mouth, comb our hair, put on makeup, apply some deodorant, dab a little perfume or cologne, dress in clean clothes, change the baby's diaper, feed the dog and cat, start the dishwasher, throw in a load of laundry ... and go out to meet, greet, & make the world a little better. P&G products, for 175 years, have been part of that day. Today, through brands like Pantene, Head & Shoulders, Gain, Gillette, Crest, Oral B, Clairol, Covergirl, Hugo Boss, Pampers, Charmin, Prilosec, Iams, Cascade, & Tide, we try to make that day's start just a little better. What makes those brands memorable is that they work. The first moment of truth for our brands is at the point of purchase. They must be memorable and worth it during a usually hurried shopping trip (who really likes to shop for these things?). What makes them worth it and memorable is that they have to work at the second moment of truth, when you use them to start your day. There is an amazing amount of Science and Engineering behind those products. Unlike airplanes or cars or high-tech electronics, we can't advertise that. It might scare people if we had some nerdy-looking engineer in a white coat standing in front of an enormous machine that made something like toilet paper (Charmin). That machine IS what makes the paper soft and strong and affordable. But when we advertise, we used to use Mr. Whipple, the friendly, quirky grocer, and today we use a cartoon bear. The unfortunate consequence, from an engineer's perspective, is that this leaves the impression that everyday goods are low tech, when the challenges our Scientists and Engineers face every day ARE Rocket Science hard. Think about the contradictions everyday products face. We need to make paper that dissolves when wet but is strong when dry (Charmin) ... BUT make paper towels (Bounty) that are absorbent but VERY strong when wet! We need to make diapers that are soft and absorbent -- but fit babies like pants. We need laundry treatments that are concentrated but easy to use, remove stains, and protect fabrics (including cloth dyes). We want toothpaste that dispenses easily but stays on the brush. We need containers that never leak but open easily, don't break when dropped, but have a minimum of plastic to recycle. Perhaps most importantly, all these products & their daily use must be a good value. We will look at a few examples of where computers are making our lives a little better. For years, we have been trying to make wash day easier. Some may still remember SALVO tablets, which were very convenient, but they ended up in pockets un-dissolved. Although we just could not fix that, we finally solved the contradiction of easy and not messy to use, but still dissolves easily, with Tide PODs. It required sophisticated computer simulation to guide manufacturing machines to make sure that could not happen -- Rocket Science.
      7. Financial Mathematics [pdf]
        by Thaleia Zariphopoulou
        Department of Mathematics, The University of Texas at Austin on Friday, September 12, 2014
        Abstract: Optimal portfolio construction is one of the fundamental problems in both Financial Mathematics and the finance industry. Despite its importance, however, there are many limitations in the existing investment models as well as a serious disconnection between academic research and investment practice. In this talk, I will discuss these issues and will also present a new theory aiming at bridging the normative and descriptive approaches in portfolio choice.

Spring 2014

      1. New Trends in Cardiac Valve and Cardiovascular Modeling
        Michael Sacks
        Affiliation: Director, ICES Center for Cardiovascular Simulation, January 24, 2014
        Abstract: Our Center has pioneered the development and application of morphologically-driven constitutive models for heart valve tissues. Our current work focuses on mitral valve (MV) repair, since recent long-term studies have indicated that excessive tissue stress and the resulting strain-induced tissue failure are important etiologic factors leading to the recurrence of significant mitral regurgitation (MR) after repair. In the present work, we have developed a high-fidelity computational framework, incorporating detailed collagen fiber architecture, accurate constitutive models for soft valve tissues, and micro-anatomically accurate valvular geometry, for simulations of functional mitral valves, which allows us to investigate the organ-level mechanical responses due to physiological loadings. This computational tool also provides a means, with some extension in the future, to help understand the connection between repair-induced altered stresses/strains and valve function, and ultimately to aid in the optimal design of MV repair procedures with better performance and durability. We are also extending this work in two directions. First, we are extending these studies to cellular deformation to link with the underlying mechanobiological responses of the constituent cellular population. Second, we seek to incorporate the entire MV model into a ventricular model, which is essential for understanding the underlying processes of coupled ventricular and MV dysfunction. Other related projects on cardiac tissue remodeling and engineered tissue will be presented.
      2. Simulation of Quantum Electronic Dynamics in Molecular Materials
        Peter J. Rossky
        Affiliation: ICES, The University of Texas at Austin, February 21, 2014
        Abstract: In order to develop a working chemical intuition about electronically active organic materials, and particularly with the goal of developing design principles for organic photovoltaic (solar cell) materials, it is imperative to understand the relationship between molecular-level structure and the electronic excited state phenomena that follow light absorption. These are primarily excited state energy migration and the formation of charge carriers. In this presentation, recent progress in simulating these processes using a mixed quantum/classical molecular dynamics approach will be described. I will describe the several components that must be drawn together to produce an accessible model description and the underlying physics that governs the dynamics, and then illustrate how these methods can be used to provide considerable insight into the underlying molecular optoelectronic processes.
      3. Creating the Quark Gluon Plasma at the LHC
        Christina Markert
        Affiliation: Physics Department, The University of Texas at Austin, March 7, 2014
        Abstract: In ultra-relativistic heavy-ion collisions a fireball of hot and dense nuclear matter is created. When the energy density inside the fireball is very high, liberation of partons is expected to occur and a new phase of matter, the Quark Gluon Plasma (QGP), is formed. I will give an introduction to the strong force and present the ALICE experimental setup at the LHC collider in Switzerland, which is used to investigate the properties of the QGP created in Pb-Pb heavy-ion collisions.
      4. Learning and Multiagent Reasoning for Autonomous Robots
        Peter Stone
        Affiliation: Department of Computer Science, University of Texas at Austin, March 28, 2014
        Abstract: Over the past half-century, we have transitioned from a world with just a handful of mainframe computers owned by large corporations, to a world in which private individuals have multiple computers in their homes, in their cars, in their pockets, and even on their bodies. This transition was enabled by computer science research in multiple areas such as systems, networking, programming languages, human computer interaction, and artificial intelligence. We are now in the midst of a similar transition in the area of robotics. Today, most robots are still found in controlled, industrial settings. However, robots are starting to emerge in the consumer market, and we are rapidly transitioning towards a time when private individuals will have useful robots in their homes, cars, and workplaces. For robots to operate robustly in such dynamic, uncertain environments, we are still in need of multidisciplinary research advances in many areas such as computer vision, tactile sensing, compliant motion, manipulation, locomotion, high-level decision-making, and many others. This talk will focus on two essential capabilities for robust autonomous intelligent robots, namely online learning from experience, and the ability to interact with other robots and with people. Examples of theoretically grounded research in these areas will be highlighted, as well as concrete applications in domains including robot soccer and autonomous driving.
      5. Phase-field Modeling of the Evolution of Material Interfaces
        Chad Landis
        Affiliation: Aerospace Engineering and Engineering Mechanics, UT Austin, April 4, 2014
        Abstract: Continuum phase-field models can be applied to describe the evolution of various types of material interfaces, including domain walls, grain boundaries, solidification or phase-change fronts, and fracture surfaces. The phase-field modeling approach introduces an order parameter that describes the variant or phase type within the material. For ferroelectrics and ferromagnetics this order parameter is usually chosen to be the polarization or magnetization, respectively. For shape memory alloys an order parameter that describes the orientation of martensite must be introduced. For fracture the order parameter is used to describe the amount of damage or brokenness that exists in the material. The order parameter can be thought of as a configurational variable describing the thermodynamic state of the material. Associated with such configurational variables is a thermodynamically work-conjugate micro-force. In this talk, I will present some of our work on phase-field modeling and discuss some of the physical and numerical modeling challenges.
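        As a generic illustration (not the speaker's specific model), the evolution of a non-conserved order parameter \varphi is commonly posed as a gradient flow of a free energy \Psi, in Allen-Cahn/Ginzburg-Landau form:
        \[ \Psi[\varphi] = \int_{\Omega} \Big( \psi(\varphi, \varepsilon) + \frac{\kappa}{2} |\nabla \varphi|^2 \Big)\, dV, \qquad \frac{\partial \varphi}{\partial t} = -M \, \frac{\delta \Psi}{\delta \varphi}, \]
        where \psi is a (typically multi-well) bulk energy density coupled to the strain \varepsilon, \kappa sets the interface thickness, and M is a mobility. The variational derivative \delta \Psi / \delta \varphi plays the role of the thermodynamically work-conjugate micro-force mentioned above.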
      6. Scalable Network Analysis
        Inderjit S. Dhillon
        Affiliation: The University of Texas at Austin, April 18, 2014
        Abstract: Unstructured data is being generated at a tremendous rate in modern applications as diverse as social networks, recommender systems, genomics, health care and energy management. Networks are an important example of unstructured data and may arise explicitly, as in social networks, or implicitly, as in recommender systems. These networks are challenging to handle; not only are they large-scale but they are constantly evolving, and many applications require difficult prediction tasks to be solved, such as link or ratings prediction. In this talk, I will discuss scalable solutions for a class of prediction tasks on large-scale networks, that involve algorithmic innovation in response to the demands of modern computer systems.
      7. Subgroup Reporting using Nonparametric Bayesian Inference
        Peter Mueller
        Affiliation: Math Department, UT Austin, May 2, 2014
        Abstract: We discuss Bayesian inference for subgroups in clinical trials. The key feature of the proposed approach is that we separate the decision problem of selecting subgroups for reporting from the probability model for prediction. For the latter we use a flexible nonparametric Bayesian model, while the decision problem is based on a parsimonious description of the subgroups and a stylized utility function.

Fall 2013

      1. Automated Predictions of Molecular Assemblies with Quantified Certainty
        Chandrajit Bajaj
        Affiliation: UT-ICES/CS, September 4, 2013
        Abstract: Most bio-molecular complexes involve three or more molecules. We consider the automated prediction of biomolecular structure assemblies, formulating it as the solution of a non-convex geometric optimization problem. The conformations of the molecules with respect to each other are optimized with respect to a hierarchical interface matching score. The assembly prediction decision procedure involves both search and scoring over very high dimensional spaces (O(6n) for n rigid molecules) and moreover is provably NP-hard. To make things even more complicated, predicting bio-molecular complexes requires the search optimization to include molecular flexibility and induced conformational changes as the assembly interfaces complementarily align. In this talk I shall first present a general approximation algorithm to predict multi-piece 3D assemblies, and then describe a provably polynomial time approximation scheme (PTAS) for the special case of predicting symmetric 3D spherical shell assemblies, given a constant number of primitive component molecules that make up the asymmetric unit. I shall then derive probabilistic measures (Chernoff-like bounds) and provide statistical certificates of accuracy for our assembly prediction in the presence of various molecular structural uncertainties. The spherical shell assembly solution utilizes a novel 6D parameterization (independent of the total number of individual molecules) of the search space and includes symmetric decorations of periodic and aperiodic spherical tilings.
      2. Modeling and Simulation of High Resolution Imaging Processes
        Grant Willson
        Affiliation: The Departments of Chemistry and Chemical Engineering, The University of Texas, Austin, TX, October 4, 2013
        Abstract: There has been a continuing and nearly frantic effort on the part of microelectronics manufacturers over the past several decades to make smaller and smaller devices. Companies that cannot keep pace with these advances quickly disappear from the marketplace, and sadly many with famous names like Siemens, Motorola, and Sony have fallen by the wayside. Photolithography, the process that has enabled the production of all of today's microelectronic devices, has now reached physical limits imposed by mass transport and kinetic issues. Efforts to push that technology to provide still higher resolution by the historical paths of wavelength reduction, increasing the numerical aperture, and reduction of the Rayleigh constant have been abandoned. Can device scaling continue? Various incredibly clever tricks based on chemical engineering principles have been devised that extend the resolution limits of photolithography, some of which are already in use in full-scale manufacturing. In addition, a very promising new methodology for printing these small structures has been demonstrated. These methodologies have enabled printing of structures that are smaller than a virus, structures comprised of so few molecules that stochastic variations in the chemistry and the mechanical properties are a concern. This concern can best be met by modeling and simulation based on experimental verification. A description of the new processes and a plea for help with the development of the models and simulations will be presented.
      3. Some aspects of surface and front homogenization
        Luis Caffarelli
        Affiliation: UT-Austin, Department of Mathematics, October 18, 2013
        Abstract: A typical example of front homogenization would be that of a flame propagating through a layered medium. At the scale of the medium, the flame will look wiggly, but at a large scale we can expect it to propagate like a front, with different speeds depending on direction (if some of the layers are non-combustible, the transversal speed will be much slower than the speed along the layers).
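        A standard way to model the situation described above (a sketch, not necessarily the formulation used in the talk) is level-set front propagation with an oscillatory normal speed: the front is a level set of G(x, t) satisfying
        \[ G_t + V\!\big( x / \epsilon \big) |\nabla G| = 0, \]
        and as the layer width \epsilon tends to zero the solutions converge to those of an effective equation
        \[ G_t + \bar{V}\!\big( \nabla G / |\nabla G| \big) |\nabla G| = 0, \]
        in which the homogenized speed \bar{V} depends only on the direction of propagation, capturing the direction-dependent large-scale behavior described in the abstract.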
      4. Modeling Failure in Ductile Materials
        Krishnaswamy Ravi-Chandar
        Affiliation: Department of Aerospace Engineering and Engineering Mechanics, UT-Austin, November 1, 2013
        Abstract: Ductile failure in structural materials has been a problem of longstanding interest, both from the fundamental scientific and applied technological perspectives. While different failure models, ranging from phenomenological to computational to mechanism-based micromechanical models, have been proposed and used over the past five decades, there is still an active debate concerning the predictive capabilities of these models. In this presentation, I will begin by describing the results of a multiscale experimental investigation which reveals that very large deformations occur under different stress states without measurable damage in the material at scales above the grain size; this brings about the need for proper calibration of the underlying plasticity models at strain levels that are much greater than those accessible in standard test procedures. The experiments are also used to provide a robust lower bound estimate for the onset of fracture. I will follow this with a description of a hybrid experimental-computational procedure for material modeling. The efficacy of the resulting constitutive and failure models will be demonstrated through two example problems that include nucleation and growth of cracks in complex structural configurations.
      5. Towards Validation for Predictions of Unobserved Quantities
        Todd Oliver
        Affiliation: ICES, The University of Texas at Austin, December 13, 2013
        Abstract: The ultimate purpose of most computational models is to make predictions, commonly in support of some decision-making process (e.g., for design or operation of some system). The quantities that need to be predicted (the quantities of interest or QoIs) are generally not experimentally observable before the prediction, since otherwise no prediction would be needed. Assessing the validity of such extrapolative predictions, which is critical to informed decision-making, is challenging. In classical approaches to validation, model outputs for observed quantities are compared to observations to determine if they are consistent. By itself, this consistency only ensures that the model can predict the observed quantities under the conditions of the observations. This limitation dramatically reduces the utility of the validation effort for decision making because it implies nothing about predictions of unobserved QoIs or for scenarios outside of the range of observations. This talk will describe recent research toward a framework enabling validation for predictions of such unobserved QoIs. The proposed process includes stochastic modeling, calibration, validation, and predictive assessment phases where representations of known sources of uncertainty and error are built, informed, and tested. A key aspect of the framework is that information known independently from the specific calibration and validation data being used is critical to enabling reliable extrapolation. Illustrative examples will be drawn from simple toy problems, including a spring-mass-damper system, as well as more complex problems, such as RANS turbulence modeling.
