Wednesdays at 4 PM in Eckhart 202.
Chemical engineers use bubble chambers for certain chemical reactions by passing a gas through a liquid. The liquid is contained in a chamber whose floor is perforated by small holes through which the gas is driven by a pressure in excess of the hydrostatic pressure. Most studies of bubble formation concentrate on a single orifice, and seek a local formulation in terms of an incompressible liquid with a free surface. The difficulty is that Bernoulli's equation is insensitive to the mean pressure, and a local analysis fails. I present a new local formulation that shows how to include the far-field effects to remove the arbitrariness in the mean pressure. This new formulation is used to study the effects of surface tension on the linearized behavior of the free surface at the orifice.
Many physical systems have a Hamiltonian structure in terms of Lie-Poisson brackets. In particular, for systems of several variables the corresponding Lie-Poisson bracket is built from a Lie algebra extension. We give a classification of these brackets, which involves finding a set of normal forms, inequivalent under coordinate transformations. This is achieved with the techniques of Lie algebra cohomology. For extensions of order less than five, we find that the number of normal forms is small and they involve no free parameters. A special extension, known as the Leibniz extension, is the unique "maximal" extension.
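For reference, the basic object here is standard (a textbook form, not a formula specific to this talk): for a Lie algebra with structure constants $c^{k}_{ij}$, the Lie-Poisson bracket of two functions $F$, $G$ of the dual variables $\xi$ reads

$$\{F, G\}(\xi) \;=\; c^{k}_{ij}\, \xi_k \,\frac{\partial F}{\partial \xi_i}\,\frac{\partial G}{\partial \xi_j}.$$

The classification problem described above concerns the structure constants that arise when the underlying algebra is an extension of a smaller algebra.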
Multigrid iterative methods are one of the most significant developments in the numerical solution of partial differential equations in the last twenty years. Multigrid methods have been analyzed from several diverse perspectives, from Fourier-like spectral decompositions to approximate Gaussian Elimination, with each perspective yielding new insight. In this lecture we will introduce the multigrid method, and survey several of the tools used in its analysis.
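As a concrete anchor for the subject of the lecture, here is a minimal two-grid correction cycle for the 1D Poisson problem (an illustrative sketch of the standard algorithm, not code from the talk): weighted-Jacobi smoothing, full-weighting restriction of the residual, an exact coarse-grid solve, and linear-interpolation prolongation.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def two_grid_cycle(u, f, h, sweeps=3):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps)
    r = residual(u, f, h)
    # full-weighting restriction of the residual to the coarse grid
    rc = np.zeros(len(u) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # exact solve of the coarse error equation A_c e_c = r_c
    nc, hc = len(rc) - 1, 2 * h
    A = (np.diag(2 * np.ones(nc - 1))
         - np.diag(np.ones(nc - 2), 1)
         - np.diag(np.ones(nc - 2), -1)) / hc**2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # linear-interpolation prolongation of the error, then correct
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, sweeps)

# solve -u'' = pi^2 sin(pi x) on [0,1]; exact solution is u = sin(pi x)
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(5):
    u = two_grid_cycle(u, f, 1.0 / n)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small: algebraic error is driven below discretization error
```

A full multigrid method replaces the exact coarse solve with a recursive call, giving the familiar V-cycle.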
We are developing a framework for adaptive and parallel computation that is capable of (i) generating three-dimensional unstructured meshes of tetrahedral elements, (ii) automatically refining and coarsening these meshes, (iii) partitioning the computation into subdomains that may be processed in parallel, and (iv) maintaining a balanced parallel computation through element migration. Parallel tools for handling computation on heterogeneous computers involve descriptions of the problem at partition, process, and computer levels. Partitioning based on an octree decomposition of the space-time domain of the partial differential system is efficient and is linked to our mesh generation procedures. Predictive strategies for anticipating load imbalances reduce migration times and provide partitions that maintain balance for a greater portion of the solution process. Applications involving the solution of transient compressible flows using a discontinuous Galerkin method will be presented.
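The octree-based decomposition mentioned above can be sketched in a few lines. The following toy partitioner (hypothetical names, and 3D points standing in for a space-time mesh) splits a box until each leaf holds at most `capacity` points, then assigns leaves to partitions in traversal order so that point counts stay balanced:

```python
import numpy as np

def build_octree(points, lo, hi, capacity=8):
    """Recursively split the box [lo, hi] into octants until each leaf
    holds at most `capacity` points; returns a list of (lo, hi, points)."""
    if len(points) <= capacity:
        return [(lo, hi, points)]
    mid = 0.5 * (lo + hi)
    leaves = []
    for octant in range(8):
        clo, chi = lo.copy(), hi.copy()
        mask = np.ones(len(points), dtype=bool)
        for axis in range(3):
            if (octant >> axis) & 1:       # upper half along this axis
                mask &= points[:, axis] >= mid[axis]
                clo[axis] = mid[axis]
            else:                          # lower half along this axis
                mask &= points[:, axis] < mid[axis]
                chi[axis] = mid[axis]
        leaves += build_octree(points[mask], clo, chi, capacity)
    return leaves

def greedy_partition(leaves, nparts):
    """Assign leaves (in traversal order) to partitions, balancing point counts."""
    target = sum(len(p) for _, _, p in leaves) / nparts
    parts, current, filled = [[] for _ in range(nparts)], 0, 0
    for leaf in leaves:
        if filled >= target * (current + 1) and current < nparts - 1:
            current += 1
        parts[current].append(leaf)
        filled += len(leaf[2])
    return parts

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
leaves = build_octree(pts, np.zeros(3), np.ones(3))
parts = greedy_partition(leaves, 4)
counts = [sum(len(p) for _, _, p in part) for part in parts]
print(counts)  # roughly equal point counts per partition
```

A production partitioner would traverse the octree along a space-filling curve and migrate elements incrementally rather than repartition from scratch, but the balancing idea is the same.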
The present work lies within the framework of nondestructive testing of structures by means of thermal (or electrical) boundary measurements. The occurrence of defects such as cracks breaks one major feature, the reciprocity principle (known as the Maxwell-Betti principle in mechanics), which is due to the symmetry of the Laplace operator governing the direct problem. We take advantage of this lack of reciprocity to build semi-explicit algorithms, in the case of 3D planar cracks (or 2D line-segment ones), working in two steps: the first yields, by explicit computations, the plane (or line) hosting the cracks. In the second step, the recovery of the cracks, which happen to be the exact support of the interior discontinuity of the temperature, is performed by expanding the latter on a basis such as the Fourier basis. No finite element computation is needed, the Fourier coefficients being computed by simple integrations on the boundary.
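The reciprocity principle underlying the method can be illustrated numerically: for defect-free data, the reciprocity gap $RG(v) = \int_{\partial\Omega} (v\,\partial_n u - u\,\partial_n v)\,ds$ vanishes for every harmonic test function $v$; a crack makes it nonzero. A sketch with synthetic crack-free data on the unit circle (the discretization and variable names are illustrative, not the authors'):

```python
import numpy as np

# boundary of the unit disk, sampled uniformly
n = 400
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

# synthetic defect-free measurements from the harmonic field u = x^2 - y^2
u = x**2 - y**2
du_dn = 2 * x * np.cos(theta) - 2 * y * np.sin(theta)  # grad(u) . n with n = (cos, sin)

# harmonic test function v = x*y and its normal derivative on the circle
v = x * y
dv_dn = y * np.cos(theta) + x * np.sin(theta)

# reciprocity gap: boundary integral of (v du/dn - u dv/dn), trapezoid rule
ds = 2 * np.pi / n  # arc-length element on the unit circle
rg = np.sum(v * du_dn - u * dv_dn) * ds
print(rg)  # ~ 0: no crack means no reciprocity gap
```

With cracked data the same boundary integral, evaluated against well-chosen test functions, produces the explicit quantities (normal direction, then Fourier coefficients of the temperature jump) used by the two-step algorithm.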
Shells are three-dimensional objects. The purpose of dimension reduction in the linear theory of shells is to obtain a two-dimensional problem which is computationally less expensive. The thickness of the shell, t, becomes a parameter. Numerical difficulties arise when t is small, i.e., the shell is (effectively) thin. The relative error in the energy norm behaves as K(t)(h/L)^p, where h is the mesh spacing, p is the degree of the finite elements, and K(t) is a locking factor, which may diverge as t -> 0. Since in the worst case K(t) ~ 1/t, the use of high-order finite elements, the p-method, is a natural choice. The asymptotic character of shells varies, sometimes dramatically, depending on three factors: the geometry of the shell, the kinematical constraints imposed (that is, the boundary conditions), and the type of load applied. This behaviour inspired E. Ramm to call shells "The Primadonna of Structures". For any shell problem the three essential questions are: shell asymptotics, scale resolution and layers, and accuracy of shell models. We shall demonstrate the relevance of these questions in relation to a set of numerical experiments.
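To see why high order helps, take the worst-case error model above with K(t) = 1/t and ask what mesh spacing a fixed relative error requires. The sketch below is back-of-the-envelope arithmetic on that model; the 1% tolerance and t = 10^-3 are illustrative values, not figures from the talk:

```python
def mesh_size_needed(tol, t, p, L=1.0):
    """Largest h with (1/t) * (h/L)**p <= tol, from the locking error model."""
    return L * (tol * t) ** (1.0 / p)

for p in (1, 2, 4, 8):
    print(p, mesh_size_needed(tol=1e-2, t=1e-3, p=p))
# p = 1 forces h ~ 1e-5, while p = 8 gets by with h ~ 0.24
```

Raising p attacks the locking factor through the exponent, which is exactly why the p-method is the natural choice for thin shells.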
Nonlinear hyperbolic conservation laws have solutions with propagating `shocks' or discontinuities. Compressive shocks satisfy an `entropy condition' in which characteristics enter the shock on each side. Undercompressive shocks violate this condition. We show that scalar laws with non-convex fluxes and fourth order diffusion have stable undercompressive fronts, yielding such unusual behavior as double shock structures from simple jump (Riemann) initial data. Thermally/gravitationally driven thin film flow is described by such equations, and the signature of undercompressive fronts has been observed in recent experiments. Unlike compressive fronts, undercompressive film fronts are stable to fingering instabilities [1,2].

[1] A. L. Bertozzi, A. Muench, X. Fanton, and A. M. Cazabat, Phys. Rev. Lett. 81(23), December 7, 1998.
[2] A. L. Bertozzi, A. Muench, and M. Shearer, preprint.
Textured patterns are ubiquitous in nature and have been studied in well-controlled experimental systems. They are created via spontaneous symmetry breaking and consequently exhibit ``configuration-independent'' characteristics. A study of patterns should aim to describe and analyze these common aspects. A class of measures referred to as the ``disorder function'' $\bar\delta(\beta)$, which provides such a characterization, will be introduced. Analysis of patterns shows that $\bar\delta(\beta)$ is identical for multiple patterns generated under fixed external conditions. The behavior of $\bar\delta(\beta)$ for relaxation of patterns from initially random states will be presented. It exhibits distinct characteristics during the creation of domains and during coarsening.
We will describe our recent development in designing third and fourth order accurate weighted essentially non-oscillatory (WENO) schemes for solving hyperbolic conservation laws on arbitrary 2D triangulations. Numerical results will be shown to illustrate the capability of the method.
Experimental observations of "anomalous" optical patterns have been reported in the literature. These consist of concentric rings of bright or dark spots made up of modes with co-prime azimuthal indices. If one assumes, as is usually done, that the symmetry of the optical system is $O_2$, then these patterns cannot arise as the result of a co-dimension one bifurcation, and are thus called "anomalous". However, it can be shown that the presence of a hard diffracting object inside the optical cavity, for example an aperture or a waveguide, coupled with a linear polarizer, reduces the invariance of the system from $O_2$ to $D_2$. As a consequence, "anomalous" patterns appear naturally as the result of co-dimension one primary bifurcations. As an example of this type of system, I will discuss the bifurcations of a ring laser with a metallic duct and a polarizer inside the optical cavity.