
Fundamentals: Tensor spaces and numerical tensor calculus

This term's DK lecture Fundamentals of Numerical Analysis and Symbolic Computation is given by Prof. Wolfgang Hackbusch (Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany).

The lecture follows the author's book of the same title and consists of three parts.

In Part I, a quick introduction and preview are given, followed by the algebraic foundations: algebraic tensor spaces.

Part II is concerned with topological tensor spaces, more precisely Banach and Hilbert tensor spaces. Important terms are the crossnorms, in particular the projective norm and the injective norm. The underlying question is how the norm of the tensor space is related to the norms of the generating normed vector spaces; in fact, the norm of the tensor space is not fixed by the generating norms. An important link to linear (not multilinear) algebra is matricisation, which, e.g., allows one to define the rank of a tensor. The last subject of the second part is the 'minimal subspaces' associated with a tensor. These results are important for the approximation problems treated later.
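As a small illustration of matricisation, the following NumPy sketch (added for this summary, not part of the lecture material; the helper name matricise is made up here) flattens each mode of a tensor against the remaining modes; the matrix ranks of these unfoldings provide one notion of tensor rank.

```python
import numpy as np

def matricise(T, mode):
    """Mode-`mode` matricisation: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A rank-1 example tensor T = a (x) b (x) c; every unfolding then has matrix rank 1.
a, b, c = np.random.rand(4), np.random.rand(5), np.random.rand(6)
T = np.einsum('i,j,k->ijk', a, b, c)

ranks = [np.linalg.matrix_rank(matricise(T, m)) for m in range(T.ndim)]
print(ranks)  # expected: [1, 1, 1]
```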

Part III discusses the numerical aspects. The tool for any numerical treatment is a suitable data-sparse 'format'. In a certain sense, all these formats generalise the class of low-rank matrices to tensors. The lecture discusses three of them: the r-term format, the tensor subspace format, and the hierarchical tensor format (of which the TT format is a particular case). For each format one has to distinguish two aspects: the properties of the format itself and the approximation of general tensors by tensors of this format. A particular case of the latter is the truncation from a larger tensor rank to a smaller one. Such truncations are unavoidable for the tensor operations (e.g., addition, scalar product, Hadamard product, convolution, matrix-matrix and matrix-vector multiplication), since these operations increase the representation ranks. An additional technique is tensorisation, which, in the best case, allows one to approximate tensors of size n^d by data of size O(log(n^d)). This has important consequences for the discretisation of PDEs on Cartesian domains. Another important technique is the generalised cross approximation, which can be used to approximate (complicated) multivariate functions.
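To make the logarithmic compression claim concrete, here is a minimal NumPy sketch (an illustration chosen for this summary, not taken from the lecture): the 2^d samples of the exponential function factor exactly into a rank-1 tensor after tensorisation, so they can be stored with O(d) = O(log n) numbers instead of n = 2^d.

```python
import numpy as np

d = 10                        # order of the tensorised representation
n = 2 ** d                    # original vector length
v = np.exp(np.arange(n) / n)  # n samples of a smooth function on [0, 1)

# Tensorisation: reshape the length-2^d vector into an order-d tensor of size 2 x ... x 2.
T = v.reshape((2,) * d)

# Because exp((i+j)/n) = exp(i/n) * exp(j/n), T is a rank-1 tensor: it equals the
# outer product of d vectors of length 2, i.e. 2*d numbers instead of 2^d.
factors = [np.array([1.0, np.exp(2.0 ** (d - 1 - k) / n)]) for k in range(d)]
T_rec = factors[0]
for f in factors[1:]:
    T_rec = np.multiply.outer(T_rec, f)
print(np.allclose(T_rec, T))  # True: 2^d values represented by O(d) data
```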

 

Coordinates: 2012S, 327.101, 2 hours, 3 ECTS

Location: S2 416 (RICAM seminar room)

Lecture times:

Fri, 6 July, 9:00-12:15,

Mon, 9 July, 9:00-12:15 and 16:00-18:00,

Tue, 10 July, 9:00-12:15,

Wed, 11 July, 9:00-12:15 and 16:00-18:00,

Thu, 12 July, 9:00-12:15,

Fri, 13 July, 9:00-12:15.