Type-2 Complexity

Concepts and tools from Theoretical Computer Science are regularly applied to discrete problems: as an algorithmic foundation supporting software development, performance prediction, and optimality proofs. Numerical Engineering, on the other hand, largely employs recipes and methods whose correctness and efficiency are demonstrated empirically. Mere computability, as investigated by classical Type-2 Theory, is however too coarse for practical relevance. Type-2 Complexity Theory therefore refines this approach from a resource-oriented perspective in the bit-cost model. 'Positive' results in this field provide rigorous analyses of fully specified algorithms with parameterized running-time and memory bounds over continuous universes such as real numbers, vectors, sequences, functions, subsets, and, more generally, separable metric spaces, approximated up to guaranteed absolute error $1/2^n$. 'Negative' results establish these algorithms optimal: by adapting adversary arguments from Information-Based Complexity (IBC) to the bit model, and/or relative to famous conjectures in discrete complexity theory, such as Harvey Friedman's, Ker-I Ko's, and Akitoshi Kawamura's celebrated connections between smooth maximization/integration/ODE solving and $\mathcal{P}$ vs. $\mathcal{NP}$ vs. #$\mathcal{P}$ vs. $\mathcal{PSPACE}$.

Selected References: