## Joint seminar on Theoretical Computer Science

- August 11, 14h00-15h30, E3-1 #2452:
Dongseong Seon and Chansu Park (both KAIST interns from SNU)
*Topological View on Computational Shape Optimization*

By the sometimes so-called MAIN THEOREM of Recursive Analysis, a computable function must necessarily be continuous. We investigate this condition for example primitives in Shape Optimization: volume, perimeter, and convexity, considered as real-valued objective/constraint functionals on various classes of compact sets equipped with the Hausdorff metric.

- August 1, 14h00-15h30, E3-1 #2452:
Seungwoo Schin (KAIST)
*Computational Complexity of N-Body Problem*

The N-body problem is a classical problem in physics. In this talk, error bounds for the stepwise simulation of the n-body problem will be discussed. As a side result of the error-bound discussion, it can be shown that the n-body reachability problem is in PSPACE. Then, to show PSPACE-hardness, reductions from other problems known to be PSPACE-hard will be given. Specifically, the following problems will be discussed: 1) the ray-tracing problem with rational-coefficient reflecting obstacles (and, as a brainteaser, an equivalent problem in the game of Go, whose equivalence to the given problem is very straightforward), 2) the 1-body reachability problem with immobile rectangular obstacles, and 3) the 1-body reachability problem with uniformly charged plate obstacles; the latter two are discussed with a restriction on the impact position.

- June 22, 11h30-13h30, E3-1 #2452:
Iwan Koswara (KAIST)
*Holographic Algorithms*

In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them holographic because "their effect can be viewed as that of producing interference patterns among the solution fragments". The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram. Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. They have received notable coverage due to speculation that they are relevant to the P versus NP problem and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P. Holographic algorithms have some similarities with quantum computation, but are completely classical.

- June 1, 14h30-16h00, E3-1 #2452:
장문수 (KAIST)
*The Ford-Fulkerson Algorithm and its Analysis*
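
No abstract was given for this talk. As a minimal sketch of the technique in its title, here is a Ford-Fulkerson max-flow computation that finds augmenting paths by BFS (the Edmonds-Karp variant); the dict-of-dicts graph encoding is my own choice for illustration, not from the talk.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).
    capacity: dict of dicts, capacity[u][v] = capacity of edge u->v."""
    flow = 0
    # residual capacities, including zero-capacity reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: the flow is maximal
        # collect the path edges and find the bottleneck capacity
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        # augment: decrease forward, increase reverse residual capacities
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# small example: two paths s->a->t (cap 2) and s->b->t (cap 1)
g = {'s': {'a': 2, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 1}, 't': {}}
print(max_flow(g, 's', 't'))  # → 3
```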

- May 27, 14h30-16h00, E3-1 #2452:
Imran Fareed (KAIST)
*Algorithms for computing the 2D convex hull*
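
No abstract was given for this talk. One of the standard algorithms for the problem in the title is Andrew's monotone chain; the sketch below is my own illustration, not necessarily the variant presented.

```python
def convex_hull(points):
    """Andrew's monotone chain: returns the hull vertices in
    counter-clockwise order. O(n log n), dominated by the sort."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def half(iterable):
        chain = []
        for p in iterable:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()  # drop points that would make a non-left turn
            chain.append(p)
        return chain

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]  # endpoints are shared, drop duplicates

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# → [(0, 0), (2, 0), (2, 2), (0, 2)]
```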

- May 26, 16h30-18h00, E3-1 #2452:
Dongkyu Lee (KAIST)
*Algorithms for Parallel Computers*
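
No abstract was given for this talk. A textbook example of a parallel algorithm is the Hillis-Steele prefix sum; the sketch below (my own illustration, not necessarily from the talk) simulates its O(log n) parallel rounds sequentially.

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix sum in O(log n) parallel rounds.
    On a parallel machine each round's updates run simultaneously;
    here the rounds are simulated one after another."""
    out = list(xs)
    step = 1
    while step < len(out):
        # all updates in a round read the previous round's snapshot,
        # so on a PRAM they are independent and could run in parallel
        prev = list(out)
        for i in range(step, len(out)):
            out[i] = prev[i] + prev[i - step]
        step *= 2
    return out

print(inclusive_scan([3, 1, 4, 1, 5]))  # → [3, 4, 8, 9, 14]
```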

- May 25, 12h00-14h00, E3-1 #2452:
Jiwon Park (KAIST)
*Obstructing Visibilities with One Obstacle*

An obstacle representation of a graph G is a drawing of G in the plane with polygons called obstacles; two points are adjacent iff the straight line segment connecting them does not intersect any obstacles. Obstacle number of a graph is the smallest number of obstacles which allows an obstacle representation of the graph.
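
The adjacency condition above is a segment-visibility test. A minimal sketch of that test (my own illustration, not from the talk), assuming obstacles are simple polygons and ignoring degenerate grazing contact:

```python
def ccw(a, b, c):
    """Sign of the turn a -> b -> c (z-component of the cross product)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p, q, r, s):
    """True iff segments pq and rs properly intersect (interiors cross)."""
    d1, d2 = ccw(r, s, p), ccw(r, s, q)
    d3, d4 = ccw(p, q, r), ccw(p, q, s)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, obstacle):
    """True iff the segment pq misses every edge of the obstacle polygon,
    given as a vertex list; the two points are then adjacent in the
    obstacle representation."""
    n = len(obstacle)
    return not any(segments_cross(p, q, obstacle[i], obstacle[(i+1) % n])
                   for i in range(n))

square = [(1, 1), (2, 1), (2, 2), (1, 2)]   # a small square obstacle
print(visible((0, 0), (3, 2), square))  # → False (segment pierces the square)
print(visible((0, 0), (3, 0), square))  # → True  (segment passes below it)
```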

Even the class of graphs of obstacle number 1 is not completely known. There is a nice characterization of the graphs that have a representation with 1 convex obstacle: the non-double-covering circular arc graphs. It is also known that every outerplanar graph has a representation with 1 outside obstacle. As far as I know, these are all the known results about graphs of obstacle number 1. In this talk, the following new results are presented:

1) A smallest graph of obstacle number 2: it has 8 vertices, and this is tight. (The smallest graph of obstacle number 2 known so far had 10 vertices.)

2) Every graph of circumference at most 6 has an outside obstacle representation.

3) The class of graphs with an outside obstacle and the class of graphs without an outside obstacle are different. (This was one of the main questions on the obstacle number of graphs.)

- May 18, 16h30-18h00, E3-1 #2452:
김영석 (Tony) from KAIST's School of Computing
*The Knuth-Morris-Pratt (KMP) Algorithm for String Matching*
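
No abstract was given for this talk. As a minimal sketch of its subject, here is KMP string matching with the failure-function preprocessing; the code is my own illustration, assuming a non-empty pattern.

```python
def kmp_search(text, pattern):
    """Return all start indices where pattern occurs in text, in O(n + m)."""
    # failure function: fail[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # scan the text, never moving backwards in it
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]  # fall back along the failure function
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)   # full match ending at position i
            k = fail[k - 1]          # continue for overlapping matches
    return hits

print(kmp_search("abababca", "abab"))  # → [0, 2]
```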

- May 11, 16h00-17h30, E3-1 #3445:
Neil Immerman (University of Massachusetts)
*Towards Capturing Order-Independent P*

In Descriptive Complexity we characterize the complexity of decision problems by how rich a logical language is needed to describe the problem. Important complexity classes have natural logical characterizations; for example, NP is the set of problems expressible in second-order existential logic (NP = SOE), and P is the set of problems expressible in first-order logic plus a fixed-point operator (P = FO(FP)).
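
As a toy illustration of expressibility (my own example, not from the abstract): the property "G contains a triangle" is defined by a first-order sentence with three existential quantifiers, and over a finite graph such a sentence can be evaluated by letting each quantifier range over the vertex set.

```python
from itertools import permutations

# The first-order sentence
#   exists x exists y exists z . E(x,y) and E(y,z) and E(z,x)
# expresses "G contains a triangle"; over a finite structure it is
# evaluated by brute force, each quantifier ranging over the universe.
def has_triangle(n, edges):
    E = lambda u, v: (u, v) in edges or (v, u) in edges
    return any(E(x, y) and E(y, z) and E(z, x)
               for x, y, z in permutations(range(n), 3))

print(has_triangle(4, {(0, 1), (1, 2), (2, 0), (2, 3)}))  # → True
print(has_triangle(3, {(0, 1), (1, 2)}))                  # → False
```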

The latter characterization is over ordered graphs, i.e. the vertex set is a linearly ordered set. This is appropriate for computational problems because all inputs to a computer are ordered sequences of bits. Any ordering will do; we are interested in the order-independent properties of graphs. The search for order-independent P is closely tied to the complexity of graph isomorphism. I will explain these concepts and the current effort to capture order-independent P.

- Apr.15 14h00-16h00, E3-1 #4443:
Volker Betz (TU Darmstadt)
*Escape probabilities and stationary distribution of metastable Markov chains: theory and computation*

Metastability is often an obstacle to efficient computation or simulation of average or typical quantities of stochastic dynamical systems. In particular when there is no reversibility, and thus potential theory is not available, few methods exist. In this talk I will argue that escape probabilities are the right object to study in this case, and will show how they determine the metastable dynamics of finite Markov chains. This will lead to an efficient way of computing both the dynamics and the stationary distribution of metastable Markov chains.

- Apr.15 10h30-12h00, E3-1 #4443:
Florian Steinberg (TU Darmstadt and University of Tokyo)
*Computational complexity theory for spaces of integrable functions*

The framework of second-order representations was introduced by Kawamura and Cook in 2010 and provides a rigorous notion of computation over continuous structures. It also provides a notion of time- and space-restricted computation. Choosing a representation means specifying what information a program computing an element of the structure has to provide. We relate the existence of a reasonable choice of information on a compact metric space to a purely metric property of the space: the metric entropy of a compact metric space measures how fast the number of balls needed to cover the space grows as the radius of the balls decreases. We show that the optimal running time of the metric is inherently connected to the metric entropy. These results are applied to show optimality of some concrete representations of function spaces.

- Mar.11 12h30-14h30, E3-1 #2452:
Jaewoong Han (U-Waterloo and KAIST)
*Automatic Design of Casts for 3D Printing*

Modern civilization evolved through a series of industrial revolutions. According to Jeremy Rifkin, 3-D printing is the third industrial revolution: it minimizes the cost of building and speeds up the process of designing. In addition, easier access to private 3-D printers will let products be personally customized by the user and for the user.

3-D printers are not affordable for everyone yet, but in the near future people will design their own products with their own 3-D printer at home. As it is a new technology, there is a lack of software applications related to 3-D printing on the market. A user-friendly program that designs and builds a set of moulds for the user's own product is in demand.

There are a number of complications when using a 3-D printer and designing a mould. It is critical to clean and maintain a 3-D printer properly to ensure the quality of the printed parts. Many techniques apply in designing a mould to reduce the material and speed up the process of printing.

A complex convex octahedron is chosen as the input product by the user. The software application will be programmed to create eight pieces of mould for the polyhedron. Using OpenSCAD, an open-source software application for creating 3-D computer-aided designs, the program will locate the six vertices, create a bounding box, use an existing library to convert vertices to normal vectors of faces and vice versa, calculate cross products of vectors to form cutting planes, and then divide the bounding box into eight parts. Moreover, further improvements will be made, including increasing the size of the bounding box, adding a sprue for pouring material, attaching a connecting section for each piece of the mould, and upgrading the program's algorithm to deal with more complex geometry.

- Jan.25 10h30-12h00 and 16h00-17h30, Jan.26 10h30-12h00 E3-1 #3445:
Norbert Preining (JAIST)
*Algebraic Specification and Verification - an Introduction to CafeOBJ*

Ensuring the correctness of complex systems like computer programs or communication protocols is gaining ever increasing importance. For correctness it does not suffice to consider the finished system; correctness is necessary already at the level of specification, that is, before the actual implementation starts. To this aim, algebraic specification and verification languages have been conceived. They aim at a mathematically correct description of the properties and behavior of the systems under discussion, and in addition often allow one to prove (verify) correctness within the same system.

This course will give an introduction to algebraic specification: its history, its roots in logic, and its relevance to current developments. Paired with the theoretical background we give an introduction to CafeOBJ, a programming language that allows both specification and verification to be carried out together. The course should enable participants to understand the theoretical background of algebraic specification, as well as to read and write basic specifications in CafeOBJ.

2015:

- Dec 10, 12h30 E3-1 #2450: Otfried Cheong (KAIST)
*Shortcuts for the Circle*

(joint work with Mark de Berg, Sang Won Bae, Joachim Gudmundsson, and Christos Levcopoulos)

Let $C$ be the unit circle in **R**^{2}. We can view $C$ as a plane graph whose vertices are all the points on $C$, and the distance between any two points on $C$ is the length of the smaller arc between them. We consider a graph augmentation problem on $C$, where we want to place $k\geq 1$ *shortcuts* on $C$ such that the diameter of the resulting graph is minimized. We analyze for each $k$ with $1\leq k\leq 7$ what the optimal set of shortcuts is. Interestingly, the minimum diameter one can obtain is not a strictly decreasing function of $k$. For example, with seven shortcuts one cannot obtain a smaller diameter than with six shortcuts. Finally, we prove that the optimal diameter is $2 + \Theta(1/k^{\frac{2}{3}})$ for any $k$.

- Dec 3, 14h30 N1 #111: Martin Ziegler (KAIST):
*Worst-Case vs. Average-Case Bit-Complexity Theory of Real Functions*

While concepts and tools from Theoretical Computer Science are regularly applied to, and significantly support, software development for discrete problems, Numerical Engineering largely employs recipes and methods whose correctness and efficiency are demonstrated empirically. We advertise Real Complexity Theory: a resource-oriented foundation to rigorous computations over continuous universes such as real numbers, vectors, sequences, continuous functions, and Euclidean subsets: in the bit-model by approximation up to given absolute error. We review the underlying notions and some illustrative results -- regarding worst-case complexity theory; to then introduce, and initiate the study of, average-case bit-complexity theory over the reals: Like in the discrete case a first, naive notion of polynomial average runtime turns out to lack robustness and is thus refined. Standard examples of explicit continuous functions with increasingly high worst-case complexity are shown to be in fact easy in the mean; while a further example is constructed with both worst and average complexity exponential: for topological/metric reasons, i.e., oracles do not help.

- Nov 26, 12h30 E3-1 #2450: Bastian Dörig (TU Darmstadt):
*Well-Separated Pair Decomposition and the Fast Multipole Method*

The N-body problem cannot be solved analytically for N>2 and hence is generally approached numerically. Naive algorithms incur cost quadratic in N; and we report on the state-of-the-art achieving subquadratic runtime by means of approximation: the Fast Multipole Method, said to be among the top ten algorithms of the 20th century. We present a rigorous worst-case performance analysis, based on techniques from computational geometry and in terms of bit cost.

- Nov 17, 13h00 E3-1 #3420: Ji-Won Park (KAIST):
*Symmetric Difference is a Metric for Convex Shapes Modulo Homothety*

Symmetric difference can be used to measure the similarity between two shapes. In this talk, similarity up to homothety, which is a combination of translation and scaling, is considered; there is a recent result of Yon et al. about it. However, their definition is not symmetric and thus not a metric. We introduce a variant of the symmetric difference which is a metric for convex shapes modulo homothety (so two convex shapes are regarded as the same iff one is a homothety of the other) and provide a proof.

- Nov 3, 12h45 E3-1 #3420: Bastian Dörig (TU Darmstadt):
*On the Computational Complexity of Perturbation Sums in Nuclear Physics*

Perturbation Theory, although successful in many areas of physics, had long been considered irrelevant for nuclear states, where the series tend to diverge. Only recent work, based on the Hartree-Fock partitioning, was able to mend that. This raises the question of efficiently computing the resulting approximating sums, which are finite but large and multi-indexed. Using perturbation diagrams we exhibit their inherent structure and symmetries. We exploit them to derive upper and lower bounds on their complexity, with a trade-off between approximation quality and runtime.