Israel SIGGRAPH meeting on April 26th, 2002

Lev Hall, School of Exact Sciences
Tel-Aviv University

Chair: Ariel Shamir
School of Computer Science
The Interdisciplinary Center

The following program is also available in PostScript format. The PostScript/PDF file is to be printed double-sided on A4 paper, and folded into three columns with INVITATION and the digit "5" on the exposed columns. 

Unfortunately, the invitation this time does not serve as an entrance permit to Tel-Aviv University Campus!

 
Time Speaker Title Abstract
8:30   REFRESHMENTS  
9:00 Shachar Fleishman

Tel-Aviv University

Progressive point set surfaces

Progressive point set surfaces (PPSS) are a multilevel point-based surface representation. They combine the usability of multilevel scalar displacement maps (e.g. compression, filtering, modeling) with the generality of point-based surface representations (i.e. no fixed homology group or continuity class). The multiscale nature of PPSS fosters the idea of point-based modeling.
The basic building block for the construction of PPSS is a projection operator, which maps points in the proximity of the shape onto local polynomial surface approximations. The projection operator allows computing displacements from smoother to more detailed levels. Based on the properties of the projection operator, we derive an algorithm to construct a base point set. Starting from this base point set, a refinement rule using the projection operator constructs a PPSS from any given manifold surface.
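As a rough illustration of this kind of projection operator (not the authors' implementation), one can fit a weighted least-squares plane to the points near a query point and project onto it; the full method fits a local polynomial over such a plane:

```python
import numpy as np

def project_to_local_plane(x, points, h=1.0):
    """Simplified stand-in for an MLS-style projection operator:
    project x onto a weighted least-squares plane fitted to nearby
    points (plane only, no polynomial term)."""
    d = np.linalg.norm(points - x, axis=1)
    w = np.exp(-(d / h) ** 2)                    # Gaussian weights by distance
    c = (w[:, None] * points).sum(0) / w.sum()   # weighted centroid
    centered = points - c
    cov = (w[:, None] * centered).T @ centered   # weighted covariance
    # plane normal = eigenvector of the smallest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(cov)
    n = eigvecs[:, 0]
    # project x onto the plane through c with normal n
    return x - np.dot(x - c, n) * n
```

The displacement from a smoother to a more detailed level can then be encoded along the fitted normal.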

Joint work with Marc Alexa, Daniel Cohen-Or and Claudio T. Silva.

9:30 Sara Keren 

Technion

An Augmented Reality System for Un-calibrated Images of Indoor Architectural Scenes

An augmented reality system is an environment in which virtual objects are overlaid upon the user’s view of the real world. It should provide a convincing perception of the combination of the virtual world and the real world. One of the major problems an augmented reality system should solve is camera calibration and virtual object registration.
Our work discusses the problem of inserting three-dimensional models into a single image of an indoor architectural scene. The image is taken by a camera whose internal calibration parameters and whose position and orientation in the scene are unknown. Typically, in architectural scenes lines appearing in the images are parallel to the three major axes. We utilize this information to recover the parameters of the real camera, which are used in overlaying the virtual objects in their correct positions and orientations in the real image.
An important aspect of our work is a theoretical and experimental analysis of the errors in the camera's parameters and in the projection of the virtual objects. We also implemented a system that “plants” virtual 3D objects in the image, and tested it on many indoor scenes. Our analysis and experiments show that errors in the placement of the objects are not noticeable to viewers.
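For illustration only: under the common assumptions of square pixels and a known principal point, two vanishing points of orthogonal scene directions (such as the major axes of an architectural scene) determine the focal length. A minimal sketch of this standard single-view relation (names and assumptions are ours, not the thesis'):

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """Estimate focal length from two vanishing points of orthogonal
    scene directions, assuming square pixels, zero skew, and a known
    principal point p.  Orthogonality of the 3D directions gives
    (v1 - p) . (v2 - p) + f^2 = 0."""
    p = np.asarray(principal_point, float)
    d = np.dot(np.asarray(v1, float) - p, np.asarray(v2, float) - p)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(-d)
```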

M.Sc. Thesis under the supervision of Dr. Ayellet Tal and Dr. Ilan Shimshoni.

10:00 Raanan Fattal 

Hebrew University

Gradient Domain High Dynamic Range Compression

We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynamic range image is then constructed by solving a Poisson equation on the modified gradient field.
Our results demonstrate that the method is capable of drastic dynamic range compression while preserving fine details and without common artifacts, such as halos, gradient reversals, or loss of local contrast. The method is also able to significantly enhance details in ordinary images.
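A toy one-dimensional analogue of the pipeline (attenuate large log-luminance gradients, then reintegrate). The real method solves a 2D Poisson equation; here 1D reintegration is a simple cumulative sum, and the particular attenuation function is one plausible choice, not necessarily the talk's:

```python
import numpy as np

def compress_dynamic_range(log_lum, alpha=0.1, beta=0.85):
    """1D sketch of gradient-domain compression: gradients larger than
    alpha are attenuated (factor < 1), smaller ones slightly boosted,
    then the signal is rebuilt by integrating the modified gradients."""
    g = np.diff(log_lum)
    mag = np.abs(g) + 1e-12
    phi = (alpha / mag) * (mag / alpha) ** beta   # < 1 where |g| > alpha
    g2 = g * phi
    # reintegrate (the 2D analogue is a Poisson solve on the
    # modified gradient field)
    return np.concatenate([[log_lum[0]], log_lum[0] + np.cumsum(g2)])
```

Note that for beta < 1 small gradients get a factor greater than one, which matches the abstract's observation that the method can also enhance detail in ordinary images.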

Joint work with Dani Lischinski and Michael Werman.
10:30   COFFEE BREAK  
11:00 Neta Sokolovsky 

Ben-Gurion University

Integrating Occlusion Culling with View-Dependent Rendering

We present a novel approach that integrates occlusion culling within the view-dependent rendering framework. View-dependent rendering provides the ability to change the level of detail over the surface of the object. The selection of an appropriate level of detail is based on view parameters, which causes even occluded regions to be rendered at a high level of detail. To overcome this serious drawback, we have integrated occlusion culling into the level-selection mechanism. Since computing exact visibility is expensive and currently cannot be performed in real time, we use a visibility-estimation technique instead. Our approach dramatically reduces the number of rendered triangles.
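A schematic sketch of folding a visibility estimate into level-of-detail selection (the distance criterion, thresholds, and names here are illustrative, not the authors' actual mechanism):

```python
import math

def select_lod(distance, visible_fraction, max_level=8):
    """Pick a level of detail from view distance, then coarsen it in
    proportion to estimated occlusion, so occluded regions are no
    longer refined as if they were fully visible."""
    level = max(0, max_level - int(math.log2(max(distance, 1.0))))
    if visible_fraction < 0.05:
        return 0                                  # treat as occluded: coarsest level
    return max(0, level - int((1.0 - visible_fraction) * 3))
```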

Joint work with Jihad El-Sana and Claudio T. Silva.
11:30 Tommer Leyvand 

Tel-Aviv University

Ray Space Factorization for From-Region Visibility

In large, dense models (e.g., urban models), standard culling techniques are too slow for interactive use, whereas the actually visible part of the model is small. In these cases, special visibility calculations can be performed before rendering to greatly reduce the number of rendered polygons, thus enabling interactive speeds and even online city walkthroughs.
From-region visibility is the problem of finding the set of polygons which are visible from a given viewcell. This visibility information is valid while viewing from within the viewcell, thus amortizing visibility costs per frame. Solutions for from-region visibility usually describe methods for merging umbrae or use some ray parameterization to efficiently encapsulate occlusion information. 
Our method calculates the 3D from-region visibility by factorizing the 4D ray space into two 2D problems, one "horizontal" and one "vertical". The two solutions are combined and implemented in image-space and are accelerated using standard graphics hardware. 
We introduce a new parameterization of rays intersecting a rectangle for 2D occlusion, and extend the solution to 3D by applying an improved umbra-merging method, where the occluded ranges are stored as K pairs of values per 2D parameter-space point.
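To illustrate the K-pairs idea, the sketch below merges occluded 1D ranges and enforces a fixed budget of K intervals by discarding the smallest one when over budget, which only underestimates occlusion and therefore stays conservative. This simplification is ours, not the paper's exact scheme:

```python
def merge_occluded_ranges(ranges, k=4):
    """Merge overlapping occluded intervals, then keep at most k
    (lo, hi) pairs by dropping the smallest intervals.  Dropping an
    interval can only under-report occlusion, so culling based on
    the result never discards a visible polygon."""
    if not ranges:
        return []
    ranges = sorted(ranges)
    merged = [list(ranges[0])]
    for lo, hi in ranges[1:]:
        if lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)   # overlap: extend
        else:
            merged.append([lo, hi])
    while len(merged) > k:
        i = min(range(len(merged)), key=lambda j: merged[j][1] - merged[j][0])
        merged.pop(i)                                # conservative drop
    return [tuple(r) for r in merged]
```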

Joint work with Daniel Cohen-Or and Olga Sorkine.