Important dates

> TECHNICAL PAPERS

Papers submission
May 14th, 2012 (extended from May 4th)

Notification of Acceptance
June 30th, 2012 (extended from June 17th and June 26th)

Camera ready due
July 20th, 2012 (extended from July 6th and July 15th)

> WORKSHOP PROPOSALS

Submission deadline
April 19th, 2012

Notification of acceptance
May 8th, 2012 (extended from April 25th)

> TUTORIAL PROPOSALS

Submission deadline
May 13th, 2012 (extended from May 6th)

Notification of acceptance
May 31st, 2012 (extended from May 27th)

Survey paper due
July 15th, 2012 (extended from June 24th)

Camera ready due
August 7th, 2012

Handouts due
August 5th, 2012 (extended from July 31st)

> WTD PAPERS

Abstract due date
May 16th, 2012

Submission deadline
May 25th, 2012 (extended from May 20th)

Notification of acceptance
June 22nd, 2012 (extended from June 20th)

Camera ready due
July 8th, 2012

> WUW PAPERS

Papers submission
May 21st, 2012 (extended from May 11th)

Notification of acceptance
July 3rd, 2012 (extended from June 24th and July 1st)

Camera ready due
July 15th, 2012 (extended from July 8th and July 12th)

> WIP PAPERS

Submission deadline
July 6th, 2012 (extended from June 25th and July 2nd)

Notification of acceptance
July 24th, 2012 (extended from July 16th and July 18th)

Camera ready due
August 2nd, 2012 (extended from July 31st)

> WGARI PAPERS

Submission deadline
July 4th, 2012 (extended from June 24th)

Notification of acceptance
July 26th, 2012 (extended from July 18th and July 24th)

Camera ready due
August 5th, 2012 (extended from July 29th and August 1st)

> WIVis PAPERS

Submission deadline
August 1st, 2012 (extended from July 15th)

Notification of acceptance
August 5th, 2012 (extended from August 1st)

Camera ready due
August 7th, 2012

Sponsorship / Organization

Technical Co-Sponsorship: IEEE Computer Society, SBC

Organization: UFOP, DECOM

Tutorial Proposals

Accepted Tutorials

T1 - Cloud and mobile Web-based graphics and visualization

Haim Levkowitz (University of Massachusetts Lowell)
Full-Day - Advanced Level

Cloud computing is becoming one of the most prevalent computing platforms. The combination of mobile devices and cloud-based computing is rapidly changing how users consume and use computing resources. With the introduction and penetration of HTML5, and in particular its visual capabilities in the form of the canvas element, the implementation of high-quality Web-based graphics has become a reality. Indeed, WebGL offers capabilities comparable to traditional OpenGL while utilizing cloud-based computing resources. It is now feasible to have high-performance graphics and visualization ``in your palm,'' using a mobile device as the front-end interface and display while performing all the graphics ``heavy lifting'' on a cloud-based platform. We argue that this will become the most common platform for computer graphics and visualization in the not-too-distant future. The goals of this course are to familiarize students with the underlying technologies that make this possible, including (but not limited to) cloud-based computing, mobile computing, their combination, HTML5 and the canvas element, the WebGL graphics library, general Web-based graphics and visualization, and Web-based interactive development environments. Who should attend: researchers, practitioners, and students focused on, interested in, or aspiring to enter the fields of cloud computing, mobile computing, graphics, visualization, and Web-based environments and their applications are encouraged to attend. Students will gain a deep understanding of these novel techniques and technologies and will become capable of applying their knowledge to develop interactive mobile- and cloud-based graphics and visualization applications, providing them with soon-to-be highly desirable skills. Students with previous knowledge of and experience in interactive computer graphics and visualization will gain much more value from this tutorial.

T2 - GPU programming with GLSL

Rodrigo de Toledo (UFRJ), Thiago Gomes (UFRJ)
Full-Day - Elementary Level

This tutorial offers an overview of the programmable graphics pipeline as it exists today. Our proposal is to create a learning environment based on the Coding Dojo format. Since we will focus on teaching GPU programming, we propose the use of an application-code abstraction tool, ShaderLabs, developed by the authors of this tutorial.
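To give a flavor of what the programmable pipeline computes (not material from the tutorial itself), the sketch below mimics in Python what a few lines of a GLSL fragment shader do: a small function evaluated once per fragment, here computing Lambertian diffuse shading. The function name and all values are illustrative.

```python
def lambert_fragment_shader(normal, light_dir, base_color):
    """Python stand-in for a diffuse GLSL fragment shader.

    normal and light_dir are 3-tuples assumed to be unit length;
    base_color is an (r, g, b) tuple with components in [0, 1].
    """
    # dot(N, L) clamped at zero, as in GLSL's max(dot(N, L), 0.0)
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    # Scale the base color by the diffuse term, component-wise
    return tuple(c * n_dot_l for c in base_color)

# A light aligned with the surface normal gives full intensity:
print(lambert_fragment_shader((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 0.5, 0.0)))
```

In real GLSL the same logic would run in parallel on the GPU for every fragment; the Python version only illustrates the per-fragment contract.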

T3 - Interactive Graphics Applications with OpenGL Shading Language and Qt

Joao Paulo Gois (UFABC), Harlen Batagelo (UFABC)
Full-Day - Intermediate Level

The goal of this tutorial is to enable attendees to develop interactive graphics applications with OpenGL using the Qt framework. One of the strengths of the Qt framework is that it allows easy development of professional cross-platform applications using C++. In particular, Qt offers a full-featured framework for rendering OpenGL contexts and several classes that ease the development of interactive graphics applications using OpenGL and the OpenGL Shading Language (GLSL). For instance, Qt provides classes to manipulate matrices, vectors, OpenGL vertex buffer objects, textures, and GLSL programs. As a by-product of this tutorial, we will show that Qt is also a suitable framework for didactic purposes: not only can it easily replace traditional window-management libraries such as GLUT (OpenGL Utility Toolkit), but it also makes it possible to develop sophisticated, interactive object-oriented applications.

T4 - Kinect and RGBD Images: Challenges and Applications

Leandro Cruz (IMPA), Djalma Lúcio (IMPA), Luiz Velho (IMPA)
Half-Day - Intermediate Level

The Kinect is a device introduced in 2010 as an accessory for the Xbox 360. The data it acquires has different and complementary natures, combining geometry with visual attributes. For this reason, the Kinect is a flexible tool that can be used in applications from several areas, such as Computer Graphics, Image Processing, Computer Vision, and Human-Machine Interaction. The Kinect is thus widely used both in industry (games, robotics, theater performances, natural interfaces, etc.) and in research. We will begin this tutorial by presenting the main techniques related to data acquisition: capturing, representation, and filtering. The data consist of a color image (RGB) and depth information (D); this structure is called an RGBD image. After that, we will cover tools available for developing applications on various platforms. We will also discuss some recently developed projects based on RGBD images, in particular those related to object recognition, 3D reconstruction, and interaction. We will show research developed by the academic community as well as some projects developed for industry. We intend to present the basic principles needed to begin developing applications with the Kinect, along with some projects developed at the VISGRAF Lab. Finally, we will discuss the new possibilities, challenges, and trends raised by the Kinect.
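A common first step when working with RGBD images, as discussed above, is turning a depth pixel into a 3D point. The sketch below shows the standard pinhole back-projection; the function name and intrinsic values (fx, fy, cx, cy) are hypothetical placeholders, not the Kinect's calibrated parameters.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth value (in meters) into a
    3D camera-space point (x, y, z) using a pinhole camera model.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight onto the optical axis:
point = backproject(320.0, 240.0, 2.0, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 2.0)
```

Applying this to every valid depth pixel yields the point cloud used by the 3D reconstruction and interaction applications the tutorial mentions.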

T5 - Looking at People: Concepts and Applications

William Robson Schwartz (UFMG)
Half-Day - Advanced Level

Understanding the activities performed by humans in videos is of great interest to the Computer Vision community. To achieve a precise and accurate interpretation of the activities being performed in a scene, tasks such as detection, recognition, tracking, person re-identification, pose estimation, and action recognition have to be performed accurately enough to provide sufficient information to the inference systems responsible for recognizing those activities. These tasks belong to the domain of Computer Vision called Looking at People, whose general goal is the analysis of images and videos containing humans. Problems in this domain attract increasing interest from the scientific community due to their direct application to areas such as surveillance, biometrics, and automation, where they can provide significant technological advances. Due to the high degree of dependence between the tasks (e.g., individual action recognition depends on correct tracking and detection of a person), they can be affected by error propagation and amplification: tasks performed later (e.g., action recognition) might not be accomplished accurately due to incorrect results from earlier tasks (e.g., poor detection results that fail to locate the person performing an action). Therefore, each task needs to be performed robustly. A promising approach to these problems is the simultaneous use of multiple feature descriptors, so that richer visual information contained in the scene is captured. This tutorial focuses on the concepts of Looking at People and the tasks belonging to this domain, and is structured in four parts. First, the concepts and the importance of the domain to the academic community and to the development of technology will be presented. Second, the extraction of visual information through feature descriptors will be discussed.
Third, the main goals, challenges, connections, and possible effects of error propagation for each task in the domain (e.g., background subtraction, human and face detection, face recognition, person tracking and re-identification, pose estimation, and action recognition) will be discussed. Finally, a promising approach based on the combination of multiple feature descriptors through the statistical method called Partial Least Squares will be presented, and its application to several computer vision tasks will be described. At the end of this tutorial, the audience will be able to identify the most suitable feature extraction methods for a given task and will be aware of the importance of extracting richer visual information from the scene, which yields accurate results for the low-level problems and thereby enables improved and robust solutions for the high-level problems.
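To illustrate the idea behind combining descriptors with Partial Least Squares (a minimal sketch, not the presenter's implementation): for column-centered data, the first PLS weight vector is proportional to X^T y, so features from the concatenated descriptors that covary most with the labels receive the largest weights. Function name and data are illustrative.

```python
import math

def first_pls_weights(X, y):
    """Compute the unit-norm first Partial Least Squares weight vector.

    X: list of samples, each a list of features (columns assumed centered,
       e.g. several descriptors concatenated per sample).
    y: list of centered responses, one per sample.
    """
    n_features = len(X[0])
    # w is proportional to X^T y: each feature's covariance with the response
    w = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(n_features)]
    norm = math.sqrt(sum(v * v for v in w))
    return [v / norm for v in w]

# Feature 0 tracks y perfectly; feature 1 is noise-like, so weight 0 dominates:
X = [[1.0, 0.5], [-1.0, -0.5], [2.0, -1.0], [-2.0, 1.0]]
y = [1.0, -1.0, 2.0, -2.0]
print(first_pls_weights(X, y))
```

Subsequent PLS components would be obtained by deflating X and repeating; the sketch stops at the first direction to show the weighting principle.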

T6 - Transparency and Anti-Aliasing Techniques for Real-Time Rendering

Marilena Maule (UFRGS), João Comba (UFRGS), Rafael Torchelsen (Universidade Federal da Fronteira Sul - UFFS), Rui Bastos (NVIDIA)
Half-Day - Advanced Level

Transparency and anti-aliasing effects are crucial to enhancing the realism of computer-generated images. What the two effects have in common is that both rely on processing discrete samples of a given function, but they use the samples for different purposes. For transparency computation, samples usually encode color information along a ray that crosses the scene and are combined in order of intersection to produce the final color of a pixel. For anti-aliasing, samples usually represent different color contributions within the region of a pixel and must be combined with proper weights to produce the final color of a pixel. Graphics applications have a high demand for such effects. Transparency effects are widely used to render transparent objects in CAD models, as well as several other phenomena such as hair, smoke, fire, rain, and grass. Anti-aliasing is an even more crucial effect, because jagged edges are easily spotted, create disruptive distractions during a 3D walkthrough, and are clearly unacceptable in real-time applications. For algorithms that compute these effects, several factors impact the quality and performance of the final result. For example, a simple transparency effect can be simulated using pre-computed textures sorted in depth order. However, more complex transparency effects, which require a higher cost for processing transparency samples in depth order, are often too costly to be included in interactive 3D applications such as games. This scenario is changing with the improvement in GPU performance, but transparency effects still have to compete for computational power with the other aspects of the application. Similarly, anti-aliasing (AA) techniques are directly impacted by the need to collect and process multiple samples. There is a vast number of proposals in the image processing literature describing how to solve the problem, each with its own tradeoffs between quality and performance.
Likewise, GPUs are often used to improve the performance of AA algorithms, and some already support AA in hardware. While good results with respect to quality and performance are obtained for static scenes, maintaining temporal coherence (coherence between frames) is still challenging, especially when relying only on color information. In this tutorial we review state-of-the-art techniques for transparency and anti-aliasing effects. Their initial ideas and subsequent GPU accelerations are detailed, with a discussion of their strengths and limitations. We conclude with a discussion of applications and methods that may arise in the future.


Reviewers

The program committee would like to thank the following reviewers:

Afonso Paiva (ICMC-USP)
Bruno Carvalho (UFRN)
Fernando Marson (PUC-RS)
Fernando Osorio (USP)
Gabriel Taubin (Brown University)
Joao Paulo Gois (UFABC)
José Rodrigues Júnior (ICMC-USP)
José Mario De Martino (UNICAMP)
Luiz Henrique de Figueiredo (IMPA)
Marcelo Dreux (PUC-Rio)
Marcelo Kallmann (University of California, Merced)
Marcelo Siqueira (UFRN)
Marcos Lage (UFF)
Maria Cristina de Oliveira (ICMC-USP)
Michael Reale (Binghamton University)
Ricardo Marroquim (UFRJ)
Roberto Scopigno (CNR-ISTI)
Selan dos Santos (UFRN)
Shaun Canavan (Binghamton University)
Waldemar Celes (PUC-Rio)
Wu Shin-Ting (UNICAMP)


Call for Tutorial Proposals

SIBGRAPI 2012 welcomes submissions of tutorials in all areas of the Conference, particularly Computer Graphics (CG), Computer Vision (CV), Image Processing (IP), and Pattern Recognition (PR). Tutorials should be classified as one of the following: elementary, intermediate, or advanced. Tutorials are free of charge to conference attendees and may take 3 hours (half day) or 6 hours (full day). Only elementary tutorials may be presented in Portuguese. Intermediate and advanced tutorials must be presented in English, with handouts in the language of the presentation. When preparing your submission, please consider the guidelines below:
  • Elementary Tutorials: typically complement the basic undergraduate curriculum in Computer Science and help attract students to graduate studies in CV, IP, CG, and PR. Instructors should not assume that the audience has basic knowledge in these areas.
  • Intermediate Tutorials: are targeted at students, professionals, and researchers in CV, IP, CG, and PR who wish to learn advanced techniques that can be used in their work. Instructors may assume that attendees are familiar with basic notions of mathematics, numerical methods, programming, and CV, IP, CG, and PR.
  • Advanced Tutorials: should focus on state-of-the-art research, recent developments, emerging topics, and novel applications in CV, IP, CG, and PR.
PLEASE NOTE that due to infrastructure constraints we cannot offer hands-on tutorials.

Important Dates

  • Submission deadline: May 13th, 2012 (extended from May 6th)
  • Notification of acceptance: May 31st, 2012 (extended from May 27th)
  • Survey paper due: July 15th, 2012 (extended from June 24th; more details below)
  • Camera ready due: August 7th, 2012
  • Handouts due: August 5th, 2012 (extended from July 31st; more details below)
Incomplete or late submissions will not be considered. All submissions will be acknowledged.

Tutorial Selection

All proposals will be blindly reviewed by at least two experts in the topic and will be judged by: relevance for the conference; expected size of the audience; potential to attract participants to the conference; originality; and qualification of the instructors in the topic of the tutorial.

Author's instructions

Tutorial proposals should be submitted through the JEMS submission site. Please send your proposal, up to 8 pages in PDF format, using the SIBGRAPI IEEE LaTeX template available on the submission instructions page. Proposals should be written in Portuguese (elementary tutorials only) or English and contain the following information:

First page:

  • Title
  • Level (elementary, intermediate or advanced and how many hours required, 3h or 6h)
  • Abstract

Next 7 pages:

  • Motivation
  • Target audience
  • Interest for the CV, IP, CG, and PR community
  • List of the topics to be presented, including estimated duration, subtopics, and relevant literature
  • Presentation requirements: equipment and/or software
  • Short biography of the instructors, including relevant experience in the topic of the tutorial
  • Plan for the presentation (include information on who will present the tutorial)

Presentation

Each tutorial must be presented by at least one of the authors (please include this information in the submission). Handouts must be prepared to be distributed to the tutorials' participants. Instructions for the creation of the handouts will be given to the authors of the accepted tutorials at a later date.

Funding

SIBGRAPI will cover the registration fee of one of the authors and will provide partial financial support for authors of accepted tutorials.

Survey Papers

Authors of the selected tutorials will be invited to prepare a survey paper in English, on the tutorial's topic, for the electronic version of the conference proceedings. This paper will be submitted to the IEEE Xplore Digital Library. A survey paper is a manuscript with an introduction, a structured presentation of the topic, and conclusions indicating trends, applications, and directions for future work. It will also be peer reviewed before publication. For tutorials of previous years published by IEEE CPS, please check:

For inquiries concerning this call, please contact the chairs below.

Chairs

  • Marcelo Walter (UFRGS, Brazil) marcelo.walter at inf.ufrgs.br
  • Lijun Yin (Binghamton University, USA) lijun at cs.binghamton.edu
 


Support

Google, Microsoft, CAPES, CNPq, Parque Metalúrgico, Aperam, FAPEMIG

Cooperating Society

Eurographics

Published by

IEEE CPS