Topics for doctoral research


I’m currently looking into advanced topics that could serve as the basis for a doctoral thesis.

So far I’m trying to understand the general ideas being developed right now at SIGGRAPH conferences. But finding a topic that will still be needed in the future is always tricky.

What are your thoughts: which topics are going to be hot in three years or so?

Massively parallel processing is likely to be the focus of intense research for many years to come. Anything that raises the bar in terms of both speed and ease of use is bound to be a real crowd pleaser, on both the hardware and software sides of the track.

You could also look into the new proprietary ASIC / FPGA ray tracing clusters that are being put together. Most people don’t think they will make it into gaming / sims, but for specialized applications they are quite exciting. Well I think so anyway. :slight_smile:

Since the replies so far lean toward parallel processing and ray tracing, does that mean OpenGL in its latest state has some support for these features?

When it comes to real-time ray tracing/rendering on a cluster, is it typical to have manager software that directs the ray tracer on each computer in the cluster to render a sub-square of each frame and then combines the results?

Or does the real-time ray tracer typically manage the work distribution across machines more tightly, perhaps for real-time efficiency reasons?

I would be interested in hearing about existing solutions, or ones currently in development.
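The manager-plus-tiles scheme asked about above can be sketched in a few lines. This is only a toy illustration of the idea, not any real cluster renderer: the worker nodes are simulated with a local process pool, and `render_tile` is a hypothetical stand-in for the per-node ray tracer.

```python
# Toy sketch: a manager splits a frame into sub-squares (tiles), hands
# them to workers, and pastes the finished tiles back into one frame.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 8, 8, 4  # tiny frame so the demo runs instantly

def render_tile(tile):
    """Pretend ray tracer: fill a tile with a constant 'color' (its index)."""
    x0, y0, idx = tile
    return (x0, y0, [[idx] * TILE for _ in range(TILE)])

def split_frame():
    """Manager side: cut the frame into TILE x TILE sub-squares."""
    coords = ((y, x) for y in range(0, HEIGHT, TILE)
                     for x in range(0, WIDTH, TILE))
    return [(x0, y0, idx) for idx, (y0, x0) in enumerate(coords)]

def combine(results):
    """Manager side: paste finished tiles back into one framebuffer."""
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    for x0, y0, pixels in results:
        for dy, row in enumerate(pixels):
            frame[y0 + dy][x0:x0 + TILE] = row
    return frame

if __name__ == "__main__":
    with Pool(4) as pool:  # four simulated cluster nodes
        frame = combine(pool.map(render_tile, split_frame()))
    print(frame[0][:8])  # first row spans tiles 0 and 1 -> [0, 0, 0, 0, 1, 1, 1, 1]
```

The "tight management" alternative from the question would replace the static `split_frame` partition with dynamic load balancing, e.g. workers pulling the next tile from a shared queue as they finish, so fast nodes are never idle.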


I would do something regarding content creation. For example, how to create complex 3D environments (indoor / outdoor) with as little work as possible, through powerful tools. Better methods for content creation are definitely something that will become more and more vital in the future.


There is a lot of research to be done in the field of image-based rendering and model acquisition. This ties in to the content creation proposed by Jan. Ideally, we’d like to just walk through a room carrying some cameras and obtain a reasonably accurate 3D digital model of the room. Methods are needed to robustly estimate depth from images; to interpolate between depth images, handling occlusion artifacts and missing information; to handle the huge volumes of data involved in this process; to render this data in real time; and so on. How nice would it be if Google’s Street View did not just show reprojected images but allowed an actual real-time 3D walk-through, or if you could see a real 3D person/head in front of you, talking, during video conferencing.
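To make the "estimate depth from images" part concrete, here is the most basic building block of stereo model acquisition: for a rectified camera pair, depth follows from pixel disparity as depth = focal_length × baseline / disparity. A minimal sketch, with made-up example numbers:

```python
# Depth from stereo disparity for a rectified camera pair:
#   depth (m) = focal length (px) * baseline (m) / disparity (px)
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters of a point observed with the given pixel disparity."""
    if disparity_px <= 0:
        # Zero disparity means the point is at infinity (or a bad match).
        return float("inf")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 10 cm baseline, 35 px disparity.
print(depth_from_disparity(35, 700.0, 0.10))  # → 2.0 meters
```

The hard research problems listed above start exactly here: real disparity estimates are noisy and full of holes around occlusions, so robustly turning millions of such per-pixel measurements into a clean, renderable 3D model is anything but a one-liner.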

Thank you very much for the input.

I was thinking about ray tracing and ways to make it more real-time, but content creation/re-creation is actually even more interesting.

I will definitely think about this topic.