πLab is actively seeking high-quality students to pursue PhD degrees. Join our team and start an exciting career in a growing area!
Studying with πLab
Want to study for a PhD degree in any of the areas below? Please contact πLab or the relevant supervisor listed.
πLab areas of interest
Perceptually-accurate simulation of real surfaces and materials in virtual environments
Degree: PhD
Supervisor: A/Prof Stuart Perry (UTS)
Co-supervisors: Dr Juno Kim (UNSW) and A/Prof Don Bone (UTS)
Project description: Perceptually-accurate simulation of real surfaces and materials in virtual environments (joint project with School of Optometry and Vision Science, University of New South Wales):
An exciting opportunity is available to undertake a PhD conducting research within a cross-institutional collaboration in the field of surface and material appearance. Material appearance is the vivid perceptual experience we have of different material properties when we look at images (e.g., 3D shape, colour, gloss, lightness/albedo). Research in graphics and virtual reality aims to simulate the interaction of light with surfaces so as to generate these material experiences in artificial environments, both in real time and as realistically as possible. Much of the complexity in light's interaction with opaque objects can be simulated using computational models, such as a bidirectional reflectance distribution function (BRDF). Although BRDF information is generally captured using highly specialised equipment, this equipment is usually not well suited to scanning real-life 3D objects: many real-life objects have complicated BRDFs and may even fall outside the scope of these models (e.g., when objects are semi-opaque). Hence, capturing accurate material appearance information for real 3D objects remains a challenging problem.
This project is primarily concerned with the collection of material appearance information from scans of real objects. Current techniques combine multiple image captures to gather sufficient information to fit a physical model of surface reflectance, such as a micro-facet model. However, because material appearance is directly related to how humans perceive materials, this project will also use psychophysical experimentation to elucidate the fundamental dimensions of model parameters needed to capture and simulate physical surface properties efficiently.
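For context, a BRDF maps an incoming light direction and an outgoing view direction to the proportion of light reflected. As a rough, self-contained illustration of the kind of micro-facet model mentioned above (not the project's specific approach), the Python sketch below evaluates a standard Lambertian diffuse term plus a Cook-Torrance specular term with a GGX distribution; the albedo, roughness and f0 values are illustrative only.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ggx_brdf(n, l, v, albedo, roughness, f0=0.04):
    """Evaluate a Lambertian + Cook-Torrance micro-facet BRDF (GGX
    distribution) for one light/view direction pair. Illustrative only."""
    h = normalize(l + v)                      # half vector
    nl = max(np.dot(n, l), 1e-6)
    nv = max(np.dot(n, v), 1e-6)
    nh = max(np.dot(n, h), 0.0)
    hv = max(np.dot(h, v), 0.0)

    a2 = roughness ** 4                       # alpha = roughness^2 convention
    d = a2 / (np.pi * ((nh * nh) * (a2 - 1.0) + 1.0) ** 2)  # GGX distribution
    f = f0 + (1.0 - f0) * (1.0 - hv) ** 5                   # Schlick Fresnel
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))  # Smith term

    specular = d * f * g / (4.0 * nl * nv)
    diffuse = albedo / np.pi                  # Lambertian term
    return diffuse + specular

# Example: reflectance of a moderately rough grey surface.
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.3, 0.0, 1.0]))
v = normalize(np.array([-0.3, 0.0, 1.0]))
print(ggx_brdf(n, l, v, albedo=0.5, roughness=0.4))
```

Recovering such model parameters from image captures of real objects is exactly the kind of inverse problem the project targets.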
3D visual saliency detection
Degree: PhD
Supervisor: A/Prof Min Xu
Project description: 3D visual saliency detection:
Visual saliency has been widely researched as a way to estimate human gaze density in 2D images. In this research, we will explore human gaze density estimation in 3D. In a 3D environment, human gaze is attracted to salient regions, which not only stand out through visual feature contrast but are also distinguishable in a depth map.
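As a toy illustration of fusing appearance and depth cues (one of many possible formulations, not the project's method), the sketch below combines a centre-surround contrast map computed on image intensity with a depth-contrast map; the scales and weighting are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rgbd_saliency(image_gray, depth, w_appearance=0.5):
    """Fuse a simple centre-surround appearance saliency map with a
    depth-contrast map. Toy RGB-D fusion; scales/weights are assumed."""
    # Centre-surround contrast: difference of Gaussians on intensity.
    appearance = np.abs(gaussian_filter(image_gray, 2) - gaussian_filter(image_gray, 16))
    # Depth contrast: regions that pop out from their surroundings in depth.
    depth_contrast = np.abs(depth - gaussian_filter(depth, 16))

    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-8)

    return w_appearance * norm(appearance) + (1 - w_appearance) * norm(depth_contrast)

# Example with random arrays standing in for an RGB-D frame.
rng = np.random.default_rng(0)
gray = rng.random((240, 320))
depth = rng.random((240, 320))
sal = rgbd_saliency(gray, depth)
print(sal.shape, sal.min(), sal.max())
```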
Emotion-based human computer (multimedia) interaction
Degree: PhD
Supervisor: A/Prof Min Xu
Project description: Emotion-based human computer (multimedia) interaction:
This research will focus on estimating human emotion by analysing data from multiple wearable sensors. The research involves signal processing, wearable sensor data fusion and time-series data analysis.
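As a minimal sketch of the kind of pipeline involved, assuming feature-level fusion of two hypothetical sensor streams (heart rate and skin conductance) with placeholder labels, the example below extracts simple per-window time-series statistics and trains an off-the-shelf classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, fs, win_s=5.0):
    """Slice a 1-D sensor stream into fixed windows and extract simple
    per-window statistics (mean, std, range, mean abs difference)."""
    n = int(fs * win_s)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        np.ptp(windows, axis=1),
        np.abs(np.diff(windows, axis=1)).mean(axis=1),
    ])

# Feature-level fusion: concatenate per-window features from two sensors.
rng = np.random.default_rng(0)
fs = 32                                   # assumed sampling rate in Hz
hr = rng.normal(70, 5, 60 * fs)           # synthetic heart-rate stream
eda = rng.normal(2, 0.5, 60 * fs)         # synthetic skin-conductance stream
X = np.hstack([window_features(hr, fs), window_features(eda, fs)])
y = rng.integers(0, 2, len(X))            # placeholder emotion labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:3]))
```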
Image Aesthetics Analysis
Degree: PhD
Supervisor: A/Prof Min Xu
Project description: Image Aesthetics Analysis:
Automated assessment of image aesthetics is a significant research problem due to its potential applications in areas such as image retrieval, image editing, design, and human-computer interaction. The research will create a machine expert system able to provide an automatic aesthetic rating for any image. Image analysis and machine learning (e.g., deep learning) are the key components.
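As a hedged sketch of a common deep-learning baseline for this task (not the project's prescribed method), the example below adapts a ResNet-18 backbone to regress a single aesthetic score; the batch, ratings and 0-10 scale are placeholders standing in for a rated image collection such as AVA. A recent torchvision is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 with its classification head replaced by a single-output
# regression head that predicts an aesthetic score.
model = models.resnet18(weights=None)       # untrained backbone for brevity
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()                    # regress against mean rater score
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random batch standing in for
# images paired with human aesthetic ratings.
images = torch.randn(8, 3, 224, 224)
ratings = torch.rand(8, 1) * 10             # hypothetical 0-10 scores
optimizer.zero_grad()
loss = criterion(model(images), ratings)
loss.backward()
optimizer.step()
print(float(loss))
```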
3D Point Cloud Segmentation and Analysis
Degree: PhD
Supervisors: A/Prof Stuart Perry and Dr Wenjing Jia
Project description: 3D Point Cloud Segmentation and Analysis:
Scanned 3D data is increasingly available, produced by sources such as LiDAR, SLAM pipelines and structured-light imaging systems. Although there has been considerable research into the segmentation and analysis of 2D imagery and video, comparatively little research has addressed the segmentation and analysis of 3D point cloud data. Segmentation and analysis of point cloud data is crucial to a number of applications, such as autonomous driving, 3D urban mapping and the 3D scanning of large cultural heritage sites.
This project is concerned with using advanced machine learning frameworks to identify and classify objects both in public datasets such as Semantic3D (http://www.semantic3d.net/) and in datasets collected using equipment available at UTS, such as structured-light and stereo 3D capture technologies. The data may be static or dynamic point clouds, and the goal is to develop systems relevant to real-world problems, such as safe systems for autonomous vehicles.
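As a minimal sketch of per-point labelling with a learned model (a simplified PointNet-style network, offered as one plausible starting point rather than the project's method), the example below classifies every point of a cloud using shared point-wise MLPs plus a global max-pooled context feature; the class count and sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style per-point classifier: shared MLPs lift each
    point to a feature, a global max-pool summarises the whole cloud, and
    the concatenated local+global features are labelled per point."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz):                   # xyz: (batch, 3, num_points)
        feats = self.local(xyz)               # per-point features
        global_feat = feats.max(dim=2, keepdim=True).values
        fused = torch.cat([feats, global_feat.expand_as(feats)], dim=1)
        return self.head(fused)               # (batch, num_classes, num_points)

# Example: per-point class logits for a random 1024-point cloud.
cloud = torch.randn(2, 3, 1024)
logits = TinyPointNet()(cloud)
print(logits.shape)                           # torch.Size([2, 8, 1024])
```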