It takes true vision to find meaning in complexity. Our research explores new horizons in visual and aural intelligence to revolutionise a wide range of applications.

Our research
Strawberry verticillium wilt detection

Examples from our strawberry verticillium wilt dataset: plants in the red box show verticillium wilt, while plants in the blue box are healthy samples.
Accurate, early detection of plant disease can control its spread and prevent unnecessary loss, with the potential to significantly impact plant cultivation. Strawberry verticillium wilt is a soil-borne, multi-symptomatic disease. This multi-stage project detects strawberry verticillium wilt accurately, first by establishing a disease detection network based on Faster R-CNN and multi-task learning, and then by adding attention mechanisms to that network's feature extraction to form the strawberry verticillium wilt detection network (SVWDN). Unlike existing methods that aim to detect disease from the appearance of the whole plant, SVWDN automatically detects verticillium wilt from the symptoms of detected plant components, such as young leaves and petioles.
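As a rough illustration of the multi-task idea, the minimal PyTorch sketch below (not the published SVWDN code) classifies detected plant-component crops jointly for component type and disease state, with channel attention over the shared features; the class counts, layer sizes and module names are illustrative assumptions.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over feature channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1) channel weights
        return x * w

class MultiTaskWiltClassifier(nn.Module):
    """Shared CNN trunk with attention, plus two heads: component type and disease state."""
    def __init__(self, n_components=3, n_states=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.component_head = nn.Linear(64, n_components)  # e.g. young leaf / petiole / other
        self.state_head = nn.Linear(64, n_states)          # healthy vs verticillium wilt
    def forward(self, crops):
        feats = self.trunk(crops)
        return self.component_head(feats), self.state_head(feats)

# Joint training combines both losses on region crops produced by the detector.
model = MultiTaskWiltClassifier()
crops = torch.randn(4, 3, 128, 128)                 # four detected-region crops
comp_logits, state_logits = model(crops)
loss = nn.functional.cross_entropy(comp_logits, torch.tensor([0, 1, 1, 2])) \
     + nn.functional.cross_entropy(state_logits, torch.tensor([0, 1, 0, 1]))
loss.backward()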
Advanced smart city data analysis: an intelligent safety monitoring system based on image and video understanding
Governments and communities are increasingly looking to smart cities to ensure urban environments function seamlessly and improve the lives of citizens. Our research in this area is developing advanced smart city solutions through the automatic detection of objects in images and video, the recognition and identification of moving objects such as vehicles, the identification of pre-defined audio events, and more.
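As an off-the-shelf illustration of the kind of object detection that underpins such monitoring (not the project's own system), the snippet below runs a pretrained torchvision Faster R-CNN on a single video frame and prints confident detections; the frame tensor and score threshold are placeholders.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)                    # stand-in for one RGB video frame in [0, 1]
with torch.no_grad():
    detections = detector([frame])[0]              # dict with 'boxes', 'labels', 'scores'

categories = weights.meta["categories"]
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.7:                                # keep confident detections, e.g. vehicles or people
        print(categories[label.item()], [round(v) for v in box.tolist()], round(score.item(), 2))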
Quantitative analysis of structure and material based on deep learning segmentation and identification in medical imaging
Accurate segmentation and identification of medical images are essential to quantitative analysis and disease diagnosis, enabling objects such as anatomical structures or tissues to be delineated from the background and identified and labelled more effectively. We’re collaborating with a local CT imaging company to develop a robust solution for understanding foot bone CT images, paving the way for better quantitative analysis and disease diagnosis.
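The sketch below shows, in minimal form, the kind of encoder-decoder network commonly used for slice-wise delineation of structures such as bone in CT images; the architecture, channel sizes and two-class setup are illustrative assumptions, not the project's actual model.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """U-Net-style network: downsample, upsample, skip connection, per-pixel labels."""
    def __init__(self, n_classes=2):                  # e.g. background vs bone
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)                # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)
    def forward(self, x):
        e1 = self.enc1(x)                             # full-resolution features
        e2 = self.enc2(self.pool(e1))                 # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                          # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 256, 256))      # one single-channel CT slice
print(logits.shape)                                   # torch.Size([1, 2, 256, 256])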
Recommendation in social networks
With the emergence of online social networks (OSNs), video recommendation plays an ever-growing role in bridging the semantic gap between users and videos. Conventional approaches to video recommendation, which focus primarily on exploiting content features or simple user-video interactions to model users’ preferences, fail to capture the complex interdependency of video context that is hidden in heterogeneous auxiliary data. Our researchers are studying video recommendation in Heterogeneous Information Networks (HINs) to underpin a new Context-Dependent Propagating Recommendation network (CDPRec), which obtains accurate video embeddings and captures global context cues among videos in HINs. The research also proposes a knowledge graph enhanced neural collaborative recommendation (K-NCR) framework that effectively combines user-item interaction information and auxiliary knowledge information for recommendation tasks.
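To make the general idea concrete, the hedged sketch below fuses a learned item embedding with an auxiliary knowledge-graph entity embedding before scoring user-item pairs; the dimensions, fusion strategy and the name KnowledgeEnhancedNCF are illustrative assumptions, not the K-NCR or CDPRec models themselves.

import torch
import torch.nn as nn

class KnowledgeEnhancedNCF(nn.Module):
    def __init__(self, n_users, n_items, n_entities, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.entity_emb = nn.Embedding(n_entities, dim)   # pretrained KG embeddings could be loaded here
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )
    def forward(self, users, items, item_entities):
        # Concatenate collaborative and knowledge signals, then score the pair.
        x = torch.cat([self.user_emb(users),
                       self.item_emb(items),
                       self.entity_emb(item_entities)], dim=-1)
        return self.mlp(x).squeeze(-1)                    # higher score = stronger preference

model = KnowledgeEnhancedNCF(n_users=1000, n_items=500, n_entities=500)
scores = model(torch.tensor([3, 7]), torch.tensor([42, 17]), torch.tensor([42, 17]))
# Train with a ranking or binary cross-entropy loss on observed user-video interactions.
loss = nn.functional.binary_cross_entropy_with_logits(scores, torch.tensor([1.0, 0.0]))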
Image-based phenomics for biodiversity discovery (fine-grained classification)
There is a compelling need to harness emerging technologies in computer vision to understand nature fast enough to shape an informed response to the impact of humans on the world. This requires accelerating all aspects of biodiversity discovery and documentation to populate a digital biodiversity knowledge bank, employing quantitative observation on a large scale to measure meaningful phenotypes objectively and rapidly. This research targets the development of fine-grained classification techniques to differentiate between hard-to-distinguish object classes such as species of birds, flowers or animals. We are also developing techniques for ship type identification from satellite image databases and for hyperspectral image classification.
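One widely used fine-grained recognition technique is bilinear pooling, which captures pairwise feature interactions that help separate visually similar classes such as bird species; the minimal sketch below illustrates the idea with an assumed small backbone and class count, not the group's actual models.

import torch
import torch.nn as nn

class BilinearFineGrained(nn.Module):
    def __init__(self, n_classes=200, channels=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(channels * channels, n_classes)
    def forward(self, x):
        f = self.backbone(x)                                    # (B, C, H, W) feature map
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)    # (B, C, C) pooled pairwise interactions
        bilinear = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-8)  # signed square-root normalisation
        bilinear = nn.functional.normalize(bilinear.view(b, -1), dim=1)      # l2 normalisation
        return self.classifier(bilinear)

logits = BilinearFineGrained()(torch.randn(2, 3, 224, 224))     # e.g. two bird images
print(logits.shape)                                             # torch.Size([2, 200])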
Elderly health monitoring system
With the ageing of the population and the demand for improved quality of life, the safety and health monitoring of the elderly, especially those living alone, has attracted more and more attention. This project aims to improve elderly wellbeing by remotely monitoring their health and safety. Current health monitoring approaches for the elderly have limitations: wearable motion detectors rely on the user wearing them, while camera-based solutions raise privacy concerns and require specific lighting conditions for behaviour to be detected. By applying deep learning to radio-based human sensing, we are developing a prototype that uses millimetre-wave radar for elderly health and safety monitoring. The resulting trajectory tracking and behaviour analysis system offers a high recognition rate, protects privacy and does not need to be worn.
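The hedged sketch below shows one way radar-based behaviour recognition can be framed: a recurrent model classifies a short sequence of per-frame features extracted from millimetre-wave radar returns (for example, summarised point-cloud statistics); the feature size, behaviour classes and the name RadarBehaviourNet are illustrative assumptions, not the deployed prototype.

import torch
import torch.nn as nn

class RadarBehaviourNet(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)    # e.g. walking / sitting / lying / falling
    def forward(self, frames):                      # frames: (batch, time, n_features)
        _, (h_n, _) = self.lstm(frames)
        return self.head(h_n[-1])                   # classify from the final hidden state

model = RadarBehaviourNet()
clip = torch.randn(2, 30, 16)                       # two 30-frame radar feature sequences
print(model(clip).shape)                            # torch.Size([2, 4])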