A list of completed theses and new thesis topics from the Computer Vision Group.

Are you about to start a BSc or MSc thesis? Please read our instructions for preparing and delivering your work.

Below we list possible thesis topics for Bachelor and Master students in the areas of Computer Vision, Machine Learning, Deep Learning and Pattern Recognition. The project descriptions leave plenty of room for your own ideas. If you would like to discuss a topic in detail, please contact the supervisor listed below and Prof. Paolo Favaro to schedule a meeting. Note that for MSc students in Computer Science it is required that the official advisor is a professor in CS.

AI deconvolution of light microscopy images

Level: master.

Background Light microscopy has become an indispensable tool in life sciences research. Deconvolution is an important image processing step for improving the quality of microscopy images: it removes out-of-focus light, increases resolution, and improves the signal-to-noise ratio. Classical deconvolution methods, such as regularised or blind deconvolution, are implemented in numerous commercial software packages and widely used in research. Recently, AI-based deconvolution algorithms have been introduced and are being actively developed, as they have shown high application potential.

Aim Adaptation of available AI algorithms for the deconvolution of microscopy images, and validation of these methods against state-of-the-art commercially available deconvolution software.

Material and Methods The student will implement and further develop available AI deconvolution methods and acquire test microscopy images of different modalities. The performance of the developed AI algorithms will be validated against available commercial deconvolution software.
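As a concrete reference point for the comparison, a classical baseline can be set up in a few lines with scikit-image's Richardson-Lucy implementation; the sketch below assumes a synthetic Gaussian PSF and a hypothetical input file name, whereas in practice the PSF would be measured or derived from the microscope optics.

```python
# Classical deconvolution baseline to benchmark AI methods against.
# The Gaussian PSF and the file names are illustrative assumptions.
import numpy as np
from skimage import io, restoration

def gaussian_psf(size=17, sigma=2.0):
    """Synthetic 2D Gaussian PSF, a stand-in for a measured/theoretical PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

image = io.imread("widefield_slice.tif").astype(np.float64)  # hypothetical file
image /= image.max()
deconvolved = restoration.richardson_lucy(image, gaussian_psf(), num_iter=30)
io.imsave("deconvolved.tif", (deconvolved * 65535).astype(np.uint16))
```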

Nature of the Thesis:

  • AI algorithm development and implementation: 50%.
  • Data acquisition: 10%.
  • Comparison of performance: 40%.

Requirements

  • Interest in imaging.
  • Solid knowledge of AI.
  • Good programming skills.

Supervisors Paolo Favaro, Guillaume Witz, Yury Belyaev.

Institutes Computer Vision Group, Digital Science Lab, Microscopy Imaging Center.

Contact Yury Belyaev, Microscopy Imaging Center, [email protected], +41 78 899 0110.

Instance segmentation of cryo-ET images

Level: bachelor/master.

In the 1600s, a pioneering Dutch scientist named Antonie van Leeuwenhoek embarked on a remarkable journey that would forever transform our understanding of the natural world. Armed with a simple yet ingenious invention, the light microscope, he delved into uncharted territory, peering through its lens to reveal the hidden wonders of microscopic structures. Fast forward to today, where cryo-electron tomography (cryo-ET) has emerged as a groundbreaking technique, allowing researchers to study proteins within their natural cellular environments. Proteins, functioning as vital nano-machines, play crucial roles in life and understanding their localization and interactions is key to both basic research and disease comprehension. However, cryo-ET images pose challenges due to inherent noise and a scarcity of annotated data for training deep learning models.


Credit: S. Albert et al./PNAS (CC BY 4.0)

To address these challenges, this project aims to develop a self-supervised pipeline utilizing diffusion models for instance segmentation in cryo-ET images. By leveraging the power of diffusion models, which iteratively diffuse information to capture underlying patterns, the pipeline aims to refine and accurately segment cryo-ET images. Self-supervised learning, which relies on unlabeled data, reduces the dependence on extensive manual annotations. Successful implementation of this pipeline could revolutionize the field of structural biology, facilitating the analysis of protein distribution and organization within cellular contexts. Moreover, it has the potential to alleviate the limitations posed by limited annotated data, enabling more efficient extraction of valuable information from cryo-ET images and advancing biomedical applications by enhancing our understanding of protein behavior.

Methods The segmentation pipeline for cryo-electron tomography (cryo-ET) images consists of two stages: training a diffusion model for image generation and training an instance segmentation U-Net using synthetic and real segmentation masks.

    1. Diffusion Model Training:
        a. Data Collection: Collect and curate cryo-ET image datasets from the EMPIAR database (https://www.ebi.ac.uk/empiar/).
        b. Architecture Design: Select an appropriate architecture for the diffusion model.
        c. Model Evaluation: Cryo-ET experts will help assess image quality and fidelity through visual inspection and quantitative measures.
    2. Building the Segmentation Dataset:
        a. Synthetic and real mask generation: Use the trained diffusion model to generate synthetic cryo-ET images. The diffusion process will be seeded from either a real or a synthetic segmentation mask, yielding pairs of cryo-ET images and segmentation masks.
    3. Instance Segmentation U-Net Training:
        a. Architecture Design: Choose an appropriate instance segmentation U-Net architecture.
        b. Model Evaluation: Evaluate the trained U-Net using precision, recall, and F1 score metrics.

By combining the diffusion model for cryo-ET image generation and the instance segmentation U-Net, this pipeline provides an efficient and accurate approach to segment structures in cryo-ET images, facilitating further analysis and interpretation.
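A schematic sketch of stages 2 and 3 is given below. It assumes a trained mask-conditioned diffusion sampler (`sample_from_mask`) and a U-Net (`unet`) already exist, and it simplifies instance masks to integer label volumes; all names are placeholders, not library APIs.

```python
# Schematic sketch of stages 2-3: build a synthetic (image, mask) dataset with
# a trained diffusion model, then train a segmentation U-Net on the pairs.
import torch
import torch.nn.functional as F

def build_synthetic_dataset(sample_from_mask, masks, n_steps=250):
    """Seed the reverse diffusion process from real or synthetic masks,
    yielding paired (image, mask) training examples."""
    return [(sample_from_mask(m, n_steps), m) for m in masks]  # m: (D, H, W) labels

def train_unet(unet, pairs, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(unet.parameters(), lr=lr)
    for _ in range(epochs):
        for image, mask in pairs:
            logits = unet(image[None, None])           # (1, C, D, H, W)
            loss = F.cross_entropy(logits, mask[None].long())
            opt.zero_grad(); loss.backward(); opt.step()
    return unet
```

Precision, recall, and F1 would then be computed on held-out, expert-annotated real tomograms (stage 3b).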

References
    1. Kwon, Diana. "The secret lives of cells-as never seen before." Nature 598.7882 (2021): 558-560.
    2. Moebel, Emmanuel, et al. "Deep learning improves macromolecule identification in 3D cellular cryo-electron tomograms." Nature Methods 18.11 (2021): 1386-1394.
    3. Rice, Gavin, et al. "TomoTwin: generalized 3D localization of macromolecules in cryo-electron tomograms with structural data mining." Nature Methods (2023): 1-10.

Contacts Prof. Thomas Lemmin, Institute of Biochemistry and Molecular Medicine, Bühlstrasse 28, 3012 Bern ([email protected])

Prof. Paolo Favaro, Institute of Computer Science, Neubrückstrasse 10, 3012 Bern ([email protected])

Adding and removing multiple sclerosis lesions in MR imaging with diffusion networks

Background Multiple sclerosis lesions are the result of demyelination: they appear as dark spots on T1-weighted MRI and as bright spots on FLAIR MRI. Image analysis for MS patients requires both the accurate detection of new and enhancing lesions and the assessment of atrophy via local thickness and/or volume changes in the cortex. Detection of new and growing lesions is possible using deep learning, but is made difficult by the relative lack of training data; meanwhile, cortical morphometry can be affected by the presence of lesions, meaning that removing lesions prior to morphometry may be more robust. Existing 'lesion filling' methods are rather crude, yielding unrealistic-looking brains where the borders of the removed lesions are clearly visible.

Aim: Denoising diffusion networks are the current gold standard in MRI image generation [1]: we aim to leverage this technology to remove and add lesions to existing MRI images. This will allow us to create realistic synthetic MRI images for training and validating MS lesion segmentation algorithms, and for investigating the sensitivity of morphometry software to the presence of MS lesions at a variety of lesion load levels.

Materials and Methods: A large, annotated, heterogeneous dataset of MRI data from MS patients, as well as images of healthy controls without white matter lesions, will be available for developing the method. The student will work in a research group with a long track record in applying deep learning methods to neuroimaging data, as well as experience training denoising diffusion networks.
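As a sketch of how lesion removal could look at inference time, the snippet below follows a RePaint-style inpainting loop: the lesion area is resampled from noise while the rest of the volume is pinned to the (noised) original. `diffusion.q_sample` and `diffusion.p_sample` are placeholder names for a trained denoising diffusion model, not an existing API.

```python
# Hedged sketch of mask-guided lesion editing with a trained diffusion model.
import torch

@torch.no_grad()
def remove_lesions(diffusion, model, image, lesion_mask, T=1000):
    """Regenerate the masked region from noise while keeping healthy tissue."""
    x = torch.randn_like(image)
    for t in reversed(range(T)):
        known = diffusion.q_sample(image, t)             # noise original to level t
        x = lesion_mask * x + (1 - lesion_mask) * known  # pin healthy voxels
        x = diffusion.p_sample(model, x, t)              # one reverse denoising step
    return x
```

With a model trained on healthy-control images this removes lesions; the same loop seeded with lesion masks on healthy scans, driven by a lesion-conditioned model, would add them.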

Nature of the Thesis:

Literature review: 10%

Replication of Blob Loss paper: 10%

Implementation of the sliding window metrics: 10%

Training on MS lesion segmentation task: 30%

Extension to other datasets: 20%

Results analysis: 20%

Fig. Results of an existing lesion filling algorithm, showing inadequate performance

Requirements:

Interest/Experience with image processing

Python programming knowledge (Pytorch bonus)

Interest in neuroimaging

Supervisor(s):

PD. Dr. Richard McKinley

Institutes: Diagnostic and Interventional Neuroradiology

Center for Artificial Intelligence in Medicine (CAIM), University of Bern

References: [1] Brain Imaging Generation with Latent Diffusion Models, Pinaya et al., accepted at the Deep Generative Models workshop @ MICCAI 2022, https://arxiv.org/abs/2209.07162

Contact : PD Dr Richard McKinley, Support Centre for Advanced Neuroimaging ( [email protected] )

Improving metrics and loss functions for targets with imbalanced size: sliding window Dice coefficient and loss.

Background The Dice coefficient is the most commonly used metric for segmentation quality in medical imaging, and a differentiable version of the coefficient is often used as a loss function, in particular for small target classes such as multiple sclerosis lesions. The Dice coefficient has the benefit that it is applicable where the target class is in the minority (for example, when segmenting small lesions). However, if lesion sizes are mixed, the loss and metric are biased towards performance on large lesions, leading smaller lesions to be missed and harming overall lesion detection. A recently proposed loss function (blob loss [1]) aims to combat this by treating each connected component of a lesion mask separately, and claims improvements over Dice loss on lesion detection scores in a variety of tasks.

Aim: The aim of this thesis is twofold. First, to benchmark blob loss against a simple, potentially superior loss for instance detection: sliding window Dice loss, in which the Dice loss is calculated over a sliding window across the area/volume of the medical image. Second, to investigate whether a sliding window Dice coefficient is better correlated with lesion-wise detection metrics than the Dice coefficient, and may serve as an alternative metric capturing both global and instance-wise detection.
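A minimal sketch of the sliding window Dice loss described above, assuming 2D inputs for brevity (volumes would use avg_pool3d); window size, stride, and epsilon are illustrative choices.

```python
# Soft Dice computed per local window via average pooling, then averaged, so
# small lesions carry as much weight as large ones within their windows.
import torch
import torch.nn.functional as F

def sliding_window_dice_loss(probs, target, window=32, eps=1e-6):
    """probs, target: (B, 1, H, W) with values in [0, 1]."""
    pool = lambda x: F.avg_pool2d(x, window, stride=window // 2)
    inter = pool(probs * target)              # per-window mean intersection
    denom = pool(probs) + pool(target)        # per-window mean of the two sums
    dice = (2 * inter + eps) / (denom + eps)  # empty windows score 1, loss 0
    return 1 - dice.mean()
```

The same windowed Dice, computed on binarized predictions, would serve as the proposed evaluation metric.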

Materials and Methods: A large, annotated, heterogeneous dataset of MRI data from MS patients will be available for benchmarking the method, as well as our existing codebases for MS lesion segmentation.  Extension of the method to other diseases and datasets (such as covered in the blob loss paper) will make the method more plausible for publication.  The student will work alongside clinicians and engineers carrying out research in multiple sclerosis lesion segmentation, in particular in the context of our running project supported by the CAIM grant.


Fig. An annotated MS lesion case, showing the variety of lesion sizes

References: [1] blob loss: instance imbalance aware loss functions for semantic segmentation, Kofler et al, https://arxiv.org/abs/2205.08209

Idempotent and partial skull-stripping in multispectral MRI imaging

Background Skull stripping (or brain extraction) refers to the masking of non-brain tissue from structural MRI imaging.  Since 3D MRI sequences allow reconstruction of facial features, many data providers supply data only after skull-stripping, making this a vital tool in data sharing.  Furthermore, skull-stripping is an important pre-processing step in many neuroimaging pipelines, even in the deep-learning era: while many methods could now operate on data with skull present, they have been trained only on skull-stripped data and therefore produce spurious results on data with the skull present.

High-quality skull-stripping algorithms based on deep learning are now widely available: the most prominent example is HD-BET [1]. A major downside of HD-BET is its behaviour on datasets to which skull-stripping has already been applied: in this case the algorithm falsely identifies brain tissue as skull and masks it. A skull-stripping algorithm F not exhibiting this behaviour would be idempotent: F(F(x)) = F(x) for any image x. Furthermore, legacy datasets from before the availability of high-quality skull-stripping algorithms may still contain images which have been inadequately skull-stripped: currently the only solution to improve the skull-stripping on this data is to go back to the original data source or to manually correct the skull-stripping, which is time-consuming and prone to error.

Aim: In this project, the student will develop an idempotent skull-stripping network which can also handle partially skull-stripped inputs. In the best case, the network will operate well on a large subset of the data we work with (e.g. structural MRI, diffusion-weighted MRI, perfusion-weighted MRI, susceptibility-weighted MRI, at a variety of field strengths) to maximize the future applicability of the network across the teams in our group.

Materials and Methods: Multiple datasets, both publicly available and internal (encompassing thousands of 3D volumes) will be available. Silver standard reference data for standard sequences at 1.5T and 3T can be generated using existing tools such as HD-BET: for other sequences and field strengths semi-supervised learning or methods improving robustness to domain shift may be employed.  Robustness to partial skull-stripping may be induced by a combination of learning theory and model-based approaches.
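One possible training formulation, sketched below under the assumption that the network outputs the brain-masked image directly: a supervised term against (possibly silver-standard) targets plus a penalty that makes a second application of the network a no-op.

```python
# Sketch of an idempotency-regularized objective for a skull-stripper F,
# encouraging F(F(x)) = F(x) alongside the usual supervised loss.
import torch
import torch.nn.functional as F

def idempotent_loss(net, x, brain_mask, lam=0.1):
    stripped = net(x)                              # first pass on raw image
    supervised = F.l1_loss(stripped, x * brain_mask)
    restripped = net(stripped.detach())            # second pass on own output
    idem = F.l1_loss(restripped, stripped.detach())
    return supervised + lam * idem
```

Feeding partially stripped inputs (e.g. x multiplied by random partial masks) into the same objective is one way to model the partial skull-stripping case.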

Nature of the Thesis:

Dataset curation: 10%

Idempotent skull-stripping model building: 30%

Modelling of partial skull-stripping: 10%

Extension of model to handle partial skull: 30%

Results analysis: 10%

Fig. An example of failed skull-stripping requiring manual correction

References: [1] Isensee F, Schell M, Pflueger I, et al. Automated brain extraction of multisequence MRI using artificial neural networks. Hum Brain Mapp. 2019; 40: 4952–4964. https://doi.org/10.1002/hbm.24750

Automated leaf detection and leaf area estimation (for Arabidopsis thaliana)

Correlating plant phenotypes such as leaf area or number of leaves to the genotype (i.e. changes in DNA) is a common goal for plant breeders and molecular biologists. Such data can not only help to understand fundamental processes in nature, but can also help to improve ecotypes, e.g. to perform better under climate change or to reduce fertiliser input. However, collecting data for many plants is very time-consuming, and automated data acquisition is necessary.

The project aims at building a machine learning model to automatically detect plants in top-view images (see examples below), segment their leaves (see Fig C) and to estimate the leaf area. This information will then be used to determine the leaf area of different Arabidopsis ecotypes. The project will be carried out in collaboration with researchers of the Institute of Plant Sciences at the University of Bern. It will also involve the design and creation of a dataset of plant top-views with the corresponding annotation (provided by experts at the Institute of Plant Sciences).
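As a sketch of the intended inference step, assuming a Mask R-CNN fine-tuned on the annotated top-view dataset (the fine-tuning itself is omitted, and the pixel scale is a placeholder):

```python
# Leaf counting and leaf-area estimation from predicted instance masks.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)  # bg + leaf
model.eval()  # weights would come from fine-tuning on the annotated dataset

@torch.no_grad()
def leaf_stats(image, mm2_per_pixel, score_thresh=0.5):
    """image: (3, H, W) float tensor in [0, 1]; returns count and total area."""
    pred = model([image])[0]
    masks = pred["masks"][pred["scores"] > score_thresh] > 0.5  # (N, 1, H, W)
    total_area_mm2 = masks.sum().item() * mm2_per_pixel  # overlaps counted twice
    return masks.shape[0], total_area_mm2
```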


Contact: Prof. Dr. Paolo Favaro ( [email protected] )

Master Projects at the ARTORG Center

The Gerontechnology and Rehabilitation group at the ARTORG Center for Biomedical Engineering is offering multiple MSc thesis projects to students who are interested in working with real patient data, artificial intelligence and machine learning algorithms. The goal of these projects is to transfer the findings to the clinic in order to solve today's healthcare problems and thus to improve the quality of life of patients.

  • Assessment of Digital Biomarkers at Home by Radar. [PDF]
  • Comparison of Radar, Seismograph and Ballistocardiography to Monitor Sleep at Home. [PDF]
  • Sentimental Analysis in Speech. [PDF]

Contact: Dr. Stephan Gerber ([email protected])

Internship in Computational Imaging at Prophesee

A 6-month internship at Prophesee, Grenoble, is offered to a talented Master student.

The topic of the internship is burst imaging, following the work of Sam Hasinoff, and exploring ways to improve it using event-based vision.

Compensation to cover the expenses of living in Grenoble is offered. Only students who have the legal right to work in France can apply.

Anyone interested can send an email with their CV to Daniele Perrone ([email protected]).

Using machine learning applied to wearables to predict mental health

This Master's project lies at the intersection of psychiatry and computer science and aims to use machine learning techniques to improve health. Using sensors to detect sleep and waking behavior has as yet unexplored potential to reveal insights into health. In this study, we make use of a watch-like device, called an actigraph, which tracks motion to quantify sleep behavior and waking activity. Participants in the study consist of healthy and depressed adolescents who wear actigraphs for a year, during which time we query their mental health status monthly using online questionnaires. For this Master's thesis we aim to use machine learning methods to predict mental health based on the actigraph data. The ability to predict mental health crises based on sleep and wake behavior would provide an opportunity for intervention, significantly impacting the lives of patients and their families. This Master's thesis is a collaboration between Professor Paolo Favaro at the Institute of Computer Science ([email protected]) and Dr. Leila Tarokh at the Universitäre Psychiatrische Dienste (UPD) ([email protected]). We are looking for a highly motivated individual interested in bridging disciplines.

Bachelor or Master Projects at the ARTORG Center

The Gerontechnology and Rehabilitation group at the ARTORG Center for Biomedical Engineering is offering multiple BSc and MSc thesis projects to students who are interested in working with real patient data, artificial intelligence and machine learning algorithms. The goal of these projects is to transfer the findings to the clinic in order to solve today's healthcare problems and thus to improve the quality of life of patients.

  • Machine Learning Based Gait-Parameter Extraction by Using Simple Rangefinder Technology. [PDF]
  • Detection of Motion in Video Recordings. [PDF]
  • Home-Monitoring of Elderly by Radar. [PDF]
  • Gait feature detection in Parkinson's Disease. [PDF]
  • Development of an arthroscopic training device using virtual reality. [PDF]

Contact: Dr. Stephan Gerber ([email protected]), Michael Single ([email protected])

Dynamic Transformer

Level: bachelor.

Visual Transformers have obtained state-of-the-art classification accuracies [ViT, DeiT, T2T, BoTNet]. Mixtures of experts can be used to increase the capacity of a neural network by learning instance-dependent execution pathways in a network [MoE]. In this research project we aim to push transformers to their limit and combine their dynamic attention with MoEs. Compared to the Switch Transformer [Switch], we will use a much more efficient formulation of mixing [CondConv, DynamicConv], and we will apply this idea to the attention part of the transformer, not the fully connected layer.

  • Input dependent attention kernel generation for better transformer layers.
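A minimal sketch of the intended mixing, with CondConv-style soft routing over expert QKV projection matrices (weights are mixed rather than outputs, so only a single projection is paid per token); all names and sizes are illustrative.

```python
# Input-dependent QKV projection: a router mixes expert weight matrices
# per instance, then one ordinary projection is applied to the tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicQKV(nn.Module):
    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.Parameter(torch.randn(num_experts, dim, 3 * dim) * dim**-0.5)

    def forward(self, x):                                      # x: (B, N, D)
        gate = F.softmax(self.router(x.mean(dim=1)), dim=-1)   # (B, E) routing
        w = torch.einsum("be,edk->bdk", gate, self.experts)    # mixed weights
        q, k, v = torch.einsum("bnd,bdk->bnk", x, w).chunk(3, dim=-1)
        return q, k, v
```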

Publication Opportunity: Dynamic Neural Networks Meets Computer Vision (a CVPR 2021 Workshop)

Extensions:

  • The same idea could be extended to other ViT/Transformer based models [DETR, SETR, LSTR, TrackFormer, BERT]

Related Papers:

  • Visual Transformers: Token-based Image Representation and Processing for Computer Vision [ViT]
  • DeiT: Data-efficient Image Transformers [DeiT]
  • Bottleneck Transformers for Visual Recognition [BoTNet]
  • Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [T2TViT]
  • Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer [MoE]
  • Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity [Switch]
  • CondConv: Conditionally Parameterized Convolutions for Efficient Inference [CondConv]
  • Dynamic Convolution: Attention over Convolution Kernels [DynamicConv]
  • End-to-End Object Detection with Transformers [DETR]
  • Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [SETR]
  • End-to-end Lane Shape Prediction with Transformers [LSTR]
  • TrackFormer: Multi-Object Tracking with Transformers [TrackFormer]
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [BERT]

Contact: Sepehr Sameni

Video Transformer

Visual Transformers have obtained state-of-the-art classification accuracies for 2D images [ViT, DeiT, T2T, BoTNet]. In this project, we aim to extend the same ideas to 3D data (videos), which requires a more efficient attention mechanism [Performer, Axial, Linformer]. In order to accelerate the training process, we could use the Multigrid technique [Multigrid].

  • Better video understanding by attention blocks.
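A sketch of one of the candidate mechanisms, axial attention, which replaces full attention over the T*H*W token grid by three cheaper passes, one per axis [Axial]; the module below is a simplified illustration.

```python
# Axial attention over a (B, T, H, W, D) video token grid.
import torch
import torch.nn as nn

class AxialAttention3D(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(3)])

    def forward(self, x):                          # x: (B, T, H, W, D)
        B, T, H, W, D = x.shape
        for i, length in enumerate((T, H, W)):
            moved = x.movedim(1 + i, 3)            # current axis -> position 3
            seq = moved.reshape(-1, length, D)     # one sequence per slice
            out, _ = self.attn[i](seq, seq, seq)
            x = out.reshape(moved.shape).movedim(3, 1 + i)
        return x
```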

Publication Opportunity: LOVEU (a CVPR workshop), Holistic Video Understanding (a CVPR workshop), ActivityNet (a CVPR workshop)

Related Papers:

  • Rethinking Attention with Performers [Performer]
  • Axial Attention in Multidimensional Transformers [Axial]
  • Linformer: Self-Attention with Linear Complexity [Linformer]
  • A Multigrid Method for Efficiently Training Video Models [Multigrid]

Inverting GIRAFFE

GIRAFFE is a newly introduced GAN that can generate scenes via composition with minimal supervision [GIRAFFE]. Generative methods can implicitly learn interpretable representations, as can be seen in GAN image interpretations [GANSpace, GanLatentDiscovery]. Decoding GIRAFFE could give us per-object interpretable representations that could be used for scene manipulation, data augmentation, scene understanding, semantic segmentation, pose estimation [iNeRF], and more.

In order to invert a GIRAFFE model, we will first train the generative model on the Clevr and CompCars datasets, then add a decoder to the pipeline and train the resulting autoencoder. We can make the task easier by knowing the number of objects in the scene and/or their positions.
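A sketch of the inversion stage under these assumptions: the trained GIRAFFE generator is frozen and an encoder is trained so that generating from the predicted latents reconstructs the input. `generator` and the latent layout are placeholders for the trained model.

```python
# Train an encoder E to invert a frozen generator G: minimize ||G(E(x)) - x||.
import torch
import torch.nn.functional as F

def train_inverter(encoder, generator, loader, epochs=20, lr=1e-4):
    for p in generator.parameters():
        p.requires_grad_(False)                 # decoder stays fixed
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                        # x: (B, 3, H, W) images
            z = encoder(x)                      # predicted per-object latents
            loss = F.mse_loss(generator(z), x)  # could add perceptual terms
            opt.zero_grad(); loss.backward(); opt.step()
    return encoder
```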

Goals:  

Scene Manipulation and Decomposition by Inverting the GIRAFFE 

Publication Opportunity:  DynaVis 2021 (a CVPR workshop on Dynamic Scene Reconstruction)  

Related Papers: 

  • GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields [GIRAFFE] 
  • Neural Scene Graphs for Dynamic Scenes 
  • pixelNeRF: Neural Radiance Fields from One or Few Images [pixelNeRF] 
  • NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [NeRF] 
  • Neural Volume Rendering: NeRF And Beyond 
  • GANSpace: Discovering Interpretable GAN Controls [GANSpace] 
  • Unsupervised Discovery of Interpretable Directions in the GAN Latent Space [GanLatentDiscovery] 
  • Inverting Neural Radiance Fields for Pose Estimation [iNeRF] 

Quantized ViT

Visual Transformers have obtained state-of-the-art classification accuracies [ViT, CLIP, DeiT], but the best ViT models are extremely compute-heavy, and running them even only for inference (without backpropagation) is expensive. Running transformers cheaply by quantization is not a new problem; it has been tackled before for BERT [BERT] in NLP [Q-BERT, Q8BERT, TernaryBERT, BinaryBERT]. In this project we will try to quantize pretrained ViT models.

Quantizing ViT models for faster inference and smaller models without losing accuracy 
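As a first baseline before the BERT-style schemes, PyTorch's post-training dynamic quantization can already be applied to the linear layers, which dominate a ViT's compute; the sketch below assumes a pretrained model from the timm library.

```python
# Post-training dynamic quantization of a pretrained ViT: int8 weights for
# all nn.Linear modules, activations quantized on the fly at inference.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(model(x).argmax(-1), quantized(x).argmax(-1))  # sanity-check agreement
```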

Publication Opportunity:  Binary Networks for Computer Vision 2021 (a CVPR workshop)  

Extensions:  

  • Having a fast pipeline for image inference with ViT will allow us to dig deep into the attention of ViT and analyze it. We might be able to prune some attention heads or replace them with static patterns (like local convolutions or dilated patterns), and we might even be able to replace the transformer with a Performer to increase the throughput even more [Performer].
  • The same idea could be extended to other ViT based models [DETR, SETR, LSTR, TrackFormer, CPTR, BoTNet, T2TViT] 
Related Papers:

  • Learning Transferable Visual Models From Natural Language Supervision [CLIP]
  • Visual Transformers: Token-based Image Representation and Processing for Computer Vision [ViT] 
  • DeiT: Data-efficient Image Transformers [DeiT] 
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [BERT] 
  • Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT [Q-BERT] 
  • Q8BERT: Quantized 8Bit BERT [Q8BERT] 
  • TernaryBERT: Distillation-aware Ultra-low Bit BERT [TernaryBERT] 
  • BinaryBERT: Pushing the Limit of BERT Quantization [BinaryBERT] 
  • Rethinking Attention with Performers [Performer] 
  • End-to-End Object Detection with Transformers [DETR] 
  • Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [SETR] 
  • End-to-end Lane Shape Prediction with Transformers [LSTR] 
  • TrackFormer: Multi-Object Tracking with Transformers [TrackFormer] 
  • CPTR: Full Transformer Network for Image Captioning [CPTR] 
  • Bottleneck Transformers for Visual Recognition [BoTNet] 
  • Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet [T2TViT] 

Multimodal Contrastive Learning

Recently, contrastive learning has gained a lot of attention for self-supervised image representation learning [SimCLR, MoCo]. Contrastive learning can be extended to multimodal data, like videos (images and audio) [CMC, CoCLR]. Most contrastive methods require large batch sizes (or large memory pools), which makes them expensive to train. In this project we are going to use batch-size-independent contrastive methods [SwAV, BYOL, SimSiam] to train multimodal representation extractors.

Our main goal is to compare the proposed method with the CMC baseline, so we will be working with STL10, ImageNet, UCF101, HMDB51, and NYU Depth-V2 datasets. 

Inspired by the recent works on smaller datasets [ConVIRT, CPD], to accelerate the training speed, we could start with two pretrained single-modal models and finetune them with the proposed method.  
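A sketch of the batch-size-independent objective in a two-modality setting, following the SimSiam recipe (stop-gradient target, shared predictor, negative cosine loss); the encoders and dimensions are illustrative.

```python
# SimSiam-style multimodal objective: no negatives, no large batches.
import torch
import torch.nn as nn
import torch.nn.functional as F

def neg_cosine(p, z):
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()  # stop-grad target

class MultimodalSimSiam(nn.Module):
    def __init__(self, image_encoder, audio_encoder, dim=2048, pred_dim=512):
        super().__init__()
        self.f_img, self.f_aud = image_encoder, audio_encoder
        self.predictor = nn.Sequential(
            nn.Linear(dim, pred_dim), nn.BatchNorm1d(pred_dim),
            nn.ReLU(inplace=True), nn.Linear(pred_dim, dim))

    def forward(self, image, audio):
        z_i, z_a = self.f_img(image), self.f_aud(audio)  # per-modality embeddings
        p_i, p_a = self.predictor(z_i), self.predictor(z_a)
        return 0.5 * (neg_cosine(p_i, z_a) + neg_cosine(p_a, z_i))
```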

  • Extending SwAV to multimodal datasets 
  • Gaining a better understanding of BYOL 

Publication Opportunity:  MULA 2021 (a CVPR workshop on Multimodal Learning and Applications)  

Extensions:

  • Most knowledge distillation methods for contrastive learners also use large batch sizes (or memory pools) [CRD, SEED]; the proposed method could be extended to knowledge distillation.
  • One could easily extend this idea to multiview learning: for example, one could have two different networks working on the same input and train them with contrastive learning, which may lead to better models [DeiT] through the communication of cross-model inductive biases.
Related Papers:

  • Self-supervised Co-training for Video Representation Learning [CoCLR]
  • Learning Spatiotemporal Features via Video and Text Pair Discrimination [CPD] 
  • Audio-Visual Instance Discrimination with Cross-Modal Agreement [AVID-CMA] 
  • Self-Supervised Learning by Cross-Modal Audio-Video Clustering [XDC] 
  • Contrastive Multiview Coding [CMC] 
  • Contrastive Learning of Medical Visual Representations from Paired Images and Text [ConVIRT] 
  • A Simple Framework for Contrastive Learning of Visual Representations [SimCLR] 
  • Momentum Contrast for Unsupervised Visual Representation Learning [MoCo] 
  • Bootstrap your own latent: A new approach to self-supervised Learning [BYOL] 
  • Exploring Simple Siamese Representation Learning [SimSiam] 
  • Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [SwAV] 
  • Contrastive Representation Distillation [CRD] 
  • SEED: Self-supervised Distillation For Visual Representation [SEED] 

Robustness of Neural Networks

Neural Networks have been found to achieve surprising performance in several tasks such as classification, detection and segmentation. However, they are also very sensitive to small (controlled) changes to the input. It has been shown that some changes to an image that are not visible to the naked eye may lead the network to output an incorrect label. This thesis will focus on studying recent progress in this area and aim to build a procedure for a trained network to self-assess its reliability in classification or one of the popular computer vision tasks.
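A minimal example of the phenomenon, and one crude self-assessment signal, is the fast gradient sign method (FGSM): perturb the input by epsilon in the direction that increases the loss and check whether the prediction flips. The sketch assumes a classifier over inputs in [0, 1].

```python
# FGSM probe: fraction of predictions flipped by an imperceptible perturbation.
import torch
import torch.nn.functional as F

def fgsm_flip_rate(model, x, y, eps=4 / 255):
    x = x.detach().clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1)   # small worst-case step
    with torch.no_grad():
        flipped = model(x_adv).argmax(-1) != model(x).argmax(-1)
    return flipped.float().mean().item()            # higher = less reliable
```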

Contact: Paolo Favaro

Master's projects at the sitem Center

The Personalised Medicine Research Group at the sitem Center for Translational Medicine and Biomedical Entrepreneurship is offering multiple MSc thesis projects to the biomedical engineering MSc students that may also be of interest to computer science students.

  • Automated quantification of cartilage quality for hip treatment decision support. PDF
  • Automated quantification of massive rotator cuff tears from MRI. PDF
  • Deep learning-based segmentation and fat fraction analysis of the shoulder muscles using quantitative MRI. PDF
  • Unsupervised Domain Adaption for Cross-Modality Hip Joint Segmentation. PDF

Contact: Dr. Kate Gerber

Internships/Master thesis @ Chronocam

3-6 month internships on event-based computer vision. Chronocam is a rapidly growing startup developing event-based technology, with more than 15 PhDs working on problems like tracking, detection, classification, SLAM, etc. Event-based computer vision has the potential to solve many long-standing problems in traditional computer vision, and this is a super exciting time as this potential is becoming more and more tangible in many real-world applications. For next year we are looking for motivated Master and PhD students with good software engineering skills (C++ and/or Python), and preferably a good computer vision and deep learning background. PhD internships will be more research-focused and possibly lead to a publication. For each intern we offer compensation to cover the expenses of living in Paris. List of some of the topics we want to explore:

  • Photo-realistic image synthesis and super-resolution from event-based data (PhD)
  • Self-supervised representation learning (PhD)
  • End-to-end Feature Learning for Event-based Data
  • Bio-inspired Filtering using Spiking Networks
  • On-the fly Compression of Event-based Streams for Low-Power IoT Cameras
  • Tracking of Multiple Objects with a Dual-Frequency Tracker
  • Event-based Autofocus
  • Stabilizing an Event-based Stream using an IMU
  • Crowd Monitoring for Low-power IoT Cameras
  • Road Extraction from an Event-based Camera Mounted in a Car for Autonomous Driving
  • Sign detection from an Event-based Camera Mounted in a Car for Autonomous Driving
  • High-frequency Eye Tracking

Email with attached CV to Daniele Perrone at  [email protected] .

Contact: Daniele Perrone

Object Detection in 3D Point Clouds

Today we have many 3D scanning techniques that allow us to capture the shape and appearance of objects. It is easier than ever to scan real 3D objects and transform them into a digital model for further processing, such as modeling, rendering or animation. However, the output of a 3D scanner is often a raw point cloud with little to no annotations. The unstructured nature of the point cloud representation makes it difficult for processing, e.g. surface reconstruction. One application is the detection and segmentation of an object of interest.  In this project, the student is challenged to design a system that takes a point cloud (a 3D scan) as input and outputs the names of objects contained in the scan. This output can then be used to eliminate outliers or points that belong to the background. The approach involves collecting a large dataset of 3D scans and training a neural network on it.
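The unordered nature of point clouds is usually handled with a permutation-invariant architecture; a minimal PointNet-style classifier, sketched below under illustrative sizes, applies a shared per-point MLP followed by a symmetric max-pool.

```python
# Tiny PointNet-style classifier: per-point features + order-invariant pooling.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU())      # shared across points
        self.head = nn.Linear(256, num_classes)

    def forward(self, pts):                        # pts: (B, 3, N) xyz points
        feat = self.point_mlp(pts)                 # (B, 256, N)
        global_feat = feat.max(dim=2).values       # permutation-invariant pool
        return self.head(global_feat)              # object class logits
```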

Contact: Adrian Wälchli

Shape Reconstruction from a Single RGB Image or Depth Map

A photograph accurately captures the world in a moment of time and from a specific perspective. Since it is a projection of the 3D space to a 2D image plane, the depth information is lost. Is it possible to restore it, given only a single photograph? In general, the answer is no. This problem is ill-posed, meaning that many different plausible depth maps exist, and there is no way of telling which one is the correct one.  However, if we cover one of our eyes, we are still able to recognize objects and estimate how far away they are. This motivates the exploration of an approach where prior knowledge can be leveraged to reduce the ill-posedness of the problem. Such a prior could be learned by a deep neural network, trained with many images and depth maps.

CNN Based Deblurring on Mobile

Deblurring finds many applications in our everyday life. It is particularly useful when taking pictures on handheld devices (e.g. smartphones) where camera shake can degrade important details. Therefore, it is desired to have a good deblurring algorithm implemented directly in the device.  In this project, the student will implement and optimize a state-of-the-art deblurring method based on a deep neural network for deployment on mobile phones (Android).  The goal is to reduce the number of network weights in order to reduce the memory footprint while preserving the quality of the deblurred images. The result will be a camera app that automatically deblurs the pictures, giving the user a choice of keeping the original or the deblurred image.

Depth from Blur

If an object in front of the camera or the camera itself moves while the aperture is open, the region of motion becomes blurred because the incoming light is accumulated in different positions across the sensor. If there is camera motion, there is also parallax. Thus, a motion blurred image contains depth information.  In this project, the student will tackle the problem of recovering a depth-map from a motion-blurred image. This includes the collection of a large dataset of blurred- and sharp images or videos using a pair or triplet of GoPro action cameras. Two cameras will be used in stereo to estimate the depth map, and the third captures the blurred frames. This data is then used to train a convolutional neural network that will predict the depth map from the blurry image.

Unsupervised Clustering Based on Pretext Tasks

The idea of this project is that we have two types of neural networks that work together: There is one network A that assigns images to k clusters and k (simple) networks of type B perform a self-supervised task on those clusters. The goal of all the networks is to make the k networks of type B perform well on the task. The assumption is that clustering in semantically similar groups will help the networks of type B to perform well. This could be done on the MNIST dataset with B being linear classifiers and the task being rotation prediction.
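A toy sketch of the setup on MNIST-sized inputs, with soft cluster assignments so everything stays differentiable; the four-way rotation task and linear networks follow the description above, the rest is illustrative.

```python
# Network A softly routes each image to one of k linear rotation classifiers B;
# the routed rotation-prediction loss trains both A and all B networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

k = 10
assigner = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, k))            # network A
rotators = nn.ModuleList(
    [nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 4)) for _ in range(k)])  # networks B

def pretext_loss(x):                                   # x: (B, 1, 28, 28)
    rot = torch.randint(0, 4, (x.shape[0],))           # 0/90/180/270 degrees
    x_rot = torch.stack(
        [torch.rot90(img, int(r), dims=(-2, -1)) for img, r in zip(x, rot)])
    weights = F.softmax(assigner(x_rot), dim=-1)       # soft cluster assignment
    logits = torch.stack([b(x_rot) for b in rotators], dim=1)   # (B, k, 4)
    per_cluster = F.cross_entropy(
        logits.flatten(0, 1), rot.repeat_interleave(k), reduction="none").view(-1, k)
    return (weights * per_cluster).sum(dim=1).mean()   # route the loss via A
```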

Adversarial Data-Augmentation

The student designs a data augmentation network that transforms training images in such a way that image realism is preserved (e.g. with a constrained spatial transformer network) and the transformed images are more difficult to classify (trained via adversarial loss against an image classifier). The model will be evaluated for different data settings (especially in the low data regime), for example on the MNIST and CIFAR datasets.
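A sketch of the adversarial game under simplifying assumptions: here a bounded additive perturbation stands in for the constrained spatial transformer, and the augmenter and classifier are arbitrary networks of matching shapes.

```python
# Alternating updates: the classifier learns on augmented images, while the
# augmenter is rewarded for making them harder to classify (adversarial loss).
import torch
import torch.nn.functional as F

def adversarial_augmentation_step(augmenter, classifier, opt_aug, opt_cls,
                                  x, y, strength=0.1):
    delta = strength * torch.tanh(augmenter(x))       # bounded, realism-preserving
    x_aug = (x + delta).clamp(0, 1)

    cls_loss = F.cross_entropy(classifier(x_aug.detach()), y)  # train classifier
    opt_cls.zero_grad(); cls_loss.backward(); opt_cls.step()

    aug_loss = -F.cross_entropy(classifier(x_aug), y)          # train augmenter
    opt_aug.zero_grad(); aug_loss.backward(); opt_aug.step()
```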

Unsupervised Learning of Lip-reading from Videos

People with sensory impairment (hearing, speech, vision) depend heavily on assistive technologies to communicate and navigate in everyday life. The mass production of media content today makes it impossible to manually translate everything into a common language for assistive technologies, e.g. captions or sign language.  In this project, the student employs a neural network to learn a representation for lip-movement in videos in an unsupervised fashion, possibly with an encoder-decoder structure where the decoder reconstructs the audio signal. This requires collecting a large dataset of videos (e.g. from YouTube) of speakers or conversations where lip movement is visible. The outcome will be a neural network that learns an audio-visual representation of lip movement in videos, which can then be leveraged to generate captions for hearing impaired persons.

Learning to Generate Topographic Maps from Satellite Images

Satellite images have many applications, e.g. in meteorology, geography, education, cartography and warfare. They are an accurate and detailed depiction of the surface of the earth from above. Although it is relatively simple to collect many satellite images in an automated way, challenges arise when processing them for use in navigation and cartography. The idea of this project is to automatically convert an arbitrary satellite image, of e.g. a city, to a map of simple 2D shapes (streets, houses, forests) and label them with colors (semantic segmentation). The student will collect a dataset of satellite image and topological maps and train a deep neural network that learns to map from one domain to the other. The data could be obtained from a Google Maps database or similar.

New Variables of Brain Morphometry: the Potential and Limitations of CNN Regression

Timo Blattner · Sept. 2022.

The calculation of variables of brain morphology is computationally very expensive and time-consuming. A previous work showed the feasibility of extracting the variables directly from T1-weighted brain MRI images using a convolutional neural network. We used significantly more data and extended their model to a new set of neuromorphological variables, which could become interesting biomarkers in the future for the diagnosis of brain diseases. The model shows for nearly all subjects a less than 5% mean relative absolute error. This high relative accuracy can be attributed to the low morphological variance between subjects and the ability of the model to predict the cortical atrophy age trend. The model however fails to capture all the variance in the data and shows large regional differences. We attribute these limitations in part to the moderate to poor reliability of the ground truth generated by FreeSurfer. We further investigated the effects of training data size and model complexity on this regression task and found that the size of the dataset had a significant impact on performance, while deeper models did not perform better. Lack of interpretability and dependence on a silver ground truth are the main drawbacks of this direct regression approach.

Home Monitoring by Radar

Lars Ziegler · Sept. 2022.

Detection and tracking of humans via UWB radars is a promising and continuously evolving field with great potential for medical technology. This contactless method of acquiring data on a patient's movement patterns is ideal for in-home application. As irregularities in a patient's movement patterns are an indicator of various health problems, including neurodegenerative diseases, the insight this data could provide may enable earlier detection of such problems. In this thesis a signal processing pipeline is presented with which a person's movement is modeled. During an experiment, 142 measurements were recorded by two separate radar systems and one lidar system, each of which consisted of multiple sensors. The models that were calculated on these measurements by the signal processing pipeline were used to predict the times when a person stood up or sat down. The predictions showed an accuracy of 72.2%.

Revisiting non-learning based 3D reconstruction from multiple images

Aaron Sägesser · Oct. 2021.

Arthroscopy consists of challenging tasks and requires skills that even today, young surgeons still train directly throughout the surgery. Existing simulators are expensive and rarely available. Through the growing potential of virtual reality (VR) (head-mounted) devices for simulation and their applicability in the medical context, these devices have become a promising alternative that would be orders of magnitude cheaper and could be made widely available. To build a VR-based training device for arthroscopy is the overall aim of our project, as this would be of great benefit and might even be applicable in other minimally invasive surgery (MIS). This thesis marks a first step of the project with its focus to explore and compare well-known algorithms in a multi-view stereo (MVS) based 3D reconstruction with respect to imagery acquired by an arthroscopic camera. Simultaneously with this reconstruction, we aim to gain essential measures to compare the VR environment to the real world, as validation of the realism of future VR tasks. We evaluate 3 different feature extraction algorithms with 3 different matching techniques and 2 different algorithms for the estimation of the fundamental (F) matrix. The evaluation of these 18 different setups is made with a reconstruction pipeline embedded in a Jupyter notebook implemented in Python based on common computer vision libraries and compared with imagery generated with a mobile phone as well as with the reconstruction results of state-of-the-art (SOTA) structure-from-motion (SfM) software COLMAP and Multi-View Environment (MVE). Our comparative analysis manifests the challenges of heavy distortion, the fish-eye shape and weak image quality of arthroscopic imagery, as all results are substantially worse using this data. However, there are huge differences regarding the different setups. Scale Invariant Feature Transform (SIFT) and Oriented FAST Rotated BRIEF (ORB) in combination with k-Nearest Neighbour (kNN) matching and Least Median of Squares (LMedS) present the most promising results. Overall, the 3D reconstruction pipeline is a useful tool to foster the process of gaining measurements from the arthroscopic exploration device and to complement the comparative research in this context.

Examination of Unsupervised Representation Learning by Predicting Image Rotations

Eric Lagger · Sept. 2020.

In recent years deep convolutional neural networks have achieved a lot of progress. To train such a network a lot of data is required, and in supervised learning algorithms it is necessary that the data is labeled. Labeling data requires a lot of human work, which takes a lot of time and money. To avoid the inconveniences that come with this, we would like to find systems that don't need labeled data and are therefore unsupervised learning algorithms. This is the importance of unsupervised algorithms, even though their outcome is not yet on the same qualitative level as that of supervised algorithms. In this thesis we discuss such an approach and compare the results to other papers. A deep convolutional neural network is trained to learn the rotations that have been applied to a picture. So we take a large amount of images, apply some simple rotations, and the task of the network is to discover in which direction the image has been rotated. The data doesn't need to be labeled with any category or anything else. As long as all the pictures are oriented consistently, we hope to find some high-dimensional patterns for the network to learn.

StitchNet: Image Stitching using Autoencoders and Deep Convolutional Neural Networks

Maurice Rupp · Sept. 2019.

This thesis explores the prospect of artificial neural networks for image processing tasks. More specifically, it aims to achieve the goal of stitching multiple overlapping images to form a bigger, panoramic picture. Until now, this task has been approached solely with "classical", hardcoded algorithms, while deep learning is at most used for specific subtasks. This thesis introduces a novel end-to-end neural network approach to image stitching called StitchNet, which uses a pre-trained autoencoder and deep convolutional networks. In addition to presenting several new datasets for the task of supervised image stitching, each with 120,000 training and 5,000 validation samples, this thesis also conducts various experiments with different kinds of existing networks designed for image super-resolution and image segmentation, adapted to the task of image stitching. StitchNet outperforms most of the adapted networks in both quantitative as well as qualitative results.

Facial Expression Recognition in the Wild

Luca Rolshoven · Sept. 2019.

The idea of inferring the emotional state of a subject by looking at their face is nothing new. Neither is the idea of automating this process using computers. Researchers used to computationally extract handcrafted features from face images that had proven themselves to be effective and then used machine learning techniques to classify the facial expressions using these features. Recently, there has been a trend towards using deep learning and especially Convolutional Neural Networks (CNNs) for the classification of these facial expressions. Researchers were able to achieve good results on images that were taken in laboratories under the same or at least similar conditions. However, these models do not perform very well on more arbitrary face images with different head poses and illumination. This thesis aims to show the challenges of Facial Expression Recognition (FER) in this wild setting. It presents the currently used datasets and the present state-of-the-art results on one of the biggest facial expression datasets currently available. The contributions of this thesis are twofold. Firstly, I analyze three famous neural network architectures and their effectiveness on the classification of facial expressions. Secondly, I present two modifications of one of these networks that lead to the proposed STN-COV model. While this model does not outperform all of the current state-of-the-art models, it does beat several of them.

A Study of 3D Reconstruction of Varying Objects with Deformable Parts Models

Raoul Grossenbacher · July 2019.

This work covers a new approach to 3D reconstruction. In traditional 3D reconstruction one uses multiple images of the same object to calculate a 3D model, taking information gained from the differences between the images, like camera position, illumination of the images, rotation of the object and so on, to compute a point cloud representing the object. The characteristic trait shared by all these approaches is that one can change almost everything about the image, but it is not possible to change the object itself, because one needs to find correspondences between the images. To be able to use different instances of the same object, we used a 3D DPM model that can find different parts of an object in an image, thereby detecting the correspondences between the different pictures, which we can then use to calculate the 3D model. To put this theory into practice, we gave a 3D DPM model, which was trained to detect cars, pictures of different car brands, where no pair of images showed the same vehicle, and used the detected correspondences and the Factorization Method to compute the 3D point cloud. This technique leads to a completely new approach to 3D reconstruction, because changing the object itself was never done before.

Motion deblurring in the wild: replication and improvements

Alvaro Juan Lahiguera · Jan. 2019.

Coma Outcome Prediction with Convolutional Neural Networks

Stefan Jonas · Oct. 2018.

Automatic Correction of Self-Introduced Errors in Source Code

Sven Kellenberger · Aug. 2018.

Neural Face Transfer: Training a Deep Neural Network to Face-Swap

Till Nikolaus Schnabel · July 2018.

This thesis explores the field of artificial neural networks with realistic-looking visual outputs. It aims at morphing face pictures of a specific identity to look like another individual by only modifying key features, such as eye color, while leaving identity-independent features unchanged. Prior works have covered the topic of symmetric translation between two specific domains but failed to optimize it on faces where only parts of the image may be changed. This work applies a face masking operation to the output at training time, which forces the image generator to preserve colors while altering the face, fitting it naturally inside the unmorphed surroundings. Various experiments are conducted, including an ablation study on the final setting, decreasing the baseline identity-switching performance from 81.7% to 75.8% whilst improving the average χ2 color distance from 0.551 to 0.434. The provided code-based software gives users easy access to apply this neural face swap to images and videos of arbitrary crop and brings Computer Vision one step closer to replacing Computer Graphics in this specific area.

A Study of the Importance of Parts in the Deformable Parts Model

Sammer Puran · June 2017.

Self-Similarity as a Meta Feature

Lucas Husi · April 2017.

A Study of 3D Deformable Parts Models for Detection and Pose-Estimation

Simon Jenni · March 2015.

Accelerated Federated Learning on Client Silos with Label Noise: RHO Selection in Classification and Segmentation

Irakli Kelbakiani · May 2024.

Federated Learning has recently gained more research interest. This increased attention is caused by factors including the growth of decentralized data, privacy concerns, and new privacy regulations. In Federated Learning, remote servers keep training a model on local datasets independently, and subsequently, local models are aggregated into a global model, which achieves better overall performance. Sending local model weights instead of the entire dataset is a significant advantage of Federated Learning over centralized classical machine learning algorithms. Federated learning involves uploading and downloading model parameters multiple times, so there are multiple communication rounds between the global server and remote client servers, which imposes challenges. The high number of necessary communication rounds not only increases high-cost communication overheads but is also a critical limitation for servers with low network bandwidth, which leads to latency and a higher probability of training failures caused by communication breakdowns. To mitigate these challenges, we aim to provide a fast-convergent Federated Learning training methodology that decreases the number of necessary communication rounds. We build on the Reducible Holdout Loss Selection (RHO-Loss) batch selection methodology, which "selects low-noise, task-relevant, non-redundant points for training" [1]. We hypothesize that if client silos employ the RHO-Loss methodology and successfully avoid training their local models on noisy and non-relevant samples, clients may offer stable and consistent updates to the global server, which could lead to faster convergence of the global model. Our contribution focuses on investigating the RHO-Loss method in a simulated federated setting for the Clothing1M dataset. We also examine its applicability to medical datasets and check its effectiveness in a simulated federated environment. Our experimental results show a promising outcome, specifically a reduction in communication rounds for the Clothing1M dataset. However, as the success of the RHO-Loss selection method depends on the availability of sufficient training data for the target RHO model and for the Irreducible RHO model, we emphasize that our contribution applies to those Federated Learning scenarios where client silos hold enough training data to successfully train and benefit from their RHO model on their local dataset.

Amodal Leaf Segmentation

Nicolas Maier · Nov. 2023.

Plant phenotyping is the process of measuring and analyzing various traits of plants. It provides essential information on how genetic and environmental factors affect plant growth and development. Manual phenotyping is highly time-consuming; therefore, many computer vision and machine learning based methods have been proposed in the past years to perform this task automatically based on images of the plants. However, the publicly available datasets (in particular, of Arabidopsis thaliana) are limited in size and diversity, making them unsuitable to generalize to new unseen environments. In this work, we propose a complete pipeline able to automatically extract traits of interest from an image of Arabidopsis thaliana. Our method uses a minimal amount of existing annotated data from a source domain to generate a large synthetic dataset adapted to a different target domain (e.g., different backgrounds, lighting conditions, and plant layouts). In addition, unlike the source dataset, the synthetic one provides ground-truth annotations for the occluded parts of the leaves, which are relevant when measuring some characteristics of the plant, e.g., its total area. This synthetic dataset is then used to train a model to perform amodal instance segmentation of the leaves to obtain the total area, leaf count, and color of each plant. To validate our approach, we create a small dataset composed of manually annotated real images of Arabidopsis thaliana, which is used to assess the performance of the models.

Assessment of movement and pose in a hospital bed by ambient and wearable sensor technology in healthy subjects

Tony Licata · Sept. 2022.

The use of automated systems describing human motion has become possible in various domains. Most of the proposed systems are designed to work with people moving around in a standing position. Because such a system could be interesting in a medical environment, we propose in this work a pipeline that can effectively predict human motion from people lying on beds. The proposed pipeline is tested with a data set composed of 41 participants executing 7 predefined tasks in a bed. The motion of the participants is measured with video cameras, accelerometers and a pressure mat. Various experiments are carried out with the information retrieved from the data set. Two approaches combining the data from the different measurement technologies are explored. The performance of the different experiments is measured, and the proposed pipeline is composed of the components providing the best results. We then show that the proposed pipeline only needs the video cameras, which makes the proposed setup easier to implement in real-life situations.

Machine Learning Based Prediction of Mental Health Using Wearable-measured Time Series

Seyedeh Sharareh Mirzargar · Sept. 2022.

Depression is the second major cause for years spent in disability and has a growing prevalence in adolescents. The recent Covid-19 pandemic has intensified the situation and limited in-person patient monitoring due to distancing measures. Recent advances in wearable devices have made it possible to record the rest/activity cycle remotely with high precision and in real-world contexts. We aim to use machine learning methods to predict an individual's mental health based on wearable-measured sleep and physical activity. Predicting an impending mental health crisis of an adolescent allows for prompt intervention, detection of depression onset or its recursion, and remote monitoring. To achieve this goal, we train three primary forecasting models (linear regression, random forest, and light gradient boosted machine, LightGBM) and two deep learning models (block recurrent neural network, block RNN, and temporal convolutional network, TCN) on Actigraph measurements to forecast mental health in terms of depression, anxiety, sleepiness, stress, sleep quality, and behavioral problems. Our models achieve a high forecasting performance, with the random forest being the winner, reaching an accuracy of 98% for forecasting trait anxiety. We perform extensive experiments to evaluate the models' performance in accuracy, generalization, and feature utilization, using a naive forecaster as the baseline. Our analysis shows minimal mental health changes over two months, making the prediction task easily achievable. Due to these minimal changes in mental health, the models tend to primarily use the historical values of the mental health evaluation instead of Actigraph features. At the time of this master thesis, the data acquisition step is still in progress. In future work, we plan to train the models on the complete dataset using a longer forecasting horizon to increase the level of mental health changes and perform transfer learning to compensate for the small dataset size. This interdisciplinary project demonstrates the opportunities and challenges in machine learning based prediction of mental health, paving the way toward using the same techniques to forecast other mental disorders such as internalizing disorder, Parkinson's disease, Alzheimer's disease, etc., and improving the quality of life for individuals who have a mental disorder.

CNN Spike Detector: Detection of Spikes in Intracranial EEG using Convolutional Neural Networks

Stefan Jonas · Oct. 2021.

The detection of interictal epileptiform discharges in the visual analysis of electroencephalography (EEG) is an important but very difficult, tedious, and time-consuming task. There have been decades of research on computer-assisted detection algorithms, most recently focused on using Convolutional Neural Networks (CNNs). In this thesis, we present the CNN Spike Detector, a convolutional neural network to detect spikes in intracranial EEG. Our dataset of 70 intracranial EEG recordings from 26 subjects with epilepsy introduces new challenges in this research field. We report cross-validation results with a mean AUC of 0.926 (±0.04), an area under the precision-recall curve (AUPRC) of 0.652 (±0.10), and 12.3 (±7.47) false positive epochs per minute at a sensitivity of 80%. A visual examination of false positive segments is performed to understand the model behavior leading to a relatively high false detection rate. We notice issues with the evaluation measures and highlight a major limitation of the common approach of detecting spikes using short segments, namely that the network is not capable of considering the greater context of the segment with regard to its origination. For this reason, we present the Context Model, an extension in which the CNN Spike Detector is supplied with additional information about the channel. Results show promising but limited performance improvements. This thesis provides important findings about the spike detection task for intracranial EEG and lays out promising future research directions to develop a network capable of assisting experts in real-world clinical applications.

PolitBERT - Deepfake Detection of American Politicians using Natural Language Processing

Maurice Rupp · April 2021

This thesis explores the application of modern Natural Language Processing techniques to the detection of artificially generated videos of popular American politicians. Instead of focusing on detecting anomalies and artifacts in images and sounds, this thesis focuses on detecting irregularities and inconsistencies in the words themselves, opening up a new way to detect fake content. A novel, domain-adapted, pre-trained version of the language model BERT, combined with several mechanisms to overcome severe dataset imbalances, yielded the best quantitative as well as qualitative results. In addition to creating the largest publicly available dataset of English-speaking politicians' speech, consisting of 1.5M sentences from over 1,000 persons, this thesis conducts various experiments with different kinds of text classification and sequence processing algorithms applied to the political domain. Furthermore, multiple ablations to manage severe data imbalance are presented and evaluated.
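
One common imbalance-handling mechanism can be sketched as follows: fine-tune a stock BERT classifier with a class-weighted loss. This is illustrative only; PolitBERT additionally relies on domain-adapted pre-training on political speech, and the example sentences and the weight of 10.0 are hypothetical.

```python
# Illustrative: one weighted training step on a stock BERT classifier.
import torch
from torch import nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

batch = tok(["We will rebuild our great economy.",      # hypothetical examples
             "A generated sentence from a fake video."],
            padding=True, return_tensors="pt")
labels = torch.tensor([0, 1])            # 0 = authentic, 1 = generated

# Weight the rare (generated) class higher to counter the imbalance.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))
logits = model(**batch).logits
loss = loss_fn(logits, labels)
loss.backward()                          # one illustrative training step
```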

A Study on the Inversion of Generative Adversarial Networks

Ramona Beck · March 2021

The desire to use generative adversarial networks (GANs) for real-world tasks such as object segmentation or image manipulation is increasing as synthesis quality improves, which has given rise to an emerging research area called GAN inversion that focuses on exploring methods for embedding real images into the latent space of a GAN. In this work, we investigate different GAN inversion approaches using an existing generative model architecture that takes a completely unsupervised approach to object segmentation and is based on StyleGAN2. In particular, we propose and analyze algorithms for embedding real images into the different latent spaces Z, W, and W+ of StyleGAN following an optimization-based inversion approach, while also investigating a novel approach that allows fine-tuning of the generator during the inversion process. Furthermore, we investigate a hybrid and a learning-based inversion approach, where in the former we train an encoder with embeddings optimized by our best optimization-based inversion approach, and in the latter we define an autoencoder, consisting of an encoder and the generator of our generative model as a decoder, and train it to map an image into the latent space. We demonstrate the effectiveness of our methods as well as their limitations through a quantitative comparison with existing inversion methods and by conducting extensive qualitative and quantitative experiments with synthetic data as well as real images from a complex image dataset. We show that we achieve qualitatively satisfying embeddings in the W and W+ spaces with our optimization-based algorithms, that fine-tuning the generator during the inversion process leads to qualitatively better embeddings in all latent spaces studied, and that the learning-based approach also benefits from a variable generator as well as from pre-training with our hybrid approach. Furthermore, we evaluate our approaches on the object segmentation task and show that both our optimization-based and our hybrid and learning-based methods are able to generate meaningful embeddings that achieve reasonable object segmentations. Overall, our proposed methods illustrate the potential that lies in GAN inversion and its application to real-world tasks, especially in the relaxed version of GAN inversion where the weights of the generator are allowed to vary.
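
The optimization-based inversion used as the starting point above can be sketched in a few lines. The snippet assumes a pre-trained `generator` mapping latent codes to images (an assumption of this sketch); practical StyleGAN2 inversion typically adds a perceptual (e.g. LPIPS) loss and works in the W/W+ parameterizations.

```python
# Illustrative sketch: embed a target image by optimizing a latent code.
import torch

def invert(generator, target, latent_dim=512, steps=500, lr=0.05):
    """Find a latent code w such that generator(w) reconstructs `target`."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pixel reconstruction loss; real pipelines add perceptual terms.
        loss = torch.nn.functional.mse_loss(generator(w), target)
        loss.backward()
        opt.step()
    return w.detach()  # the embedding of `target` in the latent space
```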

Multi-scale Momentum Contrast for Self-supervised Image Classification

Zhao Xueqi · Dec. 2020

As supervised learning has matured, research focus has gradually shifted to the field of self-supervised learning. "Momentum Contrast" (MoCo) introduced a new self-supervised learning method and raised the accuracy of self-supervised learning to a new level. Inspired by the article "Representation Learning by Learning to Count", we hypothesize that dividing a picture into four parts and passing them through a neural network can further improve the accuracy of MoCo. Unlike the original MoCo, this MoCo variant (Multi-scale MoCo) does not pass the augmented image directly through the encoder. Instead, Multi-scale MoCo crops and resizes the augmented images, and the resulting four parts are passed through the encoder separately and then summed (the upsampled version does not resize the input but resizes the contrastive samples instead). This cropping is applied not only to the query q but also to the key queue k; otherwise the weights of the key encoder might be damaged during the momentum update. This is discussed further in the experiments chapter, comparing the downsampled Multi-scale version with the version that downsamples both branches. Human object recognition follows a similar principle: when people see something familiar, they can still guess the object with high probability even if it is only partially visible. Multi-scale MoCo applies this concept to the pretext part of MoCo, in the hope of obtaining better feature extraction. In this thesis, there are three versions of Multi-scale MoCo: one that downsamples the input samples, one that downsamples both the input and the contrast samples, and one that upsamples the input samples; their differences are described in more detail later. The network architecture used for comparison is ResNet-50, and the evaluation dataset is STL-10. The weights obtained in the pretext task are transferred to the downstream evaluation, during which the weights of all layers except the final linear layer are frozen (these weights come from the pretext task).
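
A minimal sketch of the four-part crop-encode-sum step described above. MoCo's momentum encoder, queue, and contrastive loss are omitted, and the 128-dimensional output head is an assumption of this sketch.

```python
# Illustrative: split an augmented image into four parts, encode each,
# and sum the features into one representation.
import torch
import torch.nn.functional as F
import torchvision

encoder = torchvision.models.resnet50(num_classes=128)

def multiscale_embed(img):                     # img: (B, 3, 224, 224)
    B, C, H, W = img.shape
    parts = [img[:, :, :H//2, :W//2], img[:, :, :H//2, W//2:],
             img[:, :, H//2:, :W//2], img[:, :, H//2:, W//2:]]
    # Resize each crop back to the input resolution, encode, and sum.
    feats = sum(encoder(F.interpolate(p, size=(H, W), mode="bilinear",
                                      align_corners=False)) for p in parts)
    return F.normalize(feats, dim=1)           # summed, L2-normalized feature

print(multiscale_embed(torch.randn(2, 3, 224, 224)).shape)  # (2, 128)
```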

Self-Supervised Learning Using Siamese Networks and Binary Classifier

Dušan Mihajlov · March 2020

In this thesis, we present several approaches for training a convolutional neural network using only unlabeled data. Our self-supervised learning algorithms are based on the connection between an image patch, i.e. a zoomed crop, and its original image. Using a siamese neural network architecture, we aim to recognize whether the image patch that is input to the first network branch comes from the same image presented to the second network branch. By applying transformations to both images, and different zoom sizes at different positions, we force the network to extract high-level features using its convolutional layers. On top of our siamese architecture, a simple binary classifier measures the difference between the extracted feature maps and makes a decision. Thus, the only way the classifier can solve the task correctly is if our convolutional layers extract useful representations. These representations can then be used to solve many different tasks related to the data used for unsupervised training. As the main benchmark for all of our models, we use the STL-10 dataset, where we train a linear classifier on top of our convolutional layers with a small amount of manually labeled images, which is a widely used benchmark for unsupervised learning tasks. We also combine our idea with recent work on the same topic, the network called RotNet, which makes use of image rotations and therefore forces the network to learn rotation-dependent features from the dataset. As a result of this combination, we create a new procedure that outperforms the original RotNet.
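
A minimal sketch of the patch-vs-original siamese setup with a binary classifier on top; all layer sizes are illustrative, not the thesis' actual architecture.

```python
# Illustrative: shared backbone encodes patch and original; a binary
# classifier decides whether they come from the same image.
import torch
import torch.nn as nn

backbone = nn.Sequential(                       # shared convolutional branch
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

patch, original = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
pair = torch.cat([backbone(patch), backbone(original)], dim=1)
logit = classifier(pair)   # "same image" vs. "different image" decision
```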

Learning Object Representations by Mixing Scenes

Lukas Zbinden · May 2019

In the digital age of ever-increasing data amassment and accessibility, the demand for scalable machine learning models effective at refining the new oil is unprecedented. Unsupervised representation learning methods present a promising approach to exploit this invaluable yet unlabeled digital resource at scale. However, a majority of these approaches focus on synthetic or simplified datasets of images. What if a method could learn directly from natural Internet-scale image data? In this thesis, we propose a novel approach for unsupervised learning of object representations by mixing natural image scenes. Without any human help, our method mixes visually similar images to synthesize new realistic scenes using adversarial training. In this process the model learns to represent and understand the objects prevalent in natural image data and makes them available for downstream applications. For example, it enables the transfer of objects from one scene to another. Through qualitative experiments on complex image data we show the effectiveness of our method along with its limitations. Moreover, we benchmark our approach quantitatively against state-of-the-art works on the STL-10 dataset. Our proposed method demonstrates the potential that lies in learning representations directly from natural image data and reinforces it as a promising avenue for future research.

Representation Learning using Semantic Distances

Markus Roth · May 2019

Zero-Shot Learning using Generative Adversarial Networks

Hamed Hemati · Dec. 2018

Dimensionality Reduction via CNNs - Learning the Distance Between Images

Ioannis Glampedakis · Sept. 2018

Learning to Play Othello using Deep Reinforcement Learning and Self Play

Thomas Simon Steinmann · Sept. 2018

ABA-J Interactive Multi-modality Tissue Section-to-Volume Alignment: a Brain Atlasing Toolkit for ImageJ

Felix Meyenhofer · March 2018

Learning Visual Odometry with Recurrent Neural Networks

Adrian Wälchli · Feb. 2018

In computer vision, Visual Odometry is the problem of recovering the camera motion from a video. It is related to Structure from Motion, the problem of reconstructing 3D geometry from a collection of images. Decades of research in these areas have produced successful algorithms that are used in applications like autonomous navigation, motion capture, augmented reality and others. Despite the success of these prior works in real-world environments, their robustness is highly dependent on manual calibration and on the magnitude of noise present in the images in the form of, e.g., non-Lambertian surfaces, dynamic motion and other forms of ambiguity. This thesis explores an alternative approach to the Visual Odometry problem via Deep Learning, that is, a specific form of machine learning with artificial neural networks. It describes and focuses on the implementation of a recent work that proposes the use of Recurrent Neural Networks to learn dependencies over time due to the sequential nature of the input. Together with a convolutional neural network that extracts motion features from the input stream, the recurrent part accumulates knowledge from the past to make camera pose estimations at each point in time. An analysis of the performance of this system is carried out on real and synthetic data. The evaluation covers several ways of training the network as well as the impact and limitations of the recurrent connection for Visual Odometry.
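
A minimal sketch of this CNN-plus-RNN design (all layer sizes are illustrative assumptions, not the implemented system): consecutive frames are stacked channel-wise, a small CNN extracts motion features, and an LSTM accumulates them over time to regress a 6-DoF relative pose per step.

```python
# Illustrative recurrent visual odometry model.
import torch
import torch.nn as nn

class RecurrentVO(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                     # motion feature extractor
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.pose = nn.Linear(64, 6)   # translation (3) + rotation (3)

    def forward(self, frames):         # frames: (B, T+1, 3, H, W)
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)
        B, T = pairs.shape[:2]
        feats = self.cnn(pairs.flatten(0, 1)).view(B, T, -1)
        hidden, _ = self.rnn(feats)    # knowledge accumulated over time
        return self.pose(hidden)       # (B, T, 6) relative poses

poses = RecurrentVO()(torch.randn(2, 5, 3, 64, 64))
print(poses.shape)  # torch.Size([2, 4, 6])
```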

Crime location and timing prediction

Bernard Swart · Jan. 2018

From Cartoons to Real Images: An Approach to Unsupervised Visual Representation Learning

Simon Jenni · Feb. 2017

Automatic and Large-Scale Assessment of Fluid in Retinal OCT Volume

Nina Mujkanovic · Dec. 2016

Segmentation in 3D using Eye-Tracking Technology

Michele Wyss · July 2016

Accurate Scale Thresholding via Logarithmic Total Variation Prior

Remo Diethelm · Aug. 2014

Novel Techniques for Robust and Generalizable Machine Learning

Abdelhak Lemkhenter · Sept. 2023

Neural networks have transcended their status as powerful proofs of concept in machine learning and become a highly disruptive technology that has revolutionized many quantitative fields such as drug discovery, autonomous vehicles, and machine translation. Today, it is nearly impossible to go a single day without interacting with a neural network-powered application. From search engines to on-device photo processing, neural networks have become the go-to solution thanks to recent advances in computational hardware and an unprecedented scale of training data. Larger and less curated datasets, typically obtained through web crawling, have greatly propelled the capabilities of neural networks forward. However, this increase in scale amplifies certain challenges associated with training such models. Beyond toy or carefully curated datasets, data in the wild is plagued with biases, imbalances, and various noisy components. Given the larger size of modern neural networks, such models run the risk of learning spurious correlations that fail to generalize beyond their training data. This thesis addresses the problem of training more robust and generalizable machine learning models across a wide range of learning paradigms for medical time series and computer vision tasks. The former is a typical example of a low signal-to-noise ratio data modality with a high degree of variability between subjects and datasets. There, we tailor the training scheme to focus on robust patterns that generalize to new subjects and to ignore the noisier, subject-specific patterns. To achieve this, we first introduce a physiologically inspired unsupervised training task and then extend it by explicitly optimizing for cross-dataset generalization using meta-learning. In the context of image classification, we address the challenge of training semi-supervised models under class imbalance by designing a novel label refinement strategy with higher local sensitivity to minority class samples while preserving the global data distribution. Lastly, we introduce a new Generative Adversarial Networks training loss. Such generative models could be applied to improve the training of subsequent models in the low data regime by augmenting the dataset using generated samples. Unfortunately, GAN training relies on a delicate balance between its components, making it prone to mode collapse. Our contribution consists of defining a more principled GAN loss whose gradients incentivize the generator model to seek out missing modes in its distribution. All in all, this thesis tackles the challenge of training more robust machine learning models that can generalize beyond their training data. This necessitates the development of methods specifically tailored to handle the diverse biases and spurious correlations inherent in the data. It is important to note that achieving greater generalizability in models goes beyond simply increasing the volume of data; it requires meticulous consideration of training objectives and model architecture. By tackling these challenges, this research contributes to advancing the field of machine learning and underscores the significance of thoughtful design in obtaining more resilient and versatile models.

Automated Sleep Scoring, Deep Learning and Physician Supervision

Luigi Fiorillo · Oct. 2022

Sleep plays a crucial role in human well-being. Polysomnography is used in sleep medicine as a diagnostic tool to objectively analyze the quality of sleep. Sleep scoring is the procedure of extracting sleep cycle information from the whole-night electrophysiological signals. The scoring is done worldwide by sleep physicians according to the official American Academy of Sleep Medicine (AASM) scoring manual. In the last decades, a wide variety of deep learning based algorithms have been proposed to automate the sleep scoring task. In this thesis we study the reasons why these algorithms fail to be introduced into the daily clinical routine, with the perspective of bridging the existing gap between automatic sleep scoring models and sleep physicians. In this light, the primary step is the design of a simplified sleep scoring architecture that also provides an estimate of the model uncertainty. Besides achieving results on par with most up-to-date scoring systems, we demonstrate the efficiency of ensemble learning based algorithms, together with label smoothing techniques, in both enhancing the performance and calibrating the simplified scoring model. We introduce an uncertainty estimation procedure to identify the most challenging sleep stage predictions and to quantify the disagreement between the predictions given by the model and the annotations given by the physicians. In this thesis we also propose a novel method to integrate the inter-scorer variability into the training procedure of a sleep scoring model. We clearly show that a deep learning model is able to encode this variability, so as to better adapt to the consensus of a group of scoring physicians. We finally address the generalization ability of a deep learning based sleep scoring system, further studying its resilience to sleep complexity and to the AASM scoring rules. We find that there is no need to train the algorithm strictly following the AASM guidelines. Most importantly, using data from multiple data centers results in a better performing model compared with training on a single data cohort. The variability among different scorers and data centers needs to be taken into account, more than the variability among sleep disorders.
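
Label smoothing, one of the calibration ingredients mentioned above, takes only a few lines to sketch. The five classes correspond to the AASM sleep stages; the smoothing factor 0.1 is a hypothetical choice, and the `label_smoothing` argument requires PyTorch 1.10 or newer.

```python
# Illustrative: smoothed cross-entropy keeps a scorer from becoming
# over-confident, improving the calibration of its probabilities.
import torch
import torch.nn as nn

logits = torch.randn(8, 5)            # model outputs: 8 epochs, 5 sleep stages
targets = torch.randint(0, 5, (8,))   # physician-annotated stages

loss = nn.CrossEntropyLoss(label_smoothing=0.1)(logits, targets)
print(loss.item())
```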

Learning Representations for Controllable Image Restoration

Givi Meishvili · March 2022

Deep Convolutional Neural Networks have sparked a renaissance in all the sub-fields of computer vision. Tremendous progress has been made in the area of image restoration. The research community has pushed the boundaries of image deblurring, super-resolution, and denoising. However, given a distorted image, most existing methods typically produce a single restored output. The tasks mentioned above are inherently ill-posed, leading to an infinite number of plausible solutions. This thesis focuses on designing image restoration techniques capable of producing multiple restored results and granting users more control over the restoration process. Towards this goal, we demonstrate how one could leverage the power of unsupervised representation learning. Image restoration is vital when applied to distorted images of human faces due to their social significance. Generative Adversarial Networks enable an unprecedented level of generated facial detail combined with a smooth latent space. We leverage the power of GANs towards the goal of learning controllable neural face representations. We demonstrate how to learn an inverse mapping from image space to these latent representations, how to tune these representations towards a specific task, and finally how to manipulate latent codes in these spaces. For example, we show how GANs and their inverse mappings enable the restoration and editing of faces in the context of extreme face super-resolution and the generation of novel-view sharp videos from a single motion-blurred image of a face. This thesis also addresses more general blind super-resolution, denoising, and scratch removal problems, where blur kernels and noise levels are unknown. We resort to contrastive representation learning and first learn the latent space of degradations. We demonstrate that the learned representation allows inference of ground-truth degradation parameters and can guide the restoration process. Moreover, it enables control over the amount of deblurring and denoising in the restoration via manipulation of latent degradation features.

Learning Generalizable Visual Patterns Without Human Supervision

Simon Jenni · Oct. 2021

Owing to the existence of large labeled datasets, Deep Convolutional Neural Networks have ushered in a renaissance in computer vision. However, almost all of the visual data we generate daily - several human lives worth of it - remains unlabeled and thus out of reach of today’s dominant supervised learning paradigm. This thesis focuses on techniques that steer deep models towards learning generalizable visual patterns without human supervision. Our primary tool in this endeavor is the design of Self-Supervised Learning tasks, i.e., pretext-tasks for which labels do not involve human labor. Besides enabling the learning from large amounts of unlabeled data, we demonstrate how self-supervision can capture relevant patterns that supervised learning largely misses. For example, we design learning tasks that learn deep representations capturing shape from images, motion from video, and 3D pose features from multi-view data. Notably, these tasks’ design follows a common principle: the recognition of data transformations. The strong performance of the learned representations on downstream vision tasks such as classification, segmentation, action recognition, or pose estimation validates this pretext-task design. This thesis also explores the use of Generative Adversarial Networks (GANs) for unsupervised representation learning. Besides leveraging generative adversarial learning to define image transformations for self-supervised learning tasks, we also address training instabilities of GANs through the use of noise. While unsupervised techniques can significantly reduce the burden of supervision, in the end, we still rely on some annotated examples to fine-tune learned representations towards a target task. To improve the learning from scarce or noisy labels, we describe a supervised learning algorithm with improved generalization in these challenging settings.

Learning Interpretable Representations of Images

Attila Szabó · June 2019

Computers represent images with pixels, and each pixel contains three numbers for the red, green and blue colour values. These numbers are meaningless for humans, and they are mostly useless when used directly with classical machine learning techniques like linear classifiers. Interpretable representations are the attributes that humans understand: the colour of the hair, the viewpoint of a car or the 3D shape of the object in the scene. Many computer vision tasks can be viewed as learning interpretable representations; for example, a supervised classification algorithm directly learns to represent images with their class labels. In this work we aim to learn interpretable representations (or features) indirectly, with lower levels of supervision. This approach has the advantage of cost savings on dataset annotations and the flexibility of using the features for multiple follow-up tasks. We make contributions in three main areas: weakly supervised learning, unsupervised learning and 3D reconstruction. In the weakly supervised case we use image pairs as supervision. Each pair shares a common attribute and differs in a varying attribute. We propose a training method that learns to separate the attributes into separate feature vectors. These features are then used for attribute transfer and classification. We also show theoretical results on the ambiguities of the learning task and ways to avoid degenerate solutions. We show a method for unsupervised representation learning that separates semantically meaningful concepts. We explain how the components of our proposed method (a mixing autoencoder, a generative adversarial net and a classifier) work, and support this with ablation studies. We propose a method for learning single-image 3D reconstruction using only images; no human annotation, stereo, synthetic renderings or ground-truth depth maps are needed. We train a generative model that learns the 3D shape distribution and an encoder to reconstruct the 3D shape. For that we exploit the notion of image realism: the 3D reconstruction of the object has to look realistic when it is rendered from different random angles. We prove the efficacy of our method from first principles.

Learning Controllable Representations for Image Synthesis

Qiyang Hu · June 2019

In this thesis, our focus is learning a controllable representation and applying the learned controllable feature representation to image synthesis, video generation, and even 3D reconstruction. We propose different methods to disentangle the feature representation in a neural network and analyze the challenges in disentanglement, such as reference ambiguity and the shortcut problem, when using weak labels. We use the disentangled feature representation to transfer attributes between images, such as exchanging hairstyles between two face images. Furthermore, we study how another type of feature, the sketch, works in a neural network. A sketch can provide the shape and contour of an object, such as the silhouette of a side-view face. We leverage the silhouette constraint to improve 3D face reconstruction from 2D images. A sketch can also provide the moving direction of an object, so we investigate how one can manipulate an object to follow the trajectory provided by a user sketch. We propose a method to automatically generate video clips from a single image input, using the sketch as motion and trajectory guidance to animate the object in that image. We demonstrate the efficiency of our approaches on several synthetic and real datasets.

Beyond Supervised Representation Learning

Mehdi Noroozi · Jan. 2019

The complexity of any information processing task is highly dependent on the space in which data is represented. Unfortunately, pixel space is not appropriate for computer vision tasks such as object classification. Traditional computer vision approaches involve a multi-stage pipeline where images are first transformed into a feature space through a handcrafted function, and the solution is then computed in that feature space. The challenge with this approach is the complexity of designing handcrafted functions that extract robust features. Deep learning based approaches address this issue by end-to-end training of a neural network on some task, which lets the network discover the appropriate representation for that task automatically. It turns out that training on an image classification task with large-scale annotated datasets yields a representation transferable to other computer vision tasks. However, supervised representation learning is limited by the need for annotations. In this thesis we study self-supervised representation learning, where the goal is to alleviate these limitations by substituting the classification task with pseudo tasks for which the labels come for free. We discuss self-supervised learning by solving jigsaw puzzles, which uses context as the supervisory signal. The rationale behind this task is that the network must extract features about object parts and their spatial configurations to solve the jigsaw puzzles. We also discuss a method for representation learning that uses an artificial supervisory signal based on counting visual primitives. This supervisory signal is obtained from an equivariance relation. We use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second allows us to equate the total number of visual primitives in the tiles to that in the whole image. The most effective transfer strategy is fine-tuning, which restricts one to using the same model, or parts thereof, for both the pretext and target tasks. We discuss a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. Finally, we study the problem of multi-task representation learning. A naive approach to enhancing the representation learned by a task is to train the task jointly with other tasks that capture orthogonal attributes. Having a diverse set of auxiliary tasks imposes challenges on multi-task training from scratch. We propose a framework that allows us to combine arbitrarily different feature spaces into a single deep neural network. We reduce the auxiliary tasks to classification tasks, and consequently the multi-task learning to a multi-label classification task. Nevertheless, combining multiple representation spaces without being aware of the target task might be suboptimal. As our second contribution, we show empirically that this is indeed the case and propose to combine multiple tasks after fine-tuning on the target task.
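
The counting relation described above can be written compactly (the notation here is ours): with phi(x) the predicted number of visual primitives in image x, D a downscaling transformation, and t_1, ..., t_4 the four tiles of x,

```latex
% Scale invariance and tiling equivariance of the primitive count:
\phi\bigl(D(x)\bigr) \;=\; \phi(x) \;=\; \sum_{i=1}^{4} \phi(t_i)
```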

Motion Deblurring from a Single Image

Meiguang Jin · Dec. 2018

With the information explosion, a tremendous number of photos are captured and shared via social media every day. Technically, a photo requires a finite exposure to accumulate light from the scene. Thus, objects moving during the exposure generate motion blur in a photo. Motion blur is an image degradation that makes visual content less interpretable and is therefore often seen as a nuisance. Although motion blur can be reduced by setting a short exposure time, the resulting lack of light has to be compensated by increasing the sensor’s sensitivity, which inevitably introduces a large amount of sensor noise. This motivates the need to remove motion blur computationally. Motion deblurring is an important problem in computer vision, and it is challenging due to its ill-posed nature, which means the solution is not well defined. Mathematically, a blurry image caused by uniform motion is formed by the convolution between a blur kernel and a latent sharp image. Potentially there are infinitely many pairs of blur kernel and latent sharp image that can result in the same blurry image. Hence, some prior knowledge or regularization is required to address this problem. Even if the blur kernel is known, restoring the latent sharp image is still difficult, as high-frequency information has been removed. Although we can model the uniform motion deblurring problem mathematically, it only covers camera in-plane translational motion. In practice, motion is more complicated and can be non-uniform. Non-uniform motion blur can come from many sources: camera out-of-plane rotation, scene depth changes, object motion and so on. Thus, it is more challenging to remove non-uniform motion blur. In this thesis, our focus is motion blur removal. We aim to address four challenging motion deblurring problems. We start from the noise-blind image deblurring scenario, where the blur kernel is known but the noise level is unknown, and introduce an efficient and robust solution based on a Bayesian framework using a smooth generalization of the 0-1 loss. Then we study the blind uniform motion deblurring scenario, where both the blur kernel and the latent sharp image are unknown, and exploit the relative scale ambiguity between the latent sharp image and the blur kernel. Moreover, we study the face deblurring problem and introduce a novel deep learning network architecture to solve it. We also address the general motion deblurring problem, where in particular we aim at recovering a sequence of 7 frames, each depicting some instantaneous motion of the objects in the scene.
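
The uniform blur model referred to above can be written as (with b the blurry image, u the latent sharp image, k the blur kernel, and n the sensor noise):

```latex
b \;=\; k \ast u \;+\; n
```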

Towards a Novel Paradigm in Blind Deconvolution: From Natural to Cartooned Image Statistics

Daniele Perrone · July 2015

In this thesis we study the blind deconvolution problem. Blind deconvolution consists in the estimation of a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Recently it has been shown that a class of blind deconvolution formulations and image priors has the no-blur solution as its global minimum. Despite this shortcoming, algorithms based on these formulations and priors can successfully solve blind deconvolution. In this thesis we show that a suitable initialization can exploit the non-convexity of the problem and yield the desired solution. Based on these conclusions, we propose a novel “vanilla” algorithm stripped of any enhancement typically used in the literature. Our algorithm, despite its simplicity, is able to compete with the top performers on several datasets. We have also investigated a remarkable behavior of a 1998 algorithm whose formulation has the no-blur solution as its global minimum: even when initialized at the no-blur solution, it converges to the correct solution. We show that this behavior is caused by an apparently insignificant implementation strategy that makes the algorithm no longer minimize the original cost functional. We also demonstrate that this strategy improves the results of our “vanilla” algorithm. Finally, we present a study of image priors for blind deconvolution. We provide experimental evidence supporting the recent belief that a good image prior is one that leads to a good blur estimate rather than being a good natural image statistical model. By focusing on blur estimation alone, we show that good blur estimates can be obtained even when using images quite different from the true sharp image. This allows using image priors, such as those leading to “cartooned” images, that avoid the no-blur solution. By using an image prior that produces “cartooned” images we achieve state-of-the-art results on different publicly available datasets. We therefore suggest a shift of paradigm in blind deconvolution: from modeling natural image statistics to modeling cartooned image statistics.
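
A representative member of the class of formulations discussed above is the total-variation-regularized energy (the exact notation here is ours): lambda > 0 trades off data fidelity against regularity, and the constraints keep k a valid blur kernel.

```latex
\min_{u,\,k}\; \|k \ast u - b\|_2^2 \;+\; \lambda\,\|\nabla u\|_1
\quad \text{s.t.} \quad k \geq 0,\;\; \textstyle\sum_i k_i = 1
```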

New Perspectives on Uncalibrated Photometric Stereo

Thoma Papadhimitri · June 2014

This thesis investigates the problem of 3D reconstruction of a scene from 2D images. In particular, we focus on photometric stereo, a technique that computes the 3D geometry from at least three images taken from the same viewpoint under different illumination conditions. When the illumination is unknown (uncalibrated photometric stereo) the problem is ambiguous: different combinations of geometry and illumination can generate the same images. First, we solve the ambiguity by exploiting Lambertian reflectance maxima. These are points on curved surfaces where the normals are parallel to the light direction. We then propose a solution that can be computed in closed form and thus very efficiently. Our algorithm is also very robust and always yields the same estimate regardless of the initial ambiguity. We validate our method on real-world experiments and achieve state-of-the-art results. In this thesis we also solve, for the first time, the uncalibrated photometric stereo problem under the perspective projection model. We show that, unlike in the orthographic case, one can uniquely reconstruct the normals of the object and the lights given only the input images and the camera calibration (focal length and image center). We also propose a very efficient algorithm, which we validate on synthetic and real-world experiments, and we show that the proposed technique is a generalization of the orthographic case. Finally, we investigate the uncalibrated photometric stereo problem in the case where the lights are distributed near the scene. For this case we propose an alternating minimization technique that converges quickly and overcomes the limitations of prior work that assumes distant illumination. We show experimentally that adopting a near-light model for real-world scenes yields very accurate reconstructions.
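
The Lambertian image formation model underlying this analysis can be stated compactly (the notation here is ours): the intensity I_ij of surface point i under light j is

```latex
I_{ij} \;=\; \rho_i \,\langle n_i,\, l_j \rangle
```

with albedo rho_i, unit normal n_i, and light direction l_j (unknown in the uncalibrated setting). At a Lambertian reflectance maximum, n_i is parallel to l_j, which is the property exploited to resolve the ambiguity.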

Master's theses in Computer Vision

If you want to do your master's thesis project within the field of Computer Vision, there are several options:

  • Internal Master's thesis at the Computer Vision Lab (CVL) Internal master's theses are normally connected to a research project, and explore a specific research idea. Some project suggestions are listed here: CVL Master's thesis proposal repository. If you already have an idea for a project, you may also contact one of the CVL examiners directly. See the list of examiners below.
  • External Master's thesis at a company We maintain a list of research projects defined by external partners. These project proposals are found here: External Master's thesis project proposals. Future external projects may ** also be posted here: LiU exjobbsportal. If you do not find an interesting project on the pages above, you may also contact companies/organizations directly. Often they have plans for projects, or are able to create a new one for you. A list of suitable companies can be found here: Computer Vision oriented companies/organisations.

If you have tried the possibilities above and still not found any interesting project, you can also directly contact one of the examiners at CVL, see list of examiners below.

Assignment of examiner and internal supervisor

Examiners for a Master's thesis in computer vision:

  • Per-Erik Forssén (CVL Master's thesis coordinator)
  • Michael Felsberg
  • Maria Magnusson
  • Mårten Wadenbäck
  • Bastian Wandt
  • Jörgen Ahlberg
  • Amanda Berg
  • Leif Haglund
  • Lasse Alfredsson

Assignment of examiner is made after you contact the coordinator or an examiner (you will not necessarily get the one you contact). When contacting an examiner, you should provide the following information:

  • Your name and personal number (we need to check your qualifications in Ladok)
  • Name of the company and email to a contact person (for external Master's projects)
  • Whether it is a master's thesis or bachelor's thesis
  • When you want to start
  • A project description (e.g. the ad from the company).
  • Suggested course code for the project, corresponding to your main field of study (Sv:huvudområde) (e.g. TQET33, TQDT33, TQME33, TQMD33, TQTM33).

Thesis presentation in Swedish

  • English to Swedish translations for Computer Vision (in Swedish).
  • Swedish Optical Terminology (in Swedish).
  • Statistics terminology in Swedish (in Swedish).

Scientific publication of a master's thesis work

It is not uncommon that master's theses in computer vision are of such quality that they can be turned into scientific publications. This usually requires a substantial amount of extra work, but can be a good achievement to put in your CV. If you are interested in submitting your work for peer review at a conference or in a journal, check with your examiner or university supervisor for hints on how to frame the work and where to submit it. If you feel that your university supervisor has helped you substantially, also consider inviting him/her for co-authorship.

Other information sources

  • University regulations regarding Master's thesis projects are defined in Studieinfo.
  • There are also department-specific rules and practical information.
  • Information about master's theses from LiTH. (will soon be moved)
  • An attendance form for master's thesis presentations (Framläggningsblankett).
  • Publishing your student thesis page at LiU Electronic Press. We recommend that the defence is announced one week in advance on the vision-seminars mailing list. You can subscribe to the list by sending an email to: vision-seminars.isy AT lists.liu.se.
  • Session 1 (Klas Nordberg)
  • Session 2 (Marcus Wallenberg)
  • Automatic grammar checking tools for the English language are highly recommended. One such tool is Grammarly.
  • Help with writing in English can also be had from Academic English Support at IKK.
  • ** : This is contingent on this site being fixed to: (i) allow a proposal to be categorized under more than one "Main field of study", as all Computer Vision projects fall under 2-4 "Main fields of study"; and (ii) allow easier inclusion of PDF attachments. Right now a proposal has to first be added, then removed, then found, then the attachment added, and then the proposal added again.


UCF Program Page | UCF Catalog Page | UCF Today Article

MSCV Brochure

The Master of Science in Computer Vision (MSCV) Program aims to provide technical skills and domain knowledge to future professionals who seek to acquire expertise in Computer Vision and its related areas. This involves proficiency in acquiring, processing, analyzing, and understanding images, videos, 3D data, and other types of high-dimensional data of the real world. The program consists of a total of 30 credit hours. The fast-growing interests and investments in Artificial Intelligence (AI) in the United States and around the world have to be powered by a well-prepared workforce. This program contributes to meeting the need created by the United States’ shortage of AI personnel.


MS Degree Requirements

The MS degree comprises 30 credit hours at the graduate level. Students must take 6 required courses and select the remaining 4 courses from the electives list, for a program total of 10 classes. No thesis is required, but the independent study course provides an independent learning experience.

  • CAP 6411 – Computer Vision Systems
  • CAP 6412 – Advanced Computer Vision
  • CAP 6419 – 3D Computer Vision
  • CAP 6908 – Independent Study (this course must be registered with a Computer Vision faculty either from CRCV or a faculty in a very closely related program)

Electives, List A (choose any four):

  • CAP 5516 Medical Image Computing
  • STA 6107 Statistical Computing 2
  • CAP 6908 Independent Study TWO (taken in Computer Vision, with someone from CRCV or very closely related)
  • CAP 5115 Virtual Reality Engineering
  • CAP 6671 Intelligent Systems: Robots, Agents, and Humans
  • CAP 6614 Current Topics in Machine Learning
  • CAP 5619 Artificial Intelligence for FinTech
  • CAP 6121 3D User Interfaces for Games and Virtual Reality
  • CAP 6640 Computer Understanding of Natural Language
  • STA 6238 Logistic Regression
  • STA 5703 Data Mining Methodology 1
  • EEL 5820 Image Processing
  • MAP 6197 Mathematical Introduction to Deep Learning
  • One from {COT 5405, CDA 5106, or other CS graduate class}
  • CAP 5636 Advanced Artificial Intelligence
  • EEL 5825 Machine Learning and Pattern Recognition
  • STA 5104 Advanced Computer Processing of Statistical Data
  • STA 6106 Statistical Computing I
  • EEL 5669 Introduction to Robotics and Autonomous Vehicles
List B:

  • CAP 5516 - Medical Image Computing
  • CAP 6411 - Computer Vision Systems
  • CAP 6412 - Advanced Computer Vision
  • CAP 6419 - 3D Computer Vision
  • CAP 6908 - Independent Study ONE (taken in Computer Vision, with someone from CRCV or very closely related)
  • Any unused class from List A
  • EEL 5829 Image Processing
  • STA 5703 Data Mining Methodology I

Firm RULE: At least half of the 10 program classes must be at 6000-level or higher.


Plan of Study (POS)

The Plan of Study (POS), sometimes referred to as the Program of Study, is an agreement between the student, the program, and the University that lists the coursework taken to satisfy the requirements for completing the degree. The POS is flexible and unique to each student. However, it must meet university, college, and department rules for the minimum number of hours, etc. (see Degree Requirements, above).

All graduate students must have a Plan of Study (POS) on file, approved by the advisor and graduate coordinator, by the completion of 9 credit hours after entering the program. This is mandatory! The College of Graduate Studies automatically places a “hold” on future registration for non-compliance. The default advisor for non-thesis MS students is the Graduate Coordinator.

Deadlines for submission of the POS are October 15 for Fall semester and March 15 for Spring semester.

The POS can, and usually will, be revised later to reflect changes in the courses actually taken, but it is crucial that a POS be on file, signed by the student and the faculty advisor, and approved by the Graduate Program Coordinator. Any variation from the POS must be approved by the Graduate Program Coordinator and then immediately reflected in an updated POS.

Synopsis/Time Line

  • International Student Application Deadline – Advertised as February 23 but we will accept applications until March 1, 2021
  • Domestic Student Application Deadline – July 1, 2021
  • Admission into the MS program
  • File an initial Plan of Study (by the 9th credit hour)
  • Complete coursework
  • Prepare and submit Portfolio

Graduate Contact Info

Advising and Approval: Dr. Niels Lobo, Phone: (407) 823-2873, Email: [email protected]


Photo: © NYU Depth Dataset V2/Nathan Silberman/Ge Gao

If you are a student at UHH interested in a thesis with our group, you can contact Dr. Christian Wilms (B.Sc. theses) or Dr. Ehsan Yaghoubi (M.Sc. theses). Below is a list with selected titles of B.Sc. and M.Sc. theses completed in our group to orient you towards potential topics. You can also see our recent papers for possible directions for a thesis.

Selection of titles of completed Master theses:

  • A Deep Learning Approach for Top-down Attention with Attribute Preference
  • Salient object detection with AttentionMask
  • 3D Segmentation in the Context of Inscriptions
  • Active Visual Object Search Using Reinforcement Learning
  • Saliency-Guided Sign Language Recognition
  • Object Discovery in 3D Scenes via Shape Analysis using Adapted PCLV
  • Learning Efficient Deep Feature Representations for Indoor Visual Positioning

Selection of titles of completed Bachelor theses:

  • Weakly Supervised Object Detection in RoboCup Scenarios
  • Localization of Aircraft Tail Units
  • IoU Predictions for Segmentation Mask Proposals
  • Classification of Malignant Melanomas using Conditioned ConvNets
  • Segmentation of Rail Images for Automated Maintenance
  • Image Classification of Aircraft Types
  • Segmentation of numerical weather prediction data for characterization of atmospheric airmasses
  • Object Detection in Remote Sensing Image Data Using AttentionMask
  • TileAttention: Detection of Very Small Objects
  • Superpixel Pooling for Instance Segmentation

Necessary requirements: You should have some prior knowledge of computer vision before starting a thesis. For a B.Sc. thesis, you should have at least attended the lecture "Einführung in die Bildverarbeitung" or the lab course "Praktikum Computer Vision", or have equivalent knowledge. For a M.Sc. thesis, you should have attended the lectures "Computer Vision 1" and "Computer Vision 2", and ideally also the master project, or have equivalent knowledge. Prior knowledge of machine learning is also helpful.



Computer Vision Laboratory (CVL)


Master's thesis project proposals

Internal projects

  • A list of internal CVL projects can be found in the CVL GIT (open to all LiU students).
  • If you are interested in doing a research related project, but do not see a suitable one listed here, feel free to contact one of the researchers at the lab. We normally have several more opportunities for internal master thesis projects related to research projects. These can often be adapted to the particular interests of the student.

External projects

  • NB! Please first check the list of new external projects.
  • [2022-11-18] Nordic Evolution: Digital Guides for Visually Impaired Athletes
  • [2022-10-10] Zenseact: Multiple computer vision master theses proposals. E.g. Learning-based Road Estimation
  • [2022-09-06] FOI: Neuromorphic Imaging
  • [2022-02-21] FOI: Night Vision with Machine-Learning-Based Image Fusion
  • [2021-10-14] Scania: Estimation of Scene Depth for Perception in Autonomous Heavy-Duty Vehicles
  • [2021-10-14] Scania: Visual-Inertial Odometry (VIO)
  • [2021-10-14] Scania: Really, really fast tracking in image space
  • [2021-10-14] Scania: Single Stage Instance Segmentation in Autonomous Heavy-Duty Vehicles
  • [2021-10-14] Scania: Trajectory and intention prediction of annotated tracked objects
  • [2021-10-14] Scania: Efficient algorithm development for GPUs
  • [2021-09-09] Viscando AB Gothenburg: Projects in deep learning, signal processing and modelling for traffic and autonomous vehicle safety
  • [2021-02-12] NFC: Photometric Stereo on Tool Marks
  • [2020-11-13] FOI Linköping: Deep Learning for 3D-Imaging LiDAR
  • [2020-11-06] Arkus AI: Apply Machine Learning and Computer Vision in Genetic Diagnostics
  • [2020-10-28] Ericsson: 3D reconstruction for mobile devices
  • [2020-10-07] Veoneer: Static and Dynamic Windshield Distortion Modeling
  • [2020-01-09] IEI: Facial Analysis in Thermal Images for Pilot Stress Recognition

More information about Master's thesis projects in Computer Vision.



Computer Vision Lab

Thesis projects

We constantly offer interesting and challenging semester and master projects for motivated students at our lab. Below you can find a list of topics that are currently being offered. Not all projects may be listed; if you are generally interested, do not hesitate to contact one of the supervisors, who can also give you an overview of other offered projects. Also, don't hesitate to contact us to propose your own ideas for projects; they are more than welcome.

Visual Intelligence and Systems (group Yu): Information on research projects

Biomedical Image Computing (group Konukoglu): Information on research projects


Master of Science in Computer Vision (MSCV)

Program at a glance


Detect, Recognize and Track Objects and Events Using the Latest Computer Vision Techniques

The Master of Computer Vision program provides you with the technical skills and domain knowledge needed to succeed in this fast-growing industry. This involves acquiring, processing, analyzing and understanding images, videos, 3D data and other types of high-dimensional data of the real world employing the latest machine learning techniques.

Currently, UCF is the only public university in the U.S. offering a Master of Computer Vision degree. The program builds upon our very successful research program in computer vision, which ranks among the top 10 in the nation. Here, you’ll be able to take what you learn in the classroom and apply it to current research upon which future computer vision industries can be built. Plus, with local and national high-tech partners including Lockheed Martin, Elbit Systems, L3Harris, DRS, Accenture and SRI, you can experience the industry first hand through internships and networking opportunities.

As a computer vision engineer, you can help change how we examine the world and solve problems. Our computer vision graduates go on to use modern technology to benefit society, whether that’s right here at home in Central Florida or across the globe.


Course Overview

Computer Vision

Learn about image formation, binary vision, region growing and edge detection, shape representation, dynamic scene analysis, texture, stereo and range images, and knowledge representation.

Machine Learning

Explore the origin/evaluation of machine intelligence; machine learning concepts and their applications in problem solving, planning, and “expert systems”; and the symbolic roles of humans and computers.

Medical Image Computing

Gain the foundation necessary for understanding, visualizing and quantifying medical images with computational methods. Topics include NeuroImaging: fMRI, DTI, MRI, Connectome, Basics of Radiological Image Modalities and their clinical use, and Medical Image Segmentation.

The world is producing more visual data than ever before, so the demand and applications for computer vision are expanding at a rapid pace: for instance, in self-driving cars, medical imaging, safety, security and national defense.” — Niels da Vitoria Lobo, Computer Science Associate Professor, UCF

Computer Vision Skills You’ll Learn

  • Write programs to conduct and perform analysis of visual data.
  • Design and implement new algorithms for recognition, segmentation, indexing, tracking and editing.
  • Perform data acquisition for extremely large and dynamic visual sources.
  • Present and communicate expertise in a clear and concise manner accessible to the general public.

Career Opportunities

  • Research Engineer
  • Computer Vision Specialist
  • Machine Learning Engineer
  • Robotics Engineer
  • Medical Imaging Specialist




The Master of Computer Vision Program (MSCV) aims to provide technical skills and domain knowledge to future professionals in acquiring, processing, analyzing, and understanding images, videos, 3D data, and other types of high-dimensional data of the real world. The fast-growing interests and investments in Artificial Intelligence (AI) have to be powered by a well-prepared workforce. This program meets the need created by the United States' shortage of AI personnel.

The curriculum for this degree program includes 6 required classes (18 credit hours) which form the backbone of graduate study for the field.

The remaining 12 credit hours can be selected from the list of elective courses. Electives outside of the provided list require approval from the student's adviser and program coordinator.

Total Credit Hours Required: 30 Credit Hours Minimum beyond the Bachelor's Degree

Program Prerequisites

An undergraduate degree in Computer Science is desirable but not required. Applicants without a strong undergraduate background in Computer Science must demonstrate an understanding of the material covered in the following upper-division undergraduate courses:

  • EEL 4768C Computer Architecture
  • COP 4020 Programming Languages I
  • COP 4600 Operating Systems
  • COT 4210 Discrete Computational Structures

Degree Requirements

Required Courses

  • CAP5415 - Computer Vision (3)
  • CAP6411 - Computer Vision Systems (3)
  • CAP6412 - Advanced Computer Vision (3)
  • CAP6419 - 3D Computer Vision (3)
  • CAP5516 - Medical Image Computing (3)
  • CAP5610 - Machine Learning (3)

Elective Courses

  • Earn at least 12 credit hours from the following types of courses. All students are required to complete 12 credit hours of electives, selected after consultation with the student's adviser. At least half of a student's credit hours must be at the 6000 level. Approval may be granted for no more than 6 credit hours of electives to be taken outside of Computer Science, and such approval must occur prior to taking any classes outside of the four listed below:
  • CAP 5908 Independent Study
  • CAP 6908 Independent Study
  • COT 6505 - Computational Methods/Analysis
  • STA 6106 - Statistical Computing

Grand Total Credits: 30

Financial Information

Graduate students may receive financial assistance through fellowships, assistantships, tuition support, or loans. For more information, see the College of Graduate Studies Funding website, which describes the types of financial assistance available at UCF and provides general guidance in planning your graduate finances. The Financial Information section of the Graduate Catalog is another key resource.

Fellowship Information

Fellowships are awarded based on academic merit to highly qualified students. They are paid to students through the Office of Student Financial Assistance, based on instructions provided by the College of Graduate Studies. Fellowships are given to support a student's graduate study and do not have a work obligation. For more information, see UCF Graduate Fellowships, which includes descriptions of university fellowships and what you should do to be considered for a fellowship.

Equipment Fee

Students in the Computer Vision MS program pay a $34 equipment fee each semester that they are enrolled. Part-time students pay $17 per semester.


pengsongyou/msc-thesis

High Quality Shape from a RGB-D Camera Using Photometric Stereo

Master thesis done in the Computer Vision Group of the Technical University of Munich. Supervisors: Dr. Yvain Queau and Prof. Daniel Cremers.

If you use anything related to this thesis, please cite:

Digital Commons @ University of South Florida


Computer Science and Engineering Theses and Dissertations

Theses/Dissertations from 2023

Refining the Machine Learning Pipeline for US-based Public Transit Systems , Jennifer Adorno

Insect Classification and Explainability from Image Data via Deep Learning Techniques , Tanvir Hossain Bhuiyan

Brain-Inspired Spatio-Temporal Learning with Application to Robotics , Thiago André Ferreira Medeiros

Evaluating Methods for Improving DNN Robustness Against Adversarial Attacks , Laureano Griffin

Analyzing Multi-Robot Leader-Follower Formations in Obstacle-Laden Environments , Zachary J. Hinnen

Secure Lightweight Cryptographic Hardware Constructions for Deeply Embedded Systems , Jasmin Kaur

A Psychometric Analysis of Natural Language Inference Using Transformer Language Models , Antonio Laverghetta Jr.

Graph Analysis on Social Networks , Shen Lu

Deep Learning-based Automatic Stereology for High- and Low-magnification Images , Hunter Morera

Deciphering Trends and Tactics: Data-driven Techniques for Forecasting Information Spread and Detecting Coordinated Campaigns in Social Media , Kin Wai Ng Lugo

Automated Approaches to Enable Innovative Civic Applications from Citizen Generated Imagery , Hye Seon Yi

Theses/Dissertations from 2022

Towards High Performing and Reliable Deep Convolutional Neural Network Models for Typically Limited Medical Imaging Datasets , Kaoutar Ben Ahmed

Task Progress Assessment and Monitoring Using Self-Supervised Learning , Sainath Reddy Bobbala

Towards More Task-Generalized and Explainable AI Through Psychometrics , Alec Braynen

A Multiple Input Multiple Output Framework for the Automatic Optical Fractionator-based Cell Counting in Z-Stacks Using Deep Learning , Palak Dave

On the Reliability of Wearable Sensors for Assessing Movement Disorder-Related Gait Quality and Imbalance: A Case Study of Multiple Sclerosis , Steven Díaz Hernández

Securing Critical Cyber Infrastructures and Functionalities via Machine Learning Empowered Strategies , Tao Hou

Social Media Time Series Forecasting and User-Level Activity Prediction with Gradient Boosting, Deep Learning, and Data Augmentation , Fred Mubang

A Study of Deep Learning Silhouette Extractors for Gait Recognition , Sneha Oladhri

Analyzing Decision-making in Robot Soccer for Attacking Behaviors , Justin Rodney

Generative Spatio-Temporal and Multimodal Analysis of Neonatal Pain , Md Sirajus Salekin

Secure Hardware Constructions for Fault Detection of Lattice-based Post-quantum Cryptosystems , Ausmita Sarker

Adaptive Multi-scale Place Cell Representations and Replay for Spatial Navigation and Learning in Autonomous Robots , Pablo Scleidorovich

Predicting the Number of Objects in a Robotic Grasp , Utkarsh Tamrakar

Humanoid Robot Motion Control for Ramps and Stairs , Tommy Truong

Preventing Variadic Function Attacks Through Argument Width Counting , Brennan Ward

Theses/Dissertations from 2021

Knowledge Extraction and Inference Based on Visual Understanding of Cooking Contents , Ahmad Babaeian Jelodar

Efficient Post-Quantum and Compact Cryptographic Constructions for the Internet of Things , Rouzbeh Behnia

Efficient Hardware Constructions for Error Detection of Post-Quantum Cryptographic Schemes , Alvaro Cintas Canto

Using Hyper-Dimensional Spanning Trees to Improve Structure Preservation During Dimensionality Reduction , Curtis Thomas Davis

Design, Deployment, and Validation of Computer Vision Techniques for Societal Scale Applications , Arup Kanti Dey

AffectiveTDA: Using Topological Data Analysis to Improve Analysis and Explainability in Affective Computing , Hamza Elhamdadi

Automatic Detection of Vehicles in Satellite Images for Economic Monitoring , Cole Hill

Analysis of Contextual Emotions Using Multimodal Data , Saurabh Hinduja

Data-driven Studies on Social Networks: Privacy and Simulation , Yasanka Sameera Horawalavithana

Automated Identification of Stages in Gonotrophic Cycle of Mosquitoes Using Computer Vision Techniques , Sherzod Kariev

Exploring the Use of Neural Transformers for Psycholinguistics , Antonio Laverghetta Jr.

Secure VLSI Hardware Design Against Intellectual Property (IP) Theft and Cryptographic Vulnerabilities , Matthew Dean Lewandowski

Turkic Interlingua: A Case Study of Machine Translation in Low-resource Languages , Jamshidbek Mirzakhalov

Automated Wound Segmentation and Dimension Measurement Using RGB-D Image , Chih-Yun Pai

Constructing Frameworks for Task-Optimized Visualizations , Ghulam Jilani Abdul Rahim Quadri

Trilateration-Based Localization in Known Environments with Object Detection , Valeria M. Salas Pacheco

Recognizing Patterns from Vital Signs Using Spectrograms , Sidharth Srivatsav Sribhashyam

Recognizing Emotion in the Wild Using Multimodal Data , Shivam Srivastava

A Modular Framework for Multi-Rotor Unmanned Aerial Vehicles for Military Operations , Dante Tezza

Human-centered Cybersecurity Research — Anthropological Findings from Two Longitudinal Studies , Anwesh Tuladhar

Learning State-Dependent Sensor Measurement Models To Improve Robot Localization Accuracy , Troi André Williams

Human-centric Cybersecurity Research: From Trapping the Bad Guys to Helping the Good Ones , Armin Ziaie Tabari

Theses/Dissertations from 2020

Classifying Emotions with EEG and Peripheral Physiological Data Using 1D Convolutional Long Short-Term Memory Neural Network , Rupal Agarwal

Keyless Anti-Jamming Communication via Randomized DSSS , Ahmad Alagil

Active Deep Learning Method to Automate Unbiased Stereology Cell Counting , Saeed Alahmari

Composition of Atomic-Obligation Security Policies , Yan Cao Albright

Action Recognition Using the Motion Taxonomy , Maxat Alibayev

Sentiment Analysis in Peer Review , Zachariah J. Beasley

Spatial Heterogeneity Utilization in CT Images for Lung Nodule Classification , Dmitrii Cherezov

Feature Selection Via Random Subsets Of Uncorrelated Features , Long Kim Dang

Unifying Security Policy Enforcement: Theory and Practice , Shamaria Engram

PsiDB: A Framework for Batched Query Processing and Optimization , Mehrad Eslami

Composition of Atomic-Obligation Security Policies , Danielle Ferguson

Algorithms To Profile Driver Behavior From Zero-permission Embedded Sensors , Bharti Goel

The Efficiency and Accuracy of YOLO for Neonate Face Detection in the Clinical Setting , Jacqueline Hausmann

Beyond the Hype: Challenges of Neural Networks as Applied to Social Networks , Anthony Hernandez

Privacy-Preserving and Functional Information Systems , Thang Hoang

Managing Off-Grid Power Use for Solar Fueled Residences with Smart Appliances, Prices-to-Devices and IoT , Donnelle L. January

Novel Bit-Sliced In-Memory Computing Based VLSI Architecture for Fast Sobel Edge Detection in IoT Edge Devices , Rajeev Joshi

Edge Computing for Deep Learning-Based Distributed Real-time Object Detection on IoT Constrained Platforms at Low Frame Rate , Lakshmikavya Kalyanam

Establishing Topological Data Analysis: A Comparison of Visualization Techniques , Tanmay J. Kotha

Machine Learning for the Internet of Things: Applications, Implementation, and Security , Vishalini Laguduva Ramnath

System Support of Concurrent Database Query Processing on a GPU , Hao Li

Deep Learning Predictive Modeling with Data Challenges (Small, Big, or Imbalanced) , Renhao Liu

Countermeasures Against Various Network Attacks Using Machine Learning Methods , Yi Li

Towards Safe Power Oversubscription and Energy Efficiency of Data Centers , Sulav Malla

Design of Support Measures for Counting Frequent Patterns in Graphs , Jinghan Meng

Automating the Classification of Mosquito Specimens Using Image Processing Techniques , Mona Minakshi

Models of Secure Software Enforcement and Development , Hernan M. Palombo

Functional Object-Oriented Network: A Knowledge Representation for Service Robotics , David Andrés Paulius Ramos

Lung Nodule Malignancy Prediction from Computed Tomography Images Using Deep Learning , Rahul Paul

Algorithms and Framework for Computing 2-body Statistics on Graphics Processing Units , Napath Pitaksirianan

Efficient Viewshed Computation Algorithms On GPUs and CPUs , Faisal F. Qarah

Relational Joins on GPUs for In-Memory Database Query Processing , Ran Rui

Micro-architectural Countermeasures for Control Flow and Misspeculation Based Software Attacks , Love Kumar Sah

Efficient Forward-Secure and Compact Signatures for the Internet of Things (IoT) , Efe Ulas Akay Seyitoglu

Detecting Symptoms of Chronic Obstructive Pulmonary Disease and Congestive Heart Failure via Cough and Wheezing Sounds Using Smart-Phones and Machine Learning , Anthony Windmon

Toward Culturally Relevant Emotion Detection Using Physiological Signals , Khadija Zanna

Theses/Dissertations from 2019

Beyond Labels and Captions: Contextualizing Grounded Semantics for Explainable Visual Interpretation , Sathyanarayanan Narasimhan Aakur

Empirical Analysis of a Cybersecurity Scoring System , Jaleel Ahmed

Phenomena of Social Dynamics in Online Games , Essa Alhazmi

A Machine Learning Approach to Predicting Community Engagement on Social Media During Disasters , Adel Alshehri

Interactive Fitness Domains in Competitive Coevolutionary Algorithm , ATM Golam Bari

Measuring Influence Across Social Media Platforms: Empirical Analysis Using Symbolic Transfer Entropy , Abhishek Bhattacharjee

A Communication-Centric Framework for Post-Silicon System-on-chip Integration Debug , Yuting Cao

Authentication and SQL-Injection Prevention Techniques in Web Applications , Cagri Cetin

Multimodal Emotion Recognition Using 3D Facial Landmarks, Action Units, and Physiological Data , Diego Fabiano

Robotic Motion Generation by Using Spatial-Temporal Patterns from Human Demonstrations , Yongqiang Huang

A GPU-Based Framework for Parallel Spatial Indexing and Query Processing , Zhila Nouri Lewis

A Flexible, Natural Deduction, Automated Reasoner for Quick Deployment of Non-Classical Logic , Trisha Mukhopadhyay

An Efficient Run-time CFI Check for Embedded Processors to Detect and Prevent Control Flow Based Attacks , Srivarsha Polnati

Force Feedback and Intelligent Workspace Selection for Legged Locomotion Over Uneven Terrain , John Rippetoe

Detecting Digitally Forged Faces in Online Videos , Neilesh Sambhu

Malicious Manipulation in Service-Oriented Network, Software, and Mobile Systems: Threats and Defenses , Dakun Shen



Bachelor and Master Theses

We continuously offer proposals for bachelor and master thesis topics. Please note that:

  • You need to be a student of RWTH Aachen University. Again: we do NOT offer theses to non-RWTH students.
  • You need prior experience in computer vision, e.g. having taken part in any of our classes.
  • You should have solid programming experience. Prior experience with deep learning frameworks is a plus.
  • We do NOT offer any internships. Please do NOT contact us for an internship.

We accept Master/Bachelor thesis applications from students enrolled at RWTH Aachen University (use your @rwth-aachen.de address when sending your email) via this email address: [email protected]. This email address is NOT for PhD applications or for scheduling depth oral colloquia.

In your application, we expect to see:

  • transcript of grades
  • (optional) research statement
  • (optional) a code repository (GitHub, GitLab, etc.) where you can show us your coding skills

Do NOT send us attachments - send a link to the shared document instead (use a hosting service like GigaMove, Google Drive, Dropbox, Sciebo etc.).

Our group cannot host internships by students from outside RWTH. Please refrain from sending us internship applications. Internship application emails will not be read and will not be answered.

Below is a list of currently advertised theses from which you can optionally choose. However, you can always contact us regarding your own research proposal.


University of Idaho Library

Theses and Dissertations Collection

Open Access Repository of University of Idaho Graduate ETD

Description

An open access repository of theses and dissertations from University of Idaho graduate students. The collection includes the complete electronic theses and dissertations submitted since approximately 2014, as well as select digitized copies of earlier documents dating back to 1910.

Top Subjects

computer science, ecology, electrical engineering, natural resource management, mechanical engineering, plant sciences, water resources management, education, civil engineering, environmental science, animal sciences, forestry, engineering, materials science, agriculture, biology, chemical engineering, wildlife management

Top Programs

natural resources; education; mechanical engineering; computer science; electrical and computer engineering; plant, soil and entomological sciences; civil engineering; environmental science; English; water resources; movement & leisure sciences; animal and veterinary science; anthropology; chemical and materials science engineering; geology; curriculum & instruction; bioinformatics & computational biology; chemistry

The collection spans 1910 to 2023 and currently contains 1893 PDFs, 395 records, and 147 embargoed ETDs.


Anastasiia Makarova

PhD student.

I work on sequential decision-making and representation learning for structured data, such as point clouds or graphs.

Research

The main question that motivates my research is: how can we actively learn new complex environments? My interests span multiple topics around sequential decision-making, Bayesian optimization, representation learning, and generative modeling. I am excited by designing robust algorithms with quantified uncertainty and well-understood limitations (both theoretical and empirical) that are applicable in society-critical areas.

Previously, I was a doctoral student in the Learning and Adaptive Systems Group at ETH Zurich, supervised by Andreas Krause. My dissertation focuses on Bayesian optimization, proposing methods for risk-averse and computationally effective decision-making. Prior to that, I received master's degrees in computer science and math from MIPT and Skoltech and a bachelor's degree in math and physics from MIPT. During my master's, I visited Columbia University in NYC and worked on deep learning for weakly-supervised semantic segmentation, supervised by Victor Lempitsky and Hod Lipson.

I did research internships at Google DeepMind (RL and RLHF for LLMs), Yandex (deep learning for precipitation nowcasting), and Amazon Web Services (Bayesian optimization for AutoML).

  • Dr. Sc., 2023, ETH Zurich
  • MSc in Computer Science, 2017, Skolkovo Institute of Science and Technology (Skoltech), Moscow
  • MSc in Applied Mathematics (with honors), 2017, Moscow Institute of Physics and Technology (Phystech), Moscow
  • BSc in Applied Math and Physics (with honors), 2015, Moscow Institute of Physics and Technology (Phystech), Moscow

  • October 2023: Safe risk-averse BO for controller tuning ( paper ) accepted to IEEE Robotics and Automation Letters.
  • September 2023: I successfully defended my doctoral thesis titled “Bayesian Optimization in the wild: risk-averse and computationally-effective decision-making”! Grateful to my advisor Andreas Krause, and to my brilliant collaborators!
  • August 2023: New preprint on arxiv: Adversarial Causal BO .
  • July 2023: New preprint on arxiv: Safe Risk-averse Bayesian Optimization for Controller Tuning .
  • May 2023: Attending ICLR in Rwanda and giving a talk about Model-based Causal BO ( slides and talk )
  • April 2023: Joining Google DeepMind, Brain team, as a research scientist intern; going to dive into RL from human feedback for LLMs
  • January 2023: Model-based Causal BO accepted to ICLR as a featured spotlight (top 25% of accepted papers)
  • August 2022: Talk at Google TechTalk (Google BayesOpt Speaker Series) , slides and video are available.
  • August 2022: Talk at AWS ML Science Tech Presentations series, slides are available.
  • Summer 2022: Our paper got the best paper award at AutoML Conf! I gave a contributed talk, recording , paper , intuitive blog post .


Research areas: Probabilistic Machine Learning, Bayesian Optimization, Deep Learning, Tensors, Computer Vision

Advisor: Prof. Andreas Krause

Worked with Hod Lipson in collaboration with Victor Lempitsky.

  • Worked on deep learning-based methods for weakly-supervised semantic segmentation
  • Developed an efficient architecture for image-based plant disease detection

Teaching

  • Probabilistic Artificial Intelligence at ETH, 2020 (a 600-student course with 30 TAs)
  • Probabilistic Artificial Intelligence at ETH, 2018, 2019
  • Introduction to Machine Learning at ETH, 2018, 2019
  • Advanced Topics in Machine Learning at ETH, 2017, 2018
  • Data Mining: Learning from Large Data Sets at ETH, 2017
  • Signal and Image Processing by Prof. Stamatios Lefkimmiatis at Skoltech, 2017

Supervised theses

I (co-)supervised MSc theses of several bright students, some resulting in research publications:

  • Alicja Chaszczewicz: Following Gradients to Calibrate Equilibrium Reaching Simulators, jointly with Max Paulus, ETH Zurich, December 2020 - May 2021.
  • Ankit Dhall: Learning Representations for Images With Hierarchical Labels ( paper @DiffCVML'20 ), jointly with Octavian Ganea and Dario Pavllo, ETH Zurich, March - September 2019.
  • Erik Daxberger: Mixed-Variable Bayesian Optimization ( paper IJCAI'20 ), jointly with Matteo Turchetta, ETH Zurich, October 2018 - April 2019.
  • Stefan Beyeler: Multi-fidelity Batch Bayesian Optimization for the Calibration of Transport System Simulations, jointly with Matteo Turchetta, ETH Zurich, October 2017 - April 2018.

