Designing Efficient Deep Neural Networks: Topological Optimization, Quantization and Multi-Task Learning
dc.contributor.author
Kanakis, Menelaos
dc.contributor.supervisor
Van Gool, Luc
dc.contributor.supervisor
Chli, Margarita
dc.contributor.supervisor
Bilen, Hakan
dc.contributor.supervisor
Chhatkuli, Ajad
dc.date.accessioned
2023-04-03T10:01:48Z
dc.date.available
2023-04-03T06:12:24Z
dc.date.available
2023-04-03T10:01:48Z
dc.date.issued
2023
dc.identifier.uri
http://hdl.handle.net/20.500.11850/606226
dc.identifier.doi
10.3929/ethz-b-000606226
dc.description.abstract
The design of more complex and powerful deep neural networks has consistently advanced the state of the art across a wide range of tasks. In the pursuit of increased performance, however, computational efficiency is often severely compromised, as reflected in the significant growth in the number of parameters, the required floating-point operations, and latency. While the great advances of deep neural networks increase interest in their use in downstream applications such as robotics and augmented reality, these applications require computationally efficient alternatives. This thesis focuses on the design of efficient deep neural networks: specifically, improving performance under computational constraints, or decreasing complexity with minor performance degradation.
Firstly, we present a novel convolutional operation reparameterization and its application to multi-task learning. By reparameterizing the convolutional operations, we can achieve comparable performance to single-task models at a fraction of the total number of parameters.
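A minimal sketch of the underlying idea, assuming one illustrative factorization (a shared filter bank combined with a small task-specific linear modulator; this particular parameterization is an assumption for illustration, not the thesis's exact formulation): each additional task only adds the modulator's parameters, so total parameter count grows far more slowly than with independent single-task models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared convolutional filter bank: C_out filters over (C_in, k, k) inputs,
# flattened to a (C_out, C_in * k * k) matrix for clarity.
C_out, C_in, k, num_tasks = 64, 64, 3, 5
W_shared = rng.standard_normal((C_out, C_in * k * k))

# Task-specific modulators: a small (C_out, C_out) matrix per task
# (hypothetical parameterization, chosen here only to illustrate the idea).
modulators = [rng.standard_normal((C_out, C_out)) for _ in range(num_tasks)]

def task_filters(task_id):
    # Effective task-specific filters are reparameterized on the fly:
    # W_task = M_task @ W_shared, so the shared bank is never duplicated.
    return modulators[task_id] @ W_shared

# Parameter count: reparameterized model vs. independent single-task models.
params_shared = W_shared.size + sum(m.size for m in modulators)
params_single = num_tasks * W_shared.size
print(params_shared, params_single)  # 57344 vs 184320
```

Under this sketch, five tasks cost roughly a third of the parameters of five independent models, and the gap widens as tasks are added, since each new task contributes only a modulator.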
Secondly, we conduct an extensive study to evaluate the efficacy of self-supervised tasks as auxiliary tasks in a multi-task learning framework. We find that jointly training a target task with self-supervised tasks can improve performance and robustness, commonly outperforms labeled auxiliary tasks, and requires no modifications to the architecture used at deployment.
Thirdly, we propose a novel transformer layer for efficient single-object visual tracking. We demonstrate that the performance of real-time single-object trackers can be significantly improved without compromising latency, while consistently outperforming alternative transformer layers.
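For context, transformer-based trackers typically let search-region features attend to template (target) features via cross-attention. The sketch below shows a generic single-head cross-attention step, not the thesis's specific layer; the token counts and feature width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Search-region tokens (queries) attend to template tokens (keys/values).
# Sizes are illustrative, not taken from the thesis.
d, n_search, n_template = 32, 100, 36
search   = rng.standard_normal((n_search, d))    # search-region features
template = rng.standard_normal((n_template, d))  # tracked-target features

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = search @ Wq, template @ Wk, template @ Wv

attn = softmax(Q @ K.T / np.sqrt(d))  # (n_search, n_template) weights
out = attn @ V                        # template-conditioned search features
```

The cost of this step scales with `n_search * n_template * d`, which is why efficient variants of the attention layer matter for real-time tracking budgets.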
Finally, we investigate the efficacy of adapting interest point detection and description neural networks for use on computationally limited platforms. We find that mixed-precision quantization of network components, coupled with a binary descriptor normalization layer, yields only minor performance degradation while reducing the size of sparse 3D maps and improving matching and inference speed, each by at least an order of magnitude.
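To make the storage and matching gains concrete, here is a small sketch, assuming sign-thresholding as the binarization rule and Hamming-distance matching (both common choices for binary descriptors; the descriptor dimensionality is an illustrative assumption, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Float descriptors as produced by a hypothetical description network:
# 256 dimensions at float32, i.e. 1024 bytes each.
n, dim = 500, 256
desc = rng.standard_normal((n, dim)).astype(np.float32)

# Binarize by thresholding each dimension at zero, then pack the bits,
# shrinking every descriptor to dim / 8 = 32 bytes.
bits = (desc > 0).astype(np.uint8)
packed = np.packbits(bits, axis=1)  # shape (n, 32)

def hamming(a, b):
    # Matching becomes XOR + popcount instead of floating-point L2 distance.
    return np.unpackbits(a ^ b).sum()

size_float = desc[0].nbytes    # 1024 bytes per descriptor
size_binary = packed[0].nbytes  # 32 bytes: a 32x storage reduction
```

The 32x per-descriptor shrinkage carries over directly to sparse 3D maps that store one descriptor per landmark, and XOR/popcount matching maps to single machine instructions on most CPUs.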
To conclude, this thesis focuses on the design of deep neural networks under computational limitations. With increasing interest in and demand for efficient deep networks, we envision that the presented work will pave the way towards even more efficient methods, bridging the gap with better-performing alternatives.
en_US
dc.format
application/pdf
en_US
dc.language.iso
en
en_US
dc.publisher
ETH Zurich
en_US
dc.rights.uri
http://rightsstatements.org/page/InC-NC/1.0/
dc.title
Designing Efficient Deep Neural Networks: Topological Optimization, Quantization and Multi-Task Learning
en_US
dc.type
Doctoral Thesis
dc.rights.license
In Copyright - Non-Commercial Use Permitted
dc.date.published
2023-04-03
ethz.size
150 p.
en_US
ethz.code.ddc
DDC - DDC::0 - Computer science, information & general works::004 - Data processing, computer science
en_US
ethz.identifier.diss
29092
en_US
ethz.publication.place
Zurich
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02652 - Institut für Bildverarbeitung / Computer Vision Laboratory::03514 - Van Gool, Luc (emeritus) / Van Gool, Luc (emeritus)
en_US
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02652 - Institut für Bildverarbeitung / Computer Vision Laboratory::03514 - Van Gool, Luc (emeritus) / Van Gool, Luc (emeritus)
en_US
ethz.date.deposited
2023-04-03T06:12:25Z
ethz.source
FORM
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2023-04-03T10:01:50Z
ethz.rosetta.lastUpdated
2024-02-02T21:29:19Z
ethz.rosetta.exportRequired
true
ethz.rosetta.versionExported
true
ethz.COinS
ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Designing%20Efficient%20Deep%20Neural%20Networks:%20Topological%20Optimization,%20Quantization%20and%20Multi-Task%20Learning&rft.date=2023&rft.au=Kanakis,%20Menelaos&rft.genre=unknown&rft.btitle=Designing%20Efficient%20Deep%20Neural%20Networks:%20Topological%20Optimization,%20Quantization%20and%20Multi-Task%20Learning