Deep learning has achieved remarkable success in applications such as image classification, image segmentation, and natural language processing. However, traditional deep learning models assume that the input data can be represented as vectors in Euclidean space. This assumption limits their applicability to structured data that do not live in Euclidean space, such as symmetric positive definite (SPD) matrices and distribution functions. When dealing with such data, discarding the geometric information and applying traditional deep learning models often leads to suboptimal performance. In this thesis, we bridge the gap between existing deep learning models and structured data of this form by incorporating the inherent geometric structure of the data. First, we study how to model fiber bundles in brain images when each voxel along a trajectory is manifold-valued, and show that doing so enables statistical analysis with improved power. We then describe a method for mapping one manifold to another, which allows the generation of Orientation Distribution Function (ODF) images with richer angular information from a given Diffusion Tensor Imaging (DTI) scan while preserving meaningful group-wise differences. We also study a problem setting that enables efficient tracking of the covariance matrix through a neural network, which is useful when training a certifiably robust network. Finally, we discuss the use of distance correlation to measure the dependence between two random vectors; this approach offers multiple benefits, including robustness against transfer attacks, disentanglement in generative models, and assessment of the similarity between two neural networks. In summary, our modifications to traditional deep learning models allow for effective use of manifold information, resulting in improved speed, efficiency, and robustness across a range of applications.
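As a point of reference for the final contribution, the empirical distance correlation of Székely et al. can be computed from pairwise distance matrices. The sketch below is illustrative only (the function names and test data are not from the thesis) and shows the standard double-centering construction for paired samples.

```python
import numpy as np

def _centered_dists(z: np.ndarray) -> np.ndarray:
    # Pairwise Euclidean distances between rows of z, then double-centering:
    # subtract row means and column means, add back the grand mean.
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    # x: (n, p) and y: (n, q) paired samples; dimensions p and q may differ.
    a, b = _centered_dists(x), _centered_dists(y)
    dcov2 = (a * b).mean()                      # squared distance covariance
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return 0.0 if denom == 0 else float(np.sqrt(dcov2 / denom))
```

Unlike Pearson correlation, the resulting value lies in [0, 1] and vanishes (in the population limit) only under independence, which is what makes it usable as a dependence measure between the activations of two networks or two latent factors.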