Text representation learning is a fundamental task in natural language processing (NLP). It aims to capture the semantic, syntactic, and contextual information present in textual data and encode it as numerical representations that machine learning models can use effectively. Effective text representations are crucial for downstream tasks such as text classification, information retrieval, machine translation, and question answering. This thesis presents several methodologies for improving sentence-level text representations with language models.

Motivated by the observation that subcategories under the same parent category are semantically closer to one another, we propose variations of a label-aware supervised contrastive loss (LA-SCL) that incorporate label hierarchy information into the learned representations.

Two studies improve text representations with personalized features, which tailor representations to specific users by incorporating individual information such as user demographics or contextual factors. We propose a novel model architecture, the Personalized Transformer Memory (PersonalTM), to effectively incorporate personalized features into a transformer-based encoder-decoder model for information retrieval tasks. We further propose an incremental learning algorithm that keeps the model continuously up to date by incorporating streaming personalized data.

We also study data augmentation with GPT-2, generating utterances that are similar to the original data in both semantics and sentiment.

Finally, this thesis addresses several practical tasks in social media analysis. We employ several language models to assess favorability and hesitancy towards COVID-19 in tweets and to analyze their ideological perspectives, and we apply machine learning models to analyze the dynamics of sexual violence and gender justice discourses across four social media platforms.
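For reference, the standard supervised contrastive loss (Khosla et al., 2020) over a batch I is

\[
\mathcal{L}_{\mathrm{SCL}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)},
\]

where z_i is the normalized sentence embedding of example i, \tau is a temperature, P(i) denotes the in-batch positives sharing i's label, and A(i) all other in-batch examples. One natural label-aware variant, shown here only as an illustrative assumption rather than the thesis's exact formulation, scales each pair by a weight w_{ij} that grows with the similarity of labels y_i and y_j in the hierarchy (e.g., inversely proportional to their tree distance):

\[
\mathcal{L}_{\mathrm{LA\text{-}SCL}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(w_{ip}\, z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(w_{ia}\, z_i \cdot z_a / \tau)}.
\]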
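As a minimal sketch of the GPT-2 augmentation setup, assuming a Hugging Face GPT-2 checkpoint (ideally fine-tuned on the target utterances); the "<sentiment>" prefix is a hypothetical control-token convention, not necessarily the method used in the thesis:

    # Minimal sketch: sampling augmented utterances from GPT-2.
    # Assumes a GPT-2 checkpoint, ideally fine-tuned on the target domain;
    # the "<sentiment>" prefix is a hypothetical control-token convention.
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def augment(seed_utterance: str, sentiment: str, n: int = 3) -> list[str]:
        """Generate n candidate utterances conditioned on a seed utterance."""
        prompt = f"<{sentiment}> {seed_utterance}"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            do_sample=True,          # nucleus sampling for diverse outputs
            top_p=0.92,
            temperature=0.8,
            max_new_tokens=40,
            num_return_sequences=n,
            pad_token_id=tokenizer.eos_token_id,
        )
        return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

    print(augment("I really enjoyed the quick checkout process.", "positive"))

In practice the sampled candidates would still need to be filtered for semantic and sentiment consistency with the seed utterance, for example with an auxiliary classifier.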