Description Synthetic media refers to digital media content (images, video, audio, text, etc.) that is generated, edited, or made possible in the first place by artificial intelligence. The subfield of "synthetic audio" covers the generation of speech (e.g., the imitation of real speakers) and the conversion of text to speech, but also AI-generated music and individual sounds. The thesis should focus on the last of these areas and examine, both conceptually and practically, the use of AI to generate sounds, e.g., for music production or for use in software.
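To give a flavor of the practical part, here is a minimal sketch of AI sound generation. It assumes the Hugging Face transformers library and uses the facebook/musicgen-small checkpoint purely as an example; neither is prescribed by the topic.

import scipy.io.wavfile
from transformers import pipeline

# Text-to-audio with a small pretrained generation model (example checkpoint).
synth = pipeline("text-to-audio", model="facebook/musicgen-small")
out = synth("a short metallic percussion hit, dry, no reverb")

# The pipeline returns the raw waveform and its sampling rate.
scipy.io.wavfile.write("hit.wav", rate=out["sampling_rate"], data=out["audio"])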
Status Available
Supervisor Frédéric Thiesse (email)
Description The combination of predictive algorithms and data-driven decision techniques to solve decision problems under uncertainty has received increasing attention in the recent literature. However, optimization solvers are often unexplained black boxes whose solutions are not accessible to users (see the toy illustration below). This lack of interpretability can hinder the adoption of data-driven solutions, as practitioners may not understand the solutions or trust the recommended decisions. In this thesis, you will review the current state of this fairly new field of research, using scheduling problems as an example. Furthermore, your work will include a practical part in which you will apply the "Schedule Explainer" developed by Čyras et al. (2020).
If you are interested in this topic:
1. First inquire whether this topic is still available.
2. If the topic is available and assigned to you, you must prepare an "exposé". In it, you should summarize the results of your initial research, your resulting research question(s), and your plan for how you will answer those question(s). The exposé should also include a rough outline of the thesis you plan to write.
3. Please arrange an initial meeting only AFTER you have prepared the exposé.
4. The thesis will not be registered until the exposé has been prepared and approved. Please note that the thesis must be registered before the end of the semester in which you were granted supervision, and plan a buffer for several rounds of revisions of the exposé, if necessary.
Related work:
- Čyras, K., Letsios, D., Misener, R., & Toni, F. (2019, July). Argumentation for explainable scheduling. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 2752-2759).
- Čyras, K., Karamlou, A., Lee, M., Letsios, D., Misener, R., & Toni, F. (2020). AI-assisted Schedule Explainer for Nurse Rostering. In 19th International Conference on Autonomous Agents and MultiAgent Systems - Demo Track (pp. 2101-2103). Auckland: IFAAMAS.
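As a toy illustration of the interpretability gap the thesis addresses: the job data below is invented and PuLP is merely one example solver, not the tooling prescribed by the topic. The solver returns an assignment, but no rationale for it.

import pulp

# Toy makespan-minimization instance (invented data): assign jobs to machines.
jobs = {"J1": 3, "J2": 2, "J3": 4}   # processing times
machines = ["M1", "M2"]

prob = pulp.LpProblem("min_makespan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (jobs, machines), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                                          # objective: minimize makespan
for j in jobs:                                            # each job runs on exactly one machine
    prob += pulp.lpSum(x[j][m] for m in machines) == 1
for m in machines:                                        # each machine's load bounds the makespan
    prob += pulp.lpSum(jobs[j] * x[j][m] for j in jobs) <= makespan
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for j in jobs:
    for m in machines:
        if x[j][m].value() == 1:
            print(f"{j} -> {m}")  # the "why" behind each choice stays opaque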
Status Available
Supervisor Janine Rottmann (email)
Description Despite the promising potential of AI in healthcare, there are also concerns about the reliability of these approaches. To ensure the safety and reliability of predictions, it is critical to assess the uncertainty of AI systems' predictions. Techniques such as Bayesian methods and fuzzy systems provide uncertainty estimates and help in understanding the uncertainty or variability associated with predictions (a minimal sketch of one such estimate follows below). In your thesis, you will study these methods in more detail.
If you are interested in this topic:
1. First inquire whether this topic is still available.
2. If the topic is available and assigned to you, you must prepare an "exposé". In it, you should summarize the results of your initial research, your resulting research question(s), and your plan for how you will answer those question(s). The exposé should also include a rough outline of the thesis you plan to write.
3. Please arrange an initial meeting only AFTER you have prepared the exposé.
4. The thesis will not be registered until the exposé has been prepared and approved. Please note that the thesis must be registered before the end of the semester in which you were granted supervision, and plan a buffer for several rounds of revisions of the exposé, if necessary.
Literature:
- Seoni, S., Jahmunah, V., Salvi, M., Barua, P. D., Molinari, F., & Acharya, U. R. (2023). Application of uncertainty quantification to artificial intelligence in healthcare: A review of last decade (2013–2023). Computers in Biology and Medicine, 107441.
- Psaros, A. F., Meng, X., Zou, Z., Guo, L., & Karniadakis, G. E. (2023). Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons. Journal of Computational Physics, 477, 111902.
- Begoli, E., Bhattacharya, T., & Kusnezov, D. (2019). The need for uncertainty quantification in machine-assisted medical decision making. Nature Machine Intelligence, 1(1), 20-23.
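Below is a minimal sketch of one common (non-Bayesian) uncertainty proxy: disagreement within an ensemble. The dataset and model choices are illustrative only and not prescribed by the topic.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The spread of per-tree predictions is a simple proxy for the model's
# (epistemic) uncertainty about each test case.
per_tree = np.stack([tree.predict(X_te) for tree in clf.estimators_])
uncertainty = per_tree.std(axis=0)

# Flag the least reliable predictions for human review, as a safety-critical
# clinical workflow would require.
print("indices needing review:", np.argsort(uncertainty)[-5:])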
Status Reserved
Supervisor Janine Rottmann (email)
Description Existing studies on the combination of predictive and prescriptive analytics take predictions as fixed and then make choices based on these fixed predictions; for example, predictions serve as parameters in an optimization model. Recent studies call for a fusion of predictive modeling and prescriptive analysis. The growing interest in embedding predictive models in MIPs has led to the development of toolboxes such as JANOS (Bergman et al., 2022), OMLT (Ceccon et al., 2022), and gurobi-machinelearning. First, provide a comprehensive overview of integrated predictive and prescriptive analyses in the current literature and research existing toolboxes. Subsequently, your task is to analyze the functionalities and limitations of the identified toolboxes. You should then evaluate the efficiency of these toolboxes using an optimization problem of your choice as an example (you can refer to the examples given in the toolbox documentation; a minimal sketch also follows below). Conclude your work with a summary and discuss any limitations and open research questions. Knowledge of Python is a plus!
If you are interested in this topic:
1. First inquire whether this topic is still available.
2. If the topic is available and assigned to you, you must prepare an "exposé". In it, you should summarize the results of your initial research, your resulting research question(s), and your plan for how you will answer those question(s). The exposé should also include a rough outline of the thesis you plan to write.
3. Please arrange an initial meeting only AFTER you have prepared the exposé.
4. The thesis will not be registered until the exposé has been prepared and approved. Please note that the thesis must be registered before the end of the semester in which you were granted supervision, and plan a buffer for several rounds of revisions of the exposé, if necessary.
Literature and libraries:
- Bergman, D., Huang, T., Brooks, P., Lodi, A., & Raghunathan, A. U. (2022). Janos: an integrated predictive and prescriptive modeling framework. INFORMS Journal on Computing, 34(2), 807-816.
- Ceccon, F., Jalving, J., Haddad, J., Thebelt, A., Tsay, C., Laird, C. D., & Misener, R. (2022). OMLT: Optimization & machine learning toolkit. Journal of Machine Learning Research, 23(349), 1-8.
- https://github.com/Gurobi/gurobi-machinelearning
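To make the embedding idea concrete, here is a minimal sketch using gurobi-machinelearning; the pricing scenario, synthetic data, and parameter choices are illustrative assumptions, not part of the topic.

import numpy as np
import gurobipy as gp
from sklearn.linear_model import LinearRegression
from gurobi_ml import add_predictor_constr

# Train a toy demand predictor: demand as a (noisy) linear function of price.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(100, 1))           # price
y = 50.0 - 4.0 * X[:, 0] + rng.normal(0, 1, 100)    # demand
reg = LinearRegression().fit(X, y)

# Embed the trained predictor as constraints of a Gurobi model.
m = gp.Model("price_optimization")
price = m.addMVar(1, lb=1.0, ub=10.0, name="price")
demand = m.addMVar(1, lb=-gp.GRB.INFINITY, name="demand")
add_predictor_constr(m, reg, price, demand)   # enforces demand == reg(price)

# Maximize revenue = price * demand (a bilinear, hence nonconvex, objective).
m.setObjective(price[0] * demand[0], gp.GRB.MAXIMIZE)
m.Params.NonConvex = 2
m.optimize()
print(f"optimal price: {price.X[0]:.2f}, predicted demand: {demand.X[0]:.2f}")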
Status Available
Supervisor Janine Rottmann (email)
Description Real-world data has errors, biases, and missingness. However, most ML is done on sanitized data, not real-world data. That includes healthcare applications. Your task is to explore the potential of Large Language Models (LLMs) to create fairer, more reality-centric health records.
Status Reserved
Supervisor Janine Rottmann (email)
Description To protect human health and safety when handling potentially hazardous chemicals in the workplace or in everyday life, so-called safety data sheets are issued. These provide information on, for example, a substance's composition or its effects on humans. However, these safety data sheets are not always created reliably and consistently, which makes a consistency check necessary. GeSi Software GmbH offers a web-based, semi-automatic solution for such a consistency check. An important step in the consistency check is first extracting the information from a safety data sheet (provided as a PDF document) so that the check can then be carried out. Even though safety data sheets are partially standardized, some of their sections exhibit a very high variance, which makes fully automatic extraction challenging. Recently, artificial intelligence (AI) methods have shown their potential in the field of text processing and may be a valuable tool for this task. The focus of this master's thesis is therefore on investigating and evaluating the potential for improvement by extending the existing (rule-based) software solution with AI. Your task would be to evaluate the applicability and capabilities of large language models for this task. Experiments based on few-shot and many-shot prompting are planned (see the sketch below). This topic is supervised by Prof. Thiesse's chair in cooperation with GeSi Software GmbH. If you are interested, please contact Prof. Thiesse or Shreeraj Joglekar. The exact research questions will be specified in consultation with GeSi Software GmbH, and further details on the topic (available data, expectations, etc.) will be discussed at the first meeting. Working language: German or English. Note: The safety data sheets provided for this master's thesis (training/test data) are issued in German, however, and the focus must be on these.
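As a first impression of the planned experiments, here is a minimal few-shot extraction sketch. Everything in it is a placeholder assumption: the OpenAI client, the model name, the extracted field, and the SDS snippets are illustrative and not assets of the GeSi project.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot examples teaching the model to extract hazard (H) statements.
FEW_SHOT = """Extract the hazard statements (H-Sätze) as a JSON list.

Text: "... H225 Flüssigkeit und Dampf leicht entzündbar. H319 ..."
Answer: ["H225", "H319"]

Text: "... H302 Gesundheitsschädlich bei Verschlucken. ..."
Answer: ["H302"]
"""

def extract_h_statements(sds_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You extract fields from German safety data sheets."},
            {"role": "user", "content": FEW_SHOT + f'\nText: "{sds_text}"\nAnswer:'},
        ],
        temperature=0,
    )
    return response.choices[0].message.content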
Status Available
Supervisor Shreeraj Joglekar (email)
Description Although machine learning (ML) research has reached a level of maturity where applications can be implemented in real-world business settings, many ML projects never leave the pilot stage. Furthermore, core ML research focuses on optimizing model development rather than exploring the difficulties of model deployment. That is why most ML lifecycle frameworks (e.g., CRISP-DM, TDSP, etc.) barely deal with model deployment, even though the added value of ML projects is achieved in this stage in practice. In order to improve the adoption of ML applications in industrial companies, it is necessary to understand the challenges that companies face when trying to implement ML into their processes. Therefore, the aim of this thesis is to thoroughly investigate a real-world case (i.e., an industrial company; note: contact will be made by the supervisor) and identify how companies operationalize ML projects and which key pain points they face when adopting ML applications. The study will explore challenges related to data integration, scalability, privacy, model maintenance, and adaptation in industrial settings. Moreover, the thesis should establish a new ML lifecycle framework based on the findings from the case. In your thesis you will follow a single case study approach. This methodology examines several sources of information, such as secondary material provided by the company as well as semi-structured interviews with different stakeholders throughout the company's organizational structure.
If you are interested in this topic, please follow these steps:
1. Ask whether this topic is still available.
2. If the topic is available and assigned to you, you must prepare an "exposé". In it, you should summarize the results of your initial research, your resulting research question(s), and your plan for how you will answer those question(s). The exposé should also include a rough outline of the thesis you plan to write.
3. Please arrange an initial meeting only AFTER you have prepared the exposé.
4. The thesis will not be registered until the exposé has been prepared and approved. Please note that the thesis must be registered before the end of the semester in which you were granted supervision, and plan a buffer for several rounds of revisions of the exposé, if necessary.
Literature:
- Paleyes, A., Urma, R.-G., & Lawrence, N. D. (2023). Challenges in Deploying Machine Learning: A Survey of Case Studies. ACM Comput. Surv., 55(6).
- Baier, L., Jöhren, F., & Seebacher, S. (2019). Challenges in the Deployment and Operation of Machine Learning in Practice. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, June 8-14, 2019. AIS.
- Amershi, S., Begel, A., Bird, C., DeLine, R., Gall, H., Kamar, E., Nagappan, N., Nushi, B., & Zimmermann, T. (2019). Software Engineering for Machine Learning: A Case Study. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP) (pp. 291-300). IEEE.
Status Available
Supervisor Manuel Zall (email)
Description Deep learning (DL), which includes the construction and training of various neural network architectures, has been remarkably effective in image processing tasks in recent years. One such task is image classification. Yet training deep learning systems from scratch is computationally very demanding and frequently not feasible in data-limited areas. To address these challenges, transfer learning techniques have emerged, which entail applying and fine-tuning deep learning architectures that have been pre-trained on a source dataset to the intended target dataset(s). According to recent research, transfer learning has proven effective on a number of image classification tasks, allowing for a reduction in computation requirements and an improvement in generalizability. Nonetheless, the impact of various source datasets on model performance with respect to the target task has not received enough attention. In other words, it remains unclear whether and how model performance (on target data) differs based on the choice of pre-training dataset. This thesis thus aims to empirically investigate how the choice of source dataset influences model performance on a target dataset. To address this, you will examine pre-trained network architectures and conduct various computational experiments to identify the most useful source dataset(s) for an image classification task (see the sketch below for the basic fine-tuning setup). This mainly involves the assessment of model accuracy and test-set generalizability. The experiments may involve domain-specific as well as non-domain-specific source datasets. An exemplary target dataset is EuroSAT (land use / land cover data), encompassing a variety of satellite images divided into 10 classes, including forest, agricultural, and industrial, among several others. If you are interested in this topic, please directly contact the associated supervisor to schedule the initial discussion. Note: The EuroSAT dataset represents a well-suited exemplary image classification dataset for this thesis. However, you are welcome to select the image classification dataset of your choice and/or domain.
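The basic experimental building block might look as follows. This is a minimal sketch that assumes torchvision is available and uses an ImageNet-pretrained ResNet-18 plus the EuroSAT dataset purely as examples; hyperparameters are illustrative.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# ImageNet-pretrained backbone as the source model.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                       # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 10)    # new head for 10 EuroSAT classes

tfm = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.EuroSAT(root="data", download=True, transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=64, shuffle=True)

# Fine-tune only the new classification head for one demo epoch.
opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
model.train()
for xb, yb in loader:
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()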
Status Available
Supervisor Shreeraj Joglekar (email)
Description Over recent years, Machine Learning (ML) has demonstrated its effectiveness in tackling various tasks across an array of disciplines. However, successfully deploying real-world ML applications involves many processes besides building the best-suited models. These usually involve components such as data collection and management, model development and training, model hosting and monitoring, and the prediction/inference service. Owing to challenges concerning one or more of these components, some valuable ML models may never be adopted into real-world systems at all. In this thesis, you will explore the requirements and difficulties of ML implementation that are unique to the manufacturing industry. The aim here is to generate insights that support a comprehensive understanding of existing difficulties in ML deployment with respect to manufacturing applications. You are welcome to address a specific sub-sector, process, or application area within manufacturing. This investigation should be conducted via a systematic literature survey or a standard literature review. If you are interested in pursuing this topic, please contact the associated supervisor.
Related literature:
- Paleyes, A., Urma, R.-G., & Lawrence, N. D. (2023). Challenges in Deploying Machine Learning: A Survey of Case Studies. ACM Comput. Surv., 55(6).
Status Available
Supervisor Shreeraj Joglekar (email)
Description Students are also welcome to submit ideas or proposals for their own thesis topics. We will be glad to hear about your own topic suggestions and encourage them. Own topics will be supervised as long as they fall within the general scope of our chair and the following areas:
Areas with a technical focus:
1. Applied machine learning: application areas can be within engineering, marketing analytics, finance, and agriculture
2. Uncertainty quantification (UQ) and analysis in machine learning systems
3. Explainable AI
Areas with a focus on economics and business:
1. Information and platform economics
2. Digital transformation and its economic value
3. Data-driven business models
If you would like to discuss your own proposals, please contact Shreeraj Joglekar.
Status Available
Supervisor Shreeraj Joglekar (email)
Description Students are also welcome to submit ideas or proposals for their own thesis topics. We will be glad to hear about your own topic suggestions and encourage them. Own topics will be supervised as long as they fall within the general scope of our chair and the corresponding research interests of the research assistant. In my case (Manuel Zall), topics will be considered that fall into the following categories/fields: machine learning applications, MLOps, and behavioral research at the intersection of AI/ML and management. If you are interested in discussing your own proposal in these areas, please contact Manuel Zall.
Status Available
Supervisor Manuel Zall (email)
Description Generative AI has gained significant traction in business applications, primarily for tasks such as text summarization, text refinement, and information retrieval. While these applications enhance productivity, the broader potential of Generative AI to support high-level human cognitive capabilities remains underexplored. Beyond automation, Generative AI could play a crucial role in fostering creativity, decision-making, and innovation. This thesis aims to explore how Generative AI can be leveraged to design innovative Human-AI systems that go beyond traditional business applications. It will contribute to the understanding of how Generative AI can be utilized to augment human cognition in business and innovation settings. By defining evaluation metrics and exploring new application areas, the research aims to provide a foundation for future developments in innovative Human-AI collaboration.
Status Reserved
Supervisor Tim Thorwart-Gumpert (email)
Description The ongoing development of generative artificial intelligence (AI) opens up new possibilities for personalized healthcare. By analyzing large amounts of health data, AI models can provide individual therapy recommendations, predict disease trajectories, and create personalized training and nutrition plans. This thesis examines the potential of generative AI for individualized patient care as well as the associated challenges regarding data protection, ethical questions, and regulatory requirements. A particular focus lies on current research results and practical use cases, for example in digital diagnostics or preventive medicine.
Status Available
Supervisor Tim Thorwart-Gumpert (email)
Description Predictive AI models, such as machine learning-based forecasting systems, are widely used in fields like finance, healthcare, and supply chain management. However, their black-box nature often leads to trust issues among users, decision-makers, and stakeholders. Without clear explanations of how predictions are generated, many individuals struggle to fully rely on or interpret AI-driven insights. Generative AI (GenAI), with its natural language capabilities, offers a promising approach to bridging the gap between complex predictive models and human understanding. By generating intuitive, human-like explanations of AI predictions, GenAI could improve transparency, foster user confidence, and encourage more effective adoption of predictive AI in decision-making contexts (a minimal sketch of the idea follows below).
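One way to make the idea concrete is to pair a predictive model's per-case feature contributions with a prompt for a GenAI model to verbalize. This is a hypothetical sketch: the dataset, the linear-contribution heuristic, and the prompt wording are illustrative assumptions, not the topic's prescribed method.

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
reg = LinearRegression().fit(X, y)

case = X.iloc[[0]]            # one patient record
pred = reg.predict(case)[0]

# For a linear model, x_i * coef_i is the contribution of feature i
# to this single prediction (intercept omitted for brevity).
contrib = case.values[0] * reg.coef_
top = sorted(zip(X.columns, contrib), key=lambda t: -abs(t[1]))[:3]

prompt = (
    f"The model predicts a disease-progression score of {pred:.0f}. "
    "The three largest contributions are: "
    + ", ".join(f"{name} ({value:+.1f})" for name, value in top)
    + ". Explain this prediction to a clinician in two plain sentences."
)
print(prompt)  # this prompt would then be sent to a GenAI model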
Status Reserved
Supervisor Tim Thorwart-Gumpert (email)
Description Artificial Intelligence (AI) systems are increasingly used in critical decision-making domains, such as healthcare, finance, and disaster response. However, human trust in AI is neither static nor guaranteed: trust can fluctuate based on past experiences, situational factors, and the level of transparency provided by the AI system. A key challenge is ensuring that AI maintains an appropriate level of trust, adapting dynamically to user feedback while avoiding over-reliance or skepticism.
Status Available
Supervisor Tim Thorwart-Gumpert (email)
Description The algorithms of social platforms such as TikTok are currently geared primarily toward short-term user needs such as attention and engagement. This short-term orientation may hinder long-term personal goals, such as skill development or the pursuit of health-related goals. This thesis examines how algorithms can be designed to help users achieve their long-term goals while striking a balance between immediate desires (short-term) and long-term needs (long-term). In addition, it raises the question of how such algorithms can ensure neutrality and preserve user autonomy without steering users in a particular direction. One focus is on the analysis of algorithmic design principles that take into account the goals of both platform operators and users.
Status Available
Supervisor Tim Thorwart-Gumpert (email)
Description Motivation: Transparency is often cited as a critical factor in trust formation with AI systems, yet there is no consensus on which types of transparency (e.g., explainability, disclosure of training data, or bias acknowledgment) contribute most effectively to trust in LLMs.
Problem Statement: Despite increased efforts to make AI systems more transparent, it remains unclear which transparency mechanisms are truly beneficial to user trust and which may instead cause confusion or skepticism. This thesis investigates different transparency strategies and their theoretical impact on trust development in LLMs.
Research Questions:
1. What are the different types of transparency mechanisms used in LLMs?
2. How do these mechanisms theoretically impact user trust based on existing research?
3. Are there conflicting effects of transparency, such as increased understanding versus increased skepticism?
Methodology:
• Systematic Literature Review: Analyze existing academic and industry research on transparency and trust in AI.
• Comparative Analysis: Compare how different LLM providers (e.g., OpenAI, Google, Anthropic) implement transparency in their models.
• Theoretical Framework Development: Synthesize findings into a structured framework outlining the theoretical relationship between transparency and trust in LLMs.
Key References:
• Liao, Q. V., & Vaughan, J. W. (2023). AI transparency in the age of LLMs: A human-centered research roadmap. arXiv preprint arXiv:2306.01941.
Status Available
Supervisor Martin H. Möhle (email)
Description Motivation: Bias in LLMs has led to ethical concerns regarding misinformation, discrimination, and user manipulation. Trust in AI is deeply intertwined with perceptions of fairness, making it crucial to understand how bias affects trust and what mitigation strategies exist.
Problem Statement: Bias is an inherent challenge in LLMs due to their training on large-scale datasets that reflect societal inequalities. This thesis critically reviews how bias affects trust in LLMs and evaluates different fairness-oriented interventions proposed in research and industry.
Research Questions:
1. How does bias manifest in LLM-generated content?
2. What are the ethical implications of bias in AI, particularly regarding trust?
3. How effective are current fairness-enhancing techniques (e.g., debiasing algorithms, ethical guidelines) in mitigating trust issues?
Methodology:
• Critical Literature Review: Examine existing research on bias and trust in AI.
• Case Study Analysis: Evaluate instances where bias in LLMs led to public trust issues.
• Comparative Discussion: Assess different bias-mitigation strategies and their implications for trust.
Key References:
• Ranjan, R., Gupta, S., & Singh, S. N. (2024). A Comprehensive Survey of Bias in LLMs: Current Landscape and Future Directions.
• Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (Technology) Is Power: A Critical Survey of “Bias” in NLP.
• Weidinger, L., et al. (2021). Ethical and Social Risks of Harm from Language Models.
Status Reserved
Supervisor Martin H. Möhle (email)
Description Motivation: LLMs are increasingly being designed to offer personalized responses, adjusting tone and content based on user interactions. While this can enhance user experience, it also raises questions about privacy, manipulation, and trust.
Problem Statement: Personalization in AI is often assumed to increase user engagement and trust. However, it may also create risks such as filter bubbles or loss of objectivity. This thesis explores whether personalization truly fosters trust in LLMs or introduces new concerns.
Research Questions:
1. How does personalization in LLMs differ from traditional recommendation systems?
2. What are the theoretical benefits and risks of personalization in trust formation?
3. How do existing trust models in AI apply to personalized interactions with LLMs?
Methodology:
• Theoretical Literature Review: Examine research on personalization, trust, and AI ethics.
• Comparative Analysis: Compare personalization features across major LLM providers.
• Conceptual Discussion: Synthesize findings into a discussion on whether personalization enhances or diminishes trust.
Key References:
• Chen, J., Liu, Z., Huang, X., Wu, C., Liu, Q., Jiang, G., ... & Chen, E. (2024). When large language models meet personalization: Perspectives of challenges and opportunities. World Wide Web, 27(4), 42.
• Kirk, H. R., Vidgen, B., Röttger, P., & Hale, S. A. (2024). The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nature Machine Intelligence, 6(4), 383-392.
• Woźniak, S., Koptyra, B., Janz, A., Kazienko, P., & Kocoń, J. (2024). Personalized large language models. arXiv preprint arXiv:2402.09269.
• Kirk, H. R., Vidgen, B., Röttger, P., & Hale, S. A. (2023). Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback. arXiv preprint arXiv:2303.05453.
Status Available
Supervisor Martin H. Möhle (email)
Description Motivation: When LLMs generate misleading or biased responses, user trust can be severely impacted. However, trust repair mechanisms, such as acknowledging mistakes or offering fact-checking, may help mitigate damage and rebuild confidence.
Problem Statement: There is limited research on how LLMs can recover user trust after making errors. This thesis explores theoretical frameworks of trust repair in human-computer interaction and applies them to the context of LLMs.
Research Questions:
1. What types of trust violations (e.g., factual errors, ethical concerns) occur in LLM interactions?
2. How do existing trust repair strategies (e.g., apologies, fact-checking) apply to AI systems?
3. What challenges exist in designing effective trust repair mechanisms for LLMs?
Methodology:
• Literature Review: Investigate research on trust violations and repair mechanisms in AI and human-computer interaction.
• Case Study Analysis: Review incidents where LLMs lost user trust and assess how they attempted to recover it.
• Theoretical Framework Development: Propose a trust repair model applicable to LLMs.
Key References:
• Martell, M. J., Baweja, J. A., & Dreslin, B. D. (2025). Mitigative Strategies for Recovering From Large Language Model Trust Violations. Journal of Cognitive Engineering and Decision Making, 19(1), 76-95.
• Sebo, S. S., Krishnamurthi, P., & Scassellati, B. (2019, March). “I don't believe you”: Investigating the effects of robot trust violation and repair. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 57-65). IEEE.
• Kim, P. H., et al. (2006). When More Blame is Better Than Less: The Implications of Internal vs. External Attributions for the Repair of Trust.
• Lewicki, R. J., & Brinsfield, C. (2017). Trust repair. Annual Review of Organizational Psychology and Organizational Behavior, 4(1), 287-313.
Status Available
Supervisor Martin H. Möhle (email)
Description Students are also welcome to submit ideas or proposals for their own thesis topics. We will be glad to hear about your own topic suggestions and encourage them. Own topics will be supervised as long as they fall within the general scope of our chair and the corresponding research interests of the research assistant. In my case (Martin H. Möhle), topics will be considered that fall into the following categories/fields: Large Language Models, LLM agents, and behavioral research at the intersection of AI applications and process management or automation. If you are interested in discussing your own proposal in these areas, please contact Martin H. Möhle.
Status Available
Supervisor Martin H. Möhle (email)
Description Motivation: Users interact with LLMs in various contexts, from casual conversations to professional applications. However, the specific factors that shape trust in these interactions are not well understood.
Problem Statement: Trust signals, such as response consistency, confidence markers, and disclaimers, can influence user perception of LLM reliability. However, research has yet to consolidate these signals into a structured understanding of how they contribute to trust in AI.
Research Questions:
1. What are the key trust signals in human-LLM interactions?
2. How do existing theories of trust in human-computer interaction (HCI) apply to LLMs?
3. What potential risks arise from over-reliance on trust signals that may not indicate actual accuracy?
Methodology:
• Literature Review: Explore trust models in HCI and AI research to identify key trust signals.
• Comparative Analysis: Analyze LLM-generated responses (e.g., ChatGPT, Gemini, Claude, DeepSeek) and their use of trust-enhancing mechanisms (a toy sketch of such an analysis follows below).
• Theoretical Synthesis: Develop a conceptual framework outlining how trust signals function in human-LLM interaction.
Key References:
• Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021, March). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 624-635).
• Riegelsberger, J., et al. (2005). The Mechanics of Trust: A Framework for Research and Design.
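For the comparative-analysis step, one possible starting point is to count surface-level trust signals in collected responses. This is a toy sketch: the marker lists and example responses are invented and far from a validated instrument.

import re

# Illustrative (incomplete) marker lists for two kinds of trust signals.
CONFIDENCE = [r"\bcertainly\b", r"\bdefinitely\b", r"\bguaranteed\b"]
HEDGES = [r"\bmight\b", r"\bpossibly\b", r"\bI am not sure\b", r"\bas an AI\b"]

def count_signals(text: str) -> dict:
    return {
        "confidence_markers": sum(len(re.findall(p, text, re.I)) for p in CONFIDENCE),
        "hedges_disclaimers": sum(len(re.findall(p, text, re.I)) for p in HEDGES),
    }

# Invented example responses standing in for collected model outputs.
responses = {
    "model_a": "This is definitely correct. Certainly no exceptions apply.",
    "model_b": "This might hold in most cases, but I am not sure about edge cases.",
}
for name, text in responses.items():
    print(name, count_signals(text))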
Status Reserved
Supervisor Martin H. Möhle (email)
Description Many large companies are setting up centralized AI / machine learning competence centers (AI CoEs) to build internal capabilities and scale data-driven innovation. These centers often support multiple business units (BUs) by offering machine learning expertise, tools, and reusable solutions. However, early evidence from practice suggests that these centralized AI teams often struggle to gain traction within the business. In particular, misaligned funding models (e.g., unclear chargebacks, complex cost allocations) and poor incentive structures (e.g., the CoE bears the cost, while BUs receive the benefit) are key obstacles, especially in the post-deployment phases of AI applications. Business units may be reluctant to provide the necessary resources (developers, budgets, IT integration) to implement AI solutions, even though they would benefit from them. As a result, promising AI initiatives stall, fail before deployment, or are underutilized. This thesis explores the organizational tensions that emerge when central AI capabilities are offered to internal business stakeholders, and how companies can design better funding and governance structures to overcome these frictions. You will conduct a qualitative interview study with experts from different companies to explore how AI competence centers are structured, funded, and integrated into business units. You will analyze the data using thematic analysis to uncover key challenges, particularly around internal funding models and incentives. Your thesis will include a literature review, an interview guide, a thematic analysis of the findings, and practical recommendations for improving the organization of corporate AI initiatives.
Literature:
- Weber, M., Engert, M., Schaffer, N., Weking, J., & Krcmar, H. (2023). Organizational capabilities for AI implementation—coping with inscrutability and data dependency in AI. Information Systems Frontiers, 25(4).
- Vial, G., Cameron, A. F., Giannelia, T., & Jiang, J. (2023). Managing artificial intelligence projects: Key insights from an AI consulting firm. Information Systems Journal, 33(3).
Status Available
Supervisor Manuel Zall (email)