In this reading group meeting, Rupak Kumar Das presents an interesting paper on applying LLMs to fake news detection. The paper is called Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection, by Hu et al. Their key finding is...
In this reading group, Dr. Jonathan Dodge covers a meta-analysis of XAI user studies. A meta-analysis allows the investigator to combine experimental samples in such a way as to increase the statistical power of the results. This lets the community better understand...
In this reading group, Dr. Jonathan Dodge presents a paper called Streamlined AI Architecture for Wargaming, by Rose and Ryer. This paper describes APIs that help AI systems interoperate with simulation environments and the other entities needed for users to conduct wargaming...
In this reading group meeting, Dr. Jonathan Dodge covers a very interesting paper on the presence of “super weights” in LLMs. These weights are characterized by having high magnitude and high activations, both to the point of being outliers. The paper is...
In this reading group session, Dr. Jonathan Dodge covers the best paper award winner from ACL 2020, Beyond Accuracy: Behavioral Testing of NLP Models with CheckList, by Ribeiro et al. This paper is quite an interesting one, offering a short list of sanity checks that NLP models...
In this reading group session, the freshly minted Ph.D. Dr. Iyadunni Adenuga presents a paper about how we can manage automation while retaining some human agency. The paper is Collaborative Human–Automation Decision Making by Cummings and Bruni. Presentation link:...
In this reading group session, Sourav Panda covers a paper that examines approaches for switching optimizers in the middle of training. The paper is called Improving Generalization Performance by Switching from Adam to SGD, by Keskar and Socher. Presentation link:...
In this reading group session, Dr. Jonathan Dodge covers a work on combining fast and slow cognitive processes for planning. Roughly speaking, System 1 corresponds to intuition and System 2 corresponds to state-space search. The paper is called System 1.x: Learning to Balance...
In this reading group session, Dunni Adenuga presents an interesting paper investigating users' dissatisfaction with ChatGPT responses, including how users go about resolving their issues. The paper is Understanding Users' Dissatisfaction with ChatGPT...
The PLAINTEXT Lab is excited to share that Jeff Schulman has been named to the Scientific Advisory Board of the Active Inference Institute. This is a fantastic opportunity for Jeff to expand his research in Active Inference, the free energy principle, and how these...