Rethinking Ranking: Introducing Multi-Conditional Ranking (MCR) with LLMs
With the MCRank benchmark and our EXSIR method, we show that LLMs can significantly improve their performance on multi-conditional ranking tasks when guided by structured reasoning.
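To make the structured-reasoning idea concrete, here is a minimal sketch of a decomposed extract-sort-rank loop for multi-conditional queries. The `call_llm` helper and the prompts are hypothetical stand-ins for any chat-completion client; the exact EXSIR procedure is detailed in the paper.

```python
# Minimal sketch of structured reasoning for multi-conditional ranking.
# `call_llm` is a hypothetical stand-in for any chat-completion client;
# the prompts below are illustrative, not the exact EXSIR prompts.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError

def rank_with_structured_reasoning(items: list[str], query: str) -> str:
    # Step 1: have the model extract the individual conditions from the query.
    conditions = call_llm(
        f"List each distinct condition in this ranking request, one per line:\n{query}"
    ).splitlines()

    # Step 2: order conditions by priority so the most important is applied last
    # and therefore dominates the final ordering.
    ordered = call_llm(
        "Order these conditions from lowest to highest priority, one per line:\n"
        + "\n".join(conditions)
    ).splitlines()

    # Step 3: rank the items one condition at a time instead of all at once.
    ranking = "\n".join(items)
    for cond in ordered:
        ranking = call_llm(
            f"Re-rank these items so they best satisfy the condition "
            f"'{cond}', keeping the current order as a tiebreaker:\n{ranking}"
        )
    return ranking
```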
EMNLP 2024 Highlights
Megagon Labs has curated this blog post to spotlight key developments in agentic systems, AI safety, and human-centered AI.
Optimizing Compound AI Systems
In this blog, we argue that an optimization framework for compound AI systems should pursue broader goals: multi-objective optimization (accuracy, cost, latency, etc.), multi-plan optimization, and constraint handling, especially budget constraints. This list of goals is far from comprehensive, but each is important in enterprise scenarios.
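As a toy illustration of constraint-aware, multi-objective plan selection, the sketch below picks the most accurate candidate plan that stays within a cost budget and a latency ceiling. The `Plan` fields and example numbers are made up for illustration; a real framework would estimate them from profiling.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    accuracy: float  # estimated task accuracy
    cost: float      # estimated dollars per 1K requests
    latency: float   # estimated seconds per request

def select_plan(plans: list[Plan], budget: float, max_latency: float) -> Plan:
    """Pick the most accurate plan that satisfies both constraints."""
    feasible = [p for p in plans if p.cost <= budget and p.latency <= max_latency]
    if not feasible:
        raise ValueError("No plan satisfies the constraints; relax budget or latency.")
    return max(feasible, key=lambda p: p.accuracy)

# Hypothetical candidate plans for the same task.
plans = [
    Plan("large-model-pipeline", accuracy=0.91, cost=12.0, latency=3.2),
    Plan("small-model-with-rag", accuracy=0.86, cost=2.5, latency=1.1),
    Plan("model-cascade", accuracy=0.89, cost=4.0, latency=1.8),
]
print(select_plan(plans, budget=5.0, max_latency=2.0).name)  # model-cascade
```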
AmbigNLG: A Tutorial
AmbigNLG tackles ambiguity in Natural Language Generation (NLG) instructions by identifying unclear specifications and refining them for better output quality.
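The sketch below illustrates the general detect-then-clarify idea: check an instruction for missing specifications and append clarifying constraints. The ambiguity categories and the `call_llm` helper are illustrative placeholders, not AmbigNLG's exact taxonomy or interface; see the tutorial for the real workflow.

```python
# Sketch of a detect-then-clarify loop for ambiguous NLG instructions.
# `call_llm` is a hypothetical LLM client; the category list is illustrative,
# not AmbigNLG's exact taxonomy.

AMBIGUITY_CHECKS = {
    "length": "Does the instruction specify how long the output should be?",
    "style": "Does the instruction specify the tone or writing style?",
    "format": "Does the instruction specify the output format?",
}

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion client

def refine_instruction(instruction: str) -> str:
    refined = instruction
    for category, check in AMBIGUITY_CHECKS.items():
        answer = call_llm(f"{check}\nInstruction: {instruction}\nAnswer yes or no.")
        if answer.strip().lower().startswith("no"):
            # Ask the model to draft a clarifying constraint for this category.
            clarification = call_llm(
                f"Write one sentence that resolves the missing {category} "
                f"specification for this instruction:\n{instruction}"
            )
            refined += " " + clarification.strip()
    return refined
```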
Your Internship in the AI Industry: A Student’s Guide
Use this guide to learn valuable tips to maximize your AI internship experience and walk out with more than just a line item on your resume.
Megagon Labs Summer 2024 Internship Experience
Through this inside peek at our internship program, explore the types of projects we formulate for our interns at Megagon Labs. If you are looking to start an internship soon, take our interns' advice and apply it to your own internship.
MEGAnno in Action: Human-LLM Collaborative Annotation
MEGAnno is a data annotation framework that combines the power of large language models (LLMs) with human expertise to streamline and enhance the data labeling process. Throughout this article, we showcase MEGAnno's capabilities with detailed code snippets.
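For intuition, here is a generic sketch of a human-LLM collaborative labeling loop in the spirit of this workflow: confident LLM suggestions are auto-accepted, while uncertain ones are routed to an annotator. The helper functions are hypothetical placeholders, not MEGAnno's actual API; the article itself walks through the real code.

```python
# Generic sketch of human-LLM collaborative annotation. The helpers below
# are hypothetical placeholders, not MEGAnno's API.

def llm_label(text: str) -> tuple[str, float]:
    """Placeholder: return (label, confidence) from an LLM labeling prompt."""
    raise NotImplementedError

def human_review(text: str, suggested: str) -> str:
    """Placeholder: show the LLM's suggestion to an annotator, return the final label."""
    raise NotImplementedError

def annotate(corpus: list[str], confidence_threshold: float = 0.9) -> list[tuple[str, str]]:
    labeled = []
    for text in corpus:
        label, confidence = llm_label(text)
        # Auto-accept confident LLM labels; route uncertain ones to a human.
        if confidence < confidence_threshold:
            label = human_review(text, suggested=label)
        labeled.append((text, label))
    return labeled
```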
NAACL 2024 Highlights & Big Trends Shaping NLP
Drawing from our experience at the NAACL conference, the Megagon Labs team has crafted this blog post to highlight three major trends: targeted evaluation, reasoning, and fine-tuning/RAG. These trends represent significant advancements in the field of NLP and showcase the innovative approaches researchers are taking to enhance the capabilities of LLMs.
Unlocking the Potential of Transformers for Long-Form Text Matching: A Simple yet Powerful Approach
Long-form text matching is a critical problem in Natural Language Processing (NLP) and Information Retrieval (IR). We propose a simple yet effective solution using sequence pair classification with Transformer models, and demonstrate its superiority over state-of-the-art Siamese network-based methods.
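A minimal sketch of this sequence pair classification setup with the Hugging Face transformers library is shown below; the checkpoint, label count, and truncation settings are illustrative defaults rather than the paper's exact configuration.

```python
# Minimal sequence-pair classification sketch with Hugging Face transformers.
# The checkpoint and truncation settings are illustrative; the paper's exact
# model and preprocessing may differ.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # match / no-match
)

def match_score(doc_a: str, doc_b: str) -> float:
    # Encode both documents as one sequence pair: [CLS] doc_a [SEP] doc_b [SEP].
    inputs = tokenizer(
        doc_a, doc_b, truncation=True, max_length=512, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability of the "match" class.
    return torch.softmax(logits, dim=-1)[0, 1].item()
```

Because both documents are encoded in a single pass, every token in one document can attend to every token in the other; that cross-document attention is exactly what Siamese encoders, which embed each document independently, give up.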
Order Matters: Assessing LLM Sensitivity in Multiple-Choice Tasks
Explore the relationship between option arrangement and performance variations in Large Language Models (LLMs) during multiple-choice tasks. Through meticulous analysis, we uncovered substantial sensitivity of LLMs to the order of answer options, with performance fluctuations of up to 75% across different benchmarks.
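A simple way to probe this sensitivity is to re-ask the same question under every permutation of the answer options and measure how often the model's pick agrees across orderings. In the sketch below, `ask_llm` is a hypothetical stand-in for an LLM client; note that four options already require 24 queries per question.

```python
# Probe option-order sensitivity: ask the same question under every
# permutation of the answer options and record the model's pick.
from itertools import permutations

def ask_llm(prompt: str) -> str:
    """Placeholder: return the LLM's chosen option letter, e.g. 'B'."""
    raise NotImplementedError

def order_consistency(question: str, options: list[str]) -> float:
    picks = []
    for perm in permutations(options):
        letters = "ABCDEFGH"[: len(perm)]
        listing = "\n".join(f"{l}. {o}" for l, o in zip(letters, perm))
        answer = ask_llm(f"{question}\n{listing}\nAnswer with a single letter.")
        # Map the chosen letter back to the underlying option text.
        idx = letters.find(answer.strip()[:1].upper())
        picks.append(perm[idx] if 0 <= idx < len(perm) else None)
    # Fraction of permutations agreeing with the most common pick;
    # 1.0 means the model is fully order-invariant on this question.
    top = max(picks.count(p) for p in set(picks))
    return top / len(picks)
```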