The ability to automatically, consistently, and correctly understand and extract information from textual sources is a key requirement of many real-world AI applications, and it is especially critical in the Human Resources (HR) domain. State-of-the-art large pre-trained language models have demonstrated impressive performance on a wide range of NLP tasks, including natural-language generation, summarization, question answering, reading comprehension, and named entity recognition/resolution. However, they have also shown limitations in areas such as interpretability, controllability, transparency, and fairness.
At Megagon Labs, we focus on taking advantage of large pre-trained language models and going beyond the current state of the art. We investigate, propose, and deploy new models, systems, and approaches that advance natural language processing capabilities. We do this by defining new architectures, using hybrid neuro-symbolic paradigms, and exploring domain-specific characteristics that improve the quality, consistency, fairness, and truthfulness of our solutions in the HR and related domains.
Recent Publications:
Reasoning Capacity in Multi-Agent Systems: Limitations, Challenges and Human-Centered Solutions
Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions
Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks
XATU: A Fine-grained Instruction-based Benchmark for Explainable Text Updates
Human-LLM Collaborative Annotation Through Effective Verification of LLM Labels