This paper proposes the new task of disambiguating natural language instructions and demonstrates its importance. It introduces AmbigNLG, a novel task designed to address task ambiguity in instructions for Natural Language Generation (NLG), constructs accompanying benchmark data, and performs detailed analyses, including classifying types of ambiguity and evaluating the effectiveness of disambiguation. This research is highly significant in an era when instructing large language models through prompts has become commonplace, and NLP24 determined that the paper merits the Young Researcher Encouragement Award for its high originality and potential for further development.