Call for Papers

Grammatical Inference is the research area at the intersection of Machine Learning and Formal Language Theory. Since 1993, the International Conference on Grammatical Inference (ICGI) has been the meeting place for presenting, discovering, and discussing the latest research results on the foundations of learning languages, from theoretical and algorithmic perspectives to their applications (natural language or document processing, bioinformatics, model checking and software verification, program synthesis, robotic planning and control, intrusion detection…).

This 16th edition of ICGI will be held in person in Rabat, Morocco's modern capital with a deep-rooted history, located on the Atlantic coast. To celebrate the 30th anniversary of the ICGI conference, the program will include a distinguished lecture by Dana Angluin. The program will also include two invited talks, a half-day tutorial at the start of the conference by Will Merrill on formal languages and neural models for learning on sequences, and oral presentations of accepted papers.

Important dates
  • Deadline for submissions: March 12, 2023 (anywhere on Earth), extended from March 1, 2023
  • Notification of acceptance: May 15, 2023
  • Camera-ready copy: June 15, 2023
  • Conference: July 10-13, 2023
Topics of interest

Typical topics of interest include (but are not limited to):

  • Theoretical aspects of grammatical inference: learning paradigms, learnability results, complexity of learning.
  • Learning algorithms for language classes inside and outside the Chomsky hierarchy. Learning tree and graph grammars. 
  • Learning probability distributions over strings, trees or graphs, or transductions thereof.
  • Theoretical and empirical research on query learning, active learning, and other interactive learning paradigms.
  • Theoretical and empirical research on methods including, but not limited to, spectral learning, state-merging, distributional learning, statistical relational learning, statistical inference, and Bayesian learning.
  • Theoretical analysis of neural network models and their expressiveness through the lens of formal languages.
  • Experimental and theoretical analysis of different approaches to grammar induction, including artificial neural networks, statistical methods, symbolic methods, information-theoretic approaches, minimum description length, complexity-theoretic approaches, heuristic methods, etc.
  • Leveraging formal language tools, models, and theory to improve the explainability, interpretability, or verifiability of neural networks or other black box models.
  • Learning with contextualized data: for instance, Grammatical Inference from strings or trees paired with semantic representations, or learning by situated agents and robots.
  • Novel approaches to grammatical inference: induction by DNA or quantum computing, evolutionary approaches, new representation spaces, etc.
  • Successful applications of grammatical learning to tasks in fields including, but not limited to, natural language processing and computational linguistics, model checking and software verification, bioinformatics, robotic planning and control, and pattern recognition.
Types of contributions

We welcome three types of papers:

  • Formal or technical papers describe original contributions (theoretical, methodological, or conceptual) in the field of grammatical inference. A technical paper should clearly describe the situation or problem tackled, the relevant state of the art, the position or solution suggested, and the benefits of the contribution.
  • Position papers describe new research positions, approaches, or open problems, and may discuss the limits of current methods. Rigor in the presentation is nonetheless required: such papers must describe precisely the situation, problem, or challenge addressed and demonstrate how current methods, tools, and ways of reasoning may be inadequate.
  • Tool papers describe a new tool for grammatical inference. The tool must be publicly available, and the paper should include several case studies describing its use. In addition, the paper should clearly describe the implemented algorithms, input parameters and syntax, and the produced output.
Guidelines for authors

Accepted papers will be published in the Proceedings of Machine Learning Research (PMLR) series. The total length of the paper should not exceed 12 pages on A4-size paper (references and appendices may exceed this limit, but authors are warned that reviewers may not read beyond page 12). Prospective authors are strongly encouraged to use the JMLR LaTeX style file, since it is the required format for the final published version.

All papers should be submitted electronically by March 12, 2023 (anywhere on Earth); the submission URL is:
https://www.easychair.org/conferences/?conf=icgi2023

The peer review process is double-blind: we expect submitted papers to be anonymous.
