Competition Format and Rules

As in ICCMA'17, the 2019 edition of the competition will feature seven main tracks, on the complete (CO), preferred (PR), stable (ST), semi-stable (SST), stage (STG), grounded (GR), and ideal (ID) semantics. Each of these tracks is composed of 4 tasks (resp. 2 for the single-status semantics, i.e., grounded and ideal), one for each reasoning problem. Furthermore, four new dynamic tracks will be organized to test how solvers behave on evolving frameworks.

Each solver participating in the competition can support, i.e. compete in, an arbitrary set of tasks. If a solver supports all the tasks of a track, it also automatically participates in the corresponding track.

Each solver will have 4 GB of RAM available to compute the results of tasks in both the classical and dynamic tracks.

Tasks

A task is a reasoning problem under a particular semantics. Following the same approach as in ICCMA'15 and ICCMA'17, we consider four different problems:

  • DC-σ: Given F=(A,R) and a ∈ A, decide whether a is credulously accepted under σ.
  • DS-σ: Given F=(A,R) and a ∈ A, decide whether a is skeptically accepted under σ.
  • SE-σ: Given F=(A,R), return some set E ⊆ A that is a σ-extension.
  • EE-σ: Given F=(A,R), enumerate all sets E ⊆ A that are σ-extensions.

for the seven semantics σ ∈ {CO, PR, ST, SST, STG, GR, ID}. For the single-status semantics (GR and ID), only the problems SE and DC are considered (EE is equivalent to SE, and DS is equivalent to DC). Note that DC-CO and DC-PR are equivalent as well, but in order to allow participation in the preferred track without implementing tasks for the complete semantics (or vice versa), we keep both tasks.
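For illustration, the following minimal sketch (in Python; not an official reference solver) computes the grounded extension of a small framework and uses it to answer SE-GR and DC-GR. The in-memory encoding of an AF as a set of arguments plus a set of attack pairs is an assumption made here; the official input and output formats are specified in the solver requirements document linked at the bottom of this page.

    # Minimal sketch, not an official reference solver: an AF is assumed to be
    # given as a set of arguments and a set of attack pairs.
    def grounded_extension(args, attacks):
        # Iterate the characteristic function
        # F(S) = {a in args | every attacker of a is attacked by some b in S}
        # from the empty set up to its least fixed point.
        attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
        ext = set()
        while True:
            defended = {a for a in args
                        if all(any((b, c) in attacks for b in ext)
                               for c in attackers[a])}
            if defended == ext:
                return ext
            ext = defended

    if __name__ == "__main__":
        A = {"a", "b", "c"}
        R = {("a", "b"), ("b", "c")}            # a attacks b, b attacks c
        E = grounded_extension(A, R)            # SE-GR: {'a', 'c'}
        print("SE-GR:", sorted(E))
        print("DC-GR(c):", "YES" if "c" in E else "NO")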

The combination of problems with semantics amounts to a total of 24 tasks. Each solver participating in a task will be run on a fixed number of instances for that task, with a timeout of 10 minutes per instance. For each instance, a solver gets

  • (0,1] points, if it delivers a correct (possibly incomplete) result. For SE, DC, and DS, the assigned score is 1 if the solver returns a correct answer (an extension for SE; YES or NO for DC and DS). For EE, a solver receives a fraction of points in (0,1] corresponding to the percentage of extensions it enumerates (1 if it returns all of them);
  • -5 points, if it delivers an incorrect result;
  • 0 points otherwise.
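
The following small sketch (a hypothetical helper, not part of the official evaluation scripts) summarizes the per-instance scoring scheme above; the classification of a result as correct or incorrect is assumed to come from a separate checker.

    # Illustrative per-instance scoring, following the rules above. The
    # classification of the result ("correct", "incorrect", anything else)
    # is assumed to come from a separate checker; the fraction argument is
    # only meaningful for EE answers.
    def instance_points(status, fraction_of_extensions_found=1.0):
        if status == "correct":
            # 1 for SE/DC/DS; a value in (0,1] for a (possibly partial) EE answer
            return fraction_of_extensions_found
        if status == "incorrect":
            return -5
        return 0  # empty or unparsable output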

The terms correct and incorrect for the different reasoning problems are defined as follows.

  • DC-σ (resp. DS-σ): if the queried argument is credulously (resp. skeptically) accepted in the given abstract framework under σ, the result is correct if it is YES and incorrect if it is NO; if the queried argument is not credulously (resp. skeptically) accepted in the given abstract framework under σ, the result is correct if it is NO and incorrect if it is YES.
  • SE-σ: the result is correct if it is a σ-extension of the given abstract framework and incorrect if it is a set of arguments that is not a σ-extension of the given abstract framework. If the given abstract framework has no σ-extensions, then the result is correct if it is NO and incorrect if it is any set of arguments.
  • EE-σ: the result is correct if all the returned sets are σ-extensions, and incorrect if it contains one or more sets of arguments that are not σ-extensions of the given abstract framework.

Intuitively, a result is neither correct nor incorrect (and therefore gets 0 points) if (i) it is empty (e.g., the timeout was reached without answer) or (ii) it is not parsable (e.g., some unexpected error message).
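As an example of the kind of check used to classify an SE answer, the sketch below (using the same assumed data layout as the earlier grounded-extension example) verifies whether a returned set is a stable extension, i.e., conflict-free and attacking every argument outside the set.

    # Example check used to classify an SE-ST answer: a set E is a stable
    # extension iff it is conflict-free and attacks every argument outside E.
    # The (args, attacks) layout matches the earlier sketch and is an
    # assumption of this example.
    def is_stable_extension(candidate, args, attacks):
        E = set(candidate)
        conflict_free = not any((a, b) in attacks for a in E for b in E)
        attacks_rest = all(any((a, b) in attacks for a in E)
                           for b in set(args) - E)
        return conflict_free and attacks_rest

    # With A = {"a", "b", "c"} and R = {("a", "b"), ("b", "c")}:
    # is_stable_extension({"a", "c"}, A, R) -> True
    # is_stable_extension({"a", "b"}, A, R) -> False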

The score of a solver for a particular task is the sum of points over all instances. The ranking of solvers for the task is then based on the scores in descending order. Ties are broken by the total time it took the solver to return correct results.
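A possible sketch of this ranking rule (the record layout used here is an assumption of this example) follows:

    # Sketch of the task ranking: sum the per-instance points, sort by score
    # (descending) and break ties by the total time spent on correctly
    # answered instances (ascending).
    def rank_solvers(results):
        # results: {solver: [(points, runtime_seconds, was_correct), ...]}
        table = []
        for solver, runs in results.items():
            score = sum(p for p, _, _ in runs)
            time_on_correct = sum(t for _, t, ok in runs if ok)
            table.append((solver, score, time_on_correct))
        return sorted(table, key=lambda row: (-row[1], row[2]))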

We recall that the semi-stable and stage semantics coincide with the stable semantics for AFs that possess at least one stable extension.

Classical Tracks

All tasks for a particular semantics constitute a track. The ranking of solvers for a track is based on the sum of scores over all tasks of the track. Again, ties are broken by the total time it took the solver to return correct results. Note that in order to make sure that each task has the same impact on the evaluation of the track, all tasks for one semantics will have the same number of instances.

The winner of each track will receive an award.

Dynamic Tracks

The 2019 edition of ICCMA also features additional tracks to evaluate solvers on dynamic Dung frameworks. The aim is to test solvers designed to efficiently recompute solutions after small changes to the original abstract framework.

In this case, an instance consists of an initial framework (as for the classical tracks) and an additional file storing a sequence of additions/deletions of attacks on the initial framework (at least 15 changes). This file will be provided in a simple text format, e.g., a sequence of +att(a,b) (attack addition) or -att(d,e) (attack deletion). The output must report the solution for the initial framework followed by one solution for each change.
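A possible way to process such an instance is sketched below: parse the change file, apply each addition/deletion to the attack relation, and recompute an answer after every step. The name solve is a placeholder for any solving routine (e.g., the grounded-extension sketch above); any file syntax beyond the +att/-att lines shown here is an assumption of this example.

    # Sketch of how a dynamic instance could be processed: read the change
    # file (lines such as "+att(a,b)" or "-att(d,e)", as described above),
    # apply each change to the attack relation, and recompute an answer after
    # every step. `solve` is a placeholder for any solving routine.
    import re

    CHANGE = re.compile(r"([+-])att\((\w+),(\w+)\)")

    def run_dynamic_instance(args, attacks, change_lines, solve):
        attacks = set(attacks)
        outputs = [solve(args, attacks)]          # solution for the initial AF
        for line in change_lines:
            m = CHANGE.match(line.strip())
            if not m:                             # ignore blank/unknown lines
                continue
            op, src, tgt = m.groups()
            if op == "+":
                attacks.add((src, tgt))           # attack addition
            else:
                attacks.discard((src, tgt))       # attack deletion
            outputs.append(solve(args, attacks))  # one output per change
        return outputs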

The four new dynamic tracks concern the following semantics and problems, for a total of 14 different tasks:
  • Complete semantics (SE, EE, DC, DS)
  • Preferred semantics (SE, EE, DC, DS)
  • Stable semantics (SE, EE, DC, DS)
  • Grounded semantics (only SE and DC)

As for the classical tracks, each solver can support an arbitrary subset of these tasks; a solver supporting all the tasks of a dynamic track automatically participates in that track.

The result is correct if the whole output, considering the starting framework plus all the changes, is correct (1 point in case of SE, DC, and DS; (0, 1] points in case of EE). It is incorrect if the overall output is incorrect (-5 points). Otherwise, if the overall answer is neither correct nor incorrect, 0 points will be assigned. To evaluate whether the output is correct, we use the same rules as for the classical tracks above. More precisely:

A result is correct and complete if, for n changes, n+1 correct and complete sub-results are given (one for the initial framework and one for each change). The score for a correct and complete result is 1, as usual.

A partial (incomplete) result is considered correct if it gives fewer than n+1 answers, but each of the given answers is correct and complete (w.r.t. the corresponding static task). This rule holds for all the problems (SE, DC, DS, EE) in the dynamic track. A correct but incomplete result scores a value in (0, 1], depending on the rate of correct sub-solutions given. Exception: if the dynamic task involves enumeration (i.e., EE) and the last solution provided by a solver is correct but partial, the whole answer is evaluated as if that last problem had not been solved at all, i.e., the answer is considered partial and correct. A fraction of 1/n points, depending on the percentage of enumerated extensions returned, will be assigned.

If any of the sub-solutions is incorrect, the overall output is considered incorrect (-5 points).

Otherwise, if no answer is given (usually due to a timeout), 0 points will be assigned.
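
The sketch below (a hypothetical helper mirroring the rules above) assigns the dynamic-track points for one instance with n changes, assuming the correctness of each of the n+1 sub-solutions has already been established with the classical-track checks.

    # Hypothetical helper mirroring the dynamic-track rules: with n changes
    # there are n+1 expected sub-solutions. Any incorrect sub-solution makes
    # the whole answer incorrect (-5); otherwise the score is the rate of
    # correct sub-solutions delivered (0 if none was given).
    def dynamic_points(sub_statuses, n_changes):
        expected = n_changes + 1
        if any(s == "incorrect" for s in sub_statuses):
            return -5
        n_correct = sum(1 for s in sub_statuses if s == "correct")
        return n_correct / expected if n_correct else 0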

The timeout to compute an answer for the dynamic track is 5 minutes for each framework/change (half of the time allowed for a single instance in the classical tracks). For n changes, this gives a global timeout of 5*(n+1) minutes: for example, 45 minutes for 8 changes. Note that the timeout applies to the whole dynamic instance, not to each output corresponding to a change.

In the final ranking, ties will be broken by the total time it took the solver to return correct results for all the considered frameworks (starting plus changes).

More information about the format of input files and the expected output is available at http://iccma2019.dmi.unipg.it/res/SolverRequirements.pdf.