Hindawi Publishing Corporation Computational Intelligence and Neuroscience Volume 2015, Article ID 286354, 10 pages http://dx.doi.org/10.1155/2015/286354

Research Article

A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles

Ricardo Soto,1,2,3 Broderick Crawford,1,4,5 Cristian Galleguillos,1 Fernando Paredes,6 and Enrique Norero7

1 Pontificia Universidad Católica de Valparaíso, 2362807 Valparaíso, Chile
2 Universidad Autónoma de Chile, 7500138 Santiago, Chile
3 Universidad Científica del Sur, Lima 18, Lima, Peru
4 Universidad Central de Chile, 8370178 Santiago, Chile
5 Universidad San Sebastián, 8420524 Santiago, Chile
6 Escuela de Ingeniería Industrial, Universidad Diego Portales, 8370109 Santiago, Chile
7 Facultad de Ingeniería, Universidad Santo Tomás, 2561694 Viña del Mar, Chile

Correspondence should be addressed to Cristian Galleguillos; [email protected]

Received 7 January 2015; Accepted 24 April 2015

Academic Editor: Christian W. Dawson

Copyright © 2015 Ricardo Soto et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Sudoku problem is a well-known logic-based combinatorial number-placement puzzle. It consists of filling an n² × n² grid, composed of n² columns, n² rows, and n² subgrids, each containing the distinct integers from 1 to n². The puzzle belongs to the NP-complete collection of problems, for which diverse exact and approximate solution methods exist. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints whose variables must be pairwise different, which are exactly the kind of constraints that Sudokus exhibit. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrid and approximate methods.

1. Introduction

The Sudoku puzzle is a combinatorial problem consisting of assigning the digits from 1 to n² to the cells of a matrix of size n² × n². The matrix is composed of n² rows, n² columns, and n² subgrids of size n × n, as shown in Figure 1. The problem has a simple set of rules: in each region (row, column, or subgrid) every digit must be assigned exactly once, and hence every cell of every region must be filled. Each digit therefore appears n² times scattered across the grid, never repeated within the same row, column, or subgrid. The common Sudoku size is n = 3; the puzzle is then a 9 × 9 matrix with nine 3 × 3 subgrids. Each Sudoku instance starts with some preassigned values, called the givens, and the difficulty of the puzzle depends on the positioning of those givens in the matrix. At least 17 givens are required for an instance to have a unique solution [1].
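The rules above can be checked mechanically. The following sketch (ours, purely illustrative and not part of the paper) validates a partial 9 × 9 grid against the row, column, and subgrid all-different rules, with zeros denoting empty cells:

```python
def regions(n=3):
    """Yield the index sets of all rows, columns, and n x n subgrids
    of an (n*n) x (n*n) Sudoku grid."""
    size = n * n
    for r in range(size):                      # rows
        yield [(r, c) for c in range(size)]
    for c in range(size):                      # columns
        yield [(r, c) for r in range(size)]
    for br in range(n):                        # subgrids
        for bc in range(n):
            yield [(br * n + dr, bc * n + dc)
                   for dr in range(n) for dc in range(n)]

def is_consistent(grid, n=3):
    """True if no region repeats a nonzero digit (partial grids allowed)."""
    for region in regions(n):
        filled = [grid[r][c] for (r, c) in region if grid[r][c] != 0]
        if len(filled) != len(set(filled)):
            return False
    return True
```

A grid that repeats a digit within any single region fails the check, while any grid of distinct-per-region digits (including the empty grid) passes.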

In recent years, Sudokus have appeared as interesting problems for testing constraint satisfaction and optimization algorithms because of their NP-completeness and different modeling possibilities. In this context, several approaches from different domains have been proposed. For instance, exact methods such as constraint programming [2, 3] and SAT [4] are in general efficient techniques for solving Sudokus. On the approximate methods side, metaheuristics have proven to be efficient as well [5, 6]. Some hybrids combining exact and approximate methods have also been reported [7, 8], as well as techniques such as Sinkhorn balancing [9], rewriting rules [10], and entropy minimization [11]. In this work, we introduce a new hybrid that smartly integrates a global constraint, namely, the alldifferent constraint, into a classic tabu search procedure. The alldifferent constraint comes from the constraint

programming world and has been specifically designed for the efficient domain reduction of variables involved in constraints that must be pairwise different. This global constraint fits Sudokus perfectly, since all the puzzle constraints can be expressed as pairwise comparisons. We implement the alldifferent constraint following Puget's approach [12], which identifies Hall intervals [13] and then filters the domains. This allows the constraints to be propagated efficiently, considerably reducing the search space and alleviating the work of the tabu search. As a consequence, the search process is accelerated while the quality of solutions is maintained. We report experimental results in which our proposed algorithm outperforms the best results reported in the literature.

This paper is organized as follows. In Section 2, we describe the previous work. Section 3 presents the classic tabu search. The alldifferent constraint is presented in Section 4. The proposed algorithm is presented in Section 5, followed by the corresponding experimental results. Finally, we conclude and give some directions for future work.

Figure 1: Sudoku puzzle instance: AI Escargot.

2. Related Work

In this paper, we concentrate on incomplete search methods, especially on solving hard instances of the puzzle. Within this scenario, different approaches have been suggested, mainly based on metaheuristics. For instance, in [14], Sudoku puzzles are modeled as a combinatorial optimization problem whose objective function is the minimization of incorrectly placed numbers on the board. This model is solved using simulated annealing, but the approach is more focused on producing valid Sudokus than on the performance of the resolution. In [15], a particle swarm optimizer (PSO) for solving Sudokus is presented; the authors' goal was to validate the use of geometric operators for PSO on complex combinatorial spaces. In [5], a classic genetic algorithm is tuned with similar geometric operators, particularly Hamming space crossovers and swap space crossovers, reporting good solutions for a hard Sudoku instance. In [16], another GA is presented, improving the selection, crossover, and mutation operators; it achieves a better convergence rate and stability than the classic GA. In [8], a hybrid combining AC3 and tabu search is reported, where the idea is to apply AC3 at each iteration of the metaheuristic in order to systematically attempt to reduce the variable domains. A similar approach using cuckoo search is presented in [7], but there AC3 is only employed as a preprocessing phase. In [17], the alldifferent constraint is used to reduce variable domains by overlapping the 27 Sudoku constraints. The approach succeeds on easy instances and some others, but for more complex instances the solution must be reached with a complete search solver. In Section 6, a comparison of the proposed algorithm against the best results reached by hybrids and approximate methods is given.

3. Tabu Search

Tabu search (TS), introduced by Glover [18, 19], is a metaheuristic mainly concerned with combinatorial optimization problems. TS has been successfully employed on different kinds of real-life problems as well as problems from operations research and computer science, such as the traveling salesman problem, the knapsack problem, and the timetabling problem. TS performs local search over single solutions, starting from a given solution 𝑆. This starting solution is then iteratively improved through small changes, the resulting solutions being called "neighbors" of 𝑆, until some stopping criterion is reached. The local search moves from neighbor to neighbor as long as possible, guided by the minimization or maximization of a defined objective function. With some moves, 𝑆 may become trapped in a local optimum where the local search cannot find any improving neighbor. To tackle this problem, TS makes use of a memory structure named the tabu list, a feature that distinguishes it from other incomplete methods. The aims of the tabu list are (i) to evade poor-scoring areas and (ii) to avoid returning to recently visited areas. Hence, the tabu list keeps some data about recently visited solutions, with the aim of avoiding them if they are bad solutions, thereby improving the efficiency of the search process. The tabu list is considered the most important feature of TS.

Algorithm 1 describes the classic TS procedure. As input, the algorithm receives a primary solution that contains the given values and empty cells in the other positions, and as output it returns the best solution found. At line 3, a while statement manages the iterations of the process until the defined stop criterion is reached; for instance, the stop condition may be a maximum iteration limit or a threshold on the evaluation function.
In this implementation, the evaluation function to be minimized is the number of values still needed to


Input: 𝑆0
Output: 𝑆𝑏𝑒𝑠𝑡
(1) 𝑆𝑏𝑒𝑠𝑡 ← 𝑆0
(2) 𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡 ← ∅
(3) While ¬ Stop Condition do
(4)   𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡 ← ∅
(5)   For (𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒 ∈ 𝑆𝑏𝑒𝑠𝑡 neighborhood) do
(6)     If ¬ ContainsAnyFeatures(𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒, 𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡)
(7)       push(𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡, 𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒)
(8)     End If
(9)   End For
(10)  𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒 ← LocateBestCandidate(𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡)
(11)  If cost(𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒) ≤ cost(𝑆𝑏𝑒𝑠𝑡)
(12)    𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡 ← FeatureDifferences(𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒, 𝑆𝑏𝑒𝑠𝑡)
(13)    𝑆𝑏𝑒𝑠𝑡 ← 𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒
(14)    While Len(𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡) > 𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡𝑠𝑖𝑧𝑒 do
(15)      ExpireFeature(𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡)
(16)    End While
(17)  End If
(18) End While
(19) Return 𝑆𝑏𝑒𝑠𝑡

Algorithm 1: Tabu search.

complete the puzzle. At line 7, new potential solutions are created by a neighboring procedure; these solutions are added to the candidate list only if they do not contain features currently on the tabu list. A best candidate is then selected, namely, the solution of best quality according to the cost function. At line 11, the cost of the chosen candidate is compared with that of the best solution (𝑆𝑏𝑒𝑠𝑡); if it improves it, the differences between the two are added to the tabu list and 𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒 becomes the new 𝑆𝑏𝑒𝑠𝑡. Finally, some features are expired from the tabu list, generally in the same order in which they were included, permitting solutions that contain the expired features to be added to the candidate list in later iterations.
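The control flow of Algorithm 1 can be sketched generically. The following skeleton is our illustrative version, not the authors' code; `cost`, `neighbors`, and `features` are caller-supplied placeholders standing in for the problem-specific components:

```python
def tabu_search(s0, cost, neighbors, features, max_iters=10_000, tabu_size=50):
    """Generic tabu search skeleton following Algorithm 1.

    cost(s)        -> value to minimize (0 means solved)
    neighbors(s)   -> iterable of candidate solutions near s
    features(a, b) -> hashable move features distinguishing a from b
    """
    s_best = s0
    tabu = []                                   # tabu list of recent features
    for _ in range(max_iters):
        if cost(s_best) == 0:                   # stop condition: solved
            break
        candidates = [s for s in neighbors(s_best)
                      if not any(f in tabu for f in features(s, s_best))]
        if not candidates:
            continue
        s_cand = min(candidates, key=cost)      # best non-tabu candidate
        if cost(s_cand) <= cost(s_best):
            tabu.extend(features(s_cand, s_best))
            s_best = s_cand
            while len(tabu) > tabu_size:        # expire oldest features
                tabu.pop(0)
    return s_best
```

For example, minimizing |x| over the integers with `neighbors(x) = [x - 1, x + 1]` and the visited value itself as the move feature drives the search from any start point down to 0.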

4. CP Overview and the alldifferent Constraint

Constraint programming (CP) is a paradigm for solving combinatorial search and optimization problems, mainly from domains such as scheduling, planning, and vehicle routing. In CP, a problem is modeled by relating all its variables through constraints, and a constraint solver is employed to solve it. CP consists of two identifiable stages: (i) modeling, stating constraints involving the problem variables, and (ii) solving, finding a solution satisfying all the constraints. Hence, every problem is represented in terms of decision variables and constraints, and the aim of the constraint solver is to find an assignment to all the variables that satisfies all the constraints.

4.1. Basic Concepts. A Constraint Satisfaction Problem (CSP) is defined by:

(i) a finite set of variables, 𝑋 = {𝑥1, 𝑥2, . . . , 𝑥𝑛};
(ii) a domain for each variable, 𝐷(𝑋) = {𝐷(𝑥1), 𝐷(𝑥2), . . . , 𝐷(𝑥𝑛)}, also noted 𝑑1, 𝑑2, . . . , 𝑑𝑛, where 𝑑𝑖 is the domain of 𝑥𝑖;
(iii) a finite set of constraints 𝐶(𝑋), where 𝑐(𝑥1, . . . , 𝑥𝑛) denotes a constraint involving variables 𝑥1, . . . , 𝑥𝑛.

A CSP is denoted by the tuple 𝑃⟨𝑋, 𝐷, 𝐶⟩. A CSP has a solution only if every constraint in 𝐶 can be satisfied, in which case it is called a consistent CSP; if no solution exists, it is an inconsistent CSP. Algorithms based on backtracking, such as forward checking, are in general employed to solve CSPs [20].

4.2. alldifferent Constraint. The alldifferent constraint commonly appears in problems based on permutations or in which disjoint paths need to cover a directed graph [21–23], among other problems involving constraints of pairwise difference. The main ability of this constraint is that it exploits the global information of the problem constraints instead of handling each pairwise constraint independently. Exploiting the whole information leads to more efficient domain filtering, as explained in [24]. In the following, we provide some necessary definitions.

Definition 1. A 𝑘-ary constraint connecting variables in 𝑋 with domains 𝐷(𝑋) is defined as a subset of the Cartesian product 𝐷(𝑥1) × ⋅ ⋅ ⋅ × 𝐷(𝑥𝑘), intended as the set of allowed 𝑘-tuples for these 𝑘 variables.

A constraint that involves one variable (e.g., 1-ary: 𝑥1 = 8) is called a unary constraint and a binary one (e.g., 2-ary:


𝑥1 ≠ 𝑥2) involves two variables, and so on. In general, a 𝑘-ary constraint has a scope of size 𝑘. A conjunction of several simpler constraints is called a global constraint, providing a simpler model for a problem; one of these constraints is the well-known alldifferent constraint. The alldifferent constraint is a constraint of difference over all variables involved in the relation, and it specifies that the values assigned to the variables must be pairwise different.

Definition 2. Let 𝑥1, 𝑥2, . . . , 𝑥𝑛 be variables with respective finite domains 𝐷(𝑥1), 𝐷(𝑥2), . . . , 𝐷(𝑥𝑛); then

alldifferent(𝑥1, . . . , 𝑥𝑛) = {(𝑑1, . . . , 𝑑𝑛) | ∀𝑖 𝑑𝑖 ∈ 𝐷(𝑥𝑖), ∀𝑖 ≠ 𝑗 𝑑𝑖 ≠ 𝑑𝑗}.

(1)
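Definition 2 can be read operationally. The following brute-force sketch (ours, purely illustrative and only viable for small domains) enumerates the relation as the set of pairwise-distinct tuples:

```python
from itertools import product

def alldifferent_tuples(domains):
    """Enumerate the relation of Definition 2: all tuples (d1, ..., dn)
    with di taken from D(xi) and di != dj for every i != j."""
    return [t for t in product(*(sorted(d) for d in domains))
            if len(set(t)) == len(t)]

# For instance, with x1, x2 in {1,2}, x3 in {1,2,3}, x4 in {1,2,4}:
print(alldifferent_tuples([{1, 2}, {1, 2}, {1, 2, 3}, {1, 2, 4}]))
# -> [(1, 2, 3, 4), (2, 1, 3, 4)]
```

Note that in both allowed tuples 𝑥3 takes 3 and 𝑥4 takes 4, which is exactly what the Hall-interval filtering discussed below infers without enumeration.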

Since the introduction of the alldifferent constraint [25], several filtering algorithms have been developed [24], depending on the desired degree of local consistency: from "weaker" local consistency, with a low degree of filtering but short runtime, to "stronger" local consistency, with efficient filtering at a longer runtime. In this work, we employ the alldifferent constraint based on bounds consistency [12] and Hall's marriage theorem [13]. This implementation provides stronger propagation behavior, checking for exhaustion of all subranges of possible values [24].

Definition 3 (bounds consistency). Let 𝑐(𝑥1, 𝑥2, . . . , 𝑥𝑛) be a constraint with 𝑛 > 1; a CSP is bounds consistent if for every variable 𝑥𝑖 and each value 𝑑𝑖 ∈ {min𝐷(𝑥𝑖), max𝐷(𝑥𝑖)}, there exist values 𝑑𝑗 ∈ [min𝐷(𝑥𝑗), max𝐷(𝑥𝑗)] for all 𝑗 ∈ {1, . . . , 𝑛} − {𝑖} such that (𝑑1, 𝑑2, . . . , 𝑑𝑛) ∈ 𝑐. Here min𝐷 and max𝐷 denote the minimum and maximum value, respectively, of the domain 𝐷.

Definition 4 (Hall's theorem). Let 𝑋 be a set of variables and 𝐷 the corresponding finite variable domains, and let 𝐺 be the bipartite graph with bipartition (𝑋, 𝐷). There exists a matching that covers 𝑋 if and only if every subset 𝑆 ⊆ 𝑋 satisfies |𝑆| ≤ |𝐷(𝑆)|, where 𝐷(𝑆) is the union of the domains of the variables in 𝑆. An interval 𝐼 is called a Hall interval if |𝐼| = |𝐾𝐼|, with 𝐾𝐼 = {𝑥𝑖 | 𝐷𝑖 ⊆ 𝐼}.

Theorem 5. The constraint alldifferent(𝑥1, . . . , 𝑥𝑛) is bounds consistent if and only if all domains are nonempty and (i) for each interval 𝐼: |𝐼| ≥ |𝐾𝐼|; (ii) for each Hall interval 𝐼: [min𝐷𝑖, max𝐷𝑖] ∩ 𝐼 = ∅ for all 𝑥𝑖 ∉ 𝐾𝐼.

Proof. We proceed by induction, observing that the case |𝐼| = 1 obviously holds, because all domains have at least one element. Let 𝐼 be a Hall interval and 𝑥𝑖 ∉ 𝐾𝐼. If alldifferent(𝑥1, 𝑥2, . . . , 𝑥𝑛) is bounds consistent, it has a solution when 𝑥𝑖 = min𝐷𝑖, by Definition 3.

Example 6. Consider the following CSP, which is represented in Figure 2:

𝑥1 ∈ {1, 2}, 𝑥2 ∈ {1, 2}, 𝑥3 ∈ {1, 2, 3}, 𝑥4 ∈ {1, 2, 4}, (2)

alldifferent(𝑥1, 𝑥2, 𝑥3, 𝑥4).

To enforce bounds consistency of the CSP, the inconsistent values must be removed. The algorithm finds the Hall intervals, in this case 𝐼 = {1, 2}: since |𝐼| = 2 = |𝐾𝐼|, condition (ii) of the theorem requires [min𝐷𝑖, max𝐷𝑖] ∩ 𝐼 = ∅ for every 𝑥𝑖 ∉ 𝐾𝐼, so the interval must be removed from the domains of 𝑥3 and 𝑥4. The new domains of all variables are then

𝑥1 ∈ {1, 2}, 𝑥2 ∈ {1, 2}, 𝑥3 ∈ {3}, 𝑥4 ∈ {4}. (3)

In summary, there exists a solution to an alldifferent constraint if and only if, for each subset of variables, the union of their domains holds enough values to match every one of them with a distinct value. In the previous example, when the Hall interval 𝐼 = {1, 2} is found (represented by green and red lines in Figure 2), we note that the values of the domains of 𝑥1 and 𝑥2 cannot be assigned to any other variable. Hence, the values of the interval 𝐼 are removed from the domains of 𝑥3 and 𝑥4. The reduced domains contain only values feasible with respect to all constraints.

Example 7. We illustrate with another example, applying the alldifferent constraint to the AI Escargot instance [28], regarded as the most difficult Sudoku. Each Sudoku instance is composed of three types of constraints: rows, columns, and subgrids. We begin by enforcing the alldifferent constraint on the rows of the puzzle. We use just the 3 constraints corresponding to the first three rows (enclosed with dashed lines in Figure 3) instead of all nine in order to simplify the illustration, but the filtering technique is applied to all constraints. After applying the alldifferent constraint to the first three rows, the domains are only reduced, and no value is discovered, owing to the difficulty of the instance. The values deleted from the reduced domains did not satisfy the constraint; they had already been taken by another cell in the same row. In Figure 4, the alldifferent constraint is applied to the columns of the puzzle; only the first three columns are shown. In Figure 5, the alldifferent constraint is applied to the subgrids of the puzzle; the reduced domains by


Input: 𝐷, 𝐶
Output: 𝑆𝑏𝑒𝑠𝑡
(1) 𝑆𝑏𝑒𝑠𝑡, 𝐷 ← alldifferent(𝐷, 𝐶)
(2) 𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡 ← ∅
(3) While ¬ Stop Condition do
(4)   𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡 ← ∅
(5)   While Len(𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡) ≤ 𝐿𝑒𝑛𝑔𝑡ℎ𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡 do
(6)     push(𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡, CandidateGenerator(𝑆𝑏𝑒𝑠𝑡, 𝐷))
(7)   End While
(8)   𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒 ← LocateBestCandidate(𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝐿𝑖𝑠𝑡)
(9)   𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒, 𝐷 ← alldifferent(𝐷𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒, 𝐶)
(10)  If cost(𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒) ≤ cost(𝑆𝑏𝑒𝑠𝑡)
(11)    𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡 ← FeatureDifferences(𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒, 𝑆𝑏𝑒𝑠𝑡)
(12)    𝑆𝑏𝑒𝑠𝑡 ← 𝑆𝑐𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒
(13)    While Len(𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡) > 𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡𝑠𝑖𝑧𝑒 do
(14)      ExpireFeature(𝑇𝑎𝑏𝑢𝐿𝑖𝑠𝑡)
(15)    End While
(16)  End If
(17) End While
(18) Return 𝑆𝑏𝑒𝑠𝑡

Algorithm 2: Hybrid alldifferent-Tabu search.

Table 1: Difficulty of tested instances (sources: [26], [27], [28]).

Difficulty group    Instance subgroups
Easy                1, 2; E; Easy
Medium              3, 4, 5; C, D; Medium
Difficult           SD; Hard
Most difficult      AI Escargot

Figure 2: Example of the alldifferent constraint through Hall's theorem approach: the Hall interval {1, 2} covers 𝑥1 and 𝑥2, so its values are removed from the domains of 𝑥3 and 𝑥4.

the previous domain filterings are used. At this point, the overall domain size of the subgrid has been reduced from 57 elements to 20, a reduction of more than 64%, significantly decreasing the number of possible assignments.
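The Hall-interval filtering of Example 6 can be reproduced with a small sketch. The following is our brute-force illustration: it enumerates every candidate interval until a fixpoint instead of using Puget's efficient bounds-consistency algorithm [12], so it is only meant to make the rule concrete:

```python
def hall_filter(domains):
    """Brute-force Hall-interval filtering for one alldifferent constraint.

    domains: dict mapping variable name -> set of allowed integers.
    Repeats until a fixpoint: whenever an interval I is covered by exactly
    |I| variables whose domains fit inside it (a Hall interval), the values
    of I are removed from the domains of all other variables.
    """
    values = sorted(set().union(*domains.values()))
    changed = True
    while changed:
        changed = False
        for lo in values:
            for hi in values:
                if hi < lo:
                    continue
                interval = set(range(lo, hi + 1))
                inside = [v for v, d in domains.items() if d <= interval]
                if len(inside) > len(interval):
                    raise ValueError("alldifferent is unsatisfiable")
                if len(inside) == len(interval):          # Hall interval
                    for v, d in domains.items():
                        if v not in inside and d & interval:
                            domains[v] = d - interval
                            changed = True
    return domains

# Example 6 from the text:
doms = {"x1": {1, 2}, "x2": {1, 2}, "x3": {1, 2, 3}, "x4": {1, 2, 4}}
hall_filter(doms)
# The Hall interval {1, 2} is removed from x3 and x4: x3 -> {3}, x4 -> {4}
```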

5. Proposed Algorithm

The main idea of the proposed algorithm (Algorithm 2) is to employ the alldifferent constraint to filter the concerned variable domains, both as a preprocessing phase (line 1) and at every iteration of the tabu search (line 9). The alldifferent constraint is applied iteratively over all structures of the grid (rows, columns, and subgrids). In the preprocessing phase and within each iteration, infeasible values are deleted from the domains, easing the work of the metaheuristic's search process.

At line 6, we limit the neighboring procedure to assignments from the filtered domains. Randomization is still used, but only for selecting values among the filtered ones. As stop condition (line 3), we use either full coverage of the grid, which means that a solution has been found, or a maximum number of iterations, fixed to 10,000. As output, the procedure returns 𝑆𝑏𝑒𝑠𝑡, the best solution achieved by the algorithm.
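The control flow of Algorithm 2 can be sketched as follows. This is our illustrative simplification, not the paper's implementation: the real alldifferent filtering, neighboring, and cost components are stood in for by caller-supplied functions, and candidates are simply random assignments drawn from the filtered domains:

```python
import random

def hybrid_alldiff_ts(domains, filter_domains, cost, max_iters=10_000,
                      n_candidates=10, tabu_size=50, seed=0):
    """Sketch of Algorithm 2: alldifferent filtering + tabu search.

    filter_domains(domains) -> pruned domains (the alldifferent step)
    cost(assignment)        -> number of conflicting cells (0 = solved)
    """
    rng = random.Random(seed)
    domains = filter_domains(domains)             # preprocessing (line 1)
    s_best = {v: min(d) for v, d in domains.items()}
    tabu = []
    for _ in range(max_iters):
        if cost(s_best) == 0:                     # stop: grid fully consistent
            break
        candidates = []
        while len(candidates) < n_candidates:     # neighboring from domains
            cand = {v: rng.choice(sorted(d)) for v, d in domains.items()}
            if cand not in tabu:
                candidates.append(cand)
        s_cand = min(candidates, key=cost)
        if cost(s_cand) <= cost(s_best):
            tabu.append(s_cand)                   # remember visited solution
            s_best = s_cand
            if len(tabu) > tabu_size:
                tabu.pop(0)
    return s_best
```

With the already-filtered domains of Example 6 (𝑥3 and 𝑥4 fixed by the Hall interval), a trivial identity filter and a duplicate-counting cost quickly yield a conflict-free assignment.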

6. Experimental Results

In this section, we present a performance evaluation of the proposed algorithm for solving Sudokus. The tested benchmarks are classified into diverse levels of complexity, including the AI Escargot instance, which is considered the most difficult one [28]. A useful difficulty classification including easy, medium, and hard Sudoku instances has been proposed in [26]. Here, we extend this classification in order to incorporate additional instances reported in the literature [29, 30]. The reclassification is depicted in Table 1. All difficulty groups have 3 instances (a, b, and c) per subgroup, except for the AI Escargot, which is a single instance. All instances have a unique solution.


Figure 3: alldifferent constraint applied to the rows of AI Escargot instance. Only the first three rows are shown.


Figure 4: alldifferent constraint applied to the columns of the AI Escargot instance, using the reduced domains of variables in common with the previous reduced ones. Only the first three columns are shown.

First, we compared our filtering technique with the one used in previous work [7, 8]. As mentioned before, this phase is very important and useful for reducing the search space, consequently facilitating the metaheuristic's work. The filtering technique employed there was arc consistency-3 (AC-3) [31]. AC-3 examines the arcs between pairs of variables and removes from the domains those values that are not consistent with the involved constraints. If the domain of a variable changes, the arcs involving that variable are examined again to check the arc consistency of the reduced domain. Table 2 illustrates the percentage of domain reduction for each Sudoku instance. The results show that the alldifferent constraint outperforms the AC-3 algorithm in terms of filtering capability. This is due to the ability of the alldifferent constraint to exploit the global information of the pairwise constraints instead

of handling the constraints independently, as AC-3 does. Let us note that the ability to infer a greater number of elements that do not belong to the domain of the problem solution depends only on the problem constraints. The alldifferent constraint and the structure of the problem (|𝑋| = 𝑛) enable the use of Hall's theorem to reduce the domains until each variable domain has, in the best case, a single element, by eliminating the domain elements that can never be part of any (here, the unique) solution. Table 3 depicts the runs required to successfully solve each Sudoku instance 30 times, considering 10,000 iterations as the limit. The symbol ↑50 indicates that more than 50 runs are needed to successfully solve the instance 30 times. We contrast the proposed approach with the best performing algorithms reported in the literature (AC3-TS [8], AC3-CS [7], and GA [26, 29]). The results for easy instances show


Figure 5: alldifferent constraint applied to the subgrids of AI Escargot instance, using the reduced domains of variables in common with the previous reduced ones. Only the first subgrid is shown.

Table 2: Comparison of effectiveness by filtering techniques (% of domain reduction, instances a/b/c per subgroup).

Difficulty group   Subgroup      alldifferent (a / b / c)   AC-3 (a / b / c)
Easy               1             100% / 100% / 100%         100% / 100% / 100%
                   2             100% / 100% / 100%         79% / 100% / 100%
                   E             100% / 100% / 100%         100% / 100% / 100%
                   Easy          100% / 100% / 100%         100% / 100% / 100%
Medium             3             100% / 100% / 89%          92% / 74% / 82%
                   4             86% / 100% / 82%           75% / 75% / 76%
                   5             87% / 100% / 80%           73% / 74% / 79%
                   C             100% / 100% / 100%         88% / 100% / 100%
                   D             75% / 100% / 100%          68% / 73% / 77%
                   Medium        100% / 100% / 100%         81% / 78% / 76%
Difficult          SD            80% / 80% / 84%            69% / 68% / 68%
                   Hard          68% / 87% / 100%           66% / 74% / 68%
Most difficult     AI Escargot   71%                        70%

no relevant differences. However, for the medium, SD, and hard instances, the performance of the proposed alldiff-TS is considerably better. Table 4 contrasts the proposed approach with AC3-TS, the best performing of the previously reported approaches. We compare the minimum, average, and maximum iterations needed to successfully solve each Sudoku

instance, considering 30 runs per instance. The results show that alldiff-TS achieves the constraint satisfaction of all tested instances requiring considerably fewer iterations than AC3-TS (a graphical comparison can be seen in Figures 6, 7, and 8). Let us remark that TS has a high participation in the search process; the work is not done by the filtering technique alone.
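The AC-3 procedure used by the compared approaches can be sketched for the binary not-equal constraints that make up a Sudoku. This is our generic illustration, not the exact implementation of [8]: for x ≠ y, a value v in D(x) loses its support exactly when D(y) has collapsed to {v}:

```python
from collections import deque

def ac3_neq(domains, arcs):
    """AC-3 restricted to binary not-equal constraints (as in Sudoku).

    domains: dict var -> set of values
    arcs:    set of ordered pairs (x, y) meaning x != y
    Removes v from D(x) when D(y) == {v}, and re-queues the arcs
    pointing at x after each change.
    """
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        # v in D(x) is unsupported only if D(y) is exactly {v}
        removed = {v for v in domains[x] if domains[y] == {v}}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                raise ValueError("inconsistent")
            queue.extend((z, w) for (z, w) in arcs if w == x and z != y)
    return domains
```

For example, with D(a) = {1, 2}, D(b) = {1}, and a ≠ b, AC-3 prunes a to {2}. Note that this arc-by-arc view never detects Hall intervals such as the one in Example 6, which is the structural reason for the weaker filtering reported in Table 2.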

Table 3: Solving Sudokus considering 10,000 iterations as maximum (tries needed to reach 30 successful runs; — means not reported).

Difficulty group   Subgroup      alldiff-TS (a/b/c)   AC3-TS (a/b/c)   AC3-CS (a/b/c)   GA (a/b/c)
Easy               1             30/30/30             30/30/30         30/30/30         —/—/—
                   2             30/30/30             30/30/30         ↑50/30/30        —/—/—
                   E             30/30/30             30/30/30         30/30/30         —/—/—
                   Easy          30/30/30             30/30/30         30/30/30         30/30/30
Medium             3             30/30/30             30/30/30         48/↑50/↑50       —/—/—
                   4             30/30/30             30/30/30         ↑50/↑50/↑50      —/—/—
                   5             30/30/30             30/30/30         ↑50/↑50/↑50      —/—/—
                   C             30/30/30             30/30/30         —/—/—            —/—/—
                   D             30/30/30             41/30/30         —/—/—            —/—/—
                   Medium        30/30/30             30/30/30         ↑50/↑50/↑50      22/—/—
Difficult          SD            30/30/30             30/30/↑50        —/—/—            —/—/—
                   Hard          30/30/30             30/30/46         ↑50/↑50/—        2/—/—
Most difficult     AI Escargot   30                   30               —                —

Table 4: Iterations needed (minimum, average, maximum, and standard deviation) considering 30 runs.

Difficulty group   Subgroup      alldiff-TS (min / avg / max / σ)   AC3-TS (min / avg / max / σ)
Easy               1             1 / 1 / 1 / 0                      1 / 1 / 1 / 0
                   2             1 / 1 / 1 / 0                      1 / 7.4 / 58 / 11.7
                   E             1 / 1 / 1 / 0                      1 / 1 / 1 / 0
                   Easy          1 / 1 / 1 / 0                      1 / 1 / 1 / 0
Medium             3             1 / 2.0 / 7 / 1.6                  3 / 90.2 / 897 / 140.5
                   4             1 / 13.1 / 95 / 23.5               2 / 163.3 / 1,318 / 226.3
                   5             1 / 20 / 276 / 45.0                5 / 143.9 / 1,778 / 230.4
                   C             1 / 1 / 1 / 0                      1 / 4.0 / 29 / 6.0
                   D             1 / 27.2 / 424 / 64.5              5 / 1,897.6 / 9,088 / 2,214.3
                   Medium        1 / 1 / 1 / 0                      5 / 138.8 / 865 / 173.9
Difficult          SD            3 / 51.5 / 298 / 68.8              10 / 5,005.6 / 29,329 / 7,358.1
                   Hard          1 / 195.0 / 1,881 / 397.0          4 / 1,872.4 / 7,590 / 2,217.6
Most difficult     AI Escargot   52 / 1,248.3 / 6,328 / 1,378.6     57 / 2,566.7 / 7,224 / 2,076.6

7. Conclusion and Future Work

In this paper, we have presented a new hybrid that integrates the powerful alldifferent constraint into a classic tabu search algorithm. The alldifferent constraint is employed to efficiently delete from the domains the values that do not lead to any feasible solution. This filter acts prior to the TS procedure but also within the search cycle, which permits progressive filtering of the best solutions. This allows us to relieve the work of the metaheuristic in order to achieve faster solving. We have carried out a set of experiments in order to contrast our approach with the best performing approximate methods and hybrids

reported in the literature. We have considered Sudoku instances of different complexity, including the AI Escargot, which is considered the most difficult one. The experiments have exhibited encouraging results, with the proposed approach noticeably outperforming the previous algorithms reported in the literature. A clear direction for future work is to study the integration of the alldifferent constraint into additional metaheuristics and to contrast performance. In particular, the addition of global constraints to a cuckoo search [32] algorithm would be a promising hybrid, given that the CS algorithm has exhibited great performance and has already been combined with filtering techniques.

Figure 6: Comparing minimum needed iterations for Sudoku solving.

Figure 7: Comparing maximum needed iterations for Sudoku solving.

Figure 8: Comparing average needed iterations for Sudoku solving and their standard deviations.

Another interesting research direction is the study of variations of the classic alldifferent constraint working in conjunction with approximate methods. An example is the symm_alldifferent constraint [24], which would be useful for solving the well-known round-robin tournament scheduling problem.

Conflict of Interests The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments Cristian Galleguillos is supported by Postgraduate Grant PUCV 2015. Ricardo Soto is supported by Grant CONICYT/FONDECYT/INICIACION/11130459. Broderick

Crawford is supported by Grant CONICYT/FONDECYT/ REGULAR/1140897. Fernando Paredes is supported by Grant CONICYT/FONDECYT/REGULAR/1130455.

References

[1] G. McGuire, B. Tugemann, and G. Civario, “There is no 16-clue Sudoku: solving the Sudoku minimum number of clues problem,” http://arxiv.org/abs/1201.0749.
[2] F. Rossi, P. van Beek, and T. Walsh, Handbook of Constraint Programming, Elsevier, 2006.
[3] H. Simonis, “Sudoku as a constraint problem,” in Proceedings of the 4th International Workshop on Modelling and Reformulating Constraint Satisfaction Problems, B. Hnich, P. Prosser, and B. Smith, Eds., pp. 13–27, 2005.
[4] I. Lynce and J. Ouaknine, “Sudoku as a SAT problem,” in Proceedings of the 9th International Symposium on Artificial Intelligence and Mathematics (ISAIM ’06), Springer, January 2006.
[5] A. Moraglio, J. Togelius, and S. Lucas, “Product geometric crossover for the Sudoku puzzle,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC ’06), pp. 470–476, IEEE, Vancouver, Canada, July 2006.
[6] M. Asif and R. Baig, “Solving NP-complete problem using ACO algorithm,” in Proceedings of the International Conference on Emerging Technologies (ICET ’09), pp. 13–16, Islamabad, Pakistan, October 2009.
[7] R. Soto, B. Crawford, C. Galleguillos, E. Monfroy, and F. Paredes, “A prefiltered cuckoo search algorithm with geometric

operators for solving Sudoku problems,” The Scientific World Journal, vol. 2014, Article ID 465359, 12 pages, 2014.
[8] R. Soto, B. Crawford, C. Galleguillos, E. Monfroy, and F. Paredes, “A hybrid AC3-tabu search algorithm for solving Sudoku puzzles,” Expert Systems with Applications, vol. 40, no. 15, pp. 5817–5821, 2013.
[9] T. K. Moon, J. H. Gunther, and J. J. Kupin, “Sinkhorn solves Sudoku,” IEEE Transactions on Information Theory, vol. 55, no. 4, pp. 1741–1746, 2009.
[10] G. Santos-García and M. Palomino, “Solving Sudoku puzzles with rewriting rules,” Electronic Notes in Theoretical Computer Science, vol. 176, no. 4, pp. 79–93, 2007, Proceedings of the 6th International Workshop on Rewriting Logic and Its Applications (WRLA ’06).
[11] J. Gunther and T. Moon, “Entropy minimization for solving Sudoku,” IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 508–513, 2012.
[12] J. Puget, “A fast algorithm for the bound consistency of alldiff constraints,” in Proceedings of the 15th National Conference on Artificial Intelligence (AAAI ’98) and 10th Innovative Applications of Artificial Intelligence Conference (IAAI ’98), pp. 359–366, Madison, Wis, USA, July 1998.
[13] J. Hall, “Distinct representatives of subsets,” Bulletin of the American Mathematical Society, vol. 54, pp. 922–926, 1948.
[14] R. Lewis, “Metaheuristics can solve Sudoku puzzles,” Journal of Heuristics, vol. 13, no. 4, pp. 387–401, 2007.
[15] A. Moraglio and J. Togelius, “Geometric particle swarm optimization for the Sudoku puzzle,” in Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO ’07), pp. 118–125, ACM, July 2007.
[16] X. Q. Deng and Y. D. Li, “A novel hybrid genetic algorithm for solving Sudoku puzzles,” Optimization Letters, vol. 7, no. 2, pp. 241–257, 2013.
[17] F. Lardeux, E. Monfroy, F. Saubion, B. Crawford, and C. Castro, “Overlapping alldifferent constraints and the Sudoku puzzle,” in Proceedings of the 34th Conferencia Latinoamericana de Informática (CLEI ’08), Santa Fe, Argentina, September 2008.
[18] F. Glover, “Tabu search—part I,” ORSA Journal on Computing, vol. 1, no. 3, pp. 190–206, 1989.
[19] F. Glover, “Tabu search—part II,” ORSA Journal on Computing, vol. 2, no. 1, pp. 4–32, 1990.
[20] R. Dechter, Constraint Processing, Elsevier Morgan Kaufmann, 2003.
[21] C. P. Gomes and D. Shmoys, “Completing quasigroups or Latin squares: a structured graph coloring problem,” in Proceedings of the Computational Symposium on Graph Coloring and Generalizations, Ithaca, NY, USA, 2002.
[22] N. Barnier and P. Brisset, “Graph coloring for air traffic flow management,” in Proceedings of the 4th International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimisation Problems (CP-AI-OR ’02), N. Jussien and F. Laburthe, Eds., pp. 133–147, Le Croisic, France, March 2002.
[23] S. Asaf, H. Eran, Y. Richter et al., “Applying constraint programming to identification and assignment of service professionals,” in Principles and Practice of Constraint Programming—CP 2010, D. Cohen, Ed., vol. 6308 of Lecture Notes in Computer Science, pp. 24–37, Springer, Berlin, Germany, 2010.
[24] W. J. van Hoeve, “The alldifferent constraint: a survey,” Computing Research Repository, CoRR cs.PL/0105015, 2001.

[25] J.-L. Lauriere, “A language and a program for stating and solving combinatorial problems,” Artificial Intelligence, vol. 10, no. 1, pp. 29–127, 1978. [26] T. Mantere and J. Koljonen, “Solving, rating and generating Sudoku puzzles with GA,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC ’07), pp. 1382–1389, September 2007. [27] T. Mantere and J. Koljonen, “Sudoku research page,” 2008, http://lipas.uwasa.fi/∼timan/sudoku/. [28] A. Inkala, AI Escargot—The Most Diffcult Sudoku Puzzle, Lulu, 978th edition, 2007. [29] T. Mantere and J. Koljonen, “Solving and rating Sudoku puzzles with genetic algorithms,” in New Developments in Artificial Intelligence and the Semantic Web: Proceedings of the 12th Finnish Artificial Intelligence Conference (STeP ’06), pp. 86–92, 2006. [30] T. Mantere and J. Koljonen, “Solving and analyzing Sudokus with cultural algorithms,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC ’08), pp. 4053–4060, IEEE World Congress on Computational Intelligence, June 2008. [31] A. K. Mackworth, “Consistency in networks of relations,” Artificial Intelligence, vol. 8, no. 1, pp. 99–118, 1977. [32] X.-S. Yang and S. Deb, “Cuckoo search via L´evy flights,” in Proceedings of the World Congress on Nature and Biologically Inspired Computing (NABIC ’09), pp. 210–214, December 2009.
