Set Selection Dynamical System Neural Networks with Partial Memories, with Applications to Sudoku and KenKen Puzzles

B. Boreland, G. Clement, H. Kunze

Department of Mathematics and Statistics, University of Guelph, Guelph, Canada

Abstract

After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ±1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship.

1. Introduction

This work concerns Hopfield/cellular neural networks [7, 8] and set selection problems [10]. In the recent literature [9], Sudoku puzzles were considered in a Hopfield neural network framework, using energy functions and a neural network coprocessor. Subsequent work by other authors continued to treat this application in various ways [1, 4, 6, 14, 15]. In this paper, we continue this train of thought. We present a solution method for Sudoku via set selection dynamical system neural networks. We also establish that a different network can be used to solve KenKen puzzles. KenKen puzzles are grid-based puzzles similar to Sudoku in some ways, with added constraints on subsets of grid cells [11]. The discussion in Section 2 reminds the reader how such neural networks solve finite set selection problems: one can establish a relationship between feasible equilibria of the dynamical system and answer sets of the selection problem. On the other hand, the feasible equilibria of memory model dynamical system neural networks correspond to stored memories, predetermined states of the network in which every neuron is either on or off. In this paper, we introduce a


neural network model that combines the set selection model with stored memories on subsets of states in the network (which we refer to as partial memories). We establish that feasible equilibria of this new model correspond to answers of a set selection problem that agree with partial memories. In the application section, we show that such a network can be used to solve KenKen puzzles. It is worth stating that the notion of partial memories exists in the machine learning literature [2, 13].

In Section 2, we briefly summarize set selection and memory model neural networks. We define set selection neural networks with partial memories in Section 3 and establish the connection between feasible equilibria and answers. In Section 4, we discuss our approach to using these neural networks to solve Sudoku and KenKen, and we illustrate that the number of iterations used by the discretized network correlates strongly with the difficulty of the puzzle. Finally, in Section 5, we summarize the work and present a few conclusions.

2. Background: Set Selection and Memory Model Neural Networks

A discussion of the dynamic behavior of dynamical system neural networks like those in this section is given in [3]. Proofs of the two results in this section can be found in [10]. We first define the set selection problem (SSP).

    Given S = {1, ..., N} and subsets S_j ⊂ S, j = 1, ..., r,
    find an answer set A ⊂ S such that |A ∩ S_i| = 1, i = 1, ..., r.    (SSP)
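As a small illustration (our own toy instance, not taken from the text above; the sets and names are hypothetical), the sketch below checks whether a candidate set A answers an (SSP) instance.

    # Toy SSP instance (hypothetical): N = 6 elements, r = 3 subsets.
    S_subsets = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]

    def is_answer_set(A, subsets):
        # A answers (SSP) iff it meets every subset S_j in exactly one element.
        return all(len(A & S_j) == 1 for S_j in subsets)

    print(is_answer_set({2, 4, 6}, S_subsets))   # True: one element from each subset
    print(is_answer_set({1, 3}, S_subsets))      # False: meets S_1 = {1, 2, 3} in two elements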

Suppose that i is contained in a number n_i ≥ 1 of the subsets S_j, and define the ODEs

    \frac{dx_i}{dt} = -x_i + 1 - \frac{2}{n_i} \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(x_k), \qquad i = 1, \dots, N, \qquad \text{(SSPODE)}

where

    g_i(x_i) =
    \begin{cases}
    0, & x_i \leq -\varepsilon_i \\
    \text{nondecreasing}, & |x_i| \leq \varepsilon_i \\
    1, & x_i \geq \varepsilon_i
    \end{cases}
    \qquad (1)

with 0 < ε_i ≤ 1/2, are the (continuous) output gain functions.

Theorem 1. Every answer set A of (SSP) corresponds to a stable equilibrium point of (SSPODE) with each x_i = ±1. And every equilibrium point of (SSPODE) with each x_i = ±1 corresponds to an answer set A of (SSP).

On the other hand, given the set S = {1, ..., N} of N neurons, the memory model problem (MMP) requires the recognition of a length-N binary string. We define the distinct memories m^l = {a length-N binary string}, with components m_i^l equal to 0 or 1, i = 1, ..., N, l = 1, ..., M, 1 ≤ M ≤ 2^N.

    Given an arbitrary length-N string with each component in [0, 1],
    reply with one of the memories.    (MMP)

One example application of (MMP) is optical character recognition, where the memories are characters in an alphabet and the input string is a distorted or noisy character. We define the ODEs

    \frac{dx_i}{dt} = -x_i + \sum_{l=1}^{M} \left( 2 m_i^l - 1 \right) I_l(g_1(x_1), \dots, g_N(x_N)), \qquad i = 1, \dots, N, \qquad \text{(MMPODE)}

where

    I_l(g_1(x_1), \dots, g_N(x_N)) = \prod_{j=1}^{N} \left[ m_j^l \, g_j(x_j) + (1 - m_j^l)(1 - g_j(x_j)) \right].
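To make the mechanism concrete, here is a brief sketch (our own illustration, with hypothetical helper names) of a piecewise-linear gain function with ε_i = 1/2 and the indicator I_l: the product equals 1 exactly when the thresholded outputs match memory m^l in every component, and 0 otherwise.

    # Sketch only: piecewise-linear gain (epsilon = 0.5) and the memory indicator I_l.
    def gain(x, eps=0.5):
        """Output gain: 0 below -eps, 1 above eps, linear ramp in between."""
        if x <= -eps:
            return 0.0
        if x >= eps:
            return 1.0
        return (x + eps) / (2.0 * eps)   # nondecreasing on [-eps, eps]

    def memory_indicator(m, g_values):
        """I_l from (MMPODE): product over components of m_j*g_j + (1 - m_j)*(1 - g_j)."""
        prod = 1.0
        for m_j, g_j in zip(m, g_values):
            prod *= m_j * g_j + (1 - m_j) * (1 - g_j)
        return prod

    # Outputs that match the memory exactly give I_l = 1; any mismatch gives 0.
    m = [1, 0, 1, 1]
    print(memory_indicator(m, [1.0, 0.0, 1.0, 1.0]))   # 1.0
    print(memory_indicator(m, [0.0, 0.0, 1.0, 1.0]))   # 0.0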

Theorem 2. If x is an equilibrium point of (MMPODE) with components x_i = ±1, then m_j = g_j(x_j) produces a memory m in {m^l}_{l=1}^{M}. And if x is defined by x = 2m^l − 1 for some l ∈ {1, ..., M}, then x is an equilibrium point of (MMPODE) with g_j(x_j) = m_j^l, j = 1, ..., N.

3. Set Selection Neural Networks with Partial Memories

In the memory model of Section 2, a memory is related to the state of all neurons in the network. In this section, we introduce the notion of a partial memory, related to the state of some neurons in the network. That is, let B_1 ⊂ S with |B_1| = N_1 ≤ N; we define a partial memory m = {a length-N_1 binary string}, with components m_i equal to either 0 or 1 for each i ∈ B_1.

We now introduce (PMP), a set selection problem involving partial memories. In the formulation, we assume that each element of S occurs in exactly one partial memory, and we express the partial memory involvement in terms of the subsets B_l on which the partial memories are defined. For each l, it is helpful to introduce B_l^k ⊂ B_l, containing the elements of B_l for which the corresponding partial memory element equals 1.

    Given S = {1, ..., N},
          S_j ⊂ S, j = 1, ..., r,
          {B_l} a partition of S, satisfying
              • each i, 1 ≤ i ≤ N, belongs to B_l for exactly one l, 1 ≤ l ≤ M,
              • \bigcup_{l=1}^{M} B_l = S, so that \sum_{l=1}^{M} N_l = N, where |B_l| = N_l,
          B_l^k ⊂ B_l, B_l^k ≠ ∅, k = 1, ..., M_l,
          l_i = the unique index l such that i ∈ B_l,
    find an answer set A ⊂ S such that
          |A ∩ S_i| = 1, i = 1, ..., r, and A = \bigcup_{l=1}^{M} B_l^{k_l} for some 1 ≤ k_l ≤ M_l.    (PMP)
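As a small illustrative instance (again our own toy example, with hypothetical sets), the sketch below enumerates the unions of one B_l^k per block and keeps those that meet every S_j exactly once, i.e., the answer sets of (PMP).

    # Toy PMP instance (hypothetical): S = {1,...,6}, three selection subsets,
    # and a partition of S into two blocks with candidate partial memory sets B_l^k.
    from itertools import product

    S_subsets = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]
    B_options = [[{2}, {1, 3}], [{4, 6}, {5, 6}]]   # B_l^k choices for blocks {1,2,3} and {4,5,6}

    def pmp_answers(subsets, options):
        """Enumerate unions of one B_l^k per block that meet every S_j exactly once."""
        answers = []
        for choice in product(*options):
            A = set().union(*choice)
            if all(len(A & S_j) == 1 for S_j in subsets):
                answers.append(A)
        return answers

    print(pmp_answers(S_subsets, B_options))   # [{2, 4, 6}]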

We combine both types of ODEs from Section 2, getting

    \frac{dx_i}{dt} = -x_i + 1 - \frac{2}{n_i} \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(x_k) - x_i + \sum_{k=1}^{M_{l_i}} \left( 2\,(m_{l_i}^{k})_i - 1 \right) I_{l_i k}(g), \qquad \text{(PMPODE)}

where g_k(x_k) is defined as in (1), and, for k = 1, ..., M_{l_i},

    I_{l_i k}(g) = \prod_{j \in B_{l_i}} \left[ (m_{l_i}^{k})_j \, g_j(x_j) + \left(1 - (m_{l_i}^{k})_j\right)\left(1 - g_j(x_j)\right) \right], \qquad (2)

with

    (m_l^k)_j =
    \begin{cases}
    1 & \text{if } j \in B_l^k \\
    0 & \text{if } j \notin B_l^k.
    \end{cases}
    \qquad (3)

Theorem 3. x̄ is an equilibrium of (PMPODE) with x̄_i = ±1 if and only if x̄ corresponds to an answer set A of (PMP), where

    A = \bigcup_{l=1}^{M} B_l^{k_l} \quad \text{for some } k_l, \; 1 \leq k_l \leq M_l,

and

    \bar{x}_i = 2\,(m_{l_i}^{k_{l_i}})_i - 1 = \pm 1.

Proof. (⇒) Let x̄ be an equilibrium point with components x̄_i = ±1; then g_i(x̄_i) = 0 or 1. Define A such that i ∈ A if x̄_i = 1 and i ∉ A if x̄_i = −1. Also define P ⊂ S such that i ∈ P means there exists a unique k, 1 ≤ k ≤ M_{l_i}, such that x̄_j = 2(m_{l_i}^{k})_j − 1 for all j ∈ B_{l_i}. Then i ∉ P means that for every k, 1 ≤ k ≤ M_{l_i}, we have x̄_j ≠ 2(m_{l_i}^{k})_j − 1 for some j ∈ B_{l_i}. In addition, if i ∈ P then j ∈ P for all j ∈ B_{l_i}, and i ∈ P means x̄ corresponds to a memory restricted to B_{l_i}.

For i ∈ P, we have g_j(x̄_j) = 0 = (m_{l_i}^{k})_j or g_j(x̄_j) = 1 = (m_{l_i}^{k})_j. We conclude that, for each j, one of the pairwise products in (2) equals one and the other equals zero, giving I_{l_i k} = 1 for exactly one k. Furthermore, at x̄, (PMPODE) simplifies to

    -\bar{x}_i + 1 - \frac{2}{n_i} \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(\bar{x}_k) = 0, \qquad \text{for } i \in P. \qquad (4)

For i ∉ P, we see that g_j(x̄_j) ≠ (m_{l_i}^{k})_j for some j, giving from (2) that I_{l_i k} = 0 for every k, and (PMPODE) simplifies to

    -2\bar{x}_i + 1 - \frac{2}{n_i} \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(\bar{x}_k) = 0, \qquad \text{for } i \notin P. \qquad (5)

We first prove that x̄ with components x̄_i = ±1 can only be an equilibrium if P = S. Consider i ∈ P. If x̄_i = 1, then (4) gives g_k(x̄_k) = 0 for all k ≠ i with k ∈ S_j and i ∈ S_j, so we conclude that for i ∈ P and i ∈ S_j at most one neuron is on in S_j. If x̄_i = −1, then (4) gives

    n_i = \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(\bar{x}_k). \qquad (6)

For each S_j containing i, since at most one neuron can be on in S_j, the inner sum in (6) equals either 0 or 1; since the n_i inner sums total n_i, each must equal 1. We conclude that for i ∈ P and i ∈ S_j exactly one neuron is on in S_j. Now consider i ∉ P. If x̄_i = 1, then (5) gives

    \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(\bar{x}_k) = -\frac{n_i}{2},

which is impossible. Thus, the only case we have to consider is x̄_i = −1 for all i ∉ P. In this case, (5) gives

    \frac{3 n_i}{2} = \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(\bar{x}_k)
                    = \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j,\, k \in P \\ k \neq i}} g_k(\bar{x}_k) \qquad \text{since } g_k(\bar{x}_k) = 0 \text{ for } k \notin P
                    = \sum_{\substack{S_j \\ i \in S_j}} 1 \qquad \text{since } k \in P \text{ and } k \in S_j \text{ means } g_k(\bar{x}_k) = 1 \text{ for exactly one } k
                    = n_i,

a contradiction. We conclude that i ∈ P for all i, or equivalently P = S. In other words, if x̄ is an equilibrium with x̄_i = ±1, then for every i there exists a unique k, 1 ≤ k ≤ M_{l_i}, such that x̄_j = 2(m_{l_i}^{k})_j − 1 for all j ∈ B_{l_i}. This means A = \bigcup_{l=1}^{M} B_l^{k_l} for some k_l, 1 ≤ k_l ≤ M_l, and implies that |A ∩ S_j| = 1 for all j.

(⇐) Let A be an answer set and define x̄ as in the statement of the theorem. If i ∈ A, then x̄_i = 1 and g_i(x̄_i) = 1 = (m_{l_i}^{k_{l_i}})_i; if i ∉ A, then x̄_i = −1 and g_i(x̄_i) = 0 = (m_{l_i}^{k_{l_i}})_i. Now, using (2) and uniqueness, we have that

    I_{l_i k}(g(\bar{x})) =
    \begin{cases}
    0 & \text{if } k \neq k_{l_i} \\
    1 & \text{if } k = k_{l_i}.
    \end{cases}

This means that

    \sum_{k=1}^{M_{l_i}} \left( 2\,(m_{l_i}^{k})_i - 1 \right) I_{l_i k}(g) \Big|_{x = \bar{x}} = \bar{x}_i \qquad (7)

and (PMPODE) gives

    -\bar{x}_i + 1 - \frac{2}{n_i} \sum_{\substack{S_j \\ i \in S_j}} \; \sum_{\substack{k \in S_j \\ k \neq i}} g_k(\bar{x}_k) = 0. \qquad (8)

If x̄_i = 1, then |A ∩ S_j| = 1 for j = 1, ..., r implies the double sum in (8) equals zero, and (8) holds. If x̄_i = −1, then |A ∩ S_j| = 1 for j = 1, ..., r implies that each subset S_j contains one k ≠ i such that x̄_k = 1, so for every m ≠ k in S_j, x̄_m = −1 and g_m(x̄_m) = 0. The double sum in (8) equals n_i, and (8) holds.

4. Application to Mathematical Puzzles

We formulate Sudoku in terms of (SSPODE) and KenKen in terms of (PMPODE), producing via Theorem 1 and Theorem 3, respectively, a neural network dynamical system for each puzzle. We set ε_i = 0.5 in all output gain functions, make these functions linear ramps, and solve the differential equations numerically using Euler's method with a time step of 0.1. The network has precisely one answer set solution, corresponding to the unique solution of the puzzle. However, such a network will in general have many infeasible equilibria, stationary points at which at least one neuron is in transition. As a result, we define the thresholds T_on = 0.9 and T_off = −0.95. As the network evolves, we make a decision on the digit in a cell when exactly one neuron in the cell is at or above T_on and all other neurons in the cell are at or below T_off. These threshold values lead to the successful solution of all puzzles presented in this application section and almost all of the many more puzzles we have tried. (On occasion, we have encountered a stubborn puzzle that could be solved by the network only with minor adjustments to the threshold parameters; such puzzles always seem to involve a particularly challenging solution technique used by a human solver.)

4.1. Sudoku

A Sudoku puzzle typically consists of a 9 × 9 square grid into which each of the numbers 1 through 9 must appear nine times, so that each digit appears exactly once in each row, once in each column, and once in each 3 × 3 sub-grid. Some digits are given initially, and the solver is supposed to use reasoning skills to fill in the entire grid. A sample puzzle is presented in Figure 1(a). We formulate the puzzle in terms of the set selection neural network (SSP) by considering a cube of neurons, as in Figure 1(b). For each slice of the cube, number the grid squares left-to-right, top-to-bottom, from 1 to 81. For the 9 × 9 grid, we define the rows R_k = {9(k − 1) + 1, 9(k − 1) + 2, ..., 9k}, columns C_k = {k, k + 9, ..., k + 72}, and the sub-grids G_1 = {1, 2, 3, 10, 11, 12, 19, 20, 21} through to G_9 = {61, 62, 63, 70, 71, 72, 79, 80, 81}. It is helpful to define S = {(i, j) | i = 1, ..., 81, j = 1, ..., 9}, so that (i, j) refers to cell i in slice j.

Figure 1: (a) A sample Sudoku puzzle, and (b) the cube of 729 neurons.

We associate digit j to slice j. Define the subsets of S

    R_{k,l} = {(i, j) | i ∈ R_k, j = l} = row R_k in slice l,             (9)
    C_{k,l} = {(i, j) | i ∈ C_k, j = l} = column C_k in slice l,          (10)
    Z_k     = {(i, j) | i = k, j ∈ {1, ..., 9}} = cell k in all slices,   (11)
    G_{k,l} = {(i, j) | i ∈ G_k, j = l} = sub-grid G_k in slice l.        (12)
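The 324 subsets in (9)-(12) can be generated programmatically. The sketch below is our own illustration (the function name is hypothetical): it indexes a neuron by a (cell, slice) pair and builds the row, column, cell, and sub-grid subsets; each neuron then belongs to exactly four subsets, consistent with n_{i,j} = 4 below.

    # Sketch: build the 324 constraint subsets (9)-(12) for the 729-neuron Sudoku cube.
    # A neuron is indexed by (cell, slice) with cell in 1..81 and slice (digit) in 1..9.
    def sudoku_subsets():
        rows  = [set(range(9 * k + 1, 9 * k + 10)) for k in range(9)]           # R_1..R_9
        cols  = [set(range(k + 1, 82, 9)) for k in range(9)]                     # C_1..C_9
        grids = [set(9 * r + c + 27 * br + 3 * bc + 1                            # G_1..G_9
                     for r in range(3) for c in range(3))
                 for br in range(3) for bc in range(3)]
        subsets = []
        for l in range(1, 10):                                                   # slice l
            for group in rows + cols + grids:
                subsets.append({(i, l) for i in group})                          # (9), (10), (12)
        for k in range(1, 82):
            subsets.append({(k, j) for j in range(1, 10)})                       # (11): cell k
        return subsets

    assert len(sudoku_subsets()) == 324    # 9 slices x (9 + 9 + 9) groups + 81 cells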

See Figure 1(b). The set S could be redefined as consisting of 729 elements and the above subsets relabelled as subsets S_j, j = 1, ..., 324. We can construct (SSPODE), with component x_{i,j} associated to cell (i, j) and n_{i,j} = 4 for all i and j. The answer set A of (SSP) then corresponds to a solution of the Sudoku puzzle once we flatten the cube: at an equilibrium point with ±1 entries, for each k = 1, ..., 81, if x_{k,j} = 1 then cell k in the flat grid is filled with digit j.

We note that [9] observes that the neural network, with initial conditions determined by the given values, when left to evolve on its own almost always lands at an infeasible equilibrium point. As a result, as the network evolves, we make intermediate decisions based on the thresholds. Each time a new decision is made, we reset all undecided neurons and let the system evolve anew. We use the following test for new decisions:

    If for some i, x_{i,j} ≥ T_on and x_{i,k} ≤ T_off for all k ≠ j,
    then our decision is that digit j is in cell i.

When imposing the decision that digit j is in cell i, we set

    x_{i,j} = 1,
    x_{i,k} = −1 for k ≠ j,
    x_{l,j} = −1, where i ∈ R_k, l ∈ R_k, l ≠ i,
    x_{l,j} = −1, where i ∈ C_k, l ∈ C_k, l ≠ i,
    x_{l,j} = −1, where i ∈ G_k, l ∈ G_k, l ≠ i.        (13)
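A brief sketch of the threshold test and the imposition (13) follows; this is our own illustration, the names are hypothetical, the state is stored as a dictionary keyed by (cell, digit), and peers(cell) is an assumed helper returning the cells sharing a row, column, or sub-grid with the given cell.

    # Sketch of the threshold test and decision imposition (13).
    T_ON, T_OFF = 0.9, -0.95

    def try_decide(x, cell):
        """Return the digit to decide for this cell, or None if the test fails."""
        winners = [j for j in range(1, 10) if x[(cell, j)] >= T_ON]
        if len(winners) != 1:
            return None
        j = winners[0]
        if all(x[(cell, k)] <= T_OFF for k in range(1, 10) if k != j):
            return j
        return None

    def impose_decision(x, cell, digit, peers):
        """Impose (13); peers(cell) is a hypothetical helper for the row/column/sub-grid mates."""
        x[(cell, digit)] = 1.0
        for k in range(1, 10):
            if k != digit:
                x[(cell, k)] = -1.0          # other digits in this cell
        for other in peers(cell):
            x[(other, digit)] = -1.0         # same digit elsewhere in row, column, or sub-grid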

Figure 2: The solution of the Sudoku at iteration (a) 500, (b) 1000, (c) 1500, and (d) 1913. Blue digits were given, and red digits are decisions.

The pseudocode of the solution algorithm is

    While not (all cells decided) do
        Impose all decisions as in (13)
        Perform one step of Euler's method
        If threshold test is met then
            Make new decision
            Set undecided neurons to -1
                                                        (14)
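A minimal sketch of algorithm (14) for the Sudoku network is given below. It is our own illustration under stated assumptions, not the authors' code: it reuses the hypothetical helpers sudoku_subsets(), gain(), try_decide(), and impose_decision() sketched earlier, and the peers argument is again an assumed helper.

    # Sketch of (14): Euler iteration of (SSPODE) with threshold decisions (hypothetical API).
    def solve_sudoku(givens, peers, dt=0.1, max_iters=5000):
        """givens: dict cell -> digit for the initially provided entries."""
        subsets = sudoku_subsets()
        neurons = [(i, j) for i in range(1, 82) for j in range(1, 10)]
        membership = {p: [s for s in subsets if p in s] for p in neurons}   # n_{i,j} = 4
        x = {p: -1.0 for p in neurons}
        decided = dict(givens)
        iters = 0
        while len(decided) < 81 and iters < max_iters:
            for cell, digit in decided.items():                  # impose all decisions as in (13)
                impose_decision(x, cell, digit, peers)
            g = {p: gain(x[p]) for p in neurons}
            new_x = {}
            for p in neurons:                                    # one Euler step of (SSPODE)
                coupling = sum(g[q] for s in membership[p] for q in s if q != p)
                rhs = -x[p] + 1.0 - (2.0 / len(membership[p])) * coupling
                new_x[p] = x[p] + dt * rhs
            x = new_x
            for cell in range(1, 82):                            # threshold test for a new decision
                if cell not in decided:
                    digit = try_decide(x, cell)
                    if digit is not None:
                        decided[cell] = digit
                        x = {p: -1.0 for p in neurons}           # reset; decided cells reimposed next pass
                        break
            iters += 1
        return decided, iters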

The numbers given in the puzzle are treated as a priori decisions in the solution algorithm. All empty cells are undecided. At each time step, the algorithm lets all neurons evolve and then reasserts the decision constraints.

Example 1. Figure 2 illustrates the evolution of a solution. In each snapshot, the leading neuron with state variable greater than zero (if there is one) appears in grey; the darker the grey, the closer that neuron is to reaching a fired state and potentially being chosen as the answer for its cell. The initially given digits are drawn in blue and the decisions of the network are drawn in red.

The solution algorithm repeatedly restarts the solution trajectory from a new initial condition after each decision. In this way, we find that the neural network almost always avoids settling to an infeasible equilibrium point; in the rare disappointing case when it does, the puzzle is always of high difficulty. Indeed, online and printed sources categorize Sudoku in terms of difficulty. The following example explores the relationship between puzzle difficulty and the number of iterations the neural network uses to solve the puzzle.

Example 2. We attempt to solve a selection of 15 puzzles from [5, 12], using our neural network with T_on = −T_off = D, for several values of D. The difficulty level of each puzzle is measured using Pappocom's Sudoku software. The results of the process are displayed in Table 1. We see that more difficult puzzles on average require more iterations to solve; this result occurs regardless of the step size used. We also note that puzzles rated Easy or Medium were solved with all choices of thresholds, whereas Hard puzzles were solved only with certain threshold values. The relationship between threshold values, solvability with our solution algorithm, and the distribution of infeasible equilibria is of interest, of course. We can program the network to start over at a different choice of thresholds in the case that it converges to an infeasible equilibrium point at the current threshold values.

    Puzzle            Difficulty   D = 0.5   D = 0.6   D = 0.7   D = 0.8   D = 0.9
    23 from [12]      Easy         713       985       990       1221      1461
    97 from [12]      Easy         761       866       1006      1061      1297
    79 from [12]      Easy         770       861       974       1216      1204
    1 from [12]       Easy         833       928       857       1189      1261
    73 from [12]      Easy         921       1008      1156      1320      1569
    102 from [12]     Medium       824       972       1092      1208      1533
    28 from [12]      Medium       828       911       996       1196      1403
    112 from [5]      Medium       859       1057      1047      1291      1446
    68 from [12]      Medium       901       1056      1177      1438      1599
    54 from [12]      Medium       925       1009      1183      1429      1406
    242 from [5]      Hard         985       ∞         ∞         ∞         ∞
    35 from [5]       Hard         998       950       1161      1399      1723
    239 from [5]      Hard         1010      1014      ∞         1276      1436
    41 from [5]       Hard         1076      1045      1267      1575      1641
    1 from [5]        Hard         ∞         ∞         1177      ∞         1947

Table 1: Number of iterations needed to solve the chosen Sudoku puzzles in Example 2; a value of ∞ represents the neural network not solving the puzzle with the thresholds T_on = −T_off = D.

4.2. KenKen

A KenKen puzzle is an N × N grid that is divided into connected groups of cells, sometimes called "cages." As in Sudoku, each of the digits 1 to N must be placed in each row and each column exactly once. Each cage provides an additional constraint by specifying an operation (addition, subtraction, division, or multiplication) and a value. The digits entered into the cage must combine to produce the given value under the given operation. For subtraction or division, the digits are combined from largest to smallest. No numbers are given initially, although when a cage consists of a single cell the number that goes in the cell is known from the constraint. It is interesting to note that cages can extend into more than one row or column and, in such cases, a digit or digits may repeat within the cage. Figure 3(a) presents a 9 × 9 KenKen.

Similar to Section 4.1, we formulate the puzzle in terms of the set selection neural network with partial memories (PMP). We define S = {(i, j) | i = 1, ..., N^2, j = 1, ..., N} and introduce the row, column, and cell subsets R_{k,l}, C_{k,l}, and Z_k as in (9)-(11), for N digits. In addition, we introduce blocks corresponding to the cages, which we number 1 through M. The cages form a set of nonoverlapping subsets of the grid whose union is the entire grid. We define

    B_l = {cage l in all slices}, l = 1, ..., M.    (15)

Figure 3: (a) A sample KenKen puzzle, and (b) the cube of 729 neurons.

For each B_l, we define the nonempty subsets B_l^k, k = 1, ..., M_l, based on the cage constraint. A human solver looks at the cage constraint and thinks of all possible combinations of digits that could be entered. The B_l^k capture these possibilities as partial memories. We consider each possible set of digits to be an N_l-length binary string m_l^k over B_l, with a component equal to 1 if the corresponding digit is entered in the corresponding cell of the cage, and 0 otherwise. This definition combines with (3) to define the subsets B_l^k. Figure 3(b) illustrates the cube of 729 neurons induced by a 9 × 9 KenKen. We construct (PMPODE), with components x_{i,j} associated to cell (i, j) and n_{i,j} = 3 for all i and j. The answer set of (PMP) corresponds to the solution of the KenKen puzzle once we flatten the cube, as described in Section 4.1. All components are initially set to −1. We reuse the decision test and the algorithm (14); decisions are imposed as in (13), without the final consideration of the sub-grids. A sketch of the cage enumeration appears after Example 4 below.

Example 3. Figure 4 shows the evolution of the solution. In each snapshot, the leading neuron with positive state variable (if there is one) appears in grey, with the darkness of the grey indicating how close the neuron is to being in a fired state. Decided neurons are drawn in red.

The next example explores the relationship between puzzle difficulty (as determined by online sources or the puzzle creator) and the number of iterations required to solve the puzzle.

Example 4. Ten 6 × 6 KenKen puzzles were selected at each of five different difficulties: easiest, easy, medium, hard, and expert. All 50 puzzles were solved by the neural network. Table 2 presents the mean, minimum, and maximum number of iterations needed for each difficulty level. We see that the mean values increase with difficulty. The results suggest that the number of neural network iterations can serve as a measure of puzzle difficulty. (As with Sudoku, this result holds even when we change the step size used to solve the puzzles.)
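As referenced above, here is a brief sketch (our own illustration; the function name, arguments, and cage representation are hypothetical) of how the digit assignments compatible with a cage's operation and target value can be enumerated. Each yielded assignment determines one binary string m_l^k over B_l, and hence one subset B_l^k, as in (3).

    # Sketch: enumerate digit assignments satisfying a cage constraint (hypothetical API).
    from itertools import product
    from functools import reduce

    def cage_assignments(cells, op, target, N, same_row_or_col=None):
        """Yield tuples of digits (one per cell) meeting the cage constraint.

        cells: list of cell indices in the cage; op: one of '+', '-', '*', '/';
        same_row_or_col(a, b): hypothetical test used to forbid repeats within a line."""
        for digits in product(range(1, N + 1), repeat=len(cells)):
            # A digit may repeat in a cage only if the two cells share no row or column.
            if same_row_or_col and any(digits[a] == digits[b] and same_row_or_col(cells[a], cells[b])
                                       for a in range(len(cells)) for b in range(a + 1, len(cells))):
                continue
            ordered = sorted(digits, reverse=True)       # largest to smallest for '-' and '/'
            if op == '+' and sum(digits) == target:
                yield digits
            elif op == '*' and reduce(lambda a, b: a * b, digits) == target:
                yield digits
            elif op == '-' and reduce(lambda a, b: a - b, ordered) == target:
                yield digits
            elif op == '/' and reduce(lambda a, b: a / b, ordered) == target:
                yield digits

    # Example: a two-cell cage "3-" in a 6 x 6 puzzle.
    print(list(cage_assignments([1, 2], '-', 3, 6)))
    # [(1, 4), (2, 5), (3, 6), (4, 1), (5, 2), (6, 3)]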

Figure 4: The solution of the KenKen at iteration (a) 500, (b) 1000, (c) 1500, and (d) 2262.

In a variant of the standard KenKen puzzle, no mathematical operation is given for the cages. For each cage, the solver must consider all possible sets of digits that work for a single choice of operator. The formulation of the neural network can be adjusted to allow these extra possibilities as partial memories. We present one such puzzle as a final example.

    Difficulty   Mean   Min   Max
    Easiest      261    219   297
    Easy         281    175   395
    Medium       359    337   391
    Hard         364    269   494
    Expert       413    347   485

Table 2: Number of iterations needed to solve the chosen KenKen puzzles in Example 4.


Figure 5: (a) A KenKen puzzle with no specified operations, and solution snapshots at iteration (b) 500, (c) 1000, and (d) 1419.

Example 5. Consider the 4 × 4 KenKen puzzle in Figure 5(a), with no operations given. In this cute puzzle, all cages have a value of 12. The cage in the top left corner allows choices using addition or multiplication. For example, any permutation of the digits 1, 5, and 6 works with addition, as does any permutation of 3, 4, and 5. The digits 1, 3, and 4 work with multiplication, as do 1, 2, and 6. We construct the neural network allowing the different choices of operations. The evolution of the network is illustrated in Figure 5(b)-(d).

5. Conclusions

In this work, we have formulated two set selection problems, one with partial memories, in terms of dynamical system neural networks, with applications to solving Sudoku and KenKen puzzles, respectively. The main theoretical result establishes an equivalence between the set-theoretic KenKen formulation and a corresponding model using differential equations. The validity of the result is demonstrated in several examples. In addition to presenting examples in which Sudoku and KenKen are solved, we illustrate that the number of iterations needed by the computerized network to reach equilibrium (the puzzle solution) seems to be related to the identified difficulty level of the puzzle. That is, having chosen a step size in Euler's method, which we use to solve the differential equations numerically, we find that increasing the difficulty of the puzzle generally increases the number of iterations needed to reach a solution.

Acknowledgements

This work has been supported in part by research grants from the Natural Sciences and Engineering Research Council of Canada (NSERC).

[1] P. Babu, K. Pelckmans, P. Stoica, and J. Li, Linear systems, sparse solutions, and Sudoku, IEEE Signal Processing Letters, 17(1), 2010, 40–42.

[2] M. Biba, S. Ferilli, F. Esposito, N. Di Mauro, and T.M.A. Basile, A fast partial memory approach to incremental learning through an advanced data storage framework, Proceedings of the Fifteenth Italian Symposium on Advanced Database Systems, SEBD 2007, 2007, 52–63.

[3] M. Cohen and S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Transactions on Systems, Man and Cybernetics, 13(5), 1983, 815–826.

[4] G. Dahl, Permutation matrices related to Sudoku, Linear Algebra and Its Applications, 430(8–9), 2009, 2457–2463.

[5] R. Fraiman, Jumbo Sudoku Pocket, Time Inc. Home Entertainment, New York, 2007.

[6] A.F. Gabor and G.J. Woeginger, How *not* to solve a Sudoku, Operations Research Letters, 38(6), 2010, 582–584.

[7] J.J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proceedings of the National Academy of Sciences, 79, 1982, 2554–2558.

[8] J.J. Hopfield and D.W. Tank, "Neural" Computation of Decisions in Optimization Problems, Biological Cybernetics, 52, 1985, 141–152.

[9] J.J. Hopfield, Searching for Memories, Sudoku, Implicit Check Bits, and the Iterative Use of Not-Always-Correct Rapid Neural Computation, Neural Computation, 20, 2008, 1119–1164.

[10] C. Jeffries, Code Recognition and Set Selection with Neural Networks, Birkhäuser, Boston, 1991.

[11] D. Kulkarni, Enjoying Math: Learning Problem Solving with KenKen Puzzles, Recreational Math Publications, 2012.

[12] F. Longo, Mensa Absolutely Nasty Sudoku Level 1, Sterling Publishing, New York, 2007.

[13] M.A. Maloof and R.S. Michalski, A Method for Partial-Memory Incremental Learning and its Application to Computer Intrusion Detection, Proceedings of the 7th IEEE International Conference on Tools with Artificial Intelligence, 1995, 392–397.

[14] T.K. Moon, J.H. Gunther, and J.J. Kupin, Sinkhorn solves Sudoku, IEEE Transactions on Information Theory, 55(4), 2009, 1741–1746.

[15] J.-M. Wu, P.-H. Hsu, and C.-Y. Lio, Sudoku Associative Memory, Neural Networks, 57, 2014, 112–127.

