
Constraint programming

We’ve all seen the riddle, widely known as the zebra puzzle, which goes something like this:

There are five houses.

The Spaniard owns the dog.

The Norwegian lives next to the blue house.

The man who smokes Chesterfields lives in the house next to the man with the fox.

Etcetera, etcetera.

… Who owns the zebra?

How do you go about solving such a puzzle manually? And more interestingly: how would you go about solving it using a computer? Would you choose to model this puzzle as a Mixed-Integer Program (MIP) in order to feed it to a solver like CPLEX? Or would you pick your favorite general-purpose programming language and write all the logic for solving the puzzle yourself? Sure, you’d reach the right answer, but it seems like a labor-intensive approach.

Wouldn’t it be great if you could just state the problem and the computer would somehow instantly know how to solve it? After all, that’s the holy grail of programming. You may think this sounds too good to be true, but actually such a thing already exists: it’s called Constraint Programming (CP).

To the best of my knowledge, CP is not taught in any standard curriculum in the Netherlands. It was, however, a topic at the conference organized by the Dutch Network on the Mathematics of Operations Research (LNMB) in 2014 (http://www.lnmb.nl/conferences/2014/programlnmbconference/), and there my suspicions were confirmed: a show of hands indicated that hardly anyone in the audience had ever heard of – let alone used – CP.

I happened to come across CP when studying computer science in Australia in 2011. Generally speaking, I do find that computer scientists are more familiar with CP than econometricians or OR practitioners. And that’s a shame, because for OR specialists it’s very natural to think of a problem in the way CP requires: in terms of constraints and, possibly, an objective.

Constraints

In CP, constraints can look like your typical LP constraint (e.g. x ≤ 5). However, you can combine these with logical statements such as “A or B is true”. In addition, you can use expressions like ‘for all’ or ‘exists’. That way, you can phrase constraints like: “For all variables x in (x1, x2, x3), there exists a variable y in (y1, y2, y3) such that x = 1 - y”. Another powerful tool for building constraints is the implication: for example, “a < b -> not (A and B are true)”.
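
To give a flavour, here is a minimal sketch of my own (not code from the original post) of how such statements might look in MiniZinc. The variables x, y, a and b and their domains are made up purely for illustration:

array[1..3] of var 0..1: x;  % stands in for x1, x2, x3
array[1..3] of var 0..1: y;  % stands in for y1, y2, y3
var 0..10: a;
var 0..10: b;

% "For all x in (x1, x2, x3), there exists a y in (y1, y2, y3) such that x = 1 - y"
constraint forall(i in 1..3)(exists(j in 1..3)(x[i] = 1 - y[j]));

% An implication: a < b -> not (A and B), where x[1] = 1 and y[1] = 1
% stand in for the statements A and B.
constraint (a < b) -> not (x[1] = 1 /\ y[1] = 1);

solve satisfy;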

In fact, a constraint can be any logical statement you can phrase this way, which allows you to model a wide range of problems. The zebra puzzle is a nice example, but I can imagine you want a better feel for the type of problems one could use CP for. Examples include solving a Sudoku, a job-shop scheduling problem, a stable-marriage problem or a knapsack problem. One thing I particularly like is how the CP syntax allows you to formulate a problem in a fashion that’s very similar to the way you think about it in your head.
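
For instance, two of the zebra-puzzle clues above can be written almost literally. The snippet below is my own sketch (not from the original post); it assumes each attribute is modelled as the number of the house (1 to 5) it belongs to, and it leaves out the rest of the puzzle, including the all-different constraints you would need per attribute group:

set of int: HOUSE = 1..5;
var HOUSE: spaniard;   % house of the Spaniard
var HOUSE: norwegian;  % house of the Norwegian
var HOUSE: dog;        % house with the dog
var HOUSE: blue;       % the blue house

% "The Spaniard owns the dog": both refer to the same house.
constraint spaniard = dog;

% "The Norwegian lives next to the blue house."
constraint abs(norwegian - blue) = 1;

solve satisfy;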

Solving the problem

The problems are solved via backtracking, local search and/or constraint propagation. In a way, this is just like what you would do if you had to solve the problem manually (think about the zebra puzzle): you exclude infeasible options and – using the logic in the constraints – determine how to proceed, until you find a consistent solution.

Typically, the problems for which CP is used successfully are finite-domain problems (i.e. those where the variables can take a finite number of values). A finite-domain solver examines the remaining possible values of the variables and adds new constraints that restrict the problem to a part of the solution space – in a way, it guesses where the solution might lie. It then propagates the constraints of the original problem to determine which other values are still possible in solutions. Of course, the guess does not have to be correct, so the alternative option (the negation of the newly added constraint) must also be analyzed. For problems where we are simply trying to find a feasible solution, the finite-domain solver finishes when it has detected that all constraints are satisfied (or when unsatisfiability is shown). Alternatively, if we are trying to find an optimum rather than just a feasible solution, the solver finishes when all options have been considered (either individually, or implicitly by rejecting a whole part of the search space).

If the description of the finite-domain solver sounds a bit too abstract for you, perhaps a visual example can clarify things further. Consider the n queens problem, which requires that we place n queens on an n × n chessboard so that none can attack any other. We can model this with a variable q[i] that records in which row the queen in column i is placed. A typical (partial) search tree for n = 9 is illustrated on the left of the Figure below. It shows the search path when we first set q[1] = 1: this removes values from the domains of the other variables, so that, for example, q[2] can no longer take the values 1 or 2. Next, q[2] = 3 is set, which further prunes the domains of the other variables, and then q[3] is set to 5 (its earliest remaining value).

The state of the chess board after these three decisions is shown in Figure (a), where the crowns indicate the positions of the queens already fixed and the stars indicate positions where we cannot place a queen, since it would be able to take an already placed queen.

Figure: Partial search trees for 9 queens: (a) the state after the addition of q[1] = 1, q[2] = 3, q[3] = 5; (b) the initial propagation after additionally adding q[6] = 4.

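In MiniZinc, the n queens model sketched above fits in a handful of lines. This is my own sketch, using the same viewpoint as in the text (q[i] is the row of the queen in column i):

include "alldifferent.mzn";

int: n = 9;
array[1..n] of var 1..n: q;  % q[i] = row of the queen in column i

constraint alldifferent(q);                        % no two queens share a row
constraint alldifferent([q[i] + i | i in 1..n]);   % no two queens share an upward diagonal
constraint alldifferent([q[i] - i | i in 1..n]);   % no two queens share a downward diagonal

solve satisfy;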

Am I suggesting that we now throw all our previous methods overboard and solve everything with CP? Absolutely not. CP is a relatively new method and therefore not as mature as, say, MIP. Since hundreds of PhD theses have been devoted to improving and speeding up MIPs, one cannot expect CP always to be able to compete: if you see a way to model your problem as an MIP, that’s probably going to be the faster and more scalable way to go. I’m simply suggesting you become aware of CP, and maybe even give it a try yourself. Discover how elegant it is to phrase constraints in a CP language. See how using expressions like ‘for all’, ‘exists’ and ‘all different’ shapes your thinking about the problem. It’s interesting, and more than anything: it’s fun.

It is important to remember that modeling is programming, and just as with programming, you can write efficient and inefficient models. In contrast to typical LP solvers, a CP model’s solving speed actually benefits from adding constraints. The reason is that the underlying CP solver can use these constraints to improve constraint propagation, which is why CP users often add redundant constraints that are already implicitly present in their model.
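
A classic illustration of this idea (my own addition, not an example from this post) is the magic-series problem, where s[i] must equal the number of times the value i occurs in the array s itself. The last two constraints below are logically redundant, yet they typically speed up solving considerably because they give the propagators more to work with:

int: n = 5;
array[0..n-1] of var 0..n: s;

% Defining constraints: s[i] counts the occurrences of the value i in s.
constraint forall(i in 0..n-1)(
  s[i] = sum(j in 0..n-1)(bool2int(s[j] = i))
);

% Redundant constraints: implied by the definition above,
% but they strengthen propagation.
constraint sum(i in 0..n-1)(s[i]) = n;
constraint sum(i in 0..n-1)(i * s[i]) = n;

solve satisfy;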

Sometimes the search space can be reduced by adding constraints which break a symmetry in the problem. For example, consider solving a dinner party problem where you’re seating people round a circular table and want everyone to sit next to people they like. It’s easy to see how we can immediately fix the seat of one person. And we don’t really care whether the people are ordered clockwise or counterclockwise round the table. So to remove these mirrored solutions, we can require the seat of a second person be in the clockwise half of the table relative to the first, fixed person. These simple additions allow the problem to be solved much faster.
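
The two symmetry-breaking constraints might look as follows in MiniZinc. This is only a sketch under my own assumptions (seat[p] gives the position of person p around the table); the constraints encoding who likes whom are left out:

include "alldifferent.mzn";

int: n = 8;                      % number of guests and seats
array[1..n] of var 1..n: seat;   % seat[p] = position of person p

constraint alldifferent(seat);   % one person per seat

% Break rotational symmetry: pin person 1 to seat 1.
constraint seat[1] = 1;

% Break the mirror symmetry: person 2 must sit in the "clockwise half"
% of the table relative to person 1.
constraint seat[2] <= n div 2 + 1;

solve satisfy;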

MiniZinc

There are several languages that support constraint programming, but I will just highlight one: MiniZinc. MiniZinc is a constraint modeling language developed by researchers from the University of Melbourne and Monash University. To be fair, MiniZinc can handle more than CP alone: depending on the kind of model, it can solve the problem using a CP solver, a Boolean satisfiability (SAT) solver or a MIP solver. But for the purposes of this blog, I decided to focus only on the CP part.

Has this article aroused your curiosity enough that you want to try it for yourself? You can download MiniZinc (http://www.minizinc.org/) and its tutorial (http://www.minizinc.org/downloads/doc-latest/minizinc-tute.pdf). I also recommend a massive open online course (MOOC) on the topic available here: https://www.mooc-list.com/course/modeling-discrete-optimization-coursera. All free of charge!

Exercise

A magic square of side n is an arrangement of the numbers from 1 to n*n such that each row, each column, and both main diagonals sum to the same value.

Here is a 3×3 magic square:

2 7 6

9 5 1

4 3 8

Exercise: write a MiniZinc program that generates a magic square of side n.
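
If you would like a starting point, here is a skeleton of my own, not a full solution, so it should not spoil the exercise. It declares the square and the magic sum; the row, column and diagonal constraints are left for you to add:

include "alldifferent.mzn";

int: n = 3;                              % try larger sizes as well
int: magic_sum = n * (n*n + 1) div 2;    % the value every line must sum to
array[1..n, 1..n] of var 1..n*n: square;

% Each of the numbers 1..n*n is used exactly once.
constraint alldifferent([square[i,j] | i, j in 1..n]);

% Left as the exercise: constrain every row, every column and both
% main diagonals to sum to magic_sum.

solve satisfy;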

Most examples and figures used in this post are courtesy of Kim Marriott (Monash University).

 

