Printing is only available in GTK versions >= 2.10. We can only embed
the page setup dialog on GTK >= 2.18, so on a GTK version less than
that, we must use a separate page setup dialog.
In GTK, printing is usually done one page at a time, so also modify
printing.c to allow printing of a single page at a time.
Create a separate drawing API for drawing to the screen and for
printing. Create a vtable for functions which need to be different
depending on whether they were called from the printing or drawing
API.
When a function is called from the printing API, it is passed a
different instance of the frontend from the one used by the drawing
API, and each instance contains the vtable appropriate to its use.
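A minimal sketch of the shape such a vtable might take (the struct and
field names here are illustrative assumptions, not the actual gtk.c
code):

    /* Illustrative sketch only: a table of operations that differ
     * between on-screen drawing and printing, chosen once per
     * frontend instance. */
    typedef struct frontend frontend;

    struct internal_drawing_api {
        void (*set_colour)(frontend *fe, int colour);
        void (*draw_line)(frontend *fe, float x1, float y1,
                          float x2, float y2);
        void (*fill_rect)(frontend *fe, int x, int y, int w, int h);
    };

    struct frontend {
        const struct internal_drawing_api *api; /* screen or print vtable */
        /* ... other per-frontend state ... */
    };

    static void do_draw_line(frontend *fe, float x1, float y1,
                             float x2, float y2)
    {
        fe->api->draw_line(fe, x1, y1, x2, y2); /* dispatch via the vtable */
    }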
The low-level functions used for printing are enabled even if printing
is not enabled. This is in case we ever need to use them for something
other than printing with GTK. For example, using Cairo as a printing
backend when printing from the command line. Enabling the low-level
functions even when GTK printing is not available also allows them to
be compiled under as many build settings as possible, and thus lowers
the chance of undetected breakage.
Move the definition of ROOT2 from ps.c to puzzles.h so other files can
use it (gtk.c needs it for hatching).
Also add myself to the copyright list.
[Committer's note: by 'printing', this log message refers to the GTK
GUI printing system, which handles selecting a printer, printing to a
file, previewing and so on. The existing facility to generate
printable puzzles in Postscript form by running the GTK binaries in
command-line mode with the --print option is unaffected. -SGT]
Asher Gordon points out that when Michael Quevillon was added to the
LICENCE file in commit 8af0c2961, he didn't also make it into the
copy of the same list in puzzles.but.
When no icons are available, n_xpm_icons will be 0, and
menu_about_event() will try to access xpm_icons[n_xpm_icons-1]. Since
n_xpm_icons is 0, that is xpm_icons[-1], an out-of-bounds access which
causes a segfault.
Instead, check if n_xpm_icons is 0, and if so, don't pass any icon to
gtk_show_about_dialog().
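A hedged sketch of the shape of the fix (the variable names and the
exact property list here are assumptions about gtk.c, not verbatim
code):

    if (n_xpm_icons > 0) {
        GdkPixbuf *icon = gdk_pixbuf_new_from_xpm_data(
            (const char **)xpm_icons[n_xpm_icons - 1]);
        gtk_show_about_dialog(GTK_WINDOW(fe->window),
                              "program-name", thegame.name,
                              "logo", icon,
                              NULL);
        g_object_unref(icon);
    } else {
        /* No icons compiled in: omit the "logo" property entirely. */
        gtk_show_about_dialog(GTK_WINDOW(fe->window),
                              "program-name", thegame.name,
                              NULL);
    }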
I had occasion recently to want to take a puzzle screenshot on a
machine that didn't have the GTK3 libraries installed, but is advanced
enough to build the GTK2+Cairo version of the puzzles. That _ought_ to
be good enough to take screenshots using bare Cairo without GTK; I
think the only reason why I didn't bother to support it before is
because on GTK2, frontend_default_colour() queries the widget style
to choose a shade of grey. But we have a fixed fallback shade of grey
to use on GTK3, so it's easy to just use that same fallback in
headless GTK2.
One of these days I should sit down and work out exactly when a run of
autoconf generates an autom4te.cache directory, and why it can
suddenly turn up unexpectedly one day after years of never needing to
put it in .gitignore!
Apparently gcc 9 is clever enough to say 'Hey, runtime field width in
an sprintf targeting a fixed-size buffer!', but not clever enough to
notice that the width was computed earlier as the max of lots of
default-width sprintfs into the same buffer (so _either_ it's safe, or
else - on a hypothetical platform with a 263-bit int - the damage was
already done).
Added a bounds check or two to keep it happy.
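The kind of check involved, as a purely illustrative sketch (the
buffer size, variable names and helper are all made up here, and this
is not the actual change):

    char buf[80];
    int width = compute_label_width(params); /* max of earlier sprintf widths */

    /* Clamp the runtime field width so the compiler can see that the
     * sprintf below cannot overrun buf, even though in practice the
     * computed width never came anywhere near that limit. */
    if (width > (int)sizeof(buf) - 1)
        width = (int)sizeof(buf) - 1;
    sprintf(buf, "%*d", width, value);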
This change doesn't alter the overall power of the Dominosa solver; it
just moves the boundary between Hard and Extreme so that fewer puzzles
count as Hard, and more as Extreme. Specifically, either of the two
cases of set analysis potentially involving a double domino with both
squares in the set is now classed as Extreme.
The effects on difficulty are that Hard is now easier, and Extreme is
at least easier _on average_. But the main purpose is the effect on
generation time: before this change, Dominosa Extreme was the slowest
puzzle present to generate in the whole collection, by a factor of
more than three. I think the forcing-chain deductions just didn't make
very many additional grids soluble, so that the generator had to try a
lot of candidates before finding one that could be solved using
forcing chains but not with all the other techniques put together.
This is a technique I've had on the todo list (and been using by hand)
for years: a domino can't be placed if it would divide the remaining
area of the grid into pieces containing an odd number of squares.
The findloop subsystem is already well set up for finding domino
placements that would divide the grid, and the new is_bridge query
function can now tell me the sizes of the areas on each side of the
bridge, which makes it trivial to implement this deduction by simply
running findloop and iterating over the output array.
This one tells you if a graph edge _is_ a bridge (i.e. it has inverted
sense to the existing is_loop_edge query). But it also returns
auxiliary data, telling you: _if_ this edge were removed, and thereby
disconnected some connected component of the graph, what would be the
sizes of the two new components?
The data structure built up by the algorithm very nearly contained
enough information to answer that already: because we assign
sequential indices to all the vertices in a traversal of our spanning
forest, and any bridge must be an edge of that forest, it follows that
we already know the size of _one_ of the two new components, just by
looking up the (minindex,maxindex) values for the vertex at the child
end of the edge.
To determine the other subcomponent's size, we subtract that from the
size of the full component. That's not quite so easy because we don't
already store that - but it's trivial to annotate every vertex's data
with a pointer back to the topmost node of the spanning forest in its
component, and then we can look at the (minindex,maxindex) pair of
that. So now we know the size of the original component and the size
of one of the pieces it would be split into, so we can just subtract.
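In code terms the calculation is just this (the field names here are
hypothetical stand-ins for the (minindex,maxindex) data described
above):

    /* Number of vertices in the child-side subtree of the bridge. */
    int child_size = child->maxindex - child->minindex + 1;

    /* Number of vertices in the whole component, found via the new
     * back-pointer to the topmost node of its spanning-forest tree. */
    int comp_size = root->maxindex - root->minindex + 1;

    /* So the other side of the bridge must contain the rest. */
    int other_size = comp_size - child_size;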
We've already spotted when a domino occurs twice in the _same_ forcing
chain. But now we also spot when it occurs twice in the same _pair_ of
complementary forcing chains, one in each of the two options. Then it
must appear in one of those two places, and hence, can't appear
anywhere else.
This is necessary to solve the following test puzzle that someone sent
me in 2006 and I just recovered from my email archive:
6:65111036543150325534405211110064266620632365204324442053
Without this new deduction, the previous solver can't solve that
puzzle even at full power, but the half-solved state it leaves the
grid in has an obvious move in the top right corner (placing the 6-2
domino vertically in that corner forces two 3-0s to its left). Now
that kind of move can be made too, and the solver can handle this
puzzle (grading it as Hard).
In other front ends, Shift-click is an alternative to the middle
button, and Ctrl-click the right button. Apparently I completely
forgot to implement this in the JS front end. Better late than never.
I realised that even with the extra case for a double domino
potentially using two squares in a set, I'd missed two tricks.
Firstly, if the double domino is _required_ to use two of the squares,
you can rule out any placement in which it only uses one. But I was
only ruling out those in which it used _none_.
Secondly, if you have the same number of squares as dominoes, so that
the double domino _can_ but _need not_ use two of the squares, then I
previously thought there was no deduction you could make at all. But
there is! In that situation, the double does have to use _one_ of the
squares, or else there would be only the n-1 heterogeneous dominoes to
go round the n squares. So you can still rule out placements for the
double which fail to overlap any square in the set, even if you can't
(yet) do anything about the other dominoes involved.
Now we don't just ensure that alloc_try_hard arranged a confounder for
every domino; we also make sure that the full Basic-mode solver can't
place even a single domino with certainty.
Now, as well as grading a puzzle by the highest difficulty it needed
during its solution, I can check _how much_ of a given puzzle is
soluble if you remove the higher difficulty levels.
The new Hard and Extreme difficulty levels allow you to make a start
on a grid even if there is no individual domino that can be easily
placed. So it's more elegant to _enforce_ that, in the same way that
Hard-mode Slant tries to avoid the initial toeholds that Easy mode
depends on.
Hence, I've refactored the domino layout code into several alternative
versions. The new one, enabled at Hard mode and above, arranges that
every domino has more than one possible position, so that you have to
use some kind of hard deduction to even get off the ground.
While I'm at it, the old layout system has had a makeover: in the
course of its refactoring, I've arranged to iterate over the domino
values _and_ locations in random order, instead of going over the
locations in grid order. The idea is that that might eliminate a
directional bias. But more importantly, it changes the previous
meaning of random number seeds.
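An obvious way to do the random-order iteration is with the existing
shuffle() utility; a sketch of the general idea (the variable names
are made up, and this is not the literal new layout code):

    int i, *order = snewn(nlocations, int);
    for (i = 0; i < nlocations; i++)
        order[i] = i;
    shuffle(order, nlocations, sizeof(*order), rs); /* rs: random_state * */
    for (i = 0; i < nlocations; i++) {
        int loc = order[i];
        /* ... try placing a domino at location 'loc' ... */
    }
    sfree(order);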
I decided not to go all the way up to order-9 Extreme, because that
takes a lot of CPU to generate. People can select it by hand if they
don't mind that.
As with several other puzzles, the harder difficulty levels turn out
to be impossible to generate at very small sizes, which I fudge by
replacing them with the hardest level actually feasible.
This technique borrows its name from the Solo / Map deduction in which
you find a path of vertices in your graph each of which has two
possible values, such that a choice for one end vertex of the chain
forces everything along it. In Dominosa, an approximate analogue is a
path of squares each of which has only two possible domino placements
remaining, and it has the extra-useful property that it's
bidirectional - once you've identified such a path, either all the odd
domino placements along it must be right, or all the even ones. So if
you can find an inconsistency in either set, you can rule out the
whole lot and settle on the other set.
Having done that basic analysis (which turns out to be surprisingly
easy with an edsf to help), there are multiple ways you can actually
rule out one of the same-parity chains. One is if the same domino
would have to appear twice in it; another is if the set of dominoes
that the chain would place would rule out all the choices for some
completely different square. There are probably others too, but those
are the ones I've implemented.
I'm about to have a need to sort an array based on auxiliary data held
in a variable that's not globally accessible, so I need a sort routine
that accepts an extra parameter and passes it through to the compare
function.
The sorting algorithm is heapsort, because it's the N log N algorithm I
can implement most reliably.
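A self-contained sketch of a heapsort of that shape (not the routine
actually added here, whose name and exact prototype may differ):

    #include <stddef.h>

    typedef int (*cmpfn)(const void *a, const void *b, void *ctx);

    static void swap_bytes(char *a, char *b, size_t size)
    {
        while (size--) {
            char t = *a;
            *a++ = *b;
            *b++ = t;
        }
    }

    static void sift_down(char *base, size_t size, cmpfn cmp, void *ctx,
                          size_t start, size_t n)
    {
        size_t i = start;
        while (2*i + 1 < n) {
            size_t child = 2*i + 1;
            if (child + 1 < n &&
                cmp(base + child*size, base + (child+1)*size, ctx) < 0)
                child++;                  /* pick the larger child */
            if (cmp(base + i*size, base + child*size, ctx) >= 0)
                break;                    /* heap property already holds */
            swap_bytes(base + i*size, base + child*size, size);
            i = child;
        }
    }

    /* Heapsort with a caller-supplied context pointer passed through
     * to the comparison function on every call. */
    void sort_with_context(void *array, size_t n, size_t size,
                           cmpfn cmp, void *ctx)
    {
        char *base = array;
        size_t i;
        if (n < 2)
            return;
        for (i = n/2; i-- > 0; )          /* build the max-heap */
            sift_down(base, size, cmp, ctx, i, n);
        for (i = n; i-- > 1; ) {          /* repeatedly extract the maximum */
            swap_bytes(base, base + i*size, size);
            sift_down(base, size, cmp, ctx, 0, i);
        }
    }

The comparison function can then, for example, compare two array
indices by looking each one up in an auxiliary table passed in as ctx.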
In particular, reorganise the priorities. I think forcing chains are
the most important thing that still wants adding; the parity search
would be easy enough but I don't know how often it would actually be
_useful_; the extended set analysis would be nice but I don't know how
to make it computationally feasible.
I've always had the vague idea that the usual set analysis deduction
goes wrong when there are two adjacent squares, because they might be
opposite ends of the same domino and mess up the count. But I just
realised that actually you can correct for that by adjusting the
required count by one: if you have four 0 squares which between them
can only be parts of 0-0, 0-1 and 0-2, then the only way this can work
is if two of them are able to be the 0-0 - but in that case, you can
still eliminate those dominoes from all placements elsewhere. So set
analysis _can_ work in that situation; you just have to compensate for
the possible double.
(This enhanced form _might_ turn out to be something that needs
promoting into a separate difficulty level, but for the moment, I'll
try leaving it in Hard and seeing if that's OK.)
This is another thing I've been doing with my own brain for ages as a
more interesting alternative to scouring the grid for the simpler
deduction that must be there somewhere - but now the solver can
understand it too, so it can generate puzzles that _need_ it (or at
least need something beyond the simpler strategies it understands).
Currently, there are just two difficulty levels. 'Basic' is the same
as the old solver; 'Trivial' is the subset that guarantees the puzzle
can be solved using only the two simplest deductions of all: 'this
domino can only go in one place' and 'only one domino orientation can
fit in this square'.
The solver has also acquired a -g option, to grade the difficulty of
an input puzzle using this system.
The new solver should match the previous solver's level of
intelligence, but it's more usefully split up into basic
data-structure maintenance and separate deduction routines that you
can omit some of. So it's a better basis to build on when adding
further deductions or dividing the existing ones into tiers.
The new solver also produces much more legible diagnostics, when the
command-line solver is run in -v mode.
I've made the existing optional solver diagnostics appear as the
verbose output of the solver program. They're not particularly legible
at the moment, but they're better than nothing.
If you drag an arrow on to a square which is already filled in as part
of a completed region, or whose counterpart is filled in, or whose
counterpart is actually a dot, then the game can't place a
double arrow. Previously, it didn't find that out until execute_move
time, at which point it was too late to prevent a no-op action from
being placed on the undo chain.
Now we do those checks in interpret_move, before generating the move
string that tries to place the double arrow in the first place. So
execute_move can now enforce by assertion that arrow-placement moves
it gets are valid.
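The shape of the change, in terms of the standard puzzle backend API
(arrow_drag_is_possible is a hypothetical stand-in for the real
checks, and movebuf for the move string already built):

    /* In interpret_move(): refuse to generate a move string at all if
     * the drag can't result in a real arrow placement. */
    if (!arrow_drag_is_possible(state, sx, sy))
        return NULL;            /* invalid drag: no move, no undo entry */
    return dupstr(movebuf);     /* otherwise emit the move string as before */

    /* In execute_move(): the same condition can now be asserted. */
    assert(arrow_drag_is_possible(state, sx, sy));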
Fixes an assertion failure in which you move the keyboard cursor to a
peg, press Enter to indicate that you're about to jump it to
somewhere, and then instead execute an undo or redo action which
replaces the peg with a hole. Thanks to Vitaly Ostrosablin for the
report.
I've only just found out that it has the effect of treating the argv
words not as plain filenames, but as arguments to Perl's default 'open',
i.e. if they end in | then the text before that is treated as a
command. That's not what was intended!
Thanks to Rocco Matano for reporting that in the -DCOMBINED version of
the Windows front end, switching games causes a crash because the
presets menu from the old midend is still left over in fe, and its
presence inhibits the setup code from making a new one. Now we throw
it out at the same time as we throw out the old midend itself.
Also, the condition 'if (!fe->preset_menu)' was misguided. I think the
point of that was to avoid pointlessly tearing down and rebuilding the
preset menu when we're _not_ changing game - but that's a cost too
small to worry about if it causes the slightest trouble. Now
fe->preset_menu should always be NULL at that point in the function,
so I've replaced the if with an assert.
Moving the snaffle_colours() call earlier is fine in GTK 3, where the
potential call to frontend_default_colour doesn't depend on the window
already having been created. But it falls over in GTK 2 where it does.
Moved the non-headless-mode version of that call back to where it was
before the --screenshot change.
I had this idea today and immediately wondered why I'd never had it
before!
To generate the puzzle screenshots used on the website and as program
icons, we run the GTK front end with the --screenshot option, which
sets up GTK, insists on connecting to an X server (or other display),
draws the state of a puzzle on a Cairo surface, writes that surface
out to a .png file, and exits.
But there's no reason we actually need the GTK setup during that
process, especially because the surface we do the drawing on is our
_own_ surface, not even one provided to us by GTK. We could just set
up a Cairo surface by itself, draw on it, and save it to a file.
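A minimal sketch of that bare-Cairo approach, using only calls from
the Cairo library itself (the function name and the plain grey fill
are just for illustration):

    #include <cairo.h>

    int save_screenshot(const char *filename, int w, int h)
    {
        cairo_surface_t *surf =
            cairo_image_surface_create(CAIRO_FORMAT_RGB24, w, h);
        cairo_t *cr = cairo_create(surf);
        cairo_status_t status;

        /* ... draw the puzzle here with the usual Cairo calls ... */
        cairo_set_source_rgb(cr, 0.8, 0.8, 0.8); /* fallback grey background */
        cairo_paint(cr);

        cairo_destroy(cr);
        status = cairo_surface_write_to_png(surf, filename);
        cairo_surface_destroy(surf);
        return status == CAIRO_STATUS_SUCCESS;
    }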
Calling gtk_init is not only pointless, but actively inconvenient,
because it means the build script depends on having an X server
available for the sole purpose of making gtk_init not complain.
So now I've simplified things, by adding a 'headless' flag in
new_window and the frontend structure, which suppresses all uses of
actual GTK, leaving only the Cairo surface setup and enough supporting
stuff (like colours) to generate the puzzle image. One awkward build
dependency removed.
This means that --screenshot no longer works in GTK 2, which I don't
care about, because it only needs to run on _one_ platform.
Another thing I spotted while trawling the whole source base was that
a couple of games had omitted 'static' on a lot of their internal
functions. Checking with nm, there turned out to be quite a few more
than I'd spotted by eye, so this should fix them all.
Also added one missing 'const', on the lookup table nbits[] in Tracks.
I noticed this during the bool trawl: both of these games have an
array of flags indicating which grid squares are immutable starting
clues, and copy it in every call to dup_game, which is completely
unnecessary because it doesn't change during play. So now each one
lives in a reference-counted structure, as per my usual practice in
similar cases elsewhere.
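The usual shape of that arrangement, sketched here with made-up
structure and field names (snew/sfree are the standard puzzles
allocation helpers):

    struct clues {
        int refcount;
        bool *immutable;   /* one flag per square; fixed for the game's life */
    };

    static game_state *dup_game(const game_state *state)
    {
        game_state *ret = snew(game_state);
        *ret = *state;             /* shallow copy; real code also duplicates
                                    * the genuinely per-move arrays */
        ret->clues = state->clues; /* share the immutable flags... */
        ret->clues->refcount++;    /* ...and just bump the reference count */
        return ret;
    }

    static void free_game(game_state *state)
    {
        if (--state->clues->refcount <= 0) {
            sfree(state->clues->immutable);
            sfree(state->clues);
        }
        sfree(state);
    }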
This is the main bulk of this boolification work, but although it's
making the largest actual change, it should also be the least
disruptive to anyone interacting with this code base downstream of me,
because it doesn't modify any interface between modules: all the
inter-module APIs were updated one by one in the previous commits.
This just cleans up the code within each individual source file to use
bool in place of int where I think that makes things clearer.
This commit removes the old #defines of TRUE and FALSE from puzzles.h,
and does a mechanical search-and-replace throughout the code to
replace them with the C99 standard lowercase spellings.
This shouldn't be a disruptive change at all: findloop_run and
findloop_is_loop_edge now return bool in place of int, but client code
should automatically adjust without needing any changes.
Now the flag passed to edsf_merge to say whether two items are the
same or opposite is a bool, and so is the flag returned via a pointer
argument from edsf_canonify.
The latter requires client code to be updated to match (otherwise
you'll get a pointer type error), so I've done that update in Loopy,
which is edsf's only current in-tree client.
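So a caller that previously received the inverse flag into an int now
does something like this (a sketch, not the literal Loopy diff):

    bool inverse;                   /* was: int inverse; */
    int canon = edsf_canonify(dsf, index, &inverse);
    if (inverse) {
        /* the two items are known to be in opposite states */
    }

    edsf_merge(dsf, a, b, true);    /* record that a and b are opposite */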
Not many changes here: the 'dotted' flag passed to print_line_dotted
is bool, and so is the printing_in_colour flag passed to
print_get_colour. Also ps_init() takes a bool.
line_dotted is also a method in the drawing API structure, but it's
not actually filled in for any non-print-oriented implementation of
that API. So only front ends that do platform-specific _printing_
should need to make a corresponding change. In-tree, for example,
windows.c needed a fix because it prints via Windows GDI, but gtk.c
didn't have to do anything, because its CLI-based printing facility
just delegates to ps.c.
This changes parameters of midend_size and midend_print_puzzle, the
return types of midend_process_key, midend_wants_statusbar,
midend_can_format_as_text_now and midend_can_{undo,redo}, the 'bval'
field in struct config_item, and finally the return type of the
function pointer passed to midend_deserialise and identify_game.
The last of those changes requires a corresponding fix in clients of
midend_deserialise and identify_game, so in this commit I've also
updated all the in-tree front ends to match. I expect downstream front
ends will need to do the same when they merge this change.
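For instance, a front end's read callback for midend_deserialise now
looks something like this (the function and context names here are
illustrative, not any particular front end's code):

    static bool savefile_read(void *wctx, void *buf, int len)
    {
        FILE *fp = (FILE *)wctx;
        size_t got = fread(buf, 1, len, fp);
        return got == (size_t)len;  /* returns bool rather than int now */
    }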