A Python library for program synthesis and symbolic execution that combines Z3's constraint solving with LLM-guided synthesis. Put holes in your Python code and let holey
fill them using formal constraints, natural language specifications, or both.
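For context, a Python Programming Puzzle is a `sat` function, and solving it means finding an input that makes it return `True` (this example follows the benchmark's style; it is not a holey-specific API):

```python
# A puzzle in the Python Programming Puzzles style: the solver must
# synthesize an input `s` such that sat(s) is True.
def sat(s: str) -> bool:
    return s.count("o") == 1000 and s.count("oo") == 0

# One valid solution: 1000 "o"s, each separated by another character.
sol = "ox" * 1000
assert sat(sol)
```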
The symbolic execution is inspired by Philip Zucker's blog post "Symbolic Execution by Overloading `__bool__`", but explores all branches exhaustively instead of randomly and fleshes out the concepts towards solving Python Programming Puzzles. The solver incorporates heuristics from LLMs in addition to symbolic execution.
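To illustrate the `__bool__` trick, here is a minimal self-contained sketch (my own simplification, not holey's actual classes): a symbolic boolean records which branch each `if` takes, and a driver replays the function, flipping one recorded decision at a time until all paths are covered:

```python
class SymInt:
    """A symbolic integer that builds comparison expressions instead of values."""
    def __init__(self, name):
        self.name = name
    def __gt__(self, other):
        return SymBool(f"({self.name} > {other})")
    def __eq__(self, other):
        return SymBool(f"({self.name} == {other})")

class SymBool:
    trace = []  # branch decisions forced on the current run
    path = []   # (expression, decision) pairs recorded on the current run
    def __init__(self, expr):
        self.expr = expr
    def __bool__(self):
        # Replay a forced decision if one exists, otherwise default to True.
        i = len(SymBool.path)
        taken = SymBool.trace[i] if i < len(SymBool.trace) else True
        SymBool.path.append((self.expr, taken))
        return taken

def explore(f):
    """Exhaustively enumerate all branch sequences of f on a symbolic input."""
    stack, results = [[]], []
    while stack:
        SymBool.trace, SymBool.path = stack.pop(), []
        out = f(SymInt("x"))
        conds = [e if t else f"(not {e})" for e, t in SymBool.path]
        results.append((conds, out))
        # Schedule a flip of every decision not already forced on this run.
        for i in range(len(SymBool.trace), len(SymBool.path)):
            prefix = [t for _, t in SymBool.path[:i]]
            stack.append(prefix + [not SymBool.path[i][1]])
    return results

def prog(x):
    if x > 0:
        return "pos" if x == 1 else "big"
    return "nonpos"

for conds, out in explore(prog):
    print(conds, "->", out)
```

Running this prints all three paths of `prog` with their path conditions; a real solver would hand each path condition to Z3 to synthesize a concrete input.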
Requirements:
- `python` with support for `pip` (e.g. `conda`), tested with Python 3.12
- `z3` or `cvc5` or both -- on mac with Homebrew, can install with `brew install z3 cvc5`
```
git clone --recursive https://github.com/namin/holey.git
conda create -n holey python=3.12
conda activate holey
pip install -e ".[test,ollama,anthropic]"
```
```
python puzzle_solver.py --help
python puzzle_solver.py --name-prefix HelloWorld:0
python puzzle_solver.py >results.txt 2>&1
```
Set `ANTHROPIC_API_KEY` for Claude, or default to local Ollama.
```
python puzzle_solver.py --name-prefix ListIn:1 --llm
```
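The LLM heuristic can be understood as generate-and-check: candidates proposed by a model are only accepted if the puzzle's own `sat` function validates them, so the LLM's guesses never need to be trusted. A hedged sketch with a stand-in for the model call (none of these names are holey's API):

```python
def sat(x: str) -> bool:
    # Example puzzle: a string of length 3 containing exactly three "a"s.
    return x.count("a") == 3 and len(x) == 3

def fake_llm(prompt: str) -> list:
    # Stand-in for an Ollama/Anthropic call that would return candidate
    # solutions extracted from the model's reply.
    return ["abc", "aaa", "aaaa"]

def solve_with_llm(sat, prompt):
    """Return the first LLM candidate that the puzzle itself accepts."""
    for cand in fake_llm(prompt):
        try:
            if sat(cand):
                return cand
        except Exception:
            pass  # a bad candidate may crash the checker; just skip it
    return None

print(solve_with_llm(sat, "find a string of length 3 with three a's"))
```

Because every candidate is re-checked by `sat`, an unreliable model can only cost time, never correctness.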
The symbolic execution alone currently solves:
- 53% (192 out of 360) of `int` puzzles,
- 20% (71 out of 363) of `str` puzzles,
- 36% (263 out of 723) overall,
with the following errors:
- 54 timeouts after 3 seconds at staging time (while generating the SMTLIB program),
- 210 errors at staging time,
- 66 SMTLIB programs returning `sat` but the original `sat` function failing on the synthesized model input,
- 130 SMTLIB programs returning non-`sat` (e.g. `unsat`, `unknown`, or timing out after 2 seconds),
- 992 (out of 1715) puzzles not yet even attempted because their type is not `int` or `str`, such as `float`, `list` (of various specializations), etc.
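For intuition about the `sat`/non-`sat` counts above, here is a hedged sketch (not holey's actual backend) of the round trip: an SMT-LIB program is handed to `z3` as a batch process, and the returned model is re-checked against the original `sat` function. The SMT-LIB text and parsing here are illustrative, and the sketch falls back to a canned solver reply when `z3` is not installed:

```python
import os
import shutil
import subprocess
import tempfile

# Hypothetical SMT-LIB encoding of a puzzle like: def sat(x: int): return x * 7 == 42
smtlib = """(set-logic QF_LIA)
(declare-const x Int)
(assert (= (* x 7) 42))
(check-sat)
(get-value (x))
"""

def sat(x: int) -> bool:
    return x * 7 == 42

def run_solver(program: str) -> str:
    """Run z3 in batch mode on an SMT-LIB program; canned reply if z3 is absent."""
    if shutil.which("z3") is None:
        return "sat\n((x 6))\n"  # what z3 prints for this particular program
    with tempfile.NamedTemporaryFile("w", suffix=".smt2", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        return subprocess.run(["z3", path], capture_output=True, text=True).stdout
    finally:
        os.unlink(path)

out = run_solver(smtlib)
solved = False
if out.startswith("sat"):
    x = int(out.split("((x ")[1].split(")")[0])  # crude model parse for this example
    solved = sat(x)  # re-check: guards against encoding/model mismatches
print("solved:", solved)
```

The final re-check is exactly where the "66 SMTLIB programs returning `sat` but the original `sat` function failing" cases are caught.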
See `log/results.txt` for a detailed stdout log of the current run.
```
.
├── README.md
├── benchmarks
│   └── PythonProgrammingPuzzles   # benchmark added as git submodule
├── holey
│   ├── __init__.py
│   ├── backend.py                 # backend to SMTLIB batch processes
│   ├── core.py                    # includes tracer, symbolic classes, ...
│   ├── llm.py                     # support for LLM generation and code extraction
│   └── preprocessor.py            # includes node transformer and sat driver
├── log
│   └── results.txt                # example run
├── puzzle_solver.py               # main routine for benchmark solver
├── pyproject.toml
└── tests
    └── test_core.py               # ran with `python -m pytest`, basic and LLM-generated
```
I need help in completely fleshing out the symbolic executor as well as designing and implementing LLM-based heuristics to complement it. See the contributing guidelines, in particular discussing a workflow to find and fix issues driven by the benchmarks.