test-python: failed

- Job ID: 019c7e62-9e33-b561-6ceb-644a11add684
- Created: 2026-02-21 04:08:38 UTC
- Updated: 2026-02-21 04:08:38 UTC
- Duration: 3m 36s
- Source Ref: 844d2169f9020f68d43d5a8587683b94a62346ea
- Source URL: https://github.com/catalystcommunity/reactorcide.git
- Runner Image: 10.16.0.1:5000/public/reactorcide/runnerbase:dev
- Priority: 10
- Queue: reactorcide-jobs
Logs
Cloning into '/workspace'...
Updating files: 100% (305/305), done.
=== Running Python Tests ===
Using CPython 3.13.12 interpreter at: /usr/local/bin/python3.13
Creating virtual environment at: .venv
Building runnerlib @ file:///workspace/runnerlib
Downloading cryptography (4.3MiB)
Downloading pygments (1.2MiB)
Downloaded pygments
Downloaded cryptography
Built runnerlib @ file:///workspace/runnerlib
warning: Failed to hardlink files; falling back to full copy. This may lead to degraded performance.
If the cache and target directories are on different filesystems, hardlinking may not be supported.
If this is intentional, set `export UV_LINK_MODE=copy` or use `--link-mode=copy` to suppress this warning.
Installed 22 packages in 124ms
============================= test session starts ==============================
platform linux -- Python 3.13.12, pytest-8.3.5, pluggy-1.6.0
rootdir: /workspace/runnerlib
configfile: pyproject.toml
plugins: cov-7.0.0
collected 397 items
tests/test_config.py .................... [ 5%]
tests/test_container_advanced.py ......... [ 7%]
tests/test_container_isolation.py ...F [ 8%]
tests/test_container_validation.py ................... [ 13%]
tests/test_directory_operations.py ......F..... [ 16%]
tests/test_docker_execution.py FFFFFFFFFF [ 18%]
tests/test_dynamic_secret_masking.py FFF [ 19%]
tests/test_dynamic_secrets.py FsF [ 20%]
tests/test_eval.py ..................................................... [ 33%]
............. [ 36%]
tests/test_eval_cli.py .........FFF....F.... [ 42%]
tests/test_git_operations.py ........F. [ 44%]
tests/test_git_ops.py ....... [ 46%]
tests/test_integration.py ...F....... [ 49%]
tests/test_job_isolation.py FF.F [ 50%]
tests/test_plugins.py ....................... [ 55%]
tests/test_register_secret.py ............ [ 58%]
tests/test_secrets.py ................... [ 63%]
tests/test_secrets_local.py ............................ [ 70%]
tests/test_secrets_resolver.py ............................. [ 78%]
tests/test_secrets_server.py ........ [ 80%]
tests/test_source_preparation.py .FF.FF.FF...... [ 83%]
tests/test_validation.py ......................F.... [ 90%]
tests/test_workflow.py ..................................... [100%]
=================================== FAILURES ===================================
______ TestContainerIsolation.test_work_directory_isolation_with_prepare _______
self = <runnerlib.tests.test_container_isolation.TestContainerIsolation object at 0x7f87abef9e00>
def test_work_directory_isolation_with_prepare(self):
"""Test that prepare_job_directory respects work directory changes."""
with tempfile.TemporaryDirectory() as work_dir1:
with tempfile.TemporaryDirectory() as work_dir2:
original_cwd = os.getcwd()
try:
# Prepare job 1
os.chdir(work_dir1)
config1 = RunnerConfig(
code_dir="/job/src",
job_dir="/job/src",
job_command="echo job1",
runner_image="alpine:latest"
)
job_path1 = prepare_job_directory(config1)
assert job_path1.exists()
> assert str(job_path1).startswith(work_dir1)
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0x7f87abb72fd0>('/tmp/tmphyh4bqab')
E + where <built-in method startswith of str object at 0x7f87abb72fd0> = '/job'.startswith
E + where '/job' = str(PosixPath('/job'))
tests/test_container_isolation.py:158: AssertionError
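The assertion fails because prepare_job_directory returned the configured absolute path '/job' instead of a path under the temporary working directory. A stdlib sketch of the pathlib behavior behind this (the function name here is illustrative, not runnerlib's API):

```python
from pathlib import Path

def resolve_job_path(configured: str) -> Path:
    """Join a configured path against the current working directory.

    Note the pathlib pitfall driving the failure above: joining with an
    absolute right-hand operand discards the left-hand side entirely,
    so an absolute job_dir like '/job/src' can never land under a
    tempdir cwd, and startswith(work_dir1) is always False.
    """
    return (Path.cwd() / configured).resolve()
```

With job_dir="/job/src", the tempdir prefix is silently dropped; only a relative value such as "job/src" could satisfy the startswith check in this test.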
__________ TestDirectoryOperations.test_cleanup_removes_job_directory __________
self = <runnerlib.tests.test_directory_operations.TestDirectoryOperations object at 0x7f87abf29d00>
def test_cleanup_removes_job_directory(self):
"""Test that cleanup removes the job directory."""
job_dir = Path("./job")
job_dir.mkdir(exist_ok=True)
# Create some files
(job_dir / "file.txt").write_text("Content")
(job_dir / "subdir").mkdir(exist_ok=True)
(job_dir / "subdir" / "nested.txt").write_text("Nested")
# Perform cleanup
cleanup_job_directory()
# Job directory should be gone
> assert not job_dir.exists()
E AssertionError: assert not True
E + where True = exists()
E + where exists = PosixPath('job').exists
tests/test_directory_operations.py:172: AssertionError
_________________________ test_basic_docker_execution __________________________
def test_basic_docker_execution():
"""Test that we can execute a simple container with Docker."""
# Create a temporary working directory
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory structure
job_dir = work_dir / "job"
job_dir.mkdir()
# Create a simple test script
test_script = job_dir / "test.sh"
test_script.write_text("""#!/bin/sh
echo "Hello from Docker container"
echo "Current directory: $(pwd)"
echo "Job directory contents:"
ls -la /job/
exit 0
""")
test_script.chmod(0o755)
# Run the container using runnerlib CLI
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh /job/test.sh",
"--code-dir", "/job",
"--job-dir", "/job",
],
capture_output=True,
text=True,
cwd=work_dir, # Run from the temp directory
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("STDOUT:", result.stdout)
print("STDERR:", result.stderr)
print("Return code:", result.returncode)
# Verify the execution
> assert result.returncode == 0, f"Container execution failed with code {result.returncode}"
E AssertionError: Container execution failed with code 1
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:51: AssertionError
----------------------------- Captured stdout call -----------------------------
STDOUT:
STDERR: 2026-02-21T04:09:52.012995+00:00 Configuration validation failed:
2026-02-21T04:09:52.013106+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
⚠️ Configuration warnings:
• runner_image: Using 'latest' tag or no tag specified
💡 Consider using a specific version tag for reproducible builds
Return code: 1
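Every failure in this file shares one root cause: the docker CLI is absent from the runner image's PATH, so configuration validation fails before any container runs. A stdlib-only guard could skip such tests rather than fail them; this is a hypothetical sketch (in the pytest suite above, pytest.mark.skipif would play the same role):

```python
import shutil
import unittest

# Probe PATH once; shutil.which returns None when docker is not installed.
DOCKER_AVAILABLE = shutil.which("docker") is not None

# Decorator that skips a test (or test class) unless docker was found.
requires_docker = unittest.skipUnless(
    DOCKER_AVAILABLE, "docker is not available in PATH"
)

@requires_docker
class DockerExecutionTests(unittest.TestCase):
    def test_basic_docker_execution(self):
        ...  # would exercise the container as in the suite above
```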
____________________ test_docker_with_environment_variables ____________________
def test_docker_with_environment_variables():
"""Test Docker execution with environment variables."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create script that uses environment variables
test_script = job_dir / "env_test.sh"
test_script.write_text("""#!/bin/sh
echo "TEST_VAR=$TEST_VAR"
echo "CUSTOM_VAR=$CUSTOM_VAR"
if [ "$TEST_VAR" = "test_value" ]; then
echo "Environment variables work!"
exit 0
else
echo "Environment variables failed"
exit 1
fi
""")
test_script.chmod(0o755)
# Create env file (use relative path from working directory)
env_file = job_dir / "test.env"
env_file.write_text("""# Test environment
TEST_VAR=test_value
CUSTOM_VAR=custom_value
""")
# Run with environment file - needs to be relative path starting with ./job/
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh /job/env_test.sh",
"--code-dir", "/job",
"--job-dir", "/job",
"--job-env", "./job/test.env",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("ENV TEST STDOUT:", result.stdout)
print("ENV TEST STDERR:", result.stderr)
> assert result.returncode == 0, f"Environment test failed with code {result.returncode}"
E AssertionError: Environment test failed with code 1
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:108: AssertionError
----------------------------- Captured stdout call -----------------------------
ENV TEST STDOUT:
ENV TEST STDERR: 2026-02-21T04:09:52.513571+00:00 Configuration validation failed:
2026-02-21T04:09:52.513652+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
⚠️ Configuration warnings:
• runner_image: Using 'latest' tag or no tag specified
💡 Consider using a specific version tag for reproducible builds
___________________________ test_docker_with_python ____________________________
def test_docker_with_python():
"""Test running Python code in a container."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create Python script
py_script = job_dir / "test.py"
py_script.write_text("""
import sys
import os
print(f"Python version: {sys.version.split()[0]}")
print(f"Working directory: {os.getcwd()}")
print(f"Job files: {os.listdir('/job')}")
# Test that we can write output
with open('/job/output.txt', 'w') as f:
f.write("Test output from Python container\\n")
print("Successfully wrote output file")
sys.exit(0)
""")
# Run Python container
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "python:3.11-alpine",
"--job-command", "python /job/test.py",
"--code-dir", "/job",
"--job-dir", "/job",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("PYTHON TEST STDOUT:", result.stdout)
print("PYTHON TEST STDERR:", result.stderr)
> assert result.returncode == 0, f"Python container failed with code {result.returncode}"
E AssertionError: Python container failed with code 1
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'python:3.11...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_docker_execution.py:162: AssertionError
----------------------------- Captured stdout call -----------------------------
PYTHON TEST STDOUT:
PYTHON TEST STDERR: 2026-02-21T04:09:53.218561+00:00 Configuration validation failed:
2026-02-21T04:09:53.218678+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
_________________________ test_docker_failure_handling _________________________
def test_docker_failure_handling():
"""Test that container failures are properly reported."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create a script that fails
fail_script = job_dir / "fail.sh"
fail_script.write_text("""#!/bin/sh
echo "This script will fail"
echo "Error: Something went wrong" >&2
exit 42
""")
fail_script.chmod(0o755)
# Run container that should fail
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh /job/fail.sh",
"--code-dir", "/job",
"--job-dir", "/job",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("FAIL TEST STDOUT:", result.stdout)
print("FAIL TEST STDERR:", result.stderr)
print("FAIL TEST RETURN CODE:", result.returncode)
# Should propagate the exit code
> assert result.returncode == 42, f"Expected exit code 42, got {result.returncode}"
E AssertionError: Expected exit code 42, got 1
E assert 1 == 42
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:212: AssertionError
----------------------------- Captured stdout call -----------------------------
FAIL TEST STDOUT:
FAIL TEST STDERR: 2026-02-21T04:09:53.972497+00:00 Configuration validation failed:
2026-02-21T04:09:53.972611+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
⚠️ Configuration warnings:
• runner_image: Using 'latest' tag or no tag specified
💡 Consider using a specific version tag for reproducible builds
FAIL TEST RETURN CODE: 1
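This test expects the container's exit status (42) to survive through the CLI wrapper rather than collapse into the generic 1 produced by validation failure. A minimal sketch of that propagation pattern, assuming a plain subprocess wrapper (not runnerlib's actual implementation):

```python
import subprocess
import sys

def run_and_propagate(cmd: list[str]) -> int:
    """Run cmd and return its exit status unchanged.

    A caller can then sys.exit() with it, so a job exiting 42
    surfaces as 42 to the outer process instead of a generic 1.
    """
    try:
        completed = subprocess.run(cmd)
    except FileNotFoundError:
        return 127  # conventional "command not found" status
    return completed.returncode

if __name__ == "__main__":
    sys.exit(run_and_propagate(sys.argv[1:]))
```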
____________________________ test_docker_available _____________________________
def test_docker_available():
"""Test that Docker is available and working."""
> result = subprocess.run(
["docker", "version", "--format", "{{.Server.Version}}"],
capture_output=True,
text=True
)
tests/test_docker_execution.py:242:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.13/subprocess.py:554: in run
with Popen(*popenargs, **kwargs) as process:
/usr/local/lib/python3.13/subprocess.py:1039: in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Popen: returncode: 255 args: ['docker', 'version', '--format', '{{.Server.V...>
args = ['docker', 'version', '--format', '{{.Server.Version}}']
executable = b'docker', preexec_fn = None, close_fds = True, pass_fds = ()
cwd = None, env = None, startupinfo = None, creationflags = 0, shell = False
p2cread = -1, p2cwrite = -1, c2pread = 11, c2pwrite = 12, errread = 13
errwrite = 14, restore_signals = True, gid = None, gids = None, uid = None
umask = -1, start_new_session = False, process_group = -1
def _execute_child(self, args, executable, preexec_fn, close_fds,
pass_fds, cwd, env,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite,
restore_signals,
gid, gids, uid, umask,
start_new_session, process_group):
"""Execute program (POSIX version)"""
if isinstance(args, (str, bytes)):
args = [args]
elif isinstance(args, os.PathLike):
if shell:
raise TypeError('path-like args is not allowed when '
'shell is [REDACTED]')
args = [args]
else:
args = list(args)
if shell:
# On Android the default shell is at '/system/bin/sh'.
unix_shell = ('/system/bin/sh' if
hasattr(sys, 'getandroidapilevel') else '/bin/sh')
args = [unix_shell, "-c"] + args
if executable:
args[0] = executable
if executable is None:
executable = args[0]
sys.audit("subprocess.Popen", executable, args, cwd, env)
if (_USE_POSIX_SPAWN
and os.path.dirname(executable)
and preexec_fn is None
and (not close_fds or _HAVE_POSIX_SPAWN_CLOSEFROM)
and not pass_fds
and cwd is None
and (p2cread == -1 or p2cread > 2)
and (c2pwrite == -1 or c2pwrite > 2)
and (errwrite == -1 or errwrite > 2)
and not start_new_session
and process_group == -1
and gid is None
and gids is None
and uid is None
and umask < 0):
self._posix_spawn(args, executable, env, restore_signals, close_fds,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite)
return
orig_executable = executable
# For transferring possible exec failure from child to parent.
# Data format: "exception name:hex errno:description"
# Pickle is not used; it is complex and involves memory allocation.
errpipe_read, errpipe_write = os.pipe()
# errpipe_write must not be in the standard io 0, 1, or 2 fd range.
low_fds_to_close = []
while errpipe_write < 3:
low_fds_to_close.append(errpipe_write)
errpipe_write = os.dup(errpipe_write)
for low_fd in low_fds_to_close:
os.close(low_fd)
try:
try:
# We must avoid complex work that could involve
# malloc or free in the child process to avoid
# potential deadlocks, thus we do all this here.
# and pass it to fork_exec()
if env is not None:
env_list = []
for k, v in env.items():
k = os.fsencode(k)
if b'=' in k:
raise ValueError("illegal environment variable name")
env_list.append(k + b'=' + os.fsencode(v))
else:
env_list = None # Use execv instead of execve.
executable = os.fsencode(executable)
if os.path.dirname(executable):
executable_list = (executable,)
else:
# This matches the behavior of os._execvpe().
executable_list = tuple(
os.path.join(os.fsencode(dir), executable)
for dir in os.get_exec_path(env))
fds_to_keep = set(pass_fds)
fds_to_keep.add(errpipe_write)
self.pid = _fork_exec(
args, executable_list,
close_fds, tuple(sorted(map(int, fds_to_keep))),
cwd, env_list,
p2cread, p2cwrite, c2pread, c2pwrite,
errread, errwrite,
errpipe_read, errpipe_write,
restore_signals, start_new_session,
process_group, gid, gids, uid, umask,
preexec_fn, _USE_VFORK)
self._child_created = True
finally:
# be sure the FD is closed no matter what
os.close(errpipe_write)
self._close_pipe_fds(p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite)
# Wait for exec to fail or succeed; possibly raising an
# exception (limited in size)
errpipe_data = bytearray()
while True:
part = os.read(errpipe_read, 50000)
errpipe_data += part
if not part or len(errpipe_data) > 50000:
break
finally:
# be sure the FD is closed no matter what
os.close(errpipe_read)
if errpipe_data:
try:
pid, sts = os.waitpid(self.pid, 0)
if pid == self.pid:
self._handle_exitstatus(sts)
else:
self.returncode = sys.maxsize
except ChildProcessError:
pass
try:
exception_name, hex_errno, err_msg = (
errpipe_data.split(b':', 2))
# The encoding here should match the encoding
# written in by the subprocess implementations
# like _posixsubprocess
err_msg = err_msg.decode()
except ValueError:
exception_name = b'SubprocessError'
hex_errno = b'0'
err_msg = 'Bad exception data from child: {!r}'.format(
bytes(errpipe_data))
child_exception_type = getattr(
builtins, exception_name.decode('ascii'),
SubprocessError)
if issubclass(child_exception_type, OSError) and hex_errno:
errno_num = int(hex_errno, 16)
if err_msg == "noexec:chdir":
err_msg = ""
# The error must be from chdir(cwd).
err_filename = cwd
elif err_msg == "noexec":
err_msg = ""
err_filename = None
else:
err_filename = orig_executable
if errno_num != 0:
err_msg = os.strerror(errno_num)
if err_filename is not None:
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFoundError: [Errno 2] No such file or directory: 'docker'
/usr/local/lib/python3.13/subprocess.py:1991: FileNotFoundError
____________________ test_container_with_working_directory _____________________
def test_container_with_working_directory():
"""Test that working directory is set correctly in container."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory with subdirectory
job_dir = work_dir / "job"
job_dir.mkdir()
sub_dir = job_dir / "subdir"
sub_dir.mkdir()
# Create test file in subdirectory
test_file = sub_dir / "data.txt"
test_file.write_text("test data")
# Create script that checks working directory
test_script = job_dir / "pwd_test.sh"
test_script.write_text("""#!/bin/sh
echo "Current directory: $(pwd)"
echo "Directory contents:"
ls -la
echo "Subdir exists:"
ls -d subdir
exit 0
""")
test_script.chmod(0o755)
# Run with working directory set to /job
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh pwd_test.sh", # Note: no /job/ prefix since we're in that dir
"--code-dir", "/job",
"--job-dir", "/job",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("WORKING DIR TEST:", result.stdout)
print("STDERR:", result.stderr)
> assert result.returncode == 0, f"Container execution failed: {result.stderr}"
E AssertionError: Container execution failed: 2026-02-21T04:09:54.899122+00:00 Configuration validation failed:
E 2026-02-21T04:09:54.899219+00:00 ❌ Configuration has errors:
E • system: docker is not available in PATH
E 💡 Install docker: https://docs.docker.com/get-docker/
E
E ⚠️ Configuration warnings:
E • runner_image: Using 'latest' tag or no tag specified
E 💡 Consider using a specific version tag for reproducible builds
E
E
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:296: AssertionError
----------------------------- Captured stdout call -----------------------------
WORKING DIR TEST:
STDERR: 2026-02-21T04:09:54.899122+00:00 Configuration validation failed:
2026-02-21T04:09:54.899219+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
⚠️ Configuration warnings:
• runner_image: Using 'latest' tag or no tag specified
💡 Consider using a specific version tag for reproducible builds
______________________________ test_dry_run_mode _______________________________
def test_dry_run_mode():
"""Test dry-run mode doesn't actually execute container."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create a script that should NOT run
test_script = job_dir / "should_not_run.sh"
test_script.write_text("""#!/bin/sh
echo "ERROR: This should not execute in dry-run mode!"
exit 1
""")
test_script.chmod(0o755)
# Run in dry-run mode
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh /job/should_not_run.sh",
"--code-dir", "/job",
"--job-dir", "/job",
"--dry-run",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("DRY RUN OUTPUT:", result.stdout)
print("DRY RUN STDERR:", result.stderr)
> assert result.returncode == 0, f"Dry-run failed: {result.stderr}"
E AssertionError: Dry-run failed: 2026-02-21T04:09:55.378357+00:00 Configuration validation failed:
E 2026-02-21T04:09:55.378444+00:00 ❌ Configuration has errors:
E • system: docker is not available in PATH
E 💡 Install docker: https://docs.docker.com/get-docker/
E
E ⚠️ Configuration warnings:
E • runner_image: Using 'latest' tag or no tag specified
E 💡 Consider using a specific version tag for reproducible builds
E
E
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:338: AssertionError
----------------------------- Captured stdout call -----------------------------
DRY RUN OUTPUT:
DRY RUN STDERR: 2026-02-21T04:09:55.378357+00:00 Configuration validation failed:
2026-02-21T04:09:55.378444+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
⚠️ Configuration warnings:
• runner_image: Using 'latest' tag or no tag specified
💡 Consider using a specific version tag for reproducible builds
_____________________________ test_node_container ______________________________
def test_node_container():
"""Test Node.js container execution."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create Node.js script
js_script = job_dir / "test.js"
js_script.write_text("""
console.log('Node version:', process.version);
console.log('Platform:', process.platform);
console.log('Working dir:', process.cwd());
process.exit(0);
""")
# Run Node container
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "node:18-alpine",
"--job-command", "node /job/test.js",
"--code-dir", "/job",
"--job-dir", "/job",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("NODE TEST OUTPUT:", result.stdout)
print("NODE TEST STDERR:", result.stderr)
> assert result.returncode == 0, f"Node container failed: {result.stderr}"
E AssertionError: Node container failed: 2026-02-21T04:09:56.108515+00:00 Configuration validation failed:
E 2026-02-21T04:09:56.108623+00:00 ❌ Configuration has errors:
E • system: docker is not available in PATH
E 💡 Install docker: https://docs.docker.com/get-docker/
E
E
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'node:18-alp...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_docker_execution.py:382: AssertionError
----------------------------- Captured stdout call -----------------------------
NODE TEST OUTPUT:
NODE TEST STDERR: 2026-02-21T04:09:56.108515+00:00 Configuration validation failed:
2026-02-21T04:09:56.108623+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
____________________ test_container_with_multiple_env_vars _____________________
def test_container_with_multiple_env_vars():
"""Test passing multiple environment variables via CLI."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create test script
test_script = job_dir / "multi_env.sh"
test_script.write_text("""#!/bin/sh
echo "VAR1=$VAR1"
echo "VAR2=$VAR2"
echo "VAR3=$VAR3"
if [ "$VAR1" = "value1" ] && [ "$VAR2" = "value2" ] && [ "$VAR3" = "value3" ]; then
echo "All environment variables set correctly!"
exit 0
else
echo "Environment variables not set correctly"
exit 1
fi
""")
test_script.chmod(0o755)
# Run with multiple env vars in a single --job-env (newline separated)
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh /job/multi_env.sh",
"--code-dir", "/job",
"--job-dir", "/job",
"--job-env", "VAR1=value1\nVAR2=value2\nVAR3=value3",
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
> assert result.returncode == 0, f"Multi-env test failed: {result.stderr}"
E AssertionError: Multi-env test failed: 2026-02-21T04:09:56.896037+00:00 Configuration validation failed:
E 2026-02-21T04:09:56.896142+00:00 ❌ Configuration has errors:
E • system: docker is not available in PATH
E 💡 Install docker: https://docs.docker.com/get-docker/
E
E ⚠️ Configuration warnings:
E • runner_image: Using 'latest' tag or no tag specified
E 💡 Consider using a specific version tag for reproducible builds
E
E
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:429: AssertionError
________________________ test_selective_secret_masking _________________________
def test_selective_secret_masking():
"""Test selective masking of secrets using --secret-values-list."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create test script that prints environment variables
test_script = job_dir / "selective_test.sh"
test_script.write_text("""#!/bin/sh
echo "API_KEY=$API_KEY"
echo "PUBLIC_VALUE=$PUBLIC_VALUE"
echo "SECRET_TOKEN=$SECRET_TOKEN"
echo "CONFIG_PATH=$CONFIG_PATH"
exit 0
""")
test_script.chmod(0o755)
# Run with environment vars and explicitly mark only some as secrets
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "alpine:latest",
"--job-command", "sh /job/selective_test.sh",
"--code-dir", "/job",
"--job-dir", "/job",
"--job-env", "API_KEY=my-secret-api-key-123\nPUBLIC_VALUE=not-a-secret\nSECRET_TOKEN=super-secret-token\nCONFIG_PATH=/etc/config",
"--secret-values-list", "my-secret-api-key-123,super-secret-token", # Only mask these specific values
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
> assert result.returncode == 0, f"Selective masking test failed: {result.stderr}"
E AssertionError: Selective masking test failed: 2026-02-21T04:09:57.640289+00:00 Configuration validation failed:
E 2026-02-21T04:09:57.640454+00:00 ❌ Configuration has errors:
E • system: docker is not available in PATH
E 💡 Install docker: https://docs.docker.com/get-docker/
E
E ⚠️ Configuration warnings:
E • runner_image: Using 'latest' tag or no tag specified
E 💡 Consider using a specific version tag for reproducible builds
E
E
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'alpine:late...mage: Using 'latest' tag or no tag specified\n 💡 Consider using a specific version tag for reproducible builds\n\n").returncode
tests/test_docker_execution.py:474: AssertionError
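Every docker-execution failure in this log stems from the same environment gap: the runner image has no `docker` on PATH, so the CLI's configuration validation fails before any test logic runs. A common guard (a minimal sketch, assuming pytest; `requires_docker` and `test_needs_docker` are illustrative names, not part of the repo) skips these integration tests instead of failing them on docker-less hosts:

```python
import shutil

import pytest

# Skip docker-dependent integration tests when the docker CLI is
# missing from PATH -- the exact condition this log reports.
requires_docker = pytest.mark.skipif(
    shutil.which("docker") is None,
    reason="docker is not available in PATH",
)


@requires_docker
def test_needs_docker():
    # Placeholder body; real tests would invoke the CLI as above.
    assert shutil.which("docker") is not None
```

Applied as a module-level marker, this turns the whole batch of failures above into skips on runners like this one.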
________________________ test_value_printed_then_masked ________________________
def test_value_printed_then_masked():
"""Test that dynamic registration masks values in subsequent output.
Due to the nature of streaming output and socket communication, we cannot
guarantee that output printed immediately before registration will be unmasked.
However, we CAN demonstrate that:
1. Values not in the initial secrets list are not masked initially
2. After dynamic registration, those values ARE masked in new output
"""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create a script that demonstrates dynamic masking
test_script = job_dir / "show_masking.py"
test_script.write_text("""#!/usr/bin/env python3
import socket
import json
import struct
import os
import time
import sys
import subprocess
# This is our sensitive value that we'll get at runtime
api_token = "UNIQUEVALUE-abc123xyz789-ENDUNIQUE"
print("=" * 50)
print("DEMONSTRATION OF DYNAMIC SECRET MASKING")
print("=" * 50)
# First, show that without registration, the value appears in subprocess output
print("\\n1. Running subprocess BEFORE registration:")
sys.stdout.flush()
result = subprocess.run(
["sh", "-c", f"echo 'Token is: {api_token}'"],
capture_output=True,
text=True
)
print(f" Subprocess output: {result.stdout.strip()}")
sys.stdout.flush()
# Now register this value as a secret
socket_path = os.environ.get('REACTORCIDE_SECRETS_SOCKET')
if socket_path:
print(f"\\n2. Registering secret via socket...")
sys.stdout.flush()
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(socket_path)
msg = json.dumps({'action': 'register', 'secrets': [api_token]}).encode()
sock.send(struct.pack('!I', len(msg)))
sock.send(msg)
response = sock.recv(1024)
print(f" Registration response: {response.decode().strip()}")
sock.close()
# Give it a moment to process
time.sleep(0.2)
# Now show that the value IS masked in new output
print("\\n3. After registration, value is masked:")
print(f" API Token: {api_token}")
print(f" Authorization: Bearer {api_token}")
sys.stdout.flush()
else:
print("ERROR: No secrets socket available!")
exit(1)
print("\\n" + "=" * 50)
print("TEST COMPLETE")
print("=" * 50)
""")
test_script.chmod(0o755)
# Run the job with an explicit empty secrets list to prevent default masking
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "python:3.9-alpine",
"--job-command", "python3 -u /job/show_masking.py", # -u for unbuffered output
"--code-dir", "/job",
"--job-dir", "/job",
"--secret-values-list", "", # Empty list prevents default masking of all values
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("\n--- OUTPUT ---")
print(result.stdout)
print("\n--- ERRORS ---")
print(result.stderr)
# Verify the behavior
> assert result.returncode == 0, f"Script failed with code {result.returncode}"
E AssertionError: Script failed with code 1
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'python:3.9-...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_dynamic_secret_masking.py:110: AssertionError
----------------------------- Captured stdout call -----------------------------
--- OUTPUT ---
--- ERRORS ---
2026-02-21T04:09:58.124103+00:00 Configuration validation failed:
2026-02-21T04:09:58.124176+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
________________ test_multiple_values_masked_after_registration ________________
def test_multiple_values_masked_after_registration():
"""Test masking multiple values registered at different times."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create test script
test_script = job_dir / "progressive_masking.sh"
test_script.write_text("""#!/bin/sh
# Function to register a secret
register_secret() {
python3 -c "
import socket, json, struct, os
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(os.environ['REACTORCIDE_SECRETS_SOCKET'])
msg = json.dumps({'action': 'register', 'secrets': ['$1']}).encode()
sock.send(struct.pack('!I', len(msg)))
sock.send(msg)
sock.close()
"
sleep 0.5
}
# First secret
SECRET1="database-pass-123"
echo "Step 1: Database password is: $SECRET1"
# Register first secret
register_secret "$SECRET1"
echo "Step 2: Database password is: $SECRET1"
# Second secret
SECRET2="api-key-456"
echo "Step 3: API key is: $SECRET2"
# Register second secret
register_secret "$SECRET2"
echo "Step 4: Database password is: $SECRET1"
echo "Step 5: API key is: $SECRET2"
# Third secret
SECRET3="webhook-token-789"
echo "Step 6: Webhook token is: $SECRET3"
register_secret "$SECRET3"
echo "Step 7: All secrets:"
echo " Database: $SECRET1"
echo " API: $SECRET2"
echo " Webhook: $SECRET3"
""")
test_script.chmod(0o755)
# Run the job with an explicit empty secrets list
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "python:3.9-alpine",
"--job-command", "sh /job/progressive_masking.sh",
"--code-dir", "/job",
"--job-dir", "/job",
"--secret-values-list", "", # Empty list prevents default masking
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("\n--- OUTPUT ---")
print(result.stdout)
> assert result.returncode == 0
E AssertionError: assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'python:3.9-...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_dynamic_secret_masking.py:209: AssertionError
----------------------------- Captured stdout call -----------------------------
--- OUTPUT ---
__________________ test_immediate_masking_in_streaming_output __________________
def test_immediate_masking_in_streaming_output():
"""Test that masking applies immediately to streaming output."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create a script that outputs continuously
test_script = job_dir / "streaming_test.py"
test_script.write_text("""#!/usr/bin/env python3
import socket
import json
import struct
import os
import time
import sys
# Flush output immediately
sys.stdout.flush()
secret_value = "streaming-secret-999"
# Output the secret multiple times before registration
for i in range(3):
print(f"Before [{i}]: secret={secret_value}")
sys.stdout.flush()
time.sleep(0.1)
# Register the secret
socket_path = os.environ.get('REACTORCIDE_SECRETS_SOCKET')
if socket_path:
print("\\nRegistering secret...")
sys.stdout.flush()
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(socket_path)
msg = json.dumps({'action': 'register', 'secrets': [secret_value]}).encode()
sock.send(struct.pack('!I', len(msg)))
sock.send(msg)
response = sock.recv(1024)
sock.close()
print("Secret registered!\\n")
sys.stdout.flush()
# Wait for registration to process
time.sleep(0.5)
# Output the secret multiple times after registration
for i in range(3):
print(f"After [{i}]: secret={secret_value}")
sys.stdout.flush()
time.sleep(0.1)
""")
test_script.chmod(0o755)
# Run the job with an explicit empty secrets list
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "python:3.9-alpine",
"--job-command", "python3 /job/streaming_test.py",
"--code-dir", "/job",
"--job-dir", "/job",
"--secret-values-list", "", # Empty list prevents default masking
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("\n--- STREAMING OUTPUT ---")
print(result.stdout)
> assert result.returncode == 0
E AssertionError: assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'python:3.9-...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_dynamic_secret_masking.py:305: AssertionError
----------------------------- Captured stdout call -----------------------------
--- STREAMING OUTPUT ---
_______________________ test_dynamic_secret_registration _______________________
def test_dynamic_secret_registration():
"""Test that jobs can register secrets dynamically via socket."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create a test script that fetches and uses a secret
test_script = job_dir / "dynamic_secret_test.sh"
test_script.write_text("""#!/bin/sh
# Simulate fetching a secret from an external service
FETCHED_SECRET="super-dynamic-secret-12345"
echo "Before registration: FETCHED_SECRET=$FETCHED_SECRET"
# Register the secret so it gets masked
if [ -n "$REACTORCIDE_SECRETS_SOCKET" ]; then
echo "Socket available at: $REACTORCIDE_SECRETS_SOCKET"
# Use Python to register the secret
python3 -c "
import socket, json, struct
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect('$REACTORCIDE_SECRETS_SOCKET')
msg = json.dumps({'action': 'register', 'secrets': ['$FETCHED_SECRET']}).encode()
sock.send(struct.pack('!I', len(msg)))
sock.send(msg)
response = sock.recv(1024)
print('Registration response:', response.decode())
sock.close()
"
# Give the server a moment to process
sleep 0.5
else
echo "Warning: No secrets socket available"
fi
# Now use the secret again - it should be masked
echo "After registration: FETCHED_SECRET=$FETCHED_SECRET"
echo "Using secret in command: curl -H 'Authorization: Bearer $FETCHED_SECRET' example.com"
""")
test_script.chmod(0o755)
# Run the container with our test script
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "python:3.9-alpine", # Has Python for our registration
"--job-command", "sh /job/dynamic_secret_test.sh",
"--code-dir", "/job",
"--job-dir", "/job",
"--secret-values-list", "", # Empty list to prevent default masking
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("STDOUT:", result.stdout)
print("STDERR:", result.stderr)
> assert result.returncode == 0, f"Dynamic secret test failed with code {result.returncode}"
E AssertionError: Dynamic secret test failed with code 1
E assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'python:3.9-...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_dynamic_secrets.py:75: AssertionError
----------------------------- Captured stdout call -----------------------------
STDOUT:
STDERR: 2026-02-21T04:10:00.136236+00:00 Configuration validation failed:
2026-02-21T04:10:00.136400+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
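The registration scripts in these tests all hand-roll the same wire format: a 4-byte big-endian length prefix followed by a JSON payload, sent over the `REACTORCIDE_SECRETS_SOCKET` Unix socket. A small helper (a sketch; `register_secrets` is a hypothetical name, not runnerlib's API) makes that framing explicit and uses `sendall` to avoid short writes:

```python
import json
import socket
import struct


def register_secrets(socket_path: str, secrets: list[str]) -> bytes:
    """Send a length-prefixed JSON 'register' message over a Unix socket.

    Frame format, as used by the test scripts above: 4-byte
    big-endian unsigned length, then the JSON payload itself.
    Returns the raw server response.
    """
    msg = json.dumps({"action": "register", "secrets": secrets}).encode()
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(socket_path)
        # sendall() retries partial writes, unlike the bare send()
        # calls in the inline scripts.
        sock.sendall(struct.pack("!I", len(msg)) + msg)
        return sock.recv(1024)
```

Each inline `python3 -c` block in the shell scripts above could then collapse to a single call to this helper.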
________________________ test_multiple_dynamic_secrets _________________________
def test_multiple_dynamic_secrets():
"""Test registering multiple secrets dynamically."""
with tempfile.TemporaryDirectory() as tmpdir:
work_dir = Path(tmpdir)
# Create job directory
job_dir = work_dir / "job"
job_dir.mkdir()
# Create test script
test_script = job_dir / "multi_secret_test.py"
test_script.write_text("""#!/usr/bin/env python3
import socket
import json
import struct
import os
import time
# Simulate getting multiple secrets
secrets = [
"database-password-abc123",
"api-key-def456",
"webhook-secret-ghi789"
]
print("Obtained secrets:", secrets)
# Register them all at once
socket_path = os.environ.get('REACTORCIDE_SECRETS_SOCKET')
if socket_path:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(socket_path)
msg = json.dumps({'action': 'register', 'secrets': secrets}).encode()
sock.send(struct.pack('!I', len(msg)))
sock.send(msg)
response = sock.recv(1024)
print("Registration response:", response.decode())
sock.close()
# Wait for processing
time.sleep(0.5)
# Now use them - should all be masked
print("Database connection: password=database-password-abc123")
print("API header: X-API-Key=api-key-def456")
print("Webhook validation: secret=webhook-secret-ghi789")
else:
print("No secrets socket available")
""")
test_script.chmod(0o755)
# Run the test
result = subprocess.run(
[
sys.executable, "-m", "src.cli", "run",
"--runner-image", "python:3.9-alpine",
"--job-command", "python3 /job/multi_secret_test.py",
"--code-dir", "/job",
"--job-dir", "/job",
"--secret-values-list", "", # Empty list to prevent default masking
],
capture_output=True,
text=True,
cwd=work_dir,
env={**subprocess.os.environ, "PYTHONPATH": str(Path(__file__).parent.parent)}
)
print("STDOUT:", result.stdout)
print("STDERR:", result.stderr)
> assert result.returncode == 0
E AssertionError: assert 1 == 0
E + where 1 = CompletedProcess(args=['/workspace/runnerlib/.venv/bin/python', '-m', 'src.cli', 'run', '--runner-image', 'python:3.9-...s errors:\n • system: docker is not available in PATH\n 💡 Install docker: https://docs.docker.com/get-docker/\n\n').returncode
tests/test_dynamic_secrets.py:233: AssertionError
----------------------------- Captured stdout call -----------------------------
STDOUT:
STDERR: 2026-02-21T04:10:00.668071+00:00 Configuration validation failed:
2026-02-21T04:10:00.668151+00:00 ❌ Configuration has errors:
• system: docker is not available in PATH
💡 Install docker: https://docs.docker.com/get-docker/
_____________ TestEvalCommand.test_eval_pr_uses_base_ref_for_diff ______________
self = <runnerlib.tests.test_eval_cli.TestEvalCommand object at 0x7f87ab6c6990>
temp_dirs = (PosixPath('/tmp/tmpe76y8ksw/ci'), PosixPath('/tmp/tmpe76y8ksw/src'), PosixPath('/tmp/tmpe76y8ksw/ci/.reactorcide/jobs'), PosixPath('/tmp/tmpe76y8ksw/triggers.json'))
def test_eval_pr_uses_base_ref_for_diff(self, temp_dirs):
"""Test that PR events use pr_base_ref for changed files diff."""
ci_dir, src_dir, jobs_dir, triggers_file = temp_dirs
_write_yaml(jobs_dir / "test.yaml", {
"name": "test",
"triggers": {"events": ["pull_request_opened"]},
"job": {"image": "alpine:latest", "command": "make test"},
})
(src_dir / ".git").mkdir()
with patch("src.workflow.changed_files", return_value=["file.py"]) as mock_changed:
result = runner.invoke(app, [
"eval",
"--ci-source-dir", str(ci_dir),
"--source-dir", str(src_dir),
"--event-type", "pull_request_opened",
"--branch", "feature/foo",
"--pr-base-ref", "[REDACTED]",
"--triggers-file", str(triggers_file),
])
# Verify it was called with origin/[REDACTED] as the from_ref
> mock_changed.assert_called_once_with(
"origin/[REDACTED]", "HEAD", str(src_dir)
)
tests/test_eval_cli.py:344:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.13/unittest/mock.py:991: in assert_called_once_with
return self.assert_called_with(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MagicMock name='changed_files' id='140220671750704'>
args = ('origin/[REDACTED]', 'HEAD', '/tmp/tmpe76y8ksw/src'), kwargs = {}
expected = call('origin/[REDACTED]', 'HEAD', '/tmp/tmpe76y8ksw/src')
actual = call('[REDACTED]', 'HEAD', '/tmp/tmpe76y8ksw/src')
_error_message = <function NonCallableMock.assert_called_with.<locals>._error_message at 0x7f87ab420b80>
cause = None
def assert_called_with(self, /, *args, **kwargs):
"""assert that the last call was made with the specified arguments.
Raises an AssertionError if the args and keyword args passed in are
different to the last call to the mock."""
if self.call_args is None:
expected = self._format_mock_call_signature(args, kwargs)
actual = 'not called.'
error_message = ('expected call not found.\nExpected: %s\n Actual: %s'
% (expected, actual))
raise AssertionError(error_message)
def _error_message():
msg = self._format_mock_failure_message(args, kwargs)
return msg
expected = self._call_matcher(_Call((args, kwargs), two=True))
actual = self._call_matcher(self.call_args)
if actual != expected:
cause = expected if isinstance(expected, Exception) else None
> raise AssertionError(_error_message()) from cause
E AssertionError: expected call not found.
E Expected: changed_files('origin/[REDACTED]', 'HEAD', '/tmp/tmpe76y8ksw/src')
E Actual: changed_files('[REDACTED]', 'HEAD', '/tmp/tmpe76y8ksw/src')
/usr/local/lib/python3.13/unittest/mock.py:979: AssertionError
___________ TestEvalCommand.test_eval_push_uses_head_parent_for_diff ___________
self = <runnerlib.tests.test_eval_cli.TestEvalCommand object at 0x7f87ab6c6a80>
temp_dirs = (PosixPath('/tmp/tmpk4g1njen/ci'), PosixPath('/tmp/tmpk4g1njen/src'), PosixPath('/tmp/tmpk4g1njen/ci/.reactorcide/jobs'), PosixPath('/tmp/tmpk4g1njen/triggers.json'))
def test_eval_push_uses_head_parent_for_diff(self, temp_dirs):
"""Test that push events use HEAD^ for changed files diff."""
ci_dir, src_dir, jobs_dir, triggers_file = temp_dirs
_write_yaml(jobs_dir / "test.yaml", {
"name": "test",
"triggers": {"events": ["push"]},
"job": {"image": "alpine:latest", "command": "make test"},
})
(src_dir / ".git").mkdir()
with patch("src.workflow.changed_files", return_value=["file.py"]) as mock_changed:
result = runner.invoke(app, [
"eval",
"--ci-source-dir", str(ci_dir),
"--source-dir", str(src_dir),
"--event-type", "push",
"--branch", "[REDACTED]",
"--triggers-file", str(triggers_file),
])
> mock_changed.assert_called_once_with(
"HEAD^", "HEAD", str(src_dir)
)
tests/test_eval_cli.py:372:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.13/unittest/mock.py:991: in assert_called_once_with
return self.assert_called_with(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <MagicMock name='changed_files' id='140220671747008'>
args = ('HEAD^', 'HEAD', '/tmp/tmpk4g1njen/src'), kwargs = {}
expected = call('HEAD^', 'HEAD', '/tmp/tmpk4g1njen/src')
actual = call('[REDACTED]', 'HEAD', '/tmp/tmpk4g1njen/src')
_error_message = <function NonCallableMock.assert_called_with.<locals>._error_message at 0x7f87ab421120>
cause = None
def assert_called_with(self, /, *args, **kwargs):
"""assert that the last call was made with the specified arguments.
Raises an AssertionError if the args and keyword args passed in are
different to the last call to the mock."""
if self.call_args is None:
expected = self._format_mock_call_signature(args, kwargs)
actual = 'not called.'
error_message = ('expected call not found.\nExpected: %s\n Actual: %s'
% (expected, actual))
raise AssertionError(error_message)
def _error_message():
msg = self._format_mock_failure_message(args, kwargs)
return msg
expected = self._call_matcher(_Call((args, kwargs), two=True))
actual = self._call_matcher(self.call_args)
if actual != expected:
cause = expected if isinstance(expected, Exception) else None
> raise AssertionError(_error_message()) from cause
E AssertionError: expected call not found.
E Expected: changed_files('HEAD^', 'HEAD', '/tmp/tmpk4g1njen/src')
E Actual: changed_files('[REDACTED]', 'HEAD', '/tmp/tmpk4g1njen/src')
/usr/local/lib/python3.13/unittest/mock.py:979: AssertionError
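Both diff-ref failures show `eval` passing the raw ref straight through to `changed_files`, where the tests expect an `origin/`-prefixed base ref for PR events and `HEAD^` for pushes. The expected mapping can be sketched as a small helper (hypothetical name `diff_from_ref`; this describes what the tests assert, not runnerlib's actual implementation):

```python
from typing import Optional


def diff_from_ref(event_type: str, pr_base_ref: Optional[str] = None) -> str:
    # PR events diff against the remote-tracking base branch;
    # push events diff against the previous commit.
    if event_type.startswith("pull_request") and pr_base_ref:
        return f"origin/{pr_base_ref}"
    return "HEAD^"
```

With that mapping, `changed_files(diff_from_ref(event_type, pr_base_ref), "HEAD", src_dir)` would satisfy both mock assertions above.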
___________ TestEvalCommand.test_eval_no_git_dir_skips_changed_files ___________
self = <runnerlib.tests.test_eval_cli.TestEvalCommand object at 0x7f87ab6e3310>
temp_dirs = (PosixPath('/tmp/tmphvv7wn3w/ci'), PosixPath('/tmp/tmphvv7wn3w/src'), PosixPath('/tmp/tmphvv7wn3w/ci/.reactorcide/jobs'), PosixPath('/tmp/tmphvv7wn3w/triggers.json'))
def test_eval_no_git_dir_skips_changed_files(self, temp_dirs):
"""Test that eval skips changed files detection when no .git dir exists."""
ci_dir, src_dir, jobs_dir, triggers_file = temp_dirs
_write_yaml(jobs_dir / "test.yaml", {
"name": "test",
"triggers": {"events": ["push"]},
"paths": {"include": ["src/**"]},
"job": {"image": "alpine:latest", "command": "make test"},
})
# No .git directory - should skip changed files and still match
# (path filtering is skipped when changed_files is None)
result = runner.invoke(app, [
"eval",
"--ci-source-dir", str(ci_dir),
"--source-dir", str(src_dir),
"--event-type", "push",
"--triggers-file", str(triggers_file),
])
assert result.exit_code == 0
> assert triggers_file.exists()
E AssertionError: assert False
E + where False = exists()
E + where exists = PosixPath('/tmp/tmphvv7wn3w/triggers.json').exists
tests/test_eval_cli.py:400: AssertionError
________ TestEvalSourcePreparation.test_eval_clones_source_when_missing ________
self = <runnerlib.tests.test_eval_cli.TestEvalSourcePreparation object at 0x7f87abbe7750>
def test_eval_clones_source_when_missing(self):
"""Test that eval clones source repo when .git dir doesn't exist."""
with tempfile.TemporaryDirectory() as tmpdir:
base = Path(tmpdir)
ci_dir = base / "ci"
src_dir = base / "src"
src_dir.mkdir() # Exists but no .git (like worker creates)
jobs_dir = ci_dir / ".reactorcide" / "jobs"
jobs_dir.mkdir(parents=True)
triggers_file = base / "triggers.json"
# Create a fake "remote" source repo
remote_dir = base / "remote_src"
remote_dir.mkdir()
from git import Repo
remote_repo = Repo.init(remote_dir)
(remote_dir / "[REDACTED].py").write_text("print('hello')")
remote_repo.index.add(["[REDACTED].py"])
remote_repo.index.commit("Initial commit")
_write_yaml(jobs_dir / "test.yaml", {
"name": "test",
"triggers": {"events": ["push"]},
"job": {"image": "alpine:latest", "command": "make test"},
})
result = runner.invoke(app, [
"eval",
"--ci-source-dir", str(ci_dir),
"--source-dir", str(src_dir),
"--event-type", "push",
"--branch", "[REDACTED]",
"--source-url", str(remote_dir),
"--triggers-file", str(triggers_file),
])
> assert result.exit_code == 0
E AssertionError: assert 1 == 0
E + where 1 = <Result GitCommandError('git checkout [REDACTED]', 128)>.exit_code
tests/test_eval_cli.py:567: AssertionError
____________ TestGitOperations.test_checkout_creates_job_directory _____________
self = <runnerlib.tests.test_git_operations.TestGitOperations object at 0x7f87abb68a50>
test_repo = '/tmp/tmpytjh8zu6'
job_config = RunnerConfig(code_dir='/job/src', job_dir='/job', job_command='echo test', runner_image='alpine:latest', job_env=None,...=None, source_type=None, source_url=None, source_ref=None, ci_source_type=None, ci_source_url=None, ci_source_ref=None)
def test_checkout_creates_job_directory(self, test_repo, job_config):
"""Test that checkout creates the job directory structure."""
# Ensure job dir doesn't exist initially
if Path("./job").exists():
shutil.rmtree("./job")
# Try [REDACTED] first, fallback to master
try:
checkout_git_repo(test_repo, "[REDACTED]", job_config)
except Exception:
checkout_git_repo(test_repo, "master", job_config)
# Verify job directory was created
> assert Path("./job").exists()
E AssertionError: assert False
E + where False = exists()
E + where exists = PosixPath('job').exists
E + where PosixPath('job') = Path('./job')
tests/test_git_operations.py:213: AssertionError
---------------------------- Captured stdout setup -----------------------------
Initialized empty Git repository in /tmp/tmpytjh8zu6/.git/
[master (root-commit) 160a9ef] Initial commit
1 file changed, 1 insertion(+)
create mode 100644 test.txt
[feature 9b3f37a] Feature changes
2 files changed, 2 insertions(+), 1 deletion(-)
create mode 100644 new.txt
---------------------------- Captured stderr setup -----------------------------
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are '[REDACTED]', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Switched to a new branch 'feature'
----------------------------- Captured stdout call -----------------------------
2026-02-21T04:11:43.579088+00:00 Cloning repository: /tmp/tmpytjh8zu6
2026-02-21T04:11:43.622676+00:00 Checking out ref: [REDACTED]
2026-02-21T04:11:43.643590+00:00 Fetching PR ref: refs/pull/47/head
2026-02-21T04:11:43.659038+00:00 Fetching PR ref: refs/merge-requests/47/head
2026-02-21T04:11:43.673282+00:00 Fetching all remote refs...
2026-02-21T04:11:43.718648+00:00 Cloning repository: /tmp/tmpytjh8zu6
2026-02-21T04:11:43.760636+00:00 Checking out ref: master
2026-02-21T04:11:43.770926+00:00 Repository checked out to: /job/src
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:11:43.578941+00:00 [INFO] [runnerlib] Cloning git repository url=/tmp/tmpytjh8zu6 ref=[REDACTED]
2026-02-21T04:11:43.709277+00:00 [ERROR] [runnerlib] Failed to clone repository url=/tmp/tmpytjh8zu6 error=GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
2026-02-21T04:11:43.709453+00:00 Failed to checkout repository: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
2026-02-21T04:11:43.718525+00:00 [INFO] [runnerlib] Cloning git repository url=/tmp/tmpytjh8zu6 ref=master
2026-02-21T04:11:43.770799+00:00 [INFO] [runnerlib] Repository cloned successfully path=/job/src
_ TestDirectoryManagementIntegration.test_directory_validation_with_real_filesystem _
self = <runnerlib.tests.test_integration.TestDirectoryManagementIntegration object at 0x7f87abbcc690>
def test_directory_validation_with_real_filesystem(self):
"""Test directory validation against real filesystem."""
config = get_config(
code_dir='/job/src',
job_dir='/job/work',
job_command='test',
runner_image='test:image'
)
# Test validation without directories
with patch('shutil.which', return_value="/usr/bin/docker"):
result = validate_config(config, check_files=True)
# Should have warnings about missing directories
> assert result.has_warnings
E assert False
E + where False = ValidationResult(is_valid=True, errors=[], warnings=[]).has_warnings
/workspace/runnerlib/tests/test_integration.py:121: AssertionError
___________________ TestJobIsolation.test_work_dir_isolation ___________________
self = <runnerlib.tests.test_job_isolation.TestJobIsolation object at 0x7f87abbcd1d0>
def test_work_dir_isolation(self):
"""Test that jobs use separate work directories."""
with tempfile.TemporaryDirectory() as temp_dir1:
with tempfile.TemporaryDirectory() as temp_dir2:
# Change to first temp directory
original_cwd = os.getcwd()
try:
# Test job 1 in temp_dir1
os.chdir(temp_dir1)
config1 = RunnerConfig(
code_dir="/job/src",
job_dir="/job/src",
job_command="echo 'job1'",
runner_image="alpine:latest"
)
job_path1 = prepare_job_directory(config1)
assert job_path1.exists()
> assert str(job_path1).startswith(temp_dir1)
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0x7f87abe9cf00>('/tmp/tmpp1y9a8y_')
E + where <built-in method startswith of str object at 0x7f87abe9cf00> = '/job'.startswith
E + where '/job' = str(PosixPath('/job'))
tests/test_job_isolation.py:35: AssertionError
________________ TestJobIsolation.test_concurrent_job_isolation ________________
self = <runnerlib.tests.test_job_isolation.TestJobIsolation object at 0x7f87abbcd310>
def test_concurrent_job_isolation(self):
"""Test that concurrent jobs don't interfere with each other."""
import threading
import time
results = {}
errors = {}
def run_job(job_id: str, work_dir: str):
"""Run a job in its own work directory."""
try:
original_cwd = os.getcwd()
os.chdir(work_dir)
config = RunnerConfig(
code_dir="/job/src",
job_dir="/job/src",
job_command=f"echo 'job-{job_id}'",
runner_image="alpine:latest"
)
job_path = prepare_job_directory(config)
# Create a unique file for this job
test_file = job_path / f"job-{job_id}.txt"
test_file.write_text(f"Data for job {job_id}")
# Simulate some work
time.sleep(0.1)
# Verify the file still exists and has correct content
assert test_file.exists()
assert test_file.read_text() == f"Data for job {job_id}"
# Check no files from other jobs exist
other_files = list(job_path.glob("job-*.txt"))
assert len(other_files) == 1
assert other_files[0].name == f"job-{job_id}.txt"
results[job_id] = True
except Exception as e:
errors[job_id] = str(e)
finally:
os.chdir(original_cwd)
# Create temporary directories for each job
temp_dirs = []
threads = []
try:
# Start multiple jobs concurrently
for i in range(5):
temp_dir = tempfile.mkdtemp(prefix=f"job-{i}-")
temp_dirs.append(temp_dir)
thread = threading.Thread(
target=run_job,
args=(str(i), temp_dir)
)
thread.start()
threads.append(thread)
# Wait for all jobs to complete
for thread in threads:
thread.join(timeout=5)
# Verify all jobs succeeded
> assert len(errors) == 0, f"Jobs failed: {errors}"
E AssertionError: Jobs failed: {'0': "assert 5 == 1\n + where 5 = len([PosixPath('/job/job-1.txt'), PosixPath('/job/job-2.txt'), PosixPath('/job/job-4.txt'), PosixPath('/job/job-0.txt'), PosixPath('/job/job-3.txt')])", '2': "assert 5 == 1\n + where 5 = len([PosixPath('/job/job-1.txt'), PosixPath('/job/job-2.txt'), PosixPath('/job/job-4.txt'), PosixPath('/job/job-0.txt'), PosixPath('/job/job-3.txt')])", '1': "assert 5 == 1\n + where 5 = len([PosixPath('/job/job-1.txt'), PosixPath('/job/job-2.txt'), PosixPath('/job/job-4.txt'), PosixPath('/job/job-0.txt'), PosixPath('/job/job-3.txt')])", '3': "assert 5 == 1\n + where 5 = len([PosixPath('/job/job-1.txt'), PosixPath('/job/job-2.txt'), PosixPath('/job/job-4.txt'), PosixPath('/job/job-0.txt'), PosixPath('/job/job-3.txt')])", '4': "assert 5 == 1\n + where 5 = len([PosixPath('/job/job-1.txt'), PosixPath('/job/job-2.txt'), PosixPath('/job/job-4.txt'), PosixPath('/job/job-0.txt'), PosixPath('/job/job-3.txt')])"}
E assert 5 == 0
E + where 5 = len({'0': "assert 5 == 1\n + where 5 = len([PosixPath('/job/job-1.txt'), PosixPath('/job/job-2.txt'), PosixPath('/job/job...xPath('/job/job-2.txt'), PosixPath('/job/job-4.txt'), PosixPath('/job/job-0.txt'), PosixPath('/job/job-3.txt')])", ...})
/workspace/runnerlib/tests/test_job_isolation.py:138: AssertionError
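The captured listing above shows all five threads writing into the same absolute `/job` path: `prepare_job_directory` resolves the configured `job_dir` without regard to the per-thread working directory. One isolation pattern (a sketch with a hypothetical helper, not runnerlib's `prepare_job_directory`) keys the work directory to the job itself:

```python
import tempfile
from pathlib import Path


def prepare_isolated_job_dir(job_id: str) -> Path:
    """Give each job its own unique work directory.

    A per-job mkdtemp avoids the collision seen above, where every
    concurrent job shares the same absolute /job path.
    """
    job_path = Path(tempfile.mkdtemp(prefix=f"job-{job_id}-"))
    (job_path / f"job-{job_id}.txt").write_text(f"Data for job {job_id}")
    return job_path
```

Under this scheme each job's glob for `job-*.txt` would match exactly one file, which is what the test asserts.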
_______________ TestJobIsolation.test_container_mount_isolation ________________
self = <runnerlib.tests.test_job_isolation.TestJobIsolation object at 0x7f87ab6b7820>
mock_popen = <MagicMock name='Popen' id='140220671752048'>
@patch('subprocess.Popen')
def test_container_mount_isolation(self, mock_popen):
"""Test that containers mount only their job's directory."""
# Mock the Popen object with proper behavior
mock_process = MagicMock()
mock_process.poll.side_effect = [None, None, 0] # Running, running, then finished
mock_process.returncode = 0
mock_process.stdout.readline.return_value = '' # No output (text mode)
mock_process.stderr.readline.return_value = '' # No errors
mock_process.communicate.return_value = ('', '') # Empty re[REDACTED]ing output
mock_popen.return_value = mock_process
with tempfile.TemporaryDirectory() as temp_dir:
# Save original cwd if possible
try:
original_cwd = os.getcwd()
except FileNotFoundError:
# If current dir doesn't exist, use temp dir as fallback
original_cwd = temp_dir
try:
os.chdir(temp_dir)
config = RunnerConfig(
code_dir="/job/src",
job_dir="/job/src",
job_command="echo test",
runner_image="alpine:latest"
)
# Prepare job directory
job_path = prepare_job_directory(config)
# Create a test file
test_file = job_path / "test.txt"
test_file.write_text("test data")
# Run container
> run_container(config)
/workspace/runnerlib/tests/test_job_isolation.py:257:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
config = RunnerConfig(code_dir='/job/src', job_dir='/job/src', job_command='echo test', runner_image='alpine:latest', job_env=N...=None, source_type=None, source_url=None, source_ref=None, ci_source_type=None, ci_source_url=None, ci_source_ref=None)
additional_args = None
def run_container(
config: RunnerConfig,
additional_args: Optional[List[str]] = None
) -> int:
"""Run the job container using docker with full configuration support.
Args:
config: Runner configuration
additional_args: Additional arguments to pass to the job command
Returns:
Exit code of the container process
Raises:
ValueError: If configuration is invalid
FileNotFoundError: If docker is not available
"""
# Create plugin context for the execution
plugin_context = PluginContext(
config=config,
phase=PluginPhase.PRE_SOURCE_PREP,
metadata={}
)
try:
# Execute pre-source-prep plugins
plugin_manager.execute_phase(PluginPhase.PRE_SOURCE_PREP, plugin_context)
# Basic validation is handled by CLI layer
# Check if docker is available
if not shutil.which("docker"):
logger.error("Docker is not available in PATH")
> raise FileNotFoundError("docker is not available in PATH")
E FileNotFoundError: docker is not available in PATH
/workspace/runnerlib/src/container.py:114: FileNotFoundError
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:11:46.165099+00:00 [ERROR] [runnerlib] Docker is not available in PATH
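The mount-isolation test above mocks `subprocess.Popen`, but `run_container` first probes `shutil.which("docker")`, which is not patched, so on a docker-less CI runner the mock is never reached. A hedged sketch of patching both the probe and `Popen` (the stand-in `fake_run_container` and the exact patch targets are assumptions; the real target depends on how `container.py` imports `shutil`):

```python
from unittest.mock import MagicMock, patch

def fake_run_container() -> int:
    """Stand-in for run_container's availability check and launch (hypothetical)."""
    import shutil
    import subprocess
    if not shutil.which("docker"):
        raise FileNotFoundError("docker is not available in PATH")
    proc = subprocess.Popen(["docker", "run", "alpine:latest", "echo", "test"])
    proc.communicate()
    return proc.returncode

# Patch the availability probe alongside Popen so the test runs without docker.
with patch("shutil.which", return_value="/usr/bin/docker"), \
     patch("subprocess.Popen") as mock_popen:
    mock_process = MagicMock()
    mock_process.returncode = 0
    mock_process.communicate.return_value = ("", "")
    mock_popen.return_value = mock_process
    rc = fake_run_container()

assert rc == 0
```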
___________ TestSourcePreparation.test_no_source_preparation_default ___________
self = <runnerlib.tests.test_source_preparation.TestSourcePreparation object at 0x7f87ab4ed450>
def test_no_source_preparation_default(self):
"""Test job with no source preparation (default - source_type not set)."""
# Configure without specifying source_type
config = get_config(job_command="echo 'hello'")
# Prepare source should return None
result = prepare_source(config)
> assert result is None
E AssertionError: assert PosixPath('/job/src') is None
/workspace/runnerlib/tests/test_source_preparation.py:89: AssertionError
----------------------------- Captured stdout call -----------------------------
2026-02-21T04:11:56.809090+00:00 Cloning git repository: https://[REDACTED].com/[REDACTED].git
2026-02-21T04:12:06.333236+00:00 Checking out ref: [REDACTED]
2026-02-21T04:12:06.944668+00:00 Repository checked out to: /job/src
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:11:56.808865+00:00 [INFO] [runnerlib] Preparing source type=git url=https://[REDACTED].com/[REDACTED].git ref=[REDACTED]
2026-02-21T04:11:56.809021+00:00 [INFO] [runnerlib] Preparing git source url=https://[REDACTED].com/[REDACTED].git ref=[REDACTED] target=/job/src
2026-02-21T04:12:06.944578+00:00 [INFO] [runnerlib] Git source prepared successfully path=/job/src
______________ TestSourcePreparation.test_git_source_preparation _______________
self = <runnerlib.tests.test_source_preparation.TestSourcePreparation object at 0x7f87ab3efce0>
def test_git_source_preparation(self):
"""Test git source preparation."""
# Create a test git repository
test_repo_dir = Path(self.temp_dir) / "test_repo"
test_repo_dir.mkdir()
repo = Repo.init(test_repo_dir)
# Add a test file
test_file = test_repo_dir / "test.txt"
test_file.write_text("test content")
repo.index.add([str(test_file)])
repo.index.commit("Initial commit")
# Configure with git source
config = get_config(
job_command="cat /job/src/test.txt",
source_type="git",
source_url=str(test_repo_dir),
source_ref="[REDACTED]"
)
# Prepare source
> result = prepare_source(config)
/workspace/runnerlib/tests/test_source_preparation.py:113:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/workspace/runnerlib/src/source_prep.py:563: in prepare_source
return _prepare_git_source(config.source_url, config.source_ref, target_path)
/workspace/runnerlib/src/source_prep.py:400: in _prepare_git_source
_checkout_with_fetch_fallback(repo, source_ref)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
repo = <git.repo.base.Repo '/job/src/.git'>, source_ref = '[REDACTED]'
def _checkout_with_fetch_fallback(repo: Repo, source_ref: str) -> None:
"""Checkout a git ref, fetching PR refs as fallback if needed.
For PR events, the source_ref SHA may not exist in the default clone
because it lives under refs/pull/<N>/head (GitHub) or
refs/merge-requests/<N>/head (GitLab). This function tries a direct
checkout first, then falls back to fetching the specific PR ref.
Args:
repo: GitPython Repo instance (already cloned)
source_ref: Git reference to checkout (branch, tag, or commit SHA)
Raises:
GitCommandError: If all checkout attempts fail
"""
# Try direct checkout first — works for branches, tags, and commits on fetched branches
try:
repo.git.checkout(source_ref)
return
except GitCommandError:
logger.debug("Direct checkout failed, trying fetch fallbacks", fields={"ref": source_ref})
# Try fetching the specific SHA (works if server has uploadpack.allowReachableSHA1InWant)
try:
repo.git.fetch("origin", source_ref)
repo.git.checkout(source_ref)
log_stdout(f"Fetched and checked out ref: {source_ref}")
return
except GitCommandError:
logger.debug("Fetch by SHA failed", fields={"ref": source_ref})
# Try PR-specific refs using REACTORCIDE_PR_NUMBER
pr_number = os.getenv("REACTORCIDE_PR_NUMBER", "")
if pr_number:
# GitHub: refs/pull/<N>/head
pr_refs = [
f"refs/pull/{pr_number}/head",
f"refs/merge-requests/{pr_number}/head", # GitLab
]
for pr_ref in pr_refs:
try:
log_stdout(f"Fetching PR ref: {pr_ref}")
repo.git.fetch("origin", f"{pr_ref}:refs/remotes/origin/pr-head")
repo.git.checkout(source_ref)
log_stdout(f"Checked out PR ref: {source_ref}")
return
except GitCommandError:
logger.debug("PR ref fetch failed", fields={"pr_ref": pr_ref})
# Last resort: fetch all remote refs (handles any branch the SHA might be on)
try:
log_stdout("Fetching all remote refs...")
repo.git.fetch("origin", "+refs/heads/*:refs/remotes/origin/*")
repo.git.checkout(source_ref)
log_stdout(f"Checked out ref after full fetch: {source_ref}")
return
except GitCommandError:
pass
# Nothing worked — raise with a clear message
> raise GitCommandError(
f"git checkout {source_ref}",
128,
stderr=f"Could not checkout ref '{source_ref}' after all fetch attempts",
)
E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
E cmdline: git checkout [REDACTED]
E stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
/workspace/runnerlib/src/source_prep.py:72: GitCommandError
----------------------------- Captured stdout call -----------------------------
2026-02-21T04:12:06.996945+00:00 Cloning git repository: /tmp/tmpqh9l1nig/test_repo
2026-02-21T04:12:07.062195+00:00 Checking out ref: [REDACTED]
2026-02-21T04:12:07.076410+00:00 Fetching PR ref: refs/pull/47/head
2026-02-21T04:12:07.084605+00:00 Fetching PR ref: refs/merge-requests/47/head
2026-02-21T04:12:07.095278+00:00 Fetching all remote refs...
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:12:06.996821+00:00 [INFO] [runnerlib] Preparing source type=git url=/tmp/tmpqh9l1nig/test_repo ref=[REDACTED]
2026-02-21T04:12:06.996910+00:00 [INFO] [runnerlib] Preparing git source url=/tmp/tmpqh9l1nig/test_repo ref=[REDACTED] target=/job/src
2026-02-21T04:12:07.112667+00:00 [ERROR] [runnerlib] Failed to prepare git source url=/tmp/tmpqh9l1nig/test_repo error=GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
2026-02-21T04:12:07.112759+00:00 Failed to checkout repository: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
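`_checkout_with_fetch_fallback` tries four strategies in order (direct checkout, fetch by SHA, PR refs, full fetch) and raises only when all fail; the test fails because the requested ref simply does not exist in the freshly initialized local repo, so every fallback misses. The try-in-order pattern itself can be sketched without git, using callables that raise on failure in place of `GitCommandError`:

```python
def checkout_with_fallbacks(attempts):
    """Run checkout strategies in order; return the name of the first that succeeds.

    `attempts` is a list of (name, callable) pairs; each callable raises
    RuntimeError on failure, mirroring GitCommandError in the real code.
    """
    failed = []
    for name, attempt in attempts:
        try:
            attempt()
            return name
        except RuntimeError:
            failed.append(name)
    raise RuntimeError(f"all checkout attempts failed: {failed}")

def fail():
    raise RuntimeError("ref not found")

# Direct checkout and fetch-by-SHA fail; the full fetch succeeds.
winner = checkout_with_fallbacks([
    ("direct", fail),
    ("fetch-sha", fail),
    ("fetch-all", lambda: None),
])
assert winner == "fetch-all"
```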
______________ TestSourcePreparation.test_dual_source_preparation ______________
self = <runnerlib.tests.test_source_preparation.TestSourcePreparation object at 0x7f87ab57d130>
def test_dual_source_preparation(self):
"""Test preparation of both source and ci_source."""
# Create source repo (untrusted code)
source_repo_dir = Path(self.temp_dir) / "source_repo"
source_repo_dir.mkdir()
source_repo = Repo.init(source_repo_dir)
(source_repo_dir / "app.py").write_text("print('hello from PR')")
source_repo.index.add(["app.py"])
source_repo.index.commit("PR commit")
# Create CI repo (trusted code)
ci_repo_dir = Path(self.temp_dir) / "ci_repo"
ci_repo_dir.mkdir()
ci_repo = Repo.init(ci_repo_dir)
(ci_repo_dir / "pipeline.py").write_text("print('running tests')")
ci_repo.index.add(["pipeline.py"])
ci_repo.index.commit("CI commit")
# Configure with both sources
config = get_config(
job_command="python /job/ci/pipeline.py",
source_type="git",
source_url=str(source_repo_dir),
source_ref="[REDACTED]",
ci_source_type="git",
ci_source_url=str(ci_repo_dir),
ci_source_ref="[REDACTED]"
)
# Prepare CI source first (as the CLI does)
> ci_result = prepare_ci_source(config)
/workspace/runnerlib/tests/test_source_preparation.py:170:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/workspace/runnerlib/src/source_prep.py:633: in prepare_ci_source
return _prepare_git_source(config.ci_source_url, config.ci_source_ref, target_path)
/workspace/runnerlib/src/source_prep.py:400: in _prepare_git_source
_checkout_with_fetch_fallback(repo, source_ref)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
repo = <git.repo.base.Repo '/job/ci/.git'>, source_ref = '[REDACTED]'
def _checkout_with_fetch_fallback(repo: Repo, source_ref: str) -> None:
"""Checkout a git ref, fetching PR refs as fallback if needed.
For PR events, the source_ref SHA may not exist in the default clone
because it lives under refs/pull/<N>/head (GitHub) or
refs/merge-requests/<N>/head (GitLab). This function tries a direct
checkout first, then falls back to fetching the specific PR ref.
Args:
repo: GitPython Repo instance (already cloned)
source_ref: Git reference to checkout (branch, tag, or commit SHA)
Raises:
GitCommandError: If all checkout attempts fail
"""
# Try direct checkout first — works for branches, tags, and commits on fetched branches
try:
repo.git.checkout(source_ref)
return
except GitCommandError:
logger.debug("Direct checkout failed, trying fetch fallbacks", fields={"ref": source_ref})
# Try fetching the specific SHA (works if server has uploadpack.allowReachableSHA1InWant)
try:
repo.git.fetch("origin", source_ref)
repo.git.checkout(source_ref)
log_stdout(f"Fetched and checked out ref: {source_ref}")
return
except GitCommandError:
logger.debug("Fetch by SHA failed", fields={"ref": source_ref})
# Try PR-specific refs using REACTORCIDE_PR_NUMBER
pr_number = os.getenv("REACTORCIDE_PR_NUMBER", "")
if pr_number:
# GitHub: refs/pull/<N>/head
pr_refs = [
f"refs/pull/{pr_number}/head",
f"refs/merge-requests/{pr_number}/head", # GitLab
]
for pr_ref in pr_refs:
try:
log_stdout(f"Fetching PR ref: {pr_ref}")
repo.git.fetch("origin", f"{pr_ref}:refs/remotes/origin/pr-head")
repo.git.checkout(source_ref)
log_stdout(f"Checked out PR ref: {source_ref}")
return
except GitCommandError:
logger.debug("PR ref fetch failed", fields={"pr_ref": pr_ref})
# Last resort: fetch all remote refs (handles any branch the SHA might be on)
try:
log_stdout("Fetching all remote refs...")
repo.git.fetch("origin", "+refs/heads/*:refs/remotes/origin/*")
repo.git.checkout(source_ref)
log_stdout(f"Checked out ref after full fetch: {source_ref}")
return
except GitCommandError:
pass
# Nothing worked — raise with a clear message
> raise GitCommandError(
f"git checkout {source_ref}",
128,
stderr=f"Could not checkout ref '{source_ref}' after all fetch attempts",
)
E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
E cmdline: git checkout [REDACTED]
E stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
/workspace/runnerlib/src/source_prep.py:72: GitCommandError
----------------------------- Captured stdout call -----------------------------
2026-02-21T04:12:07.213333+00:00 🔐 Preparing trusted CI source (type: git)
2026-02-21T04:12:07.213420+00:00 Cloning git repository: /tmp/tmpiynp6uhc/ci_repo
2026-02-21T04:12:07.232746+00:00 Checking out ref: [REDACTED]
2026-02-21T04:12:07.248301+00:00 Fetching PR ref: refs/pull/47/head
2026-02-21T04:12:07.256395+00:00 Fetching PR ref: refs/merge-requests/47/head
2026-02-21T04:12:07.264411+00:00 Fetching all remote refs...
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:12:07.213259+00:00 [INFO] [runnerlib] Preparing CI source type=git url=/tmp/tmpiynp6uhc/ci_repo ref=[REDACTED]
2026-02-21T04:12:07.213390+00:00 [INFO] [runnerlib] Preparing git source url=/tmp/tmpiynp6uhc/ci_repo ref=[REDACTED] target=/job/ci
2026-02-21T04:12:07.282953+00:00 [ERROR] [runnerlib] Failed to prepare git source url=/tmp/tmpiynp6uhc/ci_repo error=GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
2026-02-21T04:12:07.283033+00:00 Failed to checkout repository: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
__________________ TestSourcePreparation.test_ci_source_only ___________________
self = <runnerlib.tests.test_source_preparation.TestSourcePreparation object at 0x7f87abb59e10>
def test_ci_source_only(self):
"""Test preparation of CI source without regular source."""
# Create CI repo
ci_repo_dir = Path(self.temp_dir) / "ci_repo"
ci_repo_dir.mkdir()
ci_repo = Repo.init(ci_repo_dir)
(ci_repo_dir / "deploy.sh").write_text("#!/bin/bash\necho deploying")
ci_repo.index.add(["deploy.sh"])
ci_repo.index.commit("CI commit")
# Configure with only CI source
config = get_config(
job_command="bash /job/ci/deploy.sh",
ci_source_type="git",
ci_source_url=str(ci_repo_dir),
ci_source_ref="[REDACTED]"
)
# Prepare CI source
> ci_result = prepare_ci_source(config)
/workspace/runnerlib/tests/test_source_preparation.py:206:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/workspace/runnerlib/src/source_prep.py:633: in prepare_ci_source
return _prepare_git_source(config.ci_source_url, config.ci_source_ref, target_path)
/workspace/runnerlib/src/source_prep.py:400: in _prepare_git_source
_checkout_with_fetch_fallback(repo, source_ref)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
repo = <git.repo.base.Repo '/job/ci/.git'>, source_ref = '[REDACTED]'
def _checkout_with_fetch_fallback(repo: Repo, source_ref: str) -> None:
"""Checkout a git ref, fetching PR refs as fallback if needed.
For PR events, the source_ref SHA may not exist in the default clone
because it lives under refs/pull/<N>/head (GitHub) or
refs/merge-requests/<N>/head (GitLab). This function tries a direct
checkout first, then falls back to fetching the specific PR ref.
Args:
repo: GitPython Repo instance (already cloned)
source_ref: Git reference to checkout (branch, tag, or commit SHA)
Raises:
GitCommandError: If all checkout attempts fail
"""
# Try direct checkout first — works for branches, tags, and commits on fetched branches
try:
repo.git.checkout(source_ref)
return
except GitCommandError:
logger.debug("Direct checkout failed, trying fetch fallbacks", fields={"ref": source_ref})
# Try fetching the specific SHA (works if server has uploadpack.allowReachableSHA1InWant)
try:
repo.git.fetch("origin", source_ref)
repo.git.checkout(source_ref)
log_stdout(f"Fetched and checked out ref: {source_ref}")
return
except GitCommandError:
logger.debug("Fetch by SHA failed", fields={"ref": source_ref})
# Try PR-specific refs using REACTORCIDE_PR_NUMBER
pr_number = os.getenv("REACTORCIDE_PR_NUMBER", "")
if pr_number:
# GitHub: refs/pull/<N>/head
pr_refs = [
f"refs/pull/{pr_number}/head",
f"refs/merge-requests/{pr_number}/head", # GitLab
]
for pr_ref in pr_refs:
try:
log_stdout(f"Fetching PR ref: {pr_ref}")
repo.git.fetch("origin", f"{pr_ref}:refs/remotes/origin/pr-head")
repo.git.checkout(source_ref)
log_stdout(f"Checked out PR ref: {source_ref}")
return
except GitCommandError:
logger.debug("PR ref fetch failed", fields={"pr_ref": pr_ref})
# Last resort: fetch all remote refs (handles any branch the SHA might be on)
try:
log_stdout("Fetching all remote refs...")
repo.git.fetch("origin", "+refs/heads/*:refs/remotes/origin/*")
repo.git.checkout(source_ref)
log_stdout(f"Checked out ref after full fetch: {source_ref}")
return
except GitCommandError:
pass
# Nothing worked — raise with a clear message
> raise GitCommandError(
f"git checkout {source_ref}",
128,
stderr=f"Could not checkout ref '{source_ref}' after all fetch attempts",
)
E git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
E cmdline: git checkout [REDACTED]
E stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
/workspace/runnerlib/src/source_prep.py:72: GitCommandError
----------------------------- Captured stdout call -----------------------------
2026-02-21T04:12:07.340218+00:00 🔐 Preparing trusted CI source (type: git)
2026-02-21T04:12:07.340314+00:00 Cloning git repository: /tmp/tmpwdn5oazg/ci_repo
2026-02-21T04:12:07.365384+00:00 Checking out ref: [REDACTED]
2026-02-21T04:12:07.376551+00:00 Fetching PR ref: refs/pull/47/head
2026-02-21T04:12:07.388462+00:00 Fetching PR ref: refs/merge-requests/47/head
2026-02-21T04:12:07.397229+00:00 Fetching all remote refs...
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:12:07.340127+00:00 [INFO] [runnerlib] Preparing CI source type=git url=/tmp/tmpwdn5oazg/ci_repo ref=[REDACTED]
2026-02-21T04:12:07.340281+00:00 [INFO] [runnerlib] Preparing git source url=/tmp/tmpwdn5oazg/ci_repo ref=[REDACTED] target=/job/ci
2026-02-21T04:12:07.432614+00:00 [ERROR] [runnerlib] Failed to prepare git source url=/tmp/tmpwdn5oazg/ci_repo error=GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
2026-02-21T04:12:07.432734+00:00 Failed to checkout repository: Cmd('git') failed due to: exit code(128)
cmdline: git checkout [REDACTED]
stderr: 'Could not checkout ref '[REDACTED]' after all fetch attempts'
______________ TestSourcePreparation.test_git_source_missing_url _______________
self = <runnerlib.tests.test_source_preparation.TestSourcePreparation object at 0x7f87ab53c650>
def test_git_source_missing_url(self):
"""Test that git source without URL raises ValueError."""
config = get_config(
job_command="echo 'test'",
source_type="git"
# source_url not provided
)
> with pytest.raises(ValueError, match="source_url is required"):
E Failed: DID NOT RAISE <class 'ValueError'>
/workspace/runnerlib/tests/test_source_preparation.py:233: Failed
----------------------------- Captured stdout call -----------------------------
2026-02-21T04:12:07.469588+00:00 Cloning git repository: https://[REDACTED].com/[REDACTED].git
2026-02-21T04:12:17.363324+00:00 Checking out ref: [REDACTED]
2026-02-21T04:12:17.880291+00:00 Repository checked out to: /job/src
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:12:07.469481+00:00 [INFO] [runnerlib] Preparing source type=git url=[REDACTED] ref=[REDACTED]
2026-02-21T04:12:07.469553+00:00 [INFO] [runnerlib] Preparing git source url=[REDACTED] ref=[REDACTED] target=/job/src
2026-02-21T04:12:17.880160+00:00 [INFO] [runnerlib] Git source prepared successfully path=/job/src
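The test above expected `ValueError("source_url is required")`, but the captured output shows `prepare_source` proceeding to clone a fallback URL instead of rejecting the config. A minimal sketch of the guard the test is asserting on (the helper name and its placement are assumptions; the real check would run in `prepare_source` before any clone is attempted):

```python
from typing import Optional

def validate_source_config(source_type: Optional[str], source_url: Optional[str]) -> None:
    """Hypothetical guard: a source_type with no source_url is a config error."""
    if source_type and not source_url:
        raise ValueError(f"source_url is required when source_type={source_type!r}")

validate_source_config(None, None)  # no source preparation requested: fine
validate_source_config("git", "https://example.invalid/repo.git")  # fine
try:
    validate_source_config("git", None)
    raised = False
except ValueError:
    raised = True
assert raised
```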
______________ TestSourcePreparation.test_copy_source_missing_url ______________
self = <runnerlib.tests.test_source_preparation.TestSourcePreparation object at 0x7f87ab53c750>
def test_copy_source_missing_url(self):
"""Test that copy source without URL raises ValueError."""
config = get_config(
job_command="echo 'test'",
source_type="copy"
# source_url not provided
)
with pytest.raises(ValueError, match="source_url is required"):
> prepare_source(config)
/workspace/runnerlib/tests/test_source_preparation.py:245:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/workspace/runnerlib/src/source_prep.py:568: in prepare_source
return _prepare_copy_source(config.source_url, target_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
source_url = 'https://[REDACTED].com/[REDACTED].git'
target_path = PosixPath('/job/src')
def _prepare_copy_source(source_url: str, target_path: Path) -> Path:
"""Prepare source code by copying from a local directory.
Args:
source_url: Path to source directory
target_path: Where to copy the directory
Returns:
Path to the copied directory
"""
source_path = Path(source_url).resolve()
if not source_path.exists():
> raise FileNotFoundError(f"Source directory does not exist: {source_path}")
E FileNotFoundError: Source directory does not exist: /tmp/tmpo7euhd05/https:/[REDACTED].com/[REDACTED].git
/workspace/runnerlib/src/source_prep.py:433: FileNotFoundError
----------------------------- Captured stderr call -----------------------------
2026-02-21T04:12:17.893577+00:00 [INFO] [runnerlib] Preparing source type=copy url=https://[REDACTED].com/[REDACTED].git ref=[REDACTED]
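The copy-source failure shows a second symptom of the same leak: with `source_url` unset, a git URL fell through to `_prepare_copy_source`, where `Path(source_url).resolve()` turned `https://...` into the nonsense relative path `/tmp/.../https:/...`. A hedged sketch of rejecting URL-like values before path resolution (the helper is hypothetical, not the runnerlib API):

```python
from pathlib import Path
from urllib.parse import urlparse

def resolve_copy_source(source_url: str) -> Path:
    """Sketch: fail fast when a copy source looks like a remote URL, since
    copy sources must be local directories."""
    if urlparse(source_url).scheme in ("http", "https", "git", "ssh"):
        raise ValueError(f"copy source must be a local path, got URL: {source_url}")
    return Path(source_url).resolve()

try:
    resolve_copy_source("https://example.invalid/repo.git")
    raised = False
except ValueError:
    raised = True
assert raised
```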
_____ TestConfigValidator.test_validate_file_system_job_directory_missing ______
self = <runnerlib.tests.test_validation.TestConfigValidator object at 0x7f87abb66c50>
def test_validate_file_system_job_directory_missing(self):
"""Test file system validation when job directory is missing."""
# Ensure ./job doesn't exist
job_path = Path("./job")
if job_path.exists():
shutil.rmtree(job_path)
try:
errors, warnings = self.validator._validate_file_system(self.valid_config)
# Should have warning about missing directory
> assert len(warnings) >= 1
E assert 0 >= 1
E + where 0 = len([])
/workspace/runnerlib/tests/test_validation.py:304: AssertionError
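The last failure shows `_validate_file_system` returning no warnings even though `./job` was removed; the test treats a missing job directory as a warning (it may be created later) rather than a hard error. A sketch of that expected shape, with assumed names since the real validator's internals are not shown in this log:

```python
from pathlib import Path

def validate_file_system(job_dir: str):
    """Sketch: missing job directory yields a warning, not an error,
    because the runner may create it during job preparation."""
    errors, warnings = [], []
    path = Path(job_dir)
    if not path.exists():
        warnings.append(f"Job directory does not exist yet: {job_dir}")
    elif not path.is_dir():
        errors.append(f"Job path is not a directory: {job_dir}")
    return errors, warnings

errors, warnings = validate_file_system("./definitely-missing-job-dir")
assert errors == [] and len(warnings) >= 1
```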
=========================== short test summary info ============================
FAILED tests/test_container_isolation.py::TestContainerIsolation::test_work_directory_isolation_with_prepare
FAILED tests/test_directory_operations.py::TestDirectoryOperations::test_cleanup_removes_job_directory
FAILED tests/test_docker_execution.py::test_basic_docker_execution - Assertio...
FAILED tests/test_docker_execution.py::test_docker_with_environment_variables
FAILED tests/test_docker_execution.py::test_docker_with_python - AssertionErr...
FAILED tests/test_docker_execution.py::test_docker_failure_handling - Asserti...
FAILED tests/test_docker_execution.py::test_docker_available - FileNotFoundEr...
FAILED tests/test_docker_execution.py::test_container_with_working_directory
FAILED tests/test_docker_execution.py::test_dry_run_mode - AssertionError: Dr...
FAILED tests/test_docker_execution.py::test_node_container - AssertionError: ...
FAILED tests/test_docker_execution.py::test_container_with_multiple_env_vars
FAILED tests/test_docker_execution.py::test_selective_secret_masking - Assert...
FAILED tests/test_dynamic_secret_masking.py::test_value_printed_then_masked
FAILED tests/test_dynamic_secret_masking.py::test_multiple_values_masked_after_registration
FAILED tests/test_dynamic_secret_masking.py::test_immediate_masking_in_streaming_output
FAILED tests/test_dynamic_secrets.py::test_dynamic_secret_registration - Asse...
FAILED tests/test_dynamic_secrets.py::test_multiple_dynamic_secrets - Asserti...
FAILED tests/test_eval_cli.py::TestEvalCommand::test_eval_pr_uses_base_ref_for_diff
FAILED tests/test_eval_cli.py::TestEvalCommand::test_eval_push_uses_head_parent_for_diff
FAILED tests/test_eval_cli.py::TestEvalCommand::test_eval_no_git_dir_skips_changed_files
FAILED tests/test_eval_cli.py::TestEvalSourcePreparation::test_eval_clones_source_when_missing
FAILED tests/test_git_operations.py::TestGitOperations::test_checkout_creates_job_directory
FAILED tests/test_integration.py::TestDirectoryManagementIntegration::test_directory_validation_with_real_filesystem
FAILED tests/test_job_isolation.py::TestJobIsolation::test_work_dir_isolation
FAILED tests/test_job_isolation.py::TestJobIsolation::test_concurrent_job_isolation
FAILED tests/test_job_isolation.py::TestJobIsolation::test_container_mount_isolation
FAILED tests/test_source_preparation.py::TestSourcePreparation::test_no_source_preparation_default
FAILED tests/test_source_preparation.py::TestSourcePreparation::test_git_source_preparation
FAILED tests/test_source_preparation.py::TestSourcePreparation::test_dual_source_preparation
FAILED tests/test_source_preparation.py::TestSourcePreparation::test_ci_source_only
FAILED tests/test_source_preparation.py::TestSourcePreparation::test_git_source_missing_url
FAILED tests/test_source_preparation.py::TestSourcePreparation::test_copy_source_missing_url
FAILED tests/test_validation.py::TestConfigValidator::test_validate_file_system_job_directory_missing
============ 33 failed, 363 passed, 1 skipped in 149.85s (0:02:29) =============