need some help here :).
I have two containers, A and B, running on my host machine. The /tmp directory is configured as a volume in A and is volume-mounted into B. Container A starts B with:
docker run -d --init -it --privileged --name container_B --env SSH_AUTH_SOCK -v /tmp:/tmp -p 2222:22
so that B runs in detached mode and stays alive.
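For reference, the same launch expressed through the Python docker SDK's `containers.run()` (just a sketch of how the flags map to keyword arguments; `image_b` is a placeholder, not my real image name):

```python
import os

def container_b_run_kwargs():
    # Sketch: the docker-run flags above as kwargs for client.containers.run().
    return {
        "detach": True,                        # -d
        "init": True,                          # --init
        "tty": True,                           # -t
        "stdin_open": True,                    # -i
        "privileged": True,                    # --privileged
        "name": "container_B",                 # --name container_B
        "environment": {"SSH_AUTH_SOCK": os.environ.get("SSH_AUTH_SOCK", "")},
        "volumes": {"/tmp": {"bind": "/tmp", "mode": "rw"}},  # -v /tmp:/tmp
        "ports": {"22/tcp": 2222},             # -p 2222:22 (host 2222 -> container 22)
    }

# client = docker.from_env()
# container_b = client.containers.run("image_b", **container_b_run_kwargs())
```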
In container A, I need to attach to B and run some commands. Here is my code; I'm using the Python docker SDK:
import docker
import os
import subprocess

class RunCommands:
    def __init__(self, name):
        self.SSH_AUTH_SOCK = os.environ["SSH_AUTH_SOCK"]
        self.docker_client = docker.from_env()
        self.container = None
        self.container_name = name

    ...

    def run_commands(self):
        # 1st time: ssh keys are loaded correctly
        res = subprocess.run(["ssh-add", "-L"], stderr=subprocess.STDOUT,
                             stdout=subprocess.PIPE, check=True, text=True)
        print(res.stdout)

        env_vars = {
            "SSH_AUTH_SOCK": self.SSH_AUTH_SOCK,
        }
        commands = [
            r"""
            bash -c "echo 'hello'"
            """
        ]
        self.container = self.docker_client.containers.get(
            self.container_name)

        # run some commands
        for command in commands:
            exec_result = self.container.exec_run(
                cmd=command, tty=True, stream=True, privileged=True,
                environment=env_vars)
            for line in exec_result.output:
                print(line.decode('utf-8', errors='ignore'), end='', flush=True)

        # 2nd time: Error!
        res = subprocess.run(["ssh-add", "-L"], stderr=subprocess.STDOUT,
                             stdout=subprocess.PIPE, check=True, text=True)
        print(res.stdout)
So the SSH agent is alive and the keys are loaded (verified by the first "ssh-add -L"), but after A attaches to B and calls exec_run() with SSH_AUTH_SOCK forwarded to B, the second "ssh-add -L" fails with:
Error connecting to agent: No such file or directory
subprocess.CalledProcessError: Command '['ssh-add', '-L']' returned non-zero exit status 2
It seems like B killed the SSH agent socket after executing the commands, even though B is still alive (detached mode). Can anyone explain this to me? How can I prevent it from happening? I still need the SSH agent for other things in A. Thank you!
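A minimal sketch of how the agent socket can be checked before and after the exec_run() call, to narrow down whether the socket file under /tmp (the shared volume) is actually being deleted, as opposed to the agent process dying:

```python
import os

def agent_socket_exists(sock_path: str) -> bool:
    """Return True if the SSH agent socket file still exists on disk.

    ssh-add reports "Error connecting to agent: No such file or directory"
    when this file has been removed, so checking it before and after
    exec_run() helps show whether the socket was deleted.
    """
    return os.path.exists(sock_path)

# Usage around the exec_run() call:
# print(agent_socket_exists(os.environ["SSH_AUTH_SOCK"]))
```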