I have been struggling with this problem for a few days now.
The task is to connect to an SFTP server and iterate through a list of files: 1. fetch the file from the server, 2. perform some file manipulation locally, then move on to the next file in the list.
Here is my minimal example:
```tcl
set filelist_path filelist.txt

proc slurp {file} {
    set fh [open $file r]
    set ret [read $fh]
    close $fh
    return $ret
}

set filelist [slurp $filelist_path]
set rundir [file normalize [file dirname .]]

### Start connection to ftp server
spawn sftp ${user}@${host}
expect "Password:"
send "$pass\r"

### Download and manipulate files in the list
foreach item $filelist {
    expect "sftp>"
    send_user "Processing $item : \n"
    set filedir [file dirname ${item}]
    set outdir ${rundir}/tmp${filedir}
    puts [exec mkdir -p $outdir]
    send "lcd $outdir \r"
    send "get -p /data'${item}'\r"
    send "lcd $rundir \r"
    puts [exec <some cmd>]
    puts [exec rm -r ./tmp]
}
send "quit\r"
expect eof
```
My problem is that the local manipulation is never performed, or the iteration does not wait for the manipulation to finish. I have tried playing with the order of the `expect "sftp>"` and inserting some `interact` statements to see what is going on, but with no success.
The problem is that you aren't waiting for the commands you `send` to finish before processing the file. It takes time to move a file, even a small one. The easiest fix is to `expect "sftp>"` after each command you `send`, even the trivial ones; that's how you verify that the other program has finished what it was doing. And move the initial `expect "sftp>"` out of the loop; think of it as part of the connection setup.
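Concretely, the body of your loop would look something like this (a sketch only, reusing `$filelist`, `$outdir`, etc. from your script; the error handling and the `<some cmd>` step are left out):

```tcl
# Wait for the first prompt once, as part of connection setup
expect "sftp>"

foreach item $filelist {
    send "lcd $outdir\r"
    expect "sftp>"       ;# lcd is trivial, but still wait for the prompt

    send "get -p /data/$item\r"
    expect "sftp>"       ;# sftp prints the prompt only once the transfer is done

    send "lcd $rundir\r"
    expect "sftp>"

    # ...local manipulation with exec goes here, after the file has arrived...
}
```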
I'd be inclined to write a procedure that sends a command and waits for it to finish. That could then also check for error cases in a systematic way (and `expect` gives you the tools to do so). Errors are where programming gets more complicated, after all...
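A minimal sketch of such a procedure (the name `sftp_cmd` and the particular error strings it watches for are my own choices, not anything sftp guarantees; adjust the patterns to the messages your server actually produces):

```tcl
# Hypothetical helper: send one sftp command and block until the
# prompt comes back, raising a Tcl error on common failure messages.
proc sftp_cmd {cmd} {
    send "$cmd\r"
    expect {
        -re {(No such file|Permission denied|Couldn't)[^\r\n]*} {
            error "sftp command failed: $expect_out(0,string)"
        }
        "sftp>" {
            # Prompt is back: the command has finished
        }
        timeout {
            error "timed out waiting for prompt after: $cmd"
        }
    }
}
```

The loop body then shrinks to calls like `sftp_cmd "lcd $outdir"` and `sftp_cmd "get -p /data/$item"`, and a failed transfer stops the script instead of silently racing ahead.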