How can I use a scalar as input to open3 in Perl?


I have a scalar that I want to feed into open3 as the input. For example

use IPC::Open3 qw( open3 );

my $sql = "select * from table;";
open( SQL, "<", \$sql );    # in-memory filehandle backed by the scalar

my ($output);
open3( '<&SQL', $output, $output, "mysql -h 127.0.0.1" );

However, the open3 call is in a different module:

package main;

use Example::Runner;

my $sql = "select * from table;";
open( my $in_handle, "<", \$sql );

my ($out_handle);
Example::Runner::run( $in_handle, $out_handle, $out_handle,
    'mysql -h 127.0.0.1' );

Then in another file:

package Example::Runner;

use IPC::Open3 qw( open3 );

sub run {
    my ($in, $out, $err, @command) = @_;

    # What goes in place of '?' so that open3 reads the child's STDIN from $in?
    open3( ?, $out, $err, @command );
}

The problem is, in Example::Runner I have a reference I could read from with <$in>, but what I need is something I can prefix with '<&' so that open3 will use it as the STDIN for the command it executes. Any idea how I can convert a reference to a handle into something open3 will use for its STDIN?

EDIT:

It's pretty clear that my contrived example is not enough... The reason I am not using DBI directly is that this code is actually part of a larger body of code that I use for footprintless automation. In other words, I have an environment of 30+ servers on which my admins have installed no special tools (just what comes standard in RHEL 5/6). These servers are broken into sets (db, app, web), for each environment (local, dev, qa, beta, prod), for each project (...). Anyway, one very common task is copying databases from one place to another. We accomplish that with a command akin to:

use IPC::Open3::Callback::CommandRunner;
use IPC::Open3::Callback::Command qw(command pipe_command);

my $source_config = {hostname => 'proj1-prod-db', sudo_username => 'db'};
my $dest_config = {hostname => 'proj1-dev-db', sudo_username => 'db'};
my $command_runner = IPC::Open3::Callback::CommandRunner->new();
$command_runner->run_or_die( pipe_command(
    command( "mysqldump dbname", $source_config ),
    command( "mysql dbname", $dest_config ) ) );
# runs: ssh proj1-prod-db "sudo -u db mysqldump dbname" | ssh proj1-dev-db "sudo -u db mysql dbname"

This is the MOST basic version of cloning our production database back to a development environment (a more typical version includes a lot of switches on each command and a lot of piped commands in the middle). So, I wrote a library of abstractions around this (IPC::Open3::Callback::*). Along the way we ran into the need to run some SQL commands after the database is copied, so we added the ability to run an arbitrary set of SQL scripts (based on the source and the destination of the clone operation). I could run them with a command like this:

$command_runner->run_or_die( pipe_command(
    "cat $post_restore",
    command( "mysql dbname", $dest_config ) ) );

But I have come across the need to munge some of the content of the SQL script, so I wanted to slurp it in, do a little work on it, then provide it to the $command_runner as STDIN. That said, I attempted to deal with this using fileno:

sub safe_open3_with {
    my ($in_handle, $out_handle, $err_handle, @command) = @_;

    # Turn each supplied handle into an open3-style dup string ('<&FD' / '>&FD')
    # built from its file descriptor.
    my @args = (
        $in_handle  ? '<&' . fileno( $in_handle )  : undef,
        $out_handle ? '>&' . fileno( $out_handle ) : undef,
        $err_handle ? '>&' . fileno( $err_handle ) : undef,
        @command
    );
    return ( $^O =~ /MSWin32/ ) ? _win_open3(@args) : _nix_open3(@args);
}

But if $in_handle is a handle opened on a scalar reference, that won't work: there is no underlying file descriptor for fileno to return. Anyway, that's the long story.


There is 1 answer below.


open(my $fh, '<', \$var) doesn't work because it doesn't create a system file handle from which the child can read:

$ perl -E'open(my $fh, "<", \"abc") or die $!; say fileno($fh);'
-1
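
For contrast, a handle created by pipe() is backed by a real file descriptor, which is what the child process needs (the exact number printed will vary):

$ perl -E'pipe(my $r, my $w) or die $!; say fileno($r);'
3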

First, you need a pipe.

use IPC::Open3 qw( open3 );

pipe(local *CHILD_STDIN, local *TO_CHILD)   # read end, write end
   or die("Can't create pipe: $!\n");

# chld_in, chld_out, chld_err (false = merge with stdout), then the command
my $pid = open3('<&CHILD_STDIN', local *FROM_CHILD, undef, $cmd);

Then you'd print the data you want mysql to read to TO_CHILD, and close it so the child sees end-of-file.

print(TO_CHILD do { local $/; <$in> });   # slurp $in and send it down the pipe
close(TO_CHILD);                          # the child now sees EOF on its STDIN
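
Put together, a minimal end-to-end sketch of this approach might look like the following. It assumes the input is already in a scalar, uses cat as a stand-in for the real mysql command, and only reads the child's output after closing its STDIN, which is fine for small payloads but still subject to the deadlock caveat below.

use strict;
use warnings;
use IPC::Open3 qw( open3 );

my $sql = "select * from table;";

# Real OS pipe: the read end becomes the child's STDIN.
pipe(local *CHILD_STDIN, local *TO_CHILD)
    or die("Can't create pipe: $!\n");

# 'cat' stands in for "mysql -h 127.0.0.1" to keep the sketch self-contained.
my $pid = open3('<&CHILD_STDIN', local *FROM_CHILD, undef, 'cat');

# Feed the scalar to the child, then signal EOF.
print TO_CHILD $sql;
close(TO_CHILD);

# Collect the child's STDOUT (STDERR is merged into it here).
my $output = do { local $/; <FROM_CHILD> };
close(FROM_CHILD);

waitpid($pid, 0);
print "child said: $output\n";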

But that's dangerous. You are risking a deadlock. (A deadlock will occur if the child tries to send a large[1] amount to STDOUT or STDERR when you are trying to send a large[1] amount to its STDIN.) To avoid this problem, you'd need a select loop. This is very hard. You don't want to use something this low level. Use IPC::Run3 or IPC::Run instead of open3 as they do all the dirty work for you.

use IPC::Run3 qw( run3 );

# A scalar ref as the second argument becomes the child's STDIN.
run3($shell_cmd, \$sql, \my $out, \my $err);

Better yet, avoid the needless shell:

run3([ $prog, @args ], \$sql, \my $out, \my $err);
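
Applied to the original example, that might look something like this (the host, database name, and SQL are placeholders):

use IPC::Run3 qw( run3 );

my $sql = "select * from table;";

# Argument-list form: no shell is involved, so no quoting headaches.
run3([ 'mysql', '-h', '127.0.0.1', 'dbname' ], \$sql, \my $out, \my $err);
die "mysql failed: $err" if $? >> 8;

print $out;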

But why are you using a client designed for human use as your interface? You probably should be using DBI.
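
For instance, a minimal DBI sketch along those lines (it assumes DBD::mysql is installed; the connection details and table name are placeholders):

use strict;
use warnings;
use DBI;

# Placeholder connection details for illustration only.
my ($user, $password) = ('dbuser', 'secret');
my $dbh = DBI->connect(
    'DBI:mysql:database=dbname;host=127.0.0.1',
    $user, $password,
    { RaiseError => 1 },
);

my $rows = $dbh->selectall_arrayref('select * from some_table');
print scalar(@$rows), " rows\n";

$dbh->disconnect;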


  1. I believe an amount as small as 4 KiB counts as "large" on some systems (that is, it is enough to fill the pipe buffer), though I seem to remember pipes having a 128 KiB buffer on one of my Linux machines.