Paul Kiddie

Redirecting STDOUT to variable in perl and running child scripts

January 18, 2010

I recently created a perl script (let's call it script (1.)) to process a network trace and compute the number of bytes received for a given protocol (e.g. UDP, TCP or routing protocols such as AODV, DSR) over a given time interval for each node in the trace. Its output is comma-separated value (CSV) text written to STDOUT, which I redirect into a file for graphing using the standard > operator when invoking it from the command line.

Today I wanted to compute routing protocol overhead, as a percentage of routing packets vs. all packets (data and routing), for a network trace. This requires running script (1.) twice with different protocol numbers, so I thought writing a container perl script to invoke it was the way to go. I admit I'm a bit of a perl newbie, so please bear with me if this is obvious!

The perl script needed to achieve several objectives:

  1. Redirect STDOUT to a variable (see the minimal sketch after this list).
  2. Execute perl script (1.) for ALL protocols.
  3. Process the variable holding the captured STDOUT into a data structure.
  4. Repeat steps 2 and 3 for the routing protocol only.
  5. Reset STDOUT.
  6. Perform the calculation from the two data structures and output the routing overhead as CSV, which I could redirect to a file using the standard > operator.
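
Before the full container script, here's a minimal, self-contained sketch of the trick behind steps 1 and 5: pointing STDOUT at an in-memory filehandle backed by a scalar (available since Perl 5.8), then restoring the real STDOUT afterwards. The printed values are just illustrative:

my $captured;
open(OLDOUT, ">&STDOUT") or die "Unable to save STDOUT: $!"; # keep a copy of the real STDOUT
close STDOUT;
open(STDOUT, ">", \$captured) or die "Unable to open STDOUT: $!"; # prints now land in $captured
print "42,1024\n"; # stands in for whatever the child script would print
open(STDOUT, ">&OLDOUT") or die "Unable to restore STDOUT: $!"; # back to the console
print "captured: $captured"; # prints "captured: 42,1024"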

Here’s the code:

my $var;
open(OLDOUT, ">&STDOUT") or die "Unable to save STDOUT: $!\n"; # save STDOUT handle to OLDOUT
close STDOUT;
open(STDOUT, ">", \$var) || die "Unable to open STDOUT: $!"; # open STDOUT handle to use $var
@ARGV = ($opt_infile, $opt_class, $opt_ipaddr, "all", "table", $opt_start_time, $opt_end_time);
# invoke perl script (1.) with the above args
do("datareceived2.pl");
my @dataandroutingcsv = split "\n", $var;
open(STDOUT, ">&OLDOUT"); # restore STDOUT for printing to the console
# processing of @dataandroutingcsv removed for brevity
close STDOUT;
# reinitialise $var
undef $var;
open(STDOUT, ">", \$var) || die "Unable to open STDOUT: $!"; # reopen STDOUT to use $var
@ARGV = ($opt_infile, $opt_class, $opt_ipaddr, $opt_protocol, "table", $opt_start_time, $opt_end_time);
# invoke perl script (1.) again with the new args
do("datareceived2.pl");
my @routingcsv = split "\n", $var;
open(STDOUT, ">&OLDOUT"); # redirect STDOUT back to OLDOUT (for printing results to the console)
# processing of @routingcsv and printing of the result CSV removed for brevity
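
The processing I've omitted above boils down to turning each captured CSV snapshot into a per-node lookup and dividing. As a rough illustration only (it assumes each captured line looks like node,bytes, which may not match the real table layout datareceived2.pl emits), something along these lines would slot in where the "removed for brevity" comments sit:

# Illustrative sketch only: assumes lines of the form "node,bytes"
my %allbytes;
for my $line (@dataandroutingcsv) {
    my ($node, $bytes) = split /,/, $line;
    $allbytes{$node} = $bytes;
}

print "node,routing_overhead_percent\n";
for my $line (@routingcsv) {
    my ($node, $routingbytes) = split /,/, $line;
    next unless $allbytes{$node}; # skip unknown nodes and avoid dividing by zero
    printf "%s,%.2f\n", $node, 100 * $routingbytes / $allbytes{$node};
}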

Some gotchas which caused a little head-scratching:

  1. exit(0) in the child script (datareceived2.pl) terminated the calling script as well, since do runs the child in the same interpreter process, so I removed it.
  2. datareceived2.pl was written without strict mode, which meant its variables were global and caused a little confusion, so I added use strict; at the top of both scripts to flag those variables which needed to be localised with the my keyword. http://en.wikibooks.org/wiki/Perl_Programming/Functions#Important_note:_global_and_local_variables gave me good insight.
  3. use of require vs. do: require loads and executes a perl script once and once only, whilst do executes it as many times as you call it (see the small demonstration after this list). In my case, I needed do. Thanks to http://soniahamilton.wordpress.com/2009/05/09/perl-use-require-import-and-do/
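
The require vs. do difference is easy to see with a throwaway experiment; the counter.pl file name and its contents below are invented purely for the demonstration:

use strict;
use warnings;

# write a throwaway child script (name and contents made up for this demo)
open(my $fh, ">", "counter.pl") or die "Unable to write counter.pl: $!";
print $fh 'print "child ran\n"; 1;';
close $fh;

require "./counter.pl"; # prints "child ran"
require "./counter.pl"; # prints nothing: %INC records the file as already loaded

do "./counter.pl"; # prints "child ran"
do "./counter.pl"; # prints "child ran" again: do re-reads and re-executes every time

unlink "counter.pl";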

👋 I'm Paul Kiddie, a software engineer working in London. I'm currently working as a Principal Engineer at trainline.