Reverse connection and port forwarding

From KitwarePublic
Revision as of 17:35, 8 February 2010 by Burlen (Talk | contribs)


This page describes how to configure a ParaView client/server connection between a compute node cluster and your desktop when it is not possible to open a direct connection between compute node and desktop. There are a number of options to accomplish this task. The choice of which one is right given your situation depends on the configuration of both your local network and the cluster.

Reverse connection over an ssh tunnel

ParaView reverse connection (blue arrow) over a reverse ssh tunnel (black arrow).

This method can be automated through a ParaView server configuration (.pvsc) file or executed manually. In the automated approach the .pvsc file calls ssh with both the tunnel options and the command to submit the batch job. For example:

ssh -R XXXX:localhost:YYYY remote_machine

This means that port XXXX on remote_machine is the port to which the server must connect. Port YYYY (e.g., 11111) on your client machine is the one on which the ParaView client listens. You then have to tell the server (in the batch submission script, for example) the name of the node and the port XXXX to which to connect. For example, the ParaView server might be started like this:

mpiexec pvserver --reverse-connection --client-host=WWWW --server-port=XXXX

For more information, see the documentation on the .pvsc server configuration format.


  1. Two important options in the server-side sshd configuration are GatewayPorts and AllowTcpForwarding: AllowTcpForwarding must not be disabled, and GatewayPorts must be enabled if the compute nodes reach the forwarded port via the front end's network address rather than its loopback interface.
  2. A firewall between the client and server may also interfere.
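
The manual equivalent of what an automated .pvsc would run can be sketched as a small script. The hostname remote_machine, the batch script job.pbs, and both port values below are placeholders/assumptions, not anything prescribed by ParaView:

```shell
#!/bin/sh
# Sketch: open the reverse tunnel and submit the batch job in one ssh call.
YYYY=11111   # port the ParaView client listens on locally
XXXX=11111   # port opened on remote_machine for pvserver to reach
CMD="ssh -R $XXXX:localhost:$YYYY remote_machine qsub job.pbs"
echo "$CMD"  # drop the echo to actually run it
# The batch script would then start pvserver with
# --reverse-connection --client-host=remote_machine --server-port=$XXXX
```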

Forward or Reverse Connection over a Two-Hop ssh Tunnel

ParaView forward connection over a two-hop ssh tunnel (black arrow).

If there is an additional firewall between the cluster's front end and its compute nodes, or the cluster's sshd configuration prevents the above method from working, one can make a two-hop tunnel. This is slightly more complicated, requires two ssh sessions, and may not be automated. (If you know of a way, please let us know!)

This requires two terminals. The first terminal is used to submit the batch job; once the job is scheduled, one manually gets the root node's hostname and uses the second terminal to establish the tunnel. Once the tunnel is established, one launches the ParaView server and proceeds as usual. Either a forward or a reverse ssh tunnel may be used; since this is a manual method, the forward tunnel is usually easier.

In the following, the two terminals are denoted by t1$ and t2$, and fe is the front end of your cluster. In the first terminal:

t1$ ssh fe
t1$ qsub -I -V -l select=XX -l walltime=XX:XX:XX

Replace each XX with your own values. After the job is scheduled you are automatically ssh'd into some compute node, which we'll say has hostname NODE. In the second terminal:

t2$ ssh -L ZZZZ:NODE:YYYY fe

ZZZZ is a port number on your workstation. YYYY is a port number on the server that is not blocked by the cluster's internal firewall (ask your system administrator). Now back to terminal one, and your waiting compute node:

t1$ mpiexec pvserver --server-port=YYYY

Finally, connect the ParaView client to localhost:ZZZZ; the tunnel delivers the connection to NODE:YYYY.
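
The port bookkeeping in the two-hop setup is easy to get wrong; the sketch below prints where each piece connects. The hostname n001 and both port values are example assumptions (-N simply holds the tunnel open without running a remote command):

```shell
#!/bin/sh
NODE=n001    # compute node hostname reported by the batch job (assumption)
ZZZZ=11112   # local port on your workstation (assumption)
YYYY=11111   # pvserver's listening port on the compute node (assumption)
TUNNEL="ssh -N -L $ZZZZ:$NODE:$YYYY fe"
echo "$TUNNEL"                              # the second-terminal tunnel
echo "client connects to localhost:$ZZZZ"   # the local end of the tunnel
```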

Reverse Connection with Portfwd

With a reverse connection, the ParaView client binds to a port and waits for an incoming connection from the server. When the MPI-enabled ParaView server (referred to as pvserver) is launched on a set of compute nodes, the rank-0 pvserver process opens a connection to the host machine where the ParaView client is waiting.

The problem arises when it is not possible for the compute node to connect to the host machine: the compute node can only connect to a login node. The problem is solved by forwarding traffic from a port on the login node to a port on the host machine, which can be accomplished with a simple utility called portfwd.

For this example, let's say the ParaView client is waiting for a reverse connection on port 11111 of host client_host (client_host:11111), and the compute node can only connect to port 11111 of host login_node (login_node:11111). In this example pvserver is launched on the compute nodes with the command:

mpirun -np 512 pvserver --reverse-connection --client-host=login_node --server-port=11111

Next, we use portfwd to forward login_node:11111 to client_host:11111. In some situations it is possible to accomplish this kind of port forwarding with ssh tunnels, but portfwd is much simpler. First, write a configuration file, fwd.cfg:

This file forwards localhost:11111 to client_host:11111:

  tcp { 11111 { => client_host:11111 } }

Then launch portfwd on the login node (this example uses the flag --foreground to keep portfwd running in the foreground so it can be killed with control-c):

portfwd --config fwd.cfg --foreground

Now, when pvserver connects to login_node:11111, the traffic is forwarded to client_host:11111.
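
If portfwd is not available on the login node, the same single-port relay can be set up with socat, where installed; a sketch using the example's port and hostname:

```shell
#!/bin/sh
# Relay connections arriving on login_node:11111 to client_host:11111.
# fork: handle each incoming connection in a child process;
# reuseaddr: allow a quick restart of the relay on the same port.
RELAY="socat TCP-LISTEN:11111,fork,reuseaddr TCP:client_host:11111"
echo "$RELAY"   # drop the echo to run it for real (requires socat)
```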