The tunnel-api sub-command can be used to access an on-premise cluster from a cloud cluster, allowing MTC running on the cloud cluster to orchestrate migrations from on-premise clusters that would otherwise be unreachable.
An OpenVPN client on the on-premise cluster connects to an OpenVPN server running on the cloud cluster; the server is exposed to the client through a load balancer address on the cloud cluster. A service created on the cloud cluster then exposes the on-premise cluster's API to MTC running on the cloud cluster.
Note: To connect multiple on-premise source clusters to the cloud cluster, use a separate namespace for each (a sketch follows the example below).
Example
crane tunnel-api --namespace openvpn-311 \
--destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \
--source-context default/192-168-122-171-nip-io:8443/admin \
--source-image my.registry.server:5000/konveyor/openvpn:latest \
--proxy-host my.proxy.server \
--proxy-port 3128 \
--proxy-user foo \
--proxy-pass bar
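As noted above, each on-premise source cluster needs its own namespace on the cloud side. A minimal sketch of connecting a second source cluster, reusing only the flags shown in the example; the namespace name and the second source context are illustrative, not values produced by the tool:
crane tunnel-api --namespace openvpn-second \
--destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \
--source-context default/second-on-premise-cluster/admin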
When configuring the source cluster in MTC, the API URL is https://proxied-cluster.${namespace}.svc.cluster.local:8443.
Optional: Set the image registry for direct image migrations to proxied-cluster.${namespace}.svc.cluster.local:5000.
Replace ${namespace} with either openvpn or the namespace specified when running the command to set up the tunnel.
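For the openvpn-311 namespace used in the example above, those values would be https://proxied-cluster.openvpn-311.svc.cluster.local:8443 and proxied-cluster.openvpn-311.svc.cluster.local:5000. Once the tunnel is up, connectivity to the proxied API can be checked from a temporary pod on the cloud cluster. This is only a sketch: the pod name and image are illustrative, and it assumes the API server answers /healthz without authentication (a 401 or 403 response would still confirm that traffic is reaching the on-premise API):
oc run tunnel-check -n openvpn-311 --rm -i --restart=Never \
--image=registry.access.redhat.com/ubi8/ubi -- \
curl -ks https://proxied-cluster.openvpn-311.svc.cluster.local:8443/healthz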
It may take three to five minutes after setup completes for the load balancer address to become resolvable. Until it does, the client cannot establish a connection and the tunnel will not function. During this time, run oc get pods in the namespace specified for setup and monitor the logs of the OpenVPN container to see the connection being established.
Example
oc logs -f -n openvpn-311 openvpn-7b66f65d48-79dbs -c openvpn
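The pod name in the example above is specific to that run. To find the pod name in your own setup, list the pods in the tunnel namespace first (the namespace is taken from the earlier example):
oc get pods -n openvpn-311
Similarly, running oc get svc -n openvpn-311 against the cloud cluster should show whether the load balancer service has been assigned an external address yet, which is usually what the client is waiting on during the initial delay described above.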